provable rl with exogenous distractors via multistep inverse dynamics. yonathan efroni1, dipendra misra1, akshay krishnamurthy1, alekh agarwal2, john langford1. 1microsoft research, new york, ny; 2google†.

abstract. many real-world applications of reinforcement learning (rl) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. however, such approaches can fail in the presence of temporally correlated noise in the observations, a phenomenon that is common in practice. we initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the exogenous block mdp (ex-bmdp), for rich observation rl. we start by establishing several negative results, highlighting failure cases of prior representation-learning-based approaches. then, we introduce the predictive path elimination (ppe) algorithm, which learns a generalization of inverse dynamics and is provably sample- and computationally-efficient in ex-bmdps when the endogenous state dynamics are near deterministic. the sample complexity of ppe depends polynomially on the size of the latent endogenous state space while not directly depending on the size of the observation space, nor the exogenous state space. we provide experiments on challenging exploration problems which show that our approach works empirically.

introduction. in many real-world applications such as robotics there can be a large disparity between the size of the agent's observation space (for example, the image generated by the agent's camera) and a much smaller latent state space (for example, the agent's location and orientation) governing the rewards and dynamics.
this size disparity offers an opportunity: can we construct reinforcement learning (rl) algorithms which learn an optimal policy using a number of samples that scales with the size of the latent state space rather than the size of the observation space? several families of approaches have been proposed based on solving various ancillary prediction problems, including autoencoding (tang et al., 2017; hafner et al., 2019), inverse modeling (pathak et al., 2017; burda et al., 2018), and contrastive learning (laskin et al., 2020). these works have generated significant empirical successes, but are there provable (and hence more reliable) foundations for their success? more generally, what are the right principles for learning with latent state spaces? in real-world applications, a key issue is robustness to noise in the observation space. when noise comes from the observation process itself, such as due to measurement error, several approaches have been developed to either explicitly identify (du et al., 2019; misra et al., 2020; agarwal et al., 2020a) or implicitly leverage (jiang et al., 2017) the presence of latent state structure for provably sample-efficient rl. however, in many real-world scenarios, the observations consist of many elements (e.g. weather, lighting conditions, etc.) with temporally correlated dynamics (see e.g. figure 1 and the example below) that are entirely independent of the agent's actions and rewards. the temporal dynamics of these elements preclude us from treating them as uncorrelated noise, and as such, most previous approaches resort to modeling their dynamics. however, this is clearly wasteful, as these elements have no bearing on the rl problem being solved. †work was done while the author was at microsoft research. {yefroni, dimisra, akshaykr, jcl}@microsoft.com, alekhagarwal@google.com. figure 1: left: an agent is walking next to a pond in a park and observes the world as an image.
the world consists of a latent endogenous state, containing variables such as the agent's position, and a much larger latent exogenous state containing variables such as the motion of ducks, ripples in the water, etc. center: graphical model of the ex-bmdp. right: ppe learns a generalized form of inverse dynamics that recovers the endogenous state. as an example, consider the setting in figure 1. an agent is walking in a park on a lonely sidewalk next to a pond. the agent's observation space is the image generated by its camera, the latent endogenous state is its position on the sidewalk, and the exogenous noise is provided by the motion of ducks, swaying of trees and changes in lighting conditions, typically unaffected by the agent's actions. while there is a line of recent empirical work that aims to remove causally irrelevant aspects of the observation (gelada et al., 2019; zhang et al., 2020), theoretical treatment is quite limited (dietterich et al., 2018) and no prior work addresses sample-efficient learning with provable guarantees. given this, the key question here is: how can we learn using an amount of data scaling with just the size of the endogenous latent state, while ignoring the temporally correlated exogenous observation noise? we initiate a formal treatment of rl settings where the learner's observations are jointly generated by a latent endogenous state and an uncontrolled exogenous state, which is unaffected by the agent's actions and does not affect the agent's task. we study a subset of such problems called exogenous block mdps (ex-bmdps), where the endogenous state is discrete and decodable from the observations. we first highlight the challenges in solving ex-bmdps by illustrating the failures of many prior representation learning approaches (pathak et al., 2017; misra et al., 2020; jiang et al., 2017; agarwal et al., 2020a; zhang et al., 2020).
these failures happen either due to creating too many latent states, such as one for each combination of ducks and passers-by in the example above, leading to sample-inefficient exploration, or due to a lack of exhaustive exploration. we identify one recent approach developed by du et al. (2019) with favorable properties for ex-bmdps with near-deterministic latent state dynamics. in section 4 and section 5, we develop a variation of their algorithm and analyze its performance. the algorithm, called path prediction and elimination (ppe), learns a form of multi-step inverse dynamics by predicting the identity of the path that generates an observation. for near-deterministic ex-bmdps, we prove that ppe successfully explores the environment using o((sa)²H log(|F|/δ)) samples, where s is the size of the latent endogenous state space, a is the number of actions, H is the horizon, and F is a function class employed to solve a maximum likelihood problem. several prior works (gregor et al., 2016; paster et al., 2020) have also considered a multi-step inverse dynamics approach to learn a near optimal policy. yet, these works do not consider the ex-bmdp model. further, it is unknown whether these algorithms have provable guarantees, as ppe does. theoretical analysis of the performance of these algorithms in the presence of exogenous noise is an interesting future work direction.

empirically, in section 6, we demonstrate the performance of ppe and various prior baselines on a challenging exploration problem with exogenous noise. we show that the baselines fail both to decode the endogenous state and to learn a good policy. we further show that ppe is able to recover the latent endogenous model in a visually complex navigation problem, in accordance with the theory.

exogenous block mdp setting. we introduce a novel exogenous block markov decision process (ex-bmdp) setting to model systems with exogenous noise. we describe notations before formalizing the ex-bmdp model. notations.
for a given set U, we use ∆(U) to denote the set of all probability distributions over U. for a given natural number N, we use the notation [N] to denote the set {1, . . . , N}. lastly, for a probability distribution p, we define its support as supp(p) = {u ∈ U : p(u) > 0}.

we start by describing the block markov decision process (bmdp) of du et al. (2019). this process consists of a finite set of observations X, a set of latent states Z with cardinality z, a finite set of actions A with cardinality a, a transition function T : Z × A → ∆(Z), an emission function q : Z → ∆(X), a reward function R : X × A → [0, 1], a horizon H, and a start state distribution µ ∈ ∆(Z). the agent interacts with the environment by repeatedly generating H-step trajectories (z1, x1, a1, r1, . . . , zH, xH, aH, rH), where z1 ∼ µ and for every h ∈ [H] we have xh ∼ q(· | zh), rh = R(xh, ah), and if h < H, then zh+1 ∼ T(· | zh, ah). the agent does not observe the states (z1, . . . , zH), instead receiving only the observations (x1, . . . , xH) and rewards (r1, . . . , rH). we assume that the emission distributions of any two latent states are disjoint, usually referred to as the block assumption: supp(q(· | z1)) ∩ supp(q(· | z2)) = ∅ when z1 ≠ z2. the agent chooses actions using a policy π : X → ∆(A). we also define the set of non-stationary policies ΠNS = Π^H, denoting an element of ΠNS as an H-length tuple (π1, . . . , πH), with the action at time step h taken as ah ∼ πh(· | xh). the value V(π) of a policy π is the expected episodic sum of rewards V(π) := E_π[Σ_{h=1}^{H} R(xh, ah)]. the optimal policy is given by π⋆ = arg max_{π ∈ ΠNS} V(π). we denote by P_h(x | π) the probability distribution over observations x at time step h when following a policy π. lastly, we refer to an open loop policy as an element in A^H, the set of all H-length sequences of actions. an open loop policy follows a pre-determined sequence of actions for H time steps, unaffected by state information.
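the interaction protocol just described can be sketched directly. the following is a minimal sketch for a made-up tabular instance (2 latent states with disjoint observation supports, so the block assumption holds); all dynamics, observation strings and rewards below are illustrative, not from the paper.

```python
import random

def sample_trajectory(mu, T, q, R, H, policy):
    """Return ([z_1..z_H], [x_1..x_H], [a_1..a_H], [r_1..r_H])."""
    zs, xs, acts, rs = [], [], [], []
    z = mu()                      # z_1 ~ mu
    for h in range(H):
        x = q(z)                  # x_h ~ q(. | z_h): the agent only sees x
        a = policy(x)
        r = R(x, a)
        zs.append(z); xs.append(x); acts.append(a); rs.append(r)
        if h < H - 1:
            z = T(z, a)           # z_{h+1} ~ T(. | z_h, a_h)
    return zs, xs, acts, rs

# toy instance: 2 latent states, disjoint observation supports (block assumption)
random.seed(0)
mu = lambda: 0
T = lambda z, a: (z + a) % 2
q = lambda z: random.choice(["z0-obs-a", "z0-obs-b"]) if z == 0 else "z1-obs"
R = lambda x, a: 1.0 if x == "z1-obs" else 0.0
zs, xs, acts, rs = sample_trajectory(mu, T, q, R, H=4, policy=lambda x: 1)
```

note that the policy only sees observations, while rewards and dynamics are driven by the latent state, exactly as in the definition above.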
given the aforementioned definitions, we define an ex-bmdp as follows.

definition 1 (exogenous block markov decision process). an ex-bmdp is a bmdp such that the latent state can be decoupled into two parts z = (s, ξ), where s ∈ S is the endogenous state and ξ ∈ Ξ is the exogenous state. for z ∈ Z, the initial distribution and transition functions are decoupled, that is: µ(z) = µ(s) µξ(ξ), and T(z′ | z, a) = T(s′ | s, a) Tξ(ξ′ | ξ).

the observation space X can be arbitrarily large in an ex-bmdp, to model observations such as a high-dimensional real vector denoting an image, sound, or haptic data. the endogenous state s captures the information that can be manipulated by the agent. we assume that the set S of all endogenous states is finite with cardinality s. figure 1, center, visualizes the factorization of the transition dynamics. the exogenous state ξ captures all the other information: the information that the agent cannot control and that does not affect the information it can manipulate. we make no assumptions on the exogenous dynamics nor on its cardinality |Ξ|, which may be arbitrarily large. we note that the block assumption of the ex-bmdp implies the existence of two inverse mappings: φ⋆ : X → S to map an observation to its endogenous state, and φ⋆ξ : X → Ξ to map it to its exogenous state.

justification of assumptions. the block assumption has been made by prior work (e.g., du et al. (2019); zhang et al. (2020)) to model many real-world settings where the observation is rich, i.e., it contains enough information to decode the latent state. the decoupled dynamics assumption made in the ex-bmdp setting is a natural way to characterize exogenous noise: the type of noise that is not affected by our actions, does not affect the endogenous state, but may have non-trivial dynamics.
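the decoupled transition of definition 1 can be sketched directly: the action moves only the endogenous part, while the exogenous part evolves on its own. the concrete dynamics below (a 5-cell corridor and a "duck" counter cycling independently of the action) are purely illustrative.

```python
def step(z, a, T_endo, T_exo):
    """One Ex-BMDP latent transition: T(z'|z,a) = T(s'|s,a) * T_xi(xi'|xi)."""
    s, xi = z
    return (T_endo(s, a), T_exo(xi))   # decoupled update

# endogenous: position on a line of 5 cells; exogenous: a counter ("duck")
# drifting independently of the action taken
T_endo = lambda s, a: max(0, min(4, s + (1 if a == 1 else -1)))
T_exo = lambda xi: (xi + 1) % 7

z = (0, 0)
for a in [1, 1, 1, 0]:
    z = step(z, a, T_endo, T_exo)
```

the key point, visible in the code, is that `T_exo` never receives the action, and `T_endo` never receives the exogenous coordinate.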
this decoupling captures the movement of ducks in the visual field of the agent in figure 1, and many additional exogenous processes (e.g., movement of clouds in a navigation task).

goal. our formal objective is reward-free learning. we wish to find a set of policies, which we call a policy cover, that can be used to explore the entire state space. given a policy cover, and for any reward function, we can find a near optimal policy by applying dynamic programming (e.g., bagnell et al. (2004)), policy optimization (e.g., kakade and langford (2002); agarwal et al. (2020b); shani et al. (2020)) or value-based (e.g., antos et al. (2008)) methods.

definition 2 (α-policy cover). let Ψh be a finite set of non-stationary policies. we say Ψh is an α-policy cover for the hth time step if for all z ∈ Z it holds that max_{π ∈ Ψh} P_h(z | π) ≥ max_{π ∈ ΠNS} P_h(z | π) − α. if α = 0 we call Ψh a policy cover.

for standard bmdps the policy cover is simply the set of policies that reach each latent state of the bmdp (du et al., 2019; misra et al., 2020; agarwal et al., 2020a). thus, for a bmdp, the cardinality of the policy cover scales with |Z|. the structure of ex-bmdps allows reducing the size of the policy cover significantly, to |S| ≪ |Z| when the size of the exogenous state space is large. specifically, we show that the set of policies that reach each endogenous state, and do not depend on the exogenous part of the state, is also a policy cover (see appendix b, proposition 4).

failures of prior approaches. we now describe the limitations of prior rl approaches in the presence of exogenous noise. we provide an intuitive analysis here, and defer formal statements and proofs to appendix a.

limitation of noise-contrastive learning. noise-contrastive learning has been used in rl to learn a state abstraction by exploiting temporal information. specifically, the homer algorithm (misra et al., 2020) trains a model to distinguish between real and imposter transitions.
this is done by collecting a dataset of quadruples (x, a, x′, y), where y = 1 means the transition (x, a, x′) was observed and y = 0 means that (x, a, x′) was not observed. homer then trains a model pθ(y | x, a, φθ(x′)) with parameters θ on this dataset, by predicting whether a given transition was observed or not. this provides a state abstraction φθ : X → [N] for exploring the environment. homer can provably solve block mdps. unfortunately, in the presence of exogenous noise, homer distinguishes between two transitions that represent a transition between the same latent endogenous states but different exogenous states. in our walk-in-the-park example, even if the agent moves between the same points in two transitions, the model may be able to tell these transitions apart by looking at the position of the ducks, which may behave differently in the two transitions. this results in homer creating o(|Z|) many abstract states. we call this the under-abstraction problem.

limitation of inverse dynamics. another common approach in empirical works is based on modeling the inverse dynamics of the system, such as the icm module of pathak et al. (2017). in such approaches, we learn a representation by using consecutive observations to predict the action that was taken between them. such a representation can ignore all information that is not relevant for action prediction, which includes all exogenous/uncontrollable information. however, it can also ignore controllable information. this may result in a failure to sufficiently explore the environment. in this sense, inverse dynamics approaches result in an over-abstraction problem, where observations from different endogenous states can be mapped to the same abstract state. the over-abstraction problem was described in misra et al. (2020) for the case when the starting state is random. in appendix a.3 we show inverse dynamics may over-abstract even when the initial starting state is deterministic.
limitation of bisimulation. zhang et al. (2020) proposed learning a bisimulation metric to obtain a representation which is invariant to exogenous noise. unfortunately, it is known that the bisimulation metric cannot be learned in a sample-efficient manner (modi et al. (2020), proposition b.1). intuitively, when the reward is the same everywhere, bisimulation merges all states into a single abstract state. this creates an over-abstraction problem in sparse reward settings, since the agent can falsely merge all states into a single abstract state until it receives a non-trivial reward.

bellman rank might depend on |Ξ|. the bellman rank was introduced in jiang et al. (2017) as a complexity measure for the learnability of an rl problem with function approximation. to date, most learnable rl problems have a small bellman rank. however, we show in appendix a that the bellman rank for ex-bmdps can scale as Ω(|Ξ|). this shows that the ex-bmdp is a highly non-trivial setting, as we don't even have sample-efficient algorithms for it in general, let alone computationally-efficient ones. in appendix a we also describe the failures of flambe (agarwal et al., 2020a) and autoencoding-based approaches (tang et al., 2017).

reinforcement learning for ex-bmdps. in this section, we present an algorithm, predictive path elimination (ppe), that we later show can provably solve any ex-bmdp with nearly deterministic dynamics and start state distribution of the endogenous state, while making no assumptions on the dynamics or start state distribution of the exogenous state (algorithm 1). before describing ppe in prose, we state its pseudocode.

algorithm 1 ppe(δ, η): predictive path elimination. input: failure probability δ, stochasticity level η.
1: set Ψ1 = {∅}, where ∅ denotes the empty path.
2: for h = 2, . . . , H do
3: set N = 16 |Ψh−1 ∘ A|² log(|F| |Ψh−1 ∘ A| / δ).
4: collect a dataset D of N i.i.d. tuples (x, υ), where υ ∼ Unf(Ψh−1 ∘ A) and x ∼ P_h(· | υ).
5: solve the multi-class classification problem ˆfh = arg max_{f ∈ F} Σ_{(x,υ) ∈ D} ln f(idx(υ) | x).
6: for 1 ≤ i < j ≤ |Ψh−1 ∘ A| do
7: calculate the path prediction gap ˆ∆(i, j) = (1/N) Σ_{(x,υ) ∈ D} |ˆfh(i | x) − ˆfh(j | x)|.
8: if ˆ∆(i, j) is below a threshold set as a function of η, then eliminate the path υ with idx(υ) = j. // υi and υj visit the same state
9: Ψh is defined as the set of all paths in Ψh−1 ∘ A that have not been eliminated in line 8.

ppe can be thought of as a computationally-efficient and simpler alternative to algorithm 4 of du et al. (2019), who studied the rich-observation setting without exogenous noise.1 ppe performs iterations over the time steps h ∈ {2, . . . , H}. in the hth iteration, it learns a policy cover Ψh for time step h containing open-loop policies. this is done by first augmenting the policy cover for the previous time step by one step. formally, we define Υh = Ψh−1 ∘ A = {π ∘ a : π ∈ Ψh−1, a ∈ A}, where π ∘ a is an open-loop policy that follows π till time step h − 1 and then takes action a. since we assume the transition dynamics to be near-deterministic, we know that there exists a policy cover for time step h that is a subset of Υh and whose size is equal to the number of reachable states at time step h. further, as the transitions are near-deterministic, we refer to an open-loop policy as a path, since we can view the policy as tracing a path in the latent transition model. ppe works by eliminating paths in Υh so that we are left with just a single path for each reachable state. this is done by collecting a dataset D of tuples (x, υ), where υ is sampled uniformly from Υh and x ∼ P_h(· | υ) (line 4). we then train a classifier ˆfh to predict the index idx(υ) of the path υ from the observation x (line 5). indices of paths in Υh are computed with respect to Υh and remain fixed throughout training. intuitively, if ˆfh(i | x) is sufficiently large, then we can hope that the path υi visits the state φ⋆(x).
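in a deterministic toy where we can evaluate exactly which endogenous state each path reaches, the elimination step of algorithm 1 reduces to keeping the lowest-index path per reached state (the exact gap is zero precisely for same-state pairs, so the higher-indexed path of each such pair is dropped). a minimal sketch, with illustrative chain dynamics:

```python
from itertools import product

def eliminate_paths(paths, reach):
    """Keep, for each reached endogenous state, only the lowest-index path."""
    kept, seen = [], set()
    for p in paths:            # paths are ordered by their fixed index
        s = reach(p)
        if s not in seen:      # gap > 0 against every kept path
            seen.add(s)
            kept.append(p)
    return kept

# toy chain: action 1 advances (capped at state 2), action 0 resets to 0
T_endo = lambda s, a: min(2, s + 1) if a == 1 else 0
def reach(seq):
    s = 0
    for a in seq:
        s = T_endo(s, a)
    return s

candidates = list(product([0, 1], repeat=2))   # toy stand-in for Upsilon_h
cover = eliminate_paths(candidates, reach)
```

here (0,0) and (1,0) both reach state 0, so only the lower-indexed (0,0) survives, leaving one path per reachable state.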
further, we can view this prediction problem as learning a multistep inverse dynamics model, since the open-loop policy contains information about all previous actions and not just the last action. for every pair of paths in Υh, we first compute a path prediction gap ˆ∆ (line 7). if the gap is too small, we show this implies that the two paths reach the same endogenous state, hence we can eliminate a single redundant path from the pair (line 8). finally, Ψh is defined as the set of all paths in Υh which were not eliminated. ppe reduces rl to performing H standard classification problems. further, the algorithm is very simple and in practice requires just a single hyperparameter (N). we believe these properties will make it well-suited for many problems.

recovering an endogenous state decoder. we can recover an endogenous state decoder ˆφh for each time step h ∈ {2, . . . , H} directly from ˆfh as shown below: ˆφh(x) = min{ i ∈ [|Υh|] : ˆfh(i | x) = max_j ˆfh(j | x) }. intuitively, this assigns the observation to the path with smallest index that has the highest chance of generating x, and therefore of visiting φ⋆(x). we are implicitly using the decoder for exploring, since we rely on ˆfh for making planning decisions. we will evaluate the accuracy of this decoder in section 6.

recovering the latent transition dynamics. ppe can also be used to recover the latent endogenous transition dynamics. the direct way is to use the learned decoder ˆφh along with episodes collected by ppe during the course of training and do count-based estimation. however, for most problems, recovering an approximate deterministic transition dynamics suffices, which can be directly read (footnote 1: alg. 4 has time complexity of o(s⁴a⁴H) compared to o(s³a³H) for ppe. furthermore, alg. 4 requires an upper bound on s, whereas ppe is adaptive to it. lastly, du et al. (2019) assumed a deterministic setting while we provide a generalization to near-determinism.)
from the path elimination data. we accomplish this by recovering a partition of the paths in Ψh−1 ∘ A, where two paths in the same partition set are said to be merged with each other. in the beginning, each path is only merged with itself. when we eliminate a path υj on comparison with υi in line 8, all paths currently merged with υj get merged with υi. we then define an abstract state space ˆSh for time step h that contains an abstract state j for each path υj ∈ Ψh. further, we recover a latent deterministic transition dynamics for time step h − 1 as ˆTh−1 : ˆSh−1 × A → ˆSh, where we set ˆTh−1(i, a) = j if the path υ′i ∘ a ∈ Ψh−1 ∘ A gets merged with the path υj ∈ Ψh, where υ′i ∈ Ψh−1.

learning a near optimal policy given a policy cover. ppe runs in a reward-free setting. however, the recovered policy cover and dynamics can be directly used to optimize any given reward function with existing methods. if the reward function depends on the exogenous state, then we can use the psdp algorithm (bagnell et al., 2004) to learn a near-optimal policy. psdp is a model-free dynamic programming method that only requires a policy cover as input (see appendix d.1 for details). if the reward function only depends on the endogenous state, we can use a computationally cheaper value iteration (vi) that uses the recovered transition dynamics. vi is a model-based algorithm that estimates the reward for each state and action, and performs dynamic programming on the model (see appendix d.2 for details). in each case, the sample complexity of learning a near-optimal policy, given the output of ppe, scales with the size of the endogenous and not the exogenous state space.

theoretical analysis and discussion. we provide the main sample complexity guarantee for ppe as well as additional intuition for why it works. we analyze the algorithm in near-deterministic mdps, defined as follows: two transition functions T1 and T2 are η-close if for all h ∈ [H], s ∈ S and a ∈ A it holds that ‖T1(· | s, a) − T2(· | s, a)‖₁ ≤ η.
analogously, two starting distributions µ1 and µ2 are η-close if ‖µ1 − µ2‖₁ ≤ η. we emphasize that near-deterministic dynamics are common in real-world applications like robotics.

assumption 1 (near deterministic endogenous dynamics). we assume the endogenous dynamics is η-close to a deterministic model (µd,η, Td,η).

we make a realizability assumption for the regression problem solved by ppe (line 5). we assume F is expressive enough to represent the bayes optimal classifier of the regression problems created by ppe.

assumption 2 (realizability). for any h ∈ [H] and any set of paths Υ ⊆ A^h with |Υ| ≤ sa, where A^h denotes the set of all paths of length h, there exists f⋆Υ,h ∈ F such that, for all υ ∈ Υ and x ∈ X with Σ_{υ′ ∈ Υ} P_h(φ⋆(x) | υ′) > 0, we have f⋆Υ,h(idx(υ) | x) = P_h(φ⋆(x) | υ) / Σ_{υ′ ∈ Υ} P_h(φ⋆(x) | υ′).

realizability assumptions are common in theoretical analyses (e.g., misra et al. (2020); agarwal et al. (2020a)). in practice, we use expressive neural networks to solve the regression problem, so we expect the realizability assumption to hold. note that there are at most a^{s(H+1)} bayes classifiers for the different prediction problems. however, this is acceptable since our guarantees will scale as ln |F| and, therefore, the function class F can be exponentially large to accommodate all of them. we now state the formal sample complexity guarantee for ppe.

theorem 1 (sample complexity). fix δ ∈ (0, 1). then, with probability greater than 1 − δ, ppe returns policy covers {Ψh}_{h=1}^{H} such that for any h ∈ [H], Ψh is an ηH-policy cover for time step h and |Ψh| ≤ s; the total number of episodes used by ppe is o(s²a²H ln(|F| saH / δ)).

we defer the proof to appendix c. our sample complexity guarantees do not depend directly on the size of the observation space or the exogenous space.
further, since our analysis only uses standard uniform convergence arguments, it extends straightforwardly to infinitely large function classes by replacing ln |F| with other suitable complexity measures such as rademacher complexity.

why does ppe work? we provide an asymptotic analysis to explain why ppe works. consider a deterministic setting and the hth iteration of ppe. assume by induction that Ψh−1 is an exact policy cover for time step h − 1. therefore, Υh = Ψh−1 ∘ A is also a policy cover for time step h. however, it may contain redundancies; it may contain several paths that reach the same endogenous state. we now show how a generalized inverse dynamics objective can eliminate such redundant paths.

[figure 2: results on combination lock. (a) left: the latent transition dynamics of combination lock (h = 2); observations are not shown for brevity. (b) center: the minimal number of episodes needed to achieve a mean regret of at most V(π⋆)/2. (c) right: state decoding accuracy (in percent) of decoders learned by different methods; solid lines imply no exogenous dimension while dashed lines imply an exogenous dimension of 100.]

let P_h(ξ) denote the distribution over exogenous states at time step h, which is independent of the agent's policy. the bayes optimal classifier (f⋆h := f⋆Υh,h) of the prediction problem can be derived as: f⋆h(idx(υ) | x) := P_h(υ | x) = P_h(x | υ) p(υ) / Σ_{υ′} P_h(x | υ′) p(υ′) (a)= P_h(x | υ) / Σ_{υ′} P_h(x | υ′) (b)= P_h(φ⋆(x) | υ) / Σ_{υ′} P_h(φ⋆(x) | υ′), where (a) holds since all paths in Υh are chosen uniformly, and (b) critically uses the fact that for any open-loop policy υ we have the factorization property P_h(x | υ) = q(x | φ⋆(x), φ⋆ξ(x)) P_h(φ⋆(x) | υ) P_h(φ⋆ξ(x)). let υ1, υ2 ∈ Υh be two paths with indices i and j respectively. we define their exact path prediction gap as ∆(i, j) := E_{xh}[ |f⋆h(i | xh) − f⋆h(j | xh)| ].
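the derivation above can be sanity-checked in a small deterministic toy. the sketch below (made-up parity dynamics; all names are illustrative) computes the bayes optimal classifier over a set of paths: because f⋆h depends on x only through the endogenous state φ⋆(x), paths reaching the same state receive identical scores (gap 0), while paths reaching different states are separated by a gap of 1/ω(s).

```python
def bayes_classifier(paths, reach, s):
    """f*_h(. | x) for any observation x with phi*(x) = s, uniform path prior.

    The exogenous factor of P_h(x | path) cancels between numerator and
    denominator, so only the indicator of reaching s survives."""
    hits = [1.0 if reach(p) == s else 0.0 for p in paths]
    total = sum(hits)
    return [h / total if total else 0.0 for h in hits]

reach = lambda seq: sum(seq) % 2                # toy parity dynamics
paths = [(0, 0), (0, 1), (1, 0), (1, 1)]        # toy stand-in for Upsilon_h
f_even = bayes_classifier(paths, reach, s=0)    # paths 0 and 3 reach s = 0

gap_01 = abs(f_even[0] - f_even[1])   # paths reaching different states
gap_03 = abs(f_even[0] - f_even[3])   # paths reaching the same state
```

here ω(0) = 2, so the two paths reaching state 0 each get score 1/2, and the pairwise gap is 0 for the same-state pair and 1/2 otherwise, matching the elimination rule.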
assume that υ1 visits an endogenous state s at time step h, and denote by ω(s) the number of paths in Υh that reach s. then f⋆h(i | xh) = 1/ω(s) if φ⋆(xh) = s, and 0 otherwise. if υ2 also visits s at time step h, then f⋆h(i | xh) = f⋆h(j | xh) for all xh. this implies ∆(i, j) = 0, and ppe will filter out the path with higher index, since it detected that both paths reach the same endogenous state. conversely, let υ2 visit a different state at time step h. if x is an observation that maps to s, then f⋆h(i | x) = 1/ω(s) and f⋆h(j | x) = 0. this gives |f⋆h(i | x) − f⋆h(j | x)| ≥ 1/ω(s) and, consequently, ∆(i, j) > 0; in fact, we can show ∆(i, j) ≥ Ω(1/|Υh|). thus, ppe will not eliminate these paths upon comparison. our complete analysis in the appendix generalizes the above reasoning to the finite sample setting, where we can only approximate f⋆h and ∆, as well as to ex-bmdps with near-deterministic dynamics. as is evident, the analysis critically relies on the factorization property that holds for open-loop policies but not for arbitrary ones. this is the reason why we build a policy cover with open-loop policies.

experiments. we evaluate ppe on two domains: a challenging exploration problem called combination lock, to test whether ppe can learn an optimal policy and an accurate state decoder, and a visual grid-world with complex visual representations, to test whether ppe is able to recover the latent dynamics.

combination lock experiments. the combination lock problem is defined, for a given horizon H, by an endogenous state space S = {s1,a} ∪ {sh,a, sh,b, sh,c : h = 2, . . . , H}, an action space A with 10 actions, an exogenous state space Ξ = {0, 1}^d, and a deterministic endogenous start state s1,a. for any state sh,g we call g its type, which can be a, b or c. states of type a and b are considered good states and those of type c are considered bad states.
each instance of this problem is defined by two good action sequences (ah) and (a′h) with ah ≠ a′h for every h, chosen uniformly at random and kept fixed throughout. at h = 1, the agent is in s1,a; action a1 leads to s2,a, action a′1 leads to s2,b, and all other actions lead to s2,c. for h ≥ 2, taking action ah in sh,a leads to sh+1,a and taking action a′h in sh,b leads to sh+1,b. in all other cases, taking an action in a state sh,g transitions to the next bad state sh+1,c. we visualize the latent endogenous dynamics in figure 2a. the exogenous state evolves as follows: we set ξ1 ∈ {0, 1}^d, and at each time step h, ξh is generated from ξh−1 independently of the agent's actions.
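the endogenous dynamics just described can be sketched as follows. the exogenous update is omitted since its exact law is not fully specified in this extract, and the tie-breaking choice `good_b = good_a + 1 (mod A)` is merely an illustrative way to enforce ah ≠ a′h.

```python
import random

def make_lock(H, num_actions, seed=0):
    """Combination-lock endogenous dynamics (sketch): states are types a/b/c."""
    rng = random.Random(seed)
    good_a = [rng.randrange(num_actions) for _ in range(H)]
    good_b = [(g + 1) % num_actions for g in good_a]   # guaranteed != good_a

    def step(typ, h, action):
        if typ == "a" and action == good_a[h]:
            return "a"                       # stay on the good "a" chain
        if typ == "b" and action == good_b[h]:
            return "b"                       # stay on the good "b" chain
        if typ == "a" and h == 0 and action == good_b[0]:
            return "b"                       # from s_{1,a}, the second good action
        return "c"                           # any other action: absorbing bad chain

    return step, good_a, good_b

step, good_a, good_b = make_lock(H=5, num_actions=10)
typ = "a"
for h in range(4):
    typ = step(typ, h, good_a[h])            # follow the good sequence
```

this makes the exploration challenge visible: a uniformly random policy stays on a good chain for H steps with probability (1/10)^(H−1), so naive exploration fails while ppe's path elimination keeps exactly one path per good state.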
108,
58.90055,
505.7435378,
193.1034166
] |
gradient step denoiser for convergent plug-and-play. samuel hurault∗, arthur leclaire & nicolas papadakis. univ. bordeaux, bordeaux inp, cnrs, imb, umr 5251, f-33400 talence, france.

abstract. plug-and-play (pnp) methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. although pnp methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the denoiser, or limited to strongly convex data-fidelity terms. we propose a new type of pnp method, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network. exploiting convergence results for proximal gradient descent algorithms in the nonconvex setting, we show that the proposed pnp algorithm is a convergent iterative scheme that targets stationary points of an explicit global functional. besides, experiments show that it is possible to learn such a deep denoiser while not compromising the performance in comparison to other state-of-the-art deep denoisers used in pnp schemes. we apply our proximal gradient algorithm to various ill-posed inverse problems, e.g. deblurring, super-resolution and inpainting. for all these applications, numerical results empirically confirm the convergence results. experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively.

introduction. image restoration (ir) problems can be formulated as inverse problems of the form x∗ ∈ arg min_x f(x) + λ g(x), (1) where f is a term measuring the fidelity to a degraded observation y, and g is a regularization term weighted by a parameter λ ≥ 0. generally, the degradation of a clean image x̂ can be modeled by a linear operation y = A x̂ + ξ, where A is a degradation matrix and ξ a white gaussian noise.
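a minimal numerical sketch of this linear degradation model and, assuming the gaussian likelihood discussed next, the induced data-fidelity term f(x) = ‖Ax − y‖²/(2σ²) and its gradient ∇f(x) = Aᵀ(Ax − y)/σ². the matrix A, signal x and noise level are random illustrative values, not from the paper.

```python
import numpy as np

def f(x, A, y, sigma):
    """Data-fidelity f(x) = ||Ax - y||^2 / (2 sigma^2)."""
    r = A @ x - y
    return float(r @ r) / (2 * sigma**2)

def grad_f(x, A, y, sigma):
    """Gradient of f: A^T (Ax - y) / sigma^2."""
    return A.T @ (A @ x - y) / sigma**2

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)
y = A @ x                                   # noiseless observation
assert np.allclose(grad_f(x, A, y, 1.0), 0.0)   # zero residual => zero gradient
```

this is the f that the proximal and gradient-based splitting schemes below operate on.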
in this context, the maximum a posteriori (map) derivation relates the data-fidelity term to the likelihood f(x) = − log p(y|x) = (1/2σ²) ‖Ax − y‖², while the regularization term is related to the chosen prior. regularization is crucial since it tackles the ill-posedness of the ir task by bringing a priori knowledge on the solution. a lot of research has been dedicated to designing accurate priors g. among the most classical priors, one can single out total variation (rudin et al., 1992), wavelet sparsity (mallat, 2009) or patch-based gaussian mixtures (zoran & weiss, 2011). designing a relevant prior g is a difficult task, and recent approaches rather apply deep learning techniques to directly learn a prior from a database of clean images (lunz et al., 2018; prost et al., 2021; gonzález et al., 2021). generally, the problem (1) does not have a closed-form solution, and an optimization algorithm is required. first-order proximal splitting algorithms (combettes & pesquet, 2011) operate individually on f and g via the proximity operator prox_f(x) = arg min_z (1/2)‖x − z‖² + f(z). among them, half-quadratic splitting (hqs) (geman & yang, 1995) alternately applies the proximal operators of f and g. proximal methods are particularly useful when either f or g is nonsmooth. plug-and-play (pnp) methods (venkatakrishnan et al., 2013) build on proximal splitting algorithms by replacing the proximity operator of g with a generic denoiser, e.g. a pretrained deep network. ∗corresponding author: samuel.hurault@math.u-bordeaux.fr. these methods achieve state-of-the-art results (buzzard et al., 2018; ahmad et al., 2020; yuan et al., 2020; zhang et al., 2021) in various ir problems. however, since a generic denoiser cannot generally be expressed as a proximal mapping (moreau, 1965), convergence results, which stem from the properties of the proximal operator, are difficult to obtain. moreover, the regularizer g is only made implicit via the denoising operation.
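a minimal pnp-hqs sketch for the pure denoising case A = Id, where the prox of the data-fidelity term has a closed form; the plugged-in "denoiser" is a simple moving average standing in for a deep denoiser, and all parameter values are illustrative.

```python
import numpy as np

def prox_f(z, y, sigma, tau):
    # arg min_v (1/2)||v - z||^2 + (tau / (2 sigma^2)) ||v - y||^2
    w = tau / sigma**2
    return (z + w * y) / (1 + w)

def denoise(v):
    """3-tap moving average with edge replication (illustrative denoiser)."""
    p = np.pad(v, 1, mode="edge")
    return (p[:-2] + p[1:-1] + p[2:]) / 3

def pnp_hqs(y, sigma, tau, iters=20):
    """Alternate the prox of f with a generic denoiser in place of prox of g."""
    x = y.copy()
    for _ in range(iters):
        z = prox_f(x, y, sigma, tau)   # data-fidelity step
        x = denoise(z)                 # plug-and-play regularization step
    return x

rng = np.random.default_rng(0)
clean = np.concatenate([np.zeros(10), np.ones(10)])
y = clean + 0.3 * rng.standard_normal(20)
x = pnp_hqs(y, sigma=0.3, tau=0.3)
```

note there is no explicit g anywhere in the loop; the regularization exists only through the call to `denoise`, which is exactly the tractability issue raised next.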
therefore, pnp algorithms do not seek the minimization of an explicit objective functional, which strongly limits their interpretation and numerical control. in order to keep tractability of a minimization problem, romano et al. (2017) proposed, with regularization by denoising (red), an explicit prior g that exploits a given generic denoiser d in the form g(x) = (1/2)⟨x, x − d(x)⟩. with strong assumptions on the denoiser (in particular a symmetric jacobian assumption), they show that it verifies ∇xg(x) = x − d(x). (3) such a denoiser is then plugged in gradient-based minimization schemes. despite having shown very good results on various image restoration tasks, as later pointed out by reehorst & schniter (2018) or saremi (2019), existing deep denoisers lack jacobian symmetry. hence, red does not minimize an explicit functional and is not guaranteed to converge. contributions. in this work, we develop a pnp scheme with novel theoretical convergence guarantees and state-of-the-art ir performance. departing from the pnp-hqs framework, we plug a denoiser that inherently satisfies equation (3) without sacrificing the denoising performance. the resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional. this convergence guarantee does not require strong convexity of the data-fidelity term, thus encompassing ill-posed ir tasks like deblurring, super-resolution or inpainting. related works pnp methods have been successfully applied in the literature with various splitting schemes: hqs (zhang et al., 2017b; 2021), admm (romano et al., 2017; ryu et al., 2019), proximal gradient descent (pgd) (terris et al., 2020). first used with classical non-deep denoisers such as bm3d (chan et al., 2016) and pseudo-linear denoisers (nair et al., 2021; gavaskar et al., 2021), more recent pnp approaches (meinhardt et al., 2017; ryu et al., 2019) rely on efficient off-the-shelf deep denoisers such as dncnn (zhang et al., 2017a).
state-of-the-art ir results are currently obtained with denoisers that are specifically designed to be integrated in pnp schemes, like ircnn (zhang et al., 2017b) or drunet (zhang et al., 2021). though providing excellent restorations, such schemes are not guaranteed to converge for all kinds of denoisers or ir tasks. designing convergence proofs for pnp algorithms is an active research topic. sreehari et al. (2016) used the proximal theorem of moreau (moreau, 1965) to give sufficient conditions for the denoiser to be an explicit proximal map, which are applied to a pseudo-linear denoiser. the convergence with pseudo-linear denoisers has been extensively studied (gavaskar & chaudhury, 2020; nair et al., 2021; chan, 2019). however, state-of-the-art pnp results are obtained with deep denoisers. various assumptions have been made to ensure the convergence of the related pnp schemes. with a “bounded denoiser” assumption, chan et al. (2016); gavaskar & chaudhury (2019) showed convergence of pnp-admm with stepsizes decreasing to 0. red (romano et al., 2017) and red-pro (cohen et al., 2021) respectively consider the classes of denoisers with symmetric jacobian or demicontractive mappings, but these conditions are either too restrictive or hard to verify in practice. in appendix a.3, more details are given on red-based methods. many works focus on lipschitz properties of pnp operators. depending on the splitting algorithm in use, convergence can be obtained by assuming the denoiser to be averaged (sun et al., 2019b), firmly nonexpansive (sun et al., 2021; terris et al., 2020) or simply nonexpansive (reehorst & schniter, 2018; liu et al., 2021). these settings are unrealistic as deep denoisers do not generally satisfy such properties. ryu et al. (2019); terris et al. (2020) propose different ways to train deep denoisers with constrained lipschitz constants, in order to fit the technical properties required for convergence.
but imposing hard lipschitz constraints on the network alters its denoising performance (bohra et al., 2021; hertrich et al., 2021). yet, ryu et al. (2019) manage to get a convergent pnp scheme without assuming the nonexpansiveness of d. this comes at the cost of imposing strong convexity on the data-fidelity term f, which excludes many ir tasks like deblurring, super-resolution or inpainting. hence, given the ill-posedness of ir problems, looking for a unique solution via contractive operators is a restrictive assumption. in this work, we do not impose contractiveness, but still obtain convergence results with realistic hypotheses. one can relate the ideal deep denoiser to the “true” natural image prior p via tweedie’s identity. in (efron, 2011), it is indeed shown that the minimum mean square error (mmse) denoiser d∗σ (at noise level σ) verifies d∗σ(x) = x + σ2∇x log pσ(x), where pσ is the convolution of p with the density of n(0, σ2 id). in a recent line of research (bigdeli et al., 2017; xu et al., 2020; laumont et al., 2021; kadkhodaie & simoncelli, 2020), this relation is used to plug a denoiser in gradient-based dynamics. in practice, the mmse denoiser cannot be computed explicitly and tweedie’s identity does not hold for deep approximations of the mmse. in order to be as exhaustive as possible, we detail the addressed limitations of existing pnp methods in appendix a.1. the gradient step plug-and-play the proposed method is based on the pnp version of half-quadratic splitting (pnp-hqs) that amounts to replacing the proximity operator of the prior g with an off-the-shelf denoiser dσ. in order to define a convergent pnp scheme, we first set up in section 3.1 a gradient step (gs) denoiser. we then introduce the gradient step pnp (gs-pnp) algorithm in section 3.2. gradient step denoiser we propose to plug a denoising operator dσ that takes the form of a gradient descent step dσ = id − ∇gσ, (4) with gσ : rn → r. contrary to romano et al.
(2017), our denoiser exactly represents a conservative vector field. the choice of the parameterization of gσ is fundamental for the denoising performance. as already noticed in salimans & ho (2021), we experimentally found that directly modeling gσ as a neural network (e.g. a standard network used for classification) leads to poor denoising performance. in order to keep the strength of state-of-the-art unconstrained denoisers, we rather use gσ(x) = (1/2)||x − nσ(x)||2, (5) which leads to dσ(x) = x − ∇gσ(x) = nσ(x) + jnσ(x)^t (x − nσ(x)), (6) where nσ : rn → rn is parameterized by a neural network and jnσ(x) is the jacobian of nσ at point x. as discussed in appendix a.2, the formulation (5) for gσ has been proposed in (romano et al., 2017, section 5.2) and (bigdeli & zwicker, 2017) for a distinct but related purpose, and not exploited for convergence analysis. thanks to our definition (6) for dσ, we can parameterize nσ with any differentiable neural network architecture rn → rn that has proven efficient for image denoising. although the representation power of the denoiser is limited by the particular form (6), we show (see section 5.1) that such parameterization still yields state-of-the-art denoising results. we train the denoiser dσ for gaussian noise by minimizing the mse loss function l(dσ) = ex∼p,ξσ∼n(0,σ2i)[||dσ(x + ξσ) − x||2], or l(gσ) = ex∼p,ξσ∼n(0,σ2i)[||∇gσ(x + ξσ) − ξσ||2], when written in terms of gσ using equation (4). remark 1. by definition, the optimal solution g∗σ ∈ arg min l is related to the mmse denoiser d∗σ, that is, the best non-linear predictor of x given x + ξσ. therefore, it satisfies tweedie’s formula ∇g∗σ = −σ2∇ log pσ (efron, 2011), i.e. g∗σ = −σ2 log pσ + c for some c ∈ r. hence, approximating the mmse denoiser with a denoiser parameterized as (4) is related to approximating the logarithm of the smoothed image prior pσ with −(1/σ2) gσ. this relation was used for image generation with “denoising score matching” by saremi & hyvarinen (2019); bigdeli et al. (2020).
a plug-and-play method for explicit minimization the standard pnp-hqs operator is tpnp-hqs = dσ ◦ proxτf, i.e. (id − ∇gσ) ◦ proxτf when using the gs denoiser as dσ. for convergence analysis, we wish to fit the proximal gradient descent (pgd) algorithm. we thus propose to switch the proximal and gradient steps and to relax the denoising step with a parameter λ ≥ 0. our pnp algorithm with gs denoiser (gs-pnp) then writes xk+1 = tτ,λ_gs-pnp(xk) with tτ,λ_gs-pnp = proxτf ◦ (τλ dσ + (1 − τλ) id) = proxτf ◦ (id − τλ∇gσ). under suitable conditions on f and gσ (see lemma 1 in appendix c), fixed points of the pgd operator tτ,λ_gs-pnp correspond to critical points of a classical objective function in ir problems, F(x) = f(x) + λgσ(x). therefore, using the gs denoiser from equation (4) is equivalent to including an explicit regularization and thus leads to a tractable global optimization problem solved by the pnp algorithm. our complete pnp scheme is presented in algorithm 1. it includes a backtracking procedure on the stepsize τ that will be detailed in section 4.2. also, after convergence, we found it useful to apply an extra gradient step id − λτ∇gσ in order to discard the residual noise brought by the last proximal step proxτf. convergence analysis | 3 | [
108.299, 582.2096768, 259.2274027, 594.1648768 ] |
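The gradient-step construction described in the row above (the denoiser dσ = id − ∇gσ of eqs. (4)–(6) and the GS-PnP iteration x_{k+1} = prox_{τf}(x_k − τλ∇gσ(x_k))) can be sketched numerically. This is only an illustration, not the paper's code: the "network" N below is a hypothetical linear smoother standing in for Nσ, and the data term is the identity-degradation f(x) = ½||x − y||², so every step has a closed form.

```python
import numpy as np

# Hedged sketch of the GS denoiser and the GS-PnP iteration with a
# hypothetical *linear* stand-in N for the network N_sigma.
rng = np.random.default_rng(0)
n = 20
W = rng.standard_normal((n, n))
N = np.eye(n) - 0.1 * (W @ W.T) / n          # mild symmetric smoother
Q = (np.eye(n) - N).T @ (np.eye(n) - N)      # so grad g_sigma(x) = Q x

def g_sigma(x):
    # g_sigma(x) = 1/2 ||x - N x||^2  (linear stand-in for eq. (5))
    r = x - N @ x
    return 0.5 * r @ r

def grad_g_sigma(x):
    # for linear N: grad g_sigma = (I - N)^T (I - N) x; the GS denoiser
    # is then D_sigma = Id - grad g_sigma, as in eq. (4)
    return Q @ x

def prox_tau_f(v, y, tau):
    # prox of tau * (1/2)||x - y||^2 has the closed form (v + tau*y)/(1 + tau)
    return (v + tau * y) / (1.0 + tau)

def gs_pnp(y, lam=1.0, tau=0.5, iters=200):
    # x_{k+1} = prox_{tau f}( x_k - tau*lam * grad g_sigma(x_k) )
    x = y.copy()
    for _ in range(iters):
        x = prox_tau_f(x - tau * lam * grad_g_sigma(x), y, tau)
    return x

y = rng.standard_normal(n)
x = gs_pnp(y)

# fixed points are critical points of the explicit objective
# F = f + lam * g_sigma; here that means (I + Q) x = y
x_star = np.linalg.solve(np.eye(n) + Q, y)
assert np.allclose(x, x_star, atol=1e-8)

F = lambda z: 0.5 * np.sum((z - y) ** 2) + g_sigma(z)
assert F(x) <= F(y)   # the iteration decreased the explicit objective
```

Because everything is quadratic in this toy, the fixed point can be checked against the closed-form critical point of F = f + λgσ; with an actual neural Nσ only the general nonconvex convergence argument of the paper applies.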
NX1He-aFO_F.pdf | 2,021 | 1 | learning value functions in deep policy gradients using residual variance yannis flet-berliac∗ inria, scool team univ. lille, cristal, cnrs yannis.flet-berliac@inria.fr odalric-ambrym maillard inria, scool team reda ouhamma∗ inria, scool team univ. lille, cristal, cnrs reda.ouhamma@inria.fr philippe preux inria, scool team univ. lille, cristal, cnrs abstract policy gradient algorithms have proven to be successful in diverse decision making and control tasks. however, these methods suffer from high sample complexity and instability issues. in this paper, we address these challenges by providing a different approach for training the critic in the actor-critic framework. our work builds on recent studies indicating that traditional actor-critic algorithms do not succeed in fitting the true value function, calling for the need to identify a better objective for the critic. in our method, the critic uses a new state-value (resp. state-action-value) function approximation that learns the value of the states (resp. state-action pairs) relative to their mean value rather than the absolute value as in conventional actor-critic. we prove the theoretical consistency of the new gradient estimator and observe dramatic empirical improvement across a variety of continuous control tasks and algorithms. furthermore, we validate our method in tasks with sparse rewards, where we provide experimental evidence and theoretical insights. introduction model-free deep reinforcement learning (rl) has been successfully used in a wide range of problem domains, ranging from teaching computers to control robots to playing sophisticated strategy games (silver et al., 2014; schulman et al., 2016; lillicrap et al., 2016; mnih et al., 2016).
state-of-the-art policy gradient algorithms currently combine ingenious learning schemes with neural networks as function approximators in the so-called actor-critic framework (sutton et al., 2000; schulman et al., 2017; haarnoja et al., 2018). while such methods demonstrate great performance in continuous control tasks, several discrepancies persist between what motivates the conceptual framework of these algorithms and what is implemented in practice to obtain maximum gains. for instance, research aimed at improving the learning of value functions often restricts the class of function approximators through different assumptions, then proposes a critic formulation that allows for a more stable policy gradient. however, new studies (tucker et al., 2018; ilyas et al., 2020) indicate that state-of-the-art policy gradient methods (schulman et al., 2015; 2017) fail to fit the true value function and that recently proposed state-action-dependent baselines (gu et al., 2016; liu et al., 2018; wu et al., 2018) do not reduce gradient variance more than state-dependent ones. these findings leave the reader skeptical about actor-critic algorithms, suggesting that recent research tends to improve performance by introducing a bias rather than stabilizing the learning. consequently, attempting to find a better baseline is questionable, as critics would typically fail to fit it (ilyas et al., 2020). in tucker et al. (2018), the authors argue that “much larger gains could be achieved by instead improving the accuracy of the value function”. following this line of thought, we are interested in ways to better approximate the value function. one approach addressing this issue is to put more focus on relative state-action values, an idea introduced in the literature on advantage reinforcement learning (harmon & baird iii) followed by works on dueling (wang et al., 2016) neural networks. (∗equal contribution.)
more recent work (lin & zhou, 2020) also suggests that considering the relative action values, or more precisely the ranking of actions in a state leads to better policies. the main argument behind this intuition is that it suffices to identify the optimal actions to solve a task. we extend this principle of relative action value with respect to the mean value to cover both state and state-action-value functions with a new objective for the critic: minimizing the variance of residual errors. in essence, this modified loss function puts more focus on the values of states (resp. state-actions) relative to their mean value rather than their absolute values, with the intuition that solving a task corresponds to identifying the optimal action(s) rather than estimating the exact value of each state. in summary, this paper: • introduces actor with variance estimated critic (avec), an actor-critic method providing a new training objective for the critic based on the residual variance. • provides evidence for the improvement of the value function approximation as well as theoretical consistency of the modified gradient estimator. • demonstrates experimentally that avec, when coupled with state-of-the-art policy gradient algorithms, yields a significant performance boost on a set of challenging tasks, including environments with sparse rewards. • provides empirical evidence supporting a better fit of the true value function and a substantial stabilization of the gradient. related work | 1 | [
108.299, 442.1786768, 211.1957635, 454.1338768 ] |
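The abstract above describes a critic trained to minimize the variance of the residual errors rather than their mean square, so that it learns values relative to their mean. A minimal sketch of that objective (the function names are hypothetical, not the paper's code) demonstrates the key property: the residual-variance loss is invariant to a constant shift of the value estimates, while the usual MSE is not.

```python
import numpy as np

# Hedged sketch of a residual-variance critic objective in the spirit of
# AVEC, as summarized in the row above. Names are illustrative.

def mse_loss(v, targets):
    # conventional critic loss: mean squared residual
    d = targets - v
    return np.mean(d ** 2)

def residual_variance_loss(v, targets):
    # variance of the residuals d = targets - v, i.e. E[(d - E[d])^2];
    # it scores V only up to an additive constant, so it fits values
    # *relative to their mean* rather than absolutely
    d = targets - v
    return np.mean((d - d.mean()) ** 2)

rng = np.random.default_rng(1)
targets = rng.standard_normal(100)   # stand-in for empirical returns
v = rng.standard_normal(100)         # stand-in for critic predictions

# shifting every value estimate by a constant leaves the residual-variance
# loss unchanged, but changes the MSE
assert abs(residual_variance_loss(v, targets)
           - residual_variance_loss(v + 3.0, targets)) < 1e-9
assert abs(mse_loss(v, targets) - mse_loss(v + 3.0, targets)) > 1.0
```

This shift invariance is exactly the "relative to their mean value" property stated in the abstract; in an actor-critic loop the constant offset is irrelevant to the advantage used by the policy gradient.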
apv504XsysP.pdf | 2,022 | 2 | ab-initio potential energy surfaces pairing gnns with neural wave functions by nicholas gao & stephan günnemann department of informatics & munich data science institute technical university of munich, germany {gaoni,guennemann}@in.tum.de abstract solving the schrödinger equation is key to many quantum mechanical properties. however, an analytical solution is only tractable for single-electron systems. recently, neural networks succeeded at modeling wave functions of many-electron systems. together with the variational monte-carlo (vmc) framework, this led to solutions on par with the best known classical methods. still, these neural methods require tremendous amounts of computational resources as one has to train a separate model for each molecular geometry. in this work, we combine a graph neural network (gnn) with a neural wave function to simultaneously solve the schrödinger equation for multiple geometries via vmc. this enables us to model continuous subsets of the potential energy surface with a single training pass. compared to existing state-of-the-art networks, our potential energy surface network (pesnet) speeds up training for multiple geometries by up to 40 times while matching or surpassing their accuracy. this may open the path to accurate and orders of magnitude cheaper quantum mechanical calculations. introduction in recent years, machine learning gained importance in computational quantum physics and chemistry to accelerate material discovery by approximating quantum mechanical (qm) calculations (huang & von lilienfeld, 2021). in particular, a lot of work has gone into building surrogate models to reproduce qm properties, e.g., energies. these models learn from datasets created using classical techniques such as density functional theory (dft) (ramakrishnan et al., 2014; klicpera et al., 2019) or coupled clusters (ccsd) (chmiela et al., 2018).
while this approach has shown great success in recovering the baseline calculations, it suffers from several disadvantages. firstly, due to the tremendous success of graph neural networks (gnns) in this area, the regression target quality became the limiting factor for accuracy (klicpera et al., 2019; qiao et al., 2021; batzner et al., 2021), i.e., the network’s prediction is closer to the data label than the data label is to the actual qm property. secondly, these surrogate models are subject to the usual difficulties of neural networks such as overconfidence outside the training domain (pappu & paige, 2020; guo et al., 2017). figure 1: schematic of pesnet. for each molecular structure (top row), the metagnn takes the nuclei graph and parametrizes the wfmodel via ω and ωm. given these, the wfmodel evaluates the electronic wave function ψ(⃗r). in orthogonal research, neural networks have been used as wave function ansätze to solve the stationary schrödinger equation (kessler et al., 2021; han et al., 2019). these methods use the variational monte carlo (vmc) (mcmillan, 1965) framework to iteratively optimize a neural wave function to obtain the ground-state electronic wave function of a given system. chemists refer to such methods as ab-initio, whereas the machine learning community may refer to this as a form of self-generative learning as no dataset is required. the data (electron positions) are sampled from the wave function itself, and the loss is derived from the schrödinger equation (ceperley et al., 1977). this approach has shown great success as multiple authors report results outperforming the traditional ‘gold-standard’ ccsd on various systems (pfau et al., 2020; hermann et al., 2020). however, these techniques require expensive training for each geometry, resulting in high computational requirements and, thus, limiting their application to small sets of configurations.
in this work, we accelerate vmc with neural wave functions by proposing an architecture that solves the schrödinger equation for multiple systems simultaneously. the core idea is to predict a set of parameters such that a given wave function, e.g., ferminet (pfau et al., 2020), solves the schrödinger equation for a specific geometry. previously, these parameters were obtained by optimizing a separate wave function for each geometry. we improve this procedure by generating the parameters with a gnn, as illustrated in figure 1. this enables us to capture continuous subsets of the potential energy surface in one training pass, removing the need for costly retraining. additionally, we take inspiration from supervised surrogate networks and enforce the invariances of the energy to physical symmetries such as translation, rotation, and reflection (schütt et al., 2018). while these symmetries hold for observable metrics such as energies, the wave function itself may not have these symmetries. we solve this issue by defining a coordinate system that is equivariant to the symmetries of the energy. in our experiments, our potential energy surface network (pesnet) consistently matches or surpasses the results of the previous best neural wave functions while training less than 1/40 of the time for high-resolution potential energy surface scans. related work molecular property prediction has seen a surge in publications in recent years with the goal of predicting qm properties such as the energy of a system. classically, features were constructed by hand and fed into a machine learning model to predict target properties (christensen et al., 2020; behler, 2011; bartók et al., 2013). lately, gnns have proven to be more accurate and took over the field (yang et al., 2019; klicpera et al., 2019; schütt et al., 2018). as gnns approach the accuracy limit, recent work focuses on improving generalization by integrating calculations from computational chemistry.
for instance, qdf (tsubaki & mizoguchi, 2020) and eann (zhang et al., 2019) approximate the electron density while orbnet (qiao et al., 2020) and unite (qiao et al., 2021) include features taken from qm calculations. another promising direction is ∆-ml models, which only predict the delta between a high-accuracy qm calculation and a faster low-accuracy one (wengert et al., 2021). despite their success, surrogate models lack reliability. even if uncertainty estimates are available (lamb & paige, 2020; hirschfeld et al., 2020), generalization outside of the training regime is unpredictable (guo et al., 2017). while such supervised models are architecturally related, they pursue a fundamentally different objective than pesnet. where surrogate models approximate qm calculations from data, this work focuses on performing the exact qm calculations from first principles. neural wave function ansätze in combination with the vmc framework have recently been proposed as an alternative (carleo & troyer, 2017) to classical self-consistent field (scf) methods such as hartree-fock, dft, or ccsd to solve the schrödinger equation (szabo & ostlund, 2012). however, early works were limited to small systems and low accuracy (kessler et al., 2021; han et al., 2019; choo et al., 2020). recently, ferminet (pfau et al., 2020) and paulinet (hermann et al., 2020) presented more scalable approaches and accuracy on par with the best traditional qm computations. to further improve accuracy, wilson et al. (2021) coupled ferminet with diffusion monte-carlo (dmc). but, all these methods need to be trained for each configuration individually. to address this issue, weight-sharing has been proposed to reduce the time per training, but this was initially limited to non-fermionic systems (yang et al., 2020). in a concurrent work, scherbela et al. (2021) extend this idea to electronic wave functions.
however, their deeperwin model still requires separate models for each geometry, does not account for symmetries and achieves lower accuracy, as we show in section 4. method to build a model that solves the schrödinger equation for many geometries simultaneously and accounts for the symmetries of the energy, we use three key ingredients. figure 2: pesnet’s architecture is split into two main components, the metagnn and the wfmodel. circles indicate parameter-free and rectangles parametrized functions, ∥ denotes the vector concatenation, a↑ and a↓ denote the index sets of the spin-up and spin-down electrons, respectively. to avoid clutter, we left out residual connections. firstly, to solve the schrödinger equation, we leverage the vmc framework, i.e., we iteratively update our wave function model (wfmodel) until it converges to the ground-state electronic wave function. the wfmodel ψθ(−→r ) : rn×3 → r is a function parametrized by θ that maps electron configurations to amplitudes. it must obey the fermi-dirac statistics, i.e., the sign of the output must flip under the exchange of two electrons of the same spin. as we cover in section 3.4, the wfmodel is essential for sampling electron configurations and computing energies. secondly, we extend this to multiple geometries by introducing a gnn that reparametrizes the wfmodel. in reference to meta-learning, we call this the metagnn. it takes the nuclei coordinates −→rm and charges zm and outputs subsets ω, ωm of wfmodel’s parameters θ. thanks to message passing, the metagnn can capture the full 3d geometry of the nuclei graph. lastly, as we prove in appendix a, to predict energies invariant to rotations and reflections the wave function needs to be equivariant. we accomplish this by constructing an equivariant coordinate system e = [−→e 1, −→e 2, −→e 3] based on the principal component analysis (pca). together, these components form pesnet, whose architecture is shown in figure 2.
since sampling and energy computations only need the wfmodel, a single forward pass of the metagnn is sufficient for each geometry during evaluation. furthermore, its end-to-end differentiability facilitates optimization, see section 3.4, and we may benefit from better generalization thanks to our equivariant wave function (elesedy & zaidi, 2021; kondor & trivedi, 2018). notation. we use bold lower-case letters h for vectors, bold upper-case letters w for matrices, arrows (−→) to indicate vectors in 3d, −→r i to denote electron coordinates, and −→rm, zm for nuclei coordinates and charge, respectively. [ ◦ ∥ ◦ ] and [ ◦ ]n i=1 denote vector concatenations. wave function model | 2 | [
108.249, 172.1100784, 240.1607551, 182.0726784 ] |
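The equivariant coordinate system mentioned in the row above can be illustrated with a small sketch: build a frame from the PCA of the nuclei positions and express all coordinates in that frame, so the representation fed to the wave-function model is unchanged under global rotation and translation of the molecule. The sign-fixing rule below is one hypothetical way to resolve the sign ambiguity of PCA axes, not necessarily the paper's exact construction.

```python
import numpy as np

# Hedged sketch of a PCA-based equivariant coordinate system, in the
# spirit of the construction described above. Names are illustrative.

def pca_frame(R):
    c = R - R.mean(axis=0)                   # centering: translation invariance
    _, _, Vt = np.linalg.svd(c, full_matrices=False)
    E = Vt                                   # rows are principal axes e1, e2, e3
    # resolve each axis' sign ambiguity deterministically: orient it toward
    # the point with the largest-magnitude projection (hypothetical rule)
    for i in range(E.shape[0]):
        p = c @ E[i]
        if p[np.argmax(np.abs(p))] < 0:
            E[i] = -E[i]
    return E, c

def canonical_coords(R):
    # coordinates of the (centered) nuclei expressed in the PCA frame
    E, c = pca_frame(R)
    return c @ E.T

rng = np.random.default_rng(2)
R = rng.standard_normal((5, 3))              # 5 "nuclei" in 3d

# a random proper rotation: QR of a Gaussian matrix, determinant fixed to +1
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] = -Q[:, 0]

a = canonical_coords(R)
b = canonical_coords(R @ Q.T + 1.7)          # rotate and translate the molecule
assert np.allclose(a, b, atol=1e-8)          # same canonical representation
```

With generic positions (distinct singular values), the projections onto the principal axes are identical for the original and the transformed molecule, which is the invariance the energy prediction needs.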
ZKy2X3dgPA.pdf | 2,022 | 1 | it takes two to tango: mixup for deep metric learning shashanka venkataramanan1∗ bill psomas3∗ konstantinos karantzalos3 ewa kijak1 yannis avrithis2 laurent amsaleg1 1inria, univ rennes, cnrs, irisa 2athena rc 3national technical university of athens abstract metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. state-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. on the one hand, metric learning losses consider two or more examples at a time. on the other hand, modern data augmentation methods for classification consider two or more examples at a time. the combination of the two ideas is under-studied. in this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. this task is challenging because unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. to the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. we develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate for mixup, introducing metric mix, or metrix. we also introduce a new metric—utilization—to demonstrate that by mixing examples during training, we are exploring areas of the embedding space beyond the training classes, thereby improving representations. to validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets. 
introduction classification is one of the most studied tasks in machine learning and deep learning. it is a common source of pre-trained models for transfer learning to other tasks (donahue et al., 2014; kolesnikov et al., 2020). it has been studied under different supervision settings (caron et al., 2018; sohn et al., 2020), knowledge transfer (hinton et al., 2015) and data augmentation (cubuk et al., 2018), including the recent research on mixup (zhang et al., 2018; verma et al., 2019), where embeddings and labels are interpolated. deep metric learning is about learning from pairwise interactions such that inference relies on instance embeddings, e.g. for nearest neighbor classification (oh song et al., 2016), instance-level retrieval (gordo et al., 2016), few-shot learning (vinyals et al., 2016), face recognition (schroff et al., 2015) and semantic textual similarity (reimers & gurevych, 2019). following (xing et al., 2003), it is most often fully supervised by one class label per example, like classification. the two most studied problems are loss functions (musgrave et al., 2020) and hard example mining (wu et al., 2017; robinson et al., 2021). tuple-based losses with example weighting (wang et al., 2019) can play the role of both. unlike classification, classes (and distributions) at training and inference are different in metric learning. thus, one might expect interpolation-based data augmentation like mixup to be even more important in metric learning than in classification. yet, recent attempts are mostly limited to special cases of embedding interpolation and have trouble with label interpolation (ko & gu, 2020). (∗equal contribution.) figure 1: metrix (= metric mix) allows an anchor to interact with positive (same class), negative (different class) and interpolated examples, which also have interpolated labels.
this raises the question: what is a proper way to define and interpolate labels for metric learning? in this work, we observe that metric learning is not different from classification, where examples are replaced by pairs of examples and class labels by “positive” or “negative”, according to whether class labels of individual examples are the same or not. the positive or negative label of an example, or a pair, is determined in relation to a given example which is called an anchor. then, as shown in figure 1, a straightforward way is to use a binary (two class) label per pair and interpolate it linearly as in standard mixup. we call our method metric mix, or metrix for short. to show that mixing examples improves representation learning, we quantitatively measure the properties of the test distributions using alignment and uniformity (wang & isola, 2020). alignment measures the clustering quality and uniformity measures its distribution over the embedding space; a well clustered and uniformly spread distribution indicates higher representation quality. we also introduce a new metric, utilization, to measure the extent to which a test example, seen as a query, lies near any of the training examples, clean or mixed. by quantitatively measuring these three metrics, we show that interpolation-based data augmentation like mixup is very important in metric learning, given the difference between distributions at training and inference. in summary, we make the following contributions: 1. we define a generic way of representing and interpolating labels, which allows straightforward extension of any kind of mixup to deep metric learning for a large class of loss functions. we develop our method on a generic formulation that encapsulates these functions (section 3). 2. we define the “positivity” of a mixed example and we study precisely how it increases as a function of the interpolation factor, both in theory and empirically (subsection 3.6). 3. 
we systematically evaluate mixup for deep metric learning under different settings, including mixup at different representation levels (input/manifold), mixup of different pairs of examples (anchors/positives/negatives), loss functions and hard example mining (subsection 4.2). 4. we introduce a new evaluation metric, utilization, validating that a representation more appropriate for test classes is implicitly learned during exploration of the embedding space in the presence of mixup (subsection 4.3). 5. we improve the state of the art on four common metric learning benchmarks (subsection 4.2). related work metric learning metric learning aims to learn a metric such that positive pairs of examples are nearby and negative ones are far away. in deep metric learning, we learn an explicit non-linear mapping from raw input to a low-dimensional embedding space (oh song et al., 2016), where the euclidean distance has the desired properties. although learning can be unsupervised (hadsell et al., 2006), deep metric learning has mostly followed the supervised approach, where positive and negative pairs are defined as having the same or different class label, respectively (xing et al., 2003). loss functions can be distinguished into pair-based and proxy-based (musgrave et al., 2020). pair-based losses use pairs of examples (wu et al., 2017; hadsell et al., 2006), which can be defined over triplets (wang et al., 2014; schroff et al., 2015; weinberger & saul, 2009; hermans et al., 2017), quadruples (chen et al., 2017) or tuples (sohn, 2016; oh song et al., 2016; wang et al., 2019). proxy-based losses use one or more proxies per class, which are learnable parameters in the embedding space (movshovitz-attias et al., 2017; qian et al., 2019; kim et al., 2020c; teh et al., 2020; zhu et al., 2020b). pair-based losses capture data-to-data relations, but they are sensitive to noisy labels and outliers.
pair-based losses often involve terms in which the given constraints are already satisfied; such terms produce zero gradients and do not contribute to training. this necessitates mining of hard examples that violate the constraints, like semi-hard (schroff et al., 2015) and distance weighted (wu et al., 2017). by contrast, proxy-based losses use data-to-proxy relations, assuming proxies can capture the global structure of the embedding space. they involve fewer computations, which are more likely to produce nonzero gradients; hence they have little or no dependence on mining and converge faster. mixup input mixup (zhang et al., 2018) linearly interpolates between two or more examples in the input space for data augmentation. numerous variants take advantage of the structure of the input space to interpolate non-linearly, e.g. for images (yun et al., 2019; kim et al., 2020a; 2021; hendrycks et al., 2020; devries & taylor, 2017; qin et al., 2020; uddin et al., 2021). manifold mixup (verma et al., 2019) interpolates intermediate representations instead, where the structure is learned. this can be applied to or assisted by decoding back to the input space (berthelot et al., 2018; liu et al., 2018; beckham et al., 2019; zhu et al., 2020a; venkataramanan et al., 2021). in both cases, corresponding labels are linearly interpolated too. most studies are limited to cross-entropy loss for classification. pairwise loss functions have been under-studied, as discussed below. interpolation for pairwise loss functions as discussed in subsection 3.3, interpolating target labels is not straightforward in pairwise loss functions. in deep metric learning, embedding expansion (ko & gu, 2020), hdml (zheng et al., 2019) and symmetrical synthesis (gu & ko, 2020) interpolate pairs of embeddings in a deterministic way within the same class, applying to pair-based losses, while proxy synthesis (gu et al., 2021) interpolates between classes, applying to proxy-based losses.
none performs label interpolation, which means that (gu et al., 2021) risks synthesizing false negatives when the interpolation factor λ is close to 0 or 1. in contrastive representation learning, mochi (kalantidis et al., 2020) interpolates anchor with negative embeddings but not labels and chooses λ ∈ [0, 0.5] to avoid false negatives. this resembles thresholding of λ at 0.5 in opttransmix (zhu et al., 2020a). finally, i-mix (lee et al., 2021) and mixco (kim et al., 2020b) interpolate pairs of anchor embeddings as well as their (virtual) class labels linearly. there is only one positive, while all negatives are clean, so they cannot take advantage of interpolation for relative weighting of positives/negatives per anchor (wang et al., 2019). by contrast, metrix is developed for deep metric learning and applies to a large class of both pair-based and proxy-based losses. it can interpolate inputs, intermediate features or embeddings of anchors, (multiple) positives or negatives and the corresponding two-class (positive/negative) labels per anchor, such that the relative weighting of positives/negatives depends on the interpolation. mixup for metric learning
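the two-class label interpolation introduced earlier (a binary positive/negative label per pair, interpolated linearly as in standard mixup) can be sketched in a few lines; this is a minimal illustration with plain python lists and names of our choosing, not the paper's implementation:

```python
def mixup_pair(positive, negative, lam):
    """mix a clean positive and a clean negative example (feature
    vectors) for a given anchor, and interpolate the two-class
    pair label linearly: 1.0 = positive, 0.0 = negative."""
    mixed = [lam * p + (1 - lam) * n for p, n in zip(positive, negative)]
    label = lam * 1.0 + (1 - lam) * 0.0   # the "positivity" of the mixed pair
    return mixed, label
```

the interpolated label is exactly the interpolation factor λ, which is what allows the mixed pair to contribute a λ-weighted positive term and a (1 − λ)-weighted negative term to a pairwise loss.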
recon: reducing conflicting gradients from the root for multi-task learning guangyuan shi, qimai li, wenlong zhang, jiaxin chen, xiao-ming wu department of computing, the hong kong polytechnic university, hong kong s.a.r., china {guang-yuan.shi, qee-mai.li, wenlong.zhang}@connect.polyu.hk, jiax.chen@connect.polyu.hk, xiao-ming.wu@polyu.edu.hk abstract a fundamental challenge for multi-task learning is that different tasks may conflict with each other when they are solved jointly, and a cause of this phenomenon is conflicting gradients during optimization. recent works attempt to mitigate the influence of conflicting gradients by directly altering the gradients based on some criteria. however, our empirical study shows that “gradient surgery” cannot effectively reduce the occurrence of conflicting gradients. in this paper, we take a different approach to reduce conflicting gradients from the root. in essence, we investigate the task gradients w.r.t. each shared network layer, select the layers with high conflict scores, and turn them to task-specific layers. our experiments show that such a simple approach can greatly reduce the occurrence of conflicting gradients in the remaining shared layers and achieve better performance, with only a slight increase in model parameters in many cases. our approach can be easily applied to improve various state-of-the-art methods including gradient manipulation methods and branched architecture search methods. given a network architecture (e.g., resnet18), it only needs to search for the conflict layers once, and the network can be modified to be used with different methods on the same or even different datasets to gain performance improvement. the source code is available at https://github.com/moukamisama/recon.
introduction multi-task learning (mtl) is a learning paradigm in which multiple different but correlated tasks are jointly trained with a shared model (caruana, 1997), in the hope of achieving better performance with an overall smaller model size than learning each task independently. by discovering shared structures across tasks and leveraging domain-specific training signals of related tasks, mtl can achieve efficiency and effectiveness. indeed, mtl has been successfully applied in many domains including natural language processing (hashimoto et al., 2017), reinforcement learning (parisotto et al., 2016; d’eramo et al., 2020) and computer vision (vandenhende et al., 2021). a major challenge for multi-task learning is negative transfer (ruder, 2017), which refers to the performance drop on a task caused by the learning of other tasks, resulting in worse overall performance than learning them separately. this is caused by task conflicts, i.e., tasks compete with each other and unrelated information of individual tasks may impede the learning of common structures. from the optimization point of view, a cause of negative transfer is conflicting gradients (yu et al., 2020), which refers to two task gradients pointing away from each other, so that the update for one task has a negative effect on the other. conflicting gradients make it difficult to optimize the multi-task objective, since task gradients with larger magnitude may dominate the update vector, making the optimizer prioritize some tasks over others and struggle to converge to a desirable solution. prior works address task/gradient conflicts mainly by balancing the tasks via task reweighting or gradient manipulation. task reweighting methods adaptively re-weight the loss functions by homoscedastic uncertainty (kendall et al., 2018), balancing the pace at which tasks are learned (chen et al., 2018; liu et al., 2019), or learning a loss weight parameter (liu et al., 2021b).
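the two notions above can be made concrete in a few lines: two task gradients conflict when their cosine similarity is negative, and a per-layer conflict score can then rank the shared layers. the sketch below is a simplified plain-python rendering; the conflict score used here (fraction of conflicting task pairs per layer) is an illustrative stand-in for the paper's exact score, and all names are ours:

```python
import math

def cos_phi(g_i, g_j):
    """cosine of the angle between two (flattened) task gradients."""
    dot = sum(a * b for a, b in zip(g_i, g_j))
    norms = math.sqrt(sum(a * a for a in g_i)) * math.sqrt(sum(b * b for b in g_j))
    return dot / norms

def layer_conflict_scores(layer_grads):
    """layer_grads: {layer_name: [grad_task1, grad_task2, ...]}.
    score = fraction of task pairs whose gradients conflict (cos < 0)."""
    scores = {}
    for name, grads in layer_grads.items():
        pairs, conflicts = 0, 0
        for i in range(len(grads)):
            for j in range(i + 1, len(grads)):
                pairs += 1
                if cos_phi(grads[i], grads[j]) < 0.0:
                    conflicts += 1
        scores[name] = conflicts / pairs if pairs else 0.0
    return scores

def select_conflict_layers(layer_grads, k):
    """the k shared layers with the highest conflict scores: these are
    the candidates to be turned into task-specific layers."""
    scores = layer_conflict_scores(layer_grads)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```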
gradient manipulation methods reduce the influence of conflicting gradients by directly altering the gradients based on different criteria (sener & koltun, 2018; yu et al., 2020; chen et al., 2020; liu et al., 2021a) or rotating the shared features (javaloy & valera, 2022). while these methods have demonstrated effectiveness in different scenarios, in our empirical study, we find that they cannot reduce the occurrence of conflicting gradients (see sec. 3.3 for more discussion). we propose a different approach to reduce conflicting gradients for mtl. specifically, we investigate layer-wise conflicting gradients, i.e., the task gradients w.r.t. each shared network layer. we first train the network with a regular mtl algorithm (e.g., joint-training) for a number of iterations, compute the conflict scores for all shared layers, and select those with the highest conflict scores (indicating severe conflicts). we then make the selected shared layers task-specific and train the modified network from scratch. as demonstrated by comprehensive experiments and analysis, our simple approach recon has the following key advantages: (1) recon can greatly reduce conflicting gradients with only a slight increase in model parameters (less than 1% in some cases) and lead to significantly better performance. (2) recon can be easily applied to improve various gradient manipulation methods and branched architecture search methods. given a network architecture, it only needs to search for the conflict layers once, and the network can be modified to be used with different methods and even on different datasets to gain performance improvement. (3) recon can achieve better performance than branched architecture search methods with a much smaller model. related works in this section, we briefly review related works in multi-task learning in four categories: tasks clustering, architecture design, architecture search, and task balancing.
tasks clustering methods mainly focus on identifying which tasks should be learned together (thrun & o’sullivan, 1996; zamir et al., 2018; standley et al., 2020; shen et al., 2021; fifty et al., 2021). architecture design methods include hard parameter sharing methods (kokkinos, 2017; long et al., 2017; bragman et al., 2019), which learn a shared feature extractor and task-specific decoders, and soft parameter sharing methods (misra et al., 2016; ruder et al., 2019; gao et al., 2019; 2020; liu et al., 2019), where some parameters of each task are assigned to perform cross-task talk via a sharing mechanism. compared with soft parameter sharing methods, our approach recon has much better scalability when dealing with a large number of tasks. instead of designing a fixed network structure, some methods (rosenbaum et al., 2018; meyerson & miikkulainen, 2018; yang et al., 2020) propose to dynamically self-organize the network for different tasks. among them, branched architecture search methods (guo et al., 2020; bruggemann et al., 2020) are more related to our work. they propose an automated architecture search algorithm to build a tree-structured network by learning where to branch. in contrast, our method recon decides which layers to be shared across tasks by considering the severity of layer-wise conflicting gradients, resulting in a more compact architecture with lower time cost and better performance. another line of research is task balancing methods. to address task/gradient conflicts, some methods attempt to re-weight the multi-task loss function using homoscedastic uncertainty (kendall et al., 2018), task prioritization (guo et al., 2018), or similar learning pace (liu et al., 2019; 2021b). gradnorm (chen et al., 2018) learns task weights by dynamically tuning gradient magnitudes. mgda (sener & koltun, 2018) finds the weights by minimizing the norm of the weighted sum of task gradients.
to reduce the influence of conflicting gradients, pcgrad (yu et al., 2020) projects each gradient onto the normal plane of another gradient and uses the average of projected gradients for the update. graddrop (chen et al., 2020) randomly drops some elements of gradients based on element-wise conflict. cagrad (liu et al., 2021a) ensures convergence to a minimum of the average loss across tasks by gradient manipulation. rotograd (javaloy & valera, 2022) re-weights task gradients and rotates the shared feature space. instead of manipulating gradients, our method recon leverages gradient information to modify the network structure, mitigating task conflicts from the root. pilot study: task conflicts in multi-task learning multi-task learning: problem definition multi-task learning (mtl) aims to learn a set of correlated tasks $\{T_i\}_{i=1}^{T}$ simultaneously. for each task $T_i$, the empirical loss function is $L_i(\theta_{sh}, \theta_i)$, where $\theta_{sh}$ are parameters shared among all tasks and $\theta_i$ are task-specific parameters. (figure 1: the distributions of gradient conflicts (in terms of $\cos\varphi_{ij}$) of the joint-training baseline and state-of-the-art gradient manipulation methods on the multi-fashion+mnist benchmark.) the goal is to find optimal parameters $\theta = \{\theta_{sh}, \theta_1, \theta_2, \cdots, \theta_T\}$ to achieve high performance across all tasks. formally, it aims to minimize a multi-task objective: $\theta^* = \arg\min_{\theta} \sum_{i=1}^{T} w_i L_i(\theta_{sh}, \theta_i)$, where $w_i$ are pre-defined or dynamically computed weights for different tasks. a popular choice is to use the average loss (i.e., equal weights). however, optimizing the multi-task objective is difficult, and a known cause is conflicting gradients. conflicting gradients
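as a point of comparison for the "gradient surgery" family discussed above, the core projection step of pcgrad (yu et al., 2020) can be sketched as follows; this is a simplified plain-python rendering of the published procedure, not the authors' implementation:

```python
import random

def pcgrad(grads):
    """pcgrad-style gradient surgery, simplified: if a task gradient
    conflicts with another task's gradient (negative dot product),
    project it onto that gradient's normal plane; the update direction
    is the mean of the surgered gradients."""
    surgered = [list(g) for g in grads]
    for i, g in enumerate(surgered):
        others = [j for j in range(len(grads)) if j != i]
        random.shuffle(others)            # random task order, as in the paper
        for j in others:
            gj = grads[j]
            dot = sum(a * b for a, b in zip(g, gj))
            if dot < 0:                   # conflict: remove the conflicting component
                sq = sum(b * b for b in gj)
                g[:] = [a - (dot / sq) * b for a, b in zip(g, gj)]
    n = len(surgered)
    return [sum(col) / n for col in zip(*surgered)]
```

note that the surgery changes the update direction but leaves the underlying task gradients (and hence future conflicts) untouched, which is the behavior the pilot study examines.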
learning hyperbolic representations of topological features panagiotis kyriakis university of southern california los angeles, usa pkyriaki@usc.edu iordanis fostiropoulos university of southern california los angeles, usa fostirop@usc.edu paul bogdan university of southern california los angeles, usa pbogdan@usc.edu abstract learning task-specific representations of persistence diagrams is an important problem in topological data analysis and machine learning. however, current methods are restricted in terms of their expressivity as they are focused on euclidean representations. persistence diagrams often contain features of infinite persistence (i.e., essential features) and euclidean spaces shrink their importance relative to non-essential features because they cannot assign infinite distance to finite points. to deal with this issue, we propose a method to learn representations of persistence diagrams on hyperbolic spaces, more specifically on the poincare ball. by representing features of infinite persistence infinitesimally close to the boundary of the ball, their distance to non-essential features approaches infinity, thereby their relative importance is preserved. this is achieved without utilizing extremely high values for the learnable parameters, thus, the representation can be fed into downstream optimization methods and trained efficiently in an end-to-end fashion. we present experimental results on graph and image classification tasks and show that the performance of our method is on par with or exceeds the performance of other state of the art methods. introduction persistent homology is a topological data analysis tool which tracks how topological features (e.g. connected components, cycles, cavities) appear and disappear as we analyze the data at different scales or in nested sequences of subspaces (1; 2). a nested sequence of subspaces is known as a filtration.
as an informal example of a filtration consider an image of variable brightness. as the brightness is increased, certain features (edges, texture) may become less or more prevalent. the birth of a topological feature refers to the "time" (i.e., the brightness value) when it appears in the filtration and the death refers to the "time" when it disappears. the lifespan of the feature is called persistence. persistent homology summarizes these topological characteristics in a form of multiset called persistence diagram, which is a highly robust and versatile descriptor of the data. persistence diagrams enjoy the stability property, which ensures that the diagrams of two similar objects are similar (3). additionally, under some assumptions, one can approximately reconstruct the input space from a diagram (which is known as solving the inverse problem) (4). however, despite their strengths, the space of persistence diagrams lacks structure as basic operations, such as addition and scalar multiplication, are not well defined. the only imposed structure is induced by the bottleneck and wasserstein metrics, which are notoriously hard to compute, thereby preventing us from leveraging them for machine learning tasks. related work. to address these issues, several vectorization methods have been proposed. some of the earliest approaches are based on kernels, i.e., generalized products that turn persistence diagrams into elements of a hilbert space. kusano et al. (5) propose a persistence weighted gaussian kernel which allows them to explicitly control the effect of persistence. alternatively, carrière et al. (6) leverage the sliced wasserstein distance to define a kernel that mimics the distance between diagrams. the approaches by bubenik (7) based on persistent landscapes, by reininghaus et al. (8) based on scale space theory and by le et al. (9) based on the fisher information metric are along the same line of work. 
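for intuition, 0-dimensional persistence of a sublevel-set filtration (the brightness example above, reduced to one dimension) can be computed with a single union-find pass: components are born at local minima and die when they merge at a higher value, with the younger component dying (the elder rule). this sketch handles only a function sampled on a line graph, which is far simpler than the general simplicial setting used in the paper:

```python
def sublevel_persistence_0d(values):
    """0-dimensional persistence pairs for the sublevel-set filtration
    of a function sampled on a line graph (assumes a nonempty input)."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    birth = {}          # root index -> birth value of its component
    pairs = []          # finite (birth, death) pairs
    processed = set()
    for i in sorted(range(n), key=lambda k: (values[k], k)):
        processed.add(i)
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if j in processed:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                # elder rule: the component with the larger birth dies here
                old, young = (ri, rj) if birth[ri] <= birth[rj] else (rj, ri)
                if birth[young] < values[i]:      # skip zero-persistence pairs
                    pairs.append((birth[young], values[i]))
                parent[young] = old
    pairs.append((min(values), float("inf")))      # essential feature: never dies
    return sorted(pairs)
```

the component born at the global minimum never dies; it is exactly the kind of essential feature the rest of the paper is concerned with.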
the major drawback in utilizing kernel methods is that they suffer from scalability issues as the training scales poorly with the number of samples. in another line of work, researchers have constructed finite-dimensional embeddings, i.e., transformations turning persistence diagrams into vectors in a euclidean space. adams et al. (10) map the diagrams to persistence images and discretize them to obtain the embedding vector. carrière et al. (11) develop a stable vectorization method by computing pairwise distances between points in the persistence diagram. an approach based on interpreting the points in the diagram as roots of a complex polynomial is presented by di fabio (12). adcock et al. (13) identify an algebra of polynomials on the diagram space that can be used as coordinates and the approach is extended by kališnik in (14) to tropical functions which guarantee stability. the common drawback of these embeddings is that the representation is pre-defined, i.e., there exist no learnable parameters, therefore, it is agnostic to the specific learning task. this is clearly sub-optimal as the eminent success of deep learning has demonstrated that it is preferable to learn the representation. the more recent approaches aim at learning the representation of the persistence diagram in an end-to-end fashion. hofer et al. (15) present the first input layer based on a parameterized family of gaussian-like functionals, with the mean and variance learned during training. they extend their method in (16) allowing for a broader class of parameterized function families to be considered. it is quite common to have topological features of infinite persistence (1), i.e., features that never die. such features are called essential and in practice are usually assigned a death time equal to the maximum filtration value. this may restrict their expressivity because it shrinks their importance relative to non-essential features. 
while we may be able to increase the scale sufficiently high and end up having only one trivial essential feature (i.e., the 0-th order persistent homology group that becomes a single connected component at a scale that is sufficiently large), the resulting persistence diagrams may not be the ones that best summarize the data in terms of performance on the underlying learning task. this is evident in the work by hofer et al. (15) where the authors showed that essential features offer discriminative power. the work by carrière et al. (17), which introduces a network input layer that encompasses several vectorization methods, emphasizes the importance of essential features and is the first one to introduce a deep learning method incorporating extended persistence as a way to deal with them. in this paper, we approach the issue of essential features from the geometric viewpoint. we are motivated by the recent success of hyperbolic geometry and the interest in extending machine learning models to hyperbolic spaces or general manifolds. we refer the reader to the review paper by bronstein et al. (18) for an overview of geometric deep learning. here, we review the most relevant and pivotal contributions in the field. nickel et al. (19; 20) propose poincaré and lorentz embeddings for learning hierarchical representations of symbolic data and show that the representational capacity and generalization ability outperform euclidean embeddings. sala et al. (21) propose low-dimensional hyperbolic embeddings of hierarchical data and show competitive performance on wordnet. ganea et al. (22) generalize neural networks to the hyperbolic space and show that hyperbolic sentence embeddings outperform their euclidean counterparts on a range of tasks. gulcehre et al. (23) introduce hyperbolic attention networks which show improvements in terms of generalization on machine translation and graph learning while keeping a compact representation.
in the context of graph representation learning, hyperbolic graph neural networks (24) and hyperbolic graph convolutional neural networks (25) have been developed and shown to lead to improvements on various benchmarks. however, despite this success of geometric deep learning, little work has been done in applying these methods to topological features, such as persistence diagrams. the main contribution of this paper is to bridge the gap between topological data analysis and hyperbolic representation learning. we introduce a method to represent persistence diagrams on a hyperbolic space, more specifically on the poincare ball. we define a learnable parameterization of the poincare ball and leverage the vectorial structure of the tangent space to combine (in a manifold-preserving manner) the representations of individual points of the persistence diagram. our method learns better task-specific representations than the state of the art because it does not shrink the relative importance of essential features. in fact, by allowing the representations of essential features to get infinitesimally close to the boundary of the poincare ball, their distance to the representations of non-essential features approaches infinity, therefore preserving their relative importance. to the best of our knowledge, this is the first approach for learning representations of persistence diagrams in non-euclidean spaces. background in this section, we provide a brief overview of persistent homology leading up to the definition of persistence diagrams. we refer the interested reader to the papers by edelsbrunner et al. (1; 2) for a detailed overview of persistent homology. an overview of homology can be found in the appendix. persistent homology. let k be a simplicial complex. a filtration of k is a nested sequence of subcomplexes that starts with the empty complex and ends with k, $\emptyset = K_0 \subseteq K_1 \subseteq \ldots \subseteq K_d = K$.
a typical way to construct a filtration is to consider sublevel sets of a real valued function, f : k → r. let $a_1 < \cdots < a_d$ be a sorted sequence of the values of f(k). then, we obtain a filtration by setting $K_i = f^{-1}((-\infty, a_i])$ for 1 ≤ i ≤ d. we can apply simplicial homology to each of the subcomplexes of the filtration. when 0 ≤ i ≤ j ≤ d, the inclusion $K_i \subseteq K_j$ induces a homomorphism $f_n^{i,j} : H_n(K_i) \to H_n(K_j)$ on the simplicial homology groups for each homology dimension n. we call the image of $f_n^{i,j}$ an n-th persistent homology group; it consists of homology classes born before i that are still alive at j. a homology class α is born at $K_i$ if it is not in the image of the map induced by the inclusion $K_{i-1} \subseteq K_i$. furthermore, if α is born at $K_i$, it dies entering $K_j$ if the image of the map induced by $K_{i-1} \subseteq K_{j-1}$ does not contain the image of α but the image of the map induced by $K_{i-1} \subseteq K_j$ does. the persistence of the homology class α is j − i. since classes may be born at the same i and die at the same j, we can use inclusion-exclusion to determine the multiplicity of each (i, j): $\mu_n^{i,j} = \beta_n^{i,j-1} - \beta_n^{i,j} - \beta_n^{i-1,j-1} + \beta_n^{i-1,j}$, where the n-th persistent betti numbers $\beta_n^{i,j} = \operatorname{rank}(\operatorname{im}(f_n^{i,j}))$ are the ranks of the images of the n-th persistent homology groups, and capture the number of n-dimensional topological features that persist from i to j. by setting $\mu_n^{i,\infty} = \beta_n^{i,d} - \beta_n^{i-1,d}$ we can account for features that still persist at the end of the filtration (j = d), which are known as essential features. persistence diagrams. persistence diagrams are multisets supported by the upper diagonal part of the real plane and capture the birth/death of topological features (i.e., homology classes) across the filtration. definition 2.1 (persistence diagram). let $\Delta = \{x \in \mathbb{R}_\Delta : \operatorname{mult}(x) = \infty\}$ be the multiset of the diagonal $\mathbb{R}_\Delta = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 = x_2\}$, where mult(·) denotes the multiplicity function, and let $\mathbb{R}^2_* = \{(x_1, x_2) \in \mathbb{R} \times (\mathbb{R} \cup \{\infty\}) : x_2 > x_1\}$.
also, let n be a homology dimension and consider the sublevel set filtration induced by a function f : k → r over the complex k. then, a persistence diagram, $D_n(f)$, is a multiset of the form $D_n(f) = \{x : x \in \mathbb{R}^2_*\} \cup \Delta$, constructed by inserting each point $(a_i, a_j)$ for i < j with multiplicity $\mu_n^{i,j}$ (or $\mu_n^{i,\infty}$ if it is an essential feature). we denote the space of all persistence diagrams with $\mathcal{D}$. definition 2.2 (wasserstein distance and stability). let $D_n(f)$, $E_n(g)$ be two persistence diagrams generated by the filtration induced by the functions f, g : k → r, respectively. we define the wasserstein distance $W_p^q(D_n(f), E_n(g)) = \left[\inf_\eta \sum_{x \in D_n(f)} \lVert x - \eta(x) \rVert_q^p\right]^{1/p}$, where p, q ∈ n and the infimum is taken over all bijections $\eta : D_n(f) \to E_n(g)$. the special case p = ∞ is known as the bottleneck distance. the persistence diagrams are stable with respect to the wasserstein distance if and only if $W_p^q(D_n(f), E_n(g)) \le \lVert f - g \rVert_\infty$. note that a bijection η between persistence diagrams is guaranteed to exist because their cardinalities are equal, considering that, as per def. 2.1, the points on the diagonal are added with infinite multiplicity. the strength of persistent homology stems from the above stability definition, which essentially states that the map taking a sublevel function to the persistence diagram is lipschitz continuous. this implies that if two objects are similar then their persistence diagrams are close. figure 1: illustration of our method: initially, the points of the persistence diagram $D_n(f)$ are transferred via the auxiliary transformation ρ and the parameterization φ to the poincare ball b, where learnable parameters θ are added. then, the logarithmic map is used for transforming the points to the tangent space $T_{x_0}B$. finally, the resulting vectors are added and transformed back to the manifold via the exponential map.
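the wasserstein distance of def. 2.2 is hard to compute in general, but for tiny diagrams it can be evaluated by brute force; the sketch below computes $W_1$ with the q = 2 norm by augmenting each diagram with the diagonal projections of the other's points and minimising over all bijections (feasible only for a handful of points, and illustrative rather than the method used in the paper):

```python
import itertools
import math

def wasserstein_1(d1, d2):
    """W_1 between two small finite persistence diagrams (lists of
    (birth, death) tuples). unmatched points may be transported to
    their orthogonal projection onto the diagonal."""
    proj = lambda p: ((p[0] + p[1]) / 2, (p[0] + p[1]) / 2)
    a = list(d1) + [proj(p) for p in d2]   # augment with the other's projections
    b = list(d2) + [proj(p) for p in d1]
    best = math.inf
    for perm in itertools.permutations(range(len(b))):
        cost = sum(math.dist(a[i], b[k]) for i, k in enumerate(perm))
        best = min(best, cost)
    return best
```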
note that the persistence diagram is mapped to a single point on the poincare ball (i.e., φ(d, θ) ∈ b). persistent poincare representations in this section, we introduce our method (fig. 1) for learning representations of persistence diagrams on the poincare ball. we refer the reader to the appendix for some fundamental concepts of differential geometry. the poincare ball is an m-dimensional manifold $(B, g^B_x)$, where $B = \{x \in \mathbb{R}^m : \lVert x \rVert < 1\}$ is the open unit ball. the space in which the ball is embedded is called the ambient space and is assumed to be equal to $\mathbb{R}^m$. the poincare ball is conformal (i.e., angle-preserving) to the euclidean space but it does not preserve distances. the metric tensor and distance function are as follows: $g^B_x = \lambda_x^2 g^E$ with $\lambda_x = \frac{2}{1 - \lVert x \rVert^2}$, and $d_B(x, y) = \operatorname{arccosh}\left(1 + 2\,\frac{\lVert x - y \rVert^2}{(1 - \lVert x \rVert^2)(1 - \lVert y \rVert^2)}\right)$, where $g^E = I_m$ is the euclidean metric tensor. eq. 6 highlights the benefit of using the poincare ball for representing persistence diagrams. contrary to euclidean spaces, distances in the poincare ball can approach infinity for finite points. this space is ideal for representing essential features appearing in persistence diagrams without squashing their importance relative to non-essential features. informally, this is achieved by allowing the representations of the former to get infinitesimally close to the boundary, so that their distances to the latter approach infinity. fig. 2 provides an illustration. we gradually construct our representation through a composition of 3 individual transformations. the first step is to transfer the points to the ambient space (i.e., $\mathbb{R}^m$) of the poincare ball. let d be a persistence diagram. we introduce the following auxiliary transformation $\rho : \mathbb{R}^2_* \to \mathbb{R}^m$ (eq. 7). this auxiliary transformation is essentially a high-dimensional embedding and may contain learnable parameters.
nonetheless, our main focus is to learn a hyperbolic representation and, therefore, we assume that ρ is not learnable (throughout, the sublevel set function f and the homology dimension n are omitted from the notation). later in this section, we analyze conditions on ρ to guarantee the stability and expressiveness of the hyperbolic representation. the second step is to transform the embedded points from the ambient space to the poincare ball. when referring to points on a manifold, it is important to define a coordinate system. a homeomorphism ψ : b → rm is called a coordinate chart and gives the local coordinates on the manifold. the inverse map φ : rm → b is called a parameterization of b and gives the ambient coordinates. the main idea is to inject learnable parameters into this parameterization. the injected parameters could be any form of differentiable functional that preserves the homeomorphic property. differentiability is needed so that our representation can be fed to downstream optimization methods. in our construction, we utilize a variant of the generalized spherical coordinates. let θ ∈ rm be a vector of m parameters. we define the learnable parameterization φ : rm × rm → b as follows: $y_1 = \cdots \arctan(\theta_1 r_1)$ and $y_i = \theta_i + \arccos(\cdots)$, for $i = 2, 3, \ldots, m$, where $r_i^2 = \cdots + \epsilon$. the small positive constant ε is added to ensure that the denominator in eq. 8 is not zero. intuitively, eq. 8 corresponds to scaling the radius of the point by a factor θ1 and rotating it by θi radians across the angular axes. the scaling and rotation parameters are learned during training. note that the form of y1 ensures that the representation belongs to the unit ball for all values of θ1. the coordinate chart is not explicitly used in our representation; it is provided in the appendix for the sake of completeness. the third step is to combine the representations of each individual point of the persistence diagram into a single point in the hyperbolic space.
typically, in euclidean spaces, this is done by concatenating or adding the corresponding representations. however, in non-euclidean spaces such operations are not manifold-preserving. therefore, we transform the points from the manifold to the tangent space, combine the vectors via standard vectorial addition and transform the resulting vector back to the manifold. this approach is based on the exponential and logarithmic maps $\exp_x : T_x B \to B$ and $\log_x : B \to T_x B$. the exponential map allows us to transform a vector from the tangent space to the manifold and its inverse (i.e., the logarithmic map) from the manifold to the tangent space. for a general manifold, it is hard to find these maps as we need to solve for the minimal geodesic curve (see appendix for more details). fortunately, for the poincare ball case, they have analytical expressions, given as follows: $\exp_x(v) = x \oplus \left(\tanh\!\left(\frac{\lambda_x \lVert v \rVert}{2}\right)\frac{v}{\lVert v \rVert}\right)$ and $\log_x(y) = \frac{2}{\lambda_x}\operatorname{artanh}(\lVert -x \oplus y \rVert)\frac{-x \oplus y}{\lVert -x \oplus y \rVert}$, where ⊕ denotes the möbius addition, which is a manifold-preserving operator (i.e., for any x, y ∈ b =⇒ x ⊕ y ∈ b). the analytical expression is given in the appendix. the transformations given by these maps are norm-preserving, i.e., for example, the geodesic distance from x to the transformed point $\exp_x(v)$ coincides with the metric norm $\lVert v \rVert_g$ induced by the metric tensor $g^B_x$. this is an important property as we need the distance between points (and therefore the relative importance of topological features) to be preserved when transforming to and from the tangent space. we now combine the aforementioned transformations and define the poincare hyperbolic representation followed by its stability theorem. definition 3.1 (poincare representation). let $D \in \mathcal{D}$ be the persistence diagram to be represented in an m-dimensional poincare ball $(B, g^B_x)$ embedded in $\mathbb{R}^m$, and let $x_0 \in B$ be a given point.
the representation of d on the manifold b is defined as follows: $\Phi : \mathcal{D} \times \mathbb{R}^m \to B$, $\Phi(D, \theta) = \exp_{x_0}\!\left(\sum_{x \in D} \log_{x_0}\!\big(\phi(\rho(x))\big)\right)$, where the exponential and logarithmic maps are given by eq. 10 and the learnable parameterization and the auxiliary transformation by eq. 8 and eq. 7, respectively. theorem 1 (stability of hyperbolic representation). let d, e be two persistence diagrams and consider an auxiliary transformation $\rho : \mathbb{R}^2_* \to \mathbb{R}^m$ that is
• lipschitz continuous w.r.t. the induced metric norm $\lVert \cdot \rVert_g$,
• ρ(x) = 0 for all $x \in \mathbb{R}_\Delta$.
additionally, assume that $x_0 = 0$. then, the hyperbolic representation given by eq. 11 is stable w.r.t. the wasserstein distance when p = 1, i.e., there exists a constant k > 0 such that $d_B(\Phi(D, \theta), \Phi(E, \theta)) \le K\, W_1^g(D, E)$, where $d_B$ is the geodesic distance and $W_1^g$ is the wasserstein metric with the q-norm replaced by the induced norm $\lVert \cdot \rVert_g$ (i.e., the norm induced by the metric tensor g, see appendix a.2). figure 2: left: example graph from the imdb-binary dataset. middle: persistence diagrams extracted using the vietoris-rips filtration. the dashed line denotes features of infinite persistence, which are represented by points of maximal death value equal to 90 (i.e., by points of finite persistence). right: equivalent representation on the 2-dimensional poincare ball. features of infinite persistence are mapped infinitesimally close to the boundary. therefore, their distance to finite persistence features approaches infinity (d ∼ ε⁻²). the proof of theorem 1 (given in the appendix) results from a general stability theorem (3) and is on par with similar results for other vectorizations (10) or representations (15) of persistence diagrams. one subtle difference is that theorem 1 uses the induced norm rather than the q-norm appearing in the wasserstein distance.
however, since the induced norm implicitly depends on the chosen point x_0, which, per the requirements of theorem 1, is assumed to be equal to the origin, there is no substantial difference. the fact that we require the auxiliary transformation ρ to be zero on the diagonal is important to theoretically guarantee stability. intuitively, this can be understood by recalling (def. 2.1) that all (infinitely many) points on the diagonal are included in the persistence diagram. by mapping the diagonal to zero and taking x_0 = 0, we ensure that the summation in eq. 11 collapses to zero when summing over the diagonal. finally, we note that the assumptions of theorem 1 are not restrictive. in fact, we can easily find lipschitz continuous transformations that are zero on the diagonal R_∆, such as the exponential and rational transformations proposed by hofer et al. (15). additionally, we note that the assumptions of theorem 1 do not prohibit us from choosing an "under-powered" or degenerate ρ. for example, ρ = 0 satisfies the assumptions and therefore leads to a stable representation; however, such a representation is obviously not useful for learning tasks. an implicit requirement, which guarantees not only the stability but also the expressiveness of the resulting representation, is that ρ does not incur any information loss. this requirement is satisfied by picking a ρ that is injective, which, given that ρ is a higher-dimensional embedding, is an easy condition to satisfy. in practice, we use a variant of the exponential transformation by hofer et al. (15); the exact expression is given in the appendix. experiments we present experiments on diverse datasets, focusing on persistence diagrams extracted from graphs and grey-scale images. the learning task is classification. our representation acts as an input to a neural network and the parameters are learned end-to-end via standard gradient methods. the architecture as well as other training details are discussed in the appendix.
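to make eq. 10 and the aggregation of definition 3.1 concrete, here is a minimal numpy sketch (curvature c = 1). the möbius addition and exp/log maps follow the standard poincaré-ball formulas; `rho` and the squashing step that places embeddings inside the ball are hypothetical stand-ins for the learnable parameterization (eq. 8) and auxiliary transformation (eq. 7), not the paper's exact choices:

```python
import numpy as np

def mobius_add(x, y):
    # möbius addition on the unit poincaré ball (manifold-preserving)
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    return ((1 + 2 * xy + y2) * x + (1 - x2) * y) / (1 + 2 * xy + x2 * y2)

def lam(x):
    # conformal factor lambda_x = 2 / (1 - ||x||^2)
    return 2.0 / (1.0 - np.dot(x, x))

def exp_map(x, v):
    # exp_x : T_x B -> B (eq. 10, left)
    n = np.linalg.norm(v)
    return x.copy() if n < 1e-12 else mobius_add(x, np.tanh(lam(x) * n / 2.0) * v / n)

def log_map(x, y):
    # log_x : B -> T_x B, the inverse map (eq. 10, right)
    u = mobius_add(-x, y)
    n = np.linalg.norm(u)
    return np.zeros_like(x) if n < 1e-12 else (2.0 / lam(x)) * np.arctanh(n) * u / n

def rho(point, W):
    # hypothetical auxiliary transform R^2 -> R^m; scaling by the
    # persistence (d - b) makes it vanish on the diagonal, as theorem 1 requires
    b, d = point
    return (d - b) * (W @ np.array([b, d]))

def represent(diagram, W, x0):
    # eq. 11: sum the log-mapped embeddings in the tangent space at x0,
    # then map the result back onto the ball
    total = np.zeros(W.shape[0])
    for pt in diagram:
        v = rho(pt, W)
        y = 0.5 * v / (1.0 + np.linalg.norm(v))  # squash into the open ball
        total += log_map(x0, y)
    return exp_map(x0, total)
```

with x0 = 0 (the setting assumed by theorem 1), adding a diagonal point (t, t) leaves the representation unchanged, matching the discussion above, and log_map(x, exp_map(x, v)) recovers v, confirming the two maps are inverses.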
the code to reproduce our experiments is publicly available at https://github.com/pkyriakis/permanifold/. ablation study: to highlight to what extent our results are driven by the hyperbolic embedding, we perform an ablation study. in more detail, we consider three variants of our method: 1. persistent poincaré (p-poinc): this is the original method as presented in sec. 3. 2. persistent hybrid (p-hybrid): same as p-poinc with the poincaré ball replaced by the euclidean space. this implies that the exponential and logarithmic maps (eq. 10) reduce to plain vector addition and subtraction, i.e., exp_x(v) = x + v and log_x(y) = y − x. the learnable parameterization is as in eq. 8. 3. persistent euclidean (p-eucl): same as p-hybrid with eq. 8 replaced by simple addition of the learnable parameters, i.e., y = x + θ. baseline - essential features separation: to highlight the benefit of a unified poincaré representation, we design a natural baseline that treats essential and non-essential features separately. in more detail, for each point (b, d) ∈ D, we calculate its persistence d − b and then compute the histogram of the resulting persistence values. for essential features, we compute the histogram of their birth times. then, we concatenate those histograms and feed them as input to the neural network (architecture described in the appendix). we consider the case where the essential features are included (baseline w/ essential) and the case where they are discarded (baseline w/o essential). manifold dimension and projection bases: since our method essentially represents each persistence diagram on an m-dimensional poincaré ball, it may introduce substantial information compression when the number of points in a diagram is not of the same order as m. a trivial approach to counteract this issue is to use a high value for m.
however, we experimentally observed that a high manifold dimension does not give the optimal classification performance, and it adds computational overhead in the construction of the computation graph. empirically, the best approach is to keep m at moderate values (in the range m = 3 to m = 12), replicate the representation k times, and concatenate the outputs. each replica is called a projection base, and for their number we explored values dependent on the number of points in the persistence diagram. persistence diagrams obtained from images tend to have substantially fewer points than diagrams obtained from graphs. therefore, for images we explored moderate values for k, i.e., 5-10, whereas for graphs we increased k to the range 200-500. essentially, we treat both m and k as hyper-parameters, explore their space following the aforementioned empirical rules, and pick the optimum via the validation dataset. as we increase m, it is usually prudent to decrease k to maintain similar model capacity. graph classification
under review as a conference paper at iclr 2021 the importance of pessimism in fixed-dataset policy optimization anonymous authors paper under double-blind review abstract we study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. this analysis reveals that for naïve approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. to avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. we show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. these theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four minatar environments. introduction we consider fixed-dataset policy optimization (fdpo), in which a dataset of transitions from an environment is used to find a policy with high return.1 we compare fdpo algorithms by their worst-case performance, expressed as high-probability guarantees on the suboptimality of the learned policy. it is perhaps obvious that in order to maximize worst-case performance, a good fdpo algorithm should select a policy with high worst-case value. we call this the pessimism principle of exploitation, as it is analogous to the widely-known optimism principle (lattimore & szepesvári, 2020) of exploration.2 our main contribution is a theoretical justification of the pessimism principle in fdpo, based on a bound that characterizes the suboptimality incurred by an fdpo algorithm.
we further demonstrate how this bound may be used to derive principled algorithms. note that the core novelty of our work is not the idea of pessimism, which is an intuitive concept that appears in a variety of contexts; rather, our contribution is a set of theoretical results rigorously explaining how pessimism is important in the specific setting of fdpo. an example conveying the intuition behind our results can be found in appendix g.1. we first analyze a family of non-pessimistic naïve fdpo algorithms, which estimate the environment from the dataset via maximum likelihood and then apply standard dynamic programming techniques. we prove a bound which shows that the worst-case suboptimality of these algorithms is guaranteed to be small when the dataset contains enough data that we are certain about the value of every possible policy. this is caused by the outsized impact of value overestimation errors on suboptimality, sometimes called the optimizer's curse (smith & winkler, 2006). it is a fundamental consequence of ignoring the disconnect between the true environment and the picture painted by our limited observations. importantly, it is not reliant on errors introduced by function approximation. 1we use the term fixed-dataset policy optimization to emphasize the computational procedure; this setting has also been referred to as batch rl (ernst et al., 2005; lange et al., 2012) and more recently, offline rl (levine et al., 2020). we emphasize that this is a well-studied setting, and we are simply choosing to refer to it by a more descriptive name. 2the optimism principle states that we should select a policy with high best-case value. we contrast these findings with an analysis of pessimistic fdpo algorithms, which select a policy that maximizes some notion of worst-case expected return.
we show that these algorithms do not require datasets which inform us about the value of every policy to achieve small suboptimality, due to the critical role that pessimism plays in preventing overestimation. our analysis naturally leads to two families of principled pessimistic fdpo algorithms. we prove their improved suboptimality guarantees, and confirm our claims with experiments on a gridworld. finally, we extend one of our pessimistic algorithms to the deep learning setting. recently, several deep-learning-based algorithms for fixed-dataset policy optimization have been proposed (agarwal et al., 2019; fujimoto et al., 2019; kumar et al., 2019; laroche et al., 2019; jaques et al., 2019; kidambi et al., 2020; yu et al., 2020; wu et al., 2019; wang et al., 2020; kumar et al., 2020; liu et al., 2020). our work is complementary to these results, as our contributions are conceptual, rather than algorithmic. our primary goal is to theoretically unify existing approaches and motivate the design of pessimistic algorithms more broadly. using experiments in the minatar game suite (young & tian, 2019), we provide empirical validation for the predictions of our analysis. the problem of fixed-dataset policy optimization is closely related to the problem of reinforcement learning, and as such, there is a large body of work which contains ideas related to those discussed in this paper. we discuss these works in detail in appendix e. background we anticipate most readers will be familiar with the concepts and notation, which is fairly standard in the reinforcement learning literature. in the interest of space, we relegate a full presentation to appendix a. here, we briefly give an informal overview of the background necessary to understand the main results. we represent the environment as a markov decision process (mdp), denoted M := ⟨S, A, R, P, γ, ρ⟩.
we assume without loss of generality that R(⟨s, a⟩) ∈ [0, 1], and denote its expectation as r(⟨s, a⟩). ρ represents the start-state distribution. policies π can act in the environment, represented by the action matrix A_π, which maps each state to the probability of each state-action when following π. value functions v assign some real value to each state. we use v^π_M to denote the value function which assigns the sum of discounted rewards in the environment when following policy π. a dataset D contains transitions sampled from the environment. from a dataset, we can compute the empirical reward and transition functions, r_D and P_D, and the empirical policy, π̂_D. an important concept for our analysis is the value uncertainty function, denoted µ^π_{D,δ}, which returns a high-probability upper bound on the error of a value function derived from dataset D. certain value uncertainty functions are decomposable by states or state-actions, meaning they can be written as the weighted sum of more local uncertainties. see appendix b for more detail. our goal is to analyze the suboptimality of a specific class of fdpo algorithms, called value-based fdpo algorithms, which have a straightforward structure: they use a fixed-dataset policy evaluation (fdpe) algorithm to assign a value to each policy, and then select the policy with the maximum value. furthermore, we consider fdpe algorithms whose solutions satisfy a fixed-point equation. thus, a fixed-point equation defines an fdpe objective, which in turn defines a value-based fdpo objective; we call the set of all algorithms that implement these objectives the family of algorithms defined by the fixed-point equation. over/under decomposition of suboptimality our first theoretical contribution is a simple but informative bound on the suboptimality of any value-based fdpo algorithm. next, in section 4, we make this concrete by defining the family of naïve algorithms and invoking this bound.
this bound is insightful because it distinguishes the impact of errors of value overestimation from errors of value underestimation, defined as:

definition 1. consider any fixed-dataset policy evaluation algorithm E on any dataset D and any policy π. denote v^π_D := E(D, π). we define the underestimation error as E_ρ[v^π_M − v^π_D] and the overestimation error as E_ρ[v^π_D − v^π_M].

the following lemma shows how these quantities can be used to bound suboptimality.

lemma 1 (value-based fdpo suboptimality bound). consider any value-based fixed-dataset policy optimization algorithm O^vb, with fixed-dataset policy evaluation subroutine E. for any policy π and dataset D, denote v^π_D := E(D, π). the suboptimality of O^vb is bounded by

subopt(O^vb(D)) ≤ inf_π ( E_ρ[v^{π*}_M − v^π_M] + E_ρ[v^π_M − v^π_D] ) + sup_π E_ρ[v^π_D − v^π_M].

proof. see appendix c.1. this bound is tight; see appendix c.2. the bound highlights the potentially outsized impact of overestimation on the suboptimality of an fdpo algorithm. to see this, we consider each of its terms in isolation. the term labeled (a), inf_π ( E_ρ[v^{π*}_M − v^π_M] + E_ρ[v^π_M − v^π_D] ), reflects the degree to which the dataset informs us of a near-optimal policy. for any policy π, (a1) captures the suboptimality of that policy, and (a2) captures its underestimation error. since (a) takes an infimum, this term will be small whenever there is at least one reasonable policy whose value is not very underestimated. on the other hand, the term labeled (b), sup_π E_ρ[v^π_D − v^π_M], corresponds to the largest overestimation error on any policy. because it consists of a supremum over all policies, it will be small only when no policies are overestimated at all. even a single overestimation can lead to significant suboptimality. we see from these two terms that errors of overestimation and underestimation have differing impacts on suboptimality, suggesting that algorithms should be designed with this asymmetry in mind.
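the asymmetry is easy to see numerically. in this toy check (all numbers invented), a single overestimated bad policy is selected by the argmax, and the realized suboptimality is controlled by lemma 1's two terms:

```python
# true values v_M and estimated values v_D for three hypothetical policies
v_true = {"good": 0.9, "ok": 0.6, "bad": 0.1}
v_est = {"good": 0.8, "ok": 0.6, "bad": 0.95}  # "bad" is badly overestimated

chosen = max(v_est, key=v_est.get)  # value-based fdpo: pick the argmax of v_D
subopt = max(v_true.values()) - v_true[chosen]

# term (a): inf over policies of suboptimality plus underestimation error
term_a = min(max(v_true.values()) - v_true[p] + (v_true[p] - v_est[p])
             for p in v_true)
# term (b): sup over policies of overestimation error
term_b = max(v_est[p] - v_true[p] for p in v_true)

assert chosen == "bad"                    # one overestimation hijacks the argmax
assert subopt <= term_a + term_b + 1e-9   # lemma 1's bound holds
```

note that term (a) is small here (the good policy is only mildly underestimated), yet suboptimality is large because term (b) is dominated by the single overestimated policy.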
we will see in section 5 how this may be done. but first, let us further understand why this is necessary by studying in more depth a family of algorithms which treats its errors of overestimation and underestimation equivalently. naïve algorithms the goal of this section is to paint a high-level picture of the worst-case suboptimality guarantees of a specific family of non-pessimistic approaches, which we call naïve fdpo algorithms. informally, the naïve approach is to take the limited dataset of observations at face value, treating it as though it paints a fully accurate picture of the environment. naïve algorithms construct a maximum-likelihood mdp from the dataset, then use standard dynamic programming approaches on this empirical mdp.

definition 2. a naïve algorithm is any algorithm in the family defined by the fixed-point function f_naïve(v^π) := A_π(r_D + γ P_D v^π).

various fdpe and fdpo algorithms from this family could be described; in this work, we do not study these implementations in detail, although we do give pseudocode for some implementations in appendix d.1. one example of a naïve fdpo algorithm which can be found in the literature is certainty equivalence (jiang, 2019a). the core ideas behind naïve algorithms can also be found in the function approximation literature, for example in fqi (ernst et al., 2005; jiang, 2019b). additionally, when available data is held fixed, nearly all existing deep reinforcement learning algorithms are transformed into naïve value-based fdpo algorithms. for example, dqn (mnih et al., 2015) with a fixed replay buffer is a naïve value-based fdpo algorithm.

theorem 1 (naïve fdpo suboptimality bound). consider any naïve value-based fixed-dataset policy optimization algorithm O^vb_naïve. let µ be any value uncertainty function.
with probability at least 1 − δ, the suboptimality of O^vb_naïve is bounded by

subopt(O^vb_naïve(D)) ≤ inf_π ( E_ρ[v^{π*}_M − v^π_M] + E_ρ[µ^π_{D,δ}] ) + sup_π E_ρ[µ^π_{D,δ}].

proof. this result follows directly from lemma 1 and lemma 3. the infimum term is small whenever there is some reasonably good policy with low value uncertainty. in practice, this condition can typically be satisfied, for example by including expert demonstrations in the dataset. on the other hand, the supremum term will only be small if we have low value uncertainty for all policies – a much more challenging requirement. this explains the behavior of pathological examples, e.g. in appendix g.1, where performance is poor despite access to virtually unlimited amounts of data from a near-optimal policy. such a dataset ensures that the first term will be small by reducing the value uncertainty of the near-optimal data-collection policy, but does little to reduce the value uncertainty of any other policy, leading the second term to be large. however, although pathological examples exist, it is clear that this bound will not be tight on all environments. it is reasonable to ask: is it likely that this bound will be tight on real-world examples? we argue that it likely will be. we identify two properties that most real-world tasks share: (1) the set of policies is pyramidal: there are an enormous number of bad policies, many mediocre policies, a few good policies, etc. (2) due to the size of the state space and cost of data collection, most policies have high value uncertainty. given that these assumptions hold, naïve algorithms will perform as poorly on most real-world environments as they do on pathological examples.
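as a concrete sketch of the naïve approach (definition 2), the following builds the maximum-likelihood mdp from a handful of (s, a, r, s') tuples and runs value iteration on it; the tiny dataset and the uniform-transition default for unseen state-actions are our own illustrative choices, not the paper's:

```python
import numpy as np

# hypothetical dataset of (s, a, r, s') transitions from a 2-state, 2-action mdp
gamma = 0.9
data = [(0, 0, 1.0, 0), (0, 0, 1.0, 1), (0, 1, 0.0, 1),
        (1, 0, 0.0, 0), (1, 1, 0.5, 1), (1, 1, 0.5, 1)]
S, A = 2, 2

# maximum-likelihood estimates r_D and P_D (taking the data at face value)
n = np.zeros((S, A))
rsum = np.zeros((S, A))
trans = np.zeros((S, A, S))
for s, a, r, s2 in data:
    n[s, a] += 1
    rsum[s, a] += r
    trans[s, a, s2] += 1

r_hat = np.divide(rsum, n, out=np.zeros_like(rsum), where=n > 0)
p_hat = np.divide(trans, n[..., None],
                  out=np.full_like(trans, 1.0 / S),  # unseen (s,a): uniform guess
                  where=n[..., None] > 0)

# certainty-equivalent value iteration on the empirical mdp
v = np.zeros(S)
for _ in range(500):
    v = np.max(r_hat + gamma * p_hat @ v, axis=1)  # greedy backup, no pessimism
```

the resulting v is the naïve fixed point: it trusts the empirical model everywhere, including state-actions supported by only one or two samples, which is exactly where overestimation can creep in.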
consider: there are many more policies than there is data, so there will be many policies with high value uncertainty; na¨ıve algorithms will likely overestimate several of these policies, and erroneously select one; since good policies are rare, the selected policy will likely be bad. it follows that running na¨ıve algorithms on real-world problems will typically yield suboptimality close to our worst-case bound. and, indeed, on deep rl benchmarks, which are selected due to their similarity to real-world settings, overestimation has been widely observed, typically correlated with poor performance (bellemare et al., 2016; van hasselt et al., 2016; fujimoto et al., 2019). the pessimism principle “behave as though the world was plausibly worse than you observed it to be.” the pessimism principle tells us how to exploit our current knowledge to find the stationary policy with the best worstcase guarantee on expected return. we consider two specific families of pessimistic algorithms, the uncertainty-aware pessimistic algorithms and proximal pessimistic algorithms, and bound the worst-case suboptimality of each. these algorithms each include a hyperparameter, α, controlling the amount of pessimism, interpolating from fully-na¨ıve to fully-pessimistic. (for a discussion of the implications of the latter extreme, see appendix g.2.) then, we will compare the two families, and see how the proximal family is simply a trivial special case of the more general uncertainty-aware family of methods. uncertainty-aware pessimistic algorithms our first family of pessimistic algorithms is the uncertainty-aware (ua) pessimistic algorithms. as the name suggests, this family of algorithms estimates the state-wise bellman uncertainty and penalizes policies accordingly, leading to a pessimistic value estimate and a preference for policies with low value uncertainty. definition 3. 
an uncertainty-aware pessimistic algorithm, with a bellman uncertainty function u^π_{D,δ} and pessimism hyperparameter α ∈ [0, 1], is any algorithm in the family defined by the fixed-point function

f_UA(v^π) = A_π(r_D + γ P_D v^π) − α u^π_{D,δ}.

this fixed-point function is simply the naïve fixed-point function penalized by the bellman uncertainty. this can be interpreted as being pessimistic about the outcome of every action. note that it remains to specify a technique to compute the bellman uncertainty function, e.g. appendix b.1, in order to get a concrete algorithm. it is straightforward to construct algorithms from this family by modifying naïve algorithms to subtract the penalty term. similar algorithms have been explored in the safe rl literature (ghavamzadeh et al., 2016; laroche et al., 2019) and the robust mdp literature (givan et al., 1997), where algorithms with high-probability performance guarantees are useful in the context of ensuring safety.

theorem 2 (uncertainty-aware pessimistic fdpo suboptimality bound). consider an uncertainty-aware pessimistic value-based fixed-dataset policy optimization algorithm O^vb_UA. let u^π_{D,δ} be any bellman uncertainty function, µ^π_{D,δ} be a corresponding value uncertainty function, and α ∈ [0, 1] be any pessimism hyperparameter. the suboptimality of O^vb_UA is bounded with probability at least 1 − δ by

subopt(O^vb_UA(D)) ≤ inf_π ( E_ρ[v^{π*}_M − v^π_M] + (1 + α) E_ρ[µ^π_{D,δ}] ) + (1 − α) sup_π E_ρ[µ^π_{D,δ}].

proof. see appendix c.7. this bound should be contrasted with our result from theorem 1. with α = 0, the family of pessimistic algorithms reduces to the family of naïve algorithms, so the bound is correspondingly identical. we can add pessimism by increasing α, and this corresponds to a decrease in the magnitude of the supremum term. when α = 1, there is no supremum term at all. in general, the optimal value of α lies between the two extremes.
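a tabular sketch of the fixed point in definition 3: value iteration with the penalty −α·u subtracted inside the backup. the two-state empirical mdp and the 1/√n(s,a) bellman uncertainty are our own illustrative assumptions (one common concentration-style choice), not the paper's prescription:

```python
import numpy as np

gamma = 0.9
r_hat = np.array([[1.0, 0.0], [0.0, 0.5]])        # empirical rewards r_D(s, a)
p_hat = np.array([[[0.9, 0.1], [0.1, 0.9]],       # empirical transitions p_D(s'|s,a)
                  [[0.5, 0.5], [0.2, 0.8]]])
counts = np.array([[100, 2], [50, 4]])            # dataset visit counts n(s, a)
u = 1.0 / np.sqrt(counts)                         # assumed bellman uncertainty u(s, a)

def evaluate(pi, alpha, iters=1000):
    """iterate v <- A_pi (r_D + gamma P_D v - alpha u): rarely-visited
    state-actions are charged a larger pessimism penalty."""
    v = np.zeros(2)
    for _ in range(iters):
        q = r_hat + gamma * p_hat @ v - alpha * u
        v = (pi * q).sum(axis=1)
    return v

pi = np.array([[1.0, 0.0], [0.0, 1.0]])           # some fixed policy pi(a|s)
v_naive = evaluate(pi, alpha=0.0)                 # alpha = 0: the naive fixed point
v_pess = evaluate(pi, alpha=1.0)                  # alpha = 1: fully pessimistic

assert np.all(v_pess < v_naive)  # pessimism only ever lowers the estimate
```

sweeping α between 0 and 1 interpolates between the two fixed points, mirroring how the bound in theorem 2 trades the inf term against the sup term.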
to further understand the power of this approach, it is illustrative to compare it to imitation learning. consider the case where the dataset contains a small number of expert trajectories but also a large number of interactions from a random policy, i.e. when learning from suboptimal demonstrations (brown et al., 2019). if the dataset contained only a small amount of expert data, then both a ua pessimistic fdpo algorithm and an imitation learning algorithm would return a high-value policy. however, the injection of sufficiently many random interactions would degrade the performance of imitation learning algorithms, whereas ua pessimistic algorithms would continue to behave similarly to the expert data. proximal pessimistic algorithms the next family of algorithms we study are the proximal pessimistic algorithms, which implement pessimism by penalizing policies that deviate from the empirical policy. the name proximal was chosen to reflect the idea that these algorithms prefer policies which stay "nearby" to the empirical policy. many fdpo algorithms in the literature, and in particular several recently-proposed deep learning algorithms (fujimoto et al., 2019; kumar et al., 2019; laroche et al., 2019; jaques et al., 2019; wu et al., 2019; liu et al., 2020), resemble members of the family of proximal pessimistic algorithms; see appendix e. also, another variant of the proximal pessimistic family, which uses state density instead of state-conditional action density, can be found in appendix c.9.

definition 4. a proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function

f_proximal(v^π) = A_π(r_D + γ P_D v^π) − α (1/(1−γ)) TV_s(π, π̂_D).

theorem 3 (proximal pessimistic fdpo suboptimality bound). consider any proximal pessimistic value-based fixed-dataset policy optimization algorithm O^vb_proximal. let µ be any state-action-wise decomposable value uncertainty function, and α ∈ [0, 1] be a pessimism hyperparameter.
for any dataset D, the suboptimality of O^vb_proximal is bounded with probability at least 1 − δ by

subopt(O^vb_proximal(D)) ≤ inf_π ( E_ρ[v^{π*}_M − v^π_M] + E_ρ[ µ^π_{D,δ} + α (I − γ A_π P_D)^{-1} TV_s(π, π̂_D) ] ) + sup_π E_ρ[ µ^π_{D,δ} − α (I − γ A_π P_D)^{-1} TV_s(π, π̂_D) ].

proof. see appendix c.8. once again, we see that as α grows, the large supremum term shrinks; similarly, by lemma 5, when we have α = 1, the supremum term is guaranteed to be non-positive.3 (3: initially, it will contain µ^{π̂_D}_{D,δ}, but this can be removed since it is not dependent on π.) the primary limitation of the proximal approach is the looseness of the value lower-bound. intuitively, this algorithm can be understood as performing imitation learning, but permitting minor deviations. constraining the policy to be near in distribution to the empirical policy can fail to take advantage of highly-visited states which are reached via many trajectories. in fact, in contrast to both the naïve approach and the ua pessimistic approach, in the limit of infinite data this approach is not guaranteed to converge to the optimal policy. also, note that when α ≥ 1 − γ, this algorithm is identical to imitation learning. the relationship between uncertainty-aware and proximal algorithms though these two families may appear on the surface to be quite different, they are in fact closely related. a key insight of our theoretical work is that it reveals the important connection between these two approaches. concretely: proximal algorithms are uncertainty-aware algorithms which use a trivial value uncertainty function. to see this, we show how to convert an uncertainty-aware penalty into a proximal penalty. let µ be any state-action-wise decomposable value uncertainty function. for any dataset D, we have

µ^π_{D,δ} = µ^{π̂_D}_{D,δ} + (I − γ A_π P_D)^{-1} (A_π − A_{π̂_D})(u_{D,δ} + γ P_D µ^{π̂_D}_{D,δ})   (lemma 4)
          ≤ µ^{π̂_D}_{D,δ} + (I − γ A_π P_D)^{-1} (1/(1−γ)) TV_s(π, π̂_D).   (lemma 5)

we began with the uncertainty penalty.
in the first step, we rewrote the uncertainty for π into the sum of two terms: the uncertainty for π̂_D, and the difference in uncertainty between π and π̂_D on various actions. in the second step, we chose our state-action-wise bellman uncertainty to be 1/(1−γ), which is a trivial upper bound; we also upper-bound the signed policy difference with the total variation. this results in the proximal penalty.4 thus, we see that proximal penalties are equivalent to uncertainty-aware penalties which use a specific, trivial uncertainty function. this result suggests that uncertainty-aware algorithms are strictly better than their proximal counterparts. there is no looseness in this result: for any proximal penalty, we will always be able to find a tighter uncertainty-aware penalty by replacing the trivial uncertainty function with something tighter. however, currently, proximal algorithms are quite useful in the context of deep learning. this is because the only uncertainty function that can currently be implemented for neural networks is the trivial uncertainty function. until we discover how to compute uncertainties for neural networks, proximal pessimistic algorithms will remain the only theoretically-motivated family of algorithms. experiments we implement algorithms from each family to empirically investigate whether their performance follows the predictions of our bounds. below, we summarize the key predictions of our theory. • imitation. this algorithm simply learns to copy the empirical policy. it performs well if and only if the data collection policy performs well. • naïve. this algorithm performs well only when almost no policies have high value uncertainty. this means that when the data is collected from any mostly-deterministic policy, performance of this algorithm will be poor, since many states will be missing data. stochastic data collection improves performance. as the size of the dataset grows, this algorithm approaches optimality. • uncertainty-aware.
this algorithm performs well when there is data on states visited by near-optimal policies. this is the case when a small amount of data has been collected from a near-optimal policy, or a large amount of data has been collected from a worse policy. as the size of the dataset grows, this algorithm approaches optimality. this approach outperforms all other approaches. 4when constructing the penalty, we can ignore the first term, which does not contain π, and so is irrelevant to optimization. • proximal. this algorithm roughly mirrors the performance of the imitation approach, but improves upon it. as the size of the dataset grows, this algorithm does not approach optimality, as the penalty persists even when the environment's dynamics are perfectly captured by the dataset. our experimental results qualitatively align with our predictions in both the tabular and deep learning settings, giving evidence that the picture painted by our theoretical analysis truly describes the fdpo setting. see appendix d for pseudocode of all algorithms; see appendix f for details on the experimental setup; see appendix g.3 for additional experimental considerations for deep learning experiments that will be of interest to practitioners. for an open-source implementation, including full details suitable for replication, please refer to the code in the accompanying github repository: github.com/anonymized. figure 1: tabular gridworld experiments. (a) performance of fdpo algorithms on a dataset of 2000 transitions, as the data collection policy is interpolated from random to optimal. (b) performance of fdpo algorithms as dataset size increases. data is collected with an optimal ε-greedy policy, with ε = 50%. tabular.
the first tabular experiment, whose results are shown in figure 1(a), compares the performance of the algorithms as the policy used to collect the dataset is interpolated from the uniform random policy to an optimal policy using ε-greedy. the second experiment, whose results are shown in figure 1(b), compares the performance of the algorithms as we increase the size of the dataset from 1 sample to 200000 samples. in both experiments, we notice a qualitative difference between the trends of the various algorithms, which aligns with the predictions of our theory. neural network. the results of these experiments can be seen in figure 2. similarly to the tabular experiments, we see that the naïve approach performs well when data is fully exploratory, and poorly when data is collected via an optimal policy; the pure imitation approach performs better when the data collection policy is closer to optimal. the pessimistic approach achieves the best of both worlds: it correctly imitates a near-optimal policy, but also learns to improve upon it somewhat when the data is more exploratory. one notable failure case is in freeway, where the performance of the pessimistic approach barely improves upon the imitation policy, despite the naïve approach performing near-optimally for intermediate values of ε. discussion and conclusion in this work, we provided a conceptual and mathematical framework for thinking about fixed-dataset policy optimization. starting from intuitive building blocks of uncertainty and the over-under decomposition, we showed the core issue with naïve approaches, and introduced the pessimism principle as the defining characteristic of solutions. we described two families of pessimistic algorithms, uncertainty-aware and proximal. we see theoretically that both of these approaches have advantages over the naïve approach, and observed these advantages empirically.
comparing these two families of pessimistic algorithms, we see both theoretically and empirically that uncertainty-aware algorithms are strictly better than proximal algorithms, and that proximal algorithms may not yield the optimal policy, even with infinite data. figure 2: performance of deep fdpo algorithms on a dataset of 500000 transitions, as the data collection policy is interpolated from near-optimal to random. note that here, the only pessimistic algorithm evaluated is proximal. future directions. our results indicate that research in fdpo should not focus on proximal algorithms. the development of neural uncertainty estimation techniques will enable principled uncertainty-aware deep learning algorithms. as is evidenced by our tabular results, we expect these approaches to yield dramatic performance improvements, rendering algorithms derived from the proximal family (kumar et al., 2019; fujimoto et al., 2019; laroche et al., 2019; kumar et al., 2020) obsolete. on ad-hoc solutions. it is undoubtedly disappointing to see that proximal algorithms, which are far easier to implement, are fundamentally limited in this way. it is tempting to propose various ad-hoc solutions to mitigate the flaws of proximal pessimistic algorithms in practice. however, in order to ensure that the resulting algorithm is principled, one must be careful. for example, one might consider tuning α; however, doing the tuning requires evaluating each policy in the environment, which involves gaining information by interacting with the environment, which is not permitted by the problem setting. or, one might consider e.g. an adaptive pessimism hyperparameter which decays with the size of the dataset; however, in order for such a penalty to be principled, it must be based on an uncertainty function, at which point we may as well just use an uncertainty-aware algorithm. stochastic policies.
one surprising property of pessimistic algorithms is that the optimal policy is often stochastic. this is because the penalty term included in their fixed-point objective is often minimized by stochastic policies. for the penalty of proximal pessimistic algorithms, it is easy to see that this will be the case for any non-deterministic empirical policy; for ua pessimistic algorithms, it depends on the choice of bellman uncertainty function, but often still holds (see appendix b.2 for the derivation of a bellman uncertainty function with this property). this observation lends mathematical rigor to the intuition that agents should 'hedge their bets' in the face of epistemic uncertainty. it also means that the simple approach of selecting the argmax action is no longer adequate for policy improvement. in appendix d.2.2 we discuss a policy improvement procedure that takes the proximal penalty into account to find the stochastic optimal policy.

implications for rl. finally, due to the close connection between the fdpo and rl settings, this work has implications for deep reinforcement learning. many popular deep rl algorithms utilize a replay buffer to break the correlation between samples in each minibatch (mnih et al., 2015). however, since these algorithms typically alternate between collecting data and training the network, the replay buffer can also be viewed as a 'temporarily fixed' dataset during each training phase. these algorithms are often very sensitive to hyperparameters; in particular, they perform poorly when the number of learning steps per interaction is large (fedus et al., 2020). our analysis explains this effect: additional steps of learning push the policy toward its naïve fdpo fixed-point, which has poor worst-case suboptimality. a pessimistic algorithm with a better fixed-point could therefore allow us to train more per interaction, improving sample efficiency.
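the observation above, that pessimistic objectives are often optimized by stochastic policies, can be made concrete in a two-action toy case: with a squared-distance proximal penalty toward a stochastic empirical policy, the optimum interpolates between imitation and the greedy action instead of selecting an argmax. this is our own illustration, not the paper's exact penalty:

```python
def pessimistic_policy(q0, q1, beta0=0.7, alpha=1.0):
    """Two-action toy state: maximize  p*q0 + (1-p)*q1 - alpha*||pi - beta_hat||^2
    over p in [0, 1], where beta_hat = (beta0, 1 - beta0) is the (stochastic)
    empirical policy.  The penalty equals 2*alpha*(p - beta0)^2, so the
    unconstrained optimum is p = beta0 + (q0 - q1) / (4 * alpha)."""
    p = beta0 + (q0 - q1) / (4.0 * alpha)
    p = min(1.0, max(0.0, p))  # project back onto the probability interval
    return p, 1.0 - p
```

even with q0 > q1, a strong enough penalty keeps the optimal policy strictly stochastic, so argmax-style policy improvement would be suboptimal here.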
a potential direction of future work is therefore to incorporate pessimism into deep rl.

references

rishabh agarwal, dale schuurmans, and mohammad norouzi. striving for simplicity in off-policy deep reinforcement learning. arxiv preprint arxiv:1907.04543, 2019.
andrás antos, csaba szepesvári, and rémi munos. value-iteration based fitted policy iteration: learning with a single trajectory. in 2007 ieee international symposium on approximate dynamic programming and reinforcement learning, pp. 330–337. ieee, 2007.
marc g bellemare, georg ostrovski, arthur guez, philip s thomas, and rémi munos. increasing the action gap: new operators for reinforcement learning. in thirtieth aaai conference on artificial intelligence, 2016.
daniel s. brown, wonjoon goo, prabhat nagarajan, and scott niekum. extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. in proceedings of the international conference on machine learning, 2019.
michael k cohen and marcus hutter. pessimism about unknown unknowns inspires conservatism. in conference on learning theory, pp. 1344–1373. pmlr, 2020.
damien ernst, pierre geurts, and louis wehenkel. tree-based batch mode reinforcement learning. journal of machine learning research, 6:503–556, 2005.
william fedus, prajit ramachandran, rishabh agarwal, yoshua bengio, hugo larochelle, mark rowland, and will dabney. revisiting fundamentals of experience replay. arxiv preprint arxiv:2007.06700, 2020.
scott fujimoto, david meger, and doina precup. off-policy deep reinforcement learning without exploration. in international conference on machine learning, pp. 2052–2062, 2019.
mohammad ghavamzadeh, marek petrik, and yinlam chow. safe policy improvement by minimizing robust baseline regret. in advances in neural information processing systems, pp. 2298–2306, 2016.
robert givan, sonia leach, and thomas dean. bounded parameter markov decision processes.
in european conference on planning, pp. 234–246. springer, 1997.
vineet goyal and julien grand-clement. robust markov decision process: beyond rectangularity.
wei hu, lechao xiao, and jeffrey pennington. provable benefit of orthogonal initialization in optimizing deep linear networks. arxiv preprint arxiv:2001.05992, 2020.
ahmed hussein, mohamed medhat gaber, eyad elyan, and chrisina jayne. imitation learning: a survey of learning methods. acm computing surveys (csur), 50(2):1–35, 2017.
garud n iyengar. robust dynamic programming. mathematics of operations research, 30(2):
natasha jaques, asma ghandeharioun, judy hanwen shen, craig ferguson, agata lapedriza, noah jones, shixiang gu, and rosalind picard. way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arxiv preprint arxiv:1907.00456, 2019.
nan jiang. note on certainty equivalence, 2019a.
nan jiang. note on fqi, 2019b.
nan jiang and jiawei huang. minimax confidence interval for off-policy evaluation and policy
sham kakade and john langford. approximately optimal approximate reinforcement learning. in
rahul kidambi, aravind rajeswaran, praneeth netrapalli, and thorsten joachims. morel: model-based offline reinforcement learning. arxiv preprint arxiv:2005.05951, 2020.
aviral kumar, justin fu, matthew soh, george tucker, and sergey levine. stabilizing off-policy q-learning via bootstrapping error reduction. in advances in neural information processing systems, pp. 11784–11794, 2019.
aviral kumar, aurick zhou, george tucker, and sergey levine. conservative q-learning for offline reinforcement learning. arxiv preprint arxiv:2006.04779, 2020.
sascha lange, thomas gabel, and martin riedmiller. batch reinforcement learning. reinforcement
romain laroche, paul trichelair, and remi tachet des combes. safe policy improvement with baseline bootstrapping. in international conference on machine learning, pp. 3652–3661, 2019.
tor lattimore and csaba szepesvári. bandit algorithms. cambridge university press, 2020.
sergey levine, aviral kumar, george tucker, and justin fu. offline reinforcement learning: tuto | 9 | [
108, 549.4730784, 504.0033874, 559.4356784 ] |
_4GFbtOuWq-.pdf | 2,022 | 1 | capacity of group-invariant linear readouts from equivariant representations: how many objects can be linearly classified under all possible views? matthew farrell∗‡, blake bordelon∗‡, shubhendu trivedi†, & cengiz pehlevan‡ ‡ harvard university {msfarrell,blake bordelon,cpehlevan}@seas.harvard.edu † massachusetts institute of technology shubhendu@csail.mit.edu abstract equivariance has emerged as a desirable property of representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. however, the expressivity of a representation constrained by group equivariance is still not fully understood. we address this gap by providing a generalization of cover’s function counting theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. we find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. we show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. while other operations do not change the fraction of separable dichotomies, local pooling decreases the fraction, despite being a highly nonlinear operation. finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement. introduction the ability to robustly categorize objects under conditions and transformations that preserve the object categories is essential to animal intelligence, and to pursuits of practical importance such as improving computer vision systems. however, for general-purpose understanding and geometric reasoning, invariant representations of these objects in sensory processing circuits are not enough. 
perceptual representations must also accurately encode their transformation properties. one such property is that of exhibiting equivariance to transformations of the object. when such transformations are restricted to be an algebraic group, the resulting equivariant representations have found significant success in machine learning, beginning with classical convolutional neural networks (cnns) (denker et al., 1989; lecun et al., 1989) and recently generalized by the influential work of cohen & welling (2016). such representations have elicited burgeoning interest as they capture many transformations of practical interest such as translations, permutations, rotations, and reflections. furthermore, equivariance to these transformations can be easily "hard-coded" into neural networks. indeed, a new breed of cnn architectures that explicitly account for such transformations are seeing diverse and rapidly growing applications (townshend et al., 2021; baek et al., 2021; satorras et al., 2021; anderson et al., 2019; bogatskiy et al., 2020; klicpera et al., 2020; winkels & cohen, 2019; gordon et al., 2020; sosnovik et al., 2021; eismann et al., 2020). in addition, equivariant cnns have been shown to capture response properties of neurons in the primary visual cortex beyond classical gábor filter models (ecker et al., 2018).

∗ these authors contributed equally.

while it is clear that equivariance imposes a strong constraint on the geometry of representations and thus of perceptual manifolds (seung & lee, 2000; dicarlo & cox, 2007) that are carved out by these representations as the objects transform, the implications of such constraints on their expressivity are not well understood. in this work we take a step towards addressing this gap.
our starting point is the classical notion of the perceptron capacity (sometimes also known as the fractional memory/storage capacity) - a quantity fundamental to the task of object categorization and closely related to the vc dimension (vapnik & chervonenkis, 1968). defined as the maximum number of points for which all (or a 1−δ fraction of) possible binary label assignments (i.e. dichotomies) afford a hyperplane separating the points with one label from those with the other, it offers a quantification of the expressivity of a representation. classical work on perceptron capacity focused on points in general position (wendel, 1962; cover, 1965; schläfli, 1950; gardner, 1987; 1988). however, understanding the perceptron capacity when the inputs are not merely points, but are endowed with richer structure, has only recently started to attract attention. for instance, work by chung et al. (2018); pastore et al. (2020); rotondo et al. (2020); cohen et al. (2020) considered general perceptual manifolds and examined the role of their geometry to obtain extensions of the perceptron capacity results. however, such work crucially relied on the assumption that each manifold is oriented randomly, a condition which is strongly violated by equivariant representations. with these motivations, our particular contributions in this paper are the following:

• we extend cover's function counting theorem and the vc dimension to equivariant representations, finding that both scale with the dimension of the subspace fixed by the group action.
• we demonstrate the applicability of our result to g-convolutional network layers, including pooling layers, through theory and verify it through simulation.

1.1 related works

work most related to ours falls along two major axes. the first follows the classical perceptron capacity result on the linear separability of points (schläfli, 1950; wendel, 1962; cover, 1965; gardner, 1987; 1988).
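the classical count for points in general position (cover, 1965) is explicit: C(P, N) = 2 Σ_{k=0}^{N−1} binom(P−1, k) of the 2^P dichotomies are linearly separable. this is the standard baseline formula, not the paper's equivariant extension; a quick sketch:

```python
from math import comb

def cover_count(P, N):
    """Number of homogeneously linearly separable dichotomies of P points
    in general position in N dimensions (Cover's function counting theorem)."""
    return 2 * sum(comb(P - 1, k) for k in range(N))

def separable_fraction(P, N):
    """Fraction of all 2**P dichotomies that are linearly separable."""
    return cover_count(P, N) / 2 ** P
```

for P ≤ N every dichotomy is separable, and at the classical capacity P = 2N exactly half of them are.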
this result initiated a long history of investigation in theoretical neuroscience (e.g. brunel et al., 2004; chapeton et al., 2012; rigotti et al., 2013; brunel, 2016; rubin et al., 2017; pehlevan & sengupta, 2017), where it is used to understand the memory capacity of neuronal architectures. similarly, in machine learning, the perceptron capacity and its variants, including notions for multilayer perceptrons, have been fundamental to a fruitful line of study in the context of finite sample expressivity and generalization (baum, 1988; kowalczyk, 1997; sontag, 1997; huang, 2003; yun et al., 2019; vershynin, 2020). work closest in spirit to ours comes from theoretical neuroscience and statistical physics (chung et al., 2018; pastore et al., 2020; rotondo et al., 2020; cohen et al., 2020), which considered general perceptual manifolds, albeit oriented randomly, and examined the role of their geometry to obtain extensions to the perceptron capacity result. the second line of relevant literature is that on group-equivariant convolutional neural networks (gcnns). the main inspiration for such networks comes from the spectacular success of classical cnns (lecun et al., 1989), which directly built translational symmetry into the network architecture. in particular, the internal representations of a cnn are approximately¹ translation equivariant: if the input image is translated by an amount t, the feature map of each internal layer is translated by the same amount. furthermore, an invariant read-out on top ensures that a cnn is translation invariant. cohen & welling (2016) observed that a viable approach to generalize cnns to other data types could involve considering equivariance to more general transformation groups.
this idea has been used to construct networks equivariant to a wide variety of transformations such as planar rotations (worrall et al., 2017; weiler et al., 2018b; bekkers et al., 2018; veeling et al., 2018; smets et al., 2020), 3d rotations (cohen et al., 2018; esteves et al., 2018; worrall & brostow, 2018; weiler et al., 2018a; kondor et al., 2018a; perraudin et al., 2019), permutations (zaheer et al., 2017; hartford et al., 2018; kondor et al., 2018b; maron et al., 2019a; 2020), general euclidean isometries (weiler et al., 2018a; weiler & cesa, 2019; finzi et al., 2020), scaling (marcos et al., 2018; worrall & welling, 2019; sosnovik et al., 2020) and more exotic symmetries (bogatskiy et al., 2020; shutty & wierzynski, 2020; finzi et al., 2021).

¹ some operations, such as max pooling and boundary effects of the convolutions, technically break strict equivariance, as do the final densely connected layers.

a quite general theory of equivariant/invariant networks has also emerged. kondor & trivedi (2018) gave a complete description of gcnns for scalar fields on homogeneous spaces of compact groups. this was generalized further to cover the steerable case in (cohen et al., 2019b) and to general gauge fields in (cohen et al., 2019a; weiler et al., 2021). this theory also includes universal approximation results (yarotsky, 2018; keriven & peyré, 2019; sannai et al., 2019b; maron et al., 2019b; segol & lipman, 2020; ravanbakhsh, 2020). nevertheless, while the benefits of equivariance/invariance in terms of improved sample complexity and ease of training are quoted frequently, a firm theoretical understanding is still largely missing. some results however do exist, going back to (shawe-taylor, 1991). abu-mostafa (1993) made the argument that restricting a classifier to be invariant cannot increase its vc dimension. sokolic et al. (2017) extend this idea to derive generalization bounds for invariant classifiers, while sannai et al.
(2019a) do so specifically for the permutation group. elesedy & zaidi (2021) show a strict generalization benefit for equivariant linear models, proving that the generalization gap between a least squares model and its equivariant version depends on the dimension of the space of anti-symmetric linear maps. some benefits of related ideas such as data augmentation and invariant averaging are formally shown in (lyle et al., 2020; chen et al., 2020). here we focus on the limits to expressivity enforced by equivariance.

2 problem formulation

suppose x abstractly represents an object and let r(x) ∈ ℝⁿ be some feature map of x to an n-dimensional space (such as an intermediate layer of a deep neural network). we consider transformations of this object such that they form a group in the algebraic sense of the word. we denote the abstract transformation of x by an element g ∈ g as gx. groups g may be represented by invertible matrices, which act on a vector space v (and themselves form the group gl(v) of invertible linear transformations on v). we are interested in feature maps r which satisfy the following group equivariance condition: r(gx) = π(g)r(x), where π : g → gl(ℝⁿ) is a linear representation of g which acts on the feature map r(x). note that many representations of g are possible, including the trivial representation π(g) = i for all g. we are interested in perceptual object manifolds generated by the actions of g. each of the p manifolds can be written as a set of points {π(g)rµ : g ∈ g} where µ ∈ [p] ≡ {1, 2, . . . , p}; that is, these manifolds are orbits of the point rµ ≡ r(xµ) under the action of π. we will refer to such manifolds as π-manifolds.² each π-manifold represents a single object under the transformation encoded by π; hence, each of the points in a π-manifold is assigned the same class label. a perceptron endowed with a set of linear readout weights w will attempt to determine the correct class of every point in every manifold.
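the equivariance condition r(gx) = π(g)r(x) can be checked concretely for the cyclic group acting by circular shifts, with circular convolution as the feature map; a minimal pure-python sketch (all names are ours, not the paper's):

```python
def cyclic_shift(x, t):
    """Action of the cyclic group C_n on a length-n signal: shift by t."""
    n = len(x)
    return [x[(i - t) % n] for i in range(n)]

def circular_conv(x, w):
    """Circular convolution, an equivariant feature map:
    circular_conv(shift(x, t), w) == shift(circular_conv(x, w), t)."""
    n = len(x)
    return [sum(x[(i - j) % n] * w[j] for j in range(n)) for i in range(n)]
```

shifting the input and then convolving gives the same result as convolving and then shifting the feature map, which is exactly the condition above for π(g) a cyclic permutation matrix.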
the condition for realizing (i.e. linearly separating) the dichotomy {yµ}µ can be written as yµ w⊤π(g)rµ > 0 for all g ∈ g and µ ∈ [p], where yµ = +1 if the µth manifold belongs to the first class and yµ = −1 if it belongs to the second class. the perceptron capacity is the fraction of dichotomies that can be linearly separated, that is, separated by a hyperplane. for concreteness, one might imagine that each rµ is the neural representation of an image of a dog (if yµ = +1) or of a cat (if yµ = −1). the action π(g) could, for instance, correspond to the image shifting to the left or right, where the size of the shift is given by g. different representations of even the same group can have different coding properties, an important point for investigating biological circuits and one that we leverage to construct a new gcnn architecture in section 5.

perceptron capacity of group-generated manifolds | 2 | [
108.299, 156.2966768, 440.3236016, 168.2518768 ] |
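as a toy instance of the separability condition yµ w⊤π(g)rµ > 0 from the excerpt above: a readout averaged over the group lies in the subspace fixed by the group action and scores every view of a point identically, so a dichotomy is separated under all views exactly when it is separated once. a sketch for cyclic shifts (names are ours):

```python
def view_score(w, x, t):
    """Readout applied to the t-shifted view of x (cyclic group action)."""
    n = len(x)
    return sum(w[i] * x[(i - t) % n] for i in range(n))

def group_average(w):
    """Project w onto the subspace fixed by all cyclic shifts; for shifts
    this is the constant vector with the same mean as w."""
    n = len(w)
    return [sum(w) / n] * n

def separates_all_views(w, points, labels):
    """Check y * <w, pi(g) r> > 0 for every point and every group element."""
    return all(y * view_score(w, x, t) > 0
               for x, y in zip(points, labels)
               for t in range(len(x)))
```

a generic readout typically fails on some shifted view, while its group average separates all views whenever the orbit means are separable, illustrating why capacity is governed by the fixed subspace.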
zEn1BhaNYsC.pdf | 2,023 | 1 | minimax optimal kernel operator learning via multilevel training jikai jin school of mathematical sciences peking university beijing, china jkjin@pku.edu.cn yiping lu institute for computational & mathematical engineering stanford university stanford, ca, us yplu@stanford.edu jose blanchet management science and engineering stanford university stanford, ca, us jose.blanchet@stanford.edu lexing ying department of mathematics stanford university stanford, ca, us lexing@stanford.edu abstract learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. in this paper, we study the statistical limit of learning a hilbert-schmidt operator between two infinite-dimensional sobolev reproducing kernel hilbert spaces (rkhss). we establish the information-theoretic lower bound in terms of the sobolev hilbert-schmidt norm and show that a regularization that learns the spectral components below the bias contour and ignores the ones above the variance contour can achieve the optimal learning rate. at the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces. introduction | 0 | [
126.82956, 288.1406768, 205.9888518, 300.0958768 ] |
a4COps0uokg.pdf | 2,023 | 1 | user-interactive offline reinforcement learning phillip swazinna siemens & tu munich munich, germany swazinna@in.tum.de steffen udluft siemens technology munich, germany steffen.udluft@siemens.com thomas runkler siemens & tu munich munich, germany thomas.runkler@siemens.com abstract offline reinforcement learning algorithms still lack trust in practice due to the risk that the learned policy performs worse than the original policy that generated the dataset or behaves in an unexpected way that is unfamiliar to the user. at the same time, offline rl algorithms are not able to tune their most important hyperparameter - the proximity of the learned policy to the original policy. we propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the above mentioned issues simultaneously. this allows users to start with the original behavior and grant successively greater deviation, as well as stopping at any time when the policy deteriorates or the behavior is too far from the familiar one. introduction recently, offline reinforcement learning (rl) methods have shown that it is possible to learn effective policies from a static pre-collected dataset instead of directly interacting with the environment (laroche et al., 2019; fujimoto et al., 2019; yu et al., 2020; swazinna et al., 2021b). since direct interaction is in practice usually very costly, these techniques have alleviated a large obstacle on the path of applying reinforcement learning techniques in real world problems. a major issue that these algorithms still face is tuning their most important hyperparameter: the proximity to the original policy. virtually all algorithms tackling the offline setting have such a hyperparameter, and it is obviously hard to tune, since no interaction with the real environment is permitted until final deployment. 
practitioners thus risk being overly conservative (resulting in no improvement) or overly progressive (risking worse-performing policies) in their choice. additionally, one of the arguably largest obstacles on the path to deployment of rl-trained policies in most industrial control problems is that (offline) rl algorithms ignore the presence of domain experts, who can be seen as users of the final product - the policy. instead, most algorithms today can be seen as trying to make human practitioners obsolete. we argue that it is important to provide these users with a utility - something that makes them want to use rl solutions. other research fields, such as machine learning for medical diagnoses, have already established the idea that domain experts are crucially important to solve the task and complement human users in various ways (babbar et al., 2022; cai et al., 2019; de-arteaga et al., 2021; fard & pineau, 2011; tang et al., 2020). we see our work in line with these and other researchers (shneiderman, 2020; schmidt et al., 2021), who suggest that the next generation of ai systems needs to adopt a user-centered approach and develop systems that behave more like an intelligent tool, combining both high levels of human control and high levels of automation. we seek to develop an offline rl method that does just that. furthermore, we see giving control to the user as a requirement that may in the future be much more enforced when regulations regarding ai systems become more strict: the eu's high-level expert group on ai has already recognized "human autonomy and oversight" as a key requirement for trustworthy ai in their ethics guidelines for trustworthy ai (smuha, 2019). in the future, solutions found with rl might thus be required by law to exhibit features that enable more human control. in this paper, we thus propose a simple method to provide users with more control over how an offline rl policy will behave after deployment.
the algorithm that we develop trains a conditional policy that can, after training, adapt the trade-off between proximity to the data-generating policy on the one hand and estimated performance on the other. close proximity to a known solution naturally facilitates trust, enabling conservative users to choose behavior they are more inclined to confidently deploy. that way, users may benefit from the automation provided by offline rl (users don't need to handcraft controllers, or possibly even interactively choose actions) yet still remain in control, as they can e.g. move the policy to a more conservative or more liberal trade-off. we show how such an algorithm can be designed, compare its performance with a variety of offline rl baselines, and show that a user can achieve state-of-the-art performance with it. furthermore, we show that our method has advantages over simpler approaches like training many policies with diverse hyperparameters. finally, since we train a policy conditional on one of the most important hyperparameters in offline rl, we show how a user could potentially use it to tune this hyperparameter. in many cases of our evaluations, this works almost regret-free, since we observe that the performance as a function of the hyperparameter is mostly a smooth function.

related work

offline rl. recently, a plethora of methods has been published that learn policies from static datasets. early works, such as fqi and nfq (ernst et al., 2005; riedmiller, 2005), were termed batch instead of offline since they didn't explicitly address the issue that the data collection cannot be influenced. instead, similarly to other batch methods (depeweg et al., 2016; hein et al., 2018; kaiser et al., 2020), they assumed a uniform random data collection that made generalization to the real environment simpler.
among the first to explicitly address the limitations in the offline setting under unknown data collection were spibb(-dqn) (laroche et al., 2019) in the discrete and bcq (fujimoto et al., 2019) in the continuous actions case. many works with different focuses followed: some treat discrete mdps and come with provable bounds on the performance at least with a certain probability thomas et al. (2015); nadjahi et al. (2019), however many more focused on the continuous setting: emaq, bear, brac, abm, various dice based methods, rem, pebl, psec-td-0, cql, iql, bail, crr, coil, o-raac, opal, td3+bc, and rvs (ghasemipour et al., 2021; kumar et al., 2019; wu et al., 2019; siegel et al., 2020; nachum et al., 2019; zhang et al., 2020; agarwal et al., 2020; smit et al., 2021; pavse et al., 2020; kumar et al., 2020; kostrikov et al., 2021; chen et al., 2019; wang et al., 2020; liu et al., 2021; urpí et al., 2021; ajay et al., 2020; brandfonbrener et al., 2021; emmons et al., 2021) are just a few of the proposed model-free methods over the last few years. additionally, many model-based as well as hybrid approaches have been proposed, such as mopo, morel, moose, combo, rambo, and wsbc (yu et al., 2020; kidambi et al., 2020; swazinna et al., 2021b; yu et al., 2021; rigter et al., 2022; swazinna et al., 2021a). even approaches that train policies purely supervised, by conditioning on performance, have been proposed (peng et al., 2019; emmons et al., 2021; chen et al., 2021). model based algorithms more often use model uncertainty, while model-free methods use a more direct behavior regularization approach. offline policy evaluation or offline hyperparameter selection is concerned with evaluating (or at least ranking) policies that have been found by an offline rl algorithm, in order to either pick the best performing one or to tune hyperparameters. 
often, dynamics models are used to evaluate policies found by model-free algorithms, though model-free evaluation methods exist as well (hans et al., 2011; paine et al., 2020; konyushova et al., 2021; zhang et al., 2021b; fu et al., 2021). unfortunately, but also intuitively, this problem is rather hard: if any method could assess policy performance more accurately than the mechanism the offline algorithm uses for training, that method should be used for training instead. moreover, the general dilemma of not knowing in which parts of the state-action space we know enough to optimize behavior always seems to remain. works such as zhang et al. (2021a); lu et al. (2021) become applicable if limited online evaluations are allowed, making hyperparameter tuning much more viable.

offline rl with online adaptation. other works propose an online learning phase that follows after offline learning has concluded. in the most basic form, kurenkov & kolesnikov (2021) introduce an online evaluation budget that lets them find the best set of hyperparameters for an offline rl algorithm given limited online evaluation resources. in an effort to minimize such a budget, yang et al. (2021) train a set of policies spanning a diverse set of uncertainty-performance trade-offs. ma et al. (2021) propose a conservative adaptive penalty that penalizes unknown behavior more at the beginning and less at the end of training, leading to safer policies during training. pong et al. (2021); nair et al. (2020); zhao et al. (2021) propose methods for effective online learning phases that follow the offline learning phase. in contrast to these methods, we are not aiming for a fully automated solution. instead, we want to provide the user with a valuable tool after training, so we do not propose an actual online phase, also because practitioners may find any performance deterioration unacceptable.
to the best of our knowledge, no prior offline rl method produces policies that remain adaptable after deployment without any further training.

lion: learning in interactive offline environments

in this work, we address two dilemmas of the offline rl setting: first and foremost, we would like to provide the user with a high-level control option to influence the behavior of the policy, since we argue that the user is crucially important for solving the task and should not be made obsolete by an algorithm. further, we address the issue that in offline rl, the correct hyperparameter controlling the trade-off between conservatism and performance is unknown and can hardly be tuned. by training a policy conditioned on the proximity hyperparameter, we aim to enable the user to find a good trade-off hyperparameter. code will be made available at https://github.com/pswazinna/lion. as mentioned, behavior cloning will most likely yield the most trustworthy solution due to its familiarity; however, that solution is of very limited use since it does not outperform the previous one. offline rl, on the other hand, is problematic since we cannot simply evaluate policy candidates on the real system, and offline policy evaluation is still an open problem (hans et al., 2011; paine et al., 2020; konyushova et al., 2021; zhang et al., 2021b; fu et al., 2021). in the following, we thus propose a solution that moves the hyperparameter choice from training to deployment time, enabling the user to interactively find the desired trade-off between bc and offline optimization. a user may then slowly move from conservative towards better solutions.

training. during training time, we optimize three components: a model of the original policy βϕ(s), an ensemble of transition dynamics models {fψi(s, a) | i ∈ 0, . . . , n − 1}, as well as the user-adaptive policy πθ(s, λ). the dynamics models {fψi} as well as the original policy β are trained in isolation before the actual policy training starts.
both π and β are simple feedforward neural networks which map states directly to actions in a deterministic fashion (practitioners likely favor deterministic policies over stochastic ones due to trust issues). β is trained to simply imitate the behavior present in the dataset by minimizing the mean squared distance to the observed actions:

l(ϕ) = E_{st,at∼d} [at − βϕ(st)]²   (1)

depending on the environment, the transition models are either also feedforward networks or simple recurrent networks with a single recurrent layer. the recurrent networks build their hidden state over g steps and are then trained to predict a window of size f into the future (similarly to hein et al. (2017b)), while the feedforward dynamics simply predict single-step transitions. both use the mean squared error as loss:

l(ψi) = E_{st,at,st+1∼d} [st+1 − fψi(st, at)]²   (2a)
l(ψi) = E_{t∼d} [st+g+f+1 − fψi(st, at, . . . , st+g, at+g, . . . , ŝt+g+f, at+g+f)]²   (2b)

where the ŝt+h+f are the model predictions that are fed back to be used as input again. for simplicity, in this notation we assume the reward to be part of the state; we also do not explicitly show the recurrence and the carrying over of the hidden states. after having trained the two components βϕ(s) and {fψi(s, a)}, we can then move on to policy training. similarly to moose and wsbc, we optimize the policy πθ by sampling start states from d and performing virtual rollouts through the dynamics ensemble using the current policy candidate. in every step, the ensemble predicts the reward as the minimum among its members and the next state that goes with it. at the same time, we collect the mean squared differences between the actions that πθ took in the rollout and the ones that βϕ would have taken. the loss is then computed as a weighted sum of the two components.
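the behavior-cloning objective above is a plain mean squared error over the dataset; a minimal sketch (function names are ours, not the paper's):

```python
def bc_loss(policy, states, actions):
    """Mean squared distance between dataset actions and the cloned
    policy's actions, i.e. an empirical estimate of E[(a_t - beta(s_t))^2]."""
    n = len(states)
    return sum((a - policy(s)) ** 2 for s, a in zip(states, actions)) / n
```

the dynamics-model losses work the same way, with the predicted next state in place of the predicted action.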
Crucially, we sample the weighting factor λ randomly and pass it to the policy as an additional input - the policy thus needs to learn all behaviors ranging from pure behavior cloning to entirely free optimization:

L(θ) = − Σ_t γ^t [λ E(s_t, a_t) − (1 − λ) P(a_t)],  with a_t = π_θ(s_t, λ)    (3)

where we sample λ between 0 and 1, E(s_t, a_t) = min{r(f^i_{ψ_i}(s_t, a_t)) | i ∈ 0, ..., N − 1} denotes the output of the ensemble prediction for the reward (we omit explicit notation of the recurrence for simplicity), and P(a_t) = [β_ϕ(s_t) − a_t]² denotes the penalty based on the mean squared distance between the original policy and the actions proposed by π_θ. See Fig. 1 for a visualization of our proposed training procedure.

Figure 1: Schematic of LION policy training. During policy training (Eq. 3) only π_θ (in green) is adapted, while the original policy model β_ϕ (orange) and the dynamics ensemble {f_{ψ_i}} (blue) are already trained and remain unchanged. From left to right, we first sample a start state (black) from the dataset and a λ value from its distribution. Then, we let the original policy (orange) as well as the currently trained policy (green) predict actions - note that the newly trained policy is conditioned on λ. Both actions are then compared to calculate the penalty for that timestep (red). The action from the currently trained policy is then also fed into the trained transition model (blue) together with the current state (black / blue), to get the reward for that timestep (yellow) as well as the next state (blue). This procedure is repeated until the horizon of the episode is reached. The rewards and penalties are then summed up and weighted by λ to be used as a loss function for policy training.

We motivate our purely model-based approach (no value function involved) with the fact that we have fewer moving parts: our ensemble can be kept fixed once it is trained, while a value function has to be learned jointly with π_θ, which is in our case more complex than usual. See experimental results in Fig.
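The virtual rollout loss of Eq. 3 can be sketched as follows. The 1D models, names, and deterministic toy setup are ours, but the structure (pessimistic minimum-reward ensemble member per step, squared distance to the cloned policy as penalty) follows the text:

```python
def lion_rollout_loss(policy, ensemble, beta, s0, lam, horizon, gamma=1.0):
    """Virtual rollout through the dynamics ensemble, returning the loss of Eq. 3.
    Each ensemble member maps (state, action) -> (reward, next_state); per step
    the member with the minimum predicted reward is used (pessimism)."""
    loss, s = 0.0, s0
    for t in range(horizon):
        a = policy(s, lam)                                     # lambda-conditioned policy
        r, s_next = min((m(s, a) for m in ensemble), key=lambda pred: pred[0])
        penalty = (beta(s) - a) ** 2                           # squared distance to the clone
        loss -= gamma ** t * (lam * r - (1 - lam) * penalty)   # negated return-penalty trade-off
        s = s_next
    return loss

# toy check: zero-reward world, policy constantly one unit away from the clone
model = lambda s, a: (0.0, s)     # reward 0, state unchanged
clone = lambda s: 1.0             # cloned original policy
candidate = lambda s, lam: 0.0    # candidate policy, ignores its inputs
```

At λ = 0 only the penalty contributes; at λ = 1 only the (negated) predicted return does.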
10 for a brief attempt at making our approach work in the model-free domain. In addition to Eq. 3, we need to penalize divergence not only from the learned model of the original policy during virtual rollouts, but also from the actual actions in the dataset at λ = 0. It seems that if this is not done, the trained policy π sticks to the (also trained) original policy model β during the rollouts, but during those rollouts there are states that did not appear in the original dataset, enabling π to actually diverge from the true trajectory distribution. We thus penalize both rollout as well as data divergence at λ = 0:

L(θ) = − Σ_t γ^t [λ E(s_t, a_t) − (1 − λ) P(a_t)] + η E_{s, a ∼ D} [π_θ(s, λ = 0) − a]²    (4)

where η controls the penalty weight for not following the dataset actions at λ = 0; see Appendix A for more details. Furthermore, we normalize states to have zero mean and unit variance during every forward pass through a dynamics model or the policy, using the mean and standard deviation observed in the dataset. We also normalize the rewards provided by the ensemble, r_t = E(s_t, a_t), so that they live in the same magnitude as the action penalties (we assume actions to be in [−1, 1]^d, so that the penalty can be in [0, 4]d, where d is the action dimensionality). Intuitively, one might choose to sample λ uniformly between zero and one; instead, we choose a Beta distribution with parameters (0.1, 0.1), which could be called bathtub-shaped. Similarly to Seo et al. (2021), we find that it is important to put emphasis on the edge cases, so that the extreme behaviors are properly learned, rather than putting equal probability mass on each value in the [0, 1] range. The interpolation between the edges seems to be easier and thus requires fewer samples. Fig. 11 shows policy results for different lambda distributions during training.
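The bathtub shape of the Beta(0.1, 0.1) distribution is easy to verify with the Python standard library (a quick illustration of ours, not the paper's code):

```python
import random

random.seed(0)
samples = [random.betavariate(0.1, 0.1) for _ in range(10000)]

# fraction of samples near the edges vs. in the middle of [0, 1]
near_edges = sum(1 for x in samples if x < 0.1 or x > 0.9) / len(samples)
middle = sum(1 for x in samples if 0.4 < x < 0.6) / len(samples)
# most of the probability mass sits near lambda = 0 and lambda = 1
```

The two edge regions together carry the large majority of the mass, so pure-BC and pure-optimization behaviors are seen far more often during training than intermediate trade-offs.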
Deployment

At inference time, the trained policy can at any point be influenced by the user that would otherwise be in control of the system, by choosing the λ that is passed to the policy together with the current system state to obtain an action:

a_t = π_θ(s_t, λ),  λ ∈ user(s_t).    (5)

He or she may choose to be conservative or adventurous, observe the feedback, and adjust the proximity parameter of the policy accordingly. At this point, any disliked behavior can immediately be corrected without any time loss due to re-training and deploying a new policy, even if the user's specific preferences were not known at training time. We propose to initially start with λ = 0 during deployment, in order to check whether the policy is actually able to reproduce the original policy and to gain the user's trust in the found solution. Then, depending on how critical failures are and how much time is at hand, λ may be increased in small steps for as long as the user is still comfortable with the observed behavior. Figure 3 shows an example of how the policy behavior changes over the course of λ. Once the performance stops increasing or the user is otherwise not satisfied, we can immediately return to the last satisfying λ value.

Algorithm 1 LION (training)
1: require dataset D = {τ_i}, randomly initialized parameters θ, ϕ, ψ, lambda distribution parameters Beta(a, b), horizon H, number of policy updates U
2: // dynamics and original policy models can be trained supervised and independently of other components
3: train original policy model β_ϕ using D and Equation 1
4: train dynamics models f^i_{ψ_i} with D and Equation 2
5: for j in 1..U do
6:     sample start states s_0 ∼ D
7:     sample lambda values λ ∼ Beta(a, b)
8:     initialize policy loss L(θ) = 0
9:     for t in 0..H do
10:        calculate policy actions a_t = π_θ(s_t, λ)
11:        calculate behavioral actions b_t = β_ϕ(s_t)
12:        calculate penalty term P(a_t) = [b_t − a_t]²
13:        r_t, s_{t+1} = f^i_{ψ_i}(s_t, a_t) s.t. i = arg min_i {r(f^i_{ψ_i}(s_t, a_t))}
14:        L(θ) += −γ^t [λ r_t − (1 − λ) P(a_t)]
15:    update π_θ using gradient ∇_θ L(θ) and Adam

Experiments

At first, we intuitively showcase LION in a simple 2D world in order to get an understanding of how the policy changes its behavior based on λ. Afterwards, we move to a more serious test, evaluating our algorithm on the 16 industrial benchmark (IB) datasets (Hein et al., 2017a; Swazinna et al., 2021b). We aim to answer the following questions:

• Do LION policies behave as expected, i.e. do they reproduce the original policy at λ = 0 and deviate more and more from it with increased freedom to optimize for return?
• Do LION policies, at least in parts of the spanned λ space, perform better than or similarly well to state-of-the-art offline RL algorithms?
• Is it easy for practitioners to find the λ values that maximize return? That is, are the performance curves smooth, or do they have multiple local minima and maxima?
• Is it possible for users to exploit the λ regularization at runtime to restrict the policy to only exhibit behavior they are comfortable with?

2D-World

We evaluate the LION approach on a simplistic 2D benchmark. The states are x and y coordinates in the environment, and rewards are given based on the position of the agent, following a Gaussian distribution around a fixed point in the state space, i.e. r(s_t) = e^{−0.5((s_t−µ)/σ)²}. In this example we set µ = (3, 6)^T and σ = (1.5, 1.5)^T. A visualization of the reward distribution can be seen in Fig. 2(b). We collect data from the environment using a simple policy that moves either to position (2.5, 2.5)^T or to (7.5, 7.5)^T, depending on which is closer to the randomly drawn start state (shown in Fig. 2(a)), adding ε = 10% random actions as exploration.
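The 2D-world reward can be sketched as follows; we use the unnormalized per-dimension Gaussian product form, which is an assumption on our part, since the text only specifies a Gaussian centered at µ with standard deviation σ:

```python
import math

MU = (3.0, 6.0)
SIGMA = (1.5, 1.5)

def reward(state):
    """Gaussian-shaped reward around MU, maximal (1.0) at the center."""
    z2 = sum(((s - m) / sd) ** 2 for s, m, sd in zip(state, MU, SIGMA))
    return math.exp(-0.5 * z2)
```

Moving one standard deviation away from µ in either coordinate reduces the reward by the usual Gaussian factor, which matches the bell-shaped reward surface shown in Fig. 2(b).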
Then we follow the outlined training procedure, training a transition model, an original policy model, and finally a new policy that can at runtime change its behavior based on the desired proximity to the original policy. Fig. 3 shows policy maps for λ ∈ {0.0, 0.6, 0.65, 0.7, 0.85, 1.0}, moving from simply imitating the original policy, over different mixtures, to pure return optimization. Since the task is easy and accurately modeled by the dynamics ensemble, one may give absolute freedom to the policy and optimize for return only. As can be seen, the policy moves quickly to the center of the reward distribution for λ = 1.

Figure 2: (a) Original policy for data collection - color represents action direction. (b) Reward distribution in the 2D environment - color represents reward value.

Figure 3: Policy maps for increasing values of λ in the 2D environment - colors represent action direction. Initially, the policy simply imitates the original policy (see Fig. 2(a)). With increased freedom, the policy moves less to the upper-right and more to the bottom-left goal state of the original policy, since that one is closer to the high rewards. Then, the policy moves its goal slowly upwards on the y-axis until it is approximately at the center of the reward distribution.

Since enough data was available (1,000 interactions) and the environment is so simple, the models capture the true dynamics well and the optimal solution is found at λ = 1. This is however not necessarily the case if not enough data, or not the right data, was collected (e.g. due to a suboptimal original policy - see Fig. 4).

Industrial Benchmark Datasets

We evaluate LION on the industrial benchmark datasets initially proposed in Swazinna et al. (2021b). The 16 datasets are created with three different baseline original policies (optimized, mediocre, bad), mixed with varying degrees of exploration. The optimized baseline is an RL-trained policy and simulates an expert practitioner.
The mediocre baseline moves the system back and forth around a fixed point that is rather well behaved, while the bad baseline steers to a point on the edge of the state space in which rewards are deliberately bad. Each baseline is combined with ε ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}-greedy exploration to collect a dataset (making the ε = 0.0 datasets extreme cases of the narrow distribution problem). Together, they constitute a diverse set of offline RL settings. The exact baseline policies are given by:

π_bad =
π_med =
π_opt =

The datasets contain 100,000 interactions collected by the respective baseline policy combined with the ε-greedy exploration. The IB is a high-dimensional and partially observable environment - if access to the full Markov state were provided, it would contain 20 state variables. Since only six of those are observable, and the relationships to the other variables and their subdynamics are complex and feature heavily delayed components, prior work (Hein et al., 2017b) has stated that up to 30 past time steps are needed to form a state that can hope to recover the true dynamics, so the state can be considered 180-dimensional. In our case we thus set the number of history steps g = 30. The action space is 3-dimensional. The benchmark is not supposed to mimic a single industrial application, but rather to exhibit common issues observable in many different applications (partial observability, delayed rewards, multimodal and heteroskedastic noise, ...). The reward is a weighted combination of the observable variables fatigue and consumption, which are conflicting (they usually move in opposite directions and need to be traded off) and are influenced by various unobservable variables. As in prior work (Hein et al., 2018; Depeweg et al., 2016; Swazinna et al., 2021b), we optimize for a horizon of 100. The datasets are available at https://github.com/siemens/industrialbenchmark/tree/offline_datasets/datasets under the Apache License 2.0.
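The 180-dimensional history state (g = 30 steps of the six observable variables) could be assembled as sketched below; the padding of short histories by repeating the earliest observation is our assumption, not something the text specifies:

```python
def history_state(observations, g=30):
    """Stack the last g observations (each a 6-d vector of the observable
    variables) into one flat 6*g-dimensional state. If fewer than g steps
    are available, the earliest observation is repeated (our assumption)."""
    window = list(observations[-g:])
    window = [window[0]] * (g - len(window)) + window
    return [value for obs in window for value in obs]

# 40 dummy 6-d observations -> the last 30 form a 180-d state
observations = [[float(t)] * 6 for t in range(40)]
state = history_state(observations)
```

A recurrent dynamics model would instead consume the window step by step, but the flat stacking shows where the 6 × 30 = 180 dimensions come from.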
Figure 4: Evaluation performance (top portion of each graph) and distance to the original policy (lower portion of each graph) of the LION approach over the chosen λ hyperparameter. Various state-of-the-art baselines are added as dashed lines with their standard set of hyperparameters (results from Swazinna et al. (2022)). Even though the baselines all exhibit some hyperparameter that controls the distance to the original policy, all are implemented differently, and we can neither map them to a corresponding lambda value of our algorithm nor change their behavior at runtime, which is why we display them as dashed lines over the entire λ-spectrum. See Fig. 12 for the 100% exploration datasets.

Baselines

We compare the performance of LION with various state-of-the-art offline RL baselines:

• BEAR, BRAC, BCQ, CQL and TD3+BC (Kumar et al., 2019; Wu et al., 2019; Fujimoto et al., 2019; Kumar et al., 2020; Fujimoto & Gu, 2021) are model-free algorithms. They mostly regularize the policy by minimizing a divergence to the original policy. BCQ samples only likely actions, and CQL searches for a Q-function that lower-bounds the true one.
• MOOSE and WSBC (Swazinna et al., 2021a;b) are purely model-based algorithms that optimize the policy via virtual trajectories through the learned transition model. MOOSE penalizes the reconstruction loss of actions under the original policy (learned by an autoencoder), while WSBC constrains the policy directly in weight space. From the policy-training perspective, MOOSE is the closest to our LION approach.
• MOPO and MOReL (Yu et al., 2020; Kidambi et al., 2020) are hybrid methods that learn a transition model as well as a value function. Both use the models to collect additional data and regularize the policy by means of model uncertainty. MOPO penalizes uncertainty directly, while MOReL simply stops episodes in which future states become too unreliable. MOReL uses model disagreement and MOPO uses Gaussian outputs to quantify uncertainty.
Evaluation

In order to test whether the trained LION policies are able to provide state-of-the-art performance anywhere in the λ range, we evaluate them for λ from 0 to 1 in many small steps. Figs. 4 and 12 show results for the 16 IB datasets. We find that the performance curves do not exhibit many local optima. Rather, there is usually a single maximum, before which the performance is rising and after which the performance is strictly dropping. This is a very desirable characteristic for usage in the user-interactive setting, as it enables users to easily find the best-performing λ value for the policy. In 13 out of 16 datasets, users can thus match or outperform the current state-of-the-art method on that dataset, and achieve close to on-par performance on the remaining three. The distance-to-original-policy curves are even monotonically increasing from start to finish, making it possible for the practitioner to find the best solution he or she is still comfortable with in terms of distance to the familiar behavior.

Discrete Baseline

A simpler approach might be to train an existing offline RL algorithm for many trade-offs in advance, to provide at least discrete options. Two downsides are obvious: (a) we would not be able to handle the continuous case, i.e. when a user wants a trade-off that lies between two discrete policies, and (b) the computational cost increases linearly with the number of policies trained. We show in Figure 5 that a potentially even bigger issue exists: when we train a discrete collection of policies with different hyperparameters, completely independently of each other, they often exhibit wildly different behaviors even when the change in hyperparameter was small. LION instead expresses the collection as a single policy network, training the behaviors jointly and thus forcing them to smoothly interpolate among each other.
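The λ sweep used in the evaluation above can be sketched as follows; here a toy unimodal return curve of ours stands in for real rollout evaluations:

```python
def sweep_lambda(evaluate, num=21):
    """Evaluate a lambda-conditioned policy on an evenly spaced grid over [0, 1]."""
    grid = [i / (num - 1) for i in range(num)]
    return [(lam, evaluate(lam)) for lam in grid]

# toy return curve with a single maximum at lambda = 0.7,
# mimicking the unimodal performance curves observed on most IB datasets
curve = sweep_lambda(lambda lam: -(lam - 0.7) ** 2)
best_lam, best_return = max(curve, key=lambda point: point[1])
```

Because the same network produces every point on the grid, a single trained policy suffices for the whole sweep; only the conditioning input changes.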
This helps to make the performance a smooth function of the hyperparameter (although this need not always be the case) and results in a performance landscape that is much easier to navigate for a user searching for a good trade-off.

Figure 5: Prior offline RL algorithms like MOPO do not behave consistently when trained across a range of penalizing hyperparameters.

Return Conditioning Baseline

Another interesting line of work trains policies conditioned on the return-to-go, such as RvS (Emmons et al., 2021) (reinforcement learning via supervised learning) or DT (Chen et al., 2021) (decision transformer). A key advantage of these methods is their simplicity: they require neither a transition model nor a value function - just a policy suffices - and the learning can be performed in an entirely supervised fashion. The resulting policies could be interpreted in a similar way as LION policies: conditioning on returns close to the original performance would result in the original behavior, while choosing to condition on higher returns may lead to improved performance if the extrapolation works well. In Fig. 6 we report results of the RvS algorithm on the same datasets as the discrete baseline. The returns in the datasets do not exhibit a lot of variance, so it is unsurprising that the approach did not succeed in learning a lot of different behaviors.

Figure 6: Return-conditioned policies did not learn many different behaviors on the IB datasets.

Finding a Suitable λ

We would like to emphasize that we neither want to optimize all offline hyperparameters with our solution, nor are we interested in a fully automated solution. Users may thus adopt arbitrary strategies to find their personally preferred trade-off. We will however provide a conservative example strategy: the operator starts with the most conservative value available and then moves in small, but constant, steps towards more freedom.
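Such a conservative search can be sketched as follows; `evaluate` stands in for deploying the policy at a given λ and measuring return, and we assume the operator stops at the first drop below the best return seen so far (or below a given baseline):

```python
def find_lambda(evaluate, step=0.05, baseline=float("-inf")):
    """Increase lambda in small constant steps; stop at the first performance
    drop below the best seen so far (or the baseline) and keep the last lambda."""
    lam = 0.0
    best = evaluate(lam)
    while lam + step <= 1.0 + 1e-9:
        r = evaluate(lam + step)
        if r < max(best, baseline):
            return lam            # last lambda before the drop
        lam += step
        best = max(best, r)
    return lam
```

On a unimodal performance curve this stops one step past the maximum and returns the last good λ, which is why smooth, single-peaked curves are so valuable in this setting.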
Whenever the performance drops below the previous best or the baseline performance, he or she immediately stops and uses the last λ before that. Table 1 summarizes how this strategy would perform.

[Table 1: per dataset (bad-0.4, mediocre-0.0, optimized-0.6), the final λ found for LION and MOPO, the final conditioning value for RvS, and the resulting return R̂; the returns achieved include -102.4, -70.8, and -58.9.]

Table 1: If a user adopts the simple strategy of moving in small steps (0.05 for LION; 0.1 for MOPO, since its range is larger; 10.0 / 1.0 for RvS) from conservative towards better solutions, immediately stopping when a performance drop is observed, LION finds much better solutions due to the consistent interpolation between trade-offs. Note that in MOPO we start with a large λ = 2.5 (1.0 is the default) since there it controls the penalty, while we start with λ = 0 in LION, where it controls the return.

Discussion & Conclusion

In this work we presented a novel offline RL approach that, to the best of our knowledge, is the first to let the user adapt the policy behavior after training is finished. We let the user tune the behavior by allowing him or her to choose the desired proximity to the original policy, in an attempt to solve two issues: (1) the problem that practitioners cannot tune the hyperparameter in offline RL, and (2) the general issue that users have no high-level control option when using RL policies (they might even have individual preferences with regard to the behavior of a policy that go beyond just performance). We find that, effectively, LION provides a high-level control option to the user, while still profiting from a high level of automation. It furthermore takes away much of the risk that users normally assume in offline RL, since deployments can always start with a BC policy at λ = 0, before moving to better options. While behavior cloning does not have to work in general, we did not experience any issues with it in our experiments, and it should be easier than performing RL since it can be done entirely in a supervised fashion.
Given that BC works, deployments can thus start with minimal risk. With prior offline algorithms, users ran the risk that the algorithm did not produce a satisfying policy on the particular dataset they chose. For example: WSBC produces state-of-the-art results for many of the IB datasets; however, for mediocre-0.6 it produces a catastrophic -243 (the original performance is -75). Similarly, CQL is the best prior method on optimized-0.8, yet the same method produces a performance of -292 on bad-0.2 (MOOSE, MOPO, and WSBC all get between -110 and -130). Due to the smoothness of the interpolation of behaviors in LION, practitioners should be able to use it to find better trade-offs with lower risk than with prior methods. Adaptable policies are thus likely a step towards more deployments in industrial applications.

Future Work

As outlined at the end of Section C of the appendix, we were unable to incorporate value functions into our approach. This can be seen as a limiting factor, since there exist environments with sparse or very delayed rewards, or that for other reasons exhibit long planning horizons. The industrial benchmark features delayed rewards, and evaluation trajectories are 100 steps long; however, other environments can be more extreme in their characteristics. At some point, even the best dynamics models suffer from compounding errors and cannot accurately predict the far-away future. We do not believe that it is in principle impossible to combine the LION approach with value functions; however, future work will likely need to find methods to stabilize the learning process. Other potential limitations of our approach include difficulties with the behavior cloning, e.g. when the original policy is stochastic or was not defined in the same state space as we use (e.g.
different human operators controlled the system at different times in the dataset), as well as difficulties when interpolating between vastly different behaviors on the Pareto front spanned by proximity and performance. We mention these potential limitations only for the sake of completeness, since we were unable to observe them in our practical experiments.

Ethics Statement
bZJbzaj_IlP.pdf | 2022 | 2 |

A Non-Parametric Regression Viewpoint: Generalization of Overparametrized Deep ReLU Network under Noisy Observations
Namjoon Suh, Hyunouk Ko, Xiaoming Huo
H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA, USA
{namjsuh,hko39,huo}@gatech.edu

Abstract

We study the generalization properties of the overparameterized deep neural network (DNN) with rectified linear unit (ReLU) activations. Under the nonparametric regression framework, it is assumed that the ground-truth function is from a reproducing kernel Hilbert space (RKHS) induced by a neural tangent kernel (NTK) of a ReLU DNN, and a dataset is given with noisy observations. Without a delicate adoption of early stopping, we prove that the overparametrized DNN trained by vanilla gradient descent does not recover the ground-truth function. It turns out that the estimated DNN's L2 prediction error is bounded away from 0. As a complement of the above result, we show that ℓ2-regularized gradient descent enables the overparametrized DNN to achieve the minimax optimal convergence rate of the L2 prediction error, without early stopping. Notably, the rate we obtain is faster than the O(n^{−1/2}) known in the literature.

Introduction

Over the past few years, the neural tangent kernel (NTK) [Arora et al., 2019b; Jacot et al., 2018; Lee et al., 2018; Chizat & Bach, 2018] has been one of the most seminal discoveries in the theory of neural networks. The underpinning idea of NTK-type theory comes from the observation that in a wide-enough neural net, model parameters updated by gradient descent (GD) stay close to their initializations during training, so that the dynamics of the network can be approximated by the first-order Taylor expansion with respect to its parameters at initialization.
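Written out (notation ours, not the paper's), the linearization and the kernel it induces are:

```latex
f(x;\theta) \;\approx\; f(x;\theta_0) + \nabla_\theta f(x;\theta_0)^{\top}(\theta - \theta_0),
\qquad
K_{\mathrm{NTK}}(x,x') \;=\; \big\langle \nabla_\theta f(x;\theta_0),\, \nabla_\theta f(x';\theta_0) \big\rangle,
```

where θ_0 denotes the parameters at initialization. Under this approximation, GD on the squared loss behaves like kernel regression with K_NTK, which is the equivalence between network training dynamics and kernel methods that the NTK literature builds on.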
The linearization of the learning dynamics of neural networks has been helpful in showing linear convergence of the training error for both overparametrized shallow [Li & Liang, 2018; Du et al., 2018] and deep neural networks [Allen-Zhu et al., 2018; Zou et al., 2018; 2020], as well as in characterizing the generalization error of both models [Arora et al., 2019a; Cao & Gu, 2019]. These findings clearly lead to an equivalence between the learning dynamics of neural networks and kernel methods in reproducing kernel Hilbert spaces (RKHS) associated with the NTK.¹ Specifically, Arora et al. [2019a] provided an O(n^{−1/2}) generalization bound for a shallow neural network, where n denotes the training sample size. In the context of nonparametric regression, two recent papers, Nitanda & Suzuki [2020] and Hu et al. [2021], showed that a neural network can obtain a convergence rate faster than O(n^{−1/2}) by specifying the complexities of the target function and the hypothesis space. Specifically, Nitanda & Suzuki [2020] showed that a shallow neural network with a smoothly approximated ReLU (Swish, see Ramachandran et al. [2017]) activation trained via ℓ2-regularized averaged stochastic gradient descent (SGD) can recover a target function from the RKHS induced by the NTK with Swish activation. Similarly, Hu et al. [2021] showed that a shallow neural network with ReLU activation trained via ℓ2-regularized GD can generalize well when the target function (i.e., f⋆_ρ) is from H_NTK.

¹ Henceforth, we denote by H_NTK and H_NTK_L the RKHSs induced by the NTK of shallow (L = 1) and deep (L ≥ 2) neural networks with ReLU activations, respectively.

Notably, the rate that Nitanda & Suzuki [2020] and Hu et al. [2021] obtained is minimax optimal, meaning that no estimator performs substantially better than the ℓ2-regularized GD or averaged SGD algorithms for recovering functions from the respective function spaces.
Nevertheless, these results are restricted to shallow neural networks, and cannot explain the generalization abilities of deep neural networks (DNN). Similarly to Arora et al. [2019a], Cao & Gu [2019] obtained an O(n^{−1/2}) generalization bound, showing that SGD generalizes well when f⋆_ρ ∈ H_NTK_L has a bounded RKHS norm. However, the rate they obtained is slower than the minimax rate we can actually achieve. Furthermore, their results become vacuous in the presence of additive noise on the dataset. Motivated by these observations, the fundamental question in this study is as follows:

When the noisy dataset is generated from a function in H_NTK_L, does the overparametrized DNN obtained via (ℓ2-regularized) GD provably generalize well to unseen data?

We consider a neural network that has L ≥ 2 hidden layers with width m ≫ n (i.e., an overparametrized deep neural network). We focus on the least-squares loss and assume that the activation function is ReLU. A positivity assumption on the NTK of the ReLU DNN is imposed, namely λ∞ > 0, where λ∞ denotes the minimum eigenvalue of the NTK. We give a more formal mathematical definition of the ReLU DNN in the following Subsection 2.2. Under these settings, we provide an affirmative answer to the above question by investigating the behavior of the L2 prediction error of the obtained neural network with respect to GD iterations.

Contributions

Our derivations of the algorithm-dependent prediction risk bound require an analysis of the training dynamics of the estimated neural network under the (regularized) GD algorithm. We include these results as contributions of our paper, as they can be of independent interest as well.

• In the unregularized case, under the assumption λ∞ > 0, we show that the training loss converges to 0 at a linear rate. As will be detailed in Subsection 3.3, this result differs from the seminal work of Allen-Zhu et al.
[2018], where they also prove linear convergence of the training loss of a ReLU DNN, but under a different data distribution assumption.
• We show that the DNN updated via vanilla GD does not recover the ground-truth function f⋆_ρ ∈ H_NTK_L under noisy observations if the DNN is trained either too briefly or too long: that is, the prediction error is bounded away from 0 by some constant as n goes to infinity.
• In the regularized case, we prove that the mean-squared error (MSE) of the DNN is upper bounded by some positive constant. Additionally, we prove that the dynamics of the estimated neural network get close to the solution of the kernel ridge regression associated with the NTK of the ReLU DNN.
• We show that ℓ2-regularization can be helpful in achieving the minimax optimal rate of the prediction risk for recovering f⋆_ρ ∈ H_NTK_L under noisy data. Specifically, it is shown that after some iterations of ℓ2-regularized GD, the minimax optimal rate (which is O(n^{−d/(2d−1)}), where d is the feature dimension) can be achieved.

Note that our paper is an extension of Hu et al. [2021] to the DNN model, showing that the ℓ2-regularized DNN can achieve a minimax optimal rate of the prediction error for recovering f⋆_ρ ∈ H_NTK_L. However, we would like to emphasize that our work is not a trivial application of their work, from at least two technical aspects. These aspects are detailed in the following subsection.

Technical Comparisons with Hu et al. [2021]
qyTBxTztIpQ.pdf | 2022 | 1 |

CrowdPlay: Crowdsourcing Human Demonstrations for Offline Learning
Matthias Gerstgrasser, Rakshit Trivedi & David C. Parkes
School of Engineering and Applied Sciences, Harvard University
{matthias,rstrivedi,parkes}@seas.harvard.edu

Abstract

Crowdsourcing has been instrumental for driving AI advances that rely on large-scale data. At the same time, reinforcement learning has seen rapid progress through benchmark environments that strike a balance between tractability and real-world complexity, such as ALE and OpenAI Gym. In this paper, we aim to fill a gap at the intersection of these two: the use of crowdsourcing to generate large-scale human demonstration data in support of advancing research into imitation learning and offline learning. To this end, we present CrowdPlay: a complete crowdsourcing pipeline for any standard RL environment including OpenAI Gym (made available under an open-source license); a large-scale, publicly available crowdsourced dataset of human gameplay demonstrations in Atari 2600 games, including multimodal behavior and human-human and human-AI multiagent data; offline learning benchmarks with extensive human data evaluation; and a detailed study of incentives, including real-time feedback to drive high-quality data. We hope that this will drive improvements in the design of algorithms that account for the complexity of human behavioral data and thereby enable a step forward in the direction of effective learning for real-world settings. Our code and dataset are available at https://mgerstgrasser.github.io/crowdplay/.

Introduction

Crowdsourcing has been instrumental in many AI advances, especially the recent rapid progress in deep neural network models, which often rely on large training sets. For instance, ImageNet (Deng et al., 2009), a large database of annotated images, has enabled a number of breakthroughs in image classification (Krizhevsky et al., 2012).
At the same time, reinforcement learning (RL) has seen rapid progress in the last few years, fueled in part by the development of standard, easily accessible benchmark environments like the Arcade Learning Environment (ALE) (Bellemare et al., 2013; Machado et al., 2018) and OpenAI Gym (Brockman et al., 2016). What has been underexplored is the intersection of the two: using large-scale crowdsourced human data for offline learning, including imitation learning and offline RL. We present CrowdPlay, a framework, methodology, and dataset that we hope will do for offline learning what ALE and OpenAI Gym did for online learning. CrowdPlay supports flexible and scalable crowdsourcing that is geared towards multi-channel recruitment, and is able to interface with any OpenAI Gym or Gym-like Markov decision process (MDP) environment. It supports real-time feedback to participants that can be used to boost data quality, as well as both purely human and mixed human-AI multiagent environments. CrowdPlay is also the first dataset based on Atari 2600 games that features multimodal and multiagent behavior. It includes both data from normal gameplay and explicitly multimodal behavioral data, where players are given instructions to follow a specific behavior. In addition to single-agent data, the dataset includes data from two-player, human-AI and human-human games, with both competitive and cooperative rewards. Participants were recruited through multiple channels (under IRB, Harvard IRB18-0416) including Amazon Mechanical Turk, Lab in the Wild, undergraduate students, and multiple social media channels. For some platforms we also include data with a range of different incentive structures for participants. The Atari games were run using ALE and a multiagent version of OpenAI Gym, guaranteeing that transitions are identical to what would be seen in standard Atari RL environments.
in this paper we focus on the use of crowdplay for atari 2600 games, but a major advantage of the approach is that it works for any gym-like mdp environment. we believe that atari 2600 games are interesting for imitation learning (il) and offline rl for the same reasons that they were seminal as a challenge problem in the development of rl: they offer a balance between achievable short-term research advances and sufficient complexity. more recent work in psychology has also shown them to be of sufficient richness to support the study of human learning behavior (tsividis et al., 2017). further, and despite the richness of the data, it is easy to collect at scale through the use of crowdsourcing and a web browser, and moreover, it can be used together with an established simulator for evaluation purposes. the crowdplay pipeline directly interfaces with standard rl environments. in particular, this means that trajectories and transitions are guaranteed to be the same for human data as they are for rl environments; that offline learning methods automatically have access to a simulator; and that those crowdsourcing data from human players need not develop environments and tasks from scratch, but can make use of ai agents that can interact with human agents in a benchmark environment. related work crowdsourcing previous platforms for crowdsourcing such as turkserver (mao et al., 2012) have focused on supporting synchronous participation by amazon mechanical turk participants and have been used to study economic behavior in simple experimental environments (and not for the generation of human behavioral data). tylkin et al. (2021) use a combination of ale and javatari for crowdsourcing atari data in the context of evaluating the performance of an ai agent in human-ai collaboration. they propose modifying two-player space invaders to make it cooperative, and to train ai agents using randomized starting positions, both of which we adopt in (some of) our multiagent environments.
their crowdsourcing approach is atari-specific and not publicly available. much work has been done on the study of incentives for participants in paid crowdsourcing studies, and this is also part of the focus of our work. prior work (mao et al., 2013; mason & watts, 2009; yin et al., 2013; harris, 2011; shaw et al., 2011) has largely found that quality-dependent payments may increase quantity of work more than quality of work, and has not looked at real-time feedback on work quality. offline learning. much work has been done on offline learning (rashidinejad et al., 2021), including behavior cloning (bc) (pomerleau, 1989), batch constrained q-learning (bcq) (fujimoto et al., 2019), conservative q-learning (cql) (kumar et al., 2020), implicit quantile network (iqn) (dabney et al., 2018), dqn (mnih, 2015) and an off-policy version of soft actor-critic (haarnoja et al., 2018). aytar et al. (2018) demonstrate learning hard exploration games from unaligned human demonstration videos. recent work (schrittwieser et al., 2021) shows a sample-efficient model-based online and offline learning algorithm. specific to atari, kanervisto et al. (2020) benchmark behavioral cloning algorithms on existing data from several video games, including atari 2600 games. laurens & kazmi (2021) clone river raid agents using existing datasets, and develop evaluation metrics based on action distribution and playstyle. datasets atari grand challenge (agc) (kurin et al., 2017) is a dataset consisting of 45 hours of standard gameplay from five atari 2600 games. the authors also make their atari-specific data collection software available. they use a browser app running the atari emulator in the browser based on javatari. it is unclear to us if this can be guaranteed to always have identical execution to the ale emulator. atari-head (zhang et al., 2020) features 117 hours of gameplay data and includes eye-tracking data. this data was collected using ale.
however, the emulator was run in a semi-frame-by-frame mode, advancing the emulator state only when a key was pressed, and at a maximum of 20 frames per second. the focus of the study was on attention tracking, and it is not intended to be representative of natural human gameplay behavior. figure 1: key parts of the crowdplay software architecture. arrows show the flow of keypresses, observations and metadata between browser client, mdp environment, ai policies, and the database, and the eventual flow to an offline learning pipeline. the backend, load balancer, and database are hosted on cloud infrastructure. d4rl (fu et al., 2020) and rl unplugged (gulcehre et al., 2020) both also provide datasets for offline learning, but both focus on synthetic data. crowdplay: the pipeline overview the heart of our pipeline is the crowdplay backend and frontend, a client-server architecture that streams openai gym environments and similar mdp environments to web browser clients. it is highly extensible, scalable to hundreds or thousands of concurrent users, and allows the real-time capture of both trajectories as well as related statistics. it is geared toward multi-channel recruitment of participants and strong incentives. as its most important feature, it interfaces directly with openai gym and similar environments, thus opening up the entire array of standard rl environments to rapid crowdsourcing of behavioral data in support of research into il and offline rl. it also supports multi-agent environments, including mixed human-ai environments. complementing this is an engine to support the local download of the generated dataset, including storage of metadata in a relational database for fast access, and compressed trajectories for storage efficiency. the data download can be real-time and is incremental. we give a short overview of the crowdplay architecture, and refer the reader to appendix a.1 for more information.
software architecture crowdplay provides a highly extensible, high-performance client-server architecture for streaming mdp environments to remote web browser clients. the backend interfaces with openai gym and similar mdp environments. actions are collected as keypresses in the browser client, and sent to the backend where they are fed into the mdp’s “step” function. the returned observation is sent back to the browser client for display. this is repeated to generate an episode trajectory. the remainder of the crowdplay software infrastructure is built to make this basic loop into a structure that is robust, performant, extensible, user-friendly and scalable. figure 1 shows the key parts of the crowdplay architecture. figure 2: screenshots of the main screen of crowdplay (left) and an enlarged detail of the real-time incentives (right). communication between the browser client and backend is through high-performance socket connections. the backend is built to be scalable both within-instance, using multiple processes, as well as across instances using a load balancer and autoscaling instance groups. trajectories are stored directly as compressed, serialized python objects, allowing both very easy modification of data capture as well as immediate decoding for use in existing python-based learning pipelines. crowdplay also supports multi-agent environments. it allows multiple human participants by routing multiple browser clients to a single mdp environment. mixed human-ai environments are supported through pre-trained neural network policies. for robustness, ai agents can also take over control of a human agent on the fly, in the case that a human player disconnects from a multiagent environment, allowing uninterrupted gameplay for the remaining human players. a major focus in the design of crowdplay is making the generated data easy to access in downstream ml pipelines.
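the basic loop described above (keypresses fed into the mdp's "step" function, observations streamed back, and the trajectory stored as a compressed, serialized python object) can be sketched as follows. this is an illustrative sketch, not crowdplay's actual implementation: `TinyEnv`, the action names, and `run_episode` are stand-ins with a gym-like interface.

```python
import pickle
import zlib

class TinyEnv:
    """stand-in for an openai gym environment (illustrative only)."""
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return {"frame": self.t}
    def step(self, action):
        self.t += 1
        obs = {"frame": self.t}
        reward = 1.0 if action == "FIRE" else 0.0
        done = self.t >= 3
        return obs, reward, done, {}

def run_episode(env, keypresses):
    """feed browser keypresses into the mdp's step function, recording
    the trajectory exactly as the rl environment would see it."""
    trajectory = [env.reset()]
    for key in keypresses:
        obs, reward, done, info = env.step(key)
        trajectory.append((key, obs, reward, done))
        if done:
            break
    # store as a compressed, serialized python object for later decoding
    return zlib.compress(pickle.dumps(trajectory))

blob = run_episode(TinyEnv(), ["FIRE", "LEFT", "FIRE"])
trajectory = pickle.loads(zlib.decompress(blob))
```

because the stored object round-trips through standard pickle, a downstream python learning pipeline can decode it immediately without a custom parser.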
we believe it is crucial for these pipelines to have access not only to the same simulator as the crowdsourcing pipeline, but also to the same observation pre-processing tools that are used in state-of-the-art rl methods on these simulators. addressing these design goals, crowdplay includes a local metadata search engine and a custom, deepmind-style (mnih et al., 2015) observation processing function for offline data. we give more details in appendix a.1. crowdplay provides an extensible and easy-to-use framework for collecting structured metadata and real-time statistics per user, session, episode, and individual step. this is used for capturing data quality information, as well as for driving real-time incentives for participants, depending on recruitment platform. we discuss the platforms that we target in more detail in appendix a.3, and the various incentive designs and their effect on data in section 4. crowdplay atari: the dataset scope and motivation our main focus in creating the first crowdplay dataset has been on atari 2600 games, as we believe that human data for these environments can enable advances in il and offline rl (just as they have driven advances in online rl, and for the same reasons). in curating the dataset, we have been especially motivated by diversity: we have used multiple recruitment channels, each with different extrinsic and intrinsic user motivations and incentive models, and have reached over 800 users in 1300 sessions over a three-week period in september 2021. the crowdplay atari dataset currently holds over 250 hours, or 54 million transitions, of gameplay collected across six different games. we have targeted not only implicitly multimodal behavior through recruiting different users and through different channels, but also explicitly multimodal behavior through explicit instructions that are reinforced through tight incentives and real-time feedback.
we include both single agent as well as multi-agent data, with the latter reported for both human-human and human-ai gameplay, and with both competitive as well as cooperative rewards. we believe this is the first multi-agent human behavioral dataset, at the very least for atari 2600 games. task design we used the following atari 2600 games for our dataset: space invaders, river raid, montezuma’s revenge, q*bert, breakout and beamrider. space invaders makes up the largest part of our dataset. we chose this game as a focus for several reasons: it is well-known and popular both among atari players as well as in the rl community, it was easy to come up with several specific and distinct behaviors that are still compatible with progressing in the game, and it has a native two-player mode. river raid provides the second largest amount of data and was chosen for similar reasons, in that it is accessible and well understood from a learning perspective, and has obvious opportunities to promote multimodal behavior. the remaining games were chosen for diversity of game types and their popularity in previous rl and il work. table 1 shows a list of available data by game and recruitment channel. multimodal behavior for space invaders and river raid, in addition to standard gameplay, we asked some participants to follow specific behavior in the game. we believe that this data will be useful for multiple research strands. in imitation learning, this type of data can provide a testbed to understand whether algorithms are robust to divergent expert behavior. in offline rl, this can allow for controlled experiments with different reward functions in an otherwise identical environment. for most of this data, we recruited participants via mechanical turk, and gave incentives via both a minimum level of task adherence required to complete the hit (unit of task), as well as an explicit reward function tied to a bonus payment, and with real-time feedback on the same. 
the behavior types we instructed participants to follow were to either stay on one half of the game screen (in both space invaders and river raid), or to shoot aliens in a particular order (row by row from the bottom, column by column from the outside in, or column by column from the inside out; in space invaders only). we discuss these tasks in more detail in appendix a.2.1. multiplayer games crowdplay also contains a set of trajectories from multi-agent environments. for these, we used the two-player mode of space invaders. we include data from two variants: (1) the standard two-player space invaders mode, and (2) a custom cooperative mode. in the latter, we modify the original game in two ways. first, in the standard two-player mode there is a score bonus if the other player loses a life. we removed this score bonus. second, we gave both players the sum of their individual scores. these modifications were done entirely in python, in the case of the score bonus by detecting these events from emulator state. participants were instructed to ignore the score shown on the game screen, and were instead shown their “cooperative” score next to the game screen. most of our multiplayer data is from mturk, and people were incentivized through bonus payments to maximize this score. further, we include data from games with two human players, as well as games with one human and one ai player. we give details on this in appendix a.2.6. data analysis a unique feature of the crowdplay atari dataset is its size and diversity. in addition to explicit multimodal behavior, it also comprises data generated by participants recruited via multiple different channels, and different demographics. figure 3 (left) shows a t-sne embedding of action distributions in episodes of standard space invaders gameplay. episodes are colored according to recruitment channel (mturk, email or social media).
we notice that there is a clear divide between action distributions of mturk users and those of email and social media users, which we take as evidence that different demographics already lead to interesting diversity in the data.1 this diversity across recruitment channels is already apparent in aggregate statistics such as action distributions. for multiagent data, figure 3 (right) shows a t-sne embedding for cooperative and standard human-ai space invaders for mturk users. we note several things. first, there is a clear distinction between human and ai actions. second, there are two clear clusters within the ai data that correspond to cooperative and competitive behavior. third, there is a clear relationship between cooperative and standard settings and action distributions for human data, and this is more complex than in the ai data. 1the mostly-orange (email users) cluster on the left corresponds to episodes where no key was pressed at all, while the green (mturk) cluster at the bottom right corresponds to episodes where the fire button was held the entire episode (and no other key was pressed). each group had the same minimum requirement to complete the task, ten minutes of active playtime, which the no-keypress episodes in the orange cluster would not contribute toward. table 1: dataset by game and recruitment channel, reporting data collected (hours) per recruitment channel (mturk, email, social media, raffle) for each task: beamrider; breakout; montezuma’s revenge; q*bert; riverraid (of which multimodal); space invaders (of which incentives, of which multimodal); space invaders (2p); space invaders (2p w/ai); and the total. figure 3: t-sne embedding of action distributions of different (human) participant types in single-agent games (left), and human and ai agents in cooperative and standard multiagent games (right).
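the per-episode features embedded in figure 3 are action distributions; a minimal sketch of computing such a feature vector is below. the action set is an illustrative assumption, and the resulting vectors would then be passed to an off-the-shelf embedding such as sklearn.manifold.TSNE.

```python
from collections import Counter

ACTIONS = ["NOOP", "FIRE", "LEFT", "RIGHT"]  # illustrative action set

def action_distribution(episode_actions):
    """normalized histogram of actions in one episode: the per-episode
    feature vector that a t-sne embedding would operate on."""
    counts = Counter(episode_actions)
    total = len(episode_actions)
    return [counts.get(a, 0) / total for a in ACTIONS]

# e.g. an episode where the fire button was held the entire time,
# versus one where no key was pressed at all
held_fire = action_distribution(["FIRE"] * 10)
idle = action_distribution(["NOOP"] * 10)
```

even this coarse summary separates the clusters described above (all-fire mturk episodes versus no-keypress email episodes), which is why diversity is visible in aggregate statistics alone.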
challenges and limitations we discuss here some challenges that we experienced in collecting this data and the corresponding limitations in the dataset. one, our incentive design has continued to evolve, while a significant amount of the data has been collected with suboptimal incentives. this is part of why we discuss incentive design in detail in the next section, to allow other researchers to build on what we have learned. two, some of our dataset is unbalanced, especially for the different behavior types in the multimodal space invaders data, where we have 2-3 times as much data for some behaviors as for others. this is partly due to suboptimal incentives and some tasks being harder than we had intended, and partly because we improved the way the pipeline assigned mturk workers to tasks over time. we have since improved the automatic task assignment logic to more carefully take into account the amount of data already collected. three, we have found that collecting human-human multiagent data is a difficult challenge; e.g., we expected that the ability to play with friends would be a major draw in social media recruitment, but found that we had virtually zero uptake on this. on the other hand, we have made significant advances in making multiplayer data collection more feasible on mturk. in particular, the use of fallback ai agents has solved many practical problems with workers needing to depend on one another for successful task completion. still, we were able to collect much more single-player and human-ai data than human-human data. incentive design incentive design and recruitment crowdplay has been designed with multichannel recruitment and strong participant incentives in mind. it supports configuring multiple environments and “tasks” for participants to complete, including different user instructions, incentives, and ai agent configurations.
at the heart of its capabilities is its ability to capture an extensible list of metadata live during gameplay, including data such as playtime, score, and various in-game behavioral characteristics. on platforms where users are paid for their participation, we use this data to define both a minimum acceptable effort by users to get paid at all, as well as a dynamic bonus payment that can depend on a fine-grained analysis of user effort. both progress toward minimum required effort as well as bonus payments can be displayed to users during live gameplay. on platforms where users are not compensated, we use the same real-time statistics to reinforce intrinsic motivation through live “high score” pages and by reframing non-standard gameplay tasks as “challenges.” for instance, in one space invaders task we asked participants to shoot aliens in a specific order. our architecture is able to determine adherence to these instructions by evaluating emulator state at every frame, detecting when an alien has been shot and which one, and keeping a running count of aliens shot in and out of the prescribed order. users were rewarded based on both the total number of aliens they shot in the correct order, as well as the fraction of aliens that they shot correctly. using this framework, we can mimic the reward structure of an mdp through monetary incentives and also make use of realtime feedback; and we can additionally shape incentives to achieve particular modalities. figure 2 (right) shows an example of this realtime feedback. incentive models we provide various incentive models, as well as real-time feedback to participants. for social media participants, this was feedback-only. for students we made use of a raffle, and required a minimum time and effort, e.g. 80% of aliens shot in the correct order. for participants from mechanical turk, we experimented with multiple models of incentives. 
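the running count of aliens shot in and out of the prescribed order, described above, could be implemented along these lines. the strict sequence-matching logic here is an illustrative assumption, not crowdplay's exact adherence rule, and the alien identifiers are hypothetical.

```python
def adherence_stats(shot_sequence, prescribed_order):
    """running count of aliens shot in and out of the prescribed order,
    as could be evaluated from emulator state at every frame."""
    next_idx = 0
    in_order, out_of_order = 0, 0
    for alien in shot_sequence:
        if next_idx < len(prescribed_order) and alien == prescribed_order[next_idx]:
            in_order += 1
            next_idx += 1
        else:
            out_of_order += 1
    total = in_order + out_of_order
    fraction_correct = in_order / total if total else 0.0
    return in_order, fraction_correct

# a player who shoots one alien out of turn, then recovers
shot_in_order, frac = adherence_stats([1, 2, 4, 3], [1, 2, 3, 4])
```

rewarding on both components, as described in the text, would then combine the absolute count (`shot_in_order`) with the fraction (`frac`).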
experimental design specifically, we report the results of an extensive experiment with the space invaders “inside-out” task. in this, we gave participants identical instructions for the behavior they were asked to follow, but adopted five different ways of remunerating them: (1) no incentives (payment for everyone), (2) active time (payment subject to non-trivial behavior), (3) minimum requirement (payment subject to minimum performance requirement), (4) sliding bonus (variable payment depending on performance) and (5) all incentives (sliding bonus contingent on meeting minimum requirement). appendix a.4 details these. for both task requirements and bonus payments, participants were given real-time feedback. for minimum requirements, this took the form of an itemized list of requirements and their performance, e.g. “correct aliens shot: 40 (required: 50).” for bonus payments, this took the form of a live estimate of the expected payment, but did not include details on how the payment was computed. task instructions stated that the bonus payment would depend on both the number of aliens shot as well as how many were shot in the correct order. we also compare data from these treatments with data collected from social media users, as well as students reached on campus via email. for email participants, we enforced a similar minimum requirement as for the minimum requirement treatment on mturk, with successful completion of the requirement earning an entry into a raffle. for social media participants, we did not provide any monetary incentives, and instead phrased the task as a “challenge.” results in regard to the structure of incentives and effect on data, we report several main findings. 
first, among paid participants, data quality (measured as the fraction of data per participant meeting a threshold of at least 80% of aliens shot in the specified order) was significantly higher when using any kind of quality-based incentives, be it a minimum requirement tied to a fixed-amount payment (“quality requirement”), a sliding bonus payment (“sliding bonus”), or a combination of both (p < .05 for any pair of treatments, comparing with either of “no incentives” or “active time”). see figure 4 (orange bars). non-incentivized data was low quality even when enforcing that participants actively play the game, thus preventing users from putting in no effort at all (e.g., by starting the game and then switching to a different window). second, participants recruited via social media had the highest data quality (statistically significant at p < .05 for pairwise comparison with any other treatment), but we also found this channel to be the least scalable (see appendix a.2.4). users recruited via an email raffle had nearly as high data quality, but here the difference against the quality-based incentive treatments is insignificant. figure 4: data quality for different incentive treatments and recruitment channels. blue bars show the total amount (in seconds) of “good data” collected per user, where this is defined as episodes with at least 80% task adherence. orange bars show the fraction of good data compared to the total data collected per user. figure 5: evaluation performance of offline rl algorithms across different tasks. the bars indicate the median normalized score across tasks, and the error bars show a bootstrapped estimate of the [25, 75] percentile interval for the median estimate. the score normalization is computed using the best score achieved by humans across each task.
third, among paid participants the incentive treatments led to a larger amount of good data being collected per participant (p < .05 for any pair of treatments, comparing with either of “no incentives” or “active time”). see figure 4 (blue bars). the “quality requirement” and “all incentives” treatments showed the highest amount of good data per participant, although the comparison with “sliding bonus” is not significant (due to some outliers in both treatments). benchmark results our framework provides a variety of data compositions that exhibit real-world intricacies and diversity. we envision that this will help facilitate rapid progress for offline learning methods that rely on learning from diverse data compositions without access to online exploration (rashidinejad et al., 2021). we therefore evaluate our datasets on recently proposed offline learning algorithms and hope that this will inspire potential future directions in this area. as atari is a discrete-action domain, we focus on algorithms that can handle discrete actions effectively. specifically, our baseline algorithms include behavior cloning (bc) (pomerleau, 1989), batch constrained q-learning (bcq) (fujimoto et al., 2019), conservative q-learning (cql) (kumar et al., 2020), implicit quantile network (iqn) (dabney et al., 2018), dqn (mnih, 2015) and an offline version of soft actor-critic (haarnoja et al., 2018). as we evaluate the algorithms on different tasks, in figure 5, we provide the normalized median performance across tasks (agarwal et al., 2020), where the normalization is done using the best performance achieved by humans on the corresponding games. from the results, we observe that the recent advancements in offline reinforcement learning algorithms contribute towards outperforming behavior-regularized algorithms. further, we found that the off-policy algorithm dqn serves as a strong baseline for offline rl, albeit with a higher variance across tasks and seeds.
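the aggregate metric just described (human-normalized median score across tasks, with a bootstrapped [25, 75] percentile interval for the median) can be sketched as follows, assuming simple resampling of the per-task normalized scores; the task names and scores are made up for illustration.

```python
import random
import statistics

def normalized_median(scores, human_best, n_boot=1000, seed=0):
    """median across tasks of (algorithm score / best human score), with a
    bootstrapped [25, 75] percentile interval for the median estimate."""
    normed = [scores[t] / human_best[t] for t in scores]
    point = statistics.median(normed)
    rng = random.Random(seed)
    boot = sorted(
        statistics.median(rng.choice(normed) for _ in normed)
        for _ in range(n_boot)
    )
    lo, hi = boot[n_boot // 4], boot[3 * n_boot // 4]
    return point, (lo, hi)

point, interval = normalized_median(
    {"space_invaders": 50.0, "riverraid": 30.0, "qbert": 90.0},
    {"space_invaders": 100.0, "riverraid": 100.0, "qbert": 100.0},
)
```

the median (rather than the mean) keeps the summary robust to a single outlier task, which matters given the high cross-task variance reported above.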
overall, the results for all the algorithms demonstrate the open challenges of learning from human demonstrations, where the data is both limited and noisy, which raises the need for more robust and sample-efficient methods. first, we contrast our results with benchmark results in the rl unplugged paper (gulcehre et al., 2020), where a subset of these algorithms were evaluated on specific games we benchmark here, but the demonstrations were collected using an online dqn agent trained to maximize the game performance. their results show that both in general (figure 6 therein) and on the specific games we benchmark here (table 9 and figure 7 therein), the offline algorithms performed significantly and consistently better than the original policy performance. in comparison, none of the algorithms in our experiments were able to achieve better than around 45% of best human performance on average (with the exception of iqn on a couple of tasks), and all algorithms demonstrated high variance in performance across seeds and tasks. second, conservative algorithms that constrain the policy to the dataset (e.g. bcq) and are tailored to address erroneous optimistic value function estimation in the presence of complex and multi-modal distributions (e.g. cql) do not perform well on data collected from human demonstrations. this is in sharp contrast to their high performance reported on datasets collected using ai agents, as reported in fu et al. (2020). this demonstrates that human data poses an interesting and distinct challenge to state-of-the-art offline learning algorithms compared with synthetic data. figure 6: separate t-sne embeddings of action distributions and behavioral statistics in multimodal behaviors for human participants (left) and bc agents (right).
details on the experimental setup, performance of algorithms on a suite of robust metrics, individual normalized and unnormalized score for each task and algorithm, and task-specific performance profile for each algorithm are provided in appendix a.7. beyond performance comparisons, offline algorithms also struggle to qualitatively capture human data. figure 6 shows separate t-sne embeddings of behavior of human participants (left) and ai agents trained using bc (right) for the five explicit multimodal behavior types in space invaders. we see that while human data is clustered in clearly distinguishable regions for the behavior types, this is not the case for the bc agents. this shows that multimodal human data poses a challenge for offline learning algorithms that aim to emulate human behavioral characteristics such as bc and other imitation learning methods. see appendix a.6 for details on t-sne parameters and data. discussion | 8 | [
108.299, 239.5206768, 190.2013701, 251.4758768 ] |
qrwe7XHTmYb.pdf | 2,021 | 1 | gshard: scaling giant models with conditional computation and automatic sharding dmitry lepikhin lepikhin@google.com hyoukjoong lee hyouklee@google.com yuanzhong xu yuanzx@google.com dehao chen dehao@google.com orhan firat orhanf@google.com yanping huang huangyp@google.com maxim krikun krikun@google.com noam shazeer noam@google.com zhifeng chen zhifengc@google.com abstract neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. in this paper we demonstrate conditional computation as a remedy to the above-mentioned impediments, and demonstrate its efficacy and utility. we make extensive use of gshard, a module composed of a set of lightweight annotation apis and an extension to the xla compiler to enable large-scale models with up to trillions of parameters. gshard and conditional computation enable us to scale up a multilingual neural machine translation transformer model with sparsely-gated mixture-of-experts. we demonstrate that such a giant model with 600 billion parameters can efficiently be trained on 2048 tpu v3 cores in 4 days to achieve far superior quality for translation from 100 languages to english compared to the prior art. introduction scaling neural networks brings dramatic quality gains over a wide array of machine learning problems such as computer vision, language understanding and neural machine translation (devlin et al., 2018; mahajan et al., 2018; arivazhagan et al., 2019; huang et al., 2019; brown et al., 2020b).
this general tendency motivated recent studies to scrutinize the factors playing a critical role in the success of scaling, including the amounts of training data, the model size, and the computation being utilized as found by past studies (advani & saxe, 2017; hestness et al., 2019; geiger et al., 2020). while the final model quality was found to have a power-law relationship with these factors (hestness et al., 2017; kaplan et al., 2020), the significant quality gains brought by larger models also came with various practical challenges. training efficiency, which we define as the amount of compute and time used to achieve a superior model quality against the best existing system, is oftentimes left out. in this study, we strive to improve model quality while remaining training efficient. we built a 600 billion parameter sequence-to-sequence transformer model with sparsely-gated mixture-of-experts layers, which enjoys sub-linear computation cost and o(1) compilation time. we trained this model with 2048 tpu v3 devices for 4 days on a multilingual machine translation task and achieved far superior translation quality compared to prior art when translating 100 languages to english with a single non-ensemble model. we conducted experiments with various model sizes and found that the translation quality increases as the model gets bigger, yet the total wall-time to train only increases sub-linearly with respect to the model size, as illustrated in figure 1. to train such an extremely large model, we relied on the following key design choices. figure 1: multilingual translation quality (average ∆bleu compared to bilingual baselines) improved as moe model size grows up to 600b, while the end-to-end training cost (in terms of tpu v3 core-years) only increased sublinearly. increasing the model size from 37.5b to 600b (16x) results in a computation cost increase from 6 to 22 years (3.6x).
the 600b parameter model that achieved the best translation quality was trained with 2048 tpu v3 cores for 4 days, a total cost of 22 tpu v3 core-years. in contrast, training all 100 bilingual baseline models would have required 29 tpu v3 core-years. our best quality dense single transformer model (2.3b parameters), achieving a ∆bleu of 6.1, was trained with gpipe for a total of 235.5 tpu v3 core-years. conditional computation first, model architecture should be designed to keep the computation and communication requirements sublinear in the model capacity. conditional computation enables us to satisfy training and inference efficiency by having a sub-network activated on a per-input basis. shazeer et al. (2017) showed that scaling rnn model capacity by adding sparsely gated mixture-of-experts (moe) layers made it possible to achieve improved results with sub-linear cost. we therefore present our approach to extending the transformer architecture with moe layers in this study. gshard annotation second, the model description should be separated from the partitioning implementation and optimization. this separation of concerns lets model developers focus on the network architecture and flexibly change the partitioning strategy, while the underlying system applies semantic-preserving transformations and implements efficient parallel execution. to this end we propose a module, gshard, which only requires the user to annotate a few critical tensors in the model with partitioning policies. it consists of a set of simple apis for annotations, and a compiler extension in xla for automatic parallelization. model developers write models as if there is a single device with huge memory and computation capacity, and the compiler automatically partitions the computation for the target based on the user annotations and its own heuristics. model | 1 | [
108.299, 293.3376768, 165.3222491, 305.2928768 ] |
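the sparsely-gated moe idea described above (activate only a small sub-network per input, so compute grows sub-linearly with total capacity) can be sketched in a few lines. this is a toy illustration of top-2 gating in plain numpy, not gshard's actual implementation: there is no expert capacity limit, no load-balancing loss, and no sharding, and the gate/expert weights here are made-up placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 5

# toy parameters: a gating matrix and one linear "expert" per slot
w_gate = rng.normal(size=(d_model, n_experts))
experts = rng.normal(size=(n_experts, d_model, d_model))

def moe_layer(x, k=2):
    """route each token to its top-k experts; combine outputs by gate weight."""
    logits = x @ w_gate                          # (tokens, experts)
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)        # softmax over experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        top = np.argsort(probs[t])[-k:]          # indices of the top-k experts
        w = probs[t, top] / probs[t, top].sum()  # renormalize the top-k gates
        for e, g in zip(top, w):
            out[t] += g * (x[t] @ experts[e])    # only k of n_experts run per token
    return out

x = rng.normal(size=(n_tokens, d_model))
y = moe_layer(x)
print(y.shape)  # (5, 8): same shape as the input, but only 2 of 4 experts ran per token
```

the per-token loop makes the routing explicit; a real implementation batches tokens per expert and enforces a capacity limit so that expert load stays balanced across devices.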
TBWA6PLJZQm.pdf | 2,022 | 2 | learning with noisy labels revisited: a study using real-world human annotations jiaheng wei∗†, zhaowei zhu∗†, hao cheng†, tongliang liu‡, gang niu§, and yang liu† †university of california, santa cruz, †{jiahengwei,zwzhu,haocheng,yangliu}@ucsc.edu, ‡tongliang.liu@sydney.edu.au, ‡tml lab, university of sydney, §gang.niu.ml@gmail.com §riken abstract existing research on learning with noisy labels mainly focuses on synthetic label noise. synthetic noise, though it has clean structures that greatly enable statistical analyses, often fails to model real-world noise patterns. the recent literature has seen several efforts to offer real-world noisy datasets, e.g., food-101n, webvision, and clothing1m. yet the existing efforts suffer from two caveats: firstly, the lack of ground-truth verification makes it hard to theoretically study the property and treatment of real-world label noise. secondly, these efforts are often of large scale, which may result in unfair comparisons of robust methods within reasonable and accessible computation power. to better understand real-world label noise, it is important to establish controllable, easy-to-use and moderate-sized real-world noisy datasets with both ground-truth and noisy labels. this work presents two new benchmark datasets, which we name cifar-10n and cifar-100n (jointly, cifar-n), equipping the training datasets of cifar-10 and cifar-100 with human-annotated real-world noisy labels collected from amazon mechanical turk. we quantitatively and qualitatively show that real-world noisy labels follow an instance-dependent pattern rather than the classically assumed and adopted ones (e.g., class-dependent label noise). we then initiate an effort to benchmark a subset of the existing solutions using cifar-10n and cifar-100n. 
we further proceed to study the memorization of correct and wrong predictions, which further illustrates the difference between human noise and class-dependent synthetic noise. we show that real-world noise patterns indeed impose new and outstanding challenges compared to synthetic label noise. these observations require us to rethink the treatment of noisy labels, and we hope the availability of these two datasets will facilitate the development and evaluation of future learning-with-noisy-labels solutions. the corresponding datasets and the leaderboard are available at http://noisylabels.com. introduction the image classification task in deep learning requires assigning labels to specific images. annotating labels for training often requires tremendous expense in payments to hired human annotators. the pervasive noisy labels from data annotation present significant challenges to training a quality machine learning model. the problem of dealing with label noise has been receiving increasing attention. typical approaches include unbiased estimators and weighted loss functions (natarajan et al., 2013; liu & tao, 2015), loss correction (patrini et al., 2017; liu & guo, 2020), sample-selection-aided methods (jiang et al., 2018; han et al., 2018; yu et al., 2019), etc. the majority of existing solutions are developed under stylized synthetic noise models, where the noise rates are either class-dependent or homogeneous across data instances. however, real-world supervision biases may come from humans (peterson et al., 2019), sensors (wang et al., 2021b), or models (zhu et al., 2022), which are likely to be instance-dependent. recent works on instance-dependent settings (cheng et al., 2021; jiang et al., 2022) also make structural assumptions, e.g., that the noise transition differs ∗equal contributions in alphabetical ordering. †corresponding author: yang liu <yangliu@ucsc.edu>. 
table 1: summarized information of existing noisy-label benchmarks: the “estimated” noise levels are obtained through a subset of the dataset with verified clean labels. “moderate-resolution” means the max image width is less than 250 pixels. [the table reports, for each dataset (food-101 (bossard et al., 2014), clothing1m (xiao et al., 2015), webvision (li et al., 2017), food-101n (lee et al., 2018), animal-10n (song et al., 2019), red mini-imagenet (jiang et al., 2020), red stanford cars (jiang et al., 2020), cifar-10h (peterson et al., 2019), and our cifar-10n (aggregate/random/worst) and cifar-100n (coarse/fine) sets), the train/test size, number of classes, noise level, and ✗/✓ flags for moderate resolution, clean-label availability, and absence of interventions; the numeric cells were lost in extraction.] in different parts of features (xia et al., 2020b) or sub-populations (wang et al., 2021a; zhu et al., 2021a). although these statistical assumptions facilitate the derivation of theoretical solutions, it is unclear how well the existing models capture real-world noise scenarios. to empirically validate the robustness of proposed methods, synthetic noisy labels on cifar-10 and cifar-100 (krizhevsky et al., 2009) are the most widely accepted benchmarks. the literature has also seen approaches to simulating human annotators in data labeling (hua et al., 2013; long & hua, 2015; liao et al., 2021), and real-world label noise benchmarks, including food-101 (bossard et al., 2014), clothing-1m (xiao et al., 2015), webvision (li et al., 2017), etc. we summarize the above real-world noisy-label datasets in table 1. while a more detailed description and discussion of the existing datasets can be found in the related works, we want to highlight several outstanding issues in existing benchmarks and evaluations. 
as noted in table 1, except for the cifar-related noisy-label datasets, all other datasets suffer from at least one of three caveats: • complex task (high-resolution): when learning with large-scale and relatively high-resolution data, the complex data patterns, various augmentation strategies (xiao et al., 2015), the use of extra train or clean data (bossard et al., 2014; xiao et al., 2015; lee et al., 2018), and differing computation power (for hyper-parameter tuning such as batch size, learning rate, etc.) jointly contribute to the model performance and thus result in unfair comparisons. • missing clean labels: the lack of clean labels for verification in most existing noisy-label datasets makes the evaluation of robust methods intractable. • interventions: human interventions in data generation (jiang et al., 2020) and non-representative data collection processes (song et al., 2019) might disturb the original noisy-label pattern. in addition, although synthetically labeled cifar datasets are popular and widely used benchmarks for evaluating the robustness of proposed methods, there exist no publicly available human-annotated labels for the cifar training datasets with which to validate existing methods or verify popular noise models 1. a human-annotated version of the cifar datasets would greatly facilitate the evaluation of existing and future solutions, thanks to the already standardized procedures for experimenting with cifar. all of the above issues motivate us to revisit the problem of learning with noisy labels and establish accessible, easy-to-use, verifiable datasets that would be broadly usable to the research community. our contributions can be summarized as follows: • we present two new benchmarks, cifar-10n and cifar-100n, which provide cifar-10 and cifar-100 with human-annotated noisy labels. jointly we call our datasets cifar-n. 
our efforts build upon the cifar datasets and provide easily usable benchmark data for the weakly supervised learning community (section 3). we expect to continue maintaining the datasets to facilitate future development. • we introduce new observations on the distribution of human-annotated noisy labels on tiny images, i.e., imbalanced annotations, the flipping of noisy labels among similar features, the co-existence of multiple clean labels for cifar-100 train images (which leads to a new pattern of label noise), etc. we further distinguish the noisy labels in cifar-10n and cifar-100n from synthetic class-dependent label noise, from the aspect of noise transitions for different features, qualitatively and quantitatively (via hypothesis testing) (section 4). 1cifar-10h (peterson et al., 2019) only provides test images with noisy human annotations. • we empirically compare the robustness of a comprehensive list of popular methods when learning with cifar-10n and cifar-100n. we observe consistent performance gaps between human noise and synthetic noise. the different memorization behavior further distinguishes human noise from synthetic noise (section 5). the corresponding datasets and the leaderboard are publicly available at http://noisylabels.com. related works learning from noisy labels earlier approaches for learning from noisy labels mainly focus on loss-adjustment techniques. to mitigate the impact of label noise, one line of approaches modifies the loss of image samples by multiplying it by an estimated noise transition matrix (patrini et al., 2017; hendrycks et al., 2018; xia et al., 2019; yao et al., 2020), re-weights the loss to encourage deep neural nets to fit correct labels (liu & tao, 2015), proposes robust loss functions (natarajan et al., 2013; ghosh et al., 2017; zhang & sabuncu, 2018; amid et al., 2019; wang et al., 2019; liu & guo, 2020), or introduces a robust regularizer (liu et al., 2020; xia et al., 2020a; cheng et al., 2021; wei et al., 2021). 
another line of popular approaches proceeds in a semi-supervised manner: it begins with a clean-sample selection procedure, then makes use of the wrongly labeled samples. for example, several methods (jiang et al., 2018; han et al., 2018; yu et al., 2019; wei et al., 2020) adopt a mentor/peer network to select small-loss samples as “clean” ones for the student/peer network. to further explore the benefits of wrongly labeled samples and improve model performance, li et al. (2020) adopt the mixmatch (berthelot et al., 2019) technique, which has shown success in semi-supervised learning. benchmarks the noisy-label datasets food-101 (bossard et al., 2014), clothing-1m (xiao et al., 2015), and webvision (li et al., 2017) are three large-scale noisily labeled web-image databases consisting of food images, clothes images, or other web images, respectively. however, the majority of images in these three datasets do not have a corresponding clean label with which to perform controlled verification (e.g., verifying the noise levels). later, a much larger-scale food dataset was collected by lee et al. (2018), which contains exactly the same classes as food-101 (bossard et al., 2014). more recently, peterson et al. (2019) presented a noisily labeled benchmark on the cifar-10 test dataset where each test image has 51 human-annotated labels on average. jiang et al. (2020) construct noisily labeled mini-imagenet (vinyals et al., 2016) and stanford cars (krause et al., 2013) datasets with controlled noise levels by substituting human-annotated incorrect labels for synthetic wrong labels. synthetic label noise in this section, we discuss a few popular synthetic models for generating noisy labels. we focus on a K-class classification task. denote by D := {(x_n, y_n)}_{n∈[N]} the training samples, where [N] := {1, 2, ..., N}. the (x_n, y_n) are given by random variables (X, Y) ∈ 𝒳 × 𝒴 drawn from the joint distribution 𝒟, where 𝒳 and 𝒴 can be viewed as the feature and label spaces, respectively. 
in real-world scenarios, a classifier f only has access to a noisily labeled training set D̃ := {(x_n, ỹ_n)}_{n∈[N]}. we assume the noisy samples (x_n, ỹ_n) are given by random variables (X, Ỹ) ∈ 𝒳 × 𝒴̃ drawn from the joint distribution 𝒟̃. clearly, there may exist n ∈ [N] such that y_n ≠ ỹ_n. the flipping from clean to noisy label is usually formulated by a noise transition matrix T(x), with elements T_{i,j}(x) := P(Ỹ = j | Y = i, X = x). we specify different modeling choices of T(x) below. class-dependent label noise the first family of noise transition matrices is class-dependent noise, where the label noise is assumed to be conditionally independent of the feature x. mathematically, T(x) ≡ T and T_{i,j} = P(Ỹ = j | Y = i), ∀i, j ∈ [K]. symmetric T the symmetric noise transition matrix (natarajan et al., 2013) describes the scenario where some human labelers maliciously assign a random label for the given task: the clean class flips uniformly at random to any other class with total probability ε. given noise level ε, the diagonal entries of the symmetric T are T_{i,i} = 1 − ε, and every off-diagonal entry T_{i,j} with i ≠ j is T_{i,j} = ε/(K − 1). asymmetric T the asymmetric noise transition matrix (patrini et al., 2017) simulates the case where ambiguous classes exist, i.e., human labelers may wrongly annotate a truck as an automobile due to the low resolution of the images. there are two widely adopted types of asymmetric T. the asymmetric-next T assumes that the clean label flips to the next class with probability ε, i.e., i → (i + 1) mod K for i ∈ [K]. the asymmetric-pair T considers ⌊K/2⌋ disjoint class pairs (i_c, j_c) with i_c < j_c; for c ∈ [⌊K/2⌋], T_{i_c,j_c} = T_{j_c,i_c} = ε, and the diagonal entries are 1 − ε. 
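the symmetric and asymmetric-next transition matrices are simple enough to construct directly. the sketch below builds both in numpy and draws noisy labels from them; it is my own toy illustration of the definitions (with the symmetric off-diagonals set to ε/(K−1) so each row is a valid distribution), not the authors' generation code.

```python
import numpy as np

def symmetric_T(K, eps):
    """T[i,i] = 1 - eps; all off-diagonals share the remaining eps mass."""
    T = np.full((K, K), eps / (K - 1))
    np.fill_diagonal(T, 1 - eps)
    return T

def asymmetric_next_T(K, eps):
    """clean label i flips to (i + 1) mod K with probability eps."""
    T = np.eye(K) * (1 - eps)
    for i in range(K):
        T[i, (i + 1) % K] = eps
    return T

def corrupt(y, T, rng):
    """sample noisy labels: row y[n] of T gives p(noisy | clean = y[n])."""
    return np.array([rng.choice(len(T), p=T[c]) for c in y])

rng = np.random.default_rng(0)
K, eps = 10, 0.4
y = rng.integers(0, K, size=5000)
for T in (symmetric_T(K, eps), asymmetric_next_T(K, eps)):
    assert np.allclose(T.sum(axis=1), 1.0)  # each row is a valid distribution
    y_noisy = corrupt(y, T, rng)
    print(f"empirical noise rate: {np.mean(y_noisy != y):.3f}")  # close to eps = 0.4
```

the same `corrupt` helper works for any row-stochastic T, including instance-dependent variants where T is a function of x.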
instance-dependent label noise beyond the feature-independent assumption, recent works pay more attention to a challenging case where the label noise is jointly determined by the feature x and the clean label y. there are several techniques for synthesizing instance-dependent label noise, such as the polynomial margin-diminishing label noise (zhang et al., 2021b), where instances near the decision boundary are more likely to be mislabeled; the part-dependent label noise (xia et al., 2020b), where different parts of the feature may contribute different noise transition matrices; and the group-dependent label noise (wang et al., 2021a; zhu et al., 2021a), where different sub-populations may have different noise rates. all of these noise models are proposed with statistical assumptions that facilitate the derivation of theoretical solutions. human-annotated noisy labels on cifar-10 and cifar-100 in this section, we introduce two new benchmark datasets for learning with noisy labels: cifar-10n and cifar-100n. both datasets are built using human-annotated labels collected on amazon mechanical turk (m-turk): we post cifar-10 and cifar-100 training images as annotation human intelligence tasks (hits), and workers receive payments by completing hits. cifar-10n real-world noisy label benchmark the cifar-10 (krizhevsky et al., 2009) dataset contains 60k 32 × 32 color images, 50k for training and 10k for testing. each image belongs to one of ten completely mutually exclusive classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck. dataset collection we randomly split the training dataset of cifar-10 without replacement into ten batches. in the mturk interface, each batch contains 500 hits with 10 images per hit. the training images and test dataset remain unchanged. each hit is then randomly assigned to three independent workers. workers gain a base reward of $0.03 after submitting the answers for each hit. 
we reward workers with a bonus if they contribute more hits than the average number of submissions. we did not make use of any ground-truth clean labels to approve or reject submissions; we only block and reject workers who submit answers with fixed/regular distribution patterns. we defer more details of the dataset collection to appendix a. dataset statistics in the cifar-10n dataset, each training image has one clean label and three human-annotated labels. we provide five noisy-label sets as follows. • aggregate: aggregation of the three noisy labels by majority voting. if the three submitted labels for an image are all different, the aggregated label is randomly selected among the three. • random i (i ∈ {1, 2, 3}): the i-th submitted label for each image. note that our collection procedure ensures that one image cannot be repeatedly labeled by the same worker. • worst: the dataset with the highest noise rate. for each image, if any of the three noisy labels is wrong, the worst label is randomly selected from the wrong labels; otherwise, the worst label equals the clean label. in cifar-10n, 60.27% of the training images received a unanimous label from the three independent labelers. the noise rates of the five noisy-label sets are 9.03% (aggregate), 17.23% (random 1), 18.12% (random 2), 17.64% (random 3), and 40.21% (worst). a complete comparison between existing benchmarks and ours is given in table 1. we defer the noise level of each batch to table 4 (appendix). aggregating the annotated labels significantly decreases the noise rate, and all three random sets have an ≈18% noise level. to provide a challenging noisy setting, we also prepare the worst label set, which covers likely mistakes from human annotators on cifar-10. cifar-100n real-world noisy label benchmark | 4 | [
108.249, 698.0240784, 369.1514979, 707.9866784 ] |
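the aggregate and worst label sets described above reduce to a simple per-image rule. here is a sketch of that rule (my own illustration of the stated construction, not the authors' release code; the string class names are placeholders).

```python
import random
from collections import Counter

def aggregate_label(annotations, rng):
    """majority vote over the three noisy labels; random pick if all three differ."""
    label, freq = Counter(annotations).most_common(1)[0]
    return label if freq > 1 else rng.choice(annotations)

def worst_label(annotations, clean, rng):
    """pick a random wrong annotation if any exists, else fall back to the clean label."""
    wrong = [a for a in annotations if a != clean]
    return rng.choice(wrong) if wrong else clean

rng = random.Random(0)
print(aggregate_label(["cat", "cat", "dog"], rng))               # cat (majority)
print(worst_label(["cat", "cat", "dog"], clean="cat", rng=rng))  # dog (the only wrong label)
```

applied per image, `aggregate_label` yields the 9.03%-noise set and `worst_label` the 40.21%-noise set reported above.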
q7n2RngwOM.pdf | 2,022 | 1 | β-intact-vae: identifying and estimating causal effects under limited overlap pengzhou (abel) wu & kenji fukumizu department of statistical science, the graduate university for advanced studies & the institute of statistical mathematics tachikawa, tokyo {wu.pengzhou,fukumizu}@ism.ac.jp abstract as an important problem in causal inference, we discuss the identification and estimation of treatment effects (tes) under limited overlap; that is, when subjects with certain features belong to a single treatment group. we use a latent variable to model a prognostic score, which is widely used in biostatistics and sufficient for tes; i.e., we build a generative prognostic model. we prove that the latent variable recovers a prognostic score, and that the model identifies individualized treatment effects. the model is then learned as β-intact-vae––a new type of variational autoencoder (vae). we derive te error bounds that enable representations balanced for the treatment groups conditioned on individualized features. the proposed method is compared with recent methods using (semi-)synthetic datasets. introduction causal inference (imbens & rubin, 2015; pearl, 2009), i.e., inferring the causal effects of interventions, is a fundamental field of research. in this work, we focus on treatment effects (tes) based on a set of observations comprising binary labels t for treatment/control (non-treated), outcomes y, and other covariates x. typical examples include estimating the effects of public policies or new drugs based on the personal records of the subjects. the fundamental difficulty of causal inference is that we never observe the counterfactual outcomes that would have occurred had we made the other decision (treatment or control). while randomized controlled trials (rcts) control biases through randomization and are ideal protocols for causal inference, they often have ethical and practical issues, or suffer from expensive costs. 
thus, causal inference from observational data is important. causal inference from observational data has other challenges as well. one is confounding: there may be variables, called confounders, that causally affect both the treatment and the outcome, and spurious correlation/bias follows. the other is the systematic imbalance (difference) of the covariate distributions between the treatment and control groups––that is, x depends on t, which introduces bias in estimation. a majority of studies on causal inference, including the current work, have relied on unconfoundedness; this means that the confounding can be controlled by conditioning on the covariates. the more covariates are collected, the more likely unconfoundedness holds; however, more covariates tend to introduce a stronger imbalance between treatment and control. the current work studies the issue of imbalance in estimating individualized tes conditioned on x. classical approaches aim for covariate balance, x independent of t, by matching and re-weighting (stuart, 2010; rosenbaum, 2020). machine learning methods have also been exploited; there are semi-parametric methods––e.g., van der laan & rose (2018, tmle)––which improve finite-sample performance, as well as non-parametric methods––e.g., wager & athey (2018, cf). notably, since johansson et al. (2016), there has been a recent increase in interest in balanced representation learning (brl), which learns representations z of the covariates such that z is independent of t. the most serious form of imbalance is the limited (or weak) overlap of covariates, which means that sample points with certain covariate values belong to a single treatment group. in this case, a straightforward estimation of tes is not possible at non-overlapping covariate values due to lack of data. 
there are works that provide robustness to limited overlap (armstrong & kolesár, 2021), trim non-overlapping data points (yang & ding, 2018), weight data points by overlap (li & li, 2019), or study convergence rates depending on overlap (hong et al., 2020). limited overlap is particularly relevant to machine learning methods that exploit high-dimensional covariates: with higher-dimensional covariates, overlap is harder to satisfy and verify (d'amour et al., 2020). to address imbalance and limited overlap, we use a prognostic score (hansen, 2008); it is a sufficient statistic of outcome predictors and is among the key concepts of sufficient scores for te estimation. as a function of the covariates, it can map some non-overlapping values to an overlapping value in a lower-dimensional space. for individualized tes, we consider a conditionally balanced representation z, such that z is independent of t given x––which, as we will see, is a necessary condition for a balanced prognostic score. moreover, prognostic score modeling can benefit from methods in predictive analytics and exploit a rich literature, particularly in medicine and health (hajage et al., 2017). thus, it is promising to combine the predictive power of prognostic modeling and machine learning. with this idea, our method builds on a generative prognostic model that models the prognostic score as a latent variable and factorizes into the score distribution and the outcome distribution. as we consider latent variables and causal inference, identification is an issue that must be discussed before estimation. “identification” means that the parameters of interest (in our case, the representation function and tes) are uniquely determined and expressed using the true observational distribution. 
without identification, a consistent estimator is impossible to obtain, and a model will fail silently; in other words, the model may fit perfectly but return an estimator that converges to a wrong value, or does not converge at all (lewbel, 2019, particularly sec. 8). identification is even more important for causal inference because, unlike usual (non-causal) model misspecification, causal assumptions are often unverifiable through observables (white & chalak, 2013). thus, it is critical to specify the theoretical conditions for identification; the applicability of the methods can then be judged by knowledge of the application domain. a major strength of our generative model is that the latent variable is identifiable. this is because the factorization of our model is naturally realized as a combination of identifiable vae (khemakhem et al., 2020a, ivae) and conditional vae (sohn et al., 2015, cvae). based on model identifiability, we develop two identification results for individualized tes under limited overlap. a similar vae architecture was proposed in wu & fukumizu (2020b); the current study differs in setting, theory, learning objective, and experiments. the previous work studies unobserved confounding but not limited overlap, with a different set of assumptions and identification theories. the current study further provides bounds on individualized te error, and the bounds justify a conditional balancing term controlled by the hyperparameter β, as an interpolation between the two identifications. in summary, we study the identification (sec. 3) and estimation (sec. 4) of individualized tes under limited overlap. our approach is based on recovering prognostic scores from observed variables. to this end, our method exploits recent advances in identifiable representation––particularly ivae. the code is in the supplementary material, and the proofs are in sec. a. 
our main contributions are: 1) te identification under limited overlap of x, via prognostic scores and an identifiable model; 2) bounds on individualized te error, which justify our conditional brl; 3) a new regularized vae, β-intact-vae, realizing the identification and conditional balance; 4) experimental comparison to state-of-the-art methods on (semi-)synthetic datasets. related work limited overlap. under limited overlap, luo et al. (2017) estimate the average te (ate) by reducing covariates to a linear prognostic score. farrell (2015) estimates a constant te under a partially linear outcome model. d'amour & franks (2021) study the identification of ate by a general class of scores, given the (linear) propensity score and prognostic score. machine learning studies on this topic have focused on finding overlapping regions (oberst et al., 2020; dai & stultz, 2020), or on indicating possible failure under limited overlap (jesson et al., 2020), but not on remedies. an exception is johansson et al. (2020), which provides bounds under limited overlap. to the best of our knowledge, our method is the first machine learning method that provides identification under limited overlap. prognostic scores have recently been combined with machine learning approaches, mainly in the biostatistics community. for example, huang & chan (2017) estimate individualized te by reducing covariates to a linear score that is a joint propensity-prognostic score. tarr & imai (2021) use svm to minimize the worst-case bias due to prognostic score imbalance. however, in the machine learning community, few methods consider prognostic scores; zhang et al. (2020a) and hassanpour & greiner (2019) learn outcome predictors without mentioning prognostic scores, while johansson et al. (2020) conceptually, but not formally, connects brl to prognostic scores. our work is the first to formally connect generative learning and prognostic scores for te estimation. identifiable representation. 
recently, independent component analysis (ica) and representation learning––both ill-posed inverse problems––have come together to yield nonlinear ica and identifiable representations; for example, using vaes (khemakhem et al., 2020a) and energy models (khemakhem et al., 2020b). the results have been exploited in causal discovery (wu & fukumizu, 2020a) and out-of-distribution (ood) generalization (sun et al., 2020). this study is the first to explore identifiable representations in te identification. brl and related methods amount to a major direction. early brl methods include blr/bnn (johansson et al., 2016) and tarnet/cfr (shalit et al., 2017). in addition, yao et al. (2018) exploit the local similarity between data points. shi et al. (2019) use an architecture similar to tarnet, considering the importance of treatment probability. there are also methods that use gans (yoon et al., 2018, ganite) and gaussian processes (alaa & van der schaar, 2017). our method shares the idea of brl, and further extends it to conditional balance––which is natural for individualized te. more. our work lays the conceptual and theoretical foundations of vae methods for tes (e.g., cevae, louizos et al., 2017; lu et al., 2020). see sec. d for more related works, where we also make detailed comparisons to cfr and cevae, which are well-known machine learning methods. setup and preliminaries counterfactuals, treatment effects, and identification following imbens & rubin (2015), we assume there exist potential outcomes Y(t) ∈ R^d, t ∈ {0, 1}. Y(t) is the outcome that would have been observed if the treatment value T = t were applied. we see Y(t) as a hidden variable that gives the factual outcome Y under the factual assignment T = t. formally, Y(t) is defined by the consistency of counterfactuals: Y = Y(t) if T = t; or simply Y = Y(T). the fundamental problem of causal inference is that, for a unit under research, we can observe only one of Y(0) or Y(1)––corresponding to the treatment value applied. 
that is, “factual” refers to y or t, which are observable, or to estimators built on the observables. we also observe relevant covariate(s) X ∈ 𝒳 ⊆ R^m, associated with individuals, with distribution D := (X, Y, T) ∼ p(x, y, t). we use upper case (e.g., T) to denote random variables, and lower case (e.g., t) for realizations. the expected potential outcome conditioned on X = x is denoted by μ_t(x) = E(Y(t) | X = x). the estimands in this work are the conditional ate (cate) and the ate, defined, respectively, by τ(x) := μ_1(x) − μ_0(x) and ν := E(τ(X)). (1) cate is seen as an individual-level, personalized treatment effect, given highly discriminative x. standard results (rubin, 2005; hernan & robins, 2020, ch. 3) give sufficient conditions for te identification in general settings. they are exchangeability: Y(t) ⫫ T | X, and overlap: p(t | x) > 0 for any x ∈ 𝒳. both are required for each t ∈ {0, 1}. when t appears in statements without quantification, we always mean “for both t”. often, consistency is also listed; however, as mentioned, it is better seen as the well-definedness of counterfactuals. exchangeability means, just as in rcts but additionally given x, that there is no correlation between the factual t and the potential y(t). note that the popular assumption Y(0), Y(1) ⫫ T | X is stronger than Y(t) ⫫ T | X and is not necessary for identification (hernan & robins, 2020, pp. 15). overlap means that the supports of p(x | t = 0) and p(x | t = 1) are the same; this ensures that there are data for μ_t(x) at any (x, t). we rely on consistency and exchangeability, but in sec. 3.2 we will relax the overlap condition to allow some non-overlapping covariate values x––that is, the covariate x is limited-overlapping. in this paper, we also discuss overlapping variables other than x (e.g., prognostic scores), and provide a definition for any random variable V with support 𝒱 as follows: definition 1. V is overlapping if p(t | V = v) > 0 for any t ∈ {0, 1}, v ∈ 𝒱. 
if the condition is violated at some value v, then v is non-overlapping and V is limited-overlapping. prognostic scores our method aims to recover a prognostic score (hansen, 2008), adapted to account for both t as in definition 2. on the other hand, balancing scores (rosenbaum & rubin, 1983) b(x) are defined by T ⫫ X | b(X), of which the propensity score p(T = 1 | x) is a special case. see sec. b.1 for details. definition 2. a pgs is {p(x, t)}_{t∈{0,1}} such that Y(t) ⫫ X | p(X, t), where p(x, t) (p_t(x) hereafter) is a function defined on 𝒳 × {0, 1}. a pgs is called balanced (a bpgs) if p_0 = p_1. we say a pgs is overlapping if both p_0(X) and p_1(X) are overlapping. obviously, a bpgs p(x) is a conditionally balanced representation (defined as z ⫫ T | X in the introduction) and is thus named. we often write the t of the function argument in subscripts. we use a bpgs or pgs to construct representations for cate estimation. why not balancing scores? while balancing scores b(x) have been widely used in causal inference, pgss are more suitable for discussing overlap. our purpose is to recover an overlapping score for a limited-overlapping x. it is known that an overlapping b(X) implies an overlapping X (d'amour et al., 2020), which counters our purpose. in contrast, an overlapping bpgs does not imply an overlapping b(X). example. let T = I(X + ε > 0) and Y = f(|X|, T) + e, where I is the indicator function, ε and e are exogenous zero-mean noises, and the support of X is the entire real line while ε is bounded. now, X itself is a balancing score and |X| is a bpgs; and |X| is overlapping but X is not. moreover, with theoretical and experimental evidence, it was recently conjectured that pgss maximize overlap among a class of sufficient scores including b(x) (d'amour & franks, 2021). in general, hajage et al. (2017) show that prognostic score methods perform better than––or as well as––propensity score methods. 
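the example above is easy to check numerically. the following sketch is my own toy simulation (with an arbitrary bounded noise ε ~ uniform(−0.5, 0.5) and gaussian x, neither specified in the text): far from the origin only one treatment arm occurs given x, while both arms occur given |x|.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(scale=2.0, size=n)       # covariate supported on the whole real line
eps = rng.uniform(-0.5, 0.5, size=n)    # bounded exogenous zero-mean noise
t = (x + eps > 0).astype(int)           # T = I(X + eps > 0)

# non-overlap in X: beyond the noise bound, treatment is deterministic
assert t[x > 0.5].min() == 1            # p(T = 0 | x) = 0 for x > 0.5
assert t[x < -0.5].max() == 0           # p(T = 1 | x) = 0 for x < -0.5

# overlap in |X|: for any |x| = v, both x = +v and x = -v occur,
# so both treatment groups are represented given |x|
band = (np.abs(x) > 1.0) & (np.abs(x) < 3.0)  # a band far outside the noise bound
print(sorted(set(t[band])))             # [0, 1]: both arms present given |x|
```

this is exactly why the paper targets an overlapping (balanced) prognostic score such as |x| rather than the balancing score x itself.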
below is a corollary of proposition 5 in hansen (2008); note that pt(x) satisfies exchangeability. proposition 1 (identification via pgs). if pt(x) is a pgs and y | pˆt(x), t ∼ p_{y|pˆt,t}(y|p, t), where ˆt ∈ {0, 1} is a counterfactual assignment, then cate and ate are identified, using (1) and

µˆt(x) = e(y(ˆt) | pˆt(x), x = x) = e(y | pˆt(x), t = ˆt) = ∫ p_{y|pˆt,t}(y | pˆt(x), ˆt) y dy. (2)

with the knowledge of pt and p_{y|pˆt,t}, we choose one of p0, p1 and set t = ˆt in the density function, according to the µˆt of interest. this counterfactual assignment resolves the problem of non-overlap at x; note that a sample point with x = x may not have t = ˆt. we consider additive noise models for y(t), which ensure the existence of pgss. (g1) (additive noise model) the data generating process (dgp) for y is y = f∗(m(x, t), t) + e, where f∗, m are functions and e is a zero-mean exogenous (external) noise. the dgp is causal and defines potential outcomes by y(t) := f∗t(mt(x)) + e, and specifies m(x, t), t, and e as the only direct causes of y. in particular, mt(x) is a sufficient statistic of x for y(t). for example, 1) mt(x) can be the component(s) of x that affect y(t) directly, or 2) if y(t)|x follows a generalized linear model, then mt(x) can be the linear predictor of y(t). under (g1), 1) mt(x) is a pgs; 2) µt(x) = f∗t(mt(x)) is a pgs; 3) x is a (trivial) bpgs; and 4) u(x) := (µ0(x), µ1(x)) is a bpgs. the essence of our method is to recover the pgs mt(x) as a representation, assuming mt(x) is not higher-dimensional than y and is approximately balanced. note that µt(x), our final target, is a low-dimensional pgs but not balanced, and we estimate it conditioning on the approximate bpgs mt(x).

identification under generative prognostic model. in sec. 3.1, we specify the generative prognostic model p(y, z|x, t) and show its identifiability. in sec. 3.2, we prove the identification of cates, which is one of our main contributions.
the theoretical analysis involves only our generative model (i.e., prior and decoder), but not the encoder. the encoder is not part of the generative model; it enters as an approximate posterior in the estimation, which is studied in sec. 4.

model, architecture, and identifiability. our goal is to build a model that can be learned by a vae from observational data to obtain a pgs, or better, a bpgs, via the latent variable z. the generative prognostic model of the proposed method is

pθ(y, z|x, t) = pf(y|z, t) pλ(z|x, t), pf(y|z, t) = pε(y − ft(z)), pλ(z|x, t) = n(z; ht(x), diag(kt(x))), (3)

where θ := (f, h, k) contains the functional parameters. the first factor pf(y|z, t), our decoder, models p_{y|pt,t}(y|p, t) in (2) and is an additive noise model, with ε ∼ pε as the exogenous noise. the second factor pλ(z|x, t), our conditional prior, models pt(x) and is a factorized gaussian, with λt(x) := diag⁻¹(kt(x))(ht(x), −1/2)ᵀ as its natural parameter in the exponential family, where diag() gives a diagonal matrix from a vector. we denote n := dim(z). for inference, the elbo is given by the standard variational lower bound

log p(y|x, t) ≥ e_{z∼q} log pf(y|z, t) − dkl(q(z|x, y, t) ‖ pλ(z|x, t)). (4)

note that the encoder q conditions on all the observables (x, y, t); this fact plays an important role in sec. 4.1, where the full parameterization of the encoder and decoder is also given. this architecture is called intact-vae (identifiable treatment-conditional vae). see figure 1 (cvae, ivae, and intact-vae: graphical models of the decoders) for a comparison in terms of graphical models (which have no causal implications here). see sec. c.2 for more exposition and sec. b.2 for basics of vaes. footnote 1: the labels g, m, or d mean generating process (of y), probabilistic model, or distribution (of x). we introduce assumptions when appropriate but compile them in one place in sec. c.1.
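for a factorized gaussian prior and encoder, the kl term in (4) has a closed form, and the reconstruction term can be estimated by monte carlo. the sketch below assumes unit-variance gaussian decoder noise, and the function names (f, h, k, r, s) are placeholders standing in for the networks; it illustrates the bound, not the paper's implementation:

```python
import numpy as np

def diag_gauss_kl(mu_q, var_q, mu_p, var_p):
    """closed-form KL( N(mu_q, diag(var_q)) || N(mu_p, diag(var_p)) )."""
    return 0.5 * np.sum(np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

def elbo(y, x, t, f, h, k, r, s, n_mc=64, rng=None):
    """monte carlo estimate of the bound (4) for a single unit (x, y, t)."""
    rng = rng or np.random.default_rng(0)
    mu_q, var_q = r(x, y, t), s(x, y, t)          # encoder q(z|x, y, t)
    mu_p, var_p = h(x, t), k(x, t)                # conditional prior p(z|x, t)
    z = mu_q + np.sqrt(var_q) * rng.standard_normal((n_mc, mu_q.size))
    # log p_f(y|z, t) up to a constant, for unit-variance gaussian noise
    recon = np.mean([-0.5 * np.sum((y - f(zi, t)) ** 2) for zi in z])
    return recon - diag_gauss_kl(mu_q, var_q, mu_p, var_p)

# toy 1-d check: identical encoder and prior make the kl term vanish
f_ = lambda z, t: z
h_ = lambda x, t: np.zeros(1)
k_ = lambda x, t: np.ones(1)
r_ = lambda x, y, t: np.zeros(1)
s_ = lambda x, y, t: np.ones(1)
val = elbo(np.array([0.3]), np.array([1.0]), 1, f_, h_, k_, r_, s_)
```

with encoder equal to the prior, the bound reduces to the expected reconstruction term alone, which is the best case for the kl penalty.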
our model identifiability extends the theory of ivae, and the following conditions are inherited. (m1) i) ft is injective, and ii) ft is differentiable. (d1) λt(x) is non-degenerate, i.e., the linear hull of its support is 2n-dimensional. under (m1) and (d1), we obtain the following identifiability of the parameters in the model: if pθ(y|x, t) = pθ′(y|x, t), we have, for any yt in the image of ft,

ft⁻¹(yt) = diag(a) f′t⁻¹(yt) + b =: at(f′t⁻¹(yt)), (5)

where diag(a) is an invertible n-diagonal matrix and b is an n-vector, both of which depend on λt(x) and λ′t(x). the essence of the result is that f′t = ft ∘ at; that is, ft can be identified (learned) up to an affine transformation at. see sec. a for the proof and a relaxation of (d1). in this paper, the symbol ′ (prime) always indicates another parameter (variable, etc.): θ′ = (f′, λ′).

identifications under limited-overlapping covariate. in this subsection, we present two results of cate identification, based on the recovery of an equivalent bpgs and pgs, respectively. since pgss are functions of x, the theory assumes a noiseless prior for simplicity, i.e., k(x) = 0; the prior zλ,t ∼ pλ(z|x, t) degenerates to the function ht(x). pgss with dimensionality lower than or equal to d = dim(y) are essential to address limited overlap, as shown below. we set n = d because µt is a pgs of the same dimension as y under (g1). in practice, n = d means that we seek a low-dimensional representation of x. we introduce (g1’) (low-dimensional pgs): (g1) is true, and µt = jt ∘ pt for some pt and injective jt. this is equivalent to (g1), because µt = jt ∘ pt is trivially satisfied with jt the identity and pt = µt. (g1’) is used instead in this subsection. first, it explicitly restricts dim(pt) via injectivity, which ensures that n = dim(y) ≥ dim(pt). second, it reminds us that the decomposition is possibly not unique; and, clearly, all pt that satisfy (g1’) are pgss.
for example, if f∗t is injective, then jt = f∗t and pt = mt satisfy µt = jt ∘ pt. finally, it is then natural to introduce (g2) (low-dimensional bpgs): (g1) is true, and µt = jt ∘ p for some p and injective jt. this is stronger than (g1), gives a bpgs p(x), and ensures that n ≥ dim(p). (g2) is satisfied if f∗t is injective and m0 = m1. (g2) implies µ1 = i ∘ µ0 where i := j1 ∘ j0⁻¹; in words, cates are given by µ0 and an invertible function. see sec. c.3 for real-world examples and more discussion. with (g1’) or (g2), overlapping x can be relaxed to an overlapping bpgs or pgs plus the following: (m2) (score partition preserving) for any x, x′ ∈ x, if pt(x) = pt(x′), then ht(x) = ht(x′). note that (m2) is only required for the optimal h specified in proposition 2 or theorem 1. the intuition is that pt maps each non-overlapping x to an overlapping value, and ht preserves this property through learning. this is non-trivial because, for a given t, some values of x are unobserved due to limited overlap. thus, (m2) can be seen as a weak form of ood generalization: the nns for h can learn the ood score partition. while unnecessary for us, linear pt and ht trivially imply (m2) and are often assumed, e.g., in huang & chan (2017); luo et al. (2017); d’amour & franks (2021). our first identification, proposition 2, relies on (g2) and our generative model, without model identifiability (so differentiable ft is not needed). proposition 2 (identification via recovery of bpgs). suppose we have dgp (g2) and model (3) with n = d. assume (m1)-i) and (m3) (ps matching): let h0(x) = h1(x) and k(x) = 0. then, if e_{pθ}(y|x, t) = e(y|x, t), we have 1) (recovery of bpgs) zλ,t = ht(x) = v(p(x)) on overlapping x, where v : p → rⁿ is an injective function and p := {p(x) | overlapping x}; 2) (cate identification) if p(x) in (g2) is overlapping and (m2) is satisfied, then µt(x) = µ̂t(x) := e_{pλ(z|x,t)} e_{pf}(y|z, t) = ft(ht(x)), for any t ∈ {0, 1} and x ∈ x.
in essence, i) the true dgp is identified up to an invertible mapping v, such that ft = jt ∘ v⁻¹ and h = v ∘ p; and ii) pt is recovered up to v, and y(t) ⊥⊥ x | pt(x) is preserved, with the same v for both t. theorem 1 below also achieves the essence of i) and ii), under p0 ≠ p1. the existence of a bpgs is preferred, because it satisfies overlap and (m2) more easily than a pgs, which requires the conditions for each of the two functions of the pgs. however, the existence of a low-dimensional bpgs is uncertain in practice when our knowledge of the dgp is limited. thus, we rely on theorem 1, based on the model identifiability, to work under a pgs, which generally exists. theorem 1 (identification via recovery of pgs). suppose we have dgp (g1’) and model (3) with n = d. for the model, assume (m1) and (m3’) (noise matching): let pe = pε and k(x) = k·k′(x), k → 0. assume further (d1) and (d2) (balance from data): a0 = a1 in (5). then, if pθ(y|x, t) = p(y|x, t), conclusions 1) and 2) in proposition 2 hold with p replaced by pt in (g1’), and the domain of v becomes p := {pt(x) | p(t, x) > 0}. theorem 1 implies that, without a bpgs, we need to know or learn the distribution of the hidden noise ε to have pe = pε. proposition 2 and theorem 1 achieve recovery and identification in a complementary manner; the former starts from the prior via p0 = p1 and h0 = h1, while the latter starts from the decoder via a0 = a1 and pe = pε. we see that a0 = a1 acts as a kind of balance, because it replaces p0 = p1 in proposition 2. we show in sec. a a sufficient and necessary condition (d2’) on the data that ensures a0 = a1. note that the singularities due to k → 0 (e.g., λ → 0) cancel out in (5). see sec. c.4 for more on the complementarity between the two identifications.

estimation by β-intact-vae. prior as bpgs, posterior as pgs, and β as regularization strength. in sec.
3.2, we see that the existence of a bpgs (proposition 2) is preferable for identifying the true dgp up to an equivalent expression, while theorem 1 allows us to deal with a pgs by adding other conditions. in learning our model from data, we formally require (g1) and further expect that (g2) holds approximately; the latter is true when f∗t is injective and m0 ≈ m1 (mt(x) is an approximate bpgs). instead of the trivial regression µt(x) = e(y|x, t = t), we want to recover the approximate bpgs mt(x). this idea is common in practice; for example, in a real-world nutrition study (huang & chan, 2017), a reduction of 11 covariates recovers a 1-dimensional linear bpgs. we consider two ways to recover an approximate bpgs by a vae. one is to use a prior which does not depend on t, indicating a preference for bpgs: namely, we set λ0 = λ1, denote λ(x) := λt(x), and use pλ(z|x) as the prior in (3). the decoder and encoder are factorized gaussians:

pf,g(y|z, t) = n(y; ft(z), diag(gt(z))), qφ(z|x, y, t) = n(z; rt(x, y), diag(st(x, y))), (6)

where φ = (r, s). the other is to introduce a hyperparameter β into the elbo, as in β-vae (higgins et al., 2017). the modified elbo with β, up to an additive constant, is derived as

e_d{−β dkl(qφ ‖ pλ) − e_{z∼qφ}[(y − ft(z))²/2gt(z)] − e_{z∼qφ} log |gt(z)|}. (7)

for convenience, here and in lf in sec. 4.2, we omit the summation, as if y were univariate. the encoder qφ depends on t and can realize a pgs. with β, we control the trade-off between the first and second terms: the former is the divergence of the posterior from the balanced prior, and the latter is the reconstruction of the outcome. note that a larger β encourages the conditional balance z ⊥⊥ t | x on the posterior. by choosing β appropriately, e.g., by validation, the elbo can recover an approximate bpgs while fitting the outcome well.
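the per-unit training loss, i.e., the negative of (7), is β times the kl to the balanced prior, plus the g-weighted squared reconstruction error, plus the log-variance term. a minimal sketch, where the decoder functions and the toy numbers are illustrative placeholders rather than the paper's implementation:

```python
import numpy as np

def beta_elbo_loss(y, z_samples, f_t, g_t, kl_term, beta):
    """negative modified elbo (7) for one unit, up to an additive constant.
    f_t, g_t: decoder mean/variance as functions of z; kl_term: analytic
    KL(q_phi || p_lambda) between the encoder and the balanced prior."""
    recon = np.mean([(y - f_t(z)) ** 2 / (2.0 * g_t(z)) for z in z_samples])
    log_var = np.mean([np.log(g_t(z)) for z in z_samples])
    return beta * kl_term + recon + log_var

# with unit decoder variance the loss reduces to beta * KL + mean squared error / 2
loss = beta_elbo_loss(1.0, [0.0, 2.0], lambda z: z, lambda z: 1.0,
                      kl_term=0.1, beta=1.0)
```

raising β inflates the kl term relative to the reconstruction term, which is exactly the balance-versus-fit trade-off described above.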
in summary, we base the estimation on proposition 2 and bpgs as much as possible, but step into theorem 1 and the noise modeling required by pe = pε when necessary. note also that the parameters g and k, which model the outcome noise and express the uncertainty of the prior, respectively, are both learned by the elbo. this deviates from the theoretical conditions described in sec. 3.2, but it is more practical and yields better results in our experiments. see sec. c.5 for more ideas and connections behind the elbo. once the vae is learned² by the elbo, the estimate of the expected potential outcomes is given by

µ̂ˆt(x) = e_{q(z|x)} fˆt(z) = e_{d|x∼p(y,t|x)} e_{z∼qφ} fˆt(z), ˆt ∈ {0, 1}, (8)

where q(z|x) := e_{p(y,t|x)} qφ(z|x, y, t) is the aggregated posterior. we mainly consider the case where x is observed in the data, and the sample of (y, t) is taken from the data given x = x. when x is not in the data, we replace qφ with pλ in (8) (see sec. c.7 for details and sec. e for results). note that ˆt in (8) indicates a counterfactual assignment that may not be the same as the factual t = t in the data; that is, we set t = ˆt in the decoder. the assignment is not applied to the encoder, which is learned from the factual x, y, t (see also the explanation of εcf,t in sec. 4.2). the overall algorithm is: i) train the vae using (7), and ii) infer cate τ̂(x) = µ̂1(x) − µ̂0(x) by (8).

conditionally balanced representation learning. we formally justify our elbo (7) from the brl viewpoint. we show that the conditional brl via the kl (first) term of the elbo results from bounding a cate error; in particular, the error due to the imprecise recovery of jt in (g1’) is controlled by the elbo. previous works (shalit et al., 2017; lu et al., 2020) instead focus on unconditional balance and bound the pehe, which is marginalized over x. sec. 5.2 experimentally shows the advantage of our bounds and elbo. further, we connect the bounds to identification and consider noise modeling through gt(z).
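the inference step of the two-step algorithm, train by (7) then infer by (8), can be sketched as follows. `encoder_sample` and `data_at_x` are hypothetical helpers (a sampler for qφ and the factual (y, t) pairs observed at x), used only for illustration:

```python
import numpy as np

def cate_at_x(x, f, encoder_sample, data_at_x, n_mc=32, rng=None):
    """estimate tau_hat(x) = mu_hat_1(x) - mu_hat_0(x) following (8):
    z is drawn from the aggregated posterior q(z|x) by sampling factual
    (y, t) pairs at x, while the decoder f uses the counterfactual t_hat."""
    rng = rng or np.random.default_rng(0)
    mus = []
    for t_hat in (0, 1):                         # counterfactual assignment
        vals = []
        for y, t in data_at_x:                   # (y, t) stay factual in the encoder
            for _ in range(n_mc):
                z = encoder_sample(x, y, t, rng)
                vals.append(f(z, t_hat))         # t = t_hat only in the decoder
        mus.append(np.mean(vals))
    return mus[1] - mus[0]

# toy check: with decoder f(z, t) = z + t the cate is 1 for every x
tau = cate_at_x(0.0, lambda z, t: z + t,
                lambda x, y, t, rng: 0.0, [(0.0, 0), (1.0, 1)])
```

note how the counterfactual assignment enters only the decoder call, mirroring the text: the encoder always sees the factual (x, y, t).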
see sec. d.3 for detailed comparisons to previous works. in sec. e.4, we empirically validate our bounds; in particular, the bounds are more useful under weaker overlap. we first introduce the objective that we bound. using (8) to estimate cate, τ̂f(z) := f1(z) − f0(z) is marginalized over q(z|x). on the other hand, the true cate, given the covariate x or the score z, is

τ(x) = µ1(x) − µ0(x), τj(z) = j1(z) − j0(z), (9)

where jt is associated with an approximate bpgs pt (say, mt) as the target of recovery by our vae. accordingly, given x, the errors of the posterior cate, with or without knowing pt, are defined as

ε∗f(x) := e_{q(z|x)}(τ̂f(z) − τ(x))², εf(x) := e_{q(z|x)}(τ̂f(z) − τj(z))². (10)

we bound εf instead of ε∗f because the error between τ(x) and τj(z) is small: if the score recovery works well, then z ≈ p0(x) ≈ p1(x) in (9). we consider the error between τ̂f and τj below, and define the risks of outcome regression into which εf is decomposed. definition 3 (cate risks). let y(ˆt)|pˆt(x) ∼ p_{y(ˆt)|pˆt}(y|p) and qt(z|x) := q(z|x, t) = e_{p(y|x,t)} qφ. the potential outcome loss at (z, ˆt), the factual risk, and the counterfactual risk are

lf(z, ˆt) := e_{p_{y(ˆt)|pˆt}(y|p=z)}(y − fˆt(z))²/gˆt(z) = gˆt(z)⁻¹ ∫ (y − fˆt(z))² p_{y(ˆt)|pˆt}(y|z) dy; εf,t(x) := e_{qt(z|x)} lf(z, t); εcf,t(x) := e_{q1−t(z|x)} lf(z, t).

with y(t) involved, lf is a potential outcome loss on f, weighted by g. the factual and counterfactual counterparts, εf,t and εcf,t, are defined accordingly. in εf,t, a unit u = (x, y, t) is involved in the learning of qt(z|x), as well as in lf(z, t), since y(t) = y for the unit. in εcf,t, however, a unit u′ = (x, y′, 1 − t) is involved in q1−t(z|x), but not in lf(z, t), since y(t) ≠ y′ = y(1 − t). thus, the regression error (second) term in elbo (7) controls εf,t via factual data.
on the other hand, εcf,t is not estimable due to the unobservable y(1 − t), but it is bounded by εf,t plus m·d(x) in theorem 2 below, which, in turn, bounds εf by decomposing it into εf,t, εcf,t, and vy. footnote 2: as usual, we expect the variational inference and optimization procedure to be (near) optimal, that is, consistency of the vae. consistent estimation using the prior is a direct corollary of the consistent vae; see sec. c.6 for formal statements and proofs. under gaussian models, it is possible to prove the consistency of the posterior estimation, as shown in bonhomme & weidner (2021). theorem 2 (cate error bound). assume |lf(z, t)| ≤ m and |gt(z)| ≤ g; then

εf(x) ≤ 2[g(εf,0(x) + εf,1(x) + m·d(x)) − vy(x)],

where d(x) := Σt √(dkl(qt ‖ q1−t)/2) and vy(x) := e_{q(z|x)} e_{p_{y(t)|pt}(y|z)}(y − jt(z))². d(x) measures the imbalance between the qt(z|x) and is symmetric in t. correspondingly, the kl term in elbo (7) is symmetric in t and balances qt(z|x) by encouraging z ⊥⊥ t | x for the posterior. vy(x) reflects the intrinsic variance in the dgp and cannot be controlled. estimating g and m is nontrivial; instead, we rely on β in the elbo (7) to weight the terms. we do not need two hyperparameters, since g is implicitly controlled by the third term, a norm constraint, in the elbo.

experiments
learning guarantees for graph convolutional networks on the stochastic block model. wei lu, department of mathematics, brandeis university, waltham, ma 02453, usa. luwei@brandeis.edu. abstract: an abundance of neural network models and algorithms for diverse tasks on graphs have been developed in the past five years. however, very few provable guarantees have been available for the performance of graph neural network models. this state of affairs is in contrast with the steady progress on the theoretical underpinnings of traditional dense and convolutional neural networks. in this paper we present the first provable guarantees for one of the best-studied families of graph neural network models, graph convolutional networks (gcns), for semi-supervised community detection tasks. we show that with high probability over the initialization and training data, a gcn will efficiently learn to detect communities on graphs drawn from a stochastic block model. our proof relies on a fine-grained analysis of the training dynamics in order to overcome the complexity of a nonconvex optimization landscape with many poorly-performing local minima.

introduction. there is presently a large gap between what can be accomplished in practice using deep learning, and what can be satisfactorily explained and predicted by the theory of deep learning. nevertheless, the past several years have seen substantial developments in the theory of deep learning (ge et al., 2017; brutzkus & globerson, 2017; zhang et al., 2019a; goel et al., 2020; chen et al., 2020a). one factor contributing to the gap between the theory and practice of traditional nns is that real-world data sets tend to have complex structure that is difficult to capture with formal definitions. for example, popular image classification models are capable of memorizing arbitrary data (zhang et al., 2016), and yet they exhibit astonishing generalization performance on accurately-labeled natural images.
hence, any rigorous proof of the observed generalization performance of deep learning models on image classification tasks will necessarily require assumptions about the data that are sharp enough to separate random inputs from natural images. because of the difficulty of giving an adequate characterization of real-world data, much of the recent progress in deep learning theory has instead focused on proving results using very simple (e.g., gaussian) input distributions or in distribution-free settings (ge et al., 2017; brutzkus & globerson, 2017; zhang et al., 2019a; vempala & wilmes, 2019). compared to traditional feed-forward (dense, convolutional, etc.) nns, the theory of graph neural networks (gnns) is still in its infancy. on the other hand, it appears substantially easier to give plausible descriptions of the combinatorial structure of real-world graph data sets than, e.g., to characterize the distribution of natural images (drobyshevskiy & turdakov, 2019). we therefore believe that gnns offer a natural setting for developing provable guarantees that are able to capture the power of deep learning on real-world datasets. in this paper, we contribute to that goal by giving the first rigorous guarantees of efficient semi-supervised learning of stochastic block models via a gnn.

graph neural networks. many natural datasets for diverse machine learning problems have a graph structure, including social networks, molecular structures, and transit networks. in order to efficiently exploit such combinatorial structure, a variety of gnn models have been proposed, tuned for different kinds of tasks. a number of taxonomies of gnn models have been proposed (zhou et al., 2018; wu et al., 2021); one of the most essential differences between different gnn models is whether they are meant to label the graph as a whole, or to label individual components of the graph, particularly vertices.
from a theoretical perspective, the best understood tasks for gnns concern labeling the graph as a whole, for example for the task of classifying a graph by its isomorphism type (sato, 2020). in particular, it has been established that many gnn models are of comparable power to various versions of the weisfeiler-leman hierarchy1 (xu et al., 2018; morris et al., 2019). some progress has also been made on the theory of gnns for vertex-labeling tasks. recent works by sato et al. describe the representational power of certain gnn models for tasks such as computing minimum vertex covers (sato et al., 2019). garg et al. also give bounds on the representational power of gnn models, as well as using rademacher bounds to estimate the generalization ability of gnns (garg et al., 2020). our results concern the task of semi-supervised community detection. in this problem, each vertex belongs to one community, and some subset of the vertices are labeled according to their community membership. the task is to classify the community membership of the remaining vertices. this task has been one of the most intensively studied problems in the gnn literature, but there have not yet been any provable guarantees on the performance of proposed models. we study (spatial-based) graph convolutional models similar to the gcn model proposed in kipf & welling (2017). a single layer of such a model computes weights at each node by aggregating the weights at neighboring nodes and applying an activation function with learned parameters, e.g., a linear map followed by a relu. many variations on this theme, including various sophisticated training regimes, have been proposed (chen et al., 2017; gao et al., 2018; li et al., 2018; zhang et al., 2019b; chen et al., 2018), but no provable guarantees have been available for the performance of such models on natural data distributions, until the present work. 
main results. one motivation for gnns as a target for progress in deep learning theory is that there are well-studied graph distributions that plausibly capture some of the structure of real-world data (drobyshevskiy & turdakov, 2019). for example, even fairly simple preferential attachment models plausibly capture some of the essential structure of the web (kumar et al., 2000). other graph models naturally capture community structures, the simplest of which is the stochastic block model (sbm) (holland et al., 1983). a graph is sampled from an sbm by first partitioning vertices into communities (with fixed or random sizes). two vertices are connected with probability p if they belong to the same community and probability q if they belong to different communities. in this paper, we consider the case of an sbm with two equal-sized communities, in which vertices have labels 0 and 1 respectively. we denote the label of vertex x by ℓ(x) ∈ {0, 1}. the graphs are parameterized as sbm(n, p, q), where n is the number of vertices, p is the probability of an intra-community connection, and q is the probability of a cross-community connection. we allow n to vary (but will require it to be sufficiently large), while p and q are of the form p = a log³n/n and q = b log³n/n for some fixed constants a > b. in the semi-supervised setting, the community labels of some portion of the vertices are revealed; we assume the label of each vertex is revealed independently with probability λ. the input-layer feature at a vertex x is (0, 0) if its label is not revealed, (1, 0) if its label is revealed to be 0, and (0, 1) if its label is revealed to be 1. assumption 2.1 (sparse stochastic block model). the probabilities of intra- and cross-community connections are p = a log³n/n and q = b log³n/n, where a > b are constants. footnote 1: the weisfeiler-leman hierarchy is a polynomial-time iterative algorithm which provides a necessary but insufficient condition for graph isomorphism.
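the data model just described can be sampled in a few lines. a numpy sketch; the constants in the usage line are arbitrary small values chosen so that p, q < 1 for this n:

```python
import numpy as np

def sample_sbm(n, a, b, lam, rng=None):
    """sample a two-community sbm(n, p, q) with p = a log^3(n)/n and
    q = b log^3(n)/n, plus semi-supervised input features: each vertex's
    label is revealed independently with probability lam."""
    rng = rng or np.random.default_rng(0)
    labels = np.arange(n) % 2                    # two equal-sized communities
    p = a * np.log(n) ** 3 / n
    q = b * np.log(n) ** 3 / n
    same = labels[:, None] == labels[None, :]
    prob = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < prob, k=1)
    adj = (upper | upper.T).astype(int)          # symmetric, no self-loops
    revealed = rng.random(n) < lam
    feats = np.zeros((n, 2))
    feats[revealed & (labels == 0), 0] = 1       # (1, 0): revealed label 0
    feats[revealed & (labels == 1), 1] = 1       # (0, 1): revealed label 1
    return adj, feats, labels

adj, feats, labels = sample_sbm(n=400, a=0.3, b=0.05, lam=0.5)
```

sampling only the upper triangle and mirroring it keeps the graph undirected, and the feature matrix is exactly the input-layer encoding described above.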
we study the problem of recovering the communities from such graphs using gnn models. of course, recovering the communities of an sbm graph has been well studied, and its computational complexity is fully understood in most cases (abbe & sandon, 2015; kawamoto et al., 2019). sbm models are therefore a natural test case for understanding the power of gnn models for learning community structure, and experimental studies have been done in this setting (chen et al., 2020b; yadav et al., 2019). abbe et al. (2014) show a sharp threshold for exact community recovery: √(np/log n) − √(nq/log n) > √2. this threshold clearly holds in our case for sufficiently large n, since p = a log³n/n, q = b log³n/n, and a > b. the contribution here is not to learn the community models; rather, it is to show that (multi-layer) gcns solve the classification problem, which is very much not trivial (it is non-convex, and the training loss curve is empirically non-monotonic). our gnn models will be trained on a graph or several graphs generated by the sbm(n, p, q) model, and we seek to understand their accuracy on arbitrary sbm(n, p, q) graphs not necessarily in the training set but with the same parameters a, b determining p and q (with n allowed to vary). in particular, we study spatial-based graph convolutional models along the lines of the graph convolutional networks (gcn) introduced in kipf & welling (2017). each layer of the model computes a feature vector at every vertex of an input graph based on features of nearby vertices in the previous layer. a typical layer-wise update rule is of the form x(k+1) = ϕ(âx(k)w(k)), where • â is a suitably-normalized adjacency matrix of shape n × n, where n is the number of vertices; usually â includes self-loops. • x(k) gives the feature vector in the k-th layer at each vertex as a matrix of shape n × mk, where mk is the number of features in layer k. • ϕ is an activation function, such as the relu.
• w(k) are the trainable weights in the k-th layer, a matrix of shape mk × mk+1. in our version of this model, we define â by normalizing ˜a := a + i, where a is the adjacency matrix of a given graph and i is the identity matrix. for the given sbm(n, p, q), a randomly selected vertex has n(p + q)/2 neighbors in expectation, so â is obtained by normalizing each row of a + i by the average size of a neighborhood. since very deep gcn models seem to provide little empirical benefit (li et al., 2018), we use a single hidden layer with a softmax output layer. furthermore, we introduce a bias term b at the second layer, so the model has the form f(x, a) = softmax(âϕ(âxw(0))w(1) + b), where x is the input feature matrix of the graph and w(0), w(1), and b are trainable parameters. let h denote the number of hidden features, which equals the number of columns of w(0) and the number of rows of w(1). we define the accuracy of the model as the probability of correctly predicting the label of a single vertex in a randomly generated sbm(n, p, q) graph where the label of each vertex is revealed with probability λ. we can now state our main result. theorem 2.2. for any ϵ > 0 and δ > 0, given a gcn model with 1/δ ≤ h ≤ n hidden features and with parameters initialized independently from n(0, 1), if training graphs are sampled from sbm(n, p, q) with n ≥ max(ω(1/δ)) and the label of each vertex revealed with probability λ, and if the model is trained by coordinate descent for k = o(log log(1/ϵ)) epochs, then with probability ≥ 1 − δ the model achieves accuracy ≥ 1 − 4ϵ. remark. we treat λ as a constant, so it is omitted in the big-o and ω notation for the sampling and training complexity. we emphasize that the novelty of this theorem is not in learning two-class sbm models as such; this is a long-solved problem.
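the forward pass of this two-layer model is straightforward to write down. a numpy sketch, where parameter shapes follow the text and the normalization constant n(p + q)/2 is passed in as `deg_norm`:

```python
import numpy as np

def gcn_forward(adj, x, w0, w1, b, deg_norm):
    """two-layer gcn of the form
    f(X, A) = softmax( A_hat @ relu(A_hat @ X @ W0) @ W1 + b ),
    where A_hat = (A + I) / deg_norm and deg_norm = n(p + q)/2 is the
    average neighborhood size."""
    n = adj.shape[0]
    a_hat = (adj + np.eye(n)) / deg_norm
    hidden = np.maximum(a_hat @ x @ w0, 0.0)             # relu
    logits = a_hat @ hidden @ w1 + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)              # row-wise softmax

# shape check on a toy path graph: 4 vertices, 2 input features, h = 3
rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
out = gcn_forward(adj, rng.normal(size=(4, 2)), rng.normal(size=(2, 3)),
                  rng.normal(size=(3, 2)), rng.normal(size=2), deg_norm=2.0)
```

the output is one probability distribution over the two community labels per vertex; subtracting the row maximum before exponentiating is the usual numerically stable softmax.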
instead, this is the first proof of efficient learning for a gcn on semi-supervised community detection tasks using a natural family of random graph models.

preliminaries. in this section, we first introduce notation (a table of notations is also given in the appendix for the reader's convenience) and some interpretations; we then describe the structure of the paper. given a vertex y, denote the row of ˜ax corresponding to y as (ty0, ty1), so ty0 and ty1 give the numbers of neighbors of y (including perhaps y itself) with revealed labels in class 0 and class 1, respectively. then ϕ(αity0 + α′ity1), 1 ≤ i ≤ h, gives the h features of vertex y in the hidden layer, where ϕ is the relu function. the inner products of the y-th row of ϕ(˜axw(0)) with the columns of w(1) give weighted sums of the features of y: Σ_{i=1}^h βiϕ(αity0 + α′ity1) and Σ_{i=1}^h β′iϕ(αity0 + α′ity1). given a vertex x, the row of âϕ(âxw(0))w(1) corresponding to x is denoted by (f0(x), f1(x)), where f0(x) = Σ_{i=1}^h f i0(x) and f1(x) = Σ_{i=1}^h f i1(x), with f i0(x) := Σ_{y∈g} 1[y ∼ x] βiϕ(αity0 + α′ity1) and f i1(x) := Σ_{y∈g} 1[y ∼ x] β′iϕ(αity0 + α′ity1), and where 1[y ∼ x] equals 1 if y and x are connected and 0 otherwise. denote gj(x) := fj(x) + bj, j = 0, 1, where (g0(x), g1(x)) is the logit of the model corresponding to x, and denote ∆(x) := g0(x) − g1(x). in order to make correct predictions, we need ∆(x) > 0 when ℓ(x) = 0 and ∆(x) < 0 when ℓ(x) = 1. the bias term b is useful in our analysis because its derivative controls how imbalanced the current loss is between the classes. in training we consider the cross-entropy loss, denoted l, and have e[∂l/∂b0] = −(e[z|ℓ(x) = 0] − e[z|ℓ(x) = 1]) up to a positive factor, where z = exp(g_{1−ℓ(x)}(x)) / (exp(g0(x)) + exp(g1(x))). z can be regarded as a measure of wrong prediction: the numerator is the exponential of the output corresponding to the wrong label, and the denominator is a normalizer.
it is easy to see that z > 1/2 if the prediction is wrong and z < 1/2 if the prediction is correct. in order to have balanced performance in every epoch, we train the model by coordinate descent instead of conventional gradient descent: in each epoch, we first update b0 and b1 until |e[∂l/∂b0]| is smaller than some threshold, and then we update the other parameters. when |e[∂l/∂b0]| ≈ 0, the model's loss is balanced between the classes. in order to obtain a learning guarantee for the model, we need a high-probability estimate of ∆(x). in section 4, we show that ∆(x) concentrates at one of two values, denoted µ0 and µ1, for ℓ(x) = 0 and 1 respectively. the proof depends on the different parameter regimes of the hidden neurons. furthermore, to avoid overlap between the concentration ranges of ∆(x), we also show the separation between µ0 and µ1. in section 5, we analyze the dynamics of the hidden neurons throughout training to show that the concentration and separation improve at a controlled rate. based on this information, in section 6 we prove the main theorem. section 7 shows some experimental results to verify our theory, and the paper ends with future directions in section 8.

concentration and separation of output. in this section we show that ∆(x) is concentrated at µ0 and µ1 and that they are separated. the difference of the logits is ∆(x) = Σ_{i=1}^h ∆i(x) + b0 − b1, where ∆i(x) = f i0(x) − f i1(x) = Σ_{y∈g} 1[y ∼ x](βi − β′i)ϕ(αity0 + α′ity1). for brevity, we write ∆(x) as ∆ and ∆i(x) as ∆i. in order to estimate ∆, we need to estimate each ∆i, 1 ≤ i ≤ h. we denote the high-probability estimate of ∆ as µ0 and µ1 for ℓ(x) = 0 and 1, respectively. our fine-grained analysis of the dynamics of coordinate descent on gcns relies on a classification of neurons into three families based on the sign and scale of the parameters: “good type”, “bad type”, and “harmless type”.
the names also indicate whether the neuron contributes positively to the value of µ_0 − µ_1. we show that a "good type" neuron makes a positive contribution; the contribution of a "bad type" neuron is negative but lower bounded; a "harmless type" neuron's contribution is non-negative (see corollary a.4 and the remark following it). we specifically describe the parameter regime of each type in the following subsections, and we analyze the dynamics of these types throughout coordinate descent in the next section. first we give some definitions. definition 1. for 1 ≤ i ≤ h, we call (α_i, α'_i, β_i, β'_i) the i-th neuron of the model, where (α_i, α'_i)^⊤ is the i-th column of w^(0) and (β_i, β'_i) is the i-th row of w^(1). definition 2. we say that the i-th neuron is order-aligned if (α_i − α'_i)(β_i − β'_i) > 0; otherwise we say it is order-misaligned. classification of neuron parameter regimes we say the i-th neuron is of "good type" if it satisfies either (g1) or (g2) below. (there is also the symmetric case obtained by switching α_i with α'_i and β_i with β'_i; for brevity, we only consider the cases with α_i > α'_i. this applies to the "bad" and "harmless" types below as well.) neurons of this type are order-aligned, and either both α_i and α'_i are positive or the ratio between α_i and α'_i is large enough: (g1) α_i > α'_i > 0 with α_i/α'_i > p/q, and β_i > β'_i. we say the i-th neuron is of "bad type" if it satisfies either (b1), (b2) or (b3). neurons of this type are order-misaligned, and α_i, α'_i are either both positive or have opposite signs: (b1) α_i > α'_i > 0 with α_i/α'_i > p/q, and β_i < β'_i; (b2) and (b3) are variants in which the ratio α_i/α'_i lies in intermediate ranges (involving p/q and log^{1/3} n thresholds) and β_i < β'_i. we say that the i-th neuron is of "harmless type" if it satisfies either (h1) or (h2): (h1) the ratio α_i/α'_i lies in an intermediate range bounded above by 1, and β_i > β'_i; (h2) α_i ≤ 0 and α'_i ≤ 0. concentration and separation theorem 4.1. if the i-th neuron is of "good type" satisfying (g1) or of "bad type" satisfying (b1), then with high probability Δ_i concentrates around a fixed value, up to an error of (α_i − α'_i)(β_i − β'_i) · o(log^{−1/3} n), both for ℓ(x) = 0 and for ℓ(x) = 1 (with a different value for each label).
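Definition 2 is easy to state in code; the sketch below checks only the sign-level (order-aligned) part of the classification, since the ratio thresholds separating the full (g)/(b)/(h) regimes also depend on p, q and n:

```python
def order_aligned(alpha, alpha_p, beta, beta_p):
    # Definition 2: the neuron is order-aligned iff (alpha - alpha')*(beta - beta') > 0
    return (alpha - alpha_p) * (beta - beta_p) > 0

def coarse_family(alpha, alpha_p, beta, beta_p):
    # Coarse sign-level mapping (our simplification): good/harmless neurons are
    # order-aligned, bad neurons are order-misaligned.
    return "aligned (good/harmless side)" if order_aligned(alpha, alpha_p, beta, beta_p) \
        else "misaligned (bad side)"
```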
similar concentration results hold for neurons satisfying (g2), (b2) and (b3), and for neurons of "harmless type". we apply the method of bounded differences to show the concentration; the details are shown in the appendix. given the concentration of Δ_i for each type of neuron, we estimate the concentration of the output Δ = Σ_{i=1}^h Δ_i + b_0 − b_1. for the i-th neuron, we denote the high-probability estimate of Δ_i given in the statement of theorem 4.1 as m_0^i when ℓ(x) = 0 and m_1^i when ℓ(x) = 1. by the union bound, we have the following corollary. corollary 4.2. given a vertex x ∈ G with label unrevealed, we have p[|Δ − µ_j| ≤ δ | ℓ(x) = j] ≥ 1 − o(1), where µ_j = Σ_{i=1}^h m_j^i + b_0 − b_1 and δ = Σ_{i=1}^h |α_i − α'_i||β_i − β'_i| · o(log^{−1/3} n). for any ϵ > 0, we require the probability of concentration in (3) to be at least 1 − ϵ̃, where ϵ̃ = o(ϵ); if we choose ϵ̃ = ϵ², the concentration holds with probability at least 1 − o(ϵ²). our following analysis will be based on this condition. from theorem 4.1, we have the following result about the values of m_0^i and m_1^i. corollary 4.3. • if the i-th neuron is of "good type" and satisfies (g1), then m_0^i − m_1^i is on the order of ((p − q)/(p + q)) · |α_i − α'_i||β_i − β'_i| and is positive. • if the i-th neuron is of "bad type" and satisfies (b1), then m_0^i − m_1^i is on the order of −((p − q)/(p + q)) · |α_i − α'_i||β_i − β'_i|, i.e. negative but bounded below. • if the i-th neuron is of "harmless type" and satisfies (h1), then m_0^i − m_1^i is on the order of ((p − q)/(p + q)²) · |β_i − β'_i||p α_i + q α'_i| and is non-negative. similar results for neurons satisfying (g2), (b2), (b3) and (h2) are stated in the appendix, along with the proofs. remark. • as we can see from corollary 4.3, the value of m_0^i − m_1^i is positive for "good type" neurons, non-negative for "harmless type" neurons, and may be negative (but lower bounded) for "bad type" neurons. since positive values of m_0^i − m_1^i decrease the loss of the model, this explains the names for the types of neurons. • m_0^i − m_1^i is proportional to |α_i − α'_i| (and to |β_i − β'_i|). in the next section, we analyze the dynamics of the parameters α_i, α'_i, β_i, β'_i.
using our understanding of these dynamics, in theorem 6.2 we present a refined result about the separation of the output which depends only on the initialization of the parameters. denote c = Σ_{i=1}^h (m_0^i − m_1^i). by the two corollaries above, we have δ = o(|c|). the balanced loss guaranteed by the bias term and the coordinate descent scheme ensures that µ_0 = Ω(c) and −µ_1 = Ω(c). it then follows that if the loss is sufficiently small, both µ_0 and µ_1 have the correct sign, i.e. µ_0 > 0 > µ_1 (otherwise, due to the concentration of the output, the model makes wrong predictions and the loss is large). so we will eventually have δ = o(µ_0) and δ = o(|µ_1|). dynamics of parameters in this section, we describe the dynamics of each type of neuron through coordinate descent, which can be visualized in figure 1, in which the arrows indicate movements between types that can happen with non-negligible probability. figure 1: dynamics of hidden neurons (arrows among "good type", "harmless type" and "bad type"). there are two noteworthy points in this figure. first, "good type" parameters are preserved under coordinate descent. second, there are no arrows coming into "bad type" except from itself. these dynamics are proved by estimating the gradient of the loss function for each type of neuron. because of the non-linearity of the activation, we rely heavily on the concentration results proved above to get tight estimates; without these concentration results, even estimating the sign of the gradient seems difficult. the proofs and experiments about the dynamics of hidden neurons are deferred to the appendix. learning guarantee
a non-asymptotic analysis of oversmoothing in graph neural networks xinyi wu1, zhengdao chen2∗, william wang1, ali jadbabaie1 1laboratory for information and decision systems (lids), mit 2courant institute of mathematical sciences, new york university {xinyiwu,wwang314,jadbabai}@mit.edu, zc1216@nyu.edu abstract oversmoothing is a central challenge of building more powerful graph neural networks (gnns). while previous works have only demonstrated that oversmoothing is inevitable when the number of graph convolutions tends to infinity, in this paper, we precisely characterize the mechanism behind the phenomenon via a non-asymptotic analysis. specifically, we distinguish between two different effects when applying graph convolutions—an undesirable mixing effect that homogenizes node representations in different classes, and a desirable denoising effect that homogenizes node representations in the same class. by quantifying these two effects on random graphs sampled from the contextual stochastic block model (csbm), we show that oversmoothing happens once the mixing effect starts to dominate the denoising effect, and the number of layers required for this transition is o(log n/ log(log n)) for sufficiently dense graphs with n nodes. we also extend our analysis to study the effects of personalized pagerank (ppr), or equivalently, the effects of initial residual connections on oversmoothing. our results suggest that while ppr mitigates oversmoothing at deeper layers, ppr-based architectures still achieve their best performance at a shallow depth and are outperformed by the graph convolution approach on certain graphs. finally, we support our theoretical results with numerical experiments, which further suggest that the oversmoothing phenomenon observed in practice can be magnified by the difficulty of optimizing deep gnn models.
introduction graph neural networks (gnns) are a powerful framework for learning with graph-structured data (gori et al., 2005; scarselli et al., 2009; bruna et al., 2014; duvenaud et al., 2015; defferrard et al., 2016; battaglia et al., 2016; li et al., 2016). most gnn models are built by stacking graph convolutions or message-passing layers (gilmer et al., 2017), where the representation of each node is computed by recursively aggregating and transforming the representations of its neighboring nodes. the most representative and popular example is the graph convolutional network (gcn) (kipf & welling, 2017), which has demonstrated success in node classification, a primary graph task which asks for node labels and identifies community structures in real graphs. despite these achievements, the choice of depth for these gnn models remains an intriguing question. gnns often achieve optimal classification performance when networks are shallow. many widely used gnns such as the gcn are no deeper than 4 layers (kipf & welling, 2017; wu et al., 2019), and it has been observed that for deeper gnns, repeated message-passing makes node representations in different classes indistinguishable and leads to lower node classification accuracy—a phenomenon known as oversmoothing (kipf & welling, 2017; li et al., 2018; klicpera et al., 2019; wu et al., 2019; oono & suzuki, 2020; chen et al., 2020a;b; keriven, 2022). through the insight that graph convolutions can be regarded as low-pass filters on graph signals, prior studies have established that oversmoothing is inevitable when the number of layers in a gnn increases to infinity (li et al., 2018; oono & suzuki, 2020). however, these asymptotic analyses do not fully explain the rapid occurrence of oversmoothing when we increase the network depth, let alone the fact that for some datasets, having no graph convolution is even optimal (liu et al., 2021). 
these observations motivate the following key questions about oversmoothing in gnns: (∗now at google.) figure 1: stacking gnn layers increases both the mixing and denoising effects, which counteract each other. depending on the graph properties, either the denoising effect dominates the mixing effect, resulting in less difficulty classifying nodes (a), or the mixing effect dominates the denoising effect, resulting in more difficulty classifying nodes (b)—this is when oversmoothing starts to happen. why does oversmoothing happen at a relatively shallow depth? can we quantitatively model the effect of applying a finite number of graph convolutions and theoretically predict the "sweet spot" for the choice of depth? in this paper, we propose a non-asymptotic analysis framework to study the effects of graph convolutions and oversmoothing using the contextual stochastic block model (csbm) (deshpande et al., 2018). the csbm mimics the community structure of real graphs and enables us to evaluate the performance of linear gnns through the probabilistic model with ground-truth community labels. more importantly, as a generative model, the csbm gives us full control over the graph structure and allows us to analyze the effect of graph convolutions non-asymptotically. in particular, we distinguish between two counteracting effects of graph convolutions: • mixing effect (undesirable): homogenizing node representations in different classes; • denoising effect (desirable): homogenizing node representations in the same class. adding graph convolutions will increase both the mixing and denoising effects. as a result, oversmoothing happens not just because the mixing effect keeps accumulating as the depth increases, on which the asymptotic analyses are based (li et al., 2018; oono & suzuki, 2020), but rather because the mixing effect starts to dominate the denoising effect (see figure 1 for a schematic illustration).
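A toy numerical illustration of the two effects (our own construction, not from the paper): on two noisy cliques joined by a single edge, one mean-aggregation convolution shrinks within-class variance (denoising) but also shrinks the gap between class means (mixing).

```python
def convolve(adj, x):
    # one step of mean aggregation over the closed neighborhood, D^-1 (A + I) x
    out = []
    for u in range(len(x)):
        nbrs = [u] + [v for v in range(len(x)) if adj[u][v]]
        out.append(sum(x[v] for v in nbrs) / len(nbrs))
    return out

def variance(vals):
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

n = 8
adj = [[0] * n for _ in range(n)]
for u in range(4):                 # clique on class-0 nodes 0..3
    for v in range(4):
        if u != v:
            adj[u][v] = 1
for u in range(4, 8):              # clique on class-1 nodes 4..7
    for v in range(4, 8):
        if u != v:
            adj[u][v] = 1
adj[3][4] = adj[4][3] = 1          # single cross-community edge
x = [1.5, 0.5, 1.3, 0.7, -0.5, -1.5, -0.7, -1.3]  # label +/-1 plus noise
y = convolve(adj, x)
```

Stacking more such convolutions amplifies both effects, which is exactly the tradeoff quantified in the analysis.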
by quantifying both effects as a function of the model depth, we show that the turning point of the tradeoff between the two effects is o(log n/ log(log n )) for graphs with n nodes sampled from the csbm in sufficiently dense regimes. besides new theory, this paper also presents numerical results directly comparing theoretical predictions and empirical results. this comparison leads to new insights highlighting the fact that the oversmoothing phenomenon observed in practice is often a mixture of pure oversmoothing and difficulty of optimizing weights in deep gnn models. in addition, we apply our framework to analyze the effects of personalized pagerank (ppr) on oversmoothing. personalized propagation of neural predictions (ppnp) and its approximate variant (appnp) make use of ppr and its approximate variant, respectively, and were proposed as a solution to mitigate oversmoothing while retaining the ability to aggregate information from larger neighborhoods in the graph (klicpera et al., 2019). we show mathematically that ppr makes the model performance more robust to increasing number of layers by reducing the mixing effect at each layer, while it nonetheless reduces the desirable denoising effect at the same time. for graphs with a large size or strong community structure, the reduction of the denoising effect would be greater than the reduction of the mixing effect and thus ppnp and appnp would perform worse than the vanilla gnn on those graphs. our contributions are summarized as follows: • we show that adding graph convolutions strengthens the denoising effect while exacerbates the mixing effect. oversmoothing happens because the mixing effect dominates the denoising effect beyond a certain depth. for sufficiently dense csbm graphs with n nodes, the required number of layers for this to happen is o(log n/ log(log n )). • we apply our framework to rigorously characterize the effects of ppr on oversmoothing. 
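The PPR-based propagation analyzed above can be sketched as the standard iteration z ← (1 − α)·Âz + α·h; below, mean aggregation over the closed neighborhood stands in for the normalized adjacency, and the names and default parameters are ours:

```python
def appnp_propagate(adj, h, alpha=0.1, steps=10):
    """APPNP-style propagation: each step mixes the aggregated signal with the
    initial features h via the teleport probability alpha."""
    z = list(h)
    n = len(h)
    for _ in range(steps):
        new = []
        for u in range(n):
            nbrs = [u] + [v for v in range(n) if adj[u][v]]
            agg = sum(z[v] for v in nbrs) / len(nbrs)   # stand-in for A_hat z
            new.append((1 - alpha) * agg + alpha * h[u])
        z = new
    return z
```

With α = 0 this reduces to repeated graph convolution (full mixing in the limit); any α > 0 keeps part of the initial signal at every layer, which is the mechanism by which PPR reduces the per-layer mixing effect.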
we show that ppr reduces both the mixing effect and the denoising effect of message-passing and thus does not necessarily improve node classification performance. • we verify our theoretical results in experiments. through comparison between theory and experiments, we find that the difficulty of optimizing weights in deep gnn architectures often aggravates oversmoothing. additional related work oversmoothing problem in gnns oversmoothing is a well-known issue in deep gnns, and many techniques have been proposed to relieve it practically (xu et al., 2018; li et al., 2019; chen et al., 2020b; huang et al., 2020; zhao & akoglu, 2020). on the theory side, prior works have shown that as the model depth goes to infinity, the node representations within each connected component of the graph will converge to the same values (li et al., 2018; oono & suzuki, 2020). however, the early onset of oversmoothing renders it an important concern in practice, and it has not been satisfyingly explained by the previous asymptotic studies. our work addresses this gap by quantifying the effects of graph convolutions as a function of model depth and justifying why oversmoothing happens in shallow gnns. a recent study shared a similar insight of distinguishing between two competing effects of message-passing and showed the existence of an optimal number of layers for node prediction tasks on a latent space random graph model. but the result had no further quantification on the optimal depth and hence the oversmoothing phenomenon was still only characterized asymptotically (keriven, 2022). analysis of gnns on csbms stochastic block models (sbms) and their contextual counterparts have been widely used to study node classification problems (abbe, 2018; chen et al., 2019). recently there have been several works proposing to use csbms to theoretically analyze gnns for the node classification task. wei et al. 
(2022) used csbms to study the function of nonlinearity on the node classification performance, while fountoulakis et al. (2022) used csbms to study attention-based gnns. more relevantly, baranwal et al. (2021; 2022) showed the advantage of applying graph convolutions up to three times for node classification on csbm graphs. nonetheless, they only focused on the desirable denoising effect of graph convolution instead of its tradeoff with the undesirable mixing effect, and therefore did not explain the occurrence of oversmoothing. problem setting and main results we first introduce our theoretical analysis setup using the contextual stochastic block model (csbm), a random graph model with planted community structure (deshpande et al., 2018; baranwal et al., 2021; 2022; ma et al., 2022; wei et al., 2022; fountoulakis et al., 2022). we then present a set of theoretical results establishing bounds for the representation power of gnns in terms of the best-case node classification accuracy. the proofs of all the theorems and additional claims will be provided in the appendix. notations we represent an undirected graph with n nodes by g = (a, x), where a ∈ {0, 1}^{n×n} is the adjacency matrix and x ∈ r^n is the node feature vector. for nodes u, v ∈ [n], a_uv = 1 if and only if u and v are connected by an edge in g, and x_u ∈ r represents the node feature of u. we let 1_n denote the all-one vector of length n and d = diag(a 1_n) be the degree matrix of g. theoretical analysis framework
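A minimal sampler for a scalar-feature CSBM of the kind described above (a sketch; the particular parameterization p, q, mu, sigma is ours, not the paper's):

```python
import random

def sample_csbm(n, p, q, mu=1.0, sigma=1.0, seed=0):
    """Sample a CSBM-style graph: each node gets a +/-1 community label, edges
    appear with probability p within a community and q across communities, and
    the scalar feature is a noisy copy of the label."""
    rng = random.Random(seed)
    labels = [rng.choice([-1, 1]) for _ in range(n)]
    A = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(u + 1, n):
            prob = p if labels[u] == labels[v] else q
            if rng.random() < prob:
                A[u][v] = A[v][u] = 1
    X = [mu * labels[u] + sigma * rng.gauss(0.0, 1.0) for u in range(n)]
    return A, X, labels
```

For p > q the sampled graph is assortative, so within-community edges dominate, which is the regime where graph convolutions denoise.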
provably auditing ordinary least squares in low dimensions ankur moitra & dhruv rohatgi massachusetts institute of technology {moitra, drohatgi}@mit.edu abstract auditing the stability of a machine learning model to small changes in the training procedure is critical for engendering trust in practical applications. for example, a model should not be overly sensitive to removing a small fraction of its training data. however, algorithmically validating this property seems computationally challenging, even for the simplest of models: ordinary least squares (ols) linear regression. concretely, recent work defines the stability of a regression as the minimum number of samples that need to be removed so that rerunning the analysis overturns the conclusion (broderick et al., 2020), specifically meaning that the sign of a particular coefficient of the ols regressor changes. but the only known approach for estimating this metric, besides the obvious exponential-time algorithm, is a greedy heuristic that may produce severe overestimates and therefore cannot certify stability. we show that stability can be efficiently certified in the low-dimensional regime: when the number of covariates is a constant but the number of samples is large, there are polynomial-time algorithms for estimating (a fractional version of) stability, with provable approximation guarantees. applying our algorithms to the boston housing dataset, we exhibit regression analyses where our estimator outperforms the greedy heuristic, and can successfully certify stability even in the regime where a constant fraction of the samples are dropped. introduction a key facet of interpretability of machine learning models is understanding how different subsets of the training data influence the learned model and its predictions.
computing the influences of individual training points has been shown to be a useful tool for enhancing trust in the model (zhou et al., 2019), for tracing the origins of model bias (brunet et al., 2019), and for identifying mislabelled training data and other model debugging (koh & liang, 2017). modelling the influence of groups of training points has applications to measuring fairness (chen et al., 2018), vulnerability to contamination of multi-source training data (hayes & ohrimenko, 2018), and (most relevant to this paper) identification of unstable predictions (ilyas et al., 2022) and models (broderick et al., 2020). in a high-stakes machine learning application, it would likely be alarming if some data points were so influential that the removal of, say, 1% of the training data dramatically changed the model. an ideal, trustworthy machine learning pipeline therefore should include validation that this does not happen. but the obvious algorithm for checking if a model trained on n data points exhibits this instability would require computing the group influences of (n choose αn) different subsets of the data, which is computationally infeasible even for fairly small n. instead, current methods for estimating the stability of a model simply use the first-order approximation of group influence: namely, the sum of individual influences of data points in the group. with this approximation, vulnerability of a model to dropping αn data points is heuristically estimated by dropping the αn most individually influential data points (broderick et al., 2020; ilyas et al., 2022). this heuristic can be thought of as using "local" stability as a proxy for "global" stability, and it has found substantial anecdotal success in diagnosing unstable models.
unfortunately, for correlated groups of data points, the first-order approximation of the group influence is often an underestimate (koh et al., 2019), so large local stability does not actually certify that a model is provably stable to removing small subsets of data. in fact, stability certification is a challenging and open problem even in the simplest of models: linear regression via ordinary least squares (ols). concretely, given a regression dataset, a natural metric for the stability of the ols regressor is the minimum number of data points that need to be removed from the dataset to flip the sign of a particular coefficient of the regressor (e.g., in causal inference, the coefficient measuring the treatment effect). recent work has used the local stability heuristic to diagnose unstable ols regressions in several prominent economics studies (broderick et al., 2020), identifying examples where even a statistically significant conclusion can be overturned by removing less than 1% of the data points. however, the converse question of validating stable conclusions remains unaddressed: given a regression dataset, can we efficiently certify non-trivial lower bounds on the stability of the ols regressor? our work takes steps towards addressing this question, via the following contributions: • we introduce a natural fractional relaxation of the above notion of ols stability, where we allow removing fractions of data points, and seek to minimize the total removed weight. we call this finite-sample stability, and henceforth refer to the prior notion as “integral” stability. • we develop approximation algorithms for the finite-sample stability, with (a) provable guarantees under reasonable anti-concentration assumptions on the dataset, and (b) running time polynomial in the size of the dataset, so long as the dimension of the data is a constant (in contrast, the naive algorithm is exponential in the size of the dataset). 
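The local-influence heuristic described above can be made concrete in a toy setting (our simplification: one covariate and regression through the origin, so the OLS coefficient has a closed form):

```python
def slope(pts):
    # OLS through the origin: beta = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in pts) / sum(x * x for x, y in pts)

def greedy_flip_count(pts, max_drop=None):
    """Broderick et al.-style greedy heuristic (sketch): repeatedly drop the
    sample whose removal moves the coefficient furthest toward a sign flip.
    Returns a heuristic upper bound on the (integral) stability, or None."""
    pts = list(pts)
    max_drop = len(pts) - 1 if max_drop is None else max_drop
    sign0 = 1 if slope(pts) > 0 else -1
    for k in range(1, max_drop + 1):
        best = min(range(len(pts)),
                   key=lambda i: sign0 * slope(pts[:i] + pts[i + 1:]))
        pts.pop(best)
        if sign0 * slope(pts) <= 0:
            return k
    return None   # no flip found within the budget
```

As the text notes, this yields only an upper bound on stability: a small return value diagnoses instability, but failure to flip does not certify that no jointly influential subset exists.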
moreover, we prove that (at least for exact algorithms) exponential dependence of the running time on the dimension is unavoidable under standard complexity assumptions. • we use modifications of our algorithms to compute assumption-free upper and lower bounds on the finite-sample stability of several simple synthetic and real datasets, achieving tighter upper bounds than prior work and the first non-trivial lower bounds, i.e. certifications that the ols regressor is stable. why define stability this way? the definition of integral stability was introduced in (broderick et al., 2020), along with several variants (e.g. smallest perturbation which causes the first coordinate to lose significance). we choose the definition based on the sign of the first coordinate, because it has clear practical interpretation—does the first covariate positively or negatively affect the response?— which does not depend on choice of additional parameters such as significance level. we study the fractional relaxation so that the stability is defined by a continuous optimization problem. note that certifying a lower bound on fractional stability immediately certifies a lower bound on the integral stability; we will see later (remark 3.1) that a near-converse holds in low dimensions. why is low-dimensional regression important? given that much of machine learning happens in high-dimensional settings, where the number of covariates can even be larger than the number of datapoints, it is natural to wonder why low-dimensional settings are still important. first, in application areas such as econometrics, linear regressions with as few as two to four covariates are very common (britto et al., 2022; bianchi & bigio, 2022; hopenhayn et al., 2022), often serving as proofs-of-concept for more complex models. second, even in settings where the number of covariates is larger, it is often the expectation that few covariates are relevant. 
in such applications, analysis often consists of a variable selection step followed by regression on a much-reduced set of covariates (cai & wang, 2011). in all these settings, understanding the stability of an estimator is important, and our work gives some of the first provable guarantees that avoid making strong distributional assumptions. moreover, our lower bounds show that certifying stability of truly high-dimensional models, even linear ones, is intractable. formal problem statement we are given a deterministic and arbitrary set of n samples (x_i, y_i)_{i=1}^n, where each x_i is a vector of d real-valued covariates, and each y_i is a real-valued response. we are interested in a single coefficient of the ols regressor (without loss of generality, the first coordinate): in an application, the first covariate may be the treatment and the rest may be controls. the sign of this coefficient is important because it estimates whether the treatment has a positive or negative effect. thus, we want to determine if it can be changed by dropping a few samples from the regression. formally, we consider the fractional relaxation, where we allow dropping fractions of samples: definition 1.1. fix (x_i, y_i)_{i=1}^n with x_1, ..., x_n ∈ r^d and y_1, ..., y_n ∈ r. for any w ∈ [0, 1]^n, the weight-w ols solution set of (x_i, y_i)_{i=1}^n is ols(x, y, w) := arg min_{β∈r^d} Σ_{i=1}^n w_i(⟨x_i, β⟩ − y_i)². the finite-sample stability of (x_i, y_i)_{i=1}^n is stability(x, y) := inf_{w∈[0,1]^n, β∈r^d} { n − ∥w∥_1 : β_1 = 0 and β ∈ ols(x, y, w) }. this is the minimum number of samples (in a fractional sense) which need to be removed to zero out the first coordinate of the ols regressor. if the ols solution set contains multiple regressors, then it suffices if any regressor β in the solution set has β_1 = 0. our algorithmic goal is to compute stability(x, y), or at least to approximate stability(x, y) up to an additive ϵn error. results by brute-force search, the (integral) stability can be computed in time 2^n · poly(n).
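The weight-w OLS solution in Definition 1.1 can be computed from the weighted normal equations (X^⊤ diag(w) X)β = X^⊤ diag(w) y; here is a small illustrative helper for d = 2 (not the paper's code, and assuming the weighted Gram matrix is invertible):

```python
def weighted_ols_2d(xs, ys, w):
    """Solve the 2x2 weighted normal equations by Cramer's rule.
    xs: list of (x1, x2) covariate pairs; ys: responses; w: weights in [0, 1]."""
    a = b = c = r0 = r1 = 0.0
    for (x1, x2), y, wi in zip(xs, ys, w):
        a += wi * x1 * x1
        b += wi * x1 * x2
        c += wi * x2 * x2
        r0 += wi * x1 * y
        r1 += wi * x2 * y
    det = a * c - b * b          # assumed nonzero (unique OLS solution)
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)
```

Setting a sample's weight to 0 removes it entirely; fractional weights interpolate, which is exactly the relaxation that makes the stability a continuous optimization problem.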
however, because the complexity is exponential in the number of samples, this is computationally infeasible even when the dimension d of the data is low, which is a common situation in many scientific applications. similarly, the fractional stability (definition 1.1) is the solution to a non-convex optimization problem in more than n variables, which seems no simpler. can we still hope for a polynomial-time algorithm in constant dimensions? we show that the answer is yes. theorem 1.2. there is an n^{O(d³)}-time algorithm which, given n arbitrary samples (x_i, y_i)_{i=1}^n with x_1, ..., x_n ∈ r^d and y_1, ..., y_n ∈ r, and given k ≥ 0, decides whether stability(x, y) ≤ k. we also show that the exponential dependence on dimension d is necessary under standard complexity assumptions: theorem 1.3. under the exponential time hypothesis, there is no n^{o(d)}-time algorithm which, given (x_i, y_i)_{i=1}^n and k ≥ 0, decides whether stability(x, y) ≤ k. this theorem in particular rules out fixed-parameter tractability, i.e. algorithms with time complexity f(d) · poly(n). however, it only applies to exact algorithms. in practice, it is unlikely to matter whether stability(x, y) = 0.01n or stability(x, y) = 0.02n; in both cases, the conclusion is sensitive to dropping a very small fraction of the data. this motivates our next two algorithmic results on ϵn-additive approximation of the stability (where we think of ϵ > 0 as a constant). first, we make a mild anti-concentration assumption, under which the stability can be ϵn-approximated in time roughly n^{d+O(1)}. while still not fixed-parameter tractable, this algorithm can now be run on moderate-sized problems in low dimensions, unlike the algorithm in theorem 1.2. assumption a. let ϵ, δ > 0. we say that samples (x_i, y_i)_{i=1}^n satisfy (ϵ, δ)-anti-concentration if for every β ∈ r^d it holds that |{ i ∈ [n] : |⟨x_i, β⟩ − y_i| < δ ∥x(β − β^(0))∥ / √n }| ≤ ϵn, where x : n × d is the matrix with rows x_1, ..., x_n, and β^(0) ∈ ols(x, y, 1) is any unweighted ols regressor of y against x. see appendix f.1 for discussion of when assumption a holds. under this assumption, we present an o(ϵn)-approximation algorithm: theorem 1.4. for any ϵ, δ, η > 0, there is an algorithm partitionandapprox, with time complexity n^{d+O(1)} for constant ϵ and η, which, given ϵ, δ, η, and samples (x_i, y_i)_{i=1}^n satisfying (ϵ, δ)-anti-concentration, returns an estimate ŝ such that with probability at least 1 − η, |ŝ − stability(x, y)| ≤ 12ϵn + 1. in fact, partitionandapprox can also detect when assumption a fails (see theorem d.6 for a precise statement), so it can be used to compute unconditional lower bounds on stability with high probability (where the lower bound is provably tight if the data satisfies anti-concentration). moreover, as discussed in appendix f.1, the required anti-concentration is very mild: if ϵ, η > 0 are constants, the algorithm has time complexity n^{d+o(1)} so long as the samples satisfy (ϵ, exp(−ω(n)))-anti-concentration, which is true for arbitrary smoothed data. finally, unlike the exact algorithm, partitionandapprox avoids heavy algorithmic machinery; it only requires solving linear programs. fixed-parameter tractability? our final result is that ϵn-approximation of the stability is in fact fixed-parameter tractable, under a stronger anti-concentration assumption. assumption b. let ϵ, δ > 0. we say that samples (x_i, y_i)_{i=1}^n satisfy (ϵ, δ)-strong anti-concentration if for every β ∈ r^{d+1} it holds that |{ i ∈ [n] : |⟨x̄_i, β⟩| < δ ∥x̄ β∥ / √n }| ≤ ϵn, where x̄ : n × (d + 1) is the matrix with columns (x^⊤)_1, ..., (x^⊤)_d, y. this assumption holds with constant ϵ, δ > 0 under certain distributional assumptions on (x_i, y_i)_{i=1}^n, e.g. centered gaussian mixtures with uniformly bounded condition number (appendix f.2). theorem 1.5. for any ϵ, δ > 0, there is a (√d/(ϵδ²))^d · poly(n)-time algorithm netapprox which, given ϵ, δ, and samples (x_i, y_i)_{i=1}^n satisfying (ϵ, δ)-strong anti-concentration, returns an estimate ŝ satisfying stability(x, y) ≤ ŝ ≤ stability(x, y) + 3ϵn + 1. moreover, stability(x, y) ≤ ŝ holds for arbitrary (x_i, y_i)_{i=1}^n. extensions. another model, frequently used in causal inference and econometrics, is instrumental variables (iv) linear regression. when the noise η in a hypothesized causal relationship y = ⟨x, β*⟩ + η is believed to be endogenous (i.e. correlated with x), a common approach (sargan, 1958; angrist et al., 1996; card, 2001) is to find a p-dimensional variable z (the instrument) for which domain knowledge suggests that e[η|z] = 0. positing that β* is identified by the moment condition e[z(y − ⟨x, β⟩)] = 0, the weight-w iv estimator set given samples (x_i, y_i, z_i)_{i=1}^n is then iv(x, y, z, w) = { β ∈ r^d : z^⊤(w ⋆ (xβ − y)) = 0 }, where a ⋆ b denotes the elementwise product, and z : n × p and x : n × d are the matrices of instruments and covariates respectively. stability can be defined as in definition 1.1. although for simplicity we state all of our results for ols (i.e. the special case z = x), it can be seen that theorem 1.2 and theorem 1.5 both extend directly to the iv regression setting. see appendix g for further discussion. experiments. we implement modifications of netapprox and partitionandapprox which give unconditional, exact upper and lower bounds on stability, respectively. we use these algorithms to obtain tight data-dependent bounds on stability of isotropic gaussian datasets for a broad range of signal-to-noise ratios, and we demonstrate heterogeneous synthetic datasets where our algorithms' upper bounds are an order of magnitude better than upper bounds obtained by the prior heuristic. on the boston housing dataset (harrison jr & rubinfeld, 1978), we regress house values against all pairs of features. for the majority of these regressions, we bound the stability within a factor of two.
on the one hand, we detect many sensitive conclusions (including some which the greedy heuristic claims are stable); on the other hand, we certify that some conclusions are stable to dropping as much as half the dataset. organization in section 2 we review related work. in section 3 we collect notation and formulas that will be useful later. in section 4 we sketch the intuition behind our algorithmic results. section 5 covers our experiments. in appendices b, c, d, and e we prove theorems 1.2, 1.3, 1.4, and 1.5 respectively. related work there is a rich literature on topics related to finite-sample stability, including sensitivity analysis and robustness to distribution shift and contamination. due to space constraints, here we only discuss the works most relevant to ours, and we postpone broader discussion to appendix a. most directly related is the prior work on heuristics for the (integral) stability (broderick et al., 2020; kuschnig et al., 2021). the heuristic given by broderick et al. (2020) (to approximate the most-influential k samples) is simply the local approximation: compute the local influence of each sample at w = 1, sort the samples from largest to smallest influence, and output the top k samples. subsequent work (kuschnig et al., 2021) refines this heuristic by recomputing the influences after removing each sample, which alleviates issues such as masking (chatterjee & hadi, 1986). but this is still just a greedy heuristic, and it may fail when samples are jointly but not individually influential. except under the strong assumption that the sample covariance remains nearly constant when we remove any ϵn samples (see theorem 1 in broderick et al. (2020), which relies on condition 1 in giordano et al. (2019)), the local influence approach can upper bound the finite-sample stability but cannot provably lower bound it. in fact, in section 5 we provide examples where the greedy heuristic of kuschnig et al. 
(2021) is very inaccurate due to instability in the sample covariance. closely related to finite-sample stability, the s-value (gupta & rothenhäusler, 2021) is the minimum kullback–leibler divergence d(p ∥ p0) over all distributions p for which the conclusion is null, where p0 is the empirical distribution of the samples. unfortunately, while the s-value is an interesting and well-motivated metric, computing the s-value for ols estimation appears to be computationally intractable, and the algorithms given by gupta & rothenhäusler (2021) lack provable guarantees. preliminaries for vectors u, v ∈ r^m, we let u ⋆ v denote the elementwise product (u ⋆ v)_i = u_i v_i. throughout the paper, we will frequently use the closed-form expression for the (weighted) ols solution set ols(x, y, w) = {β ∈ r^d : x^t (w ⋆ (xβ − y)) = 0}, where x : n × d is the matrix with rows x_1, . . . , x_n. in particular, setting λ = β_{2:d}, this means that the finite-sample stability can be rewritten as stability(x, y) = inf {n − ∥w∥_1 : w ∈ [0, 1]^n, λ ∈ r^{d−1}, x^t (w ⋆ (x̃λ − y)) = 0} (1) where (here and throughout the paper) x̃ : n × (d − 1) is the matrix with columns (x^t)_2, . . . , (x^t)_d. remark 3.1. as previously mentioned, the finite-sample stability always lower bounds the integral stability (the minimum number of samples that need to be removed to make the first coordinate of the regressor change sign), by continuity of the ols solution set in w. additionally, it can be seen from equation 1 that a partial converse holds in low dimensions. for any feasible (w, λ), the set of w′ such that (a) (w′, λ) is feasible, and (b) ∥w∥_1 = ∥w′∥_1, has the form [0, 1]^n ∩ v for some subspace v ⊆ r^n of codimension at most d + 1. thus, there is some w′ ∈ [0, 1]^n ∩ v with at most d + 1 non-integral weights. if stability(x, y) = αn, then w′ witnesses that the first coordinate of the ols regressor can be zeroed out by downweighting at most αn + d + 1 samples. overview of algorithms an exact algorithm.
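for concreteness, a weighted ols solution can be computed by solving the weighted normal equations; the sketch below (numpy, function name ours) verifies the defining stationarity condition x^t (w ⋆ (xβ − y)) = 0 directly.

```python
import numpy as np

def weighted_ols(X, y, w):
    """solve the weighted normal equations  X^T diag(w) X beta = X^T diag(w) y,
    i.e. find beta with  X^T (w * (X beta - y)) = 0."""
    Xw = X * w[:, None]                     # rows of X scaled by the weights
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

with w equal to the all-ones vector this recovers ordinary least squares; a feasible pair (w, λ) in formulation (1) is exactly a weight vector for which (0, λ) lies in this weighted solution set.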
our main tool for theorem 1.2 is the following special case of an important result due to renegar (1992) on solving quantified polynomial systems of inequalities: theorem 4.1 (renegar (1992)). given an expression ∀x ∈ r^{n1} : ∃y ∈ r^{n2} : p(x, y), where p(x, y) is a system of m polynomial inequalities with maximum degree d, the truth value of the expression can be decided in time (md)^{O(n1·n2)}. (this is in the real number model; a similar statement can be made in the bit complexity model.) roughly, for a constant number of quantifier alternations, a quantified polynomial system can be decided in time exponential in the number of variables. unfortunately, a naive formulation of the expression stability(x, y) ≤ k, by direct evaluation of equation 1, has n + d − 1 variables: Σ_i w_i ≥ n − k and x^t (w ⋆ (x̃λ − y)) = 0. intuitively, it may not be necessary to search over all w ∈ [0, 1]^n; for fixed λ, the maximum-weight w is described by a simple linear program. formally, the linear program can be rewritten (lemma b.1) by the separating hyperplane theorem, so that the overall expression becomes: ∃λ ∈ r^{d−1} : ∀u ∈ r^d : ∃w ∈ [0, 1]^n : ∥w∥_1 ≥ n − k and Σ_i w_i(⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩ ≥ 0. now, for fixed λ and u, the maximum-weight w has a very simple description: it only depends on the relative ordering of the n summands (⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩. by classical results on connected components of varieties, since the summands have only 2d − 1 variables, the number of achievable orderings is only n^{O(d)} rather than n!, and the orderings can be enumerated efficiently (milnor, 1964; renegar, 1992). this allows the quantifier over w ∈ [0, 1]^n to be replaced by a quantifier over the n^{O(d)} achievable orderings, after which theorem 4.1 implies that the overall expression can be decided in time n^{O(d^3)}. see appendix b for details. approximation via partitioning. next, we show how to avoid the heavy algorithmic machinery used in the previous result.
for theorem 1.4, the strategy of partitionandapprox is to partition the ols solution space r^{d−1} into roughly n^d regions, such that if we restrict λ to any one region, the bilinear program which defines the stability can be approximated by a linear program. concretely, we start by writing the formulation (1) as n − stability(x, y) = sup {Σ_{i∈[n]} w_i : w ∈ [0, 1]^n, λ ∈ r^{d−1}, x^t (w ⋆ (x̃λ − y)) = 0}. (3) this has a nonlinear (and nonconvex) constraint due to the pointwise product between w and the residual vector x̃λ − y. thus, we can introduce the change of variables g_i = w_i(⟨x̃_i, λ⟩ − y_i) for i ∈ [n]. this causes two issues. first, the constraint 0 ≤ w_i ≤ 1 becomes 0 ≤ g_i/(⟨x̃_i, λ⟩ − y_i) ≤ 1, which is no longer linear. to fix this, suppose that instead of maximizing over all λ ∈ r^{d−1}, we maximize over a region r ⊆ r^{d−1} where each residual ⟨x̃_i, λ⟩ − y_i has constant sign σ_i. the constraint 0 ≤ w_i ≤ 1 then becomes one of two linear constraints, depending on σ_i. let v_r denote the value of program 3 restricted to λ ∈ r. then with the change of variables, we have that v_r = sup_{g ∈ r^n, λ ∈ r} Σ_{i∈[n]} g_i/(⟨x̃_i, λ⟩ − y_i) subject to x^t g = 0; 0 ≤ g_i ≤ ⟨x̃_i, λ⟩ − y_i for all i ∈ [n] with σ_i = 1; and ⟨x̃_i, λ⟩ − y_i ≤ g_i ≤ 0 for all i ∈ [n] with σ_i = −1, with the convention that 0/0 = 1. now the constraints are linear. unfortunately (and this is the second issue), the objective is no longer linear. the solution is to partition the region r further: if the region were small enough that every residual ⟨x̃_i, λ⟩ − y_i had at most (1 ± ϵ)-multiplicative variation, then the objective could be approximated to within 1 ± ϵ by a linear objective. how many regions do we need? let m = ∥x̃λ̂ − y∥_2 be the unweighted ols error, where λ̂ is the unweighted ols solution. if all the residuals were bounded between δm/√n and m in magnitude, for all λ ∈ r^{d−1}, then the regions could be demarcated by o(n log_{1+ϵ}(n/δ)) hyperplanes, for a total of o(n log_{1+ϵ}(n/δ))^d regions. of course, for some λ, some residuals may be very small or very large.
but (ϵ, δ)-anti-concentration implies that for every λ, at most ϵn residuals are very small, and it can be shown that if λ is a weighted ols solution, the total weight on samples with large residuals is low. thus, for any region, we can exclude from the objective function the samples with residuals that are not well-approximated within the region, and this only affects the objective by o(ϵn). this gives an algorithm with time complexity (nϵ^{−1} log(1/δ))^{d+O(1)}. to achieve the time complexity in theorem 1.4, where the log(n/δ) is additive rather than multiplicative, we use subsampling. every residual is still partitioned by sign, but we multiplicatively partition only a random Õ(d/ϵ)-size subset of the residuals. intuitively, most residuals will still be well-approximated in any given region. this can roughly be formalized via a vc dimension argument, albeit with some technical complications. see appendix d for details and appendix j for formal pseudocode of the algorithm. net-based approximation. the algorithm netapprox for theorem 1.5 is intuitively the simplest. for any fixed λ ∈ r^{d−1}, program 1 reduces to a linear program with value denoted s(λ). thus, an obvious approach is to construct a net n ⊆ r^{d−1} in some appropriate metric, and compute min_{λ∈n} s(λ). this always upper bounds the stability, but to prove that it's an approximate lower bound, we need s(λ) to be lipschitz under the metric. the right metric turns out to be d(λ, λ′) = ∥ (x̃λ − y)/∥x̃λ − y∥_2 − (x̃λ′ − y)/∥x̃λ′ − y∥_2 ∥_2. under this metric, r^{d−1} essentially embeds into a d-dimensional subspace of the euclidean sphere s^{n−1}, and therefore has a γ-net of size o(1/γ)^d. why is s(λ) lipschitz under d? first, if x̃λ − y equals x̃λ′ − y up to rescaling, then it can be seen from program 3 that s(λ) = s(λ′). more generally, if the residuals are close up to rescaling, we apply the dual formulation of s(λ) from expression (2): n − s(λ) = inf_{u∈r^d} sup {∥w∥_1 : w ∈ [0, 1]^n, Σ_i w_i(⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩ ≥ 0}.
for any u, the optimal w for λ and u can be rounded to some feasible w′ for λ′ and u without decreasing the ℓ1 norm too much, under strong anti-concentration. this shows that s(λ) and s(λ′) are close. see appendix e for details and appendix j for formal pseudocode of netapprox. experiments in this section, we apply (modifications of) netapprox and partitionandapprox to several datasets in two and three dimensions. due to space constraints, we defer detailed discussion of the algorithmic modifications to appendix i.1; we simply note that the modifications are made to improve practical efficiency and usability. most saliently, the modified algorithms do not rely on assumptions a and b: the modified netapprox provides an unconditional upper bound on stability (referred to henceforth as “net upper bound”), and the modified partitionandapprox provides an unconditional lower bound (“lp lower bound”). as a result, we are able to experimentally verify that our algorithms provide tight (and unconditional) bounds on stability for a variety of datasets. as a baseline upper bound, we implement the greedy heuristic of kuschnig et al. (2021) which refines broderick et al. (2020). we are not aware of any prior work on lower bounding stability, so we implement a simplification of our full lower bound algorithm as a baseline. see appendix i for implementation details, hyperparameter choices, and discussion of error bars. synthetic data heterogeneous data. we start with a simple two-dimensional dataset with two disparate subpopulations, where the greedy baseline fails to estimate the stability but our algorithms give tight estimates. for parameters n, k, and σ, we generate k independent samples (xi, yi), where xi ∈ r2 has independent coordinates xi1 ∼ n (−1, 0.01) and xi2 ∼ n (0, 1), and yi = xi1. then, we generate n − k independent samples (xi, yi) where xi1 = 0 and xi2 ∼ n (0, 1), and y ∼ n (0, 1). 
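a sketch of the heterogeneous data generator just described (numpy, function name ours; n(−1, 0.01) is read as variance 0.01, i.e. standard deviation 0.1):

```python
import numpy as np

def heterogeneous_dataset(n, k, rng):
    """two subpopulations: k samples with x1 ~ N(-1, 0.01), x2 ~ N(0, 1),
    y = x1 exactly; and n - k samples with x1 = 0, x2 ~ N(0, 1), y ~ N(0, 1)."""
    x1 = np.concatenate([rng.normal(-1.0, 0.1, k), np.zeros(n - k)])
    x2 = rng.normal(0.0, 1.0, n)
    y = np.concatenate([x1[:k], rng.normal(0.0, 1.0, n - k)])
    return np.column_stack([x1, x2]), y
```

on the full dataset the first coefficient is close to 1 (the first subpopulation is nearly interpolated), while after dropping the first subpopulation the first column is identically zero, so zero is a valid value for the first coefficient; this is why the stability is at most k.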
it always suffices to remove the first subpopulation, so the stability is at most k. however, the first subpopulation has small individual influences, because the ols regressor on the whole dataset can nearly interpolate the first subpopulation. thus, we expect that the greedy algorithm will fail to notice the first subpopulation, and therefore remove far more than k samples. indeed, this is what happens. for n = 1000 and k varying from 10 to 500, we compare our net upper bound and lp lower bound with the baselines. as seen in figure 1, our methods are always better than the baselines, and certifiably approximate the stability within a small constant factor. in the regime where k is small, our upper bound outperforms the greedy upper bound by a factor of 30.
[figure 1: heterogeneous data. figure 2: isotropic gaussian data; (a) d = 2 and n = 1000, median of 10 trials for each noise level and algorithm; (b) d = 3 and n = 500, one trial for each noise level and algorithm.]
covariance shift. in the previous example, removing k samples caused a pathological change in the sample covariance; it became singular. however, even modest, constant-factor instability in the sample covariance can cause the greedy algorithm to fail; see appendix i.5 for details. isotropic gaussian data. instability can arise even in homogeneous data, as a result of a low signal-to-noise ratio (broderick et al., 2020). but when the noise level is low, can we certify stability? for a broad range of noise levels, we experimentally show that this is the case. specifically, for d ∈ {2, 3} and noise parameter σ ranging from 0.1 to 10, we generate n independent samples (x_i, y_i)_{i=1}^n where x_i ∼ n(0, i_d) and y_i = ⟨x_i, 1⟩ + n(0, σ²). for d = 2 and n = 1000 (figure 2a), our lp lower bound is nearly tight with the upper bounds, particularly as the noise level increases (in comparison, the baseline lower bound quickly degenerates towards zero).
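the isotropic gaussian data can be generated in a few lines (numpy, function name ours); at low noise levels the ols regressor is close to the all-ones vector, which is the regime where certifying stability is most interesting.

```python
import numpy as np

def isotropic_gaussian_dataset(n, d, sigma, rng):
    """x_i ~ N(0, I_d), y_i = <x_i, 1> + N(0, sigma^2)."""
    X = rng.normal(size=(n, d))
    y = X.sum(axis=1) + rng.normal(0.0, sigma, n)
    return X, y
```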
for d = 3 and n = 500 (figure 2b), the bounds are looser for small noise levels but still always within a small constant factor. boston housing dataset the boston housing dataset (harrison jr & rubinfeld, 1978; gilley et al., 1996) consists of data from 506 census tracts of greater boston in the 1970 census. there are 14 real-valued features, one of which—the median house value in usd 1000s—we designate as the response. unfortunately the entire set of features is too large for our algorithms, so for our experiments we pick various subsets of two or three features to use as covariates. a tale of two datasets. we exemplify our results with two particular feature subsets. first, we investigate the effect of zn (percentage of residential land zoned for large lots) on house values, controlling for rm (average number of rooms per home) and rad (highway accessibility index) but no bias term. on the entire dataset, we find a modest positive effect: the estimated coefficient of zn is roughly 0.06. both the greedy heuristic and our net algorithm find subsets of just 8% of the data (38-40 samples) which, if removed, would nullify the effect. but is this tight, or could there be a much smaller subset with the same effect? our lp lower bound certifies that removing at least 22.4 samples is necessary. second, we investigate the effect of zn on house values, this time controlling only for crim (per capita crime rate). our net algorithm finds a subset of just 27% of the data which was driving the effect, and the lp lower bound certifies that the stability is at least 8%. but this time, the greedy algorithm removes 90% of the samples, a clear failure. what happened? plotting zn against crim reveals a striking heterogeneity in the data: 73% of the samples have zn = 0, and the remaining 27% of the samples (precisely those removed by the net algorithm) have crim < 0.83, i.e. very low crime rates. as in the synthetic example, this heterogeneity explains the greedy algorithm’s failure. 
but heterogeneity is very common in real data: in this case, it's between the city proper and the suburbs, and in fact the ols regressors of these two subpopulations on all 13 features are markedly different (appendix i.6). thus, it's important to have algorithms with provable guarantees for detecting when heterogeneity causes (or doesn't cause) unstable conclusions.
[figure 3: results from boston housing dataset. (a) net upper bound (y) vs greedy upper bound (x); (b) lp lower bound (y) vs net upper bound (x); (c) crim (y) vs zn (x). figure (a) plots the net upper bounds on the y-axis against the greedy upper bounds on the x-axis; figure (b) plots the lp lower bounds on the y-axis against the net upper bounds on the x-axis. in both (a) and (b), each mark corresponds to one of the 156 feature pairs. figure (c) plots the feature zn against the feature crim (on log scale); each mark is one of the 506 datapoints.]
all-feature-pairs analysis. to be thorough, we also apply our algorithms to all 156 ordered pairs of features. for each pair, we regress the response (i.e. median house value) against the two features by ordinary least squares, and we use our algorithms on this 2-dimensional dataset to estimate how many samples need to be removed to nullify the effect of the first feature on the response. we also compare to the greedy upper bound. see figure 3 for a perspective on the results. in each figure, each point corresponds to the results of one dataset. the left figure plots the net upper bound against the greedy upper bound: we can see that our net algorithm substantially outperforms the greedy heuristic on some datasets (i.e. finds a much smaller upper bound) and never performs much worse. the right figure plots the lp lower bound against the net upper bound (along with the line y = x). for a majority of the datasets, the upper bound and lower bound are close.
concretely, for 116 of the 156 datasets, we certifiably estimate the stability up to a factor of two – some are sensitive to removing less than 10 samples, and some are stable to removing even a majority of the samples. conclusions in this work, we studied efficient estimation of the stability of ols regressions to removing subsets of the training data. we showed that in low dimensions the problem is both theoretically and experimentally tractable, whereas in high dimensions exact computation of the stability likely requires exponential time. however, this is only the beginning of the story. most immediately, since our lower bound algorithm takes time n^{Ω(d)}, our experiments were limited to no more than three dimensions. certifying stability of ols regressions from e.g. recent econometric studies may require additional heuristics or insights (e.g. developing a fixed-parameter tractable lower bound algorithm). beyond that, identifying reasonable assumptions under which exponential dependence on dimension can be entirely circumvented is another valuable direction for future work. of course, machine learning extends far beyond linear regression, and for more and more complex and opaque models, stability certification is all the more crucial as a tool for enhancing trustworthiness. certainly, ols is important in its own right, but inasmuch as it is a key building block in more complex machine learning systems (from regression trees (loh, 2011) to generative adversarial networks (mao et al., 2017) and policy iteration in linear mdps (lagoudakis & parr, 2003)), our work on estimating stability of ols is also a first step towards estimating stability for these systems. finally, we remark that care must be taken when interpreting stability in practice. large stability may increase trust in a model's parameters or predictions, but it does not mean that conclusions drawn from the model are “correct.”
conversely, even if the stability is small, the conclusions may still be useful, with the caveat that they may be driven by a small sub-population. understanding whether this heterogeneity is problematic or not is context-dependent, and is a separate but important issue. references joshua d angrist, guido w imbens, and donald b rubin. identification of causal effects using instrumental variables. journal of the american statistical association, 91(434):444–455, 1996. ainesh bakshi and adarsh prasad. robust linear regression: optimal rates in polynomial time. in proceedings of the 53rd annual acm sigact symposium on theory of computing, pp. 102–115, 2021. david a belsley, edwin kuh, and roy e welsch. regression diagnostics: identifying influential data and sources of collinearity. john wiley & sons, 1980. javier bianchi and saki bigio. banks, liquidity management, and monetary policy. econometrica, diogo gc britto, paolo pinotti, and breno sampaio. the effect of job loss and unemployment insurance on crime in brazil. econometrica, 90(4):1393–1423, 2022. tamara broderick, ryan giordano, and rachael meager. an automatic finite-sample robustness metric: can dropping a little data change conclusions. arxiv preprint arxiv:2011.14999, pp. 16, 2020. marc-etienne brunet, colleen alkalay-houlihan, ashton anderson, and richard zemel. understanding the origins of bias in word embeddings. in international conference on machine learning, pp. 803–811. pmlr, 2019. t tony cai and lie wang. orthogonal matching pursuit for sparse signal recovery with noise. ieee david card. estimating the return to schooling: progress on some persistent econometric problems. enrique castillo, ali s hadi, antonio conejo, and alfonso fernández-canteli. a general method for local sensitivity analysis with application to regression models and other optimization problems. technometrics, 46(4):430–444, 2004. maxime cauchois, suyash gupta, alnur ali, and john c duchi. 
robust validation: confident predictions even when distributions shift. arxiv preprint arxiv:2008.04267, 2020. michal ˇcern`y, jaromír antoch, and milan hladík. on the possibilistic approach to linear regression models involving uncertain, indeterminate or interval data. information sciences, 244:26–47, 2013. samprit chatterjee and ali s hadi. influential observations, high leverage points, and outliers in linear regression. statistical science, pp. 379–393, 1986. irene chen, fredrik d johansson, and david sontag. why is my classifier discriminatory? advances in neural information processing systems, 31, 2018. r dennis cook. detection of influential observation in linear regression. technometrics, 19(1):15–18, ilias diakonikolas, gautam kamath, daniel kane, jerry li, jacob steinhardt, and alistair stewart. in international conference on sever: a robust meta-algorithm for stochastic optimization. machine learning, pp. 1596–1606. pmlr, 2019. john duchi and hongseok namkoong. learning models with uniform performance via distributionally robust optimization. arxiv preprint arxiv:1810.08750, 2018. panos giannopoulos, christian knauer, and günter rote. the parameterized complexity of some geometric problems in unbounded dimension. in international workshop on parameterized and exact computation, pp. 198–209. springer, 2009. otis w gilley, r kelley pace, et al. on the harrison and rubinfeld data. journal of environmental ryan giordano, william stephenson, runjing liu, michael jordan, and tamara broderick. a swiss army infinitesimal jackknife. in the 22nd international conference on artificial intelligence and statistics, pp. 1139–1147. pmlr, 2019. suyash gupta and dominik rothenhäusler. the r-value: evaluating stability with respect to distribugurobi optimization, llc. gurobi optimizer reference manual, 2022. url https://www. gurobi.com. ali s hadi and jeffrey s simonoff. procedures for the identification of multiple outliers in linear models. 
journal of the american statistical association, 88(424):1264–1272, 1993. david harrison jr and daniel l rubinfeld. hedonic housing prices and the demand for clean air. journal of environmental economics and management, 5(1):81–102, 1978. jamie hayes and olga ohrimenko. contamination attacks and mitigation in multi-party machine learning. advances in neural information processing systems, 31, 2018. hugo hopenhayn, julian neira, and rish singhania. from population growth to firm demographics: implications for concentration, entrepreneurship and the labor share. econometrica, 90(4):1879– 1914, 2022. peter j huber. robust statistics, volume 523. john wiley & sons, 2004. andrew ilyas, sung min park, logan engstrom, guillaume leclerc, and aleksander madry. datamodels: understanding predictions with data and data with predictions. in international conference on machine learning, pp. 9525–9587. pmlr, 2022. sookyo jeong and hongseok namkoong. robust causal inference under covariate shift via worst-case subpopulation treatment effects. in conference on learning theory, pp. 2079–2084. pmlr, 2020. michael j kearns and umesh vazirani. an introduction to computational learning theory. mit press, adam klivans, pravesh k kothari, and raghu meka. efficient algorithms for outlier-robust regression. in conference on learning theory, pp. 1420–1430. pmlr, 2018. pang wei koh and percy liang. understanding black-box predictions via influence functions. in international conference on machine learning, pp. 1885–1894. pmlr, 2017. pang wei w koh, kai-siang ang, hubert teo, and percy s liang. on the accuracy of influence functions for measuring group effects. advances in neural information processing systems, 32, 2019. nikolas kuschnig, gregor zens, and jes cuaresma. hidden in plain sight: influential sets in linear michail g lagoudakis and ronald parr. least-squares policy iteration. the journal of machine edward e leamer. global sensitivity results for generalized least squares estimates. 
journal of the wei-yin loh. classification and regression trees. wiley interdisciplinary reviews: data mining and xudong mao, qing li, haoran xie, raymond yk lau, zhen wang, and stephen paul smolley. least squares generative adversarial networks. in proceedings of the ieee international conference on computer vision, pp. 2794–2802, 2017. john milnor. on the betti numbers of real varieties. proceedings of the american mathematical wolfgang polasek. regression diagnostics for general linear regression models. journal of the james renegar. on the computational complexity and geometry of the first-order theory of the reals. part i: introduction. preliminaries. the geometry of semi-algebraic sets. the decision problem for the existential theory of the reals. journal of symbolic computation, 13(3):255–299, 1992. john d sargan. the estimation of economic relationships using instrumental variables. econometrica: journal of the econometric society, pp. 393–415, 1958. aman sinha, hongseok namkoong, riccardo volpi, and john duchi. certifying some distributional robustness with principled adversarial training. arxiv preprint arxiv:1710.10571, 2017. hideo tanaka, isao hayashi, and junzo watada. possibilistic linear regression analysis for fuzzy data. european journal of operational research, 40(3):389–396, 1989. roman vershynin. high-dimensional probability: an introduction with applications in data science, volume 47. cambridge university press, 2018. jianlong zhou, zhidong li, huaiwen hu, kun yu, fang chen, zelin li, and yang wang. effects of influence on user trust in predictive decision making. in extended abstracts of the 2019 chi conference on human factors in computing systems, pp. 1–6, 2019. a further related work local and global sensitivity metrics. post-hoc evaluation of the sensitivity of a statistical inference to various types of model misspecification has long been recognized as an important research direction. 
within this area, there is a distinction between local sensitivity metrics, which measure the sensitivity of the inference to infinitesimal misspecifications of the assumed model m0 (e.g. polasek (1984); castillo et al. (2004); belsley et al. (1980)), and global sensitivity metrics, which measure the set of possible inferences as the model ranges in some fixed set m around m0 (e.g. leamer (1984); tanaka et al. (1989); ˇcern`y et al. (2013)). for ols in particular, there is a well-established literature on the influences of individual data points (cook, 1977; chatterjee & hadi, 1986), which falls under local sensitivity analysis, since deleting a single data point is an infinitesimal perturbation to a dataset of size n as n → ∞. in contrast, identifying jointly influential subsets of the data (the “global” analogue) has been a long-standing challenge due to computational issues (see e.g. page 274 of belsley et al. (1980)). existing approaches typically focus on identifying outliers in a generic sense rather than with respect to a specific inference (hadi & simonoff, 1993), or study computationally tractable variations of deletion (e.g. constant-factor reweighting (leamer, 1984)). robustified estimators. ever since the work of tukey and huber, one of the central areas of statistics has been robustifying statistical estimators to be resilient to outliers (see, e.g. huber (2004)). while a valuable branch of research, we view robust statistics as incomparable if not orthogonal to post-hoc sensitivity evaluation, for three reasons. first, samples that drive the conclusion (in the sense that deleting them would nullify the conclusion) are not synonymous with outliers: removing an outlier that works against the conclusion only makes the conclusion stronger. indeed, outlier-trimmed datasets are not necessarily finite-sample robust (broderick et al., 2020). 
rather, finite-sample stability (along with the s-value (gupta & rothenhäusler, 2021)), in the regime where a constant fraction of samples is removed, may be thought of as a measure of resilience to heterogeneity and distribution shift. second, it is unreasonable to argue that using robustified estimators obviates the need for sensitivity evaluation. robust statistics has seen a recent algorithmic revival, with the development of computationally efficient estimators, for problems such as linear regression, that are robust in the strong contamination model (e.g. klivans et al. (2018); diakonikolas et al. (2019); bakshi & prasad (2021)). however, even positing that the strong contamination model is correct, estimation guarantees for these algorithms require strong, unverifiable (and unavoidable (klivans et al., 2018)) assumptions about the uncorrupted data, such as hypercontractivity. sensitivity analyses should support modeling assumptions, not depend upon them. third and perhaps most salient, classical estimators such as ols are ubiquitous in practice, despite the existence of robust estimators. this alone justifies sensitivity analysis of the resulting scientific conclusions. distributionally robust optimization. a recent line of work in machine learning (sinha et al., 2017; duchi & namkoong, 2018; cauchois et al., 2020; jeong & namkoong, 2020) suggests that the lack of resilience of empirical risk minimization to distribution shift can be mitigated by minimizing the supremum of risks with respect to distributions near the empirical training distribution (under e.g. wasserstein distance or an f -divergence). again, this approach of robustifying the estimator is valuable but incomparable to sensitivity analysis. b proof of theorem 1.2 in this section, we show how to exactly compute the stability of a d-dimensional dataset in time no(d3), proving theorem 1.2. 
our main tool is theorem 4.1, a special case of an important result due to renegar (1992) on solving quantified polynomial systems of inequalities. the expression stability(x, y) ≤ k can indeed be written as a polynomial system of (degree-2) equations, with only an ∃ quantifier. unfortunately, the number of variables in this naive formulation is n + d − 1 (n for the weights and d − 1 for the regressor), which yields an algorithm exponential in n. thus, to take advantage of the above theorem, we need to reformulate the expression with fewer variables. the following lemma rewrites the stability, via the separation theorem for convex sets, in a form where the variable reduction will become apparent. lemma b.1. for any (x_i, y_i)_{i=1}^n and k ≥ 0, it holds that stability(x, y) ≤ k if and only if ∃λ ∈ r^{d−1} : ∀u ∈ r^d : ∃w ∈ [0, 1]^n : ∥w∥_1 ≥ n − k ∧ Σ_{i=1}^n w_i(⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩ ≥ 0, where x̃ : n × (d − 1) is the matrix with columns (x^t)_2, . . . , (x^t)_d. proof. from formulation (1) of the stability, we know that stability(x, y) ≤ k if and only if ∃λ ∈ r^{d−1} : ∃w ∈ [0, 1]^n : ∥w∥_1 ≥ n − k ∧ x^t (w ⋆ (x̃λ − y)) = 0. fix λ ∈ r^{d−1}. define the set d(n − k) = {w ⋆ (x̃λ − y) : w ∈ [0, 1]^n, ∥w∥_1 ≥ n − k}. we are interested in the predicate d(n − k) ∩ ker(x^t) ≠ ∅, or equivalently 0 ∈ d(n − k) + ker(x^t). observe that d(n − k) is convex, since w ranges over a convex set. thus, by the separation theorem for a point and a convex set, 0 ∈ d(n − k) + ker(x^t) if and only if for every v ∈ r^n, we have sup_{x ∈ d(n−k)+ker(x^t)} ⟨v, x⟩ ≥ 0. if v is not orthogonal to ker(x^t), then the inner product can be made arbitrarily large. thus, it suffices to restrict to v ∈ span(x^t), in which case the supremum is simply over x ∈ d(n − k). that is, d(n − k) ∩ ker(x^t) ≠ ∅ if and only if ∀u ∈ r^d : ∃w ∈ [0, 1]^n : ∥w∥_1 ≥ n − k ∧ ⟨xu, w ⋆ (x̃λ − y)⟩ ≥ 0. quantifying over λ, we get the claimed expression. the expression in lemma b.1 still has o(n) variables.
however, we can now actually eliminate the variable w at the cost of increasing the number of equations. this is because the optimal w for fixed λ and u only depends on the relative order of the terms (⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩. we make the following definition: definition b.2. for any λ ∈ r^{d−1} and u ∈ r^d, let π(λ, u) be the unique permutation on [n] such that for all 1 ≤ i ≤ n − 1, (⟨x̃_{π_i}, λ⟩ − y_{π_i})⟨x_{π_i}, u⟩ ≥ (⟨x̃_{π_{i+1}}, λ⟩ − y_{π_{i+1}})⟨x_{π_{i+1}}, u⟩, and such that equality implies π_i < π_{i+1}. let Π = {π(λ, u) : λ ∈ r^{d−1}, u ∈ r^d}. then it can be seen that for fixed λ and u, the optimal choice of w has coefficients 1 on π(λ, u)_1, . . . , π(λ, u)_{⌊n−k⌋}, and coefficient n − k − ⌊n − k⌋ for π(λ, u)_{⌊n−k⌋+1}: if there is any feasible w which makes the sum non-negative, then this choice of w makes the sum non-negative as well. denoting this vector by w(π(λ, u)), we have that in equation 4 it suffices to restrict to w ∈ {w(π) : π ∈ Π}. a priori, the number of achievable permutations could be n!, in which case we would not have gained anything. however, because π(λ, u) is defined by low-degree polynomials in only 2d − 1 variables, we can actually show that |Π| is at most exponential in d, using the following result: theorem b.3 (sign partitions (milnor, 1964; renegar, 1992)). let g_1, . . . , g_m : r^n → r be arbitrary polynomials each with total degree at most d. let sg(g) be the set of vectors σ ∈ {−1, 0, 1}^m such that σ is an achievable sign vector, i.e. there exists some x ∈ r^n with sign(g_i(x)) = σ_i for all i ∈ [m]. then |sg(g)| ≤ (md)^{O(n)}. moreover, sg(g) can be enumerated in time (md)^{O(n)}. putting everything together, we have the following theorem, which proves theorem 1.2. theorem b.4. for any permutation π on [n], define w(π) ∈ [0, 1]^n by w(π)_{π_i} = 1 if i ≤ ⌊n − k⌋; n − k − ⌊n − k⌋ if i = ⌊n − k⌋ + 1; and 0 otherwise. then for any k ∈ [0, n], it holds that stability(x, y) > k if and only if ∀λ ∈ r^{d−1} : ∃u ∈ r^d : ∀π ∈ Π : Σ_{i=1}^n w(π)_i(⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩ < 0. moreover, Π can be enumerated in time n^{O(d)}.
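the construction of w(π) amounts to a fractional greedy rule: given the n summand values, put weight 1 on the ⌊n − k⌋ largest, the fractional remainder on the next one, and 0 elsewhere. a sketch (numpy, function name ours), which maximizes Σ_i w_i a_i over w ∈ [0, 1]^n with ∥w∥_1 equal to a fixed total:

```python
import numpy as np

def optimal_weights(a, total):
    """among w in [0,1]^n with ||w||_1 = total, maximize sum_i w_i * a_i:
    weight 1 on the floor(total) largest entries of a, the fractional
    remainder total - floor(total) on the next largest, 0 elsewhere."""
    n = len(a)
    order = np.argsort(-a)         # indices sorted from largest to smallest a_i
    w = np.zeros(n)
    full = int(np.floor(total))
    w[order[:full]] = 1.0
    if full < n:
        w[order[full]] = total - full
    return w
```

with a_i = (⟨x̃_i, λ⟩ − y_i)⟨x_i, u⟩ and total = n − k, this is exactly the witness used in the argument: if any feasible w makes the sum non-negative, this one does.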
thus, the expression stability(x, y) > k can be decided in time n^o(d^3). proof. fix λ ∈ rd−1 and u ∈ rd. if ∑n i=1 w(π)i(⟨ ˜xi, λ⟩ − yi)⟨xi, u⟩ ≥ 0, then because ∥w(π)∥1 ≥ n − k, we obviously get ∃w ∈ [0, 1]n : ∥w∥1 ≥ n − k ∧ ∑n i=1 wi(⟨ ˜xi, λ⟩ − yi)⟨xi, u⟩ ≥ 0. if it is false, | 14 | [
214.516532664,
484.3460784,
236.14095372,
494.3086784
] |
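the greedy choice of w described above (weight 1 on the ⌊n − k⌋ largest terms, a fractional weight on the next, 0 elsewhere) is easy to make concrete. the sketch below is our own illustration of theorem b.4's w(π), not the paper's code; `terms[i]` stands in for (⟨ ˜xi, λ⟩ − yi)⟨xi, u⟩:

```python
import math

def w_of_pi(terms, k):
    """Greedy weight vector w(pi) from Theorem B.4: sort terms in
    decreasing order (ties broken by smaller index, as in Definition B.2),
    give the floor(n-k) largest terms weight 1, the next one the
    fractional remainder n - k - floor(n-k), and the rest weight 0."""
    n = len(terms)
    mass = n - k                       # total l1 mass to place
    full = math.floor(mass)            # number of weight-1 coordinates
    order = sorted(range(n), key=lambda i: (-terms[i], i))
    w = [0.0] * n
    for rank, i in enumerate(order):
        if rank < full:
            w[i] = 1.0
        elif rank == full:
            w[i] = mass - full         # fractional coordinate
    return w

def weighted_sum(w, terms):
    return sum(wi * ti for wi, ti in zip(w, terms))
```

among all feasible w with ∥w∥1 = n − k, this w maximizes the weighted sum, which is why restricting equation 4 to {w(π) : π ∈ Π} loses nothing.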
rzvOQrnclO0.pdf | 2,022 | 1 | gradient information matters in policy optimization by back-propagating through model chongchong li 1∗, yue wang 2†, wei chen 3†, yuting liu 1, zhi-ming ma 4 & tie-yan liu 2 1 beijing jiaotong university {18118002,ytliu}@bjtu.edu.cn 2 microsoft research asia {yuwang5,tyliu}@microsoft.com 3 institute of computing technology, chinese academy of sciences chenwei2022@ict.ac.cn 4 academy of mathematics and systems science, chinese academy of sciences mazm@amt.ac.cn abstract model-based reinforcement learning provides an efficient mechanism to find the optimal policy by interacting with the learned environment. in addition to treating the learned environment like a black-box simulator, a more effective way to use the model is to exploit its differentiability. such methods require the gradient information of the learned environment model when calculating the policy gradient. however, since the gradient error is not considered in the model learning phase, there is no guarantee for the model’s accuracy. to address this problem, we first analyze the convergence rate for policy optimization methods when the policy gradient is calculated using the learned environment model. the theoretical results show that the model gradient error matters in the policy optimization phase. then we propose a two-model-based learning method to control both the prediction error and the gradient error. we separate the different roles of these two models at the model learning phase and coordinate them at the policy optimization phase. after proposing the method, we introduce the directional derivative projection policy optimization (ddppo) algorithm as a practical implementation to find the optimal policy. finally, we empirically demonstrate that the proposed algorithm achieves better sample efficiency with comparable or better performance on benchmark continuous control tasks.
code is available at https://github.com/ccreal/ddppo introduction reinforcement learning (rl) is a powerful technique for solving sequential decision-making problems (li, 2018; sutton & barto, 1998). recent work on model-based rl (nagabandi et al., 2018; luo et al., 2018; kurutach et al., 2018; wang et al., 2019; janner et al., 2019; pan et al., 2020) has shown the power of first learning the environment model and then using it to do the policy optimization. several methods have been proposed to achieve the goal of getting similar performance using less data, such as ensembles (kurutach et al., 2018), probabilistic models (chua et al., 2018), and meta-learning (clavera et al., 2018). in addition to treating the learned environment as a black-box simulator, a more effective way of using the model is to exploit its differentiability (heess et al., 2015; clavera et al., 2019; d’oro & jaśkowski, 2020; amos et al., 2021), which is the main focus of this paper. to get the policy updating direction, these methods compute the policy gradient directly by back-propagating through the model. therefore, the model gradient is used in the calculation and its error will influence the accuracy of the policy gradient. however, since traditional model learning only aims to get an accurate prediction for the next state and the reward, there is no guarantee for the accuracy of the model gradient. (∗this work was done when chongchong li was interning at msra. †corresponding author.) in other words, the algorithm requires the accurate model gradient, but we only learn to decrease the prediction error, which results in an objective mismatch. in this paper, to address these problems, we first theoretically analyze the problem and then propose our solution based on the theoretical results. first of all, we present the convergence rate analysis for the policy optimization algorithms in which the policy gradient is calculated using the learned environment model.
by taking the model gradient error into account, we can see that the gradients of the transition and reward models matter in the policy optimization. specifically, the bias of the estimated policy gradient used to update the policy is introduced not only by the prediction error of the learned model but also by its gradient error. furthermore, the policy gradient bias due to these different types of model error will finally influence the convergence rate of the policy optimization process. then, inspired by the theoretical results, we propose the two-model-based learning method. according to the policy gradient bias and convergence rate analysis, in order to optimize the policy efficiently and accurately, we need a learned environment model with both small prediction error and small gradient error. therefore, we propose to use separate models for different purposes. in the model learning phase, the prediction model aims to reduce the prediction error, and the gradient model focuses on minimizing the gradient error. in the policy optimization phase, we use the prediction model to roll out data and the gradient model to calculate the policy gradient. to make the proposed method applicable, we introduce the directional derivative projection policy optimization (ddppo) algorithm. our first goal is to use data to estimate the gradient or jacobian matrix of the environment model and use the estimator to learn the model’s gradient explicitly. the challenge is that the state and action are usually high-dimensional, and directly estimating the gradient or jacobian matrix from data is intractable. thus, we first estimate the directional derivative using data and then project the model’s gradient or jacobian matrix onto these directions. minimizing the error between the estimated directional derivative and the projection value, we can learn the model’s gradient.
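the directional-derivative idea can be sketched on a 1-d linear system (our own toy, not the paper's implementation; the real algorithm works with neural models and high-dimensional states): estimate the dynamics' derivative along a few directions from transition pairs, then fit the gradient model's jacobian by minimizing the projection error:

```python
def true_next_state(s, a):
    # stand-in environment; in reality this is only observed through data
    return 2.0 * s + 3.0 * a

def directional_derivative_data(s, a, directions, eps=0.1):
    """Finite-difference estimates of the derivative of the dynamics along
    a few directions, standing in for estimates built from observed
    transition pairs."""
    base = true_next_state(s, a)
    data = []
    for ds, da in directions:
        delta = true_next_state(s + eps * ds, a + eps * da) - base
        data.append(((ds, da), delta / eps))
    return data

def fit_gradient_model(data, lr=0.05, steps=300):
    """Fit the gradient model's Jacobian (df/ds, df/da) by minimizing the
    projection error sum_i (J . v_i - d_i)^2 with plain gradient descent."""
    j_s, j_a = 0.0, 0.0
    for _ in range(steps):
        g_s = g_a = 0.0
        for (v0, v1), d in data:
            err = j_s * v0 + j_a * v1 - d
            g_s += 2.0 * err * v0
            g_a += 2.0 * err * v1
        j_s -= lr * g_s
        j_a -= lr * g_a
    return j_s, j_a
```

for this linear system, two independent directions suffice and the fitted jacobian recovers (2, 3) without ever forming the full jacobian from per-coordinate perturbations.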
secondly, after learning the environment model with a more accurate gradient, we can leverage the two-model-based learning method to do the policy optimization. finally, we conduct experiments on simple environments and the benchmark mujoco continuous control environments. the experimental results verify our theoretical findings and demonstrate the effectiveness of the proposed method. our main contributions can be summarized as follows: 1. we theoretically depict how the different model errors influence the convergence rate of the model-based policy optimization algorithm. the result shows that the gradient error of the model indeed matters in the convergence of the policy optimization. 2. we propose the two-model-based learning method and the practical ddppo algorithm, which learns and uses two environment models (a prediction model used for rollout and a gradient model used to provide the gradient information) for model-based policy optimization. 3. empirically, we achieve better sample efficiency in the mujoco continuous control tasks than state-of-the-art model-based and model-free methods. preliminaries | 1 | [
108.299,
204.3936768,
207.7271341,
216.3488768
] |
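a minimal sketch of the two-model split (our own toy, not the paper's ddppo implementation; all names are ours): the prediction model is used only to roll out states, the gradient model only to back-propagate through the rollout. with true dynamics s' = s + a and reward −s'², the linear policy a = θs should approach θ = −1:

```python
def true_dynamics(s, a):
    return s + a                        # real environment, used only to evaluate

def prediction_model(s, a):
    return s + a                        # learned model used to roll out states

def gradient_model(s, a):
    return 1.0, 1.0                     # learned (ds'/ds, ds'/da), used for backprop

def reward(s_next):
    return -s_next ** 2

def train_policy(starts, lr=0.05, steps=100):
    theta = 0.5                         # linear policy a = theta * s
    for _ in range(steps):
        grad = 0.0
        for s0 in starts:
            a = theta * s0
            s1 = prediction_model(s0, a)      # rollout with the prediction model
            dr_ds1 = -2.0 * s1                # d reward / d next state
            _, df_da = gradient_model(s0, a)  # gradient model supplies ds'/da
            grad += dr_ds1 * df_da * s0       # chain rule: d reward / d theta
        theta += lr * grad                    # gradient ascent on the reward
    return theta

def avg_true_reward(theta, starts):
    return sum(reward(true_dynamics(s, theta * s)) for s in starts) / len(starts)
```

the point of the split is visible in the loop: an inaccurate `gradient_model` would bias `grad` even if `prediction_model` predicted next states perfectly.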
O-G91-4cMdv.pdf | 2,023 | 2 | words are all you need? language as an approximation for human similarity judgments raja marjieh1,*, pol van rijn2,*, ilia sucholutsky3,*, theodore r. sumers3, harin lee2,4, thomas l. griffiths1,3,**, nori jacoby2,** ∗*/**equal contribution. 1department of psychology, princeton university 2max planck institute for empirical aesthetics 3department of computer science, princeton university 4max planck institute for cognitive and brain sciences abstract human similarity judgments are a powerful supervision signal for machine learning applications based on techniques such as contrastive learning, information retrieval, and model alignment, but classical methods for collecting human similarity judgments are too expensive to be used at scale. recent methods propose using pre-trained deep neural networks (dnns) to approximate human similarity, but pre-trained dnns may not be available for certain domains (e.g., medical images, low-resource languages) and their performance in approximating human similarity has not been extensively tested. we conducted an evaluation of 611 pre-trained models across three domains – images, audio, video – and found that there is a large gap in performance between human similarity judgments and pre-trained dnns. to address this gap, we propose a new class of similarity approximation methods based on language. to collect the language data required by these new methods, we also developed and validated a novel adaptive tag collection pipeline. we find that our proposed language-based methods are significantly cheaper, in the number of human judgments, than classical methods, but still improve performance over the dnn-based methods. finally, we also develop ‘stacked’ methods that combine language embeddings with dnn embeddings, and find that these consistently provide the best approximations for human similarity across all three of our modalities. 
based on the results of this comprehensive study, we provide a concise guide for researchers interested in collecting or approximating human similarity data. to accompany this guide, we also release all of the similarity and language data, a total of 206,339 human judgments, that we collected in our experiments, along with a detailed breakdown of all modeling results. introduction similarity judgments have long been used as a tool for studying human representations, both in cognitive science (shepard, 1980; 1987; tversky, 1977; tenenbaum & griffiths, 2001), as well as in neuroscience, as exemplified by the rich literature on representational similarity between humans and machines (schrimpf et al., 2020; kell et al., 2018; linsley et al., 2017; langlois et al., 2021; yamins et al., 2014) whereby similarity patterns of brain activity are compared to those arising from a model of interest. recent research in machine learning suggests that incorporating human similarity judgments in model training can play an important role in a variety of paradigms such as human alignment (esling et al., 2018), contrastive learning (khosla et al., 2020), information retrieval (parekh et al., 2020), and natural language processing (gao et al., 2021). however, building a large dataset based on human similarity judgments is very expensive and often infeasible since the number of judgments required is quadratic in the number of stimuli – for n stimuli, o(n²) judgments are required1. (∗correspondence: {raja.marjieh,is2961}@princeton.edu, pol.van-rijn@ae.mpg.de) figure 1: comparing human similarity scores gathered through crowdsourcing with ml pipelines. we used data from three modalities: images, audio, and video. for each modality, we extracted deep model embeddings and gathered human captions and tags. word- and language-embedding models, as well as simple word-frequency analysis, were used to predict human similarity judgments.
for example, to fully quantify the similarity of all possible dyadic pairs of 50,000 images, one needs to collect on the order of 1.25 billion (∼ 50000²/2) human similarity judgments. thus, human judgments are the main bottleneck for machine-learning methods based on similarity. for this reason, the majority of available human similarity datasets are small by machine learning standards (up to a few thousand objects). advancements in deep learning have brought an alternative approach that does not require extensive collection of human judgments. specifically, the idea is to use the similarity between hidden representations in pre-trained deep neural networks (dnns) to approximate human similarity (peterson et al., 2018; jha et al., 2020; marjieh et al., 2022; hebart et al., 2020; roads & love, 2021). some of these methods also suggest fine-tuning representations on a small training set of human similarity judgments (peterson et al., 2018). this, in turn, results in a significant reduction in the number of required human judgments down to o(1) (given the pre-trained model). while such methods are promising, they still require access to strong pre-trained models which may not necessarily be available in all domains (e.g., medical datasets, niche modalities, low-resource languages, etc.). in addition, representations obtained from neural networks may not always overlap with human similarity representations, given that the models can be trained for different objectives (i.e., their embeddings may be poor approximations for human similarity). a comprehensive comparison to assess which models perform well in predicting human similarity across different modalities is currently lacking in the literature.
to this end, one of our main contributions in this paper is providing a first-of-its-kind large-scale evaluation of over 600 publicly available pre-trained models as approximations for human similarity judgments on three modalities (images, audio, video). (1: depending on various assumptions, the full range of classical methods can require between o(n log n) (jamieson & nowak, 2011) and o(n³) (hebart et al., 2020) human judgments; in this work, we used o(n²) human judgments (collecting all unique dyadic pairs) as the baseline for comparison.) our experiments reveal that there is a large gap in performance between the o(1) dnn methods and the classical o(n²) similarity method we used as the baseline. to address this gap, we propose a new class of o(n) methods to efficiently and accurately approximate human similarity based on language. this is motivated by a long line of research in cognitive science suggesting that language is an extremely efficient way for humans to communicate information about their sensory environment (murphy, 2004; zaslavsky et al., 2018; piantadosi et al., 2011; jaeger & levy, 2006). this in turn suggests that we can use textual descriptors to approximate similarity judgments across different modalities. moreover, such textual descriptors can be collected at the cost of o(n) human judgments (as people describe individual stimuli rather than pairs), which renders this method scalable. we consider two approaches for approximating similarity from text data. one approach is to use pre-trained large language models (llms) to produce vector embeddings of the textual descriptions, and then use a measure of distance between these embeddings to approximate human similarity. this method is more domain-agnostic than the o(1) deep learning methods as it only requires access to a pre-trained llm regardless of the modality of the original dataset.
however, there are some cases where the domain may be out-of-distribution for all available llms (e.g., niche technical fields), or where no llms are available at all (e.g., low-resource languages). in such cases, the other approach is to use word-frequency analysis (wfa) methods from classical text processing literature (barrios et al., 2016; rouge, 2004; beel et al., 2016). as for the textual descriptions themselves, we consider two types, namely, free-text captions and concise word tags. collecting captions for machine learning datasets is a well-established practice and can easily be done through crowdsourcing platforms. on the other hand, there is no consensus on best practices for collecting tags without a pre-existing taxonomy (i.e., open-set labels). to address this, we propose a novel adaptive tag mining pipeline called sequential transmission evaluation pipeline (step-tag), which we describe in section 2.2.4. as we will show, step-tag allows researchers to collect meaningful, diverse, and high-quality word tags for target stimuli in an online crowdsourcing environment. finally, we propose one additional set of hybrid approximation methods that combine sensory information with textual descriptions while still requiring o(n) human judgments. for this approach, we propose to stack the embeddings derived from domain-specific models (e.g., output from the last layer of an image classifier) with the llm embedding of the respective textual description. when multi-modal models are available, we can similarly leverage the joint embedding of both the stimulus and its textual description. we evaluate all of these novel and existing methods across multiple modalities. we test the relative contributions of linguistic and sensory information in approximating human similarity and show that our proposed language-based methods provide both accurate and efficient approximations across modalities, even though they do not require a trained modality-specific deep learning model.
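the o(n) language-based approximation can be illustrated with a toy sketch (our own, with made-up 2-d vectors standing in for real llm caption embeddings): each of the n stimuli is embedded once, and all pairwise similarities are then read off as cosines with no further human judgments:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def similarity_matrix(embeddings):
    """n embeddings (one per stimulus, so o(n) human effort) yield a full
    n x n similarity matrix; the quadratic work is done by the machine."""
    n = len(embeddings)
    return [[cosine(embeddings[i], embeddings[j]) for j in range(n)]
            for i in range(n)]
```

in the paper's setting the embeddings come from an llm applied to crowdsourced captions or tags; here the inputs are hypothetical vectors chosen for illustration.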
crucially, with this large-scale evaluation, we are able for the first time to provide researchers with a comprehensive guide of the tools to use for approximating human similarity at scale. to summarize, our contributions are as follows: • we conduct a comprehensive comparison of human similarity approximation methods. • we propose a novel modality-agnostic method for approximating similarity based on text and show that it is both efficient and competitive in terms of performance. • we propose step-tag, a novel adaptive tagging pipeline, and show that it is effective for crowdsourcing high-quality and diverse sets of word tags. • we synthesize our findings into a detailed guide for researchers interested in approximating human similarity judgments at scale. • we collect and release ground-truth and approximated versions of a large behavioral dataset (n = 1,492) across three different domains (images, audio, video), including two textapproximated similarity matrices for 1,000 audio clips and 1,000 video clips. datasets stimuli throughout this work, we considered five stimulus datasets across three different modalities – images, audio, and video – consisting of a total of 31,320 dyadic pairs labeled with similarity. images for images, we considered three datasets of common objects introduced in peterson et al. (2018) – namely, animals, furniture, and vegetables – each consisting of 7,140 dyadic pairs (all unique pairs over 120 images). audio for audio, we used the ravdess corpus (livingstone & russo (2018), released under a cc attribution license), which consists of semantically neutral sentences spoken by 24 us american actors to convey a specific target emotion. to construct a 1,000-recording subset, we selected 3 emotions per speaker per sentence. we randomly omitted 104 emotional stimuli and included all 96 neutral recordings (the dataset only contains 2 neutral recordings per speaker per sentence). 
to construct the subset composed of 4,950 dyadic pairs (all unique pairs over 100 recordings), we randomly selected ∼13 recordings per emotion from the 1,000. video finally, for the video dataset, we considered the mini-kinetics-200 dataset (xie et al., 2018) (released under a cc by 4.0 international license), which contains a large set of short video clips of human activities from 200 activity classes. specifically, we focused on the validation split, which contains 5,000 videos in total. to construct our 1,000-video dataset, we sampled 5 random videos from each of the 200 activity categories. the 100-video subset (4,950 dyadic pairs) used in the similarity judgment collection experiment was then generated by sampling 100 random stimuli from the 1,000 list. human judgment collection 2.2.1 participants we collected data from n = 1,492 us participants for the new behavioral experiments reported in this paper. participants were recruited anonymously from amazon mechanical turk and provided informed consent under an approved protocol by either the institutional review board (irb) at princeton university (application 10859) or the max planck ethics council (application 2021_42) before taking part. participants earned 9-12 usd per hour, and each session lasted less than 30 minutes. to help recruit reliable participants, we required that participants are at least 18 years of age, reside in the united states and have participated in more than 5,000 previous tasks with a 99% approval rate (see supplementary section b for additional details about the behavioral experiments). all experiments were implemented with the dallinger and psynet frameworks designed for automation of large-scale behavioral research (harrison et al., 2020). in supplementary section a.1, we include the data that was collected, instructions used, and code for replication of the behavioral experiments. we also provide the code for computational experiments and analysis. 
2.2.2 similarity judgments we collected two batches of pairwise similarity judgements, one for each of the audio and video subsets, and were provided access to the similarity matrices for the three image datasets by the authors of peterson et al. (2018). for each pair we collected ∼ 5 similarity judgments to average out inter-rater noise. 2.2.3 captions we collected free-text captions for the video and audio datasets. captions for the image datasets were already collected by marjieh et al. (2022) and used here with permission. for each stimulus, we collected ∼ 10 captions. figure 2: step-tag, our novel tag-mining paradigm. we ran an adaptive process in which results of one iteration are used as inputs for subsequent iterations. in every iteration, participants can add a new tag, rate the relevance of existing tags or flag tags that are inappropriate. we propose a novel adaptive tag pipeline for simultaneous data collection and evaluation called sequential transmission evaluation pipeline (step) and apply it in the context of semantic tag mining (step-tag). our paradigm, step-tag, allows researchers to efficiently collect high-quality word tags for a given stimulus (figure 2) and extends existing crowdsourcing text-mining techniques (von ahn & dabbish, 2008; 2004; krishna et al., 2017; law et al., 2007) by integrating ideas from transmission chain experiments (kirby et al., 2008; griffiths & kalish, 2005). in step-tag, participants adaptively create tags for a set of target stimuli and simultaneously evaluate the annotations made by previous participants. in each trial, participants are first given a stimulus (e.g., an image or audio fragment) and rate the relevance of tags that were created by other participants (on a 5-interval likert scale) or flag a tag if they find it inappropriate (with tags removed if more than two people flag the tag). 
next, participants are also given the opportunity to add new tags if they feel a relevant tag that describes the stimulus is missing. the results of the annotation procedure of one participant then propagate to the next participant (additional details about the paradigm, and screenshots are provided in supplementary section b.6). ultimately, as the process unfolds over many iterations, meaningful tags are extracted and validated by multiple participants, enabling efficient open-label collection of a desired dataset. to validate step-tag, we compared it against several baselines: (i) randomly selecting only a single high-rated tag from the last iteration of step-tag per stimulus, (ii) using tags only from the first iteration of step-tag (equivalent to non-adaptive tag collection), and (iii) using class labels instead of tags. we found that tags produced after multiple iterations of step-tag outperformed all three baselines in terms of quality (i.e., downstream performance for similarity reconstruction) and diversity (see supplementary section b.6.1). models dnn-based methods we tested a wide range of pre-trained ml models that do not rely on text (overall we tested 611 models) and compared their internal representations to human similarity judgments and text-based predictions (figure 1a). we compiled our model pool by leveraging pre-trained model repositories (or zoos) available online. in particular, for images we use 569 pre-trained models from the pytorch-image-models package timm (wightman, 2019), for audio we use 36 pre-trained models available in the torchaudio package (yang et al., 2021) (see also supplementary figure 10 for an analysis of layer depth), and for video we use 6 pre-trained models available from the pytorchvideo package (fan et al., 2021). 
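the step-tag bookkeeping described above can be sketched as follows (our own minimal data structures, not the authors' dallinger/psynet code): each participant may rate existing tags, flag them (a tag flagged by more than two people is removed), and add missing ones, with the updated state passed to the next participant:

```python
def make_tag(name):
    return {"name": name, "ratings": [], "flags": 0}

def participant_turn(tags, ratings=(), flags=(), new_tags=()):
    """One STEP-tag iteration: the participant rates existing tags on a
    1-5 scale, flags inappropriate ones (a tag flagged by more than two
    people is removed), and may add new tags. The updated tag list is
    then passed on to the next participant in the chain."""
    for name, score in ratings:
        for tag in tags:
            if tag["name"] == name:
                tag["ratings"].append(score)
    for name in flags:
        for tag in tags:
            if tag["name"] == name:
                tag["flags"] += 1
    tags = [tag for tag in tags if tag["flags"] <= 2]
    for name in new_tags:
        if all(tag["name"] != name for tag in tags):
            tags.append(make_tag(name))
    return tags
```

over many such turns, highly rated tags accumulate evidence while flagged tags disappear, which is the adaptive filtering the pipeline relies on.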
because of the recent success of multimodal training, we additionally included 9 multimodal models based on clip from openai’s public implementation (https://github.com/openai/clip) for the image datasets, and compared them to “stacked” representations (i.e., concatenating embeddings from separate image and text models). llm-based methods | 5 | [
108.249,
661.6480784,
230.7728668,
671.6106784
] |
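the "stacked" representation compared against the multimodal models can be sketched as follows (our own illustration; the toy vectors stand in for a domain-specific embedding and an llm text embedding): normalize each part so neither source dominates, concatenate, and compare with cosine similarity:

```python
import math

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def stacked_embedding(modality_emb, text_emb):
    # normalize each embedding so neither the modality model nor the
    # language model dominates the concatenation
    return l2_normalize(modality_emb) + l2_normalize(text_emb)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u))
                  * math.sqrt(sum(b * b for b in v)))
```

the stacked vector has the combined dimensionality of its two parts, and its cosine blends sensory and linguistic agreement in equal proportion.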
kkpL4zUXtiw.pdf | 2,023 | 0 | bi-level physics-informed neural networks for pde constrained optimization using broyden’s hypergradients zhongkai hao1,2,3, chengyang ying1, hang su1,4, jun zhu1,3,4∗, jian song2, ze cheng5 1dept. of comp. sci. & tech., institute for ai, bnrist center, thbi lab, tsinghua-bosch joint center for ml, tsinghua university 2dept. of ee, tsinghua university, 3 realai, 4pazhou lab, guangzhou, 510330, china,5bosch china investment ltd {hzj21, ycy21}@mail.tsinghua.edu.cn, {dcszj, suhangss, jsong}@tsinghua.edu.cn, ze.cheng@cn.bosch.com abstract deep learning based approaches like physics-informed neural networks (pinns) and deeponets have shown promise on solving pde constrained optimization (pdeco) problems. however, existing methods are insufficient to handle those pde constraints that have a complicated or nonlinear dependency on optimization targets. in this paper, we present a novel bi-level optimization framework to resolve the challenge by decoupling the optimization of the targets and constraints. for the inner loop optimization, we adopt pinns to solve the pde constraints only. for the outer loop, we design a novel method by using broyden’s method based on the implicit function theorem (ift), which is efficient and accurate for approximating hypergradients. we further present theoretical explanations and error analysis of the hypergradients computation. extensive experiments on multiple large-scale and nonlinear pde constrained optimization problems demonstrate that our method achieves state-of-the-art results compared with strong baselines. introduction pde constrained optimization (pdeco) aims at optimizing the performance of a physical system constrained by partial differential equations (pdes) with desired properties. 
it is a fundamental task in numerous areas of science (chakrabarty & hanson, 2005; ng & dubljevic, 2012) and engineering (hicks & henne, 1978; chen et al., 2009), with a wide range of important applications including image denoising in computer vision (de los reyes & schönlieb, 2013), design of aircraft wings in aerodynamics (hicks & henne, 1978), and drug delivery in biology (chakrabarty & hanson, 2005). these problems have numerous inherent challenges due to the diversity and complexity of physical constraints and practical problems. traditional numerical methods like adjoint methods (herzog & kunisch, 2010) based on finite element methods (fems) (zienkiewicz et al., 2005) have been studied for decades. they can be divided into continuous and discretized adjoint methods (mitusch et al., 2019); the former requires complex handcrafted derivation of adjoint pdes, while the latter is more flexible and more frequently used. however, the computational cost of fems grows quadratically to cubically (xue et al., 2020) w.r.t. mesh size. thus, compared with other constrained optimization problems, it is much more expensive or even intractable to solve high-dimensional pdeco problems with a large search space or mesh size. to mitigate this problem, neural network methods like deeponet (lu et al., 2019) have recently been proposed as surrogate models for fems. deeponet learns a mapping from control (decision) variables to solutions of pdes and further replaces the pde constraints with the operator network. but these methods require pretraining a large operator network, which is non-trivial and inefficient. (∗corresponding author.) moreover, its performance may deteriorate if the optimal solution is out of the training distribution (lanthaler et al., 2022). another line of neural methods (lu et al., 2021; mowlavi & nabi, 2021) proposes to use a single pinn (raissi et al., 2019) to solve the pdeco problem instead of pretraining an operator network.
it uses the method of lagrangian multipliers to treat the pde constraints as regularization terms, and thus optimize the objective and pde loss simultaneously. however, such methods introduce a trade-off between optimization targets and regularization terms (i.e., pde losses) which is crucial for the performance (nandwani et al., 2019). it is generally non-trivial to set proper weights for balancing these terms due to the lack of theoretical guidance. existing heuristic approaches for selecting the weights may usually yield an unstable training process. therefore, it is imperative to develop an effective strategy to handle pde constraints for solving pdeco problems. to address the aforementioned challenges, we propose a novel bi-level optimization framework named bi-level physics-informed neural networks with broyden’s hypergradients (bpn) for solving pde constrained optimization problems. specifically, we first present a bi-level formulation of the pdeco problems, which decouples the optimization of the targets and pde constraints, thereby naturally addressing the challenge of loss balancing in regularization based methods. to solve the bi-level optimization problem, we develop an iterative method that optimizes pinns with pde constraints in the inner loop while optimizes the control variables for objective functions in the outer loop using hypergradients. in general, it is nontrivial to compute hypergradients in bi-level optimization for control variables especially if the inner loop optimization is complicated (lorraine et al., 2020). to address this issue, we further propose a novel strategy based on implicit differentiation using broyden’s method which is a scalable and efficient quasi-newton method in practice (kelley, 1995; bai et al., 2020). we then theoretically prove an upper bound for the approximation of hypergradients under mild assumptions. 
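the ift-based hypergradient that bpn approximates can be illustrated on a toy 1-d bi-level problem (our own example; all names are ours). the formula is dj/dθ = ∂j/∂θ − ∂j/∂w (∂²e/∂w²)⁻¹ ∂²e/∂w∂θ; in the paper the inverse-hessian-vector product is approximated with broyden's method, while in one dimension it is just a scalar division:

```python
def hypergradient(J, E, w_star, theta, h=1e-4):
    """dJ/dtheta at the inner solution w*(theta) via the implicit
    function theorem: dJ/dtheta = J_theta - J_w * (E_ww)^{-1} * E_wtheta.
    Partial derivatives are taken with central finite differences."""
    J_w = (J(w_star + h, theta) - J(w_star - h, theta)) / (2 * h)
    J_t = (J(w_star, theta + h) - J(w_star, theta - h)) / (2 * h)
    E_ww = (E(w_star + h, theta) - 2 * E(w_star, theta)
            + E(w_star - h, theta)) / h ** 2
    E_wt = (E(w_star + h, theta + h) - E(w_star + h, theta - h)
            - E(w_star - h, theta + h) + E(w_star - h, theta - h)) / (4 * h ** 2)
    v = J_w / E_ww                 # solve E_ww * v = J_w (a scalar divide here;
    return J_t - v * E_wt          # a Broyden/CG linear solve in high dimension)

# toy bi-level problem: the inner objective E(w, theta) = (w - theta^2)^2
# has the closed-form inner solution w*(theta) = theta^2.
E = lambda w, t: (w - t ** 2) ** 2
J = lambda w, t: (w - 1.0) ** 2 + t
# analytic check at theta = 1.5: dJ/dtheta = 1 + 4*theta*(theta**2 - 1) = 8.5
```

the same structure underlies bpn: the inner loop returns w*, the partials are exact autograd quantities rather than finite differences, and the linear system is large.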
extensive experiments on several benchmark pde constrained optimization problems show that our method is more effective and efficient compared with the alternative methods. we summarize our contributions as follows: • to the best of our knowledge, this is the first attempt to solve general pdeco problems based on deep learning using a bi-level optimization framework that enjoys scalability and theoretical guarantees. • we propose a novel and efficient method for hypergradient computation using broyden’s method to solve the bi-level optimization. • we conduct extensive experiments and achieve state-of-the-art results among deep learning methods on several challenging pdeco problems with complex geometry or non-linear navier-stokes equations. related work neural network approaches for pde constrained optimization. surrogate modeling is an important class of methods for pde constrained optimization (queipo et al., 2005). physics-informed neural networks (pinns) are powerful and flexible surrogates to represent the solutions of pdes (raissi et al., 2019). hpinn (lu et al., 2021) treats pde constraints as regularization terms and optimizes the control variables and states simultaneously. it uses the penalty method and the lagrangian method to adjust the weights of multipliers. (mowlavi & nabi, 2021) also adopts the same formulation but uses a line search to find the largest weight such that the pde error stays within a certain range. the key limitation of these approaches is that heuristically tuning the weights of multipliers might be sub-optimal and unstable. another class of methods trains an operator network from control variables to solutions of pdes or objective functions. several works (xue et al., 2020; sun et al., 2021; beatson et al., 2020) use mesh-based methods and predict states on all mesh points from control variables at the same time.
pi-deeponet (wang et al., 2021a;c) adopts the architecture of deeponet (lu et al., 2019) and trains the network using physics-informed losses (pde losses). however, such operator networks produce unsatisfactory results when the optimal solution is out of distribution (lanthaler et al., 2022).

bi-level optimization in machine learning. bi-level optimization is widely used in machine learning tasks such as neural architecture search (liu et al., 2018), meta learning (rajeswaran et al., 2019) and hyperparameter optimization (lorraine et al., 2020; bao et al., 2021). one of the key challenges is computing hypergradients through the inner loop optimization (liu et al., 2021). some previous works (maclaurin et al., 2015; liu et al., 2018) use unrolled or truncated unrolled optimization, which differentiates through the optimization process; this is not scalable when the inner loop optimization is complicated. other works (lorraine et al., 2020; clarke et al., 2021; rajeswaran et al., 2019) compute the hypergradients based on the implicit function theorem, which requires computing an inverse hessian-vector product (inverse-hvp). lorraine et al. (2020) propose to approximate it with a neumann series, and some works use the conjugate gradient method (pedregosa, 2016). the quality of this approximation is crucial for the accuracy of the computed hypergradients (grazzi et al., 2020).

methodology

preliminaries

let y, u, v be three banach spaces. the solution fields of the pdes are called state variables, i.e., y ∈ y_ad ⊂ y, and the functions or variables we can control are called control variables, i.e., u ∈ u_ad ⊂ u, where y_ad, u_ad are the admissible spaces, e.g., a subspace parameterized by neural networks or by a finite element basis. the pde constrained optimization problem can be formulated as:

  min_{y∈y_ad, u∈u_ad} j(y, u),  s.t.  e(y, u) = 0,     (1)

where j : y × u → r is the objective function and e : y × u → v represents the pde constraints.
usually, the pde system e(y, u) = 0 contains multiple equations and boundary/initial conditions:

  f(y, u)(x) = 0, ∀x ∈ ω,
  b(y, u)(x) = 0, ∀x ∈ ∂ω,

where f : y × u → (ω → r^{d1}) is the differential operator representing the pdes and b : y × u → (ω → r^{d2}) represents the boundary/initial conditions. existing methods based on regularization (e.g., the penalty method) solve the pdeco problem by minimizing the following objective (lu et al., 2021):

  min_{w,θ} ĵ = j(y_w, u_θ) + ∫_ω |λ1 · f(y_w, u_θ)(x)|² dx + ∫_{∂ω} |λ2 · b(y_w, u_θ)(x)|² dx,     (3)

where the solution y and the control variable u are respectively parameterized by w ∈ r^m and θ ∈ r^n, with w being the weights of the pinns. the λ_i ∈ r^{d_i} are hyper-parameters balancing the terms of the optimization target. one main difficulty is that the λ_i are hard to set, and the results are sensitive to them due to the complex nature of the regularization terms (pde constraints). in general, large λ_i make it difficult to optimize the objective j, while small λ_i can result in a nonphysical solution y_w. besides, the optimal λ_i may also vary across different phases of training.

reformulating pdeco as bi-level optimization

to resolve the above challenges of regularization based methods, we first present a new perspective that interprets pdeco as a bi-level optimization problem (liu et al., 2021), which facilitates a new solver. specifically, we solve the following bi-level optimization problem:

  min_θ j(w∗, θ)  s.t.  w∗ = arg min_w e(w, θ).     (4)

in the outer loop, we minimize j with respect to θ given the optimal value w∗, and in the inner loop we optimize the pde losses of the pinns with θ fixed. the objective e of the inner loop sub-problem is:

  e = ∫_ω |f(y_w, u_θ)(x)|² dx + ∫_{∂ω} |b(y_w, u_θ)(x)|² dx.     (5)

by transforming the problem in eq. (1) into a bi-level optimization problem, the optimization of the pdes' state variables and control variables is decoupled, which relieves the headache of setting proper hyper-parameters λ_i in eq. (3).
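the contrast between the penalty objective and the bi-level formulation can be made concrete on a tiny toy. the sketch below uses a hypothetical discretized 1d poisson problem, with an exact linear solve standing in for inner-loop pinn training; the names and the discretization are illustrative, not the paper's setup:

```python
import numpy as np

# hypothetical discretized toy: -u'' = f on (0, 1), u(0) = u(1) = 0.
# w plays the role of the pinn-parameterized state, theta of the control f.
n = 31
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2            # discrete laplacian
u_target = np.sin(np.pi * x)                           # desired state

def objective(w):
    # the pdeco objective j: misfit between the state and the target
    return np.mean((w - u_target) ** 2)

def penalty_loss(w, theta, lam):
    # regularization-based objective: j plus a weighted pde residual
    residual = A @ w - theta
    return objective(w) + lam * np.mean(residual ** 2)

def bilevel_objective(theta):
    # bi-level form: the inner problem is solved first (an exact linear
    # solve stands in for inner-loop pinn training), then j is evaluated
    w_star = np.linalg.solve(A, theta)
    return objective(w_star)

# for the analytic optimum f = pi^2 sin(pi x), the state matches the target
theta_opt = np.pi ** 2 * np.sin(np.pi * x)
print(bilevel_objective(theta_opt))        # close to 0 (discretization error)
```

here `penalty_loss` couples the two terms through the weight `lam`, which is exactly the balancing problem described above, while `bilevel_objective` never needs such a weight.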
to solve this bi-level optimization problem, we design inner and outer loops that are executed iteratively. as shown in figure 1, we train the pinns in the inner loop with the pde losses in eq. (5); in the outer loop, we compute the hypergradients based on implicit function differentiation, inspired by lorraine et al. (2020).

figure 1: illustration of our bi-level optimization framework (bpn) for solving pde constrained optimization problems. in each iteration, we compute the hypergradients of the control parameters θ using ift differentiation in the outer loop. we calculate the inverse vector-hessian product with broyden's method, which uses a low rank approximation for acceleration. then, in the inner loop, we fine-tune the pinns using the pde losses only.

along this line, we need to calculate a highly complex inverse hessian-jacobian product. to address this issue, we propose to use broyden's method, which provides an efficient approximation at a superlinear convergence speed (rodomanov & nesterov, 2021). in particular, we fine-tune the pinns using the pde losses in each iteration of the inner loop. we then compute the gradients of j with respect to the parameters θ of the control variables in the outer loop; these are also known as hypergradients in bi-level optimization and are detailed in the following section.

hypergradients computation using broyden iterations

the upper-level objective j depends on the optimum w∗ of the lower level optimization:

  dj/dθ = ∂j/∂θ + (∂j/∂w∗) · (∂w∗/∂θ).

thus we need the jacobian of w∗ with respect to θ when calculating the hypergradients. since w∗ minimizes the lower level problem, we can derive ∂w∗/∂θ by applying the cauchy implicit function theorem (lorraine et al., 2020):

proposition 1 (proof in appendix b.1). if for some (w′, θ′) the lower level optimization is solved, i.e., ∂e/∂w |_(w′,θ′) = 0, and the hessian ∂²e/∂w∂wᵀ is invertible, then there exists a function w∗ = w∗(θ) in a neighborhood of (w′, θ′) such that
∂e/∂w |_(w∗(θ),θ) = 0 in that neighborhood, and we have:

  ∂w∗/∂θ = −(∂²e/∂w∂wᵀ)⁻¹ · ∂²e/∂w∂θᵀ.

by proposition 1, we can compute the hypergradients analytically as:

  dj/dθ = ∂j/∂θ − (∂j/∂w∗) · (∂²e/∂w∂wᵀ)⁻¹ · ∂²e/∂w∂θᵀ.

however, computing the inverse of the hessian matrix ∂²e/∂w∂wᵀ is intractable for the parameters of neural networks. to handle this challenge, we first compute z∗ ≜ (∂j/∂w∗) · (∂²e/∂w∂wᵀ)⁻¹, the inverse vector-hessian product. previous works (lorraine et al., 2020) use the neumann series to approximate z∗ with a linear convergence speed, but in practice this is often a coarse and imprecise estimation of the hypergradients (grazzi et al., 2020). here we employ a more efficient and effective approach that enjoys a superlinear convergence speed (rodomanov & nesterov, 2021): computing z∗ is equivalent to finding the root of the linear equation

  g_w(z) = (∂²e/∂w∂wᵀ) · z − (∂j/∂w∗)ᵀ = 0.     (9)

note that each evaluation of g_w(z) only needs two jacobian-vector products with low computational cost, and does not require instantiating the full hessian matrix. specifically, we use a low rank broyden's method (broyden, 1965; rodomanov & nesterov, 2021) to iteratively approximate the root z∗. in each iteration, we first approximate the inverse of ∂²e/∂w∂wᵀ as (∂²e/∂w∂wᵀ)⁻¹ ≈ b_i = −i + u_k v_kᵀ, and we update u, v and z according to the following rules:

  z_{i+1} = z_i − α · b_i g_i(z_i),
  u_{i+1} = (∆z_{i+1} − b_i ∆g_{i+1}) / ((∆z_{i+1})ᵀ b_i ∆g_{i+1}),
  v_{i+1} = b_i ∆z_{i+1},

where ∆z_{i+1} = z_{i+1} − z_i, ∆g_{i+1} = g_{i+1} − g_i, and α is the step size (usually set to 1 or chosen by line search). in summary, the approximation of the inverse hessian is updated by

  b_{i+1} = b_i + (∆z_{i+1} − b_i ∆g_{i+1}) / ((∆z_{i+1})ᵀ b_i ∆g_{i+1}) · ∆z_{i+1}ᵀ b_i.     (14)

after m iterations, we use z_m as the approximation of z∗. we store the u_i and v_i in low rank matrices, displayed as the two thin red matrices in figure 1. since we use a low rank approximation of the inverse hessian, we do not need to store the whole matrix and only record u_k and v_k, for k = 1, . . . ,
k, where k is a tunable parameter depending on the memory limit. we run broyden iterations until the maximum number of iterations is reached or the error falls below a threshold. our method is named bi-level physics-informed neural networks with broyden's hypergradients (bpn); its pseudo code is outlined in algorithm 1. given a pdeco problem, we initialize the pinns with random parameters w0 and an initial guess of the control parameters θ0. first, we train the pinns under the initial control θ0 for n_w epochs as a warm up (the influence of the hyperparameters is discussed in appendix c). then, we compute the hypergradients for θ using broyden's method and update θ with gradient descent. after that, we fine-tune the pinns under θ for n_f epochs. these two steps are executed iteratively until convergence.

algorithm 1: bi-level physics-informed neural networks with broyden's hypergradients (bpn).
input: pinns u_w with parameters w0, initial guess θ0, loss functions e and j, warmup and finetune epochs n_w, n_f, learning rates ϵ1, ϵ2 for the pinns and θ respectively
output: optimal control parameters θ
1: train u_w under control θ0 for n_w epochs
2: for i = 1, 2, ..., n_iter do
3:   compute hypergradients ∇θ_i using broyden's method in eq (9) and eq (14)
4:   θ_{i+1} = θ_i − ϵ2 ∇θ_i
5:   train u_w under control θ_{i+1} for n_f epochs
6:   if converged then
7:     break
8:   end if
9: end for

theoretical analysis and discussion

error analysis of the hypergradients approximation. we now analyze the hypergradient approximation of bpn. due to the complexity of the problem and limited computational resources, it is extremely difficult to calculate the hypergradients exactly (grazzi et al., 2020); bpn, which uses broyden's method based on implicit function differentiation, is also an approximation. the approximation error, or the convergence rate of the hypergradients, is therefore one of the key factors determining performance.
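the broyden computation of the inverse vector-hessian product in eq (9) and eq (14) can be sketched in numpy using only hessian-vector products. this is a simplified sketch, not the paper's implementation: a small spd matrix stands in for the pinn hessian, and the inverse approximation starts from +i rather than the −i written above, which suits this sign convention:

```python
import numpy as np

def broyden_solve(matvec, b, m=30, tol=1e-10):
    # solve H z = b using only hessian-vector products, with the low rank
    # ("good") broyden update B <- B + (dz - B dg) / (dz^T B dg) dz^T B.
    # B is never materialized; it is kept as rank-one factors u_k, v_k.
    z = np.zeros_like(b)
    us, vs = [], []

    def B(vec):                        # B @ vec = vec + sum_k u_k (v_k @ vec)
        out = vec.copy()
        for u, v in zip(us, vs):
            out = out + u * (v @ vec)
        return out

    def BT(vec):                       # B^T @ vec, used for v_{k+1} = B^T dz
        out = vec.copy()
        for u, v in zip(us, vs):
            out = out + v * (u @ vec)
        return out

    g = matvec(z) - b
    for _ in range(m):
        if np.linalg.norm(g) < tol:
            break
        z_new = z - B(g)               # quasi-newton step with step size 1
        g_new = matvec(z_new) - b
        dz, dg = z_new - z, g_new - g
        Bdg = B(dg)
        denom = dz @ Bdg
        if abs(denom) < 1e-14:
            break
        us.append((dz - Bdg) / denom)  # rank-one secant correction
        vs.append(BT(dz))              # note: zip ignores the just-added u
        z, g = z_new, g_new
    return z

# usage: inverse vector-hessian product for a small spd "hessian"
rng = np.random.default_rng(0)
Q = rng.standard_normal((6, 6))
H = Q @ Q.T + 6.0 * np.eye(6)          # spd and well conditioned
b = rng.standard_normal(6)
z = broyden_solve(lambda v: H @ v, b)
print(np.linalg.norm(H @ z - b))       # small residual
```

each iteration costs one hessian-vector product plus o(k·dim) work for the rank-one factors, which is the memory/compute trade-off described above.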
here we show that the approximation error can be bounded under mild assumptions when broyden's method is used. inspired by (grazzi et al., 2020; ji et al., 2021), we prove that the approximation error decomposes into two parts: the former term is induced by the inexactness of the inner loop optimization, and the latter term is caused by the linear solver. moreover, since our hypergradients are approximated by solving the linear equation with broyden's method, they enjoy a superlinear convergence rate (rodomanov & nesterov, 2021), which is superior to other methods such as the neumann series (lorraine et al., 2020). specifically, we state the assumptions below; we denote ∇1 = ∂/∂w and ∇2 = ∂/∂θ, and write d2 = d/dθ for the total derivative.

assumption 1. for all (w, θ):
• ∇²1 e is invertible with ∥(∇²1 e)⁻¹∥ ≤ 1/µ, and ∇²1 e and ∇1∇2 e are lipschitz continuous with constants ρ and τ respectively;
• ∇1 j, ∇2 j, ∇1 e and ∇2 e are lipschitz continuous with constant l;
• the inner loop optimization is solved by an iterative algorithm satisfying ∥w_t − w∗∥2 ≤ p_t ∥w∗∥2 with p_t < 1 and p_t → 0 as t → ∞, where t is the number of iterations.

the above assumptions hold for most practical pdeco problems. suppose the linear equation in eq. (9) is solved with an m-step broyden's method. denote κ = l/µ, and let d2 j_t and d2 j be the computed and the exact hypergradients for θ respectively; we have the following theorem.

theorem 1 (proof in appendix b.2). if assumption 1 holds and the linear equation eq. (9) is solved with broyden's method, then there exists a constant m > 0 such that at iteration t the error ∥d2 j_t − d2 j∥ is bounded by the sum of two terms: a term proportional to p_t ∥w∗∥2, with a factor depending on l, τ, m and µ, induced by the inexact inner loop optimization, and a term depending on m and κ that decays superlinearly with the number m of broyden iterations.

the theorem provides a rigorous guarantee that the computed hypergradients are close to the real hypergradients when the inner loop optimization, i.e., the pinns' training, is solved accurately.
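the claim that the ift-based hypergradients match the true gradients when the inner problem is solved exactly can be sanity-checked on a scalar toy problem. this is a hedged sketch with hypothetical inner and outer losses, not the paper's pinn setting:

```python
import numpy as np

# hypothetical scalar instance of proposition 1:
# inner loss  e(w, th) = 0.5 * (w - 2*th)**2, so the inner optimum is w*(th) = 2*th
# outer loss  j(w, th) = w**2 + th**2
def hypergradient(th):
    w_star = 2.0 * th                         # exact inner solution
    d2e_dw2 = 1.0                             # hessian of e w.r.t. w
    d2e_dwdth = -2.0                          # mixed second derivative
    dws_dth = -d2e_dwdth / d2e_dw2            # implicit function theorem
    # total derivative: partial dj/dth plus (dj/dw*) * (dw*/dth)
    return 2.0 * th + 2.0 * w_star * dws_dth

# compare against a finite difference of j(w*(th), th) = (2 th)^2 + th^2 = 5 th^2
def fd_hypergradient(th, eps=1e-6):
    j = lambda t: (2.0 * t) ** 2 + t ** 2
    return (j(th + eps) - j(th - eps)) / (2.0 * eps)

print(hypergradient(1.5), fd_hypergradient(1.5))   # both close to 15.0
```

here the inner problem is solved exactly, so only the "linear solver" term of the error bound would remain; in bpn that role is played by the broyden iterations.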
we observe that the convergence rate depends on the inner loop optimization rate p_t, which is usually locally linear for gradient descent (jiao et al., 2022), and on a superlinear term introduced by broyden's method (rodomanov & nesterov, 2021). the high convergence speed of broyden's method nearly eliminates the latter term within a few iterations, so the overall error is usually dominated by the first term, which is also verified in our experiments.

connections to traditional adjoint methods. our bpn and adjoint methods (herzog & kunisch, 2010) share the idea of solving a transposed linear system, here eq. (9), to reduce the computational cost. however, our bi-level optimization is a more general framework than the constrained optimization of adjoint methods, and it allows more flexible design choices such as replacing fem solvers with pinns. additionally, eq. (9) differs from the adjoint equation (herzog & kunisch, 2010): eq. (9) solves a system in the parameter space of neural networks, whereas the traditional adjoint equation corresponds to a field defined on the admissible space u or on meshes.

experiments

in this section, we conduct extensive experiments on practical pde constrained optimization problems to show the effectiveness of our method.

experimental setup and evaluation protocol

benchmark problems. we choose several classic and challenging pde constrained optimization problems covering boundary, domain and time-distributed control for both linear and non-linear equations; more details are given in appendix a. (1) poisson's equation. we solve this classic problem on an irregular domain that can be viewed as a prototype of a heat exchanger (diersch et al., 2011), a device widely used across many domains. we denote this problem poisson's 2d cg in table 1. (2) heat equation. a time-distributed control problem for the two dimensional heat equation, similar to (wang et al., 2021a) (denoted heat 2d). (3) burgers equation.
a time-distributed control problem for the burgers equation (burgers 1d), which arises in nonlinear acoustics and gas dynamics. (4)∼(6) navier-stokes equations. these are highly nonlinear equations characterizing the flow of fluids. we solve one shape optimization problem similar to (wang et al., 2021a) (denoted ns shape in table 1) and two boundary control problems (denoted ns 2inlets and ns backstep in table 1) (mowlavi & nabi, 2021).

baselines. to demonstrate the effectiveness and superiority of our method, we compare it with several recently proposed neural methods for pdeco. (1)∼(2) hpinn (lu et al., 2021): it trains pinns on the pde losses and the objective function jointly, with adaptively weighted regularization terms; there are two strategies for updating the weights, the penalty method and the augmented lagrangian method, which we denote hpinn-p and hpinn-a respectively. (3) pinn-ls (mowlavi & nabi, 2021): it also treats the objective function as a regularization term with adaptive weights, and uses line-search rules to find the maximum tolerated weight for the objective function. (4) pi-deeponet (wang et al., 2021a): a two-stage method that first trains a deeponet using physics-informed losses and then optimizes the objective through the operator network. apart from the methods above, we also implement several other bi-level optimization algorithms and measure their performance on these pdeco tasks. (5) truncated unrolled differentiation (trmd) (shaban et al., 2019): it assumes the inner loop optimization is solved by several gradient updates and uses reverse mode backpropagation to calculate the hypergradients. (6) t1 − t2 (luketina et al., 2016): it computes the hypergradients using the implicit function theorem but uses an identity matrix in place of the exact inverse of the hessian.
(7) neumann series (neumann) (lorraine et al., 2020): it uses the neumann series to iteratively approximate the inverse of the hessian, and uses vector-jacobian and vector-hessian products to avoid instantiating the hessian matrix. beyond these baselines, we also choose the adjoint method (herzog & kunisch, 2010), a traditional pdeco method, as a reference. we run the adjoint method with high fidelity finite element solvers (mitusch et al., 2019) to calculate reference solutions for these tasks. for linear problems, these can be viewed as ground truth solutions; however, the method is computationally expensive and sensitive to the initial guess, since it solves the whole system in each iteration.

hyperparameters and evaluation protocol. we use multi-layer perceptrons (mlps) with widths from 64∼128 and depths from 3∼5 for the different problems, and train them using the adam optimizer (kingma & ba, 2014) with a learning rate of 10⁻³. since the accuracy of the pinns of regularization based methods cannot be guaranteed, we resort to finite element methods (fem) to evaluate performance. the evaluation metric is the objective function of each problem, specified in appendix a. in every validation epoch, we save and interpolate the control variables, solve the system with fem, and calculate the objective function numerically. other details and method-specific hyperparameters are reported in appendix f. we run the experiments on a single 2080 ti gpu.

main results

based on the experimental results for all pdeco tasks in table 1, we have the following observations. first, our bpn achieves state-of-the-art performance compared with all baselines, which shows that bi-level optimization is an effective and scalable approach for large-scale pdeco problems. second, our bpn reaches nearly the globally optimal result for the linear equations (poisson's and heat equations).
for these equations, the problem is convex and the reference values provided by the adjoint method can be viewed as the ground truth. for non-linear problems, the reference values are locally optimal solutions, and our bpn sometimes outperforms the adjoint method. third, the update strategy for the loss weights is critical for regularization based methods like hpinn-p, hpinn-a and pinn-ls, which limits their performance in the absence of theoretical guidelines. a possible reason is that the balance between the objective function and the pde losses is sensitive and unstable for complex pdes.

table 1: main results for the performance comparison of different algorithms on several pdeco tasks (rows: poisson's 2d cg (2d), heat 2d (1d), burgers 1d (1d), ns shape (v), ns 2inlets (1d), ns backstep (1d); columns: objective (j) under initial guess, hpinn-p, hpinn-a, pinn-ls, pi-deeponet, ours, and reference values). a lower score means better performance. we bold the best results across all baselines apart from the reference values. "–" means that the method cannot solve the problem. "2d"/"1d"/"v" means the control variable is a 2d/1d function or a vector.

table 2: performance comparison of different strategies for computing hypergradients (rows as in table 1; columns: objective (j) under initial guess, trmd, t1 − t2, neumann, and broyden (ours)). a lower score means better performance. we bold the best results across all methods.

comparison with other bi-level optimization strategies

we list the performance of all bi-level optimization methods in table 2. first, our method using broyden's method for hypergradient computation achieves the best results; this is a natural consequence, since it gives the most accurate approximation of the response gradients. second, all bi-level optimization methods are effective at solving pdeco problems. third, in most cases the results are better when the hypergradients are more accurate.
for example, the neumann series and trmd use better approximations of the inverse hessian, and they perform better than t1 − t2.

experiments on iteration efficiency

to show that our method is computationally efficient, we compare the number of iterations required. since pi-deeponet is a two-stage method and fem is not based on gradient descent, we only choose the three regularization based methods as baselines. note that for our bpn we count the total number of inner loop iterations for fairness. we plot the values of the objective function j in figure 2. we find that our bpn is much more efficient, with a nearly linear convergence speed on the linear problems. this shows that the hypergradients provided by bi-level optimization are more stable and effective than those of regularization based methods. besides, we find that pinn-ls is more efficient than the hpinns. however, it is a common drawback that regularization based methods are unstable, and finding suitable loss weights may take considerable effort.

figure 2: results of the efficiency experiments on the poisson 2d cg (left) and heat 2d (right) problems.

fidelity of hypergradients and ablation studies

fidelity of hypergradients compared with other methods. in this experiment, we aim to show how accurate the hypergradients of these bi-level optimization methods are. since none of the tasks in our main experiments has a closed form solution for the response gradients, we conduct this experiment on a toy task: a one dimensional poisson's equation with an analytical solution (appendix a). we first compare our method with several other bi-level optimization methods. we measure the cosine similarity, defined as x·y/(∥x∥∥y∥) for any two vectors x and y, between the computed hypergradients and the analytical hypergradients; the results are shown in figure 3. note that the data are collected in the first 75 outer iterations, since all methods converge fast on this problem. the left part of the figure shows that broyden's method gives the most accurate hypergradients, with similarity close to 1. the other methods also provide a positive approximation that helps to optimize the control variable; however, the neumann series is less stable, and in some iterations it provides a negative estimation of the hypergradients.

figure 3: cosine similarity of the hypergradients for different methods (left) and different numbers of broyden iterations (right) on a toy task of poisson's equation.

fidelity of hypergradients using different numbers of broyden iterations. since broyden's method is an iterative algorithm, there is a trade-off between efficiency and performance. here we compare the fidelity of the hypergradients using different numbers of broyden iterations; we again use box plots, and the results are in the right part of figure 3. we observe that the hypergradients become more accurate as the number of broyden iterations increases, and the median cosine similarity exceeds 0.9 even with only 8 iterations, which shows the efficiency of broyden's method. we also find that the improvement is minor after 16 iterations for this simple problem: as the number of iterations increases, broyden's method converges fast, and the remaining error is dominated by the term caused by the inexactness of the inner loop optimization in theorem 1 and eq. (35). we conduct more ablation studies in the appendices, including the impact of the hyperparameters of broyden's method and of the pinns' optimization (appendix c), a comparison between our bpn and the continuous adjoint method with the adjoint pde solved by pinns (appendix g), a comparison of running times (appendix g), and more visualization results for these pdeco problems (appendix e).
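the fidelity comparison above can be sketched on a toy spd "hessian": approximate the inverse vector-hessian product with the truncated neumann series baseline (lorraine et al., 2020) and score it against an exact solve using the cosine similarity. all matrices and sizes here are illustrative, not taken from the experiments:

```python
import numpy as np

def neumann_inverse_hvp(matvec, v, alpha, K):
    # truncated neumann series: H^{-1} v ~= alpha * sum_{k<=K} (I - alpha H)^k v,
    # valid when ||I - alpha H|| < 1; only hessian-vector products are needed
    p, acc = v.copy(), v.copy()
    for _ in range(K):
        p = p - alpha * matvec(p)      # p = (I - alpha H)^k v
        acc = acc + p
    return alpha * acc

def cosine_similarity(x, y):
    # the fidelity metric used above: x.y / (||x|| ||y||)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

rng = np.random.default_rng(0)
Q = rng.standard_normal((5, 5))
H = Q @ Q.T + 5.0 * np.eye(5)          # toy spd "hessian"
v = rng.standard_normal(5)
alpha = 1.0 / np.linalg.norm(H, 2)     # guarantees convergence for spd H
exact = np.linalg.solve(H, v)
sims = [cosine_similarity(neumann_inverse_hvp(lambda x: H @ x, v, alpha, K), exact)
        for K in (2, 10, 100)]
print(sims)                            # approaches 1 as K grows
```

the linear convergence of the series is what makes short truncations a coarse estimate, matching the instability of the neumann baseline observed above.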
conclusions

in this paper, we proposed a novel bi-level optimization framework named bi-level physics-informed neural networks (bpn) for solving pde constrained optimization problems. we use pinns to solve the inner loop optimization and broyden's method to compute the hypergradients. experiments on multiple pdeco tasks, including problems with complex geometry and non-linear navier-stokes equations, verify the effectiveness of our method. as for potential negative societal impact, the interpretability of pinns is not comparable with that of traditional numerical solvers, which we leave for future work.

reproducibility statement

we ensure the reproducibility of our paper from three aspects. (1) experiments: the implementation of our experiments is described in sec. 5.1, and the ablation studies are in sec. 5.5; further details are in appendix a and appendix c. (2) code: our code is included in the supplementary materials. (3) theory and method: complete proofs of the theoretical results are provided in appendix b.

ethics statement

pde constrained optimization has a wide range of real-world applications in science and engineering, including physics, fluid dynamics, heat engineering and the aerospace industry. our bpn is a general framework for pdeco and thus might accelerate the development of these fields. the potential negative impact is that methods based on neural networks like pinns lack theoretical guarantees and interpretability; accident investigation becomes more difficult if such unexplainable models are deployed in risk-sensitive areas. a possible mitigation is to develop more explainable and robust methods with better theoretical guidance, or to add corner case protection when they are applied in risk-sensitive areas.

acknowledgement

this work was supported by the national key research and development program of china (2020aaa0106000, 2020aaa0106302, 2021yfb2701000), nsfc projects (nos.
62061136001, 62076147, u19b2034, u1811461, u19a2081, 61972224), bnrist (bnr2022rc01006), tsinghua institute for guo qiang, and the high performance computing center, tsinghua university. j.z. was also supported by the xplorer prize.

references

shaojie bai, vladlen koltun, and j zico kolter. multiscale deep equilibrium models. advances in neural information processing systems, 33:5238–5250, 2020.
fan bao, guoqiang wu, chongxuan li, jun zhu, and bo zhang. stability and generalization of bilevel programming in hyperparameter optimization. advances in neural information processing systems, 34, 2021.
alex beatson, jordan ash, geoffrey roeder, tianju xue, and ryan p adams. learning composable energy surrogates for pde order reduction. advances in neural information processing systems, 33:338–348, 2020.
charles g broyden. a class of methods for solving nonlinear simultaneous equations. mathematics of computation, 19(92):577–593, 1965.
siddhartha p chakrabarty and floyd b hanson. optimal control of drug delivery to brain tumors for a distributed parameters model. in proceedings of the 2005 american control conference, pp. 973–978. ieee, 2005.
qun chen, moran wang, ning pan, and zeng-yuan guo. optimization principles for convective heat transfer. energy, 2009.
ross m clarke, elre t oldewage, and josé miguel hernández-lobato. scalable one-pass optimisation of high-dimensional weight-update hyperparameters by implicit differentiation. arxiv preprint arxiv:2110.10461, 2021.
juan carlos de los reyes and carola-bibiane schönlieb. image denoising: learning the noise model via nonsmooth pde-constrained optimization. inverse problems & imaging, 7(4):1183, 2013.
h-jg diersch, d bauer, w heidemann, wolfram rühaak, and peter schätzl. finite element modeling of borehole heat exchanger systems: part 2. numerical simulation. computers & geosciences, 37(8):1136–1147, 2011.
riccardo grazzi, luca franceschi, massimiliano pontil, and saverio salzo. on the iteration complexity of hypergradient computation.
in international conference on machine learning, pp. 3748–3758. pmlr, 2020.
roland herzog and karl kunisch. algorithms for pde-constrained optimization. gamm-mitteilungen, 33(2):163–176, 2010.
raymond m hicks and preston a henne. wing design by numerical optimization. journal of aircraft, 15(7):407–412, 1978.
kaiyi ji, junjie yang, and yingbin liang. bilevel optimization: convergence analysis and enhanced design. in international conference on machine learning, pp. 4882–4892. pmlr, 2021.
yuling jiao, yanming lai, dingwei li, xiliang lu, fengru wang, jerry zhijian yang, et al. a rate of convergence of physics informed neural networks for the linear second order elliptic pdes. communications in computational physics, 31(4):1272–1295, 2022.
carl t kelley. iterative methods for linear and nonlinear equations. siam, 1995.
diederik p kingma and jimmy ba. adam: a method for stochastic optimization. arxiv preprint arxiv:1412.6980, 2014.
samuel lanthaler, siddhartha mishra, and george e karniadakis. error estimates for deeponets: a deep learning framework in infinite dimensions. transactions of mathematics and its applications, 6(1):tnac001, 2022.
hanxiao liu, karen simonyan, and yiming yang. darts: differentiable architecture search. arxiv preprint arxiv:1806.09055, 2018.
risheng liu, jiaxin gao, jin zhang, deyu meng, and zhouchen lin. investigating bi-level optimization for learning and vision from a unified perspective: a survey and beyond. ieee transactions on pattern analysis and machine intelligence, 2021.
jonathan lorraine, paul vicol, and david duvenaud. optimizing millions of hyperparameters by implicit differentiation. in international conference on artificial intelligence and statistics, pp. 1540–1552. pmlr, 2020.
lu lu, pengzhan jin, and george em karniadakis. deeponet: learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. arxiv preprint arxiv:1910.03193, 2019.
lu lu, raphael pestourie, wenjie yao, zhicheng wang, francesc verdugo, and steven g johnson.
physics-informed neural networks with hard constraints for inverse design. siam journal on scientific computing, 43(6):b1105–b1132, 2021.
jelena luketina, mathias berglund, klaus greff, and tapani raiko. scalable gradient-based tuning of continuous regularization hyperparameters. in international conference on machine learning, pp. 2952–2960. pmlr, 2016.
dougal maclaurin, david duvenaud, and ryan adams. gradient-based hyperparameter optimization through reversible learning. in international conference on machine learning, pp. 2113–2122. pmlr, 2015.
sebastian k mitusch, simon w funke, and jørgen s dokken. dolfin-adjoint 2018.1: automated adjoints for fenics and firedrake. journal of open source software, 4(38):1292, 2019.
saviz mowlavi and saleh nabi. optimal control of pdes using physics-informed neural networks. arxiv preprint, 2021.
yatin nandwani, abhishek pathak, and parag singla. a primal dual formulation for deep learning with constraints. advances in neural information processing systems, 32, 2019.
james ng and stevan dubljevic. optimal boundary control of a diffusion–convection-reaction pde model with time-dependent spatial domain: czochralski crystal growth process. chemical engineering science, 67(1):111–119, 2012.
fabian pedregosa. hyperparameter optimization with approximate gradient. in international conference on machine learning, pp. 737–746. pmlr, 2016.
nestor v queipo, raphael t haftka, wei shyy, tushar goel, rajkumar vaidyanathan, and p kevin tucker. surrogate-based analysis and optimization. progress in aerospace sciences, 41(1):1–28, 2005.
maziar raissi, paris perdikaris, and george e karniadakis. physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. journal of computational physics, 378:686–707, 2019.
aravind rajeswaran, chelsea finn, sham m kakade, and sergey levine. meta-learning with implicit gradients. advances in neural information processing systems, 32, 2019.
anton rodomanov and yurii nesterov. greedy quasi-newton methods with explicit superlinear convergence. siam journal on optimization, 31(1):785–811, 2021.
amirreza shaban, ching-an cheng, nathan hatch, and byron boots. truncated back-propagation for bilevel optimization. in the 22nd international conference on artificial intelligence and statistics, pp. 1723–1732. pmlr, 2019.
xingyuan sun, tianju xue, szymon rusinkiewicz, and ryan p adams. amortized synthesis of constrained configurations using a differentiable surrogate. advances in neural information processing systems, 34, 2021.
sifan wang, mohamed aziz bhouri, and paris perdikaris. fast pde-constrained optimization via self-supervised operator learning. arxiv preprint arxiv:2110.13297, 2021a.
sifan wang, yujun teng, and paris perdikaris. understanding and mitigating gradient flow pathologies in physics-informed neural networks. siam journal on scientific computing, 43(5):a3055–a3081, 2021b.
sifan wang, hanwen wang, and paris perdikaris. learning the solution operator of parametric partial differential equations with physics-informed deeponets. science advances, 7(40):eabi8605, 2021c.
tianju xue, alex beatson, sigrid adriaenssens, and ryan adams. amortized finite element analysis for fast pde-constrained optimization. in international conference on machine learning, pp. 10638–10647. pmlr, 2020.
olek c zienkiewicz, robert l taylor, and jian z zhu. the finite element method: its basis and fundamentals. elsevier, 2005.

a details of pdeco tasks

in this section, we give a detailed mathematical description of the pdeco tasks.

a.1 2d poisson's equation with complex geometry (poisson's 2d cg)

in this problem, we optimize a two-dimensional poisson's equation defined on a complex geometry, a prototype of the heat exchanger (diersch et al., 2011) that is widely used in multiple domains. we solve the equation on the rectangular area ω_r = [−4, 4]² minus four circles ω_c^i = {(x, y) : (x − x_i)² + (y − y_i)² ⩽ r_i²}, where x_i = ±2.4, y_i = ±2.4 and r_i = 0.8.
we define a field variable $u(x, y) \in \mathbb{R}$ on the domain $\Omega = \Omega_R \setminus \bigcup_{i=1}^{4} \Omega^c_i$. the control source $f(x, y) \in \mathbb{R}$ is distributed on a circle $\chi = \{(x, y) : x^2 + y^2 \leqslant 1.6^2\}$. our goal is to solve the following optimization problem,

$$\min_f \; J = \frac{1}{|\Omega|}\int_\Omega u \,\mathrm{d}x \quad \text{s.t.} \quad \Delta u = -f\,\mathbb{1}\{x \in \chi\}, \; x \in \Omega; \quad u = 1, \; x \in \partial\Omega_R; \quad u = 0, \; x \in \partial\Omega^c_i,$$

where $\Omega^c_i = \{(x, y) : (x - x_i)^2 + (y - y_i)^2 \leqslant r_i^2\}$. the geometry of this problem is visualized in figure 4. the source function is distributed in a circle at the origin with radius 1.6 (not displayed there).

a.2 time distributed control of 2d heat equation (heat 2d)

in this problem, our goal is to solve an optimal control task for a system governed by the heat equation. the temperature field $u(x, y, t) \in \mathbb{R}$ is defined on a rectangular domain $\Omega = [0, 1]^2$ with time $t \in [0, 2]$, and the control signal is a function $f(t) \in \mathbb{R}$ that depends on time but not on the spatial coordinates $x$ and $y$. our goal is to make $u$ close to a target function $\hat u(x, y, t)$,

$$\min_f \; J = \int_0^2\!\!\int_\Omega |u - \hat u|^2 \,\mathrm{d}x\,\mathrm{d}t \quad \text{s.t.} \quad \frac{\partial u}{\partial t} - \nu\Delta u = f, \; (x, y, t) \in \Omega \times [0, 2]; \quad u(x, y, t) = 0, \; (x, y, t) \in \partial\Omega \times [0, 2]; \quad u(x, y, 0) = 0, \; (x, y) \in \Omega.$$

the coefficient is $\nu = 0.001$ and the target function is chosen as $\hat u = 32x(1 - x)y(1 - y)\sin(\pi t)$. we choose $f(t) = 0.1$ as the initial guess for the problem. the solution at timestep $t = 2.0$ is shown in figure 6.

a.3 time distributed control of 1d burgers equation (burgers 1d)

the burgers equation is a nonlinear pde widely used in areas such as applied mathematics, fluid mechanics, gas dynamics, traffic flow, and nonlinear acoustics. here we use the viscous burgers equation as an example, which is a dissipative system. the field variable $u(x, t)$ is defined on $\Omega = [-1, 1]$ with $t \in [0, 1]$. the control variable $f(t)$ depends only on time.
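to make the inner forward solve concrete, here is a minimal jacobi iteration for a poisson equation $\Delta u = -f$ with zero dirichlet boundary on a plain unit square; this is a deliberately simplified stand-in for the complex-geometry domain above, and the grid size and iteration count are our own arbitrary choices.

```python
import numpy as np

def solve_poisson(f, h, iters=5000):
    """jacobi iteration for laplace(u) = -f with u = 0 on the boundary.
    f: (n, n) source grid including boundary nodes, h: grid spacing."""
    u = np.zeros_like(f)
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:]
                                + h * h * f[1:-1, 1:-1])
    return u

n = 33
h = 1.0 / (n - 1)
f = np.ones((n, n))        # constant unit source, for illustration only
u = solve_poisson(f, h)
```

any of the finite-element or physics-informed solvers referenced by the paper would replace this loop in practice; the point is only the shape of the constraint that the outer optimization must repeatedly satisfy.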
we aim to solve the following optimization problem,

$$\min_f \; J = \int_\Omega |u(x, 1) - \hat u|^2 \,\mathrm{d}x \quad \text{s.t.} \quad \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} - \nu\Delta u = f, \; x \in \Omega; \quad u(x, t) = 0, \; x \in \partial\Omega; \quad u(x, 0) = \sin(\pi x)e^{-2x^2}.$$

the diffusion coefficient is $\nu = 0.01$ and the target function is chosen as $\hat u = e^{-(x - 1\cdots}$, which is a symmetric wave. we use $f(t) = 0$ as the initial guess for this problem. the visualization of the initial solution is shown in figure 7.

figure 4: visualization of the geometry shape of poisson's 2d cg.
figure 5: initial guess solution of poisson's 2d cg (simulated using high-fidelity fem).
figure 6: initial guess solution for the heat 2d problem at final time t = 2.0. the result is simulated by high-fidelity fem.
figure 7: visualization of the initial guess solution for the burgers 1d problem. the horizontal axis is the spatial coordinate x and the vertical axis is time t.

a.4 inlet flow control for 2d steady navier-stokes equations (ns 2inlets)

the navier-stokes equations are among the most important equations in fluid mechanics, aerodynamics, and applied mathematics, and are notoriously difficult to solve due to their high nonlinearity. in this problem, we solve the ns equations in a pipeline and aim to find the best inlet flow distribution $f(y)$ that makes the outlet flow as uniform as possible. the flow velocity field is $u = (u, v)$ and the pressure field is $p$; both are defined on a rectangular domain $\Omega = [0, 1.5] \times [0, 1.0]$. we have two inlets, two outlets, and several walls for this domain, $\partial\Omega = \Gamma_{in}^1 \cup \Gamma_{in}^2 \cup \Gamma_{out}^1 \cup \Gamma_{out}^2 \cup \Gamma_w$. the whole problem is as follows,

$$\min_f \; J = \int_{\Gamma_{out}^1} |u - \hat u|^2 \,\mathrm{d}y \quad \text{s.t.} \quad (u \cdot \nabla)u = -\nabla p + \frac{1}{Re}\Delta u, \quad \nabla \cdot u = 0,$$
$$u = (f(y), 0), \; (x, y) \in \Gamma_{in}^1; \quad u = (0, v_2(x)), \; (x, y) \in \Gamma_{in}^2 \cup \Gamma_{out}^2; \quad u = 0, \; (x, y) \in \Gamma_w; \quad p = 0, \; (x, y) \in \Gamma_{out}^1.$$

the target function is a parabolic function $\hat u(x, y) = 4y(1 - y)$ and the velocity field on the second inlet and outlet is $v_2(x) = 18(x - 0.5)(1 - x)$. the reynolds number is set to 100 in this problem. we initialize the solution $f(y)$ the same as the target function, i.e., $f(y) = 4y(1 - y)$.
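the outlet objective above is just a one-dimensional quadrature over the first outlet; a minimal sketch (the grid resolution and trapezoid rule are our own choices, the target profile is the paper's $\hat u = 4y(1-y)$):

```python
import numpy as np

# velocity profiles on the first outlet, sampled on a uniform y-grid
y = np.linspace(0.0, 1.0, 201)
u_target = 4.0 * y * (1.0 - y)     # parabolic target \hat u = 4y(1 - y)
u_init = 4.0 * y * (1.0 - y)       # f(y) is initialized to the target profile

def outlet_cost(u, u_hat, y):
    """j = integral over the outlet of |u - \hat u|^2 dy (trapezoid rule)."""
    sq = (u - u_hat) ** 2
    return float(np.sum((sq[:-1] + sq[1:]) * np.diff(y)) / 2.0)
```

with the initial guess equal to the target, the cost is exactly zero; any perturbation of the outlet profile raises it.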
in this case, the outlet velocity and the whole velocity field are shown in figure 8.

figure 8: visualization of the initial guess solution for ns 2inlets. the top two images are the velocity fields in the x and y directions; the bottom one is the pressure field.
figure 9: geometry shape and collocation points for the ns 2d backstep problem.

a.5 drag minimization over an obstacle of ns equations (ns shape)

this problem is a shape optimization task: find the shape of the obstacle that minimizes the drag force from the flow. the inlet is the left side of the area and the outlet is the right side,

$$\Gamma_{in} = \{(0, y) : 0 \leqslant y \leqslant 8\}, \quad \Gamma_{out} = \{(8, y) : 0 \leqslant y \leqslant 8\}, \quad \Gamma_w = \partial\Omega \setminus (\Gamma_{in} \cup \Gamma_{out}).$$
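all of the tasks above share one structure: an outer objective $J$ driven by an inner pde solve. the toy below optimizes a scalar source amplitude for a 1d poisson problem with finite-difference gradients, as a stand-in (the setup and all names are ours) for the adjoint- and surrogate-based approaches this line of work builds on.

```python
import numpy as np

def solve_1d_poisson(amp, n=64):
    """solve -u'' = amp on (0, 1) with u(0) = u(1) = 0 via a direct solve."""
    h = 1.0 / n
    # dense assembly of the tridiagonal 1d laplacian, for brevity
    a = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    return np.linalg.solve(a, np.full(n - 1, amp))

def objective(amp, target=0.05):
    """outer pdeco objective: mean squared mismatch to a constant target."""
    u = solve_1d_poisson(amp)
    return float(np.mean((u - target) ** 2))

# gradient descent on the control with central finite differences
amp, lr, eps = 0.0, 50.0, 1e-4
for _ in range(200):
    grad = (objective(amp + eps) - objective(amp - eps)) / (2 * eps)
    amp -= lr * grad
```

for this quadratic problem the optimum can be checked analytically: the solution is $u = \mathrm{amp}\cdot x(1-x)/2$, so the best amplitude is close to 0.5 for a 0.05 target.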
9SDQB3b68K.pdf | 2,022 | 0 | dara: dynamics-aware reward augmentation in offline reinforcement learning jinxin liu123∗ hongyin zhang1∗ donglin wang13† 1 westlake university. 2 zhejiang university. 3 institute of advanced technology, westlake institute for advanced study. {liujinxin, zhanghongyin, wangdonglin}@westlake.edu.cn

abstract

offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. however, such a formulation is inevitably offline-data-hungry and, in practice, collecting a large offline dataset for one specific task over one specific environment is also costly and laborious. in this paper, we thus 1) formulate the offline dynamics adaptation problem, using (source) offline data collected from another dynamics to relax the requirement for extensive (target) offline data, 2) characterize the dynamics shift problem, in which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (dara) framework from both model-free and model-based offline settings. specifically, dara emphasizes learning from those source transition pairs that are adaptive for the target environment and mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the typical state-action distribution sketched by prior offline rl methods. the experimental evaluation demonstrates that dara, by augmenting rewards in the source offline dataset, can acquire an adaptive policy for the target environment and yet significantly reduce the requirement of target offline data. with only modest amounts of target offline data, our performance consistently outperforms the prior offline rl methods in both simulated and real-world tasks.
introduction

offline reinforcement learning (rl) (levine et al., 2020; lange et al., 2012), the task of learning from a previously collected dataset, holds the promise of acquiring policies without any of the costly active interaction required in the standard online rl paradigm. however, we note that although active trial-and-error (online exploration) is eliminated, the performance of offline rl methods heavily relies on the amount of offline data used for training. as shown in figure 1, performance deteriorates dramatically as the amount of offline data decreases. a natural question therefore arises: can we reduce the amount of (target) offline data without significantly affecting the final performance on the target task?

figure 1: solid and dashed lines denote offline medium-replay and medium-expert data in d4rl (walker2d), respectively.

borrowing the idea of transfer learning (pan & yang, 2010), we assume that we have access to another (source) offline dataset, hoping that we can leverage this dataset to compensate for the performance degradation caused by the reduced (target) offline dataset. in the offline setting, previous work (siegel et al., 2020; chebotar et al., 2021) has characterized the reward (goal) difference between the source and target, relying on "conflicting" or multi-goal offline datasets (fu et al., 2020), while we focus on the relatively unexplored transition dynamics difference between the source dataset and the target environment. meanwhile, we believe that this dynamics shift is not arbitrary in reality: in healthcare treatment, offline data for a particular patient is often limited, whereas we can obtain diagnostic data from other patients with the same case (same reward/goal), and there often exist individual differences between patients (a source dataset with different transition dynamics). careful treatment with respect to these individual differences is thus a crucial requirement.

∗ equal contribution. † corresponding author.
given source offline data, the main challenge is to cope with the transition dynamics difference, i.e., strictly tracking the state-actions supported by the source offline data cannot guarantee that the same transition (state-action-next-state) can be achieved in the target environment. however, in the offline setting, such dynamics shift is not explicitly characterized by previous offline rl methods, which typically attribute the difficulty of learning from offline data to the state-action distribution shift (chen & jiang, 2019; liu et al., 2018). the corresponding algorithms (fujimoto et al., 2019; abdolmaleki et al., 2018; yu et al., 2020), which model the support of the state-action distribution induced by the learned policy, will inevitably suffer from the transfer problem when dynamics shift happens. our approach is motivated by the well-established connection between reward modification and dynamics adaptation (kumar et al., 2020b; eysenbach & levine, 2019; eysenbach et al., 2021), which indicates that, by modifying rewards, one can train a policy in one environment and make the learned policy suitable for another environment (with different dynamics). thus, we propose to exploit the joint distribution of state-action-next-state: besides characterizing the state-action distribution shift as in prior offline rl algorithms, we additionally identify the dynamics shift (i.e., the shift in the conditional distribution of the next state given the current state-action pair) and penalize the agent with a dynamics-aware reward modification. intuitively, this reward modification discourages learning from those offline transitions that are likely in the source but unlikely in the target environment. unlike the concurrent work (ball et al., 2021; mitchell et al., 2021) that pays attention to offline domain generalization, we explicitly focus on offline domain (dynamics) adaptation.
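the dynamics-aware penalty can be previewed on a tabular toy: given source and target transition matrices, the modification $\log T'(s'|s,a) - \log T(s'|s,a)$ is positive (so the reward is pushed down) exactly on transitions that are likely in the source but unlikely in the target. both matrices below are invented for illustration.

```python
import numpy as np

# two states, one action: rows = current state, columns = next state
t_source = np.array([[0.9, 0.1],
                     [0.2, 0.8]])
t_target = np.array([[0.5, 0.5],
                     [0.2, 0.8]])

def dara_penalty(t_src, t_tgt, eps=1e-8):
    """delta_r(s, s') = log t_src(s'|s) - log t_tgt(s'|s);
    each source reward is then modified as r <- r - eta * delta_r."""
    return np.log(t_src + eps) - np.log(t_tgt + eps)

dr = dara_penalty(t_source, t_target)
```

row 1 has identical dynamics in both environments, so its transitions are left untouched; row 0 is reweighted toward the target-plausible next state.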
our principal contribution in this work is the characterization of the dynamics shift in offline rl and the derivation of the dynamics-aware reward augmentation (dara) framework, built on prior model-free and model-based formulations. dara is simple and general, can accommodate various offline rl methods, and can be implemented in just a few lines of code on top of the data loader at training time. for our offline dynamics adaptation setting, we also release a dataset, including the gym-mujoco tasks (walker2d, hopper and halfcheetah) with dynamics (mass, joint) shift compared to d4rl, and a 12-dof quadruped robot in both simulation and the real world. with only modest amounts of target offline data, we show that dara-based offline methods can acquire an adaptive policy for the target tasks and achieve better performance compared to baselines in both simulated and real-world tasks.

related work

offline rl describes the setting in which a learner has access to only a fixed dataset of experience, while no interactive data collection is allowed during policy learning (levine et al., 2020). prior work commonly assumes that the offline experience is collected by some behavior policies on the same environment that the learned policy will be deployed on. thus, the main difficulty of such an offline setting is the state-action distribution shift (fujimoto et al., 2019; liu et al., 2018). algorithms address this issue along two main directions: model-free and model-based offline rl. model-free methods for this setting typically fall under three categories: 1) typical methods mitigate this problem by explicitly (fujimoto et al., 2019; kumar et al., 2019; wu et al., 2019) or implicitly (siegel et al., 2020; peng et al., 2019; abdolmaleki et al., 2018) constraining the learned policy away from ood state-action pairs. 2) conservative-estimation-based methods learn pessimistic value functions to prevent overestimation (kumar et al., 2020a; xu et al., 2021).
3) importance-sampling-based methods directly estimate the state-marginal importance ratio and obtain an unbiased value estimation (zhang et al., 2020; nachum & dai, 2020; nachum et al., 2019b). model-based methods typically eliminate the state-action distribution shift by incorporating a reward penalty, which relies on uncertainty quantification of the learned dynamics (kidambi et al., 2020; yu et al., 2020). to remove this uncertainty estimation, yu et al. (2021) learn a conservative critic function by penalizing the values of generated state-action pairs that are not in the offline dataset. these methods, however, define their objectives based on the state-action distribution shift and ignore the potential dynamics shift between the fixed offline data and the target mdp. in contrast, we account for the dynamics (state-action-next-state) shift and explicitly propose the dynamics-aware reward augmentation. a counterpart close to our work is off-dynamics rl (eysenbach et al., 2021), where they set up dynamics shift in an interactive environment while we focus on the offline setting.

preliminaries

we study rl in the framework of markov decision processes (mdps) specified by the tuple $M := (S, A, r, T, \rho_0, \gamma)$, where $S$ and $A$ denote the state and action spaces, $r(s, a) \in [-r_{\max}, r_{\max}]$ is the reward function, $T(s'|s, a)$ is the transition dynamics, $\rho_0(s)$ is the initial state distribution, and $\gamma$ is the discount factor. the goal in rl is to optimize a policy $\pi(a|s)$ that maximizes the expected discounted return $\eta_M(\pi) := \mathbb{E}_{\tau \sim p_\pi}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)]$, where $\tau := (s_0, a_0, s_1, a_1, \ldots)$. we also define q-values $Q(s, a) := \mathbb{E}_{\tau \sim p_\pi}[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \mid s_0 = s, a_0 = a]$, v-values $V(s) := \mathbb{E}_{a \sim \pi(a|s)}[Q(s, a)]$, and the (unnormalized) state visitation distribution $d^\pi_M(s) := \sum_{t=0}^{\infty} \gamma^t P(s|\pi, M, t)$, where $P(s|\pi, M, t)$ denotes the probability of reaching state $s$ at time $t$ by running $\pi$ in $M$.
in the offline rl problem, we are provided with a static dataset $D := \{(s, a, r, s')\}$, which consists of transition tuples from trajectories collected by running one or more behavioral policies, denoted by $\pi_b$, on mdp $M$. with a slight abuse of notation, we write $D = \{(s, a, r, s') \sim d_D(s)\pi_b(a|s)r(s, a)T(s'|s, a)\}$, where $d_D(s)$ denotes the state-marginal distribution in $D$. in the offline setting, the goal is typically to learn the best possible policy using the fixed offline dataset.

model-free rl algorithms based on dynamic programming typically perform policy iteration to find the optimal policy. such methods iteratively conduct 1) policy improvement with $\mathcal{G}_M Q := \arg\max_\pi \mathbb{E}_{s \sim d^\pi_M(s), a \sim \pi(a|s)}[Q(s, a)]$ and 2) policy evaluation by iterating the bellman equation $Q(s, a) = \mathcal{B}^\pi_M Q(s, a) := r(s, a) + \gamma\mathbb{E}_{s' \sim T(s'|s, a), a' \sim \pi(a'|s')}[Q(s', a')]$ over $d^\pi_M(s)\pi(a|s)$. given an off-policy dataset $D$, we resort to 1) improvement with $\mathcal{G}_D Q := \arg\max_\pi \mathbb{E}_{s \sim d_D(s), a \sim \pi(a|s)}[Q(s, a)]$ and 2) evaluation by iterating $Q(s, a) = \mathcal{B}^\pi_D Q(s, a) := r(s, a) + \gamma\mathbb{E}_{s' \sim T_D(s'|s, a), a' \sim \pi(a'|s')}[Q(s', a')]$ over all $(s, a)$ in $D$. specifically, given any initial $Q_0$, it iterates (see footnote 1):

$$\text{policy improvement:} \quad \pi_{k+1} = \mathcal{G}_D Q_k, \qquad \text{policy evaluation:} \quad Q_{k+1} = \mathcal{B}^{\pi_{k+1}}_D Q_k. \tag{1}$$

model-free offline rl based on the above iteration suffers from the state-action distribution shift, i.e., policy evaluation $\mathcal{B}^{\pi_k}_D Q_{k-1}$ may encounter an unfamiliar state-action regime that is not covered by the fixed offline dataset $D$, causing erroneous estimation of $Q_k$. policy improvement $\mathcal{G}_D Q_k$ further exaggerates such error, biasing policy $\pi_{k+1}$ towards out-of-distribution (ood) actions with erroneously high q-values.
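the policy-evaluation half of this iteration can be checked on a one-state, one-action mdp, where repeated bellman backups must converge to $Q = r/(1-\gamma)$; the toy numbers below are our own.

```python
import numpy as np

def bellman_backup(q, r, p, pi, gamma=0.9):
    """one application of b^pi: q(s, a) <- r(s, a) + gamma E[q(s', a')].
    q, r, pi: (S, A) arrays; p: (S, A, S) transition tensor."""
    v = (pi * q).sum(axis=1)                      # v(s) = sum_a pi(a|s) q(s, a)
    return r + gamma * np.einsum('sat,t->sa', p, v)

# one state, one action, reward 1: the fixed point is 1 / (1 - 0.9) = 10
r = np.ones((1, 1))
p = np.ones((1, 1, 1))
pi = np.ones((1, 1))
q = np.zeros((1, 1))
for _ in range(500):
    q = bellman_backup(q, r, p, pi)
```

the offline operator $\mathcal{B}^\pi_D$ is the same backup with the expectation taken only over transitions present in the dataset, which is exactly where the distribution-shift issues described above enter.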
to address this distribution shift, prior works 1) explicitly constrain the policy to be close to the behavior policy (fujimoto et al., 2019; kumar et al., 2019; wu et al., 2019; ghasemipour et al., 2021), introducing a penalty $\alpha D(\pi(a|s), \pi_b(a|s))$ into $\mathcal{G}_D$ or $\mathcal{B}^\pi_D$ in equation 1:

$$\mathcal{G}_D Q = \arg\max_\pi \mathbb{E}_{s \sim d_D(s), a \sim \pi(a|s)}[Q(s, a) - \alpha D(\pi(a|s), \pi_b(a|s))],$$
$$\mathcal{B}^\pi_D Q(s, a) = r(s, a) + \gamma\mathbb{E}_{s' \sim T_D(s'|s, a), a' \sim \pi(a'|s')}[Q(s', a') - \alpha D(\pi(a'|s'), \pi_b(a'|s'))], \tag{2}$$

where $D$ is a divergence function between distributions over actions (e.g., mmd or kl divergence), or 2) train pessimistic value functions (kumar et al., 2020a; yu et al., 2021; xu et al., 2021), penalizing q-values at states in the offline dataset $D$ for actions generated by the current policy $\pi$:

$$Q = \arg\min_Q \mathbb{E}_{s \sim d_D(s), a \sim \pi(a|s)}[Q(s, a)], \quad \text{s.t.} \quad Q = \mathcal{B}^\pi_D Q. \tag{3}$$

model-based rl algorithms iteratively 1) model the transition dynamics $T(s'|s, a)$ using the data collected in $M$: $\max_{\hat T} \mathbb{E}_{s, a, s' \sim d^\pi_M(s)\pi(a|s)T(s'|s, a)}[\log \hat T(s'|s, a)]$, and 2) infer a policy $\pi$ from the modeled $\hat M = (S, A, r, \hat T, \rho_0, \gamma)$, where we assume that $r$ and $\rho_0$ are known, maximizing $\eta_{\hat M}(\pi)$ with a planner or dyna-style algorithms (sutton, 1990). in this paper, we focus on the latter.

model-based offline rl algorithms similarly suffer from ood state-actions (kidambi et al., 2020; cang et al., 2021) if we directly apply policy iteration over $\hat T := \arg\max_{\hat T} \mathbb{E}_{s, a, s' \sim D}[\log \hat T(s'|s, a)]$. like the conservative estimation approach described in equation 3, recent conservative model-based offline rl methods provide the policy with a penalty for visiting states under the estimated $\hat T$ where $\hat T$ is likely to be incorrect.
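the divergence penalty $D(\pi(\cdot|s), \pi_b(\cdot|s))$ in the constrained backup can be instantiated as a discrete kl divergence; the toy distributions and the choice $\alpha = 1$ below are our own.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """discrete kl divergence d_kl(p || q) over actions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def penalized_value(q_vals, pi, pi_b, alpha=1.0):
    """E_{a~pi}[q(s, a)] - alpha * d_kl(pi || pi_b), as in the constrained objective."""
    return float(np.dot(pi, q_vals)) - alpha * kl(pi, pi_b)

pi_b = [0.5, 0.5]             # behavior policy at some state
greedy = [0.99, 0.01]         # nearly deterministic on the high-q action
q_vals = [1.0, 0.0]
```

without the penalty ($\alpha = 0$) the greedy policy scores higher; with $\alpha = 1$ the divergence term dominates and the behavior-like policy is preferred, which is the mechanism the policy-constraint family relies on.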
taking $u(s, a)$ as the oracle uncertainty (yu et al., 2020) that provides a consistent estimate of the accuracy of the model $\hat T$ at $(s, a)$, we can modify the reward function to obtain a conservative mdp $\hat M_c = (S, A, r - \alpha u, \hat T, \rho_0, \gamma)$, then learn a policy $\pi$ by maximizing $\eta_{\hat M_c}(\pi)$.

footnote 1: for a parametric q-function, we often perform $Q_{k+1} \leftarrow \arg\min_Q \mathbb{E}_{(s, a) \sim D}[(\mathcal{B}^{\pi_{k+1}}_D Q_k(s, a) - Q(s, a))^2]$.

problem formulation

in the standard offline rl problem, the static offline dataset $D$ consists of samples $\{(s, a, r, s') \sim d_D(s)\pi_b(a|s)r(s, a)T(s'|s, a)\}$. although offline rl methods learn a policy for the target mdp $M := (S, A, r, T, \rho_0, \gamma)$ without (costly) online data, as we show in figure 1, they require a fair amount of (target) offline data $D$ collected on $M$. suppose we have another (source) offline dataset $D'$, consisting of samples $\{(s, a, r, s') \sim d_{D'}(s)\pi_{b'}(a|s)r(s, a)T'(s'|s, a)\}$ collected by the behavior policy $\pi_{b'}$ on mdp $M' := (S, A, r, T', \rho_0, \gamma)$; we then hope the transfer of knowledge between the offline datasets $\{D' \cup D\}$ can reduce the data requirements on $D$ for learning a policy for the target $M$.

dynamics shift in offline rl

although the offline rl methods in section 3 have incorporated state-action distribution constrained backups (policy constraints or conservative estimation), they also fail to learn an adaptive policy for the target mdp $M$ with the mixed datasets $\{D' \cup D\}$, as we show in figure 4 (appendix). we attribute this failure to the dynamics shift (definition 2) between $D'$ and $M$ in this adaptation setting.

definition 1 (empirical mdp) an empirical mdp estimated from $D$ is $\hat M := (S, A, r, \hat T, \rho_0, \gamma)$, where $\hat T = \arg\max_{\hat T} \mathbb{E}_{s, a, s' \sim D}[\log \hat T(s'|s, a)]$ and $\hat T(s'|s, a) = 0$ for all $(s, a, s')$ not in dataset $D$.

definition 2 (dynamics shift) let $\hat M := (S, A, r, \hat T, \rho_0, \gamma)$ be the empirical mdp estimated from $D$.
to evaluate a policy $\pi$ for $M := (S, A, r, T, \rho_0, \gamma)$ with offline dataset $D$, we say that the dynamics shift (between $D$ and $M$) in offline rl happens if there exists at least one transition pair $(s, a, s') \in \{(s, a, s') : d^\pi_{\hat M}(s)\pi(a|s)\hat T(s'|s, a) > 0\}$ such that $\hat T(s'|s, a) \neq T(s'|s, a)$.

in practice, for a stochastic $M$ and any finite offline dataset $D$ collected in $M$, there always exists a dynamics shift. the main concern is that finite samples are never sufficient to exactly model stochastic dynamics. following fujimoto et al. (2019), we thus assume both mdps $M$ and $M'$ are deterministic, which means the empirical $\hat M$ and $\hat M'$ are both also deterministic. more importantly, such an assumption enables us to explicitly characterize the dynamics shift under finite offline samples.

lemma 1 under deterministic transition dynamics, there is no dynamics shift between $D$ and $M$.

for offline rl tasks, prior methods generally apply $\mathcal{B}^\pi_D Q$ along with the state-action distribution correction (equations 2 and 3), which overlooks the potential dynamics shift between the (source) offline dataset and the target mdp (e.g., $D' \to M$). as a result, these methods do not scale well to the setting in which dynamics shift happens, e.g., learning an adaptive policy for $M$ with (source) $D'$.

dynamics shift in model-free and model-based offline formulations

let $S_\pi$ and $S'_\pi$ denote the sets $\{(s, a) : d_D(s)\pi(a|s) > 0\}$ and $\{(s, a) : d_{D'}(s)\pi(a|s) > 0\}$, respectively. from the model-free (policy iteration) view, an exact policy evaluation on $M$ is characterized by iterating $Q(s, a) = \mathcal{B}^\pi_M Q(s, a)$ for all $(s, a)$ such that $d^\pi_M(s)\pi(a|s) > 0$.
thus, to formalize the policy evaluation with offline $D$ or $D'$ (for an adaptive $\pi$ on the target $M$), we require that the bellman operator $\mathcal{B}^\pi_D Q(s, a)$ or $\mathcal{B}^\pi_{D'} Q(s, a)$ approximates the oracle $\mathcal{B}^\pi_M Q(s, a)$ for all $(s, a)$ in $S_\pi$ or $S'_\pi$.

1) to evaluate a policy $\pi$ for $M$ with $D$ (i.e., calling the bellman operator $\mathcal{B}^\pi_D$), the notable model-free offline method bcq (fujimoto et al., 2019) translates the requirement $\mathcal{B}^\pi_D = \mathcal{B}^\pi_M$ into the requirement $\hat T(s'|s, a) = T(s'|s, a)$. note that under deterministic environments, we have the property that for all $(s, a, s')$ in the offline data $D$, $\hat T(s'|s, a) = T(s'|s, a)$ (lemma 1). as a result, such a property permits bcq to evaluate a policy $\pi$ by calling $\mathcal{B}^\pi_M$, meanwhile constraining $S_\pi$ to be a subset of the support of $d_D(s)\pi_b(a|s)$. this means a policy $\pi$ which only traverses transitions contained in the (target) offline data $D$ can be evaluated on $M$ without error.

2) to evaluate a policy $\pi$ for $M$ with $D'$ (i.e., calling the bellman operator $\mathcal{B}^\pi_{D'}$, replacing the oracle $\mathcal{B}^\pi_D$), we have lemma 2:

lemma 2 dynamics shift produces $\mathcal{B}^\pi_{D'} Q(s, a) \neq \mathcal{B}^\pi_M Q(s, a)$ for some $(s, a)$ in $S'_\pi$.

with the offline data $D'$, lemma 2 suggests that the above requirement $\mathcal{B}^\pi_{D'} = \mathcal{B}^\pi_M$ becomes infeasible, which limits the practical applicability of prior offline rl methods under the dynamics shift. to be specific, characterizing an adaptive policy for the target mdp $M$ with $D'$ moves beyond the reach of off-policy evaluation based on iterating $Q = \mathcal{B}^\pi_{D'} Q$ (equations 2 and 3). such iteration may cause the evaluated $Q$ (or the learned policy $\pi$) to overfit to $\hat T'$ and struggle to adapt to the target $T$. to overcome the dynamics shift, we would like to resort to an additional compensation $\Delta_{\hat T', T}$ such that

$$\mathcal{B}^\pi_{D'} Q(s, a) + \Delta_{\hat T', T}(s, a) = \mathcal{B}^\pi_M Q(s, a) \quad \text{for all } (s, a) \text{ in } S'_\pi. \tag{4}$$

thus, we can apply $\mathcal{B}^\pi_{D'} Q + \Delta_{\hat T', T}$ to act as a substitute for the oracle $\mathcal{B}^\pi_M Q$.
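for deterministic tabular dynamics the compensation has a closed form: $\Delta(s, a) = \gamma[V(T(s, a)) - V(T'(s, a))]$, and adding it to the source backup recovers the target backup exactly. the three-state toy mdp and value function below are invented for illustration.

```python
import numpy as np

gamma = 0.9
v = np.array([0.0, 1.0, 5.0])        # a fixed value function over 3 states
t_target = {0: 2, 1: 2, 2: 2}        # deterministic next-state map of the target
t_source = {0: 1, 1: 2, 2: 2}        # source dynamics differ at state 0
r = 1.0                              # a constant reward, for simplicity

def backup(s, t, v):
    """deterministic bellman backup r + gamma * v(next state)."""
    return r + gamma * v[t[s]]

def delta(s, v):
    """compensation delta(s, a) = gamma * (v(target next) - v(source next))."""
    return gamma * (v[t_target[s]] - v[t_source[s]])
```

wherever the two dynamics agree, the compensation vanishes, so only the shifted transitions are corrected.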
from the model-based view, the oracle $\eta_M(\pi)$ (calling the bellman operator $\mathcal{B}^\pi_M$ on the target $M$) and the viable $\eta_{\hat M'}(\pi)$ (calling $\mathcal{B}^\pi_{\hat M'}$ on the $\hat M'$ estimated from the source $D'$) satisfy the following lemma.

lemma 3 let $\mathcal{B}^\pi_M V(s) = \mathbb{E}_{a \sim \pi(a|s)}[r(s, a) + \gamma\mathbb{E}_{s' \sim T(s'|s, a)}[V(s')]]$. for any $\pi$, we have:

$$\eta_{\hat M'}(\pi) = \eta_M(\pi) + \mathbb{E}_{s \sim d^\pi_{\hat M'}(s)}\big[\mathcal{B}^\pi_{\hat M'} V_M(s) - \mathcal{B}^\pi_M V_M(s)\big]. \tag{5}$$

lemma 3 states that if we maximize $\eta_{\hat M'}(\pi)$ subject to $|\mathbb{E}_{s \sim d^\pi_{\hat M'}(s)}[\mathcal{B}^\pi_{\hat M'} V_M(s) - \mathcal{B}^\pi_M V_M(s)]| \le \epsilon$, then $\eta_M(\pi)$ will be improved. if $\mathcal{F}$ is a set of functions $f : S \to \mathbb{R}$ that contains $V_M$, then we have

$$\big|\mathbb{E}_{s \sim d^\pi_{\hat M'}(s)}\big[\mathcal{B}^\pi_{\hat M'} V_M(s) - \mathcal{B}^\pi_M V_M(s)\big]\big| \le \gamma\,\mathbb{E}_{s, a \sim d^\pi_{\hat M'}(s)\pi(a|s)}\big[d_{\mathcal{F}}(\hat T'(s'|s, a), T(s'|s, a))\big], \tag{6}$$

where $d_{\mathcal{F}}(\hat T'(s'|s, a), T(s'|s, a)) = \sup_{f \in \mathcal{F}} |\mathbb{E}_{s' \sim \hat T'(s'|s, a)}[f(s')] - \mathbb{E}_{s' \sim T(s'|s, a)}[f(s')]|$, which is the integral probability metric (ipm). note that if we directly follow the admissible error assumption in mopo (yu et al., 2020), i.e., assuming $d_{\mathcal{F}}(\hat T'(s'|s, a), T(s'|s, a)) \le u(s, a)$ for all $(s, a)$, this would be too restrictive: given that $\hat T'$ is estimated from the source offline samples collected under $T'$, not the target $T$, such error would not decrease as the source data increases. further, we find $d_{\mathcal{F}}(\hat T'(s'|s, a), T(s'|s, a)) \le d_{\mathcal{F}}(\hat T'(s'|s, a), \hat T(s'|s, a)) + d_{\mathcal{F}}(\hat T(s'|s, a), T(s'|s, a))$. thus, we can bound the $d_{\mathcal{F}}(\hat T', T)$ term with the admissible error assumption over $d_{\mathcal{F}}(\hat T, T)$, as in mopo, and the auxiliary constraint $d_{\mathcal{F}}(\hat T', \hat T)$. see the next section for the detailed implementation.
in summary, we show that both prior offline model-free and model-based formulations suffer from the dynamics shift, which also suggests learning a modification ($\Delta$ or $d_{\mathcal{F}}$) to eliminate this shift.

dynamics-aware reward augmentation

in this section, we propose the dynamics-aware reward augmentation (dara), a simple data augmentation procedure built on prior (model-free and model-based) offline rl methods. we first provide an overview of our offline reward augmentation, motivated by the compensation $\Delta_{\hat T', T}$ in equation 4 and the auxiliary constraint $d_{\mathcal{F}}(\hat T', \hat T)$ in equation 6, and then describe its theoretical derivation in both model-free and model-based formulations. with the (reduced) target offline data $D$ and the source offline data $D'$, we summarize the overall dara framework in algorithm 1.

algorithm 1 framework for dynamics-aware reward augmentation (dara)
require: target offline data $D$ (reduced) and source offline data $D'$
1: learn classifiers ($q_{sas}$ and $q_{sa}$) that distinguish source data $D'$ from target data $D$. (see appendix a.1.3)
2: set the dynamics-aware $\Delta r(s_t, a_t, s_{t+1}) = \log\frac{q_{sas}(\text{source}|s_t, a_t, s_{t+1})}{q_{sas}(\text{target}|s_t, a_t, s_{t+1})} - \log\frac{q_{sa}(\text{source}|s_t, a_t)}{q_{sa}(\text{target}|s_t, a_t)}$.
3: modify rewards for all $(s_t, a_t, r_t, s_{t+1})$ in $D'$: $r_t \leftarrow r_t - \eta\Delta r$.
4: learn a policy with $\{D \cup D'\}$ using prior model-free or model-based offline rl algorithms.

dynamics-aware reward augmentation in model-free formulation

motivated by the well-established connection between rl and probabilistic inference (levine, 2018), we first cast the model-free rl problem as inference in a particular probabilistic model. specifically, we introduce the binary random variable $O$ that denotes whether the trajectory $\tau := (s_0, a_0, s_1, \ldots)$ is optimal ($O = 1$) or not ($O = 0$). the likelihood of a trajectory can then be modeled as $p(O = 1|\tau) = \exp(\sum_t r_t / \eta)$, where $r_t := r(s_t, a_t)$ and $\eta > 0$ is a temperature parameter.
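steps 2 and 3 of algorithm 1 reduce to a few lines once the two classifiers are available; here the classifier outputs are stubbed with fixed probabilities (in practice they would be learned discriminators), and $\eta = 1$ is an arbitrary choice.

```python
import math

def dara_delta_r(p_sas_source, p_sa_source):
    """delta_r = log[q_sas(src)/q_sas(tgt)] - log[q_sa(src)/q_sa(tgt)],
    with q(tgt) = 1 - q(src) for a binary classifier."""
    term_sas = math.log(p_sas_source / (1.0 - p_sas_source))
    term_sa = math.log(p_sa_source / (1.0 - p_sa_source))
    return term_sas - term_sa

def augment(reward, p_sas_source, p_sa_source, eta=1.0):
    """step 3 of algorithm 1: r <- r - eta * delta_r."""
    return reward - eta * dara_delta_r(p_sas_source, p_sa_source)
```

a transition the classifiers find equally plausible in both environments is left untouched; one that the $(s, a, s')$ classifier confidently attributes to the source (beyond what the $(s, a)$ classifier explains) has its reward pushed down.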
(reward augmentation with explicit policy/value constraints) we now introduce a variational distribution $p^\pi_{\hat M'}(\tau) := \rho_0(s_0)\prod_t \hat T'(s_{t+1}|s_t, a_t)\pi(a_t|s_t)$ to approximate the posterior distribution $p^\pi_M(\tau|O = 1)$, which leads to the evidence lower bound of $\log p^\pi_M(O = 1)$:

$$\log p^\pi_M(O = 1) = \log\mathbb{E}_{\tau \sim p^\pi_M(\tau)}[p(O = 1|\tau)] \ge \mathbb{E}_{\tau \sim p^\pi_{\hat M'}(\tau)}\Big[\log p(O = 1|\tau) + \log\frac{p^\pi_M(\tau)}{p^\pi_{\hat M'}(\tau)}\Big] = \mathbb{E}_{\tau \sim p^\pi_{\hat M'}(\tau)}\Big[\sum_t \Big(r_t/\eta - \log\frac{\hat T'(s_{t+1}|s_t, a_t)}{T(s_{t+1}|s_t, a_t)}\Big)\Big].$$

since we are interested in infinite-horizon problems, we introduce the discount factor $\gamma$ and take the limit of the number of steps in each rollout, i.e., $H \to \infty$. thus, the rl problem on the mdp $M$, cast as the inference problem $\arg\max_\pi \log p^\pi_M(O = 1)$, can be stated as maximizing the lower bound $\mathbb{E}_{\tau \sim p^\pi_{\hat M'}(\tau)}\big[\sum_{t=0}^{\infty} \gamma^t \big(r_t - \eta\log\frac{\hat T'(s_{t+1}|s_t, a_t)}{T(s_{t+1}|s_t, a_t)}\big)\big]$. this is equivalent to an rl problem on $\hat M'$ with the augmented reward $r \leftarrow r(s, a) - \eta\log\frac{\hat T'(s'|s, a)}{T(s'|s, a)}$. intuitively, the $-\eta\log\frac{\hat T'(s'|s, a)}{T(s'|s, a)}$ term discourages transitions (state-action-next-state) in $D'$ that have low transition probability in the target $M$. in the model-free offline setting, we can add the explicit policy or q-value constraints (equations 2 and 3) to mitigate ood state-actions. thus, such a formulation allows the oracle $\mathcal{B}^\pi_M$ to be re-expressed by $\mathcal{B}^\pi_{D'}$ and the modification $\log\frac{\hat T'}{T}$, which makes the motivation in equation 4 practical.

(reward augmentation with implicit policy constraints) if we introduce the variational distribution $p^{\pi'}_{\hat M'}(\tau) := \rho_0(s_0)\prod_t \hat T'(s_{t+1}|s_t, a_t)\pi'(a_t|s_t)$, we can recover a weighted-regression-style (wang et al., 2020; peng et al., 2019; abdolmaleki et al., 2018; peters et al., 2010) objective by maximizing $\mathcal{J}(\pi', \pi) := \mathbb{E}_{\tau \sim p^{\pi'}_{\hat M'}(\tau)}\big[\sum_t \gamma^t \big(r_t - \eta\log\frac{\hat T'(s_{t+1}|s_t, a_t)}{T(s_{t+1}|s_t, a_t)} - \eta\log\frac{\pi'(a_t|s_t)}{\pi(a_t|s_t)}\big)\big]$ (a lower bound of $\log p^\pi_M(O = 1)$). following the expectation-maximization (em) algorithm, we can maximize $\mathcal{J}(\pi', \pi)$ by iteratively (e-step) improving $\mathcal{J}(\pi', \cdot)$ w.r.t. $\pi'$ and (m-step) updating $\pi$ w.r.t. $\pi'$.
(e-step) we define $\tilde Q(s, a, s') := \mathbb{E}\big[\sum_t \gamma^t \log\frac{\hat T'(s'|s, a)}{T(s'|s, a)}\big]$. then, given offline data $D'$, we can rewrite $\mathcal{J}(\pi', \cdot)$ as a constrained objective (abdolmaleki et al., 2018):

$$\max_{\pi'} \mathbb{E}_{d_{D'}(s)\pi'(a|s)\hat T'(s'|s, a)}\big[Q(s, a) - \eta\tilde Q(s, a, s')\big] \quad \text{s.t.} \quad \mathbb{E}_{s \sim d_{D'}(s)}[D_{KL}(\pi'(a|s)\,\|\,\pi(a|s))] \le \epsilon.$$

when considering a fixed $\pi$, the above optimization over $\pi'$ can be solved analytically (vieillard et al., 2020; geist et al., 2019; peng et al., 2019). the optimal $\pi'_*$ is then given by $\pi'_*(a|s) \propto \pi(a|s)\exp(Q(s, a))\exp(-\eta\tilde Q(s, a, \hat T'(s'|s, a)))$. as in the policy evaluation in equation 1 (footnote 2), we estimate $Q(s, a)$ and $\tilde Q(s, a, s')$ by minimizing the bellman error with offline samples in $D'$.

(m-step) then, we can project $\pi'_*$ onto the manifold of the parameterized $\pi$:

$$\arg\min_\pi \mathbb{E}_{s \sim d_{D'}(s)}[D_{KL}(\pi'_*(a|s)\,\|\,\pi(a|s))] = \arg\max_\pi \mathbb{E}_{s, a, s' \sim D'}\big[\exp(Q(s, a))\exp(-\eta\tilde Q(s, a, s'))\log\pi(a|s)\big].$$

from the regression view, prior work mpo (abdolmaleki et al., 2018) infers actions with q-value weighted regression, a progressive approach compared to behavior cloning; however, such a paradigm lacks the ability to capture transition dynamics. we explicitly introduce the $\exp(-\eta\tilde Q(s, a, s'))$ term, which, as we show in the experiments, is a crucial component for eliminating the dynamics shift.

implementation: in practice, we adopt offline samples in $D$ to approximate the true dynamics $T$ of $M$, and introduce a pair of binary classifiers, $q_{sas}(\cdot|s, a, s')$ and $q_{sa}(\cdot|s, a)$, to replace $\log\frac{\hat T'(s'|s, a)}{T(s'|s, a)}$ as in eysenbach et al. (2021):

$$\log\frac{\hat T'(s'|s, a)}{T(s'|s, a)} = \log\frac{q_{sas}(\text{source}|s, a, s')}{q_{sas}(\text{target}|s, a, s')} - \log\frac{q_{sa}(\text{source}|s, a)}{q_{sa}(\text{target}|s, a)}.$$

(see appendix a.1.3 for details.)
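for a discrete action space the optimal non-parametric policy $\pi'_* \propto \pi\exp(Q)\exp(-\eta\tilde Q)$ can be computed in closed form; the toy numbers below are ours, chosen so that both actions look equally good under the source data but one relies on source-only dynamics.

```python
import numpy as np

def reweighted_policy(q, q_tilde, prior, eta=1.0):
    """pi'_*(a|s) proportional to pi(a|s) exp(q) exp(-eta * q_tilde), one state."""
    w = prior * np.exp(q) * np.exp(-eta * q_tilde)
    return w / w.sum()

prior = np.array([0.5, 0.5])
q = np.array([1.0, 1.0])        # both actions equally valuable in the source data...
q_tilde = np.array([2.0, 0.0])  # ...but action 0 traverses source-only transitions
pi_new = reweighted_policy(q, q_tilde, prior)
```

without the $\exp(-\eta\tilde Q)$ factor the two actions would stay tied; with it, the mass shifts to the action whose transitions also exist under the target dynamics.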
although the amount of data $D$ sampled from the target $M$ is reduced in our problem setup, we experimentally find that such classifiers are sufficient to achieve good performance.

dynamics-aware reward augmentation in model-based formulation

following equation 6, we now characterize the dynamics shift compensation term in the model-based offline formulation, as in the model-free analysis above. we will find that, across the different derivations, our reward augmentation $\Delta r$ always maintains functional consistency and simplicity. following mopo, we assume $\mathcal{F} = \{f : \|f\|_\infty \le 1\}$; then we have $d_{\mathcal{F}}(\hat T'(s'|s, a), \hat T(s'|s, a)) = d_{TV}(\hat T'(s'|s, a), \hat T(s'|s, a)) \le (D_{KL}(\hat T'(s'|s, a), \hat T(s'|s, a))/2)^{1/2}$, where $d_{TV}$ is the total variation distance. then we introduce the admissible error $u(s, a)$ such that $d_{\mathcal{F}}(\hat T(s'|s, a), T(s'|s, a)) \le u(s, a)$ for all $(s, a)$, and $\eta$ and $\delta$ such that $(D_{KL}(\hat T', \hat T)/2)^{1/2} \le \eta D_{KL}(\hat T', \hat T) + \delta$. following lemma 3, we can thus maximize the following lower bound with the samples in $\hat M'$ ($\lambda := \frac{\gamma r_{\max}}{1 - \gamma}$):

$$\eta_M(\pi) \ge \mathbb{E}_{s, a, s' \sim d^\pi_{\hat M'}(s)\pi(a|s)\hat T'(s'|s, a)}\Big[r(s, a) - \eta\lambda\log\frac{\hat T'(s'|s, a)}{\hat T(s'|s, a)} - \lambda u(s, a)\Big] - \lambda\delta.$$

implementation: we model the dynamics $\hat T'$ and $\hat T$ with an ensemble of $2N$ parameterized gaussian distributions, $\mathcal{N}^i_{\hat T'}(\mu_{\theta'}(s, a), \sigma_{\phi'}(s, a))$ and $\mathcal{N}^i_{\hat T}(\mu_\theta(s, a), \sigma_\phi(s, a))$, where $i \in [1, N]$. we approximate $u$ with the maximum standard deviation of the learned models in the ensemble: $u(s, a) = \max_{i=1}^{N} \|\sigma^i_\phi(s, a)\|_{\mathrm{F}}$, omit the training-independent $\delta$, and treat $\lambda$ as a hyperparameter as in mopo. for the $\log\frac{\hat T'}{\hat T}$ term, we resort to the classifiers ($q_{sas}$ and $q_{sa}$) from the model-free setting.
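the mopo-style uncertainty term is simply the largest predicted standard-deviation norm across the ensemble; a sketch with stubbed per-member outputs (the ensemble values are invented, only the $u(s, a) = \max_i \|\sigma_i\|$ rule comes from the text).

```python
import numpy as np

def ensemble_uncertainty(sigmas):
    """u(s, a) = max_i || sigma_i(s, a) || over the n ensemble members."""
    return max(float(np.linalg.norm(s)) for s in sigmas)

# stubbed per-member standard deviations for one (s, a) query
sigmas = [np.array([0.1, 0.2]), np.array([0.3, 0.1]), np.array([0.05, 0.05])]
u = ensemble_uncertainty(sigmas)
```

the penalized reward for a model rollout step would then be, e.g., `r - lam * u` with `lam` the hyperparameter mentioned above.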
(See Appendix A.3.2 for a comparison between using classifiers and using the estimated-dynamics ratio.)

Experiments

We present empirical demonstrations of our dynamics-aware reward augmentation (DARA) in a variety of settings. We start with two simple control experiments that illustrate the significance of DARA under the domain (dynamics) adaptation setting. Then we incorporate DARA into state-of-the-art (model-free and model-based) offline RL methods and evaluate the performance on the D4RL tasks. Finally, we compare our framework to several cross-domain baselines on simulated and real-world tasks. Note that for dynamics adaptation we also release a (source) dataset as a complement to D4RL, along with a quadruped-robot dataset collected in simulation (source) and in the real world (target).

How does DARA handle the dynamics shift in the offline setting?

Figure 2: External dynamics shift. (Left) Source and target MDPs (the target contains an obstacle, represented by the dashed line); (middle, top, w/o aug.) trajectories generated by the policy learned with vanilla MPO; (middle, bottom, DARA) trajectories generated by the policy learned with DARA-based MPO; (right) learned Q-values on the state-action pairs shown in the left subfigure.

Figure 3: Internal dynamics shift. (Left) Source and target MDPs (the range of the right back leg of the Ant, state[11], is limited: [−0.52, 0.52] in the source MDP → [−0.26, 0.26] in the target MDP); (right) the solid (orange) line denotes the state of the right back leg over one trajectory collected in the source, the dashed (blue) line denotes the learned reward modification −Δr over the trajectory, and the green and red slices denote transition pairs where −Δr ≥ 0 and −Δr < 0, respectively.
Here we characterize both external and internal dynamics shifts. In the map task (Figure 2, left), the source dataset $\mathcal{D}'$ is collected in a 2D map, and the target $\mathcal{D}$ is collected in the same environment but with an obstacle (the dashed line); in the Ant task (Figure 3, left), the source dataset $\mathcal{D}'$ is collected with the MuJoCo Ant, and the target $\mathcal{D}$ is collected with the same Ant but with one joint restricted. Using MPO as an example offline RL method, we train a policy on the dataset $\{\mathcal{D}' \cup \mathcal{D}\}$ and deploy the acquired policy in both the source and target MDPs. As shown in Figure 2 (middle, top, w/o aug.), such a training paradigm does not produce an adaptive policy for the target.

Table 1: Normalized scores for the (target) D4RL tasks under the body-mass shift (Hopper and Walker2d; random, medium, medium-r, and medium-e datasets; BEAR, BRAC-p, BCQ, CQL, AWR, and MOPO), averaged over 5 seeds. The arrows in each four-tuple indicate whether the current performance improved (↑) or not (↓) compared to the previous value. If 1T+10S DARA achieves comparable (less than 10% degradation) or better performance than the 10T baseline, we highlight our scores in bold (in each four-tuple).

By modifying the rewards in the source $\mathcal{D}'$, we show that applying the same training paradigm to the reward-augmented data exhibits a positive transfer ability (Figure 2, middle, bottom, DARA). In Figure 2 (right), we show that DARA produces low Q-values on the obstructed state-action pairs (in the left subfigure) compared to vanilla MPO, which prevents the Q-value-weighted regression from exploiting these unproductive state-action pairs. More generally, we illustrate how DARA handles dynamics adaptation from the reward-modification view.
In Figure 3 (right), the learned reward modification −Δr (dashed blue line) clearly produces a penalty (red slices) on the state-action pairs (in the source) that yield infeasible next-state transitions in the target MDP. If we directly apply prior offline RL methods, these transitions, which are out of reach in the target yet highly valued, would cause negative transfer. Thus, we can think of DARA as identifying the transitions that exhibit dynamics shift and enabling dynamics adaptation through reward modification, e.g., penalizing the transitions covered by red slices (−Δr < 0).

Can DARA enable an adaptive policy with reduced offline data in the target?

To characterize the offline dynamics shift, we consider Hopper, Walker2d, and HalfCheetah from the Gym-MuJoCo environments, using offline samples from D4RL as our target offline dataset. For the source dataset, we change the body mass of the agents or add joint noise to the motion and, similar to D4RL, collect random, medium, medium-r, and medium-e offline datasets for the three environments. Based on various offline RL algorithms (BEAR, BRAC-p, BCQ, CQL, AWR, MOPO), we perform the following comparisons: 1) employing 100% of the D4RL data (10T); 2) employing only 10% of the D4RL data (1T); 3) employing 10% of the D4RL data and 100% of our collected source offline data (1T+10S w/o aug.); and 4) employing 10% of the D4RL data and 100% of our collected source offline data along with our reward augmentation (1T+10S DARA). Due to the page limit, here we focus on the dynamics shift concerning body mass on Walker2d and Hopper. We refer the reader to the appendix for more experimental details, tasks, and additional baselines (BC, COMBO). As shown in Table 1, in most of the tasks the performance degrades substantially when we decrease the amount of target offline data, i.e., 10T → 1T. Training with ten times as much additional source offline data (1T+10S w/o aug.)
also does not bring substantial improvement (i.e., it does not compensate for the reduced target data), and it even degrades performance in some tasks. We believe that such degradation (compared to 10T) is caused by the lack of target offline data as well as by the dynamics shift (induced by the source data). Incorporating our reward augmentation, we observe that compared to 1T and 1T+10S w/o aug., both of which use 10% of the target offline data, our 1T+10S DARA significantly improves performance across a majority of tasks. Moreover, DARA achieves comparable or better performance than the 10T baseline, which trains with ten times as much target offline data.

Table 2: Normalized scores on the (target) D4RL tasks under the body-mass shift (Hopper and Walker2d; random, medium, medium-r, and medium-e datasets; BEAR, BRAC-p, BCQ, CQL, MOPO, and the two MABE variants), where "tune" denotes the "fine-tune" baseline. With the same amount (10%) of target offline data, DARA greatly outperforms the baselines.

Can DARA perform better than cross-domain baselines?

In Section 6.2, 1T+10S w/o aug. does not explicitly learn a policy for the target dynamics; one proposal for adapting to the target dynamics (1T+10S fine-tune) is therefore to fine-tune the model learned with the source offline data using the (reduced) target offline data. Moreover, we also compare DARA with the recently proposed MABE (Cang et al., 2021), which is well suited to our cross-dynamics setting because it introduces behavioral priors $\pi_p$ in the model-based offline setting. We thus implement two baselines, 1) 1T+10S MABE $\pi_p$-$\hat{T}$ and 2) 1T+10S MABE $\hat{T}$-$\pi_p$, which denote 1) learning $\pi_p$ with target-domain data and $\hat{T}$ with source-domain data, and 2) learning $\pi_p$ with source-domain data and $\hat{T}$ with target-domain data, respectively. We show the results for Walker2d (with the body-mass shift) in Table 2, and more experiments in Appendix A.3.5.
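The 1T+10S DARA recipe described above boils down to relabeling the source rewards with $r + \Delta r$ and concatenating the two datasets before running any off-the-shelf offline RL algorithm; a minimal sketch (the dict layout and names are assumptions, not the released data format):

```python
import numpy as np

def build_dara_dataset(target_data, source_data, delta_r):
    """Assemble a 1T+10S DARA training set: keep the (reduced) target
    transitions as-is and relabel the source rewards with r + Delta r.

    target_data / source_data: dicts with NumPy arrays under the keys
    'obs', 'act', 'next_obs', 'rew'; delta_r: per-transition reward
    modification for the source data.
    """
    src_rew = source_data["rew"] + delta_r  # dynamics-aware augmentation
    mixed = {}
    for key in ("obs", "act", "next_obs"):
        mixed[key] = np.concatenate([target_data[key], source_data[key]])
    mixed["rew"] = np.concatenate([target_data["rew"], src_rew])
    return mixed
```

The downstream offline learner is unchanged; only the source rewards differ between 1T+10S w/o aug. and 1T+10S DARA.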
Our results show that DARA achieves significantly better performance than the naïve fine-tune-based approaches in a majority of tasks (67 "↑" vs. 13 "↓", including the results in the appendix). On twelve out of the sixteen tasks (including the results in the appendix), DARA-based methods outperform the MABE-based methods. We attribute MABE's failure to the reduced target offline data, which limits the generalization of the learned $\pi_p$ or $\hat{T}$. However, this reduced data (10% of the target) is sufficient to modify the rewards in the source offline data, which in turn yields better performance for DARA.

Table 3: Average distance covered in an episode by the real robot, comparing w/o aug. and DARA (BCQ) on the medium, medium-e, and medium-r-e datasets.

For real-world tasks, we also test DARA on a new offline dataset collected on a quadruped robot (see the appendix for details). Note that we cannot access privileged information (e.g., coordinates) on the real robot, so the target offline data (collected in the real world) does not contain rewards. This means that the prior fine-tune-based and MABE-based methods become unavailable. However, our reward augmentation frees us from requiring rewards in the target domain: we can perform offline training using only the augmented source offline data, as long as the learned Δr is sufficient. For comparison, we also employ a baseline (w/o aug.): directly deploying the policy learned on the source data in the (target) real world. We present the results (deployed in the real world with obstructing stairs) in Table 3 and videos in the supplementary material. We observe that training with our reward augmentation substantially improves performance. Due to the page limit, we refer readers to Appendix A.3.6 for more experimental results and discussion.

Conclusion

In this paper, we formulate the dynamics shift in offline RL.
Based on prior model-based and model-free offline algorithms, we propose the dynamics-aware reward augmentation (DARA) framework, which characterizes constraints over state-action-next-state distributions. Empirically, we demonstrate that DARA can eliminate the dynamics shift and outperform baselines on simulated and real-world tasks. In Appendix A.2, we characterize our dynamics-aware reward augmentation from the density-regularization view, which shows that it is straightforward to derive the reward modification on top of prior regularized max-return objectives, e.g., AlgaeDICE (Nachum et al., 2019b). We list some related works in Table 4: the majority of existing work focuses on regularizing the state-action distribution, while the dynamics shift has received relatively little attention. We thus hope to shift the focus of the community toward analyzing how dynamics shift affects RL and how to eliminate its effect.

Reproducibility Statement

Our experimental evaluation is conducted with the publicly available D4RL (Fu et al., 2020) and NeoRL (Qin et al., 2021). In Appendices A.4 and A.5, we provide the environment details and training setup for our real-world sim2real tasks. In the supplementary material, we upload our source code and the collected offline dataset for the quadruped robot.

Acknowledgments

We thank Zifeng Zhuang, Yachen Kang, and Qiangxing Tian for helpful feedback and discussions. This work is supported by the NSFC General Program (62176215).

References

Abbas Abdolmaleki, Jost Tobias Springenberg, Yuval Tassa, Rémi Munos, Nicolas Heess, and Martin A. Riedmiller. Maximum a posteriori policy optimisation. In ICLR, 2018.
Philip J. Ball, Cong Lu, Jack Parker-Holder, and Stephen J. Roberts. Augmented world models facilitate zero-shot dynamics generalization from a single offline environment.
In ICML, 2021.
Catherine Cang, Aravind Rajeswaran, Pieter Abbeel, and Michael Laskin. Behavioral priors improving performance and domain transfer in offline RL. CoRR, 2021.
Yevgen Chebotar, Karol Hausman, Yao Lu, Ted Xiao, Dmitry Kalashnikov, Jacob Varley, Alex Irpan, Benjamin Eysenbach, Ryan Julian, Chelsea Finn, and Sergey Levine. Actionable models: Unsupervised offline reinforcement learning of robotic skills. In ICML, 2021.
Jinglin Chen and Nan Jiang. Information-theoretic considerations in batch reinforcement learning. In ICML, 2019.
Xinyue Chen, Zijian Zhou, Zheng Wang, Che Wang, Yanqiu Wu, and Keith Ross. BAIL: Best-action imitation learning for batch deep reinforcement learning. arXiv preprint arXiv:1910.12179, 2019.
Erwin Coumans and Yunfei Bai. PyBullet, a Python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2021.
Benjamin Eysenbach and Sergey Levine. If MaxEnt RL is the answer, what is the question? CoRR.
Benjamin Eysenbach, Shreyas Chaudhari, Swapnil Asawa, Sergey Levine, and Ruslan Salakhutdinov. Off-dynamics reinforcement learning: Training for transfer with domain classifiers.
In ICLR, 2021.
Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4RL: Datasets for deep data-driven reinforcement learning. CoRR, abs/2004.07219, 2020.
Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. In ICML, 2019.
Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized Markov decision processes. In ICML, 2019.
Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. EMaQ: Expected-max Q-learning operator for simple yet effective offline and online RL. In ICML, 2021.
Tuomas Haarnoja, Aurick Zhou, Kristian Hartikainen, George Tucker, Sehoon Ha, Jie Tan, Vikash Kumar, Henry Zhu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Soft actor-critic algorithms and applications. CoRR, abs/1812.05905, 2018.
Behzad Haghgoo, Allan Zhou, Archit Sharma, and Chelsea Finn. Discriminator augmented model-based reinforcement learning. CoRR, abs/2103.12999, 2021.
Atil Iscen, Ken Caluwaerts, Jie Tan, Tingnan Zhang, Erwin Coumans, Vikas Sindhwani, and Vincent Vanhoucke. Policies modulating trajectory generators. In CoRL, 2018.
Nan Jiang and Jiawei Huang. Minimax value interval for off-policy evaluation and policy optimization. In NeurIPS, 2020.
Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. MOReL: Model-based offline reinforcement learning. In NeurIPS, 2020.
Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcement learning with Fisher divergence critic regularization. In ICML, 2021.
Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction.
In NeurIPS, 2019.
Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. In NeurIPS, 2020a.
Saurabh Kumar, Aviral Kumar, Sergey Levine, and Chelsea Finn. One solution is not all you need: Few-shot extrapolation via structured MaxEnt RL. In NeurIPS, 2020b.
Sascha Lange, Thomas Gabel, and Martin Riedmiller. Batch reinforcement learning. In Reinforcement Learning, pp. 45–73. Springer, 2012.
Joonho Lee, Jemin Hwangbo, Lorenz Wellhausen, Vladlen Koltun, and Marco Hutter. Learning quadrupedal locomotion over challenging terrain. Science Robotics, 5(47), 2020.
Sergey Levine. Reinforcement learning and control as probabilistic inference: Tutorial and review.
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. CoRR, abs/2005.01643, 2020.
Jinxin Liu, Hao Shen, Donglin Wang, Yachen Kang, and Qiangxing Tian.
Unsupervised domain adaptation with dynamics-aware rewards in reinforcement learning. In NeurIPS, 2021.
Qiang Liu, Lihong Li, Ziyang Tang, and Dengyong Zhou. Breaking the curse of horizon: Infinite-horizon off-policy estimation. In NeurIPS, pp. 5361–5371, 2018.
Eric Mitchell, Rafael Rafailov, Xue Bin Peng, Sergey Levine, and Chelsea Finn. Offline meta-reinforcement learning with advantage weighting. In ICML, 2021.
Ofir Nachum and Bo Dai. Reinforcement learning via Fenchel-Rockafellar duality. CoRR.
Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li. DualDICE: Behavior-agnostic estimation of discounted stationary distribution corrections. arXiv preprint arXiv:1906.04733, 2019a.
Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. CoRR, abs/1912.02074, 2019b.
Ashvin Nair, Murtaza Dalal, Abhishek Gupta, and Sergey Levine. Accelerating online reinforcement learning with offline datasets. CoRR, abs/2006.09359, 2020.
Xue Bin Peng, Marcin Andrychowicz, Wojciech Zaremba, and Pieter Abbeel. Sim-to-real transfer of robotic control with dynamics randomization. In ICRA, pp. 1–8, 2018.
Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. CoRR, abs/1910.00177, 2019.
Jan Peters, Katharina Mülling, and Yasemin Altun.
Relative entropy policy search. In AAAI, 2010.
Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, and Yang Yu. NeoRL: A near real-world benchmark for offline reinforcement learning. arXiv preprint arXiv:2102.00714, 2021.
Y. Sakakibara, K. Kan, Y. Hosoda, M. Hattori, and M. Fujie. Foot trajectory for a quadruped walking machine. In IEEE International Workshop on Intelligent Robots and Systems, pp. 315–322, 1990.
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, Nicolas Heess, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
Richard S. Sutton. Integrated architectures for learning, planning, and reacting based on approximating dynamic programming. In ICML, pp. 216–224, 1990.
Josh Tobin, Rachel Fong, Alex Ray, Jonas Schneider, Wojciech Zaremba, and Pieter Abbeel. Domain randomization for transferring deep neural networks from simulation to the real world. 2017.
Masatoshi Uehara, Jiawei Huang, and Nan Jiang. Minimax weight and Q-function learning for off-policy evaluation. In ICML, 2020.
Nino Vieillard, Tadashi Kozuno, Bruno Scherrer, Olivier Pietquin, Rémi Munos, and Matthieu Geist. Leverage the average: An analysis of KL regularization in reinforcement learning.
In NeurIPS, 2020.
Xingxing Wang. Unitree Robotics. https://www.unitree.com/products/a1, 2020.
Ziyu Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, et al. Critic regularized regression. arXiv preprint arXiv:2006.15134, 2020.
Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. 2019.
Haoran Xu, Xianyuan Zhan, and Xiangyu Zhu. Constraints penalized Q-learning for safe offline reinforcement learning. CoRR, abs/2107.09003, 2021.
Mengjiao Yang, Ofir Nachum, Bo Dai, Lihong Li, and Dale Schuurmans. Off-policy evaluation via the regularized Lagrangian. In NeurIPS, 2020.
Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Y. Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. MOPO: Model-based offline policy optimization. In NeurIPS, 2020.
Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, and Chelsea Finn. COMBO: Conservative offline model-based policy optimization. CoRR, abs/2102.08363, 2021.
Hongyin Zhang, Jilong Wang, Zhengqing Wu, Yinuo Wang, and Donglin Wang. Terrain-aware risk-assessment-network-aided deep reinforcement learning for quadrupedal locomotion in tough terrain. In IROS, pp. 4538–4545. IEEE, 2021.
Ruiyi Zhang, Bo Dai, Lihong Li, and Dale Schuurmans. GenDICE: Generalized offline estimation of stationary values. In ICLR, 2020.

A Appendix

A.1 Derivation

A.1.1 Proof of Lemma 3

Let $B^\pi_M V(s) = \mathbb{E}_{a \sim \pi(a|s)}\big[r(s,a) + \gamma\,\mathbb{E}_{s' \sim T(s'|s,a)}[V(s')]\big]$ and $r(s) = \mathbb{E}_{a \sim \pi(a|s)}[r(s,a)]$. Then we have
$$\begin{aligned}
\eta_{\hat{M}'}(\pi) - \eta_M(\pi) &= \mathbb{E}_{s_0 \sim \rho_0(s)}\big[V_{\hat{M}'}(s_0) - V_M(s_0)\big] \\
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s_t \sim P(s_t|\pi,\hat{M}',t)}\, \mathbb{E}_{a_t \sim \pi(a_t|s_t)}[r(s_t,a_t)] - \mathbb{E}_{s_0 \sim \rho_0(s)}[V_M(s_0)] \\
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s_t \sim P(s_t|\pi,\hat{M}',t)}\big[r(s_t) + V_M(s_t) - V_M(s_t)\big] - \mathbb{E}_{s_0 \sim \rho_0(s)}[V_M(s_0)] \\
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{\substack{s_t \sim P(s_t|\pi,\hat{M}',t) \\ s_{t+1} \sim P(s_{t+1}|\pi,\hat{M}',t+1)}}\big[r(s_t) + \gamma V_M(s_{t+1}) - V_M(s_t)\big] \\
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{\substack{s_t \sim P(s_t|\pi,\hat{M}',t) \\ s_{t+1} \sim P(s_{t+1}|\pi,\hat{M}',t+1)}}\Big[r(s_t) + \gamma V_M(s_{t+1}) - \big(r(s_t) + \gamma\,\mathbb{E}_{a \sim \pi(a|s_t),\, s' \sim T(s_t,a)}[V_M(s')]\big)\Big] \\
&= \sum_{t=0}^{\infty} \gamma^t\, \mathbb{E}_{s_t \sim P(s_t|\pi,\hat{M}',t)}\big[B^\pi_{\hat{M}'} V_M(s_t) - B^\pi_M V_M(s_t)\big] \\
&= \mathbb{E}_{s \sim d^\pi_{\hat{M}'}(s)}\big[B^\pi_{\hat{M}'} V_M(s) - B^\pi_M V_M(s)\big].
\end{aligned}$$

A.1.2 Model-Based Formulation

Here we provide a detailed derivation of the lower bound in Equation 9 of the main text.

Assumption 1. Assume a scale $c$ and a function class $\mathcal{F}$ such that $V_M \in c\mathcal{F}$.

Following MOPO (Yu et al., 2020), we set $\mathcal{F} = \{f : \|f\|_\infty \le 1\}$. In the preliminaries, the reward function is bounded, $r(s,a) \in [-r_{\max}, r_{\max}]$. Thus we have $\|V_M\|_\infty \le \sum_{t=0}^{\infty} \gamma^t r_{\max} = \frac{r_{\max}}{1-\gamma}$, and hence the scale $c = \frac{r_{\max}}{1-\gamma}$.
As a direct corollary of Assumption 1 and Equation 5, we have
$$\Big|\,\mathbb{E}_{s \sim d^\pi_{\hat{M}'}(s)}\big[B^\pi_{\hat{M}'} V_M(s) - B^\pi_M V_M(s)\big]\,\Big| \le \gamma c \cdot \mathbb{E}_{s,a \sim d^\pi_{\hat{M}'}(s)\pi(a|s)}\big[d_\mathcal{F}\big(\hat{T}'(s'|s,a), T(s'|s,a)\big)\big]. \tag{11}$$
Further, by the triangle inequality,
$$d_\mathcal{F}\big(\hat{T}'(s'|s,a), T(s'|s,a)\big) \le d_\mathcal{F}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) + d_\mathcal{F}\big(\hat{T}(s'|s,a), T(s'|s,a)\big). \tag{12}$$
For the first term $d_\mathcal{F}(\hat{T}'(s'|s,a), \hat{T}(s'|s,a))$ in Equation 12, through Pinsker's inequality we have
$$d_\mathcal{F}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) = D_{\mathrm{TV}}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) \le \sqrt{D_{\mathrm{KL}}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big)/2}.$$
To keep consistent with the DARA-based model-free offline methods, we introduce a scale $\eta$ and a bias $\delta$ to eliminate the square root. Specifically, we assume (Footnote 2) a scale $\eta$ and bias $\delta$ such that $\sqrt{D_{\mathrm{KL}}(\hat{T}', \hat{T})/2} \le \eta D_{\mathrm{KL}}(\hat{T}', \hat{T}) + \delta$. Thus, we obtain
$$d_\mathcal{F}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) = D_{\mathrm{TV}}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) \le \eta D_{\mathrm{KL}}\big(\hat{T}'(s'|s,a), \hat{T}(s'|s,a)\big) + \delta. \tag{13}$$

(Footnote 2: In the implementation, we clip the maximum deviation of $\log \frac{\hat{T}'(s'|s,a)}{\hat{T}(s'|s,a)}$ for each $(s,a,s')$, which makes $D_{\mathrm{KL}}(\hat{T}'(s'|s,a), \hat{T}(s'|s,a))$ bounded.)

For the second term $d_\mathcal{F}(\hat{T}(s'|s,a), T(s'|s,a))$ in Equation 12, we assume access to an oracle uncertainty-quantification module that provides an upper bound on the error of the estimated empirical MDP $\hat{M} := \{S, A, r, \hat{T}, \rho_0, \gamma\}$.

Assumption 2. Let $\mathcal{F}$ be the function class in Assumption 1. We say $u : S \times A \to \mathbb{R}$ is an admissible error estimator for $\hat{T}$ if $d_\mathcal{F}(\hat{T}(s'|s,a), T(s'|s,a)) \le u(s,a)$ for all $(s,a)$.
Thus, we have
$$\mathbb{E}_{s,a \sim d^\pi_{\hat{M}'}(s)\pi(a|s)}\big[d_\mathcal{F}\big(\hat{T}(s'|s,a), T(s'|s,a)\big)\big] \le \mathbb{E}_{s,a \sim d^\pi_{\hat{M}'}(s)\pi(a|s)}[u(s,a)]. \tag{14}$$
Bringing inequalities 10, 11, 13, and 14 into Lemma 3, we have
$$\eta_M(\pi) \ge \mathbb{E}_{s,a,s' \sim d^\pi_{\hat{M}'}(s)\,\pi(a|s)\,\hat{T}'(s'|s,a)}\left[r(s,a) - \eta\gamma c \log \frac{\hat{T}'(s'|s,a)}{\hat{T}(s'|s,a)} - \gamma c\, u(s,a) - \gamma c\, \delta\right].$$

A.1.3 Learning Classifiers

Applying Bayes' rule, we have
$$\hat{T}'(s'|s,a) := p(s'|s,a,\text{source}) = \frac{p(\text{source}|s,a,s')\, p(s,a,s')}{p(\text{source}|s,a)\, p(s,a)}, \qquad \hat{T}(s'|s,a) := p(s'|s,a,\text{target}) = \frac{p(\text{target}|s,a,s')\, p(s,a,s')}{p(\text{target}|s,a)\, p(s,a)}.$$
We then parameterize $p(\cdot|s,a,s')$ and $p(\cdot|s,a)$ with the two classifiers $q_{sas}$ and $q_{sa}$, respectively. Using the standard cross-entropy loss, we learn $q_{sas}$ and $q_{sa}$ with the following objectives:
$$\max \; \mathbb{E}_{(s,a,s') \sim \mathcal{D}'}[\log q_{sas}(\text{source}|s,a,s')] + \mathbb{E}_{(s,a,s') \sim \mathcal{D}}[\log q_{sas}(\text{target}|s,a,s')],$$
$$\max \; \mathbb{E}_{(s,a) \sim \mathcal{D}'}[\log q_{sa}(\text{source}|s,a)] + \mathbb{E}_{(s,a) \sim \mathcal{D}}[\log q_{sa}(\text{target}|s,a)].$$
With the trained $q_{sas}$ and $q_{sa}$, we have
$$\log \frac{\hat{T}'(s'|s,a)}{\hat{T}(s'|s,a)} = \log \frac{q_{sas}(\text{source}|s,a,s')}{q_{sas}(\text{target}|s,a,s')} - \log \frac{q_{sa}(\text{source}|s,a)}{q_{sa}(\text{target}|s,a)}.$$
In our implementation, we also clip the above reward modification between −10 and 10.

A.2 Regularization View of Dynamics-Aware Reward Augmentation

Here we briefly characterize our dynamics-aware reward augmentation from the density-regularization view. Note that the standard max-return objective $\eta_M(\pi)$ in RL can be written exclusively in terms of the on-policy distribution $d^\pi_M(s)\pi(a|s)$. To introduce an off-policy distribution $d_\mathcal{D}(s)\pi_b(a|s)$ into the objective, prior works often incorporate a regularization (penalty) $D\big(d^\pi_M(s)\pi(a|s) \,\|\, d_\mathcal{D}(s)\pi_b(a|s)\big)$, as in Equations 2 and 3.
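Written out, the regularized off-policy objective just described takes the following form (a sketch; the penalty weight $\alpha$ is generic notation introduced here, not fixed by the text):

```latex
\max_{\pi} \;\; \eta_M(\pi) \;-\; \alpha \, D\!\big(\, d^{\pi}_{M}(s)\,\pi(a|s) \;\big\|\; d_{\mathcal{D}}(s)\,\pi_b(a|s) \,\big)
```

The maximizer trades return against deviation from the behavior distribution; the discussion that follows replaces the state-action marginals with state-action-next-state marginals to account for dynamics shift.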
However, in the face of dynamics shift, such regularization should take the transition dynamics into account, i.e., penalize $D\big(d^\pi_M(s)\pi(a|s)T(s'|s,a) \,\|\, d_{\mathcal{D}'}(s)\pi_{b'}(a|s)\hat{T}'(s'|s,a)\big)$. From this view, it is also straightforward to derive the reward modification on top of prior regularized off-policy max-return objectives, e.g., the off-policy approach AlgaeDICE (Nachum et al., 2019b). In Table 4, we list related works with respect to the (state-action) $d_\mathcal{D}(s)\pi_b(a|s)$ regularization and the (state-action-next-state) $d_{\mathcal{D}'}(s)\pi_{b'}(a|s)\hat{T}'(s'|s,a)$ regularization. The majority of existing work focuses on regularizing the state-action distribution, while the dynamics shift has received relatively little attention; we thus hope to shift the community's focus toward analyzing how dynamics shift affects RL and how to eliminate its effect.

Table 4: Related works with explicit (state-action $p(s,a)$ or state-action-next-state $p(s,a,s')$) regularization. Papers on unsupervised RL, inverse RL (imitation learning), meta-RL, multi-agent RL, and hierarchical RL are not included.

Regularization with $d_{\mathcal{D}'}(s)\pi_{b'}(a|s)\hat{T}'(s'|s,a)$: Eysenbach et al. (2021) (DARC); Liu et al. (2021) (DARS); Haghgoo et al. (2021).

Regularization with $d_\mathcal{D}(s)\pi_b(a|s)$ — online: see the summaries in Geist et al. (2019) and Vieillard et al. (2020); offline (and off-policy evaluation): Fujimoto et al. (2019) (BCQ); Wu et al. (2019) (BRAC-p); Peng et al. (2019) (AWR); Wang et al. (2020) (CRR); Chen et al. (2019) (BAIL); Xu et al. (2021) (CPQ); Liu et al. (2018); Nachum et al. (2019b) (AlgaeDICE); Zhang et al. (2020) (GenDICE); Yang et al. (2020); Jiang & Huang (2020); Yu et al. (2020) (MOPO); Yu et al. (2021) (COMBO); Kumar et al. (2019) (BEAR); Abdolmaleki et al. (2018) (MPO); Nair et al. (2020) (AWAC); Siegel et al. (2020); Kumar et al. (2020a) (CQL); Kostrikov et al.
(2021) (fisher-brc); nachum et al. (2019a) (dualdice); nachum & dai (2020); uehara et al. (2020); kidambi et al. (2020) (morel); cang et al. (2021) (mabe); a.3 more experiments a.3.1 training with {d(cid:48) ∪ d} as we show in figure 1 in section introduction, the performance of prior offline rl methods deteriorates dramatically as the amount of (target) offline data d decreases. in figure 4, we show that directly training with the mixed dataset {d(cid:48) ∪ d} will not compensate for the deteriorated performance caused by the reduced target offline data, and training with such additional source offline data can even lead the performance degradation in some tasks. figure 4: final performance on the d4rl (walker2d) task: the orange bars denote the final performance with different amount (50%d, 20%d, 10%d, 5%d) of target offline data; the blue bars denote the final performance of mixing 100% of source offline data d(cid:48) and different amount of target data x%d (x ∈ [50, 20, 10, 5]), i.e., training with {100%d(cid:48) ∪ x%d}; the red lines denote the final performance of training with 100% of target offline data d. we can observe that 1) the performance deteriorates dramatically as the amount of (target) offline data decreases (100%d (red line) → 50%d (orange bar) → 20%d (orange bar) → 10%d (orange bar) → 5%d (orange bar)), 2) after training with the additional 100% of source offline data, {100%d(cid:48) ∪ x%d}, the final performance is improved in some tasks, but most of the improvement is a pittance compared to the original performance degradation (compared to that training with the 100% of target offline data, i.e., the red lines), and 3) what is worse is that adding source offline data d(cid:48) even leads performance degradation in some tasks, e.g., cql with 50%d and 20%d in medium-random. 
a.3.2 comparison between learning classifiers and learning dynamics (for the reward modification)

table 5: normalized scores for the hopper tasks with the body mass (dynamics) shift. rat. and cla. denote estimating the reward modification with the estimated-dynamics ratio and with learned classifiers (appendix a.1.3), respectively; columns: bear, brac-p, awr, bcq, cql, and mopo, each under the rat. and cla. schemes.

in table 5, we show the comparison between learning classifiers and learning dynamics (for our reward modification) in the hopper tasks. we can observe that the two schemes for estimating the reward modification have similar performance. thus, for simplicity and following eysenbach et al. (2021), we adopt the classifiers to modify rewards in the source offline data in our experiments.

a.3.3 more examples with respect to the reward augmentation

figure 5: we can observe that our reward augmentation 1) encourages (−∆r > 0, i.e., the green slice parts) the transitions (−0.26 ≤ next-state[11] ≤ 0.26) that have the same dynamics as the target environment, and 2) discourages (−∆r < 0, i.e., the red slice parts) the transitions that have different (unreachable in the target) dynamics (next-state[11] ≤ −0.26 or next-state[11] ≥ 0.26).

in figure 5, we provide more examples of the reward augmentation in the ant task from figure 3 (left).

a.3.4 comparison between 10t, 1t, 1t+10s w/o aug., and 1t+10s dara

based on various offline rl algorithms (bear (kumar et al., 2019), brac-p (wu et al., 2019), bcq (fujimoto et al., 2019), cql (kumar et al., 2020a), awr (peng et al., 2019), mopo (yu et al., 2020), bc (behavior cloning), and combo (yu et al., 2021)), we provide additional results in tables 6, 7, 8, 9, and 10.

table 6: normalized scores for the hopper tasks with the body mass (dynamics) shift. (the comparison results for bear, brac-p, awr, cql, and mopo are provided in the main text.)
(table 6 layout) methods: bc, combo; rows (hopper): random, medium, medium-r, medium-e.

table 7: normalized scores for the hopper tasks with the joint noise (dynamics) shift. methods: bear, brac-p, bcq, cql, awr, mopo, bc, combo; rows (hopper): random, medium, medium-r, medium-e.

table 8: normalized scores for the walker2d tasks with the body mass (dynamics) shift. (the comparison results for bear, brac-p, awr, cql, and mopo are provided in the main text.) methods: bc, combo; rows (walker2d): random, medium, medium-r, medium-e.

table 9: normalized scores for the walker2d tasks with the joint noise (dynamics) shift. methods: bear, brac-p, bcq, cql, awr, mopo, bc, combo; rows (walker2d): random, medium, medium-r, medium-e.

table 10: normalized scores for the halfcheetah tasks with the joint noise (dynamics) shift. methods: bear, brac-p, bcq, cql, awr, mopo, bc, combo; rows (halfcheetah): random, medium, medium-r, medium-e.

a.3.5 comparison with the cross-domain based baselines

in tables 11 and 12, we provide the comparison between our dara-based methods, fine-tune-based methods, and mabe-based methods in the hopper and walker2d tasks, over the dynamics shift concerning the joint noise of motion. we can observe that in a majority of tasks, our dara-based methods outperform the fine-tune-based method (67 "↑" vs. 13 "↓", including the results in the main text). moreover, our dara can achieve comparable or better performance compared to mabe-based baselines on eleven out of sixteen tasks (including the results in the main text).

table 11: normalized scores in the (target) d4rl hopper tasks with the joint noise shift, where "tune" denotes the baseline "fine-tune".
(table 11 layout) columns: "tune" and "dara" for each of bear, brac-p, bcq, cql, and mopo, plus the mabe variants (π_p, t̂); rows (hopper): random, medium, medium-r, medium-e.

table 12: normalized scores in the (target) d4rl walker2d tasks with the joint noise shift, where "tune" denotes the baseline "fine-tune". columns as in table 11; rows (walker2d): random, medium, medium-r, medium-e.

a.3.6 additional results on the quadruped robot

in this offline sim2real setting, we collect the source offline data in the simulator (10^6 or 2×10^6 steps) and the target offline data in the real world (3×10^4 steps). see appendix a.4 for details. for testing, we directly deploy the learned policy in the real (flat or obstructive) environment and adopt the average distance covered in an episode (300 steps) as our evaluation metric.

figure 6: illustration of the real environment (for testing): (left) the flat and static environment, (right) the obstructive and dynamic environment.

table 13: average distance (m) covered in an episode (300 steps) in the flat and static (real) environment. columns: w/o aug. and dara for each of bcq, cql, and mopo; rows (quadruped robot): medium, medium-r, medium-e, medium-r-e, plus the average performance improvement.

(flat and static environment) we first deploy our learned policy in the flat and static environment. the results (distance covered in an episode) are provided in table 13. 1) bcq (figure 7): we find that with medium-r offline data, both w/o aug. bcq and dara bcq could not acquire the locomotion skills, which we think is caused by the lack of high-quality offline data. with more "expert" data (medium-r → medium → medium-e, or medium-r → medium-r-e), w/o aug. bcq achieves progressively better performance (0.00 → 1.56 → 2.16, or 0.00 → 1.69), and with our reward augmentation such performance can be further improved (average improvement 13.6%).
2) cql (figure 8): we find that with medium-r or medium-r-e offline data, both w/o aug. cql and dara cql could not learn the locomotion skills, which we think is caused by the low-quality "replay" offline data. with medium or medium-e offline data, w/o aug. cql and dara cql achieve similar performance in this flat and static environment. 3) mopo: we find that the model-based mopo (both w/o aug. and dara) could hardly learn the locomotion skill from the provided offline data.

table 14: average distance (m) covered in an episode (300 steps) in the obstructive and dynamic (real) environment. columns: w/o aug. and dara pairs (as in table 13); rows (quadruped robot): medium, medium-r, medium-e, medium-r-e, plus the average performance improvement. | 20 | [
113.56396372,
166.07832791,
244.8082071659,
175.350918234
] |
vXj_ucZQ4hA.pdf | 2,021 | 0 | robust pruning at initialization soufiane hayou, jean-francois ton, arnaud doucet & yee whye teh department of statistics university of oxford united kingdom {soufiane.hayou, ton, doucet, teh}@stats.ox.ac.uk abstract overparameterized neural networks (nn) display state-of-the-art performance. however, there is a growing need for smaller, energy-efficient, neural networks to be able to use machine learning applications on devices with limited computational resources. a popular approach consists of using pruning techniques. while these techniques have traditionally focused on pruning pre-trained nn (lecun et al., 1990; hassibi et al., 1993), recent work by lee et al. (2018) has shown promising results when pruning at initialization. however, for deep nns, such procedures remain unsatisfactory as the resulting pruned networks can be difficult to train and, for instance, they do not prevent one layer from being fully pruned. in this paper, we provide a comprehensive theoretical analysis of magnitude and gradient based pruning at initialization and training of sparse architectures. this allows us to propose novel principled approaches which we validate experimentally on a variety of nn architectures. introduction overparameterized deep nns have achieved state of the art (sota) performance in many tasks (nguyen and hein, 2018; du et al., 2019; zhang et al., 2016; neyshabur et al., 2019). however, it is impractical to implement such models on small devices such as mobile phones. to address this problem, network pruning is widely used to reduce the time and space requirements both at training and test time. the main idea is to identify weights that do not contribute significantly to the model performance based on some criterion, and remove them from the nn. 
however, most pruning procedures currently available can only be applied after having trained the full nn (lecun et al., 1990; hassibi et al., 1993; mozer and smolensky, 1989; dong et al., 2017), although methods that prune the nn during training have become available. for example, louizos et al. (2018) propose an algorithm which adds an l0 regularization on the weights to enforce sparsity, while carreira-perpiñán and idelbayev (2018); alvarez and salzmann (2017); li et al. (2020) propose the inclusion of compression inside training steps. other pruning variants consider training a secondary network that learns a pruning mask for a given architecture (li et al. (2020); liu et al. (2019)). recently, frankle and carbin (2019) have introduced and validated experimentally the lottery ticket hypothesis, which conjectures the existence of a sparse subnetwork that achieves similar performance to the original nn. these empirical findings have motivated the development of pruning at initialization, such as snip (lee et al. (2018)), which demonstrated performance similar to that of classical pruning-after-training methods. importantly, pruning at initialization never requires training the complete nn and is thus more memory efficient, making it possible to train deep nns using limited computational resources. however, such techniques may suffer from different problems. in particular, nothing prevents such methods from pruning one whole layer of the nn, making it untrainable. more generally, it is typically difficult to train the resulting pruned nn (li et al., 2018). lee et al. (2020) tackle this issue by enforcing dynamical isometry using orthogonal weights, while wang et al. (2020) (grasp) use hessian-based pruning to preserve gradient flow. other work by tanaka et al. (2020) considers a data-agnostic iterative approach using the concept of synaptic flow in order to avoid the layer-collapse phenomenon (pruning a whole layer).
in our work, we use principled scaling and re-parameterization to solve this issue, and show numerically that our algorithm achieves sota performance on cifar10, cifar100, tiny imagenet, and imagenet in some scenarios and remains competitive in others.

table 1: classification accuracies on cifar10 for resnets with varying depths and sparsities using snip (lee et al. (2018)) and our algorithm sbp-sr; columns: snip vs. sbp-sr at each depth (resnet32, resnet50, ...).

in this paper, we provide novel algorithms for sensitivity-based pruning (sbp), i.e., pruning schemes that prune a weight w based on the magnitude of |w ∂l/∂w| at initialization, where l is the loss. experimentally, compared to other available one-shot pruning schemes, these algorithms provide state-of-the-art results (this might not be true in some regimes). our work is motivated by a new theoretical analysis of gradient back-propagation relying on the mean-field approximation of deep nns (hayou et al., 2019; schoenholz et al., 2017; poole et al., 2016; yang and schoenholz, 2017; xiao et al., 2018; lee et al., 2018; matthews et al., 2018). our contribution is threefold:

• for deep fully connected feedforward nns (ffnn) and convolutional nns (cnn), it has been previously shown that only an initialization on the so-called edge of chaos (eoc) makes models trainable; see e.g. (schoenholz et al., 2017; hayou et al., 2019). for such models, we show that an eoc initialization is also necessary for sbp to be efficient. outside this regime, one layer can be fully pruned.

• for these models, pruning pushes the nn out of the eoc, making the resulting pruned model difficult to train. we introduce a simple rescaling trick to bring the pruned model back in the eoc regime, making the pruned nn easily trainable.

• unlike ffnn and cnn, we show that resnets are better suited for pruning at initialization since they ‘live’ on the eoc by default (yang and schoenholz, 2017).
however, they can suffer from exploding gradients, which we resolve by introducing a re-parameterization called ‘stable resnet’ (sr). the performance of the resulting sbp-sr pruning algorithm is illustrated in table 1: sbp-sr allows for pruning up to 99.5% of resnet104 on cifar10 while still retaining around 87% test accuracy. the precise statements and proofs of the theoretical results are given in the supplementary. appendix h also includes the proof of a weak version of the lottery ticket hypothesis (frankle and carbin, 2019) showing that, starting from a randomly initialized nn, there exists a subnetwork initialized on the eoc.

sensitivity pruning for ffnn/cnn and the rescaling trick

setup and notations. let x be an input in r^d. a nn of depth l is defined by y^l(x) = f_l(w^l, y^{l-1}(x)) + b^l, where y^l(x) is the vector of pre-activations, w^l and b^l are respectively the weights and bias of the l-th layer, and f_l is a mapping that defines the nature of the layer. the weights and bias are initialized with w^l_{ij} iid ~ n(0, σ_w²/v_l), where v_l is a scaling factor used to control the variance of y^l, and b^l_i iid ~ n(0, σ_b²). hereafter, m_l denotes the number of weights in the l-th layer, φ the activation function, and [m : n] := {m, m+1, ..., n} for m ≤ n. two examples of such architectures are:

• fully connected ffnn. for a ffnn of depth l and widths (n_l)_{0≤l≤L}, we have v_l = n_{l-1}, m_l = n_{l-1} n_l, and

y^1_i(x) = Σ_j w^1_{ij} x_j + b^1_i,   y^l_i(x) = Σ_j w^l_{ij} φ(y^{l-1}_j(x)) + b^l_i for l ≥ 2.   (2)

• cnn. for a 1d cnn of depth l, number of channels (n_l)_{l≤L}, and number of neurons per channel (N_l)_{l≤L}, we have

y^1_{i,α}(x) = Σ_j Σ_{β∈ker_l} w^1_{i,j,β} x_{j,α+β} + b^1_i,   y^l_{i,α}(x) = Σ_j Σ_{β∈ker_l} w^l_{i,j,β} φ(y^{l-1}_{j,α+β}(x)) + b^l_i, for l ≥ 2,   (3)

where i ∈ [1 : n_l] is the channel index, α ∈ [0 : N_l − 1] is the neuron location, ker_l = [−k_l : k_l] is the filter range, and 2k_l + 1 is the filter size. to simplify the analysis, we assume hereafter that N_l = N and k_l = k for all l. here, we have v_l = n_{l-1}(2k + 1) and m_l = n_{l-1} n_l (2k + 1).
we assume periodic boundary conditions, so y^l_{i,α+N} = y^l_{i,α−N}; generalization to multidimensional convolutions is straightforward. when no specific architecture is mentioned, (w^l_i)_{1≤i≤m_l} denotes the weights of the l-th layer. in practice, a pruning algorithm creates a binary mask δ over the weights to force the pruned weights to be zero. the neural network after pruning is given by

y^l(x) = f_l(δ^l ∘ w^l, y^{l-1}(x)) + b^l,   (4)

where ∘ is the hadamard (i.e., element-wise) product. in this paper, we focus on pruning at initialization. the mask is typically created using a vector g^l of the same dimension as w^l through a mapping of choice (see below); we then prune the network by keeping the weights that correspond to the top k values in the sequence (g^l_i)_{i,l}, where k is fixed by the sparsity that we want to achieve. there are three popular types of criteria in the literature:

• magnitude based pruning (mbp): we prune weights based on the magnitude |w|.

• sensitivity based pruning (sbp): we prune the weights based on the values of |w ∂l/∂w|, where l is the loss. this is motivated by the first-order approximation l_w ≈ l_{w=0} + w ∂l/∂w used in snip (lee et al. (2018)).

• hessian based pruning (hbp): we prune the weights based on some function that uses the hessian of the loss function, as in grasp (wang et al., 2020).

in the remainder of the paper, we focus exclusively on sbp, while our analysis of mbp is given in appendix e. we leave hbp for future work; however, we include empirical results with grasp (wang et al., 2020) in section 4. hereafter, we denote by s the sparsity, i.e., the fraction of weights we want to prune. let a_l be the set of indices of the weights in the l-th layer that are pruned, i.e., a_l = {i ∈ [1 : m_l] s.t. δ^l_i = 0}. we define the critical sparsity s_cr by

s_cr = min{s ∈ (0, 1) s.t. ∃ l, |a_l| = m_l},

where |a_l| is the cardinality of a_l. intuitively, s_cr represents the maximal sparsity we are allowed to choose without fully pruning at least one layer.
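the masked forward pass of equation (4), specialized to a fully connected layer, can be sketched as follows (a toy pure-python illustration; names are ours):

```python
def relu(t: float) -> float:
    return t if t > 0.0 else 0.0

def masked_ffnn_layer(y_prev, w, mask, b, phi=relu):
    """y^l_i = sum_j (delta^l_ij * w^l_ij) * phi(y^{l-1}_j) + b^l_i:
    one pruned fully connected layer, with `mask` the binary matrix
    delta^l (1 = kept, 0 = pruned)."""
    acts = [phi(t) for t in y_prev]
    return [sum(m * wij * a for m, wij, a in zip(mi, wi, acts)) + bi
            for mi, wi, bi in zip(mask, w, b)]
```

setting every mask entry to 1 recovers the unpruned layer of equation (2), so pruning is purely a change of the forward map, not of the training procedure.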
s_cr is random as the weights are initialized randomly. thus, we study the behaviour of the expected value e[s_cr], where, hereafter, all expectations are taken w.r.t. the random initial weights. this provides theoretical guidelines for pruning at initialization. for all l ∈ [1 : L], we define α_l by v_l = α_l n where n > 0, and ζ_l > 0 such that m_l = ζ_l n², where we recall that v_l is a scaling factor controlling the variance of y^l and m_l is the number of weights in the l-th layer. this notation assumes that, in each layer, the number of weights is quadratic in the number of neurons, which is satisfied by classical ffnn and cnn architectures.

sensitivity-based pruning (sbp). sbp is a data-dependent pruning method that uses the data to compute the gradient with back-propagation at initialization (one-shot pruning). we randomly sample a batch and compute the gradients ∂l/∂w^l_i of the loss with respect to each weight. the mask is then defined by δ^l_i = 1(|w^l_i ∂l/∂w^l_i| ≥ t_s), where t_s = |w ∂l/∂w|_(k_s) and k_s = (1 − s) Σ_l m_l; here |w ∂l/∂w|_(k_s) is the k_s-th order statistic of the sequence (|w^l_i ∂l/∂w^l_i|)_{1≤l≤L, 1≤i≤m_l}.

however, this simple approach suffers from the well-known exploding/vanishing gradients problem, which renders the first/last few layers respectively susceptible to being completely pruned. we give a formal definition of this problem.

definition 1 (well-conditioned & ill-conditioned nn). let m_l = e[|w^l_1 ∂l/∂w^l_1|²] for l ∈ [1 : L]. we say that the nn is well-conditioned if there exist a, b > 0 such that for all L ≥ 1 and l ∈ [1 : L] we have a ≤ m_l/m_1 ≤ b, and it is ill-conditioned otherwise.

understanding the behaviour of gradients at initialization is thus crucial for sbp to be efficient. using a mean-field approach, such an analysis has been carried out in (schoenholz et al., 2017; hayou et al., 2019; xiao et al., 2018; poole et al., 2016; yang, 2019), where it has been shown that an initialization known as the eoc is beneficial for dnn training.
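the sbp mask construction above — score every weight by |w ∂l/∂w| and keep the top (1 − s) fraction globally across layers — can be sketched as follows (a pure-python toy; in practice the scores come from one back-propagation pass on a sampled batch):

```python
def sbp_mask(weights, grads, sparsity):
    """one-shot sensitivity-based pruning: rank all weights by the
    criterion |w * dL/dw| and keep the top (1 - sparsity) fraction
    globally. `weights` and `grads` are lists of per-layer lists."""
    scores = [[abs(w * g) for w, g in zip(wl, gl)]
              for wl, gl in zip(weights, grads)]
    flat = sorted((s for layer in scores for s in layer), reverse=True)
    k = int((1.0 - sparsity) * len(flat))  # number of weights kept
    threshold = flat[k - 1] if k > 0 else float("inf")
    # note: ties exactly at the threshold may keep slightly more than k
    return [[1 if s >= threshold else 0 for s in layer] for layer in scores]
```

because the threshold is global rather than per-layer, a layer whose gradients vanish can lose all of its weights — precisely the failure mode that definition 1 and theorem 1 formalize.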
the mean-field analysis of dnns relies on two standard approximations that we will also use here.

approximation 1 (mean-field approximation). when n_l ≫ 1 for ffnn or n_l ≫ 1 for cnn, we use the approximation of infinitely wide nns. this means an infinite number of neurons per layer for fully connected layers and an infinite number of channels per layer for convolutional layers.

approximation 2 (gradient independence). the weights used for forward propagation are independent from those used for back-propagation.

these two approximations are ubiquitous in the literature on the mean-field analysis of neural networks. they have been used to derive theoretical results on signal propagation (schoenholz et al., 2017; hayou et al., 2019; poole et al., 2016; yang, 2019; yang and schoenholz, 2017; yang et al., 2019) and are also key tools in the derivation of the neural tangent kernel (jacot et al., 2018; arora et al., 2019; hayou et al., 2020). approximation 1 simplifies the analysis of the forward propagation as it allows the derivation of closed-form formulas for covariance propagation. approximation 2 does the same for back-propagation. see appendix a for a detailed discussion of these approximations. throughout the paper, we provide numerical results that substantiate the theoretical results that we derive using these two approximations. we show that these approximations lead to an excellent match between theoretical results and numerical experiments.

edge of chaos (eoc). for inputs x, x', let c_l(x, x') be the correlation between y^l(x) and y^l(x'). from (schoenholz et al., 2017; hayou et al., 2019), there exists a so-called correlation function f, depending on (σ_w, σ_b), such that c_{l+1}(x, x') = f(c_l(x, x')). let χ(σ_b, σ_w) = f'(1). the eoc is the set of hyperparameters (σ_w, σ_b) satisfying χ(σ_b, σ_w) = 1.
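for concreteness, here is a small sketch of the correlation map for relu networks on the eoc (σ_w² = 2, σ_b = 0). the closed form below is the standard arccosine-kernel expression for relu; it is our illustration and is not quoted from this paper:

```python
import math

def relu_eoc_corr_map(c: float) -> float:
    """correlation map f for a relu network on the eoc
    (sigma_w^2 = 2, sigma_b = 0):
    f(c) = (sqrt(1 - c^2) + (pi - arccos(c)) * c) / pi,
    which satisfies f(1) = 1 (fixed point at perfect correlation)."""
    c = max(-1.0, min(1.0, c))
    return (math.sqrt(1.0 - c * c) + (math.pi - math.acos(c)) * c) / math.pi

# iterating c_{l+1} = f(c_l) shows correlations drifting towards 1
c = 0.2
for _ in range(5):
    c = relu_eoc_corr_map(c)
```

on the eoc, f'(1) = 1, so the convergence of c_l to 1 is only sub-exponential, which is exactly why information about distinct inputs survives to much greater depth than in the ordered or chaotic phases.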
when χ(σ_b, σ_w) > 1, we are in the chaotic phase: the gradient explodes and c_l(x, x') converges exponentially to some c < 1 for x ≠ x', and the resulting output function is discontinuous everywhere. when χ(σ_b, σ_w) < 1, we are in the ordered phase, where c_l(x, x') converges exponentially fast to 1 and the nn outputs constant functions. initialization on the eoc allows for better information propagation (see the supplementary for more details). hence, by leveraging the above results, we show that an initialization outside the eoc will lead to an ill-conditioned nn.

theorem 1 (eoc initialization is crucial for sbp). consider a nn of type (2) or (3) (ffnn or cnn). assume (σ_w, σ_b) are chosen in the ordered phase, i.e. χ(σ_b, σ_w) < 1; then the nn is ill-conditioned. moreover, we have e[s_cr] ≤ log(κ L n²)/κ + o(·), where κ = |log χ(σ_b, σ_w)|/8. if (σ_w, σ_b) are on the eoc, i.e. χ(σ_b, σ_w) = 1, then the nn is well-conditioned. in this case, κ = 0 and the above upper bound no longer holds.

the proof of theorem 1 relies on the behaviour of the gradient norm at initialization. in the ordered phase, the gradient norm vanishes exponentially quickly as it back-propagates, thus resulting in an ill-conditioned network. we use another approximation for the sake of simplifying the proof (approximation 3 in the supplementary), but the result holds without this approximation, although the resulting constants would be a bit different. theorem 1 shows that the upper bound decreases the farther χ(σ_b, σ_w) is from 1, i.e., the farther the initialization is from the eoc. for a constant-width ffnn with L = 100, n = 100, and κ = 0.2, the theoretical upper bound is e[s_cr] ≲ 27%, while we obtain e[s_cr] ≈ 22% based on 10 simulations. a similar result can be obtained when the nn is initialized in the chaotic phase; in this case too, the nn is ill-conditioned. to illustrate these results, figure 1 shows the impact of the initialization with sparsity s = 70%.
the dark area in figure 1(b) corresponds to layers that are fully pruned in the chaotic phase due to exploding gradients. using an eoc initialization, figure 1(a) shows that pruned weights are well distributed across the nn, ensuring that no layer is fully pruned.

figure 1: percentage of weights kept after sbp applied to a randomly initialized ffnn with depth 100 and width 100 for 70% sparsity on mnist: (a) edge of chaos, (b) chaotic phase. each pixel (i, j) corresponds to a neuron and shows the proportion of connections to neuron (i, j) that have not been pruned. the eoc (a) allows us to preserve a uniform spread of the weights, whereas the chaotic phase (b), due to exploding gradients, prunes entire layers.

training pruned networks using the rescaling trick

we have shown previously that an initialization on the eoc is crucial for sbp. however, we have not yet addressed the key problem of training the resulting pruned nn, which can be very challenging in practice (li et al., 2018), especially for deep nns. consider as an example a ffnn architecture. after pruning, we have for an input x

ŷ^l_i(x) = Σ_{j=1}^{n_{l-1}} w^l_{ij} δ^l_{ij} φ(ŷ^{l-1}_j(x)) + b^l_i, for l ≥ 2,

where δ is the pruning mask. while the original nn initialized on the eoc satisfied c_{l+1}(x, x') = f(c_l(x, x')) with f'(1) = χ(σ_b, σ_w) = 1, the pruned architecture leads to ĉ_{l+1}(x, x') = f_pruned(ĉ_l(x, x')) with f'_pruned(1) ≠ 1; hence pruning destroys the eoc. consequently, the pruned nn will be difficult to train (schoenholz et al., 2017; hayou et al., 2019), especially if it is deep. hence, we propose to bring the pruned nn back on the eoc. this approach consists of rescaling the weights obtained after sbp in each layer by factors that depend on the pruned architecture itself.

proposition 1 (rescaling trick). consider a nn of type (2) or (3) (ffnn or cnn) initialized on the eoc. then, after pruning, the pruned nn is not initialized on the eoc anymore.
however, the rescaled pruned nn

y^l(x) = f(ρ^l ∘ δ^l ∘ w^l, y^{l-1}(x)) + b^l, for l ≥ 1,   (7)

where ρ^l_{ij} = (e[n_{l-1}(w^l_{i1} δ^l_{i1})²])^{-1/2} for ffnn and ρ^l_{i,j,β} = (e[n_{l-1}(w^l_{i,1,β} δ^l_{i,1,β})²])^{-1/2} for cnn, is initialized on the eoc (the scaling is constant across j). the scaling factors in equation 7 are easily approximated using the weights kept after pruning. algorithm 1 (see appendix i) details a practical implementation of this rescaling technique for ffnn. we illustrate experimentally the benefits of this approach in section 4.

sensitivity-based pruning for stable residual networks

resnets and their variants (he et al., 2015; huang et al., 2017) are currently the best performing models on various classification tasks (cifar10, cifar100, imagenet, etc. (kolesnikov et al., 2019)). thus, understanding resnet pruning at initialization is of crucial interest. yang and schoenholz (2017) showed that resnets naturally ‘live’ on the eoc. using this result, we show that resnets are actually better suited to sbp than ffnn and cnn. however, resnets suffer from an exploding gradient problem (yang and schoenholz, 2017), which might affect the performance of sbp. we address this issue by introducing a new resnet parameterization.

figure 2: percentage of non-pruned weights per layer in a resnet32 for our stable resnet32 and standard resnet32 with kaiming initialization on cifar10. with stable resnet, we prune weights in the deeper layers less aggressively than for standard resnet.

let a standard resnet architecture be given by

y^1(x) = f(w^1, x),   y^l(x) = y^{l-1}(x) + f(w^l, y^{l-1}(x)), for l ≥ 2,   (8)

where f defines the blocks of the resnet. hereafter, we assume that f is either of the form (2) or (3) (ffnn or cnn). the next theorem shows that resnets are well-conditioned independently of the initialization and are thus well suited for pruning at initialization.

theorem 2 (resnets are well-conditioned). consider a resnet with either fully connected or convolutional layers and relu activation function.
then, for all σ_w > 0, the resnet is well-conditioned. moreover, for all l ∈ {1, ..., L}, we have m_l = θ((1 + σ_w²/2)^L).

the above theorem proves that resnets are always well-conditioned. however, taking a closer look at m_l, which represents the variance of the pruning criterion (definition 1), we see that it grows exponentially in the number of layers L. therefore, this could lead to a higher variance of pruned networks and hence high-variance test accuracy. to this end, we propose a resnet parameterization which we call stable resnet. stable resnets prevent the second moment from growing exponentially, as shown below.

proposition 2 (stable resnet). consider the following resnet parameterization:

y^l(x) = y^{l-1}(x) + (1/√L) f(w^l, y^{l-1}(x)), for l ≥ 2.

then the nn is well-conditioned for all σ_w > 0. moreover, for all l ≤ L we have m_l = θ(L^{-1}).

in proposition 2, L is not the number of layers but the number of blocks; for example, resnet32 has 15 blocks and 32 layers, hence L = 15. figure 2 shows the percentage of weights in each layer kept after pruning resnet32 and stable resnet32 at initialization. the jumps correspond to limits between sections in resnet32 and are caused by max-pooling. within each section, stable resnet tends to have a more uniform distribution of percentages of weights kept after pruning compared to standard resnet. in section 4, we show that this leads to better performance of stable resnet compared to standard resnet. further theoretical and experimental results for stable resnets are presented in (hayou et al., 2021). in the next proposition, we establish that, unlike ffnn or cnn, we do not need to rescale the pruned resnet for it to be trainable, as it lives naturally on the eoc before and after pruning.

proposition 3 (resnets live on the eoc even after pruning). consider a residual nn with blocks of type ffnn or cnn. then, after pruning, the pruned residual nn is initialized on the eoc.
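the stable resnet update of proposition 2 amounts to a one-line change of the residual branch. a minimal sketch (`f` stands in for an arbitrary ffnn/cnn block; here it is any callable on a list):

```python
import math

def stable_residual_block(y_prev, f, num_blocks):
    """stable resnet update (proposition 2):
    y^l = y^{l-1} + f(w^l, y^{l-1}) / sqrt(L),
    with L the number of residual blocks (e.g. L = 15 for resnet32)."""
    scale = 1.0 / math.sqrt(num_blocks)
    fy = f(y_prev)
    return [a + scale * b for a, b in zip(y_prev, fy)]
```

since each branch contributes variance of order 1/L instead of order 1, the sum over L blocks stays bounded as depth grows, which is the mechanism behind the θ(L^{-1}) second moment in the proposition.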
experiments in this section, we illustrate empirically the theoretical results obtained in the previous sections. we validate the results on mnist, cifar10, cifar100 and tiny imagenet. | 5 | [
107.751,
107.4020784,
503.9968519344,
128.3236784
] |
NRHajbzg8y0P.pdf | 2,023 | 1 | multimodal analogical reasoning over knowledge graphs ningyu zhang1∗ lei li1∗ xiang chen1∗ xiaozhuan liang1 shumin deng2 huajun chen1† 1zhejiang university, azft joint lab for knowledge engine 2national university of singapore {zhangningyu,leili21,xiang chen,liangxiaozhuan,231sm,huajunsir}@zju.edu.cn abstract analogical reasoning is fundamental to human cognition and holds an important place in various fields. however, previous studies mainly focus on single-modal analogical reasoning and ignore taking advantage of structured knowledge. notably, research in cognitive psychology has demonstrated that information from multimodal sources always brings more powerful cognitive transfer than single-modality sources. to this end, we introduce the new task of multimodal analogical reasoning over knowledge graphs, which requires multimodal reasoning ability with the help of background knowledge. specifically, we construct a multimodal analogical reasoning dataset (mars) and a multimodal knowledge graph markg. we evaluate with multimodal knowledge graph embedding and pre-trained transformer baselines, illustrating the potential challenges of the proposed task. we further propose a novel model-agnostic multimodal analogical reasoning framework with transformer (mart) motivated by the structure mapping theory, which can obtain better performance. we hope our work can deliver benefits and inspire future research1. introduction analogical reasoning – the ability to perceive and use relational similarity between two situations or events – holds an important place in human cognition (johnson-laird, 2006; wu et al., 2020; bengio et al., 2021; chen et al., 2022a) and can provide back-end support for various fields such as education (thagard, 1992) and creativity (goel, 1997), thus appealing to the ai community. early, mikolov et al. (2013b); gladkova et al. (2016a); ethayarajh et al.
(2019a) propose visual analogical reasoning aiming at lifting machine intelligence in computer vision (cv) by associating vision with relational, structural, and analogical reasoning. meanwhile, researchers of natural language processing (nlp) hold the connectionist assumption (gentner, 1983) of linear analogy (ethayarajh et al., 2019b); for example, the relation between two words can be inferred through vector arithmetic of word embeddings. however, it is still an open question whether artificial neural networks are also capable of recognizing analogies among different modalities. note that humans can quickly acquire new abilities based on finding a common relational system between two exemplars, situations, or domains. based on mayer’s cognitive theory of multimedia learning (hegarty & just, 1993; mayer, 2002), human learners often perform better on tests with analogy when they have learned from multimodal sources than single-modal sources. evolving from recognizing single-modal analogies to exploring multimodal reasoning for neural models, we emphasize the importance of a new kind of analogical reasoning task with knowledge graphs (kgs). in this paper, we introduce the task of multimodal analogical reasoning over knowledge graphs to fill this blank. unlike the previous multiple-choice qa setting, we directly predict the analogical target and formulate the task as link prediction without explicitly providing relations. specifically, the task can be formalized as (eh, et) : (eq, ?) with the help of background multimodal knowledge graph ∗equal contribution and shared co-first authorship. †corresponding author. 1code and datasets are available in https://github.com/zjunlp/mkg_analogy. g, in which eh, et or eq have different modalities. we collect a multimodal analogical reasoning dataset (mars) and a multimodal knowledge graph markg to support this task. 
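The linear-analogy assumption mentioned above (that the relation between two words can be inferred through vector arithmetic of word embeddings) can be sketched in a few lines. The 3-d embeddings below are made-up stand-ins, not real word vectors:

```python
import numpy as np

# Made-up 3-d "word embeddings", purely illustrative.
emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "man":   np.array([0.5, 0.1, 0.1]),
    "woman": np.array([0.5, 0.1, 0.9]),
    "queen": np.array([0.9, 0.8, 0.9]),
}

def solve_analogy(a, b, c, vocab):
    """Answer a : b :: c : ? by maximizing cosine similarity to b - a + c."""
    target = emb[b] - emb[a] + emb[c]

    def cos(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Exclude the query words themselves, as is standard in analogy evaluation.
    candidates = [w for w in vocab if w not in (a, b, c)]
    return max(candidates, key=lambda w: cos(emb[w], target))

print(solve_analogy("man", "king", "woman", emb))  # queen
```

The multimodal task described above generalizes this single-modal setup: the head, tail, and query entities may each come from different modalities, so vector arithmetic alone no longer suffices.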
these data are collected and annotated from seed entities and relations in e-kar (chen et al., 2022a) and bats (gladkova et al., 2016a), with linked external entities in wikidata and images from laion-5b (schuhmann et al., 2021). to evaluate the multimodal analogical reasoning process, we follow the guidelines from psychological theories and conduct comprehensive experiments on mars with multimodal knowledge graph embedding baselines and multimodal pre-trained transformer baselines. we further propose a novel multimodal analogical reasoning framework with transformer, namely mart, which is readily pluggable into any multimodal pre-trained transformer models and can yield better performance. to summarize, our contributions are three-fold: (1) we advance the traditional setting of analogy learning by introducing a new multimodal analogical reasoning task. our work may open up new avenues for improving analogical reasoning through multimodal resources. (2) we collect and build a dataset mars with a multimodal knowledge graph markg, which can serve as a scaffold for investigating the multimodal analogical reasoning ability of neural networks. (3) we report the performance of various multimodal knowledge graph embedding baselines, multimodal pre-trained transformer baselines, and our proposed framework mart. we further discuss the potential of this task and hope it facilitates future research on zero-shot learning and domain generalization in both cv and nlp. background
zOHQGKO3WGY.pdf | 2,023 | 2 | semi-supervised learning with a principled likelihood from a generative model of data curation stoil ganev and laurence aitchison department of computer science university of bristol, bristol, uk laurence.aitchison@bristol.ac.uk abstract we currently do not have an understanding of semi-supervised learning (ssl) objectives such as pseudo-labelling and entropy minimization as log-likelihoods, which precludes the development of e.g. bayesian ssl. here, we note that benchmark image datasets such as cifar-10 are carefully curated, and we formulate ssl objectives as a log-likelihood in a generative model of data curation. we show that ssl objectives, from entropy minimization and pseudo-labelling, to state-ofthe-art techniques similar to fixmatch can be understood as lower-bounds on our principled log-likelihood. we are thus able to introduce a bayesian extension of ssl, which gives considerable improvements over standard ssl in the setting of 40 labelled points on cifar-10, with performance of 92.2±0.3% vs 88.6% in the original fixmatch paper. finally, our theory suggests that ssl is effective in part due to the statistical patterns induced by data curation. this provides an explanation of past results which show ssl performs better on clean datasets without any “out of distribution” examples. confirming these results we find that ssl gave much larger performance improvements on curated than on uncurated data, using matched curated and uncurated datasets based on galaxy zoo 2.1 introduction to build high-performing deep learning models for industrial and medical applications, it is necessary to train on large human-labelled datasets. for instance, imagenet (deng et al., 2009), a classic benchmark dataset for object recognition, contains over 1 million labelled examples. unfortunately, human labelling is often prohibitively expensive. in contrast obtaining unlabelled data is usually very straightforward. 
for instance, unlabelled image data can be obtained in almost unlimited volumes from the internet. semi-supervised learning (ssl) attempts to leverage this unlabelled data to reduce the required number of human labels (seeger, 2000; zhu, 2005; chapelle et al., 2006; zhu & goldberg, 2009; van engelen & hoos, 2020). one family of ssl methods — those based on low-density separation — assume that decision boundaries lie in regions of low probability density, far from all labelled and unlabelled points. to achieve this, pre deep learning (dl) low-density separation ssl methods such as entropy minimization and pseudo-labelling (grandvalet & bengio, 2005; lee, 2013) use objectives that repel decision boundaries away from unlabelled points by encouraging the network to make more certain predictions on those points. entropy minimization (as the name suggests) minimizes the predictive entropy, whereas pseudo-labelling treats the currently most-probable label as a pseudo-label, and minimizes the cross entropy to that pseudo-label. more modern work uses the notion of consistency regularisation, which augments the unlabelled data (e.g. using translations and rotations), then encourages the neural network to produce similar outputs for different augmentations of the same underlying image (sajjadi et al., 2016; xie et al., 2019; berthelot et al., 2019b; sohn et al., 2020). further developments of this line of work have resulted in many variants/combinations of these algorithms, from directly encouraging the smoothness of the classifier outputs around unlabelled datapoints (miyato et al., 2018) to the “fixmatch” family of 1our code: https://anonymous.4open.science/r/gz_ssl-ed9e; mit licensed algorithms (berthelot et al., 2019b;a; sohn et al., 2020), which combine pseudo-labelling and consistency regularisation by augmenting each image twice, and using one of the augmented images to provide a pseudo-label for the other augmentation. 
however, some of the biggest successes of deep learning, from supervised learning to many generative models, have been built on a principled statistical framework as maximum (marginal) likelihood inference (e.g. the cross-entropy objective in supervised learning can be understood as the log-likelihood for a categorical-softmax model of the class-label mackay, 2003). low-density separation ssl methods such as pseudo-labelling and entropy minimization are designed primarily to encourage the class-boundary to lie in low-density regions. therefore they cannot be understood as log-likelihoods and cannot be combined with principled statistical methods such as bayesian inference. here, we give a formal account of ssl methods based on low-density separation (chapelle et al., 2006) as lower bounds on a principled log-likelihood. in particular, we consider pseudo-labelling (lee, 2013), entropy minimization (grandvalet & bengio, 2005), and modern methods similar to fixmatch (sohn et al., 2020). thus, we introduce a bayesian extension of ssl which gives 92.2 ± 0.3% accuracy, vs 88.6% in the case of 40 labelled examples in the original fixmatch paper. we confirm the importance of data curation for ssl on real data from galaxy zoo 2 (also see cozman et al., 2003; oliver et al., 2018; chen et al., 2020; guo et al., 2020). background the intuition behind low-density separation objectives for semi-supervised learning is that decision boundaries should be in low-density regions away from both labelled and unlabelled data. as such, it is sensible to “repel” decision boundaries away from labelled and unlabelled datapoints and this can be achieved by making the classifier as certain as possible on those points. this happens automatically for labelled points as the standard supervised objective encourages the classifier to be as certain as possible about the true class label. but for unlabelled points we need a new objective that encourages certainty, and we focus on two approaches. 
first, and perhaps most direct, is entropy minimization (grandvalet & bengio, 2005), $L_{\text{neg-entropy}}(x) = \sum_{y \in \mathcal{Y}} p_y(x) \log p_y(x)$, where $x$ is the input, $y$ is one particular label and $\mathcal{Y}$ is the set of possible labels. here, we have followed the typical probabilistic approach in writing the negative entropy as an objective to be maximized. alternatively, we could use pseudo-labelling, which takes the current classification, $y^*$, to be the true label, and maximizes the log-probability of that label (lee, 2013), $L_{\text{pseudo}}(x) = \log p_{y^*}(x)$, where $y^* = \arg\max_{y \in \mathcal{Y}} \log p_y(x)$. lee (2013) regarded pseudo-labelling as closely related to entropy minimization, as the optimal value of both objectives is reached when all the probability mass is assigned to one class. however, they are not formulated as a principled log-likelihood, which gives rise to at least three problems. first, these methods cannot be combined with other principled statistical methods such as bayesian inference. second, it is unclear how to combine these objectives with standard supervised objectives, except by taking a weighted sum and doing hyperparameter optimization over the weight. third, these objectives risk reinforcing any initial poor classifications and it is unclear whether this is desirable. in standard supervised learning, unlabelled points should be uninformative: it is important to note that under the standard supervised-learning generative model, unlabelled points should not give any information about the weights. the typical supervised learning setup assumes that the joint probability factorises as $p(x, \theta, y) = p(x)\, p(\theta)\, p(y \mid x, \theta)$. figure 1: a. a toy dataset generated to illustrate the dangers of using the clustering of the input points to inform classification boundaries. the input features, $x_0$ and $x_1$, are plotted on the x- and y-axes and the class is represented by colour. b.
a schematic diagram demonstrating the effect of our principled likelihood incorporating data-augmentation on the certainty of predictions for different degrees of invariance. more invariant nns (left) give similar predictive distributions for different augmentations (blue), and hence a certain averaged predictive distribution (bottom; orange). less invariant nns (right) give different predictive distributions for different augmentations (blue), and hence highly uncertain averaged predictive distributions (bottom; orange). as the prior over weights, $\theta$, is usually chosen to be independent of the inputs (e.g. iid gaussian), $x$ and $\theta$ are marginally independent and we cannot obtain any information about $\theta$ from $x$ alone. formally, the posterior over $\theta$ conditioned on $x$ is equal to the prior, $p(\theta \mid x) = \frac{p(\theta, x)}{p(x)} = \frac{\sum_{y \in \mathcal{Y}} p(\theta, x, y)}{p(x)} = \frac{p(\theta)\, p(x) \sum_{y \in \mathcal{Y}} p(y \mid \theta, x)}{p(x)} = p(\theta)$, as $1 = \sum_{y \in \mathcal{Y}} p(y \mid \theta, x)$. to confirm this result is intuitively sensible, note that there are many situations where encouraging the decision boundary to lie in low density regions would be very detrimental to performance. consider a classifier with two input features: $x_0$ and $x_1$ (fig. 1a). the class boundary lies in the high-density region crossing both clusters, so to obtain a reasonable result, the classifier should ignore the low-density region lying between the clusters. however, strong low-density separation ssl terms in the objective may align the cluster boundaries with the class boundaries, leading the classifier to wrongly believe that one cluster is entirely one class and the other cluster is entirely the other class. in contrast, supervised learning without ssl will ignore clustering and obtain a reasonable answer close to the grey dashed line.
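As a minimal sketch of the two low-density separation objectives above (entropy minimization and pseudo-labelling), assuming only a vector of predicted class probabilities for one unlabelled point:

```python
import numpy as np

def neg_entropy(p):
    """Entropy-minimization objective: sum_y p_y log p_y (to be maximized)."""
    return float(np.sum(p * np.log(p)))

def pseudo_label_objective(p):
    """Pseudo-labelling: log-probability of the currently most likely class."""
    return float(np.log(p[np.argmax(p)]))

confident = np.array([0.98, 0.01, 0.01])
uncertain = np.array([1 / 3, 1 / 3, 1 / 3])

# Both objectives are larger for confident predictions, so maximizing them
# encourages certainty on unlabelled points, repelling the decision boundary
# from them.
assert neg_entropy(confident) > neg_entropy(uncertain)
assert pseudo_label_objective(confident) > pseudo_label_objective(uncertain)
```

Both objectives peak when all probability mass sits on one class, which is exactly the relationship between them noted by Lee (2013).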
importantly, this is just an illustrative example to demonstrate that without further assumptions, the standard supervised approach of ignoring unlabelled data is sensible; semi-supervised learning without loss of performance in such settings has been studied and is known as safe ssl (li & zhou, 2014; krijthe & loog, 2014; kawakita & takeuchi, 2014; loog, 2015; krijthe & loog, 2016; guo & li, 2018; li et al., 2019). a generative model of data curation
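The marginal-independence argument above, that the posterior p(θ|x) collapses to the prior p(θ) under the standard supervised factorization, can be checked numerically on a small discrete model; the distributions below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete toy model with p(x, theta, y) = p(x) p(theta) p(y | x, theta).
n_x, n_theta, n_y = 4, 3, 2
p_x = rng.dirichlet(np.ones(n_x))
p_theta = rng.dirichlet(np.ones(n_theta))
p_y_given = rng.dirichlet(np.ones(n_y), size=(n_x, n_theta))  # sums to 1 over y

# Joint p(x, theta, y), then marginalize out y.
joint = p_x[:, None, None] * p_theta[None, :, None] * p_y_given
p_x_theta = joint.sum(axis=2)  # = p(x) p(theta), since sum_y p(y|x,theta) = 1

# The posterior p(theta | x) = p(x, theta) / p(x) equals the prior p(theta).
posterior = p_x_theta / p_x[:, None]
assert np.allclose(posterior, p_theta[None, :])
```

This is why a different generative model (here, one of data curation) is needed before unlabelled points can carry information about the weights.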
wtcud6HroZr.pdf | 2,023 | 2 | learning to decompose visual features with latent textual prompts feng wang1, manling li2, xudong lin3, hairong lv1, alexander g. schwing2 & heng ji2 1tsinghua university 2university of illinois at urbana-champaign 3columbia university abstract recent advances in pre-training vision-language models like clip (radford et al., 2021) have shown great potential in learning transferable visual representations. nonetheless, for downstream inference, clip-like models suffer from either 1) degraded accuracy and robustness when inferring by retrieving textual class names (the zero-shot protocol); or 2) breaking the well-established vision-language alignment (linear probing). to combine the best of both worlds, we propose decomposed feature prompting (defo). defo maintains the dual-model architecture yet leverages learnable embeddings as textual input and performs classification with an additional linear layer. as a result, we find defo to be able to extract decomposed visual features with the help of textual prompts and to allow a scalable size of language inputs. our empirical study shows defo’s significance in improving the vision-language models. for example, defo obtains 73.2% test accuracy on imagenet with a resnet-50 backbone without tuning any pretrained weights of both the vision and language encoder, outperforming zero-shot clip by a large margin of 15.0%, and outperforming state-of-the-art vision-language prompt tuning by 7.6%. introduction language-guided visual pretraining has gained a lot of attention and shows great promise in learning transferable image representations. by establishing a connection between images and natural language, recent vision-language models are able to turn visual inference over a restricted number of classes into zero-shot open-vocabulary inference (radford et al., 2021; jia et al., 2021; pham et al., 2021). 
one of the recent successes for zero-shot inference is the contrastive language-image pretraining (clip) model (radford et al., 2021). it uses 400 million image-text pairs to learn an alignment between visual and textual representations obtained from a vision encoder and a language encoder respectively. in downstream applications, clip-like models (radford et al., 2021; jia et al., 2021; pham et al., 2021) then perform zero-shot inference by hard-target retrieval, i.e., they directly compute the distance between a vectorial image representation obtained from the vision encoder, and representations of text prompts (e.g., “a photo of an airplane” or “a photo of an automobile”) obtained from the language encoder. the target class (e.g., “airplane” or “automobile”) corresponding to the text prompt with the smallest distance to the vector representing the image constitutes the zero-shot inference result. when annotations are given, simple linear probing (i.e., removing the language encoder, fine-tuning of the vision encoder and training of a classifier on top of the vision encoder) further improves the results (radford et al., 2021). moreover, context optimization (coop) (zhou et al., 2021) replaces the hand-crafted prefix or suffix (e.g., “a photo of a”) of the text prompts by trainable embedding vectors. however, the zero-shot clip and coop infer using hard textual targets, i.e., the class names, which results in two main challenges. first, class names in text prompts (e.g., “airplane” or “automobile”), as used in zero-shot clip and coop inference, do not permit to accurately summarize the semantic information of an image. therefore, inference is very sensitive to the words chosen for class names. we refer to this challenge as expressive sensitivity. empirically, this challenge causes zero-shot clip and coop to struggle to achieve as competitive results as linear probing with the same image encoder when downstream training data is available (e.g., 58.2% accuracy vs. 
72.3% on imagenet (deng et al., 2009)). moreover, this sensitivity can be observed by modifying class names. for example, for zero-shot inference on cifar-10 (krizhevsky et al., 2009), clip obtains an accuracy of 63.7% when the original class names are used. notably, simply replacing or extending the class names with suitable synonyms1 (e.g., “plane” and “car” rather than “airplane” and “automobile”) can improve accuracy to 79.6%, which highlights the challenge of expressive sensitivity. second, despite the fact that hundreds of millions of pretraining samples cover a large number of concepts that can possibly appear in downstream datasets, zero-shot inference continues to struggle to recognize rare objects. we refer to this as the conceptual sensitivity. for example, zero-shot clip is only 38.5% accurate when classifying eurosat satellite images (helber et al., 2019), which is much lower than the result of a supervised resnet-50 (he et al., 2016) encoder (93.4%). also, zero-shot clip with a resnet-50 encoder achieves less than 90% accuracy on mnist (lecun, 1998), which can even be outperformed by a simple logistic regression model. while linear probing is a straightforward way to improve results, removing the language encoder breaks the vision-language alignment that is learned from the pretraining data, and therefore degrades few-shot and transfer learning performance. in this paper, we propose decomposed feature prompting (defo), which turns the hard-target-retrieval paradigm of clip and coop into dual-model feature prompting. specifically, defo 1) provides to the language encoder a set of learnable embedding sequences which are independent of the hard semantic targets; and 2) performs classification by tuning an additional layer. as a result, defo does not rely on the textual representations of class names being classification targets, which addresses the issues of expressive sensitivity and conceptual sensitivity.
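The hard-target retrieval protocol described above can be sketched as follows; the embeddings are hand-picked stand-ins rather than real CLIP encoder outputs:

```python
import numpy as np

def l2n(v):
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def zero_shot_classify(image_feat, text_feats, class_names):
    """Hard-target retrieval: return the class whose prompt embedding has
    the highest cosine similarity with the image embedding."""
    sims = l2n(text_feats) @ l2n(image_feat)
    return class_names[int(np.argmax(sims))]

classes = ["airplane", "automobile"]
text_feats = np.array([[1.0, 0.0, 0.0],   # stand-in for "a photo of an airplane"
                       [0.0, 1.0, 0.0]])  # stand-in for "a photo of an automobile"
image_feat = np.array([0.9, 0.1, 0.0])    # image embedding closest to "airplane"

print(zero_shot_classify(image_feat, text_feats, classes))  # airplane
```

Because the targets are the class-name prompts themselves, swapping "airplane" for a synonym changes `text_feats` and can flip the prediction, which is the expressive-sensitivity problem discussed above.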
meanwhile, defo maintains the dual-model architecture, which enables the model to leverage the language information, so that few-shot and transfer learning performance can be boosted. defo results show the significance of addressing the sensitivity challenges of clip-like models. for example, with a resnet-50 backbone, defo achieves 73.2% test accuracy on imagenet without modifying any pretrained weight of the image and text encoders, outperforming vanilla clip by a large margin of 15.0% and outperforming coop by 7.6%. in a variety of visual contexts, defo attains an average accuracy of 79.9% over 11 image classification benchmarks, which is 21.0% higher than that of zero-shot clip and 6.2% higher than coop. related work pretraining-finetuning has long been a dominant paradigm of transfer learning in machine learning, computer vision, and natural language processing. generally, pretraining a vision encoder by generative objectives (bao et al., 2021; he et al., 2022) or discriminative objectives (he et al., 2020; chen et al., 2020; grill et al., 2020; caron et al., 2021) at the scale of one to ten million images (deng et al., 2009) is sufficient to yield good visual representations and strong predictive performance in downstream visual tasks. however, without the supervision from other modalities, such pretrained models require task-specific finetuning (bao et al., 2021; he et al., 2022; o pinheiro et al., 2020; wang et al., 2022a; lin et al., 2022a) or linear probing he et al. (2020); chen et al. (2020) for reasonably domain-adapted predictions. the contrastive language-image pretraining (clip) (radford et al., 2021) method instead jointly pretrains a vision encoder and a text encoder on 400 million curated image-text pairs, with a contrastive objective (gutmann & hyv¨arinen, 2010) that matches the visual and textual representations. 
in downstream applications, clip achieves competitive results in various vision or vision-language tasks such as image classification (zhou et al., 2021; gao et al., 2021), dense prediction (rao et al., 2022), video-language tasks (luo et al., 2021; lin et al., 2022b; wang et al., 2022b), image manipulation (patashnik et al., 2021), and multimedia event extraction (li et al., 2022). following the success of clip, the align (jia et al., 2021) model leverages a noisy dataset of 1.8 billion image-text pairs to scale up vision-language representation learning, and the basic (pham et al., 2021) model further scales up this approach in terms of data and model size. based on the success of clip-like vision-language pretraining, a series of follow-up inference approaches are proposed to improve classification results. for example, zhou et al. (2021) propose coop to learn 1we use wordnet (fellbaum, 2010) to find synonyms. context information in downstream datasets, and gao et al. (2021) propose clip-adapter to learn domain-adaptation for vision-language models. further, following coop, zhou et al. (2022) propose cocoop to enhance the performance in unseen classes; and similarly, following clip-adapter, zhang et al. (2021) propose tip-adapter to explore non-parametric adaptation layers. despite the progress these methods (zhou et al., 2021; gao et al., 2021; zhou et al., 2022) have achieved in downstream predictive performance, they do not change clip’s inference paradigm of retrieving class names. hence, the challenges of expressive sensitivity and conceptual sensitivity remain. methodology as shown in figure 1, our defo follows the dual-model architecture of clip, i.e., we use a vision encoder and a language encoder which map the visual inputs and textual inputs into the same latent space. however, in defo, the language encoder plays a different role from that in the zeroshot clip. 
specifically, clip directly constructs hard targets for classification by feeding the language encoder with k textual queries (e.g., “a photo of cat”, “a photo of dog”, . . . ), where k is the number of classes and each query corresponds to a specific one. as explained in section 1, this inference protocol leads to expressive sensitivity and conceptual sensitivity challenges which incurs degradation of accuracy and robustness. figure 1: an architectural comparison between our defo and clip. “v” and “l” denotes vision and language encoder respectively and their weights are fixed. defo leverages sequences of trainable embedding vectors ([vj i ]) as textual input and maps decomposed visual features by a linear layer. in contrast, in defo, we change the existing paradigm of hard-target retrieval while maintaining the vision-language encoder architecture to learn decomposed visual features. specifically, defo aims to utilize the language encoder to construct a projection matrix that maps the visual features from the d-dimensional clip latent space to a new n-dimensional feature space. to this end, we feed the language encoder with n trainable text queries and then perform classification by an additional linear layer. by jointly tuning both the text queries and the classification layer, defo is able to learn textual prompts of detailed visual features and a robust feature mapping for classification. overall, defo has two main benefits compared to clip-like models. first, compared with hardtarget-based inference protocols such as the zero-shot clip and coop (zhou et al., 2021), defo removes the expressive and conceptual sensitivity challenges which significantly improves accuracy and robustness of downstream performance (see table 1 and 4). 
next, compared with linear probing, which discards textual information, the optimization of the projection matrix in defo is bounded by the text encoder, which results in the need for much fewer training samples to achieve good performance (see table 2). moreover, also note that in defo the number of textual queries n is independent of the number of classes k, so the query size is scalable to fit specific downstream tasks. next, we detail defo and compare it to existing methods. dual-model inference as shown in figure 1, defo uses a visual encoder $g_v : \mathbb{R}^{w \times h \times 3} \to \mathbb{R}^d$ and a language encoder $g_l : \mathbb{R}^{m \times d_e} \to \mathbb{R}^d$ to extract image and text representations, respectively. for this, the visual inputs are 3-channel images of shape $w \times h$, and the language inputs are sentences with $m$ words where each word is embedded into a $d_e$-dimensional vector. both the visual and textual features are then mapped into a $d$-dimensional latent space, i.e., we get an image representation vector $f_I \in \mathbb{R}^d$ and $n$ text representation vectors $f_t^1, \ldots, f_t^n \in \mathbb{R}^d$, where $n$ denotes the number of query sentences used for the encoder $g_l$. by applying the dot product between $f_I$ and each of the $f_t^i$ (note that $f_I$ and the $f_t^i$ are $\ell_2$-normalized vectors, i.e., $\|f_I\|_2 = \|f_t^i\|_2 = 1$), we get an $n$-dimensional vector whose $i$-th element measures the similarity between the image and the $i$-th text query. clip and coop directly use this vector to predict the label of the image, because each text query in their settings corresponds to a specific class. formally, clip and coop have $n = k$, where $k$ is the number of classes to be inferred, and the probability of the image belonging to the $i$-th class is computed by $p_i = \frac{\exp(\langle f_I, f_t^i \rangle / \tau)}{\sum_{j=1}^{k} \exp(\langle f_I, f_t^j \rangle / \tau)}$, where $\langle \cdot, \cdot \rangle$ denotes the dot product and $\tau$ is a temperature coefficient. instead, defo decouples the text queries from specific classes.
specifically, we use a scalable number of queries, i.e., the number n is not limited to be equal to k, and perform classification by an additional linear layer that maps the n-dimensional feature vectors to k-dimensional vectors. the probabilities are then computed by the softmax of the k-dimensional vector. note that only this linear classification layer and the textual queries are trainable in defo. we fix the weights of both the text encoder and the image encoder to maintain the vision-language alignment. trainable text embeddings the language encoder gl receives sequences of de-dimensional embedding vectors as input. when natural language is used, each word in the vocabulary first needs to be encoded into a de-dimensional embedding. in defo, we skip the process of designing hand-crafted prompts with natural language. instead, we directly optimize the word embeddings via back-propagation. specifically, we initialize n independent sequences of text embeddings where each sequence consists of m de-dimensional vectors in the form of “[v1] [v2] . . . [vm]”. the total “textual” input of defo can be written as a tensor xl ∈ rn×m×de. note that here we assign the same length m to each query for easy comprehension and implementation. in practice, the design of defo’s input is more flexible and the length of each query is not required to be identical. by optimizing xl, defo makes clip-like vision-language models free from both hand-crafted prompts and annotations such as class names. in this way we address the issues of expressive and conceptual sensitivity caused by using class names as hard targets. comparison to existing methods as illustrated in figure 1, zero-shot clip has no trainable parameters. the textual queries are composed by a hand-crafted prompt and class names that describe the semantic targets of the categories. the linear-probing clip uses only the vision encoder for classification. 
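A minimal sketch of DeFo's forward pass as described above, with the frozen encoders abstracted away as fixed feature vectors (the dimensions and random features below are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def l2n(v):
    """L2-normalize along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

# Illustrative sizes: d = latent dim, n = number of learnable text queries,
# k = number of classes. In DeFo, n need not equal k.
d, n, k = 16, 6, 4
rng = np.random.default_rng(0)

f_image = l2n(rng.normal(size=d))      # stand-in for the frozen vision encoder
f_text = l2n(rng.normal(size=(n, d)))  # stand-ins for the n query embeddings
W = 0.1 * rng.normal(size=(k, n))      # the trainable linear classifier

# 1) project the image into the n-dim decomposed-feature space ...
decomposed = f_text @ f_image          # shape (n,)
# 2) ... then classify with the linear layer and a softmax.
probs = softmax(W @ decomposed)
assert probs.shape == (k,) and np.isclose(probs.sum(), 1.0)
```

Only `f_text` (via the query embeddings) and `W` would be trained; the encoders stay frozen, which is what preserves the pretrained vision-language alignment.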
without the assistance of textual representations, this method has to utilize an additional linear layer to map the visual features from the latent space (d-dimensional) to the output space (k-dimensional), which introduces n = d × k additionally trainable parameters. coop (zhou et al., 2021) mostly follows the architecture of zero-shot clip, yet replaces clip’s hand-crafted prompt by a sequence of trainable text embeddings, with n = k × m × de learnable parameters for class-specific prompts. intuitively, both coop and our defo use trainable text embeddings as inputs of the language encoder. both methods differ in that the number of textual queries n is independent from the number of classes k for defo, and the queries are not composed using class names. therefore, defo has a scalable size of additionally learnable parameters. specifically, it introduces in total n = n × (m × de + k) trainable parameters, which scales linearly with the number of queries n. for example, with n = 256 and m = 16 in imagenet (k = 1000), defo introduces 2.4m learnable parameters while attaining 72.3% accuracy, which outperforms coop (65.6%) who has 8.2m learnable parameters and clip-adapter (63.6%) who has 1m learnable parameters. in addition, compared to linear probing which directly maps the d-dimensional latent features to output logits, defo also uses a linear layer but maps n-dimensional features. in this way, defo is able to first project visual features with the assistance of n textual representation vectors, which provides defo with significantly better few-shot performance and interpretability than linear probing. table 1: test accuracy on imagenet (%). results with † are taken from zhou et al. (2021), and those with ‡ are taken from zhang et al. (2021). our results are marked in gray . the best results are bolded. the results without using text encoder are de-emphasized. 
[table 1 method rows: zero-shot clip (radford et al., 2021); linear-probing clip; prompt ensembling; coop (zhou et al., 2021); cocoop (zhou et al., 2022); target optimization (our ablation); clip-adapter (gao et al., 2021); tip-adapter (zhang et al., 2021); defo (ours). accuracy values not recovered.] experiments experimental setup 4.1.1 baseline models defo is based on clip (radford et al., 2021) for an easy comparison to the other baselines (zhou et al., 2021; 2022; gao et al., 2021). for clip, we mainly explore two inference protocols, zero-shot and linear probing. zero-shot clip requires no extra training data and it infers by directly matching the image representation to the text representation of class names with hand-crafted prompts. linear-probing clip drops the text encoder and instead attaches a randomly initialized linear layer to the image encoder, and then tunes only this linear layer with downstream training data for domain-adapted classification. coop (zhou et al., 2021) and clip-adapter (gao et al., 2021) succeed in improving clip inference performance so they serve as the primary baselines to our defo. to give more comprehensive results, we also compare defo on imagenet to the recent baselines of cocoop (zhou et al., 2022) and tip-adapter (zhang et al., 2021), which are direct extensions of coop (zhou et al., 2021) and clip-adapter (gao et al., 2021). note that we do not expect cocoop and tip-adapter to yield better results than their base models coop and clip-adapter because they are proposed to address a different problem (discussed in section 2). we report the results of tip-adapter without its further fine-tuning (zhang et al., 2021) and all the baselines follow the pre-processing of coop for a fair comparison. further, in this paper, we develop another baseline called “target optimization”, which uses learnable embedding vectors as class names combined with a hand-crafted prompt prefix.
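The parameter counts quoted earlier can be reproduced from the formula N = n × (m × de + k); the text-embedding width de = 512 is an assumed value for the ResNet-50 CLIP text encoder, not stated in this excerpt:

```python
def defo_params(n, m, d_e, k):
    """DeFo trainables: n query sequences of m embeddings of width d_e,
    plus an n -> k linear classification layer."""
    return n * (m * d_e + k)

def coop_params(k, m, d_e):
    """CoOp with class-specific prompts: k prompts of m embeddings of width d_e."""
    return k * m * d_e

# n=256 queries, m=16 tokens, k=1000 ImageNet classes, assumed d_e=512:
print(defo_params(256, 16, 512, 1000))  # 2353152  (~2.4M, as quoted)
print(coop_params(1000, 16, 512))       # 8192000  (~8.2M, as quoted)
```

Since the count scales linearly in n rather than in k, the query budget can be tuned per dataset, which is the scalability point made above.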
target optimization can be regarded as an ablated version of defo, which helps to understand the importance of the learnable embeddings. 4.1.2 datasets we follow prior methods to select 11 publicly available datasets, i.e., imagenet (deng et al., 2009), food101 (bossard et al., 2014), oxfordpets (parkhi et al., 2012), caltech101 (fei-fei et al., 2004), sun397 (xiao et al., 2010), ucf101 (soomro et al., 2012), stanfordcars (krause et al., 2013), fgvcaircraft (maji et al., 2013), dtd (cimpoi et al., 2014), flowers102 (nilsback & zisserman, 2008), and eurosat (helber et al., 2019). the categories in these 11 datasets include natural objects, scenes, human actions, and fine-grained features such as textures and satellite imagery, which cover general semantic targets of visual understanding tasks. for the domain-generalization study, we also evaluate the models on four imagenet-variant datasets, namely, imagenet-v2 (recht et al., 2019), imagenet-adversarial (hendrycks et al., 2021b), imagenet-rendition (hendrycks et al., 2021a), and imagenet-sketch (wang et al., 2019). these four datasets do not have training images and their categories correspond to imagenet (deng et al., 2009). we train on imagenet and test on these variant datasets to evaluate domain-generalization performance. table 2: few-shot accuracy on imagenet (%). n-shot denotes training with n samples per class. †: note that the zero-shot clip uses no training data of imagenet. we put this result in the column "full" for easy comparison. our results are marked in gray. the best results are bolded. [table 2 rows: zero-shot clip; linear prob. clip; coop; cocoop; clip-adapter; tip-adapter; defo (ours); l encoder: ✓ ✗ ✓ ✓ ✓ ✓ ✓] 4.1.3 technical details the experiments are built upon clip pretrained models. during training, the weights of both image and text encoders are frozen. in this paper, we explore both few-shot and full-dataset training.
the few-shot setting follows clip (radford et al., 2021) and coop (zhou et al., 2021), i.e., training with 1, 2, 4, 8, and 16 samples per class that are randomly selected from the training set. by default, we use simple data augmentation of random crop and flip, and train with an sgd optimizer with a minibatch size of 32, 2e-3 learning rate, 0.9 momentum, and 0.01 weight decay (following coop (zhou et al., 2021)) for 50 epochs. for full-dataset training on imagenet, we use a batch size of 256 and a learning rate of 0.01, which yields similar accuracy to the default setting but significantly reduces training time. the number of text queries (n) is naturally fixed to the number of classes (k) for zero-shot clip, coop, and target optimization. we set the length of the learnable prompt to 16 words for coop, and set the length of the learnable class name to two words for target optimization. the query size of defo is scalable in terms of both the length and quantity of text, so we have flexible choices. we empirically find that a larger query size (the number of text queries n) generally yields better predictive performance, in particular for large-scale datasets such as imagenet (deng et al., 2009). for example, with a similar number of text queries, i.e., n = 1000 for clip and n = 1024 for defo, defo outperforms the zero-shot clip by 14.1% (top-1 acc.) on imagenet, while this improvement can be further boosted to 15.0% by using 2048 queries in defo. when training on full imagenet, we use n = 2048 text queries and m = 16 words (following coop (zhou et al., 2021)) to fully exploit its learning capacity. for few-shot training on imagenet, we use a smaller query size, i.e., n = 1024 and m = 4, to prevent over-fitting. for the other 10 datasets, the text length is set to 16, and we find that a smaller number of queries can be sufficient to yield good performance.
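the defo prediction path described above — match the d-dimensional visual feature against n text-query features, then map the n similarities to k classes with a linear layer — can be sketched as follows. this is a minimal illustration, not the authors' implementation: real inputs would come from clip's frozen encoders, and the identity initialization of the head's first k rows follows the few-shot setup the paper describes:

```python
def defo_logits(img_feat, query_feats, head):
    """img_feat: d-dim visual feature; query_feats: n text-query features
    (each d-dim, produced by the text encoder); head: n x k linear layer.
    Returns k class logits from the n query similarities."""
    sims = [sum(a * b for a, b in zip(img_feat, q)) for q in query_feats]
    k = len(head[0])
    return [sum(sims[i] * head[i][j] for i in range(len(head)))
            for j in range(k)]

def identity_init_head(n, k):
    """Head whose first k rows are fixed to the identity (few-shot init);
    the remaining n - k rows start at zero here for illustration."""
    head = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(k)]
    head += [[0.0] * k for _ in range(n - k)]
    return head

W = identity_init_head(3, 2)
logits = defo_logits([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]], W)
```

with the identity rows fixed, the logit of class j reduces to the similarity with the j-th (class-name-initialized) query plus trainable contributions from the extra queries.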
specifically, considering the scale of each dataset, we set n = 1024 for sun397 (xiao et al., 2010), n = 512 for stanfordcars (krause et al., 2013), food101 (bossard et al., 2014), and ucf101 (soomro et al., 2012), n = 256 for caltech101 (fei-fei et al., 2004), flowers102 (nilsback & zisserman, 2008), and fgvcaircraft (maji et al., 2013), and n = 128 for oxfordpets (parkhi et al., 2012), dtd (cimpoi et al., 2014), and eurosat (helber et al., 2019). for coop, we follow its default setup and randomly initialize the trainable text embeddings, as we find that random initialization and manual initialization (e.g., initializing from "a photo of a") yield almost the same performance. when training on full datasets, this also holds for defo and target optimization, so we randomly initialize their parameters as well. for few-shot training of defo, we initialize the first k text queries with the k class names with a random prefix, and fix the corresponding weights (w ∈ r^(k×k)) of the classification layer to an identity matrix. in this way we further reduce the number of trainable parameters and make use of language supervision via the text encoder, which consequently yields robust performance when training data is limited. main results 4.2.1 comparison on imagenet we first compare our defo with the baselines on imagenet under both full-dataset training and few-shot settings. as shown in table 1, by training on the entire imagenet data, our method obtains the highest test accuracy with both resnet and vision transformer backbones. table 3: domain transfer accuracy on imagenet variants (%). our results are marked in gray. [table 3 rows: zero-shot clip ✓; lin-probe clip ✗; coop ✓; defo (ours) ✓; columns include l encoder, imagenet-a, imagenet-r] table 4: average test accuracy (%) on 11 datasets. results with † are taken from (gao et al., 2021). our results are marked in gray. the best results are bolded. the results without using text encoder are de-emphasized. [table 4 rows: zero-shot clip (radford et al., 2021); linear-probing clip; coop (zhou et al., 2021); target optimization; clip-adapter (gao et al., 2021); defo (ours)] notably, with a resnet-50 image encoder, our defo outperforms the zero-shot clip by 15.0%. it is also observed that better prompts (i.e., prompt ensembling and coop) improve accuracy by a relatively small margin. this result demonstrates the issue of expressive sensitivity, i.e., the human-annotated class names cannot define or well describe the semantic information of the images in each category, even if the prompt has been optimized. notably, using a simple prompt but optimizing the class names (target optimization) yields more competitive performance (e.g., 71.4% vs. 65.6%). overall, our defo continues to yield superior performance to the baselines for both full-dataset and few-shot training, as shown in table 2. the linear-probing protocol achieves accuracy close to defo with sufficient training samples (e.g., 72.8% vs. 73.2%). however, its drawback is obvious when training data is limited. typically, as reported in table 2, the linear-probing protocol with one sample per class yields only 23.6% accuracy, which is much lower than that of zero-shot clip (58.2%), coop (59.2%), and our defo (59.4%). 4.2.2 generalized performance we evaluate the domain-transfer performance by 16-shot training on imagenet and testing on imagenet-v2 (recht et al., 2019), imagenet-adversarial (hendrycks et al., 2021b), imagenet-rendition (hendrycks et al., 2021a), and imagenet-sketch (wang et al., 2019). as shown in table 3, compared with the baseline of zero-shot clip, defo attains 6.9% higher accuracy on imagenet-v2. also, defo yields a similar level of transfer performance to zero-shot clip and coop on the other three datasets.
in contrast, the linear-probing protocol incurs significantly degraded performance on imagenet-a, -r, and -s, as it forgoes the assistance of language information. for a wider range of classification tasks, we further evaluate defo on a total of 11 datasets. as shown in table 4, our defo achieves the highest average test accuracy over the 11 benchmarks with different image encoders. a specific comparison to clip and coop on each of the datasets is also provided in figure 2. we note that clip favors common and generic objects such as the images in food101, oxfordpets, and caltech101, for which our defo outperforms clip by less than 10% accuracy, and coop even fails to improve upon clip on food101. however, when it comes to fine-grained feature recognition tasks such as classifying the type of aircraft (maji et al., 2013), clip and coop are shown to be very sensitive to the objects. consequently, defo outperforms clip by 25.4% accuracy and outperforms coop by 11.2% on this dataset. the difference in robustness between clip and defo on the 11 datasets indicates the sensitivity challenge for clip, and indicates that defo successfully addresses this issue by decomposing and then combining the visual features. figure 2: accuracy improvements over zero-shot clip. on all the 11 classification benchmarks, our method outperforms the clip and coop baselines by non-trivial margins. figure 3: interpretation (nearest words) of the learned text embeddings of defo. we highlight the key words and replace the symbols and meaningless words by "[s]". we surprisingly find that our defo is able to learn detailed visual features such as color (a), shape (b), texture (c), and context (d). also, defo is able to directly learn a precise semantic target (f, sunflower is a category of caltech101) or a generalized semantic target (e).
interpretation of text queries one benefit of clip-like models is that they are able to provide interpretable visual predictions, as the visual features are highly aligned with the representations of natural language. a simple way to interpret the learned word embeddings, i.e., the n sequences of m embedding vectors in xl, is searching for the nearest natural words within the vocabulary by measuring their euclidean distance. however, as this approach directly maps the continuous embedding vectors into discrete codes of words, the interpreted sentences do not necessarily "make sense" and may contain meaningless words or symbols, which is also observed in prior work (zhou et al., 2021). nonetheless, we still find very interesting evidence in the interpretation of defo. we observe that some of the interpreted query sentences include meaningful key words that describe specific visual features such as color, shape, and texture. as illustrated in figure 3 (a)-(c), on the caltech-101 dataset (fei-fei et al., 2004), defo learns the words "red", "ring", and "stripe", while the well-matched (based on the cosine similarity in the latent space) images in the dataset look consistent with human understanding of these features. for example, defo matches the word "red" with objects such as a lotus flower and a bird in this color. for the word "ring", we can find ring or circle shapes in the corresponding images. also, defo is able to extract background information such as "snow" (see figure 3 (d)). and surprisingly, defo sometimes directly learns semantic targets that are closely related to the categories of the dataset. for example, it learns the word "dogs", which is a parent category in oxfordpets (parkhi et al., 2012), and the word "sunflowers", which is an exact category in caltech-101 (fei-fei et al., 2004).
although this interpretation approach is not fully rigorous, since the text features learned by defo may lie outside the existing vocabulary, it still provides strong evidence that defo features are meaningful. we hope this result will yield greater insights in a follow-up study on interpretable vision-language inference. ablation study | 8 | [
108.249, 633.9030784, 209.0760807, 643.8656784 ] |
p_jIy5QFB7.pdf | 2,023 | 1 | taking a step back with kcal: multi-class kernel-based calibration for deep neural networks zhen lin1, shubhendu trivedi, jimeng sun1,2 1 department of computer science, university of illinois at urbana-champaign 2 carle illinois college of medicine, university of illinois at urbana-champaign {zhenlin4,jimeng}@illinois.edu shubhendu@csail.mit.edu abstract deep neural network (dnn) classifiers are often overconfident, producing miscalibrated class probabilities. in high-risk applications like healthcare, practitioners require fully calibrated probability predictions for decision-making. that is, conditioned on the prediction vector, every class' probability should be close to the predicted value. most existing calibration methods either lack theoretical guarantees for producing calibrated outputs, reduce classification accuracy in the process, or only calibrate the predicted class. this paper proposes a new kernel-based calibration method called kcal. unlike existing calibration procedures, kcal does not operate directly on the logits or softmax outputs of the dnn. instead, kcal learns a metric space on the penultimate-layer latent embedding and generates predictions using kernel density estimates on a calibration set. we first analyze kcal theoretically, showing that it enjoys a provable full calibration guarantee. then, through extensive experiments across a variety of datasets, we show that kcal consistently outperforms baselines as measured by the calibration error and by proper scoring rules like the brier score. our code is available at https://github.com/zlin7/kcal. introduction the notable successes of deep neural networks (dnns) in complex classification tasks, such as object detection (ouyang & wang, 2013), speech recognition (deng et al., 2013), and medical diagnosis (qiao et al., 2020; biswal et al., 2017), have made them essential ingredients within various critical decision-making pipelines.
in addition to the classification accuracy, a classifier should ideally also generate reliable uncertainty estimates represented in the predicted probability vector. an influential study (guo et al., 2017) reported that modern dnns are often overconfident or miscalibrated, which could lead to severe consequences in high-stakes applications such as healthcare (jiang et al., 2012). calibration is the process of closing the gap between the prediction and the ground-truth distribution given this prediction. for a k-class classification problem, with covariates x ∈ X and the label y ∈ Y = [k], denote our classifier x ↦ Δ^(k−1) as p̂ = [p̂_1, . . . , p̂_k], with Δ^(k−1) being the (k−1)-simplex. then, definition 1 (full calibration (vaicenavicius et al., 2019)): p̂ is fully calibrated if ∀k ∈ [k]: ∀q = [q_1, . . . , q_k] ∈ Δ^(k−1), P{y = k | p̂(x) = q} = q_k. it is worth noting that def. 1 implies nothing about accuracy. in fact, ignoring x and simply predicting π, the class frequency vector, results in a fully calibrated but inaccurate classifier. as a result, our goal is always to improve calibration while maintaining accuracy. another important requirement is that p̂ ∈ Δ^(k−1); many binary calibration methods such as zadrozny & elkan (2001; 2002) produce vectors that are not interpretable as probabilities and have to be normalized. many existing works only consider confidence calibration (guo et al., 2017; zhang et al., 2020; wenger et al., 2020; ma & blaschko, 2021), a much weaker notion than that encapsulated by def. 1, which only calibrates the predicted class (kull et al., 2019; vaicenavicius et al., 2019). definition 2 (confidence calibration): p̂ is confidence-calibrated if ∀q ∈ [0, 1], P{y = argmax_k p̂_k(x) | max_k p̂_k(x) = q} = q. however, confidence calibration is far from sufficient. doctors need to perform differential diagnoses on a patient, where multiple possible diseases should be considered with proper probabilities for all of them, not only the most likely diagnosis.
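the remark that the constant predictor p̂(x) = π is fully calibrated but inaccurate can be checked numerically. this is a toy simulation; the three-class frequency vector below is made up for illustration:

```python
import random
from collections import Counter

random.seed(0)
pi = [0.5, 0.3, 0.2]                       # hypothetical class frequencies
labels = random.choices(range(3), weights=pi, k=20000)

# the constant predictor always outputs pi; conditioned on that (only)
# prediction, the empirical class frequencies match the predicted vector,
# so def. 1 holds up to sampling noise
freq = Counter(labels)
empirical = [freq[k] / len(labels) for k in range(3)]
calib_gap = max(abs(e - p) for e, p in zip(empirical, pi))

# yet its accuracy is only max_k pi_k: the argmax of pi is always class 0
accuracy = freq[0] / len(labels)
```

the gap between predicted and empirical frequencies shrinks with sample size, while the accuracy stays stuck near max_k π_k, illustrating why calibration alone says nothing about usefulness.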
figure 1 shows an example where the confidence is calibrated, but the prediction for important classes like seizure is poorly calibrated. a classifier can be confidence-calibrated but not useful for such tasks if the probabilities assigned to most diseases are inaccurate. figure 1: reliability diagrams for confidence calibration (top) and seizure (bottom). the popular temperature scaling (right) only calibrates the confidence, leaving seizure poorly calibrated. see figure 2 and the appendix for complete reliability diagrams. recent research effort has started to focus on full calibration, for example, in vaicenavicius et al. (2019); kull et al. (2019); widmann et al. (2019). we approach this problem by leveraging the latent neural network embedding in a nonparametric manner. nonparametric methods such as histogram binning (hb) (zadrozny & elkan, 2001) and isotonic regression (ir) (zadrozny & elkan, 2002) are natural for calibration and have become popular. gupta & ramdas (2021) recently showed a calibration guarantee for hb. however, hb usually leads to noticeable drops in accuracy (patel et al., 2021), and ir is prone to overfitting (niculescu-mizil & caruana, 2005). unlike existing methods, we take one step back and train a new low-dimensional metric space on the penultimate-layer embeddings of dnns. then, we use a kernel density estimation-based classifier to predict the class probabilities directly. we refer to our kernel-based calibration method as kcal. unlike most calibration methods, kcal provides high-probability error bounds for full calibration under standard assumptions. empirically, we show that with little overhead, kcal outperforms all existing calibration methods in terms of calibration quality, across multiple tasks and dnn architectures, while maintaining and sometimes improving the classification accuracy. summary of contributions: • we propose kcal, a principled method that calibrates dnns using kernel density estimation on the latent embeddings.
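a reliability diagram of the kind in figure 1 bins predictions by predicted probability and compares the average prediction with the empirical frequency in each bin. a minimal sketch (equal-width binning is a common choice, not necessarily the paper's exact scheme):

```python
def reliability_bins(probs, correct, n_bins=10):
    """probs: predicted probabilities in [0, 1]; correct: 0/1 outcomes.
    Returns (avg_prob, empirical_freq, count) per equal-width bin; a
    calibrated predictor has avg_prob close to empirical_freq in each bin."""
    bins = [[] for _ in range(n_bins)]
    for p, ok in zip(probs, correct):
        idx = min(int(p * n_bins), n_bins - 1)   # clamp p == 1.0 into last bin
        bins[idx].append((p, ok))
    out = []
    for b in bins:
        if b:
            out.append((sum(p for p, _ in b) / len(b),
                        sum(ok for _, ok in b) / len(b),
                        len(b)))
        else:
            out.append((0.0, 0.0, 0))
    return out

bins = reliability_bins([0.95, 0.95, 0.55, 0.55], [1, 1, 1, 0])
```

plotting empirical_freq against avg_prob per bin gives the diagram; deviations from the diagonal are the miscalibration that temperature scaling fixes only for the top class.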
• we present an efficient pipeline to train kcal, including a dimension-reducing projection and a stratified sampling method to facilitate efficient training. • we provide finite sample bounds for the calibration error of kcal-calibrated output under standard assumptions. to the best of our knowledge, this is the first method with a full calibration guarantee, especially for neural networks. • in extensive experiments on multiple datasets and state-of-the-art models, we found that kcal outperforms existing calibration methods in commonly used evaluation metrics. we also show that kcal provides more reliable predictions for important classes in the healthcare datasets. the code to replicate all our experimental results is submitted along with supplementary materials. related work research on calibration originated in the context of meteorology and weather forecasting (see murphy & winkler (1984) for an overview) and has a long history, much older than the field of machine learning (brier, 1950; murphy & winkler, 1977; degroot & fienberg, 1983). we refer to filho et al. (2021) for a holistic overview and focus below on methods proposed in the context of modern neural networks. based on underlying methodological similarities, we cluster them into distinct categories. scaling: a popular family of calibration methods is based on scaling, in which a mapping is learned from the predicted logits to probability vectors. confidence calibration scaling methods include temperature scaling (ts) (guo et al., 2017) and its antecedent platt scaling (platt, 1999), an ensemble of ts (zhang et al., 2020), gaussian-process scaling (wenger et al., 2020), combining a base calibrator (ts) with a rejection option (ma & blaschko, 2021). matrix scaling with regularization was also used to perform full calibration (kull et al., 2019). while some scaling-based methods can be data-efficient, there are no known theoretical guarantees for them to the best of our knowledge. 
binning: another cluster of solutions relies on binning and its variants, and includes uniform-mass binning (zadrozny & elkan, 2001), scaling before binning (kumar et al., 2019), and mutual-information-maximization-based binning (patel et al., 2021). isotonic regression (zadrozny & elkan, 2002) is also often interpreted as binning. uniform-mass binning (zadrozny & elkan, 2001) has a distribution-free finite-sample calibration guarantee (gupta & ramdas, 2021) and asymptotically convergent ece estimation (vaicenavicius et al., 2019). however, in practice, binning tends to decrease accuracy (patel et al., 2021; guo et al., 2017). binning can also be considered a member of the broader nonparametric calibration family of methods. such methods also include gaussian process calibration (wenger et al., 2020), which however also only considers confidence calibration. loss regularization: there are also attempts to train a calibrated dnn to begin with. such methods typically add a suitable regularizer to the loss function (karandikar et al., 2021; mukhoti et al., 2020; kumar et al., 2018), which can sometimes result in expensive optimization and reduction in accuracy. use of kernels: although not directly used for calibration, kernels have also been used for uncertainty quantification in deep learning classification. in classification with rejection, the k-nearest-neighbors algorithm (knn), closely related to kernel-based methods, has been used to provide a "confidence measure" which is used to make a binary decision (i.e., whether to reject or to predict) (papernot & mcdaniel, 2018; jiang et al., 2018). recently, continuous kernels have also been used to measure calibration quality or used as regularization during training (widmann et al., 2019; kumar et al., 2018). zhang et al. (2020) introduced a kernel density estimation (kde) proxy estimator for estimating ece.
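as context for the approach, a kde-based classifier of the kind kcal builds on predicts p̂_k(x) from kernel weights of calibration points of class k around the embedding of x. a minimal sketch with a gaussian kernel on raw embeddings (kcal's learned metric, projection, and training pipeline are omitted; the bandwidth h is a free parameter here):

```python
import math

def kde_class_probs(z, cal_embeds, cal_labels, n_classes, h=1.0):
    """z: query embedding; cal_embeds/cal_labels: calibration set.
    p_hat_k is proportional to sum_{i: y_i = k} exp(-||z - z_i||^2 / (2 h^2)),
    so the output is a valid probability vector by construction."""
    weights = [0.0] * n_classes
    for zi, yi in zip(cal_embeds, cal_labels):
        sq_dist = sum((a - b) ** 2 for a, b in zip(z, zi))
        weights[yi] += math.exp(-sq_dist / (2.0 * h * h))
    total = sum(weights)
    return [w / total for w in weights]

probs = kde_class_probs([0.0, 0.0],
                        [[0.0, 0.1], [10.0, 10.0]],
                        [0, 1], n_classes=2)
```

because the prediction is a normalized sum of kernel weights, p̂ lands on the simplex without any post-hoc normalization, unlike many binary calibration maps.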
however, it uses an unoptimized kernel over Δ^(k−1), and shows that the kde-ece estimator (but not the calibration map) is consistent. to the best of our knowledge, the use of a trained kde to calibrate predictions has not been proposed before. further, we also provide a bound on the calibration error. kcal: kernel-based calibration | 2 | [
108.299, 398.4296768, 321.9741114, 410.3848768 ] |
JtBRnrlOEFN.pdf | 2,022 | 1 | charformer: fast character transformers via gradient-based subword tokenization yi tay∗, vinh q. tran∗, sebastian ruder†, jai gupta, hyung won chung, dara bahri, zhen qin, simon baumgartner, cong yu, donald metzler google research and deepmind† yitay@google.com, vqtran@google.com abstract state-of-the-art models in natural language processing rely on separate rigid subword tokenization algorithms, which limit their generalization ability and adaptation to new settings. in this paper, we propose a new model inductive bias that learns a subword tokenization end-to-end as part of the model. to this end, we introduce a soft gradient-based subword tokenization module (gbst) that automatically learns latent subword representations from characters in a data-driven fashion. concretely, gbst enumerates candidate subword blocks and learns to score them in a position-wise fashion using a block scoring network. we additionally introduce charformer, a deep transformer model that integrates gbst and operates on the byte level. via extensive experiments on english glue, multilingual, and noisy text datasets, we show that charformer outperforms a series of competitive byte-level baselines while generally performing on par and sometimes outperforming subword-based models. additionally, charformer is fast, improving the speed of both vanilla byte-level and subword-level transformers by 28-100% while maintaining competitive quality. we believe this work paves the way for highly performant token-free models that are trained completely end-to-end. introduction neural networks have achieved tremendous success in natural language processing (nlp) by replacing feature-engineered models with stacks of functions that are learned end-to-end from vast amounts of data (mikolov et al., 2013; peters et al., 2018; howard and ruder, 2018).
the single component of the traditional nlp pipeline (manning and schütze, 1999) that has so far resisted gradient-based learning is tokenization, which is commonly applied as a pre-processing step. state-of-the-art pre-trained language models (devlin et al., 2019) generally rely on data-driven subword-based tokenization algorithms (schuster and nakajima, 2012; sennrich et al., 2016; wu et al., 2016; kudo and richardson, 2018), while expert-crafted segmentation algorithms are still common for languages without whitespace separation such as chinese, thai, and korean (cf. lample and conneau, 2019). this reliance on rigid tokenization methods introduces a bottleneck into current nlp systems that limits their capabilities. subword segmentation algorithms split tokens into subwords solely based on frequency, without taking into account lexical or semantic similarity. as a result, models are brittle to rare words (gong et al., 2018) and perturbations, both natural and adversarial (belinkov and bisk, 2018; pruthi et al., 2019; sun et al., 2020). in multilingual models, tokens in low-resource languages are split into many subwords, which impacts performance on those languages and deteriorates cross-lingual transfer (hu et al., 2020; wang et al., 2021). finally, a separate tokenization algorithm leads to a mismatch between the pre-training and downstream distribution of words when adapting pre-trained language models to new settings, which requires significant engineering effort to overcome. the direct application of character-level modelling in pre-trained language models in turn results in severely increased computational and memory complexity due to an increased sequence length and generally lower performance. to address this problem, we propose gradient-based subword tokenization (gbst), a new method that combines the compositionality of character-level representations with the efficiency of subword tokenization while enabling end-to-end learning. (∗equal contribution)
our method learns latent subword representations from characters using large amounts of unlabeled data. specifically, gbst learns a position-wise soft selection over candidate subword blocks by scoring them with a scoring network. in contrast to prior tokenization-free methods (clark et al., 2021), gbst learns interpretable latent subwords, which enables easy inspection of lexical representations, and is more efficient than other byte-based models (xue et al., 2021). given that simply applying a standard transformer to a sequence of characters or bytes is computationally prohibitive, gbst paves the way for usable, practical and highly performant character-level models. a high-level overview of how the gbst module is applied can be found in figure 3 (appendix). we furthermore introduce charformer, a transformer encoder-decoder model that uses gbst to operate directly on the byte level. in addition, we experiment with a re-scaled variant of charformer, which allocates additional capacity to the encoder to make up for the lack of discrete subword embeddings. we evaluate our model on a range of standard and non-standard english, and multilingual downstream tasks. on english glue and long document classification tasks, charformer outperforms strong byte-level baselines and overall achieves performance on par with subword-based models such as bert (devlin et al., 2019) and t5 (raffel et al., 2020). on toxicity detection in social media datasets (borkan et al., 2019; wulczyn et al., 2017), charformer outperforms byte-level baselines as well as subword-based models, demonstrating robustness to spelling variation and non-standard language. finally, a multilingually pre-trained charformer performs on par with or outperforms strong subword-based multilingual baselines on standard cross-lingual datasets. we additionally demonstrate that charformer is more efficient compared to byte-level and subword-based models with similar numbers of parameters.
on a comparable setup, charformer outperforms a baseline similar to the recent state-of-the-art byte-level model byt5 (xue et al., 2021) while being 2× more memory efficient and 10–93% faster. charformer also trains 28% faster than the subword-level mt5 model (xue et al., 2020), has 3× fewer parameters, and achieves comparable quality on well-established benchmarks. finally, we demonstrate via visualization that the latent subwords learned by charformer are interpretable to some extent. charformer this section introduces our efficient character-level architecture, charformer. charformer comprises a gradient-based subword tokenization (gbst) module, followed by deep transformer layers. the input to the gbst module is a sequence of characters or bytes1, which is then downsampled to construct latent subwords. gradient-based subword tokenization (gbst) the input to gbst is a tensor of shape x ∈ R^(L×d), where L is the number of input characters and d is the character embedding dimension. the key idea behind gbst is for the model to learn to perform a latent subword segmentation of the input by selecting the most suitable subword block at every character position. a block is a contiguous span of characters x_{i:i+b} of length b for 1 ≤ i ≤ L − b. 2.1.1 constructing candidate latent subword blocks we first enumerate all possible subword blocks of size b up to a maximum block size M. in order to learn subword block embeddings, we use a non-parameterized strided pooling function f : R^(b×d) → R^d that projects a subword block consisting of a sequence of character embeddings x_{i:i+b} ∈ R^(b×d) to a single subword block representation x_{b,i} ∈ R^d for block size b at position i. we compute subword blocks x_b with a stride s: x_b = [f(x_{i:i+b}); f(x_{(i+s):(i+s)+b}); . . .] 1 we choose bytes rather than characters (unicode code points) as this allows us to use a vocabulary of 256 possible byte values for all settings.
we note that for languages with a latin alphabet, many characters correspond to a single byte. for other languages, each character corresponds to 2–3 bytes in general. for simplicity and to align with prior work, we will generally talk about characters unless stated otherwise. (a) formation of subword blocks to be scored by f_R. offsets and/or pre-gbst convolutions not shown. (b) block scores that have been expanded back to length L. softmax is taken over block scores at each position i to form block weights for constructing latent subword representations. figure 1: illustration of subword block formation and scoring. in practice we set s = b, thus x_b ∈ R^((L/b)×d). the construction of latent subword blocks creates a shorter overall sequence length by downsampling. we construct x_b for b ∈ {1, . . . , M}, as can be seen in figure 1 for M = 4. considering offsets a limitation of a strided implementation is that it is unable to model all possible subword windows. for instance, for the character sequence [a, b, c, d] we would only be able to allocate [a, b] and [c, d] as subword blocks of length b = 2 and would ignore the subword block [b, c]. offsets can be used to model sliding windows of all possible subword blocks. we could enumerate all possible strided blocks by additionally shifting sequences up until the offset s. as this increases computation, we instead propose to first apply a 1d convolution to x prior to enumerating subword blocks. this effectively "smooths" over the subword blocks. we use the variant with 1d convolutions in our main experiments and provide additional ablations in §8.3 of the appendix. considering intra-block positions it is important to preserve the ordering of the characters within the block x_i, x_{i+1}, . . . , x_{i+b}; e.g., the output of f should differ for the blocks abc and bca. for certain choices of f it may be valuable to add a positional embedding (vaswani et al., 2017) to x_{i:i+b} before applying f.
note that this positional embedding would only be for individual blocks, and is not global to the entire input sequence. that is, only positional embedding values for positions 1, . . . , b would be used. however, in practice we apply a 1d convolution before the gbst layer and use the mean-pooling function for f. we find this to be sufficient to distinguish between same-sized blocks with different character orders. 2.1.2 block scoring network in order to allow the model to learn which block to select for every character position, we introduce a block scoring network. the block scoring network is simply a parameterized function f_R(·) that produces a score for each candidate block. given a subword candidate block x_{b,i} ∈ R^d, we compute a score p_{b,i} associated with the block using a simple linear transformation f_R : R^d → R: p_{b,i} = f_R(x_{b,i}). we perform ranking of subword blocks with regard to each character position in the original sequence. at every position i, the model learns to select the most suitable subword block x_{b,i} among all block sizes 1 ≤ b ≤ M. as each sequence of subword blocks x_b is downsampled, we realign the representations of the subword blocks by upsampling each x_b to its original sequence length L. specifically, for a block size of b, we replicate each block representation x_{b,i} b times. we then score each candidate block at each position i using the softmax function: p_i = softmax([p_{1,i}, p_{2,i}, · · · , p_{M,i}]), which computes a relative score of each candidate block at each position, with p_i ∈ R^M. we show the scoring of realigned blocks in figure 1. 2.1.3 forming latent subwords we then sum the representations of all subword blocks x_{b,i} at each position i, multiplied by their learned probability p_{b,i}, to form a latent subword representation x̂_i ∈ R^d: x̂_i = Σ_b p_{b,i} x_{b,i}. intuitively, the model learns an ideal subword block for each position.
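the block enumeration, upsampling, scoring, and soft mixing of §2.1.1–2.1.3 can be sketched end to end. this is a toy plain-python illustration, not the authors' implementation: there are no convolutions or offsets, mean pooling plays the role of f, the linear scoring weights are passed in rather than learned, and the ragged tail of the sequence is handled by clamping the block index:

```python
import math

def mean_pool(vectors):
    # non-parameterized pooling function f: average a span of char embeddings
    d = len(vectors[0])
    return [sum(v[j] for v in vectors) / len(vectors) for j in range(d)]

def gbst(x, M, score_w):
    """x: list of L character embeddings (each a length-d list).
    For each block size b, pool non-overlapping blocks (stride s = b),
    upsample back to length L by replication, score each candidate with a
    linear map f_R, softmax over b at every position, and mix."""
    L = len(x)
    # candidates[b-1][i]: block representation of size b aligned to position i
    candidates = []
    for b in range(1, M + 1):
        blocks = [mean_pool(x[i:i + b]) for i in range(0, L, b)]
        aligned = [blocks[min(i // b, len(blocks) - 1)] for i in range(L)]
        candidates.append(aligned)
    latent = []
    for i in range(L):
        scores = [sum(w * v for w, v in zip(score_w, candidates[b][i]))
                  for b in range(M)]
        mx = max(scores)
        exp = [math.exp(s - mx) for s in scores]
        z = sum(exp)
        p = [e / z for e in exp]                    # p_i over block sizes
        latent.append([sum(p[b] * candidates[b][i][j] for b in range(M))
                       for j in range(len(x[0]))])  # x_hat_i = sum_b p_b x_b,i
    return latent

latent = gbst([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 2.0]],
              M=2, score_w=[0.0, 0.0])
```

with zero scoring weights the softmax is uniform, so each x̂_i is just the average of its size-1 and size-2 candidates, which makes the mixing easy to check by hand.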
in contrast to standard deterministic subword tokenization algorithms, this selection is soft and can thus consider different possible segmentations at every position i. in general, however, this formulation still assumes that subwords are contiguous sequences of characters. while additional context can be considered via the convolutions in §2.1.1, non-concatenative morphology where morphemes are discontinuous may be harder for the method to model.2

2.1.4 position-wise score calibration
in the above approach, the scoring of each position is independent of other positions. we hypothesize that it may be beneficial for block scores at each position to be aware of each other. to this end, we introduce an optional module that enables learning a consensus among block scores by calculating dot products across the scores pi at all positions i ∈ [1, l]. this can be viewed as a form of self-attention across block scores, albeit without any projections for computational efficiency. to learn the new scores ˆp ∈ r^(l×m), we compute ˆp = softmax(p pᵀ)p.

2.1.5 downsampling
after learning a candidate block or mixture of blocks for each position, we use a downsampling function fd : r^(l×d) → r^((l/ds)×d) that downsamples the sequence of latent subwords ˆx = [ˆx1, . . . , ˆxl] to ˜x, reducing its sequence length by a factor of ds. we choose fd to be a non-parameterized mean pooling operation. notably, such simple stride-based pooling removes potential redundancies caused by adjacent positions selecting similar blocks, as the mean pool of two identical block embeddings produces the same outcome. intuitively, as the downsampling operation is fixed, the parameterized components preceding it should learn an optimal subword tokenization given the downsampling.

transformer stack
the remainder of the charformer model is identical to a regular transformer encoder-decoder model. the transformer stack operates on the downsampled latent subwords ˜x instead of subword embeddings.
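the two operations just described, score calibration via ˆp = softmax(p pᵀ)p and non-parameterized mean-pool downsampling, are simple enough to state directly; the sketch below (our own, with hypothetical function names) mirrors them in numpy.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def calibrate_scores(P):
    """Position-wise score calibration: P_hat = softmax(P P^T) P.

    P has shape (L, M): one score per block size at each position.
    The L x L attention-like matrix lets positions agree on block choices.
    """
    return softmax(P @ P.T) @ P

def downsample(X_hat, ds):
    """Non-parameterized mean pooling f_D: (L, d) -> (L/ds, d)."""
    L, d = X_hat.shape
    return X_hat.reshape(L // ds, ds, d).mean(axis=1)
```

note that if adjacent positions select identical block embeddings, their mean pool reproduces that embedding, which is the redundancy-removal property mentioned above.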
re-scaling of the transformer stack
while subword-based models allocate much of their capacity to subword embeddings (up to 71% of all parameters for contemporary multilingual models; chung et al., 2021), the character vocabulary of character-level models is much smaller and thus less expressive. similar to xue et al. (2021), we hypothesize that character-level models require deeper encoder stacks than subword-based models to make up for their smaller embedding capacity. consequently, we explore a scaling variant of charformer that puts more parameters at the encoder at the expense of the decoder, while preferring a deep narrow model over a larger wide model. specifically, we re-configure the base model size to be similar to the t5 small model size, with an expanded 24 layers in the encoder. the resulting charformersbase (scaled base) has 134m parameters, which is about 67% of the parameter footprint of the standard base t5 model (200m parameters; raffel et al., 2020). moreover, this particular charformer model is approximately 50–100% faster than the t5 base model (see §4).3 for the re-scaled variant, we also used the glu variant described in shazeer (2020), which is commonly referred to as the v1.1 variant in the t5 library.

2future work could explicitly seek to model discontinuous morphological processes by considering skip-grams in addition to character n-grams, although this would increase computational costs.
3the benefits of such re-scaling have also been observed for subword-based encoder-decoder neural machine translation models (devlin, 2017; kasai et al., 2021).

a note on comparing character-level and subword-based methods
prior work on efficient methods generally compares models with the same number of parameters (chung et al., 2021).
however, whereas embedding look-up even with large vocabularies in subword-based methods is o(1), re-distributing the subword embedding parameters in character-level models such as byt5 (xue et al., 2021) to dense layers incurs much higher computational costs: a 25% penalty in training speed. we believe that a fair re-scaling of character-level models should aim to match not only the number of parameters but also the compute and inference costs of subword-based models, under the assumption that char/byte-level models will require longer sequences (see §4 for a comparison).

span-based pre-training
our pre-training scheme follows t5 quite closely. we mask n contiguous characters and train to predict them in a sequence-to-sequence architecture following xue et al. (2021). the model optimizes the cross-entropy loss and is trained with teacher forcing.

experiments
we evaluate our method both in english and in a multilingual setting on relevant benchmarks, and compare against state-of-the-art character-level and subword-based methods.

experiments on monolingual english datasets
data to showcase the effectiveness of the proposed method, we evaluate on a diverse set of standard english tasks from glue, covering sentiment classification (sst-2; socher et al., 2013), natural language inference (mnli, qnli; williams et al., 2018; rajpurkar et al., 2016), paraphrase detection (mrpc, qqp; dolan and brockett, 2005), and sentence similarity (cer et al., 2017). in addition, we evaluate on tasks that require dealing with long documents, both for sentiment analysis (imdb; maas et al., 2011) and news classification (agnews; zhang et al., 2015).

baselines
we compare charformer against the following state-of-the-art subword-based models: bert (devlin et al., 2019), an encoder-only pre-trained masked language model; and t5 (raffel et al., 2020), an encoder-decoder model. we also compare against byte-level t5 (xue et al., 2021), a t5 model that is directly applied to bytes.
we additionally evaluate the impact of the downsampling in charformer by comparing it to the downsampling used by the character-level canine (clark et al., 2021) model in our framework. canine downsamples a character sequence using local attention and pooling via strided convolutions. as the original canine uses an encoder-only model and was only trained on multilingual data, we integrate canine-style downsampling into byte-level t5, which we refer to as byte-level t5+lasc (local attention–strided convolution).4 as an ablation for the gbst inductive bias, we compare against byte-level t5+convbase, a convolutional baseline of byte-level t5 with a 1d convolution of filter size 5 placed before the encoder. note that, in the spirit of fair comparison, we compare all baselines and charformer base models at an equal parameterization (size). our scaling experiments are reserved for our sbase models, which are intended to be compared only with subword t5 models, not with unscaled byte-level baselines. finally, we include an sbase scaled version of byte-level t5 for comparison.

setup
we evaluate base and sbase configurations of charformer with 203m and 134m parameters respectively. we compare to base configurations of bert and t5 that have a similar number of parameters. we pre-train all models on the c4 corpus for 1m steps using a batch size of 64 and a sequence length of 1024. all non-subword models use a vocabulary of 256 bytes.5 our pre-training scheme corrupts spans with a mean length of 20 bytes. each model is pre-trained on 16 tpu v3 chips. we pre-train our models with the adafactor optimizer with an inverse square root learning rate. we then fine-tune on each individual task separately using a constant learning rate of 10−3. more details can be found in the appendix.

4compared to canine, byte-level t5+lasc does not operate on unicode codepoints and has a decoder; it thus forgoes character hash embeddings and upsampling procedures, respectively.
5following xue et al. (2021), we discard illegal utf-8 sequences and reuse the final 100 byte ids as sentinel tokens.

table 1: comparison of charformer against other subword and character-level models with different parameter sizes on diverse standard english datasets (columns: stsb, cola, avg, qnli, mrpc, mnli, sst-2, qqp). models: bertbase,subword; t5base,subword; byte-level t5base; byte-level t5+convbase; byte-level t5+lascbase; charformerbase; byte-level t5sbase; charformersbase. (table values not shown.)

table 2: results on comment classification on civil comments and wiki comments; metrics are accuracy and auc-pr. t5 baseline results are from tay et al. (2021). models: t5base,subword; byte-level t5base; byte-level t5+lascbase; charformerbase; charformersbase. (table values not shown.)

table 3: results on text classification on long documents (columns: imdb, news). models: t5base,subword; byte-level t5base; byte-level t5+lascbase; charformerbase; charformersbase. (table values not shown.)

results
for all result tables, we divide the table into three sections: subword baseline(s), un-scaled byte-level baselines, and scaled charformer results. if a section and task combination has more than one model result, we underline the best result. we show results for glue in table 1. charformer outperforms other character-level baselines trained under the same conditions with the same number of parameters across all tasks, while being considerably faster and requiring less compute than t5-style models that are directly applied to bytes or characters (see §4). charformersbase performs even better despite having a smaller number of parameters than the base configuration, demonstrating the usefulness of re-scaling the transformer stack for character-level models. charformersbase is furthermore the only model that performs on par with or even outperforms the standard subword-based models on some tasks in standard english. in table 3 we provide results for text classification of long documents.
here, charformersbase is the only byte-level model to outperform t5base,subword on the imdb classification task, and both charformer models outperform byte- and subword-level baselines on agnews.

experiments on non-standard english datasets
the previous set of experiments demonstrated the ability of charformer to perform well on clean datasets consisting of standard english. however, character-level models are particularly suited to data that is noisy, containing spelling variations, typos, and other non-standard language. data to demonstrate charformer’s ability to perform well on such data, we evaluate on toxicity detection using the civil comments (borkan et al., 2019) and the wikipedia comments (wulczyn et al., 2017) datasets. both are standard benchmarks that require estimating the toxicity of user-generated content. we use the same setup as for the standard english datasets. results we show results in table 2. character-level models outperform the subword-based t5 model on both datasets, demonstrating their suitability for such noisy, user-generated data. charformer performs on par with or outperforms other character-level methods on both datasets across the different model sizes.

multilingual experiments
data to evaluate the effectiveness of character-level models on multilingual data, we evaluate on standard cross-lingual question answering and classification tasks. in particular, we evaluate on the question answering tasks tydiqa-goldp (clark et al., 2020), xquad (artetxe et al., 2020), and mlqa (lewis et al., 2020), as well as the natural language inference task xnli (conneau et al., 2018) and the paraphrase detection task paws-x (yang et al., 2019) from xtreme (hu et al., 2020).
we evaluate on the in-language multi-task setting for tydiqa-goldp (clark et al., 2020), where models are fine-tuned on the combined gold data in all target languages, and the translate-train-all setting, where models are fine-tuned on english training data plus translations in all target languages, for the other datasets. both are the best-performing settings for the respective tasks in hu et al. (2020). in addition, we evaluate on zero-shot cross-lingual transfer from english on xnli and paws-x.

table 4: multilingual comparison of charformer against subword and byte-level models on in-language multi-task (tydiqa-goldp), translate-train multi-task (xquad, mlqa, xnli, paws-x), and cross-lingual zero-shot, i.e., training on english (xnli, paws-x) settings. model sizes are the same as those in table 1. mbert and mt5 baseline results are from xue et al. (2020). models: mbertbase (subword); mt5base (subword); byte-level t5base; byte-level t5+lascbase; charformerbase; charformersbase; charformersbase,longpt. (table values not shown.)

table 5: comparison of pre-training compute metrics (batch size, l, ds, speed in steps/s) for mt5 (subword) versus comparable-quality charformer models on the mc4 dataset; 64 tpuv3 chips were used for this experiment. charformersbase sees the same number of tokens after downsampling as mt5base, while charformersbase,longpt roughly sees the same amount of raw text as mt5base, given that a sentencepiece subword token is about 4.1 bytes on average (xue et al., 2021). charformersbase is 28% faster than mt5base, while using 33% of the flops. models: mt5base (subword); charformersbase; charformersbase,longpt. (table values not shown.)

baselines
we compare to strong multilingual subword-based baselines including multilingual bert (devlin et al., 2019) and multilingual t5 (xue et al., 2020). in addition, we compare to the byte-level models from §3.1, which we pre-train on multilingual data.
setup
we pre-train charformer as well as the byte-level t5 and byte-level t5+lasc baselines on multilingual mc4 common crawl (xue et al., 2020) in 101 languages. base size models were trained for 1m steps using a batch size of 64 and a sequence length of 2048, with the exception of byte-level t5base, which was trained with a sequence length of 1024, as training speed was prohibitively slow (see table 11). charformersbase and charformersbase,longpt (longer pre-training) are trained with larger batch sizes for fair comparison with mt5. in particular, charformersbase pre-trains on the same number of tokens after downsampling as mt5base, while charformersbase,longpt pre-trains on roughly the same amount of raw text as mt5base, given that a sentencepiece subword token is about 4.1 bytes on average (xue et al., 2021); see table 5 for further details. all models were fine-tuned with an input sequence length of 4096 for question-answering tasks and 2048 for inference tasks. score calibration was not used for these experiments, as it did not benefit the model in the multilingual setting. for xnli and paws-x (both translate-train and zero-shot settings), we also observed that performance improved if the gbst layer was not updated during fine-tuning; the reported charformer numbers reflect this configuration. otherwise, all other hyper-parameters and model sizes are unchanged from the english experimental setup.

results
we show in-language multi-task, translate-train, and cross-lingual zero-shot results in table 4. charformersbase is competitive with standard subword-based models, and charformersbase,longpt outperforms subword-based models on tydiqa-goldp (in-language multi-task). additionally, in the translate-train setting charformersbase,longpt is on par with subword models on xquad and mlqa, and close to parity on paws-x. furthermore, charformer outperforms other character-level models in the zero-shot setting.
however, we observe that this setting still remains a challenge for token-free models in general. we hypothesize that model size may be a major factor here. finally, we provide an additional comparison between gbst and lasc at a fixed down-sampling rate in section 8.4 (appendix), showing that gbst significantly outperforms lasc on tydiqa.

table 6: pre-training compute metrics (l, ds, speed in steps/s, peak memory) of models at different input lengths, downsampling rates, and model sizes on the english c4 dataset; 16 tpuv3 chips were used for this experiment. these numbers reflect a batch size of 64. memory refers to per-device peak memory usage on tpuv3 chips. models: t5base (subword); byte-level t5base; byte-level t5+lascbase; charformerbase (two configurations); charformersbase (two configurations). (table values not shown.)

figure 2: visualization of block scores (softmax weights) for every byte position from multilingual charformersbase on an example english input.

speed, memory and parameters
table 6 reports the speed (global training steps per second), parameter sizes, and number of floating point operations (flops) for each forward pass of the models used in our experiments. all experiments were run on 16 tpu-v3 chips, and speed is benchmarked on english c4 pre-training at the 1k input length (l). charformer models are generally more efficient both in terms of speed and flops compared to other character-level models at different parameter sizes. with a low down-sampling rate ds for charformer, byte-level t5+lasc is more efficient due to using a higher down-sampling rate. directly consuming the character sequence with a transformer model is slow and requires a large number of flops, which is exacerbated with longer sequence lengths, where byte-level t5 is more than 2× slower than the fastest charformer. this difference is even larger at longer input sequence lengths, which we report in the appendix.
charformersbase achieves better performance (see §3) with fewer parameters but more flops by using a deep, thin encoder, and is twice as fast as the subword-based model with similar performance, t5base.

visualizing latent subwords
one benefit of charformer compared to other character-level methods is that the subwords it learns are directly interpretable and may give some indication of the behaviour of the underlying model. we visualize the scores the multilingual charformer has learned to assign to subword blocks of different sizes for the string ‘on subword tokenization’ in figure 2. we observe that the model learns to allocate single-character subword blocks predominantly to vowels and whitespace in english. moreover, in english the model allocates larger subword blocks to the beginning and end consonants of a subword. together, we believe this suggests that the model has learned a meaningful segmentation of the input, and that it is able to dynamically mix between byte-level and subword-level features. such behaviour could also parallel the relative importance attributed to consonants for word identification observed during reading in humans (lee et al., 2001; carreiras et al., 2008).

related work
dynatune: dynamic tensor program optimization in deep neural network compilation

minjia zhang∗, menghao li∗, chi wang & mingqin li
microsoft corporation
{minjiaz,t-meli,wang.chi,mingqli}@microsoft.com

abstract
recently, the dl compiler, together with learning to compile, has proven to be a powerful technique for optimizing deep learning models. however, existing methods focus on accelerating the convergence speed of the individual tensor operator rather than the convergence speed of the entire model, which results in long optimization time to obtain a desired latency. in this paper, we present a new method called dynatune, which provides significantly faster convergence speed to optimize a dnn model. in particular, we consider a multi-armed bandit (mab) model for the tensor program optimization problem. we use ucb to handle the decision-making of time-slot-based optimization, and we devise a bayesian belief model that allows predicting the potential performance gain of each operator with uncertainty quantification, which guides the optimization process. we evaluate and compare dynatune with the state-of-the-art dl compiler. the experiment results show that dynatune is 1.2–2.4 times faster to achieve the same optimization quality for a range of models across different hardware architectures.

introduction
the enormous computational intensity of deep neural network (dnn) models has attracted great interest in optimizing their performance. popular deep learning (dl) frameworks such as pytorch (paszke et al., 2019) and tensorflow (abadi et al., 2016) adopt custom optimized kernels such as intel mkl-dnn or nvidia cudnn (chetlur et al., 2014) as back-ends. however, given the increasing complexity of tensor operations in dnns and the volatility of dl algorithms, it calls for developing fast and automated compilation frameworks to handle the unprecedented amount of innovations.
to imitate or even exceed the success of hand-optimized libraries, recent research has developed neural network compilers, such as xla (leary & wang, 2017), glow (rotem et al., 2018), tensor comprehension (vasilache et al., 2018), and tvm (chen et al., 2018a). among them, tvm has shown superior performance improvements using a technique called learning to compile (autotvm) (chen et al., 2018b). autotvm optimizes the code by generating many versions of a tensor operator and chooses the best through a learned cost model and a search over a large space of code transformation choices. while the learning-to-compile approach produces highly optimized code for dnn models, it suffers from excessively long optimization time. as an example, although autotvm is able to demonstrate close to 2× performance improvement over tensorflow on resnet-18, the optimization time can take several hours or even tens of hours (chen et al., 2018b). the long optimization time hinders the turnaround time and even puts the practical utility of the current compiler-based solutions into question. recent works strive to reduce the optimization time by improving the search strategy for the code transformation plan and lowering the hardware measurement cost (ahn et al., 2020; adams et al., 2019). however, these approaches mostly focus on accelerating the convergence speed of optimization at the individual tensor operator level (e.g., conv2d, batched gemm), which does not necessarily solve the issue of slow convergence and long optimization time for the entire model, often containing tens of tensor operators. different from existing methods, we introduce dynatune, a dl code optimization algorithm that minimizes the sum of the execution time of all operators in a model as much as possible and as quickly as possible.

∗both authors contributed equally. order of appearance is random.
specifically, the contributions of our paper consist of: (1) a preliminary analysis that reveals the challenges and opportunities of existing dl code optimization strategies; (2) a time-slot-based optimization scheme, which simultaneously explores different operators and learns in an online manner, allowing it to dynamically switch to optimizing more promising tensor operators; (3) a bayesian belief model that predicts the future performance gains of operators, which helps make better decisions and expedites convergence; and (4) a detailed evaluation of the proposed algorithm with modern dnns (resnet-18, vgg, squeezenet, transformer) on both cpu and gpu. compared with the leading framework, autotvm, dynatune is 1.2–2.4× faster to obtain the same levels of optimization.

background
dl compilation pipeline. a typical dl compiler contains multiple passes to optimize a model trained by popular dl frameworks such as tensorflow (abadi et al., 2016), pytorch (paszke et al., 2019), or mxnet (chen et al., 2015), as shown in fig. 1. in the first pass (box with dotted line), the compiler frontend applies target-independent and white-box target-dependent optimizations that do not include a measure of actual execution time. the target-independent passes perform optimizations such as operator fusion and data layout transformation, and the white-box target-dependent optimizations apply heuristic rules for code transformation based on domain knowledge. recent work such as autotvm (chen et al., 2018b) extends the pipeline with another pass, a black-box target-dependent pass, which uses learning machinery to perform optimizations.

figure 1: compilation pipeline.

black-box target-dependent pass. in this pass, the compiler encodes code transformation decisions as code templates.
a template contains knobs that control various aspects of the optimization (e.g., memory tiling, loop transformations, vectorization) and determine whether the code (1) fully utilizes the internal parallelism within processors, (2) uses the shared memory wisely, and (3) maximizes data locality. due to the large transformation space, the compiler makes use of an auto-tuner (with an optimization algorithm) and real hardware measurements to find the best transformation on the target hardware (e.g., cpu, gpu, arm, or iot devices) (chen et al., 2018b).

challenges and motivations
this section presents several studies that reveal the challenges of existing dl compilation that guided our design in section 4.

challenge 1. existing dl compilation focuses on accelerating the convergence speed of individual tensor operators instead of the entire model, resulting in slow convergence and long optimization time. prior work (chen et al., 2018a;b; vasilache et al., 2018; ahn et al., 2020) optimizes one tensor operator at a time in a predefined order (e.g., in declaration order). however, such an optimization strategy is not always appropriate in practice. for example, there is often an extreme performance difference (e.g., an order of magnitude) between optimized and unoptimized operators. if we optimize operators sequentially, the overall model inference time stays high as long as there are still unoptimized operators. as a result, practitioners may need to wait until all tensor operators have finished optimization to get the desired latency, which results in long optimization time. with active research pushing model sizes to millions or even billions of parameters with a training time of only a few hours or less than one hour (yamazaki et al., 2019; goyal et al., 2017; you et al., 2017; lin et al., 2019; shoeybi et al., 2019; you et al., 2019), it becomes even more important to reduce the inference optimization cost of current solutions.
furthermore, since major players in the industry have adopted many of these dl compilers (wu et al., 2019a;b; lattner et al., 2020; liu et al., 2019), fast convergence is desirable for many users of these pipelines to have better control of the optimization cost and good performance. for example, deployment engineers may want to obtain an optimized model sooner or quickly get a latency upper-bound estimate of a model in development.

figure 2: inproportional optimization gain/cost. figure 3: code transformation space. figure 4: dl optimization curves.

challenge 2. static scheduling has only a limited view of the tensor program and has difficulty taking advantage of the actual optimization behavior. we note that, from an execution point of view, the optimization of tensor operators is independent of each other, so that we may optimize them in any order and even non-consecutively. as a result, dynamic optimization has a big advantage for iterative dl compilation: we can intelligently order the optimization sequence of operators (i.e., scheduling) to significantly accelerate the convergence of the optimization. for example, it would be better to switch to optimizing another operator if we convincingly identify that the other operator has higher potential. that being said, is it realistic to assume that all the information concerning optimizing the operators is available before the optimization even starts, so that we can decide the schedule from the very beginning? our preliminary analysis indicates that the amount of computation of an operator (known a priori) has a very disproportionate impact on the optimization time and latency reduction. fig. 2 shows that although operator 17 of vgg (simonyan & zisserman, 2015) takes the longest time to optimize, it yields the least amount of latency reduction.1 our further investigation shows that the underlying code transformation space is non-linear, as shown in fig. 3.2
as a result, the optimization behavior tends to change over time, which is hard to recognize and predict with static knowledge only.

challenge 3. even with dynamic information, it is not clear how to best extrapolate estimated performance. given the optimization results, there is an incentive to adopt a “predict-then-optimize” paradigm that builds a model to learn the correlation between the optimization cost and the observed optimization performance; the model can then be used to make predictions of potential performance gains. to identify the characteristics of the optimization behavior, we plot 16 optimization curves of best-found gflops (giga floating point operations per second) in fig. 4 to find patterns that can be used for designing a prediction model. we find that most curves (1) roughly follow an increasing curve with a diminishing return, (2) saturate towards an unknown final value, and (3) occasionally exhibit sudden jumps. the curves saturate to an unknown value because the performance cannot exceed the hardware peak gflops, which is 9.7 tflops in our case. the curves have sudden jumps because the code transformation space has change points, as shown in fig. 3. by taking the curve information into account, we believe there is more opportunity to dynamically optimize operators that likely lead to greater performance improvements.

1the orange bar shows the amount of computation of each operator measured as floating-point operations (flops), which can be calculated statically before the optimization starts, as described in molchanov et al. (2017). the “optimization gain” is calculated as the reduction of wall-clock time from each operator after optimization, and the “optimization cost” is calculated as the wall-clock time spent to obtain the optimized latency, both of which are normalized by the total latency reduction and optimization time.
2the figure shows the code transformation space of a conv2d operator in resnet-18.
in this case, the performance of this operator varies based on the tiling size along the input channel and output channel while having other knobs fixed. the knobs control various aspects of the optimization and its performance. a summary of the knobs can be found in ahn et al. (2020).

method
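the abstract above frames operator selection as a multi-armed bandit solved with ucb over time slots. the following is a minimal sketch of that scheduling idea under our own assumptions (a fixed slot budget and a user-supplied `measure_gain` callback); it is not dynatune's actual implementation, which additionally uses a bayesian belief model to extrapolate future gains with uncertainty.

```python
import math

def ucb_schedule(operators, measure_gain, budget, c=2.0):
    """Time-slot-based operator scheduling with UCB.

    operators: list of operator ids; measure_gain(op) spends one optimization
    time slot on `op` and returns the observed latency reduction (the reward).
    Returns a dict of per-operator slot counts.
    """
    n = {op: 0 for op in operators}        # slots spent per operator
    total = {op: 0.0 for op in operators}  # cumulative reward per operator
    for t in range(1, budget + 1):
        def ucb(op):
            if n[op] == 0:
                return float("inf")        # try every operator at least once
            # mean observed gain plus an exploration bonus
            return total[op] / n[op] + c * math.sqrt(math.log(t) / n[op])
        op = max(operators, key=ucb)       # spend this slot on the best arm
        total[op] += measure_gain(op)
        n[op] += 1
    return n
```

with this policy, slots concentrate on the operators whose observed gains are largest, while operators with few measurements keep a large exploration bonus and are revisited occasionally.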
fosr: first-order spectral rewiring for addressing oversquashing in gnns

kedar karhadkar, ucla, kedar@math.ucla.edu
pradeep kr. banerjee, mpi mis, pradeep@mis.mpg.de
guido montúfar, ucla & mpi mis, montufar@math.ucla.edu

abstract
graph neural networks (gnns) are able to leverage the structure of graph data by passing messages along the edges of the graph. while this allows gnns to learn features depending on the graph structure, for certain graph topologies it leads to inefficient information propagation and a problem known as oversquashing. this has recently been linked with the curvature and spectral gap of the graph. on the other hand, adding edges to the message-passing graph can lead to increasingly similar node representations and a problem known as oversmoothing. we propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph based on spectral expansion. we combine this with a relational architecture, which lets the gnn preserve the original graph structure and provably prevents oversmoothing. we find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.

introduction
graph neural networks (gnns) (gori et al., 2005; scarselli et al., 2008) are a broad class of models which process graph-structured data by passing messages between nodes of the graph. due to the versatility of graphs, gnns have been applied to a variety of domains, such as chemistry, social networks, knowledge graphs, and recommendation systems (zhou et al., 2020; wu et al., 2020). gnns broadly follow a message-passing framework, meaning that each layer of the gnn aggregates the representations of a node and its neighbors, and transforms these features into a new representation for that node.
the aggregation function used by the gnn layer is taken to be locally permutation-invariant, since the ordering of the neighbors of a node is arbitrary, and its specific form is a key component of the gnn architecture; varying it gives rise to several common gnn variants (kipf and welling, 2017; veličković et al., 2018; li et al., 2015; hamilton et al., 2017; xu et al., 2019). the output of a gnn can be used for tasks such as graph classification or node classification. although gnns are successful in computing dependencies between nodes of a graph, they have been found to suffer from a limited capacity to capture long-range interactions. for a fixed graph, this is caused by a variety of problems depending on the number of layers in the gnn. since graph convolutions are local operations, a gnn with a small number of layers can only provide a node with information from nodes close to itself. for a gnn with l layers, the receptive field of a node (the set of nodes it receives messages from) is exactly the ball of radius l about the node. for small values of l, this results in “underreaching”, and directly limits which functions the gnn can represent. on a related note, the functions representable by gnns with l layers are limited to those computable by l steps of the weisfeiler-lehman (wl) graph isomorphism test (morris et al., 2019; xu et al., 2019; barceló et al., 2020). on the other hand, increasing the number of layers leads to its own set of problems. in contrast to other architectures that benefit from the expressivity of deeper networks, gnns experience a decrease in accuracy as the number of layers increases (li et al., 2018; chen et al., 2020). this phenomenon has partly been attributed to “oversmoothing”, where repeated graph convolutions eventually render node features indistinguishable (li et al., 2018; oono and suzuki, 2020; cai and wang, 2020; zhao and akoglu, 2020; rong et al., 2020; di giovanni et al., 2022).
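The claim that the receptive field of a node after l layers is exactly the l-hop ball can be checked numerically. A minimal numpy sketch (not from the paper; the function name is ours), which propagates a reachability indicator through the adjacency matrix once per layer:

```python
import numpy as np

def receptive_field(adj, node, num_layers):
    """Indices of nodes whose features can influence `node` after
    `num_layers` rounds of message passing: the l-hop ball around it."""
    n = adj.shape[0]
    reach = np.zeros(n, dtype=int)
    reach[node] = 1                                  # radius-0 ball: the node itself
    for _ in range(num_layers):
        reach = np.minimum(1, reach + adj @ reach)   # expand by one hop
    return set(np.flatnonzero(reach).tolist())

# path graph 0-1-2-3-4: with 2 layers, node 0 only sees nodes within distance 2
A = np.zeros((5, 5), dtype=int)
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1
print(receptive_field(A, 0, 2))  # {0, 1, 2}
```

On a path graph this makes "underreaching" concrete: with 2 layers, information from node 4 can never reach node 0.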
separate from oversmoothing is the problem of “oversquashing” first pointed out by alon and yahav (2021). as the number of layers of a gnn increases, information from (potentially) exponentially growing receptive fields needs to be concurrently propagated at each message-passing step. this leads to a bottleneck that causes oversquashing, when an exponential amount of information is squashed into fixed-size node vectors (alon and yahav, 2021). consequently, for prediction tasks relying on long-range interactions, the gnn can fail. oversquashing usually occurs when there are enough layers in the gnn to reach any node (the receptive fields are large enough), but few enough that the gnn cannot process all of the necessary relations between nodes. hence, for a fixed graph, the problems of underreaching, oversquashing, and oversmoothing occur in three different regimes, depending on the number of layers of the gnn. a common approach to addressing oversquashing is to rewire the input graph, making changes to its edges so that it has fewer structural bottlenecks. a simple approach to rewiring is to make the last layer of the gnn fully adjacent, allowing all nodes to interact with one another (alon and yahav, 2021). alternatively, one can make changes to edges of the input graph, feeding the modified graph into all layers of the gnn (topping et al., 2022; banerjee et al., 2022). the latter approaches can be viewed as optimizing the spectral gap of the input graph for alleviating structural bottlenecks and improving the overall quality of signal propagation across nodes (see figure 1). while these rewiring methods improve the connectivity of the graph, there are drawbacks to making too many modifications to the input. the most obvious problem is that we are losing out on topological information about the original graph. if the structure of the original graph is indeed relevant, adding and removing edges diminishes that benefit to the task.
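The idea of rewiring to optimize the spectral gap can be illustrated with a brute-force sketch. FoSR itself uses a cheaper first-order approximation of the gap change; the version below (our own simplification, not the paper's algorithm) recomputes the gap of the normalized laplacian exactly for every candidate edge and greedily adds the best one:

```python
import numpy as np

def spectral_gap(adj):
    """lambda_2 of the normalized laplacian l = i - d^{-1/2} a d^{-1/2}."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))[1]

def greedy_rewire(adj, num_edges):
    """Repeatedly add the non-edge that maximizes the spectral gap;
    returns the augmented adjacency matrix and the list of added edges."""
    adj = adj.copy()
    added = []
    n = len(adj)
    for _ in range(num_edges):
        best, best_gap = None, -1.0
        for u in range(n):
            for v in range(u + 1, n):
                if adj[u, v]:
                    continue
                adj[u, v] = adj[v, u] = 1       # tentatively add (u, v)
                gap = spectral_gap(adj)
                adj[u, v] = adj[v, u] = 0       # undo
                if gap > best_gap:
                    best, best_gap = (u, v), gap
        u, v = best
        adj[u, v] = adj[v, u] = 1
        added.append(best)
    return adj, added

# a bottlenecked graph: two triangles joined by a single bridge edge
A = np.zeros((6, 6))
for (u, v) in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1
A2, edges = greedy_rewire(A, 1)
print(spectral_gap(A), "->", spectral_gap(A2))
```

In a relational setup as proposed here, the edges in `edges` would be given a distinct relation label rather than merged into the original edge set.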
another issue arises from the smoothing effects of adding edges: if we add too many edges to the input graph, an ordinary gcn will suffer from oversmoothing (li et al., 2018). in other words, if we use this natural approach to rewiring, we experience a trade-off between oversquashing and oversmoothing. this observation, which does not seem to have been pointed out in earlier works, is the main motivation for the approach that we develop in this work. main contributions figure 1: top: schematic showing different rewiring methods, fosr (ours), sdrf (topping et al., 2022), and g-rlef (banerjee et al., 2022) for alleviating structural bottlenecks in the input graph. our method adds new edges that are labeled differently from the existing ones so that the gnn can distinguish them in training. bottom: normalized spectral gap and training accuracy as functions of the number of rewiring iterations for a learning task modeled on the neighborsmatch problem for a path-of-cliques input (for details, see appendix b.1.1). this paper presents a new framework for rewiring a graph to reduce oversquashing in gnns while preventing oversmoothing. here are our main contributions: • we introduce a framework for graph rewiring which can be used with any rewiring method that sequentially adds edges. in contrast to previous approaches that only modify the input graph (e.g., topping et al., 2022; banerjee et al., 2022; bober et al., 2022), our solution gives special labels to the added edges. we then use a relational gnn on this new graph, with the relations corresponding to whether the edge was originally in the input graph or added during the rewiring. this allows us to preserve the input graph topology while using the new edges to improve its connectivity. in theorem 3 we show that this approach also prevents oversmoothing. • we introduce a new rewiring method, fosr (first-order spectral rewiring) aimed at optimizing the spectral gap of the graph input to the gnn (algorithm 1). 
this algorithm computes the first-order change in the spectral gap from adding each edge, and then adds the edge which maximizes this (theorem 4 and proposition 5). • we empirically demonstrate that the proposed method results in faster spectral expansion (a marker of reduced oversquashing) and improved test accuracy against several baselines on several graph classification tasks (see table 1). experiments demonstrate that the relational structure preserving the original input graph significantly boosts test accuracy. related works past approaches to reducing oversquashing have hinged upon choosing a measure of oversquashing, and modifying the edges of the graph to minimize it. topping et al. (2022) argue that negatively curved edges are responsible for oversquashing, drawing on curvature notions from forman (2003) and ollivier (2009). they introduce a rewiring method known as stochastic discrete ricci flow (sdrf), which aims to increase the balanced forman curvature of negatively curved edges by adding new edges. bober et al. (2022) extend this line of investigation by considering the same type of rewiring but using different notions of discrete curvature. banerjee et al. (2022) approach oversquashing from an information-theoretic viewpoint, measuring it in terms of the spectral gap of the graph, and demonstrate empirically that this can increase accuracy for certain graph classification tasks. they propose a rewiring algorithm, greedy random local edge flip (g-rlef), motivated by an expander graph construction employing an effective resistance (lyons and peres, 2017) based edge sampling strategy. the work of alon and yahav (2021) that first pointed out oversquashing also introduced an approach to rewiring, where they made the last gnn layer an expander – the complete graph that allows every pair of nodes to connect to each other. they also experimented with making the last layer partially adjacent (randomly including any potential edge).
this can be thought of as a form of spectral expansion in the final layer, since random graphs have a high spectral gap (friedman, 1991). in contrast to these works, our method gives a practical way of achieving the largest possible increase in the spectral gap with the smallest possible modification of the input graph, in fact preserving the input graph topology via a relational structure. although not as closely related, we find it worthwhile to also point to the following works in this general context. prior to the diagnosis of the oversquashing problem, klicpera et al. (2019) used graph diffusion to rewire the input graph, improving long-range connectivity for the gnn. rewiring can also be performed while training a gnn. arnaiz-rodríguez et al. (2022) use first-order spectral methods to define a loss function depending on the adjacency matrix, allowing a gnn to learn a rewiring that alleviates oversquashing. we should mention that aside from rewiring the input graph, some works pursue different approaches to solve oversquashing, such as creating positional embeddings for the nodes or edges inspired by the transformer architecture (vaswani et al., 2017). the most direct generalization of this approach to graphs is using laplacian embeddings (kreuzer et al., 2021; dwivedi and bresson, 2020). brüel-gabrielsson et al. (2022) combine this with adding neighbors to encode the edges which are the result of multiple hops. preliminaries background on spectral graph theory let g = (v, e, r) be an undirected graph with node set v, |v| = n, edge set e, |e| = m, and relation set r. the set r is a finite set of relation types, and elements (u, v, r) ∈ e consist of a pair of nodes u, v ∈ v together with an associated relation type r ∈ r. when the relation type of an edge is not relevant, we will simply write (u, v) for an edge. for each v ∈ v we define n(v) to consist of all neighbors of v, that is, all u ∈ v such that there exists an edge (u, v) ∈ e.
for each r ∈ r and v ∈ v, we define nr(v) to consist of all neighbors of v of relation type r. the degree dv of a node v ∈ v is the number of neighbors of v. we define the adjacency matrix a = a(g) by aij = 1 if (i, j) ∈ e, and aij = 0 otherwise. let d = d(g) denote the diagonal matrix of degrees given by dii = di. the normalized laplacian l = l(g) is defined as l = i − d^{−1/2} a d^{−1/2}. we will often add self-loops (edges (i, i) for i ∈ v) to the graphs we consider, so we define augmented versions of the above matrices corresponding to graphs with self-loops added. if g is a graph without self-loops, we define its augmented adjacency matrix ã := i + a, its augmented degree matrix d̃ := i + d, and its augmented laplacian l̃ := i − d̃^{−1/2} ã d̃^{−1/2}. we denote the eigenvalues of the normalized laplacian l by 0 = λ1 ≤ λ2 ≤ · · · ≤ λn ≤ 2. let 1 denote the constant function which assumes the value 1 on each node. then d^{1/2} 1 is an eigenfunction of l with eigenvalue 0. the spectral gap of g is λ2 − λ1 = λ2. we say that g has good spectral expansion if it has a large spectral gap. in appendix a, we review the relation between the spectral gap and a related measure of graph expansion, the cheeger constant. background on relational gnns
NECTfffOvn1.pdf | 2,021 | 2 | fidelity-based deep adiabatic scheduling eli ovits & lior wolf tel aviv university abstract adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy to prepare initial state and a final state that represents a solution to a given computational problem. the choice of the interpolation schedule is critical to the performance: if at a certain time point, the evolution is too rapid, the system has a high probability to transfer to a higher energy state, which does not represent a solution to the problem. on the other hand, an evolution that is too slow leads to a loss of computation time and increases the probability of failure due to decoherence. in this work, we train deep neural models to produce optimal schedules that are conditioned on the problem at hand. we consider two types of problem representation: the hamiltonian form, and the quadratic unconstrained binary optimization (qubo) form. a novel loss function that scores schedules according to their approximated success probability is introduced. we benchmark our approach on random qubo problems, grover search, 3-sat, and max-cut problems and show that our approach outperforms, by a sizable margin, the linear schedules as well as alternative approaches that were very recently proposed. introduction many of the algorithms developed for quantum computing employ the quantum circuit model, in which a quantum state involving multiple qubits undergoes a series of invertible transformations. however, an alternative model, called adiabatic quantum computation (aqc) (farhi et al., 2000; mcgeoch, 2014), is used in some of the leading quantum computers, such as those manufactured by d-wave systems (boixo et al., 2014). aqc algorithms can achieve quantum speedups over classical algorithms (albash & lidar, 2018), and are polynomially equivalent to the quantum circuit model (aharonov et al., 2008). 
in aqc, given a computational problem q, e.g., a specific instance of a 3sat problem, a physical system is slowly evolved until a specific quantum state that represents a proper solution is achieved. each aqc run involves three components: 1. an initial hamiltonian hb, chosen such that its ground state (in matrix terms, the eigenvector of hb with minimal eigenvalue) is easy to prepare and there is a large spectral gap. this is typically independent of the specific instance of q. 2. a final hamiltonian hp designed such that its ground state corresponds to the solution of the problem instance q. 3. an adiabatic schedule, which is a strictly increasing function s(t) that maps a point in time 0 ≤ t ≤ tf, where tf is the total computation time, to the entire interval [0, 1] (i.e., s(0) = 0, s(tf) = 1, and s(t1) < s(t2) iff t1 < t2). these three components define a single time-dependent hamiltonian h(t), which can be seen as an algorithm for solving q: h(t) = (1 − s(t)) · hb + s(t) · hp. at the end of the adiabatic calculation, the quantum state is measured. the square of the overlap between the quantum state and the ground state of the final hamiltonian is the fidelity, and represents the probability of success in finding the correct solution. an aqc algorithm that is evolved over an insufficient time period (a schedule that is too fast) will have a low fidelity. finding the optimal schedule, i.e., the one that would lead to a high fidelity and would keep the time complexity of the algorithm minimal, is therefore of great value. however, for most problems, an analytical solution for the optimal schedule does not exist (albash & lidar, 2018). attempts were made to optimize specific aspects of the adiabatic schedule by using iterative methods (zeng et al., 2015) or by direct derivations (susa et al., 2018). performance was evaluated by examining characteristics of the resulting dynamic (e.g.
the minimum energy gap) and no improvement was demonstrated on the full quantum calculation. previous attempts to employ ai for the task of finding the optimal schedule have relied on reinforcement learning (lin et al., 2020; chen et al., 2020). while these methods were able to find schedules that are better than the linear path, they are limited to either learning one path for a family of problems (without considering the specific instance) or to rerunning the aqc of a specific instance q multiple times in order to optimize the schedule. in our work, supervised learning is employed in order to generalize from a training set of problems and their optimal paths to new problem instances. training is done offline and the schedule our neural model outputs is a function of the specific problem instance. the problem instance is encoded in our model either based on the final hamiltonian hp or directly based on the problem. the suggested neural models are tested using several different problem types: grover search problems, 3sat and max-cut problems, and randomized qubo problems. we show that the evolution schedules suggested by our model greatly outperform the naive linear evolution schedule, as well as those schedules provided by the recent rl methods, and allow for much shorter total evolution times. background the goal of the scheduling task is to find a schedule s(t) that maximizes the probability to get the correct answer for instance q, using hb and hp over an adiabatic quantum computer. the solution to q is coded as the lowest energy eigenstate of hp. in order to achieve the solution state with high probability, the system must be evolved “sufficiently slowly”. the adiabatic theorem (roland & cerf, 2002; albash & lidar, 2018; rezakhani et al., 2009) is used to analyze how fast could this evolution be. 
it states that the probability to reach the desired state at the end of the adiabatic calculation is 1 − ε^2 for ε ≪ 1 if max_t |⟨e1(t)| (d/dt) h(t) |e0(t)⟩| / g^2(t) ≤ ε, where the dirac notation (tumulka, 2009) is used (see appendix a for the conventional matrix notation), e0(t) (e1(t)) is the ground state (first excited state) of the time-dependent hamiltonian h(t), i.e., the eigenstate that corresponds to the lowest (2nd lowest) eigenvalue, and g(t) is the time-dependent instantaneous spectral gap between the smallest and second smallest eigenvalues of h(t). let tf be the total calculation time. let s(t) be an evolution schedule, such that s(0) = 0, s(tf) = 1. applying the adiabatic condition for s(t), we get |⟨e1(s(t))| (d/ds) h(s(t)) |e0(s(t))⟩| / g^2(s(t)) · (ds/dt) ≤ ε. we could solve for t(s) by integration to get t(s) = (1/ε) ∫_0^s |⟨e1(s′)| (d/ds′) h(s′) |e0(s′)⟩| / g^2(s′) ds′, (4) and the total required evolution time is tf = t(s = 1). we note that finding a numerical solution for eq. 4 requires calculating the full eigenvalue decomposition of h(s). most-related work two recent contributions use deep learning in order to obtain, for a given tf, a schedule that outperforms the linear schedule. lin et al. (2020) suggest using deep reinforcement learning in order to find an optimal schedule for each specific class of problems (e.g., 3sat problems of a certain size). in contrast, we study the problem of finding schedules for generic problem instances. they train and benchmark their performance by simulating an adiabatic quantum computer, and scoring the computation results for randomly chosen problem instances. their results are generally better than the naive linear schedule, and the solution produced by their neural network is somewhat transferable for larger problem sizes. chen et al. (2020) also use rl to construct, given a tf, a schedule for 3sat problems. the most successful technique suggested is a monte carlo tree search (mcts, silver et al. (2016)), which produces results that significantly outperform the linear schedule.
this technique requires running the adiabatic evolution process many times for each problem, in order to find a successful schedule. an approach inspired by alpha-zero (silver et al., 2018) is used to adapt the generic mcts solution to a specific problem class, while requiring only a few additional rounds of the adiabatic process for each new instance. in our method, we do not require any run given a new problem instance. method we consider two types of deep neural models. the first model is designed to get the problem hamiltonian hp as an input. for an n qubit problem, the problem hamiltonian is generally of size 2^n × 2^n. in this work, we consider problem hamiltonians which are diagonal and can be represented by a vector of size 2^n. this scenario covers both the grover search problem and the 3sat problem we present in sec. 4. the second model is designed to get a quadratic unconstrained binary optimization (qubo) problem as an input. the qubo problem has the following form: x̄ = argmin_x (x^t q x), (6) where x is a vector of binary variables and q ∈ r^{n×n} defines the specific qubo instance. the qubo problem is np-complete, and many types of common problems can be reduced to qubo (glover et al., 2018). the qubo formulation is of special interest in the context of adiabatic quantum computing, since it allows a relatively easy mapping to real quantum annealing devices that do not possess full qubit connectivity (cruz-santos et al., 2019). a qubo problem q can be converted to the hamiltonian form in the following fashion: hp = Σ_i q_ii · (i + σ_z^i)/2 + Σ_{i≠j} q_ij · ((i + σ_z^i)/2) · ((i + σ_z^j)/2), where σ_z^i is the pauli matrix σ_z operating only on qubit i (liboff, 2003). the resulting hp is of size 2^n × 2^n and is diagonal. the prediction target of our models is the desired normalized schedule ŝ(t), which is defined over the range [0, 1] as ŝ(t) = s(t/tf). for the purpose of estimation, it is sampled at 100 points in the interval t = [0, 1].
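Because hp is diagonal in the computational basis, its diagonal can be built directly by evaluating the qubo energy x^T Q x on every bitstring, without constructing Pauli operators. A minimal sketch (the helper name is ours, not the paper's code):

```python
import itertools
import numpy as np

def qubo_to_diagonal_hamiltonian(Q):
    """Diagonal of H_P: the entry for bitstring x is the qubo energy x^T Q x.
    Basis states are ordered by the binary expansion of their index."""
    n = Q.shape[0]
    diag = []
    for bits in itertools.product([0, 1], repeat=n):
        x = np.array(bits, dtype=float)
        diag.append(x @ Q @ x)
    return np.array(diag)

# tiny 2-variable instance; its minimum is attained at x = (0,1) and x = (1,0)
Q = np.array([[-1.0, 2.0],
              [0.0, -1.0]])
d = qubo_to_diagonal_hamiltonian(Q)
print(d)                  # energies of |00>, |01>, |10>, |11>
print(int(np.argmin(d)))  # index of a ground state
```

The argmin of this diagonal is exactly the qubo solution x̄ in eq. 6, which is what makes the ground state of hp encode the answer.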
the representation of this schedule is given as a vector d ∈ [0, 1]^99, which captures the temporal derivative of the schedule. in other words, d is trained to hold the differences between consecutive points on the path, i.e., element i is given by di = ŝ((i + 1)/100) − ŝ(i/100). note that the sum of d is one. universality of the optimal schedule the reason that we work with the normalized schedule is that the optimal evolution schedule does not depend on the choice of tf. as shown next, for every time budget tf, the same normalized schedule would provide the highest fidelity (neglecting decoherence). let s1(t) : [0, tf] → [0, 1] be a suggested evolution schedule which outperforms a different suggested schedule s2(t) for a specific tf = τ1, i.e., it achieves a greater fidelity at the end of the schedule for a specific problem instance q. then, thm. 1 shows that s1(t) outperforms s2(t) for every possible choice of tf for the same problem q. theorem 1. let s1(t) and s2(t) be two monotonically increasing fully differentiable bijective functions from [0, tf = τ1] to [0, 1]. let q be an optimization problem, and assume that s1(t) achieves a greater fidelity than s2(t) at the end of a quantum adiabatic computation for q with total evolution time tf = τ1. then, for any other choice tf = τ2, the scaled schedule s1((τ1/τ2) · t) will achieve a greater fidelity than s2((τ1/τ2) · t) for an adiabatic computation over the same problem q with total evolution time tf = τ2. the proof can be found in appendix b. architecture the model architectures are straightforward and no substantial effort was done to optimize them. the hamiltonian as input model has seven fully connected layers, with decreasing sizes: 4096, 2048, 2048, 1024, 512, and finally the output layer, which, as mentioned, is of size 99. for the qubo model, in which the input is a matrix, a two part architecture was used.
in the first part, five layers of 2d convolution were employed, with kernel size 3 × 3, for 64 kernels. the output from the convolution layers was then flattened to a vector of size 64n^2, and fed to the second part of the network, consisting of five fully connected layers with decreasing dimensions of 2048, 1024, 1024, 512, and finally the output layer of size 99. the output layers of both models are normalized to have a sum of one. for both models, the selu activation function (klambauer et al., 2017) was used for all layers, except the final layer, which used the sigmoid (logistic) function. a fidelity based loss function let |ψ(t)⟩ be the state of the quantum system at time t = s · tf. the fidelity of the qac is given by (farhi et al., 2000) p_success = |⟨e0(s = 1)|ψ(t = tf)⟩|^2, where ⟨eℓ(s = 1)| is the ℓ-th eigenstate of the parameter dependent evolution hamiltonian h(s), such that ⟨e0(s = 1)| is the ground state of the final hamiltonian hp. finding ⟨e0(s = 1)| requires performing an eigenvalue decomposition of hp, which is equivalent to solving the original optimization problem, and is done for the training set. the quantum state |ψ(t)⟩ evolves according to the schrödinger equation i (d/dt)|ψ(t)⟩ = h(t)|ψ(t)⟩. a brute force approach for finding p_success is to numerically solve the schrödinger equation, see appendix c. this full numerical calculation is, however, too intense to be practical. we next develop an approximate method that would be easier to compute and still be physically meaningful. it is based on the adiabatic local evolution speed limit from eq. 3: ds/dt ≤ ε · g^2(s) / |⟨e1(s)| (d/ds) h(s) |e0(s)⟩|. (10) this inequality could be used as a local condition for convergence of any suggested path. we define g_e^2(s) = g^2(s) / |⟨e1(s)| (d/ds) h(s) |e0(s)⟩|. we would like to use the local condition to create a global convergence condition for a full suggested path s(t), 0 ≤ t ≤ tf. to do so, we integrate both sides of eq. 10 over the suggested schedule s.
this integral represents a mean value of the local adiabatic condition, for every point in the suggested schedule: ε ≥ ∫_0^1 (1/g_e^2(s)) · (ds/dt) ds. (12) we note that the integrand is always positive (assuming s(t) is monotonically increasing). recall that the adiabatic theorem ties ε to the fidelity: ε = √(1 − p_success). by defining the right hand side of eq. 12 as our loss function, we ensure that any training process that minimizes eq. 12 will maximize the fidelity. recall that the vector d that the network outputs is a vector of differences; therefore, it approximates the local derivatives of the obtained path. let ŝ* be the optimal normalized path, which we estimate for each training sample. the loss function is, therefore, defined as: l(d, ŝ*) = Σ_i d_i^2 / g_e^2(ŝ*_i). (13) the values of g_e are precomputed along the optimal path ŝ* for efficiency. while the denominator is obtained on points that do not correspond to the estimated path (the cumulative sum of d), the approximation becomes increasingly accurate as the estimated path approaches the optimal one. training data and the training process in order to train the qubo problem model, we produced a training dataset of 10,000 random qubo instances for each problem size: n = 6, 8, 10. the qubo problems were generated by sampling independently, from the normal distribution, each coefficient of the problem matrix q. the entire matrix q was then multiplied by a single random normal variable. we approximated an optimal evolution schedule for each problem, by calculating the full eigenvalue decomposition of ht as described in sec 2. we also calculated the value of g(s(t)) for each problem. for the model that uses the problem hamiltonian as input, we used the same prepared qubo problems, converted to the hamiltonian form. in addition, we added another 500 cases of randomized hamiltonians with randomized values around distinct energy levels.
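The random qubo instances described above (i.i.d. standard-normal coefficients, each matrix rescaled by a single random normal variable) take only a few lines to generate. A sketch under those stated assumptions; the function name and seeding are ours:

```python
import numpy as np

def random_qubo_dataset(num_problems, n, seed=None):
    """Random qubo instances: i.i.d. standard-normal coefficients,
    each matrix multiplied by one global random normal scale."""
    rng = np.random.default_rng(seed)
    problems = []
    for _ in range(num_problems):
        Q = rng.standard_normal((n, n))
        Q = Q * rng.standard_normal()   # single random scale per instance
        problems.append(Q)
    return problems

dataset = random_qubo_dataset(10_000, n=6, seed=0)
print(len(dataset), dataset[0].shape)
```

The global scale factor varies the overall energy scale across instances, which changes the gap profile g(s) the schedule model must adapt to.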
for each hamiltonian, we first randomized an energy level between the following values: 0.5, 1, 1.5 or 2, and then randomized uniformly distributed values around the selected energy level. to each hamiltonian we added a single ground state with energy 0. this type of hamiltonian is not commonly created by the random qubo creation process described above, but is more representative of binary optimization problems, and specifically more closely resembles problem hamiltonians for the grover problem and the 3sat problem, which we later use to benchmark our model performance. we note that the hamiltonians for these specific problems in our test set are nevertheless different from our randomized problem hamiltonians, which highlights the generalization capability of our method. the training was performed using the adam optimizer (kingma & ba, 2014), with batches of size 200. batch normalization (ioffe & szegedy, 2015) was applied during training. a uniform dropout value of 0.1 was employed for all layers during the model training. results as a baseline to the loss l (eq. 13) we use, we employed the mean squared error (mse) loss, for which the model output was compared to the known optimal schedule from the dataset, which was calculated in advance. grover search the grover algorithm is a well-known quantum algorithm that finds, with high probability, the unique input to a black box function that produces a particular output value, using just o(√N) evaluations of the function, where N is the size of the search space. for an n qubit space, the search is over the set {0, 1, ..., 2^n − 1}, making N = 2^n. it is possible to reproduce the grover speedup using an adiabatic formulation, with the following problem hamiltonian: hp = i − |m⟩⟨m|, where |m⟩ is the state that represents the value we search.
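The grover hamiltonians are small enough to build explicitly for a few qubits, and doing so shows why the linear schedule struggles: the instantaneous gap of h(s) = (1 − s)·hb + s·hp collapses to 1/√N at s = 1/2. A numerical sketch (our own illustration, using hb = i − |ψ0⟩⟨ψ0| with ψ0 the uniform superposition):

```python
import numpy as np

def grover_hamiltonians(n_qubits, marked):
    N = 2 ** n_qubits
    psi0 = np.full(N, 1 / np.sqrt(N))       # uniform superposition state
    H_B = np.eye(N) - np.outer(psi0, psi0)  # ground state |psi0>
    H_P = np.eye(N)
    H_P[marked, marked] = 0.0               # I - |m><m|
    return H_B, H_P

def min_gap(H_B, H_P, resolution=201):
    """Smallest instantaneous spectral gap of (1-s) H_B + s H_P over a grid of s."""
    gaps = []
    for s in np.linspace(0.0, 1.0, resolution):
        ev = np.sort(np.linalg.eigvalsh((1 - s) * H_B + s * H_P))
        gaps.append(ev[1] - ev[0])
    return min(gaps)

H_B, H_P = grover_hamiltonians(6, marked=3)
g_min = min_gap(H_B, H_P)
print(round(g_min, 4), round(1 / np.sqrt(2 ** 6), 4))  # both are 0.125
```

Since the adiabatic speed limit scales like g^2(s), a good schedule must slow down sharply around s = 1/2, exactly the behavior of the roland-cerf path discussed next.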
roland & cerf (2002) showed that for this problem, a linear schedule does not produce a quantum speedup over a classical algorithm, but for a specific initial hamiltonian hb = i − |ψ0⟩⟨ψ0|, with ψ0 the maximal superposition state (a sum of the states representing all values from 0 to N − 1), an optimal schedule could be derived analytically to achieve a quadratic speedup. the optimal path is given by ŝ(t) = 1/2 + (1/(2√(N − 1))) · tan((2t − 1) · arctan √(N − 1)), for normalized time t ∈ [0, 1]. in practice, the proposed hb is hard to physically realize, and a simpler initial hamiltonian is used: hb = i − Σ_{i=1}^{n} σ_x^i, where σ_x^i is the pauli matrix σ_x operating only on qubit i (liboff, 2003). we test our model's performance by using the grover problem hamiltonian hp as input for several problem sizes. different grover problems are completely symmetrical, and are identical after a change of variables, so it is sufficient to use a single test case to test our model. we benchmark our model's performance by simulating aqc for multiple values of tf, and calculating the fidelity by measuring the overlap between the quantum state at the end of the adiabatic evolution and the solution state. we also show the convergence pattern for the fidelity (i.e., the overlap with the solution state, measured during the adiabatic evolution) for a single specific tf. for each problem size, we chose a different tf, for which a full convergence (p > 0.95) is achieved with the evolution schedule suggested by our model. we compare several suggested schedules: the path produced by training our model using our novel loss function, the path produced by training our model using the mse loss, the linear path, and a numerically calculated optimal path. we also include the results reported by lin et al. (2020) for the same problem. the results are reported in fig. 1 for n = 6, 10, see the appendix for n = 8. it is evident that our model produces paths that are significantly superior to the linear path, and also outperforms lin et al. (2020).
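The fidelity benchmarks above are obtained by simulating the schrödinger evolution and measuring the overlap with the solution state. A minimal pure-numpy sketch of such a simulation (our own toy single-qubit example with a linear schedule, not the paper's simulator), using a piecewise-constant propagator built from the eigendecomposition of h at each step:

```python
import numpy as np

def step(H, psi, dt):
    """One step of exact evolution under a constant hamiltonian (hbar = 1)."""
    evals, evecs = np.linalg.eigh(H)
    return evecs @ (np.exp(-1j * evals * dt) * (evecs.conj().T @ psi))

def fidelity(H_B, H_P, schedule, t_f, steps=2000):
    """|<e0(s=1)|psi(t_f)>|^2 for H(t) = (1 - s(t)) H_B + s(t) H_P."""
    psi = np.linalg.eigh(H_B)[1][:, 0].astype(complex)  # ground state of H_B
    dt = t_f / steps
    for k in range(steps):
        s = schedule((k + 0.5) / steps)                 # schedule at step midpoint
        psi = step((1 - s) * H_B + s * H_P, psi, dt)
    ground = np.linalg.eigh(H_P)[1][:, 0]               # ground state of H_P
    return abs(ground.conj() @ psi) ** 2

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H_B = np.eye(2) - sx                  # ground state |+>
H_P = np.diag([0.0, 1.0])             # "solution" is basis state |0>
slow = fidelity(H_B, H_P, lambda u: u, t_f=50.0)
fast = fidelity(H_B, H_P, lambda u: u, t_f=0.5)
print(f"slow: {slow:.3f}  fast: {fast:.3f}")
```

Even in this toy system, the evolution that is too fast ends far from the solution state, while the slow evolution converges, mirroring the tf sweeps in the figures.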
the advantage of the new loss function over the mse loss is also clear. recall that for a grover search with a certain n, hp is a diagonal matrix of size 2^n × 2^n. to check whether the model trained on n = 10 generalizes to larger search problems, we view the diagonal of hp for n′ > n as a 1d signal. this signal is smoothed by a uniform averaging mask of size 6 · 2^{n′}/2^n, and subsampled to obtain a diagonal of size 2^n. the results are presented in fig. 2. evidently, the network trained for n = 10 achieves much better results than the linear baseline for sizes n′ = 12, 14, 16. we also trained a network for n = 16. as can be seen in fig. 2(c), this network does achieve better fidelity than the smaller network. we note that no significant changes were made to the network architecture, and the only difference is in the size of the input layer. appendix d presents results for the n = 16 network on n′ = 17, ..., 20. our l-trained model achieves a much better fidelity than the linear schedule and the mse baseline. 3sat in the 3-sat problem, the logical statement consists of m clauses, ci, such that each clause contains a disjunction over three variables out of n binary variables. a solution to the 3sat problem is an assignment for the n variables that satisfies all m clauses. it is possible to construct a problem hamiltonian for each 3sat problem, by taking a sum over all clauses: hp = Σ_{i=1}^{m} (i + σ_z^{f_i})/2, where σ_z^{f_i} is the pauli matrix σ_z operating only on the state |a, b, c⟩, a, b, c ∈ {0, 1}, that represents the assignment which produces a false value for clause i. this hamiltonian counts the number of clauses which are not satisfied by each assignment, and its ground state corresponds to the eigenvalue 0 and represents the solution of the problem, for which all clauses are satisfied. figure 1: results for grover search over (a-c) n=6 or (d-f) n=10 qubits. (a) fidelity for tf = 75, and (d) fidelity for tf = 425.
(b,e) fidelity at time $t_f$ for multiple $t_f$ values. (c,f) suggested schedules. figure 2: fidelity for various $t_f$ values for grover search problems of size n′ larger than the n = 10 for which the network was trained. (a) n′ = 12, (b) n′ = 14, (c) n′ = 16. for n′ = 16, we also present the result obtained by a network trained for solving n = 16. we test our model's performance by randomizing 3sat problems and converting them to hamiltonian form. following chen et al. (2020), we focus on 3sat problems with a single solution and a number of clauses m = 3n. this type of 3sat problem is considered difficult to solve with adiabatic algorithms (žnidarič, 2005). figure 3: fidelity for various $t_f$ values, over random 3sat instances with m = 3n clauses. note the different time scale for each problem size, as larger problems require longer times to converge. (a) n = 8 qubits, (b) n = 11 qubits. for n = 11, we employ the test set of chen et al. (2020) and directly compare with their mcts method. we benchmark our model's performance by simulating the adiabatic computation for multiple values of $t_f$ and calculating the fidelity by measuring the overlap between the quantum state at the end of the adiabatic evolution and the solution state. in addition to the linear path and the paths obtained by training with either l or mse, we also include, for n = 11, the results for the schedules designed by mcts (chen et al., 2020). for this purpose, we used the test data obtained by chen et al. as can be seen in fig. 3, our method outperforms all baselines. note that the mcts method was optimized, for each problem instance and each $t_f$, using tens of aqc runs on the specific test problem, while our method does not run on the test data. as stated in sec. 3.4, the hamiltonian model is trained on 10,000 random qubo problems and 500 random hamiltonian problems.
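the clause-counting hamiltonian used for 3sat is diagonal in the computational basis, so converting a random instance to hamiltonian form amounts to scanning all assignments. a minimal sketch (the clause encoding and function name are our own, not the paper's):

```python
import numpy as np

def sat_diagonal(n, clauses):
    """Diagonal of H_P: entry = number of clauses unsatisfied by that assignment.

    Each clause is a list of (variable_index, negated) literals; a positive
    literal (negated=False) is satisfied when its variable is assigned 1.
    """
    diag = np.zeros(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]  # big-endian assignment
        for clause in clauses:
            # clause unsatisfied iff no literal is satisfied
            if not any(bits[v] != neg for v, neg in clause):
                diag[idx] += 1.0
    return diag

# x0 or x1 or x2: only the all-zeros assignment violates this clause
d = sat_diagonal(3, [[(0, False), (1, False), (2, False)]])
```

the ground-state energy is 0 exactly when the instance is satisfiable, matching the description in the text; single-solution instances (as in chen et al., 2020) have a unique zero on the diagonal.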
in appendix e, we study the performance when the 500 random samples are removed from the training set and when employing fewer training samples. max-cut | 7 | [
108.249, 352.2470784, 181.6335116, 362.2096784 ] |
89GT-S49mGd.pdf | 2,023 | 0 | function-space regularized rényi divergences jeremiah birrell1, yannis pantazis2, paul dupuis3, luc rey-bellet1, markos a. katsoulakis1 1university of massachusetts, amherst, 2foundation for research & technology - hellas, 3brown university, {jbirrell, luc, markos}@umass.edu, pantazis@iacm.forth.gr, paul_dupuis@brown.edu abstract we propose a new family of regularized rényi divergences parametrized not only by the order α but also by a variational function space. these new objects are defined by taking the infimal convolution of the standard rényi divergence with the integral probability metric (ipm) associated with the chosen function space. we derive a novel dual variational representation that can be used to construct numerically tractable divergence estimators. this representation avoids risk-sensitive terms and therefore exhibits lower variance, making it well-behaved when α > 1; this addresses a notable weakness of prior approaches. we prove several properties of these new divergences, showing that they interpolate between the classical rényi divergences and ipms. we also study the α → ∞ limit, which leads to a regularized worst-case-regret and a new variational representation in the classical case. moreover, we show that the proposed regularized rényi divergences inherit features from ipms such as the ability to compare distributions that are not absolutely continuous, e.g., empirical measures and distributions with low-dimensional support. we present numerical results on both synthetic and real datasets, showing the utility of these new divergences in both estimation and gan training applications; in particular, we demonstrate significantly reduced variance and improved training performance. introduction rényi divergence, rényi (1961), is a significant extension of kullback-leibler (kl) divergence for numerous applications; see, e.g., van erven & harremos (2014). the recent neural-based estimators for divergences, belghazi et al.
(2018) along with generative adversarial networks (gans) goodfellow et al. (2014) accelerated the use of divergences in the field of deep learning. the neural-based divergence estimators are feasible through the utilization of variational representation formulas. these formulas are essentially lower bounds (and, occasionally, upper bounds) which are approximated by tractable statistical averages. the estimation of a divergence based on variational formulas is a notoriously difficult problem. challenges include potentially high bias that may require an exponential number of samples mcallester & stratos (2020) or the exponential statistical variance for certain variational estimators song & ermon (2019), rendering divergence estimation both data inefficient and computationally expensive. this is especially prominent for rényi divergences with order larger than 1. indeed, numerical simulations have shown that, unless the distributions p and q are very close to one another, the rényi divergence rα(p ∥q) is almost intractable to estimate when α > 1 due to the high variance of the statistically-approximated risk-sensitive observables birrell et al. (2021), see also the recent analysis in lee & shin (2022). a similar issue has also been observed for the kl divergence, song & ermon (2019). overall, the lack of estimators with low variance for rényi divergences has prevented wide-spread and accessible experimentation with this class of information-theoretic tools, except in very special cases. we hope our results here will provide a suitable set of tools to address this gap in the methodology. one approach to variance reduction is the development of new variational formulas. this direction is especially fruitful for the estimation of mutual information van den oord et al. (2018); cheng et al. (2020). another approach is to regularize the divergence by restricting the function space of the variational formula. 
indeed, instead of directly attacking the variance issue, the function space of the variational formula can be restricted, for instance, by bounding the test functions or, more appropriately, by bounding the derivative of the test functions. the latter regularization leads to lipschitz continuous function spaces which are also foundational to integral probability metrics (ipms) and more specifically to the duality property of the wasserstein metric. in this paper we combine the above two approaches, first deriving a new variational representation of the classical rényi divergences and then regularizing via an infimal convolution as follows: $R^{\Gamma,\mathrm{IC}}_\alpha(P\|Q) := \inf_\eta \{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\}$, where p and q are the probability distributions being compared, the infimum is over the space of probability measures, $R_\alpha$ is the classical rényi divergence, and $W^\Gamma$ is the ipm corresponding to the chosen regularizing function space, γ. the new family of regularized rényi divergences developed here addresses the risk-sensitivity issue inherent in prior approaches. more specifically, our contributions are as follows. • we define a new family of function-space regularized rényi divergences via the infimal convolution operator between the classical rényi divergence and an arbitrary ipm (1). the new regularized rényi divergences inherit their function space from the ipm. for instance, they inherit mass transport properties when one regularizes using the 1-wasserstein metric. • we derive a dual variational representation (11) of the regularized rényi divergences which avoids risk-sensitive terms and can therefore be used to construct lower-variance statistical estimators.
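to make the infimal-convolution definition concrete, here is a toy numerical sketch on a three-point alphabet with the total-variation ipm; the brute-force grid over η is our stand-in for the infimum, and the $1/(\alpha(\alpha-1))$ normalization follows the definition of the classical rényi divergence used in this paper. all names and test distributions are illustrative.

```python
import itertools
import numpy as np

def renyi(p, q, a):
    # classical R_alpha with the 1/(alpha(alpha-1)) normalization of this paper
    return float(np.log(np.sum(p ** a * q ** (1 - a))) / (a * (a - 1)))

def tv_ipm(mu, nu):
    # W^Gamma for Gamma = {c + g : |g| <= 1}: sup_{|g|<=1} sum_i g_i (mu_i - nu_i)
    return float(np.abs(mu - nu).sum())

def ic_renyi(p, q, a, step=0.02):
    # brute-force inf over eta on the interior of the 3-point simplex,
    # with eta = q and eta = p included as explicit candidates
    best = min(renyi(p, q, a), renyi(p, p, a) + tv_ipm(q, p))
    grid = np.arange(step, 1.0, step)
    for x, y in itertools.product(grid, grid):
        if x + y < 1.0 - 1e-12:
            eta = np.array([x, y, 1.0 - x - y])
            best = min(best, renyi(p, eta, a) + tv_ipm(q, eta))
    return best
```

by construction the result sits below both the classical rényi divergence (take η = q) and the ipm (take η = p), which is the interpolation property stated later for the dual formulation.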
• we prove a series of properties for the new object: (a) the divergence property, (b) being bounded by the minimum of the rényi divergence and ipm, thus allowing for the comparison of non-absolutely continuous distributions, (c) limits as α → 1 from both left and right, (d) regimes in which the limiting cases rα(p ∥q) and w γ(q, p ) are recovered. • we propose a rescaled version of the regularized rényi divergences (16) which lead to a new variational formula for the worst-case regret (i.e., α → ∞). this new variational formula does not involve the essential supremum of the density ratio as in the classical definition of worst-case regret, thereby avoiding risk-sensitive terms. • we present a series of illustrative examples and counterexamples that further motivate the proposed definition for the function-space regularized rényi divergences. • we present numerical experiments that show (a) that we can estimate the new divergence for large values of the order α without variance issues and (b) train gans using regularized function spaces. related work. the order of rényi divergence controls the weight put on the tails, with the limiting cases being mode-covering and mode-selection minka (2005). rényi divergence estimation is used in a number of applications, including sajid et al. (2022) (behavioural sciences), mironov (2017) (differential privacy), and li & turner (2016) (variational inference); in the latter the variational formula is an adaptation of the evidence lower bound. rényi divergences have been also applied in the training of gans bhatia et al. (2021) (loss function for binary classification - discrete case) and in pantazis et al. (2022) (continuous case, based on the rényi-donsker-varadhan variational formula in birrell et al. (2021)). rényi divergences with α > 1 are also used in contrastive representation learning, lee & shin (2022), as well as in pac-bayesian bounds, bégin et al. (2016).
in the context of uncertainty quantification and sensitivity analysis, rényi divergences provide confidence bounds for rare events, atar et al. (2015); dupuis et al. (2020), with higher rarity corresponding to larger α. reducing the variance of divergence estimators through control of the function space has recently been proposed. in song & ermon (2019) an explicit bound on the output restricts the divergence values. a systematic theoretical framework on how to regularize through the function space has been developed in dupuis, paul & mao, yixiang (2022); birrell et al. (2022a) for the kl and f-divergences. despite not covering the rényi divergence, the theory in dupuis, paul & mao, yixiang (2022); birrell et al. (2022a), and particularly the infimal-convolution formulation, clearly inspired the current work. however, adapting the infimal-convolution method to the rényi divergence setting requires two new technical innovations: (a) we develop a new low-variance convex-conjugate variational formula for the classical rényi divergence in theorem 2.1 (see also fig. 1), allowing us to apply infimal-convolution tools to develop the new γ-rényi divergences in theorem 3.4. (b) we study the α → ∞ limit of (a) to obtain a new low-variance variational representation of worst-case regret in theorem 2.2 and study its γ-regularization in theorem 4.5. new variational representations of classical rényi divergences the rényi divergence of order α ∈ (0, 1) ∪ (1, ∞) between p and q, denoted $R_\alpha(P\|Q)$, can be defined as follows: let ν be a sigma-finite positive measure with dp = p dν and dq = q dν. then $R_\alpha(P\|Q) := \frac{1}{\alpha(\alpha-1)} \log \int p^\alpha q^{1-\alpha}\, d\nu$ if 0 < α < 1, or if α > 1 and p ≪ q; and $R_\alpha(P\|Q) := \infty$ if α > 1 and p ̸≪ q, where p ≪ q denotes absolute continuity of p with respect to q. there always exists such a ν (e.g., ν = p + q) and one can show that the definition (2) does not depend on the choice of ν.
the rα provide a notion of ‘distance’ between p and q in that they satisfy the divergence property, i.e., they are non-negative and equal zero iff q = p. the limit of rα as α approaches 1 or 0 equals the kl or reverse kl divergence, respectively, van erven & harremos (2014). an alternative representation of rα, the so-called rényi-donsker-varadhan variational formula, was derived from (2) in birrell et al. (2021): $R_\alpha(P\|Q) = \sup_{\phi\in\mathcal{M}_b(\Omega)}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}dQ\right\}$, p, q ∈ p(ω). here (ω, m) denotes a measurable space, mb(ω) the space of bounded measurable real-valued functions on ω, and p(ω) is the space of probability measures on ω. by a change-of-variables argument this can be transformed into the following new variational representation; see theorem a.2 in appendix a for a proof. we call it the convex-conjugate rényi variational formula (cc-rényi). theorem 2.1 (convex-conjugate rényi variational formula). let p, q ∈ p(ω) and α ∈ (0, 1) ∪ (1, ∞). then $R_\alpha(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int |g|^{(\alpha-1)/\alpha}dP\right\} + \frac{1+\log\alpha}{\alpha}$. if (ω, m) is a metric space with the borel σ-algebra then (4) holds with mb(ω) replaced by cb(ω), the space of bounded continuous real-valued functions on ω. the representation (4) is of convex-conjugate type, which will be key in our development of function-space regularized rényi divergences. it is also of independent interest as it avoids risk-sensitive terms, unlike (3), which contains cumulant-generating functions. this makes (4) better behaved in estimation problems, especially when α > 1; see the example in section 6.1 below. we also obtain a new variational formula for worst-case regret, as defined by van erven & harremos (2014): $D_\infty(P\|Q) := \lim_{\alpha\to\infty} \alpha R_\alpha(P\|Q) = \log \operatorname{ess\,sup}_P \frac{dP}{dQ}$ if p ≪ q, and $D_\infty(P\|Q) := \infty$ if p ̸≪ q. in contrast to (5), which requires estimation of the likelihood ratio, the new variational formula (6) below avoids risk-sensitive terms. theorem 2.2 (worst-case regret variational formula). let p, q ∈ p(ω).
then $D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int |g|\,dP\right\} + 1$. if ω is a metric space with the borel σ-algebra then (6) holds with mb(ω) replaced by cb(ω). see theorem a.5 in appendix a for a proof. equation (6) is a new result of independent interest and will also be useful in our study of the α → ∞ limit of the function-space regularized rényi divergences that we define in the next section. remark 2.3. alternative variational formulas for d∞ on a finite alphabet were derived in kurri et al. (2022). primal and dual formulations of the infimal-convolution γ-rényi divergences we are now ready to define the function-space regularized rényi divergences and derive their key properties. in this section, x will denote a compact metric space, p(x) will denote the set of borel probability measures on x, and c(x) will denote the space of continuous real-valued functions on x. we equip c(x) with the supremum norm and recall that the dual space of c(x) is c(x)∗ = m(x), the space of finite signed borel measures on x (see the riesz representation theorem, e.g., theorem 7.17 in folland (2013)). definition 3.1. given a test-function space γ ⊂ c(x), we define the infimal-convolution γ-rényi divergence (i.e., ic-γ-rényi divergence) between p, q ∈ p(x) by $R^{\Gamma,\mathrm{IC}}_\alpha(P\|Q) := \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\}$, where $W^\Gamma$ denotes the γ-ipm $W^\Gamma(\mu,\nu) := \sup_{g\in\Gamma}\left\{\int g\,d\mu - \int g\,d\nu\right\}$, µ, ν ∈ m(x). remark 3.2. the classical rényi divergence is convex in its second argument but not in its first when α > 1 van erven & harremos (2014). this is the motivation for defining the ic-γ-rényi divergences via an infimal convolution in the second argument of $R_\alpha$; convex analysis tools will be critical in deriving properties of $R^{\Gamma,\mathrm{IC}}_\alpha$ below. for α ∈ (0, 1) one can use the identity $R_\alpha(P\|Q) = R_{1-\alpha}(Q\|P)$ to rewrite (7) as an infimal convolution in the first argument. the definition (7) can be thought of as a regularization of the classical rényi divergence using the γ-ipm.
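on a finite alphabet, the risk-insensitive suprema in theorems 2.1 and 2.2 can be sanity-checked numerically by evaluating them at closed-form (near-)optimizers obtained from a scaling argument. the sketch below is our own reconstruction-based check, not code from the paper; the additive constants and the optimizer $g^* = -\frac{1}{\alpha}(p/q)^\alpha / E_Q[(p/q)^\alpha]$ are assumptions derived from that argument.

```python
import numpy as np

def renyi(p, q, a):
    # classical R_alpha with the 1/(alpha(alpha-1)) normalization
    return np.log(np.sum(p ** a * q ** (1 - a))) / (a * (a - 1))

def renyi_cc(p, q, a):
    # evaluate sup_{g<0} { E_Q[g] + (1/(a-1)) log E_P[|g|^{(a-1)/a}] } + (1+log a)/a
    # at the candidate optimizer g* = -(1/a) (p/q)^a / E_Q[(p/q)^a]
    r = (p / q) ** a
    g = -(r / np.sum(q * r)) / a
    obj = np.sum(q * g) + np.log(np.sum(p * np.abs(g) ** ((a - 1) / a))) / (a - 1)
    return obj + (1 + np.log(a)) / a

def regret_cc(p, q, k=200.0):
    # D_inf = sup_{g<0} { E_Q[g] + log E_P[|g|] } + 1; approximate the optimizer,
    # which concentrates on the ess-sup of dP/dQ, with g ~ -(p/q)^k for large k
    r = (p / q) ** k
    g = -(r / np.sum(q * r))
    return np.sum(q * g) + np.log(np.sum(p * np.abs(g))) + 1.0
```

for any discrete p, q with full support, `renyi_cc` matches `renyi` to machine precision, and `regret_cc` converges to the log of the maximum likelihood ratio, illustrating why these representations avoid cumulant-generating (risk-sensitive) terms.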
for computational purposes it is significantly more efficient to have a dual formulation, i.e., a representation of $R^{\Gamma,\mathrm{IC}}_\alpha$ in terms of a supremum over a function space. to derive such a representation we begin with the variational formula for $R_\alpha$ from theorem 2.1. if we define the convex mapping $\Lambda^P_\alpha : C(X) \to (-\infty,\infty]$, $\Lambda^P_\alpha[g] := -\frac{1}{\alpha-1}\log\int |g|^{(\alpha-1)/\alpha}dP - \frac{1+\log\alpha}{\alpha}$ for g < 0 and $\Lambda^P_\alpha[g] := \infty$ otherwise, then (4) from theorem 2.1 can be written as the convex conjugate $R_\alpha(P\|Q) = (\Lambda^P_\alpha)^*[Q] := \sup_{g\in C(X)}\left\{\int g\,dQ - \Lambda^P_\alpha[g]\right\}$. one can then use fenchel-rockafellar duality to derive a dual formulation of the ic-γ-rényi divergences. to apply this theory we will need to work with spaces of test functions that satisfy the following admissibility properties. these properties are similar to those used in the construction of regularized kl and f-divergences in dupuis, paul & mao, yixiang (2022) and birrell et al. (2022a). definition 3.3. we will call γ ⊂ c(x) admissible if it is convex and contains the constant functions. we will call an admissible γ strictly admissible if there exists a p(x)-determining set ψ ⊂ c(x) such that for all ψ ∈ ψ there exist c ∈ r, ϵ > 0 such that c ± ϵψ ∈ γ. recall that ψ being p(x)-determining means that for all q, p ∈ p(x), if $\int \psi\,dQ = \int \psi\,dP$ for all ψ ∈ ψ then q = p. putting the above pieces together one obtains the following variational representation. theorem 3.4. let γ ⊂ c(x) be admissible, p, q ∈ p(x), and α ∈ (0, 1) ∪ (1, ∞). then: 1. $R^{\Gamma,\mathrm{IC}}_\alpha(P\|Q) = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int |g|^{(\alpha-1)/\alpha}dP\right\} + \frac{1+\log\alpha}{\alpha}$. 2. if (11) is finite then there exists η∗ ∈ p(x) such that $R^{\Gamma,\mathrm{IC}}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} = R_\alpha(P\|\eta^*) + W^\Gamma(Q,\eta^*)$. 3. $R^{\Gamma,\mathrm{IC}}_\alpha(P\|Q) \le \min\{R_\alpha(P\|Q),\, W^\Gamma(Q,P)\}$. 4. if γ is strictly admissible then $R^{\Gamma,\mathrm{IC}}_\alpha$ has the divergence property. see theorem b.3 in appendix b for detailed proofs of these results as well as several additional properties. we note that there are alternative strategies for proving the variational formula (11) which make different assumptions; further comments on this can be found in remark b.4.
important examples of strictly admissible γ include the following: 1. γ = c(x), which leads to the classical rényi divergences. 2. γ = lip1(x), i.e., all 1-lipschitz functions; this regularizes the rényi divergences via the wasserstein metric. 3. γ = {c + g : c ∈ r, g ∈ c(x), |g| ≤ 1}; this regularizes the rényi divergences via the total-variation metric. 4. γ = {c + g : c ∈ r, g ∈ lip1(x), |g| ≤ 1}; this regularizes the rényi divergences via the dudley metric. 5. γ = {c + g : c ∈ r, g ∈ v : ∥g∥v ≤ 1}, the unit ball in an rkhs v ⊂ c(x); this regularizes the rényi divergences via mmd. in practice, uniform bounds can be implemented using an appropriately chosen final nn layer. lipschitz bounds can be implemented using spectral normalization of neural networks miyato et al. (2018), or using a soft gradient penalty gulrajani et al. (2017). the function space γ for structure-preserving gans discussed in the appendix is implemented using equivariant neural networks, birrell et al. (2022b). if γ is a ball in an rkhs the implementation is carried out using the same tools used in, e.g., mmd distances and divergences, gretton et al. (2012); glaser et al. (2021). the ic-γ-rényi divergences also satisfy a data processing inequality. see theorem b.8 in appendix b for a proof as well as details regarding the notation. theorem 3.5 (data processing inequality). let α ∈ (0, 1) ∪ (1, ∞), q, p ∈ p(x), and let k be a probability kernel from x to y such that k[g] ∈ c(x) for all g ∈ c(y). if γ ⊂ c(y) is admissible then $R^{\Gamma,\mathrm{IC}}_\alpha(K[P]\|K[Q]) \le R^{K[\Gamma],\mathrm{IC}}_\alpha(P\|Q)$. if γ ⊂ c(x × y) is admissible then $R^{\Gamma,\mathrm{IC}}_\alpha(P\otimes K\|Q\otimes K) \le R^{K[\Gamma],\mathrm{IC}}_\alpha(P\|Q)$. if k[γ] is strictly contained in γ then the bounds in theorem 3.5 can be strictly tighter than the classical data processing inequality van erven & harremos (2014). data-processing inequalities are important for constructing symmetry-preserving gans; see birrell et al. (2022b) and section d.1.
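for intuition on how the choice of γ shapes $W^\Gamma$, the 1-lipschitz case (example 2) has a simple closed form for distributions supported on a sorted one-dimensional grid, via the difference of cumulative distribution functions. a minimal sketch (the grid, distributions, and helper name are illustrative, not from the paper):

```python
import numpy as np

def w1_on_grid(x, mu, nu):
    """W^Gamma for Gamma = Lip_1 on a sorted 1-d grid x:
    sup_{g 1-Lipschitz} sum_i g(x_i)(mu_i - nu_i) = integral |CDF_mu - CDF_nu|."""
    cdf_gap = np.cumsum(mu - nu)[:-1]          # CDF difference at each interior cut
    return float(np.sum(np.abs(cdf_gap) * np.diff(x)))
```

unlike the classical rényi divergence, this quantity stays finite for mutually singular distributions (e.g., point masses at distinct grid points), which is the feature the ic-γ-rényi divergences inherit from the ipm.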
limits, interpolations, and regularized worst-case regret next we use theorem 3.4 to compute various limits of the ic-γ-rényi divergences. first we show that they interpolate between rα and w γ in the following sense (see theorem b.5 for a proof). theorem 4.1. let γ ⊂ c(x) be admissible, p, q ∈ p(x), and α ∈ (0, 1) ∪ (1, ∞). | 4 | [
107.671, 238.5868166, 504.3615730076, 273.9108556 ] |
-qh0M9XWxnv.pdf | 2,021 | 0 | analyzing the expressive power of graph neural networks in a spectral perspective muhammet balcilar∗, guillaume renton, pierre héroux, benoit gaüzère, sébastien adam, paul honeine normandy university, litis lab, university of rouen normandy, insa rouen normandie rouen, 76000, france abstract in the recent literature of graph neural networks (gnn), the expressive power of models has been studied through their capability to distinguish whether two given graphs are isomorphic or not. since the graph isomorphism problem is np-intermediate, and the weisfeiler-lehman (wl) test can give sufficient but not conclusive evidence in polynomial time, the theoretical power of gnns is usually evaluated by the equivalence of wl-test order, followed by an empirical analysis of the models on some reference inductive and transductive datasets. however, such analysis does not account for the signal processing pipeline, whose capability is generally evaluated in the spectral domain. in this paper, we argue that a spectral analysis of gnns' behavior can provide a complementary point of view to go one step further in the understanding of gnns. by bridging the gap between the spectral and spatial design of graph convolutions, we theoretically demonstrate some equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. using this connection, we managed to re-formulate most of the state-of-the-art graph neural networks into one common framework. this general framework allows us to conduct a spectral analysis of the most popular gnns, explaining their performance and showing their limits from a spectral point of view. our theoretical spectral analysis is confirmed by experiments on various graph databases. furthermore, we demonstrate the necessity of high- and/or band-pass filters on a graph dataset, while the majority of gnns are limited to only low-pass filters and inevitably fail.
code available at https://github.com/balcilar/gnn-spectral-expressive-power. introduction over the last five years, many graph neural networks (gnns) have been proposed in the literature of geometric deep learning (veličković et al., 2018; gilmer et al., 2017; bronstein et al., 2017; battaglia et al., 2018), in order to generalize the very efficient deep learning paradigm to the world of graphs. this large number of contributions explains a new challenge recently tackled by the community, which consists in assessing the expressive power of gnns. in this area of research, there is a consensus to evaluate the theoretical expressive power of gnns according to equivalence of weisfeiler-lehman (wl) test order (morris et al., 2019; xu et al., 2019; maron et al., 2019b;a). hence, gnn models are frequently classified as ”as powerful as 1-wl”, ”as powerful as 2-wl”, . . . , ”as powerful as k-wl”. however, this perspective cannot differentiate between two methods if they are as powerful as the same wl test order. moreover, it does not always explain the success or failure of a given gnn on common benchmark datasets. in this paper, we claim that analyzing gnns theoretically and experimentally from a spectral point of view can bring a new perspective on their expressive power. so far, gnns have generally been studied separately as spectral based or as spatial based (wu et al., 2019b; chami et al., 2020). to the best of our knowledge, message passing neural networks (mpnns) (gilmer et al., 2017) and graphnets (battaglia et al., 2018) are the only attempts to merge both approaches in the same framework. (∗muhammetbalcilar@gmail.com) however, these models are not able to generalize custom designed spectral filters, nor the effect of each convolution support in a multi-convolution case. the spatial-spectral connection is also mentioned indirectly in several cornerstone studies by defferrard et al. (2016); kipf & welling (2017); levie et al. (2019).
since spectral-spatial interchangeability was missing, they did not show the spectral behavior of any graph convolution. recent studies have also attempted to show, for a limited number of spatial gnns, that they act as low-pass filters (nt & maehara, 2019; wu et al., 2019a). nt & maehara (2019) concluded that using adjacency induces low-pass effects, while wu et al. (2019a) studied a single spatial gnn's spectral behavior by assuming that adding self-connections changes the given topology of the graph. in this paper, we bridge the gap between spectral and spatial domains for gnns. our first contribution consists in demonstrating the equivalence of convolution processes regardless of whether they are defined as spatial or as spectral gnns. using this connection, we propose a new general framework and taxonomy for gnns as the second contribution. taking advantage of this equivalence, our third contribution is to provide a spectral analysis of any gnn model. this spectral analysis is another perspective for the analysis of the expressive power of gnns. our theoretical spectral analysis is confirmed by experiments on various well-known graph datasets. furthermore, we show the necessity of high- and/or band-pass filters in our experiments, while the majority of gnns are limited to only low-pass filters and thus inevitably fail when dealing with these problems. the code used in this paper is available at https://github.com/balcilar/gnn-spectral-expressive-power. the remainder of this paper is organized as follows. section 2 introduces convolutional gnns and presents existing approaches. in section 3 and section 4, we describe the main contributions mentioned above. section 5 presents a series of experiments and results which validate our propositions. finally, section 6 concludes this paper. problem statement and state of the art let g be a graph with n nodes and an arbitrary number of edges.
connectivity is given by the adjacency matrix $A \in \{0,1\}^{n\times n}$ and features are defined on nodes by $X \in \mathbb{R}^{n\times f_0}$, with $f_0$ the length of the feature vectors. for any matrix x, we use $X_i$, $X_{:j}$ and $X_{i,j}$ to refer to its i-th column vector, j-th row vector and the scalar value at location (i, j), respectively. a graph laplacian is l = d − a (or $L = I - D^{-1/2}AD^{-1/2}$), where $D \in \mathbb{R}^{n\times n}$ is the diagonal degree matrix and i is the identity. through eigendecomposition, l can be written as $L = U \operatorname{diag}(\lambda) U^\top$, where each column of $U \in \mathbb{R}^{n\times n}$ is an eigenvector of l, $\lambda \in \mathbb{R}^n$ gathers the eigenvalues of l, and diag(·) creates a diagonal matrix whose diagonal elements come from the given vector. we use superscripts to refer to variables of the same kind; for instance, $H^{(l)} \in \mathbb{R}^{n\times f_l}$ refers to the node representation at layer l, whose feature dimension is $f_l$. a graph convolution layer takes the node representation of the previous layer $H^{(l-1)}$ as input and produces a new representation $H^{(l)}$, with $H^{(0)} = X$. spectral approaches spectral gnns rely on spectral graph theory (chung, 1997). in this framework, signals on graphs are filtered using the eigendecomposition of the graph laplacian (shuman et al., 2013). by transposing the convolution theorem to graphs, spectral filtering in the frequency domain can be defined by $x_{flt} = U \operatorname{diag}(\phi(\lambda)) U^\top x$, where φ(·) is the desired filter function. as a consequence, a graph convolution layer in the spectral domain can be written as a sum of filtered signals followed by an activation function as in (bruna et al., 2013), namely $H^{(l+1)}_j = \sigma\left(\sum_{i=1}^{f_l} U \operatorname{diag}(F^{(l,j)}_i) U^\top H^{(l)}_i\right)$ for j ∈ {1, . . . , $f_{l+1}$}. here, σ is the activation function and $F^{(l,j)} \in \mathbb{R}^{n\times f_l}$ gathers the corresponding weight vectors to be tuned, as used in (henaff et al., 2015) for the single-graph problem, known as the non-parametric spectral gnn. a first drawback is the necessity of fourier and inverse fourier transforms by matrix multiplication with u and $U^\top$.
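the filtering operation $x_{flt} = U\operatorname{diag}(\phi(\lambda))U^\top x$ is easy to sketch on a toy graph; the 4-node path graph and the ideal low-pass filter below are illustrative choices, not from the paper.

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node path graph (toy example)
L = np.diag(A.sum(1)) - A                    # unnormalized laplacian L = D - A
lam, U = np.linalg.eigh(L)                   # eigenpairs, eigenvalues ascending

def spectral_filter(x, phi):
    # x_flt = U diag(phi(lambda)) U^T x
    return U @ (phi(lam) * (U.T @ x))

x = np.array([1.0, -1.0, 1.0, -1.0])                          # high-frequency signal
low = spectral_filter(x, lambda l: (l < 0.5).astype(float))   # ideal low-pass
```

a low-pass φ suppresses the alternating (high-frequency) signal while leaving a constant signal, which lies in the λ = 0 eigenspace, untouched; this is the spectral behavior the paper later analyzes for each gnn.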
another drawback occurs when generalizing the approach to multi-graph learning problems. indeed, the k-th element of the vector $F^{(l,j)}_i$ weights the contribution of the k-th eigenvector to the output. those weights are not shareable between graphs of different sizes, which means a different length of $F^{(l,j)}_i$ is needed. moreover, even when the graphs have the same number of nodes, their eigenvalues will be different if their structures differ. to overcome these issues, a few spatially-localized filters have been defined, such as cubic b-spline (bruna et al., 2013), polynomial and chebyshev polynomial (defferrard et al., 2016) and cayley polynomial parameterizations (levie et al., 2019). with such approaches, trainable parameters are defined by $F^{(l,j)}_i = B\,[W^{(l,1)}_{i,j}, \ldots, W^{(l,S_e)}_{i,j}]^\top$, where each column of $B \in \mathbb{R}^{n\times S_e}$ is designed as a function of the eigenvalues, namely $B_{k,s} = \phi_s(\lambda_k)$, where k = 1, . . . , n denotes the eigenvalue index, s = 1, . . . , $s_e$ denotes the filter index and $s_e$ is the number of desired filters. here, $W^{(l,s)} \in \mathbb{R}^{f_l\times f_{l+1}}$ is the trainable matrix for the l-th layer's s-th filter. spatial approaches spatial gnns consider an agg operator, which aggregates the neighborhood nodes, and an upd operator, which updates the concerned node, as follows: $H^{(l+1)}_{:v} = \mathrm{UPD}\left(g_0(H^{(l)}_{:v}),\ \mathrm{AGG}\left(\{g_1(H^{(l)}_{:u}) : u \in \mathcal{N}(v)\}\right)\right)$, where n(v) is the set of neighborhood nodes and $g_0, g_1 : \mathbb{R}^{n\times f_l} \to \mathbb{R}^{n\times f_{l+1}}$ are trainable models. the choice of agg, upd, g0, g1, and even n(v), determines the capability of the model. the vanilla gnn (known as gin-0 in (xu et al., 2019)) uses the same weights in g0 and g1: n(v) is the set of nodes connected to v, agg is the sum of all connected node values and upd(x, y) := σ(x + y), where σ is an elementwise nonlinearity. gcn makes the same selection but normalizes features as in (kipf & welling, 2017). hamilton et al.
(2017) used separate weights in g0 and g1, which means that two sets of trainable weights are applied to the self feature and the neighbor nodes. other approaches defined multiple neighborhoods and used a different gi for each kind of neighborhood. for instance, duvenaud et al. (2015) defined the neighborhood according to node label and/or degree, and niepert et al. (2016) reordered the neighbor nodes and applied the same model gi to neighbors according to their order. these spatial gnns use a sum or normalized sum over the gi in equation 2. other methods weight this summation by another trainable parameter, where the weights can be written as a function of node and/or edge features in order to make the convolutions more productive, such as graph attention networks (veličković et al., 2018), monet (monti et al., 2017), gatedgcn (bresson & laurent, 2018) and splinecnn (fey et al., 2018). bridging spatial and spectral gnns in this section, we define a general framework which includes most of the well-known gnn models, including euclidean convolution and models which use an anisotropic update schema such as in veličković et al. (2018); bresson & laurent (2018). when upd(x, y) = σ(x + y), agg is a sum (or weighted sum) of the defined neighborhood nodes' contributions and gi applies a linear transformation, one can trivially show that the mentioned spatial gnns can be generalized as propagation of the node features to the neighboring nodes followed by feature transformation and an activation function, of the form $H^{(l+1)} = \sigma\left(\sum_{s} C^{(s)} H^{(l)} W^{(l,s)}\right)$, where $C^{(s)} \in \mathbb{R}^{n\times n}$ is the s-th convolution support that defines how the node features are propagated to the neighboring nodes. within this generalization, gnns differ from each other by the choice of convolution supports $C^{(s)}$. this formulation generalizes many different kinds of graph convolutions, as well as euclidean domain convolutions, which can be seen in appendix a with the detailed schema. definition 1.
a trainable support is a graph convolution support $C^{(s)}$ with at least one trainable parameter that can be tuned during training. if $C^{(s)}$ has no trainable parameters, i.e., when the supports are pre-designed, it is called a fixed-support graph convolution. in the trainable-support case, supports can be different in each layer, which is denoted by $C^{(l,s)}$ for the s-th support in layer l. formally, we can define a trainable support by $\left(C^{(l,s)}\right)_{v,u} = h_{s,l}\left(H^{(l)}_{:v}, H^{(l)}_{:u}, E^{(l)}_{v,u}, A_{v,u}\right)$, where $E^{(l)}_{v,u}$ denotes the edge features on layer l from node v to node u, if available, and h(·) is any trainable model parametrized by (s, l). theorem 1. a spectral gnn parameterized with b of entries $B_{i,j} = \phi_j(\lambda_i)$, defined as $H^{(l+1)}_j = \sigma\left(\sum_{i=1}^{f_l} U \operatorname{diag}\left(B\,[W^{(l,1)}_{i,j}, \ldots, W^{(l,S_e)}_{i,j}]^\top\right) U^\top H^{(l)}_i\right)$ | 3 | [
412.915,
582.2210828,
415.73380996,
589.1948828
] |
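as a concrete illustration of the general framework above, here is a minimal numpy sketch of one layer of the form $H^{(l+1)} = \sigma(\sum_s C^{(s)} H^{(l)} W^{(l,s)})$, using two fixed supports (an identity self-term and a row-normalized adjacency); the graph, sizes, and function names are illustrative choices, not taken from the paper:

```python
import numpy as np

def graph_conv_layer(H, supports, weights, sigma=np.tanh):
    """one generalized layer: H' = sigma(sum_s C^(s) H W^(l,s)).

    H        : (n, f_in) node features
    supports : list of (n, n) convolution supports C^(s)
    weights  : list of (f_in, f_out) trainable matrices W^(l,s)
    """
    return sigma(sum(C @ H @ W for C, W in zip(supports, weights)))

# toy 4-node path graph with two fixed supports:
# C^(0) = I (self features) and C^(1) = row-normalized adjacency (neighbors)
n, f_in, f_out = 4, 3, 2
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_norm = A / A.sum(axis=1, keepdims=True)
supports = [np.eye(n), A_norm]

rng = np.random.default_rng(0)
weights = [rng.normal(size=(f_in, f_out)) for _ in supports]
H0 = rng.normal(size=(n, f_in))
H1 = graph_conv_layer(H0, supports, weights)
```

different gnns then correspond simply to different choices of the `supports` list (fixed or trainable).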
fine-tuning can distort pretrained features and underperform out-of-distribution. ananya kumar, aditi raghunathan, robbie jones, tengyu ma, percy liang. stanford university, computer science department. abstract. when transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer—the “head”). it is well known that fine-tuning leads to better accuracy in-distribution (id). however, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (ood) when the pretrained features are good and the distribution shift is large. on 10 distribution shift datasets (breeds-living17, breeds-entity30, domainnet, cifar → stl, cifar-10.1, fmow, imagenetv2, imagenet-r, imagenet-a, imagenet-sketch), fine-tuning obtains on average 2% higher accuracy id but 7% lower accuracy ood than linear probing. we show theoretically that this tradeoff between id and ood accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (lp-ft), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. empirically, lp-ft outperforms both fine-tuning and linear probing on the above datasets (1% better id, 10% better ood than full fine-tuning). introduction. pretraining a model on a large dataset before transferring to a downstream task’s training data substantially improves accuracy over training from scratch—for example, pretraining a resnet-50 on unlabeled imagenet boosts accuracy on cifar-10 from 94% to 98% (chen et al., 2020a;b).
high-stakes applications such as poverty mapping in under-resourced countries (jean et al., 2016), self-driving cars (yu et al., 2020), and medical diagnosis (albadawy et al., 2018), require models that also generalize to circumstances not seen in the training distribution. in addition to testing on data drawn from the downstream task’s training distribution (in-distribution; id), it is increasingly important to test on data distributions unseen during training (out-of-distribution; ood). after initializing with a pretrained model, two popular transfer methods are fine-tuning (running gradient descent on all the model parameters), and linear probing (tuning the head but freezing lower layers). in the id setting it is well known that fine-tuning leads to better accuracy than linear probing (kornblith et al., 2019; zhai et al., 2020; he et al., 2020), and even when testing ood, prior work usually fine-tunes all parameters of their model (hendrycks et al., 2019a; miller et al., 2021; andreassen et al., 2021). intuitively, fine-tuning all layers of a network can improve pretrained features by adapting them to the specific task, while linear probing freezes these features. in this work, we investigate the ood accuracy of fine-tuning and linear probing and find that surprisingly, fine-tuning can do worse than linear probing in the presence of a large distribution shift. we experiment on ten distribution shift benchmarks (breeds living17, breeds entity30, domainnet, cifar → stl, cifar10.1, fmow geo-shift, imagenetv2, imagenet-r, imagenet-a, imagenet-sketch), initializing with good pretrained features from moco-v2 (chen et al., 2020b) and clip (radford et al., 2021). while both methods offer gains over training from scratch, fine-tuning improves the average id accuracy relative to linear probing from 83% to 85% but brings down the ood accuracy from 66% to 59% (figure 1). when and why does fine-tuning underperform linear probing? 
we theoretically consider fine-tuning a two-layer linear network in an overparameterized regression setting where the feature extractor layer has been pretrained to map high-dimensional inputs to useful, lower-dimensional features. we prove that fine-tuning is worse than linear probing on directions outside the span of the training data when using “good” pretrained features. even with an infinitesimally small learning rate, fine-tuning distorts pretrained features—the features of id training data are updated while those of ood data change less. since the head and feature extractor are simultaneously optimized during fine-tuning to a configuration that works well on id training data, the head only accommodates the distorted features of id points and performs poorly (relative to linear probing) on the less changed features of ood points. interestingly, we show that this feature distortion issue cannot be simply fixed by early stopping—throughout the entire process of fine-tuning, we never pass through parameters that do well ood (relative to linear probing). figure 1: given a good feature extractor (top-left), a randomly initialized head is added to map features to outputs and we can (a) fine-tune all the model parameters or (b) linear probe, which freezes the feature extractor and trains only the head. we run experiments on ten distribution shifts. fine-tuning does well when the test example is sampled from the fine-tuning distribution (id), but can underperform on test examples sampled from ood distributions (when the distribution shift is large). (c) our theory indicates that fine-tuning can distort the pretrained feature extractor and lead to poor ood accuracy, but initializing with a linear probed head can fix this—empirically, lp-ft gets better accuracies both id and ood.
on the other hand, given “good” features, linear probing extrapolates better ood because it preserves pretrained features, but does worse than fine-tuning id because linear probing cannot adapt the features to the downstream task. technical challenges. existing theoretical work on transfer learning focuses on linear probing (wu et al., 2020; tripuraneni et al., 2020; du et al., 2020). in contrast, analyses of fine-tuning are scarce and challenging because they require understanding the training dynamics, instead of only the loss function and its global minimizers. in fact, fine-tuning and training from scratch optimize the same training loss and only differ in their initializations (pretrained vs. random). a mathematical analysis that distinguishes them needs to capture properties of the different minima that these algorithms converge to, a phenomenon that is sometimes theoretically referred to as the implicit regularization effect of initialization (neyshabur et al., 2014). accordingly, our analysis reasons about the parameters that gradient methods pass through starting from the pretrained initialization, which is challenging because this is a non-convex optimization problem and there is no known closed form for this trajectory. two-layer linear networks are widely studied in the literature on implicit regularization (saxe et al., 2014; gunasekar et al., 2017; gidel et al., 2019; arora et al., 2018). however, those works analyze random and often small initializations, which don’t capture pretraining. algorithmic implications. our theory shows that fine-tuning underperforms because when trying to fit id training data with a randomly initialized head, the feature extractor changes significantly for id examples, making features for id and ood examples largely inconsistent. this can be fixed by initializing with a good head that does not need to be updated much during fine-tuning, reducing how much the feature extractor changes.
this suggests a simple two-step strategy of first linear probing to find a good head and then full fine-tuning (lp-ft). empirically, lp-ft outperforms fine-tuning and linear probing, both id and ood. even on cifar-10.1 (small distribution shift), where fine-tuning is better for both id and ood, we find lp-ft outperforms fine-tuning on both metrics. lp-ft and vanilla fine-tuning use similar amounts of compute because the first step of linear probing is comparatively cheap. prior work has used lp-ft (levine et al., 2016; kanavati & tsuneki, 2021) (or variants such as layerwise fine-tuning (howard & ruder, 2018) or larger learning rates for the head layer (prabhu et al., 2021))—however, it has not been used for robustness / ood accuracy, and we show that it addresses the id-ood tradeoff theoretically and empirically. note that lp-ft is not meant to be a sota method but rather a simple, principled way to get good id and ood accuracy—we hope our analysis inspires even better methods for robust fine-tuning. empirical validation. finally, we find that fine-tuning fails and lp-ft works, for the reasons predicted by our feature distortion theory: (1) fine-tuning changes the features for id examples more than for ood examples, leading to distortions; (2) lp-ft indeed changes both id and ood features 10–100× less than fine-tuning does; (3) lp-ft gets the best of both worlds, achieving better accuracies than fine-tuning and linear probing both id and ood (figure 1). setup. task and evaluation. given training examples sampled from some distribution pid, our goal is to learn a predictor $f : \mathbb{R}^d \to \mathcal{Y}$ to map inputs $x \in \mathbb{R}^d$ to outputs $y \in \mathcal{Y}$. we evaluate predictors on their standard “in-distribution” (id) performance lid on new test samples drawn from pid that the training data is also sampled from. we also evaluate classifiers on their “out-of-distribution” (ood) performance lood on test samples drawn from a new distribution pood that is different from pid.
formally, for some loss function ℓ, we evaluate classifiers on $L_{\text{id}}(f) = \mathbb{E}_{(x,y)\sim p_{\text{id}}}[\ell(f(x), y)]$ and $L_{\text{ood}}(f) = \mathbb{E}_{(x,y)\sim p_{\text{ood}}}[\ell(f(x), y)]$. models. in this work, we focus on predictors that leverage pretrained representations. we parameterize the final predictor f as follows: given features $g_B(x) \in \mathbb{R}^k$ for some feature extractor parameters $B \in \mathcal{B}$, and a linear “head” $v \in \mathcal{V}$, we have $f_{v,B}(x) = v^\top g_B(x)$. in our experiments (section 4), $g_B$ is a deep network, and in our theory (section 3), $g_B$ is a linear projection. we assume access to some initial pretrained feature extractor $B_0$ that is obtained by training on potentially large amounts of data from a distribution that contains unlabeled or weakly supervised x inputs from pid and pood. we focus on two popular methods to learn a predictor $f_{v,B}$ given training data from pid: (i) linear probing, where $B = B_0$ and the linear head is obtained by minimizing some loss (e.g., logistic loss for classification, squared loss for regression) on the training data, and (ii) fine-tuning, where both v and B are updated by performing gradient descent on some loss on the training data with B initialized at $B_0$. theory: fine-tuning distorts pretrained features. our goal is to understand under what conditions fine-tuning does worse than linear probing out-of-distribution (ood). we consider a linear setting (the feature extractor $g_B$ is linear) where the pretrained features are “good” and the ood shift is large (section 3.1). we prove our main result: that fine-tuning, in which all model parameters are updated, distorts features and gets suboptimal ood error (section 3.2, theorem 3.2). we use this result to show that linear probing gets better ood error but worse id error than fine-tuning (section 3.3). finally, we explain why linear probing then fine-tuning can mitigate this id-ood tradeoff (section 3.4).
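a minimal numpy sketch of this evaluation setup for the linear case studied later ($f_{v,B}(x) = v^\top Bx$), estimating the id and ood losses from samples; the distributions and names here are illustrative stand-ins, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 20, 3

# ground-truth two-layer linear model y = v_star^T B_star x (well-specified case)
B_star, _ = np.linalg.qr(rng.normal(size=(d, k)))
B_star = B_star.T                      # (k, d) feature extractor
v_star = rng.normal(size=k)

def predict(v, B, X):
    return X @ B.T @ v                 # f_{v,B}(x) = v^T B x, applied row-wise

def sq_loss(v, B, X):
    y = predict(v_star, B_star, X)     # labels come from the true model
    return float(np.mean((predict(v, B, X) - y) ** 2))

# empirical estimates of the id and ood losses from samples
X_id = rng.normal(size=(500, d))
X_ood = 3.0 * rng.normal(size=(500, d))       # a hypothetical shifted distribution
L_id = sq_loss(v_star, B_star, X_id)          # the true model has zero error everywhere
L_ood = sq_loss(v_star, B_star, X_ood)
L_bad = sq_loss(v_star + 0.1, B_star, X_ood)  # a perturbed head incurs ood error
```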
our analysis handles two key challenges which distinguish it from prior work on transfer learning in linear models (wu et al., 2020; tripuraneni et al., 2020; du et al., 2020; xie et al., 2021a). prior work focuses on linear probing, while we study fine-tuning, where the resulting optimization problem is non-convex. we also study overparameterized models where the training loss alone does not determine test performance—this captures the fact that both training neural networks from scratch and fine-tuning them have the same training loss but very different test performance. however, it also makes the analysis challenging because we need to reason about the trajectory of gradient methods starting from a pretrained initialization, which has no known closed form. 3.1 linear overparameterized setting. for our analysis, we focus on regression, where $\mathcal{Y} = \mathbb{R}$ and $\ell(\hat y, y) = (\hat y - y)^2$ is the squared loss. models. recall from section 2 that we parameterize predictors in terms of the feature extractor and head parameters. in this section, we study models where the feature extractor is linear, i.e. $f_{v,B}(x) = v^\top Bx$ where $B \in \mathcal{B} = \mathbb{R}^{k\times d}$ and $v \in \mathcal{V} = \mathbb{R}^k$. good pretrained features. for simplicity, we assume the models are well-specified, i.e. $y = v_\star^\top B_\star x$ where $v_\star \in \mathbb{R}^k$ and $B_\star \in \mathbb{R}^{k\times d}$. (footnote: our main contribution, the analysis of fine-tuning (theorem 3.2), does not require well-specification; we compare ft with lp by adapting earlier work on linear probing, which requires well-specification.) note that $B_\star$ and $v_\star$ are only unique up to rotations, i.e., for any rotation matrix $U$, $(Uv_\star)^\top (UB_\star)x = v_\star^\top B_\star x$. as in prior work (tripuraneni et al., 2020), suppose $B_\star$ and $B_0$ have been orthogonalized to have orthonormal rows. suppose we have a pretrained feature extractor $B_0$ close to $B_\star$, so $d(B_0, B_\star) \le \epsilon$, where the distance d is defined as $d(B, B') = \min_U \|B - UB'\|_2$ (where the min is over rotation matrices $U \in \mathbb{R}^{k\times k}$). training data.
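the minimization over rotations in $d(B, B')$ can be sketched via the orthogonal procrustes problem, which has a closed-form solution for the frobenius norm; the definition above uses the spectral norm, so the procrustes-aligned rotation is used here only as a proxy, and the code is a sketch rather than the paper's exact computation:

```python
import numpy as np

def feature_distance(B, B2):
    """min over rotations U of ||B - U B2||: aligned via orthogonal procrustes.

    the procrustes alignment is exact for the frobenius norm; the spectral
    norm of the aligned difference is then returned, as in the definition.
    """
    M = B @ B2.T
    U_, _, Vt = np.linalg.svd(M)
    U = U_ @ Vt                        # best aligning orthogonal matrix
    return np.linalg.norm(B - U @ B2, ord=2), U

rng = np.random.default_rng(0)
k, d = 3, 10
B, _ = np.linalg.qr(rng.normal(size=(d, k)))
B = B.T                                # orthonormal rows, as assumed above
R, _ = np.linalg.qr(rng.normal(size=(k, k)))  # a random orthogonal matrix
dist, _ = feature_distance(B, R @ B)   # a rotated copy is at distance ~0
```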
let $X \in \mathbb{R}^{n\times d}$, $X \neq 0$, be a matrix encoding n training examples from pid where each of the n rows is a training input. let $Y \in \mathbb{R}^n$ be the corresponding outputs. let $S = \mathrm{rowspace}(X)$ be the m-dimensional subspace spanning the training examples. we consider an overparameterized setting where $1 \le m < d-k$. intuitively, the input dimension d is high (e.g., 10k), the feature dimension k is lower (e.g., 100) and m is in the middle (e.g., 5k). large ood shift. we assume that the ood data contains examples outside the span of the training data. formally, let pood have second moment $\Sigma = \mathbb{E}[xx^\top]$ where $x \sim p_{\text{ood}}$, for invertible $\Sigma$. training methods. given training data and a pretrained feature extractor b0, we study the two popular methods of linear probing (lp) and fine-tuning (ft) to learn the final predictor. both methods involve optimizing the training loss via gradient descent (or variants). in order to effectively analyze these gradient based algorithms, we study vanishing step sizes leading to gradient flows. gradient flows can be thought of as a continuous time analogue of gradient based methods and have been extensively studied in recent years as a way to understand gradient based methods (gunasekar et al., 2017; arora et al., 2018; du et al., 2018). formally, for the training loss $\hat L(v,B) = \|XB^\top v - Y\|_2^2$, the gradient flow differential equations for ft and lp are as follows: $\partial_t v_{\text{ft}}(t) = -\nabla_v \hat L(v_{\text{ft}}(t), B_{\text{ft}}(t))$ and $\partial_t B_{\text{ft}}(t) = -\nabla_B \hat L(v_{\text{ft}}(t), B_{\text{ft}}(t))$ for ft, while $\partial_t v_{\text{lp}}(t) = -\nabla_v \hat L(v_{\text{lp}}(t), B_0)$ and $\partial_t B_{\text{lp}}(t) = 0$ for lp, initialized with $B_{\text{ft}}(0) = B_{\text{lp}}(0) = B_0$ and $v_{\text{ft}}(0) = v_{\text{lp}}(0) = v_0$. in practice, the head parameter v0 is initialized randomly—our results hold for any standard random initialization (glorot & bengio, 2010), for example $v_0 \sim \mathcal{N}(0, \sigma^2 I)$ for any $\sigma^2$, or zero initialization where $v_0 = 0$. recall that the initial value of the feature extractor b0 is obtained via pretraining. the final lp and ft solutions are the limit points of the corresponding gradient flows: $v^\infty_{\text{ft}} = \lim_{t\to\infty} v_{\text{ft}}(t)$, $B^\infty_{\text{ft}} = \lim_{t\to\infty} B_{\text{ft}}(t)$, $v^\infty_{\text{lp}} = \lim_{t\to\infty} v_{\text{lp}}(t)$, and $B^\infty_{\text{lp}} = B_{\text{lp}}(t) = B_0$. 3.2 fine-tuning distorts pretrained features. the more common method of using a pretrained feature extractor is fine-tuning (ft), which typically improves id performance relative to linear probing (lp). in this section, we show that ft can distort features, leading to poor ood performance. we first explain the key intuitions and then present our formal theorem lower bounding the ood error of ft (section 3.2.2). 3.2.1 key intuitions. we use two main observations to characterize when and why ft has higher ood error than lp. 1. features get distorted: representations change only in the id subspace (i.e., the subspace spanned by the training data) and are unchanged in the orthogonal subspace. to see this, we take the derivative of the training loss $\hat L(v,B) = \|XB^\top v - Y\|_2^2$ with respect to the feature extractor parameter B: $\nabla_B \hat L(v,B) = -2v(Y - XB^\top v)^\top X$. by definition, if u is a direction orthogonal to the training subspace $S = \mathrm{rowspace}(X)$, then $Xu = 0$ and hence $\nabla_B \hat L(v,B)\, u = 0$; that is, the gradient updates to B do not modify $Bu$ for $u \in S^\perp$. however, the gradient $\nabla_B \hat L$ is non-zero for directions u in the id subspace, and the corresponding features $Bu$ change across the fine-tuning process. we call this feature distortion: the features in some directions are changed but not others. next, we explain why this can lead to high ood error. 2. distorted features can lead to higher ood error. consider a toy example (figure 2) where d = 2 and the dimensionality of the representations k = 1. the linear head v is a scalar quantity that denotes how much the feature extractor b has to be scaled by. suppose the id-subspace is the x-axis.
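observation 1 can be checked numerically: running plain gradient descent on $\hat L(v,B) = \|XB^\top v - Y\|_2^2$ (a discrete stand-in for the gradient flow) leaves $Bu$ unchanged for any direction u orthogonal to the training span. the dimensions, seed, and step size below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, n = 12, 2, 4                    # overparameterized: n + k < d

# ground-truth model y = v_star^T B_star x
B_star, _ = np.linalg.qr(rng.normal(size=(d, k)))
B_star = B_star.T                     # (k, d) with orthonormal rows
v_star = rng.normal(size=k)
X = rng.normal(size=(n, d))
Y = X @ B_star.T @ v_star

# pretrained extractor (small perturbation of B_star) and a random head
B = B_star + 0.01 * rng.normal(size=(k, d))
v = rng.normal(size=k)
B0 = B.copy()

# a unit direction u orthogonal to the training span S = rowspace(X)
Q, _ = np.linalg.qr(X.T)              # orthonormal basis of S
u = rng.normal(size=d)
u -= Q @ (Q.T @ u)
u /= np.linalg.norm(u)

def train_loss(v, B):
    r = X @ B.T @ v - Y
    return float(r @ r)               # ||X B^T v - Y||^2

loss_start = train_loss(v, B)
lr = 1e-3
for _ in range(2000):                 # gradient descent on both v and B ("fine-tuning")
    r = X @ B.T @ v - Y
    grad_v = 2 * B @ X.T @ r          # grad wrt the head
    grad_B = 2 * np.outer(v, r @ X)   # grad wrt the extractor: 2 v (X B^T v - Y)^T X
    v -= lr * grad_v
    B -= lr * grad_B
loss_end = train_loss(v, B)

drift = float(np.linalg.norm(B @ u - B0 @ u))  # features orthogonal to S never move
```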
there are different ways of fitting the id subspace depending on the feature extractor b, as shown in figure 2—both the fine-tuned and linear probed estimators match the true parameter in the id subspace (since $w_{\text{lp}}, w_{\text{ft}}, w_\star$ have the same projection on the x-axis). if the feature extractor were optimal, or a scaled version of the optimal, good performance on the id subspace would translate to good performance everywhere, even in directions orthogonal to the id subspace. however, in ft, the features change only for inputs in the id subspace (see observation 1 above), and thus the updated features are not simply scaled but distorted. in figure 2, this corresponds to the feature extractor $b_0$ changing along the x-axis. in this case, even if the id error is low, the error in directions orthogonal to the id subspace can be high, leading to high ood error. the only way the pretrained features are not distorted and only scaled during ft is if the initial feature extractor $b_0$ is exactly aligned with the id subspace. in figure 2, if $b_0$ is along the x-axis (the id subspace), then updating the features exclusively along the x-axis would simply scale the initial features. figure 2: a toy version of our theory illustrating why fine-tuning distorts features, with inputs in 2d. given input x, the ground truth output is $y = w_\star^\top x$. the id data is along the x-axis and the pretrained feature extractor is $b_0$. (a) linear probing learns $w_{\text{lp}}$, a scaling of the pretrained feature extractor that gets the id data correct ($w_{\text{lp}}$ and $w_\star$ have the same x coordinate, as indicated by the vertical dotted line). (b) fine-tuning updates the pretrained feature extractor along the id data (so horizontally) to get $b_{\text{ft}}$, and then learns a scaling of these features that gets the id data correct. while both methods get the id data correct, fine-tuning makes large errors perpendicular to the id data, because fine-tuning updates $b_0$ along the id direction but not the perpendicular direction.
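the 2d toy example can be reproduced directly; the numbers below (id data on the x-axis, $w_\star = (1,1)$, $b_0 = (0.8, 0.6)$) are illustrative choices, not taken from the paper's figure:

```python
import numpy as np

# id data on the x-axis; ground truth y = w_star . x with w_star = (1, 1)
X_id = np.array([[1.0, 0.0], [2.0, 0.0], [-1.0, 0.0]])
w_star = np.array([1.0, 1.0])
y_id = X_id @ w_star

b0 = np.array([0.8, 0.6])     # unit-norm pretrained extractor, tilted off the x-axis

# linear probing: pick the scaling v that fits the id data (v * b0[0] = w_star[0])
v_lp = w_star[0] / b0[0]

# fine-tuning: gradient descent on both the scalar head v and the extractor b
v, b = 0.0, b0.copy()
lr = 5e-3
for _ in range(20000):
    r = v * (X_id @ b) - y_id            # residuals on the id data
    grad_v = 2 * np.dot(X_id @ b, r)
    grad_b = 2 * v * (X_id.T @ r)        # zero in the second coordinate: x[1] = 0
    v -= lr * grad_v
    b -= lr * grad_b

fits_id = float(np.max(np.abs(v * (X_id @ b) - y_id)))  # ft fits the id data
err_lp = abs(v_lp * b0[1] - w_star[1])   # ood error at the point e2 = (0, 1)
err_ft = abs(v * b[1] - w_star[1])
```

both estimators fit the id data, but the fine-tuned one makes a larger error at the ood point perpendicular to the id subspace, because its extractor moved only horizontally.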
in this case linear probing and fine-tuning will have identical behavior. however, if the angle between $b_0$ and the x-axis is non-zero, the updates will lead to distortions. in high dimensions, we measure the alignment between $B_0$ and the id subspace with the largest principal angle. definition 3.1 (largest principal angle). let $\mathcal{A}$ and $\mathcal{B}$ be arbitrary subspaces, and $E$ and $F$ be matrices with orthonormal columns that span $\mathcal{A}$ and $\mathcal{B}$ respectively, with $r = \min(\dim(\mathcal{A}), \dim(\mathcal{B}))$. then $\cos\theta_{\max}(\mathcal{A}, \mathcal{B}) = \sigma_r(E^\top F)$, which is the r-th largest singular value of $E^\top F$. 3.2.2 general result on the ood error of fine-tuning. our main theorem lower bounds the ood error of fine-tuning outside the span of the training data. theorem 3.2. in the overparameterized linear setting, let $S^\perp = \mathrm{rowspace}(X)^\perp$, $R_0 = \mathrm{rowspace}(B_0)$, and $v_\star, B_\star$ be the optimal parameters with $w_\star = B_\star^\top v_\star$. if $\cos\theta_{\max}(R_0, S^\perp) > 0$, then for all time steps t, the ood error of the fine-tuning iterates $(B_{\text{ft}}(t), v_{\text{ft}}(t))$ is lower bounded: $\sqrt{L_{\text{ood}}(v_{\text{ft}}(t), B_{\text{ft}}(t))} \ \ge\ \sqrt{\sigma_{\min}(\Sigma)}\, \cos\theta_{\max}(R_0, S^\perp)\, \varphi / \sqrt{k} \, - \, O(\epsilon)$, where $\varphi^2 = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$ is defined to be the initial head alignment error and $\epsilon \ge d(B_0, B_\star)$ is the error in the pretrained feature extractor. proof sketch. since the features do not change for examples in $S^\perp$ (perpendicular to the training data), we show that in order to achieve low error on $S^\perp$ the linear head $v_{\text{ft}}(t)$ would have to become very similar to the optimal $v_\star$ at some time t. the head initialization $v_0$ is random (or zero) and likely to be far from $v_\star$ (measured by the alignment error $\varphi$), so the head would have to change a lot to get close to $v_\star$. as we see from the fine-tuning gradient flow (3.2), $v_{\text{ft}}(t)$ and $B_{\text{ft}}(t)$ change in a “coupled” manner, and a “balancedness” invariant in du et al. (2018) holds across the fine-tuning trajectory. correspondingly, if $v_{\text{ft}}(t)$ changes a lot and gets close to $v_\star$, the features $B_{\text{ft}}(t)$ also change a lot for examples in $S$—we show that this would lead to high error on examples in $S$.
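definition 3.1 translates directly into a few lines of numpy (singular values are returned in descending order, so the r-th largest is the last of the first r):

```python
import numpy as np

def cos_theta_max(E, F):
    """cos of the largest principal angle between the subspaces spanned by the
    orthonormal columns of E and F: the r-th largest singular value of E^T F."""
    r = min(E.shape[1], F.shape[1])
    s = np.linalg.svd(E.T @ F, compute_uv=False)  # descending order
    return float(s[r - 1])

E = np.eye(5)[:, :2]                          # span{e1, e2}
c_same = cos_theta_max(E, E)                  # identical subspaces -> 1
c_orth = cos_theta_max(E, np.eye(5)[:, 2:4])  # orthogonal subspaces -> 0
```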
either way, fine-tuning would get some subspace (s or s⊥) of examples wrong, leading to high ood error. the full proof appears in appendix a. interpretations of various quantities. quality of pretrained features (ϵ). to unpack the bound consider a special case where the pretrained features are perfect (ϵ = 0). with perfect features, proposition a.21 shows that linear probing gets zero ood error. theorem 3.2 shows that lood(vft(t),bft(t)) > 0 at all times t—so fine-tuning underperforms when the features are perfect. alignment error of random head initialization (φ2). the lower bound (equation a.14) increases as φ2 increases, because the gradient updates to the head and feature extractor are coupled. if the head were somehow initialized perfectly at v⋆, fine-tuning updates may not increase the ood error. however, when the head is randomly initialized as is standard in fine-tuning, the alignment error is high, leading to high ood error. we use this insight in section 3.4 to show that better head initialization (via linear probing) improves ood performance of fine-tuning. 3.3 linear probing vs. fine-tuning in this section, we use our main theorem on fine-tuning (theorem 3.2) and adapt prior work on linear probing to show that linear probing is better than fine-tuning ood, but worse id, when the id distribution has density on a lower m < d dimensional subspace s, and b0 is close to b⋆. assumption 3.3 (id subspace assumption). we assume that the id data lies on an m-dimensional subspace s where k < m < d − k, and we have n ≥ m training examples. formally, let pz be a distribution on rm which has density, and let the columns of f ∈ rd×m form an orthonormal basis for s. then pid has the distribution of f z where z ∼ pz. 
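the id subspace assumption (assumption 3.3) can be sketched as follows: draw $z \sim p_z$ with density and map it through an orthonormal basis F of the id subspace; the dimensions and the choice of a gaussian $p_z$ are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 10, 4                          # ambient dimension and id-subspace dimension

# orthonormal basis F for the m-dimensional id subspace S
F, _ = np.linalg.qr(rng.normal(size=(d, m)))

def sample_id(n):
    """draw x = F z with z ~ P_z (here a standard gaussian, which has density)."""
    Z = rng.normal(size=(n, m))
    return Z @ F.T                    # each row lies in S

X = sample_id(100)
# every sample lies in S: its component orthogonal to span(F) vanishes
residual = X - (X @ F) @ F.T
```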
recall that the id error is the expected mean-squared error over the id distribution pid: $L_{\text{id}}(v,B) = \mathbb{E}_{x\sim p_{\text{id}}}[(v_\star^\top B_\star x - v^\top Bx)^2]$. ood comparison: under mild non-degeneracy conditions, we show that as the feature extractor error ϵ goes to 0, linear probing does much better than fine-tuning ood: the ratio of the losses goes to 0. the non-degeneracy conditions are similar to section 3.2—we require that the training data cannot be exactly in the same direction as, or orthogonal to, the pretrained features, formally that $\cos\theta_{\max}(R_\star, S)$ and $\cos\theta_{\max}(R_\star, S^\perp)$ are not 0, where $R_\star = \mathrm{rowspace}(B_\star)$. theorem 3.4 (informal version of theorem a.9). in the linear overparameterized setting, under the id subspace assumption (assumption 3.3), if $\cos\theta_{\max}(R_\star, S) \neq 0$ and $\cos\theta_{\max}(R_\star, S^\perp) \neq 0$ where $R_\star = \mathrm{rowspace}(B_\star)$, then $L_{\text{ood}}(v^\infty_{\text{lp}}, B_0)\,/\,L_{\text{ood}}(v_{\text{ft}}(t), B_{\text{ft}}(t)) \xrightarrow{p} 0$ as $B_0 \to B_\star$. this holds for all times t for ft (and therefore also for the limit $v^\infty_{\text{ft}}, B^\infty_{\text{ft}}$), and the lp iterates converge to $v^\infty_{\text{lp}}, B_0$ as a result of the gradient flow on a convex problem. intuitively, if the pretrained features are good, lp learns a near optimal linear head which has small ood error (lemma a.15), but fine-tuning has high ood error (theorem 3.2). we give a more formal version of theorem 3.4 and a proof in appendix a.3. id comparison: when the pretrained features have some error, we show that fine-tuning does better than linear probing id because fine-tuning can update the features to fit the id data. the non-degeneracy condition on $R_{\text{aug}}$ below is similar to our previous results, and holds with probability 1 if the id subspace is chosen randomly, from lemma a.17. proposition 3.5. in the linear overparameterized setting, under the id subspace assumption (assumption 3.3), let $R_0 = \mathrm{rowspace}(B_0)$ and $R_{\text{aug}} = \mathrm{span}(\{w_\star\} \cup R_0)$.
suppose $w_\star \notin R_0$, $\cos\theta_{\max}(S, R_{\text{aug}}) \neq 0$, and that fine-tuning converges to a local minimum of its loss. then fine-tuning does better id almost surely: $L_{\text{id}}(v^\infty_{\text{ft}}, B^\infty_{\text{ft}}) < L_{\text{id}}(v^\infty_{\text{lp}}, B_0)$ with probability 1 (over the randomness of the training examples). to summarize, we proved that there are tradeoffs between id and ood error: ft has lower id error but higher ood error than lp. in the next section, we extend our theoretical insights to illustrate why a simple variant of ft may mitigate such tradeoffs. 3.4 linear probing then fine-tuning: a simple variant to mitigate tradeoffs. the advantage of fine-tuning is that it can adapt the feature extractor to fit the downstream task. can we keep this benefit while ensuring that our ood error is low when we have good pretrained features? going back to theorem 3.2, we see that the alignment error in the head initialization, $\varphi^2 = |(v_0^\top v_\star)^2 - (v_\star^\top v_\star)^2|$, plays an important role. the issue with ft was that under random or zero initialization, $\varphi^2$ is usually large, and since the gradient updates to the feature extractor parameter are coupled with those of the head parameter, the features get distorted in a manner that increases the ood error. this suggests that we should use a better head initialization—one obtained from linear probing. if the pretrained features are decent, a linear probed head would be much better aligned with $v_\star$, allowing the features to be updated in a manner that does not increase the ood error much. we formally prove this intuition in a simple setting where we have perfect pretrained features. note that in this case, linear probing alone gets zero ood error—so proposition 3.6 is just a first cut result to illustrate that if initialized well, full fine-tuning does not distort features. proposition 3.6. given perfect pretrained features $B_0 = UB_\star$ for some rotation $U$, let $R_0 = \mathrm{rowspace}(B_0)$.
under the non-degeneracy conditions $\cos\theta_{\max}(R_0, S) \neq 0$ and $\cos\theta_{\max}(R_0, S^\perp) \neq 0$: $\forall t,\ L_{\text{ood}}(B_{\text{ft}}(t)^\top v_{\text{ft}}(t)) > 0$ if $v_0 \sim \mathcal{N}(0, \sigma^2 I)$ is randomly initialized (ft), and $\forall t,\ L_{\text{ood}}(B_{\text{ft}}(t)^\top v_{\text{ft}}(t)) = 0$ if $v_0$ is initialized to $v^\infty_{\text{lp}}$ (lp-ft). experiments. we run experiments on ten benchmark datasets with deep neural networks and see that given good pretrained features, fine-tuning (ft) does better id but worse ood than linear probing (lp). as predicted by the theory, we find that lp-ft does better than both methods. finally, we see that a number of predictions from the feature distortion theory hold up in practice. for more details on datasets, pretraining models, and experiment protocols, see appendix b. we use standard distribution shift datasets: domainnet (peng et al., 2019; tan et al., 2020), breeds-living-17 (santurkar et al., 2020), breeds-entity-30 (santurkar et al., 2020), cifar-10 → stl (krizhevsky, 2009; coates et al., 2011; french et al., 2018), cifar-10 → cifar-10.1 (recht et al., 2018), imagenet-1k (russakovsky et al., 2015)—where the ood test sets are imagenetv2 (recht et al., 2019), imagenet-r (hendrycks et al., 2020), imagenet-a (hendrycks et al., 2019b), and imagenet-sketch (wang et al., 2019)—and fmow geo-shift, which is adapted from the satellite remote sensing dataset functional map of the world (christie et al., 2018; koh et al., 2021). see appendix b for more details on the datasets. pretraining and models. we use a clip pretrained vit-b/16 for imagenet. for the other datasets we use a resnet-50 architecture and consider a diverse range of pretraining methods and datasets: moco-v2 (chen et al., 2020b), clip (radford et al., 2021), and moco-tp (ayush et al., 2020). in appendix b, we also show results for a clip-vit-b/16 and more fine-tuning baselines on living-17. 4.1 linear probing vs. fine-tuning. experiment protocols. we initialize with the pretrained model, and fine-tune or linear probe on id training examples.
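the intuition behind proposition 3.6 above can be illustrated numerically: with perfect pretrained features $B_0 = UB_\star$, the least-squares linear-probed head fits the training data exactly, so both fine-tuning gradients vanish at the lp-ft initialization and no feature distortion occurs. the squared loss and all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k, n = 15, 3, 6

B_star, _ = np.linalg.qr(rng.normal(size=(d, k)))
B_star = B_star.T                     # (k, d) with orthonormal rows
v_star = rng.normal(size=k)

# perfect pretrained features: B0 = U B_star for some rotation U
U, _ = np.linalg.qr(rng.normal(size=(k, k)))
B0 = U @ B_star

X = rng.normal(size=(n, d))
Y = X @ B_star.T @ v_star

# step 1 (lp): least-squares head on the frozen features
Phi = X @ B0.T
v_lp, *_ = np.linalg.lstsq(Phi, Y, rcond=None)

# with perfect features the lp head fits the training data exactly,
# so both fine-tuning gradients vanish at the lp-ft initialization
r = Phi @ v_lp - Y
grad_v = 2 * B0 @ X.T @ r
grad_B = 2 * np.outer(v_lp, r @ X)
```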
for fine-tuning on each dataset we swept over 6 learning rates, using a cosine learning rate schedule and a batch size of 64. we early stop and choose the best learning rate using id validation accuracy. for linear probing we train an ℓ2-regularized logistic regression classifier on frozen features from the penultimate layer of the pretrained model, selecting the best ℓ2-regularization hyperparameter based on id validation accuracy. for all methods, we run each hyperparameter configuration 3 times (with different random seeds), and take the average accuracy. we used a slightly different protocol for imagenet because the dataset is much larger and running these experiments involves more computational resources: we used a batch size of 128, swept over 3 learning rates for both fine-tuning and linear probing (we did not sweep over ℓ2-regularization), and ran each hyperparameter configuration once. in all cases, ood data was only used for evaluation. results. fine-tuning (ft) does better than linear probing (lp) on 5 out of 6 id datasets (average accuracy of 85.1% for ft vs. 82.9% for lp, see table 1). this is consistent with prior work and intuitions. however, linear probing does better on 8 out of 10 ood datasets (average accuracy of 66.2% for lp vs. 59.3% for ft, see table 2)—lp does better on all datasets except cifar-10.1 and imagenetv2, where the ood is designed to closely replicate the id dataset. this matches our theoretical predictions. our training datasets vary in size from 20k examples to over a million examples, so lp does not appear to perform better than ft simply because of a small training set. table 1: id accuracies with 90% confidence intervals over 3 runs—fine-tuning does better than linear probing on all datasets except domainnet (which could be because the version of the domainnet training dataset from tan et al. (2020) is fairly small, with around 20k examples). lp-ft does the best on all except fmow, where it is in between linear probing and fine-tuning. (columns shown: cifar-10, domainnet, fmow, imagenet, average; rows: ft, lp, lp-ft; numeric entries not preserved.) table 2: ood accuracies with 90% confidence intervals over 3 runs. linear probing does better than fine-tuning on all datasets except cifar-10.1 and imagenetv2, where the id and ood are similar (consistent with our theory). lp-ft does the best on all 10 datasets. (columns shown: stl, domainnet, fmow, imnetv2, imnet-r, imnet-sk, imnet-a, average; rows: ft, lp, lp-ft; numeric entries not preserved.) 4.2 linear probing then fine-tuning (lp-ft). experiment protocols. for lp-ft, we initialize the neural network head using the linear probed solution, and then fine-tune the model. lp-ft and fine-tuning use similar compute because the linear probing step is much faster than fine-tuning. as with fine-tuning, we swept over 6 learning rates, early stopping using id validation accuracy. for the imagenet experiments we swept over 3 learning rates, and explicitly ensured that lp-ft and fine-tuning use exactly the same compute (we ran each stage of lp-ft for half as many epochs as we ran vanilla fine-tuning). results. we find that lp-ft gets the best accuracy id (average: 85.7%) and ood (average: 68.9%). this is true for 5/6 id and 10/10 ood datasets—every dataset except fmow id, where lp-ft is better than linear probing but worse than fine-tuning. since the id accuracy on fmow is low (56.5%), this could be because the pretrained features are not good. 4.3 examining the feature distortion theory. early stopping does not mitigate feature distortion. our theory predicts that fine-tuning can do worse ood (than linear probing) throughout the process of fine-tuning, and not just at the end. to test this, we early stop each fine-tuning method and choose the best learning rate based on ood test accuracy. as expected, fine-tuning does improve a little, but linear probing (average accuracy: 67.1%) is still better than fine-tuning (average accuracy: 61.3%). see appendix b for per-dataset results.
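the hyperparameter-selection protocol above (sweep a regularization strength, select on id validation performance only, never touch ood data) can be sketched as follows; the paper trains ℓ2-regularized logistic regression on real frozen features, while this sketch substitutes closed-form ridge regression on synthetic features for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
n_tr, n_val, k = 80, 40, 10

# synthetic stand-ins for frozen penultimate-layer features and labels
w_true = rng.normal(size=k)
Phi_tr = rng.normal(size=(n_tr, k))
y_tr = Phi_tr @ w_true + 0.1 * rng.normal(size=n_tr)
Phi_val = rng.normal(size=(n_val, k))
y_val = Phi_val @ w_true + 0.1 * rng.normal(size=n_val)

def ridge(Phi, y, lam):
    # closed-form l2-regularized least-squares head on frozen features
    return np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ y)

def val_err(v):
    return float(np.mean((Phi_val @ v - y_val) ** 2))

# sweep the regularization strength; select on id validation error only
lams = [1e-3, 1e-2, 1e-1, 1.0, 10.0]
best_v = min((ridge(Phi_tr, y_tr, lam) for lam in lams), key=val_err)
```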
id-ood features get distorted from fine-tuning. the feature distortion theory predicts that fine-tuning changes features for id examples more than for ood examples, which is why fitting a head on id examples performs poorly ood. to test this, for each example x in living-17 (results for other datasets are in appendix b), we took the euclidean distance of the resnet-50 features before and after fine-tuning: ∥g_b(x) − g_b0(x)∥_2. as expected, the average distance for id examples (0.0188 ± 0.0001) is more than for ood examples (0.0167 ± 0.0001). the theory also predicts that lp-ft changes features less than fine-tuning does. as expected, the average distance changed by lp-ft both id (0.0011 ± 0.0001) and ood (0.0009 ± 0.0001) is 20× smaller than for fine-tuning. pretrained features must be good, id-ood far apart. our theory says that linear probing does better than fine-tuning ood, but only if the ood and id data are quite different and the pretrained features are good—otherwise fine-tuning can do better ood by adjusting the feature extractor id. feature quality: we use a checkpoint of moco-v1 that got 10% worse accuracy (on imagenet) and compare linear probing and fine-tuning on living-17. with worse features, both methods do worse, but fine-tuning (96% id, 71% ood) does better than linear probing (92% id, 66% ood). id ≈ ood: we fine-tune / linear probe on cifar-10, and test on cifar-10.1, a dataset collected using a similar protocol to cifar-10. as expected, fine-tuning (92.3%) outperforms linear probing ood (82.7%). even in this case, where we have no tradeoffs, lp-ft does the best (93.5%).

related work and discussion

fine-tuning vs. linear probing. fine-tuning (ft) and linear probing (lp) are popular transfer learning algorithms.
there is substantial evidence of ft outperforming lp in-distribution (id), including recent large-scale investigations (kornblith et al., 2019; chen et al., 2021; zhai et al., 2020; chen et al., 2020b) (the only notable exception is in peters et al. (2019), where lp performs better than ft when using elmo representations, but worse using bert). ft is therefore the method of choice for improving accuracy, while lp is used to analyze properties of representations (peters et al., 2018; belinkov et al., 2017; hewitt & manning, 2019). in our work, we find that ft can underperform lp, especially when using high-quality pretrained features in the presence of a large distribution shift. there are a variety of other fine-tuning heuristics (ge & yu, 2017; guo et al., 2019; zhang et al., 2020; zhu et al., 2020; jiang et al., 2021; aghajanyan et al., 2021)—combining our insights with these ideas might lead to better methods. the benefit of preserving pretrained features. our work adds to growing evidence that lightweight fine-tuning, where only a small part of a pretrained model is updated, can perform better under distribution shifts—and we give a theoretical grounding to why this might be the case. zero-shot language prompting in vision (radford et al., 2021) and other lightweight fine-tuning approaches in nlp (houlsby et al., 2019; li & liang, 2021; xie et al., 2021b; lester et al., 2021; utama et al., 2021; zhou et al., 2021) have been shown to improve ood performance. andreassen et al. (2021) observe that through the course of fine-tuning, id accuracy increases but ood accuracy plateaus. mitigating id-ood tradeoffs. while lp-ft has sometimes been used as a fine-tuning heuristic (levine et al., 2016; kanavati & tsuneki, 2021; fastai), it has not been used for robustness / ood accuracy, and we show that it addresses the id-ood tradeoff theoretically and empirically.
tradeoffs between id and ood accuracy are widely studied, and prior work self-trains on large amounts of unlabeled data to mitigate such tradeoffs (raghunathan et al., 2020; xie et al., 2021a; khani & liang, 2021). in contrast, lp-ft uses no extra unlabeled data and is a simple variant of fine-tuning. in concurrent and independent work, wortsman et al. (2021) show that ensembling the weights of a zero-shot and fine-tuned model mitigates the id-ood tradeoff between these approaches, and this method could be promising for our datasets as well. theoretical analysis of transfer learning. prior works on transfer learning mainly analyze linear probing (wu et al., 2020; tripuraneni et al., 2020; du et al., 2020). recent works (chua et al., 2021; shachaf et al., 2021) study fine-tuning, but in the underparameterized regime (where there is a unique global optimum) or assuming a balanced initialization. prior works also focus on id error, while we analyze ood error. see section c for additional related work on the theory of overparameterized models. conclusion. there is a strong trend towards leveraging pretrained models to improve downstream performance, and whenever feasible, it is common to fine-tune all model parameters. in this work, we show theoretically and empirically that preserving features might be important for robustness, and simpler approaches like linear probing can improve out-of-distribution (ood) performance. this ood gap between fine-tuning and linear probing grows as the quality of pretrained features improves, so we believe our results are likely to gain significance over time with growing innovations and scale of pretraining. finally, we showed lp-ft can mitigate tradeoffs between id and ood accuracy in our context. lp-ft could be useful in other situations, for example in clip we could initialize the final layer with the zero-shot classifier and then fine-tune the entire model, as done in concurrent work (wortsman et al., 2021).
in nlp, linear probing is not as good—here we could first prompt-tune (lester et al., 2021) and then fine-tune the entire model. lp-ft is just a first step in leveraging the intuition from our theoretical analysis, and we hope that this work inspires new methods of leveraging powerful pretrained models. proofs and reproducibility: we include proofs for our theoretical results in appendix a and additional experiment details in appendix b. updated code is available at https://github.com/ananyakumar/transfer_learning and this codalab worksheet. acknowledgements: we would like to thank kumar ayush and burak uzkent for moco checkpoints pretrained on unlabeled fmow images, nilesh tripuraneni for clarifications on his work and references on principal angles, daniel levy for useful suggestions on experiments to run, niladri chatterji, jeff z. haochen, and colin wei for useful papers and comments on figures, niladri chatterji and kaidi cao for reviewing the paper at ml paper swap, kevin yang for his help with analyzing differential equations, tri dao and pang wei koh for help with writing, suriya gunasekar, adam kalai, simon kornblith, ting chen, sang michael xie, albert gu, and kendrick shen for useful discussions, and pang wei koh, niladri chatterji, and tri dao for suggestions on framing our results better. ananya kumar was supported by the rambus corporation stanford graduate fellowship. percy liang was supported by the open philanthropy project and nsf award grant no. 1805310. aditi raghunathan was supported by a google phd fellowship and open philanthropy project ai fellowship. tengyu ma acknowledges support of a google faculty award, nsf iis 2045685, the sloan fellowship, jd.com, sail, and sdsi. references armen aghajanyan, akshat shrivastava, anchit gupta, naman goyal, luke zettlemoyer, and sonal gupta. better fine-tuning by reducing representational collapse. in international conference on learning representations (iclr), 2021.
ea albadawy, a saha, and ma mazurowski. deep learning for segmentation of brain tumors: impact of cross-institutional training and testing. med phys., 45, 2018. anders andreassen, yasaman bahri, behnam neyshabur, and rebecca roelofs. the evolution of out-of-distribution robustness throughout fine-tuning. arxiv, 2021. sanjeev arora, nadav cohen, and elad hazan. on the optimization of deep networks: implicit acceleration by overparameterization. in international conference on machine learning (icml), pp. 244–253, 2018. kumar ayush, burak uzkent, chenlin meng, kumar tanmay, m. burke, d. lobell, and stefano ermon. geography-aware self-supervised learning. arxiv, 2020. peter l. bartlett, philip m. long, gábor lugosi, and alexander tsigler. benign overfitting in linear regression. arxiv, 2019. yonatan belinkov, nadir durrani, fahim dalvi, hassan sajjad, and james glass. what do neural machine translation models learn about morphology? in association for computational linguistics (acl), pp. 861–872, 2017. mikhail belkin, daniel hsu, and ji xu. two models of double descent for weak features. arxiv, 2019. koby bibas, yaniv fogel, and meir feder. a new look at an old problem: a universal learning approach to linear regression. in 2019 ieee international symposium on information theory (isit), pp. 2304–2308, 2019. tianle cai, ruiqi gao, j. lee, and qi lei. a theory of label propagation for subpopulation shift. in international conference on machine learning (icml), 2021. ting chen, simon kornblith, mohammad norouzi, and geoffrey hinton. a simple framework for contrastive learning of visual representations. in international conference on machine learning (icml), pp. 1597–1607, 2020a. xinlei chen, haoqi fan, ross b. girshick, and kaiming he. improved baselines with momentum contrastive learning. arxiv, 2020b. xinlei chen, saining xie, and kaiming he. an empirical study of training self-supervised vision transformers. arxiv, 2021. gordon christie, neil fendley, james wilson, and ryan mukherjee.
functional map of the world. in computer vision and pattern recognition (cvpr), 2018. kurtland chua, qi lei, and jason d lee. how fine-tuning allows for effective meta-learning. arxiv, 2021. adam coates, andrew ng, and honglak lee. an analysis of single-layer networks in unsupervised feature learning. in proceedings of the fourteenth international conference on artificial intelligence and statistics, volume 15, pp. 215–223, 2011. simon s. du, wei hu, sham m. kakade, jason d. lee, and qi lei. few-shot learning via learning the representation, provably. arxiv, 2020. simon shaolei du, wei hu, and jason lee. algorithmic regularization in learning deep homogeneous models: layers are automatically balanced. in advances in neural information processing systems (neurips), 2018. fastai. fastai tutorial on transfer learning. https://github.com/fastai/course-v3/blob/master/nbs/dl1/lesson1-pets.ipynb. geoff french, michal mackiewicz, and mark fisher. self-ensembling for visual domain adaptation. in international conference on learning representations, 2018. weifeng ge and yizhou yu. borrowing treasures from the wealthy: deep transfer learning through selective joint fine-tuning. in computer vision and pattern recognition (cvpr), 2017. gauthier gidel, francis r. bach, and simon lacoste-julien. implicit regularization of discrete gradient dynamics in deep linear neural networks. in advances in neural information processing systems (neurips), 2019. xavier glorot and yoshua bengio. understanding the difficulty of training deep feedforward neural networks. in international conference on artificial intelligence and statistics, 2010. gene h. golub and charles f. van loan. matrix computations. the johns hopkins university press, 2013. suriya gunasekar, blake e woodworth, srinadh bhojanapalli, behnam neyshabur, and nati srebro. implicit regularization in matrix factorization. in advances in neural information processing systems (neurips), pp. 6151–6159, 2017.
yunhui guo, honghui shi, abhishek kumar, kristen grauman, tajana rosing, and rogerio feris. spottune: transfer learning through adaptive fine-tuning. in computer vision and pattern recognition (cvpr), 2019. trevor hastie, andrea montanari, saharon rosset, and ryan j tibshirani. surprises in high-dimensional ridgeless least squares interpolation. arxiv preprint arxiv:1903.08560, 2019. kaiming he, haoqi fan, yuxin wu, saining xie, and ross girshick. momentum contrast for unsupervised visual representation learning. in computer vision and pattern recognition (cvpr), 2020. dan hendrycks, kimin lee, and mantas mazeika. using pre-training can improve model robustness and uncertainty. in international conference on machine learning (icml), 2019a. dan hendrycks, kevin zhao, steven basart, jacob steinhardt, and dawn song. natural adversarial examples. arxiv, 2019b. dan hendrycks, steven basart, norman mu, saurav kadavath, frank wang, evan dorundo, rahul desai, tyler zhu, samyak parajuli, mike guo, dawn song, jacob steinhardt, and justin gilmer. the many faces of robustness: a critical analysis of out-of-distribution generalization. arxiv preprint arxiv:2006.16241, 2020. john hewitt and christopher d. manning. a structural probe for finding syntax in word representations. in association for computational linguistics (acl), 2019. neil houlsby, andrei giurgiu, stanislaw jastrzebski, bruna morrone, quentin de laroussilhe, andrea gesmundo, mona attariyan, and sylvain gelly. parameter-efficient transfer learning for nlp. arxiv, 2019. jeremy howard and sebastian ruder. universal language model fine-tuning for text classification. in association for computational linguistics (acl), 2018. neal jean, marshall burke, michael xie, w. matthew davis, david b. lobell, and stefano ermon. combining satellite imagery and machine learning to predict poverty. science, 353, 2016. haoming jiang, pengcheng he, weizhu chen, xiaodong liu, jianfeng gao, and tuo zhao.
smart: robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. in international conference on learning representations (iclr), 2021. fahdi kanavati and masayuki tsuneki. partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning. in medical imaging with deep learning, 2021. fereshte khani and percy liang. removing spurious features can hurt accuracy and affect groups disproportionately. in acm conference on fairness, accountability, and transparency (facct), 2021. pang wei koh, shiori sagawa, henrik marklund, sang michael xie, marvin zhang, akshay balsubramani, weihua hu, michihiro yasunaga, richard lanas phillips, irena gao, tony lee, etienne david, ian stavness, wei guo, berton a. earnshaw, imran s. haque, sara beery, jure leskovec, anshul kundaje, emma pierson, sergey levine, chelsea finn, and percy liang. wilds: a benchmark of in-the-wild distribution shifts. in international conference on machine learning (icml), 2021. simon kornblith, jonathon shlens, and quoc v. le. do better imagenet models transfer better? in computer vision and pattern recognition (cvpr), 2019. alex krizhevsky. learning multiple layers of features from tiny images. technical report, university of toronto, 2009. thomas laurent and james h. von brecht. deep linear neural networks with arbitrary loss: all local minima are global. in international conference on machine learning (icml), 2018. brian lester, rami al-rfou, and noah constant. the power of scale for parameter-efficient prompt tuning. in empirical methods in natural language processing (emnlp), 2021. s. levine, chelsea finn, trevor darrell, and p. abbeel. end-to-end training of deep visuomotor policies. journal of machine learning research (jmlr), 17, 2016. xiang lisa li and percy liang. prefix-tuning: optimizing continuous prompts for generation. in association for computational linguistics (acl), 2021. xuhong li, yves grandvalet, and franck davoine.
explicit inductive bias for transfer learning with convolutional networks. in international conference on machine learning (icml), 2018. song mei and andrea montanari. the generalization error of random features regression: precise asymptotics and double descent curve. arxiv preprint arxiv:1908.05355, 2019. john miller, rohan taori, aditi raghunathan, shiori sagawa, pang wei koh, vaishaal shankar, percy liang, yair carmon, and ludwig schmidt. accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. in international conference on machine learning (icml), 2021. vidya muthukumar, kailas vodrahalli, vignesh subramanian, and anant sahai. harmless interpolation of noisy data in regression. ieee journal on selected areas in information theory, 1(1):67–83, 2020. behnam neyshabur, ryota tomioka, and nathan srebro. in search of the real inductive bias: on the role of implicit regularization in deep learning. arxiv, 2014. xingchao peng, qinxun bai, xide xia, zijun huang, kate saenko, and bo wang. moment matching for multi-source domain adaptation. in international conference on computer vision (iccv), 2019. matthew e. peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. deep contextualized word representations. in north american association for computational linguistics (naacl), 2018. matthew e peters, sebastian ruder, and noah a smith. to tune or not to tune? adapting pretrained representations to diverse tasks. in proceedings of the 4th workshop on representation learning for nlp (repl4nlp-2019), pp. 7–14, 2019. viraj prabhu, shivam khare, deeksha karthik, and judy hoffman. selective entropy optimization via committee consistency for unsupervised domain adaptation. in international conference on computer vision (iccv), 2021.
alec radford, jong wook kim, chris hallacy, aditya ramesh, gabriel goh, sandhini agarwal, girish sastry, amanda askell, pamela mishkin, jack clark, gretchen krueger, and ilya sutskever. learning transferable visual models from natural language supervision. in international conference on machine learning (icml), volume 139, pp. 8748–8763, 2021. aditi raghunathan, sang michael xie, fanny yang, john c. duchi, and percy liang. understanding and mitigating the tradeoff between robustness and accuracy. in international conference on machine learning (icml), 2020. benjamin recht, rebecca roelofs, ludwig schmidt, and vaishaal shankar. do cifar-10 classifiers generalize to cifar-10? arxiv, 2018. benjamin recht, rebecca roelofs, ludwig schmidt, and vaishaal shankar. do imagenet classifiers generalize to imagenet? in international conference on machine learning (icml), 2019. mark rudelson and roman vershynin. smallest singular value of a random rectangular matrix. communications on pure and applied mathematics, 62:1707–1739, 2009. olga russakovsky, jia deng, hao su, jonathan krause, sanjeev satheesh, sean ma, zhiheng huang, andrej karpathy, aditya khosla, michael bernstein, et al. imagenet large scale visual recognition challenge. international journal of computer vision, 115(3):211–252, 2015. shibani santurkar, dimitris tsipras, and aleksander madry. breeds: benchmarks for subpopulation shift. arxiv, 2020. andrew m. saxe, james l. mcclelland, and surya ganguli. exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arxiv, 2014. gal shachaf, alon brutzkus, and amir globerson. a theoretical analysis of fine-tuning with linear teachers. in advances in neural information processing systems (neurips), 2021. shuhan tan, xingchao peng, and kate saenko. class-imbalanced domain adaptation: an empirical odyssey. arxiv, 2020. rohan taori, achal dave, vaishaal shankar, nicholas carlini, benjamin recht, and ludwig schmidt.
measuring robustness to natural distribution shifts in image classification. arxiv preprint arxiv:2007.00644, 2020. nilesh tripuraneni, michael i. jordan, and chi jin. on the theory of transfer learning: the importance of task diversity. arxiv, 2020. joel a. tropp. an introduction to matrix concentration inequalities. foundations and trends in machine learning, 2015. prasetya ajie utama, nafise sadat moosavi, victor sanh, and iryna gurevych. avoiding inference heuristics in few-shot prompt-based finetuning. arxiv preprint arxiv:2109.04144, 2021. haohan wang, songwei ge, zachary lipton, and eric p xing. learning robust global representations by penalizing local predictive power. in advances in neural information processing systems (neurips), 2019. mitchell wortsman, gabriel ilharco, mike li, jong wook kim, hannaneh hajishirzi, ali farhadi, hongseok namkoong, and ludwig schmidt. robust fine-tuning of zero-shot models. arxiv preprint arxiv:2109.01903, 2021. sen wu, hongyang r. zhang, and christopher ré. understanding and improving information transfer in multi-task learning. in international conference on learning representations (iclr), 2020. sang michael xie, ananya kumar, robbie jones, fereshte khani, tengyu ma, and percy liang. in-n-out: pre-training and self-training using auxiliary information for out-of-distribution robustness. in international conference on learning representations (iclr), 2021a. sang michael xie, tengyu ma, and percy liang. composed fine-tuning: freezing pre-trained denoising autoencoders for improved generalization. in international conference on machine learning (icml), 2021b. fisher yu, haofeng chen, xin wang, wenqi xian, yingying chen, fangchen liu, vashisht madhavan, and trevor darrell. bdd100k: a diverse driving dataset for heterogeneous multitask learning. in computer vision and pattern recognition (cvpr), 2020.
xiaohua zhai, joan puigcerver, alexander kolesnikov, pierre ruyssen, carlos riquelme, mario lucic, josip djolonga, andre susano pinto, maxim neumann, alexey dosovitskiy, lucas beyer, olivier bachem, michael tschannen, marcin michalski, olivier bousquet, sylvain gelly, and neil houlsby. a large-scale study of representation learning with the visual task adaptation benchmark. arxiv, 2020. jeffrey o zhang, alexander sax, amir zamir, leonidas guibas, and jitendra malik. side-tuning: a baseline for network adaptation via additive side networks. in european conference on computer vision (eccv), 2020. kaiyang zhou, jingkang yang, chen change loy, and ziwei liu. learning to prompt for vision-language models. arxiv preprint arxiv:2109.01134, 2021. chen zhu, yu cheng, zhe gan, siqi sun, tom goldstein, and jingjing liu. freelb: enhanced adversarial training for natural language understanding. in international conference on learning representations (iclr), 2020.

a proofs for section 3

a.1 preliminaries on important notations and principal angles

big-oh notation: for convenience, we use big-oh notation in a way that differs from standard theoretical computer science texts. when we say o(<expr1>) we mean that this can be replaced by c <expr1> for some universal constant c such that the statement holds. as an example, we can say 5x^2 ≤ o(x^2) because there exists some universal constant (c = 5) such that 5x^2 ≤ 5x^2. more examples: we can also say 5x^2 ≥ o(x^2), or, if x ≥ 1, then 7x^2 ≤ o(x^3) and 0.1x^2 ≥ o(x).

singular values: given a rectangular matrix a ∈ r^{m×n}, let r = min(m,n). the minimum singular value is defined as the r-th largest singular value of a, so σ_min(a) = σ_r(a). working with minimum singular values requires more care than maximum singular values. in particular, when we have rectangular matrices, some bounds depend on whether the matrix is 'fat' (has more columns than rows) or 'tall' (has more rows than columns).
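these conventions can be checked numerically; the following is a small sketch (matrix sizes arbitrary, not from the paper):

```python
# sigma_min(a) is the r-th largest singular value of a, where r = min(m, n),
# and a 'tall' matrix and its 'fat' transpose share the same singular values.
import numpy as np

rng = np.random.default_rng(0)
tall = rng.normal(size=(8, 3))    # 'tall': more rows than columns, r = min(8, 3) = 3
fat = tall.T                      # 'fat': more columns than rows, r = 3 as well

s_tall = np.linalg.svd(tall, compute_uv=False)   # singular values, sorted descending
s_fat = np.linalg.svd(fat, compute_uv=False)

sigma_min = s_tall[-1]            # the last of the r values returned
print(sigma_min)
print(np.allclose(s_tall, s_fat))

# a tall matrix with full column rank has rowspace(a) = r^3, so for any v,
# ||a v||_2 >= sigma_min(a) ||v||_2 (the shrinkage bound used in lemma a.6)
v = rng.normal(size=3)
print(np.linalg.norm(tall @ v) >= sigma_min * np.linalg.norm(v))
```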
given a matrix a, the operator norm ∥a∥_2 is the maximum singular value: ∥a∥_2 = σ_max(a).

projectors: given a subspace r of r^d, let π_r denote the orthogonal projection onto r, satisfying that for all x ∈ r^d: π_r(x) ∈ r and ∀r ∈ r, ∥x − π_r(x)∥_2 ≤ ∥x − r∥_2. if e ∈ r^{d×dim(r)} has orthonormal columns that form a basis for r, then we have: π_r = ee⊤. from this we can easily check that π_r^2 = π_r and π_r⊤ = π_r. see e.g., chapter 2.5.1 of golub & loan (2013) for more information.

principal angles: given two non-zero vectors x and y, the cosine of the angle between them, cos θ, is: cos θ = x⊤y / (∥x∥_2 ∥y∥_2). if we consider the 1-dimensional subspaces (so basically lines) s_x and s_y spanned by x and y respectively, then the angle between them, cos θ′, is given by the absolute value (since lines are undirected): cos θ′ = |x⊤y| / (∥x∥_2 ∥y∥_2). principal angles generalize this notion to higher dimensions. see e.g., chapter 6.4.3 in golub & loan (2013) for more information on principal angles.

definition a.1. given two non-empty subspaces r and s of r^d, where r = min(dim(r), dim(s)), we have r principal angles 0 ≤ θ_1 ≤ ... ≤ θ_r ≤ π/2. the directions of the inequalities swap when we take the cosine of the principal angles: 1 ≥ cos θ_1 ≥ ... ≥ cos θ_r ≥ 0. (a.6)

the cosines of the principal angles are given by the svd—let e ∈ r^{d×dim(r)} and f ∈ r^{d×dim(s)} have orthonormal columns which span r and s respectively. then we have: cos θ_i = σ_i(e⊤f), where σ_i denotes the i-th largest singular value. in this paper, we are interested in the cosine of the largest angle between them, given by: cos θ_max(r,s) = cos θ_r. we can massage this into a variational characterization of the maximum principal angle, which is important for lower bounding the error of fine-tuning outside the span of the training data.

lemma a.2. suppose dim(r) ≤ dim(s), and let f ∈ r^{d×dim(s)} have orthonormal columns that form a basis for s. we have: cos θ_max(r,s) = min_{r ∈ r, ∥r∥_2 = 1} ∥f⊤r∥_2.

proof. let e ∈ r^{d×dim(r)} and f ∈ r^{d×dim(s)} have orthonormal columns that span r and s respectively.
since dim(r) ≤ dim(s) (a crucial condition!), f⊤e is a 'tall' matrix (it has more rows than columns), so we have: σ_min(f⊤e) = min_{∥v∥_2 = 1} ∥f⊤ev∥_2. the result now follows from some algebra: cos θ_max(r,s) = σ_min(f⊤e) = min_{∥v∥_2 = 1} ∥f⊤ev∥_2.

a.2 feature distortion theorem

we first prove our core theorem, that fine-tuning distorts pretrained features.

restatement of theorem 3.2. in the overparameterized linear setting, let s⊥ = rowspace(x)⊥, r_0 = rowspace(b_0), and v⋆, b⋆ be the optimal parameters with w⋆ = b⋆v⋆. if cos θ_max(r_0, s⊥) > 0, then for all time steps t, the ood error of the fine-tuning iterates (b_ft(t), v_ft(t)) is lower bounded: √(l_ood(v_ft(t), b_ft(t))) ≥ √(σ_min(σ)) cos θ_max(r_0, s⊥) φ^2 / √k, where φ^2 = |(v_0⊤v⋆)^2 − (v⋆⊤v⋆)^2| is defined to be the initial head alignment error and ϵ ≥ d(b_0, b⋆) is the error in the pretrained feature extractor.

we follow the sketch in the main paper. we begin with a few lemmas, showing that certain quantities are preserved throughout the fine-tuning process. our first lemma says that the representations b_ft^t x do not change for examples x perpendicular to the span of the training examples. note that the final output v_ft^t⊤ b_ft^t x still changes, because v_ft^t changes.

lemma a.3. for all times t and all x ∈ s⊥, we have: b_0 x = b_ft^t x.

proof. we initialized fine-tuning with the feature extractor b_ft(0) = b_0, so it suffices to show that ∂_t b_ft^t x = 0 for all x ∈ s⊥. recall that ∂_t b_ft^t is given by the gradient flow update equation: ∂_t b_ft^t = −∂_b l(v_ft^t, b_ft^t) = −∂_b ∥xb⊤v − y∥_2^2. computing the rhs explicitly using the multivariable chain rule, we get: ∂_t b_ft^t = −2v(xb⊤v − y)⊤x. since x is a constant, we get: ∂_t b_ft^t x = −2v(xb⊤v − y)⊤xx. but xx = 0 for x ∈ s⊥, since x ∈ s⊥ is defined so that x is perpendicular to the rowspace of x (i.e., perpendicular to the rows of x). so the rhs is 0—that is, ∂_t b_ft^t x = 0, as desired.

next, we show that the change in the head and feature extractor are 'coupled'.
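the two conservation properties used here—lemma a.3's frozen representations off the training span, and the balancedness quantity treated next in lemma a.4—can be checked numerically with small-step discrete gradient descent (a sketch, not part of the paper; dimensions, step size, and step count are invented for illustration):

```python
# under gradient descent on l(v, b) = ||x b^T v - y||^2:
# (i) b @ x_perp is (up to float error) unchanged for x_perp perpendicular to
#     rowspace(x), and (ii) v v^T - b b^T drifts only by discretization error.
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 5, 8, 2                      # n < d: overparameterized, so s_perp is nonempty
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
v = rng.normal(size=k)
B = rng.normal(size=(k, d)) / np.sqrt(d)

# a direction perpendicular to rowspace(x): remove the rowspace component
x_perp = rng.normal(size=d)
x_perp -= np.linalg.pinv(X) @ (X @ x_perp)     # now X @ x_perp ≈ 0

Bx0 = B @ x_perp                               # representation of the ood direction
bal0 = np.outer(v, v) - B @ B.T                # balancedness quantity at t = 0

lr = 1e-4
for _ in range(1000):
    r = X @ B.T @ v - y                        # residuals on the training data
    v, B = v - lr * 2 * B @ X.T @ r, B - lr * 2 * np.outer(v, r @ X)

print(np.linalg.norm(B @ x_perp - Bx0))        # essentially zero (lemma a.3)
print(np.linalg.norm(np.outer(v, v) - B @ B.T - bal0))  # small discretization drift (lemma a.4)
```

with exact gradient flow both quantities are conserved exactly; the discrete updates preserve the first to machine precision and the second up to an o(lr^2)-per-step error.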
so if the head changes in a certain way, then the feature extractor cannot just stay the same. in the literature, this is sometimes called the "balancedness" lemma, and has been proved in prior work on two layer linear networks.

lemma a.4. for all t we have: v_ft^t v_ft^t⊤ − b_ft^t b_ft^t⊤ = v_ft^0 v_ft^0⊤ − b_ft^0 b_ft^0⊤.

proof. this follows by showing that the derivative is 0: ∂_t[v_ft^t v_ft^t⊤ − b_ft^t b_ft^t⊤] = 0, which can be verified by direct calculation. see theorem 2.2 in du et al. (2018) and the proof of theorem 1 in arora et al. (2018).

for our proof we will require that every feature r ∈ r can be generated from some ood direction, that is, r = b_0 u for some u ∈ s⊥. we will show that this is implied by the condition on the principal angle: cos θ_max(r, s⊥) > 0 where r = rowspace(b_0), which we assumed in theorem 3.2. the following lemma shows this (and also quantifies that the norm of u does not shrink too much when projected onto r).

lemma a.5. let r, s be subspaces of r^d with dim(r) ≤ dim(s). for all r ∈ r with ∥r∥_2 = cos θ_max(r,s), there exists s ∈ s with π_r(s) = r and ∥s∥_2 ≤ 1. here π_r ∈ r^{d×d} projects a vector onto r.

proof. let c = cos θ_max(r, s). first, we get rid of an easy case—if c = 0, then we need to show the claim for all r ∈ r with ∥r∥_2 = c = 0, which means r = 0. then we can just pick s = 0, and π_r(s) = 0 = r and ∥s∥_2 = 0 ≤ 1. so for the rest of the proof we assume c > 0. consider an arbitrary vector r ∈ r with ∥r∥_2 = c. let e ∈ r^{d×dim(r)}, f ∈ r^{d×dim(s)} have orthonormal columns, which form a basis for r and s respectively.

step 1: finding s: since the columns of e span r, r = ez for some z ∈ r^{dim(r)}. c = σ_min(e⊤f) > 0, which means that e⊤f ∈ r^{dim(r)×dim(s)} has rank dim(r) since dim(r) ≤ dim(s)—in other words, e⊤f has full row rank since its row dimension is no larger than its column dimension. so z = e⊤f w for some w ∈ rowspace(e⊤f). then we set s = fw—this means s ∈ s because the columns of f form a basis for s. in addition, following the steps above we have r = ez = e e⊤f w = ee⊤s.
we note that π_r = ee⊤ is the projection onto r (see e.g., chapter 2.5.1 of golub & loan (2013)).

step 2: bounding the norm of s: it suffices to show that ∥s∥_2 ≤ 1. since f has orthonormal columns, ∥s∥_2 = ∥fw∥_2 = ∥w∥_2, so it suffices to show that ∥w∥_2 ≤ 1. since e has orthonormal columns, ∥r∥_2 = ∥z∥_2. recall that z = e⊤f w—since w ∈ rowspace(e⊤f), from lemma a.6 we have: ∥z∥_2 ≥ σ_min(e⊤f)∥w∥_2 = c∥w∥_2. rearranging, we get ∥w∥_2 ≤ ∥z∥_2/c = 1, as desired.

in the lemma above, we used a standard linear algebraic result that we include for completeness. this says that a matrix a cannot shrink vectors in its rowspace too much, where the shrinkage factor is given by the minimum singular value of a.

lemma a.6. let a ∈ r^{m×n} and let r = min(m, n). then if x ∈ rowspace(a), we have ∥ax∥_2 ≥ σ_r(a)∥x∥_2.

proof. we bound the norm of x using the svd. consider the singular value decomposition (svd) of a: a = udv⊤ (a.22), where u ∈ r^{m×r}, d ∈ r^{r×r}, v⊤ ∈ r^{r×n}, where u and v have orthonormal columns, and d = diag(σ_1,...,σ_r) is a diagonal matrix with σ_1 ≥ ... ≥ σ_r ≥ 0. | 17 | [
108, 160.5860828, 504.0003388, 200.7998 ] |
GrpU6dxFmMN.pdf | 2,023 | 2 | improving the imputation of missing data with markov blanket discovery yang liu, anthony c. constantinou machine intelligence and decision systems (minds) research group queen mary university of london {yangliu, a.constantinou}@qmul.ac.uk abstract the process of imputation of missing data typically relies on generative and regression models. these approaches often operate on the unrealistic assumption that all of the data features are directly related with one another, and use all of the available features to impute missing values. in this paper, we propose a novel markov blanket discovery approach to determine the optimal feature set for a given variable by considering both observed variables and missingness of partially observed variables to account for systematic missingness. we then incorporate this method into the learning process of the state-of-the-art missforest imputation algorithm, such that it informs missforest which features to consider to impute missing values, depending on the variable the missing value belongs to. experiments across different case studies and multiple imputation algorithms show that the proposed solution improves imputation accuracy, both under random and systematic missingness. introduction dealing with missing data values represents a common practice across different scientific domains, especially in clinical (little et al., 2012; austin et al., 2021), genomics (petrazzini et al., 2021) and ecological studies (alsaber et al., 2021; zhang & thorburn, 2022). it represents a problem that can be difficult to address accurately, and this is because missingness can be caused by various known and unknown factors, including machine fault, privacy restriction, data corruption, inconsistencies in the way data are recorded, as well as purely due to human error.
rubin (1976) categorised the problem of missing data into three classes known as missing completely at random (mcar), missing at random (mar) and missing not at random (mnar). we say data is mcar when the missingness is purely random, i.e., the missing mechanism is independent of both the observed and unobserved values. on the other hand, data is mar when the missingness is dependent on the observed values but independent of the unobserved values given the observed values; implying that mar data can be effectively imputed by relying on observed data alone. lastly, data is said to be mnar when it is neither mcar nor mar and hence, missingness is dependent on both the observed and unobserved values. while it is tempting to simply remove data rows that contain empty data cells, a process often referred to as list-wise deletion or complete case analysis, past studies have shown that such an approach is ineffective since it tends to lead to poorly trained models (wilkinson, 1999; baraldi & enders, 2010). on this basis, the problem of missingness is typically handled by imputation approaches which estimate the missing values, often using regression or generative models, and return a complete data set. the imputation algorithms are often classified as either statistical or machine learning methods (lin & tsai, 2020). statistical imputation methods include mean/mode, which is one of the simplest methods where the imputation is derived by the mean or mode of the observed values found in the same data column. a more advanced statistical method is the expectation-maximization (em) algorithm (honaker et al., 2011). em computes the expectation of sufficient statistics given the observed data at the e-step (expectation), and then maximizes likelihood at the m-step (maximization). it iterates over these two steps until convergence, at which point the converged parameters are used along with the observed data to impute missing values. 
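the three mechanisms can be illustrated on synthetic data (a sketch, not from the paper; the columns, rates, and thresholds below are invented for illustration):

```python
# rubin's taxonomy on two synthetic columns: the missingness mask for mcar
# depends on nothing, for mar only on an always-observed column (age), and
# for mnar on the very value that goes missing (income).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
age = rng.normal(40, 10, n)                        # always observed
income = 30 + 0.5 * age + rng.normal(0, 5, n)      # column that will go missing

mcar = rng.random(n) < 0.3                                  # independent of everything
mar = rng.random(n) < np.where(age > 45, 0.8, 0.05)         # depends on observed age only
mnar = rng.random(n) < np.where(income > 55, 0.8, 0.05)     # depends on the missing value itself

for name, mask in [("mcar", mcar), ("mar", mar), ("mnar", mnar)]:
    print(name, round(income[~mask].mean(), 2))
# mcar leaves the complete-case mean unbiased; mar and mnar both shift it,
# but under mar the bias is recoverable from the observed age column alone.
```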
another statistical algorithm is the one proposed by hastie et al. (2015), called softimpute, which treats imputation as a matrix completion problem and solves it by finding a rank-restricted singular value decomposition. multiple imputation is another popular statistical method for handling missing data, and considers the uncertainty of missing values. some classic multiple-imputation algorithms include the multivariate normal imputation (mvni) (lee & carlin, 2010), multiple imputation by chained equations (mice) (van buuren & groothuis-oudshoorn, 2011), and extreme learning machine (elm) (sovilj et al., 2016). on the other hand, one of the earliest imputation methods from the machine learning (ml) field is the k-nearest neighbour (k-nn) (zhang, 2012), which imputes empty cells according to their k-nearest observed data points. a well-established ml imputation algorithm is missforest (stekhoven & bühlmann, 2012), which trains a random forest (rf) regression model recursively given the observed data, for every variable containing missing values, and uses the trained rf model to impute missing values. recently, deep generative networks have also been used for imputing missing data values. yoon et al. (2018) proposed the generative adversarial imputation nets (gain) algorithm, which trains the generator to impute missing data and the discriminator to distinguish original data from imputed data, and was shown to have higher imputation accuracy compared to previous approaches. other ml techniques used for imputation include optimal transport (muzellec et al., 2020), a neural network with causal regularizer (kyono et al., 2021), and automatic model selection (jarrett et al., 2022). all of the aforementioned algorithms assume that all the variables in the data correlate with each other, and use all the variables to impute the missing values.
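the missforest-style iterative loop can be sketched in a few lines. to keep the example dependency-free it uses a linear least-squares regressor in place of a random forest, a fixed number of sweeps in place of missforest's difference-based stopping criterion, and no ordering of variables by missingness; these are all simplifications of the actual algorithm:

```python
import numpy as np

def iterative_impute(x, n_iter=5):
    """missforest-style loop with a linear stand-in regressor.

    x: (n, d) array with np.nan marking missing cells.
    """
    x = x.astype(float).copy()
    miss = np.isnan(x)
    # 1) initial fill: column means of the observed values
    col_means = np.nanmean(x, axis=0)
    x[miss] = np.take(col_means, np.where(miss)[1])
    # 2) refine each incomplete column from all the others
    for _ in range(n_iter):
        for j in range(x.shape[1]):
            if not miss[:, j].any():
                continue
            obs = ~miss[:, j]
            others = np.delete(x, j, axis=1)
            a = np.column_stack([others[obs], np.ones(obs.sum())])
            coef, *_ = np.linalg.lstsq(a, x[obs, j], rcond=None)
            x[~obs, j] = np.column_stack([others[~obs], np.ones((~obs).sum())]) @ coef
    return x
```

on strongly correlated data the regression refinement beats the plain mean fill it starts from, which is the intuition behind the recursive step in missforest.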
considering all of the data variables increases the risk of over-fitting, a risk which can be minimised through the l1 and l2 regularization methods often employed by ml algorithms. however, regularization leads to models that tend to lack interpretability and theoretical guarantees of correctness. because this paper focuses on interpretable models, such as those produced by structure learning algorithms, we shall focus on causal feature selection, which maintains interpretability, rather than regularization. this is also partly motivated by dzulkalnine & sallehuddin (2019), who showed that using uncorrelated variables to impute missing values not only decreases learning efficiency, but also degrades imputation accuracy. on this basis, it has recently been suggested to include a feature selection phase that prunes off potentially unrelated variables, for each variable containing missing values, prior to imputation (bu et al., 2016; liu et al., 2020; hieu nguyen et al., 2021). relevant studies that focus on feature selection for imputation include the work by doquire & verleysen (2012), who used mutual information (mi) to measure the dependency between variables. they used a greedy forward search procedure to construct the feature subset, an iterative process that constructs feature sets that maximise mi with the dependent variable. sefidian & daneshpour (2019) also estimate the dependency between variables using mi, and select, as the features of a given dependent variable, the set of variables that increase mi above a given threshold. on the other hand, the algorithm proposed by dzulkalnine & sallehuddin (2019) applies a fuzzy principal component analysis (pca) approach to the complete data cases to remove irrelevant variables from the feature set, followed by an svm classification feature selection task that returns the set of features that maximise accuracy on the dependent variable.
lastly, evolutionary optimisation algorithms have also been adopted for feature selection in imputation, and include differential evolution (tran et al., 2018), genetic algorithms (awawdeh et al., 2022), and particle swarm optimisation (jin et al., 2022). recently, causal information has also been adopted for feature selection in missing data imputation. kyono et al. (2021) proposed to impute missing values of a variable given its causal parents, derived from the weights of the input layer in the neural network. similarly, yu et al. (2022) proposed the mimmb framework that learns markov blankets (mbs) to be used for feature selection in imputation, an iterative process that learns mbs from the imputed data and updates the learned mb after each iteration. note that while mimmb is related to our work, since we also use mb construction for feature selection, an important distinction between the two is that mimmb combines mbs with imputed data whereas, as we later describe in section 3, the learning phase of mbs that we propose is separated from imputation, accounts for partially observed variables, and improves computational efficiency. in this paper, we use the graphical expression of missingness proposed by mohan et al. (2013), known as the m-graph, which is a graph that captures observed variables in conjunction with the possible causes of missingness as parents of the partially observed variables. we first show that the original version of the grow and shrink (gs) algorithm by margaritis (2003) is capable of discovering the mbs in m-graphs containing partially observed variables, when applied to test-wise deleted data. because this approach relies on ci tests with large conditioning sets, we modify gs such that the number of conditioning sets considered for ci tests is reduced. we provide proof that the modified gs is capable of discovering the mbs of partially observed variables in m-graphs, under the same assumptions as the original gs.
we then propose a new imputation algorithm, which we call markov-blanket miss-forest (mbmf), that combines the modified gs with the state-of-the-art missforest (mf) imputation algorithm. we evaluate the effectiveness of mbmf on both synthetic and real-world data sets. the empirical experiments show that mbmf outperforms mf, and other relevant state-of-the-art imputation algorithms, in most experiments. preliminaries bayesian network a bayesian network ⟨g, p⟩ is a probabilistic graphical model represented by a directed acyclic graph (dag) g = (v, e) and associated probability distributions p over v. in a dag g, a path is a sequence of distinct nodes such that every pair of consecutive nodes is adjacent in g. a node vi is called a collider on a path p if both of its neighbouring nodes on p are parents of vi in g. we denote a node vj as a descendant of vi if there is a path from vi to vj such that all arrowheads of the edges on that path point from vi towards vj. a key concept of dags is d-separation, which defines the conditional independence (ci) between variables in a dag. d-separation: two variables x and y are d-separated conditional on a variable set z if every path between x and y has a node w that satisfies one of the following two conditions: i) w is not a collider and w ∈ z, or ii) w is a collider and neither w nor any of its descendants is in z (pearl, 1988). conditional independence entailed by a given dag via d-separation is not always equivalent to the conditional independence of the corresponding probability distribution. however, we assume the markov and faithfulness conditions described below, under which the dag and the corresponding distribution express the same set of conditional independencies. markov condition: given a dag g over v, every variable in v is independent of its nondescendants conditional on its parents.
faithfulness condition: given a dag g over v, a probability distribution p is faithful to g if and only if the conditional independence relationships in p are exactly the same as the independences entailed by d-separation in g. given the faithfulness condition, a variable is conditionally independent of all the other variables given its mb, which contains all its parents, children and parents of its children. we denote the mb of a variable vi as mb (vi). missingness mechanism as an m-graph we adopt the graphical representation of the mechanism of missing data known as the m-graph, proposed by mohan et al. (2013), which makes a slightly stronger assumption on the mar and mnar scenarios compared to the definition proposed by rubin (1976). given a missing data set, the variables v can be partitioned into fully observed variables v o and partially observed variables v m. in an m-graph, there is an auxiliary indicator variable ri for each partially observed variable vi ∈ v m that specifies the missingness of vi, such that ri = 1 when vi is missing and ri = 0 when vi is observed. we have mcar if r ⊥⊥ v o ∪ v m, mar if r ⊥⊥ v m | v o, and mnar otherwise. we denote by rs the indicators over a set of variables s, i.e., rs = ∪vi∈s {ri | vi ∈ v m}. we also make assumptions 1 and 2 for the indicator variables, described below and based on mohan et al. (2013). assumption 1. no missing indicator variable ri can be a parent of observed variables or other indicator variables, i.e., ri can only be a leaf node in the m-graph. assumption 2. in an m-graph, no edge can exist from a partially observed variable vi to its corresponding indicator variable ri.

figure 1: m-graphs under (a) mcar, (b) mar and (c) mnar conditions, respectively. shaded nodes represent partially observed variables.

given assumption 1, if two variables vi and vj are d-separated by a variable set s, they are still d-separated given s ∪ r.
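under faithfulness, the mb of a node in a dag is therefore just its parents, its children, and the co-parents (spouses) of its children, which is easy to compute directly from the parent sets. a small sketch (the dag below is a hypothetical example, not figure 1 of the paper):

```python
def markov_blanket(parents, v):
    """mb(v) = parents ∪ children ∪ spouses (co-parents of v's children).

    `parents` maps each node to the set of its parents in the dag.
    """
    pa = set(parents.get(v, set()))
    ch = {u for u, ps in parents.items() if v in ps}
    sp = {p for c in ch for p in parents.get(c, set())} - {v}
    return pa | ch | sp

# hypothetical dag: a -> c <- b, c -> d
parents = {"a": set(), "b": set(), "c": {"a", "b"}, "d": {"c"}}
markov_blanket(parents, "a")   # -> {'b', 'c'}: child c, and b as co-parent of c
```

note that b enters mb(a) purely as a spouse: a and b are marginally independent but become dependent once the common child c is conditioned on, which is why co-parents must be in the blanket.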
given assumption 2, we exclude the possibility of a causal relation between vi and ri in order to avoid performing a ci test between vi and ri. figure 1 presents three m-graph examples under the different mechanisms of missingness. markov blanket based feature selection for imputation given the description of the m-graph and the causal faithfulness assumption, the problem of feature selection under incomplete data can be converted into a mb discovery problem over m-graphs that contain partially observed variables and missing indicators¹. because of the possible causal links between partially observed variables v m and indicator variables r (i.e., in the case of mnar), the mb of a partially observed variable is likely to contain both observed and indicator variables. for example, in figure 1c, the mb of v1 is {v2, v3, v4, v5, v6, r3}. we show that the grow and shrink (gs) algorithm with test-wise deletion is capable of discovering the m-graph mb of any variable from incomplete data. here, unlike list-wise deletion, which removes all data rows containing at least one missing value, we use test-wise deletion, which removes only the data cases containing missing values in any of the variables involved in the current ci test. the pseudo-code of gs (margaritis, 2003) is presented in algorithm 1. note that we slightly modify the grow phase of gs, i.e., lines 4-6, to eliminate its dependency on the order of the variables in the data (kitson & constantinou, 2022). given the faithfulness condition, assumption 1 and assumption 2, proposition 1 describes the correctness of gs with test-wise deletion. proposition 1. given the faithfulness condition and assumptions 1 and 2, for any observed variable vi in an m-graph g, the output of gs(vi, v o ∪ v m ∪ r − {vi, ri}, d) is the mb (vi) in g. the proof of proposition 1 is provided in appendix a.
therefore, an intuitive way to determine the relevant features for a given variable is to apply the function gs(vi, v o ∪ v m ∪ r − {vi, ri}, d) to every vi ∈ v m. however, this is impractical since the maximum size of the conditioning sets used for ci testing is |v o| + 2|v m| − 3, and in practice the accuracy of ci tests drops dramatically as the size of the conditioning set increases (tsamardinos et al., 2003). to address this, we propose markov blanket feature selection (mbfs, algorithm 2), which aims to restrict the maximum size of the conditioning set used by ci tests to |v o| + |v m| − 1. mbfs involves two phases, the first of which learns the intrinsic mb of each partially observed variable. given an m-graph g, we define the intrinsic mb of a variable vi as the set of variables that are still in the mb of vi after removing all indicator variables from g. we denote the intrinsic mb of vi by imb (vi). note that imb (vi) is not necessarily equivalent to the set of observed variables in mb (vi). this is because the missing indicators might be a common effect of two observed variables. for example, the intrinsic mb of v1 in figure 1c is {v2, v3, v4, v5},

¹ to impute the missing values of an incomplete variable, we consider its mb, rather than only its parent variables, for two reasons. firstly, the parents of an incomplete (or even a complete) variable are not guaranteed to be identifiable from observational data. secondly, the mb contains the set of nodes that can make the given variable independent of all other variables present in the input data.
whereas the standard mb would have also included v6 and r3. it is worth noting that, during phase 1, some nodes that do not belong in imb (vi) may still be included in the output cmb. however, as we show in appendix b, these nodes are still in mb (vi) in the m-graph. phase 2 aims to learn all the parents of the missing indicators, in order to complete the feature set of mb (vi). proposition 2 states that mbfs is capable of learning mb (vi) from missing data for any vi ∈ v m in an m-graph and thus, it can serve as an effective feature selection approach for imputation algorithms.

algorithm 1: the grow and shrink (gs) algorithm with test-wise deletion
input: target variable x, candidate variable set s, data d
output: candidate markov blanket cmb of x
1: cmb ← ∅
2: repeat ▷ grow phase
3:   if any x ̸⊥⊥ si | cmb, r{x,si}∪cmb = 0 for si ∈ s then
4:     add the si with the lowest p-value to cmb
5:     remove si from s
6:   end if
7: until cmb stays unchanged
8: for each y ∈ cmb do ▷ shrink phase
9:   if x ⊥⊥ y | cmb − {y}, r{x}∪cmb = 0 then
10:    remove y from cmb
11:  end if
12: end for

algorithm 2: markov blanket-based feature selection (mbfs)
input: partially observed variable vi, data d
output: candidate markov blanket cmb of vi
1: procedure mbfs(vi, d)
2:   cmb ← gs(vi, v o ∪ v m − {vi}, d) ▷ phase 1 (discover intrinsic mb)
3:   for each rj ∈ r − {ri} do ▷ phase 2 (discover other variables in mb caused by indicators)
4:     cps ← v o ∪ v m − {vj}
5:     for each vk ∈ cps do
6:       if rj ⊥⊥ vk | s, r{vk}∪s = 0 for any s ⊆ cps then
7:         remove vk from cps
8:       end if
9:     end for
10:    if vi ∈ cps then
11:      cmb ← cmb ∪ {rj} ∪ cps
12:    end if
13:  end for
14: end procedure

proposition 2. given the faithfulness condition and assumptions 1 and 2, for any observed variable vi in an m-graph g, mbfs(vi, d) returns mb(vi) in g. the proof of proposition 2 is provided in appendix b. appendix c discusses the implications on the learning performance of mbfs when assumptions 1 and 2 are violated. we then propose a modified version of missforest that incorporates mbfs as a feature selection process.
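the gs procedure can be sketched in python with the ci test abstracted as an oracle `indep(a, b, z)` (assumed to already operate on test-wise deleted data). one simplification versus the actual algorithm: the grow phase here adds any dependent variable, rather than the one with the lowest p-value:

```python
def grow_shrink(x, candidates, indep):
    """gs sketch. indep(a, b, z) returns True when a ⊥⊥ b | z."""
    cmb, pool = set(), set(candidates)
    changed = True
    while changed:                         # grow phase
        changed = False
        for s in sorted(pool):
            if not indep(x, s, frozenset(cmb)):
                cmb.add(s)
                pool.discard(s)
                changed = True
    for y in sorted(cmb):                  # shrink phase
        if indep(x, y, frozenset(cmb - {y})):
            cmb.discard(y)
    return cmb

# ci oracle for the chain a -> b -> c: a and c become independent given b
def chain_indep(a, b, z):
    pair = {a, b}
    if pair in ({"a", "b"}, {"b", "c"}):
        return False
    if pair == {"a", "c"}:
        return "b" in z
    return True
```

on the chain example the grow phase may temporarily admit false members, and the shrink phase removes any variable rendered independent by the rest of the blanket, leaving mb(a) = {b}.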
the modified version of missforest, which we call markov blanket missforest (mbmf), takes the feature set mbfs(vi, d) for each partially observed variable vi, as opposed to considering all of the other observed variables as the explanatory features of vi in the random forest regression model used in missforest. in other words, mbmf accounts for the possible causal relationships between partially observed variables and the missing indicators, to minimise the risk of missforest considering irrelevant observed variables for imputation. experiments we test the proposed mbmf algorithm with reference to the standard version of missforest (mf), the commonly used mean and mode imputation algorithms, the k-nearest neighbour (knn), and two state-of-the-art algorithms: softimpute and gain. while the evaluation includes experiments on both continuous and categorical data, some of the other algorithms can only process one of the two types of input data and hence, their application is restricted to continuous data (mean and gain) or categorical data (mode). we use the scikit-learn python package (pedregosa et al., 2011) to test the mean, mode and knn algorithms, the missforest r package (stekhoven & stekhoven, 2013) to test mf, the softimpute r package (hastie et al., 2015) to test softimpute, and the publicly available source code of gain. the implementation of mbmf, described in this paper, is available at: https://github.com/enderlogic/markov-blanket-based-feature-selection. mbmf is applied to continuous data using pearson’s correlation test for ci tests, and to categorical data using the g-test statistic, both of which are the default choices for gs. we also use the default threshold for independence, which is a p-value of 0.1 for the ci tests. the other algorithms are also tested with their default hyper-parameters as implemented in their corresponding packages listed above.
synthetic case studies based on real-world bns we first evaluate the algorithms by applying them to synthetic data sampled from three bns, ecoli70, magic-irri and arth150, taken from the bnlearn repository (scutari, 2010). details about these graphical networks can be found in appendix d. we generate complete data sets for each network with sample sizes 500, 1000, 2000 and 3000. then, for each complete data set, we create nine incomplete data sets composed of different combinations of missingness rates (i.e., 10%, 30% and 50%) and missingness assumptions (i.e., mcar, mar and mnar). appendix e describes the process we followed to generate data sets with different types of missingness. evaluation process the imputation accuracy is evaluated using two different approaches. the first approach involves retrieving the root mean squared error (rmse) between imputed data and complete data. because rmse is sensitive to the discrepancy between absolute data values, we normalise the complete data column-wise and re-scale the imputed data with the same normalisation parameters to eliminate bias. the second approach involves assessing the impact of imputation on structural learning accuracy. we do this by comparing the completed partially directed acyclic graphs (cpdags) learned by the state-of-the-art ges causal structure learning algorithm (chickering, 2002) from the imputed data sets produced by the different imputation algorithms. the second approach is helpful because, while it is reasonable to assume that higher imputation accuracy helps causal machine learning, it is possible that some imputed values are more important than others. causal structure learning represents a good approach to test this, and we use the f1 score to measure the accuracy of graphical structures learned by ges, as follows:

f1 = 2tp / (2tp + fp + fn)

where tp, fp and fn represent the number of true positive, false positive and false negative edges in the learned cpdag, relative to the true cpdag.
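the score can be computed directly from the edge sets of the learned and true graphs (a sketch that treats edges as plain tuples; the handling of directed versus undirected cpdag edges is omitted here):

```python
def f1_edges(learned, true):
    """f1 = 2tp / (2tp + fp + fn) over two edge sets."""
    learned, true = set(learned), set(true)
    tp = len(learned & true)   # edges correctly recovered
    fp = len(learned - true)   # spurious edges
    fn = len(true - learned)   # missed edges
    return 2 * tp / (2 * tp + fp + fn)

f1_edges({("a", "b"), ("b", "c")}, {("a", "b"), ("c", "d")})  # -> 0.5
```

the example has one true positive, one false positive and one false negative, giving 2/(2+1+1) = 0.5.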
for more information on how to retrieve the cpdag of a dag, please refer to (chickering, 2002). readers are also referred to (kitson et al., 2023) for a review of structure learning.

figure 2: average rmse between complete and imputed data produced by the different algorithms. a lower score represents better performance.

results figure 2 depicts the average rmse of imputed data produced by the different algorithms under different sample sizes. note that the results we report on gain are, to some degree, inconsistent with the results presented in the original paper (yoon et al., 2018), but are consistent with the results presented in follow-up studies (you et al., 2020; nazabal et al., 2020; kyono et al., 2021; jarrett et al., 2022). in general, the proposed mbmf algorithm is found to outperform the baseline mf under all scenarios of missingness. specifically, for the mcar and mar scenarios with missing rate 10%, mbmf provides a considerable improvement over mf, but this improvement diminishes with the higher missing rates of 30% and 50%. this is because a higher missing rate tends to decrease the sample size of the test-wise deleted data, which in turn reduces the accuracy of the ci tests and of the mb set discovered by mbfs. importantly, mbmf outperforms mf considerably under all mnar settings, which better reflect real-world missingness that is generally systematic. none of the other imputation algorithms provide satisfactory performance in terms of rmse, at least relative to the mbmf and mf algorithms. figure 3 presents the average f1 scores corresponding to the graphs learned by ges, given the imputed data sets produced by the different imputation algorithms, and across the different sample sizes.
while this evaluation approach decreases the discrepancy in performance scores between the top performing imputation algorithms, the results are consistent with those presented in figure 2, since mbmf and mf are found to perform better than the other algorithms in almost all cases, and mbmf performs better than mf in most experiments. specifically, mbmf and mf produce similar performance when the rate of missingness is lowest at 10%, with their performance being close to that produced with complete data (dashed line). these results serve as empirical evidence that both the mbmf and mf imputation algorithms perform exceptionally well with relatively low rates of missingness. when the rate of missingness increases to 30%, mbmf performs better than mf in most cases. however, when the rate of missingness is highest, at 50%, there is no clear winner between mbmf and mf. lastly, we evaluate the computational efficiency of mbmf relative to the original mf. as shown in figure 4, mbmf is generally more efficient than mf. note that while mbmf involves an additional phase needed to perform feature selection, the additional time spent by mbmf in that phase is countered by the reduced time mbmf spends to actually impute values. this is because mbmf will almost always consider fewer features than mf during the imputation phase. specifically, mbmf is slower than mf at the lowest sample size, but becomes increasingly faster than mf with increasing sample size. averaging the results across the different mechanisms of missingness and missing rates also shows that mbmf is, in general, more efficient than mf.

figure 3: average f1 scores of the graphs learned by ges from data imputed by the different algorithms. a higher f1 score represents better performance. the dashed line represents the performance of ges when applied to complete data.

figure 4: average execution time of mf and mbmf under different sample sizes, mechanisms of missingness, and rates of missingness.
note that mf is slightly more efficient when the rate of missingness is at its highest, 50%, since mf trains its rf regression model on observed data rows only; implying that a higher rate of missingness decreases the training data passed to the rf. the impact of the rate of missingness is higher on mf than mbmf since the number of independent features considered by mf is, in general, considerably higher than those considered by mbmf. real-world case study | 7 | [
108.249, 102.6010784, 247.0935817, 112.5636784 ] |
v8JIQdiN9Sh.pdf | 2,023 | 2 | on the effectiveness of out-of-distribution data in self-supervised long-tail learning jianhong bai1∗, zuozhu liu1∗, hualiang wang2, jin hao3, yang feng4, huanpeng chu1, haoji hu1† 1zhejiang university, 2the hong kong university of science and technology, 3harvard university, 4angelalign technology abstract though self-supervised learning (ssl) has been widely studied as a promising technique for representation learning, it doesn’t generalize well on long-tailed datasets due to the majority classes dominating the feature space. recent work shows that the long-tailed learning performance could be boosted by sampling extra in-domain (id) data for self-supervised training, however, large-scale id data which can rebalance the minority classes are expensive to collect. in this paper, we propose an alternative but easy-to-use and effective solution, contrastive with out-of-distribution (ood) data for long-tail learning (colt), which can effectively exploit ood data to dynamically re-balance the feature space. we empirically identify the counter-intuitive usefulness of ood samples in ssl long-tailed learning and principally design a novel ssl method. concretely, we first localize the ‘head’ and ‘tail’ samples by assigning a tailness score to each ood sample based on its neighborhoods in the feature space. then, we propose an online ood sampling strategy to dynamically re-balance the feature space. finally, we enforce the model to be capable of distinguishing id and ood samples by a distribution-level supervised contrastive loss. extensive experiments are conducted on various datasets and several state-of-the-art ssl frameworks to verify the effectiveness of the proposed method. the results show that our method significantly improves the performance of ssl on long-tailed datasets by a large margin, and even outperforms previous work which uses external id data. our code is available at https://github.com/jianhongbai/colt.
introduction self-supervised learning (ssl) methods (chen et al., 2020; he et al., 2020; grill et al., 2020) provide distinctive and transferable representations in an unsupervised manner. however, most ssl methods are performed on well-curated and balanced datasets (e.g., imagenet), while many real-world datasets in practical applications, such as medical imaging and self-driving cars, usually follow a long-tailed distribution (spain & perona, 2007). recent research (liu et al., 2021) indicates that existing ssl methods exhibit severe performance degradation when exposed to imbalanced datasets. to enhance the robustness of ssl methods under long-tailed data, several pioneering methods (jiang et al., 2021b; zhou et al., 2022) are proposed for a feasible migration of cost-sensitive learning, which is widely studied in supervised long-tail learning (elkan, 2001; sun et al., 2007; cui et al., 2019b; wang et al., 2022). the high-level intuition of these methods is to re-balance classes by adjusting loss values for different classes, i.e., forcing the model to pay more attention to tail samples. another promising line of work explores the probability of improving the ssl methods with external data. (jiang et al., 2021a) suggests re-balancing the class distributions by sampling external in-distribution (id) tail instances in the wild. nevertheless, they still require available id samples in the sampling pool, which is hard to collect in many real-world scenarios, e.g., medical image diagnosis (ju et al., 2021) or species classification (miao et al., 2021).

∗equal contribution. †corresponding author.

figure 1: (a) feature space uniformity of different ssl frameworks. (b) visualization of the alignment property of samples in minority classes and majority classes w/ or w/o colt. the experiment is conducted with resnet-18 on cifar-100-lt.
the aforementioned findings and challenges motivate us to investigate another more practical and challenging setting: when id data is not available, can we leverage out-of-distribution (ood) data to improve the performance of ssl in long-tailed learning? compared to mak (jiang et al., 2021a), which assumes external id samples are available, we consider a more practical scenario where we only have access to ood data that can be easily collected (e.g., downloaded from the internet). a very recent work (wei et al., 2022) proposes to re-balance the class priors by assigning labels to ood images following a pre-defined distribution. however, it is performed in a supervised manner and is not directly applicable to ssl frameworks. in this paper, we propose a novel and principled method to exploit unlabeled ood data to improve ssl performance on long-tailed learning. as suggested in previous research, standard contrastive learning naturally puts more weight on the loss of majority classes and less weight on that of minority classes, resulting in imbalanced feature spaces and poor linear separability on tail samples (kang et al., 2020; li et al., 2022). however, rebalancing minorities with id samples, no matter labeled or unlabeled, is quite expensive. to alleviate these issues, we devise a framework, contrastive learning with ood data for long-tailed learning (colt), to dynamically augment the minorities with unlabeled ood samples which are close to tail classes in the feature space. as illustrated in fig. 1, our colt can significantly improve ssl baselines in terms of alignment and uniformity (wang & isola, 2020), two widely-used metrics to evaluate the performance of contrastive learning methods, demonstrating the effectiveness of our method. the pipeline of our method is illustrated in fig. 2. to augment the long-tail id dataset, we define a tailness score to localize the head and tail samples in an unsupervised manner.
afterward, we design an online sampling strategy to dynamically re-balance the long-tail distribution by selecting ood samples close (with a large cosine similarity in the feature space) to the head or tail classes based on a pre-defined budget allocation function. we follow the intuition to allocate more ood samples to the tail classes for rebalancing. those selected ood samples are augmented with the id dataset for contrastive training, where an additional distribution-level supervised contrastive loss makes the model aware of the samples from different distributions. experimental results on four long-tail datasets demonstrate that colt can greatly improve the performance of various ssl methods and even surpass the state-of-the-art baselines that use auxiliary id data. we also conduct comprehensive analyses to understand the effectiveness of colt. our contributions can be summarized as:
• we raise the question of whether we can, and how to, improve ssl on long-tailed datasets effectively with external unlabeled ood data, which is better aligned with practical scenarios but counter-intuitive to most existing work and rarely investigated before.
• we design a novel yet easy-to-use ssl method, composed of tailness score estimation, dynamic sampling strategies, and additional contrastive losses for long-tail learning with external ood samples, to alleviate the imbalance issues during contrastive learning.
• we conduct extensive experiments on various datasets and ssl frameworks to verify and understand the effectiveness of the proposed method. our method consistently outperforms baselines by a large margin, with consistent agreement between the superior performance and various feature quality evaluation metrics of contrastive learning.
related works supervised learning with imbalanced datasets early attempts aim to highlight the minority samples by a re-balancing strategy.
these methods fall into two categories: re-sampling at the data level (shen et al., 2016; zou et al., 2018; geifman & el-yaniv, 2017), or re-weighting at the loss (gradient) level (cao et al., 2019; jamal et al., 2020). due to their usage of label-related information, the above methods cannot be generalized to the unsupervised setting. (kang et al., 2019) suggests that the scheme of decoupling the learning of representations and classifiers benefits long-tail learning. the feasibility of two-stage training promotes the exploration in unsupervised scenarios. self-supervised long tail learning (yang & xu, 2020) is, to the best of our knowledge, the first to analyze the performance of ssl methods in long-tail learning and verify the effectiveness of self-supervised pretraining theoretically and experimentally. however, (liu et al., 2021) shows that ssl methods, although more robust than the supervised methods, are not immune to imbalanced datasets. follow-up studies improve the ability of ssl methods on long-tailed datasets. motivated by the observation that deep neural networks easily forget hard samples after pruning (hooker et al., 2019), (jiang et al., 2021b) proposed a self-competitor to pay more attention to the hard (tail) samples. bcl (zhou et al., 2022) incorporated the memorization effect of deep neural networks (zhang et al., 2021b) into contrastive learning, i.e., they emphasize samples from the tail by assigning more powerful augmentations based on the memorization clue. we show that our method does not conflict with existing methods and can further improve balancedness and accuracy (section 4.2).
learning with auxiliary data auxiliary data is widely used in deep learning for different purposes, e.g., improving model robustness (lee et al., 2020), combating label noise (wei et al., 2021), ood detection (liang et al., 2018; hendrycks et al., 2018a), domain generalization (li et al., 2021; liu et al., 2020; long et al., 2015), neural network compression (fang et al., 2021), and training large models (alayrac et al., 2022; brown et al., 2020). in long-tail learning, mak (jiang et al., 2021a) suggests tackling the dataset imbalance problem by sampling in-distribution tail classes' data from an open-world sampling pool. on the contrary, we explore the possibility of helping long-tail learning with ood samples, i.e., none of the id samples are included in the sampling pool. opensampling (wei et al., 2022) utilizes ood samples by assigning a label to each sample following a pre-defined label distribution; their work is performed under supervised scenarios, and the ood data is not filtered, which results in a massive computation overhead.
method
preliminaries unsupervised visual representation learning methods aim to find an optimal embedding function f, which projects an input image x ∈ R^{c×h×w} to the feature space z ∈ R^d with z = f(x), such that z retains the discriminative semantic information of the input image. simclr (chen et al., 2020) is one of the state-of-the-art unsupervised learning frameworks, and its training objective is defined as:

ℓ_cl = − log [ exp(z_i · z_i^+ / τ) / ( exp(z_i · z_i^+ / τ) + Σ_{z_i^− ∈ Z^−} exp(z_i · z_i^− / τ) ) ],   (1)

where (z_i, z_i^+) is the positive pair of instance i, z_i^− indicates a negative sample from the negative set Z^−, and τ is the temperature hyper-parameter. in practice, a batch of images is augmented twice with different augmentations; the positive pair is formed by the two views of the same image, and the negative samples are the views of the other images.
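the simclr objective above can be sketched numerically for a single anchor; the embeddings below are random toy vectors and the helper names are illustrative, not part of the paper:

```python
import numpy as np

def simclr_loss(z_i, z_pos, z_negs, tau=0.5):
    """InfoNCE loss for one anchor:
    -log exp(z_i.z+/tau) / (exp(z_i.z+/tau) + sum_neg exp(z_i.z-/tau)).
    Embeddings are assumed L2-normalized, as in SimCLR."""
    pos = np.exp(np.dot(z_i, z_pos) / tau)
    neg = np.exp(z_negs @ z_i / tau).sum()
    return -np.log(pos / (pos + neg))

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
anchor = unit(rng.normal(size=8))
positive = unit(anchor + 0.05 * rng.normal(size=8))        # a nearby "view"
negatives = np.stack([unit(rng.normal(size=8)) for _ in range(6)])
loss_close = simclr_loss(anchor, positive, negatives)      # good positive
loss_far = simclr_loss(anchor, unit(-anchor), negatives)   # adversarial "positive"
```

as expected, the loss is small when the two views agree and large when the "positive" points away from the anchor.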
localize tail samples in self-supervised training due to the label-agnostic assumption in the pre-training stage, the first step of the proposed method is to localize tail samples. as mentioned earlier, the majority classes dominate the feature space, and tail instances turn out to be outliers. moreover, the minority classes have lower intra-class consistency (li et al., 2022). hence, a sparse neighborhood can be a reliable proxy for identifying tail samples (more analysis can be found in section 4.4).
figure 2: overview of contrastive learning with out-of-distribution data for long-tail learning (colt). colt can be easily plugged into most ssl frameworks. proposed components are denoted in red.
specifically, we use the top-k% (k = 2 in practice) largest negative logits of each sample to depict the feature-space neighborhood during training. given a training sample x_i, its negative logit p_i^− is the following:

p_i^− = exp(z_i · z_i^− / τ) / ( exp(z_i · z_i^+ / τ) + Σ_{z_i^− ∈ Z^−} exp(z_i · z_i^− / τ) ).   (2)

considering an implementation of simclr (chen et al., 2020) with batch size b, each image has 2(b − 1) negative samples. then, we define s_i^t = − Σ_{top-k%} p_i^− as the tailness score for each id instance x_i. during training, we perform a momentum update of the tailness score, i.e., s_{i,n}^t = m s_{i,n−1}^t + (1 − m) s_{i,n}^t, where m ∈ [0, 1) is the momentum coefficient. the momentum update makes the tailness score more robust and discriminative for the tail samples. a higher value of s_i^t indicates that sample x_i has a sparser neighborhood in the feature space and implies that it belongs to the tail classes with a larger probability. experiments in fig 3e empirically demonstrate that tail samples can be effectively discovered by our proposed tailness score.
dynamically re-balance the feature space with online sampling the core of our approach is to sample ood images from the sampling pool S_ood and further rebalance the original long-tail id dataset and the feature space.
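the tailness score above can be sketched as follows; the input arrays are hypothetical stand-ins for the per-sample negative logits, and the function names are illustrative:

```python
import numpy as np

def tailness_score(neg_logits, k_frac=0.02):
    """Tailness score: minus the sum of the top-k% largest negative logits
    p_i^-. A sparse neighborhood gives small logits, hence a higher score."""
    k = max(1, int(round(k_frac * neg_logits.size)))
    return -np.sort(neg_logits)[-k:].sum()

def momentum_update(prev, new, m=0.9):
    """Momentum-smoothed score: s_n = m * s_{n-1} + (1 - m) * s_hat_n."""
    return m * prev + (1 - m) * new

dense = tailness_score(np.full(100, 0.05))    # crowded neighborhood (head-like)
sparse = tailness_score(np.full(100, 0.001))  # sparse neighborhood (tail-like)
smoothed = momentum_update(1.0, 0.0)          # 0.9 * 1.0 + 0.1 * 0.0
```

the sample with the sparser neighborhood receives the higher score, matching the paper's use of the score as a tail-sample detector.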
first, we obtain c feature prototypes z_{c_i} from the id training set S_id via k-means clustering. note that we use the features at the last projection layer, since the contrastive process is performed on this layer. the cluster-wise tailness score s_{c_i}^t is defined as the mean tailness score in cluster c_i, i.e., s_{c_i}^t = Σ_{z_j ∈ c_i} s_j^t / |c_i|, where |c_i| is the number of instances in cluster c_i. then, we obtain each cluster's sampling budget k′ as follows:

k′ = k · softmax( s̃_c^t / τ_c ),   where   s̃_c^t = ( s_c^t − mean(s_c^t) ) / std(s_c^t),   (3)

where k refers to the total sampling budget, k′ ∈ R^c is the sampling budget assigned to each cluster, and s̃_c^t is the normalized cluster tailness score. empirically, we assign more sampling budget to the tailer clusters, consistent with the idea of re-balancing the feature space. we sample ood images whose features are close to (i.e., have higher cosine similarity with) the id prototypes z_{c_i}. to fully exploit the ood data, we re-sample from S_ood every r epochs. the motivation behind this is: i) the sampled ood data can be well separated from S_id after a few epochs, and therefore becomes less effective for re-balancing the feature space; ii) over-fitting to the ood data can be toxic to the id performance (wei et al., 2022). from another perspective, this online sampling strategy lets the id training set (especially the tail samples) continuously be exposed to newly sampled effective negative samples, forcing the model to give more distinctive embeddings and better fit the id distribution. the online sampling process is summarized in algorithm 2.
algorithm 1: the overall pipeline of colt.
input: id train set S_id, ood dataset S_ood, sample budget k, train epochs t, momentum coefficient m, warm-up epochs w, sample interval r, cluster number c, hyper-parameters k%, τ_c.
output: pre-trained model parameters θ_t.
initialize: model parameters θ_0, the original train set S_train = S_id.
if epoch = 0 then train model θ_0 with eq.
1 and compute the initial tailness scores s_{i,0}^t;
end if
for epoch = 1, · · · , t − 1 do
  if epoch ≥ w then
    if (epoch − w) % r = 0 then sample by performing algorithm 2; end if
    calculate the supervised contrastive loss with eq. 5;
    use S_train to train θ_epoch with eq. 6;
  else
    use S_train to train θ_epoch with eq. 1;
  end if
  compute s_i^t and update s_{i,epoch}^t;
end for

algorithm 2: our online sampling strategy.
input: id train set S_id, ood dataset S_ood, model θ, sample budget k, cluster number c, similarity metric sim(·), hyper-parameter τ_c.
output: new train set S_train.
calculate both id features z_id and ood features z_ood through model θ;
obtain c id prototypes z_{c_i} via k-means clustering in the projected feature space;
calculate the cluster-wise tailness score s_{c_i}^t = Σ_{z_j ∈ c_i} s_j^t / |c_i|;
assign each cluster a sample budget k′_{c_i} with eq. 3;
initialize the sample set S_sample = ∅;
for i = 0, · · · , c − 1 do
  initialize the subset S_sample^i = ∅;
  while |S_sample^i| < k′_{c_i} do
    u = argmax_{x_j ∈ S_ood} sim(z_j, z_{c_i});
    S_sample^i = S_sample^i ∪ {u};
  end while
  S_sample = S_sample ∪ S_sample^i;
end for
S_train = S_train ∪ S_sample.

awareness of the out-of-distribution data section 3.2 and section 3.3 introduce our sampling strategy for ood images. to involve the sampled ood subset S_sample in training, a feasible way is to directly use the augmented training set (containing both id and ood samples) to train the model with eq. 1. however, we argue that giving equal treatment to all samples may not be the optimal choice (details in section 4). one natural idea is to let the model be aware that there are two kinds of samples from different domains. hence, we define an indicator ϕ to provide weakly supervised (distribution-only) information:

ϕ(x_i) = +1 if x_i ∈ S_id;  −1 if x_i ∈ S_ood.   (4)
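the budget allocation of eq. 3 and the greedy selection loop of algorithm 2 can be sketched in numpy; the toy features, prototypes, and budgets below are purely illustrative:

```python
import numpy as np

def cluster_budgets(cluster_scores, total_budget, tau_c=1.0):
    """Eq.-3-style budgets: k' = k * softmax(s_tilde / tau_c), where s_tilde
    is the z-normalized cluster-wise tailness score."""
    s = np.asarray(cluster_scores, dtype=float)
    s_tilde = (s - s.mean()) / s.std()
    logits = s_tilde / tau_c
    logits -= logits.max()                    # numerical stability
    p = np.exp(logits) / np.exp(logits).sum()
    return total_budget * p

def sample_ood(ood_feats, prototypes, budgets):
    """Greedy selection as in algorithm 2: for each ID prototype, take the
    budgeted number of OOD features with the highest cosine similarity."""
    ood = ood_feats / np.linalg.norm(ood_feats, axis=1, keepdims=True)
    taken, selected = set(), []
    for proto, k in zip(prototypes, budgets):
        p = proto / np.linalg.norm(proto)
        picked = 0
        for idx in np.argsort(ood @ p)[::-1]:   # most similar first
            if int(idx) not in taken and picked < int(k):
                taken.add(int(idx)); selected.append(int(idx)); picked += 1
    return selected

budgets = cluster_budgets([-0.5, -0.1, -0.01], total_budget=9)
ood = np.array([[1.0, 0.05], [0.05, 1.0], [-1.0, 0.0], [0.0, -1.0]])
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
chosen = sample_ood(ood, protos, [1, 1])
```

the tailest cluster (largest score) receives the largest budget, and each prototype pulls in its nearest unused ood point.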
afterward, we add a supervised contrastive loss (khosla et al., 2020) over both id and ood samples:

ℓ_scl = − (1/|P(i)|) Σ_{p ∈ P(i)} log [ exp(z_i · z_p / τ) / ( Σ_{p ∈ P(i)} exp(z_i · z_p / τ) + Σ_{n ∈ N(i)} exp(z_i · z_n / τ) ) ],   (5)

where P(i) ≡ {p : ϕ(x_p) = ϕ(x_i)} is the set of indices of the same domain within the mini-batch, |P(i)| is its cardinality, and the negative index set N(i) ≡ {n : ϕ(x_n) ≠ ϕ(x_i)} contains indices from the other distribution. fig 3c illustrates that the proposed distribution-awareness loss not only improves the overall performance but also facilitates a more balanced feature space. it is worth noting that the proposed loss only utilizes the distribution information as the supervised term, while the labels of both id and ood samples are unavailable during the self-supervised training stage. finally, we scale the supervised loss by α and add it to the contrastive loss in eq. 1:

ℓ_colt = ℓ_cl + α ℓ_scl.   (6)

table 1: test accuracy (%) and balancedness (std↓) on cifar-10-lt and cifar-100-lt (methods: sdclr and bcl-i, each with and without colt; metrics: many↑, median↑, few↑, std↓, all↑).
table 2: test accuracy (%) and balancedness (std↓) on imagenet-100-lt and places-lt (method: sdclr with and without colt; metrics: many↑, median↑, few↑, std↓, all↑).
experiments
in this section, we first introduce the datasets and experimental settings (section 4.1) and evaluate the proposed colt in terms of accuracy and balancedness (section 4.2) and versatility and complexity (section 4.3). then, we verify whether our method can 1) localize tail samples and 2) re-balance the feature space. finally, we provide a comprehensive analysis of colt (section 4.4).
datasets and settings we conduct experiments on four popular datasets. cifar-10-lt/cifar-100-lt are long-tail subsets sampled from the original cifar10/cifar100 (cui et al., 2019a). we set the imbalance ratio to 100 by default.
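the distribution-level supervised contrastive loss described above can be sketched in numpy; the toy embeddings and the batch construction are illustrative, and the α-weighted combination with the base contrastive loss is omitted:

```python
import numpy as np

def distribution_scl(z, phi, tau=0.5):
    """Supervised contrastive loss using only the ID/OOD indicator phi in
    {+1,-1} as the label: same-domain embeddings are positives, cross-domain
    embeddings are negatives (averaged over anchors and positives)."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(z)
    total = 0.0
    for i in range(n):
        pos = [p for p in range(n) if p != i and phi[p] == phi[i]]
        neg = [q for q in range(n) if phi[q] != phi[i]]
        sims = np.exp(z @ z[i] / tau)
        denom = sims[pos].sum() + sims[neg].sum()
        total += -np.mean([np.log(sims[p] / denom) for p in pos])
    return total / n

# Two ID-like points near +e1 and two OOD-like points near -e1.
z = np.array([[1.0, 0.0], [0.9, 0.1], [-1.0, 0.0], [-0.9, -0.1]])
loss_separated = distribution_scl(z, [1, 1, -1, -1])   # correct domain labels
loss_mixed = distribution_scl(z, [1, -1, 1, -1])       # scrambled labels
```

when same-domain points cluster together the loss is small, which is the behavior the loss rewards during training.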
following (wei et al., 2022), we use 300k random images (hendrycks et al., 2018b) as the ood dataset. imagenet-100-lt is proposed by (jiang et al., 2021b), with 12k images sampled from imagenet-100 (tian et al., 2020) following a pareto distribution. we use imagenet-r (hendrycks et al., 2021) as the ood dataset. places-lt (liu et al., 2019) contains about 62.5k images sampled from the large-scale scene-centric places dataset (zhou et al., 2017) following a pareto distribution. places-extra69 (zhou et al., 2017) is utilized as the ood dataset.
evaluation protocols to verify the balancedness and separability of the feature space, we report performance under two widely-used evaluation protocols in ssl: linear probing and few-shot. for both protocols, we first perform self-supervised training of the encoder model to obtain the optimized visual representation. then, we fine-tune a linear classifier on top of the fixed encoder. the only difference between linear probing and few-shot learning is that we use the full dataset for linear probing and 1% of the samples for few-shot learning during fine-tuning.
measurement metrics as a common practice in long-tail learning, we divide each dataset into three disjoint groups in terms of the number of instances per class: {many, median, few}. by calculating the standard deviation of the accuracies of the three groups, we can quantitatively analyze the balancedness of a feature space (jiang et al., 2021b). the linear separability of the feature space is evaluated by the overall accuracy.
training settings we evaluate our method with the simclr (chen et al., 2020) framework by default. we also conduct experiments on several state-of-the-art methods in self-supervised long-tail learning (jiang et al., 2021b; zhou et al., 2022). we adopt resnet-18 (he et al., 2016) for the small datasets (cifar-10-lt/cifar-100-lt) and resnet-50 for the large datasets (imagenet-100-lt/places-lt), respectively. more details can be found in the appendix.
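the linear-probing protocol described above can be sketched as follows; as an assumption for illustration, a ridge-regression classifier on one-hot targets replaces the usual softmax classifier, and two gaussian blobs stand in for frozen encoder features:

```python
import numpy as np

def linear_probe(feats, labels, test_feats, n_classes, l2=1e-3):
    """Linear probing sketch: the encoder stays frozen and only a linear
    classifier is fit on the fixed features (ridge regression onto one-hot
    targets as a lightweight stand-in for the softmax classifier)."""
    Y = np.eye(n_classes)[labels]
    d = feats.shape[1]
    W = np.linalg.solve(feats.T @ feats + l2 * np.eye(d), feats.T @ Y)
    return (test_feats @ W).argmax(axis=1)

rng = np.random.default_rng(0)
blob0 = rng.normal(scale=0.5, size=(100, 2)) + np.array([5.0, 0.0])
blob1 = rng.normal(scale=0.5, size=(100, 2)) + np.array([-5.0, 0.0])
train_x = np.vstack([blob0[:50], blob1[:50]])
train_y = np.array([0] * 50 + [1] * 50)
test_x = np.vstack([blob0[50:], blob1[50:]])
test_y = np.array([0] * 50 + [1] * 50)
acc = (linear_probe(train_x, train_y, test_x, 2) == test_y).mean()
```

the few-shot protocol would fit the same classifier on roughly 1% of the training samples instead of the full set.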
table 3: comparison of the proposed colt with random sampling and mak under the same sampling pool and sampling budget (id datasets: imagenet-100 with the imagenet-r pool and places with the places-extra69 pool; protocols: few-shot and linear probing; metrics: many↑, median↑, few↑, std↓, all↑). the best performance under each setting is marked in bold.
table 4: comparison of test accuracy (%) on imagenet-100-lt between the proposed colt (ood extra data) and mak (id & ood extra data); metrics: many↑, median↑, few↑, std↓, all↑. the best performance is marked in bold.
colt's accuracy, balancedness and versatility the main results of the proposed approach on various datasets and settings are presented in table 1 and table 2. we sample k = 10,000 ood images every r = 25 epochs for cifar-10-lt/cifar-100-lt and places-lt, and every r = 50 epochs for imagenet-100-lt. colt significantly outperforms the baseline (vanilla simclr) by a large margin (about 10% for long-tail cifar, 5% for imagenet-100-lt, 1.6% for places-lt). besides, the performance gain on the minority classes (median & few) is more notable (e.g., about 12% for long-tailed cifar-100). meanwhile, colt yields a balanced feature space. following previous works (jiang et al., 2021b; zhou et al., 2022), we measure the balancedness of a feature space through the standard deviation of the accuracies over many, median and few. colt significantly narrows the performance gap between the three groups (much lower std), which indicates that we learn a more balanced feature space. to evaluate the versatility of colt, we carry out experiments on top of several improved ssl frameworks for long-tail learning, i.e., sdclr (jiang et al., 2021b) and bcl (zhou et al., 2022).
table 1 and table 2 also summarize colt's performance on these two methods. we observe that incorporating our method into existing state-of-the-art methods consistently improves their performance, which indicates that our method is robust to the underlying ssl frameworks.
colt vs baselines with auxiliary data
implicit convex regularizers of cnn architectures: convex optimization of two- and three-layer networks in polynomial time
tolga ergen & mert pilanci
department of electrical engineering, stanford university, stanford, ca 94305, usa
{ergen,pilanci}@stanford.edu
abstract
we study training of convolutional neural networks (cnns) with relu activations and introduce exact convex optimization formulations with a polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension. more specifically, we develop a convex analytic framework utilizing semi-infinite duality to obtain equivalent convex optimization problems for several two- and three-layer cnn architectures. we first prove that two-layer cnns can be globally optimized via an ℓ2 norm regularized convex program. we then show that multi-layer circular cnn training problems with a single relu layer are equivalent to an ℓ1 regularized convex program that encourages sparsity in the spectral domain. we also extend these results to three-layer cnns with two relu layers. furthermore, we present extensions of our approach to different pooling methods, which elucidates the implicit architectural bias as convex regularizers.
introduction
convolutional neural networks (cnns) have shown remarkable success across various machine learning problems (lecun et al., 2015). however, our theoretical understanding of cnns still remains restricted, where the main challenge arises from the highly non-convex and nonlinear structure of cnns with nonlinear activations such as relu. hence, we study the training problem for various cnn architectures with relu activations and introduce equivalent finite-dimensional convex formulations that can be used to globally optimize these architectures. our results characterize the role of network architecture in terms of equivalent convex regularizers.
remarkably, we prove that the proposed methods are polynomial time with respect to all problem parameters. convex neural network training was previously considered in bengio et al. (2006); bach (2017). however, these studies are restricted to two-layer fully connected networks with infinite width; thus, the optimization problem involves infinite-dimensional variables. moreover, it has been shown that even adding a single neuron to a neural network leads to a non-convex optimization problem which cannot be solved efficiently (bach, 2017). another line of research in parhi & nowak (2019); ergen & pilanci (2019; 2020a;b;c;d); pilanci & ergen (2020); savarese et al. (2019); gunasekar et al. (2018); maennel et al. (2018); blanc et al. (2019); zhang et al. (2016) focuses on the effect of implicit and explicit regularization in neural network training and aims to explain why the resulting network generalizes well. among these studies, parhi & nowak (2019); ergen & pilanci (2020b;c;d); savarese et al. (2019) proved that the minimum ℓ2 norm two-layer network that perfectly fits a one-dimensional dataset outputs the linear spline interpolation. moreover, gunasekar et al. (2018) studied certain linear convolutional networks and revealed an implicit non-convex quasi-norm regularization. however, as the number of layers increases, the regularization approaches the ℓ0 quasi-norm, which is not computationally tractable. recently, pilanci & ergen (2020) showed that two-layer cnns with linear activations can be equivalently optimized as nuclear- and ℓ1-norm regularized convex problems. although all the norm characterizations provided by these studies are insightful for future research, existing results are quite restricted due to linear activations, simple settings, or intractable problems.
table 1: cnn architectures and the corresponding norm regularization in our convex programs.
architecture | implicit regularization
2-layer, average pooling (equation 4): Σ_j Σ_k (X_k u_j)_+ w_j | group ℓ1 norm (sum of ℓ2 norms)
2-layer, max pooling (equation 7): Σ_j maxpool{(X_k u_j)_+}_k w_j | group ℓ1 norm with max-pooling constraints
2-layer, linear (equation 21): Σ_{j,k} X_k u_j w_jk | ‖·‖_∗ (nuclear norm)
l-layer circular (equation 9): Σ_j (X Π_l U_lj w_1j)_+ w_2j | ℓ1 norm (spectral domain)
3-layer (equation 11) | sum of frobenius norms
shallow cnns and their representational power: as opposed to their relatively simple and shallow architecture, cnns with two/three layers are very powerful and efficient models. belilovsky et al. (2019) show that greedy training of two/three-layer cnns can achieve performance comparable to deeper models, e.g., vgg-11 (simonyan & zisserman, 2014). however, a full theoretical understanding and interpretable description of cnns even with a single hidden layer is lacking in the literature.
our contributions: our contributions can be summarized as follows:
• we develop convex programs to globally train cnns that are polynomial time with respect to all input parameters: the number of samples, the data dimension, and the number of neurons. to the best of our knowledge, this is the first work characterizing polynomial-time trainability of non-convex cnn models. more importantly, we achieve this complexity with explicit and interpretable convex optimization problems. consequently, training cnns, especially in practice, can be further accelerated by leveraging extensive tools from convex optimization theory.
• our work reveals a hidden regularization mechanism behind cnns and characterizes how the architecture and pooling strategies, e.g., max pooling, average pooling, and flattening, dramatically alter the regularizer. as we show, ranging from ℓ1 and ℓ2 norms to the nuclear norm (see table 1 for details), relu cnns exhibit an extremely rich and elegant regularization structure which is implicitly enforced by architectural choices.
in convex optimization and signal processing, ℓ1, ℓ2 and nuclear norm regularizations are well studied, and these structures have been applied in compressed sensing, inverse problems, and matrix completion. our results bring to light unexplored and promising connections of relu cnns with these established disciplines.
notation and preliminaries: we denote matrices/vectors as uppercase/lowercase bold letters, for which a subscript indicates a certain element/column. we use I_k for the identity matrix of size k. we denote the set of integers from 1 to n as [n]. moreover, ‖·‖_F and ‖·‖_∗ are the frobenius and nuclear norms, and B_p := {u ∈ C^d : ‖u‖_p ≤ 1} is the unit ℓp ball. we also use 1[x ≥ 0] as an indicator. to keep the presentation simple, we use a regression framework with scalar outputs and squared loss. however, we also note that all of our results can be extended to vector outputs and arbitrary convex regression and classification loss functions; we present these extensions in the appendix. in our regression framework, we denote the input data matrix and the corresponding label vector as X ∈ R^{n×d} and y ∈ R^n, respectively. moreover, we represent the patch matrices, i.e., subsets of columns extracted from X, as X_k ∈ R^{n×h}, k ∈ [K], where h denotes the filter size. with this notation, {X_k u}_{k=1}^K describes a convolution operation between the filter u ∈ R^h and the data matrix X. throughout the paper, we use the relu activation function defined as (x)_+ = max{0, x}. however, since cnn training problems with relus are not convex in their conventional form, below we introduce an alternative formulation for this activation, which will be crucial for our derivations.
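the patch-matrix notation {X_k u}_{k=1}^K can be checked numerically: sliding h-column blocks of X multiplied by a filter u reproduce a valid cross-correlation of each row of X with u. a small numpy sketch with toy shapes:

```python
import numpy as np

def patch_matrices(X, h, stride=1):
    """Patch matrices X_k in R^{n x h}: column blocks of X, so that {X_k u}_k
    is the (valid) sliding dot product of each row of X with the filter u."""
    n, d = X.shape
    return [X[:, s:s + h] for s in range(0, d - h + 1, stride)]

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10))
u = rng.normal(size=3)
patches = patch_matrices(X, h=3)
conv_via_patches = np.stack([Xk @ u for Xk in patches], axis=1)  # shape (n, K)
# The same quantity via numpy's convolution (flip u to get correlation).
conv_direct = np.stack([np.convolve(row, u[::-1], mode="valid") for row in X])
```

with stride 1 this yields K = d − h + 1 patches, consistent with the stride formula given in section 1.2.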
prior work (pilanci & ergen, 2020): recently, pilanci & ergen (2020) introduced an exact convex formulation for training two-layer fully connected relu networks in polynomial time for training data X ∈ R^{n×d} of constant rank, where the model is a standard two-layer scalar-output network f_θ(X) := Σ_{j=1}^m (X u_j)_+ α_j. however, this model has three main limitations. first, as noted by the authors, even though the algorithm is polynomial time, i.e., O(n^r), provided that r := rank(X) is fixed, the complexity is exponential in r = d, i.e., O(n^d), if X is full rank. additionally, as a direct consequence of their model, the analysis is limited to fully connected architectures. although they briefly analyzed some cnn architectures in their section 4, as emphasized by the authors, these are either fully linear (without relu) or separable over the patch index k as fully connected models, which do not correspond to the weight sharing in classical cnn architectures used in practice. finally, their analysis does not extend to three-layer architectures with two relu layers, since the analysis of two relu layers is significantly more challenging. on the contrary, we prove that classical cnn architectures can be globally optimized by standard convex solvers in polynomial time independent of the rank
1 the results on two-layer cnns are presented in appendix a.4.
2 this refers to an l-layer network with only one relu layer and circular convolutions.
table 2: computational complexity results for training cnns to global optimality using a standard interior-point solver (n: # of data samples, d: data dimensionality, K: # of patches, r_c: maximal rank of the patch matrices (r_c ≤ h), r_cc: rank of the circular convolution, h: filter size, m: # of filters). the rows cover the 2-layer programs in equation 4 and equation 7, the l-layer program in equation 9, and the 3-layer program in equation 11; the columns give the number of variables (e.g., 2h·P_conv for equation 4 and 4d·P_cconv for equation 9), the number of constraints (e.g., 2n·P_conv·K and 2n·P_cconv), and the resulting complexity, which is polynomial in n and d for fixed rank.
(see table 2). more importantly, we extend this analysis to three-layer cnns with two relu layers to achieve polynomial-time convex training, as proven in theorem 4.1.
1.1 hyperplane arrangements
let H be the set of all hyperplane arrangement patterns of X, defined as the following set

H := ∪_{w ∈ R^d} {sign(Xw)},

which has finitely many elements, i.e., |H| ≤ N_H < ∞, N_H ∈ N. we now define a collection of sets that correspond to the positive signs of each element in H, by S := { ∪_{h_i = 1} {i} : h ∈ H }. we first note that relu is an elementwise function that masks the negative entries of a vector or matrix. hence, given a set S ∈ S, we define a diagonal mask matrix D(S) ∈ R^{n×n} with D(S)_{ii} := 1[i ∈ S]. then, we have an alternative representation of relu as (Xw)_+ = D(S)Xw given D(S)Xw ≥ 0 and (I_n − D(S))Xw ≤ 0. note that these constraints can be compactly written as (2D(S) − I_n)Xw ≥ 0. if we denote the cardinality of S as P, i.e., the number of regions in a partition of R^d by hyperplanes that pass through the origin and are perpendicular to the rows of the data matrix X, with r := rank(X) ≤ min(n, d), then P can be upper-bounded as follows (ojha, 2000; stanley et al., 2004; winder, 1966; cover, 1965) (see appendix a.2 for details).
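the relu identity (Xw)_+ = D(S)Xw and the finiteness of the arrangement set H can be illustrated numerically; the random sampling of directions w below only lower-bounds the number of patterns and is purely illustrative:

```python
import numpy as np

def arrangement_patterns(X, n_samples=2000, seed=0):
    """Empirically enumerate hyperplane-arrangement patterns sign(Xw) by
    sampling random directions w; returns the set of 0/1 activation masks."""
    rng = np.random.default_rng(seed)
    patterns = set()
    for _ in range(n_samples):
        w = rng.normal(size=X.shape[1])
        patterns.add(tuple((X @ w >= 0).astype(int)))
    return patterns

# ReLU as a masked linear map: (Xw)_+ = D(S) X w when the signs agree,
# i.e., when (2 D(S) - I_n) X w >= 0.
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
w = rng.normal(size=3)
D = np.diag((X @ w >= 0).astype(float))
relu_out = np.maximum(X @ w, 0)
masked_out = D @ X @ w
pats = arrangement_patterns(X)
```

each pattern found is one region of the arrangement; with n = 5 rows there can never be more than 2^5 patterns, and the cover-type bound is far smaller for low rank.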
1.2 convolutional hyperplane arrangements
we now define a notion of hyperplane arrangements for cnns, where we introduce the patch matrices {X_k}_{k=1}^K instead of directly operating on X. we first construct a new data matrix as M = [X_1; X_2; . . . ; X_K] ∈ R^{nK×h}. we then define the convolutional hyperplane arrangements as the hyperplane arrangements of M and denote the cardinality of this set as P_conv. then, we have

P_conv ≤ 2 ( e(nK − 1) / r_c )^{r_c},

where r_c := rank(M) ≤ h and K = ⌊(d − h)/stride⌋ + 1. note that when the filter size h is fixed, P_conv is polynomial in n and d. similarly, we consider hyperplane arrangements for circular cnns followed by a linear pooling layer, i.e., XUw, where U ∈ R^{d×d} is a circulant matrix generated by the elements of u ∈ R^h. then, we define the circular convolutional hyperplane arrangements and denote the cardinality of this set as P_cconv, which is exponential in the rank of the circular patch matrices, i.e., r_cc.
remark 1.1. there exist P hyperplane arrangements of X, where P is exponential in r. thus, if X is full rank, r = d, then P can be exponentially large in the dimension d. as we will show, this makes the training problem for fully connected networks challenging. on the other hand, for cnns, the number of relevant hyperplane arrangements P_conv is exponential in r_c. if M is full rank, then r_c = h ≪ d and accordingly P_conv ≪ P. this shows that the parameter-sharing structure in cnns enables a significant reduction in the number of possible hyperplane arrangements. consequently, as shown in the sequel and in table 2, our results imply that the complexity of the training problem is significantly lower compared to fully connected networks.
2 two-layer cnns
in this section, we present exact convex formulations for two-layer cnn architectures.
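a small numpy sketch of both constructions above, assuming stride 1 and toy sizes: the stacked patch matrix M = [X_1; ...; X_K] whose sign patterns give the convolutional arrangements, and a circulant U diagonalized by the DFT, the fact underlying the circular-cnn analysis of section 3:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 6, 12, 2
X = rng.normal(size=(n, d))
K = (d - h) // 1 + 1                             # stride 1: K = (d-h)/stride + 1
M = np.vstack([X[:, k:k + h] for k in range(K)]) # shape (n*K, h), rank <= h

# Sample filters u in R^h to (lower-bound) count the conv. arrangements of M.
pats = set()
for _ in range(2000):
    u = rng.normal(size=h)
    pats.add(tuple((M @ u >= 0).astype(int)))

def circulant(u, dim):
    """dim x dim circulant matrix whose first column is u (zero-padded)."""
    col = np.zeros(dim)
    col[: len(u)] = u
    return np.stack([np.roll(col, k) for k in range(dim)], axis=1)

# F U F^{-1} = diag(fft(u)): circular convolution acts elementwise on the
# spectral features X~ = X F.
U = circulant(np.array([1.0, -2.0, 0.5]), d)
F = np.fft.fft(np.eye(d))                        # DFT matrix: F @ x == fft(x)
Dm = F @ U @ np.linalg.inv(F)
off_diag = np.abs(Dm - np.diag(np.diag(Dm))).max()
```

with h = 2 every row of M defines a line through the origin in R^2, so at most 2·nK sign patterns exist regardless of how many filters are sampled, illustrating remark 1.1.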
2.1 two-layer cnns with average pooling
we first consider an architecture with m filters and average pooling³, defined as f_θ(X) := Σ_j Σ_k (X_k u_j)_+ w_j with parameters θ := {u_j, w_j}, and standard weight decay regularization, which can be trained via the following problem

p*₁ = min_{{u_j, w_j}_{j=1}^m} (1/2) ‖ Σ_j Σ_k (X_k u_j)_+ w_j − y ‖₂² + (β/2) Σ_j ( ‖u_j‖₂² + w_j² ),   (1)

where u_j ∈ R^h and w ∈ R^m are the filter and output weights, respectively, and β > 0 is a regularization parameter. after a rescaling (see appendix a.3), we obtain the following problem

p*₁ = min_{{u_j, w_j}_{j=1}^m : u_j ∈ B₂, ∀j} (1/2) ‖ Σ_j Σ_k (X_k u_j)_+ w_j − y ‖₂² + β Σ_j |w_j|.   (2)

then, taking the dual with respect to w and changing the order of min-max yields the weak dual

d*₁ = max_v − (1/2) ‖v − y‖₂² + (1/2) ‖y‖₂²   s.t.   max_{u ∈ B₂} | vᵀ Σ_k (X_k u)_+ | ≤ β,   (3)

which is a semi-infinite optimization problem, and the dual can be obtained as a finite-dimensional convex program using semi-infinite optimization theory (goberna & lópez-cerdá, 1998). the same dual also corresponds to the bidual of equation 1. surprisingly, strong duality holds when m exceeds a threshold. then, using strong duality, we characterize a set of optimal filter weights as the extreme points of the constraint in equation 3. below, we use this characterization to derive an exact convex formulation for equation 1.
theorem 2.1. let m be a number such that m ≥ m* for some m* ∈ N, m* ≤ n + 1. then strong duality holds for equation 3, i.e., p*₁ = d*₁, and the equivalent convex program for equation 1 is

min_{{c_i, c_i'}} (1/2) ‖ Σ_{i=1}^{P_conv} Σ_k D(S_i^k) X_k (c_i' − c_i) − y ‖₂² + β Σ_{i=1}^{P_conv} ( ‖c_i'‖₂ + ‖c_i‖₂ )
s.t. (2D(S_i^k) − I_n) X_k c_i ≥ 0,  (2D(S_i^k) − I_n) X_k c_i' ≥ 0, ∀i, k.   (4)

moreover, an optimal solution to equation 1 with m* filters can be constructed as follows:

(u*_{j1_i}, w*_{j1_i}) = ( c_i'* / √‖c_i'*‖₂ , √‖c_i'*‖₂ )   if ‖c_i'*‖₂ ≠ 0,
(u*_{j2_i}, w*_{j2_i}) = ( c_i* / √‖c_i*‖₂ , −√‖c_i*‖₂ )   if ‖c_i*‖₂ ≠ 0,

where {c_i'*, c_i*}_{i=1}^{P_conv} are optimal and m* := |J1| + |J2|, given the index sets J1 := {i : ‖c_i'*‖₂ ≠ 0} and J2 := {i : ‖c_i*‖₂ ≠ 0}.
therefore, we obtain a finite-dimensional convex formulation with 2h·P_conv variables and 2n·P_conv·K constraints for the non-convex problem in equation 1.
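the rescaling step invoked before equation 2 rests on the identity min_{γ>0} (1/2)(γ²‖u‖₂² + w²/γ²) = ‖u‖₂|w| (am-gm), which turns the weight decay of equation 1 into the ℓ1 penalty on w under unit-norm filters. a numerical check with a hypothetical grid search:

```python
import numpy as np

def min_scaled_decay(u, w, gammas=None):
    """Numerically minimize (1/2)(||gamma*u||^2 + (w/gamma)^2) over gamma > 0;
    by AM-GM the minimum equals ||u||_2 * |w|, attained at
    gamma^2 = |w| / ||u||_2."""
    if gammas is None:
        gammas = np.linspace(0.05, 5.0, 50000)
    vals = 0.5 * (gammas ** 2 * np.dot(u, u) + (w / gammas) ** 2)
    return vals.min()

u = np.array([3.0, 4.0])                  # ||u||_2 = 5
w = 2.0
numeric = min_scaled_decay(u, w)
closed_form = np.linalg.norm(u) * abs(w)  # = 10
```

since the network output Σ_k (X_k u)_+ w is invariant under (u, w) → (γ u, w/γ), minimizing the decay over γ leaves exactly the ‖u‖₂|w| penalty, and fixing ‖u‖₂ = 1 reduces it to β|w|.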
since P_conv is polynomial in n and d for fixed r_c ≤ h, equation 4 can be solved by a standard convex optimization solver in polynomial time.
remark 2.1. table 2 shows that for fixed rank r_c, or fixed filter size h, the complexity is polynomial in all problem parameters: n (number of samples), m (number of filters, i.e., neurons), and d (dimension). the filter size h is typically a small constant, e.g., h = 9 for 3 × 3 filters. we also note that for fixed n and rank(X) = d, the complexity for fully connected networks is exponential in d, which cannot be improved unless P = NP, even for m = 2 (boob et al., 2018; pilanci & ergen, 2020). however, this result shows that cnns can be trained to global optimality with polynomial complexity as a convex program.
3 we define the average pooling operation as Σ_{k=1}^K (X_k u_j)_+, which is also known as global average pooling.
4 since our proof technique is similar for different cnns, we present only the proof of theorem 2.1 in section 5. the rest of the proofs can be found in the appendix (including the strong duality results in a.7).
interpreting non-convex cnns as convex variable selection models: interestingly, the non-convex problem in equation 1 is regularized by the sum of the squared ℓ2 norms of the weights (i.e., weight decay regularization), whereas the equivalent convex program in equation 4 is regularized by the sum of the ℓ2 norms of the weights. this particular regularizer is known as the group ℓ1 norm and is well studied in the context of sparse recovery and variable selection (yuan & lin, 2006; meier et al., 2008). hence, our convex program reveals an implicit variable selection mechanism in the original non-convex problem. more specifically, the original features in X are mapped to higher dimensions via the convolutional hyperplane arrangements as {D(S_i^k) X_k}_{i=1}^{P_conv}, followed by a convex variable selection strategy using the group ℓ1 norm.
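group-ℓ1 regularization performs variable selection because its proximal operator zeroes entire groups; a minimal sketch of block soft-thresholding (standard in the group-lasso literature, not part of this paper's algorithm):

```python
import numpy as np

def block_soft_threshold(c, lam):
    """Proximal operator of lam * ||c||_2: shrinks the whole group toward 0
    and sets it exactly to zero when ||c||_2 <= lam -- the mechanism behind
    group-sparse solutions of group-l1-regularized convex programs."""
    norm = np.linalg.norm(c)
    if norm <= lam:
        return np.zeros_like(c)
    return (1 - lam / norm) * c

weak = block_soft_threshold(np.array([0.1, 0.1]), lam=0.5)    # group zeroed
strong = block_soft_threshold(np.array([3.0, 4.0]), lam=0.5)  # shrunk, kept
```

applied blockwise to the variables {c_i, c_i'} of equation 4, this operator keeps only the arrangement blocks (i.e., masked feature maps) that matter, which is exactly the implicit variable selection described above.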
below, we show that this implicit regularization changes significantly with the cnn architecture and pooling strategies, and can range from the ℓ1 and ℓ2 norms to the nuclear norm.
2.2 two-layer cnns with max pooling
here, we consider the architecture with max pooling, i.e., f_θ(X) = Σ_j maxpool{(X_k u_j)_+}_k w_j, which is trained as follows

p*₁ = min_{{u_j, w_j}_{j=1}^m : u_j ∈ B₂, ∀j} (1/2) ‖ Σ_j maxpool{(X_k u_j)_+}_k w_j − y ‖₂² + β Σ_j |w_j|,   (5)

where maxpool(·) is an elementwise max function over the patch index k. then, taking the dual with respect to w and changing the order of min-max yields

d*₁ = max_v − (1/2) ‖v − y‖₂² + (1/2) ‖y‖₂²   s.t.   max_{u ∈ B₂} | vᵀ maxpool{(X_k u)_+}_k | ≤ β.   (6)

theorem 2.2. let m be a number such that m ≥ m* for some m* ∈ N, m* ≤ n + 1. then strong duality holds for equation 6, i.e., p*₁ = d*₁, and the equivalent convex program for equation 5 is

min_{{c_i, c_i'}} (1/2) ‖ Σ_{i=1}^{P_conv} Σ_k D(S_i^k) X_k (c_i' − c_i) − y ‖₂² + β Σ_{i=1}^{P_conv} ( ‖c_i'‖₂ + ‖c_i‖₂ )
s.t. (2D(S_i^k) − I_n) X_k c_i ≥ 0,  (2D(S_i^k) − I_n) X_k c_i' ≥ 0, ∀i, k,
     D(S_i^k) X_k c_i ≥ D(S_i^k) X_j c_i,  D(S_i^k) X_k c_i' ≥ D(S_i^k) X_j c_i', ∀i, j, k.   (7)

moreover, an optimal solution to equation 5 can be constructed from equation 7 as in theorem 2.1. we note that max pooling corresponds to the last two linear constraints of the above program. hence, max pooling can be interpreted as additional regularization, which constrains the parameters further.
3 multi-layer circular cnns
in this section, we first consider l-layer circular cnns with l − 2 linear pooling layers before the relu, i.e., f_θ(X) = Σ_j ( X Π_{l=1}^{L−2} U_lj w_1j )_+ w_2j, which is trained via the following non-convex problem

p*₂ = min_{{{U_lj}_{l=1}^{L−2}, w_1j, w_2j}_{j=1}^m : U_lj ∈ U_l, ∀l, j} (1/2) ‖ Σ_j ( X Π_{l=1}^{L−2} U_lj w_1j )_+ w_2j − y ‖₂² + (β/2) Σ_j ( w_1j² + w_2j² ),   (8)

where U_lj ∈ R^{d×d} is a circulant matrix generated from u_lj ∈ R^{h_l}, and U_l := {(U_1, . . . , U_{L−2}) : u_l ∈ R^{h_l}, ∀l ∈ [L − 2]; Π_{l=1}^{L−2} ‖U_l‖_F ≤ 1}, where we include unit-norm constraints w.l.o.g.
theorem 3.1. let m be a number such that m ≥ m* for some m* ∈ N, m* ≤ n + 1. then strong duality holds for equation 8, i.e., p*
2 = d∗2, and the equivalent convex problem is

$$\min_{\{c_i, c_i'\}_{i=1}^{p_{\mathrm{cconv}}},\ c_i, c_i' \in \mathbb{c}^d}\ \frac{1}{2}\Big\| \sum_{i=1}^{p_{\mathrm{cconv}}} d(s_i)\, \tilde{x}\, (c_i' - c_i) - y \Big\|_2^2 + \beta \sqrt{d} \sum_{i=1}^{p_{\mathrm{cconv}}} \big( \|c_i\|_1 + \|c_i'\|_1 \big) \quad (9)$$
$$\text{s.t.} \quad (2 d(s_i) - i_n)\, \tilde{x} c_i \ge 0,\ (2 d(s_i) - i_n)\, \tilde{x} c_i' \ge 0,\ \forall i,$$

where $\tilde{x} = x f$ and $f \in \mathbb{c}^{d \times d}$ is the dft matrix. additionally, as in theorem 2.1, we can construct an optimal solution to equation 8 from equation 9.5 remarkably, although the sum of the squared $\ell_2$ norms in the non-convex problem in equation 8 stands for the standard weight decay regularizer, the equivalent convex program in equation 9 is regularized by the sum of the $\ell_1$ norms, which encourages sparsity in the spectral domain $\tilde{x}$. thus, even with the simple choice of weight decay in the non-convex problem, the architectural choice for a cnn implicitly employs a more sophisticated regularizer that is revealed by our convex optimization approach. we further note that in the above problem $d(s_i)\tilde{x}$ are the spectral features of a subset of data points which are separated by a hyperplane from all the other spectral features. while such spectral features can be very predictive for images in many applications, we believe that our convex program also sheds light on the undesirable biases of cnns, e.g., towards certain textures and low frequencies (geirhos et al., 2018; rahaman et al., 2019). 4 three-layer cnns with two relu layers here, we consider three-layer cnns with two relu layers, which have the following primal problem

$$p_3^* = \min_{\{u_j, w_{1j}, w_{2j}\}_{j=1}^m,\ u_j \in \mathcal{b}_2}\ \frac{1}{2}\Big\| \sum_{j=1}^m \Big( \sum_k (x_k u_j)_+ w_{1jk} \Big)_+ w_{2j} - y \Big\|_2^2 + \frac{\beta}{2} \sum_{j=1}^m \big( \|w_{1j}\|_2^2 + w_{2j}^2 \big) \quad (10)$$

with $f_\theta(x) = \sum_j \big( \sum_k (x_k u_j)_+ w_{1jk} \big)_+ w_{2j}$ and the following convex equivalent problem. theorem 4.1. let m be a number such that m ≥ m∗ for some m∗ ∈ n, m∗ ≤ n + 1. then strong duality holds for equation 10, i.e., $p_3^* = d_3^*$, and the equivalent convex problem is

$$\min_{\{c_{ijk}, c_{ijk}'\}_{ijk},\ c_{ijk}, c_{ijk}' \in \mathbb{r}^h}\ \frac{1}{2}\Big\| \sum_{i,j} d(s_{2j}) \sum_k \mathcal{i}_{ijk}\, d(s_{1i}^k) x_k \big( c_{ijk}' - c_{ijk} \big) - y \Big\|_2^2 + \beta \sum_{i,j} \big( \|c_{ij}\|_F + \|c_{ij}'\|_F \big) \quad (11)$$
s.t.
$$(2 d(s_{2j}) - i_n) \sum_k \mathcal{i}_{ijk}\, d(s_{1i}^k) x_k c_{ijk} \ge 0, \quad (2 d(s_{1i}^k) - i_n) x_k c_{ijk} \ge 0, \quad \forall i, j, k,$$
$$(2 d(s_{2j}) - i_n) \sum_k \mathcal{i}_{ijk}\, d(s_{1i}^k) x_k c_{ijk}' \ge 0, \quad (2 d(s_{1i}^k) - i_n) x_k c_{ijk}' \ge 0, \quad \forall i, j, k,$$

where p1 and p2 are the numbers of hyperplane arrangements for the first and second layers, $\mathcal{i}_{ijk} \in \{\pm 1\}$ are sign patterns to enumerate all possible sign patterns of the second layer weights, and $c_{ij} = [c_{ij1}\ \dots\ c_{ijK}]$ (see appendix a.10 for further details). it is interesting to note that, although the sum of the squared $\ell_2$ norms in the non-convex problem equation 10 is the standard weight decay regularizer, the equivalent convex program equation 11 is regularized by the sum of the frobenius norms that promote matrix group sparsity, where the groups are over the patch indices. note that this is similar to equation 4, except for an extra summation due to having one more relu layer. therefore, we observe that adding more convolutional layers with relu implicitly regularizes for group sparsity over a richer hierarchical representation of the data via two consecutive hyperplane arrangements. 5 proof of the main result (theorem 2.1) here, we provide our proof technique for theorem 2.1. we first focus on the single-sided constraint

$$\max_{u \in \mathcal{b}_2}\ v^t \sum_k (x_k u)_+ \le \beta, \quad (12)$$

5the details are presented in appendix a.9. where the maximization problem can be written as

$$\max_{u \in \mathcal{b}_2}\ \max_{s_k \subseteq [n],\, s_k \in \mathcal{s}}\ v^t \sum_k d(s_k) x_k u \quad \text{s.t.} \quad (2 d(s_k) - i_n) x_k u \ge 0,\ \forall k. \quad (13)$$

since the maximization is convex and strictly feasible for fixed $d(s_k)$, equation 13 can be written as

$$\min_{\alpha_k \ge 0}\ \max_{u \in \mathcal{b}_2}\ \max_{s_k \subseteq [n],\, s_k \in \mathcal{s}}\ \Big( v^t \sum_k d(s_k) x_k + \sum_k \alpha_k^t (2 d(s_k) - i_n) x_k \Big) u = \max_{s_k \subseteq [n],\, s_k \in \mathcal{s}}\ \min_{\alpha_k \ge 0}\ \Big\| v^t \sum_k d(s_k) x_k + \sum_k \alpha_k^t (2 d(s_k) - i_n) x_k \Big\|_2.$$

we now enumerate all hyperplane arrangements and index them in an arbitrary order, i.e., denoted as $(s_i^1, \dots, s_i^K)$, where $i \in [p_{\mathrm{conv}}]$, $p_{\mathrm{conv}} = |\mathcal{s}_K|$, $\mathcal{s}_K := \{(s_i^1, \dots
, s_i^K) : s_i^k \in \mathcal{s}\}$. then

$$\text{equation 12} \iff \forall i \in [p_{\mathrm{conv}}],\ \min_{\alpha_k \ge 0} \Big\| v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_k^t (2 d(s_i^k) - i_n) x_k \Big\|_2 \le \beta.$$

we now use the same approach for the two-sided constraint in equation 3 to obtain the following

$$\iff \forall i \in [p_{\mathrm{conv}}],\ \exists \alpha_{ik}, \alpha_{ik}' \ge 0\ \text{s.t.}\ \Big\| v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}^t (2 d(s_i^k) - i_n) x_k \Big\|_2 \le \beta,\ \ \Big\| -v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}'^t (2 d(s_i^k) - i_n) x_k \Big\|_2 \le \beta,\ \forall i. \quad (14)$$

note that this problem is convex and strictly feasible for $v = \alpha_{ik} = \alpha_{ik}' = 0$. therefore, slater's conditions hold and consequently strong duality holds, and equation 14 can be written as

$$\min_{\lambda, \lambda' \ge 0}\ \max_{v,\, \alpha_{ik},\, \alpha_{ik}'}\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 + \sum_{i=1}^{p_{\mathrm{conv}}} \lambda_i \Big( \beta - \Big\| v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}^t (2 d(s_i^k) - i_n) x_k \Big\|_2 \Big) + \sum_{i=1}^{p_{\mathrm{conv}}} \lambda_i' \Big( \beta - \Big\| -v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}'^t (2 d(s_i^k) - i_n) x_k \Big\|_2 \Big).$$

next, we first introduce new variables $z_i, z_i' \in \mathbb{r}^h$. then, by recalling sion's minimax theorem (sion, 1958), we change the order of the inner max-min as follows

$$\min_{\lambda, \lambda' \ge 0}\ \min_{z_i \in \mathcal{b}_2,\, z_i' \in \mathcal{b}_2}\ \max_{v,\, \alpha_{ik},\, \alpha_{ik}'}\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 + \sum_{i=1}^{p_{\mathrm{conv}}} \lambda_i \Big( \beta - \Big( v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}^t (2 d(s_i^k) - i_n) x_k \Big) z_i \Big) + \sum_{i=1}^{p_{\mathrm{conv}}} \lambda_i' \Big( \beta - \Big( -v^t \sum_k d(s_i^k) x_k + \sum_k \alpha_{ik}'^t (2 d(s_i^k) - i_n) x_k \Big) z_i' \Big).$$

we now compute the maximum with respect to $v, \alpha_{ik}, \alpha_{ik}'$ analytically to obtain the following

$$\min_{\lambda, \lambda' \ge 0}\ \min_{z_i \in \mathcal{b}_2,\, z_i' \in \mathcal{b}_2}\ \frac{1}{2}\Big\| \sum_{i=1}^{p_{\mathrm{conv}}} \sum_k d(s_i^k) x_k (\lambda_i' z_i' - \lambda_i z_i) - y \Big\|_2^2 + \beta \sum_{i=1}^{p_{\mathrm{conv}}} (\lambda_i + \lambda_i')$$
$$\text{s.t.} \quad (2 d(s_i^k) - i_n) x_k z_i \ge 0,\ (2 d(s_i^k) - i_n) x_k z_i' \ge 0,\ \forall i, k.$$

(a) independent realizations with m = 5 (b) independent realizations with m = 15. figure 1: training cost of the three-layer circular cnn trained with sgd (5 initialization trials) on a synthetic dataset (n = 6, d = 20, h = 3, stride = 1), where the green and red lines with a marker represent the objective value obtained by the proposed convex program in equation 9 and the non-convex objective value in equation 8 of a feasible network with the weights found by the convex program, respectively. we use markers to denote the total computation time of the convex solver.
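the reparameterization behind equations 12 and 13 rests on a simple identity: once the sign pattern of $xu$ is fixed via the diagonal matrix $d(s) = \mathrm{diag}(\mathbb{1}[xu \ge 0])$, the relu acts linearly, $(xu)_+ = d(s)\, xu$, and the filters realizing that pattern form the polyhedral cone $(2d(s) - i_n)xu \ge 0$. a minimal pure-python check of both facts (the data values below are arbitrary illustrations):

```python
def relu_vec(v):
    return [max(x, 0.0) for x in v]

def matvec(X, u):
    return [sum(a * b for a, b in zip(row, u)) for row in X]

X = [[1.0, 2.0], [-1.0, 0.5], [0.0, -3.0]]
u = [1.0, 1.0]

z = matvec(X, u)                            # pre-activations X u
d = [1.0 if zi >= 0 else 0.0 for zi in z]   # diagonal of D(S)

# (X u)_+ == D(S) X u for the sign pattern induced by u ...
lhs = relu_vec(z)
rhs = [di * zi for di, zi in zip(d, z)]

# ... and u lies in the cone (2 D(S) - I) X u >= 0 of that pattern.
cone = [(2.0 * di - 1.0) * zi for di, zi in zip(d, z)]
print(lhs, rhs, cone)
```

enumerating one diagonal matrix per realizable pattern is exactly what turns the single non-convex relu constraint into the family of linear constraints indexed by $i \in [p_{\mathrm{conv}}]$.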
(a) mnist-training objective (b) mnist-test accuracy (c) cifar10-training objective (d) cifar10-test accuracy. figure 2: evaluation of the three-layer circular cnn trained with sgd (5 initialization trials) on a subset of mnist (n = 99, d = 50, m = 20, h = 3, stride = 1) and cifar10 (n = 99, d = 50, m = 40, h = 3, stride = 1). then, we apply a change of variables and define $c_i = \lambda_i z_i$ and $c_i' = \lambda_i' z_i'$. thus, we obtain

$$\min_{\{c_i, c_i'\}_{i=1}^{p_{\mathrm{conv}}},\ c_i, c_i' \in \mathbb{r}^h}\ \frac{1}{2}\Big\| \sum_{i=1}^{p_{\mathrm{conv}}} \sum_k d(s_i^k) x_k (c_i' - c_i) - y \Big\|_2^2 + \beta \sum_{i=1}^{p_{\mathrm{conv}}} \big( \|c_i\|_2 + \|c_i'\|_2 \big) \quad (18)$$
$$\text{s.t.} \quad (2 d(s_i^k) - i_n) x_k c_i \ge 0,\ (2 d(s_i^k) - i_n) x_k c_i' \ge 0,\ \forall i, k,$$

since $\lambda_i = \|c_i\|_2$ and $\lambda_i' = \|c_i'\|_2$ are feasible and optimal. then, using the prescribed $\{u_j^*, w_j^*\}_{j=1}^{m^*}$, we evaluate the non-convex objective in equation 1 as follows

$$\frac{1}{2}\Big\| \sum_{j=1}^{m^*} \sum_k (x_k u_j^*)_+ w_j^* - y \Big\|_2^2 + \beta \sum_{j=1}^{m^*} |w_j^*|,$$

which has the same objective value as equation 18. since strong duality holds for the convex program, $p_1^* = d_1^*$, which is equal to the value of equation 18 achieved by the prescribed parameters. 6 numerical experiments in this section67, we present numerical experiments to verify our claims. we first consider a synthetic dataset, where (n, d) = (6, 20), x ∈ ℝ^{6×20} is generated using a multivariate normal distribution with zero mean and identity covariance, and y = [1, −1, 1, −1, −1, 1]ᵀ. we then train the three-layer circular cnn model in equation 8 using sgd and the convex program in equation 9. in figure 1, we plot the regularized objective value with respect to the computation time with 5 different independent realizations for sgd. we also plot both the non-convex objective in equation 8 and the convex objective in equation 9 for our convex program, where optimal prescribed parameters are used to convert the solution of the convex program to the original non-convex cnn architecture (see appendix a.9). in figure 1a, we use 5 filters with h = 3 and stride 1, where only one
trial converges to the optimal objective value achieved by both our convex program and the feasible network. as m increases, all the trials are able to converge to the optimal objective value in figure 1b. we also evaluate the same model on a subset of mnist (lecun) and cifar10 (krizhevsky et al., 2014) for binary classification. here, we first randomly sample the dataset and then select (n, d, m, h, stride) = (99, 50, 20, 3, 1) and a batch size of 10 for sgd. similarly for cifar10, we select (n, d, m, h, stride) = (99, 50, 40, 3, 1) and use a batch size of 10 for sgd. in figure 2, we plot both the regularized objective values in equation 8 and equation 9, and the corresponding test accuracies, with respect to the computation time. since the number of filters is large enough, all the sgd trials converge to the optimal value provided by our convex program. concluding remarks we studied various non-convex cnn training problems and introduced exact finite dimensional convex programs. particularly, we provided equivalent convex characterizations for relu cnn architectures in a higher dimensional space. unlike previous studies, we prove that these equivalent characterizations have polynomial complexity in all input parameters and can be globally optimized via convex optimization solvers. furthermore, we show that depending on the type of cnn architecture, the equivalent convex programs can exhibit different norm regularization structures, e.g., $\ell_1$, $\ell_2$, and nuclear norm. thus, we claim that the implicit regularization phenomenon in modern neural network architectures can be precisely characterized via convex regularizers. therefore, extending our results to deeper networks is a promising direction. we also conjecture that the proposed convex approach can be used to analyze popular heuristic techniques for training modern deep learning architectures. for example, after our work, ergen et al.
(2021) studied batch normalization through our convex framework and revealed an implicit patchwise whitening effect. similarly, sahiner et al. (2021) extended our model to vector outputs. more importantly, in light of our results, efficient optimization algorithms can be developed to exactly (or approximately) optimize deep cnn architectures for large scale experiments in practice, which is left for future research. acknowledgements this work was partially supported by the national science foundation under grants iis-1838179 and eccs-2037304, facebook research, adobe research, and the stanford systemx alliance. 6additional numerical results can be found in appendix a.1. 7we use cvx (grant & boyd, 2014) and cvxpy (diamond & boyd, 2016; agrawal et al., 2018) with the sdpt3 solver (tütüncü et al., 2001) to solve convex optimization problems. references akshay agrawal, robin verschueren, steven diamond, and stephen boyd. a rewriting system for convex optimization problems. journal of control and decision, 5(1):42–60, 2018. francis bach. breaking the curse of dimensionality with convex neural networks. the journal of machine learning research, 2017. eugene belilovsky, michael eickenberg, and edouard oyallon. greedy layerwise learning can scale to imagenet. in international conference on machine learning, pp. 583–593, 2019. yoshua bengio, nicolas l roux, pascal vincent, olivier delalleau, and patrice marcotte. convex neural networks. in advances in neural information processing systems, pp. 123–130, 2006. guy blanc, neha gupta, gregory valiant, and paul valiant. implicit regularization for deep neural networks driven by an ornstein-uhlenbeck like process. corr, abs/1904.09080, 2019. url http://arxiv.org/abs/1904.09080. digvijay boob, santanu s dey, and guanghui lan. complexity of training relu neural network, 2018. stephen boyd and lieven vandenberghe. convex optimization. cambridge university press, 2004. thomas m cover.
geometrical and statistical properties of systems of linear inequalities with applications in pattern recognition. ieee transactions on electronic computers, (3):326–334, 1965. steven diamond and stephen boyd. cvxpy: a python-embedded modeling language for convex optimization. journal of machine learning research, 17(83):1–5, 2016. herbert edelsbrunner, joseph o'rourke, and raimund seidel. constructing arrangements of lines and hyperplanes with applications. siam journal on computing, 15(2):341–363, 1986. tolga ergen and mert pilanci. convex duality and cutting plane methods for over-parameterized neural networks. in opt-ml workshop, 2019. tolga ergen and mert pilanci. convex programs for global optimization of convolutional neural networks in polynomial-time. in opt-ml workshop, 2020a. tolga ergen and mert pilanci. revealing the structure of deep neural networks via convex duality, 2020b. tolga ergen and mert pilanci. convex geometry and duality of over-parameterized neural networks, 2020c. tolga ergen and mert pilanci. convex geometry of two-layer relu networks: implicit autoencoding and interpretable models. in silvia chiappa and roberto calandra (eds.), proceedings of the twenty third international conference on artificial intelligence and statistics, volume 108 of proceedings of machine learning research, pp. 4024–4033, online, 26–28 aug 2020d. pmlr. url http://proceedings.mlr.press/v108/ergen20a.html. tolga ergen, arda sahiner, batu ozturkler, john pauly, morteza mardani, and mert pilanci. demystifying batch normalization in relu networks: equivalent convex optimization models and implicit regularization, 2021. robert geirhos, patricia rubisch, claudio michaelis, matthias bethge, felix a wichmann, and wieland brendel. imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. in international conference on learning representations, 2018. miguel angel goberna and marco lópez-cerdá. linear semi-infinite optimization. 1998.
michael grant and stephen boyd. cvx: matlab software for disciplined convex programming, version 2.1. http://cvxr.com/cvx, march 2014. suriya gunasekar, jason d lee, daniel soudry, and nati srebro. implicit bias of gradient descent on linear convolutional networks. in s. bengio, h. wallach, h. larochelle, k. grauman, n. cesa-bianchi, and r. garnett (eds.), advances in neural information processing systems 31, pp. 9461–9471. curran associates, inc., 2018. alex krizhevsky, vinod nair, and geoffrey hinton. the cifar-10 dataset. http://www.cs.toronto.edu/kriz/cifar.html, 2014. yann lecun. the mnist database of handwritten digits. http://yann.lecun.com/exdb/mnist/. yann lecun, yoshua bengio, and geoffrey hinton. deep learning. nature, 521(7553):436–444, 2015. hartmut maennel, olivier bousquet, and sylvain gelly. gradient descent quantizes relu network features, 2018. l. meier, s. van de geer, and p. bühlmann. the group lasso for logistic regression. journal of the royal statistical society, series b, 70:53–71, 2008. behnam neyshabur, ryota tomioka, and nathan srebro. in search of the real inductive bias: on the role of implicit regularization in deep learning. arxiv preprint arxiv:1412.6614, 2014. piyush c ojha. enumeration of linear threshold functions from the lattice of hyperplane intersections. ieee transactions on neural networks, 11(4):839–850, 2000. rahul parhi and robert d. nowak. minimum "norm" neural networks are splines, 2019. mert pilanci and tolga ergen. neural networks are convex regularizers: exact polynomial-time convex optimization formulations for two-layer networks, 2020. nasim rahaman, aristide baratin, devansh arpit, felix draxler, min lin, fred hamprecht, yoshua bengio, and aaron courville. on the spectral bias of neural networks. in international conference on machine learning, pp. 5301–5310. pmlr, 2019. benjamin recht, maryam fazel, and pablo a parrilo. guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization.
siam review, 52(3):471–501, 2010. saharon rosset, grzegorz swirszcz, nathan srebro, and ji zhu. l1 regularization in infinite dimensional feature spaces. in international conference on computational learning theory, pp. 544–558. springer, 2007. walter rudin. principles of mathematical analysis. mcgraw-hill, new york, 1964. arda sahiner, tolga ergen, john m. pauly, and mert pilanci. vector-output relu neural network problems are copositive programs: convex analysis of two layer networks and polynomial-time algorithms. in international conference on learning representations, 2021. url https://openreview.net/forum?id=fgf8qaqpxxg. pedro savarese, itay evron, daniel soudry, and nathan srebro. how do infinite width bounded norm networks look in function space? corr, abs/1902.05040, 2019. url http://arxiv.org/abs/1902.05040. alexander shapiro. semi-infinite programming, duality, discretization and optimality conditions, 2009. karen simonyan and andrew zisserman. very deep convolutional networks for large-scale image recognition. arxiv preprint arxiv:1409.1556, 2014. maurice sion. on general minimax theorems. pacific j. math., 8(1):171–176, 1958. richard p stanley et al. an introduction to hyperplane arrangements. geometric combinatorics, 13. rh tütüncü, kc toh, and mj todd. sdpt3—a matlab software package for semidefinite-quadratic-linear programming, version 3.0. web page http://www.math.nus.edu.sg/mattohkc/sdpt3.html, 2001. ro winder. partitions of n-space by hyperplanes. siam journal on applied mathematics, 14(4). ming yuan and yi lin. model selection and estimation in regression with grouped variables. journal of the royal statistical society: series b (statistical methodology), 68(1):49–67, 2006. chiyuan zhang, samy bengio, moritz hardt, benjamin recht, and oriol vinyals. understanding deep learning requires rethinking generalization. arxiv preprint arxiv:1611.03530, 2016. table of contents
a appendix
a.1 additional numerical results
a.2 constructing hyperplane arrangements in polynomial time
a.3 equivalence of the ℓ1 penalized objectives
a.4 two-layer linear cnns
a.5 extensions to vector outputs
a.6 extensions to arbitrary convex loss functions
a.7 strong duality results
a.8 proof of theorem 2.2
a.9 proof of theorem 3.1
a.10 proof of theorem 4.1
a appendix in this section, we present additional materials and proofs of the main results that are not included in the main paper due to the page limit. a.1 additional numerical results here, we present additional numerical experiments to further verify our theory. we first perform an experiment with another synthetic dataset, where x ∈ ℝ^{6×15} is generated using a multivariate normal distribution with zero mean and identity covariance, and y = [1, −1, 1, 1, 1, −1]ᵀ. in this case, we use the two-layer cnn model in equation 1 and the corresponding convex program in equation 4. in figure 3, we perform the experiment using m = 5, 8, 15 filters of size h = 10 and stride 5, where we observe that as the number of filters increases, the ratio of the trials converging to the optimal objective value increases as well. in order to apply our convex approach in theorem 2.1 to larger scale experiments, we now introduce an unconstrained version of the convex program in equation 4 as follows

$$\min_{\{c_i, c_i'\}_{i=1}^{p_{\mathrm{conv}}},\ c_i, c_i' \in \mathbb{r}^h}\ \frac{1}{2}\Big\| \sum_{i=1}^{p_{\mathrm{conv}}} \sum_k d(s_i^k) x_k (c_i' - c_i) - y \Big\|_2^2 + \beta \sum_{i=1}^{p_{\mathrm{conv}}} \big( \|c_i\|_2 + \|c_i'\|_2 \big) + \rho \sum_{i=1}^{p_{\mathrm{conv}}} \sum_k \mathbf{1}^t \Big[ \big( -(2 d(s_i^k) - i_n) x_k c_i \big)_+ + \big( -(2 d(s_i^k) - i_n) x_k c_i' \big)_+ \Big], \quad (19)$$

where ρ > 0 is a trade-off parameter. since the problem in equation 19 is in an unconstrained form, we can directly optimize its parameters using conventional algorithms such as sgd. hence, we use pytorch to optimize the parameters of a two-layer cnn architecture using both the non-convex objective in equation 1 and the convex objective in equation 19, where we use the full cifar-10 dataset for binary classification, i.e., (n, d) = (10000, 3072).
in figure 4, we provide the training objective and the test accuracy of each approach with respect to the number of epochs. here, we observe that optimization of the convex formulation achieves a lower training objective and higher test accuracy compared to classical optimization of the non-convex problem. (a) independent realizations with m = 3 (b) independent realizations with m = 8 (c) independent realizations with m = 15. figure 3: training cost of a two-layer cnn (with average pooling) trained with sgd (5 initialization trials) on a synthetic dataset (n = 6, d = 15, h = 10, stride = 5), where the green line with a marker represents the objective value obtained by the proposed convex program in equation 4 and the red line with a marker represents the non-convex objective value in equation 1 of a feasible network with the weights found by the convex program. here, we use markers to denote the total computation time of the convex optimization solver. (a) objective value (b) test accuracy. figure 4: evaluation of two-layer cnns trained with sgd on full cifar-10 (n = 10000, d = 3072, m = 50, h = 12, stride = 4). a.2 constructing hyperplane arrangements in polynomial time in this section, we discuss the number of distinct hyperplane arrangements, i.e., p, and present an algorithm that enumerates all the distinct arrangements in polynomial time. we first consider the number of all distinct sign patterns sign(xw) for all w ∈ ℝ^d. this number corresponds to the number of regions in a partition of ℝ^d by hyperplanes that pass through the origin and are perpendicular to the rows of x. here, one can replace the dimensionality d with the rank of the data matrix x, denoted as r, without loss of generality. let us first introduce the singular value decomposition of x in compact form as $x = u \sigma v^t$, where $u \in \mathbb{r}^{n \times r}$, $\sigma \in \mathbb{r}^{r \times r}$, and $v \in \mathbb{r}^{d \times r}$.
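the region count at the heart of appendix a.2 — $p_{n,r} \le 2\sum_{k=0}^{r-1}\binom{n-1}{k}$, from cover (1965) and ojha (2000), with equality in general position — can be verified by brute force on a tiny example. the data rows below are made up, and the dense angle sweep is only a crude stand-in for the exact enumeration algorithm of edelsbrunner et al. (1986):

```python
import math

def cover_count(n, r):
    """Regions cut out of R^r by n homogeneous hyperplanes in general
    position: 2 * sum_{k=0}^{r-1} C(n-1, k)."""
    return 2 * sum(math.comb(n - 1, k) for k in range(r))

# Rows of X define the hyperplanes {w : x_i^T w = 0} in R^2.
X = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

patterns = set()
steps = 2000
for t in range(steps):
    theta = 0.0005 + 2.0 * math.pi * t / steps  # offset avoids boundary angles
    w = (math.cos(theta), math.sin(theta))
    patterns.add(tuple(1 if x[0] * w[0] + x[1] * w[1] >= 0 else -1 for x in X))

print(cover_count(3, 2), len(patterns))
```

for these three lines through the origin, both the formula and the sweep report six distinct sign patterns, matching the claim that the count depends only on n and the rank r, not on the ambient dimension d.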
then, for a given vector w ∈ ℝ^d, xw = uw', where $w' = \sigma v^t w$, w' ∈ ℝ^r. hence, the number of distinct sign patterns sign(xw) over all possible w ∈ ℝ^d is equal to the number of sign patterns sign(uw') over all possible w' ∈ ℝ^r. consider an arrangement of n hyperplanes in ℝ^r, where n ≥ r. let us denote the number of regions in this arrangement by $p_{n,r}$. in ojha (2000); cover (1965), it is shown that this number satisfies

$$p_{n,r} \le 2 \sum_{k=0}^{r-1} \binom{n-1}{k}.$$

for hyperplanes in general position, the above inequality is in fact an equality. in edelsbrunner et al. (1986), the authors present an algorithm that enumerates all possible hyperplane arrangements in $o(n^r)$ time, which can be used to construct the data for the convex programs we present throughout the paper. a.3 equivalence of the ℓ1 penalized objectives in this section, we prove the equivalence between the original problems with ℓ2 regularization and their ℓ1 penalized versions. we also note that similar equivalence results were presented in savarese et al. (2019); neyshabur et al. (2014); ergen & pilanci (2019; 2020c;d). we start with the equivalence between equation 1 and equation 2. lemma a.1. the following two problems are equivalent:

$$\min_{\{u_j, w_j\}_{j=1}^m}\ \frac{1}{2}\Big\| \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j - y \Big\|_2^2 + \frac{\beta}{2} \sum_{j=1}^m \big( \|u_j\|_2^2 + w_j^2 \big) = \min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2, \forall j}\ \frac{1}{2}\Big\| \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j - y \Big\|_2^2 + \beta \sum_{j=1}^m |w_j|.$$

proof of lemma a.1. we rescale the parameters as $\bar{u}_j = \gamma_j u_j$ and $\bar{w}_j = w_j / \gamma_j$, for any $\gamma_j > 0$. then, the output becomes

$$\sum_k (x_k \bar{u}_j)_+ \bar{w}_j = \sum_k (x_k u_j \gamma_j)_+ \frac{w_j}{\gamma_j} = \sum_k (x_k u_j)_+ w_j,$$

which proves that the scaling does not change the network output. in addition to this, we have the following basic inequality

$$\frac{1}{2} \big( \|u_j\|_2^2 + w_j^2 \big) \ge \|u_j\|_2 |w_j|,$$

where equality is achieved when the scaling choice $\gamma_j = \sqrt{|w_j| / \|u_j\|_2}$ is used. since the scaling operation does not change the right-hand side of the inequality, we can set $\|u_j\|_2 = 1, \forall j$. therefore, the right-hand side becomes $\|w\|_1$.
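the scaling step in the proof of lemma a.1 above is just the am-gm inequality $\frac{1}{2}(\|u_j\|_2^2 + w_j^2) \ge \|u_j\|_2\,|w_j|$, with equality at $\gamma_j = \sqrt{|w_j|/\|u_j\|_2}$. a quick numerical sanity check (the values are arbitrary):

```python
import math

u_norm, w = 3.0, 0.5  # ||u_j||_2 and the output weight w_j

def decay(gamma):
    # Weight decay of the rescaled pair (gamma * u_j, w_j / gamma);
    # the network output is unchanged by this rescaling.
    return 0.5 * ((gamma * u_norm) ** 2 + (w / gamma) ** 2)

gamma_star = math.sqrt(abs(w) / u_norm)
grid_min = min(decay(0.01 + 0.001 * k) for k in range(5000))

print(decay(gamma_star), u_norm * abs(w), grid_min)
```

the grid minimum never drops below $\|u_j\|_2\,|w_j|$ and the optimal rescaling attains it, which is why minimizing over the invariant rescaling collapses weight decay to the $\ell_1$ penalty $\beta\sum_j |w_j|$.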
now, let us consider a modified version of the problem, where the unit norm equality constraint is relaxed as $\|u_j\|_2 \le 1$. let us also assume that for a certain index j, we obtain $\|u_j\|_2 < 1$ with $w_j \ne 0$ as an optimal solution. this shows that the unit norm inequality constraint is not active for $u_j$, and hence removing the constraint for $u_j$ will not change the optimal solution. however, when we remove the constraint, $\|u_j\|_2 \to \infty$ reduces the objective value since it yields $w_j = 0$. therefore, we have a contradiction, which proves that all the constraints that correspond to a nonzero $w_j$ must be active for an optimal solution. this also shows that replacing $\|u_j\|_2 = 1$ with $\|u_j\|_2 \le 1$ does not change the solution to the problem. next, we prove the equivalence between equation 8 for l = 3 and equation 30. lemma a.2. the following two problems are equivalent:

$$\min_{\{u_j, w_{1j}, w_{2j}\}_{j=1}^m,\ u_j \in \mathcal{b}_2, \forall j}\ \frac{1}{2}\Big\| \sum_{j=1}^m (x u_j w_{1j})_+ w_{2j} - y \Big\|_2^2 + \frac{\beta}{2} \sum_{j=1}^m \big( \|w_{1j}\|_2^2 + w_{2j}^2 \big) = \min_{\{u_j, w_{1j}, w_{2j}\}_{j=1}^m,\ u_j, w_{1j} \in \mathcal{b}_2, \forall j}\ \frac{1}{2}\Big\| \sum_{j=1}^m (x u_j w_{1j})_+ w_{2j} - y \Big\|_2^2 + \beta \sum_{j=1}^m |w_{2j}|.$$

proof of lemma a.2. we rescale the parameters as $\bar{w}_{1j} = \gamma_j w_{1j}$ and $\bar{w}_{2j} = w_{2j} / \gamma_j$, for any $\gamma_j > 0$. then, the output becomes

$$(x u_j \bar{w}_{1j})_+ \bar{w}_{2j} = (x u_j w_{1j} \gamma_j)_+ \frac{w_{2j}}{\gamma_j} = (x u_j w_{1j})_+ w_{2j},$$

which proves that the scaling does not change the network output. in addition to this, we have the following basic inequality

$$\frac{1}{2} \big( \|w_{1j}\|_2^2 + w_{2j}^2 \big) \ge \|w_{1j}\|_2 |w_{2j}|,$$

where equality is achieved when the scaling choice $\gamma_j = \sqrt{|w_{2j}| / \|w_{1j}\|_2}$ is used. since the scaling operation does not change the right-hand side of the inequality, we can set $\|w_{1j}\|_2 = 1, \forall j$. therefore, the right-hand side becomes $\|w_2\|_1$. the rest of the proof directly follows from the proof of lemma a.1. a.4 two-layer linear cnns we now consider two-layer linear cnns, for which the training problem is

$$\min_{\{u_j, w_j\}_{j=1}^m}\ \frac{1}{2}\Big\| \sum_{j=1}^m \sum_k x_k u_j w_{jk} - y \Big\|_2^2 + \frac{\beta}{2} \sum_{j=1}^m \big( \|u_j\|_2^2 + \|w_j\|_2^2 \big). \quad (20)$$

theorem a.1. (pilanci & ergen, 2020) the equivalent convex program for equation 20 is

$$\min_{\{z_k\}_{k=1}^K,\ z_k \in \mathbb{r}^h}\ \frac{1}{2}\Big\| \sum_{k=1}^K x_k z_k - y \Big\|_2^2 + \beta \big\| [z_1\ \dots\ z_K] \big\|_*.$$

proof of theorem a.1.
we first apply a rescaling (as in lemma a.1) to the primal problem in equation 20 as follows

$$\min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2}\ \frac{1}{2}\Big\| \sum_{j=1}^m \sum_k x_k u_j w_{jk} - y \Big\|_2^2 + \beta \sum_{j=1}^m \|w_j\|_2. \quad (21)$$

then, taking the dual with respect to the output layer weights $w_j$ yields

$$\max_v\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.} \quad \max_{u \in \mathcal{b}_2} \sum_{k=1}^K \big( v^t x_k u \big)^2 \le \beta^2. \quad (22)$$

let us then reparameterize the problem above as follows

$$\max_{m, v}\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.} \quad \sigma_{\max}(m) \le \beta,\ \ m = [x_1^t v\ \dots\ x_K^t v],$$

where $\sigma_{\max}(m)$ represents the maximum singular value of m. then the lagrangian is as follows

$$l(\lambda, z, m, v) = -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 + \lambda \big( \beta - \sigma_{\max}(m) \big) + \mathrm{trace}(z^t m) - \sum_k v^t x_k z_k,$$

where λ ≥ 0. then maximizing over m and v yields the following dual form

$$\min_{\{z_k\}_{k=1}^K,\ z_k \in \mathbb{r}^h}\ \frac{1}{2}\Big\| \sum_{k=1}^K x_k z_k - y \Big\|_2^2 + \beta \big\| [z_1\ \dots\ z_K] \big\|_*,$$

where $\|z\|_* = \sum_i \sigma_i(z)$ is the ℓ1 norm of singular values, i.e., the nuclear norm. we next consider the regularized training problem for two-layer circular cnns as follows

$$\min_{\{u_j, w_j\}_{j=1}^m}\ \frac{1}{2}\Big\| \sum_{j=1}^m x u_j w_j - y \Big\|_2^2 + \frac{\beta}{2} \sum_{j=1}^m \big( \|u_j\|_2^2 + w_j^2 \big), \quad (23)$$

where $u_j \in \mathbb{r}^{d \times d}$ is a circulant matrix generated by a circular shift modulo d using $u_j \in \mathbb{r}^h$. theorem a.2. (pilanci & ergen, 2020) the equivalent convex program for equation 23 is

$$\min_{z \in \mathbb{c}^d}\ \frac{1}{2}\big\| \tilde{x} z - y \big\|_2^2 + \beta \sqrt{d}\, \|z\|_1,$$

where $\tilde{x} = x f$ and $f \in \mathbb{c}^{d \times d}$ is the dft matrix. proof of theorem a.2. we first apply a rescaling (as in lemma a.1) to the primal problem in equation 23 as follows

$$\min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2}\ \frac{1}{2}\Big\| \sum_{j=1}^m x u_j w_j - y \Big\|_2^2 + \beta \sum_{j=1}^m |w_j|,$$

and then taking the dual with respect to the output layer weights $w_j$ yields

$$\max_v\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.} \quad \max_{d \in \mathcal{d}} \dots,$$

where $\mathcal{d} := \{d : \|d\|_F^2 \le d\}$. in the problem above, we use the eigenvalue decomposition $u = f d f^h$, where $f \in \mathbb{c}^{d \times d}$ is the dft matrix and $d \in \mathbb{c}^{d \times d}$ is a diagonal matrix defined as $d := \mathrm{diag}(\sqrt{d}\, f u)$. we also note that the unit norm constraint in the primal problem, i.e., $u_j \in \mathcal{b}_2$, is equivalent to $d_j \in \mathcal{d}$, since $d_j = \mathrm{diag}(\sqrt{d}\, f u_j)$ and $\|d_j\|_F^2 = d \|u_j\|_2^2$ due to the properties of circulant matrices. now let us first define a variable change as $\tilde{x} = x f$. then, the problem above can be equivalently written as

$$\max_v\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.}$$
$\max_{d \in \mathcal{d}} \dots$. since d is a diagonal matrix with a norm constraint on its diagonal entries, for an arbitrary vector $s \in \mathbb{c}^n$, we have

$$\sum_i |s_i|^2 |d_{ii}|^2 \le s_{\max}^2 \sum_i |d_{ii}|^2 = s_{\max}^2\, d,$$

where $s_{\max} := \max_i |s_i|$. if we denote the maximizing index as $i_{\max} := \arg\max_i |s_i|$, then the upper bound is achieved when $d_{ii} = \sqrt{d}$ if $i = i_{\max}$ and $d_{ii} = 0$ otherwise. using this observation, the problem above can be further simplified as

$$\max_v\ -\frac{1}{2}\|v - y\|_2^2 + \frac{1}{2}\|y\|_2^2 \quad \text{s.t.} \quad \sqrt{d}\, \max_i \big| (\tilde{x}^h v)_i \big| \le \beta.$$

then, taking the dual of this problem gives the following

$$\min_{z \in \mathbb{c}^d}\ \frac{1}{2}\big\| \tilde{x} z - y \big\|_2^2 + \beta \sqrt{d}\, \|z\|_1.$$

a.5 extensions to vector outputs here, we present the extension of our approach to vector outputs. to keep the notation and presentation simple, we consider the vector output version of the two-layer linear cnn model in section a.4. the training problem is as follows

$$\min_{\{u_j, \{w_{jk}\}_k\}_{j=1}^m}\ \frac{1}{2}\Big\| \sum_{j=1}^m \sum_k x_k u_j w_{jk}^t - y \Big\|_F^2 + \frac{\beta}{2} \sum_{j=1}^m \Big( \|u_j\|_2^2 + \sum_k \|w_{jk}\|_2^2 \Big).$$

the corresponding dual problem is given by

$$\max_v\ -\frac{1}{2}\|v - y\|_F^2 + \frac{1}{2}\|y\|_F^2 \quad \text{s.t.} \quad \max_{u \in \mathcal{b}_2} \sum_{k=1}^K \|v^t x_k u\|_2^2 \le \beta^2.$$

the maximizers of the dual are the maximal eigenvectors of $\sum_{k=1}^K x_k^t v v^t x_k$, which are optimal filters. we now focus on the dual constraint as in the proof of theorem 2.1:

$$\max_{u \in \mathcal{b}_2} \dots = \max_{u, s, g_k \in \mathcal{b}_2} \sum_k s_k g_k^t v^t x_k u = \max_{s, g_k \in \mathcal{b}_2} \sum_k s_k \langle v, x_k u g_k^t \rangle = \max \sum_k s_k \langle v, x_k g_k \rangle = \max \dots$$

then, the rest of the derivations directly follow section a.4. a.6 extensions to arbitrary convex loss functions in this section, we first show the procedure to create an optimal standard cnn architecture using the optimal weights provided by the convex program. then, we extend our derivations to arbitrary convex loss functions. in order to keep our derivations simple and clear, we use the regularized two-layer architecture in equation 1. for a given convex loss function ℓ(·, y), the regularized training problem can be stated as follows

$$p_1^* = \min_{\{u_j, w_j\}_{j=1}^m}\ \ell\Big( \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j,\ y \Big) + \frac{\beta}{2} \sum_{j=1}^m \big( \|u_j\|_2^2 + w_j^2 \big). \quad (25)$$

then, the corresponding finite dimensional convex equivalent is

$$\min_{\{c_i, c_i'\}_{i=1}^{p_{\mathrm{conv}}},\ c_i, c_i' \in \mathbb{r}^h}\ \ell\Big( \sum_k \sum_{i=1}^{p_{\mathrm{conv}}} d(s_i^k) x_k (c_i' - c_i),\ y \Big) + \beta \sum_{i=1}^{p_{\mathrm{conv}}} \big( \|c_i\|_2 + \|c_i'\|_2 \big) \quad (26)$$
s.t.
$(2 d(s_i^k) - i_n) x_k c_i \ge 0,\ (2 d(s_i^k) - i_n) x_k c_i' \ge 0,\ \forall i, k.$ we now define $m^* := \sum_{i=1}^{p_{\mathrm{conv}}} \big( \mathbb{1}[c_i^* \ne 0] + \mathbb{1}[c_i'^* \ne 0] \big)$ using the optimal weights in equation 26. theorem a.3. the convex program equation 26 and the non-convex problem equation 25, where m ≥ m∗, have identical optimal values. moreover, an optimal solution to equation 25 can be constructed from an optimal solution to equation 26 as follows:

$$(u_{j_{1i}}^*, w_{j_{1i}}^*)\ \ \text{if}\ c_i^* \ne 0, \qquad (u_{j_{2i}}^*, w_{j_{2i}}^*)\ \ \text{if}\ c_i'^* \ne 0,$$

where $\{c_i'^*, c_i^*\}_{i=1}^{p_{\mathrm{conv}}}$ are the optimal solutions to equation 26. proof of theorem a.3. we first note that there will be m∗ nonzero vectors among $\{c_i'^*, c_i^*\}$. constructing $\{u_j^*, w_j^*\}_{j=1}^{m^*}$ as stated in the theorem, and plugging into the non-convex objective equation 25, we obtain the value

$$\ell\Big( \sum_{j=1}^{m^*} \sum_k (x_k u_j^*)_+ w_j^*,\ y \Big) + \frac{\beta}{2} \sum_{j=1}^{m^*} \big( \|u_j^*\|_2^2 + w_j^{*2} \big),$$

which is identical to the objective value of the convex program equation 26. since the value of the convex program is equal to the value of its dual $d_1^*$, we conclude that $p_1^* = d_1^*$, which is equal to the value of the convex program equation 26 achieved by the prescribed parameters. we also show that our dual characterization holds for arbitrary convex loss functions:

$$\min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2, \forall j}\ \ell\Big( \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j,\ y \Big) + \beta \sum_{j=1}^m |w_j|, \quad (27)$$

where ℓ(·, y) is a convex loss function. theorem a.4. the dual of equation 27 is given by

$$\max_v\ -\ell^*(v) \quad \text{s.t.} \quad \max_{u \in \mathcal{b}_2} \Big| v^t \sum_k (x_k u)_+ \Big| \le \beta,$$

where ℓ∗ is the fenchel conjugate function defined as $\ell^*(v) = \max_z\ z^t v - \ell(z, y)$. proof of theorem a.4. the proof follows from classical fenchel duality (boyd & vandenberghe, 2004). we first describe equation 27 in an equivalent form as follows

$$\min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2, \forall j,\ z}\ \ell(z, y) + \beta \sum_{j=1}^m |w_j| \quad \text{s.t.} \quad z = \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j.$$

then the dual function is

$$g(v) = \min_{\{u_j, w_j\}_{j=1}^m,\ u_j \in \mathcal{b}_2, \forall j,\ z}\ \ell(z, y) - v^t z + v^t \sum_{j=1}^m \sum_k (x_k u_j)_+ w_j + \beta \sum_{j=1}^m |w_j|.$$

therefore, using classical fenchel duality (boyd & vandenberghe, 2004) yields the claimed dual form. a.7 strong duality results proposition a.1.
given m ≥ m∗, strong duality holds for equation 3, i.e., $p_1^* = d_1^*$. we first review the basic properties of infinite size neural networks and introduce the technical details needed to derive the dual of equation 3. we refer the reader to rosset et al. (2007); bach (2017) for further details. let us first consider a measurable input space $\mathcal{x}$ with a set of continuous basis functions (i.e., neurons or filters in our context) $\psi_u : \mathcal{x} \to \mathbb{r}$, which are parameterized by u ∈ b₂. next, we use real-valued radon measures with the uniform norm (rudin, 1964). let us consider a signed radon measure denoted as µ. now, we can use µ to formulate an infinite size neural network as $f(x) = \int \psi_u(x)\, d\mu(u)$, where x ∈ 𝒳 is the input. the norm for µ is usually defined as its total variation norm, which is the supremum of $\int g(u)\, d\mu(u)$ over all continuous functions g(u) that satisfy |g(u)| ≤ 1. now, we consider the case where the basis functions are relus, i.e., $\psi_u = (x^t u)_+$. then, the output of a network with finitely many neurons, say m neurons, can be written as $f(x) = \sum_{j=1}^m \psi_{u_j} w_j$, which can be obtained by selecting µ as a weighted sum of dirac delta functions, i.e., $\mu = \sum_{j=1}^m w_j \delta(u - u_j)$. in this case, the total variation norm, denoted as $\|\mu\|_{tv}$, corresponds to $\sum_{j=1}^m |w_j|$. now, we are ready to derive the dual of equation 3, which can be stated as follows (see section 8.6 of goberna & lópez-cerdá (1998) and section 2 of shapiro (2009) for further details)

$$d_1^* \le p_{1,\infty} = \min_\mu\ \frac{1}{2}\Big\| \int \sum_k (x_k u)_+\, d\mu(u) - y \Big\|_2^2 + \beta \|\mu\|_{tv}. \quad (28)$$

although equation 28 involves an infinite dimensional integral form, by caratheodory's theorem, we know that the integral can be represented as a finite summation, to be more precise, a summation of at most n + 1 dirac delta functions (rosset et al., 2007). if we denote the number of dirac delta functions as m∗, where m∗ ≤ n + 1, then we have

$$p_{1,\infty} = \min_{\{u_j, w_j\}_{j=1}^{m^*},\ u_j \in \mathcal{b}_2, \forall j}\ \frac{1}{2}\Big\| \sum_{j=1}^{m^*} \sum_k (x_k u_j)_+ w_j - y \Big\|_2^2 + \beta \sum_{j=1}^{m^*} |w_j|,$$

provided that m ≥ m∗.
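as a concrete illustration of the measure-theoretic view above, a finite-width relu network is recovered by taking µ to be a finite sum of dirac atoms, in which case the integral collapses to a sum and the total variation norm to $\sum_j |w_j|$. the weights and inputs below are made up:

```python
def psi(x, u):
    # ReLU basis function psi_u(x) = (x^T u)_+
    return max(sum(a * b for a, b in zip(x, u)), 0.0)

# mu = sum_j w_j * delta(u - u_j): a finite atomic (Dirac) measure,
# stored as (u_j, w_j) pairs.
atoms = [([1.0, -1.0], 2.0), ([0.5, 0.5], -1.5)]

def f(x):
    # The integral of psi_u(x) d mu(u) reduces to a finite sum over atoms.
    return sum(w * psi(x, u) for u, w in atoms)

tv_norm = sum(abs(w) for _, w in atoms)  # ||mu||_TV
print(f([2.0, 1.0]), tv_norm)
```

caratheodory's theorem, as used in the argument above, guarantees that an optimal µ never needs more than n + 1 such atoms, which is what converts the infinite dimensional problem p_{1,∞} back into a finite-width network.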
we now need to show that strong duality holds, i.e., p∗_{1,∞} = d∗_1. we first note that the semi-infinite problem equation 3 is convex. then, we prove that the optimal value is finite. since β > 0, we know that v = 0 is strictly feasible and achieves objective value 0. moreover, since −‖y − v‖₂² ≤ 0, the optimal objective value p∗_1 is finite. therefore, by theorem 2.2 of shapiro (2009), strong duality holds, i.e., p∗_{1,∞} = d∗_1, provided that the solution set of equation 3 is nonempty and bounded. we also note that the solution set of equation 3 is the euclidean projection of y onto a convex, closed and bounded set, since (x_k u)_+ can be expressed as the union of finitely many convex closed and bounded sets. a.8 proof of theorem 2.2 the proof follows the proof of proposition a.1. the dual of equation 6 is as follows: d∗_1 ≤ p_{1,∞} = min_µ
142.905,
230.4710828,
215.6351785,
247.5358828
] |
3KUfbI9_DQE.pdf | 2,023 | 0 | distributionally robust post-hoc classifiers under prior shifts jiaheng wei∗ † uc santa cruz harikrishna narasimhan google research ehsan amid google research wen-sheng chu google research yang liu uc santa cruz abhishek kumar † google research abstract the generalization ability of machine learning models degrades significantly when the test distribution shifts away from the training distribution. we investigate the problem of training models that are robust to shifts caused by changes in the distribution of class-priors or group-priors. the presence of skewed training priors can often lead to the models overfitting to spurious features. unlike existing methods, which optimize for either the worst or the average performance over classes or groups, our work is motivated by the need for finer control over the robustness properties of the model. we present an extremely lightweight post-hoc approach that performs scaling adjustments to predictions from a pre-trained model, with the goal of minimizing a distributionally robust loss around a chosen target distribution. these adjustments are computed by solving a constrained optimization problem on a validation set and applied to the model during test time. our constrained optimization objective is inspired from a natural notion of robustness to controlled distribution shifts. our method comes with provable guarantees and empirically makes a strong case for distributional robust post-hoc classifiers. an empirical implementation is available at https://github.com/weijiaheng/drops. introduction distribution shift, a problem characterized by the shift of test distribution away from the training distribution, deteriorates the generalizability of machine learning models and is a major challenge for successfully deploying these models in the wild. we are specifically interested in distribution shifts resulting from changes in marginal class priors or group priors from training to test. 
this is often caused by a skewed distribution of classes or groups in the training data, and vanilla empirical risk minimization (erm) can lead to models overfitting to spurious features. these spurious features seem to be predictive on the training data but do not generalize well to the test set. for example, the background can act as a spurious feature for predicting the object of interest in images, e.g., camels in a desert background, water-birds in water background (sagawa et al., 2020). distributionally robust optimization (dro) (ben-tal et al., 2013; duchi et al., 2016; duchi & namkoong, 2018; sagawa et al., 2020) is a popular framework to address this problem which formulates a robust optimization problem over class- or group-specific losses. the common metrics of interest in the dro methods are either the average accuracy or the worst accuracy over classes/groups (menon et al., 2021; jitkrittum et al., 2022; rosenfeld et al., 2022; piratla et al., 2022; sagawa et al., 2020; zhai et al., 2021; xu et al., 2020; kirichenko et al., 2022). however, these metrics only cover the two ends of the full spectrum of distribution shifts in the priors. we are instead motivated by the need to measure the robustness of the model at various points on the spectrum of distribution shifts. to this end, we consider applications where we are provided a target prior distribution (that could either come from a practitioner or default to the uniform distribution), and would like to train a ∗work done during an internship at google research, brain team. †correspondence to: jiaheng wei <jiahengwei@ucsc.edu>, abhishek kumar <abhishk@google.com>. model that is robust to varying distribution shifts around this prior. instead of taking the conventional approach of optimizing for either the average accuracy or the worst-case accuracy, we seek to maximize the minimum accuracy within a δ-radius ball around the specified target distribution. 
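to make the δ-ball objective concrete, here is a small illustrative sketch (not the paper's drops algorithm, whose ball may use a different divergence): for a linear metric such as accuracy, the worst case over an ℓ1-ball of priors around the target can be computed in closed form by moving mass from the most accurate classes onto the least accurate one.

```python
import numpy as np

def worst_case_accuracy(acc, target_prior, delta):
    """Minimize p @ acc over priors p in the simplex with ||p - target_prior||_1 <= delta.

    For a linear objective the adversary moves up to delta/2 total mass
    from the most accurate classes onto the least accurate one.
    delta = 0 recovers the average accuracy under the target prior;
    delta large enough recovers the worst-case (minimum) class accuracy.
    """
    acc = np.asarray(acc, dtype=float)
    p = np.asarray(target_prior, dtype=float).copy()
    budget = delta / 2.0
    worst = int(np.argmin(acc))
    # take mass from donor classes in order of decreasing accuracy
    for j in np.argsort(-acc):
        if j == worst or budget <= 0:
            continue
        move = min(p[j], budget)
        p[j] -= move
        p[worst] += move
        budget -= move
    return float(p @ acc)
```

varying `delta` traces out the interpolation between average and worst-case performance described above.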
this strategy allows us to encourage generalization on a spectrum of controlled distribution shifts governed by the parameter δ. when δ = 0, our objective is simply the average accuracy for the specified target priors, and when δ → ∞, it reduces to the model's worst-case accuracy, thus providing a natural way to interpolate between the two extreme goals of average and worst-case optimization. to train a classifier that performs well on the prescribed distributionally robust objective, we propose a fast and extremely lightweight post-hoc method that learns scaling adjustments to predictions from a pre-trained model. these adjustments are computed by solving a constrained optimization problem on a validation set, and then applied to the model during evaluation time. a key advantage of our method is that it is able to reuse the same pretrained model for different robustness requirements by simply scaling the model predictions. this is in contrast to several existing dro methods that train all model parameters using the robust optimization loss (sagawa et al., 2020; piratla et al., 2022), which requires group annotations for the training data and requires careful regularization to make it work with overparameterized models (sagawa et al., 2020). on the other hand, our approach only needs group annotations for a smaller held-out set and works by only scaling the model predictions of a pre-trained model at test time. our method also comes with provable convergence guarantees. we apply our method on standard benchmarks for class imbalance and group dro, and show that it compares favorably to the existing methods when evaluated on a range of distribution shifts away from the target prior distribution. background we are primarily interested in two specific prior shifts for distributional robustness of classifiers. in this section, we briefly introduce the problem setting of the two prior shifts and set the notation.
class-level prior shifts. we are interested in a multi-class classification problem with instance space x and output space [m] = {1, . . . , m}. let d denote the underlying data distribution over x × [m]; the random variables of instance x and label y satisfy (x, y) ∼ d. we define the conditional-class probability as η_y(x) = p(y = y | x = x) and the class priors π_y = p(y = y); note that π_y = e[η_y(x)]. we use u = [1/m, . . . , 1/m] to denote the uniform prior over m classes. our goal is then to learn a multi-class classifier h : x → [m] that maps an instance x ∈ x to one of m classes. we will do so by first learning a scoring function f : x → ∆_m that estimates the conditional-class probability for a given instance, and construct the classifier by predicting the class with the highest score: h(x) = arg max_{j∈[m]} f_j(x). we measure the performance of a scoring function using a loss function ℓ : [m] × ∆_m → r_+ and measure the per-class loss using ℓ_i(f) := e[ℓ(y, f(x)) | y = i]. let {(x_i, y_i)}_{i=1}^n be a set of training data samples. the empirical estimate of the training set prior is ˆπ_i := (1/n) Σ_{j∈[n]} 1(y_j = i), where 1(·) is the indicator function. in class prior shift, the class prior probabilities at test time shift away from ˆπ. a special case of such class-level prior shifts includes class-imbalanced learning (lin et al., 2017; cui et al., 2019; cao et al., 2019; ren et al., 2020; menon et al., 2021), where ˆπ is a long-tailed distribution while the class prior at test time is usually chosen to be the uniform distribution. regular erm tends to focus more on the majority classes at the expense of ignoring the loss of the minority classes. recent work (menon et al., 2021) uses temperature-scaled logit adjustment with training class priors to adapt the model for average class accuracy.
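the logit adjustment of menon et al. (2021) referenced above can be sketched in a few lines (an illustration of that baseline, not of the method proposed in this paper; names are ours): subtracting τ · log ˆπ from the logits counteracts the head-class bias induced by a long-tailed training prior.

```python
import numpy as np

def logit_adjusted_predict(logits, train_priors, tau=1.0):
    """temperature-scaled logit adjustment: argmax_j f_j(x) - tau * log(pi_j).

    classes with a large training prior receive a larger penalty, so the
    classifier stops defaulting to head classes on ambiguous inputs.
    """
    adjusted = np.asarray(logits, dtype=float) - tau * np.log(np.asarray(train_priors))
    return adjusted.argmax(axis=-1)
```

with a 90/10 training prior, an ambiguous input whose raw logits slightly favor the head class flips to the tail class after adjustment.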
our method also applies post-hoc adjustments to model probabilities, but our goal differs from (menon et al., 2021) as we care about varying distribution shifts around the uniform prior and the scaling adjustments are learned using a held-out set to optimize for a constrained robust loss. group-level prior shifts. the notion of groups arises when each data point (x, y) is associated with some attribute a ∈ a that is spuriously correlated with the label. this is used to form m = |a| × |y| groups as the cross-product of |a| attributes and |y| classes. the data distribution d is taken to be a mixture of m groups with mixture prior probabilities π, and each group-conditional distribution given by d_j, j ∈ [m]. in this scenario, we have n training samples {(x_i, y_i)}_{i=1}^n drawn i.i.d. from d, with empirical group prior probabilities ˆπ. for skewed group-prior probabilities ˆπ, regular erm is vulnerable to spurious correlations between the attributes and labels, and the accuracy degrades when the test data comes from a shifted group prior (e.g., balanced groups). domain-aware methods typically assume that the attributes are available for the training examples and optimize for the worst or average group loss (sagawa et al., 2020; piratla et al., 2022). however, recent work (rosenfeld et al., 2022; kirichenko et al., 2022) has observed that erm on skewed group priors is able to learn core features (in addition to spurious features), and training a linear classifier on top of erm-learned features with a small balanced held-out set works quite well. our proposed method is aligned with this recent line of work in assuming access to only a small held-out set with group annotations.
however, we differ in two aspects: (i) our method is more lightweight and works by only scaling the model predictions post-hoc during test time, (ii) the scaling adjustments are learned to allow more control over the desired robustness properties than implicitly targeting the worst or average accuracies as in (kirichenko et al., 2022). evaluation metrics under prior shifts. typical evaluation metrics used under prior shifts are: • mean: this evaluation metric assigns uniform weights for each class or group, measuring the average class- or group-level test accuracy. | 2 | [
119.339,
549.2720784,
289.8588616,
559.2346784
] |
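the "mean" evaluation metric described at the end of the row above has a direct implementation (a sketch with names of our choosing): it is ordinary accuracy reweighted to a uniform prior over classes or groups.

```python
import numpy as np

def mean_group_accuracy(y_true, y_pred, groups):
    """unweighted average of per-group accuracies, i.e. accuracy
    evaluated under a uniform prior over groups (or classes)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = [
        (y_pred[groups == g] == y_true[groups == g]).mean()
        for g in np.unique(groups)
    ]
    return float(np.mean(per_group))
```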
3SV-ZePhnZM.pdf | 2,021 | 0 | incremental few-shot learning via vector quantization in deep embedded space kuilin chen department of mechanical and industrial engineering university of toronto toronto, ontario, canada kuilin.chen@mail.utoronto.ca chi-guhn lee department of mechanical and industrial engineering university of toronto toronto, ontario, canada cglee@mie.utoronto.ca abstract the capability of incrementally learning new tasks without forgetting old ones is a challenging problem due to catastrophic forgetting. this challenge becomes greater when novel tasks contain very few labelled training samples. currently, most methods are dedicated to class-incremental learning and rely on sufficient training data to learn additional weights for newly added classes. those methods cannot be easily extended to incremental regression tasks and could suffer from severe overfitting when learning few-shot novel tasks. in this study, we propose a nonparametric method in deep embedded space to tackle incremental few-shot learning problems. the knowledge about the learned tasks is compressed into a small number of quantized reference vectors. the proposed method learns new tasks sequentially by adding more reference vectors to the model using few-shot samples in each novel task. for classification problems, we employ the nearest neighbor scheme to make classification on sparsely available data and incorporate intra-class variation, less forgetting regularization and calibration of reference vectors to mitigate catastrophic forgetting. in addition, the proposed learning vector quantization (lvq) in deep embedded space can be customized as a kernel smoother to handle incremental few-shot regression tasks. experimental results demonstrate that the proposed method outperforms other state-of-the-art methods in incremental learning. 
introduction incremental learning is a learning paradigm that allows the model to continually learn new tasks on novel data, without forgetting how to perform previously learned tasks (cauwenberghs & poggio, 2001; kuzborskij et al., 2013; mensink et al., 2013). the capability of incremental learning becomes more important in real-world applications, in which the deployed models are exposed to possible out-of-sample data. typically, hundreds of thousands of labelled samples in new tasks are required to re-train or fine-tune the model (rebuffi et al., 2017). unfortunately, it is impractical to gather sufficient samples of new tasks in real applications. in contrast, humans can learn new concepts from just one or a few examples, without losing old knowledge. therefore, it is desirable to develop algorithms to support incremental learning from very few samples. while a natural approach for incremental few-shot learning is to fine-tune part of the base model using novel training data (donahue et al., 2014; girshick et al., 2014), the model could suffer from severe over-fitting on new tasks due to a limited number of training samples. moreover, simple fine-tuning also leads to significant performance drop on previously learned tasks, termed as catastrophic forgetting (goodfellow et al., 2014). recent attempts to mitigate the catastrophic forgetting are generally categorized into two streams: memory relay of old training samples (rebuffi et al., 2017; shin et al., 2017; kemker & kanan, 2018) and regularization on important model parameters (kirkpatrick et al., 2017; zenke et al., 2017). however, those incremental learning approaches are developed and tested on unrealistic scenarios where sufficient training samples are available in novel tasks. they may not work well when the training samples in novel tasks are few (tao et al., 2020b). 
to the best of our knowledge, the majority of incremental learning methodologies focus on classification problems and they cannot be extended to regression problems easily. in class-incremental learning, the model has to expand output dimensions to learn n (cid:48) novel classes while keeping the knowledge of existing n classes. parametric models estimate additional classification weights for novel classes, while nonparametric methods compute the class centroids for novel classes. in comparison, output dimensions in regression problems do not change in incremental learning as neither additional weights nor class centroids are applicable to regression problems. besides, we find that catastrophic forgetting in incremental few-shot classification can be attributed to three reasons. first, the model is biased towards new classes and forgets old classes because the model is fine-tuned on new data only (hou et al., 2019; zhao et al., 2020). meanwhile, the prediction accuracy on novel classes is not good due to over-fitting on few-shot training samples. second, features of novel samples could overlap with those of old classes in the feature space, leading to ambiguity among classes in the feature space. finally, features of old classes and classification weights are no longer compatible after the model is fine-tuned with new data. in this paper, we investigate the problem of incremental few-shot learning, where only a few training samples are available in new tasks. a unified model is learned sequentially to jointly recognize all classes or regression targets that have been encountered in previous tasks (rebuffi et al., 2017; wu et al., 2019). to tackle aforementioned problems, we propose a nonparametric method to handle incremental few-shot learning based on learning vector quantization (lvq) (sato & yamada, 1996) in deep embedded space. 
as such, the adverse effects of imbalanced weights in a parametric classifier can be completely avoided (mensink et al., 2013; snell et al., 2017; yu et al., 2020). our contributions are three fold. first, a unified framework is developed, termed as incremental deep learning vector quantization (idlvq), to handle both incremental classification (idlvq-c) and regression (idlvq-r) problems. second, we develop intra-class variance regularization, less forgetting constraints and calibration factors to mitigate catastrophic forgetting in class-incremental learning. finally, the proposed methods achieve state-of-the-art performance on incremental fewshot classification and regression datasets. related work incremental learning: some incremental learning approaches rely on memory replay of old exemplars to prevent forgetting previously learned knowledge. old exemplars can be saved in memory (rebuffi et al., 2017; castro et al., 2018; prabhu et al., 2020) or sampled from generative models (shin et al., 2017; kemker & kanan, 2018; van de ven et al., 2020). however, explicit storage of training samples is not scalable if the number of classes is large. furthermore, it is difficult to train a reliable generative model for all classes from very few training samples. in parallel, regularization approaches do not require old exemplars and impose regularization on network weights or outputs to minimize the change of parameters that are important to old tasks (kirkpatrick et al., 2017; zenke et al., 2017). to avoid quick performance deterioration after learning a sequence of novel tasks in regularization approaches, semantic drift compensation (sdc) is developed by learning an embedding network via triplet loss (schroff et al., 2015) and compensates the drift of class centroids using novel data only (yu et al., 2020). 
in comparison, idlvq-c saves only one exemplar per class and uses saved exemplars to regularize the change in the feature extractor and calibrate the change in reference vectors. few-shot learning: few-shot learning attempts to obtain models for classification or regression tasks with only a few labelled samples. few-shot models are trained on widely-varying episodes of fake few-shot tasks with labelled samples drawn from a large-scale meta-training dataset (vinyals et al., 2016; finn et al., 2017; ravi & larochelle, 2017; snell et al., 2017; sung et al., 2018). meanwhile, recent works attempt to handle novel few-shot tasks while retaining the knowledge of the base task. these methods are referred to as dynamic few-shot learning (gidaris & komodakis, 2018; ren et al., 2019a; gidaris & komodakis, 2019). however, dynamic few-shot learning is different from incremental few-shot learning, because they rely on the entire base training dataset and an extra meta-training dataset during meta-training. in addition, dynamic few-shot learning does not accumulate knowledge for multiple novel tasks sequentially. incremental few-shot learning: prior works on incremental few-shot learning focus on classification problems by computing the weights for novel classes in parametric classifiers, without iterative gradient descent. for instance, the weights of novel classes can be imprinted by normalized prototypes of novel classes, while keeping the feature extractor fixed (qi et al., 2018). since novel weights are computed only with the samples of novel classes, the fixed feature extractor may not be compatible with novel classification weights. more recently, a neural gas network is employed to construct an undirected graph to represent knowledge of old classes (tao et al., 2020b;a). the vertices in the graph are constructed in an unsupervised manner using competitive hebbian learning (fritzke, 1995), while the feature embedding is fixed.
in contrast, idlvq learns both the feature extractor and reference vectors concurrently in a supervised manner. background incremental few-shot learning in this paper, incremental few-shot learning is studied for both classification and regression tasks. for classification tasks, we consider the standard class-incremental setup in the literature. after the model is trained on a base task (t = 1) with sufficient data, the model learns novel tasks sequentially. each novel task contains a number of novel classes with only a few training samples per class. learning a novel task (t > 1) is referred to as an incremental learning session. in task t, we have access only to the training data d^t in the current task and previously saved exemplars (one exemplar per class in this study). each task has a set of classes c^t = {c^t_1, . . . , c^t_{n^t}}, where n^t is the number of classes in task t. in addition, it is assumed that there is no overlap between classes in different tasks, i.e., c^t ∩ c^s = ∅ for t ≠ s. after an incremental learning session, the performance of the model is evaluated on a test set that contains all previously seen classes c = ∪_i c^i. note that our focus is not the multi-task scenario, where a task id is exposed to the model during the test phase and the model is only required to perform a given task one time (van de ven & tolias, 2019). our model is evaluated in a task-agnostic setting, where the task id is not exposed to the model at test time. for regression tasks, we follow a similar setting with a notable difference that the target is real-valued, y ∈ r. in addition, the target values in different tasks do not have to be mutually exclusive, unlike the class-incremental setup. learning vector quantization traditional nonparametric methods, such as nearest neighbors, represent knowledge and make predictions by storing the entire training set. despite their simplicity and effectiveness, they are not scalable to a large-scale base dataset.
typically, incremental learning methods are only allowed to store a small number of exemplars to preserve the knowledge of previously learned tasks. however, randomly selected exemplars may not represent the knowledge in old tasks well. lvq is a classical data compression method that represents the knowledge through a few learned reference vectors (sato & yamada, 1996; seo & obermayer, 2003; biehl et al., 2007). a new sample is classified with the same label as the nearest reference vector in the input space. lvq has been combined with deep feature extractors as an alternative to standard neural networks for better interpretability (de vries et al., 2016; villmann et al., 2017; saralajew et al., 2018). the combinations of lvq and deep feature extractors have been applied to natural language processing (nlp), facial recognition and biometrics (variani et al., 2015; wang et al., 2016; ren et al., 2019b; leng et al., 2015). we notice that lvq is a nonparametric method which is well suited for incremental few-shot learning because the model capacity grows by incorporating more reference vectors to learn new knowledge. for example, incremental learning vector quantization (ilvq) has been developed to learn classification models adaptively from raw features (xu et al., 2012). in this study, we represent the knowledge by learning reference vectors in the feature space through lvq and adapt them in incremental few-shot learning. compared with ilvq by xu et al. (2012), our method does not rely on predefined rules to update reference vectors and can be learned along with deep neural networks in an end-to-end fashion. besides, our method uses a single reference vector for each class, while ilvq automatically assigns different numbers of prototypes for different classes.
methodology incremental deep learning vector quantization the general framework of idlvq for both classification and regression can be derived from a gaussian mixture perspective (ghahramani & jordan, 1994), with a simplified covariance structure and supervised deep representation learning. in the base dataset (t = 1), a raw input x is projected into a feature space f^1 by a deep neural network f_{θ1}, where θ1 denotes the parameters in neural networks. in addition, n^1 reference vectors m^1 = {m^1_1, . . . , m^1_{n^1}} are placed in the feature space f^1, which can be learned to capture the representation of the base dataset. more reference vectors will be added incrementally while learning novel tasks. the marginal distribution p(f_{θ1}(x)) of the feature vector can be described by a gaussian mixture model p(f_{θ1}(x)) = Σ_{i=1}^{n^1} p(i) p(f_{θ1}(x) | i) of n^1 components, where the prior p(i) = 1/n^1 and the component distribution p(f_{θ1}(x) | i) is gaussian. by assuming that each component distribution p(f_{θ1}(x) | i) is isotropic gaussian centered at m^1_i with the same covariance, the posterior distribution of a component given the input is p^1(i|x) = κ(f_{θ1}(x), m^1_i) / Σ_{j=1}^{n^1} κ(f_{θ1}(x), m^1_j), where κ(f_{θ1}(x), m^1_i) = exp(−‖f_{θ1}(x) − m^1_i‖²/γ) is a gaussian kernel and γ is a scale factor. the conditional expectation of the output from a gaussian mixture is ŷ = Σ_{i=1}^{n^1} p^1(i|x) q^1_i, where q^1_i is the reference target associated with reference vector m^1_i. in classification problems, q^1_i is either 0 or 1, indicating whether m^1_i and x have the same label; since each reference vector is assigned to a class at initialization, q^1_i is fixed and does not require learning. meanwhile, q^1_i in regression problems is real-valued and has to be learned. the weights in neural networks θ1, reference vectors m^1, reference targets q^1_i (in regression problems only) and the scale factor γ are learned concurrently by minimizing a loss function between the true label y and the predicted label ŷ.
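the posterior and prediction above reduce to a few lines of numpy (a sketch of the formulas; `refs` plays the role of the reference vectors m^1 and `q` of the reference targets q^1_i):

```python
import numpy as np

def lvq_posterior(f_x, refs, gamma=1.0):
    """p(i|x) proportional to the gaussian kernel exp(-||f(x) - m_i||^2 / gamma)."""
    d2 = ((np.asarray(refs) - np.asarray(f_x)) ** 2).sum(axis=1)
    w = np.exp(-d2 / gamma)
    return w / w.sum()

def lvq_predict(f_x, refs, q, gamma=1.0):
    """conditional expectation: posterior-weighted sum of reference targets."""
    return float(lvq_posterior(f_x, refs, gamma) @ np.asarray(q, dtype=float))
```

a feature vector sitting on a reference vector gets essentially all posterior mass there; one equidistant from two reference vectors splits the mass evenly.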
the proposed idlvq is a nonparametric method as it makes predictions based on similarity to reference vectors, instead of using any regression or classification weights. the capacity of the model grows naturally by adding more reference vectors to learn novel tasks, while the old knowledge is preserved in existing reference vectors. incremental deep learning vector quantization for classification for classification problems, one reference vector is assigned to each class in our study. thus, ŷ represents the predicted probability that an input belongs to a class. the model can be trained to classify data correctly by minimizing the cross-entropy loss l_ce between the predicted probability ŷ and the true label y. although the cross-entropy loss encourages separability of features in base classes, it does not guarantee compact intra-class variation in the feature space. specifically, in an incremental learning session, features of novel classes could overlap with those of previously learned classes. as a result, the overall classification accuracy could deteriorate after incremental learning sessions. a desirable feature embedding leaves a large margin between classes to mitigate overlap in features across old and new classes. inspired by the center loss (wen et al., 2016) to enhance discriminative capability in facial recognition, a regularization term on the intra-class distance to reference vectors is added to get compact intra-class variation: l_intra = Σ_i Σ_{(x,y): y=i} ‖f_{θ1}(x) − m^1_i‖². as such, f_{θ1}(x) is forced to stay close to the reference vector with the same label and naturally moves away from other reference vectors. consequently, features of new classes are more likely to lie in the margin between old classes, mitigating ambiguity in features across different classes. the total loss in training the base task is given by l = l_ce + λ_intra l_intra, where λ_intra is a hyper-parameter to control the weight for the intra-class variation loss. the total loss is differentiable w.r.t.
neural network parameters θ1, reference vectors m^1 = {m^1_1, . . . , m^1_{n^1}} and the scaling factor γ. all parameters in the model can be trained jointly in an end-to-end fashion. in an incremental session (t > 1), a novel dataset d^t contains n^t classes and k^t samples per class (n^t-way k^t-shot). n^t new reference vectors are added and each reference vector is initialized as the centroid of features in a class: m^t_i = (1/k^t) Σ_{k=1}^{k^t} f_{θt}(x_k). the new reference vectors along with the neural network parameters are fine-tuned on d^t to learn new knowledge in task t. to preserve the knowledge from the old tasks during incremental learning, the model should be updated only when necessary. therefore, the cross-entropy loss is not used in incremental learning sessions because it always updates model parameters even if the sample is correctly classified. let m^t_+ be the reference vector with the correct label and m^t_− be the nearest reference vector with a wrong label. for a training sample (x, y) in d^t, the sample is classified correctly if ‖f_{θt}(x) − m^t_+‖² < ‖f_{θt}(x) − m^t_−‖². in this case, the loss should be 0. when ‖f_{θt}(x) − m^t_+‖² ≥ ‖f_{θt}(x) − m^t_−‖², the sample is misclassified. we adapt the margin based loss function l_m from de vries et al. (2016) with a minor modification: l_m = relu( (‖f_{θt}(x) − m^t_+‖² − ‖f_{θt}(x) − m^t_−‖²) / (‖f_{θt}(x) − m^t_+‖² + ‖f_{θt}(x) − m^t_−‖²) ), where relu(·) stands for the rectified linear unit function. the margin based loss leads to slow training convergence because it only updates two reference vectors at a time. however, the adapted margin based loss is well suited to learning from few-shot samples while avoiding unnecessary parameter updates. features for an old class could deviate away from the corresponding reference vector due to changes in θt during incremental learning, leading to catastrophic forgetting.
a forgetting loss l_f is developed to regularize the drift in the feature space: l_f = Σ_{i=1}^{n^{t−1}} ‖f_{θt}(x̄_i) − f_{θt−1}(x̄_i)‖², where x̄_i is the selected exemplar for class i and n^{t−1} denotes the total number of classes in the base task and all previous novel tasks. note that the exemplar x̄_i for class i ∈ [n^{t−1}, n^t] is picked from d^t as the sample whose feature is nearest to m^t_i at the end of each learning session. the total loss in the incremental learning session t is l = l_m + λ_f l_f + λ_intra l_intra, where λ_f and λ_intra are weights for the forgetting loss and intra-class variation loss, respectively. the total loss is optimized w.r.t. neural network parameters θt and the new reference vectors. the reference vectors for previously learned tasks are not updated by novel data to prevent catastrophic forgetting. however, they may not be well suited to represent knowledge and make classifications in the new feature space f^t as the feature embedding is changed with the updated θt. although the true optimal locations of those reference vectors are difficult to estimate without using the entire data from all tasks, they can be calculated approximately using the shift in features of exemplars. considering that the features of an exemplar x̄_i are close to m_i in the feature space, the shift of a reference vector δ^t_i in the new feature space can be approximated by the shift of the exemplar's features: δ^t_i = f_{θt}(x̄_i) − f_{θt−1}(x̄_i). therefore, the reference vectors for previously learned tasks are calibrated as m^t_i = m^{t−1}_i + δ^t_i, where m^{t−1}_i is the uncalibrated reference vector for class i ∈ [1, n^{t−1}]. a test sample, which could be from any seen class, is classified according to the distance to the reference vectors {m^t_1, . . . , m^t_{n^t}}. the pseudo code for idlvq-c is presented in the appendix.
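two pieces of the session-t update can be sketched compactly (our reading of the garbled formulas: the glvq-style margin loss adapted from de vries et al. (2016), and the exemplar-drift calibration; variable names are ours):

```python
import numpy as np

def margin_loss(f_x, m_plus, m_minus):
    """relative-distance margin loss: zero when the correct reference
    vector m_plus is closer than the nearest wrong one m_minus."""
    d_plus = ((np.asarray(f_x) - np.asarray(m_plus)) ** 2).sum()
    d_minus = ((np.asarray(f_x) - np.asarray(m_minus)) ** 2).sum()
    return float(max(0.0, (d_plus - d_minus) / (d_plus + d_minus)))

def calibrate_refs(old_refs, exemplar_feats_old, exemplar_feats_new):
    """shift each frozen reference vector by the drift of its exemplar's
    features after fine-tuning: m_i^t = m_i^{t-1} + delta_i^t."""
    return np.asarray(old_refs) + (
        np.asarray(exemplar_feats_new) - np.asarray(exemplar_feats_old)
    )
```

the loss is exactly 0 on correctly classified samples, so only misclassified samples trigger parameter updates, matching the "update only when necessary" principle above.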
incremental deep learning vector quantization for regression for regression problems, the model is trained to fit regression targets by minimizing the mean squared error (mse) loss l_mse = (y − ŷ)², where y is the real-valued target in the training dataset. the mse loss function is differentiable w.r.t. the neural network weights, reference vectors and targets, and scale factor; therefore, all parameters can be trained jointly in an end-to-end manner. the proposed idlvq-r can also be interpreted as a kernel smoother in deep embedded space. compared with traditional kernel smoothers, such as the nadaraya-watson estimator (nadaraya, 1964), idlvq-r is sparse and hence more scalable as it only relies on a few reference vectors and targets. in an incremental learning session (t > 1), we have access to data d^t that contains k^t pairs of training samples (x^t_i, y^t_i). n^t new reference vectors (n^t ≤ k^t) along with corresponding targets are added to the model to learn new knowledge in the novel task t. we randomly select n^t samples from d^t to initialize reference vectors and targets as follows: m_{i+n^{t−1}} = f_θ(x^t_i),
336.285,
678.1680828,
346.43561028,
690.0138556
] |
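the reference-vector calibration described above (approximating a class vector's drift by the shift of its exemplar's features after the feature extractor is updated) can be sketched in a few lines. a minimal NumPy illustration; the function name and the toy feature extractors in the usage below are ours, not from the paper:

```python
import numpy as np

def calibrate_reference_vectors(old_refs, exemplars, f_old, f_new):
    """Calibrate reference vectors of previously learned classes after the
    feature extractor changes, using the shift of each class exemplar's
    features: delta_i = f_new(x_i) - f_old(x_i), then m_i <- m_i + delta_i."""
    calibrated = []
    for m_prev, x in zip(old_refs, exemplars):
        delta = f_new(x) - f_old(x)   # approximate drift of this class
        calibrated.append(m_prev + delta)
    return np.stack(calibrated)
```

with linear toy extractors, the calibrated vector is simply the old vector plus the exemplar's feature shift, which is the whole idea in miniature.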
3YjQfCLdrzz.pdf | 2,023 | 2 | fosr: first-order spectral rewiring for addressing oversquashing in gnns kedar karhadkar ucla kedar@math.ucla.edu pradeep kr. banerjee mpi mis pradeep@mis.mpg.de guido montúfar ucla & mpi mis montufar@math.ucla.edu abstract graph neural networks (gnns) are able to leverage the structure of graph data by passing messages along the edges of the graph. while this allows gnns to learn features depending on the graph structure, for certain graph topologies it leads to inefficient information propagation and a problem known as oversquashing. this has recently been linked with the curvature and spectral gap of the graph. on the other hand, adding edges to the message-passing graph can lead to increasingly similar node representations and a problem known as oversmoothing. we propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph based on spectral expansion. we combine this with a relational architecture, which lets the gnn preserve the original graph structure and provably prevents oversmoothing. we find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks. introduction graph neural networks (gnns) (gori et al., 2005; scarselli et al., 2008) are a broad class of models which process graph-structured data by passing messages between nodes of the graph. due to the versatility of graphs, gnns have been applied to a variety of domains, such as chemistry, social networks, knowledge graphs, and recommendation systems (zhou et al., 2020; wu et al., 2020). gnns broadly follow a message-passing framework, meaning that each layer of the gnn aggregates the representations of a node and its neighbors, and transforms these features into a new representation for that node. 
the aggregation function used by the gnn layer is taken to be locally permutation-invariant, since the ordering of the neighbors of a node is arbitrary, and its specific form is a key component of the gnn architecture; varying it gives rise to several common gnn variants (kipf and welling, 2017; veličković et al., 2018; li et al., 2015; hamilton et al., 2017; xu et al., 2019). the output of a gnn can be used for tasks such as graph classification or node classification. although gnns are successful in computing dependencies between nodes of a graph, they have been found to suffer from a limited capacity to capture long-range interactions. for a fixed graph, this is caused by a variety of problems depending on the number of layers in the gnn. since graph convolutions are local operations, a gnn with a small number of layers can only provide a node with information from nodes close to itself. for a gnn with l layers, the receptive field of a node (the set of nodes it receives messages from) is exactly the ball of radius l about the node. for small values of l, this results in “underreaching”, and directly limits which functions the gnn can represent. on a related note, the functions representable by gnns with l layers are limited to those computable by l steps of the weisfeiler-lehman (wl) graph isomorphism test (morris et al., 2019; xu et al., 2019; barceló et al., 2020). on the other hand, increasing the number of layers leads to its own set of problems. in contrast to other architectures that benefit from the expressivity of deeper networks, gnns experience a decrease in accuracy as the number of layers increases (li et al., 2018; chen et al., 2020). this phenomenon has partly been attributed to “oversmoothing”, where repeated graph convolutions eventually render node features indistinguishable (li et al., 2018; oono and suzuki, 2020; cai and wang, 2020; zhao and akoglu, 2020; rong et al., 2020; di giovanni et al., 2022).
separate from oversmoothing is the problem of “oversquashing”, first pointed out by alon and yahav (2021). as the number of layers of a gnn increases, information from (potentially) exponentially-growing receptive fields needs to be concurrently propagated at each message-passing step. this leads to a bottleneck that causes oversquashing, when an exponential amount of information is squashed into fixed-size node vectors (alon and yahav, 2021). consequently, for prediction tasks relying on long-range interactions, the gnn can fail. oversquashing usually occurs when there are enough layers in the gnn to reach any node (the receptive fields are large enough), but few enough that the gnn cannot process all of the necessary relations between nodes. hence, for a fixed graph, the problems of underreaching, oversquashing, and oversmoothing occur in three different regimes, depending on the number of layers of the gnn. a common approach to addressing oversquashing is to rewire the input graph, making changes to its edges so that it has fewer structural bottlenecks. a simple approach to rewiring is to make the last layer of the gnn fully adjacent, allowing all nodes to interact with one another (alon and yahav, 2021). alternatively, one can make changes to edges of the input graph, feeding the modified graph into all layers of the gnn (topping et al., 2022; banerjee et al., 2022). the latter approaches can be viewed as optimizing the spectral gap of the input graph for alleviating structural bottlenecks and improving the overall quality of signal propagation across nodes (see figure 1). while these rewiring methods improve the connectivity of the graph, there are drawbacks to making too many modifications to the input. the most obvious problem is that we are losing out on topological information about the original graph. if the structure of the original graph is indeed relevant, adding and removing edges diminishes that benefit to the task.
another issue arises from the smoothing effects of adding edges: if we add too many edges to the input graph, an ordinary gcn will suffer from oversmoothing (li et al., 2018). in other words, if we use this natural approach to rewiring, we experience a trade-off between oversquashing and oversmoothing. this observation, which does not seem to have been pointed out in earlier works, is the main motivation for the approach that we develop in this work. main contributions | 1 | [
108.249,
262.4900784,
229.6712761,
272.4526784
] |
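the idea of adding edges to increase the spectral gap can be illustrated with a brute-force sketch over the normalized Laplacian. note this is a conceptual toy of our own, not the fosr algorithm itself, which uses a cheaper first-order approximation instead of recomputing eigenvalues for every candidate edge:

```python
import numpy as np

def spectral_gap(adj):
    """Spectral gap of the symmetric normalized Laplacian:
    its second-smallest eigenvalue (eigvalsh returns ascending order)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    return np.linalg.eigvalsh(lap)[1]

def add_best_edge(adj):
    """Greedily pick the single non-edge whose addition maximizes
    the spectral gap (brute force over all candidate pairs)."""
    n = len(adj)
    best_gap, best_edge = -np.inf, None
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j] == 0:
                trial = adj.copy()
                trial[i, j] = trial[j, i] = 1
                gap = spectral_gap(trial)
                if gap > best_gap:
                    best_gap, best_edge = gap, (i, j)
    return best_edge
```

on a path graph, the selected edge relieves the structural bottleneck and strictly increases the gap relative to the original graph.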
k1FHgri5y3-.pdf | 2,023 | 1 | sparse random networks for communication-efficient federated learning berivan isik¶∗, francesco pase§∗, deniz gunduz‡, tsachy weissman¶, michele zorzi§ ¶stanford university, §university of padova, ‡imperial college london berivan.isik@stanford.edu, pasefrance@dei.unipd.it abstract one main challenge in federated learning is the large communication cost of exchanging weight updates from clients to the server at each round. while prior work has made great progress in compressing the weight updates through gradient compression methods, we propose a radically different approach that does not update the weights at all. instead, our method freezes the weights at their initial random values and learns how to sparsify the random network for the best performance. to this end, the clients collaborate in training a stochastic binary mask to find the optimal sparse random network within the original one. at the end of the training, the final model is a sparse network with random weights – or a subnetwork inside the dense random network. we show improvements in accuracy, communication (less than 1 bit per parameter (bpp)), convergence speed, and final model size (less than 1 bpp) over relevant baselines on mnist, emnist, cifar10, and cifar-100 datasets, in the low bitrate regime. introduction federated learning (fl) is a distributed learning framework where clients collaboratively train a model by performing local training on their data and by sharing their local updates with a server every few iterations, which in turn aggregates the local updates to create a global model, that is then transmitted to the clients for the next round of training. while being an appealing approach for enabling model training without the need to collect client data at the server, uplink communication of local updates is a significant bottleneck in fl (kairouz et al., 2021). 
this has motivated research in communication-efficient fl strategies (mcmahan et al., 2017a) and various gradient compression schemes via sparsification (lin et al., 2018; wang et al., 2018; barnes et al., 2020; ozfatura et al., 2021; isik et al., 2022), quantization (alistarh et al., 2017; wen et al., 2017; bernstein et al., 2018; mitchell et al., 2022), and low-rank approximation (konečný et al., 2016; vargaftik et al., 2021; 2022; basat et al., 2022). in this work, while aiming for communication efficiency in fl, we take a radically different approach from prior work, and propose a strategy that does not require communication of weight updates. to be more precise, instead of training the weights, (1) the server initializes a dense random network with $d$ weights, denoted by the weight vector $w_{init} = (w^{init}_1, w^{init}_2, \ldots, w^{init}_d)$, using a random seed seed, and broadcasts seed to the clients, enabling them to reproduce the same $w_{init}$ locally, (2) both the server and the clients keep the weights frozen at their initial values $w_{init}$ at all times, (3) clients collaboratively train a probability mask of $d$ parameters $\theta = (\theta_1, \theta_2, \ldots, \theta_d) \in [0, 1]^d$, (4) the server samples a binary mask from the trained probability mask and generates a sparse network with random weights – or a subnetwork inside the initial dense random network – as follows: $w_{final} = \mathrm{bern}(\theta) \odot w_{init}$, where $\mathrm{bern}(\cdot)$ is the bernoulli sampling operation and $\odot$ the element-wise multiplication. we call the proposed framework federated probabilistic mask training (fedpm) and summarize it in figure 1. (∗first two authors contributed equally. work done while f.p. was visiting imperial college london.) at first glance, it may seem surprising that there exist subnetworks inside randomly
this phenomenon has been explored to some extent in prior work (zhou et al., 2019; ramanujan et al., 2020; pensia et al., 2020; diffenderfer & kailkhura, 2020; aladago & torresani, 2021) with different strategies for finding the subnetworks. however, how to find these subnetworks in a fl setting has not attracted much attention so far. some exceptions to this are works by li et al. (2021); vallapuram et al. (2022); mozaffari et al. (2021), which provide improvements in other fl challenges, such as personalization and poisoning attacks, while not being competitive with existing (dense) compression methods such as qsgd (alistarh et al., 2017), drive (vargaftik et al., 2021), and signsgd (bernstein et al., 2018) in terms of accuracy under the same communication budget. in this work, we propose a stochastic way of finding such subnetworks while reaching higher accuracy at a reduced communication cost – less than 1 bit per parameter (bpp). figure 1: extracting a randomly weighted sparse network using the trainable probability mask θt in the forward-pass of round t (for clients and the server). in practice, clients collaboratively train continuous scores s ∈ rd, and then at inference time, the clients (or the server) find θt = sigmoid(st) ∈ [0, 1]d. we skip this step in the figure for the sake of simplicity. in addition to the accuracy and communication gains, our framework also provides an efficient representation of the final model post-training by requiring less than 1 bpp to represent (i) the random seed that generates the initial weights winit, and (ii) a sampled binary vector bern(θ) (computed with the trained θ). therefore, the final model enjoys a memory-efficient deployment – a crucial feature for machine learning at power-constrained edge devices. another advantage our framework brings is the privacy amplification under some settings, thanks to the stochastic nature of our training strategy. 
our contributions can be summarized as follows: (1) we propose a fl framework, in which the clients do not train the model weights, but instead train a stochastic binary mask to be used in sparsifying the dense network with random weights. this differs from the standard training approaches in the literature. (2) our framework provides efficient communication from clients to the server by requiring (less than) 1 bpp per client while yielding faster convergence and higher accuracy than the baselines. (3) we propose a bayesian aggregation strategy at the server side to better deal with partial client participation and non-iid data splits. (4) the final model (a sparse network with random weights) can be efficiently represented with a random seed and a binary mask which requires (less than) 1 bpp – at least 32× more efficient storage and communication of the final model with respect to standard fl strategies. (5) we demonstrate the efficacy of our strategy on mnist, emnist, cifar-10, and cifar-100 datasets under both iid and non-iid data splits; and show improvements in accuracy, bitrate, convergence speed, and final model size over relevant baselines, under various system configurations. related work | 1 | [
108.299,
95.7256768,
211.1957635,
107.6808768
] |
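the final-model construction described in the fedpm row above (wfinal = bern(θ) ⊙ winit, reproducible from a shared random seed) can be sketched directly. a minimal illustration; the standard-normal weight initialization and the function name are our assumptions, not the paper's exact setup:

```python
import numpy as np

def reconstruct_final_model(seed, theta, d):
    """Regenerate the frozen random weights from the shared seed, then
    sparsify them with a binary mask sampled from the trained probability
    mask theta: w_final = Bern(theta) * w_init (element-wise)."""
    rng = np.random.default_rng(seed)
    w_init = rng.standard_normal(d)   # frozen random weights, never trained
    mask = rng.random(d) < theta      # one Bernoulli(theta) sample
    return mask * w_init
```

because the seed alone regenerates w_init, only the seed and the binary mask need to be stored or transmitted, which is where the sub-1-bpp representation comes from.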
m8uJvVgwRci.pdf | 2,022 | 0 | creating training sets via weak indirect supervision jieyu zhang1,2, bohan wang1,3, xiangchen song4, yujing wang1, yaming yang1, jing bai1, alexander ratner2,5 1microsoft research asia 3university of science and technology of china {jieyuz2, ajratner}@cs.washington.edu {yujwang, yayaming, jbai}@microsoft.com wbhfy@mail.ustc.edu.cn xiangchensong@cmu.edu 4carnegie mellon university 5snorkel ai, inc. 2university of washington abstract creating labeled training sets has become one of the major roadblocks in machine learning. to address this, recent weak supervision (ws) frameworks synthesize training labels from multiple potentially noisy supervision sources. however, existing frameworks are restricted to supervision sources that share the same output space as the target task. to extend the scope of usable sources, we formulate weak indirect supervision (wis), a new research problem for automatically synthesizing training labels based on indirect supervision sources that have different output label spaces. to overcome the challenge of mismatched output spaces, we develop a probabilistic modeling approach, plrm, which uses user-provided label relations to model and leverage indirect supervision sources. moreover, we provide a theoretically-principled test of the distinguishability of plrm for unseen labels, along with a generalization bound. on both image and text classification tasks as well as an industrial advertising application, we demonstrate the advantages of plrm by outperforming baselines by a margin of 2%-9%. introduction one of the greatest bottlenecks of using modern machine learning models is the need for substantial amounts of manually-labeled training data. in real-world applications, such manual annotations are typically time-consuming, labor-intensive and static. 
to reduce the effort of annotation, researchers have proposed weak supervision (ws) frameworks (ratner et al., 2016; 2018; 2019; fu et al., 2020) for synthesizing labels from multiple weak supervision sources, e.g., heuristics, knowledge bases, or pre-trained classifiers. these frameworks have been widely applied to various machine learning tasks (dunnmon et al., 2020; fries et al., 2021; safranchik et al., 2020; lison et al., 2020; zhou et al., 2020; hooper et al., 2021; zhan et al., 2019; varma et al., 2019) and industrial data (bach et al., 2019). among them, data programming (ratner et al., 2016), one representative example that generalizes many approaches in the literature, represents weak supervision sources as labeling functions (lfs) and synthesizes training labels using a probabilistic graphical model (pgm). | 0 | [
108,
197.4710784,
505.240067688,
317.0226784
] |
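as a point of reference for the label-synthesis setting described above, here is the simplest possible aggregator over labeling-function outputs, a majority vote. this is a baseline sketch of our own, not the plrm model the paper proposes (which handles mismatched label spaces probabilistically):

```python
import numpy as np

def majority_vote_labels(votes, abstain=-1):
    """Synthesize one training label per example from several labeling
    functions (LFs). `votes` is an (examples x LFs) integer array where
    `abstain` marks an LF that did not fire; the label is the most common
    non-abstaining vote, or `abstain` if every LF abstained."""
    labels = []
    for row in votes:
        valid = row[row != abstain]
        if len(valid) == 0:
            labels.append(abstain)
        else:
            vals, counts = np.unique(valid, return_counts=True)
            labels.append(int(vals[np.argmax(counts)]))
    return labels
```

weak supervision frameworks replace this vote with a learned model of LF accuracies and correlations, but the input/output contract is the same.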
-8sBpe7rDiV.pdf | 2,022 | 1 | network insensitivity to parameter noise via adversarial regularization julian büchel ibm research - zurich synsense, zürich, switzerland eth zürich, switzerland jbu@zurich.ibm.com fynn faber eth zürich, switzerland faberf@ethz.ch dylan r. muir synsense, zürich, switzerland dylan.muir@synsense.ai abstract neuromorphic neural network processors, in the form of compute-in-memory crossbar arrays of memristors, or in the form of subthreshold analog and mixed-signal asics, promise enormous advantages in compute density and energy efficiency for nn-based ml tasks. however, these technologies are prone to computational non-idealities, due to process variation and intrinsic device physics. this degrades the task performance of networks deployed to the processor, by introducing parameter noise into the deployed model. while it is possible to calibrate each device, or train networks individually for each processor, these approaches are expensive and impractical for commercial deployment. alternative methods are therefore needed to train networks that are inherently robust against parameter variation, as a consequence of network architecture and parameters. we present a new network training algorithm that attacks network parameters during training, and promotes robust performance during inference in the face of random parameter variation. our approach introduces a loss regularization term that penalizes the susceptibility of a network to weight perturbation. we compare against previous approaches for producing parameter insensitivity such as dropout, weight smoothing and introducing parameter noise during training. we show that our approach produces models that are more robust to random mismatch-induced parameter variation as well as to targeted parameter variation. 
our approach finds minima in flatter locations in the weight-loss landscape compared with other approaches, highlighting that the networks found by our technique are less sensitive to parameter perturbation. our work provides an approach to deploy neural network architectures to inference devices that suffer from computational non-idealities, with minimal loss of performance. this method will enable deployment at scale to novel energy-efficient computational substrates, promoting cheaper and more prevalent edge inference. introduction there is increasing interest in nn and ml inference on iot and embedded devices, which imposes energy constraints due to small battery capacity and untethered operation. existing edge inference solutions based on cpus or vector processing engines such as gpus or tpus are improving in energy efficiency, but still entail considerable energy cost (huang et al., 2009). alternative compute architectures such as memristor crossbar arrays and mixed-signal event-driven neural network accelerators promise significantly reduced energy consumption for edge inference tasks. novel non-volatile memory technologies such as resistive ram and phase-change materials (chen, 2016; yu & chen, 2016) promise increased memory density with multiple bits per memory cell, as well as compact compute-in-memory for nn inference tasks (sebastian et al., 2020). analog implementations of neurons and synapses, coupled with asynchronous digital routing fabrics, permit high sparsity in both network architecture and activity, thereby reducing energy costs associated with computation. however, both of these novel compute fabrics introduce complexity in the form of computational non-idealities, which do not exist for pure synchronous digital solutions. some novel memory technologies support several bits per memory cell, but with uncertainty about the precise value stored on each cycle (le gallo et al., 2018b; wu et al., 2019). 
others exhibit significant drift in stored states (joshi et al., 2020). inference processors based on analog and mixed-signal devices (neckar et al., 2019; moradi et al., 2018; cassidy et al., 2016; schemmel et al., 2010; khaddam-aljameh et al., 2022) exhibit parameter variation across the surface of a chip, and between chips, due to manufacturing process non-idealities. collectively these processes known as “device mismatch” manifest as frozen parameter noise in weights and neuron parameters. in all cases the mismatch between configured and implemented network parameters degrades the task performance by modifying the resulting mapping between input and output. existing solutions for deploying networks to inference devices that exhibit mismatch mostly focus on per-device calibration or re-training (ambrogio et al., 2018; bauer et al., 2019; nandakumar et al., 2020a). however, this, and other approaches such as few-shot learning or meta learning entail significant per-device handling costs, making them unfit for commercial deployment. we consider a network to be “robust” if the output of a network to a given input does not change in the face of parameter perturbation. with this goal, network architectures that are intrinsically robust against device mismatch can be investigated (thakur et al., 2018; büchel et al., 2021). another approach is to introduce parameter perturbations during training that promote robustness during inference, for example via random pruning (dropout) (srivastava et al., 2014) or by injecting noise (murray & edwards, 1994). in this paper we introduce a novel solution, by applying adversarial training approaches to parameter mismatch. most existing adversarial training methods attack the input space. here we describe an adversarial attack during training that seeks the parameter perturbation that causes the maximum degradation in network response. 
in summary, we make the following contributions: • we propose a novel algorithm for gradient-based supervised training of networks that are robust against parameter mismatch, by performing adversarial training in the weight space. • we demonstrate that our algorithm flattens the weight-loss landscape and therefore leads to models that are inherently more robust to parameter noise. • we show that our approach outperforms existing methods in terms of robustness. • we validate our algorithm on a highly accurate phase change memory (pcm)-based computein-memory (cim) simulator and achieve new state-of-the-art results in terms of performance and performance retention over time. related work | 1 | [
108.299,
316.1946768,
211.1957635,
328.1498768
] |
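the core idea above, attacking the weights rather than the inputs during training, can be sketched for a one-layer linear model, where the worst-case perturbation has a closed form. all names here are illustrative simplifications, not the paper's algorithm (real networks would compute the attack with autograd):

```python
import numpy as np

def adversarial_weight_penalty(w, x, eps=0.1):
    """Toy adversarial-in-weight-space regularizer for y = w @ x.

    The gradient of the output w.r.t. the weights is x, so an FGSM-style
    worst-case weight perturbation inside an eps-ball (infinity norm) is
    eps * sign(x). The returned squared output deviation is the term one
    would add to the task loss to penalize susceptibility to mismatch."""
    y = w @ x
    w_adv = w + eps * np.sign(x)   # single-step worst-case weight attack
    y_adv = w_adv @ x
    return (y_adv - y) ** 2
```

for this linear case the output deviation is exactly eps times the l1 norm of the input, which makes the penalty's meaning easy to check by hand.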
9SDQB3b68K.pdf | 2,022 | 1 | dara: dynamics-aware reward augmentation in offline reinforcement learning jinxin liu123∗ hongyin zhang1∗ donglin wang13† 1 westlake university. 3 institute of advanced technology, westlake institute for advanced study. {liujinxin, zhanghongyin, wangdonglin}@westlake.edu.cn 2 zhejiang university. abstract offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. however, such formulation is inevitably offline-data-hungry and, in practice, collecting a large offline dataset for one specific task over one specific environment is also in this paper, we thus 1) formulate the offline dynamics costly and laborious. adaptation by using (source) offline data collected from another dynamics to relax the requirement for the extensive (target) offline data, 2) characterize the dynamics shift problem in which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (dara) framework from both modelfree and model-based offline settings. specifically, dara emphasizes learning from those source transition pairs that are adaptive for the target environment and mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the typical state-action distribution sketched by prior offline rl methods. the experimental evaluation demonstrates that dara, by augmenting rewards in the source offline dataset, can acquire an adaptive policy for the target environment and yet significantly reduce the requirement of target offline data. with only modest amounts of target offline data, our performance consistently outperforms the prior offline rl methods in both simulated and real-world tasks. 
introduction offline reinforcement learning (rl) (levine et al., 2020; lange et al., 2012), the task of learning from a previously collected dataset, holds the promise of acquiring policies without any costly active interaction required in the standard online rl paradigm. however, we note that although the active trial-and-error (online exploration) is eliminated, the performance of offline rl methods heavily relies on the amount of offline data that is used for training. as shown in figure 1, the performance deteriorates dramatically as the amount of offline data decreases. a natural question therefore arises: can we reduce the amount of the (target) offline data without significantly affecting the final performance for the target task? figure 1: solid and dashed lines denote offline medium-replay and medium-expert data in d4rl (walker2d), respectively. borrowing the idea from transfer learning (pan & yang, 2010), we assume that we have access to another (source) offline dataset, hoping that we can leverage this dataset to compensate for the performance degradation caused by the reduced (target) offline dataset. in the offline setting, previous work (siegel et al., 2020; chebotar et al., 2021) has characterized the reward (goal) difference between the source and target, relying on the “conflicting” or multi-goal offline dataset (fu et al., 2020), while we focus on the relatively unexplored transition dynamics difference between the source dataset and the target environment. meanwhile, we believe that this dynamics shift is not arbitrary in reality: in healthcare treatment, offline data for a particular patient is often limited, whereas we can obtain diagnostic data from other patients with the same case (same reward/goal) and there often exist individual differences between patients (source dataset with different transition dynamics). careful treatment with respect to the individual differences is thus a crucial requirement. (∗equal contribution. †corresponding author.)
given source offline data, the main challenge is to cope with the transition dynamics difference, i.e., strictly tracking the state-action supported by the source offline data can not guarantee that the same transition (state-action-next-state) can be achieved in the target environment. however, in the offline setting, such dynamics shift is not explicitly characterized by the previous offline rl methods, where they typically attribute the difficulty of learning from offline data to the state-action distribution shift (chen & jiang, 2019; liu et al., 2018). the corresponding algorithms (fujimoto et al., 2019; abdolmaleki et al., 2018; yu et al., 2020) that model the support of state-action distribution induced by the learned policy, will inevitably suffer from the transfer problem where dynamics shift happens. our approach is motivated by the well established connection between reward modification and dynamics adaptation (kumar et al., 2020b; eysenbach & levine, 2019; eysenbach et al., 2021), which indicates that, by modifying rewards, one can train a policy in one environment and make the learned policy to be suitable for another environment (with different dynamics). thus, we propose to exploit the joint distribution of state-action-next-state: besides characterizing the state-action distribution shift as in prior offline rl algorithms, we additionally identify the dynamics (i.e., the conditional distribution of next-state given current state-action pair) shift and penalize the agent with a dynamics-aware reward modification. intuitively, this reward modification aims to discourage the learning from these offline transitions that are likely in source but are unlikely in the target environment. unlike the concurrent work (ball et al., 2021; mitchell et al., 2021) paying attention to the offline domain generalization, we explicitly focus on the offline domain (dynamics) adaptation. 
our principal contribution in this work is the characterization of the dynamics shift in offline rl and the derivation of the dynamics-aware reward augmentation (dara) framework built on prior model-free and model-based formulations. dara is simple and general, can accommodate various offline rl methods, and can be implemented in just a few lines of code on top of the dataloader at training. in our offline dynamics adaptation setting, we also release a dataset, including the gym-mujoco tasks (walker2d, hopper and halfcheetah), with dynamics (mass, joint) shift compared to d4rl, and a 12-dof quadruped robot in both the simulator and the real world. with only modest amounts of target offline data, we show that dara-based offline methods can acquire an adaptive policy for the target tasks and achieve better performance compared to baselines in both simulated and real-world tasks. related work offline rl describes the setting in which a learner has access to only a fixed dataset of experience, while no interactive data collection is allowed during policy learning (levine et al., 2020). prior work commonly assumes that the offline experience is collected by some behavior policies on the same environment that the learned policy will be deployed on. thus, the main difficulty of such an offline setting is the state-action distribution shift (fujimoto et al., 2019; liu et al., 2018). algorithms address this issue by following the two main directions: the model-free and model-based offline rl. model-free methods for such a setting typically fall under three categories: 1) typical methods mitigate this problem by explicitly (fujimoto et al., 2019; kumar et al., 2019; wu et al., 2019) or implicitly (siegel et al., 2020; peng et al., 2019; abdolmaleki et al., 2018) constraining the learned policy away from ood state-action pairs. 2) conservative estimation based methods learn pessimistic value functions to prevent the overestimation (kumar et al., 2020a; xu et al., 2021).
3) importance-sampling-based methods directly estimate the state-marginal importance ratio and obtain an unbiased value estimation (zhang et al., 2020; nachum & dai, 2020; nachum et al., 2019b). model-based methods typically eliminate the state-action distribution shift by incorporating a reward penalty, which relies on the uncertainty quantification of the learned dynamics (kidambi et al., 2020; yu et al., 2020). to remove this uncertainty estimation, yu et al. (2021) learn a conservative critic function by penalizing the values of generated state-action pairs that are not in the offline dataset. these methods, however, define their objectives based on the state-action distribution shift, and ignore the potential dynamics shift between the fixed offline data and the target mdp. in contrast, we account for dynamics (state-action-next-state) shift and explicitly propose the dynamics-aware reward augmentation. a close counterpart to our work is off-dynamics rl (eysenbach et al., 2021), which sets up dynamics shift in the interactive setting, while we focus on the offline setting.

preliminaries

we study rl in the framework of markov decision processes (mdps) specified by the tuple m := (s, a, r, t, ρ0, γ), where s and a denote the state and action spaces, r(s, a) ∈ [−r_max, r_max] is the reward function, t(s'|s, a) is the transition dynamics, ρ0(s) is the initial state distribution, and γ is the discount factor. the goal in rl is to optimize a policy π(a|s) that maximizes the expected discounted return η_m(π) := e_{τ∼p^π_m}[Σ_{t=0}^∞ γ^t r(s_t, a_t)], where τ := (s0, a0, s1, a1, ...). we also define q-values q(s, a) := e_{τ∼p^π_m}[Σ_{t=0}^∞ γ^t r(s_t, a_t) | s0 = s, a0 = a], v-values v(s) := e_{a∼π(a|s)}[q(s, a)], and the (unnormalized) state visitation distribution d^π_m(s) := Σ_{t=0}^∞ γ^t p(s|π, m, t), where p(s|π, m, t) denotes the probability of reaching state s at time t by running π in m.

in the offline rl problem, we are provided with a static dataset d := {(s, a, r, s')}, which consists of transition tuples from trajectories collected by running one or more behavior policies, denoted π_b, on mdp m. with a slight abuse of notation, we write d = {(s, a, r, s') ∼ d_d(s) π_b(a|s) r(s, a) t(s'|s, a)}, where d_d(s) denotes the state-marginal distribution in d. in the offline setting, the goal is typically to learn the best possible policy using the fixed offline dataset.

model-free rl algorithms based on dynamic programming typically perform policy iteration to find the optimal policy. such methods iteratively conduct 1) policy improvement with g_m q := argmax_π e_{s∼d^π_m(s), a∼π(a|s)}[q(s, a)] and 2) policy evaluation by iterating the bellman equation q(s, a) = b^π_m q(s, a) := r(s, a) + γ e_{s'∼t(s'|s,a), a'∼π(a'|s')}[q(s', a')] over d^π_m(s)π(a|s). given an off-policy dataset d, we resort to 1) improvement with g_d q := argmax_π e_{s∼d_d(s), a∼π(a|s)}[q(s, a)] and 2) evaluation by iterating q(s, a) = b^π_d q(s, a) := r(s, a) + γ e_{s'∼t_d(s'|s,a), a'∼π(a'|s')}[q(s', a')] over all (s, a) in d. specifically, given any initial q0, it iterates¹

policy improvement: π_{k+1} = g_d q_k,    policy evaluation: q_{k+1} = b^{π_{k+1}}_d q_k.    (1)

model-free offline rl based on the above iteration suffers from the state-action distribution shift, i.e., policy evaluation b^{π_k}_d q_{k−1} may encounter an unfamiliar state-action regime that is not covered by the fixed offline dataset d, causing erroneous estimation of q_k. policy improvement g_d q_k further exaggerates such error, biasing policy π_{k+1} towards out-of-distribution (ood) actions with erroneously high q-values.
to address this distribution shift, prior works 1) explicitly constrain the policy to be close to the behavior policy (fujimoto et al., 2019; kumar et al., 2019; wu et al., 2019; ghasemipour et al., 2021), introducing a penalty αd(π(a|s), π_b(a|s)) into g_d or b^π_d in equation 1:

g_d q = argmax_π e_{s∼d_d(s), a∼π(a|s)}[q(s, a) − αd(π(a|s), π_b(a|s))],
b^π_d q(s, a) = r(s, a) + γ e_{s'∼t_d(s'|s,a), a'∼π(a'|s')}[q(s', a') − αd(π(a'|s'), π_b(a'|s'))],    (2)

where d is a divergence function between distributions over actions (e.g., mmd or kl divergence), or 2) train pessimistic value functions (kumar et al., 2020a; yu et al., 2021; xu et al., 2021), penalizing q-values at states in the offline dataset d for actions generated by the current policy π:

q = argmin_q e_{s∼d_d(s), a∼π(a|s)}[q(s, a)], s.t. q = b^π_d q.    (3)

model-based rl algorithms iteratively 1) model the transition dynamics t(s'|s, a) using the data collected in m: max_ˆt e_{s,a,s'∼d^π_m(s)π(a|s)t(s'|s,a)}[log ˆt(s'|s, a)], and 2) infer a policy π from the modeled ˆm = (s, a, r, ˆt, ρ0, γ), where we assume that r and ρ0 are known, maximizing η_ˆm(π) with a planner or dyna-style algorithms (sutton, 1990). in this paper, we focus on the latter.

model-based offline rl algorithms similarly suffer from ood state-actions (kidambi et al., 2020; cang et al., 2021) if we directly apply policy iteration over ˆt := argmax_ˆt e_{s,a,s'∼d}[log ˆt(s'|s, a)]. like the conservative estimation approach described in equation 3, recent conservative model-based offline rl methods provide the policy with a penalty for visiting states under the estimated ˆt where ˆt is likely to be incorrect.
taking u(s, a) as the oracle uncertainty (yu et al., 2020) that provides a consistent estimate of the accuracy of the model ˆt at (s, a), we can modify the reward function to obtain a conservative mdp ˆm_c = (s, a, r − αu, ˆt, ρ0, γ), and then learn a policy π by maximizing η_{ˆm_c}(π).

¹for a parametric q-function, we often perform q_{k+1} ← argmin_q e_{(s,a)∼d}[(b^{π_{k+1}}_d q_k(s, a) − q(s, a))²].

problem formulation

in the standard offline rl problem, the static offline dataset d consists of samples {(s, a, r, s') ∼ d_d(s)π_b(a|s)r(s, a)t(s'|s, a)}. although offline rl methods learn a policy for the target mdp m := (s, a, r, t, ρ0, γ) without (costly) online data, as we show in figure 1, they require a fair amount of (target) offline data d collected on m. suppose we have another (source) offline dataset d', consisting of samples {(s, a, r, s') ∼ d_{d'}(s)π_{b'}(a|s)r(s, a)t'(s'|s, a)} collected by the behavior policy π_{b'} on mdp m' := (s, a, r, t', ρ0, γ); we then hope that transferring knowledge across the combined offline dataset {d' ∪ d} can reduce the data requirements on d for learning a policy for the target m.

dynamics shift in offline rl

although the offline rl methods in section 3 incorporate state-action distribution constrained backups (policy constraints or conservative estimation), they fail to learn an adaptive policy for the target mdp m with the mixed datasets {d' ∪ d}, as we show in figure 4 (appendix). we attribute this failure to the dynamics shift (definition 2) between d' and m in this adaptation setting.

definition 1 (empirical mdp) an empirical mdp estimated from d is ˆm := (s, a, r, ˆt, ρ0, γ), where ˆt = argmax_ˆt e_{s,a,s'∼d}[log ˆt(s'|s, a)] and ˆt(s'|s, a) = 0 for all (s, a, s') not in dataset d.

definition 2 (dynamics shift) let ˆm := (s, a, r, ˆt, ρ0, γ) be the empirical mdp estimated from d.
to evaluate a policy π for m := (s, a, r, t, ρ0, γ) with offline dataset d, we say that dynamics shift (between d and m) happens in offline rl if there exists at least one transition pair (s, a, s') ∈ {(s, a, s') : d^π_ˆm(s)π(a|s)ˆt(s'|s, a) > 0} such that ˆt(s'|s, a) ≠ t(s'|s, a).

in practice, for a stochastic m and any finite offline dataset d collected in m, dynamics shift always exists: finite samples are never sufficient to exactly model stochastic dynamics. following fujimoto et al. (2019), we thus assume both mdps m and m' are deterministic, which implies the empirical ˆm and ˆm' are also deterministic. more importantly, this assumption enables us to explicitly characterize the dynamics shift under finite offline samples.

lemma 1 under deterministic transition dynamics, there is no dynamics shift between d and m.

for offline rl tasks, prior methods generally apply b^π_d q along with the state-action distribution correction (equations 2 and 3), which overlooks the potential dynamics shift between the (source) offline dataset and the target mdp (e.g., d' → m). as a result, these methods do not scale well to the setting in which dynamics shift happens, e.g., learning an adaptive policy for m with (source) d'.

dynamics shift in model-free and model-based offline formulations

from the model-free (policy iteration) view, an exact policy evaluation on m is characterized by iterating q(s, a) = b^π_m q(s, a) for all (s, a) such that d^π_m(s)π(a|s) > 0. thus, to formalize policy evaluation with offline d or d' (for an adaptive π on the target m), we require that the bellman operator b^π_d q(s, a) or b^π_{d'} q(s, a) approximates the oracle b^π_m q(s, a) for all (s, a) in s_π or s'_π, where s_π and s'_π denote the sets {(s, a) : d_d(s)π(a|s) > 0} and {(s, a) : d_{d'}(s)π(a|s) > 0} respectively.

1) to evaluate a policy π for m with d (i.e., calling the bellman operator b^π_d), the notable model-free offline method bcq (fujimoto et al., 2019) translates the requirement b^π_d = b^π_m into the requirement ˆt(s'|s, a) = t(s'|s, a). note that under deterministic environments, we have the property that for all (s, a, s') in offline data d, ˆt(s'|s, a) = t(s'|s, a) (lemma 1). as a result, this property permits bcq to evaluate a policy π by calling b^π_m while constraining s_π to be a subset of the support of d_d(s)π_b(a|s). this means a policy π that only traverses transitions contained in the (target) offline data d can be evaluated on m without error.

2) to evaluate a policy π for m with d' (i.e., calling the bellman operator b^π_{d'}, replacing the oracle b^π_m), we have lemma 2:

lemma 2 dynamics shift produces b^π_{d'} q(s, a) ≠ b^π_m q(s, a) for some (s, a) in s'_π.

with the offline data d', lemma 2 suggests that the requirement b^π_{d'} = b^π_m becomes infeasible, which limits the practical applicability of prior offline rl methods under dynamics shift.

to be specific, characterizing an adaptive policy for the target mdp m with d' moves beyond the reach of off-policy evaluation based on iterating q = b^π_{d'} q (equations 2 and 3). such iteration may cause the evaluated q (or learned policy π) to overfit to ˆt' and struggle to adapt to the target t. to overcome the dynamics shift, we would like to resort to an additional compensation ∆_{ˆt',t} such that

b^π_{d'} q(s, a) + ∆_{ˆt',t}(s, a) = b^π_m q(s, a) for all (s, a) in s'_π.    (4)

thus, we can apply b^π_{d'} q + ∆_{ˆt',t} as a substitute for the oracle b^π_m q.
from the model-based view, the oracle η_m(π) (calling the bellman operator b^π_m on the target m) and the viable η_{ˆm'}(π) (calling b^π_{ˆm'} on the ˆm' estimated from source d') satisfy the following lemma.

lemma 3 let b^π_m v(s) = e_{a∼π(a|s)}[r(s, a) + γ e_{s'∼t(s'|s,a)}[v(s')]]. for any π, we have:

η_{ˆm'}(π) = η_m(π) + e_{s∼d^π_{ˆm'}(s)}[b^π_{ˆm'} v_m(s) − b^π_m v_m(s)].    (5)

lemma 3 states that if we maximize η_{ˆm'}(π) subject to |e_{s∼d^π_{ˆm'}(s)}[b^π_{ˆm'} v_m(s) − b^π_m v_m(s)]| ≤ ε, then η_m(π) will be improved. if f is a set of functions f : s → r that contains v_m, then we have

|e_{s∼d^π_{ˆm'}(s)}[b^π_{ˆm'} v_m(s) − b^π_m v_m(s)]| ≤ γ e_{s,a∼d^π_{ˆm'}(s)π(a|s)}[d_f(ˆt'(s'|s, a), t(s'|s, a))],    (6)

where d_f(ˆt'(s'|s, a), t(s'|s, a)) = sup_{f∈f} |e_{s'∼ˆt'(s'|s,a)}[f(s')] − e_{s'∼t(s'|s,a)}[f(s')]|, which is the integral probability metric (ipm). note that if we directly follow the admissible error assumption in mopo (yu et al., 2020), i.e., assuming d_f(ˆt'(s'|s, a), t(s'|s, a)) ≤ u(s, a) for all (s, a), this would be too restrictive: given that ˆt' is estimated from source offline samples collected under t', not the target t, such error would not decrease as the source data increases. further, we find

d_f(ˆt'(s'|s, a), t(s'|s, a)) ≤ d_f(ˆt'(s'|s, a), ˆt(s'|s, a)) + d_f(ˆt(s'|s, a), t(s'|s, a)).

thus, we can bound the d_f(ˆt', t) term with the admissible error assumption over d_f(ˆt, t), as in mopo, and the auxiliary constraint d_f(ˆt', ˆt). see the next section for the detailed implementation.
in summary, we show that both prior offline model-free and model-based formulations suffer from dynamics shift, which suggests learning a modification (∆ or d_f) to eliminate this shift.

dynamics-aware reward augmentation

in this section, we propose dynamics-aware reward augmentation (dara), a simple data augmentation procedure built on prior (model-free and model-based) offline rl methods. we first provide an overview of our offline reward augmentation, motivated by the compensation ∆_{ˆt',t} in equation 4 and the auxiliary constraint d_f(ˆt', ˆt) in equation 6, and then describe its theoretical derivation in both model-free and model-based formulations. with the (reduced) target offline data d and the source offline data d', we summarize the overall dara framework in algorithm 1.

algorithm 1 framework for dynamics-aware reward augmentation (dara)
require: target offline data d (reduced) and source offline data d'
1: learn classifiers (q_sas and q_sa) that distinguish source data d' from target data d. (see appendix a.1.3)
2: set dynamics-aware ∆r(s_t, a_t, s_{t+1}) = log[q_sas(source|s_t, a_t, s_{t+1}) / q_sas(target|s_t, a_t, s_{t+1})] − log[q_sa(source|s_t, a_t) / q_sa(target|s_t, a_t)].
3: modify rewards for all (s_t, a_t, r_t, s_{t+1}) in d': r_t ← r_t − η∆r.
4: learn a policy with {d ∪ d'} using prior model-free or model-based offline rl algorithms.

dynamics-aware reward augmentation in model-free formulation

motivated by the well-established connection between rl and probabilistic inference (levine, 2018), we first cast the model-free rl problem as inference in a particular probabilistic model. specifically, we introduce a binary random variable o that denotes whether the trajectory τ := (s0, a0, s1, ...) is optimal (o = 1) or not (o = 0). the likelihood of a trajectory can then be modeled as p(o = 1|τ) = exp(Σ_t r_t/η), where r_t := r(s_t, a_t) and η > 0 is a temperature parameter.
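steps 2–3 of algorithm 1 reduce to a few lines of numpy once the classifiers are trained. the sketch below is a minimal, hypothetical stand-in: the classifier outputs are represented by fixed "probability of source" numbers rather than learned networks, and log[q(source|·)/q(target|·)] is computed as a binary-classifier logit.

```python
import numpy as np

def dara_delta_r(p_source_sas, p_source_sa):
    """delta_r = log[q_sas(source|s,a,s')/q_sas(target|s,a,s')]
               - log[q_sa(source|s,a)/q_sa(target|s,a)].

    p_source_sas / p_source_sa: classifier probabilities that the transition /
    state-action pair came from the source dataset (binary classifiers, so
    q(target|.) = 1 - q(source|.)).
    """
    return (np.log(p_source_sas) - np.log(1.0 - p_source_sas)) \
         - (np.log(p_source_sa) - np.log(1.0 - p_source_sa))

def augment_rewards(rewards, p_source_sas, p_source_sa, eta=1.0):
    # step 3 of algorithm 1: r_t <- r_t - eta * delta_r for every source transition
    return rewards - eta * dara_delta_r(p_source_sas, p_source_sa)
```

transitions that look equally likely under source and target dynamics get delta_r = 0 and keep their reward, while transitions that the classifier attributes to the source dynamics are penalized.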
(reward augmentation with explicit policy/value constraints) we now introduce a variational distribution p^π_{ˆm'}(τ) = ρ0(s0) Π_t ˆt'(s_{t+1}|s_t, a_t)π(a_t|s_t) to approximate the posterior distribution p^π_m(τ|o = 1), which leads to the evidence lower bound of log p^π_m(o = 1):

log p^π_m(o = 1) = log e_{τ∼p^π_m(τ)}[p(o = 1|τ)]
≥ e_{τ∼p^π_{ˆm'}(τ)}[log p(o = 1|τ) + log(p^π_m(τ)/p^π_{ˆm'}(τ))]
= e_{τ∼p^π_{ˆm'}(τ)}[Σ_t (r_t/η − log(ˆt'(s_{t+1}|s_t, a_t)/t(s_{t+1}|s_t, a_t)))].

since we are interested in infinite-horizon problems, we introduce the discount factor γ and take the limit of the number of steps in each rollout, i.e., h → ∞. thus, the rl problem on the mdp m, cast as the inference problem argmax_π log p^π_m(o = 1), can be stated as the maximization of the lower bound e_{τ∼p^π_{ˆm'}}[Σ_{t=0}^∞ γ^t (r_t − η log(ˆt'(s_{t+1}|s_t, a_t)/t(s_{t+1}|s_t, a_t)))]. this is equivalent to an rl problem on ˆm' with the augmented reward r ← r(s, a) − η log(ˆt'(s'|s, a)/t(s'|s, a)). intuitively, the −η log(ˆt'(s'|s, a)/t(s'|s, a)) term discourages transitions (state-action-next-state) in d' that have low transition probability in the target m. in the model-free offline setting, we can add the explicit policy or q-value constraints (equations 2 and 3) to mitigate ood state-actions. thus, such a formulation allows the oracle b^π_m to be re-expressed by b^π_{d'} and the modification log(ˆt'/t), which makes the motivation in equation 4 practical.

(reward augmentation with implicit policy constraints) if we introduce the variational distribution p^{π'}_{ˆm'}(τ) = ρ0(s0) Π_t ˆt'(s_{t+1}|s_t, a_t)π'(a_t|s_t), we can recover the weighted-regression-style (wang et al., 2020; peng et al., 2019; abdolmaleki et al., 2018; peters et al., 2010) objective by maximizing j(π', π) := e_{τ∼p^{π'}_{ˆm'}}[Σ_t γ^t (r_t − η log(ˆt'(s_{t+1}|s_t, a_t)/t(s_{t+1}|s_t, a_t)) − η log(π'(a_t|s_t)/π(a_t|s_t)))], a lower bound of log p^π_m(o = 1). following the expectation-maximization (em) algorithm, we can maximize j(π', π) by iteratively (e-step) improving j(π', ·) w.r.t. π' and (m-step) updating π w.r.t. π'.
(e-step) we define q̃(s, a, s') := e[Σ_t γ^t log(ˆt'(s_{t+1}|s_t, a_t)/t(s_{t+1}|s_t, a_t)) | s0 = s, a0 = a, s1 = s'], the dynamics-ratio counterpart of the q-value. then, given offline data d', we can rewrite j(π', ·) as a constrained objective (abdolmaleki et al., 2018):

max_{π'} e_{s∼d_{d'}(s), a∼π'(a|s), s'∼ˆt'(s'|s,a)}[q(s, a) − η q̃(s, a, s')]
s.t. e_{s∼d_{d'}(s)}[d_kl(π'(a|s) ‖ π(a|s))] ≤ ε.

when considering a fixed π, the above optimization over π' can be solved analytically (vieillard et al., 2020; geist et al., 2019; peng et al., 2019). the optimal π'* is then given by π'*(a|s) ∝ π(a|s) exp(q(s, a)) exp(−η q̃(s, a, ˆt'(s'|s, a))). as in the policy evaluation in equation 1 (footnote 1), we estimate q(s, a) and q̃(s, a, s') by minimizing the bellman error with offline samples in d'.

(m-step) then, we can project π'* onto the manifold of the parameterized π:

argmin_π e_{s∼d_{d'}(s)}[d_kl(π'*(a|s) ‖ π(a|s))] = argmax_π e_{s,a,s'∼d'}[log π(a|s) exp(q(s, a)) exp(−η q̃(s, a, s'))].

from the regression view, prior work mpo (abdolmaleki et al., 2018) infers actions with q-value-weighted regression, a progressive approach compared to behavior cloning; however, such a paradigm lacks the ability to capture transition dynamics. we explicitly introduce the exp(−η q̃(s, a, s')) term, which, as we show in experiments, is a crucial component for eliminating the dynamics shift.

implementation: in practice, we adopt offline samples in d to approximate the true dynamics t of m, and introduce a pair of binary classifiers, q_sas(·|s, a, s') and q_sa(·|s, a), to replace log(ˆt'(s'|s, a)/t(s'|s, a)), as in eysenbach et al. (2021):

log(ˆt'(s'|s, a)/t(s'|s, a)) = log(q_sas(source|s, a, s')/q_sas(target|s, a, s')) − log(q_sa(source|s, a)/q_sa(target|s, a)).

(see appendix a.1.3 for details.)
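the analytic e-step solution π'*(a|s) ∝ π(a|s) exp(q(s, a)) exp(−η q̃(s, a, s')) is a softmax re-weighting over actions and can be sketched as follows. this is a minimal illustration with hypothetical per-action values for a single state, not the paper's training code.

```python
import numpy as np

def e_step_policy(pi, q, q_tilde, eta=1.0):
    """Analytic solution of the constrained e-step for one state:
    pi'*(a|s) proportional to pi(a|s) * exp(Q(s,a)) * exp(-eta * Qtilde(s,a,s')).

    pi, q, q_tilde: 1-d arrays of length n_actions (values for one state s)."""
    logits = np.log(pi) + q - eta * q_tilde
    w = np.exp(logits - logits.max())  # shift by max for numerical stability
    return w / w.sum()                 # normalize to a proper distribution
```

actions with a high dynamics-ratio penalty q̃ are exponentially down-weighted, which is exactly the mechanism the text credits with eliminating dynamics shift.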
although the amount of data d sampled from the target m is reduced in our problem setup, we experimentally find that such classifiers are sufficient to achieve good performance.

dynamics-aware reward augmentation in model-based formulation

following equation 6, we now characterize the dynamics-shift compensation term, as in the above model-free analysis, in the model-based offline formulation. we will find that across the different derivations, our reward augmentation ∆r always maintains functional consistency and simplicity. following mopo, we assume f = {f : ‖f‖∞ ≤ 1}; then we have d_f(ˆt'(s'|s, a), ˆt(s'|s, a)) = d_tv(ˆt'(s'|s, a), ˆt(s'|s, a)) ≤ (d_kl(ˆt'(s'|s, a), ˆt(s'|s, a))/2)^{1/2}, where d_tv is the total variation distance. then we introduce the admissible error u(s, a) such that d_f(ˆt(s'|s, a), t(s'|s, a)) ≤ u(s, a) for all (s, a), and η and δ such that (d_kl(ˆt', ˆt)/2)^{1/2} ≤ η d_kl(ˆt', ˆt) + δ. following lemma 3, we thus can maximize the following lower bound with the samples in ˆm' (λ := γ r_max/(1 − γ)):

η_m(π) ≥ e_{s,a,s'∼d^π_{ˆm'}(s)π(a|s)ˆt'(s'|s,a)}[r(s, a) − ηλ log(ˆt'(s'|s, a)/ˆt(s'|s, a)) − λu(s, a) − λδ].

implementation: we model the dynamics ˆt' and ˆt with an ensemble of 2*n parameterized gaussian distributions: n^i_{ˆt'}(μ_{θ'}(s, a), σ_{φ'}(s, a)) and n^i_{ˆt}(μ_θ(s, a), σ_φ(s, a)), where i ∈ [1, n]. we approximate u with the maximum standard deviation of the learned models in the ensemble: u(s, a) = max^n_{i=1} ‖σ_φ(s, a)‖_f, omit the training-independent δ, and treat λ as a hyperparameter, as in mopo. for the log(ˆt'/ˆt) term, we resort to the above classifiers (q_sas and q_sa) from the model-free setting.
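the ensemble uncertainty heuristic u(s, a) = max_i ‖σ_φ(s, a)‖_f can be sketched directly; the std-deviation values below are made-up numbers standing in for the predictions of n learned gaussian dynamics models at one (s, a).

```python
import numpy as np

def ensemble_uncertainty(sigmas):
    """u(s,a) = max over ensemble members of the norm of the predicted std.

    sigmas: array of shape (N, state_dim), one row of predicted standard
    deviations per ensemble member for a single (s, a) pair."""
    return max(np.linalg.norm(s) for s in sigmas)
```

the max over the ensemble is intentionally pessimistic: disagreement between members (any one member predicting a wide gaussian) flags the (s, a) pair as poorly modeled, and the penalty −λ u(s, a) then discourages the policy from visiting it.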
(see appendix a.3.2 for a comparison between using classifiers and the estimated-dynamics ratio.)

experiments | 6 | [108.299, 442.9726768, 200.0834953, 454.9278768] |
8qDwejCuCN.pdf | 2,021 | 1 | unsupervised representation learning for time series with temporal neighborhood coding sana tonekaboni∗ university of toronto & vector institute the hospital for sick children stonekaboni@cs.toronto.edu anna goldenberg university of toronto & vector institute the hospital for sick children anna.goldenberg@utoronto.ca danny eytan the hospital for sick children biliary.colic@gmail.com abstract time series are often complex and rich in information but sparsely labeled and therefore challenging to model. in this paper, we propose a self-supervised framework for learning generalizable representations for non-stationary time series. our approach, called temporal neighborhood coding (tnc), takes advantage of the local smoothness of a signal’s generative process to define neighborhoods in time with stationary properties. using a debiased contrastive objective, our framework learns time series representations by ensuring that in the encoding space, the distribution of signals from within a neighborhood is distinguishable from the distribution of non-neighboring signals. our motivation stems from the medical field, where the ability to model the dynamic nature of time series data is especially valuable for identifying, tracking, and predicting the underlying patients’ latent states in settings where labeling data is practically impossible. we compare our method to recently developed unsupervised representation learning approaches and demonstrate superior performance on clustering and classification tasks for multiple datasets. introduction | 0 | [126.82956, 281.0146768, 205.9888518, 292.9698768] |
WmIwYTd0YTF.pdf | 2,023 | 0 | stable target field for reduced variance score estimation in diffusion models yilun xu*, shangyuan tong*, tommi jaakkola computer science and artificial intelligence lab, massachusetts institute of technology ylxu@mit.edu; {sytong, tommi}@csail.mit.edu abstract diffusion models generate samples by reversing a fixed forward diffusion process. despite already providing impressive empirical results, these diffusion model algorithms can be further improved by reducing the variance of the training targets in their denoising score-matching objective. we argue that the source of such variance lies in the handling of intermediate noise-variance scales, where multiple modes in the data affect the direction of reverse paths. we propose to remedy the problem by incorporating a reference batch which we use to calculate weighted conditional scores as more stable training targets. we show that the procedure indeed helps in the challenging intermediate regime by reducing (the trace of) the covariance of training targets. the new stable targets can be seen as trading bias for reduced variance, where the bias vanishes with increasing reference batch size. empirically, we show that the new objective improves the image quality, stability, and training speed of various popular diffusion models across datasets with both general ode and sde solvers. when used in combination with edm (karras et al., 2022), our method yields a current sota fid of 1.90 with 35 network evaluations on the unconditional cifar-10 generation task. the code is available at https://github.com/newbeeer/stf introduction diffusion models (sohl-dickstein et al., 2015; song & ermon, 2019; ho et al., 2020) have recently achieved impressive results on a wide spectrum of generative tasks, such as image generation (nichol et al., 2022; song et al., 2021b), 3d point cloud generation (luo & hu, 2021) and molecular conformer generation (shi et al., 2021; xu et al., 2022a).
these models can be subsumed under a unified framework in the form of itô stochastic differential equations (sde) (song et al., 2021b). the models learn time-dependent score fields via score-matching (hyvärinen & dayan, 2005), which then guides the reverse sde during generative sampling. popular instances of diffusion models include variance-exploding (ve) and variance-preserving (vp) sde (song et al., 2021b). building on these formulations, edm (karras et al., 2022) provides the best performance to date. we argue that, despite achieving impressive empirical results, the current training scheme of diffusion models can be further improved. in particular, the variance of training targets in the denoising score-matching (dsm) objective can be large and lead to suboptimal performance. to better understand the origin of this instability, we decompose the score field into three regimes. our analysis shows that the phenomenon arises primarily in the intermediate regime, which is characterized by multiple modes or data points exerting comparable influences on the scores. in other words, in this regime, the sources of the noisy examples generated in the course of the forward process become ambiguous. we illustrate the problem in figure 1(a), where each stochastic update of the score model is based on disparate targets. we propose a generalized version of the denoising score-matching objective, termed the stable target field (stf) objective. the idea is to include an additional reference batch of examples that are used to calculate weighted conditional scores as targets. we apply self-normalized importance sampling to aggregate the contribution of each example in the reference batch. although this process can substantially reduce the variance of training targets (figure 1(b)), especially in the intermediate regime, *equal contribution. (a) dsm (b) stf figure 1: illustration of differences between the dsm objective and our proposed stf objective.
the “destroyed” images (in blue box) are close to each other while their sources (in red box) are not. although the true score in expectation is the weighted average of the vi, the individual training updates of the dsm objective have a high variance, which our stf objective reduces significantly by including a large reference batch (yellow box). it does introduce some bias. however, we show that the bias, together with the trace-of-covariance of the stf training targets, shrinks to zero as we increase the size of the reference batch. experimentally, we show that our stf objective achieves new state-of-the-art performance on cifar-10 unconditional generation when incorporated into edm (karras et al., 2022). the resulting fid score (heusel et al., 2017) is 1.90 with 35 network evaluations. stf also improves the fid/inception scores for other variants of score-based models, i.e., ve and vp sdes (song et al., 2021b), in most cases. in addition, it enhances the stability of converged score-based models on cifar-10 and celeba 64² across random seeds, and helps avoid generating noisy images in ve. stf accelerates the training of score-based models (3.6× speed-up for ve on cifar-10) while obtaining comparable or better fid scores. to the best of our knowledge, stf is the first technique to accelerate the training process of diffusion models. we further demonstrate the performance gain with increasing reference batch size, highlighting the negative effect of large variance. our contributions are summarized as follows: (1) we detail the instability of the current diffusion model training objective in a principled and quantitative manner, characterizing a region in the forward process, termed the intermediate phase, where the score-learning targets are most variable (section 3). (2) we propose a generalized score-matching objective, stable target field, which provides more stable training targets (section 4).
(3) we analyze the behavior of the new objective and prove that it is asymptotically unbiased and reduces the trace-of-covariance of the training targets by a factor pertaining to the reference batch size in the intermediate phase under mild conditions (section 5). (4) we illustrate the theoretical arguments empirically and show that the proposed stf objective improves the performance, stability, and training speed of score-based methods. in particular, it achieves the current state-of-the-art fid score on the cifar-10 benchmark when combined with edm (section 6).

background on diffusion models

in diffusion models, the forward process¹ is an sde with no learned parameters, of the form:

dx = f(x, t)dt + g(t)dw,    (1)

¹for simplicity, we focus on the version where the diffusion coefficient g(t) is independent of x(t).

with x(0) ∼ p0, where x ∈ r^d, f : r^d × [0, 1] → r^d, g : [0, 1] → r, w ∈ r^d is the standard wiener process, and p0 is the data distribution. the forward process gradually transforms the data distribution to a known prior as time goes from 0 to 1. sampling from diffusion models is done via a corresponding reverse-time sde (anderson, 1982):

dx = [f(x, t) − g(t)² ∇_x log p_t(x)] dt̄ + g(t) dw̄,

where ∇_x log p_t(x) denotes the score of the transformed data distribution at time t and ¯· denotes time traveling backward from 1 to 0. song et al. (2021b) propose a probability flow ode that induces the same marginal distribution p_t(x) as the sde: dx = [f(x, t) − (1/2) g(t)² ∇_x log p_t(x)] dt̄. both formulations progressively recover p0 from the prior p1. we estimate the score of the transformed data distribution at time t, ∇_x log p_t(x), via a neural network s_θ(x, t). specifically, the training objective is a weighted sum of the denoising score-matching loss (vincent, 2011):

min_θ e_{t∼q_t(t)} λ(t) e_{x∼p0} e_{x(t)∼p_{t|0}(x(t)|x)} ‖s_θ(x(t), t) − ∇_{x(t)} log p_{t|0}(x(t)|x)‖²,    (2)

where q_t is the distribution of the time variable, e.g., uniform over [0, 1] for ve/vp (song et al., 2021b) and a log-normal distribution for edm (karras et al.
(2022)), and λ(t) = σ_t² is the positive weighting function to keep the time-dependent loss at the same magnitude (song et al., 2021b), and p_{t|0} is the transition kernel denoting the conditional distribution of x(t) given x². specifically, diffusion models “destroy” data according to a diffusion process utilizing gaussian transition kernels, which results in p_{t|0}(x(t)|x) being a gaussian centered at (a scaling of) x with variance σ_t² i. recent works (xu et al., 2022b; rissanen et al., 2022) have also extended the underlying principle from the diffusion process to more general physical processes where the training objective is not necessarily score-related.

understanding the training target in score-matching objective

the vanilla denoising score-matching objective at time t is:

ℓ_dsm(θ, t) = e_{p0(x)} e_{p_{t|0}(x(t)|x)}[‖s_θ(x(t), t) − ∇_{x(t)} log p_{t|0}(x(t)|x)‖²],    (3)

where the network is trained to fit the individual targets ∇_{x(t)} log p_{t|0}(x(t)|x) at (x(t), t) – the “influence” exerted by clean data x on x(t). we can swap the order of the sampling process by first sampling x(t) from p_t and then x from p_{0|t}(x|x(t)). thus, s_θ has a closed-form minimizer:

s*_dsm(x(t), t) = e_{p_{0|t}(x|x(t))}[∇_{x(t)} log p_{t|0}(x(t)|x)] = ∇_{x(t)} log p_t(x(t)).

the score field is a conditional expectation of ∇_{x(t)} log p_{t|0}(x(t)|x) with respect to the posterior distribution p_{0|t}. in practice, a monte carlo estimate of this target can have high variance (owen, 2013; elvira & martino, 2021). in particular, when multiple modes of the data distribution have comparable influences on x(t), p_{0|t}(x|x(t)) is a multi-mode distribution, as also observed in xiao et al. (2022). thus the targets ∇_{x(t)} log p_{t|0}(x(t)|x) vary considerably across different x, and this can strongly affect the estimated score at (x(t), t), resulting in slower convergence and worse performance in practical stochastic gradient optimization (wang et al., 2013).
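for the gaussian transition kernel p_{t|0}(x(t)|x) = n(x, σ_t² i) used by ve, the dsm training target has the closed form −(x(t) − x)/σ_t². the sketch below (with made-up shapes and noise scale) computes exactly that single-sample target, i.e., the quantity whose high variance across different x is discussed above.

```python
import numpy as np

def dsm_target(x, x_t, sigma_t):
    """Gradient of log N(x(t); x, sigma_t^2 I) with respect to x(t):
    the single-sample training target of the vanilla DSM objective."""
    return -(x_t - x) / sigma_t ** 2

# illustrative 2-d example with sigma_t = 2: the target points from x(t) back
# toward the clean data point x, scaled by 1/sigma_t^2
x = np.zeros(2)
x_t = np.array([2.0, 0.0])
target = dsm_target(x, x_t, sigma_t=2.0)
```

for a fixed x(t), different plausible sources x give very different such vectors, which is precisely the variance the stf target averages away.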
to quantitatively characterize the variations of individual targets at different times, we propose a metric – the average trace-of-covariance of training targets at time t:

v_dsm(t) = e_{p_t(x(t))} tr(cov_{p_{0|t}(x|x(t))}(∇_{x(t)} log p_{t|0}(x(t)|x)))
= e_{p_t(x(t))} e_{p_{0|t}(x|x(t))}[‖∇_{x(t)} log p_{t|0}(x(t)|x) − ∇_{x(t)} log p_t(x(t))‖²].    (4)

phases: we use v_dsm(t) to define three successive phases relating to the behavior of training targets. as shown in figure 2(a), the three phases partition the score field into near, intermediate, and far regimes (phases 1–3 respectively). intuitively, v_dsm(t) peaks in the intermediate phase (phase 2), where multiple distant modes in the data distribution have comparable influences on the same noisy perturbations, resulting in unstable targets. in phase 1, the posterior p_{0|t}(x|x(t)) concentrates around one single mode, thus the variation is low. in phase 3, the targets remain similar across modes since, as t → 1, the transition kernel approaches the prior p1 for commonly used transition kernels.

(a) ode sampling (b) v_dsm(t) versus t. figure 2: (a): illustration of the three phases in a two-mode distribution. (b): estimated v_dsm(t) for two distributions. we normalize the maximum value to 1 for illustration purposes.

we validate this argument empirically in figure 2(b), which shows the estimated v_dsm(t) for a mixture of two gaussians as well as a subset of the cifar-10 dataset (krizhevsky et al., 2009) for a more realistic setting. here we use the ve sde, i.e., p_{t|0}(x(t)|x) = n(x, σ_t² i) for some noise scale σ_t (song et al., 2021b). v_dsm(t) exhibits similar phase behavior across t in both toy and realistic cases. moreover, v_dsm(t) reaches its maximum value in the intermediate phase, demonstrating the large variations of individual targets. we defer more details to appendix c.

²we omit “(0)” from x(0) when there is no ambiguity.
treating the score as a field

the vanilla denoising score-matching approach (equation 3) can be viewed as a monte carlo estimator, i.e.,

∇_{x(t)} log p_t(x(t)) = E_{p_{0|t}(x|x(t))}[ ∇_{x(t)} log p_t(x(t) | x) ] ≈ (1/n) Σ_{i=1}^n ∇_{x(t)} log p_t(x(t) | x_i),

where x_i is sampled from p_{0|t}(x | x(t)) and n = 1. the variance of a monte carlo estimator is proportional to 1/n, so we propose to use a larger batch (n) to counter the high-variance problem described in section 3. since sampling directly from the posterior p_{0|t} is not practical, we first apply importance sampling with the proposal distribution p_0. specifically, we sample a large reference batch B_L = {x_i}_{i=1}^n ~ p_0^n and get the following approximation:

∇_{x(t)} log p_t(x(t)) ≈ (1/n) Σ_{i=1}^n [ p_{0|t}(x_i | x(t)) / p_0(x_i) ] ∇_{x(t)} log p_t(x(t) | x_i).

the importance weights can be rewritten as p_{0|t}(x | x(t)) / p_0(x) = p_t(x(t) | x) / p_t(x(t)). however, this basic importance sampling estimator has two issues: the weights now involve an unknown normalization factor p_t(x(t)), and the ratio between the prior and posterior distributions can be large in high-dimensional spaces. to remedy these problems, we appeal to self-normalization techniques (hesterberg, 1995) to further stabilize the training targets:

∇_{x(t)} log p_t(x(t)) ≈ Σ_{i=1}^n [ p_t(x(t) | x_i) / Σ_{j=1}^n p_t(x(t) | x_j) ] ∇_{x(t)} log p_t(x(t) | x_i).    (5)

we term the new training target in equation 5 the stable target field (stf). in practice, we sample the reference batch B_L = {x_i}_{i=1}^n from p_0^n and obtain x(t) by applying the transition kernel to the "first" training sample x_1. taken together, the new stf objective becomes:

ℓ_STF(θ, t) = E_{B_L ~ p_0^n} E_{x(t) ~ p_t(·|x_1)} [ ‖ s_θ(x(t), t) − Σ_{k=1}^n [ p_t(x(t) | x_k) / Σ_{j=1}^n p_t(x(t) | x_j) ] ∇_{x(t)} log p_t(x(t) | x_k) ‖² ].    (6)

when n = 1, stf reduces to the vanilla denoising score-matching (equation 2). when n > 1, stf incorporates a reference batch to stabilize the training targets. intuitively, the new weighted target assigns larger weights to clean data with higher influence on x(t), i.e., higher transition probability p_t(x(t) | x).
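a minimal numpy sketch of the self-normalized target in equation 5 for a gaussian kernel (our own illustration, not the paper's code); the softmax weights are computed with a logsumexp-style shift for numerical stability.

```python
import numpy as np

def stf_target(ref_batch, x_t, sigma):
    # self-normalized stable target field (equation 5) for a gaussian kernel:
    # softmax weights over the reference batch times the individual targets
    sq_dist = np.sum((x_t - ref_batch) ** 2, axis=1)
    log_w = -0.5 * sq_dist / sigma**2          # log p_t(x(t) | x_i) up to a constant
    w = np.exp(log_w - log_w.max())            # logsumexp-style shift for stability
    w /= w.sum()                               # self-normalization
    targets = (ref_batch - x_t) / sigma**2     # grad_{x(t)} log p_t(x(t) | x_i)
    return w @ targets

# two symmetric modes cancel at the midpoint; with n = 1 the target
# reduces to the vanilla dsm target
ref = np.array([[1.0, 0.0], [-1.0, 0.0]])
mid = stf_target(ref, np.zeros(2), 1.0)
single = stf_target(np.array([[2.0, 0.0]]), np.zeros(2), 1.0)
```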
similar to our analysis in section 3, we can again swap the sampling process in equation 6 so that, for a perturbation x(t), we sample the reference batch {x_i}_{i=1}^n from p_{0|t}(x | x(t)) p_0^{n−1}, where the first element follows the posterior and the rest follow the data distribution. thus, the minimizer of the new objective (equation 6) is (derivation can be found in appendix b.1)

s*_STF(x(t), t) = E_{x_1 ~ p_{0|t}(·|x(t))} E_{{x_i}_{i=2}^n ~ p_0^{n−1}} [ Σ_k ( p_t(x(t) | x_k) / Σ_j p_t(x(t) | x_j) ) ∇_{x(t)} log p_t(x(t) | x_k) ].    (7)

note that although stf significantly reduces the variance, it introduces bias: the minimizer is no longer the true score. nevertheless, in section 5, we show that the bias converges to 0 as n → ∞, while the trace-of-covariance of the training targets is reduced by a factor of n − 1 when p_{0|t}(· | x(t)) ≈ p_0. we further instantiate the stf objective (equation 6) with transition kernels of the form p_t(x(t) | x) = N(x(t); x, σ_t² I), which includes edm (karras et al., 2022), vp (through reparameterization), and ve (song et al., 2021b):

ℓ_STF(θ, t) = E [ ‖ s_θ(x(t), t) − Σ_k ( exp(−‖x(t) − x_k‖² / (2σ_t²)) / Σ_j exp(−‖x(t) − x_j‖² / (2σ_t²)) ) (x_k − x(t)) / σ_t² ‖² ].

to aggregate the time-dependent stf objective over t, we sample the time variable t from the training distribution q_t and apply the weighting function λ(t). together, the final training objective for stf is E_{t ~ q_t} [ λ(t) ℓ_STF(θ, t) ]. we summarize the training process in algorithm 1. the small batch size |B| is the same as the normal batch size in the vanilla training process. we defer specific use cases of the stf objective combined with various popular diffusion models to appendix a.

algorithm 1: learning the stable target field
input: training iterations T, initial model s_θ, dataset D, learning rate η.
for t = 1 . . . T do:
  sample a large reference batch B_L from D, and subsample a small batch B = {x_i}_{i=1}^{|B|} from B_L
  uniformly sample the times {t_i}_{i=1}^{|B|} ~ q_t
  obtain the batch of perturbed samples {x_i(t_i)}_{i=1}^{|B|} by applying the transition kernel p_{t_i}(· | x_i)
  calculate the stable target field V_{B_L}(x_i(t_i)) for all x_i(t_i):
    V_{B_L}(x_i(t_i)) = Σ_{y ∈ B_L} ( p_{t_i}(x_i(t_i) | y) / Σ_{y' ∈ B_L} p_{t_i}(x_i(t_i) | y') ) ∇ log p_{t_i}(x_i(t_i) | y)
  calculate the loss: L(θ) = (1/|B|) Σ_{x_i ∈ B} λ(t_i) ‖ s_θ(x_i(t_i), t_i) − V_{B_L}(x_i(t_i)) ‖²
  update the model parameters: θ ← θ − η ∇_θ L(θ)
end for
return s_θ

analysis

in this section, we analyze the theoretical properties of our approach. in particular, we show that the new minimizer s*_STF(x(t), t) (equation 7) converges to the true score asymptotically (section 5.1). then, we show that the proposed stf reduces the trace-of-covariance of the training targets proportionally to the reference batch size in the intermediate phase, under mild conditions (section 5.2).

asymptotic behavior

although in general s*_STF(x(t), t) ≠ ∇_{x(t)} log p_t(x(t)), the bias shrinks toward 0 with increasing n. in the following theorem we show that the minimizer of the stf objective at (x(t), t), i.e., s*_STF(x(t), t), is asymptotically normal when n → ∞.

theorem 1. under mild regularity conditions (see appendix b.2),

√n ( s*_STF(x(t), t) − ∇_{x(t)} log p_t(x(t)) ) →_d N( 0, Cov_{p_0(x)}( ∇_{x(t)} p_t(x(t) | x) ) / p_t(x(t))² ).    (8)

we defer the proof to appendix b.2. the theorem states that, for commonly used transition kernels, s*_STF(x(t), t) − ∇_{x(t)} log p_t(x(t)) converges to a zero-mean normal, and a larger reference batch size n leads to a smaller asymptotic variance. as can be seen in equation 8, when n → ∞, s*_STF(x(t), t) concentrates tightly around the true score ∇_{x(t)} log p_t(x(t)).
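algorithm 1 can be sketched end-to-end on a toy problem. the snippet below is our own simplification (1-d data, a linear score model s_θ(x, σ) = θ·x with an analytic gradient instead of backpropagation, and the noise level σ playing the role of t), but it follows the same steps: sample a reference batch, subsample a small batch, perturb, compute the stable target field, and take an sgd step on the λ(t)-weighted loss.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(2048)                 # toy 1-d dataset, p_0 = N(0, 1)

def stf_field(ref, x_t, sigma):
    # stable target field for a 1-d gaussian kernel
    log_w = -0.5 * (x_t - ref) ** 2 / sigma**2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    return w @ ((ref - x_t) / sigma**2)

theta, lr = 0.0, 1e-2                            # linear score model s(x, sigma) = theta * x
n_ref, n_small = 512, 64
for step in range(500):
    ref = rng.choice(data, size=n_ref, replace=False)    # large reference batch B_L
    small = ref[:n_small]                                # small batch B subset of B_L
    sigmas = rng.uniform(0.2, 2.0, size=n_small)         # noise levels t_i ~ q_t
    x_t = small + sigmas * rng.standard_normal(n_small)  # perturb via transition kernel
    v = np.array([stf_field(ref, x, s) for x, s in zip(x_t, sigmas)])
    lam = sigmas**2                                      # weighting lambda(t) = sigma_t^2
    resid = theta * x_t - v
    grad = np.mean(2.0 * lam * resid * x_t)              # analytic gradient of weighted mse
    theta -= lr * grad

# for p_0 = N(0, 1) the smoothed score is -x / (1 + sigma^2), so theta learns a negative slope
```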
as done in section 3, we study the trace-of-covariance of the training targets in stf:

V_STF(t) = E_{p_t(x(t))} tr( Cov_{p_{0|t}(x_1|x(t)) p_0^{n−1}}( Σ_k ( p_t(x(t) | x_k) / Σ_j p_t(x(t) | x_j) ) ∇_{x(t)} log p_t(x(t) | x_k) ) ).

in the following theorem we compare V_STF with V_DSM; in particular, we can upper bound V_STF(t).

theorem 2. under mild regularity conditions (see appendix b.3),

V_STF(t) ≤ (1/(n−1)) ( V_DSM(t) + E_{p_t(x(t))} [ D_f( p_{0|t}(x | x(t)) ‖ p_0(x) ) ] ) + o(1/n),

where D_f is an f-divergence (the precise form of f is given in appendix b.3). further, when p_{0|t}(x | x(t)) ≈ p_0(x) for all x(t), V_STF(t) ∝ V_DSM(t) / (n − 1).

we defer the proof to appendix b.3. the second term, involving the f-divergence D_f, is necessary to capture how the coefficients p_t(x(t) | x_k) / Σ_j p_t(x(t) | x_j) used to calculate the weighted score target vary across different samples x(t). this term decreases monotonically as a function of t. in phase 1, p_{0|t}(x | x(t)) differs substantially from p_0(x) and the divergence term dominates. in contrast to the upper bound, both V_STF(t) and V_DSM(t) have minimal variance at small values of t, since the training target is always dominated by a single x. the theorem has more relevance in phase 2, where the divergence term decreases to a value comparable to V_DSM(t). in this phase, we empirically observe that the ratio of the two terms in the upper bound ranges from 10 to 100. thus, when we use a large reference batch size (in the thousands), the theorem implies that stf offers considerably lower variance (by a factor of 10 or more) relative to the dsm objective. in phase 3, the second term vanishes to 0, as p_{0|t} ≈ p_0 for large t for commonly used transition kernels. as a result, stf reduces the average trace-of-covariance of the training targets by at least a factor of n − 1 in the far field.

together, we demonstrate that the stf targets have diminishing bias (theorem 1) and are much more stable during training (theorem 2). these properties make the stf objective more favorable for training diffusion models with stochastic gradient optimization.
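the variance reduction of theorem 2 can be checked by simulation. the sketch below is our own toy setup (a 1-d two-mode dataset and an intermediate noise level): it estimates the variance of the training target over fresh reference batches, using the swapped sampling view in which the first reference element is drawn from the posterior; n = 1 recovers the dsm target.

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([-1 + 0.05 * rng.standard_normal(500),
                        1 + 0.05 * rng.standard_normal(500)])
sigma = 1.0                                    # intermediate noise for modes at +-1

def weights(ref, x_t):
    log_w = -0.5 * (x_t - ref) ** 2 / sigma**2
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

def stf_target(ref, x_t):
    return weights(ref, x_t) @ ((ref - x_t) / sigma**2)

def target_variance(n_ref, n_xt=100, n_rep=100):
    # average variance of the training target at a fixed x(t), over fresh
    # reference batches: first element from the posterior, rest i.i.d. from p_0
    total = 0.0
    for _ in range(n_xt):
        x_t = rng.choice(data) + sigma * rng.standard_normal()
        post = weights(data, x_t)                     # exact posterior over the dataset
        samples = []
        for _ in range(n_rep):
            x1 = rng.choice(data, p=post)             # posterior draw
            rest = rng.choice(data, size=n_ref - 1)   # remaining p_0 draws
            samples.append(stf_target(np.append(rest, x1), x_t))
        total += np.var(samples)
    return total / n_xt

v_dsm = target_variance(1)      # n = 1 recovers the dsm (individual) target
v_stf = target_variance(64)     # a larger reference batch stabilizes the target
```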
experiments

in this section, we first empirically validate our theoretical analysis from section 5, especially the variance reduction in the intermediate phase (section 6.1). next, we show that the stf objective improves various diffusion models on image generation tasks in terms of image quality (section 6.2); in particular, stf achieves state-of-the-art performance on top of edm. in addition, we demonstrate that stf accelerates the training of diffusion models, and improves the convergence speed and final performance with an increasing reference batch size (section 6.3).

variance reduction in the intermediate phase

figure 3: (a, b): V_DSM(t) and D(t) versus t (maximum values normalized to 1 for illustration purposes). (c, d): V_STF(t) with a varying reference batch size n; the three phases are marked.

the proposed algorithm 1 utilizes a large reference batch to calculate the stable target field instead of the individual target. in addition to the theoretical analysis in section 5, we provide a further empirical study to characterize the intermediate phase and verify the variance-reduction effect of stf. apart from V(t), we also quantify the average divergence between the posterior p_{0|t}(· | x(t)) and the data distribution p_0 at time t (introduced in theorem 2): D(t) = E_{p_t(x(t))} [ D_f( p_{0|t}(x | x(t)) ‖ p_0(x) ) ]. intuitively, the number of high-density modes in p_{0|t}(x | x(t)) grows as D(t) decreases. to investigate their behaviors, we construct two synthetic datasets: (1) a 64-dimensional mixture of two gaussian components (two gaussians), and (2) a subset of 1024 images of cifar-10 (cifar-10-4096).

figure 3(a) and figure 3(b) show the behaviors of V_DSM(t) and D(t) on two gaussians and cifar-10-4096. in both settings, V_DSM(t) reaches its peak in the intermediate phase (phase 2), while D(t) gradually decreases over time. these results agree with our theoretical understanding from section 3.
in phases 2 and 3, several modes of the data distribution have noticeable influences on the scores, but only in phase 2 are the influences much more distinct, leading to high variations of the individual targets ∇_{x(t)} log p_t(x(t) | x), x ~ p_{0|t}(x | x(t)).

figure 3(c) and figure 3(d) further show the relationship between V_STF(t) and the reference batch size n. recall that when n = 1, stf degenerates to the individual target and V_STF(t) = V_DSM(t). we observe that V_STF(t) decreases when enlarging n. in particular, the predicted relation V_STF(t) ∝ V_DSM(t) / (n − 1) from theorem 2 holds for the two gaussians dataset, where D_f is small. on the high-dimensional dataset cifar-10-4096, the stable target field can still greatly reduce the training-target variance with large reference batch sizes n.

image generation

table 1: cifar-10 sample quality (fid, inception score) and number of function evaluations (nfe). methods compared: stylegan2-ada (karras et al., 2020); ddpm (ho et al., 2020); ncsnv2 (song & ermon, 2020); pfgm (xu et al., 2022b); ve (song et al., 2021b) with dsm/stf under the rk45 and pc samplers; vp (song et al., 2021b) with dsm/stf under the ddim and rk45 samplers; edm (karras et al., 2022) with dsm/stf under the heun sampler and the ncsn++/ddpm++ architectures. (the numeric fid/inception/nfe values appear in the original table.)

we demonstrate the effectiveness of the new objective on image generation tasks. we consider the cifar-10 (krizhevsky et al., 2009) and celeba 64×64 (yang et al., 2015) datasets. we set the reference batch size n to 4096 (cifar-10) and 1024 (celeba 64²). we choose the current state-of-the-art score-based method edm (karras et al., 2022) as the baseline, and replace the dsm objective with our stf objective during training. we also apply stf to two other popular diffusion models, the ve/vp sdes (song et al., 2021b). for a fair comparison, we directly adopt the architectures and the hyper-parameters in karras et al. (2022) and song et al. (2021b) for edm and ve/vp respectively.
in particular, we use the improved ncsn++/ddpm++ models (karras et al., 2022) in the edm scheme. to highlight the stability issue, we train three models with different seeds for ve on cifar-10. we provide more experimental details in appendix d.1.

numerical solvers. the reverse-time ode and sde in score-based models are compatible with any general-purpose solver. we use the adaptive rk45 method (dormand & prince, 1980; song et al., 2021b) for ve/vp and the popular ddim solver (song et al., 2021a) for vp. we adopt heun's 2nd-order method (heun) and the time discretization proposed by karras et al. (2022) for edm. for sdes, we apply the predictor-corrector (pc) sampler used in song et al. (2021b). we denote the methods in an objective-sampler format, i.e., a-b, where a ∈ {dsm, stf} and b ∈ {rk45, pc, ddim, heun}. we defer more details to appendix d.2.

results. for quantitative evaluation of the generated samples, we report fid scores (heusel et al., 2017) (lower is better) and inception scores (salimans et al., 2016) (higher is better). we measure the sampling speed by the average nfe (number of function evaluations). we also include the results of several popular generative models (karras et al., 2020; ho et al., 2020; song & ermon, 2019; xu et al., 2022b) for reference.

table 1 and table 2 report the sample quality and the sampling speed on unconditional generation of cifar-10 and celeba 64². our main findings are: (1) stf achieves new state-of-the-art fid scores for unconditional generation on the cifar-10 benchmark. as shown in table 1, the stf objective obtains an fid of 1.90 when incorporated with the edm scheme. to the best of our knowledge, this is the lowest fid score on the unconditional cifar-10 generation task. in addition, the stf objective consistently improves edm across the two architectures. (2) the stf objective improves the performance of different diffusion models.
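for concreteness, here is a self-contained sketch of a heun (2nd-order) deterministic sampler; this is our own simplification, as edm additionally uses a specific time discretization and preconditioning, which we omit. for a point-mass data distribution at the origin under the ve kernel, the score is −x/σ² and integration contracts the initial noise toward the data.

```python
import numpy as np

def heun_sampler(score, sigmas, x_init):
    # deterministic heun (2nd-order) integration of the ve probability-flow ode
    # dx/dsigma = -sigma * score(x, sigma), over a decreasing noise schedule
    x = x_init
    for s_cur, s_next in zip(sigmas[:-1], sigmas[1:]):
        d = -s_cur * score(x, s_cur)                  # euler (predictor) slope
        x_euler = x + (s_next - s_cur) * d
        if s_next > 0:
            d_next = -s_next * score(x_euler, s_next)
            x = x + (s_next - s_cur) * 0.5 * (d + d_next)   # heun correction
        else:
            x = x_euler                               # last step to sigma = 0 stays euler
    return x

# toy check: for p_0 = delta(0) the ve-perturbed score is -x / sigma^2, and
# integrating from sigma_max down to sigma_min contracts noise toward the origin
sigmas = np.geomspace(10.0, 0.01, 40)
rng = np.random.default_rng(0)
x_init = sigmas[0] * rng.standard_normal(4)
x_out = heun_sampler(lambda x, s: -x / s**2, sigmas, x_init)
```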
we observe that the stf objective improves the fid/inception scores of ve/vp/edm on cifar-10 for most ode and sde samplers. stf consistently provides performance gains for ve across datasets. remarkably, our objective achieves much better sample quality using ode samplers for ve, with an fid score gain of 3.39 on cifar-10 and 2.22 on celeba 64². for vp, stf provides better results with the popular ddim sampler, while suffering a slight performance drop when using the rk45 sampler. (3) the stf objective stabilizes the converged ve model with the rk45 sampler. in appendix e.1, we report the standard deviations of performance metrics for converged models with different seeds on cifar-10 with ve. we observe that models trained with the stf objective give more consistent results, with a smaller standard deviation of the used metrics.

table 2: fid and nfe on celeba 64² for ve (dsm) versus ve (stf), with the rk45 and pc samplers. (the numeric values appear in the original table.)

we further provide generated samples in appendix f. one interesting observation is that when using the rk45 sampler for ve on cifar-10, the generated samples from the stf objective do not contain noisy images, unlike those from the vanilla dsm objective.

accelerating training of diffusion models

figure 4: fid and generated samples throughout training on (a) cifar-10 and (b) celeba 64².

variance-reduction techniques in neural network training can help to find better optima and achieve faster convergence rates (wang et al., 2013; defazio et al., 2014; johnson & zhang, 2013). in figure 4, we plot the fid scores every 50k iterations during the course of training. since our goal is to investigate relative performance during the training process, and because fid scores computed on 1k samples are strongly correlated with the full fid scores on 50k samples (song & ermon, 2020), we report fid scores on 1k samples for faster evaluation.
we apply ode samplers for fid evaluation and measure the training time on two nvidia a100 gpus. for a fair comparison, we report the average fid scores of models trained by the dsm and stf objectives on ve versus the wall-clock training time (in hours). the stf objective achieves better fid scores within the same training time, although the calculation of the target field from the reference batch introduces slight overhead (algorithm 1). in figure 4(a), we show that the stf objective drastically accelerates the training of diffusion models on cifar-10: it achieves comparable fid scores with 3.6× less training time (25h versus 90h). for the celeba 64² dataset, the training-time improvement is less significant than on cifar-10. our hypothesis is that the stf objective is more effective when there are multiple well-separated modes in the data distribution, e.g., the ten classes in cifar-10, where the dsm objective suffers from relatively larger variations in the intermediate phase. in addition, the converged models have better final performance when paired with stf on both datasets.

effects of the reference batch size

according to our theory (theorem 2), the upper bound on the trace-of-covariance of the stf target decreases proportionally to the reference batch size. here we study the effects of the reference batch size (n) on model performance during training. the fid scores are evaluated on 1k samples using the rk45 sampler. as shown in figure 5, models converge faster and produce better samples when increasing n. this suggests that smaller variations of the training targets can indeed speed up training and improve the final performance of diffusion models.

related work

different phases of diffusion models. the idea that diffusion models have different phases has been explored in prior works, though the motivations and definitions vary (karras et al., 2022; choi et al., 2022). karras et al.
(2022) argue that the training targets are difficult and unnecessary to learn in the very near field (small t in our phase 1), whereas the training targets are always dissimilar to the true targets in the intermediate and far fields (our phases 2 and 3). as a result, their solution is to sample t from a log-normal distribution to emphasize the relevant region (relatively large t in our phase 1). in contrast, we focus on reducing the large training-target variance in the intermediate and far fields, and propose stf to better estimate the true target (cf. karras et al. (2022)). choi et al. (2022) identify a key region where the model learns perceptually rich content, and determine the training weights λ(t) based on the signal-to-noise ratio (snr) at different t. as the snr is monotonically decreasing over time, the resulting up-weighted region does not match our phase 2 characterization. in general, our proposed stf method reduces the training-target variance in the intermediate field and is complementary to previous improvements of diffusion models.

importance sampling. the technique of importance sampling has been widely adopted in the machine learning community, for example in debiasing generative models (grover et al., 2019), counterfactual learning (swaminathan & joachims, 2015), and reinforcement learning (metelli et al., 2018). prior works using importance sampling to improve generative-model training include reweighted wake-sleep (rws) (bornschein & bengio, 2014) and importance weighted autoencoders (iwae) (burda et al., 2015). rws views the original wake-sleep algorithm (hinton et al., 1995) as importance sampling with one latent variable, and proposes to sample multiple latents to obtain gradient estimates with lower bias and variance.
iwae utilizes importance sampling with multiple latents to achieve greater flexibility of encoder training and a tighter log-likelihood lower bound compared to the standard variational autoencoder (kingma & welling, 2013; rezende et al., 2014).

variance reduction for the fisher divergence. one popular approach to score matching is to minimize the fisher divergence between the true and predicted scores (hyvärinen & dayan, 2005). wang et al. (2020) link the fisher divergence to denoising score matching (vincent, 2011) and study the large-variance problem (of order O(1/σ_t⁴)) of the fisher divergence when σ_t → 0; they utilize a control variate to reduce the variance. however, this is typically not a concern for current diffusion models, as the time-dependent objective can be viewed as multiplying the fisher divergence by λ(t) = σ_t², resulting in a finite-variance objective even when σ_t → 0.

conclusion

we identify large target variance as a significant training issue affecting diffusion models. we define three phases with distinct behaviors, and show that the high-variance targets appear in the intermediate phase. as a remedy, we present a generalized score-matching objective, the stable target field (stf), whose formulation is analogous to self-normalized importance sampling with a large reference batch. albeit no longer an unbiased estimator, our proposed objective is asymptotically unbiased and reduces the trace-of-covariance of the training targets, which we demonstrate theoretically and empirically. we show the effectiveness of our method on image generation tasks, and show that stf improves the performance, stability, and training speed of various state-of-the-art diffusion models. future directions include a principled study of the effect of different reference-batch sampling procedures.
our presented approach samples the reference batch {x_i} uniformly from the whole dataset, so we expect that training diffusion models with a reference batch containing more samples in the neighborhood of x_1 (the sample from which x(t) is perturbed) would lead to an even better estimate of the score field. moreover, the three-phase analysis can effectively capture the behaviors of other physics-inspired generative models, such as pfgm (xu et al., 2022b) or the more advanced pfgm++ (xu et al., 2023). therefore, we anticipate that stf can further enhance the performance and stability of these models.

acknowledgements

we are grateful to benson chen for reviewing an early draft of this paper. we would like to thank hao he and the anonymous reviewers for their valuable feedback. yx and tj acknowledge support from the mit-dsta singapore collaboration, from the nsf expeditions grant (award 1918839) "understanding the world through code", and from the mit-ibm grand challenge project. st and tj also acknowledge support from the ml for pharmaceutical discovery and synthesis consortium (mlpds).

references

brian d. o. anderson. reverse-time diffusion equation models. stochastic processes and their applications, 1982.
jörg bornschein and yoshua bengio. reweighted wake-sleep. arxiv preprint arxiv:1406.2751, 2014.
yuri burda, roger grosse, and ruslan salakhutdinov. importance weighted autoencoders. arxiv preprint, 2015.
jooyoung choi, jungbeom lee, chaehun shin, sungwon kim, hyunwoo kim, and sungroh yoon. perception prioritized training of diffusion models. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 11472–11481, 2022.
aaron defazio, francis bach, and simon lacoste-julien. saga: a fast incremental gradient method with support for non-strongly convex composite objectives. advances in neural information processing systems, 27, 2014.
j. r. dormand and p. j. prince. a family of embedded runge-kutta formulae. journal of computational and applied mathematics, 1980.
víctor elvira and luca martino. advances in importance sampling.
wiley statsref: statistics reference online, 2021.
aditya grover, jiaming song, ashish kapoor, kenneth tran, alekh agarwal, eric j horvitz, and stefano ermon. bias correction of learned generative models using likelihood-free importance weighting. advances in neural information processing systems, 32, 2019.
tim hesterberg. weighted average importance sampling and defensive mixture distributions. technometrics, 1995.
martin heusel, hubert ramsauer, thomas unterthiner, bernhard nessler, and sepp hochreiter. gans trained by a two time-scale update rule converge to a local nash equilibrium. in nips, 2017.
geoffrey e hinton, peter dayan, brendan j frey, and radford m neal. the "wake-sleep" algorithm for unsupervised neural networks. science, 1995.
jonathan ho, ajay jain, and pieter abbeel. denoising diffusion probabilistic models. advances in neural information processing systems, 33:6840–6851, 2020.
aapo hyvärinen and peter dayan. estimation of non-normalized statistical models by score matching. journal of machine learning research, 6(4), 2005.
rie johnson and tong zhang. accelerating stochastic gradient descent using predictive variance reduction. advances in neural information processing systems, 26, 2013.
tero karras, timo aila, samuli laine, and jaakko lehtinen. progressive growing of gans for improved quality, stability, and variation. arxiv, abs/1710.10196, 2018.
tero karras, miika aittala, janne hellsten, samuli laine, jaakko lehtinen, and timo aila. training generative adversarial networks with limited data. arxiv, abs/2006.06676, 2020.
tero karras, miika aittala, timo aila, and samuli laine. elucidating the design space of diffusion-based generative models. in alice h. oh, alekh agarwal, danielle belgrave, and kyunghyun cho (eds.), advances in neural information processing systems, 2022. url https://openreview.net/forum?id=k7futowmoc7.
diederik p kingma and max welling. auto-encoding variational bayes. arxiv preprint, 2013.
alex krizhevsky, geoffrey hinton, et al. learning multiple layers of features from tiny images. 2009.
shitong luo and wei hu. diffusion probabilistic models for 3d point cloud generation. in 2021 ieee/cvf conference on computer vision and pattern recognition (cvpr), pp. 2836–2844, 2021.
alberto maria metelli, matteo papini, francesco faccio, and marcello restelli. policy optimization via importance sampling. in neurips, 2018.
alex nichol, prafulla dhariwal, aditya ramesh, pranav shyam, pamela mishkin, bob mcgrew, ilya sutskever, and mark chen. glide: towards photorealistic image generation and editing with text-guided diffusion models. in icml, 2022.
art b. owen. monte carlo theory, methods and examples. 2013.
danilo jimenez rezende, shakir mohamed, and daan wierstra. stochastic backpropagation and approximate inference in deep generative models. in international conference on machine learning, pp. 1278–1286. pmlr, 2014.
severi rissanen, markus heinonen, and a. solin. generative modelling with inverse heat dissipation. 2022.
tim salimans, ian j. goodfellow, wojciech zaremba, vicki cheung, alec radford, and xi chen. improved techniques for training gans. arxiv, abs/1606.03498, 2016.
chence shi, shitong luo, minkai xu, and jian tang. learning gradient fields for molecular conformation generation. in icml, 2021.
jascha sohl-dickstein, eric weiss, niru maheswaranathan, and surya ganguli. deep unsupervised learning using nonequilibrium thermodynamics. in international conference on machine learning, pp. 2256–2265. pmlr, 2015.
jiaming song, chenlin meng, and stefano ermon. denoising diffusion implicit models. arxiv.
yang song and stefano ermon. generative modeling by estimating gradients of the data distribution. advances in neural information processing systems, 32, 2019.
yang song and stefano ermon. improved techniques for training score-based generative models. advances in neural information processing systems, 33, 2020.
yang song, jascha sohl-dickstein, diederik p kingma, abhishek kumar, stefano ermon, and ben poole. score-based generative modeling through stochastic differential equations.
in international conference on learning representations, 2021b. url https://openreview.net/forum?id=pxtig12rrhs.
adith swaminathan and thorsten joachims. the self-normalized estimator for counterfactual learning. in nips, 2015.
pascal vincent. a connection between score matching and denoising autoencoders. neural computation, 2011.
chong wang, x. chen, alex smola, and e. xing. variance reduction for stochastic gradient optimization. in nips, 2013.
ziyu wang, shuyu cheng, yueru li, jun zhu, and bo zhang. a wasserstein minimum velocity approach to learning unnormalized models. in international conference on artificial intelligence and statistics, pp. 3728–3738. pmlr, 2020.
zhisheng xiao, karsten kreis, and arash vahdat. tackling the generative learning trilemma with denoising diffusion gans. in international conference on learning representations, 2022. url https://openreview.net/forum?id=jprm0p-q0co.
minkai xu, lantao yu, yang song, chence shi, stefano ermon, and jian tang. geodiff: a geometric diffusion model for molecular conformation generation. arxiv, abs/2203.02923, 2022a.
yilun xu, ziming liu, max tegmark, and tommi jaakkola. poisson flow generative models. arxiv, 2022b.
yilun xu, ziming liu, yonglong tian, shangyuan tong, max tegmark, and t. jaakkola. pfgm++: unlocking the potential of physics-inspired generative models. arxiv, abs/2302.04265, 2023.
do users benefit from interpretable vision? a user study, baseline, and dataset

leon sixt∗1, martin schuessler∗23, oana-iuliana popescu1, philipp weiß3, tim landgraf1
freie universität berlin1, weizenbaum institut berlin2, tu berlin3
leon.sixt@fu-berlin.de, martin.schuessler@tu-berlin.de
∗ equal contribution

abstract

a variety of methods exist to explain image classification models. however, it remains unclear whether they provide any benefit to users over simply comparing various inputs and the model's respective predictions. we conducted a user study (n=240) to test how such a baseline explanation technique performs against concept-based and counterfactual explanations. to this end, we contribute a synthetic dataset generator capable of biasing individual attributes and quantifying their relevance to the model. in a study, we assess whether participants can identify the relevant set of attributes compared to the ground truth. our results show that the baseline outperformed concept-based explanations. counterfactual explanations from an invertible neural network performed similarly to the baseline; still, they allowed users to identify some attributes more accurately. our results highlight the importance of measuring how well users can reason about biases of a model, rather than relying solely on technical evaluations or proxy tasks. we open-source our study and dataset so they can serve as a blueprint for future studies.

introduction

deep neural networks have been widely adopted in many domains. yet, for some applications, their use may be limited by how little we understand which features are relevant. whether engineer or user, insurance company or regulatory body: all require reliable information about what the model has learned or why the model provides a certain output. numerous methods have been proposed to explain deep neural networks (gilpin et al., 2018; molnar et al., 2020).
ultimately, to evaluate whether such explanations are helpful, we need user studies (doshi-velez & kim, 2017; wortman vaughan & wallach, 2020). in fact, some studies provided evidence that interpretable ml techniques may be helpful to find biases or spurious correlations (ribeiro et al., 2016b; adebayo et al., 2020a). however, a substantial body of work shows that they may not be as helpful as claimed (kaur et al., 2020; alqaraawi et al., 2020; chu et al., 2020; shen & huang, 2020). consequently, it seems that in real-world applications, biases are often found by simply inspecting the model’s predictions rather than applying interpretable ml. a recent example is the twitter image cropping algorithm: it was the users who discovered that it favored white people over people of color (yee et al., 2021). in this work, we ask: do modern interpretability methods enable users to discover biases better than by simply inspecting input/output pairs? to investigate this question empirically, we first propose two4two: a synthetic dataset depicting two abstract animals. its data-generating factors can be correlated with the binary target class, thereby creating arbitrarily strong biases. we designed a baseline explanation technique for bias discovery using only the model’s output: input images are arranged in a grid grouped by the model’s logit predictions. this design allows a user to inspect all attributes that potentially predict the target class. in an initial user study (n=50), we validated that participants were struggling to find both biases contained in our dataset using this technique. hence, more elaborate methods can improve over the baseline on this dataset. in the main study (n=240), we compared the baseline against two state-of-the-art explanations: automatically-discovered concepts and counterfactual interpolations generated with an invertible neural network. 
figure 1: (a) baseline, (b) invertible neural networks, (c) concepts (zhang et al., 2021). we tested whether users can identify the class-relevant features of images showing two types of animals. we biased attributes like the animal's color to be predictive of the class and investigated whether explanation techniques enabled users to discover these biases. we tested a simple baseline (a), which shows random samples grouped by the model's output logit, counterfactual samples generated by an invertible neural network (b), and automatically discovered concepts (c). a participant viewed only one of the above conditions.

we found that none of these explanations outperformed the baseline, even though some features were identified more accurately with counterfactuals. the textual justifications of participants revealed several usability issues in all methods. this highlights the necessity to validate any claims about the benefits of explanation techniques in user studies. this work represents substantial empirical novelty and significance in the field of interpretable ml:

• the two4two dataset generator provides control over features and biases. it is designed specifically for human subject studies and to challenge existing interpretability approaches,
• methods to quantify ground-truth feature importance when the data-generating process is known,
• a study design that provides a unified approach to evaluating interpretable vision methods on the task of bias discovery. it is suitable for lay-users and includes several measures to ensure high-quality crowd-sourced responses,
• a strong and simple baseline explanation technique using only the model output, which we propose as a benchmark for future studies,
• we open-source our dataset, explanation techniques, model, study design, including instructions and videos to support replicating our results as well as adapting our design to other explanation techniques.
related work
interpretable ml for vision. different explanation approaches have been proposed: saliency maps (bach et al., 2015; ancona et al., 2018; sundararajan et al., 2017), example-based explanations (cai et al., 2019), counterfactual examples (singla et al., 2020), activation-concept approaches (kim et al., 2018), and models with built-in interpretability (chen et al., 2019; brendel & bethge, 2018). for a detailed review of the field, we refer to (gilpin et al., 2018; molnar et al., 2020). our work focuses on counterfactual explanations and automatically-discovered concepts. counterfactual explanations are samples that change the model output, e.g., flip the output class (wachter et al., 2018). we generated counterfactuals with invertible neural networks (inns) (jacobsen et al., 2018; kingma & dhariwal, 2018). this approach has recently gained momentum (hvilshøj et al., 2021; dombrowski et al., 2021; mackowiak et al., 2020). previous works have also used gans and vaes for counterfactual generation (goyal et al., 2019; mertes et al., 2020; sauer & geiger, 2021; singla et al., 2020; liu et al., 2019; baumgartner et al., 2018; chang et al., 2019). the main advantage of using inns for counterfactuals is that the generative function is perfectly aligned with the forward function, as an analytic inverse exists. concepts represent abstract properties, which can be used to explain a model. for example, the classification of an image as "zebra" could be explained by a pronounced similarity to the "stripe" concept. this similarity is determined by the dot product of the network's internal activations with a concept vector. tcav (kim et al., 2018) required manually defined concepts. recent works proposed to discover concepts automatically (ghorbani et al., 2019; zhang et al., 2021).
figure 2: the left panel depicts the main difference between peeky and stretchy: the legs' position. while peeky shows one pair of legs moved inwards, stretchy's legs are moved outwards.
two4two offers several attributes: animal color, background color, the shape of the blocks, and the animal's body posture, all of which can be controlled and biased separately.
user studies for interpretability. previous work on the task of bias discovery has mainly evaluated saliency maps and used datasets with a single, simple bias, e.g. the background (adebayo et al., 2020a; ribeiro et al., 2016a) or image watermarks (kim et al., 2018). user studies for concept-based methods tested only the accessibility of the explanations by asking users to assign images to a concept (zhang et al., 2021; ghorbani et al., 2019). counterfactual explanations have been evaluated by mertes et al. (2020) on a forward-prediction task. we thus believe that we are the first to extensively test counterfactual-based and concept-based explanations on bias discovery using a challenging dataset. recently, a study on example-based explanations focused on understanding internal activations of a neural network (borowski et al., 2020). it showed that for this task, examples could be more beneficial than complex feature visualizations (olah et al., 2017). similarly, there is evidence that participants often rely on model predictions rather than on explanations (alqaraawi et al., 2020; adebayo et al., 2020a).
synthetic datasets for interpretable vision. datasets with known ground-truth biases have been proposed before. bam is an artificial dataset (yang & kim, 2019) where spurious background correlations are introduced by pasting segmented objects on different textures, e.g. dogs on bamboo forests. however, the resulting images are unsuitable for user studies as they look artificial and make it easy for participants to suspect that the background is important. additionally, it would be difficult to introduce more than one bias, a limitation that the synthetic dataset in (chen et al., 2018) also shares. the dataset created by arras et al.
(2021) evaluates saliency methods on a visual question answering task. two4two is the first dataset designed explicitly for human subject evaluations. to the best of our knowledge, we provide the first unified approach to evaluate interpretable vision on a bias-discovery task.
two4two: datasets with known feature importance
dataset description. datasets generated with two4two consist of two abstract animal classes, called peeky and stretchy. both consist of eight blocks: four for the spine and four for the legs. for both animals, one pair of legs is always at an extended position. the other pair moves parallel to the spine, inward and outward. the attribute legs' position, a scalar in [0, 1], controls this position. at a value of 0.5, the pair of legs is at the same vertical position as the last block of the spine. peekies have a legs' position ≤ 0.52, which means the legs are moved mostly inwards towards the body center. stretchies, in the same fashion, have legs extended outwards, with a legs' position ≥ 0.48. we added some ambiguity to ensure a model has an incentive to use possible biases. therefore, peekies and stretchies are equally likely for a legs' position between 0.48 and 0.52. it is also difficult for humans to tell whether the legs are outwards or inwards in this range. besides the legs' position, the dataset has the following parameters: body posture (bending and three rotation angles), position, animal color (from red to blue), blocks' shape (from cubes to spheres), and background color (from red to blue). each can be changed arbitrarily and continuously (see appendix table 5). when designing the dataset, we wanted to ensure that (1) participants can become experts within a few minutes of training, (2) it allows for the creation of multiple biases that are difficult to find, and (3) that it provides a challenge for existing interpretability methods.
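the class-assignment rule can be written down directly. this is a hypothetical sketch of the labelling logic (not the released two4two generator), with the thresholds 0.48 and 0.52 taken from the text:

```python
import random

def sample_animal(rng):
    """Sample a legs' position in [0, 1] and assign a class label.

    Illustrative sketch of the labelling rule: peekies have a legs'
    position <= 0.52, stretchies >= 0.48, and inside the overlap
    [0.48, 0.52] both classes are equally likely.
    """
    pos = rng.random()
    if pos < 0.48:
        label = "peeky"
    elif pos > 0.52:
        label = "stretchy"
    else:
        # ambiguous region: the class is a coin flip
        label = rng.choice(["peeky", "stretchy"])
    return pos, label
```

outside the overlap the label is a deterministic function of the legs' position; inside it, the label carries no information, which is what gives a model an incentive to exploit the biased attributes.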
goal (1) is met as participants can be instructed using only a few examples (see the tutorial video in appendix c). the high number of controllable attributes achieves goal (2). we biased the attributes such that they do not stand out, which we validated in the first user study. goal (3) is met by spatially overlapping attributes and long-range image dependencies.
figure 3: the joint distributions of legs' position and the attributes background (left), shape (middle), and color (right). datapoints are yellow for peekies and blue for stretchies. the background is not biased. the shape is biased for legs' positions lower than 0.45 or greater than 0.55, but is uniform in the center. the color contains additional predictive information about the target class, as it allows discriminating between peeky and stretchy where the legs' positions overlap. however, for more extreme legs' positions the color is uniform and not biased.
spatially overlapping attributes, like color and shape, directly challenge saliency map explanations. long-range image dependencies, like the legs' positions relative to the spine, cannot be explained when analyzing patches separately as done in (chen et al., 2019; brendel & bethge, 2018). both properties are common in real-world datasets: for example, race and gender in facial datasets are encoded by spatially overlapping features. long-range image dependencies are particularly relevant for pose estimation and visual reasoning (johnson et al., 2017).
introducing biases. for our studies' dataset, we sampled the blocks' shape in a non-predictive, biased fashion. this means that for legs' positions that clearly showed a peeky ([0, 0.45]), most blocks were rather cubic, while for legs' positions that clearly showed a stretchy ([0.55, 1]), most blocks were rather round. however, for legs' positions between [0.45, 0.55] the blocks' shape was uniformly distributed.
in particular, in the even narrower interval [0.48, 0.52], where a classifier can only be as good as random guessing, the blocks' shape does not provide any additional information about the target class. in figure 3, we show the joint distribution of the blocks' shape and legs' position. we sampled the animals' color to be predictive of the target class. in the small interval where the legs overlap, [0.48, 0.52], we distributed the animal color to provide additional class information. stretchies were more likely to be red, and peekies were more likely to be blue. outside of this centered interval, the color gradually became uniformly distributed (see figure 3). hence, color was more equally distributed than the shape, making the color bias harder to detect visually. the remaining attributes, background color and body posture, were sampled independently of the class, and we expected our model to ignore them.
measuring ground-truth feature importance. even if a dataset contains biases, it is unclear how relevant they will be to a neural network after training. feature importance also depends on the network architecture, the optimization process, and even the weight initialization. as two4two allows us to change any parameter in isolation, we can directly compare the model prediction between two images that differ in only one parameter. for these image pairs, we measured both the median absolute logit change and for how many samples the predicted class was flipped. both measures quantify how influential each parameter is (see table 1). as expected, the legs' position had a strong influence on the prediction. the model relied more on animal color than on the blocks' shape, which is expected as the color contains additional information about the class. surprisingly, the prediction flip for unrelated attributes such as background was only slightly lower than for blocks' shape.
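both measures can be computed directly from paired logits. a minimal sketch, assuming the model outputs a single binary logit whose sign encodes the predicted class:

```python
import statistics

def feature_importance(logits_before, logits_after):
    """Summarize how changing a single generating factor moves the model.

    For image pairs that differ in exactly one parameter, count how often
    the predicted class (the sign of the logit) flips, and take the
    median absolute logit change.
    """
    pairs = list(zip(logits_before, logits_after))
    flips = sum(1 for a, b in pairs if (a >= 0) != (b >= 0))
    flip_rate = flips / len(pairs)
    median_change = statistics.median(abs(b - a) for a, b in pairs)
    return flip_rate, median_change
```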
to analyze this further, we calculated a linear fit from each parameter change to the logit change. we reported the coefficient of determination r2, which indicates how much of the variance in the prediction can be explained linearly by the analyzed property. while the unrelated properties sometimes flip a prediction, the direction of that flip is random (r2 ≈ 0). in contrast, the biased parameters influence predictions in a directed fashion, with animal color (r2 = 0.751) being clearly more directed than blocks' shape (r2 = 0.307).
table 1: importance of the data-generating factors to the model's prediction. prediction flip quantifies how often the model's prediction changes sign when changing the attribute. the median logit change reports the median of the absolute change in logit values. the r2 score is calculated on an ordinary least squares fit from the changes of each factor to the changes in the model's logit. for more attributes, see appendix table 3. (columns: factor, prediction flip [%], median logit change; rows: legs' position, color, shape, background, rotation yaw, bending.)
model and evaluated methods
as discussed in section 3, two4two was designed to challenge existing interpretability methods, e.g., saliency map explanations and patch-based models. we selected two methods that might provide the user with the necessary information: counterfactuals generated with an invertible neural network (inn) and concept-based explanations (zhang et al., 2021).
inn counterfactuals. we trained an inn using both a supervised and an unsupervised objective (dinh et al., 2016; 2015). to predict the target class, the model first applies the forward function ϕ to map a data point x to a feature vector z = ϕ(x). then, a linear classifier takes those features z and predicts the logit score f(x) = wᵀz + b. any input can be reconstructed from the feature vector by applying the inverse function x = ϕ⁻¹(z). the model has a test accuracy of 96.7%. further details can be found in appendix a.2.
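because the classification head is linear and ϕ is exactly invertible, stepping by αw in feature space changes the logit by exactly αwᵀw. a toy numeric sketch, with a 2-d rotation standing in for the inn and hypothetical head weights w, b:

```python
import math

# Toy stand-in for the INN: a 2-d rotation (orthogonal, so its exact
# inverse is the transpose). w and b are hypothetical head weights.
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
w, b = [0.8, -0.3], 0.1

def phi(x):      # forward feature map z = phi(x)
    return [c * x[0] - s * x[1], s * x[0] + c * x[1]]

def phi_inv(z):  # analytic inverse: rotate back
    return [c * z[0] + s * z[1], -s * z[0] + c * z[1]]

def logit(x):
    z = phi(x)
    return w[0] * z[0] + w[1] * z[1] + b

def counterfactual(x, alpha):
    """Step along the classifier weights in z-space and invert back."""
    z = phi(x)
    return phi_inv([z[0] + alpha * w[0], z[1] + alpha * w[1]])
```

the same algebra holds for a real inn; only the (much more expensive) forward and inverse maps change.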
the baseline and concept techniques are also applied to this model. to create a counterfactual example x̃ for a data point x, we can exploit the linearity of the classifier. moving along the weight vector w, i.e., adding w to the features z, changes the model's prediction. by controlling the step size with a scalar α, we can directly quantify the change in the logit value Δy = αwᵀw. the modified feature vector z + αw can be inverted back to the input domain, resulting in a counterfactual x̃ = ϕ⁻¹(z + αw) which visualizes the changes introduced by a step αw in z-space. the inn's explanations are visualized in a grid where each row shows a single counterfactual interpolation (see figure 1b).
automatically-discovered concepts. we adapted the nmf approach of zhang et al. (2021) to our specific network architecture. because the network's internal representations also contain negative values, we used matrix factorization instead of nmf. we generated the concepts using layer 342 (from a total of 641 layers). the layer has a feature map resolution of 8x8. this choice represents a trade-off between enough spatial resolution and high-level information. we ran the matrix factorization with 10 components and selected the five components that correlated most with the logit score (r in the range [0.21, 0.34]). our presentation of concept-based explanations was very similar to (zhang et al., 2021): we visualized concepts with five exemplary images per row and highlighted regions corresponding to a concept. since our classifier is binary, a negative contribution for stretchy actually means a positive contribution for peeky. hence, we could have characterized a concept as more peeky or more stretchy, to make the design similar to the other two explanation techniques.
however, as the concepts did not strongly correlate with the model's output, presenting them as class-related could confuse participants: a more peeky column would have contained some images showing stretchies and vice versa. thus, we presented them separately in two consecutive rows (see figure 1c). presenting concepts in this fashion gives them a fair chance in the study because participants rated the relevance of each attribute for the model rather than for each class separately.
human subject study
we share the view of doshi-velez & kim (2017) and wortman vaughan & wallach (2020) that user-testing of explanation techniques is a crucial but challenging endeavor. as our second main contribution, we propose and conduct a user study based on the two4two dataset which can act as a blueprint for future investigations. our design was iterated on in over ten pilot studies and proposes solutions to common problems that arise when evaluating explanation techniques on crowd-sourcing platforms with lay participants.
design considerations
data without prior domain knowledge. we specifically designed the two4two dataset to avoid overburdening participants, as might be the case with other types of data. within a few minutes, participants can easily become domain experts. since the data is unknown to them prior to the study, we avoid introducing any prior domain knowledge as a confounding factor, which can be an issue (alqaraawi et al., 2020).
manageable but not oversimplified tasks. we propose the task of bias discovery: participants had to rate features as either relevant or irrelevant to a model. the task directly reflects users' perception of feature importance. furthermore, bias discovery has the advantage of being suitable for lay participants. at the same time, it is also grounded in the model's behavior. this is an advantage over tasks used in several previous studies, which only evaluated whether explanations were accessible to users, e.g.
by identifying the target property "smiling" using image interpolations (singla et al., 2020) or assigning images to a concept class (zhang et al., 2021; ghorbani et al., 2019). however, these tasks are an oversimplification and cannot measure any insights the users gained about the model. in contrast, alqaraawi et al. (2020) employed the task of forward prediction of a neural network. this requires substantial model understanding and is very challenging, as reflected by the participants' low accuracy. assessing trust in a human-in-the-loop task, despite its realistic setting, has the disadvantage that trust is influenced by many factors which are difficult to control for (lee & see, 2004; springer et al., 2017). another approach is to ask participants to assess whether a model is fit for deployment (ribeiro et al., 2016b; adebayo et al., 2020b). however, in our own pilot studies, users deemed a model fit for deployment even if they knew it was biased.
baseline explanation technique. to quantify whether an explanation is beneficial for users, it must be compared to an alternative explanation. in this work, we argue that a very simple and reasonable alternative for users is to inspect the model's logits assigned to a set of input images. we designed such a baseline explanation as shown in figure 1a. after several design iterations, we settled on a visually dense image grid with 5 columns sorted by the logit score, each column covering 20% of the logit values. the columns were labeled very certain for peeky/stretchy, certain for peeky/stretchy, and unsure. pilot studies showed that participants' attention is limited. we thus decided to display a total of 50 images, i.e. an image grid of 10 rows. the number of images was held constant between explanation techniques to ensure the same amount of visual information and a fair comparison. in this work, we focused on binary classifications.
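one reading of this layout can be expressed in code. this sketch assumes each column spans 20% of the observed logit range (equal-width bins) and that negative logits correspond to peeky; both are assumptions of the sketch, not details stated in the text:

```python
def baseline_columns(logits):
    """Group sample indices into the five-column baseline grid.

    Each column covers 20% of the observed logit range, sorted from
    "very certain peeky" to "very certain stretchy". Assumes the logits
    are not all identical.
    """
    labels = ["very certain peeky", "certain peeky", "unsure",
              "certain stretchy", "very certain stretchy"]
    lo, hi = min(logits), max(logits)
    width = (hi - lo) / 5
    columns = {name: [] for name in labels}
    for i, y in enumerate(logits):
        bin_idx = min(int((y - lo) / width), 4)  # clamp the maximum logit
        columns[labels[bin_idx]].append(i)
    return columns
```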
for a multi-class setting, one could adapt the baseline by contrasting one class versus another.
high response quality. we took extensive measures to ensure participants understood their task and the explanation techniques. participants were required to watch three professionally-spoken tutorial videos, each under four minutes long. the videos explained, on a high level, the two4two dataset, machine learning, and how to use an assigned explanation technique to discover relevant features. to avoid influencing participants, we prototyped idealized explanations using images from two4two. these explanations showed different biases than those in the study. each video was followed by a written summary and a set of multiple-choice comprehension questions. after failing such a test once, participants could study the video and summary again. when failing a test for a second time, participants were excluded from the study. we also excluded participants if their written answers reflected a serious misunderstanding of the task, indicated by very short answers copied for all attributes or reasoning that is very different from the tutorial. we recruited participants from prolific who are fluent in english, hold an academic degree, and have an approval rate of ≥ 90%. to ensure they are also motivated, we compensated them with an average hourly pay of £11.45, which included a bonus of £0.40 per correct answer.
experimental design. we conducted two online user studies. before starting the data collection, we formulated our hypotheses, chose appropriate statistical tests, and pre-registered our studies (see appendix d). this way, we follow the gold standard of defining the statistical analysis before the data collection, thus ensuring that our statistical results are reliable (cockburn et al., 2018). the first study (n=50) analyzed whether the task was challenging enough that other methods could potentially improve over the baseline.
we tested if at least one bias in our model (either the animal's color or the blocks' shape) was difficult to find using the baseline technique. consequently, we used a within-subject design.
table 2: the mean accuracy for each attribute by condition. ncollected gives the number of participants collected and nfiltered the number of participants remaining after filtering. stars mark statistical significance. (columns: condition, ncollected, nfiltered, overall, legs, color, backgr., shape, posture; rows: study 1 (baseline); study 2: inn, baseline, concepts.)
in the second study (n=240), we evaluated the two explanation techniques described in section 4 against the baseline using a between-subjects design. participants were randomly but evenly assigned to one of the explanation techniques. we specified two directed hypotheses. we expected participants in the inn condition to perform better than those in the baseline, because the baseline does not clearly highlight relevant features, whereas interpolations highlight features in isolation. we expected participants viewing concepts to perform worse than those in the baseline, due to their inability to highlight spatially overlapping features. for both studies, participants completed a tutorial phase first. using their assigned explanations, they then assessed the relevance of five attributes: legs' position relative to the spine, animal color, background, rotation or bending, and blocks' shape. the questions were formulated as: "how relevant is <attribute> for the system?", and participants had to choose between irrelevant or relevant. the percentage of correct answers (accuracy) served as our primary metric. participants also had to write a short, full-sentence justification for their answers. for links to the study, see appendix c.
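the primary metric is simple to state precisely. a minimal sketch, with hypothetical attribute names, scoring one participant's relevance ratings against the model's actual behavior:

```python
def attribute_accuracy(answers, ground_truth):
    """Fraction of attributes a participant rated correctly.

    `answers` and `ground_truth` map attribute names to True (relevant)
    or False (irrelevant). Returns per-attribute correctness and the
    overall accuracy, the study's primary metric.
    """
    correct = {a: answers[a] == ground_truth[a] for a in ground_truth}
    accuracy = sum(correct.values()) / len(correct)
    return correct, accuracy
```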
results
data exclusions. as stated in the preregistration, we automatically excluded all participants who withdrew their consent, failed one of the comprehension questions twice, skipped a video, or exceeded prolific's time limit for completion. if a participant was excluded, a new participant's place was made available until the pre-registered number of completed responses was reached. we excluded 63 study respondents from the first study, and 145 from the second study in this fashion. we ensured that all participants were naive about the dataset. once they participated in a study, they were blacklisted for future studies.
figure 4: the proportion of correct answers for baseline (base), concepts (con), and inn.
for completed studies, two annotators independently marked the participants' written answers and excluded those with copy-and-paste answers or indications of grave misunderstandings of the instructions. participants were labeled as: include, unsure, or exclude. both annotators had an agreement of κ = 0.545 for the first study and κ = 0.643 for the second (measured include vs. unsure and exclude). disagreements were resolved by discussion. in total, we excluded 7 participants from the first study (14%) and 48 participants from the second study (20%).
first study. for the accepted 43 participants, we used two-sided exact mcnemar tests on their answers about the relevance of the legs' position compared with animal color (first test) and blocks' shape (second test). participants found the color bias less often than the legs' positions (p < 0.0001). the success rate for the color attribute was 49% vs. 86% for legs. the shape bias was not significantly harder to find than the legs' positions and was identified correctly with 74% accuracy (p = 0.3036). hence, we confirmed our hypothesis and concluded that other methods still have room for improvement over the baseline.
second study. in the second study, we evaluated 192 valid participant responses (62 inn, 71 base, 59 con).
we expected the data to deviate from the normal distribution, and a shapiro-wilk test for all conditions confirmed this (p < 0.001). we depict the number of correct answers per condition in figure 4. a kruskal-wallis test showed significant differences in accuracy scores between conditions (p < 0.001). for focused comparisons, we used two wilcoxon rank-sum tests with bonferroni correction for multiple comparisons. the accuracy scores differed significantly between the baseline and concept conditions (p < 0.001, r = 0.778). the performance of participants using concepts was rather poor, with only 31.7% accuracy, considering that random answers would yield a score of 50%. for concepts, not a single attribute surpassed the 50% barrier. we found no significant difference when comparing the baseline and counterfactuals (p = 0.441, r = 0.091). their mean accuracies are close, with 80.8% for the baseline and 84.5% for counterfactuals. inn counterfactuals helped users to discover the main attribute, legs' position (p < 0.001), and the color bias (p = 0.033) more reliably. however, counterfactuals performed significantly worse for the background attribute (p = 0.033), while for blocks' shape and posture we found no significant difference (for both, p = 1).
qualitative results. to understand how participants integrated the explanation techniques into their reasoning, we analyzed the textual answers for each feature qualitatively. two annotators first applied open coding to the answers. they performed another pass of closed coding after agreeing on a subset of the relevant codes, on which the following analysis is based. overall, the participants perceived the task as challenging, as they expressed being unsure about their answers (n=71).
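the statistical tests used in both studies can be sketched with only the standard library. as simplifying assumptions, the rank-sum test below uses the normal approximation and ignores ties, and the bonferroni correction simply multiplies the p-value by the number of comparisons:

```python
import math
from math import comb

def exact_mcnemar(b, c):
    """Exact two-sided McNemar p-value from discordant pair counts b, c.

    Under the null hypothesis, min(b, c) follows Binomial(b + c, 0.5);
    the two-sided p-value doubles the tail probability, capped at 1.
    """
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)

def ranksum_test(x, y, n_comparisons=1):
    """Two-sided Wilcoxon rank-sum test (normal approximation, no ties)
    with Bonferroni correction; also returns effect size r = |z|/sqrt(N).
    """
    n1, n2 = len(x), len(y)
    tagged = sorted([(v, 0) for v in x] + [(v, 1) for v in y])
    r1 = sum(rank for rank, (_, grp) in enumerate(tagged, 1) if grp == 0)
    u1 = r1 - n1 * (n1 + 1) / 2
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean) / sd
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return min(1.0, p * n_comparisons), abs(z) / math.sqrt(n1 + n2)
```

for real analyses, library implementations (e.g. in scipy or statsmodels) handle ties and small-sample corrections more carefully.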
a zest of lime: towards architecture-independent model distances
hengrui jia, hongyu chen, jonas guan (university of toronto and vector institute) {nickhengrui.jia, hy.chen}@mail.utoronto.ca, jonas@cs.toronto.edu
ali shahin shamsabadi (vector institute and the alan turing institute) a.shahinshamsabadi@turing.ac.uk
nicolas papernot (university of toronto and vector institute) nicolas.papernot@utoronto.ca
abstract
definitions of the distance between two machine learning models either characterize the similarity of the models' predictions or of their weights. while similarity of weights is attractive because it implies similarity of predictions in the limit, it suffers from being inapplicable to comparing models with different architectures. on the other hand, the similarity of predictions is broadly applicable but depends heavily on the choice of model inputs during comparison. in this paper, we instead propose to compute distance between black-box models by comparing their local interpretable model-agnostic explanations (lime). to compare two models, we take a reference dataset, and locally approximate the models on each reference point with linear models trained by lime. we then compute the cosine distance between the concatenated weights of the linear models. this yields an approach that is both architecture-independent and possesses the benefits of comparing models in weight space. we empirically show that our method, which we call zest, helps in several tasks that require measurements of model similarity: verifying machine unlearning, and detecting many forms of model reuse, such as model stealing, knowledge distillation, and transfer learning.
introduction
we explore the problem of quantifying similarity between machine learning (ml) models with distance metrics, with a focus on deep neural networks (dnns). comparing the functional behavior of ml models is often challenging because they are not easily inspectable (e.g.
located in the cloud). in addition, ml models that capture similar knowledge may not share similar architectures, and vice versa, making it difficult to compare models in weight space. finally, ml models that solve different tasks may nonetheless share similar knowledge, such as in the case of transfer learning (li et al., 2021). this is emphasized in dnns, which are not only complex, often containing millions of parameters, but also difficult to interpret (ribeiro et al., 2016). previous methods for measuring model distances use either the similarity of predictions (li et al., 2021) or the similarity of weights (jia et al., 2021). however, methods that measure the similarity of predictions only capture the local behavior of models at each data point, and it is hard to choose a set of inputs to compare their global behaviors. this causes high variance in the results: models that are unrelated may happen to give very similar predictions on some inputs, while related models (e.g., fine-tuned from one another) may give different predictions on other inputs. methods that measure the similarity of weights are more complete, because they capture the global behavior of models, and weight similarity implies prediction similarity in the limit: as the weights of the compared models approach each other, so will their predictions. but, to be meaningful, weight-space methods can only be applied when the compared models share the same architecture. furthermore, they require white-box access to the models to compare weights. these constraints restrict their use in practice.
code is available at: https://github.com/cleverhans-lab/zest-model-distance
instead, we propose to compute distances between models by first approximating their global behavior through a collection of linear models. we name our approach zest. we start by applying the local interpretable model-agnostic explanations (lime) algorithm of ribeiro et al.
(2016) to generate a collection of linear models that each approximates some local behavior of a compared model. we chose lime because it is black-box (only requires access to the models’ predictions) and architecture-independent, and as a consequence so is zest. the collection of these local approximations form a global approximation of model behavior. since the approximation models obtained are linear, they are straightforward to compare. indeed, to measure similarities and differences between two models being compared, we compute the cosine distance between the concatenated weights of their respective collections of approximated linear models. because we compare local linear approximations of the models rather than their predictions directly, this allows zest to interpolate between the benefits of prior approaches in the weight or prediction spaces described earlier. in particular, our empirical validation of zest demonstrates that our approach is not sensitive to the choice of points used by lime to locally approximate the models being compared. this departs from prior approaches that compared model predictions directly and as a result were overly sensitive to the choice of points the comparison is made on. this characteristic allows zest to correctly capture the differences and similarities between pairs of unrelated or related classifiers. furthermore, we show that the distance computed by zest finds natural applications in detecting model reuse (e.g., model stealing, knowledge distillation, and transfer learning) and verifying machine unlearning. li et al. (2021) first proposed to introduce a distance to inform the detection of classifiers which were trained by reusing another classifier. in particular, this is the case in model extraction where an adversary observes the predictions of a model to reproduce its behavior in a stolen copy of the model. whereas li et al. 
(2021) left the identification of stolen copies of a model as future work after failing to apply their proposed modeldiff metric to this problem, we find it can be solved by zest. in a completely different setting, we also find an application of zest to machine unlearning: the process of removing the effects of learning data points from a model (cao & yang, 2015). we use zest as a heuristic to inform whether or not a user's data has been unlearned, by comparing the decision boundaries of the models before and after unlearning around that data. our main contributions are as follows:
• we propose zest, an approach for computing an architecture-independent distance metric that measures the similarity between ml models. zest only requires black-box access, and compares the global behavior of the models, which it approximates using lime.
• we validate that zest measures model similarity by showing that the distance between different epochs of the same model is on average 99.27(±0.19)% on cifar-10, 98.75(±0.12)% on cifar-100, 92.68(±2.22)% on ag news, and 81.17(±3.01)% on speech commands closer than the distance between two models with different initialization trained on the same dataset. we run our experiments in the vision, text, and audio domains using the resnet18, resnet20, resnet50, lstm, m5, and mobilenet-v2 architectures, on the cifar-10, cifar-100, ag news, speech commands, imagenet, flower102, and sdog120 datasets.
• we show that zest can be used to help detect model reuse, including the hard case of model stealing, and inform unlearning verification. the distance between a model stolen via model extraction and the original model is 19.96(±4.28)% on cifar-10 and 54.87(±1.70)% on cifar-100 closer on average than the distance between the original and a model retrained on the same datasets. compared to modeldiff, which had a 0% accuracy at detecting model extraction, we have 100% accuracy for the attacks in the modelreuse benchmark (li et al., 2021).
we further show that zest helps identify all other methods of model reuse in the benchmark. to inform unlearning verification, we show that the distance from a model to its retrained counterpart without the unlearned data is on average 144.56(±15.39)% on cifar-10 and 140.51(±9.10)% on cifar-100 further than the distance to a retrained model that did not exclude the data to unlearn. background and problem statement: defining model distance we aim to compute a distance between two ml models c1 and c2 to characterize their similarity, given only the ability to observe the models’ predictions on chosen inputs. we make no assumptions on the models’ architectures, except that they have the same input dimensions. in other words, we only assume that they model similar distributions (e.g. traffic signs), but the compared models can be trained for different tasks (e.g. classifying german and american traffic signs respectively). we denote the output spaces of the two models d1 and d2. we focus on dnns, due to their popularity and the difficulty of interpreting their behavior, but our distance metric applies to other ml models. for ease of exposition, we use the classification of images as a sample task throughout our discussion in §2 and §3. we also evaluate zest on the text and audio domains and provide our results in §4. past work on measuring model similarity has either defined distance by comparing model weights or by comparing model predictions. we briefly overview both approaches and their current limitations, then give an outline of the lime algorithm and how it can be applied to address these limitations. comparing model weights. the weights of a parameterized model define its behavior: in the limit, having close weights implies that two models will have close behavior. thus, one way to compare the similarity of models is to compare the difference in their weights. one example of weight-space comparison is proof-of-learning: jia et al.
(2021) use the comparison of model weights throughout the training procedure to provide evidence that a party has indeed trained a model. this is useful for ownership resolution. by saving checkpoints of the model during training, and showing that the model weights between subsequent checkpoints are close, the owner can provide a history that proves they are the party who trained the model. however, comparing model weights is highly restrictive in practice: it requires both access to model weights, and for compared models to share the same architecture, so that the distance between each corresponding weight value can be compared. even if models share the same architecture, it can be seen that two models whose weights are permutations of one another on a single layer would make exactly the same predictions yet have a non-zero distance in weight space. comparing model predictions. on the other hand, a model’s behavior can also be defined by its predictions on inputs. the advantage of comparing model similarity with predictions is that we can treat the models as black-boxes. because we only need to observe input-output pairs, we do not need access to model weights and are not restricted by the architecture of the models being compared. in theory, if we can query a model with every input in the domain and get its predictions, we can learn its exact functional behavior, but this is intractable in practice. to address this, current approaches resort to selecting smaller sets of inputs as reference, and comparing the models based on their predictions on the reference. the disadvantage of these approaches is that (a) they can only capture point-wise behavior of the models at the chosen inputs, and (b) choosing a representative set of inputs is hard. 
if the chosen inputs are far from decision boundaries, then even unrelated well-trained dnns give similar predictions; whereas inputs near decision boundaries are sensitive to small changes of the boundaries, so related models (e.g. a model and its transferred copy) may predict differently. this is consistent with findings in deep learning testing, a class of methods that aims to identify areas of the input space where supposedly similar models disagree with each other on the output space. diffchaser (xie et al., 2019), deepxplore (pei et al., 2017) and tensorfuzz (odena et al., 2019) are three recent and popular testing frameworks. these are not applicable in our case because they focus on the differences between closely related models, whereas we are looking for a distance metric that is able to provide information on both the similarity and differences between any kinds of models. perhaps the closest to our work, modeldiff (li et al., 2021) is a model distance metric designed to detect model reuse, which is defined as training a model based on another to save on training costs. modeldiff feeds a set of paired test inputs to the compared models, where each pair is selected to be close in the input space but far in the output space. it then compares the total cosine similarity of the distances between the outputs of each pair to find the model distance. using this distance metric, li et al. (2021) detected transfer learning and model compression; however, they failed at the harder case of detecting model extraction and left it to future work. our approach solves this open problem, and our empirical study demonstrates that our distance metric is more precise and less sensitive to changes of the model that do not affect its performance on in-distribution data. lime. local interpretable model-agnostic explanations (lime) generates interpretable linear models to explain the local behavior of ml models (ribeiro et al., 2016).
lime learns linear models by segmenting the input space, then systematically masking segments to identify the features that are most influential to the classification of a data point. for image classification tasks, this means masking contiguous areas of pixels. using the most influential features, lime learns a linear model that approximates the decision boundary of the compared model near the data point. figure 1: an overview of the zest approach to computing the distance between two black-box classifiers c1 and c2. phase 1 randomly samples several reference images. phase 2 segments the reference images into components and their location masks using a segmentation model. phase 3 approximates the local decision boundaries around each reference image by training a lime model that takes mask locations as input and predicts the classifier’s response on these images’ components. finally, phase 4 combines the local decision boundaries of all reference points to approximate the global decision boundaries of c1 and c2, and computes their cosine distance. intuition for comparing lime approximations in weight space. from the perspective of the decision boundaries, the weights of a dnn capture the entire boundaries, just like how x^2 + y^2 = 1 captures a unit circle. this leads to high precision but restricts generality. for example, it is hard to define how similar x^2 + y^2 = 1 is to max(x, y) = 1. on the other hand, each prediction only reveals one specific mapping from the input to the output space, and multiple predictions are needed to identify even a tiny region of the decision boundaries. if we draw an analogy to the cartesian coordinate system, comparing model predictions is like comparing points in the coordinate system: the method is more general, but both a unit circle and a unit square can contain points like [0, 1]. the method will not be precise if we compare such points and conclude that the circle is similar to the square.
at a high level, we are dealing with the trade-off between precision and generality. comparing predictions gives us general applicability, whereas comparing weights gives us precision. thus, to interpolate between these two extremes, we choose to compare linear models that locally approximate the models being compared. using the same analogy, comparing the lime approximations of two dnns is similar to comparing lines that are tangent to the shape, instead of the formula of the shape (comparing weights) or points on the shape (comparing predictions). we choose the lime algorithm for generating local approximations because it is architecture-independent, only requires black-box access, and is simple to understand. § 3 contains a more formal description of how we apply lime. method: measuring distances between models with zest building on the insights of § 2, we now introduce an approach that is able to interpolate between the benefits of weight-space and prediction-space comparisons of two classifiers. we propose zest, a model distance metric that measures the similarity of two models c1 and c2 by comparing their global behaviour as approximated by lime (see figure 1 for an overview). algorithm 1 describes our proposed model distance, which consists of four sequential phases: 1) sampling a reference dataset; 2) sampling nearby images as done in lime; 3) learning the behaviour of c1 and c2 locally around the reference data points with linear classifiers, as done in lime; 4) approximating a global perspective of c1 and c2 by combining these linear models, and computing the distance between the combinations of linear models corresponding to c1 and c2. next, we describe each phase of our proposed model distance in detail. sampling representative reference data points. we randomly sample n reference images x = {x_i}_{i=1}^n from the task distribution. note that n is much smaller than the size of the training set of the classifiers being compared.
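the tangent-line intuition above can be made concrete with a toy 1-d sketch (not zest itself): two hypothetical "models" are compared via the cosine distance between their local slopes at a few reference points. the functions, reference points, and thresholds here are all invented for illustration:

```python
import math

def local_slope(f, x, eps=1e-4):
    # central-difference slope: a 1-d stand-in for a lime-style local
    # linear approximation of a model around one reference point.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

def cosine_distance(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

refs = [0.5, 1.0, 2.0]                              # hypothetical reference points

f = lambda x: x * x                                 # "reference model" boundary
g = lambda x: x * x + 0.001 * math.sin(50 * x)      # related: tiny perturbation of f
h = lambda x: -x                                    # unrelated model

# concatenated "signatures": one slope per reference point
sig_f = [local_slope(f, x) for x in refs]
sig_g = [local_slope(g, x) for x in refs]
sig_h = [local_slope(h, x) for x in refs]

d_related = cosine_distance(sig_f, sig_g)
d_unrelated = cosine_distance(sig_f, sig_h)
```

the related model ends up far closer in this "signature space" than the unrelated one, while only black-box evaluations of each function were used.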
for example, we select n = 128 reference images from the 50,000 training images of cifar-10. in § 4.3, we show empirically that the choice of n does not qualitatively change the outcome of our model distance metric computations, unlike the previous approaches described in § 2 that directly compare the predictions of two classifiers. increasing n simply reduces the variance of our metric, which is intuitive because the comparison is made on more points sampled from the underlying data distribution; reference points are thus more likely to cover the same local regions of the classifiers’ decision surfaces as n increases. learning the local behaviours. while the classifiers being compared can be non-linear (e.g., deep neural networks in our case), the intuition behind lime is that their behavior can be approximated locally by a linear model. we learn the local behaviour of each of the two classifiers being compared around each reference image by training a lime linear regression model that captures the classifier’s responses to the image’s components (i.e. contiguous patches of pixels, each denoted a super-pixel). similar to the original lime paper, we create and sample image components as follows. first, we group the pixels of a reference image into k super-pixels using quickshift (vedaldi & soatto, 2008) and create binary super-pixel masks b = {b_k}_{k=1}^k, where each binary mask b_k ∈ {0, 1}^{w×h} specifies the locations of the pixels belonging to the k-th super-pixel. these super-pixels each represent a contiguous patch of similar pixels, like a patch of fur or grass, whose presence or absence is controlled by the binary mask. a linear model on these super-pixels provides an interpretable approximation of the local decision boundary. for example, if the classifier predicts “dog” when the patch of fur is present, but changes its prediction when it is not, then we can infer that the patch of fur greatly influences its prediction.
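the masking step can be sketched as follows; since quickshift and real images are beyond a short example, fixed image quadrants stand in for the learned super-pixels (the grid size and mask layout are assumptions):

```python
W, H, K = 6, 6, 4  # toy 6x6 "image" split into 4 quadrant "super-pixels"
image = [[(x + y) % 7 for x in range(W)] for y in range(H)]

def quadrant_masks(w, h):
    # stand-in for quickshift: four fixed quadrants instead of learned segments
    masks = []
    for qy in (0, h // 2):
        for qx in (0, w // 2):
            masks.append([[1 if qx <= x < qx + w // 2 and qy <= y < qy + h // 2
                           else 0 for x in range(w)] for y in range(h)])
    return masks

B = quadrant_masks(W, H)   # binary masks b_k in {0,1}^{W x H}

def component(img, chosen):
    # x_hat = x * m, with m the union (logical OR) of the chosen masks
    m = [[max(B[k][y][x] for k in chosen) for x in range(W)] for y in range(H)]
    return [[img[y][x] * m[y][x] for x in range(W)] for y in range(H)]

x_hat = component(image, [0, 3])   # keep top-left and bottom-right quadrants
```

pixels outside the chosen super-pixels are zeroed out, which is how the presence or absence of each patch is controlled when probing the classifier.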
then, we create the reference image’s component x̂ = x · m by randomly choosing several super-pixels, identified by 1s in a component mask m ∈ {0, 1}^{w×h} that is the union of their corresponding binary masks. in order to explore the local area around each reference image, we repeat this process and draw l different components x̂ = {x̂_l}_{l=1}^l using l different component masks m = {m_l}_{l=1}^l. following the original lime paper, we set l = 1000. we query the classifiers to obtain their predictions on these l image components as z1 = c1(x̂) and z2 = c2(x̂). then, we train a linear regression model i : m → z per classifier to have high local fidelity with respect to the classifier around the reference image x. we call the weights of the trained regression model the local signature of its corresponding classifier. in total, we learn n linear regression models, capturing the behaviour of the classifier locally around the n reference images. approximating a global perspective. we approximate the global behaviour of each classifier by concatenating its n local signatures, s1 = {w_n}_{n=1}^n. note that the local signatures of each classifier have as many rows as the number of classes of the classifier. in order to compare signatures of classifiers with different numbers of classes, when the relationship between the classes output by the two classifiers is unknown, we introduce class alignment to re-position and reshape the rows of the local signatures. let the small and big local signatures be the local signatures of the classifiers with the lower and higher number of classes, respectively. in class alignment, we start with the first row of the small local signature and compute its distance to each row in the big local signature. then, we move the row corresponding to the minimum distance to the first row of the big local signature. we repeat this for all other rows in the small local signature while removing already-aligned rows of the big local signature from this process.
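the greedy class-alignment step described above can be sketched like this; the toy signatures, their dimensions, and the tie-breaking are invented, and the paper may measure row distance differently:

```python
import math

def cos_dist(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nrm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (nrm + 1e-12)

def align_rows(small, big):
    # greedy class alignment: for each row of the smaller signature, grab the
    # closest still-unused row of the bigger one; leftover rows are dropped.
    remaining = list(range(len(big)))
    aligned = []
    for row in small:
        j = min(remaining, key=lambda i: cos_dist(row, big[i]))
        aligned.append(big[j])
        remaining.remove(j)
    return aligned

# toy signatures: a 2-class model vs a 3-class model, 4 lime weights per class
small = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
big   = [[0.0, 0.9, 0.1, 0.0], [0.0, 0.0, 0.0, 1.0], [0.9, 0.1, 0.0, 0.0]]
aligned = align_rows(small, big)

# after alignment, flatten both signatures and compare them directly
flat_small = [w for row in small for w in row]
flat_big = [w for row in aligned for w in row]
d = cos_dist(flat_small, flat_big)
```

after alignment the signatures have matching shapes, so a single cosine distance over their concatenated weights is well defined.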
finally, we remove any remaining rows of the big local signature not selected previously. see appendix b for further details. computing the distance. as the global signatures of the classifiers are now the same size, we compute the model distance as d = distance(s1, s2). in appendix g, we show that the choice of distance(·, ·) does not affect the performance of zest. our experimental results in the remainder of the paper are thus presented with the cosine distance being used to compare global signatures.
algorithm 1: proposed model distance approach, zest
input: black-box d1-class classifier c1, black-box d2-class classifier c2, reference image set x = {x_i}_{i=1}^n, quickshift segmentation algorithm, random-selector, linear regression training algorithm t(·, ·) and distance distance(·, ·)
output: distance, d, between black-box classifiers c1 and c2
1: s1, s2 = {}, {}                                ▷ initialize model signatures
2: for x ∈ x do
3:   b = {b_k}_{k=1}^k = quickshift(x)            ▷ generate k super-pixel masks
4:   m = {m_l}_{l=1}^l = random-selector(b)       ▷ generate l component masks
5:   x̂ = {x · m_l}_{l=1}^l                        ▷ generate l image components
6:   z1, z2 = c1(x̂), c2(x̂)                        ▷ query classifiers
7:   w1, w2 = t(m, z1), t(m, z2)                  ▷ train local regression models
8:   if the relation between d1 and d2 is unknown, align rows via arg min over j ∈ d2   ▷ class alignment
9:   s1.append(w1), s2.append(w2)                 ▷ create model signatures
10: d = distance(s1, s2)                          ▷ compute model distance
validation. implementation details. we validate our proposed method, zest, in the vision, text, and audio domains using four publicly available classifiers and four datasets: resnet20 trained on cifar-10, resnet50 trained on cifar-100, an lstm trained on ag news, and an m5 network trained on speech commands. the accuracy of resnet20 on cifar-10 is 91.77(±0.17)%, 76.17(±0.22)% for resnet50 on cifar-100, 90.27(±0.52)% for the lstm on ag news, and 87.67(±0.86)% for the m5 network on speech commands. figure 2 (a: cifar-10): the distance computed by zest for unrelated classifiers (c_ref^(t) versus c_unrelated) and related classifiers (c_ref^(t) versus c_ref^(t+k)) as a function of training epoch, t. we consider k = 5 for the related classifier; see appendix d for the results of other k values. the distance of related classifiers is significantly smaller than the zest distance of unrelated classifiers, especially as training converges. our results in this section are averaged over 5 different training runs of the classifiers under comparison. recall from § 3 that zest consists of 4 successive phases. in phase 1 of zest, we select 128 samples randomly from the training set. in phases 1 and 2, we follow the same implementation and setup as the original lime paper. in phase 3, we train linear regression models using least-squares estimation. we use relative cosine similarity (see appendix c for details) for computing the distance between global signatures in phase 4. however, in appendix g we perform an ablation study where we compare using the ℓ1 and ℓ2 norms as alternative metrics, and show that zest is agnostic to the choice of metric used to compare global signatures. does zest capture the distance between classifiers correctly? we analyse the performance of zest in computing distances between related or unrelated classifiers. to build related and unrelated pairs, we first collect a reference classifier trained for t epochs, c_ref^(t). we consider the classifier at epoch t + k as a related classifier, c_ref^(t+k), to the reference classifier c_ref^(t). instead, we consider the same architecture trained with a different random seed as a classifier, c_unrelated, unrelated to the reference classifier. the distance between unrelated classifiers (c_ref^(t) versus c_unrelated) should be bigger than the distance between related classifiers (c_ref^(t) versus c_ref^(t+k)).
figure 2 shows the zest distance between unrelated and related classifiers, at training epochs [0, 200] of the reference classifier. overall, model distances in both the unrelated and related cases decrease as training progresses and model weights become more informative. this is intuitive given that both classifiers are trained to solve the same task. however, the distance output by zest for unrelated classifiers has a lower bound that is significantly higher than for related classifiers. for example, the distance computed by zest for unrelated resnet20 classifiers is still above 1 at the last epoch (epoch 200), while it is close to 0 for related ones with k = 5. therefore, zest can correctly distinguish related from unrelated classifiers. how sensitive is zest to the size of the reference set? we analyse the effect of the reference set on the performance of zest by studying 1) the effect of the size of the reference set on zest; and 2) the effect of stochasticity in the selection of points that make up the reference set. for both of these analyses, we consider two pairs of related classifiers, e.g. (c_ref^(100) versus c_ref^(200)). figure 3 shows the average and standard deviation of the distance computed by zest for different sizes, {1, 2, 4, 8, 16, 32, 64, 128}, of the reference set and different random samples of reference points. the confidence interval (coloured areas) is obtained by repeating the experiment 5 times to train classifiers with different seeds. we observe that the average distance computed by zest is identical regardless of the size of the reference set for both of these related pairs. however, the standard deviation of the distance computed by zest decreases as the size of the reference set increases: for instance, for the c_ref^(100) pair, the standard deviation decreases from 0.3 for a reference set of size 1 to a value close to zero as we increase the reference set size to 128.
note that this remains a small reference set relative to the size of the training set used to learn these classifiers. in the case of c_ref^(190), the classifiers are so similar in the first place that the standard deviation of the distance computed by zest is close to zero and remains similar for all of the reference set sizes. figure 3 (a: average; b: standard deviation): influence of the reference set size and sampling stochasticity on the distance computed by zest. the average zest distances of different reference sets are similar, while the standard deviation decreases as the size of the reference set increases. figure 4 (a: ag news dataset (text), lstm; b: speech commands dataset (audio), m5): evaluation of the correctness of zest in the text and audio domains: this figure is plotted in the same manner as figure 2, i.e., the distances between related and unrelated classifier pairs are plotted as a function of training epoch, t. the zest distances of related classifiers are significantly smaller than the distances of unrelated classifiers across domains. such a quick reduction in variance by increasing the number of reference points cannot be achieved by prediction comparison. in the same training setting for cifar-10 and using 128 reference points, the prediction similarity between c_ref^(100) and its related counterpart is 93.28(±1.06)%, which overlaps with the prediction similarity between 2 unrelated models: 91.52(±2.27)%. the two values become 92.27(±0.22)% and 91.59(±0.76)% respectively when we increase the reference dataset to size 1280. such a small gap and large variance make it hard to confidently claim that two models are the same, and can lead to a high false positive rate (i.e., claiming two unrelated models are related). this is the opposite of the zest distance, computed on a reference dataset that is 10 times smaller, which easily separates the distances between unrelated and related pairs, as shown in figure 2.
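the point-wise baseline being criticized here reduces to a simple agreement rate over the reference points; the label vectors below are hypothetical and only illustrate how small the related/unrelated gap can be when reference points sit far from the decision boundaries:

```python
def prediction_similarity(preds_a, preds_b):
    # fraction of reference points on which the two classifiers agree;
    # the point-wise comparison the text argues is too coarse.
    agree = sum(1 for a, b in zip(preds_a, preds_b) if a == b)
    return agree / len(preds_a)

# hypothetical labels on 10 "easy" reference points: both a related checkpoint
# and an unrelated-but-well-trained model mostly agree with the reference.
ref_model = [0, 1, 1, 2, 0, 2, 1, 0, 2, 1]
related   = [0, 1, 1, 2, 0, 2, 1, 0, 2, 1]   # e.g. a later checkpoint
unrelated = [0, 1, 1, 2, 0, 2, 1, 0, 2, 0]   # e.g. a different random seed
```

with agreement rates this close, the metric cannot separate the two cases confidently, which is exactly the failure mode zest's signature comparison avoids.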
one may think the bad performance of prediction comparison is because most data points are far away from the decision boundaries of the well-trained models, and so decide to pick data points near the decision boundaries instead. by doing so, we observed a similarity of 35.23(±2.60)% between unrelated models and 41.92(±2.82)% between related models. however, such points are sensitive to small changes in the decision boundaries. by training a model to match the predictions of the victim model (which can be thought of as a high-fidelity model extraction attack), we find the similarity between the victim and extracted models quickly drops to 32.36(±1.84)%, no different from the similarity between two unrelated models. this problem is solved by zest, as described in detail in § 5.2. zest in other domains in this section, we analyse the performance of zest in computing distances between related and unrelated model pairs in both the text and audio domains. in the text domain we used long short-term memory (lstm) classifiers (hochreiter & schmidhuber, 1997) and the ag news dataset (zhang et al., 2015) (described in appendices a.1 and a.2). in the audio domain we used the m5 speech classifier (dai et al., 2017) and the speech commands dataset (warden, 2018). similar to our vision experiments in section 4.2, related lstm (or m5) classifiers are checkpoints at two different epochs, while unrelated lstm (or m5) classifiers are trained with different random seeds. figure 4 shows that the distance output by zest for unrelated classifiers in both the text and audio domains is significantly higher than for related classifiers (consistent with our results in section 4.2 on vision datasets). this reinforces that zest is applicable across the vision, audio and text domains. case studies: applications of zest next, we evaluate the applicability of the distances between reference and suspicious classifiers computed by zest to two tasks (see table 2 in appendix a.3).
our case studies, introduced earlier, are model stealing and machine unlearning. we first introduce the relevant literature on these case studies. figure 5 (a: cifar-10): the distance between a victim classifier and its extracted copy, computed by zest, as a function of the training epochs of the latter. zest can detect the extracted copy of the victim classifier. 5.1 background on model stealing and machine unlearning model stealing. there is often a strong need to protect the confidentiality of ml models; it can be costly to collect datasets and train models, making them valuable intellectual property. model stealing refers to an attacker duplicating the functionality of a confidential model without permission, stealing intellectual property. model extraction is a class of such attacks that efficiently learns the approximate functionality of a model by observing its outputs on selected inputs (tramèr et al., 2016). this is often achieved via an exposed inference api, which is common in ml-as-a-service. model stealing can also be considered a malicious case of model reuse, where a model is used as a basis for training another model at a smaller cost (e.g., through fine-tuning). li et al. (2021) developed a distance metric, modeldiff, that detects model reuse in dnns by comparing the models’ outputs. however, while li et al. (2021) identified model reuse via transfer learning and model compression, they were unsuccessful in detecting models stolen via model extraction, and presented this as an open problem. in our case study, we show we are able to fill this gap in the literature. machine unlearning. private data is often collected for the purpose of training ml models, but legislation and privacy ethics call for the right to be forgotten. for ml models, the right to be forgotten implies that a user should have the right to remove the effects of their data on a model trained on that data.
the naive solution is to remove the user’s entry from training data, then retrain the model, but there are more efficient methods: these either decrease the time it takes to exactly/directly retrain (bourtoule et al., 2021) or find a way to approximate the retraining procedure (graves et al., 2020). after requesting data removal, the user (or third-party auditors) may be interested in verifying that the new model has indeed unlearned their data. by measuring the similarity of the decision boundary between the old and new models around the user’s data, we can use zest as a heuristic to help test if the data has been unlearned. zest detects model extraction | 7 | [
108.249, 323.7890784, 287.4222612, 333.7815662 ] |
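one plausible operationalization of the unlearning-verification heuristic from the zest row above, assuming the decision-boundary distances around the user's data have already been computed; the threshold and the example distance values are assumptions, not the paper's exact protocol:

```python
def unlearning_score(d_unlearn, d_control):
    # ratio > 1 means the unlearned model moved (around the user's data) at
    # least as far from the original model as an independent
    # retrained-without-the-data control model did.
    return d_unlearn / d_control

def looks_unlearned(d_unlearn, d_control, threshold=1.0):
    # evidence consistent with the data having been unlearned; this is a
    # heuristic sketch, not a cryptographic proof of unlearning.
    return unlearning_score(d_unlearn, d_control) >= threshold
```

for example, a hypothetical distance ratio echoing the roughly 1.44x gap reported on cifar-10 would pass this check, while a much smaller movement would not.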
-6vS_4Kfz0.pdf | 2,021 | 1 | optimizing memory placement using evolutionary graph reinforcement learning shauharda khadka ∗ intel labs estelle aflalo ∗ intel israel mattias marder ∗ intel israel avrech ben-david ∗ technion santiago miret intel labs shie mannor technion tamir hazan technion hanlin tang intel labs somdeb majumdar † intel labs abstract for deep neural network accelerators, memory movement is both energetically expensive and can bound computation. therefore, optimal mapping of tensors to memory hierarchies is critical to performance. the growing complexity of neural networks calls for automated memory mapping instead of manual heuristic approaches; yet the search space of neural network computational graphs has previously been prohibitively large. we introduce evolutionary graph reinforcement learning (egrl), a method designed for large search spaces, that combines graph neural networks, reinforcement learning, and evolutionary search. a set of fast, stateless policies guides the evolutionary search to improve its sample-efficiency. we train and validate our approach directly on the intel nnp-i chip for inference. egrl outperforms policy-gradient, evolutionary search and dynamic programming baselines on bert, resnet-101 and resnet-50. we additionally achieve 28-78% speed-up compared to the native nnp-i compiler on all three workloads. introduction | 0 | [
126.82956, 354.4216768, 205.9888518, 366.3768768 ] |
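egrl itself combines graph neural networks, reinforcement learning, and evolutionary search on real hardware; the sketch below shows only the evolutionary-search ingredient, as a minimal (1+1) loop over an invented memory-placement cost model (the latencies, tensor sizes, and capacity are all made up for illustration):

```python
import random

# toy stand-in for the memory-mapping problem: assign each tensor to one of
# three memory levels; cost = hypothetical latency penalty per level, plus a
# heavy penalty for over-filling the fastest level.
LEVELS = {"sram": 1.0, "llc": 3.0, "dram": 10.0}
SIZES = [4, 8, 2, 16, 6, 3]          # hypothetical tensor sizes
SRAM_CAP = 12                        # hypothetical fast-memory capacity

def cost(placement):
    latency = sum(LEVELS[lvl] * s for lvl, s in zip(placement, SIZES))
    sram_used = sum(s for lvl, s in zip(placement, SIZES) if lvl == "sram")
    return latency + 100.0 * max(0, sram_used - SRAM_CAP)

def mutate(placement, rng):
    # flip one tensor's memory level at random
    child = list(placement)
    child[rng.randrange(len(child))] = rng.choice(list(LEVELS))
    return child

rng = random.Random(0)
best = ["dram"] * len(SIZES)         # start with everything in slow memory
for _ in range(2000):                # simple (1+1) evolutionary search
    child = mutate(best, rng)
    if cost(child) <= cost(best):
        best = child
```

the real method replaces this blind mutation with learned, graph-aware policies and evaluates cost on the chip itself, but the accept-if-not-worse loop is the evolutionary core.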
qrwe7XHTmYb.pdf | 2,021 | 2 | gshard: scaling giant models with conditional computation and automatic sharding dmitry lepikhin lepikhin@google.com hyoukjoong lee hyouklee@google.com yuanzhong xu yuanzx@google.com dehao chen dehao@google.com orhan firat orhanf@google.com yanping huang huangyp@google.com maxim krikun krikun@google.com noam shazeer noam@google.com zhifeng chen zhifengc@google.com abstract neural network scaling has been critical for improving the model quality in many real-world machine learning applications with vast amounts of training data and compute. although this trend of scaling is affirmed to be a sure-fire approach for better model quality, there are challenges on the path such as the computation cost, ease of programming, and efficient implementation on parallel devices. in this paper we demonstrate conditional computation as a remedy to the above mentioned impediments, and demonstrate its efficacy and utility. we make extensive use of gshard, a module composed of a set of lightweight annotation apis and an extension to the xla compiler to enable large scale models with up to trillions of parameters. gshard and conditional computation enable us to scale up a multilingual neural machine translation transformer model with sparsely-gated mixture-of-experts. we demonstrate that such a giant model with 600 billion parameters can efficiently be trained on 2048 tpu v3 cores in 4 days to achieve far superior quality for translation from 100 languages to english compared to the prior art. introduction scaling neural networks brings dramatic quality gains over a wide array of machine learning problems such as computer vision, language understanding and neural machine translation (devlin et al., 2018; mahajan et al., 2018; arivazhagan et al., 2019; huang et al., 2019; brown et al., 2020b).
this general tendency motivated recent studies to scrutinize the factors playing a critical role in the success of scaling, including the amounts of training data, the model size, and the computation being utilized, as found by past studies (advani & saxe, 2017; hestness et al., 2019; geiger et al., 2020). while the final model quality was found to have a power-law relationship with these factors (hestness et al., 2017; kaplan et al., 2020), the significant quality gains brought by larger models also came with various practical challenges. training efficiency, which we define as the amount of compute and time used to achieve a superior model quality against the best existing system, is oftentimes left out. in this study, we strive to improve the model quality while training efficiently. we built a 600 billion parameter sequence-to-sequence transformer model with sparsely-gated mixture-of-experts layers, which enjoys sub-linear computation cost and o(1) compilation time. we trained this model with 2048 tpu v3 devices for 4 days on a multilingual machine translation task and achieved far superior translation quality compared to the prior art when translating 100 languages to english with a single non-ensemble model. we conducted experiments with various model sizes and found that the translation quality increases as the model gets bigger, yet the total wall-time to train only increases sub-linearly with respect to the model size, as illustrated in figure 1. to train such an extremely large model, we relied on the following key design choices. figure 1: multilingual translation quality (average ∆bleu compared to bilingual baselines) improved as the moe model size grew up to 600b, while the end-to-end training cost (in terms of tpu v3 core-years) only increased sublinearly. increasing the model size from 37.5b to 600b (16x) results in a computation cost increase from 6 to 22 years (3.6x).
the 600b parameter model that achieved the best translation quality was trained with 2048 tpu v3 cores for 4 days, a total cost of 22 tpu v3 core-years. in contrast, training all 100 bilingual baseline models would have required 29 tpu v3 core-years. our best-quality dense single transformer model (2.3b parameters), achieving a ∆bleu of 6.1, was trained with gpipe for a total of 235.5 tpu v3 core-years. conditional computation first, the model architecture should be designed to keep the computation and communication requirements sublinear in the model capacity. conditional computation enables us to satisfy training and inference efficiency by having a sub-network activated on a per-input basis. shazeer et al. (2017) showed that scaling rnn model capacity by adding sparsely gated mixture-of-experts (moe) layers achieved improved results with sub-linear cost. we therefore present our approach to extend the transformer architecture with moe layers in this study. gshard annotation second, the model description should be separated from the partitioning implementation and optimization. this separation of concerns lets model developers focus on the network architecture and flexibly change the partitioning strategy, while the underlying system applies semantic-preserving transformations and implements efficient parallel execution. to this end we propose a module, gshard, which only requires the user to annotate a few critical tensors in the model with partitioning policies. it consists of a set of simple apis for annotations, and a compiler extension in xla for automatic parallelization. model developers write models as if there were a single device with huge memory and computation capacity, and the compiler automatically partitions the computation for the target based on the user annotations and its own heuristics. model the transformer (vaswani et al., 2017) architecture has been widely used for natural language processing.
we scale transformer with conditional computation by replacing every other feed-forward layer with a sparsely activated position-wise mixture-of-experts (moe) layer (shazeer et al., 2017), with a variant of top-2 gating in both the encoder and the decoder (figure 2). each subword token in the training example activates a sub-network of the moe transformer during both training and inference. the size of the sub-network is roughly independent of the number of experts per moe layer, allowing sublinear scaling of the computation cost. position-wise mixture-of-experts layer the mixture-of-experts (moe) layers used in our model differ from shazeer et al. (2017)'s in the sparse gating function and the auxiliary loss being used. a moe layer for transformer consists of e feed-forward networks ffn_1 . . . ffn_e, each of which outputs ffn_e(x_s) = wo_e · relu(wi_e · x_s), where x_s is the input token to the moe layer, and wi_e and wo_e are the input and output projection matrices for the feed-forward layer (an expert) with shapes [m, h] and [h, m], respectively. the output of a moe layer is the weighted combination of the expert outputs, Σ_{e=1}^{e} g_{s,e} · ffn_e(x_s), where the vector g_{s,e} is computed by a gating function gate(·). we choose to let each token be dispatched to at most two experts. the corresponding gating entries g_{s,e} become non-zero, representing how much an expert contributes to the final network output. figure 2: illustration of scaling of moe transformer encoder layers. decoder modification is similar. (a) standard transformer. (b) replacing every other feed-forward layer with a moe layer. (c) the moe layer is sharded across multiple devices, while all other layers are replicated. the gating function gate(·) is critical to the moe layer; it is modeled by a softmax activation function to indicate the weight of each expert in processing incoming tokens. we designed a novel efficient gating function with the following mechanisms (details illustrated in algorithm 1).
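the gating and combination just described can be sketched in a few lines of numpy. this is a toy single-token sketch, not the paper's tpu implementation; the function name `moe_layer` and the dense loop over the top-2 experts are illustrative assumptions:

```python
import numpy as np

def moe_layer(x, wg, wi, wo):
    """toy position-wise moe forward pass for one token x of shape [m].
    wg: [m, e] gating weights; wi: [e, m, h] and wo: [e, h, m] expert weights."""
    logits = x @ wg                      # gating logits over the e experts
    g = np.exp(logits - logits.max())
    g = g / g.sum()                      # softmax gate values g_{s,e}
    top2 = np.argsort(g)[-2:]            # dispatch to at most two experts
    y = np.zeros_like(x)
    for e in top2:
        h = np.maximum(x @ wi[e], 0.0)   # ffn_e: relu(wi_e . x_s)
        y += g[e] * (h @ wo[e])          # ... combined with weight g_{s,e}
    return y

rng = np.random.default_rng(0)
M, H, E = 4, 8, 6
wg = rng.normal(size=(M, E))
wi = rng.normal(size=(E, M, H))
wo = rng.normal(size=(E, H, M))
x = rng.normal(size=M)
y = moe_layer(x, wg, wi, wo)
```

the output has the same dimension m as the input, so the moe layer is a drop-in replacement for a feed-forward layer.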
load balancing naively picking the top-k experts from the softmax probability distribution leads to a load-imbalance problem in training, as shown in shazeer et al. (2017): most tokens would be dispatched to a small number of experts, leaving the other experts insufficiently trained. to ensure the load is balanced, we enforce that the number of tokens processed by one expert stays below some uniform threshold called the expert capacity. assuming n total tokens in a batch and at most two experts per token, the expert capacity c is set to be o(n/e). gate(·) keeps a running counter c_e of how many tokens are dispatched to each expert. when both experts selected by a token have already exceeded their capacity, the token is considered an overflowed token, and g_{s,e} degenerates into a zero vector. such tokens are passed on to the next layer via residual connections. the introduction of the fixed expert capacity, instead of the load-balancing functions in shazeer et al. (2017), allows us to run the gating function in parallel as described below. local dispatching for parallel gating load balancing makes the token assignments of one expert dependent on the assignments of the other experts. the original gating function proposed by shazeer et al. (2017) had to be implemented sequentially, especially under the static shape constraints on tpus. since we distribute thousands of experts over thousands of devices in this study, a sequential implementation of the gating function would keep most of the devices idle most of the time. instead, we propose a new gate(·) function that partitions all tokens in a training batch evenly into g local groups, i.e., each group contains s = n/g tokens for local dispatching. all local groups are processed independently in parallel. each group is given a fractional capacity of each expert, c = 2n/(g · e), to ensure that at most this many tokens are dispatched to an expert.
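a minimal sketch of capacity-constrained dispatching within one local group, assuming top-1 routing for brevity (the paper routes to up to two experts); `dispatch_group` and the greedy loop are illustrative, not the actual gate(·):

```python
import numpy as np

def dispatch_group(best_expert, E, C):
    """greedy capacity-constrained dispatch for one local group.
    best_expert: per-token index of its preferred expert.
    returns the assigned expert per token, with -1 marking overflowed
    tokens (which would pass to the next layer via the residual)."""
    count = np.zeros(E, dtype=int)           # running counter c_e per expert
    assign = np.full(len(best_expert), -1)
    for s, e in enumerate(best_expert):
        if count[e] < C:                     # enforce the expert capacity
            assign[s] = e
            count[e] += 1
    return assign, count

S, E = 8, 4
C = 2 * S // E                               # fractional capacity per group
best = np.zeros(S, dtype=int)                # skewed: all tokens prefer expert 0
assign, count = dispatch_group(best, E, C)
```

with all 8 tokens preferring expert 0 and capacity 4, the first 4 tokens are dispatched and the remaining 4 overflow, illustrating why a larger capacity reduces overflow.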
in general, increasing the expert capacity c decreases the number of overflowed tokens and thus improves the model quality. since g × c is a constant, however, a higher capacity leads to a smaller number of groups, which hurts the training throughput by limiting the number of parallel gating executions. in this way, we can ensure that expert capacity is still enforced and the overall load is balanced. with fixed expert capacity and local dispatching, we are able to speed up the gating function by o(g) times. auxiliary loss following shazeer et al. (2017), we define a new differentiable auxiliary loss term ℓ_aux to enforce load balancing. it is added to the overall loss function of the model, l = ℓ_ori + k · ℓ_aux, with a constant multiplier k, where ℓ_aux is defined in line (13) of algorithm 1 and the term c_e/s represents the fraction of input routed to each expert.

algorithm 1: group-level top-2 gating with auxiliary loss
data: x_S, a group of tokens of size S
data: C, expert capacity allocated to this group
result: g_{s,e}, group combine weights
result: ℓ_aux, group auxiliary loss
(1) c_e ← 0 for e ← 1 to E  ▷ gating decisions per expert
(2) g_{s,e} ← softmax(w_g · x_s)  ▷ gates per token per expert; w_g are trainable weights
(3) m_e ← (1/S) Σ_{s=1}^{S} g_{s,e}  ▷ mean gates per expert
(4) for s ← 1 to S do
(5)   g1, e1, g2, e2 ← top_2({g_{s,e} | e = 1 · · · E})  ▷ top-2 gates and expert indices
(6)   g1 ← g1/(g1 + g2)  ▷ normalized g1
(7)   c ← c_e1  ▷ position in e1 expert buffer
(8)   if c < C then
(9)     g_{s,e1} ← g1  ▷ e1 expert combine weight for x_s
(10)  end
(11)  c_e1 ← c + 1  ▷ incrementing e1 expert decisions count
(12) end
(13) ℓ_aux ← (1/E) Σ_{e=1}^{E} (c_e/S) · m_e  ▷ group auxiliary loss
(14) for s ← 1 to S do
(15)  g2 ← g2/(g1 + g2)  ▷ normalized g2
(16)  rnd ← uniform(0, 1)  ▷ dispatch to second-best expert with probability ∝ 2 · g2
(17)  c ← c_e2  ▷ position in e2 expert buffer
(18)  if c < C ∧ 2 · g2 > rnd then
(19)    g_{s,e2} ← g2  ▷ e2 expert combine weight for x_s
(20)  end
(21)  c_e2 ← c + 1
(22) end

we replace the mean square (c_e/s)² with the differentiable approximation
m_e(c_e/s), which can provide better numerical stability since it can be optimized with gradient descent. random routing intuitively, the output y_s is a weighted average of what the selected experts return. if the weight for the 2nd expert is very small, we can simply ignore the 2nd expert to conserve the overall expert capacity. hence, in addition to respecting the expert capacity constraint, gate(·) dispatches to the 2nd-best expert with probability proportional to its weight g2. we observed far fewer overflowed tokens, and thus better accuracy, with random routing for models at small scale, and we then adopted this approach for our experiments at large scale. highly parallel implementation using gshard to implement the model in section 2.1 efficiently on a cluster of devices, we first express the model in terms of linear algebra operations, which are highly tailored and optimized in our software stack tensorflow (abadi et al., 2016) and the hardware platform (tpu). our model implementation (algorithm 2) views the whole accelerator cluster as a single device and expresses its core algorithm in a few tensor operations independent of the setup of the cluster. we extensively used tf.einsum, the einstein summation notation (einstein, 1923), to concisely express the model. top2gating in algorithm 2 computes the union of all group-local g_{s,e} described in the gating algorithm 1. combine_weights is a 4-d tensor with shape [g, s, e, c], whose element value becomes non-zero when the input token s in group g is sent to expert e at capacity buffer position c. for a specific g and s, the slice combine_weights[g, s, :, :] contains at most two non-zero values. the binary dispatch_mask is produced from combine_weights by simply setting all non-zero values to 1. to scale the computation to a cluster with d devices, we choose the number of groups g and the number of experts e proportional to d.
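the auxiliary loss ℓ_aux = (1/e) Σ_e (c_e/s) · m_e can be checked numerically. this toy numpy sketch (the function name `aux_loss` is ours) shows that balanced routing yields a smaller penalty than skewed routing, which is exactly what the loss term is meant to encourage:

```python
import numpy as np

def aux_loss(gates, counts, S):
    """l_aux = mean_e (c_e / S) * m_e.
    gates: [S, E] softmax gate values; counts: tokens dispatched per expert.
    c_e/S is the (non-differentiable) routed fraction; it multiplies the
    mean gate m_e, which carries the gradient."""
    m = gates.mean(axis=0)                   # m_e: mean gate per expert
    return float(np.mean((counts / S) * m))

S = 4
balanced = np.full((S, 2), 0.5)              # gates split evenly over 2 experts
skewed = np.tile([0.9, 0.1], (S, 1))         # gates concentrated on expert 0
loss_balanced = aux_loss(balanced, np.array([2, 2]), S)  # counts follow top-1
loss_skewed = aux_loss(skewed, np.array([4, 0]), S)
```

here loss_balanced = 0.25 while loss_skewed = 0.45, so gradient descent on ℓ_aux pushes the gates toward uniform expert usage.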
with c · e = o(2s) and the number of tokens per group s independent of d, as well as the model dimension m and the feed-forward hidden dimension h, the total number of floating point operations (flops) per device in algorithm 2 is:

flops_softmax + flops_top2gating + flops_dispatch|combine + flops_ffn
= o(gsme)/d + o(gsec)/d + o(gsmec)/d + o(egchm)/d
= o(dsm) + o(2s²) + o(2s²m) + o(2shm)

algorithm 2: forward pass of the position-wise moe layer. the underscored letter (e.g., g and e) indicates the dimension along which a tensor will be partitioned.
gates = softmax(einsum("gsm,me->gse", inputs, wg))
combine_weights, dispatch_mask = top2gating(gates)
dispatched_inputs = einsum("gsec,gsm->egcm", dispatch_mask, inputs)
h = einsum("egcm,emh->egch", dispatched_inputs, wi)
h = relu(h)
expert_outputs = einsum("egch,ehm->gecm", h, wo)
outputs = einsum("gsec,gecm->gsm", combine_weights, expert_outputs)

the per-device flops for softmax is proportional to d, but in our experiments d ≤ 2h for up to 16k devices, so it is less than that of the ffn. consequently, the total per-device flops can be considered independent of d, satisfying the sublinear scaling design requirement. in addition to the computation cost, dispatching and combining token embeddings using alltoall operators consumed o(√d) cross-device communication cost on our 2d tpu cluster. we will discuss the cost analysis and micro-benchmarks for such communication overheads in appendix section a.3.3. due to the daunting size and computation demand of tensors in algorithm 1 when we scale the number of tokens n to millions and the number of experts e to thousands, we have to parallelize the algorithm over many devices. to express parallelism, tensors in the linear algebra computation are annotated with sharding information using gshard apis to selectively specify how they should be partitioned across a cluster of devices. for example, the underscored letters in algorithm 2 specify along which dimension the tensors are partitioned.
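the per-device flop accounting above can be mimicked with a small order-of-magnitude sketch, assuming g = e = d and c · e = 2s as in the text (constants and real kernel costs are ignored; the function name is ours):

```python
def per_device_flops(D, S=512, M=1024, H=4096):
    """per-device flop terms of the moe layer (orders of magnitude only),
    with the number of groups G and experts E both set to D, and the
    expert capacity C = 2S/E so that C*E = 2S."""
    G = E = D
    C = 2 * S / E
    softmax = G * S * M * E / D        # o(gsme)/d: grows linearly with d
    gating = G * S * E * C / D         # o(gsec)/d  = 2*s^2, independent of d
    dispatch = G * S * M * E * C / D   # o(gsmec)/d = 2*s^2*m, independent of d
    ffn = E * G * C * H * M / D        # o(egchm)/d = 2*s*h*m, independent of d
    return softmax, gating, dispatch, ffn

f_small = per_device_flops(128)
f_big = per_device_flops(2048)
```

scaling d from 128 to 2048 multiplies only the softmax term by 16, while the gating, dispatch, and ffn terms stay constant per device, matching the sublinear scaling claim.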
this sharding information is propagated to the compiler so that the compiler can automatically apply transformations for parallel execution. please refer to appendix a.2 for a more detailed description of the gshard module. we express the annotated version of algorithm 2 as below: the input tensor is split along the first dimension and the gating weight tensor is replicated. after computing the dispatched expert inputs, we apply split to change the sharding from the group (g) dimension to the expert (e) dimension.

# partition inputs along the first (group g) dim across d devices.
+ inputs = split(inputs, 0, d)
# replicate the gating weights across all devices.
+ wg = replicate(wg)
gates = softmax(einsum("gsm,me->gse", inputs, wg))
combine_weights, dispatch_mask = top2gating(gates)
dispatched_inputs = einsum("gsec,gsm->egcm", dispatch_mask, inputs)
# partition dispatched inputs along expert (e) dim.
+ dispatched_inputs = split(dispatched_inputs, 0, d)
h = einsum("egcm,emh->egch", dispatched_inputs, wi)

where split(tensor, d, D) annotates tensor to be partitioned along dimension d over D devices, and replicate(tensor) annotates tensor to be replicated across partitions. the invocations of gshard apis such as split or replicate only add sharding information to the tensor and do not change its logical shape. moreover, users are not required to annotate every tensor in the program. annotations are typically only required on a few important operators like einsums in our model, and the compiler uses iterative data-flow analysis to infer sharding for the rest of the tensors. massively multilingual, massive machine translation (m4) we chose multilingual neural machine translation (mt) (firat et al., 2016; johnson et al., 2017; aharoni et al., 2019) to validate our design for efficient training with gshard.
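the semantics of the split/replicate annotations above can be simulated in numpy: partitioning inputs along the group dimension while replicating wg gives the same gates as the unpartitioned computation. this is a toy simulation of the semantic-preserving property, not the gshard compiler:

```python
import numpy as np

def softmax(z):
    z = np.exp(z - z.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
D, G, S, M, E = 4, 8, 5, 3, 6        # toy sizes; G divisible by D
inputs = rng.normal(size=(G, S, M))
wg = rng.normal(size=(M, E))         # gating weights, replicated everywhere

# unpartitioned reference: line 1 of algorithm 2
ref = softmax(np.einsum("gsm,me->gse", inputs, wg))

# split(inputs, 0, D): each "device" holds G/D groups; since wg is
# replicated, the gating einsum runs independently per shard
shards = np.split(inputs, D, axis=0)
sharded = np.concatenate(
    [softmax(np.einsum("gsm,me->gse", s, wg)) for s in shards], axis=0)
```

because the einsum has no contraction over g, each shard's result depends only on its local groups, which is exactly why this operator parallelizes without communication.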
multilingual mt, which is an inherently multi-task learning problem, aims at building a single neural network for the goal of translating multiple language pairs simultaneously. this extends the line of work of huang et al. (2019); arivazhagan et al. (2019); shazeer et al. (2017) towards a universal machine translation model (bapna & firat, 2020), a single model that can translate between more than a hundred languages. in this section, we show how conditional computation (bengio et al., 2013; davis & arel, 2013) with sparsely gated mixture of experts fits the desiderata detailed above, and demonstrate its efficacy by scaling neural machine translation models while keeping the training time of such massive networks practical. e.g., a 600b gshard model for m4 can process 1t tokens (source-side tokens after sub-word segmentation) in 250k training steps under 4 days. we experiment with increasing the model capacity by adding more layers and more experts into the model, and study the factors playing a role in convergence, model quality, and training efficiency. further, we demonstrate how conditional computation can speed up the training, and how sparsely gating each token through the network can be learned efficiently without any prior knowledge of task or language relatedness, exemplifying the capability of learning the gating decision directly from the data. we focus on improving the translation quality (measured in terms of bleu score (papineni et al., 2002)) from all 100 languages to english. this resulted in approximately 13 billion training examples to be used for model training. our baselines are separate bilingual neural machine translation models for each language pair (e.g., a single model for german-to-english), tuned depending on the available training data per language.
rather than displaying individual bleu scores for each language pair, we follow the convention of placing the baselines along the x-axis at zero and report the ∆bleu trendline of each massively multilingual model trained with gshard (see figure 3). the x-axis in figure 3 is sorted from left to right in decreasing order of the amount of available training data, where the left-most side corresponds to high-resource languages and the right-most side to low-resource languages. we also include a variant of a dense 96-layer transformer encoder-decoder network t(96l), trained with gpipe pipeline parallelism on the same dataset, as another baseline; it took over 6 weeks to converge on 2048 tpu v3 cores. we varied the depth of the transformer network (l) and the number of experts (e) to scale the model. for depth, we tested three different options: 12 (the original transformer depth, which consists of 6 encoder and 6 decoder layers), 36, and 60 layers. for the number of experts that replace every other feed-forward layer, we also tested three options, namely 128, 512, and 2048 experts. note that the number of devices used for training is fixed to be equal to the number of experts per layer for simplicity. please also see the detailed description in table 1 for model configurations. during training, we use float32 for both model weights and activations in order to ensure training stability. we also ran additional scalability experiments with moe(2048e, 60l) with bfloat16 activations and more than one trillion model weights. we are still working on the model convergence and hence did not include results from this trillion-weight model, for the sake of reproducibility. results
mlpinit: embarrassingly simple gnn training acceleration with mlp initialization xiaotian han1∗ tong zhao2 yozen liu2 xia hu3 neil shah2 1texas a&m university han@tamu.edu 2snap inc. {tzhao,yliu2,nshah}@snap.com 3rice university xia.hu@rice.edu abstract training graph neural networks (gnns) on large graphs is complex and extremely time consuming. this is attributed to overheads caused by sparse matrix multiplication, which are sidestepped when training multi-layer perceptrons (mlps) with only node features. mlps, by ignoring graph context, are simple and faster for graph data; however, they usually sacrifice prediction accuracy, limiting their applications for graph data. we observe that for most message passing-based gnns, we can trivially derive an analog mlp (we call this a peermlp) with an equivalent weight space, by setting the trainable parameters with the same shapes, making us curious: how do gnns using weights from a fully trained peermlp perform? surprisingly, we find that gnns initialized with such weights significantly outperform their peermlps, motivating us to use peermlp training as a precursor initialization step for gnn training. to this end, we propose an embarrassingly simple, yet hugely effective initialization method for gnn training acceleration, called mlpinit. our extensive experiments on multiple large-scale graph datasets with diverse gnn architectures validate that mlpinit can accelerate the training of gnns (up to 33× speedup on ogb-products) and often improve prediction performance (e.g., up to 7.97% improvement for graphsage across 7 datasets for node classification, and up to 17.81% improvement across 4 datasets for link prediction on metric hits@10). the code is available at https://github.com/snapresearch/mlpinit-for-gnns.
introduction graph neural networks (gnns) (zhang et al., 2018; zhou et al., 2020; wu et al., 2020) have attracted considerable attention from both academic and industrial researchers and have shown promising results on various practical tasks, e.g., recommendation (fan et al., 2019; sankar et al., 2021; ying et al., 2018; tang et al., 2022), knowledge graph analysis (arora, 2020; park et al., 2019; wang et al., 2021), forecasting (tang et al., 2020; zhao et al., 2021; jiang & luo, 2022) and chemistry analysis (li et al., 2018b; you et al., 2018; de cao & kipf, 2018; liu et al., 2022). however, training gnns on large-scale graphs is extremely time-consuming and costly in practice, thus spurring considerable work dedicated to scaling up the training of gnns, even necessitating new massive graph learning libraries (zhang et al., 2020; ferludin et al., 2022) for large-scale graphs. recently, several approaches for more efficient gnn training have been proposed, including novel architecture design (wu et al., 2019; you et al., 2020d; li et al., 2021), data reuse and partitioning paradigms (wan et al., 2022; fey et al., 2021; yu et al., 2022) and graph sparsification (cai et al., 2020; jin et al., 2021b). however, these kinds of methods often sacrifice prediction accuracy and increase modeling complexity, while sometimes meriting significant additional engineering effort. mlps are used to accelerate gnns (zhang et al., 2021b; frasca et al., 2020; hu et al., 2021) by decoupling gnns into node feature learning and graph structure learning. our work also leverages mlps but adopts a distinct perspective. notably, we observe that the weight space of mlps and gnns can be identical, which enables us to transfer weights between mlp and gnn models. given that mlps train faster than gnns, this observation inspired us to raise the question: ∗this work was done while the first author was an intern at snap inc.
figure 1: the training speed comparison of gnns with random initialization and with mlpinit. we mark, for each gnn, the best performance that random initialization can achieve and the point at which mlpinit reaches comparable performance. speedup indicates the training time reduced by our proposed mlpinit compared to random initialization. this experimental result shows that mlpinit is able to accelerate the training of gnns significantly. can we train gnns more efficiently by leveraging the weights of converged mlps? to answer this question, we first pioneer a thorough investigation to reveal the relationship between mlps and gnns in terms of trainable weight space. for ease of presentation, we define the peermlp of a gnn1 so that the gnn and its peermlp share the same weights2. we find that, interestingly, gnns can be optimized by training the weights of their peermlp. based on this observation, we adopt the weights of a converged peermlp as the weights of the corresponding gnn and find that these gnns perform even better than the converged peermlp on node classification tasks (results in table 2). motivated by this, we propose an embarrassingly simple, yet remarkably effective method to accelerate gnn training by initializing a gnn with the weights of its converged peermlp. specifically, to train a target gnn, we first train its peermlp and then initialize the gnn with the optimal weights of the converged peermlp. we present the experimental results in figure 1 to show the training speed comparison of gnns with random initialization and with mlpinit. in figure 1, speedup shows the training time reduced by our proposed mlpinit compared to random initialization, while achieving the same test performance.
this experimental result shows that mlpinit is able to accelerate the training of gnns significantly: for example, we speed up the training of graphsage, graphsaint, clustergcn, and gcn by 2.48× on the ogb-arxiv dataset, indicating the superiority of our method in gnn training acceleration. moreover, we speed up graphsage training by more than 14× on ogb-products. we highlight our contributions as follows: • we pioneer a thorough investigation to reveal the relationship between mlps and gnns in terms of the trainable weight space through the following observations: (i) gnns and mlps have the same weight space. (ii) gnns can be optimized by training the weights of their peermlps. (iii) a gnn with weights from its converged peermlp surprisingly performs better than its converged peermlp on node classification tasks. • based on the above observations, we propose an embarrassingly simple yet surprisingly effective initialization method to accelerate gnn training. our method, called mlpinit, initializes the weights of gnns with the weights of their converged peermlps. after initialization, we observe that gnn training takes fewer than half the epochs to converge compared to random initialization. thus, mlpinit is able to accelerate the training of gnns, since training mlps is cheaper and faster than training gnns. • comprehensive experimental results on multiple large-scale graphs with diverse gnns validate that mlpinit is able to accelerate the training of gnns (up to 33× speedup on ogb-products) while often improving the model performance3 (e.g., 7.97% improvement for node classification on graphsage and 17.81% improvement for link prediction on hits@10). • mlpinit is extremely easy to implement and has virtually negligible computational overhead compared to conventional gnn training schemes.
in addition, it is orthogonal to other gnn acceleration methods, such as weight quantization and graph coarsening, further increasing the headroom for gnn training acceleration in practice. 1the formal definition of peermlp is in section 3. 2by sharing the same weights, we mean that the trainable weights of a gnn and its peermlp are the same in terms of size, dimension, and values. 3by performance, we refer to the model prediction quality metric of the downstream task on the corresponding test data throughout the discussion. preliminaries notations. we denote an attributed graph by g = (x, a), where x = [x1, x2, . . . , xn] ∈ r^{n×d} is the node feature matrix and a ∈ {0, 1}^{n×n} is the binary adjacency matrix. n is the number of nodes, and d is the dimension of the node features. for the node classification task, we denote the prediction targets by y ∈ {0, 1, . . . , c − 1}^n, where c is the number of classes. we denote a gnn model as fgnn(x, a; wgnn) and an mlp as fmlp(x; wmlp), where wgnn and wmlp denote the trainable weights of the gnn and mlp, respectively. moreover, w∗_gnn and w∗_mlp denote the fixed weights of the optimal (or converged) gnn and mlp, respectively. graph neural networks. although various forms of graph neural networks (gnns) exist, our work refers to the conventional message passing flavor (gilmer et al., 2017). these models work by learning a node's representation by aggregating information from the node's neighbors recursively. one simple yet widely popular gnn instantiation is the graph convolutional network (gcn), whose multi-layer form can be written concisely: the representation vectors of all nodes at the l-th layer are h^l = σ(a h^{l−1} w^l), where σ(·) denotes the activation function, w^l is the trainable weight matrix of the l-th layer, and h^{l−1} is the node representation output by the previous layer. denoting the output of the last layer of the gnn by h, for a node classification task the prediction of node labels is ŷ = softmax(h).
for a link prediction task, one can predict the edge probabilities with any suitable decoder, e.g., the commonly used inner-product decoder â = sigmoid(h hᵀ) (kipf & welling, 2016b). motivating analyses in this section, we reveal that mlps and gnns share the same weight space, which facilitates the transferability of weights between the two architectures. throughout this section, we use gcn (kipf & welling, 2016a) as a prototypical example of gnns for notational simplicity, but we note that our discussion generalizes to other message-passing gnn architectures. motivation 1: gnns share the same weight space with mlps. to show the weight space of gnns and mlps, we present the mathematical expression of one layer of an mlp and a gcn (kipf & welling, 2016a) as follows:

gnn: h^l = σ(a h^{l−1} w^l_gnn),   mlp: h^l = σ(h^{l−1} w^l_mlp),   (1)

where w^l_gnn and w^l_mlp are the trainable weights of the l-th layer of the gcn and mlp, respectively. if we set the hidden layer dimensions of the gnn and mlp to be the same, then w^l_mlp and w^l_gnn will naturally have the same size. thus, although the gnn and mlp are different models, their weight spaces can be identical. moreover, for any gnn model, we can trivially derive a corresponding mlp whose weight space can be made identical. for brevity, and when the context of a gnn model is made clear, we call such an mlp, which shares the same weight space, a peermlp, i.e., their trainable weights can be transferred to each other. motivation 2: mlps train faster than gnns. gnns train slower than mlps, owing to their non-trivial relational data dependency. we empirically validate that training mlps is much faster than training gnns in table 1. specifically, this is because mlps do not involve sparse matrix multiplication for neighbor aggregation. a gnn layer (here we consider a simple gcn layer, as defined in equation (1)) can be broken down into two operations: feature transformation (z = h^{l−1} w^l) and neighbor aggregation (h^l = a z) (ma et al., 2021).
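the weight sharing in equation (1) can be demonstrated in a few lines of numpy (a toy sketch; with an identity adjacency the gcn layer reduces exactly to its peermlp layer, making the shared weight space concrete):

```python
import numpy as np

def mlp_layer(h, w):
    return np.maximum(h @ w, 0.0)      # sigma(H^{l-1} W^l_mlp), relu activation

def gnn_layer(a, h, w):
    return np.maximum(a @ h @ w, 0.0)  # sigma(A H^{l-1} W^l_gnn)

rng = np.random.default_rng(0)
n, d, hidden = 5, 3, 4
x = rng.normal(size=(n, d))
w = rng.normal(size=(d, hidden))       # one weight matrix serves both layers

h_mlp = mlp_layer(x, w)
h_gnn = gnn_layer(np.eye(n), x, w)     # identity adjacency: no neighbors
```

the same matrix w is a valid parameter for both layers, and with a = i the two outputs coincide, so any optimizer state for one model is a legal starting point for the other.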
the neighbor aggregation and feature transformation are typically sparse and dense matrix multiplications, respectively. table 1 shows the time usage for these different operations on several real-world graphs. as expected, neighbor aggregation in gnns consumes the large majority of computation time. for example, on the yelp dataset, the neighbor aggregation operation induces a 3199× time overhead. given that the weights of gnns and their peermlps can be transferred to each other, but the peermlp can be trained much faster, we raise the following questions: 1. what will happen if we directly adopt the weights of a converged peermlp in a gnn? 2. to what extent can a peermlp speed up gnn training and improve gnn performance? in this paper, we try to answer these questions with a comprehensive empirical analysis. table 1: comparison of the running time of the forward and backward passes for different operations in gnns, i.e., feature transformation (z = xw) and neighbor aggregation (h = az), on ogb-arxiv, flickr, and yelp; forward, backward, and total times are reported in milliseconds (ms). figure 2: the relation of gnn and mlp during the training of the peermlp. left: cross-entropy loss of fgnn(x, a; wmlp) (gnn) and fmlp(x; wmlp) (peermlp) on the training set over the training epochs of the peermlp. in this experiment, the gnn and peermlp share the same weights wmlp, which are trained by the peermlp. middle: training trajectory of the peermlp on its own loss landscape. right: training trajectory of the gnn with weights from the peermlp on the gnn's loss landscape. the figures show that the training loss of a gnn with weights trained from its mlp will decrease. the details are presented in appendix d.2. we also present loss curves on validation/test sets and accuracy curves in appendix a.4. what will happen if we directly adopt the weights of a converged peermlp in a gnn?
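the z = h^{l−1}w / h = az decomposition can be sketched as follows (a toy numpy sketch; the flop counts illustrate that aggregation often has fewer flops than the dense transform, yet dominates wall-clock time in practice because sparse kernels are memory-bound):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h = 6, 4, 3
a = (rng.random((n, n)) < 0.3).astype(float)   # sparse-ish adjacency matrix
x = rng.normal(size=(n, d))
w = rng.normal(size=(d, h))

z = x @ w        # feature transformation: dense matmul
out = a @ z      # neighbor aggregation: sparse matmul in practice

# rough flop counts (2 flops per multiply-add); a sparse A touches only
# its nonzero entries, so fewer flops but irregular memory access
flops_transform = 2 * n * d * h
flops_aggregate = 2 * int(a.sum()) * h
```

the two-step form equals the fused a x w product, which is why a peermlp can train on the z = xw half alone while the gnn later adds the aggregation step.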
to answer this question, we conducted comprehensive preliminary experiments to investigate weight transferability between mlps and gnns. we made the following interesting and inspiring findings: observation 1: the training loss of a gnn will decrease by optimizing the weights of its peermlp. we conducted a verification experiment to investigate the loss changes of gnns with weights trained from their peermlp, and present the results in figure 2. in this experiment, we have two models, a gnn and its corresponding peermlp, which share the same weights wmlp. that is, the peermlp is fmlp(x; wmlp) and the gnn is fgnn(x, a; wmlp). we optimize the weights wmlp by training the peermlp; the loss curve of fmlp(x; wmlp) is the blue line in the left panel of figure 2. we also compute the loss of the gnn fgnn(x, a; wmlp) with the weights from the peermlp; its loss curve is shown as the red line. figure 2 shows the surprising phenomenon that the training loss of the gnn with weights trained from the peermlp decreases consistently. impressively, these weights (wmlp) were derived without employing neighbor aggregation in training. table 2: the performance of gnns and their peermlps with the weights of a converged peermlp on test data. observation 2: converged weights from the peermlp provide a good gnn initialization. as the peermlp and gnn have the same weight spaces, a natural follow-up question is whether a gnn can directly adopt the weights of the converged peermlp and perform well. we next aim to understand this question empirically. specifically, we first trained a peermlp for a target gnn and obtained the optimal weights w∗_mlp. next, we ran inference on test data using a gnn with the w∗_mlp of the peermlp, i.e., applying fgnn(x, a; w∗_mlp). table 2 shows the results of fmlp(x; w∗_mlp) and fgnn(x, a; w∗_mlp).
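the peermlp-to-gnn weight transfer can be reproduced end-to-end on synthetic data. this is a toy numpy sketch of the idea, not the paper's setup: a one-layer logistic "peermlp" is trained on node features alone, and its converged weights are then evaluated inside a one-layer "gnn" that additionally uses an adjacency matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 5
x = rng.normal(size=(n, d))
y = (x @ rng.normal(size=(d, 1)) > 0).astype(float)   # toy binary node labels
a = np.eye(n) + 0.01 * (rng.random((n, n)) < 0.2)     # self-loops + weak edges

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t):
    return float(-np.mean(t * np.log(p + 1e-9) + (1 - t) * np.log(1 - p + 1e-9)))

# step 1: train only the peermlp, f_mlp(x; w) = sigmoid(x w), on features
w = np.zeros((d, 1))
for _ in range(300):
    p = sigmoid(x @ w)
    w -= 0.5 * x.T @ (p - y) / n        # gradient step on the bce loss

# step 2: evaluate the gnn f_gnn(x, a; w) with the transferred weights
loss_random = bce(sigmoid(a @ x @ np.zeros((d, 1))), y)   # untrained weights
loss_mlpinit = bce(sigmoid(a @ x @ w), y)                 # peermlp weights
```

even though the adjacency a never appeared during training, plugging the peermlp weights into the gnn yields a much lower loss than the untrained starting point, mirroring observation 2.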
we can observe that the gnns with the optimal weights of the peermlp consistently outperform the peermlp, indicating that the weights from a converged peermlp can serve as a good initialization of the weights of gnns. the proposed method: mlpinit the above findings show that mlps can help the training of gnns. in this section, we formally present our method mlpinit, which is an embarrassingly simple, yet extremely effective approach to accelerating gnn training. the basic idea of mlpinit is straightforward: we adopt the weights of a converged peermlp to initialize the gnn and subsequently fine-tune the gnn. specifically, for a target gnn fgnn(x, a; wgnn), we first construct a peermlp fmlp(x; wmlp) with matching target weights. next, we optimize the weights of the peermlp model by training the peermlp solely with the node features x for m epochs. upon training the peermlp to convergence and obtaining the optimal weights w∗_mlp, we initialize the gnn with w∗_mlp and then fine-tune the gnn for n epochs. we present pytorch-style pseudo-code of mlpinit in the node classification setting in algorithm 1. training acceleration. since training the peermlp is comparatively cheap, and the weights of the converged peermlp provide a good initialization for the corresponding gnn, the end result is that we can significantly reduce the training time of the gnn. assuming that training a gnn from a random initialization needs N epochs to converge, with N ≫ n, the total training time can be largely reduced, given that mlp training time is negligible compared to gnn training time. the experimental results in table 3 show that N is generally much larger than n.
algorithm 1: pytorch-style pseudocode of mlpinit

# f_gnn: graph neural network model
# f_mlp: peermlp of f_gnn

# train peermlp for m epochs
for x, y in dataloader_mlp:
    p = f_mlp(x)
    loss = nn.crossentropyloss(p, y)
    loss.backward()
    optimizer_mlp.step()

# initialize gnn with mlpinit
torch.save(f_mlp.state_dict(), "w_mlp.pt")
f_gnn.load_state_dict(torch.load("w_mlp.pt"))

# train gnn for n epochs
for x, a, y in dataloader_gnn:
    p = f_gnn(x, a)
    loss = nn.crossentropyloss(p, y)
    loss.backward()
    optimizer_gnn.step()

ease of implementation. mlpinit is extremely easy to implement, as shown in algorithm 1. first, we construct an mlp (the peermlp) which has the same weights as the target gnn. next, we use the node features x and node labels y to train the peermlp to convergence. then, we adopt the weights of the converged peermlp for the gnn and fine-tune the gnn while additionally leveraging the adjacency matrix a. in addition, our method can also directly serve as the final, or deployed, gnn model in resource-constrained settings: assuming n = 0, we can simply train the peermlp and adopt w∗_mlp directly. this reduces training cost further, while enabling us to serve a likely higher-performance model in deployment or test settings, as table 2 shows. discussion in this section, we discuss the relation between mlpinit and existing methods. since we position mlpinit as an acceleration method involving mlps, we first compare it with mlp-based gnn acceleration methods, and we also compare it with gnn pre-training methods. comparison to mlp-based gnn acceleration methods. recently, several works aim to simplify gnns to mlp-based constructs during training or inference (zhang et al., 2022; wu et al., 2019; frasca et al., 2020; sun et al., 2021; huang et al., 2020; hu et al., 2021). our method is proposed to accelerate message passing-based gnns for large-scale graphs.
thus, mlp-based gnn acceleration is a completely different line of work compared to ours, since it removes the message passing in gnns and instead uses mlps to model the graph structure; such methods are therefore out of the scope of the discussion in this work. comparison to gnn pre-training methods. our proposed mlpinit is orthogonal to gnn pre-training methods (you et al., 2020b; zhu et al., 2020b; veličković et al., 2018b; you et al., 2021; qiu et al., 2020; zhu et al., 2021; hu et al., 2019). gnn pre-training typically leverages graph augmentation to pretrain the weights of gnns or to obtain node representations for downstream tasks. compared with pre-training methods, mlpinit has two main differences (or advantages) that significantly contribute to the speedup: (i) training the peermlp does not involve the graph structure data, while pre-training methods rely on it; (ii) pre-training methods usually involve graph data augmentation (qiu et al., 2020; zhao et al., 2022a), which requires additional training time. table 3: speed improvement when mlpinit achieves performance comparable to a randomly initialized gnn. the number reported is the training epochs needed. (—) means our method cannot reach comparable performance. the epochs used by random/mlpinit are denoted as in figure 1. the detailed speedup computation method is presented in appendix d.3. [table 3 body omitted: for each backbone (graphsage, graphsaint, clustergcn, gcn) and each dataset (flickr, yelp, reddit, reddit2, a-products, ogb-arxiv, ogb-products, avg.), rows report the epochs for random initialization, for mlpinit, and the resulting improvement.] figure 3: the training curves of different gnns on ogb-arxiv. gnns with mlpinit generally obtain lower loss and higher accuracy than those with random initialization, and converge faster. the training curves are depicted based on ten runs. more experiment results are in appendix a.
experiments in the next subsections, we conduct and discuss experiments to understand mlpinit from the following aspects: (i) training speedup, (ii) performance improvements, (iii) hyperparameter sensitivity, and (iv) robustness and loss landscape. for node classification, we consider flickr, yelp, reddit, reddit2, a-products, and two ogb datasets (hu et al., 2020), ogb-arxiv and ogb-products, as benchmark datasets. we adopt gcn (w/ mini-batch) (kipf & welling, 2016a), graphsage (hamilton et al., 2017), graphsaint (zeng et al., 2019), and clustergcn (chiang et al., 2019) as gnn backbones. the details of the datasets and baselines are in appendices c.1 and c.2, respectively. for the link prediction task, we consider cora, citeseer, pubmed, corafull, cs, physics, a-photo, and a-computers as our datasets. our link prediction setup uses a gcn as an encoder, which transforms a graph into node representations h, together with an inner-product decoder â = sigmoid(h hᵀ) to predict the probability of link existence, as discussed in section 2. how much can mlpinit accelerate gnn training? in this section, we compare the training speed of gnns with random initialization and with mlpinit. we compute the number of training epochs needed by gnns with random initialization to achieve their best test performance, as well as the number of epochs needed by gnns with mlpinit to achieve comparable test performance. we present the results in table 3. we also plot the loss and accuracy curves of different gnns on ogb-arxiv in figure 3. we made the following major observations: table 4: performance improvement when the gnn with random initialization and the gnn with mlpinit each achieve their best test performance. mean and standard deviation are calculated based on ten runs. the best test performance for the two methods is independently selected based on validation data. [table 4 body omitted: results per dataset (flickr, yelp, reddit, reddit2, a-products, ogb-arxiv, ogb-products, avg.).] table 5: the performance of the link prediction task.
the results are based on ten runs. the experiments on other datasets are presented in table 6. more experiments are presented in appendix a.1. [table 5 body omitted: per-dataset (including pubmed, a-photo, cs, physics) auc and ap for mlp-random, gnn-random, and gnn-mlpinit, with the improvement.] observation 3: mlpinit can significantly reduce the training time of gnns. in this experiment, we summarize the epochs needed by the gnn with random initialization to obtain its best performance, and then we calculate the epochs needed by the gnn with mlpinit to reach performance on par with the randomly initialized gnn. we present the time speedup of mlpinit in table 3. table 3 shows mlpinit speeds up the training of gnns by 2–5 times generally, and in some cases by more than 30 times. the consistent reduction of training epochs on different datasets demonstrates that mlpinit can generally speed up gnn training quite significantly. how well does mlpinit perform on node classification and link prediction tasks? in this section, we conducted experiments to show the superiority of the proposed method in terms of the final, converged gnn model performance on node classification and link prediction tasks. the reported test performances of both random initialization and mlpinit are selected based on the validation data. we present the performance improvement of mlpinit compared to random initialization in tables 4 and 5 for node classification and link prediction, respectively. observation 4: mlpinit improves the prediction performance for both node classification and link prediction tasks in most cases. table 4 shows our proposed method gains 7.97%, 7.00%, 6.61%, and 14.00% average improvements for graphsage, graphsaint, clustergcn, and gcn, respectively, across figure 5: the loss landscape of gnn trained with random initialization (left) and mlpinit (right).
the low-loss area of gnns with mlpinit is larger than that with random initialization. figure 6: the training trajectory of the gnn with random initialization (left) and mlpinit (right). the first-phase training of gnns can be taken over by lightweight mlps. all the datasets for the node classification task. the results in table 5 and table 6 show our proposed method gains 1.05%, 1.10%, 17.81%, 20.97%, 14.88%, and 10.46% on average across various metrics for the link prediction task. is mlpinit robust under different hyperparameters? | 7 | [
132.15924,
455.6200784,
389.6193944,
465.5826784
] |
w0QXrZ3N-s.pdf | 2,023 | 2 | the modality focusing hypothesis: towards understanding crossmodal knowledge distillation zihui xue∗,1, zhengqi gao∗,2, sucheng ren∗,3, hang zhao†,4 1 the university of texas at austin 2 massachusetts institute of technology 3 south china university of technology 4 tsinghua university, shanghai qi zhi institute abstract crossmodal knowledge distillation (kd) extends traditional knowledge distillation to the area of multimodal learning and demonstrates great success in various applications. to achieve knowledge transfer across modalities, a pretrained network from one modality is adopted as the teacher to provide supervision signals to a student network learning from another modality. in contrast to the empirical success reported in prior works, the working mechanism of crossmodal kd remains a mystery. in this paper, we present a thorough understanding of crossmodal kd. we begin with two case studies and demonstrate that kd is not a universal cure in crossmodal knowledge transfer. we then present the modality venn diagram (mvd) to understand modality relationships and the modality focusing hypothesis (mfh), which reveals the decisive factor in the efficacy of crossmodal kd. experimental results on 6 multimodal datasets help justify our hypothesis, diagnose failure cases, and point directions to improve crossmodal knowledge transfer in the future.1 introduction knowledge distillation (kd) is an effective technique to transfer knowledge from one neural network to another (wang & yoon, 2021; gou et al., 2021). its core mechanism is a teacher-student learning framework, where the student network is trained to mimic the teacher through a distillation loss. the loss function, initially proposed by (hinton et al., 2015) as the kl divergence between teacher and student soft labels, has been extended in many ways (zagoruyko & komodakis, 2016; tung & mori, 2019; park et al., 2019; peng et al., 2019; tian et al., 2019).
kd has been successfully applied to various fields and demonstrates its high practical value. the wide applicability of kd stems from its generality: any student can learn from any teacher. to be more precise, the student and teacher network may differ in several ways. three common scenarios are: (1) model capacity difference: many works (zagoruyko & komodakis, 2016; tung & mori, 2019; park et al., 2019; peng et al., 2019) on model compression aim to learn a lightweight student matching the performance of its cumbersome teacher for deployment benefits. (2) architecture (inductive bias) difference: as an example, recent works (touvron et al., 2021; ren et al., 2022; xianing et al., 2022) propose to utilize a cnn teacher to distill its inductive bias into a transformer student for data efficiency. (3) modality difference: kd has been extended to transfer knowledge across modalities (gupta et al., 2016; aytar et al., 2016; zhao et al., 2018; garcia et al., 2018; thoker & gall, 2019; ren et al., 2021; afouras et al., 2020; valverde et al., 2021; xue et al., 2021), where the teacher and student network come from different modalities. examples include using an rgb teacher to provide supervision signals to a student network taking depth images as input, and adopting an audio teacher to learn a visual student, etc. despite the great empirical success reported in prior works, the working mechanism of kd is still poorly understood (gou et al., 2021). this puts the efficacy of kd into question: is kd always efficient? if not, what is a good indicator of kd performance? a few works (cho & hariharan, 2019; tang et al., 2020; ren et al., 2022) search for the answer in the context of model capacity difference and architecture difference. (∗ zihui, zhengqi, and sucheng contribute equally. work is done during an internship at shanghai qi zhi institute. † correspond to hangzhao@mail.tsinghua.edu.cn. 1 our code is available at https://github.com/zihuixue/mfh.)
however, the analysis for the third scenario, kd under modality difference or formally crossmodal kd, remains an open problem. this work aims to fill this gap and for the first time provides a comprehensive analysis of crossmodal kd. our major contributions are the following: • we evaluate crossmodal kd on a few multimodal tasks and find surprisingly that teacher performance does not always positively correlate with student performance. • to explore the cause of performance mismatch in crossmodal kd, we adopt the modality venn diagram (mvd) to understand modality relationships and formally define modality-general decisive features and modality-specific decisive features. • we present the modality focusing hypothesis (mfh) that provides an explanation of when crossmodal kd is effective. we hypothesize that modality-general decisive features are the crucial factor that determines the efficacy of crossmodal kd. • we conduct experiments on 6 multimodal datasets (i.e., synthetic gaussian, av-mnist, ravdess, vggsound, nyu depth v2, and mm-imdb). the results validate the proposed mfh and provide insights on how to improve crossmodal kd. related work unimodal kd kd represents a general technique that transfers information learned by a teacher network to a student network, with applications to many vision tasks (tung & mori, 2019; peng et al., 2019; he et al., 2019; liu et al., 2019). despite the development towards better distillation techniques or new application fields, there is limited literature (phuong & lampert, 2019; cho & hariharan, 2019; tang et al., 2020; ren et al., 2022; 2023) on understanding the working mechanism of kd. specifically, cho & hariharan (2019) and mirzadeh et al. (2020) investigate kd for model compression, i.e., when the student and teacher differ in model size. they point out that mismatched capacity between student and teacher network can lead to failure of kd. ren et al.
(2022) analyze kd for vision transformers and demonstrate that the teacher’s inductive bias matters more than its accuracy in improving performance of the transformer student. these works provide good insight into understanding kd, yet their discussions are limited to unimodality and have not touched on kd for multimodal learning. crossmodal kd with the accessibility of the internet and the growing availability of multimodal sensors, multimodal learning has received increasing research attention (baltrušaitis et al., 2018). following this trend, kd has also been extended to achieve knowledge transfer from multimodal data and enjoys diverse applications, such as action recognition (garcia et al., 2018; luo et al., 2018; thoker & gall, 2019), lip reading (ren et al., 2021; afouras et al., 2020), and medical image segmentation (hu et al., 2020; li et al., 2020). vision models are often adopted as teachers to provide supervision to student models of other modalities, e.g., sound (aytar et al., 2016; xue et al., 2021), depth (gupta et al., 2016; xue et al., 2021), optical flow (garcia et al., 2018), thermal (kruthiventi et al., 2017), and wireless signals (zhao et al., 2018). although these works demonstrate the potential of crossmodal kd, they are often associated with a specific multimodal task. an in-depth analysis of crossmodal kd is notably lacking, which is the main focus of this paper. multimodal data relations there is continuous discussion on how to characterize multimodal (or multi-view) data relations. many works (tsai et al., 2020; lin et al., 2021; 2022) utilize the multi-view assumption (sridharan & kakade, 2008), which states that either view alone is sufficient for the downstream tasks.
however, as suggested in (tsai et al., 2020), when the two views of input lie in different modalities, the multi-view assumption is likely to fail.2 in the meantime, a few works on multimodal learning (wang et al., 2016; zhang et al., 2018; hazarika et al., 2020; ma et al., 2020) indicate that multimodal features can be decomposed into modality-general features and features specific to each modality. building upon these ideas, in this work, we present the mvd to formally characterize modality relations. in addition, the importance of modality-general information has been identified in these works, yet with different contexts. in multi-view learning, (lin et al., 2021; 2022) consider shared information between two views as the key to enforcing cross-view consistency. to boost multimodal network performance and enhance its generalization ability, (wang et al., 2016; zhang et al., 2018; hazarika et al., 2020; ma et al., 2020) propose different ways to separate modality-general and modality-specific information. for semi-supervised multimodal learning, (sun et al., 2020) aims at maximizing the mutual information shared by all modalities. to the best of our knowledge, our work is the first to reveal the importance of modality-general information in crossmodal kd. (2 a detailed comparison of our proposed mvd with the multi-view assumption is presented in appendix c.) on the efficacy of crossmodal kd first, we revisit the basics of kd and introduce notations used throughout the paper. consider a supervised k-class classification problem. let fθs(x) ∈ R^K and fθt(x) ∈ R^K represent the outputs (i.e., class probabilities) of the student and teacher networks, respectively, where {θs, θt} are learnable parameters. without loss of generality, we limit our discussion to input data of two modalities, denoted by xa and xb for modality a and b, respectively. assume that we aim to learn a student network that takes xb as input.
in conventional unimodal kd, the teacher network takes input from the same modality as the student network (i.e., xb). the objective for training the student is:

L = ρ · L_task + (1 − ρ) · L_KD,   (1)

where L_task represents the cross-entropy loss between the ground-truth label y ∈ {0, 1, · · · , k − 1} and the student prediction fθs(xb), L_KD represents the kl divergence between the student prediction fθs(xb) and the teacher prediction fθt(xb), and ρ ∈ [0, 1] weighs the importance of the two terms L_task and L_KD (i.e., driving the student toward true labels or the teacher’s soft predictions). crossmodal kd resorts to a teacher from the other modality (i.e., xa) to transfer knowledge to the student. eq. (1) is still valid, with the slight correction that the kl divergence term is now calculated using fθs(xb) and fθt(xa). in addition, there is one variant (or special case) of crossmodal kd, where a multimodal teacher taking input from both modality a and b is adopted for distillation, and L_KD is now a kl divergence term between fθs(xb) and fθt(xa, xb). we first present a case study comparing crossmodal kd with unimodal kd. consider the special case of crossmodal kd where a multimodal teacher is adopted. intuitively, adopting a multimodal teacher, which takes both modality a and b as input, can be beneficial for distillation since: (1) a multimodal network usually enjoys a higher accuracy than its unimodal counterpart (baltrušaitis et al., 2018), and a more accurate teacher ought to result in a better student; (2) the complementary modality-dependent information brought by a multimodal teacher can enrich the student with additional knowledge. this idea motivates many research works (luo et al., 2018; hu et al., 2020; valverde et al., 2021) to replace a unimodal teacher with a multimodal one, in an attempt to improve student performance.
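the objective in eq. (1) can be sketched numerically with plain python (the toy class probabilities below are made-up values; in crossmodal kd, p_teacher would come from fθt(xa) and p_student from fθs(xb)):

```python
import math

def cross_entropy(p_student, label):
    """L_task: cross entropy between the ground-truth label and the student."""
    return -math.log(p_student[label])

def kl_div(p_teacher, p_student):
    """L_KD: KL divergence between teacher and student predictions."""
    return sum(t * math.log(t / s) for t, s in zip(p_teacher, p_student))

def kd_loss(p_student, p_teacher, label, rho):
    """Eq. (1): L = rho * L_task + (1 - rho) * L_KD."""
    return (rho * cross_entropy(p_student, label)
            + (1 - rho) * kl_div(p_teacher, p_student))

# toy 3-class predictions
p_s = [0.7, 0.2, 0.1]   # student prediction
p_t = [0.6, 0.3, 0.1]   # teacher prediction (from the other modality)
loss = kd_loss(p_s, p_t, label=0, rho=0.5)
```

setting ρ = 1 recovers plain supervised training, while ρ = 0 trains the student purely on the teacher’s soft predictions.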
despite the empirical evidence reported in prior works, in this paper, we reflect on this assumption and ask the question: is crossmodal kd always effective? table 1: evaluation of unimodal kd (um-kd) and crossmodal kd (cm-kd) on av-mnist and nyu depth v2. ‘mod.’ is short for modality, ‘miou’ denotes mean intersection over union, and a, i, rgb, d represent audio, grayscale images, rgb images, and depth images, respectively. [table 1 body omitted: for av-mnist, teacher modality i + a (cm-kd) vs. a (um-kd) with an audio student; for nyu depth v2, teacher modality rgb + d vs. rgb with an rgb student; accuracy/miou numbers under no-kd, um-kd, and cm-kd are omitted.] table 1 provides two counterexamples for the above question on av-mnist and nyu depth v2 data. the goal is to improve an audio student using kd on av-mnist and to improve an rgb model on nyu depth v2. from table 1, we can see that a more accurate multimodal network does not serve as a better teacher in these two cases. for av-mnist, while the audio-visual teacher itself has a much higher accuracy than the unimodal teacher (i.e., +7.04%), the resulting student is worse (i.e., -0.37%) instead. similarly, the great increase in teacher performance (i.e., +4.64%) does not translate to student improvement (i.e., -0.22%) for nyu depth v2. these results cast doubt on the efficacy of crossmodal kd.3 even with a great increase in teacher accuracy, crossmodal kd fails to outperform unimodal kd in some cases. contrary to the previous intuition, teacher performance seems not reflective of student performance. inspired by this observation, our work targets the open problem: what is the fundamental factor deciding the efficacy of crossmodal kd? proposed approach the modality venn diagram to study crossmodal kd, it is critical to first establish an understanding of multimodal data. before touching multimodal data, let us fall back and consider unimodal data.
following a causal perspective (schölkopf et al., 2012) (i.e., features cause labels), we assume that the label y is determined by a subset of features in xa (or xb); this subset of features are referred to as decisive features for modality a (or modality b) throughout the paper. for instance, colors of an image help identify some classes (e.g., distinguish between a zebra and a horse) and can be considered as decisive features. when considering multimodal data, input features of the two modalities will have logical relations such as intersection and union. we describe the modality venn diagram (mvd) below to characterize this relationship. stemming from the common perception that multimodal data possess shared information and preserve information specific to each modality, mvd states that any multimodal features are composed of modality-general features and modality-specific features. decisive features of the two modalities are thus composed of two parts: (1) modality-general decisive features and (2) modality-specific decisive features; these two parts of decisive features work together and contribute to the final label y. fig. 1 left shows an example of a video-audio data pair, where the camera only captures one person due to its position angle and the audio is mixed sounds of two instruments. fig. 1 right illustrates how we interpret these three features (i.e., modality-general decisive, visual modality-specific decisive and audio modality-specific decisive) at the input level. figure 1: an input video-audio pair can be regarded as composed of modality-general features and modality-specific features in the visual and audio modality. for instance, the man playing violin on the right is not captured by the camera and hence its sound (marked in red) belongs to audio modality-specific information. next, we propose a formal description of mvd to capture the generating dynamics of multimodal data. 
let X^a, X^b, and Y be the feature space of modality a, the feature space of modality b, and the label space, respectively, and let (x_a, x_b, y) be a pair of data drawn from an unknown distribution p over the space X^a × X^b × Y. mvd assumes that (x_a, x_b, y) is generated by a quadruple (z_sa, z_sb, z_0, y) ∈ Z^sa × Z^sb × Z^0 × Y, following the generation rule:

mvd generation rule: x_a = g_a(z_a), x_b = g_b(z_b), where z_a = [z_sa, z_0]^T ∈ Z^a = Z^sa × Z^0 and z_b = [z_sb, z_0]^T ∈ Z^b = Z^sb × Z^0,

where g_u(·): Z^u ↦ X^u denotes an unknown generating function, adopting the notation u ∈ {a, b}. to complete the mvd, a linear decision rule should also be included. specifically, the following equation:

mvd decision rule: ∃ W^u, arg max[softmax(W^u z_u)] = arg max[W^u z_u] = y,

is assumed to hold for any (z_sa, z_sb, z_0, y), where we slightly abuse arg max[·] to mean the index of the largest element in the argument. in essence, mvd specifies that x_u is generated based on a modality-specific decisive feature vector z_su and a modality-general decisive feature vector z_0 (generation rule), and that z_u is sufficient to linearly determine the label y (decision rule). we proceed to quantify the proportions of modality-general decisive features and modality-specific decisive features. let Z^su ⊆ R^{d_su} and Z^0 ⊆ R^{d_0}, so that Z^u ⊆ R^{d_u}, where d_u = d_su + d_0. we denote a ratio γ = d_0/(d_0 + d_sa + d_sb) ∈ [0, 1], which characterizes the ratio of modality-general decisive features over all decisive features. similarly, α = d_sa/(d_0 + d_sa + d_sb) and β = d_sb/(d_0 + d_sa + d_sb) denote the proportions of modality-specific decisive features for modality a and b over all decisive features, respectively, and we have α + β + γ = 1. (3 note that we even give preferable treatment to crossmodal kd: we take a multimodal network as teacher, and this teacher achieves higher accuracy than a teacher typically used in crossmodal kd.) the modality focusing hypothesis based on mvd, we now revisit our observation in sec.
3 (i.e., teacher accuracy is not a key indicator of student performance) and provide explanations. first, teacher performance is decided by both modality-general decisive and modality-specific decisive features in modality a. in terms of student performance, although modality-specific decisive features in modality a are meaningful for the teacher, they cannot instruct the student, since the student only sees modality b. on the other hand, modality-general decisive features are not specific to modality b and can be transferred to the student. coming back to the example in fig. 1, if an audio teacher provides modality-specific information (i.e., the sound colored in red), the visual student will get confused, as this information (i.e., playing violin) is not available in the visual modality. on the contrary, modality-general information can be well transferred across modalities and facilitates distillation, as the audio teacher and visual student can both perceive the information about the left person playing guitar. this motivates the following modality focusing hypothesis (mfh). the modality focusing hypothesis (mfh). for crossmodal kd, distillation performance is dependent on the proportion of modality-general decisive features preserved in the teacher network: with larger γ, the student network is expected to perform better. the hypothesis states that in crossmodal knowledge transfer, the student learns to “focus on” modality-general decisive features. crossmodal kd is thus beneficial for the case where γ is large (i.e., the multimodal data share much label-relevant information). moreover, it accounts for our observation that teacher performance fails to correlate with student performance in some scenarios: when α is large and γ is small, the teacher network attains high accuracy primarily based on modality-specific information, which is not beneficial for the student’s learning process.
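the mvd generation rule and the ratios α, β, γ can be made concrete with a tiny synthetic sampler. identity generating functions g_a, g_b and gaussian latents are illustrative choices here (in the spirit of the synthetic-gaussian setup discussed next), and `generate_mvd_sample`/`feature_ratios` are hypothetical helper names:

```python
import random

def generate_mvd_sample(d_sa, d_sb, d0):
    """Draw one (x_a, x_b) pair following the MVD generation rule,
    taking g_a and g_b to be identity maps."""
    z_sa = [random.gauss(0, 1) for _ in range(d_sa)]  # a-specific decisive
    z_sb = [random.gauss(0, 1) for _ in range(d_sb)]  # b-specific decisive
    z0 = [random.gauss(0, 1) for _ in range(d0)]      # modality-general decisive
    x_a = z_sa + z0   # x_a = g_a([z_sa, z_0])
    x_b = z_sb + z0   # x_b = g_b([z_sb, z_0])
    return x_a, x_b

def feature_ratios(d_sa, d_sb, d0):
    """alpha, beta, gamma: shares of a-specific, b-specific, and
    modality-general decisive features among all decisive features."""
    total = d0 + d_sa + d_sb
    return d_sa / total, d_sb / total, d0 / total

alpha, beta, gamma = feature_ratios(d_sa=2, d_sb=3, d0=5)
x_a, x_b = generate_mvd_sample(2, 3, 5)
```

varying d0 relative to d_sa and d_sb sweeps γ from 0 (disjoint modalities) toward 1 (fully shared decisive features), which is exactly the axis the mfh reasons about.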
to have an intuitive and quick understanding of our hypothesis, here we present two experiments with synthetic gaussian data. more details can be found in sec. 5.2. as shown in fig. 2, we start from the extreme case where the two modalities do not overlap, and gradually increase the proportion of modality-general decisive features until all decisive features are shared by the two modalities. we observe that crossmodal kd fails to work when xa and xb share few decisive features (i.e., γ is small), since modality-specific decisive features in modality a are not perceived by the student. as γ gradually increases, crossmodal kd becomes more effective. for the case where all decisive features are present in both modalities, the student gains from the teacher’s knowledge and outperforms its baseline by 2.1%. note that the teacher accuracy does not vary much during this process, yet student performance differs greatly. fig. 3 illustrates the reverse process, where modality-specific decisive features in modality a gradually dominate. with increasing α, the teacher gradually improves since it receives more modality-specific decisive features for prediction. however, the student network fails to benefit from the improved teacher and performs slightly worse instead. clearly, teacher performance is not reflective of student performance in this case. these two sets of experiments help demonstrate that teacher accuracy does not faithfully reflect the effectiveness of crossmodal kd and lend support to our proposed hypothesis. figure 2: an illustration of mfh with synthetic gaussian data. the teacher modality is xa and the student modality is xb. we plot the confidence interval of one standard deviation for student accuracy. with increasing γ, crossmodal kd becomes more effective. figure 3: with increasing α (i.e., decreasing γ), the teacher improves its prediction accuracy but the student network fails to benefit from kd. see the caption of fig. 2 for more explanations.
apart from the two intuitive examples, below we provide a theoretical guarantee of mfh in an analytically tractable case, linear binary classification. formally, we consider an infinitesimal learning rate, which turns the training into a continuous gradient flow defined on a time parameter t ∈ [0, +∞) (phuong & lampert, 2019). if n data points are available, collectively denoted as Z^u ∈ R^{d_u × n}, we have the following theorem bounding the training distillation loss in terms of γ.

theorem 1 (crossmodal kd in linear binary classification). without loss of generality, we assume fθt(·): X^a ↦ Y and fθs(·): X^b ↦ Y. suppose max{‖Z^u Z^{u,T}‖, ‖(Z^u Z^{u,T})^{−1}‖} ≤ λ always holds for both u = a and u = b, and the g_u(·) are identity functions. if there exists (ϵ, δ) such that

Pr[‖Z^{a,T} Z^a − Z^{b,T} Z^b‖ ≤ (1 − γ)ϵ] ≥ 1 − δ,

then, with an initialization at t = 0 satisfying R^dis_n(θs(0)) ≤ q, we have, with probability at least 1 − δ:

R^dis_n(θs(t = +∞)) ≤ n · ϵ⋆ / (1 − e^{−ϵ⋆}),

where ϵ⋆ = λ^{1.5}(λ^2 + 1)(1 − γ)ϵ and R^dis_n(θs) is the empirical risk defined by the kl divergence (corresponding to eq. (1) when ρ = 0):

R^dis_n(θs(t)) = Σ_{i=1}^{n} [ −σ(θt^T x^a_i) · ln( σ(θs^T x^b_i) / σ(θt^T x^a_i) ) − (1 − σ(θt^T x^a_i)) · ln( (1 − σ(θs^T x^b_i)) / (1 − σ(θt^T x^a_i)) ) ].

see appendix a for the omitted proof, several important remarks, and future improvements. implications
132.15924,
169.4090784,
193.5828828,
179.3716784
] |
1PL1NIMMrw.pdf | 2,023 | 2 | self-consistency improves chain of thought reasoning in language models xuezhi wang†‡, jason wei†, dale schuurmans†, quoc le†, ed h. chi†, sharan narang†, aakanksha chowdhery†, denny zhou†§ †google research, brain team ‡xuezhiw@google.com, §dennyzhou@google.com abstract chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks. in this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-of-thought prompting. it first samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths. self-consistency leverages the intuition that a complex reasoning problem typically admits multiple different ways of thinking leading to its unique correct answer. our extensive empirical evaluation shows that self-consistency boosts the performance of chain-of-thought prompting with a striking margin on a range of popular arithmetic and commonsense reasoning benchmarks, including gsm8k (+17.9%), svamp (+11.0%), aqua (+12.2%), strategyqa (+6.4%) and arc-challenge (+3.9%). introduction although language models have demonstrated remarkable success across a range of nlp tasks, their ability to demonstrate reasoning is often seen as a limitation, which cannot be overcome solely by increasing model scale (rae et al., 2021; big-bench collaboration, 2021, inter alia). in an effort to address this shortcoming, wei et al. (2022) have proposed chain-of-thought prompting, where a language model is prompted to generate a series of short sentences that mimic the reasoning process a person might employ in solving a task. 
for example, given the question “if there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?”, instead of directly responding with “5”, a language model would be prompted to respond with the entire chain-of-thought: “there are 3 cars in the parking lot already. 2 more arrive. now there are 3 + 2 = 5 cars. the answer is 5.”. it has been observed that chain-of-thought prompting significantly improves model performance across a variety of multi-step reasoning tasks (wei et al., 2022). in this paper, we introduce a novel decoding strategy called self-consistency to replace the greedy decoding strategy used in chain-of-thought prompting (wei et al., 2022), that further improves language models’ reasoning performance by a significant margin. self-consistency leverages the intuition that complex reasoning tasks typically admit multiple reasoning paths that reach a correct answer (stanovich & west, 2000). the more that deliberate thinking and analysis is required for a problem (evans, 2010), the greater the diversity of reasoning paths that can recover the answer. figure 1 illustrates the self-consistency method with an example. we first prompt the language model with chain-of-thought prompting, then instead of greedily decoding the optimal reasoning path, we propose a “sample-and-marginalize” decoding procedure: we first sample from the language model’s decoder to generate a diverse set of reasoning paths; each reasoning path might lead to a different final answer, so we determine the optimal answer by marginalizing out the sampled reasoning paths to find the most consistent answer in the final answer set. such an approach is analogous to the human experience that if multiple different ways of thinking lead to the same answer, one has greater confidence that the final answer is correct. 
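for final-answer tasks, the sample-and-marginalize step reduces to a majority vote over the answers parsed from each sampled path. a minimal sketch (the path strings below are invented toy samples in the spirit of figure 1, and `self_consistency` is a hypothetical helper name):

```python
from collections import Counter

def self_consistency(reasoning_paths):
    """Marginalize out sampled reasoning paths: keep each path's final
    answer and return the most consistent (most frequent) one."""
    answers = [path["answer"] for path in reasoning_paths]
    return Counter(answers).most_common(1)[0][0]

# three sampled chains of thought for the parking-lot question
paths = [
    {"rationale": "there are 3 cars, 2 more arrive, 3 + 2 = 5.", "answer": "5"},
    {"rationale": "2 + 2 = 4 cars in total.",                    "answer": "4"},
    {"rationale": "start with 3, add 2, giving 5.",              "answer": "5"},
]
majority = self_consistency(paths)
```

the faulty second path is outvoted by the two agreeing paths, illustrating why diverse correct rationales converging on one answer increase confidence in it.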
compared to other decoding methods, self-consistency avoids the repetitiveness and local-optimality that plague greedy decoding, while mitigating the stochasticity of a single sampled generation. figure 1: the self-consistency method contains three steps: (1) prompt a language model using chain-of-thought (cot) prompting; (2) replace the “greedy decode” in cot prompting by sampling from the language model’s decoder to generate a diverse set of reasoning paths; and (3) marginalize out the reasoning paths and aggregate by choosing the most consistent answer in the final answer set. self-consistency is far simpler than prior approaches that either train an additional verifier (cobbe et al., 2021) or train a re-ranker given additional human annotations to improve generation quality (thoppilan et al., 2022). instead, self-consistency is entirely unsupervised, works off-the-shelf with pre-trained language models, requires no additional human annotation, and avoids any additional training, auxiliary models or fine-tuning. self-consistency also differs from a typical ensemble approach where multiple models are trained and the outputs from each model are aggregated; it acts more like a “self-ensemble” that works on top of a single language model. we evaluate self-consistency on a wide range of arithmetic and commonsense reasoning tasks over four language models with varying scales: the public ul2-20b (tay et al., 2022) and gpt-3-175b (brown et al., 2020), and two densely-activated decoder-only language models: lamda-137b (thoppilan et al., 2022) and palm-540b (chowdhery et al., 2022). on all four language models, self-consistency improves over chain-of-thought prompting by a striking margin across all tasks.
in particular, when used with palm-540b or gpt-3, self-consistency achieves new state-of-the-art levels of performance across arithmetic reasoning tasks, including gsm8k (cobbe et al., 2021) (+17.9% absolute accuracy gains), svamp (patel et al., 2021) (+11.0%), aqua (ling et al., 2017) (+12.2%), and across commonsense reasoning tasks such as strategyqa (geva et al., 2021) (+6.4%) and arc-challenge (clark et al., 2018) (+3.9%). in additional experiments, we show self-consistency can robustly boost performance on nlp tasks where adding a chain-of-thought might hurt performance compared to standard prompting (ye & durrett, 2022). we also show self-consistency significantly outperforms sample-and-rank, beam search, and ensemble-based approaches, and is robust to sampling strategies and imperfect prompts. self-consistency over diverse reasoning paths a salient aspect of humanity is that people think differently. it is natural to suppose that in tasks requiring deliberate thinking, there are likely several ways to attack the problem. we propose that such a process can be simulated in language models via sampling from the language model’s decoder. for instance, as shown in figure 1, a model can generate several plausible responses to a math question that all arrive at the same correct answer (outputs 1 and 3). since language models are not perfect reasoners, the model might also produce an incorrect reasoning path or make a mistake in one of the reasoning steps (e.g., in output 2), but such solutions are less likely to arrive at the same answer. that is, we hypothesize that correct reasoning processes, even if they are diverse, tend to have greater agreement in their final answer than incorrect processes. we leverage this intuition by proposing the following self-consistency method. first, a language model is prompted with a set of manually written chain-of-thought exemplars (wei et al., 2022).
[table 1: accuracy comparison of different answer aggregation strategies on palm-540b. rows: greedy decode; weighted avg (unnormalized); weighted avg (normalized); weighted sum (unnormalized); weighted sum (normalized); unweighted sum (majority vote). columns: gsm8k, multiarith, aqua, svamp, csqa, arc-c. only one cell value, 74.4 ± 0.1, survives extraction.] next, we sample a set of candidate outputs from the language model’s decoder, generating a diverse set of candidate reasoning paths. self-consistency is compatible with most existing sampling algorithms, including temperature sampling (ackley et al., 1985; ficler & goldberg, 2017), top-k sampling (fan et al., 2018; holtzman et al., 2018; radford et al., 2019), and nucleus sampling (holtzman et al., 2020). finally, we aggregate the answers by marginalizing out the sampled reasoning paths and choosing the answer that is the most consistent among the generated answers. in more detail, assume the generated answers ai are from a fixed answer set, ai ∈ a, where i = 1, . . . , m indexes the m candidate outputs sampled from the decoder. given a prompt and a question, self-consistency introduces an additional latent variable ri, which is a sequence of tokens representing the reasoning path in the i-th output, then couples the generation of (ri, ai) where ri → ai, i.e., generating a reasoning path ri is optional and only used to reach the final answer ai. as an example, consider output 3 from figure 1: the first few sentences “she eats 3 for breakfast ... so she has 9 eggs * $2 = $18.” constitute ri, while the answer 18 from the last sentence, “the answer is $18”, is parsed as ai.1 after sampling multiple (ri, ai) from the model’s decoder, self-consistency applies a marginalization over ri by taking a majority vote over ai, i.e., arg maxa Σi=1,...,m 1(ai = a), or, as we defined, the most “consistent” answer among the final answer set. in table 1, we show the test accuracy over a set of reasoning tasks by using different answer aggregation strategies.
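a plausible sketch of the answer parser just mentioned (footnote 1 below describes it: after “the answer is”, keep the first numerical part for arithmetic tasks, or the full string for commonsense tasks). the function name and regexes are our own illustration, not the paper's code:

```python
# sketch of the answer parser: take the text after "the answer is" and, for
# arithmetic tasks, keep the first numerical part. names and regexes are ours.
import re
from typing import Optional

def parse_final_answer(generation: str, numeric: bool = True) -> Optional[str]:
    m = re.search(r"the answer is\s+(.*)", generation, flags=re.IGNORECASE)
    if m is None:
        return None  # no answer marker found
    tail = m.group(1)
    if numeric:
        # first numerical part, allowing an optional sign, $ prefix, commas, decimals
        num = re.search(r"-?\$?\d[\d,]*(?:\.\d+)?", tail)
        return num.group(0).lstrip("$") if num else None
    return tail.rstrip(". ")
```

on the running example, `parse_final_answer("... the answer is $18.")` yields `"18"`, which is then what the majority vote counts over.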
in addition to majority vote, one can also weight each (ri, ai) by p (ri, ai | prompt, question) when aggregating the answers. note to compute p (ri, ai | prompt, question), we can either take the unnormalized probability of the model generating (ri, ai) given (prompt, question), or we can normalize the conditional probability by the output length (brown et al., 2020), i.e., p (ri, ai | prompt, question) = exp((1/K) Σk=1,...,K log p (tk | prompt, question, t1, . . . , tk−1)), where log p (tk | prompt, question, t1, . . . , tk−1) is the log probability of generating the k-th token tk in (ri, ai) conditioned on the previous tokens, and K is the total number of tokens in (ri, ai). in table 1, we show that taking the “unweighted sum”, i.e., taking a majority vote directly over ai, yields a very similar accuracy as aggregating using the “normalized weighted sum”. we took a closer look at the model’s output probabilities and found this is because for each (ri, ai), the normalized conditional probabilities p (ri, ai | prompt, question) are quite close to each other, i.e., the language model regards those generations as “similarly likely”.2 additionally, when aggregating the answers, the results in table 1 show that the “normalized” weighted sum (i.e., equation 1) yields a much higher accuracy compared to its unnormalized counterpart. for completeness, in table 1 we also report the results by taking a “weighted average”, i.e., each a gets a score of its weighted sum divided by Σi=1,...,m 1(ai = a), which results in a much worse performance. self-consistency explores an interesting space between open-ended text generation and optimal text generation with a fixed answer. reasoning tasks typically have fixed answers, which is why researchers have generally considered greedy decoding approaches (radford et al., 2019; wei et al., 2022; chowdhery et al., 2022).
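the aggregation strategies above can be sketched in a few lines. the sampled outputs below, with per-token log-probabilities, are made-up stand-ins for real decoder samples:

```python
# sketch of the aggregation strategies compared in table 1: unweighted majority
# vote and (optionally length-normalized) weighted sums over sampled answers.
import math
from collections import Counter, defaultdict

samples = [
    ("18", [-0.2, -0.1, -0.3]),   # (final answer a_i, token log-probs of (r_i, a_i))
    ("18", [-0.5, -0.4]),
    ("26", [-0.1, -0.9, -0.2, -0.3]),
]

def majority_vote(samples):
    """unweighted sum: arg max over answers of the raw vote count."""
    return Counter(a for a, _ in samples).most_common(1)[0][0]

def weighted_vote(samples, normalize=True):
    """weighted sum: score each answer by p(r_i, a_i | prompt, question),
    optionally length-normalizing the log-probability as described above."""
    scores = defaultdict(float)
    for answer, logps in samples:
        logp = sum(logps)
        if normalize:
            logp /= len(logps)  # divide by the number of tokens K
        scores[answer] += math.exp(logp)
    return max(scores, key=scores.get)
```

as the text notes, the unweighted vote and the normalized weighted sum usually agree, because the normalized probabilities of individual samples are close to one another.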
however, we have found that even when the desired answer is fixed, introducing diversity in the reasoning processes can be highly beneficial; therefore we leverage sampling, as commonly used for open-ended text generation (radford et al., 2019; brown et al., 2020; thoppilan et al., 2022), to achieve this goal. one should note that self-consistency can be applied only to problems where the final answer is from a fixed answer set, but in principle this approach can be extended to open-text generation problems if a good metric of consistency can be defined between multiple generations, e.g., whether two answers agree or contradict each other. 1the parser is task dependent. for arithmetic reasoning, we parse the first numerical part as the final answer after the model generates “the answer is ”. for commonsense reasoning, we parse the full string answer as the final answer after the model generates “the answer is ”. most generated outputs have a consistent format of “{reasoning paths}. the answer is x.” if we prompt the language model in this format. 2this also means that the language model is not well calibrated and thus cannot distinguish well between correct solutions and wrong solutions, which also explains why additional re-rankers were trained to better judge the quality of the solutions in previous work (cobbe et al., 2021; thoppilan et al., 2022). experiments we conducted a series of experiments to compare the proposed self-consistency method with existing approaches on a range of reasoning benchmarks. we find that self-consistency robustly improves reasoning accuracy for every language model considered, spanning a wide range of model scales. experiment setup tasks and datasets. we evaluate self-consistency on the following reasoning benchmarks.3 • arithmetic reasoning. for these tasks, we used the math word problem repository (koncel-kedziorski et al., 2016), including addsub (hosseini et al., 2014), multiarith (roy & roth, 2015), and asdiv (miao et al., 2020).
we also included aqua-rat (ling et al., 2017), a recently published benchmark of grade-school-math problems (gsm8k; cobbe et al., 2021), and a challenge dataset over math word problems (svamp; patel et al., 2021). • commonsense reasoning. for these tasks, we used commonsenseqa (talmor et al., 2019), strategyqa (geva et al., 2021), and the ai2 reasoning challenge (arc) (clark et al., 2018). • symbolic reasoning. we evaluate two symbolic reasoning tasks: last letter concatenation (e.g., the input is “elon musk” and the output should be “nk”), and coinflip (e.g., a coin is heads-up, after a few flips is the coin still heads-up?) from wei et al. (2022). language models and prompts. we evaluate self-consistency over four transformer-based language models with varying scales: • ul2 (tay et al., 2022) is an encoder-decoder model trained on a mixture of denoisers with 20-billion parameters. ul2 is completely open-sourced4 and has similar or better performance than gpt-3 on zero-shot superglue, with only 20b parameters and thus is more compute-friendly; • gpt-3 (brown et al., 2020) with 175-billion parameters. we use two public engines code-davinci-001 and code-davinci-002 from the codex series (chen et al., 2021) to aid reproducibility;5 • lamda-137b (thoppilan et al., 2022) is a dense left-to-right, decoder-only language model with 137-billion parameters, pre-trained on a mixture of web documents, dialog data and wikipedia; • palm-540b (chowdhery et al., 2022) is a dense left-to-right, decoder-only language model with 540-billion parameters, pre-trained on a high quality corpus of 780 billion tokens with filtered webpages, books, wikipedia, news articles, source code, and social media conversations. we perform all experiments in the few-shot setting, without training or fine-tuning the language models. for a fair comparison we use the same prompts as in wei et al.
(2022): for all arithmetic reasoning tasks we use the same set of 8 manually written exemplars; for each commonsense reasoning task, 4-7 exemplars are randomly chosen from the training set with manually composed chain-of-thought prompts.6 full details on the prompts used are given in appendix a.3. sampling scheme. to sample diverse reasoning paths, we followed similar settings to those suggested in radford et al. (2019); holtzman et al. (2020) for open-text generation. in particular, for ul2-20b and lamda-137b we applied temperature sampling with t = 0.5 and truncated at the top-k (k = 40) tokens with the highest probability, for palm-540b we applied t = 0.7, k = 40, and for gpt-3 we use t = 0.7 without top-k truncation. we provide an ablation study in section 3.5 to show that self-consistency is generally robust to sampling strategies and parameters. 3by default we use the test split for all datasets if the labels are available for evaluation. for commonsenseqa we use the dev split; for strategyqa we use the question-only set from big-bench collaboration (2021): https://github.com/google/big-bench/tree/main/bigbench/benchmark_tasks/strategyqa. 4model checkpoints at https://github.com/google-research/google-research/tree/master/ul2. 5public api available at https://openai.com/api/. 6self-consistency is robust to different sets of prompts and we provide a study in appendix a.1.2. main results | 4 | [
108.249, 698.0240784, 197.7368983, 707.9866784 ] |
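the sampling scheme described above (temperature sampling with optional top-k truncation) can be sketched as follows; the toy logits, seeding, and renormalization details are illustrative assumptions, not the papers' code:

```python
# sketch of temperature sampling with top-k truncation: keep the k largest
# logits, rescale by temperature, renormalize, then draw a token index.
import math
import random

def sample_top_k(logits, temperature=0.7, k=40, rng=None):
    rng = rng or random.Random()
    # indices of the k most likely tokens
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    scaled = [logits[i] / temperature for i in order]
    mx = max(scaled)
    probs = [math.exp(s - mx) for s in scaled]  # shift for numerical stability
    r = rng.random() * sum(probs)
    acc = 0.0
    for idx, p in zip(order, probs):
        acc += p
        if r <= acc:
            return idx
    return order[-1]
```

lowering the temperature sharpens the distribution toward the greedy choice; raising it (as self-consistency does, e.g. t = 0.5–0.7) spreads mass over more reasoning paths.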
kJqXEPXMsE0.pdf | 2,023 | 2 | 3d equivariant diffusion for target-aware molecule generation and affinity prediction jiaqi guan1∗, wesley wei qian1∗, xingang peng2, yufeng su1, jian peng1, jianzhu ma3 1 department of computer science, university of illinois urbana-champaign 2 school of intelligence science and technology, peking university 3 institute for ai industry research, tsinghua university {jiaqi, weiqian3, jianpeng}@illinois.edu, majianzhu@air.tsinghua.edu.cn abstract rich data and powerful machine learning models allow us to design drugs for a specific protein target in silico. recently, the inclusion of 3d structures during targeted drug design shows superior performance to other target-free models as the atomic interaction in the 3d space is explicitly modeled. however, current 3d target-aware models either rely on the voxelized atom densities or the autoregressive sampling process, which are not equivariant to rotation or easily violate geometric constraints resulting in unrealistic structures. in this work, we develop a 3d equivariant diffusion model to solve the above challenges. to achieve target-aware molecule design, our method learns a joint generative process of both continuous atom coordinates and categorical atom types with a se(3)-equivariant network. moreover, we show that our model can serve as an unsupervised feature extractor to estimate the binding affinity under proper parameterization, which provides an effective way for drug screening. to evaluate our model, we propose a comprehensive framework to evaluate the quality of sampled molecules from different dimensions. empirical studies show our model could generate molecules with more realistic 3d structures and better affinities towards the protein targets, and improve binding affinity ranking and prediction without retraining. 
introduction rational drug design against a known protein binding pocket is an efficient and economical approach for finding lead molecules (anderson, 2003; batool et al., 2019) and has attracted growing attention from the research community. however, it remains challenging and computationally intensive due to the large synthetically feasible space (ragoza et al., 2022), and high degrees of freedom for binding poses (hawkins, 2017). previously prevalent molecular generative models are based on either molecular string representation (bjerrum and threlfall, 2017; kusner et al., 2017; segler et al., 2018) or graph representation (li et al., 2018; liu et al., 2018; jin et al., 2018; shi et al., 2020), but both representations do not take the 3d spatial interaction into account and are therefore not well suited for target-aware molecule generation. with recent developments in structural biology and protein structure prediction (jumper et al., 2021), more structural data have become available (francoeur et al., 2020), unlocking new opportunities for machine learning algorithms to directly design drugs inside a 3d binding complex (gebauer et al., 2019; simm et al., 2020a;b). recently, a new generation of generative models has been proposed specifically for the target-aware molecule generation task (luo et al., 2021; ragoza et al., 2022; tan et al., 2022; liu et al., 2022; peng et al., 2022). however, existing approaches suffer from several drawbacks. for instance, tan et al. (2022) does not explicitly model the interactions between atoms of molecules and proteins in the 3d space, but only considers the target as intermediate conditional embeddings. for those that do consider the atom interactions in the 3d space, ragoza et al. (2022) represents the 3d space as voxelized grids and models the proteins and molecules using 3d convolutional neural networks (cnn). however, this model is not rotationally equivariant and cannot fully capture the 3d inductive biases. ∗equal contribution
in addition, the voxelization operation will lead to poor scalability since the number of voxels increases at a cubic rate to the pocket size. advanced approaches achieve se(3)-equivariance through different modeling techniques (luo et al., 2021; liu et al., 2022; peng et al., 2022). however, these methods adopt autoregressive sampling, where atoms are generated one by one based on the learned probability density of atom types and atom coordinates. these approaches suffer from several limitations: first, the mismatch between training and sampling incurs exposure bias. second, the model assigns an unnatural generation order during sampling and cannot consider the probability of the entire 3d structure. for instance, it would be easy for the model to correctly place the n-th atom to form a benzene ring if the first n − 1 carbon atoms have already been placed in the same plane. however, it would be difficult for the model to place the first several atoms accurately since there is limited context information available, which yields unrealistic fragments as a consequence. moreover, the sampling scheme does not scale well when generating large binding molecules is necessary. finally, current autoregressive models cannot estimate the quality of generated molecules. one has to rely on other tools based on physical-chemical energy functions such as autodock (trott and olson, 2010) to select the drug candidates. to address these problems, we propose targetdiff, a 3d full-atom diffusion model that generates target-aware molecules in a non-autoregressive fashion. thanks to recent progress in probabilistic diffusion models (ho et al., 2020; hoogeboom et al., 2021) and equivariant neural networks (fuchs et al., 2020; satorras et al., 2021b), our proposed model can generate molecules in continuous 3d space based on the context provided by protein atoms, and has an invariant likelihood w.r.t. global translation and rotation of the binding complex.
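a toy sketch of two ingredients behind this translation-invariant likelihood: a center-shifting step that removes global translation, and a standard ddpm-style forward noising step on coordinates. which atoms define the center, the function names, and the schedule value are our illustrative assumptions, not taken from the paper:

```python
# toy sketch: center shifting (removes global translation) plus a standard
# ddpm forward noising step on continuous coordinates. names and the choice of
# centering atoms are illustrative assumptions.
import math
import random

def center_complex(protein_xyz, ligand_xyz):
    """shift all coordinates so the protein atoms' center of mass sits at the origin."""
    n = len(protein_xyz)
    com = [sum(p[d] for p in protein_xyz) / n for d in range(3)]
    shift = lambda pts: [[p[d] - com[d] for d in range(3)] for p in pts]
    return shift(protein_xyz), shift(ligand_xyz)

def forward_noise(x0, alpha_bar_t, rng):
    """standard ddpm forward step on coordinates:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, 1)."""
    return [math.sqrt(alpha_bar_t) * v + math.sqrt(1.0 - alpha_bar_t) * rng.gauss(0.0, 1.0)
            for v in x0]
```

because every configuration is centered before its likelihood is evaluated, translating the whole complex leaves the model's input, and hence the likelihood, unchanged.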
specifically, we represent the protein binding pockets and small molecules as atom point sets in the 3d space where each atom is associated with a 3d cartesian coordinate. we define a diffusion process for both continuous atom coordinates and discrete atom types where noise is gradually added, and learn the joint generative process with a se(3)-equivariant graph neural network which alternately updates the atom hidden embedding and atom coordinates of molecules. under certain parameterization, we can extract representative features from the model by forward passing the input molecules once without retraining. we find these features provide strong signals to estimate the binding affinity between the sampled molecule and target protein, which can then be used for ranking drug candidates and improving other supervised learning frameworks for binding affinity prediction. an empirical study on the crossdocked2020 dataset (francoeur et al., 2020) shows that targetdiff generates molecules with more realistic 3d structures and better binding energies towards the protein binding sites compared to the baselines. our main contributions can be summarized as follows: • an end-to-end framework for generating molecules conditioned on a protein target, which explicitly considers the physical interaction between proteins and molecules in 3d space. • so far as we know, this is the first probabilistic diffusion formulation for target-aware drug design, where training and sampling procedures are aligned in a non-autoregressive as well as se(3)-equivariant fashion thanks to a shifting center operation and equivariant gnn. • several new evaluation metrics and additional insights that allow us to evaluate the model generated molecules in many different dimensions. the empirical results demonstrate the superiority of our model over two other representative baselines. 
• an effective way to evaluate the quality of generated molecules based on our framework, where the model can serve as either a scoring function to help with ranking or an unsupervised feature extractor to improve binding affinity prediction. related work molecule generation with different representations based on different levels of representations, existing molecular generative models can be roughly divided into three categories: string-based, graph-based, and 3d-structure-based. the most common molecular string representation is smiles (weininger, 1988), where many existing language models such as rnn can be re-purposed for the molecule generation task (bjerrum and threlfall, 2017; gómez-bombarelli et al., 2018; kusner et al., 2017; segler et al., 2018). however, smiles representation is not an optimal choice since it fails to capture molecular similarities and suffers from the validity issue during the generation phase (jin et al., 2018). thus, many graph-based methods are proposed to operate directly on graphs (liu et al., 2018; shi et al., 2020; jin et al., 2018; 2020; you et al., 2018; zhou et al., 2019). on the other hand, these methods are very limited in modeling the spatial information of molecules that is crucial for determining molecular properties and functions. therefore, recent work (gebauer et al., 2019; skalic et al., 2019a; ragoza et al., 2020; simm et al., 2020a;b) focuses on generating molecules in 3d space. more recently, flow-based and diffusion-based generative models (satorras et al., 2021a; hoogeboom et al., 2022) are developed to leverage e(n)-equivariant gnn (satorras et al., 2021b) and achieve se(3)-equivariance in molecule generation. target-aware molecule generation as more structural data become available, various generative models are proposed to solve the target-aware molecule generation task. for example, skalic et al. (2019b); xu et al. (2021) generate smiles based on protein contexts. tan et al.
(2022) propose a flow-based model to generate molecular graphs conditional on a protein target as a sequence embedding. ragoza et al. (2022) try to generate 3d molecules by voxelizing molecules in atomic density grids in a conditional vae framework. li et al. (2021) leverage monte-carlo tree search and a policy network to optimize molecules in 3d space. luo et al. (2021); liu et al. (2022); peng et al. (2022) develop autoregressive models to generate molecules atom by atom in 3d space with gnns. despite the progress made in this direction, the models still suffer from several issues, including separately encoding the small molecules and protein pockets (skalic et al., 2019b; xu et al., 2021; tan et al., 2022; ragoza et al., 2022), relying on voxelization and non-equivariant networks (skalic et al., 2019b; xu et al., 2021; ragoza et al., 2022), and autoregressive sampling (luo et al., 2021; liu et al., 2022; peng et al., 2022). different from all these models, our equivariant model explicitly considers the interaction between proteins and molecules in 3d and can perform non-autoregressive sampling, which better aligns the training and sampling procedures. diffusion models diffusion models (sohl-dickstein et al., 2015) are a new family of latent variable generative models. ho et al. (2020) propose denoising diffusion probabilistic models (ddpm) which establish a connection between diffusion models and denoising score-based models (song and ermon, 2019). the diffusion models have shown remarkable success in generating image data (ho et al., 2020; nichol and dhariwal, 2021) and discrete data such as text (hoogeboom et al., 2021; austin et al., 2021). recently, they have also been applied in the domain of molecules. for example, geodiff (xu et al., 2022) generates molecular conformations given 2d molecular graphs. edm (hoogeboom et al., 2022) generates 3d molecules.
however, their unawareness of potential targets makes them hard for biologists to utilize in real scenarios. methods problem definition | 2 | [
108.249, 288.7800784, 227.0553174, 298.7426784 ] |
IxmWsm4xrua.pdf | 2,023 | 2 | toeplitz neural network for sequence modeling zhen qin2, xiaodong han2, weixuan sun3, bowen he2, dong li1, yuchao dai4, lingpeng kong5, dongxu li3, yiran zhong1∗ 1shanghai ai laboratory 2sensetime research 3australian national university 4northwestern polytechnical university 5the university of hong kong abstract sequence modeling has important applications in natural language processing and computer vision. recently, the transformer-based models have shown strong performance on various sequence modeling tasks, which rely on attention to capture pairwise token relations, and position embedding to inject positional information. while showing good performance, the transformer models are inefficient to scale to long input sequences, mainly due to the quadratic space-time complexity of attention. to overcome this inefficiency, we propose to model sequences with a relative position encoded toeplitz matrix and use a toeplitz matrix-vector product trick to reduce the space-time complexity of the sequence modeling to log linear. a lightweight sub-network called relative position encoder is proposed to generate relative position coefficients with a fixed budget of parameters, enabling the proposed toeplitz neural network to deal with varying sequence lengths. in addition, despite being trained on 512-token sequences, our model can extrapolate to input sequence lengths of up to 14k tokens in inference with consistent performance. extensive experiments on autoregressive and bidirectional language modeling, image modeling, and the challenging long-range arena benchmark show that our method achieves better performance than its competitors in most downstream tasks while being significantly faster. the code is available at https://github.com/opennlplab/tnn. introduction figure 1: the left figure shows the training speed (x-axis), performances (y-axis), and gpu memory footprints (circle sizes) of the tnn and competing methods on long-range arena benchmark.
the tnn beats the competitors with a clear margin. the right figure plots the extrapolation results with different sequence lengths, where the x-axis denotes sequence lengths, and the y-axis denotes log ppl. it demonstrates that regardless of the sequence length, the ppl of the tnn remains constant. ∗indicates the corresponding author. email: zhongyiran@gmail.com sequence modeling is a fundamental problem in natural language processing, speech processing, and computer vision. various sequence modeling methods have been proposed in the literature, including recurrent (hochreiter & schmidhuber, 1997), convolutional architectures (lecun et al., 1989), and transformers (vaswani et al., 2017). these models utilize various properties of sequential data for their modeling. for example, recurrent models (hochreiter & schmidhuber, 1997) mimic the sequential property by sequentially processing the input while maintaining hidden states through steps. convolutional models (lecun et al., 1989) enforce the locality bias sequentially and only interact elements within local patches. transformers use attention matrices to model pairwise relations regardless of the distance between them. recently, transformers (vaswani et al., 2017; dosovitskiy et al., 2021) show strong performance on a wide range of applications across domains and become arguably one of the most successful architectures for sequence modeling in general. there are two main components in transformers: the attention mechanism that learns pairwise correlations of tokens from data, and the position embedding to introduce positional inductive biases. the vanilla attention mechanism requires quadratic space-time complexity, which precludes transformers from handling long sequences. numerous attention variants have been proposed recently to reduce the complexity, including linear transformers (katharopoulos et al., 2020), and performer (choromanski et al., 2021). 
although the types of attention vary, the position embedding remains in every method, which indicates the importance of position information in sequence modeling. this motivates us to ask the following question: since position information is important, can we design a model that relies entirely on the position information of its elements regardless of their content, thus alleviating the quadratic computation cost of the vanilla attention mechanism? in this paper, we give an affirmative answer to this question by introducing toeplitz neural network, a new efficient architecture that solely exploits relative position relations for sequence modeling. specifically, instead of attention matrices, the toeplitz neural network uses toeplitz matrices to capture relations between each token pair. there are two motivations for selecting the toeplitz matrix. one is that it compactly represents relative positional relations between tokens with far fewer parameters, i.e., 2n − 1 parameters for an n × n toeplitz matrix. the other is that the toeplitz matrix-vector product can be efficiently processed in o(n log n) complexity, which is exactly what we used in our token mixing operation. in this way, we avoid computing content similarities between tokens and effectively reduce the quadratic computation complexity of transformers to log linear, rendering a more efficient sequence modeling architecture. we further propose relative position encoder, a lightweight module that generates relative position parameters to assemble the toeplitz matrices, so that the number of the tnn’s parameters will no longer depend on the sequence length. moreover, it allows tnn to deal with varying sequence lengths without retraining. in addition, the input sequence length extrapolation becomes an important ability in sequence modeling as training on longer sequences can be prohibitively expensive (press et al., 2022). we propose an exponential decay bias that directly applies to the toeplitz matrix.
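one plausible form of such a decay bias is to damp each relative-position coefficient by a factor that shrinks exponentially with the offset; the exact functional form and the value of lam below are our assumptions for illustration, not the paper's definition:

```python
def decayed_coefficients(t_coeffs, lam=0.99):
    """apply an exponential decay lam**|i-j| to relative-position coefficients;
    t_coeffs maps each offset (i - j) to its coefficient. the exact form and the
    value of lam are illustrative assumptions, not taken from the paper."""
    return {off: (lam ** abs(off)) * c for off, c in t_coeffs.items()}
```

far-away offsets are damped toward zero, which is what keeps offsets never seen during training (beyond 512 in the paper's setting) from contributing unboundedly at inference time.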
our model achieves consistent performance up to a sequence length of 14k tokens in inference when trained on sequences of 512 tokens. we also show analytically that the toeplitz neural network represents a general form of sequence modeling methods, and derives transformers, cnns, and the recently proposed state-space-based methods (gu et al., 2022) as its special forms. we validate our model on a wide range of sequence modeling tasks and benchmarks. these include auto-regressive language modeling, text classification, image classification, and the long-range arena benchmark. as illustrated in fig. 1, our model achieves state-of-the-art performance on most tasks at a favorable log linear space-time complexity. it also demonstrates superior extrapolation capabilities when training on shorter sequences and evaluating on longer ones off-the-shelf. preliminary in this section, we introduce concepts used throughout the paper, including positional embedding, token and channel mixing, and the toeplitz matrix. notations used can be found in appendix a. positional embedding is introduced in transformers (vaswani et al., 2017) to inject positional inductive bias. it often uses fixed or learned parameters to encode position-specific information, thus making the model position-aware. there are mainly two types of positional embeddings: the absolute positional embedding (vaswani et al., 2017) and the relative position embedding (shaw et al., 2018). in this work, we focus on the relative position embedding to emphasize pair-wise token
token and channel mixing are terms used by yu et al. (2022) to refer to the two main procedures in sequence modeling: token mixing refers to the process of mixing information between token pairs, and channel mixing to that between feature channels. in transformers, given the attention matrix $A \in \mathbb{R}^{n\times n}$ and token matrix $X \in \mathbb{R}^{n\times d}$, the attention operation $AX$ can be regarded as a token mixing process, and the ffn module is used for channel mixing. researchers often classify sequence modeling techniques by the token mixing technique used. mlp-based methods (liu et al., 2021; tolstikhin et al., 2021) use matrix multiplication on the sequence dimension for token mixing. fft-based methods (lee-thorp et al., 2022) utilize the fft on the sequence dimension to mix token-wise information. state-space-based methods (gu et al., 2022) leverage state equations and hidden states to model sequences and perform interactions between tokens. a toeplitz matrix is a special matrix that has constant values along each diagonal running from top-left to bottom-right, i.e., $T_{ij} = T_{i+1,j+1} = t_{i-j}$, $T \in \mathbb{R}^{n\times n}$. a toeplitz matrix has two nice properties: 1) an n × n toeplitz matrix can be described with only 2n − 1 parameters; 2) the toeplitz matrix-vector product is faster than the standard matrix-vector product. in particular, we have: theorem 2.1. for a toeplitz matrix $T \in \mathbb{R}^{n\times n}$ and any vector $x \in \mathbb{R}^n$, the time complexity of computing $Tx$ is $O(n \log n)$. we provide a detailed proof in appendix b. this property enables us to use toeplitz matrices to perform efficient token mixing.
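theorem 2.1 is constructive: embedding the n × n toeplitz matrix in a 2n × 2n circulant matrix turns the product into a circular convolution, which the fft computes in o(n log n). a minimal numpy sketch (the function name and the first-column/first-row parameterization are our own, not the paper's code):

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """compute T @ x in o(n log n) for the n x n toeplitz matrix T with first
    column c and first row r (c[0] == r[0]). T is embedded in a 2n x 2n
    circulant matrix, whose matvec is a circular convolution via the fft."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[1:][::-1]])   # circulant's first column
    x_pad = np.concatenate([x, np.zeros(n)])        # zero-pad x to length 2n
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x_pad)).real
    return y[:n]                                    # keep the first n entries
```

the dense product costs o(n²) per vector; the fft route is what gives tnn its o(nd log n) token mixing cost when applied independently over d channels.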
the overall architecture. our model consists of a stack of gated toeplitz units (gtu) and glu (shazeer, 2020) layers. gtu is a modified glu layer injected with the proposed toeplitz neural operator (tno), as illustrated in fig. 2. (figure 2: network structure overview of the proposed toeplitz neural network. the proposed sequence modeling block is composed of a gated toeplitz unit and a glu (shazeer, 2020). we propose the tno to perform token mixing with only relative position information. we use a small fully-connected network named rpe to encode relative position information.) a tno is used to perform token mixing with a toeplitz matrix. to generate relative position coefficients for the toeplitz matrix, we propose a relative position encoder (rpe), a lightweight fully-connected sub-network that encodes the relative position information. an exponential decay bias is also added to the toeplitz matrix to enable extrapolation on longer inputs. toeplitz neural operator. here, we show how to use a toeplitz matrix to represent relative positional information. let i, j be two positions in a 1d sequence. using the relative position embedding in eq. 1, we can define a toeplitz matrix $T \in \mathbb{R}^{n\times n}$ with $T_{ij} = t_{i-j}$. specifically, given a sequence x of n tokens, $x = [x_0, x_1, \dots, x_{n-1}]^\top \in \mathbb{R}^n$, we use a scalar $t_{i-j}$ to represent the relative position coefficient between $x_i$ and $x_j$. a toeplitz matrix $T = (t_{i-j})_{i,j=0}^{n-1} \in \mathbb{R}^{n\times n}$ can then be formed by gathering $t_{i-j}$ for every token pair. let us define a token mixing operation as: $y = Tx \in \mathbb{R}^n$, (4) where y is the token mixing result. for d-dimensional sequences, token mixing is performed on each dimension individually. as stated in theorem 2.1, the computation complexity of eq. 4 is o(n log n). since token mixing is performed over d dimensions, our tno has a computation complexity of o(nd log n). the following question is how to calculate the relative position coefficients in T.
a naive solution is to make the coefficients learnable parameters, so that the model can directly learn them from training data. however, this solution has some drawbacks: 1) parameter explosion: for a d-dimensional sequence of n tokens, there are (2n − 1)d learnable parameters in total, which can be prohibitively many as n increases; it also shows unsatisfactory performance in our ablation studies in sec. 4.3. 2) fixed input sequence length: since the sequence length n is fixed in training, we are unable to adjust the sequence length during inference, i.e., it causes a severe performance drop when the sequence length changes. to address these drawbacks, we propose a relative position encoder to generate the relative position coefficients. relative position encoder. we illustrate the network structure of our rpe in fig. 2; it is a fully connected network with k layers. the input of the network is a 1-dimensional scalar, i.e., one of the values −(n − 1), . . . , (n − 1), ∀n ∈ n+, and the output is a d-dimensional vector used to assemble the toeplitz matrix. in this case, the number of tnn's parameters no longer depends on the input sequence length, and tnn has the flexibility to deal with various sequence lengths at inference. note that recent literature (mildenhall et al., 2021) claims that projecting the scalar input to a higher-dimensional space with high-frequency functions, i.e., sin and cos functions, before passing it to a network can lead to better performance. however, in our ablations, we find that using the raw integer achieves better performance. exponential decay bias. previous models (vaswani et al., 2017; qin et al., 2022) often use a fixed sequence length in both training and inference. to infer on a longer sequence, the model needs to be retrained at the longer sequence length to maintain performance, which can be prohibitively expensive in applications.
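the relative position encoder described above can be sketched as a toy numpy network. this is a hypothetical 2-layer mlp; the sizes, initialization, and relu are illustrative rather than the paper's exact architecture. it makes concrete why the parameter count is independent of n and why longer sequences need no retraining:

```python
import numpy as np

rng = np.random.default_rng(0)
d, hidden = 4, 16

# hypothetical 2-layer rpe: scalar relative position -> d coefficients
W1 = rng.normal(size=(1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, d)); b2 = np.zeros(d)

def rpe(offsets):
    """offsets: (m,) integer relative positions -(n-1)..(n-1).
    returns (m, d) relative position coefficients (one per channel)."""
    h = np.maximum(offsets[:, None].astype(float) @ W1 + b1, 0.0)  # relu
    return h @ W2 + b2

n = 8
coeffs = rpe(np.arange(-(n - 1), n))   # (2n-1, d); parameter count is O(1) in n
# assemble the n x n toeplitz matrix for channel 0 from the coefficients
idx = (np.arange(n)[:, None] - np.arange(n)[None, :]) + (n - 1)
T = coeffs[:, 0][idx]
```

the same fixed weights (W1, b1, W2, b2) produce coefficients for any offset range, so the encoder can be queried for a longer sequence at inference time without retraining.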
alibi (press et al., 2022) shows that by applying a simple penalty to the query-key attention scores, the transformer can handle longer sequence lengths in inference without compromising performance. the penalty is a linear bias proportional to the distance between tokens. inspired by this technique, we propose an exponential decay bias that applies directly to the toeplitz matrix to achieve the same goal. specifically, let us define a decay rate λ ∈ [0, 1]; the new relative position coefficients $\bar{t}_{i-j}$ in T can be expressed as: $\bar{t}_{i-j} = \lambda^{|i-j|}\, t_{i-j}$. alibi can be seen as a special case of our method. given the equation of alibi, $\bar{s}_{ij} = q_i^\top k_j/\sqrt{d} + m|i-j|$, and letting $s_{ij} = q_i^\top k_j/\sqrt{d}$ and $\lambda \triangleq \exp(m)$, we have: $\exp(\bar{s}_{ij}) = \exp(q_i^\top k_j/\sqrt{d})\,\exp(m|i-j|) = \exp(s_{ij})\,\lambda^{|i-j|}$. (8) this means alibi applies an exponential decay on the softmax attention matrices, whereas ours applies it on the toeplitz matrices. relation to other sequence modeling models. in this section, we show the relationship between our model and other sequence modeling models, such as transformers (vaswani et al., 2017), cnns (lecun et al., 1989), and the state space (gu et al., 2022). we also compare the theoretical space-time complexity of our model with previous sequence modeling models in table 1. transformers. a transformer with relative position embedding can be expressed as: $O = \mathrm{softmax}(QK^\top/\sqrt{d} + T)\,V$. comparing this with eq. 4, tnn can be regarded as an attention-free transformer, i.e., removing Q, K, and the softmax while keeping only the relative position matrix T. cnns. a convolutional layer can be viewed as a toeplitz matrix with a special structure. consider a 1d convolution: $y = h * x$, $y_i = \sum_j h_{i-j} x_j$, with $h \in \mathbb{R}^m$, $x \in \mathbb{R}^n$, $y \in \mathbb{R}^{n+m-1}$. define a toeplitz matrix $T \in \mathbb{R}^{(n+m-1)\times(n+m-1)}$ by $T_{st} = h_{s-t}$ if $0 \le s-t \le m-1$ and $0 \le t \le n-1$, and $T_{st} = 0$ otherwise, and let $z = [x^\top, 0_{m-1}^\top]^\top \in \mathbb{R}^{n+m-1}$. then: $y = Tz \in \mathbb{R}^{n+m-1}$. (12) therefore, a 1d cnn can be viewed as a special case of tnn with a zero-padded input.
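the cnn-as-toeplitz claim above can be checked numerically. a small sketch assuming random inputs and kernels (all names are ours); it compares a full 1d convolution against the toeplitz matvec with a zero-padded input:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 3
x = rng.normal(size=n)   # input sequence
h = rng.normal(size=m)   # convolution kernel

# full 1d convolution: y_s = sum_t h_{s-t} x_t, length n + m - 1
y_conv = np.convolve(h, x)

# the same result as a toeplitz matvec with a zero-padded input z
N = n + m - 1
z = np.concatenate([x, np.zeros(m - 1)])
T = np.zeros((N, N))
for s in range(N):
    for t in range(N):
        if 0 <= s - t <= m - 1 and t <= n - 1:
            T[s, t] = h[s - t]
y_toep = T @ z
```

only the first m entries of T's first column are nonzero, so the convolution is the special case of tnn in which almost all relative position coefficients are fixed to zero.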
for better illustration, we provide a matrix form of the cnn operation in appendix c.1. state space. the equation of the state space can be expressed as: $u_i = A u_{i-1} + B x_i$, $y_i = C u_i$, with $A \in \mathbb{R}^{h\times h}$, $B \in \mathbb{R}^{h\times 1}$, $C \in \mathbb{R}^{1\times h}$, $i = 1, \dots, n$, where $x_i$ is the input, $y_i$ is the output, and $u_i$ is the intermediate state. according to (gu et al., 2022), the output of the state space is: $y_i = \sum_{j \le i} k_{i-j} x_j$, where $k = (CB, CAB, \dots, CA^{n-1}B)^\top \in \mathbb{R}^n$. let us define the toeplitz matrix $T \in \mathbb{R}^{n\times n}$ with $t_{i-j} = k_{i-j}$ for $i \ge j$ and $t_{i-j} = 0$ for $i < j$. then: $y = Tx$, $x \in \mathbb{R}^n$, $y \in \mathbb{R}^n$. (16) in this case, the state space can be regarded as a special form of tnn whose coefficients are calculated by the state equations. we also provide the matrix form in appendix c.2 for better illustration.

table 1: comparison of theoretical space-time complexity of several models. parallel indicates whether parallel training is possible, n the sequence length, d the feature dimension, and e the cnn kernel size; only the 1d cnn is listed.

method            | time complexity | space complexity | parallel
cnn               | ned             | nd               | true
rnn               | nd              | nd               | false
vanilla attention | n²d             | n²               | true
linear attention  | nd²             | nd               | true
mlp               | n²d             | nd               | true
fft               | nd log n        | nd               | true
state space       | nd log n        | nd               | true
tnn               | nd log n        | nd               | true

4 experiment. we compare our method to four kinds of sequence modeling methods: attention-based, mlp-based, fft-based, and state-space-based methods. in particular, we select the following methods: • attention-based: vanilla transformer (vaswani et al., 2017), transformer-ls (zhu et al., 2021), flash (hua et al., 2022), 1+elu (katharopoulos et al., 2020), performer (choromanski et al., 2020), cosformer (qin et al., 2022). • mlp-based: gmlp (liu et al., 2021), synthesizer (random), synthesizer (dense) (tay et al., 2021). • fft-based: fnet (lee-thorp et al., 2022), gfnet (rao et al., 2021), afno (guibas et al., 2022). • state-space-based: s4 (gu et al., 2022), dss (gupta et al., 2022), gss (mehta et al., 2022).
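the state-space-as-toeplitz reduction described earlier (eq. 16, with kernel k = (CB, CAB, . . . , CA^{n−1}B)) can also be verified numerically. a small sketch with random A, B, C (all names and sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
h, n = 3, 6
A = 0.3 * rng.normal(size=(h, h))   # small A keeps the recursion stable
B = rng.normal(size=(h, 1))
C = rng.normal(size=(1, h))
x = rng.normal(size=n)

# unroll the recursion u_i = A u_{i-1} + B x_i, y_i = C u_i  (u_0 = 0)
u = np.zeros((h, 1))
y_rec = np.zeros(n)
for i in range(n):
    u = A @ u + B * x[i]
    y_rec[i] = (C @ u).item()

# identical output from a causal toeplitz matvec with kernel k_m = C A^m B
k = np.array([(C @ np.linalg.matrix_power(A, m) @ B).item() for m in range(n)])
T = np.array([[k[i - j] if i >= j else 0.0 for j in range(n)] for i in range(n)])
y_toep = T @ x
```

the lower-triangular (causal) T makes explicit that the state space is a tnn whose coefficients are tied together through the powers of A rather than learned freely.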
we evaluate our method on wikitext-103 (merity et al., 2017) for autoregressive language modeling and input-length extrapolation, and on the glue benchmark (wang et al., 2018) for bidirectional language modeling. we also validate the accuracy and efficiency of our method in handling long-range dependencies on the long-range arena benchmark (tay et al., 2020). to demonstrate the robustness of our model, we implement it in the deit (touvron et al., 2021) structure and compare its performance with the vanilla deit on imagenet-1k (deng et al., 2009) for image classification. setting. we implement our models in pytorch (paszke et al., 2019) and train them on 8 v100 gpus. we adopt the same training configuration for all competitors, including batch size, learning rate, training epochs/updates, etc. detailed hyper-parameters are listed in appendix d. for autoregressive language modeling, all models are trained on wikitext-103 for 50k steps with a learning rate of 0.005. we use perplexity (ppl) as the evaluation metric. for bidirectional language modeling, we choose roberta (liu et al., 2019) as the base model structure for all methods. all models are pre-trained on wikitext-103 for 50k steps with lr = 0.005 and fine-tuned on the glue datasets (wang et al., 2018). we sweep learning rates over 1e-5, 3e-5, 6e-5, and 1e-4, and choose the best result after fine-tuning for 3 epochs. for the long-range arena benchmark, we adopt the same experimental configurations as skyformer (chen et al., 2021). we ensure that the performance and efficiency of all methods are obtained with a similar parameter size and the same training hyperparameters. for image classification on imagenet-1k, we adopt the deit (touvron et al., 2021) network structure and replace the transformer layers with our model.
results. autoregressive language modeling. autoregressive language modeling is a crucial task that requires models to estimate the causal probability distribution of a token given previously seen tokens. in table 2, we compare the proposed tnn with competing sequence modeling models. first, compared to existing mlp-based methods, tnn performs better by a clear margin on both the val and test sets. transformer-based methods are currently the dominant sequence modeling methods; as a strong baseline, the transformer adopts a standard self-attention module with quadratic complexity, yet tnn still outperforms it on both val and test sets. in addition, tnn achieves better results than most efficient transformers, including flash, 1+elu, performer, and cosformer. finally, compared with recently emerging state-space-based sequence modeling methods, tnn achieves superior performance to all competing methods, which proves the effectiveness of our method in causal models. further, we also compare the extrapolation capabilities of each method. in figure 1, we show that our method outperforms all other methods and is comparable to alibi (press et al., 2022). complete results can be found in the appendix. bidirectional language modeling. we benchmark bidirectional modeling methods on the glue datasets in table 3. tnn achieves competitive results across all tasks. further, it is worth noting that tnn boosts the results on cola by a significant margin, showing its ability to reason about linguistic information in sequences. this demonstrates the effectiveness of tnn in bidirectional language modeling. long-range arena benchmark. as shown in table 4, we compare tnn with competing methods across five tasks of the lra benchmark. the results up to transformer-ls are taken from skyformer (chen et al., 2021). as demonstrated, tnn achieves the best scores on three tasks and second place on the remaining two.
in terms of overall results, tnn outperforms all other competing methods, including s4 (gu et al., 2022).1

table 2: performance comparison of autoregressive language modeling on the wikitext-103 dataset. the best result is highlighted in bold and the second in underline. ↓ means lower is better. attn stands for attention, ss for state space, trans for transformer, ls for transformer-ls. [columns: ppl (val) ↓, ppl (test) ↓, params (m); rows: attn-based (trans, ls, flash, 1+elu, performer, cosformer), mlp-based (syn(d), syn(r), gmlp), ss-based (s4, dss, gss), ours (tnn)]

for speed comparison, we compare the training speed of tnn with other methods in table 5. for a fair and comprehensive comparison, we follow exactly the same configurations as skyformer (chen et al., 2021) and report steps per second under different sequence lengths. timing is conducted on an nvidia a6000 gpu with 48g gpu memory. image modeling. we report classification results on the imagenet-1k dataset in table 6. as shown, under similar parameter sizes, tnn achieves better results than deit-tiny and comparable results with deit-small. this demonstrates the capability of our method in encoding visual signals.

1 we re-run the s4 experiments with the new configuration to match the number of parameters. for the sake of completeness, we also compare tnn with s4 at the original size of s4, using the suffix "-large", in table 14, which validates our ability to encode long sequences.

table 3: performance comparison of bidirectional sequence modeling on the glue benchmark. mnli is reported on the matched/mismatched splits. mrpc is reported by f1 score. cola is reported by matthews correlation coefficient. all other tasks are measured by accuracy. the best result is highlighted in bold and the second in underline. larger is better for all metrics. "-" means unconverged. attn stands for attention, ss for state space, trans for transformer, ls for transformer-ls.
[table 3 layout — columns: mnli, qnli, qqp, sst-2, mrpc, cola, avg, params (m); rows: attn-based (trans, ls, flash, 1+elu, performer, cosformer), mlp-based (syn(d), syn(r), gmlp), fft-based (fnet, gfnet, afno), ss-based (s4, dss, gss), ours (tnn)]

table 4: performance comparison on the long range arena benchmark. we use bold and underline to highlight the best and second-best result of each task, respectively. the proposed tnn achieves the best overall performance and outperforms all competing methods. [models: transformer, kernelized attention, nystromformer, linformer, informer, performer, reformer, bigbird, skyformer, ls, cosformer, flash, s4, tnn]

ablation study
conservative safety critics for exploration. homanga bharadhwaj1∗, aviral kumar2, nicholas rhinehart2, sergey levine2, florian shkurti1, animesh garg1. 1university of toronto, vector institute; 2university of california, berkeley. homanga@cs.toronto.edu. abstract. safe exploration presents a major challenge in reinforcement learning (rl): when active data collection requires deploying partially trained policies, we must ensure that these policies avoid catastrophically unsafe regions, while still enabling trial-and-error learning. in this paper, we target the problem of safe exploration in rl by learning a conservative safety estimate of environment states through a critic, and provably upper-bound the likelihood of catastrophic failures at every training iteration. we theoretically characterize the tradeoff between safety and policy improvement, show that the safety constraints are likely to be satisfied with high probability during training, derive provable convergence bounds for our approach, which is no worse asymptotically than standard rl, and demonstrate the efficacy of the proposed approach on a suite of challenging navigation, manipulation, and locomotion tasks. empirically, we show that the proposed approach can achieve competitive task performance while incurring significantly lower catastrophic failure rates during training than prior methods. videos are at this url: https://sites.google.com/view/conservative-safety-critics/ introduction. reinforcement learning (rl) is a powerful framework for learning-based control because it can enable agents to learn to make decisions automatically through trial and error. however, in the real world, the cost of those trials – and those errors – can be quite high: a quadruped learning to run as fast as possible might fall down and crash, and then be unable to attempt further trials due to extensive physical damage.
however, learning complex skills without any failures at all is likely impossible. even humans and animals regularly experience failure, but quickly learn from their mistakes and behave cautiously in risky situations. in this paper, our goal is to develop safe exploration methods for rl that similarly exhibit conservative behavior, erring on the side of caution in particularly dangerous settings and limiting the number of catastrophic failures. a number of previous approaches have tackled this problem of safe exploration, often by formulating the problem as a constrained markov decision process (cmdp) (garcía & fernández, 2015; altman, 1999). however, most of these approaches require additional assumptions, like assuming access to a function that can be queried to check if a state is safe (thananjeyan et al., 2020), assuming access to a default safe controller (koller et al., 2018; berkenkamp et al., 2017), assuming knowledge of all the unsafe states (fisac et al., 2019), or only obtaining safe policies after training converges, while being unsafe during the training process (tessler et al., 2018; dalal et al., 2018). in this paper, we propose a general safe rl algorithm with bounds on the probability of failures during training. our method only assumes access to a sparse (e.g., binary) indicator for catastrophic failure, in the standard rl setting. we train a conservative safety critic that overestimates the probability of catastrophic failure, building on tools in the recently proposed conservative q-learning framework (kumar et al., 2020) for offline rl. in order to bound the likelihood of catastrophic failures at every iteration, we impose a kl-divergence constraint on successive policy updates so that the stationary distributions of states induced by the old and the new policies are not arbitrarily different. (∗ work done during hb's (virtual) visit to sergey levine's lab at uc berkeley.)
based on the safety critic's value, we consider a chance constraint denoting the probability of failure, and optimize the policy through primal-dual gradient descent. our key contributions are: designing an algorithm, which we refer to as conservative safety critics (csc), that learns a conservative estimate of how safe a state is; using this conservative estimate for safe exploration and policy updates; and theoretically providing upper bounds on the probability of failures throughout training. through empirical evaluation in five separate simulated robotic control domains spanning manipulation, navigation, and locomotion, we show that csc is able to learn effective policies while reducing the rate of catastrophic failures by up to 50% over prior safe exploration methods. preliminaries. we describe the problem setting of a constrained mdp (altman, 1999) specific to our approach and the conservative q-learning (kumar et al., 2020) framework that we build on in our algorithm. constrained mdps. we take a constrained rl view of safety (garcía & fernández, 2015; achiam et al., 2017), and define safe exploration as the process of ensuring the constraints of the constrained mdp (cmdp) are satisfied while exploring the environment to collect data samples. a cmdp is a tuple (s, a, p, r, γ, µ, c), where s is the state space, a is the action space, p : s × a × s → [0, 1] is a transition kernel, r : s × a → r is a task reward function, γ ∈ (0, 1) is a discount factor, µ is a starting state distribution, and c = {(ci : s → {0, 1}, χi ∈ r) | i ∈ z} is a set of (safety) constraints that the agent must satisfy, with constraint functions ci taking values either 0 (alive) or 1 (failure) and limits χi defining the maximal allowable amount of non-satisfaction, in terms of expected probability of failure. a stochastic policy π : s → p(a) is a mapping from states to action distributions, and the set of all stationary policies is denoted by π.
without loss of generality, we can consider a single constraint, where $c : S \to \{0, 1\}$ denotes the constraint satisfaction function ($c \equiv \mathbb{1}\{\text{failure}\}$), similar to the task reward function, and an upper limit χ. note that since we assume only a sparse binary indicator of failure from the environment, c(s), in purely online training the agent must fail a few times during training, and hence 0 failures is impossible. however, we will discuss how we can keep the number of failures to a small rate, for constraint satisfaction. we define the discounted state distribution of a policy π as $d^\pi(s) = (1-\gamma)\sum_{t=0}^{\infty} \gamma^t P(s_t = s \mid \pi)$, the state value function as $V_r^\pi(s) = \mathbb{E}_{\tau\sim\pi}[R(\tau) \mid s_0 = s]$ with $V_r^\pi(\mu) = \mathbb{E}_{\tau\sim\pi}[\sum_{t=0}^{\infty} r(s_t, a_t)]$, the state-action value function as $Q_r^\pi(s, a) = \mathbb{E}_{\tau\sim\pi}[R(\tau) \mid s_0 = s, a_0 = a]$, and the advantage function as $A_r^\pi(s, a) = Q_r^\pi(s, a) - V_r^\pi(s)$. we define similar quantities for the constraint function: $V_c$, $Q_c$, and $A_c$. so, we have $V_c^\pi(\mu) = \mathbb{E}_{\tau\sim\pi}[\sum_{t=0}^{\infty} c(s_t)]$ denoting the average episodic failures, which can also be interpreted as the expected probability of failure, since $V_c^\pi(\mu) = \mathbb{E}_{\tau\sim\pi}[\sum_{t=0}^{\infty} c(s_t)] = \mathbb{E}_{\tau\sim\pi}[\mathbb{1}\{\text{failure}\}] = P(\text{failure} \mid \mu)$. for a policy parameterized as $\pi_\phi$, we denote $d^\pi(s)$ as $\rho_\phi(s)$. note that although $c$ takes on binary values in our setting, $V_c^\pi(\mu)$ is a continuous function of the policy π. conservative q-learning. cql (kumar et al., 2020) is a method for offline/batch rl (lange et al., 2012; levine et al., 2020) that aims to learn a q-function such that the expected value of a policy under the learned q-function lower-bounds its true value, preventing over-estimation due to out-of-distribution actions. in addition to training q-functions via the standard bellman error, cql minimizes the expected q-values under a particular distribution of actions, µ(a|s), and maximizes the expected q-value under the on-policy distribution, π(a|s).
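the identity $V_c^\pi(\mu) = P(\text{failure} \mid \mu)$ above means the constraint value can be estimated by simply averaging episodic failure indicators. a minimal monte carlo sketch, assuming a hypothetical `run_episode` rollout routine (all names are ours):

```python
import numpy as np

def estimate_failure_prob(run_episode, policy, n_episodes=1000, seed=0):
    """monte carlo estimate of V_c(mu) = E_tau[sum_t c(s_t)] = P(failure|mu).
    run_episode(policy, rng) is a hypothetical rollout routine returning 1 if
    the episode ended in catastrophic failure and 0 otherwise (episodes
    terminate on failure, so the episodic sum of c(s_t) is binary)."""
    rng = np.random.default_rng(seed)
    fails = [run_episode(policy, rng) for _ in range(n_episodes)]
    return float(np.mean(fails))

# toy check: an "environment" that fails with probability 0.2 under any policy
v_c = estimate_failure_prob(lambda pol, rng: int(rng.random() < 0.2), policy=None)
```

this sample average is exactly the quantity written $\hat{V}_c^{\pi}(\mu)$ later in the paper, whose deviation from the true value is the sampling error ζ analyzed in section 4.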
cql in and of itself might lead to unsafe exploration, whereas we will show in section 3 how the theoretical tool introduced in cql can be used to devise a safe rl algorithm. the conservative safe-exploration framework. in this section we describe our safe exploration framework. the safety constraint c(s) defined in section 2 is an indicator of catastrophic failure: c(s) = 1 when a state s is unsafe and c(s) = 0 when it is not, and we ideally desire c(s) = 0 for all states s ∈ S that the agent visits. since we do not make any assumptions on the problem structure for rl (for example, a known dynamics model), we cannot guarantee this, but can at best reduce the probability of failure in every episode. so, we formulate the constraint as $V_c^\pi(\mu) = \mathbb{E}_{\tau\sim\pi}[\sum_{t=0}^{\infty} c(s_t)] \le \chi$, where χ ∈ [0, 1) denotes the allowed probability of failure. our approach is motivated by the insight that by being "conservative" with respect to how safe a state is, and hence by over-estimating this probability of failure, we can effectively ensure constrained exploration. (figure 1: csc (algorithm 1). env.step(a) steps the simulator to the next state s′ and provides r(s, a) and c(s′) values to the agent. if c(s′) = 1 (failure), the episode terminates. qc is the learned safety critic.) figure 1 provides an overview of the approach. the key idea of our algorithm is to train a conservative safety critic, denoted $Q_c(s, a)$, that overestimates how unsafe a particular state is and modifies the exploration strategy to appropriately account for this safety under-estimate (by overestimating the probability of failure). during policy evaluation in the environment, we use the safety critic $Q_c(s, a)$ to reduce the chance of catastrophic failures by checking whether taking action a in state s has $Q_c(s, a)$ less than a threshold ε. if not, we re-sample a from the current policy π(a|s). we now discuss our algorithm more formally.
we start by discussing the procedure for learning the safety critic qc, then discuss how we incorporate it in the policy gradient updates, and finally discuss how we perform safe exploration (garcía & fernández, 2015) during policy execution in the environment. overall objective. our objective is to learn an optimal policy π∗ that maximizes task rewards while respecting the constraint on the expected probability of failures: $\pi^* = \arg\max_{\pi \in \Pi_C} V_r^\pi(\mu)$, where $\Pi_C = \{\pi \in \Pi : V_c^\pi(\mu) \le \chi\}$. (1) learning the safety critic. the safety critic qc is used to obtain an estimate of how unsafe a particular state is, by providing an estimate of the probability of failure, which will be used to guide exploration. we desire the estimates to be "conservative", in the sense that the probability of failure should be an over-estimate of the actual probability, so that the agent can err on the side of caution while exploring. to train such a critic qc, we incorporate tools from cql to estimate qc through updates similar to those obtained by reversing the sign of α in equation 2 of cql(h) (kumar et al., 2020). this gives us an upper bound on qc instead of a lower bound, as ensured by cql. we denote the over-estimated advantage corresponding to this safety critic as $\hat{A}_c$. formally, the safety critic is trained via the following objective, where the objective inside the arg min is called cql(ζ), ζ parameterizes qc, and k denotes the kth update iteration: $\hat{Q}_c^{k+1} \leftarrow \arg\min_{Q_c} \; \alpha \cdot \big( -\mathbb{E}_{s\sim D_{env},\, a\sim\pi_\phi(a|s)}[Q_c(s,a)] + \mathbb{E}_{(s,a)\sim D_{env}}[Q_c(s,a)] \big) + \tfrac{1}{2}\,\mathbb{E}_{(s,a,s',c)\sim D_{env}}\big[ \big( Q_c(s,a) - \hat{\mathcal{B}}^{\pi_\phi}\hat{Q}_c^{k}(s,a) \big)^2 \big]$. (2) here, $\hat{\mathcal{B}}^{\pi_\phi}$ is the empirical bellman operator discussed in section 3.1 and equation 2 of kumar et al. (2020). α is a weight that varies the importance of the first term in equation 2 (the term whose sign is reversed relative to cql) and controls the magnitude of value over-estimation.
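the reversed-sign cql update above can be sketched in tabular form. this is a hypothetical minimal sketch, assuming a small finite mdp with array-valued Qc and policy; the paper uses neural function approximation, and the clipping of Qc to [0, 1] is our own addition (reasonable since Qc estimates a failure probability):

```python
import numpy as np

def csc_critic_update(Qc, batch, policy_probs, alpha=0.5, gamma=0.99, lr=0.1):
    """one tabular gradient step on the conservative safety-critic objective
    (cql with the sign of the alpha term reversed, so Qc OVER-estimates the
    failure probability).
    Qc: (S, A) array; batch: iterable of (s, a, s2, c, done);
    policy_probs: (S, A) array with policy_probs[s] = pi(.|s)."""
    for s, a, s2, c, done in batch:
        # empirical bellman target for the constraint value function
        target = c if done else c + gamma * policy_probs[s2] @ Qc[s2]
        Qc[s, a] -= lr * (Qc[s, a] - target)       # bellman-error term
        # conservative term: push Qc up under pi, down under the data
        Qc[s] += lr * alpha * policy_probs[s]      # from the -E_pi[Qc] term
        Qc[s, a] -= lr * alpha                     # from the +E_D[Qc] term
    np.clip(Qc, 0.0, 1.0, out=Qc)                  # Qc estimates a probability
    return Qc
```

the conservative term inflates Qc on actions the policy might take but the data does not cover, which is exactly the over-estimation the safe-exploration rule relies on.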
for states sampled from the replay buffer $D_{env}$, the first term seeks to maximize the expectation of qc over actions sampled from the current policy, while the second term seeks to minimize the expectation of qc over actions sampled from the replay buffer. $D_{env}$ can include off-policy data, and also offline data (if available). we interleave the gradient descent updates for training qc with gradient ascent updates for the policy $\pi_\phi$ and gradient descent updates for the lagrange multiplier λ, which we describe next. policy learning. since we want to learn policies that obey the constraint we set in terms of the safety critic, we can solve the objective in equation 1 via: $\max_{\pi_\phi} \; \mathbb{E}_{s\sim\rho_\phi,\, a\sim\pi_\phi}\big[A_r^{\pi_\phi}(s,a)\big] \;\; \text{s.t.} \;\; \mathbb{E}_{s\sim\rho_\phi,\, a\sim\pi_\phi}\big[Q_c(s,a)\big] \le \chi$. we can construct a lagrangian and solve the policy optimization problem through primal-dual gradient descent: $\max_{\pi_\phi} \min_{\lambda \ge 0} \; \mathbb{E}_{s\sim\rho_\phi,\, a\sim\pi_\phi}\big[A_r^{\pi_\phi}(s,a) - \lambda\,(Q_c(s,a) - \chi)\big]$. we can apply vanilla policy gradients or an actor-critic style q-function approximator for optimization. here, qc is the safety critic trained through cql as described in equation 2.

algorithm 1 csc: safe exploration with conservative safety critics
1: initialize task value function $V_r^\theta$, safety critic $Q_c^\zeta$, policy $\pi_\phi$, λ, $D_{env}$, thresholds ε, δ, χ.
2: set $\hat{V}_c^{\pi_{\phi_{old}}}(\mu) \leftarrow \chi$.
3: for epochs until convergence do
4:   for episode e in {1, . . . , m} do ▷ execute actions in the environment; collect on-policy samples
5:     set $\epsilon \leftarrow (1-\gamma)\big(\chi - \hat{V}_c^{\pi_{\phi_{old}}}(\mu)\big)$ ▷ $\hat{V}_c^{\pi_{\phi_{old}}}(\mu)$ denotes avg. failures in the previous epoch
6:     for step t in {1, . . . , n} do
7:       sample $a \sim \pi_{\phi_{old}}(s)$. execute a iff $Q_c(s,a) \le \epsilon$; else, resample a.
8:       obtain next state s′, r = r(s, a), c = c(s′).
9:       $D_{env} \leftarrow D_{env} \cup \{(s, a, s', r, c)\}$
10:    end for
11:    store the average episodic failures $\hat{V}_c^{e}(\mu)$
12:  end for ▷ policy and q-function updates using $D_{env}$; if available, $D_{env}$ can be seeded with off-policy/offline data
13:  gradient ascent on φ and (optionally) add entropy regularization (appendix a.2)
14:  gradient updates for the q-function: $\zeta := \zeta - \eta_Q \nabla_\zeta \mathrm{CQL}(\zeta)$
15:  gradient descent step on lagrange multiplier λ (appendix a.2)
16:  $\phi_{old} \leftarrow \phi$
17: end for

executing rollouts (i.e., safe exploration). since we are interested in minimizing the number of constraint violations while exploring the environment, we do not simply execute the learned policy iterate in the environment for active data collection. rather, we query the safety critic qc to obtain an estimate of how unsafe an action is, and choose a safe action via rejection sampling. formally, we sample an action $a \sim \pi_{\phi_{old}}(s)$ and check if $Q_c(s,a) \le \epsilon$. we keep re-sampling actions from $\pi_{\phi_{old}}(s)$ until this condition is met, and once met, we execute that action in the environment. in practice, we execute this loop for 100 iterations, and choose the action a among all actions in state s for which $Q_c(s,a) \le \epsilon$ and the value of $Q_c(s,a)$ is minimum. if no such action a is found that maintains $Q_c(s,a) \le \epsilon$, we just choose the a for which $Q_c(s,a)$ is minimum (although above the threshold). here, ε is a threshold that varies across iterations and is defined as $\epsilon = (1-\gamma)\big(\chi - \hat{V}_c^{\pi_{\phi_{old}}}(\mu)\big)$, where $\hat{V}_c^{\pi_{\phi_{old}}}(\mu)$ is the average episodic failures in the previous epoch, denoting a sample estimate of the true $V_c^{\pi_{\phi_{old}}}(\mu)$. this value of ε is theoretically obtained such that lemma 1 holds. in the replay buffer $D_{env}$, we store tuples of the form (s, a, s′, r, c), where s is the previous state, a is the action executed, s′ is the next state, r is the task reward from the environment, and c = c(s′) is the constraint value. in our setting, c is binary, with 0 denoting a live agent and 1 denoting failure. overall algorithm.
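the rejection-sampling rollout rule can be sketched compactly. a hypothetical minimal version (function names, signature, and the toy numbers for the threshold are our own illustrative assumptions):

```python
import numpy as np

def safe_action(policy_sample, qc, s, eps, max_tries=100, rng=None):
    """rejection-sample a conservatively-safe action: draw candidate actions
    a ~ pi(.|s); among those with qc(s, a) <= eps pick the one with minimum
    qc; if none passes within max_tries, fall back to the candidate with
    minimum qc even though it is above the threshold."""
    rng = rng if rng is not None else np.random.default_rng()
    candidates = [policy_sample(s, rng) for _ in range(max_tries)]
    scores = np.array([qc(s, a) for a in candidates])
    pool = np.flatnonzero(scores <= eps)
    if pool.size == 0:                      # no candidate passed the check
        pool = np.arange(len(candidates))
    return candidates[pool[np.argmin(scores[pool])]]

# threshold from the rule eps = (1 - gamma) * (chi - avg. episodic failures)
gamma, chi, v_hat = 0.99, 0.1, 0.04
eps = (1 - gamma) * (chi - v_hat)   # = 0.0006 with these illustrative numbers
```

because qc over-estimates the failure probability, an action accepted by this check satisfies the true constraint with margin; the fallback keeps the rollout from stalling when the conservative critic rejects everything.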
our overall algorithm, shown in algorithm 1, executes policy rollouts in the environment by respecting the constraint $Q_c(s,a) \le \epsilon$, stores the observed data tuples in the replay buffer $D_{env}$, and uses the collected tuples to train the safety critic qc using equation 2 and to update the policy and the dual variable λ following the optimization objective in equation 6. implementation details. here, we discuss the specifics of the implementation for policy optimization. we consider the surrogate policy improvement problem (sutton, 2020): $\max_{\pi_\phi} \; \mathbb{E}_{s\sim\rho_{\phi_{old}},\, a\sim\pi_\phi}\big[A_r^{\pi_{\phi_{old}}}(s,a)\big]$ s.t. $\mathbb{E}_{s\sim\rho_{\phi_{old}}}\big[D_{KL}(\pi_{\phi_{old}}(\cdot|s)\,\|\,\pi_\phi(\cdot|s))\big] \le \delta$ and $V_c^{\pi_\phi}(\mu) \le \chi$. here, we have introduced a $D_{KL}$ constraint to ensure successive policies are close, in order to help obtain bounds on the expected failures of the new policy in terms of the expected failures of the old policy in section 4. following equation 22 (appendix a.2), we have: $\max_{\pi_\phi} \; \mathbb{E}_{s\sim\rho_{\phi_{old}},\, a\sim\pi_\phi}\big[A_r^{\pi_{\phi_{old}}}(s,a)\big]$ s.t. $V_c^{\pi_{\phi_{old}}}(\mu) + \tfrac{1}{1-\gamma}\,\mathbb{E}_{s\sim\rho_{\phi_{old}},\, a\sim\pi_\phi}\big[A_c^{\pi_{\phi_{old}}}(s,a)\big] \le \chi$ and $\mathbb{E}_{s\sim\rho_{\phi_{old}}}\big[D_{KL}(\pi_{\phi_{old}}(\cdot|s)\,\|\,\pi_\phi(\cdot|s))\big] \le \delta$. (5) we replace the $D_{KL}$ term by its second-order taylor expansion (expressed in terms of the fisher information matrix F) and enforce the resulting constraint exactly (schulman et al., 2015a). we replace the true $A_c$ by the learned over-estimated $\hat{A}_c$, and consider the lagrangian dual of this constrained problem, which we solve by alternating gradient descent: $\max_{\pi_\phi} \min_{\lambda \ge 0} \; \mathbb{E}_{s\sim\rho_{\phi_{old}},\, a\sim\pi_\phi}\big[A_r^{\pi_{\phi_{old}}}(s,a) - \lambda\,\hat{A}_c(s,a)\big]$ s.t. $(\phi - \phi_{old})^\top F\, (\phi - \phi_{old}) \le \delta$. (6) note that although we use the fim for the updates, we can also apply vanilla policy gradients or an actor-critic style q-function approximator to optimize equation 6. detailed derivations of the gradient updates are in appendix a.2.
theoretical analysis. in this section, we aim to theoretically analyze our approach, showing that the expected probability of failure is bounded after each policy update throughout the learning process, while ensuring that the convergence rate to the optimal solution is only mildly bottlenecked by the additional safety constraint. our main result, stated in theorem 1, bounds the expected probability of failure of the policy that results from equation 5. to prove this, we first state a lemma that shows that the constraints in equation 5 are satisfied with high probability during the policy updates. detailed proofs of all the lemmas and theorems are in appendix a.1. notation. let εc = max_s |e_{a∼πφnew}[ac(s, a)]| and ∆ be the amount of overestimation in the expected advantage value generated from the safety critic, ˆac(s, a), as per equation 2, such that ∆ = e_{s∼ρφold′, a∼πφold}[ˆac(s, a) − ac(s, a)]. let ζ denote the sampling error in the estimation of vc^{πφold}(µ) by its sample estimate ˆvc^{πφold}(µ) (i.e., ζ = |ˆvc^{πφold}(µ) − vc^{πφold}(µ)|) and n be the number of samples used in the estimation of vc. let regc(t) be the total cumulative failures incurred by running algorithm 1 until t samples are collected from the environment. we first show that when using algorithm 1, we can upper bound the expected probability of failure for each policy iterate πφold. lemma 1. if we follow algorithm 1, during policy updates via equation 5, the following is satisfied with high probability ≥ 1 − ω: vc^{πφold}(µ) + (1/(1−γ)) e_{s∼ρφold, a∼πφ}[ac(s, a)] ≤ χ + ζ − ∆/(1−γ). here, ζ captures sampling error in the estimation of vc^{πφold}(µ), c′ is a constant independent of ω obtained from union bounds and concentration inequalities (kumar et al., 2020), and n is the number of samples used in the estimation of vc.
we also have ζ ≤ c′ √(log(1/ω)/|n|). this lemma intuitively implies that the constraint on the safety critic in equation 5 is satisfied with a high probability, when we note that the rhs can be made small as n becomes large. lemma 1 had a bound in terms of vc^{πφold}(µ) for the old policy πφold, but not for the updated policy πφnew. we now show that the expected probability of failure for the policy πφnew resulting from solving equation 5, vc^{πφnew}(µ), is bounded with a high probability. theorem 1. consider policy updates that solve the constrained optimization problem defined in equation 5. with high probability ≥ 1 − ω, we have the following upper bound on the expected probability of failure vc^{πφnew}(µ) for πφnew during every policy update iteration: vc^{πφnew}(µ) ≤ χ + ζ − ∆/(1−γ) + √(2δ) γ εc/(1−γ)², where εc, ∆, and ζ are as defined in the notation above. so far we have shown that, with high probability, we can satisfy the constraint in the objective during policy updates (lemma 1) and obtain an upper bound on the expected probability of failure of the updated policy πφnew (theorem 1). the key insight from theorem 1 is that if we execute policy πφnew in the environment, the probability of failing is upper-bounded by a small number depending on the specified safety threshold χ. since the probability of failure is bounded, if we execute πφnew for multiple episodes, the total number of failures is bounded as well. we now bound the task performance in terms of policy return and show that incorporating and satisfying safety constraints during learning does not severely affect the convergence rate to the optimal solution for task performance. theorem 2 builds upon the assumptions in (agarwal et al., 2019) and extends that analysis to our constrained policy updates in equation 5. theorem 2 (convergence rate for policy gradient updates with the safety constraint).
if we run the policy gradient updates through equation 5, for policy πφ, with µ as the starting state distribution, with φ^(0) = 0, learning rate η > 0, and choose α as mentioned in the discussion of theorem 1, then for all policy update iterations t > 0 we have, with probability ≥ 1 − ω: v*_r(µ) − v^(t)_r(µ) ≤ log |a|/(η t) + (k/(η t)) Σ_{t′=0}^{t−1} λ^(t′), where k ≤ (1 − χ) + … since the value of the dual variables λ strictly decreases during gradient descent updates (algorithm 1), Σ_{t′=0}^{t−1} λ^(t′) is upper-bounded. so, we see that the additional term proportional to k introduced in the convergence rate (compared to (agarwal et al., 2019)) due to the safety constraint is upper bounded, and can be made small with a high probability by choosing α appropriately. in addition, we note that the safety threshold χ helps trade off the convergence rate by modifying the magnitude of k (a low χ means a stricter safety threshold and a higher value of k, implying a larger rhs and slower convergence). we discuss some practical considerations of the theoretical results in appendix a.4. so far we have demonstrated that the resulting policy iterates from our algorithm all satisfy the desired safety constraint of the cmdp, which allows a maximum safety violation of χ for every intermediate policy. while this result ensures that the probability of failure is bounded, it does not elaborate on the total failures incurred by the algorithm. in our next result, we show that the cumulative failures grow sublinearly in the number of samples t collected when executing algorithm 1, provided the safety threshold χ is set in the right way. theorem 3 (number of cumulative safety failures grows sublinearly). let χ in algorithm 1 be time-dependent such that χ_t = o(1/√t). then, the total number of cumulative safety violations until t transition samples have been collected by algorithm 1, regc(t), scales sublinearly with t, i.e., regc(t) = o(√(|s||a| t)).
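the time-dependent threshold in theorem 3 can be illustrated with a short numeric sketch (the constant `c0` is an arbitrary choice for illustration): if the expected failures per step track χ_t ∝ 1/√t, the cumulative failures grow only as ~2·c0·√t, i.e., sublinearly.

```python
import math

c0 = 0.5  # arbitrary scale for the threshold schedule (illustrative)

def chi(t):
    # time-dependent safety threshold chi_t = O(1/sqrt(t))
    return c0 / math.sqrt(t)

def cum_failures(T):
    # cumulative failures if each step incurs ~chi_t expected failures
    return sum(chi(t) for t in range(1, T + 1))

# sum_{t=1}^{T} 1/sqrt(t) ~ 2*sqrt(T), so cum_failures(T)/sqrt(T) -> 2*c0
ratios = [cum_failures(T) / math.sqrt(T) for T in (100, 10_000)]
```

the decaying average failure rate (cumulative failures divided by t) is the sense in which training converges to a "safe" policy.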
a proof is provided in appendix a.1. theorem 3 is in many ways similar to a typical regret bound for exploration (russo, 2019; jaksch et al., 2010), though it measures the total number of safety violations. this means that training with algorithm 1 will converge to a "safe" policy whose cumulative failures grow only as o(√t), i.e., the average failure rate decays at a quick o(1/√t) rate. experiments. through experiments on continuous control environments of varying complexity, we aim to empirically evaluate the agreement between empirical performance and theoretical guidance by understanding the following questions: • how safe is csc in terms of constraint satisfaction during training? • how does learning of safe policies trade off with task performance during training? 5.1 experimental setup. environments. in each environment, shown in figure 2, we define a task objective that the agent must achieve and a criterion for catastrophic failure. the goal is to solve the task without dying. (figure 2: illustrations of the five environments in our experiments: (a) 2d point agent navigation avoiding traps. (b) car navigation avoiding traps. (c) panda push without toppling. (d) panda push within boundary. (e) laikago walk without falling.) in point agent/car navigation avoiding traps, the agent must navigate a maze while avoiding traps. the agent has a health counter that decreases every timestep that it spends within a trap. when the counter hits 0, the agent gets trapped and dies. in panda push without toppling, a 7-dof franka emika panda arm must push a vertically placed block across the table to a goal location without the block toppling over. failure is defined as when the block topples. in panda push within boundary, the panda arm must be controlled to push a block across the table to a goal location without the block going outside a rectangular constraint region. failure occurs when the block's center of mass ((x, y) position) moves outside the constraint region.
in laikago walk without falling, an 18-dof laikago quadruped robot must walk without falling. the agent is rewarded for walking as fast as possible (or trotting), and failure occurs when the robot falls. since quadruped walking is an extremely challenging task, for all the baselines we initialize the agent's policy with a controller that has been trained to keep the agent standing while not in motion. baselines and comparisons. we compare csc to three prior methods: constrained policy optimization (cpo) (achiam et al., 2017); a standard unconstrained rl method (schulman et al., 2015a), which we call base (comparison with sac (haarnoja et al., 2018) in appendix figure 7); and an algorithm similar to base, called baseshaped, that modifies the reward r(s, a) as r(s, a) − p·c(s), where p = 10 and c(s) is 1 when a failure occurs and 0 otherwise. we also consider a method that extends leave no trace (eysenbach et al., 2017) to our setting, which we refer to as q-ensembles. this last comparison is the most similar to our approach, in that it also implements a safety critic (adapted from lnt's backward critic), but instead of using our conservative updates, the safety critic uses an ensemble for epistemic uncertainty estimation, as proposed by eysenbach et al. (2017). there are other safe rl approaches which we cannot compare against, as they make multiple additional assumptions, such as the availability of a function that can be queried to determine if a state is safe or not (thananjeyan et al., 2020), availability of a default safe policy for the task (koller et al., 2018; berkenkamp et al., 2017), and prior knowledge of the location of unsafe states (fisac et al., 2019). in addition to the baselines (figure 3), we analyze variants of our algorithm with different safety thresholds through ablation studies (figure 4). we also analyze csc and the baselines when seeded with a small amount of offline data in appendix a.10. empirical results | 6 | [
108.249, 263.9380784, 221.3478983, 273.9006784 ] |
2QzNuaRHn4Z.pdf | 2,023 | 1 | bitrate-constrained dro: beyond worst case robustness to unknown group shifts amrith setlur1 don dennis1 benjamin eysenbach1 aditi raghunathan1 chelsea finn2 virginia smith1 sergey levine3 1 carnegie mellon university 2 stanford university 3 uc berkeley abstract training machine learning models robust to distribution shifts is critical for real-world applications. some robust training algorithms (e.g., group dro) specialize to group shifts and require group information on all training points. other methods (e.g., cvar dro) that do not need group annotations can be overly conservative, since they naively upweight high loss points which may form a contrived set that does not correspond to any meaningful group in the real world (e.g., when the high loss points are randomly mislabeled training points). in this work, we address limitations in prior approaches by assuming a more nuanced form of group shift: conditioned on the label, we assume that the true group function (an indicator over group) is simple. for example, we may expect that group shifts occur along low-bitrate features (e.g., image background, lighting). thus, we aim to learn a model that maintains high accuracy on simple group functions realized by these low-bitrate features, and need not spend valuable model capacity achieving high accuracy on contrived groups of examples. based on this, we consider the two-player game formulation of dro where the adversary's capacity is bitrate-constrained. our resulting practical algorithm, bitrate-constrained dro (br-dro), does not require group information on training samples yet matches the performance of group dro on datasets that have training group annotations and that of cvar dro on long-tailed distributions. our theoretical analysis reveals that in some settings the br-dro objective can provably yield statistically efficient and less conservative solutions than unconstrained cvar dro.
introduction machine learning models may perform poorly when tested on distributions that differ from the training distribution. a common form of distribution shift is group shift, where the source and target differ only in the marginal distribution over finite groups or sub-populations, with no change in group conditionals (oren et al., 2019; duchi et al., 2019) (e.g., when the groups are defined by spurious correlations and the target distribution upsamples the group where the correlation is absent sagawa et al. (2019)). prior works consider various approaches to address group shift. one solution is to ensure robustness to worst case shifts using distributionally robust optimization (dro) (bagnell, 2005; ben-tal et al., 2013; duchi et al., 2016), which considers a two-player game where a learner minimizes risk on distributions chosen by an adversary from a predefined uncertainty set. as the adversary is only constrained to propose distributions that lie within an f-divergence based uncertainty set, dro often yields overly conservative (pessimistic) solutions (hu et al., 2018) and can suffer from statistical challenges (duchi et al., 2019). this is mainly because dro upweights high loss points that may not form a meaningful group in the real world, and may even be contrived if the high loss points simply correspond to randomly mislabeled examples in the training set. methods like group dro (sagawa et al., 2019) avoid overly pessimistic solutions by assuming knowledge of group membership for each training example. however, these group-based methods provide no guarantees on shifts that deviate from the predefined groups (e.g., when there is a new group), and are not applicable to problems that lack group knowledge. in this work, we therefore ask: can we train non-pessimistic robust models without access to group information on training samples? we address this question by considering a more nuanced assumption on the structure of the underlying groups. 
we assume that, conditioned on the label, group boundaries are realized by high-level features that depend on a small set of underlying factors (e.g., background color, brightness). this leads to simpler group functions with large margin and simple decision boundaries between groups (figure 1 (left)). invoking the principle of minimum description length (grünwald, 2007), restricting our adversary to functions that satisfy this assumption corresponds to a bitrate constraint. in dro, the adversary upweights points with higher losses under the current learner, which in practice often correspond to examples that belong to a rare group, contain complex patterns, or are mislabeled (carlini et al., 2019; toneva et al., 2018). (*correspondence can be sent to asetlur@cs.cmu.edu.) (figure 1: bitrate-constrained dro: a method that assumes group shifts along low-bitrate features, and restricts the adversary appropriately so that the solution found is less pessimistic and more robust to unknown group shifts. our method is also robust to training noise. (left) in waterbirds (wah et al., 2011), the spurious feature background is a large-margin simple feature that separates the majority and minority points in each class. (right) prior works (levy et al., 2020; liu et al., 2021) that upweight arbitrary points with high losses force the model to memorize noisy mislabeled points, while our method is robust to noise and only upweights the true minority group without any knowledge of its identity (see section 6.2).)
restricting the adversary’s capacity prevents it from upweighting individual hard or mislabeled examples (as they cannot be identified with simple features), and biases it towards identifying erroneous data points misclassified by simple features. this also complements the failure mode of neural networks trained with stochastic gradient descent (sgd) that rely on simple spurious features which correctly classify points in the majority group but may fail on minority groups (blodgett et al., 2016). the main contribution of this paper is bitrate-constrained dro (br-dro), a supervised learning procedure that provides robustness to distribution shifts along groups realized by simple functions. despite not using group information on training examples, we demonstrate that br-dro can match the performance of methods requiring them. we also find that br-dro is more successful in identifying true minority training points, compared to unconstrained dro. this indicates that not optimizing for performance on contrived worst-case shifts can reduce the pessimism inherent in dro. it further validates: (i) our assumption on the simple nature of group shift; and (ii) that our bitrate constraint meaningfully structures the uncertainty set to be robust to such shifts. as a consequence of the constraint, we also find that br-dro is robust to random noise in the training data (song et al., 2022), since it cannot form “groups” entirely based on randomly mislabeled points with low bitrate features. this is in contrast with existing methods that use the learner’s training error to up-weight arbitrary sets of difficult training points (e.g., liu et al., 2021; levy et al., 2020), which we show are highly susceptible to label noise (see figure 1 (right)). finally, we theoretically analyze our approach—characterizing how the degree of constraint on the adversary can effect worst risk estimation and excess risk (pessimism) bounds, as well as convergence rates for specific online solvers. 
related work prior works in robust ml (e.g., li et al., 2018; lipton et al., 2018; goodfellow et al., 2014) address various forms of adversarial or structured shifts. we specifically review prior work on robustness to group shifts. while those based on dro optimize for worst-case shifts in an explicit uncertainty set, the robust set is implicit for some others, with most using some form of importance weighting. distributionally robust optimization (dro). dro methods generally optimize for worst-case performance on joint (x,y) distributions that lie in an f-divergence ball (uncertainty set) around the training distribution (ben-tal et al., 2013; rahimian & mehrotra, 2019; bertsimas et al., 2018; blanchet & murthy, 2019; miyato et al., 2018; duchi et al., 2016; duchi & namkoong, 2021). hu et al. (2018) highlights that the conservative nature of dro may lead to degenerate solutions when the unrestricted adversary uniformly upweights all misclassified points. sagawa et al. (2019) proposes to address this by limiting the adversary to shifts that only differ in marginals over predefined groups. however, in addition to it being difficult to obtain this information, kearns et al. (2018) raise “gerrymandering” concerns with notions of robustness that fix a small number of groups apriori. while they propose a solution that looks at exponentially many subgroups defined over protected attributes, our method does not assume access to such attributes and aims to be fair on them as long as they are realized by simple functions. finally, zhai et al. (2021) avoid conservative solutions by solving the dro objective over randomized predictors learned through boosting. we consider deterministic and over-parameterized learners and instead constrain the adversary’s class. constraining the dro uncertainty set. in the marginal dro setting, duchi et al. 
(2019) limit the adversary via easier-to-control reproducing kernel hilbert spaces (rkhs) or bounded hölder continuous functions (liu & ziebart, 2014; wen et al., 2014). while this reduces the statistical error in worst-risk estimation, the size of the uncertainty set (which scales with the data) remains too large to avoid cases where an adversary can reweight mislabeled and hard examples from the majority set (carlini et al., 2019). in contrast, we restrict the adversary even for large datasets where the estimation error would be low, as this reduces excess risk when we only care about robustness to rare sub-populations defined by simple functions. additionally, while their analysis and method prefer the adversary's objective to have a strong dual, we show empirical results on real-world datasets and generalization bounds where the adversary's objective is not necessarily convex. robustness to group shifts without demographics. recent works (sohoni et al., 2020; creager et al., 2021; bao & barzilay, 2022) that aim to achieve group robustness without access to group labels employ various heuristics where the robust set is implicit, while others require data from multiple domains (arjovsky et al., 2019; yao et al., 2022) or the ability to query test samples (lee et al., 2022). liu et al. (2021) use training losses for a heavily regularized model trained with empirical risk minimization (erm) to directly identify minority data points with higher losses and re-train on a dataset that up-weights the identified set. nam et al. (2020) take a similar approach. other methods (idrissi et al., 2022) propose simple baselines that subsample the majority class in the absence of group demographics and the majority group in its presence. hashimoto et al. (2018) find that dro over a χ²-divergence ball can reduce the otherwise increasing disparity of per-group risks in a dynamical system. since it does not use features to upweight points (unlike br-dro), it is vulnerable to label noise.
the same can be said about some other works (e.g., liu et al. (2021); nam et al. (2020)). importance weighting in deep learning. finally, numerous works (duchi et al., 2016; levy et al., 2020; lipton et al., 2018; oren et al., 2019) enforce robustness by re-weighting losses on individual data points. recent investigations (soudry et al., 2018; byrd & lipton, 2019; lu et al., 2022) reveal that such objectives have little impact on the learned solution in interpolation regimes. one way to avoid this pitfall is to train with heavily regularized models (sagawa et al., 2019; 2020) and employ early stopping. another way is to subsample certain points, as opposed to up-weighting (idrissi et al., 2022). in this work, we use both techniques while training our objective and the baselines, ensuring that the regularized class is robust to shifts under misspecification (wen et al., 2014). preliminaries. we introduce the notation we use in the rest of the paper and describe the dro problem. in the following section, we will formalize our assumptions on the nature of the shift before introducing our optimization objective and algorithm. notation. with covariates x ⊂ r^d and labels y, the given source p and unknown true target q0 are measures over the measurable space (x × y, Σ) and have densities p and q0 respectively (w.r.t. a base measure µ). the learner's choice is a hypothesis h : x → y in class h ⊂ l2(p), and the adversary's action in standard dro is a target distribution q in the set q_{p,κ} := {q : q ≪ p, d_f(q ∥ p) ≤ κ}. here, d_f is the f-divergence between q and p for a convex function f with f(1) = 0. an equivalent action space for the adversary is the set of re-weighting functions w_{p,κ} = {w : x × y → r+ : w is measurable under p, e_p[w] = 1, e_p[f(w)] ≤ κ}. (1) for a convex loss function l : y × y → r+, we denote by l(h) the function over (x, y) that evaluates l(h(x), y), and use l_{0-1} to denote the 0-1 loss function 1(h(x) ≠ y).
given either a distribution q ∈ q_{p,κ} or a re-weighting function w ∈ w_{p,κ}, the risk of a learner h is r(h, q) = e_q[l(h)], or r(h, w) = e_{(x,y)∼p}[l(h(x), y) · w(x, y)] = ⟨l(h), w⟩_p. (for example, kl(q ∥ p) is recovered with f(x) = x log x, and total variation with f(x) = ½|x − 1|.) note the overload of notation for r(h, ·). if the adversary is stochastic, it picks a mixed action δ ∈ Δ(w_{p,κ}), which is the set of all distributions over w_{p,κ}. whenever it is clear, we drop the subscripts p, κ. unconstrained dro (ben-tal et al., 2013). this is a min-max optimization problem understood as a two-player game, where the learner chooses a hypothesis to minimize risk on the worst distribution that the adversary can choose from its set. formally, this is given by equation 3: inf_{h∈h} sup_{q∈q_{p,κ}} r(h, q) = inf_{h∈h} sup_{w∈w_{p,κ}} r(h, w) = inf_{h∈h} sup_{δ∈Δ(w_{p,κ})} e_{w∼δ}[r(h, w)]. (3) the first equivalence is clear from the definitions, and for the second, since r(h, q) is linear in q, the supremum over Δ(w_{p,κ}) is a dirac delta over the best weighting in w_{p,κ}. in the next section, we will see how a bitrate-constrained adversary can only pick certain actions from Δ(w_{p,κ}). group shift. while the dro framework in section 3 is broad and addresses any unstructured shift, we focus on the specific case of group shift. first, for a given pair of measures p, q we define what we mean by the group structure g_{p,q} (definition 3.1). intuitively, it is a set of sub-populations along which the distribution shifts, defined in a way that makes them uniquely identifiable. for example, in the waterbirds dataset (figure 1), there are four groups given by combinations of (label, background). corollary 3.2 follows immediately from the definition of g_{p,q}. using this definition, the standard group shift assumption (sagawa et al., 2019) can be formally re-stated as assumption 3.3.
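the pessimism of the unconstrained adversary discussed above can be seen in a toy sketch (synthetic losses; only the normalization e_p[w] = 1 is enforced here, ignoring the divergence budget): the worst-case re-weighting simply piles all mass on the single highest-loss point, here a mislabeled example.

```python
import numpy as np

# Toy sketch of the reweighted risk r(h, w) = mean(losses * w) over a
# uniform empirical source distribution (synthetic numbers).
losses = np.array([0.1, 0.2, 0.1, 0.15, 3.0])  # last point: a mislabeled example

def reweighted_risk(losses, w):
    assert np.isclose(w.mean(), 1.0)  # e_p[w] = 1 under uniform empirical p
    return np.mean(losses * w)

uniform = np.ones_like(losses)
greedy = np.zeros_like(losses)
greedy[np.argmax(losses)] = len(losses)  # all mass on the single worst point

erm_risk = reweighted_risk(losses, uniform)
worst_risk = reweighted_risk(losses, greedy)
```

the gap between `erm_risk` and `worst_risk` is driven entirely by one corrupted point, which is the degenerate behavior that a bitrate constraint on w is meant to rule out.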
definition 3.1 (group structure g_{p,q}). for q ≪ p, the group structure g_{p,q} = {g_k}_{k=1}^k is the smallest finite set of disjoint groups {g_k}_{k=1}^k s.t. q(∪_{k=1}^k g_k) = 1 and, for every k: (i) g_k ∈ Σ, q(g_k) > 0, and (ii) p(x, y | g_k) = q(x, y | g_k) > 0 a.e. in µ. if such a structure exists, then g_{p,q} is well defined. corollary 3.2 (uniqueness of g_{p,q}). the group structure g_{p,q} of (p, q) is unique if it is well defined. assumption 3.3 (standard group shift). there exists a well-defined group structure g_{p,q0} s.t. target q0 differs from p only in terms of marginal probabilities over all g ∈ g_{p,q0}. bitrate-constrained dro. we begin with a note on the expressivity of the adversary in unconstrained dro and formally introduce the assumption we make on the nature of shift. then, we build intuition for why unconstrained adversaries fail but restricted ones do better under our assumption. finally, we state our main objective and discuss a specific instance of it. how expressive is the unconstrained adversary? note that the set w_{p,κ} includes all measurable functions (under p) such that the re-weighted distribution is bounded in f-divergence (by κ). while prior works (shafieezadeh-abadeh et al., 2015; duchi et al., 2016) shrink κ to construct confidence intervals, this only controls the total mass that can be moved between measurable sets g1, g2 ∈ Σ, but does not restrict the choice of g1 and g2 itself. as noted by hu et al. (2018), such an adversary is highly expressive, and optimizing for the worst case only leads to the solution of empirical risk minimization (erm) under the l_{0-1} loss. thus, we can conclude that dro recovers degenerate solutions because the worst target in w_{p,κ} lies far from the subspace of naturally occurring targets. since it is hard to precisely characterize natural targets, we make a nuanced assumption: the target q0 only upsamples those rare subpopulations that are misclassified by simple features. we state this formally in assumption 4.2, after we define the bitrate-constrained function class in definition 4.1. definition 4.1 (bitrate-constrained class). a function class w(γ) is bitrate-constrained if there exists a data-independent prior π s.t. w(γ) = {e_δ[w] : δ ∈ Δ(w), kl(δ ∥ π) ≤ γ}. assumption 4.2 (simple group shift). target q0 satisfies assumption 3.3 (group shift) w.r.t. source p.
additionally, for some prior π and a small γ*, the re-weighting function q0/p lies in a bitrate-constrained class w(γ*); in other words, for every group g ∈ g_{p,q0}, w((x, y) ∈ g) = w_g a.e. we refer to such a g as a simple group that is realized in w(γ*). under the principle of minimum description length (grünwald, 2007), any deviation from the prior (i.e., kl(δ ∥ π)) increases the description length of the encoding; thus we refer to w(γ) as being bitrate-constrained in the sense that it contains functions (means of distributions) that can be described with a limited number of bits given the prior π. see appendix a.3 for an example of a bitrate-constrained class of functions. next we present arguments for why identifiability of simple (i.e., satisfying assumption 4.2) minority groups can be critical for robustness. neural networks can perform poorly on simple minorities. for a fixed target q0, let's say there exist two groups gmin and gmaj ∈ g_{p,q0} such that p(gmin) ≪ p(gmaj). by assumption 4.2, both gmin and gmaj are simple (realized in w(γ*)), and are thus separated by some simple feature. the learner's class h is usually a class of overparameterized neural networks. when trained with stochastic gradient descent (sgd), these are biased towards learning simple features that classify a majority of the data (shah et al., 2020; soudry et al., 2018). thus, if the simple feature separating gmin and gmaj itself correlates with the label y on gmaj, then neural networks would fit on this feature. this is precisely the case in the waterbirds example, where the groups are defined by whether the simple feature background correlates with the label (figure 1). our assumption on the nature of shift thus complements the tendency of neural networks to perform poorly on simple minorities.
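the group-shift structure above (group conditionals preserved, marginals shifted, and hence an importance weight q0/p that is constant within each group, the w_g of assumption 4.2) can be checked on a tiny discrete example; all numbers are synthetic.

```python
import numpy as np

# Two disjoint groups over four outcomes; conditionals p(x,y|g) = q(x,y|g).
cond = np.array([[0.7, 0.3, 0.0, 0.0],    # group 0 (majority)
                 [0.0, 0.0, 0.4, 0.6]])   # group 1 (minority)
p_marg = np.array([0.9, 0.1])  # source group marginals
q_marg = np.array([0.5, 0.5])  # target upsamples the minority group

p = p_marg @ cond   # source density over the four outcomes
q = q_marg @ cond   # target density over the four outcomes
w = q / p           # importance weights q0/p, piecewise constant per group
```

because the groups are disjoint, w takes exactly two values (one per group), so it is describable with very few bits, which is what the bitrate constraint exploits.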
the bitrate constraint helps identify simple unfair minorities in g_{p,q0}. any method that aims to be robust on q0 must up-weight data points from gmin, but without knowing its identity. since the unconstrained adversary upsamples any group of data points with high loss and low probability, it cannot distinguish between a rare group that is realized by simple functions in w(γ*) and a rare group of examples that share no feature in common or may even be mislabeled. on the other hand, the group of mislabeled examples cannot be separated from the rest by functions in w(γ*). thus, a bitrate-constrained adversary can only identify simple groups and upsamples those that incur high losses – possibly due to the simplicity bias of neural networks. br-dro objective. according to assumption 4.2, there cannot exist a target q0 such that a minority gmin ∈ g_{p,q0} is not realized in the bitrate-constrained class w(γ*). thus, by constraining our adversary to a class w(γ) (for some γ that is user defined), we can possibly evade issues emerging from optimizing for performance on mislabeled or hard examples, even if they were rare. this gives us the objective in equation 4, where the equalities hold from the linearity of ⟨·, ·⟩_p and definition 4.1: inf_{h∈h} sup_{w∈w(γ)} r(h, w) = inf_{h∈h} sup_{δ∈Δ(w), kl(δ∥π)≤γ} e_{w∼δ}[r(h, w)] = inf_{h∈h} sup_{δ∈Δ(w), kl(δ∥π)≤γ} ⟨l(h), e_δ[w]⟩_p. (4) br-dro in practice. we parameterize the learner θh ∈ Θh and adversary θw ∈ Θw as neural networks. in practice, we implement the adversary either as a one-hidden-layer variational information bottleneck (vib) (alemi et al., 2016), where the kullback-leibler (kl) constraint on the latent variable z (the output of vib's hidden layer) directly constrains the bitrate, or as an l2-norm-constrained linear layer. the objective for the vib (resp. l2) version is obtained by setting β_l2 = 0 (resp. β_vib = 0) in equation 5 below. see appendix a.2 for details.
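a minimal sketch of the l2-penalized linear adversary described above, operating on the learner's features; names like `adv_w`, `eta`, and `beta_l2` are illustrative stand-ins, not the paper's code, and the update is plain gradient ascent on a per-example weighting objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# feats: learner's last-layer features; losses: per-example learner losses.
# The adversary outputs weights in [0, 1] and ascends on
# sum((loss - eta) * weight) - beta_l2 * ||adv_w||^2  (an l2-penalty variant).
def adversary_step(adv_w, feats, losses, eta=0.5, beta_l2=0.1, lr=0.1):
    logits = feats @ adv_w
    weights = sigmoid(logits)
    # gradient of the objective w.r.t. adv_w
    grad = feats.T @ ((losses - eta) * weights * (1 - weights)) - 2 * beta_l2 * adv_w
    return adv_w + lr * grad, weights

feats = rng.normal(size=(64, 8))
# synthetic "simple group": the sign of feature 0 determines high vs low loss
losses = np.where(feats[:, 0] > 0, 1.1, 0.1)
adv_w = np.zeros(8)
for _ in range(50):
    adv_w, weights = adversary_step(adv_w, feats, losses)
```

because the high-loss group is separated by a single linear feature, the constrained adversary learns to upweight it, whereas a group of arbitrary mislabeled points with no shared feature could not be picked out this way.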
note that the objective in equation 5 is no longer convex-concave and can have multiple local equilibria or stationary points (mangoubi & vishnoi, 2021). the adversary's objective also does not have a strong dual that can be solved through conic programs, a standard practice in the dro literature (namkoong & duchi, 2016). thus, we provide an algorithm where both learner and adversary optimize br-dro iteratively through stochastic gradient ascent/descent (algorithm 1 in appendix a.1): min_{θh∈Θh} ⟨l(θh), θ*w⟩_p s.t. θ*w = argmax_{θw∈Θw} l_adv(θw; θh, β_vib, β_l2, η), where l_adv(θw; θh, β_vib, β_l2, η) := ⟨l(θh) − η, θw⟩_p − β_vib · e_p[kl(p(z | x; θw) ∥ n(0, i))] − β_l2 · ∥θw∥₂. (5) training. for each example, the adversary takes as input: (i) the last-layer output of the current learner's feature network; and (ii) the input label. the adversary then outputs a weight (in [0, 1]). the idea of applying the adversary directly on the learner's features (instead of the original input) is based on recent literature (rosenfeld et al., 2022; kirichenko et al., 2022) that suggests re-training the prediction head is sufficient for robustness to shifts. the adversary tries to maximize weights on examples with loss above the threshold η (a hyperparameter) and minimize them on others. for the learner, in addition to the example, it takes as input the adversary-assigned weight for that example from the previous round and uses it to reweigh its loss in a minibatch. both players are updated in a round (algorithm 1). theoretical analysis
108.149,
193.7465888,
249.610707,
205.7017888
] |
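the alternating learner/adversary update scheme described above can be sketched with a toy numpy implementation. this is a minimal sketch under stated assumptions: the linear logistic learner, the norm-bounded linear adversary, the synthetic data, and all hyperparameter values are hypothetical stand-ins for the paper's neural-network setup, not its actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the label is determined by the first feature (hypothetical stand-in).
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(float)

theta_h = np.zeros(5)        # learner parameters
theta_w = np.zeros(5)        # adversary parameters (l2-norm constrained)
eta, l2_cap, lr = 0.3, 1.0, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def per_example_loss(theta):
    # Per-example cross-entropy loss of a logistic learner.
    p = sigmoid(X @ theta)
    return -(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

for _ in range(200):
    # Adversary ascent: up-weight examples whose loss exceeds eta,
    # using a simple (norm-bounded linear) weighting function.
    loss = per_example_loss(theta_h)
    w = sigmoid(X @ theta_w)                       # weights in (0, 1)
    grad_w = X.T @ ((loss - eta) * w * (1 - w)) / len(X)
    theta_w += lr * grad_w
    norm = np.linalg.norm(theta_w)
    if norm > l2_cap:                              # project onto the l2 ball
        theta_w *= l2_cap / norm
    # Learner descent on the adversary-reweighted loss.
    w = sigmoid(X @ theta_w)
    p = sigmoid(X @ theta_h)
    grad_h = X.T @ (w * (p - y)) / len(X)
    theta_h -= lr * grad_h

final_loss = per_example_loss(theta_h).mean()
```

the l2 projection plays the role of the bitrate constraint here: the adversary can only express weighting rules of bounded complexity, so it cannot memorize arbitrary rare subsets of examples.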
hx1IXFHAw7R.pdf | 2,021 | 1 | provable rich observation reinforcement learning with combinatorial latent states dipendra misra∗ microsoft research qinghua liu princeton university chi jin princeton university john langford microsoft research abstract we propose a novel setting for reinforcement learning that combines two common real-world difficulties: presence of observations (such as camera images) and factored states (such as location of objects). in our setting, the agent receives observations generated stochastically from a latent factored state. these observations are rich enough to enable decoding of the latent state and remove partial observability concerns. since the latent state is combinatorial, the size of the state space is exponential in the number of latent factors. we create a learning algorithm factorl (fact-o-rel) for this setting which uses noise-contrastive learning to identify latent structures in emission processes and discover a factorized state space. we derive polynomial sample complexity guarantees for factorl which depend polynomially upon the number of factors, and very weakly on the size of the observation space. we also provide a guarantee of polynomial time complexity when given access to an efficient planning algorithm. introduction most reinforcement learning (rl) algorithms scale polynomially with the size of the state space, which is inadequate for many real-world applications. consider for example a simple navigation task in a room with furniture where the set of furniture pieces and their locations change from episode to episode. if we crudely approximate the room as a 10 × 10 grid and consider each element in the grid to contain a single bit of information about the presence of furniture, then we end up with a state space of size 2^100, as each element of the grid can be filled independently of the others. this is intractable for rl algorithms that depend polynomially on the size of the state space.
the notion of factorization allows tractable solutions to be developed. for the above example, the room can be considered a state with 100 factors, where the next value of each factor depends on just a few other parent factors and the action taken by the agent. learning in factored markov decision processes (mdp) has been studied extensively (kearns & koller, 1999; guestrin et al., 2003; osband & van roy, 2014) with tractable solutions scaling linearly in the number of factors and exponentially in the number of parent factors whenever planning can be done efficiently. however, factorization alone is inadequate since the agent may not have access to the underlying factored state space, instead only receiving a rich observation of the world. in our room example, the agent may have access to an image of the room taken from a megapixel camera instead of the grid representation. naively, treating each pixel of the image as a factor suggests there are over a million factors and a prohibitively large number of parent factors for each pixel. counterintuitively, thinking of the observation as the state in this way leads to the conclusion that problems become harder as the camera resolution increases or other sensors are added. it is entirely possible that these pixels (or more generally, observation atoms) are generated by a small number of latent factors with a small number of parent factors. this motivates us to ask: can we achieve pac rl guarantees that depend polynomially on the number of latent factors and very weakly (e.g., logarithmically) on the size of the observation space? recent work has addressed this for a rich-observation setting with a non-factored latent state space when certain supervised learning problems are tractable (du et al., 2019; misra et al., 2020; agarwal et al., 2020). however, addressing the rich-observation setting with a latent factored state space has remained elusive.
specifically, ignoring the factored structure in the latent space or treating observation atoms as factors yields intractable solutions. ∗correspondence at: dimisra@microsoft.com figure 1: left: a room navigation task as a factored block mdp setting showing atoms and factors. center and right: the different stages executed by the factorl algorithm. we do not show the observation x emitted by s for brevity. in practice a factor would emit many more atoms. contributions. we combine two threads of research on rich-observation rl and factored mdps by proposing a new problem setup called factored block mdp (section 2). in this setup, observations are emitted by latent states that obey the dynamics of a factored mdp. we assume observations to be composed of atoms (which can be pixels for an image) that are emitted by the latent factors. a single factor can emit a large number of atoms but no two factors can control the same atom. following existing rich-observation rl literature, we assume observations are rich enough to decode the current latent state. we introduce an algorithm factorl that achieves the desired guarantees for a large class of factored block mdps under certain computational and realizability assumptions (section 4). the main challenge that factorl handles is to map atoms to the parent factor that emits them. we achieve this by reducing the identification problem to solving a set of independence test problems with distributions satisfying certain properties. we perform independence tests in a domain-agnostic setting using noise-contrastive learning (section 3). once we have mapped atoms to their parent factors, factorl then decodes the factors, estimates the model, recovers the latent structure in the transition dynamics, and learns a set of exploration policies. figure 1 shows the different steps of factorl. this provides us with enough tools to visualize the latent dynamics and plan for any given reward function.
due to the space limit, we defer the discussion of related work to appendix b. to the best of our knowledge, our work represents the first provable solution to rich-observation rl with a combinatorially large latent state space. the factored block mdp setting there are many possible ways to add rich observations to a factored mdp resulting in inapplicability or intractability. our goal here is to define a problem setting that is tractable to solve and covers potential real-world problems. we start with the definition of factored mdp (kearns & koller, 1999), but first review some useful notation that we will be using. notations: for any n ∈ ℕ, we use [n] to denote the set {1, · · · , n}. for any ordered set (or a vector) u of size n and an ordered index set I ⊆ [n] of length k, we use the notation u[I] to denote the ordered set (u[i] : i ∈ I). definition 1. a factored mdp (S, A, T, r, H) consists of a d-dimensional discrete state space S ⊆ {0, 1}^d, a finite action space A, an unknown transition function T : S × A → Δ(S), an unknown reward function r : S × A → [0, 1], and a time horizon H. each state s ∈ S consists of d factors, with the ith factor denoted s[i]. the transition function satisfies T(s′ | s, a) = ∏_{i=1}^{d} T_i(s′[i] | s[pt(i)], a) for every s, s′ ∈ S and a ∈ A, where each T_i defines a factored transition distribution and a parent function pt : [d] → 2^[d] defines the set of parent factors that can influence a factor at the next timestep. we assume a deterministic start state. we also assume, without loss of generality, that each state and observation is reachable at exactly one time step. this can be easily accomplished by concatenating the time step information to states and observations. this allows us to write the state space as S = (S_1, · · · , S_H), where S_h is the set of states reachable at time step h. a natural question to ask here is why we assume factored transition.
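as an illustration of definition 1, the factored transition can be represented with one small conditional table per factor instead of a full 2^d × 2^d matrix. the sketch below uses hypothetical parent sets and randomly filled tables; it is only meant to show that the product of per-factor conditionals defines a valid distribution over next states:

```python
import itertools
import numpy as np

d = 3                                        # number of factors
parents = {0: (0,), 1: (0, 1), 2: (2,)}      # pt(i), hypothetical
rng = np.random.default_rng(1)

# One conditional table per factor: row = value of s[pt(i)], column = action a,
# entry = probability that s'[i] = 1.
tables = {i: rng.uniform(size=(2 ** len(parents[i]), 2)) for i in range(d)}

def idx(bits):
    # Encode a tuple of parent-factor bits as a row index.
    return int("".join(map(str, bits)), 2)

def transition_prob(s_next, s, a):
    """T(s' | s, a) as a product of per-factor conditionals."""
    p = 1.0
    for i in range(d):
        p1 = tables[i][idx([s[j] for j in parents[i]]), a]
        p *= p1 if s_next[i] == 1 else 1.0 - p1
    return p

# The factored conditionals define a proper distribution over next states.
total = sum(transition_prob(sp, (0, 1, 0), 1)
            for sp in itertools.product([0, 1], repeat=d))
```

note that the model size is ∑_i 2^{|pt(i)|} · |A| entries, which matches the d^{O(κ)} scaling discussed in the text when parent sets are small.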
in tabular mdps, the lower bound for sample complexity scales linearly w.r.t. the size of the state set (kakade, 2003). if we do not assume a factorized transition function then we can encode an arbitrary mdp with a state space of size 2^d, which would yield a lower bound of ω(2^d), rendering the setting intractable. instead, we will prove sample complexity guarantees for factorl that scale in the number of factors as d^{o(κ)}, where κ := max_{i∈[d]} |pt(i)| is the size of the largest parent factor set. the dependence on κ in the exponent is unavoidable, as we have to find the parent factors from all (d choose κ) possible combinations, as well as learn the model for all possible values of the parent factors. however, for real-world problems we expect κ to be a small constant such as 2. this yields significant improvement: for example, if κ = 2 and d = 100 then d^κ = 10^4 while 2^d ≥ 10^30. based on the definition of factored mdp, we define the main problem setup of this paper, called factored block mdp, where the agent does not observe the state but instead receives an observation containing enough information to decode the latent state. definition 2. a factored block mdp consists of an observation space X and a latent state space S ⊆ {0, 1}^d whose dynamics follow a factored mdp (S, A, T, r, H) with parent function pt and a deterministic start state. a single observation x ∈ X is made of m atoms, with the kth denoted by x[k]. observations are generated stochastically given a latent state s according to a factored emission function q(x | s) = ∏_{i=1}^{d} q_i(x[ch(i)] | s[i]), where q_i(· | s[i]) is a distribution over the atoms x[ch(i)] and ch : [d] → 2^[m] is a child function satisfying ch(i) ∩ ch(j) = ∅ whenever i ≠ j. the emission function satisfies the disjointness property: for every i ∈ [d], we have supp(q_i(· | 0)) ∩ supp(q_i(· | 1)) = ∅.1 the notion of atoms generalizes commonly used abstractions.
for example, if the observation is an image then atoms can be individual pixels or superpixels, and if the observation space is natural language text then atoms can be individual letters or words. we make no assumption about the structure of the atom space or its size, which can be infinite. an agent is responsible for mapping each observation x to its individual atoms (x[1], · · · , x[m]). for the two examples above, this mapping is routinely performed in practice. if the observation is a text presented to the agent as a string, then it can use an off-the-shelf tokenizer to map it to a sequence of tokens (atoms). similar to states, we assume the set of observations reachable at different time steps is disjoint. additionally, we also allow the parent (pt) and child (ch) functions to change across time steps. we denote these functions at time step h by pt_h and ch_h. the disjointness property was introduced in du et al. (2019) for block mdps, a class of rich-observation non-factorized mdps. this property removes partial observability concerns and enables tractable learning. we expect this property to hold in real-world problems whenever sufficient sensor data is available to decode the state from the observation. for example, disjointness holds true for the navigation task with an overhead camera in figure 1. in this case, the image provides us with enough information to locate all objects in the room, which describes the agent's state. disjointness allows us to define, for every factor i ∈ [d], a decoder φ*_i such that φ*_i(x[ch(i)]) = s[i] whenever x[ch(i)] ∈ supp(q_i(· | s[i])). we define the shorthand φ*_i(x) := φ*_i(x[ch(i)]) whenever ch is clear from the context. lastly, we define the state decoder φ* : X → {0, 1}^d, where φ*(x)[i] = φ*_i(x). the agent interacts with the environment by taking actions according to a policy π : X → A. these interactions consist of episodes (s_1, x_1, a_1, r_1, · · · , s_H, x_H, a_H, r_H) with s_1 = (0, · · · , 0), x_h ∼ q(· | s_h), r_h = r(x_h, a_h), and s_{h+1} ∼ T(· | s_h, a_h).
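the per-factor decoders just defined can be sketched concretely. the child sets and the decoding rule below are hypothetical (factor value 1 iff any of its atoms is "on"); the point is only that disjoint atom subsets let the state decoder factorize:

```python
# ch(i): disjoint atom indices per factor (hypothetical assignment).
children = {0: (0, 1), 1: (2,), 2: (3, 4)}

def phi_i(atoms):
    # Hypothetical per-factor decoder: the factor is 1 iff any atom is "on".
    return 1 if any(atoms) else 0

def phi(x):
    """State decoder: phi(x)[i] = phi_i(x[ch(i)])."""
    return tuple(phi_i([x[k] for k in children[i]]) for i in sorted(children))

# An observation with atoms 0..4; atoms 2 and 4 are "on".
state = phi((0, 0, 1, 0, 1))
```

because each atom belongs to exactly one factor, changing atoms outside ch(i) can never change the decoded value of factor i.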
the agent never observes (s_1, · · · , s_H). technical assumptions. we make two assumptions that are specific to the factorl algorithm. the first is a margin assumption on the transition dynamics that enables us to identify different values of a factor. this assumption was introduced by du et al. (2019), and we adapt it to our setting. assumption 1 (margin assumption). for every h ∈ [H] and i ∈ [d], let u_i be the uniform distribution jointly over actions and all possible reachable values of s_{h−1}[pt(i)]. then we assume: ‖P_{u_i}(s_{h−1}[pt(i)], a | s_h[i] = 1) − P_{u_i}(s_{h−1}[pt(i)], a | s_h[i] = 0)‖_TV ≥ σ, where P_{u_i}(s_{h−1}[pt(i)], a | s_h[i]) is the backward dynamics denoting the probability over parent values and last action given s_h[i] and roll-in distribution u_i, and σ > 0 is the margin. (1the notation supp(p) denotes the support of the distribution p; formally, supp(p) = {z | p(z) > 0}.) assumption 1 captures a large set of problems, including all deterministic problems, for which the value of σ is 1. assumption 1 helps us identify the different values of a factor, but it does not help with mapping atoms to the factors from which they are emitted. in order to identify if two atoms come from the same factor, we make the following additional assumption to measure their dependence. assumption 2 (atom dependency bound). for any h ∈ [H] and u, v ∈ [m] with u ≠ v, suppose ch^−1(u) = ch^−1(v), i.e., atoms x_h[u] and x_h[v] have the same factor. then under any distribution D ∈ Δ(S_h) we have ‖P_D(x_h[u], x_h[v]) − P_D(x_h[u]) P_D(x_h[v])‖_TV ≥ β_min. the dependence assumption states that atoms emitted from the same factor will be correlated. this is true for many real-world problems. for example, consider a toy grid-based navigation task. each state factor s[i] represents a cell in the grid which can be empty (s[i] = 0) or occupied (s[i] = 1).
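the total-variation criterion in assumption 2 can be checked empirically on samples. the sketch below is a simplified stand-in (the paper performs these independence tests domain-agnostically with noise-contrastive learning, not with explicit count tables), using hypothetical binary atom samples:

```python
import numpy as np

rng = np.random.default_rng(2)

def tv_joint_vs_product(xu, xv):
    """Total-variation distance between the empirical joint distribution of
    two binary atoms and the product of their empirical marginals."""
    joint = np.zeros((2, 2))
    for a, b in zip(xu, xv):
        joint[a, b] += 1
    joint /= len(xu)
    prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return 0.5 * np.abs(joint - prod).sum()

# Atoms emitted by the same latent factor are correlated; atoms from
# independent sources are not.
z = rng.integers(0, 2, size=5000)
same_factor = tv_joint_vs_product(z, z)                  # perfectly correlated
indep = tv_joint_vs_product(rng.integers(0, 2, size=5000),
                            rng.integers(0, 2, size=5000))
```

thresholding such a statistic against β_min separates same-factor atom pairs from independent ones, which is the role the dependency bound plays in the analysis.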
in the latter case, a randomly sampled box from the set {red box, yellow box, black box} occupies its place. we expect assumption 2 to hold in this case, as pixels emitted from the same factor come from the same object and hence will be correlated. more specifically, if one pixel is red in color, then another pixel from the same cell will also be red, as the object occupying the cell is a red box. this assumption does not remove the key challenge in identifying factors, as atoms from different factors can still be dependent due to actions and state distributions from previous time steps. model class. we use two regressor classes F and G. the first regressor class F takes a pair of atoms and outputs a scalar in [0, 1]. to define the second class, we first define a decoder class Φ of functions φ : X* → {0, 1}. we allow this class to be defined on any set of atoms. this is motivated by empirical research where commonly used neural network models operate on inputs of arbitrary lengths; for example, the lstm model can operate on a text of arbitrary length (sundermeyer et al., 2012). however, this is without loss of generality, as we can define a different model class for different numbers of atoms. we also define a model class U : X × A × {0, 1} → [0, 1]. finally, we define the regressor class G as G = {(x, a, x′) ↦ u(x, a, φ(x′)) | u ∈ U, φ ∈ Φ}. we assume F and G are finite classes and derive sample complexity guarantees which scale as log |F| and log |G|. however, since we only use uniform convergence arguments, extending the guarantees to other statistical complexity measures such as rademacher complexity is straightforward. we define the class of policies π : X → A and let Π_all denote the set of all non-stationary policies of this form, which we use later to define our task. we use P_π[·] to denote the probability of an event under the distribution over episodes induced by policy π. computational oracle. we assume access to two regression oracles reg for the model classes F and G. let D1 be a dataset of triplets (x[u], x[v], y) where u and v denote two different atoms and y ∈ {0, 1}. similarly, let
D2 be a dataset of quads (x, a, x′, y) where x, x′ ∈ X*, a ∈ A, and y ∈ {0, 1}. lastly, let Ê_D[·] denote the empirical mean over dataset D. the two computational oracles compute: reg(F, D1) = argmin_{f∈F} Ê_{D1}[(f(x[u], x[v]) − y)²] and reg(G, D2) = argmin_{g∈G} Ê_{D2}[(g(x, a, x′) − y)²]. we also assume access to a Δ_pl-optimal planning oracle planner. let Ŝ = (Ŝ_1, · · · , Ŝ_H) be a learned state space, T̂ = (T̂_1, · · · , T̂_H) with T̂_h : Ŝ_{h−1} × A → Ŝ_h be the learned dynamics, and R : Ŝ → [0, 1] be a given reward function. let ϕ : Ŝ → A be a policy and V(ϕ; T̂, R) be its policy value. then for any Δ_pl > 0 the output ϕ̂ = planner(T̂, R, Δ_pl) satisfies V(ϕ̂; T̂, R) ≥ sup_ϕ V(ϕ; T̂, R) − Δ_pl, where the supremum is taken over policies of type ϕ : Ŝ → A. task definition. we focus on a reward-free setting with the goal of learning a state decoder and estimating the latent dynamics T. since the state space is exponentially large, we cannot visit every state. however, the factorization property allows us to estimate the model by reaching factor values. in fact, we show that controlling the value of at most 2κ factors is sufficient for learning the model. let C_k([d]) denote the space of all sets containing at most k different elements selected from the set [d]. for K ∈ C_{2κ}([d]) and z ∈ {0, 1}^{|K|}, we define the reachability probability η_h(K, z) for a given h ∈ [H], and the reachability parameter η_min, as: η_h(K, z) := sup_{π∈Π_all} P_π(s_h[K] = z), η_min := inf_{h∈[H]} inf_{s∈S_h} inf_{K∈C_{2κ}([d])} η_h(K, s[K]). our sample complexity scales polynomially with η_min^−1. note that we only require that if s_h[K] = z is reachable, then it is reachable with probability at least η_min, i.e., either η_h(K, z) = 0 or it is at least η_min. these requirements are similar to those made by earlier work for non-factored state
s → a space (du et al., 2019; misra et al., 2020). the key difference being that instead of requiring every state to be reachable with ηmin probability, we only require a small set of factor values to be reachable. d, then probability of for reference, if every policy induces a uniform distribution over d but the probability of two factors taking certain values is only 0.25. this visiting any state is 2− gives us a more practical value for ηmin. s besides estimating the dynamics and learning a decoder, we also learn an α-policy cover to enable exploration of different reachable values of factors. we define this below: definition 3 (policy cover). a set of policies ψ is an α-policy cover of s ∀ ∈ sh, k ∈ c pπ(sh[ ] = s[ k k sup ψ π sh for any α > 0 and h if: ]). αηh( , s[ k k discovering emission structure with contrastive learning | 4 | [
108.299,
554.4386768,
484.0596029,
566.3938768
] |
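the least-squares regression oracles reg(F, D1) and reg(G, D2) assumed in the factorl setup above can be sketched generically for a finite class. the constant-predictor class and the toy dataset below are hypothetical; a real instantiation would search a neural model class:

```python
def reg_oracle(F, D):
    """Least-squares regression oracle over a finite class F:
    returns argmin_{f in F} of the empirical mean of (f(x) - y)^2 over D."""
    def empirical_risk(f):
        return sum((f(x) - y) ** 2 for x, y in D) / len(D)
    return min(F, key=empirical_risk)

# Hypothetical finite class: constant predictors on a grid of values in [0, 1].
F = [lambda x, c=c / 10: c for c in range(11)]
# Toy dataset with binary targets whose mean is 0.8.
D = [(None, 0.0), (None, 1.0), (None, 1.0), (None, 1.0), (None, 1.0)]
best = reg_oracle(F, D)
```

for squared loss, the minimizer over constants is the target mean, so the oracle should return the grid point closest to 0.8; log |F| is exactly the statistical complexity term that appears in the sample complexity bounds.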
SEcSahl0Ql.pdf | 2,023 | 0 | iterative circuit repair against formal specifications matthias cosler cispa helmholtz center for information security matthias.cosler@cispa.de frederik schmitt cispa helmholtz center for information security frederik.schmitt@cispa.de christopher hahn stanford university hahn@cs.stanford.edu bernd finkbeiner cispa helmholtz center for information security finkbeiner@cispa.de abstract we present a deep learning approach for repairing sequential circuits against formal specifications given in linear-time temporal logic (ltl). given a defective circuit and its formal specification, we train transformer models to output circuits that satisfy the corresponding specification. we propose a separated hierarchical transformer for multimodal representation learning of the formal specification and the circuit. we introduce a data generation algorithm that enables generalization to more complex specifications and out-of-distribution datasets. in addition, our proposed repair mechanism significantly improves the automated synthesis of circuits from ltl specifications with transformers. it improves the state-of-the-art by 6.8 percentage points on held-out instances and 11.8 percentage points on an out-of-distribution dataset from the annual reactive synthesis competition. introduction sequential circuit repair (katz & manna, 1975) refers to the task of automatically computing, given a formal specification and a defective circuit implementation, an implementation that satisfies the formal specification. circuit repair finds application especially in formal verification. examples are automated circuit debugging after model checking (clarke, 1997) or correcting faulty circuit implementations predicted by heuristics such as neural networks (schmitt et al., 2021b).
in this paper, we design and study a deep learning approach to circuit repair for linear-time temporal logic (ltl) specifications (pnueli, 1977) that also improves the state-of-the-art of synthesizing sequential circuits with neural networks. we consider sequential circuit implementations that continuously interact with their environments. for example, an arbiter that manages access to a shared resource interacts with processes by giving out mutually exclusive grants to the shared resource. linear-time temporal logic (ltl) and its dialects (e.g., stl maler & nickovic (2004) or ctl clarke & emerson (1981)) are widely used in academia and industry to specify the behavior of sequential circuits (e.g., godhal et al. (2013); ieee (2005); horak et al. (2021)). a typical example is the response property □(r → ◇g), stating that it always (□) holds that request r is eventually (◇) answered by grant g. we can specify an arbiter that manages the access to a shared resource for four processes by combining response patterns for requests r0, . . . , r3 and grants g0, . . . , g3 with a mutual exclusion property as follows: ⋀_{0≤i≤3} □(ri → ◇gi) (response properties) ∧ ⋀_{i≠j} □¬(gi ∧ gj) (mutual exclusion property). a possible implementation of this specification is a circuit that gives grants based on a round-robin scheduler. however, running neural reactive synthesis (schmitt et al., 2021b) on this specification results in a defective circuit as shown in figure 1a. after model checking the implementation, we observe that the circuit is not keeping track of counting (missing an and gate) and that the mutual exclusion property is violated (the same variable controls grants g0 and g1). (a) faulty circuit, predicted by synthesis model. (b) faulty circuit, first iteration of repair model. (c) correct circuit. final prediction of the repair model in the second iteration (dot visualization on the left, model's output in aiger on the right).
[figure 1 node labels: inputs i0 and r0–r3, latches l0 and l1, outputs g0–g3 and o4, and and-gates.] figure 1: circuit representations of the 4-process arbiter implementation in dot visualizations: the triangles represent inputs and outputs, the rectangles represent variables, the diamond-shaped nodes represent latches (flip-flops), ovals represent and gates, and the black dots represent inverters (not gates). the output of our repair model is given as an aiger circuit (bottom right). we present the first deep learning approach to repair such faulty circuits, inspired by the successful application of deep learning to the ltl trace generation (hahn et al., 2021) and reactive synthesis problem (schmitt et al., 2021b). we introduce a new transformer architecture, the separated hierarchical transformer, that accounts for the different characteristics of the problem's input. the separated hierarchical transformer combines the advantages of the hierarchical transformer (li et al., 2021) with the multimodal representation learning of an ltl specification and a faulty circuit. in particular, it utilizes that ltl specifications typically consist of reoccurring patterns. this architecture can successfully be trained on the circuit repair problem. our model, for example, produces a correct circuit implementation of the round-robin strategy by repairing the faulty circuit in figure 1a in only two iterations. each iteration predicts a circuit based on the specification and a faulty circuit as input. the result of the first iteration is shown in figure 1b. the circuit remains faulty, with two of the four grants still controlled by the same variable.
progress was made, however, towards a functioning counter: latch l1 now consists of a combination of and-gates and inverters expressive enough to represent a counter. the second iteration finally results in a correct implementation, as shown in figure 1c. to effectively train and enable further research on repair models, we provide open-source datasets and our open-source implementation for the supervised training of the circuit repair problem1. we demonstrate that the trained separated hierarchical transformer architecture generalizes to unseen specifications and faulty circuits. further, we show that our approach can be combined with the existing neural method for synthesizing sequential circuits (schmitt et al., 2021b) by repairing its mispredictions, improving the overall accuracy substantially. we made a significant improvement of 6.8 percentage points to a total of 84% on held-out instances, while an even more significant improvement was made on out-of-distribution datasets with 11.8 percentage points on samples from the annual reactive synthesis competition syntcomp (jacobs et al., 2022a). 1https://github.com/reactive-systems/circuit-repair related work circuit repair. the repair problem is an active field of research dating back to katz & manna (1975). jobstmann et al. (2005; 2012) show a game-based approach to repair programs using ltl specifications. baumeister et al. (2020) propose an approach for synthesizing reactive systems from ltl specifications iteratively through repair steps. ahmad et al. (2022) presented a framework for automatically repairing defects in hardware design languages like verilog. staber et al. (2005) combine fault localization and correction for sequential systems with ltl specifications. deep learning for temporal logics and hardware. hahn et al. (2021); schmitt et al. (2021a) initiated the study of deep learning for temporal logics, showing that a transformer can understand the semantics of temporal and propositional logics.
schmitt et al. (2021b) successfully applied the transformer to the reactive synthesis problem. kreber & hahn (2021) showed that (w)gans equipped with transformer encoders can generate sensible and challenging training data for ltl problems. luo et al. apply deep learning to the ltlf satisfiability problem. mukherjee et al. (2022) present a deep learning approach to learn graph representations for ltl model checking. vasudevan et al. (2021) applied deep learning to learn semantic abstractions of hardware designs. in hahn et al. (2022), the authors generate formal specifications from unstructured natural language. reactive synthesis. the hardware synthesis problem traces back to alonzo church in 1957 (church, 1963). büchi & landweber (1990) provided solutions, although only theoretically, already in 1969. since then, significant advances in the field have been made algorithmically, e.g., with a quasi-polynomial algorithm for parity games (calude et al., 2020), conceptually with distributed (pnueli & rosner, 1990) and bounded synthesis (finkbeiner & schewe, 2013), and on efficient fragments, e.g., gr(1) synthesis (piterman et al., 2006). synthesis algorithms have been developed for hyperproperties (finkbeiner et al., 2020). recently, deep learning has been successfully applied to the hardware synthesis problem (schmitt et al., 2021b). compared to classical synthesis, a deep learning approach can be more efficient, with the tradeoff of being inherently incomplete. the field can build on a rich supply of tools (e.g. (bohy et al., 2012; faymonville et al., 2017; meyer et al., 2018a)). a yearly competition (syntcomp) (jacobs & pérez) is held at cav. datasets we build on the reactive synthesis dataset from schmitt et al. (2021b), where each sample consists of two entries: a formal specification in ltl and a target circuit that implements the specification given in the aiger format.
we construct a dataset for the circuit repair problem that consists of three entries: a formal specification in ltl, a defective circuit, and the corrected target circuit. in section 3.1, we give details of the domain-specific languages for the circuit repair problem's input. section 3.2 describes the data generation process and summarizes the dataset that resulted in the best-performing model (see section 6 for ablations). we approach the generation of the dataset from two angles: 1) we collect mispredictions, i.e., faulty circuits predicted by a neural model, and 2) we introduce semantic errors to correct circuits in a way that they mimic human mistakes. linear-time temporal logic (ltl) and and-inverter graphs (aiger) ltl specifications. the specification consists of two lists of sub-specifications: assumptions and guarantees. assumptions pose restrictions on the environment behavior, while guarantees describe how the circuit has to react to the environment. they jointly build an ltl specification as follows: spec := (assumption1 ∧ · · · ∧ assumptionn) → (guarantee1 ∧ · · · ∧ guaranteem). a specification is called realizable if there exists a circuit implementing the required behavior and called unrealizable if no such circuit exists. for example, a specification can be unrealizable if there are contradictions in the required behavior, or if the environment assumptions are not restrictive enough. formally, an ltl specification is defined over traces through the circuit. a circuit c satisfies an ltl specification φ if all possible traces through the circuit traces(c) satisfy the specification, i.e., if ∀t ∈ traces(c). t |= φ. for example, the ltl formula □(¬g0 ∨ ¬g1) from the arbiter example in section 3.1 requires all traces through the circuit to respect mutually exclusive behavior between g0 and g1. if a specification is realizable, the target circuit represents the implementation, and if a specification is unrealizable, the target circuit represents the counter-strategy of the environment showing that no such implementation exists.
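the two property shapes used throughout (invariants like mutual exclusion, and response patterns) can be illustrated with a small trace checker. this is a finite-trace approximation sketch only: ltl is defined over infinite traces, and the step encoding (r0, r1, g0, g1) and the toy trace are hypothetical:

```python
def always(pred, trace):
    """Finite-trace approximation of the LTL invariant 'globally pred'."""
    return all(pred(step) for step in trace)

def response(req, grant, trace):
    """Finite-trace approximation of 'globally (req -> eventually grant)':
    every step with a request must be followed (same step or later) by a
    step with a grant."""
    return all(any(grant(s) for s in trace[i:])
               for i, step in enumerate(trace) if req(step))

# Hypothetical 2-process trace: each step is (r0, r1, g0, g1).
trace = [(1, 1, 1, 0), (0, 0, 0, 1), (1, 0, 1, 0)]
mutex = always(lambda s: not (s[2] and s[3]), trace)     # never g0 and g1
resp0 = response(lambda s: s[0], lambda s: s[2], trace)  # r0 answered by g0
```

a model checker such as nuxmv performs the exact infinite-trace version of this check against the aiger circuit; the sketch only conveys the semantics of the two property patterns.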
if a specification is realizable, the target circuit represents the implementation, and if a specification is unrealizable, the target circuit represents the counter-strategy of the environment showing that no such implementation exists. the formal semantics of ltl can be found in appendix a. algorithm 1 algorithm for introducing errors to correct circuit implementations. 1: input: circuit c, number of changes standard deviation σc, maximum number of changes mc, new variable number standard deviation σv, delete line probability pdelete 2: output: circuit c ▷ sample from discrete truncated gaussian with probability pdelete do l ∼ u(1, number of lines in c) remove line l of c else pos ∼ u(1, number of positions in c) var′ ← var ← variable number at position pos in c while var = var′ do var′ ∼ n d(var, σv 2, 0, 61) replace variable number at position pos in c with var′ ▷ sample line number uniformly ▷ sample position uniformly ▷ sample from discrete truncated gaussian aiger format. the defective and target circuits, are in a text-based representation of and-inverter graphs called aiger format (brummayer et al., 2007); see, for example, bottom-right of figure 1. a line in the aiger format defines nodes such as latches (flip-flops) and and-gates by defining the inputs and outputs of the respective node. connections between nodes are described by the variable numbers used as the input and output of nodes. a latch is defined by one input and one output connection, whereas two inputs and one output connection define an and gate. inputs and outputs of the whole circuit are defined through lines with a single variable number that describes the connection to a node. the parity of the variable number implicitly gives negations. hence, two consecutive numbers describe the same connection, with odd numbers representing the negated value of the preceding even variable number. the numbers 0 and 1 are the constants false and true. 
the full definition of aiger circuits can be found in appendix b. data generation we replicated the neural circuit synthesis model of schmitt et al. (2021b) and evaluated the model with all specifications from their dataset while keeping the training, validation, and test split separate. we evaluated with a beam size of 3, resulting in a dataset repairraw of roughly 580 000 specifications and corresponding (possibly faulty) circuits in the training split and about 72 000 in the validation and test split, respectively. we model-check each predicted circuit against its specification with nuxmv (cavada et al., 2014) to classify defective implementations into the following classes. a sample is violated if the predicted circuit is defective, i.e., violates the specification (55%). a sample is matching if the prediction of the synthesis model is completely identical to the target circuit in the dataset (16%). lastly, a sample is satisfied when the predicted circuit satisfies the specification (or represents a correct counter-strategy) but is no match (29%), which regularly happens as a specification has multiple correct implementations. for example, consider our round-robin scheduler from the introduction: the specification does not specify the order in which the processes are given access to the resource. we construct our final dataset from repairraw in two steps. in the first step, we consider the violating samples, i.e., mispredictions of the neural circuit synthesis network, which are natural candidates for a circuit repair dataset. in the second step, we introduce mistakes inspired by human errors into correct implementations (see figure 6 in the appendix for an overview of the dataset generation and its parameters). in the following, we describe these steps in detail. mispredictions of neural circuit synthesis. we first consider the violating samples from repairraw. 
just as a specification can have multiple correct implementations, a defective circuit has multiple possible fixes, leading to correct yet different implementations. for a given defective circuit, a fix can thus either be small and straightforward or lengthy and complicated. in a supervised learning setting, this leads to the issue of misleading target circuits: samples where only a lengthy and complicated fix of the faulty circuit leads to the target circuit, although a minor fix would also lead to a correct but different implementation.

figure 2: the structure of global and local layers in the separated hierarchical transformer. for simplicity, shown for a single assumption, a single guarantee, and only two tokens each.

we identify misleading targets by searching for alternative solutions with the synthesis model (up to a beam size of 4). if the model finds a correct alternative circuit with a smaller levenshtein distance (see appendix c for a definition) to the faulty circuit, a fix leading to the alternative circuit is smaller than a fix leading to the original target. the target circuit is then replaced with the alternative circuit. we select all samples with a levenshtein distance to the target circuit of ≤ 50 for the final dataset.

introducing errors. we propose algorithm 1, which probabilistically introduces human-like mistakes into correct circuits. such mistakes include missing latches, and-gates, or inverters, and miswired connections between the components. first, we determine the number of mistakes, or changes, to introduce into the circuit. for that, we sample from a discrete truncated normal distribution around zero, with a standard deviation of 7.5 and bounds from 1 to 50.
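the sampling of the number of changes just described, together with the per-change mutations of algorithm 1, can be sketched in python; the rejection-based sampler and the list-of-variable-numbers circuit representation are our own assumptions, not the paper's implementation:

```python
import random

def sample_trunc_gauss(mean, sigma, lo, hi, exclude_mean=False):
    # discrete truncated normal via rejection sampling (an assumption;
    # the paper does not state how the distribution is sampled)
    while True:
        x = round(random.gauss(mean, sigma))
        if lo <= x <= hi and not (exclude_mean and x == mean):
            return x

def introduce_errors(lines, sigma_c=7.5, max_changes=50,
                     sigma_v=10.0, p_delete=0.2, max_var=61):
    # sketch of algorithm 1 on a circuit given as a list of lines, each
    # line being a list of variable numbers (inputs/outputs are assumed
    # to have been excluded by the caller, matching the paper's rule)
    lines = [line[:] for line in lines]
    n_changes = sample_trunc_gauss(0, sigma_c, 1, max_changes)
    for _ in range(n_changes):
        if not lines:
            break
        if random.random() < p_delete:
            del lines[random.randrange(len(lines))]   # remove a whole line
        else:
            i = random.randrange(len(lines))          # pick a line ...
            j = random.randrange(len(lines[i]))       # ... and a position
            old = lines[i][j]
            lines[i][j] = sample_trunc_gauss(old, sigma_v, 0, max_var,
                                             exclude_mean=True)
    return lines
```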
for each change, we flip a coin with probability pdelete = 0.2 for deleting a line from the aiger circuit and 1 − pdelete = 0.8 for changing a variable number. for deleting, we uniformly choose a line from the aiger circuit to remove. we do not remove inputs or outputs, to stay consistent with the dataset. for changing a variable number, we uniformly choose a position of a variable number. the position can be an input, output, inbound edge(s), or outbound edge of a latch or and gate. we replace the variable number at this position with a new variable number, determined by sampling from a discrete truncated normal distribution around the old variable number, with a standard deviation of 10 and bounds given by the minimal and maximal possible variable numbers in the dataset (0 to 61). the new variable number cannot be the mean itself, which ensures a definite change. for a visualization of the discrete truncated normal distributions, see figure 7 in the appendix. lastly, we spot-check altered circuits by model checking to determine whether the introduced changes indeed create a faulty circuit. in less than 2% of the cases, the circuit still satisfies the specification.

final dataset. in the final dataset repair, 61% of the samples contain circuits with errors introduced by algorithm 1, while the others are based on mispredicted circuits. in 38% of the cases, the samples have a levenshtein distance of less than 10 between the repair circuit and the target circuit. in total, the levenshtein distance in the dataset has a mean of 15.7 with a standard deviation of 12.77, and the median is 13 (see figure 8 in appendix d for its composition).

architecture

in this section, we introduce the separated hierarchical transformer architecture, a variation of the hierarchical transformer (li et al., 2021), and provide further details on our architectural setup.
the hierarchical transformer has been shown to be superior to a vanilla transformer in many applications, including logical and mathematical problems (li et al., 2021; schmitt et al., 2021b). the hierarchical transformer, as well as the novel separated hierarchical transformer, is invariant to the order of the assumptions and guarantees in the specification.

separated hierarchical transformer

the encoder of a hierarchical transformer contains two types of hierarchically structured layers. local layers only see parts of the input, while global layers handle the combined output of all local layers. contrary to the original transformer, the input is partitioned before being fed into the local layers. a positional encoding is applied separately to each partition of the input. model parameters are shared between the local layers, but no attention can be computed between tokens in different partitions. the hierarchical transformer has been beneficial for understanding repetitive structures in mathematics (li et al., 2021) and has been shown to be superior for processing ltl specifications (schmitt et al., 2021b). we extend the hierarchical transformer to a separated hierarchical transformer, which has two types of local layers: each separated local layer is an independent encoder; therefore, separated local layers do not share any model parameters, and attention computations are done independently for each local layer. a visualization of the proposed architecture is shown in figure 2. shared local layers are identical to local layers in the hierarchical transformer. a separated local layer contains one or more shared local layers. the results of the separated and shared local layers are concatenated and fed into the global layer. while the number of shared local layers does not change the model size, multiple separated local layers introduce slightly more model parameters.
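the partitioned attention described above can be sketched in numpy; all names and shapes are illustrative, the scaling and softmax follow vaswani et al. (2017), and residual connections, feed-forward blocks, and multi-head splitting are omitted:

```python
import numpy as np

def attention(q, k, v):
    # scaled dot-product attention (vaswani et al., 2017)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def separated_local_layer(properties, circuit, w_shared, w_circuit):
    # w_shared / w_circuit are (wq, wk, wv) triples: the first is shared
    # by all assumptions and guarantees, the second is used only for the
    # circuit partition. attention never crosses partition borders.
    outs = [attention(h @ w_shared[0], h @ w_shared[1], h @ w_shared[2])
            for h in properties]
    outs.append(attention(circuit @ w_circuit[0], circuit @ w_circuit[1],
                          circuit @ w_circuit[2]))
    # the concatenated result is what the global layers consume
    return np.concatenate(outs, axis=0)
```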
the separated hierarchical transformer is thus better suited to handling multiple independent inputs that differ in structure, type, or length.

architectural setup

we process the faulty circuit with its own separated local layer. the specification is partitioned into its guarantees and assumptions, which we feed into shared local layers. let attention be the attention function of vaswani et al. (2017) (see appendix k). identifying the assumptions assumption_1, ..., assumption_n and guarantees guarantee_1, ..., guarantee_m with specification properties p_1, ..., p_{n+m}, the following computation is performed in a shared local layer:

attention(h_pi wq_s, h_pi wk_s, h_pi wv_s) for p_i ∈ {p_1, ..., p_{n+m}} ,

where h_pi denotes the stacked representations of all positions of specification property p_i. therefore, the attention computation is restricted to tokens within each guarantee and within each assumption, while the learned parameters wq_s, wk_s, wv_s are shared between all guarantees and assumptions. the separated local layer that processes the circuit performs the attention computation

attention(h_c wq_c, h_c wk_c, h_c wv_c) ,

where h_c denotes the stacked representations of all positions of the circuit. therefore, the computation is performed over all tokens in the circuit, but the parameters wq_c, wk_c, wv_c are different from the parameters for the specification (see figure 2).

for embedding and tokenization, we use a domain-specific language (dsl) of ltl formulas and aiger circuits with only a few symbols. for every symbol in the dsl, we introduce a token. variables in properties (i.e., assumptions and guarantees) are limited to five inputs i0, ..., i4 and five outputs o0, ..., o4, for each of which we introduce a token.
in the aiger format (used for the faulty circuit and the target circuit), we fix the variable numbers to the range of 0 to 61, thereby indirectly limiting the size of the circuit while allowing for reasonable expressiveness. we set a special token as a prefix to the circuit embedding to encode the presumed realizability of the specification. this determines whether the circuit represents a satisfying circuit or a counter-strategy. we embed the tokens by applying a one-hot encoding, which we multiply with a learned embedding matrix. properties have a tree positional encoding (shiv & quirk, 2019), as used for ltl formulas by hahn et al. (2021). this encoding incorporates the tree structure of the ltl formula into the positional encoding and allows easy calculations between tree relations. for circuits, we use the standard linear positional encoding from vaswani et al. (2017).

experiments

in this section, we report on experimental results. we first describe our training setup in section 5.1 before evaluating the model with two different methods. the model evaluation evaluates the repair model on the repair dataset distribution (section 5.2). in the synthesis pipeline evaluation, the repair model is evaluated on the predictions of the synthesis model and then repeatedly evaluated on its own predictions (section 5.3).

figure 3: accuracy broken down by the levenshtein distance between faulty and target circuit.

we differentiate between syntactic and semantic accuracy, following hahn et al. (2021). a sample is considered semantically correct if the prediction satisfies the specification. we consider the prediction syntactically correct if it is identical to the target.
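given per-sample model-checking verdicts and exact-match flags, the two accuracy notions amount to simple ratios (a sketch with illustrative names):

```python
def accuracies(samples):
    # samples: one (satisfies_spec, is_exact_match) pair per test sample,
    # taken from the best beam; python booleans sum as 0/1
    n = len(samples)
    semantic = sum(sat for sat, _ in samples) / n     # satisfies the spec
    syntactic = sum(match for _, match in samples) / n  # identical to target
    return semantic, syntactic
```

note that syntactic correctness implies semantic correctness, so the semantic accuracy is always at least the syntactic accuracy.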
training setup

we trained a separated hierarchical transformer with 4 heads in all attention layers, 4 stacked local layers in both separated local layers, and 4 stacked layers in the global layer. the decoder stack contains 8 stacked decoders. the embedding size in the decoder and encoder is 256, and all feed-forward networks have a size of 1024 and use the rectified linear unit (relu) activation function. we use the adam optimizer (kingma & ba, 2017) with β1 = 0.9, β2 = 0.98 and ϵ = 10−9 and 4000 warmup steps with a linearly increasing learning rate, as proposed in vaswani et al. (2017). we trained on a single gpu of an nvidia dgx a100 system with a batch size of 256 for 20 000 steps. we restricted the specification input to 5 inputs and 5 outputs, no more than 12 properties (assumptions + guarantees), and no properties whose abstract syntax tree (ast) exceeds a size of 25.

model evaluation

we evaluated the model up to a beam size of 16. the key results of the model evaluation can be found at the top of table 1. with a beam size of 16, the model outputs a correct implementation in 84% of the cases on a single try. when analyzing the beams, we found that the model shows enormous variety when fixing the circuits: almost half of the beams result in correct implementations. to investigate whether the model performs a genuine repair operation, we identify samples where the model copied the defective circuit (violated (copy)). the model only copied 31 of 1024 samples. additionally, we track whether the predicted circuit contains syntax errors, which rarely happens (a total of 8 errors over all beams). we provide insight into the model's performance by analyzing a) what exactly makes a sample challenging to solve for the model and b) whether the model makes significant progress towards the target circuit even when the prediction violates the specification.

difficulty measures.
we consider three parameters to measure the difficulty of solving a specific repair problem: the size of the specification (the ltl formula's ast), the size of the target circuit (and gates + latches), and the levenshtein distance between the defective circuit and the target circuit. the levenshtein distance is the dominant indicator of a sample's difficulty (see figure 3). however, the specification and circuit sizes are, perhaps surprisingly, less of a factor (see figure 11 and figure 10 in the appendix). this indicates that our approach has the potential to scale up to larger circuits when increasing the model size.

improvement measures. we approximate, semantically and syntactically, whether a violating prediction is still an improvement over the faulty input circuit. for syntactic improvement, we calculate the difference between the distance of the prediction and the target circuit lev(cp, ct) and the distance of the faulty input and the target circuit lev(cf, ct). if the difference is below zero, i.e., lev(cp, ct) − lev(cf, ct) < 0, the model syntactically improved the faulty circuit towards the target circuit. on our test set, violating predictions improved by −9.98 edits on average.

figure 4: improvements on the reactive synthesis held-out test set (see test in table 1), broken down by the size of the specification's ast. we aggregate the best result from all iterations over 16 beams. the annotations new and unchanged indicate whether the status improved from the evaluation of the synthesis model to the evaluation of the repair model.

for semantic improvement, we obtained a set of sub-specifications by creating a new specification with each
guarantee from the original specification: let a1 to an be the original assumptions and g1 to gm the original guarantees; the set of sub-specifications is then defined as {(a1 ∧ · · · ∧ an) → gi | 1 ≤ i ≤ m}. the approximation is that the more sub-specifications a circuit satisfies, the closer it is semantically to a correct circuit. on our test set, the prediction satisfied more sub-specifications than the faulty input in 75.9% of the cases and fewer in 2.4% of the cases. for more detailed insight, we provide violin plots for syntactic (figure 12) and semantic (figure 13) improvement in the appendix. since, under both approximations, even violating predictions are an improvement over the faulty input, this poses the natural question of whether the model's performance can be increased by iteratively querying the model on its own predictions. in the next section, we investigate this in more depth by applying our repair model iteratively to the predictions of a neural circuit synthesis model, including real-world examples.

synthesis pipeline evaluation

we demonstrate how our approach can be used to improve the current state of the art for neural reactive synthesis (see figure 5). we first evaluate the synthesis model we replicated from schmitt et al. (2021b). if the predicted circuit violates the specification, we feed the specification together with the violating circuit into our repair model. if the prediction still violates the specification after applying the repair model once, we re-feed the specification with the new violating circuit into the repair model until the sample is solved. using the presented pipeline, we improve the results of schmitt et al. (2021b) significantly, as shown in the bottom half of table 1. we evaluate held-out samples from the synthesis dataset and out-of-distribution benchmarks and filter out samples that exceed our input restrictions (see section 5.1).
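the sub-specification construction used for the improvement analysis can be sketched over plain formula strings (the concrete operator syntax is illustrative):

```python
def sub_specifications(assumptions, guarantees):
    # build {(a1 ∧ … ∧ an) → gi | 1 ≤ i ≤ m}; formulas are plain strings
    # here, with "&" for conjunction and "->" for implication
    if assumptions:
        premise = " & ".join(f"({a})" for a in assumptions) + " -> "
    else:
        premise = ""
    return [premise + f"({g})" for g in guarantees]
```

each resulting formula can then be model-checked individually against a circuit to count satisfied sub-specifications.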
the datasets test contain randomly sampled held-out instances from the repair and the neural synthesis dataset, respectively. figure 4 shows an in-depth analysis of the status changes of the samples when applying the synthesis pipeline.

table 1: syntactic and semantic accuracy of the model (top) and pipeline (bottom) evaluation. the table reports beam size, semantic accuracy, syntactic accuracy, and correct beams per sample for the synthesis model, after the first iteration, and after up to n iterations, on the test (repair dataset), test (synthesis dataset), timeouts, syntcomp, and smart home benchmarks.

light and dark green represent the instances that were additionally solved with our repair approach; gray represents the instances that were already initially solved by the synthesis network. the problem becomes increasingly challenging with a larger target circuit size. in total, we achieve an improvement of 6.8 percentage points. to show that our improvement over the state of the art is not due to scaling but rather to a combination of new training data, architecture, and iterative evaluation, we additionally scaled the model from schmitt et al. (2021b) to match or exceed the number of parameters of our model. the parameter-matched models lead to only insignificant improvements over the base model (see table 2 in the appendix). we further identified a set of held-out samples where our approach performs significantly better than the classical state-of-the-art synthesizer tool strix (meyer et al., 2018b): samples in timeouts could not be solved by strix within 1 hour; on these, we still achieve 34.2% with an improvement of 8.1 percentage points. even more significant improvements can be observed on real-world samples from the annual synthesis competitions and out-of-distribution benchmarks: the dataset smart home consists of benchmarks for synthesizing properties of smart home applications (geier et al., 2022), where we improve by 11.8 percentage points.
the dataset syntcomp contains benchmarks from the annual reactive synthesis competition (jacobs et al., 2022a;b), where the model pipeline improves the state of the art by 23.8 percentage points, and by 19 percentage points already when applied once.

ablations

figure 5: pipeline structure. the neural circuit synthesis model (a hierarchical transformer) predicts a circuit from the specification (in ltl); if model checking shows the predicted aiger circuit to be faulty, the iterative circuit repair model (a separated hierarchical transformer) is applied repeatedly to the specification and the faulty circuit.

we performed various ablation studies that the interested reader can find in the appendix. in particular, we parameterized our data generation for constructing the circuit repair dataset (see figure 6 in appendix d). an extensive collection of over 100 generated datasets is available through our code on github2. we trained various models based on these datasets and different hyperparameters, also available on github. a hyperparameter study can be found in figure 3 in appendix i. an in-depth analysis of the results of different models tested in the synthesis pipeline can be found in appendix j.

conclusion

in this paper, we studied the first application of neural networks to the circuit repair problem. we introduced the separated hierarchical transformer to account for the multimodal input of the problem. we provided a data generation method with a novel algorithm for introducing errors into circuit implementations. a separated hierarchical transformer model was successfully trained to repair defective sequential circuits. the resulting model was used to significantly improve the state of the art in neural circuit synthesis. additionally, our experiments indicate that the separated hierarchical transformer has the potential to scale up to even larger circuits. our approach can find applications in the broader hardware verification community.
possible applications include the automated debugging of defective hardware after model checking or testing. due to its efficiency, a well-performing neural repair method reduces the necessary human interaction in the hardware design process. the benefit of a deep learning approach to the circuit repair problem lies in the scalability and generalization capabilities of deep neural networks: this allows for efficient re-feeding of faulty circuits into the network, whereas classical approaches suffer from the problem's high computational complexity. moreover, neural networks generalize beyond classical repair operations, whereas classical approaches are limited in their transformations, such as to replacing boolean functions. future work includes, for example, the extension of our approach to hardware description languages, such as vhdl or verilog, and the extension to other specification languages that express security policies, such as noninterference or observational determinism.

2https://github.com/reactive-systems/circuit-repair

reproducibility statement

all models, datasets, code, and guides are available in the corresponding code repository. all our datasets and models mentioned in the paper, the code of the data generation method, and the code for training new models as well as evaluating existing models are licensed under the open-source mit license. multiple jupyter notebooks guide the interested reader through the use of the code, allowing low-effort reproducibility of all our results and encouraging fellow researchers to use, extend, and build on our work.

acknowledgments

this work was partially supported by the european research council (erc) grant hyper (no. 101055412).

references

hammad ahmad, yu huang, and westley weimer. cirfix: automatically repairing defects in hardware design code. in proceedings of the 27th acm international conference on architectural support for programming languages and operating systems, pp. 990–1003, 2022.
tom baumeister, bernd finkbeiner, and hazem torfah. explainable reactive synthesis. in international symposium on automated technology for verification and analysis, pp. 413–428. springer, 2020.

aaron bohy, véronique bruyère, emmanuel filiot, naiyong jin, and jean-françois raskin. acacia+, a tool for ltl synthesis. in international conference on computer aided verification, pp. 652–657. springer, 2012.

robert brummayer, alessandro cimatti, koen claessen, niklas een, marc herbstritt, hyondeuk kim, toni jussila, ken mcmillan, alan mishchenko, and fabio somenzi. the aiger and-inverter graph (aig) format version 20070427. 2007.

j. richard büchi and lawrence h. landweber. solving sequential conditions by finite-state strategies. in the collected works of j. richard büchi, pp. 525–541. springer, 1990.

cristian s. calude, sanjay jain, bakhadyr khoussainov, wei li, and frank stephan. deciding parity games in quasi-polynomial time. siam journal on computing, (0):stoc17–152, 2020.

roberto cavada, alessandro cimatti, michele dorigatti, alberto griggio, alessandro mariotti, andrea micheli, sergio mover, marco roveri, and stefano tonetta. the nuxmv symbolic model checker. in armin biere and roderick bloem (eds.), computer aided verification, 26th international conference, cav 2014, held as part of the vienna summer of logic, vsl 2014, vienna, austria, july 18-22, 2014, proceedings, volume 8559 of lecture notes in computer science, pp. 334–342. springer, 2014. doi: 10.1007/978-3-319-08867-9_22. url https://doi.org/10.1007/978-3-319-08867-9_22.

alonzo church. application of recursive arithmetic to the problem of circuit synthesis. 1963.

edmund m. clarke. model checking. in international conference on foundations of software technology and theoretical computer science, pp. 54–56. springer, 1997.

edmund m. clarke and e. allen emerson. design and synthesis of synchronization skeletons using branching time temporal logic. in workshop on logic of programs, pp. 52–71. springer, 1981.
peter faymonville, bernd finkbeiner, and leander tentrup. bosy: an experimentation framework for bounded synthesis. in international conference on computer aided verification, pp. 325–332. springer, 2017.

bernd finkbeiner and sven schewe. bounded synthesis. international journal on software tools for technology transfer.

bernd finkbeiner, christopher hahn, philip lukert, marvin stenger, and leander tentrup. synthesis from hyperproperties. acta informatica, 57(1):137–163, 2020.

gideon geier, philippe heim, felix klein, and marvin stenger. j.a.r.v.i.s. tsl/tlsf benchmark suite, 2022. url https://github.com/syntcomp/benchmarks.

yashdeep godhal, krishnendu chatterjee, and thomas a. henzinger.

christopher hahn, frederik schmitt, jens u. kreber, markus n. rabe, and bernd finkbeiner. teaching temporal logics to neural networks. in international conference on learning representations, 2021. url https://openreview.net/forum?id=docqk-f4byz.

christopher hahn, frederik schmitt, julia j. tillman, niklas metzger, julian siber, and bernd finkbeiner. formal specifications from natural language. arxiv preprint arxiv:2206.01962, 2022.

tom horak, norine coenen, niklas metzger, christopher hahn, tamara flemisch, julián méndez, dennis dimov, bernd finkbeiner, and raimund dachselt. visual analysis of hyperproperties for understanding model checking results. ieee transactions on visualization and computer graphics, 28(1):357–367, 2021.

ieee. ieee standard for property specification language (psl).

sven jacobs and guillermo a. pérez. the 7th reactive synthesis competition: syntcomp 2020. http://www.syntcomp.org/syntcomp-2020-results/.

swen jacobs, guillermo a.
pérez, remco abraham, véronique bruyère, michael cadilhac, maximilien colange, charly delfosse, tom van dijk, alexandre duret-lutz, peter faymonville, bernd finkbeiner, ayrat khalimov, felix klein, michael luttenberger, klara meyer, thibaud michaud, adrien pommellet, florian renkin, philipp schlehuber-caissier, mouhammad sakr, salomon sickert, gaetan staquet, clement tamines, leander tentrup, and adam walker. syntcomp 2020 results | the reactive synthesis competition, june 2022a. url http://www.syntcomp.org/syntcomp-2020-results/.

swen jacobs, guillermo a. pérez, remco abraham, véronique bruyère, michael cadilhac, maximilien colange, charly delfosse, tom van dijk, alexandre duret-lutz, peter faymonville, bernd finkbeiner, ayrat khalimov, felix klein, michael luttenberger, klara meyer, thibaud michaud, adrien pommellet, florian renkin, philipp schlehuber-caissier, mouhammad sakr, salomon sickert, gaetan staquet, clement tamines, leander tentrup, and adam walker. the reactive synthesis competition (syntcomp): 2018-2021, june 2022b. url http://arxiv.org/abs/2206.00251. arxiv:2206.00251 [cs].

barbara jobstmann, andreas griesmayer, and roderick bloem. program repair as a game. in international conference on computer aided verification, pp. 226–238. springer, 2005.

barbara jobstmann, stefan staber, andreas griesmayer, and roderick bloem. finding and fixing faults. journal of computer and system sciences, 78(2):441–460, 2012. elsevier.

shmuel katz and zohar manna. towards automatic debugging of programs. acm sigplan notices.

diederik p. kingma and jimmy ba. adam: a method for stochastic optimization, january 2017.

jens u. kreber and christopher hahn. generating symbolic reasoning problems with transformer

wenda li, lei yu, yuhuai wu, and lawrence c. paulson. isarstep: a benchmark for high-level mathematical reasoning. technical report arxiv:2006.09265, arxiv, march 2021. url http://arxiv.org/abs/2006.09265.
weilin luo, hai wan, jianfeng du, xiaoda li, yuze fu, rongzhen ye, and delong zhang. teaching ltlf satisfiability checking to neural networks.

oded maler and dejan nickovic. monitoring temporal properties of continuous signals. in formal techniques, modelling and analysis of timed and fault-tolerant systems, pp. 152–166. springer, 2004.

philipp j. meyer, salomon sickert, and michael luttenberger. strix: explicit reactive synthesis strikes back! in computer aided verification - 30th international conference, cav 2018, held as part of the federated logic conference, floc 2018, oxford, uk, july 14-17, 2018, proceedings, part i, volume 10981 of lecture notes in computer science, pp. 578–586. springer, 2018a. doi: 10.1007/978-3-319-96145-3_31.

philipp j. meyer, salomon sickert, and michael luttenberger. strix: explicit reactive synthesis strikes back! in hana chockler and georg weissenbacher (eds.), computer aided verification, lecture notes in computer science, pp. 578–586, cham, 2018b. springer international publishing. isbn 978-3-319-96145-3. doi: 10.1007/978-3-319-96145-3_31. url https://strix.model.in.tum.de/.

prasita mukherjee, haoteng yin, susheel suresh, and tiark rompf. octal: graph representation learning for ltl model checking. arxiv preprint arxiv:2207.11649, 2022.

nir piterman, amir pnueli, and yaniv sa'ar. synthesis of reactive(1) designs. in e. allen emerson and kedar s. namjoshi (eds.), verification, model checking, and abstract interpretation, 7th international conference, vmcai 2006, charleston, sc, usa, january 8-10, 2006, proceedings, volume 3855 of lecture notes in computer science, pp. 364–380. springer, 2006. doi: 10.1007/11609773_24. url https://doi.org/10.1007/11609773_24.

amir pnueli and roni rosner. distributed reactive systems are hard to synthesize. in proceedings [1990] 31st annual symposium on foundations of computer science, pp. 746–757. ieee, 1990.

a. pnueli. the temporal logic of programs.
in 18th annual symposium on foundations of computer science. ieee, 1977.

frederik schmitt, christopher hahn, jens u. kreber, markus n. rabe, and bernd finkbeiner. deep learning for temporal logics. 6th conference on artificial intelligence and theorem proving (aitp), 2021a.

frederik schmitt, christopher hahn, markus n. rabe, and bernd finkbeiner. neural circuit synthesis from specification patterns. in advances in neural information processing systems 34 pre-proceedings, 2021b. url https://proceedings.neurips.cc/paper/2021/hash/8230bea7d54bcdf99cdfe85cb07313d5-abstract.html.

vighnesh shiv and chris quirk. novel positional encodings to enable tree-based transformers. in advances in neural information processing systems, volume 32. curran associates, inc., 2019. url https://proceedings.neurips.cc/paper/2019/hash/6e0917469214d8fbd8c517dcdc6b8dcf-abstract.html.

stefan staber, barbara jobstmann, and roderick bloem. finding and fixing faults. in advanced research working conference on correct hardware design and verification methods, pp. 35–49. springer, 2005.

shobha vasudevan, wenjie joe jiang, david bieber, rishabh singh, c. richard ho, charles sutton, et al. learning semantic representations to verify hardware designs. advances in neural information processing systems, 34:23491–23504, 2021.

ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, łukasz kaiser, and illia polosukhin. attention is all you need. in advances in neural information processing systems, volume 30. curran associates, inc., 2017. url https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-abstract.html.

a linear-time temporal logic (ltl)

the syntax of linear-time temporal logic (ltl) (pnueli, 1977) is given as follows:

φ ::= p | φ ∧ φ | ¬φ | X φ | φ U φ ,

where p is an atomic proposition p ∈ ap. in this context, we assume that the set of atomic propositions ap can be partitioned into inputs i and outputs o: ap = i ∪̇ o.
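the grammar can be encoded as nested tuples, and the core operators can then be evaluated over a finite trace prefix; note that this sketch is only an approximation of the infinite-trace semantics given below (an X past the end of the prefix and an unfulfilled U evaluate to false):

```python
def holds(phi, trace, k=0):
    # trace: list of sets of atomic propositions; phi: nested tuples,
    # e.g. ("U", ("ap", "i0"), ("ap", "o0")) for i0 U o0
    op = phi[0]
    if op == "ap":                       # atomic proposition
        return k < len(trace) and phi[1] in trace[k]
    if op == "not":
        return not holds(phi[1], trace, k)
    if op == "and":
        return holds(phi[1], trace, k) and holds(phi[2], trace, k)
    if op == "X":                        # next
        return k + 1 < len(trace) and holds(phi[1], trace, k + 1)
    if op == "U":                        # until: some l fulfils phi2 while
        return any(holds(phi[2], trace, l) and      # phi1 holds before it
                   all(holds(phi[1], trace, m) for m in range(k, l))
                   for l in range(k, len(trace)))
    raise ValueError(f"unknown operator {op}")
```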
the semantics of ltl is defined over a set of traces: tr := (2^ap)^ω. let π ∈ tr be a trace, π[0] the starting element of the trace π, and, for k ∈ n, let π[k] be the k-th element of the trace π. with π[k,∞] we denote the infinite suffix of π starting at k. we write π |= φ if the trace π satisfies the formula φ. for a trace π ∈ tr, p ∈ ap, and formulas φ, the semantics of ltl is defined as follows:

• π |= p iff p ∈ π[0] ; π |= ¬p iff p ∉ π[0]
• π |= ¬φ iff π ̸|= φ
• π |= φ1 ∧ φ2 iff π |= φ1 and π |= φ2
• π |= X φ iff π[1,∞] |= φ
• π |= φ1 U φ2 iff ∃l ∈ n : (π[l,∞] |= φ2 ∧ ∀m ∈ [0, l − 1] : π[m,∞] |= φ1) .

we use further temporal and boolean operators that can be derived from the ones defined above. these include ∨, →, ↔ as boolean operators and the following temporal operators:

• φ1 R φ2 (release) is defined as ¬(¬φ1 U ¬φ2)
• G φ (globally) is defined as false R φ
• F φ (eventually) is defined as true U φ .

reactive synthesis

reactive synthesis is the task of finding a circuit c that satisfies a given formal specification φ, i.e., ∀t ∈ traces_c. t |= φ, or determining that no such circuit exists. we consider formal specifications that are formulas over a set of atomic propositions (ap) in ltl. the specification defines the desired behavior of a system based on a set of input and output variables. as the system, we consider circuits, more precisely a text representation of and-inverter graphs, called aiger circuits. and-inverter graphs connect input and output edges using and gates, not gates (inverters), and memory cells (latches).

b and-inverter graphs

and-inverter graphs are graphs that describe hardware circuits. the graph connects input edges with output edges through and gates, latches, and implicit not gates. we usually represent this graph by a text version called the aiger format (brummayer et al., 2007). the aiger format uses variable numbers to define variables.
variables can be interpreted as wired connections in a circuit or as edges in a graph, where gates and latches are nodes.
• a negation is implicitly encoded by distinguishing between even and odd variable numbers. two successive variable numbers represent the same variable: the even variable number represents the non-negated variable, and the odd variable number represents the negated variable. the variable numbers 0 and 1 have the constant values false and true.
• each input and each output edge is defined by a single variable number.
• an and gate is defined by three variable numbers. the first variable number defines the outbound edge of the and gate, and the following two variable numbers are inbound edges. the value of the outbound variable is determined by the conjunction of the values of both inbound variables.
• a latch is defined by two variable numbers: an outbound edge and an inbound edge. the value of the outbound variable is the value of the inbound variable at the previous time step. in the first time step, the outbound variable is initialized as false.
the aiger format starts with a header, beginning with the letters aag and followed by five nonnegative integers m, i, l, o, a with the following meaning:
m = maximum variable index
i = number of inputs
l = number of latches
o = number of outputs
a = number of and gates
after the header, each line represents a definition of either an input, latch, output, or and gate, in this order. the numbers in the header define the number of lines associated with each type. after the definition of the circuit, an optional symbol table might follow, where we can define names for inputs, outputs, latches, and and gates. in this context, the circuit can either describe a satisfying system or a counter strategy to the specification.

c levenshtein distance

the levenshtein distance is an edit distance metric, measuring the degree of distinction between two strings.
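the aag conventions from appendix b can be exercised with a small parser and simulator; this is a sketch covering only the subset described above (no symbol table, and gates assumed to be listed in evaluation order):

```python
# Tiny reader/simulator for ASCII AIGER ("aag") circuits as in Appendix B.
# Literal convention: even literal 2v is variable v, odd literal 2v+1 is its
# negation; literals 0/1 are the constants false/true. Latches start at false.

def parse_aag(text):
    lines = text.strip().splitlines()
    tag, m, i, l, o, a = lines[0].split()
    assert tag == "aag"
    i, l, o, a = int(i), int(l), int(o), int(a)
    idx = 1
    inputs = [int(lines[idx + k]) for k in range(i)]; idx += i
    latches = [tuple(map(int, lines[idx + k].split())) for k in range(l)]; idx += l
    outputs = [int(lines[idx + k]) for k in range(o)]; idx += o
    ands = [tuple(map(int, lines[idx + k].split())) for k in range(a)]
    return inputs, latches, outputs, ands

def lit(val, l):
    return val[l & ~1] ^ (l & 1)        # odd literal = negated variable

def simulate(circuit, input_rows):
    inputs, latches, outputs, ands = circuit
    state = {out: 0 for out, _ in latches}          # latches initialized to false
    trace = []
    for row in input_rows:
        val = {0: 0}                                # literal 0/1 = false/true
        val.update(zip(inputs, row))
        val.update(state)
        for out, x, y in ands:                      # conjunction of inbound edges
            val[out] = lit(val, x) & lit(val, y)
        trace.append([lit(val, o) for o in outputs])
        state = {out: lit(val, nxt) for out, nxt in latches}  # next time step
    return trace
```

for instance, `parse_aag("aag 2 1 1 1 0\n2\n4 2\n4")` is a one-latch circuit whose output is the previous input.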
let s1 and s2 be two given strings. then the levenshtein distance lev(s1, s2) is the minimum number of actions necessary to transform s1 into s2 or vice versa. possible actions are deletions, insertions, and substitutions.

d data generation

in figure 6 we sketch the data generation process. the base of the process is the evaluation of a model for neural circuit synthesis. this step is parameterized, as multiple beam sizes are possible. for mispredicted samples, we replace misleading targets (see section 3.2). this is optional, but our experiments showed that training benefits from this step. up to a given levenshtein distance, we collect samples for the final dataset. all other samples (greater levenshtein distances and correct predictions) are processed as described in section 3.2 and algorithm 1. this process is optional, can be applied to only some samples, and is also parameterized. the results are either used for the final dataset or, depending on various parameters, discarded.

figure 6: overview of the data generation process (flowchart from the evaluation of neural circuit synthesis, through replacing misleading targets and introducing artificial mistakes, to the final dataset; branches depend on a levenshtein-distance threshold of 50).

figure 7 shows the probability mass functions of the truncated normal distributions used in algorithm 1. figure 8 shows the composition of the final dataset. samples are sorted into bins depending on the levenshtein distance between their faulty circuit and their target circuit. yellow shows all samples in the final dataset, blue shows only the samples in the final dataset that are based on section 3.2, and red shows only the samples that are based on section 3.2.

figure 7: probability mass functions of the truncated normal distributions. left: distribution for sampling the number of changes per sample. right: distribution for sampling a new variable number, with exemplary old variable number 12.
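the levenshtein distance defined in appendix c, which the dataset binning above relies on, can be computed with the standard dynamic program; a minimal sketch:

```python
def levenshtein(s1, s2):
    """Edit distance with deletions, insertions, and substitutions (Appendix C)."""
    prev = list(range(len(s2) + 1))          # distance from "" to each prefix of s2
    for i, c1 in enumerate(s1, 1):
        cur = [i]
        for j, c2 in enumerate(s2, 1):
            cur.append(min(
                prev[j] + 1,                 # deletion of c1
                cur[j - 1] + 1,              # insertion of c2
                prev[j - 1] + (c1 != c2),    # substitution (free on a match)
            ))
        prev = cur
    return prev[-1]
```

the same function applies to circuits serialized as aag text, which is how distances between faulty, predicted, and target circuits are measured in this appendix.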
figure 8: composition of the final dataset (sample count over the levenshtein distance between repair circuit and target circuit; depicted: samples with mispredicted faulty circuits, samples with altered faulty circuits, and the final dataset). outliers > 55 not shown.

in figure 9, we show the composition of three alternative datasets. samples are sorted into bins depending on the levenshtein distance between their faulty circuit and their target circuit. the dataset scpa-repair-alter-19 (blue) is solely based on section 3.2. the datasets scpa-repair-gen-108 and scpa-repair-gen-96 (red and yellow) are the two best-performing datasets out of all datasets we trained, and are based on a mixture of section 3.2 and section 3.2. dataset scpa-repair-gen-96 (yellow) is the dataset presented in this paper.

e difficulty measures

figure 10 and figure 11 (together with figure 3) show predictions of the presented model, sorted into bins by specification and target size as well as by the levenshtein distance between the faulty input circuit and the target circuit. we use beam search (beam size 16) and only display the result of the best beam. different colors depict the different classes a sample is sorted into, i.e., violated for a prediction that violates the specification and violated (copy) for a prediction that additionally is identical to the faulty input; satisfied for correct predictions and match for predictions that additionally are identical to the target circuit. the line shows the semantic accuracy smoothed over several bins.

figure 9: comparison of the two best-performing datasets (scpa-repair-gen-96 and scpa-repair-gen-108, training splits) and a dataset that is solely based on altered circuit data (scpa-repair-alter-19, training split); sample count over the levenshtein distance between repair circuit and target circuit.
figure 10: accuracies and sample status broken down by the size of the specification ast (sample count over specification size, range 0 to 100; legend: semantic accuracy (smoothed), violated (copy), violated, satisfied, match, error).

figure 11: accuracies and sample status broken down by the size of the target circuit (ands + latches); legend as in figure 10.

f improvement measures

figure 12 shows the levenshtein distance difference (lev(cp, ct) − lev(cf, ct)) between the faulty input circuit and the prediction. a value below zero implies syntactic improvement towards the target circuit. figure 13 shows the number of satisfied sub-specifications. the more sub-specifications a circuit satisfies, the closer it is semantically to a correct circuit.

figure 12: violin plot of the improvement of the levenshtein distance from the repair circuit and prediction to the target, split into violated/satisfied and unrealizable/realizable samples. the dashed line shows the mean of the displayed distribution.

figure 13: violin plot of the difference between the number of sub-specs that are satisfied by the faulty input circuit vs. the predicted circuit, split into violated/satisfied samples. the larger the number, the larger the improvement. inside the violin plot is a box plot, with the dashed line showing the mean of the displayed distribution. only realizable samples.

g arbiter

here, we repeat the arbiter from figure 1, with the aiger format for all circuits on the left of each graph representation.

(a) faulty circuit, predicted by the base model (iteration 0). [graph: inputs i0, r2, r0, r3, r1; latches l0, l1; outputs g3, g2, g0, g1, o4; and gates]
(b) faulty circuit, predicted in iteration 1 of the repair model. [graph: inputs i0, r2, r0, r3, r1; latches l0, l1; outputs g3, g2, g0, g1, o4; and gates]

(c) correct circuit, predicted in iteration 2 of the repair model. [graph: inputs r2, r0, r3, r1; latches l0, l1; outputs g3, g2, g0, g1, o4; and gates]

figure 14: failed attempt of synthesizing an arbiter and successful repair.

h scaling parameters

in this experiment, we scaled the synthesis model (schmitt et al., 2021b) to match or exceed the number of parameters of our model. this shows that the increased number of parameters of the separated hierarchical transformer is not the reason for the overall increase in performance. the detailed results are shown in table 2.

table 2: comparison of model size and semantic accuracy between different configurations of the synthesis model and our model. columns: model, parameters, sem. acc. with beam size 16. rows: synthesis model (baseline); synthesis model: 8 local layers; synthesis model: 8 global layers; synthesis model: 6 encoder layers; synthesis model: network size of 2048 (local layers); synthesis model: network size of 2048 (global layers); synthesis model: network size of 2048 (encoder); repair model (ours).

i hyperparameter study

we trained several versions of a model on the presented dataset (scpa-repair-gen-96) as a hyperparameter study, shown in table 3.
partially observable rl with b-stability: unified structural condition and sharp sample-efficient algorithms

fan chen, peking university, chern@pku.edu.cn; yu bai˚, salesforce research, yu.bai@salesforce.com; song mei˚, uc berkeley, songmei@berkeley.edu

abstract

partial observability—where agents can only observe partial information about the true underlying state of the system—is ubiquitous in real-world applications of reinforcement learning (rl). theoretically, learning a near-optimal policy under partial observability is known to be hard in the worst case due to an exponential sample complexity lower bound. recent work has identified several tractable subclasses that are learnable with polynomial samples, such as partially observable markov decision processes (pomdps) with certain revealing or decodability conditions. however, this line of research is still in its infancy, where (1) unified structural conditions enabling sample-efficient learning are lacking; (2) existing sample complexities for known tractable subclasses are far from sharp; and (3) fewer sample-efficient algorithms are available than in fully observable rl. this paper advances all three aspects above for partially observable rl in the general setting of predictive state representations (psrs). first, we propose a natural and unified structural condition for psrs called b-stability. b-stable psrs encompass the vast majority of known tractable subclasses such as weakly revealing pomdps, low-rank future-sufficient pomdps, decodable pomdps, and regular psrs. next, we show that any b-stable psr can be learned with polynomial samples in relevant problem parameters. when instantiated in the aforementioned subclasses, our sample complexities improve substantially over the current best ones. finally, our results are achieved by three algorithms simultaneously: optimistic maximum likelihood estimation, estimation-to-decisions, and model-based optimistic posterior sampling.
the latter two algorithms are new for sample-efficient learning of pomdps/psrs. we additionally design a variant of the estimation-to-decisions algorithm to perform sample-efficient all-policy model estimation for b-stable psrs, which also yields guarantees for reward-free learning as an implication.

introduction

partially observable reinforcement learning (rl)—where agents can only observe partial information about the true underlying state of the system—is ubiquitous in real-world applications of rl such as robotics (akkaya et al., 2019), strategic games (brown & sandholm, 2018; vinyals et al., 2019; berner et al., 2019), economic simulation (zheng et al., 2020), and so on. partially observable rl defies standard efficient approaches for learning and planning in the fully observable case (e.g. those based on dynamic programming) due to the non-markovian nature of the observations (jaakkola et al., 1994), and has been a hard challenge for rl research. theoretically, it is well established that learning in partially observable rl is statistically hard in the worst case—in the standard setting of partially observable markov decision processes (pomdps), learning a near-optimal policy has an exponential sample complexity lower bound in the horizon length (mossel & roch, 2005; krishnamurthy et al., 2016), which is in stark contrast to fully observable mdps where polynomial sample complexity is possible (kearns & singh, 2002; jaksch et al.,

˚equal contribution.

table 1: comparisons of sample complexities for learning an ε near-optimal policy in pomdps and psrs. definitions of the problem parameters can be found in section 3.2. the last three rows refer to the m-step versions of the problem classes (e.g. the third row considers m-step αrev-revealing pomdps). the current best results within the last four rows are due to zhan et al. (2022); liu et al. (2022a); wang et al. (2022); efroni et al. (2022) respectively1.
all results are scaled to the setting with total reward in [0, 1].

problem class | current best | ours
λB-stable psr | — | Õ(dpsr UA A H² log NΘ · λB²/ε²)
αpsr-regular psr | Õ(dpsr⁴ A⁴ UA⁹ H⁶ log(NΘ O)/(αpsr⁶ ε²)) | Õ(dpsr UA² A H² log NΘ/(αpsr² ε²))
αrev-revealing tabular pomdp | Õ(S⁴ A^(6m−4) H⁶ log NΘ/(αrev⁴ ε²)) | Õ(S² A^m H² log NΘ/(αrev² ε²))
ν-future-suff. rank-dtrans pomdp | Õ(dtrans⁴ A^(5m+3l+1) H² (log NΘ)² · ν⁴ γ²/ε²) | Õ(dtrans A^(2m−1) H² log NΘ · ν²/ε²)
decodable rank-dtrans pomdp | Õ(dtrans A^m H² log NG/ε²) | Õ(dtrans A^m H² log NΘ/ε²)

2010; azar et al., 2017). a later line of work identifies various additional structural conditions or alternative learning goals that enable sample-efficient learning, such as reactiveness (jiang et al., 2017), revealing conditions (jin et al., 2020a; liu et al., 2022c; cai et al., 2022; wang et al., 2022), decodability (du et al., 2019; efroni et al., 2022), and learning memoryless or short-memory policies (azizzadenesheli et al., 2018; uehara et al., 2022b). despite this progress, research on sample-efficient partially observable rl is still at an early stage, with several important questions remaining open. first, to a large extent, existing tractable structural conditions are mostly identified and analyzed in a case-by-case manner and lack a more unified understanding. this question has just started to be tackled in the very recent work of zhan et al. (2022), who show that sample-efficient learning is possible in the more general setting of predictive state representations (psrs) (littman & sutton, 2001)—which include pomdps as a special case—with a certain regularity condition. however, their regularity condition is defined in terms of additional quantities (such as “core matrices”) not directly encoded in the definition of psrs, which makes it unnatural in many known examples and unable to subsume important tractable problems such as decodable pomdps.
second, even in known sample-efficient problems such as revealing pomdps (jin et al., 2020c; liu et al., 2022a), existing sample complexities involve large polynomial factors of relevant problem parameters that are likely far from sharp. third, relatively few principles are known for designing sample-efficient algorithms in pomdps/psrs, such as spectral or tensor-based approaches (hsu et al., 2012; azizzadenesheli et al., 2016; jin et al., 2020c), maximum likelihood or density estimation (liu et al., 2022a; wang et al., 2022; zhan et al., 2022), or learning short-memory policies (efroni et al., 2022; uehara et al., 2022b). this contrasts with fully observable rl where the space of sample-efficient algorithms is much more diverse (agarwal et al., 2019). it is an important question whether we can expand the space of algorithms for partially observable rl. this paper advances all three aspects above for partially observable rl. we define b-stablility, a natural and general structural condition for psrs, and design sharp algorithms for learning any b-stable psr sample-efficiently. our contributions can be summarized as follows. • we identify a new structural condition for psrs termed b-stability, which simply requires its brepresentation (or observable operators) to be bounded in a suitable operator norm (section 3.1). b-stable psrs subsume most known tractable subclasses such as revealing pomdps, decodable pomdps, low-rank future-sufficient pomdps, and regular psrs (section 3.2). • we show that b-stable psrs can be learned sample-efficiently by three algorithms simultaneously with sharp sample complexities (section 4): optimistic maximum likelihood estimation (omle), explorative estimation-to-decisions (explorative e2d), and model-based optimistic posterior sampling (mops). to our best knowledge, the latter two algorithms are first shown to be sample-efficient in partially observable rl. 
• our sample complexities improve substantially over the current best when instantiated in both regular psrs (section 4.1) and known tractable subclasses of pomdps (section 5). for example, for m-step αrev-revealing pomdps with s latent states, our algorithms find an ε nearoptimal policy within ro rev replaced by episodes of play (with s2{α2 s2am log n {pα2 ˘ revε2q 1for ν-future-sufficient pomdps, wang et al. (2022)’s sample complexity depends on γ, which is an additional l-step past-sufficiency parameter that they require. sλ2 b if measured in b-stability), which improves significantly over the current best result of ` ro s4a6m´4 log n {pα4 . a summary of such comparisons is presented in table 1. revε2q • as a variant of the e2d algorithm, we design the all-policy model-estimation e2d algorithm that achieves sample-efficient all-policy model estimation—and as an application, rewardfree learning—for b-stable psrs (section 4.2 & appendix h.2). • technically, our three algorithms rely on a unified sharp analysis of b-stable psrs that involves a careful error decomposition in terms of its b-representation, along with a new generalized ℓ2-type eluder argument, which may be of future interest (appendix b). related work our work is closely related to the long lines of work on sample-efficient learning of fully/partially observable rl (with/without function approximation), especially the lines of work on pomdps and psrs. we review these related works in appendix a due to the space limit. 
preliminaries ␣ h, o, a, p, trhuh sequential decision processes with observations an episodic sequential decision process is spec, where h p zě1 is the horizon length; o is the observaified by a tuple tion space with |o| “ o; a is the action space with |a| “ a; p specifies the transition dynamics, such that the initial observation follows o1 „ p0p¨q p ∆poq, and given the history τh :“ po1, a1, ¨ ¨ ¨ , oh, ahq up to step h, the observation follows oh`1 „ pp¨|τhq; rh : o ˆ a ñ r0, 1s is the reward function at h-th step, which we assume is a known deterministic function of poh, ahq. a policy π “ tπh : po ˆ aqh´1 ˆ o ñ ∆paquh h“1 is a collection of h functions. at step h p rhs, an agent running policy π observes the observation oh and takes action ah „ πhp¨|τh´1, ohq p ∆paq based on the history pτh´1, ohq “ po1, a1, . . . , oh´1, ah´1, ohq. the agent then receives their reward rhpoh, ahq, and the environment generates the next observation oh`1 „ pp¨|τhq based on τh “ po1, a1, ¨ ¨ ¨ , oh, ahq. the episode terminates immediately after the dummy observation oh`1 “ odum is generated. we use π to denote the set of all deterministic policies, and identify ∆pπq as both the set of all policies and all distributions over deterministic policies interchangeably. for any ph, τhq, let ppτhq :“ h1ďh πh1pah1|τh1´1, oh1 q, and let pπpτhq :“ ppτhq ˆ πpτhq denote the probability of observing τh (for the first h steps) when executing π. the h value of a policy π is defined as the expected cumulative reward v pπq :“ eπr h“1 rhpoh, ahqs. we assume that h h“1 rhpoh, ahq ď 1 almost surely for any policy π. ppoh1 |τh1´1q, πpτhq :“ h“1, tohuh pomdps a partially observable markov decision process (pomdp) is a special sequential decision process whose transition dynamics are governed by latent states. 
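the interaction protocol above (observe o_h, act a_h, transition, repeat for h steps) can be instantiated with latent-state dynamics in a short sketch; the nested-list tabular shapes T[h][s][a] and Obs[h][s] below are assumptions of this illustration:

```python
import random

# Toy sampler for an episodic latent-state (POMDP-style) process: the latent
# state s_h stays hidden, the agent sees o_h ~ O_h(.|s_h), acts via a
# history-dependent policy, and the state transitions via T_h(.|s_h, a_h).

def draw(dist, rng):
    """Sample an index from a probability vector."""
    return rng.choices(range(len(dist)), weights=dist)[0]

def sample_episode(T, Obs, mu1, policy, H, rng):
    s = draw(mu1, rng)                   # initial latent state ~ mu1
    traj = []
    for h in range(H):
        o = draw(Obs[h][s], rng)         # emission; s itself is never revealed
        a = policy(h, traj, o)           # policy may condition on full history
        traj.append((o, a))
        if h < H - 1:
            s = draw(T[h][s][a], rng)    # latent Markov transition
    return traj
```

the returned trajectory contains only (observation, action) pairs, matching the fact that the agent never sees the latent state.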
an episodic pomdp is specified by a tuple th, s, o, a, tthuh h“1, µ1u, where s is the latent state space with |s| “ s, ohp¨|¨q : s ñ ∆poq is the emission dynamics at step h (which we identify as an emission matrix oh p roˆs ), thp¨|¨, ¨q : s ˆ a ñ ∆psq is the transition dynamics over the latent states (which we identify as transition matrices thp¨|¨, aq p rsˆs for each a p a), and µ1 p ∆psq specifies the distribution of initial state. at each step h, given latent state sh (which the agent cannot observe), the system emits observation oh „ ohp¨|shq, receives action ah p a from the agent, emits the reward rhpoh, ahq, and then transits to the next latent state sh`1 „ thp¨|sh, ahq in a markov fashion. note that a pomdp can be fully described by the parameter θ :“ pt, o, µ1q. h“1, trhuh predictive state representations we consider predictive state representations (psrs) (littman & sutton, 2001), a broader class of sequential decision processes that generalize pomdps by removing the explicit assumption of latent states, but still requiring the system dynamics to be described succinctly by a core test set. w pzě1 psr, core test sets, and predictive states a test t is a sequence of future observations and actions ow ˆ aw ´1). for some test th “ poh:h`w ´1, ah:h`w ´2q with length (i.e. t p t :“ w ě 1, we define the probability of test th being successful conditioned on (reachable) history τh´1 as ppth|τh´1q :“ ppoh:h`w ´1|τh´1; dopah:h`w ´2qq, i.e., the probability of observing oh:h`w ´1 if the agent deterministically executes actions ah:h`w ´2, conditioned on history τh´1. we follow the convention that, if pπpτh´1q “ 0 for any π, then ppt|τh´1q “ 0. ppth|τh´1q “ xbth,h, rppt|τh´1qstpuh y, definition 1 (psr, core test sets, and predictive states). 
for any h p rhs, we say a set uh ă t is a core test set at step h if the following holds: for any w p zě1, any possible future (i.e., test) th “ poh:h`w ´1, ah:h`w ´2q p ow ˆ aw ´1, there exists a vector bth,h p ruh such that @τh´1 p t h´1 :“ po ˆ aqh´1. (1) we refer to the vector qpτh´1q :“ rppt|τh´1qstpuh as the predictive state at step h (with convention qpτh´1q “ 0 if τh´1 is not reachable), and q0 :“ rpptqstpu1 as the initial predictive state. a (linear) psr is a sequential decision process equipped with a core test set tuhuhprhs. the predictive state qpτh´1q p ruh in a psr acts like a “latent state” that governs the transition pp¨|τh´1q through the linear structure (1). we define ua,h :“ ta : po, aq p uh for some o p ť w pn` ow u as the set of action sequences (possibly including an empty sequence) in uh, with ua :“ maxhprhs |ua,h|. further define uh`1 :“ todumu for notational simplicity. throughout the paper, we assume the core test sets puhqhprhs are known and the same within the psr model class. b-representation we define the b-representation of a psr, a standard notion for psrs (also known as the observable operators (jaeger, 2000)). definition 2 (b-representation). a b-representation of a psr with core test set puhqhprhs is a set of matrices2 tpbhpoh, ahq p ruh`1ˆuhqh,oh,ah, q0 p ru1 u such that for any 0 ď h ď h, policy π, history τh “ po1:h, a1:hq p t h, and core test th`1 “ poh`1:h`w , ah`1:h`w ´1q p uh`1, the quantity ppτh, th`1q, i.e. the probability of observing o1:h`w upon taking actions a1:h`w ´1, admits the decomposition ppτh, th`1q “ ppo1:h`w |dopa1:h`w ´1qq “ ej where eth`1 p ruh`1 is the indicator vector of th`1 p uh`1, and th`1 ¨ bh:1pτhq ¨ q0, bh:1pτhq :“ bhpoh, ahqbh´1poh´1, ah´1q ¨ ¨ ¨ b1po1, a1q. it is a standard result (see e.g. 
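definition 2 can be checked numerically on a toy model: the probability of a fixed trajectory is a summation vector applied to a product of operator matrices acting on the initial predictive state. the two-state construction below (observable operators built from a permutation transition and an identity emission, with a single action) is an illustrative assumption, not the paper's general setting:

```python
# Toy numerical check of the B-representation identity in Definition 2.

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def traj_prob(B, q0, obs_acts, e_test):
    """e_test^T . B_h(o_h, a_h) ... B_1(o_1, a_1) . q0, as in Definition 2."""
    q = q0
    for h, (o, a) in enumerate(obs_acts):
        q = matvec(B[h][(o, a)], q)
    return sum(e * x for e, x in zip(e_test, q))

# Observable operators for a 2-state chain that deterministically swaps states
# and emits its state: B(o) = T^T diag(O[:, o]) applied to the initial
# distribution, so summing the final vector recovers the trajectory probability.
B_step = {(0, 0): [[0, 0], [1, 0]],   # observe o = 0
          (1, 0): [[0, 1], [0, 0]]}   # observe o = 1
B = [B_step, B_step]
```

starting from q0 = [1, 0], the trajectory (o1, o2) = (0, 1) has probability 1 and (0, 0) has probability 0, consistent with the swapping dynamics.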
thon & jaeger (2015)) that any psr admits a b-representation, and the converse also holds—any sequential decision process admitting a b-representation on test sets puhqhprhs is a psr with core test set puhqhprhs (proposition d.1). however, the b-representation of a given psr may not be unique. we also remark that the b-representation is used in the structural conditions and theoretical analyses only, and will not be explicitly used in our algorithms. rank an important complexity measure of a psr is its psr rank (henceforth also “rank”). definition 3 (psr rank). given a psr, its psr rank is defined as dpsr :“ maxhprhs rankpdhq, where dh :“ rqpτhqsτhpt h p ruh`1ˆt h is the matrix formed by predictive states at step h p rhs. the psr rank measures the inherent dimension3 of the space of predictive state vectors, which always admits the upper bound dpsr ď maxhprhs |uh|, but may in addition be much smaller. pomdps as low-rank psrs as a primary example, all pomdps are psrs with rank at most s (zhan et al., 2022, lemma 2). first, definition 1 can be satisfied trivially by choosing uh “ 1ďw ďh´h`1 tpoh, ah, . . . , oh`w ´1qu as the set of all possible tests, and bth,h “ eth p ruh as indicator vectors. for concrete subclasses of pomdps, we will consider alternative choices of puhqhprhs with much smaller cardinalities than this default choice. second, to compute the rank (definition 3), note that by the latent state structure of pomdps, we have ppth`1|τhq “ ppth`1|sh`1qppsh`1|τhq for any ph, τh, th`1q. therefore, the associated matrix dh “ rppth`1|τhqspth`1,τhqpuh`1ˆt h always has the following decomposition: dh “ rppth`1|sh`1qspth`1,sh`1qpuh`1ˆs ˆ rppsh`1|τhqspsh`1,τhqpsˆt h , which implies that dpsr “ maxhprhs rankpdhq ď s. learning goal we consider the standard pac learning setting, where we are given a model class of psrs θ and interact with a ground truth model θ‹ p θ. 
note that, as we do not put further restrictions on the parametrization, this setting allows any general function approximation for 2this definition can be generalized to continuous uh, where bhpoh, ahq p lpl1puhq, l1puh`1qq are linear operators instead of (finite-dimensional) matrices. 3this definition using matrix ranks may be further relaxed, e.g. by considering the effective dimension. the model class. for any model class θ, we define its (optimistic) covering number nθpρq for ρ ą 0 in definition c.4. let vθpπq denote the value function of policy π under model θ, and πθ :“ arg maxπpπ vθpπq denote the optimal policy of model θ. the goal is to learn a policy pπ that achieves small suboptimality v‹ ´ vθ‹ppπq within as few episodes of play as possible, where v‹ :“ vθ‹ pπθ‹ q. we refer to an algorithm as sample-efficient if it finds an ε-near optimal policy within polyprelevant problem parameters, 1{εq4 episodes of play. psrs with b-stability we begin by proposing a natural and general structural condition for psr called b-stability (or also stability). we show that b-stable psrs encompass and generalize a variety of existing tractable pomdps and psrs, and can be learned sample-efficiently as we show in the sequel. the b-stability condition for any psr with an associated b-representation, we define its b-operators tbh:huhprhs as bh:h : ruh ñ rpoˆaqh´h`1 q þñ rbh:hpτh:h q ¨ qsτh:h ppoˆaqh´h`1. operator bh:h maps any predictive state q “ qpτh´1q at step h to the vector bh:hq “ pppτh:h |τh´1qqτh:h which governs the probability of transitioning to all possible futures, by properties of the b-representation (cf. (17) & corollary d.2). for each h p rhs, we equip the image space of bh:h with the π-norm: for a vector b indexed by τh:h p po ˆ aqh´h`1, we define }b}π :“ max¯π τh:h ppoˆaqh´h`1 ¯πpτh:h q |bpτh:h q| , where the maximization is over all policies ¯π starting from step h (ignoring the history τh´1) and hďh1ďh ¯πh1 pah1 |oh1, τh:h1´1q. 
we further equip the domain ruh with a fused-norm ¯πpτh:h q “ } ¨ }˚, which is defined as the maximum of p1, 2q-norm and π1-norm5: apua,h o:po,aqpuh ˘ |qpo, aq| }q}π1 :“ max¯π tpu h ¯πptq |qptq| , where u h :“ tt p uh : et1 p uh such that t is a prefix of t1u. we now define the b-stability condition, which simply requires the b-operators tbh:huhprhs to have bounded operator norms from the fused-norm to the π-norm. definition 4 (b-stability). a psr is b-stable with parameter λb ě 1 (henceforth also λb-stable) if it admits a b-representation with associated b-operators tbh:huhprhs such that sup hprhs max }q}˚“1 }bh:hq}π ď λb. when using the b-stability condition, we will often take q “ q1pτh´1q ´ q2pτh´1q to be the difference between two predictive states at step h. intuitively, definition 4 requires that the propagated π-norm error }bh:hpq1 ´ q2q}π to be controlled by the original fused-norm error }q1 ´ q2}˚. the fused-norm }¨}˚ is equivalent to the vector 1-norm up to a |ua,h|1{2-factor (despite its seemingly involved form): we have }q}˚ ď }q}1 ď |ua,h|1{2 }q}˚ (lemma d.6), and thus assuming a relaxed condition max}q}1“1 }bh:h}π ď λ will also enable sample-efficient learning of psrs. however, we consider the fused-norm in order to obtain the sharpest possible sample complexity guarantees. finally, all of our theoretical results still hold under a more relaxed (though less intuitive) weak b-stability condition (definition d.4), with the same sample complexity guarantees. (see also the additional discussions in appendix d.2.) 4for the m-step versions of our structural conditions, we allow an exponential dependence on m but not h. such a dependence is necessary, e.g. in m-step decodable pomdps (efroni et al., 2022). 5the π1-norm is in general a semi-norm. relation with known sample-efficient subclasses we show that the b-stability condition encompasses many known structural conditions of psrs and pomdps that enable sample-efficient learning. 
throughout, for a matrix a p rmˆn, we define its operator norm }a}pñq :“ max}x}pď1 }ax}q, and use }a}p :“ }a}pñp for shorthand. weakly revealing pomdps (jin et al., 2020a; liu et al., 2022a) is a subclass of pomdps that assumes the current latent state can be probabilistically inferred from the next m emissions. example 5 (multi-step weakly revealing pomdps). a pomdp is called m-step αrev-weakly revealing (henceforth also “αrev-revealing”) with αrev ď 1 if maxhprh´m`1s }m: rev , where for h p rh ´ m ` 1s, mh p romam´1ˆs is the m-step emission-action matrix at step h, defined as rmhspo,aq,s :“ ppoh:h`m´1 “ o|sh “ s, ah:h`m´2 “ aq, @po, aq p om ˆ am´1, s p s. (7) we show that any m-step αrev-weakly revealing pomdp is a λb-stable psr with core test sets ? uh “ po ˆ aqmintm´1,h´hu ˆ o, and λb ď rev (proposition d.7). a similar result holds for the ℓ1 variant of the revealing condition (see appendix d.3.1). ♢ when the transition matrix th of the pomdp has a low rank structure, wang et al. (2022) show that a subspace-aware generalization of the ℓ1-revealing condition—the future-sufficiency condition— enables sample-efficient learning of pomdps with possibly enormous state/observation spaces (see also cai et al. (2022)). we consider the following generalized definition of future-sufficiency. example 6 (low-rank future-sufficient pomdps). we say a pomdp has transition rank dtrans if for each h p rh ´1s, the transition kernel of the pomdp has rank at most dtrans (i.e. maxh rankpthq ď dtrans). it is clear that low-rank pomdps with transition rank dtrans has psr rank dpsr ď dtrans. a transition rank-dtrans (henceforth rank-dtrans) pomdp is called m-step ν-future-sufficient with ν ě 1, if for h p rh ´1s, there exists m6 h}1ñ1 ď ν, where mh is the m-step emission-action matrix defined in (7). 6 we show that any m-step ν-future sufficient rank-dtrans pomdp is a b-stable psr with core test am´1ν (proposition d.12). 
♢ sets uh “ po ˆ aqmintm´1,h´hu ˆ o, dpsr ď dtrans, and λb ď mhth´1 “ th´1 and }m6 h p rsˆuh such that m6 h decodable pomdps (efroni et al., 2022), as a multi-step generalization of block mdps (du et al., 2019), assumes the current latent state can be perfectly decoded from the recent m observations. example 7 (multi-step decodable pomdps). a pomdp is called m-step decodable if there exists (unknown) decoders ϕ‹ “ tϕ‹ huhprhs, such that for every reachable trajectory ps1, o1, a1, ¨ ¨ ¨ , sh, ohq we have sh “ ϕ‹ h pzhq, where zh “ pomphq, amphq, ¨ ¨ ¨ , ohq and mphq “ maxth ´ m ` 1, 1u. we show that any m-step decodable pomdp is a b-stable psr with core test sets uh “ po ˆ aqmintm´1,h´hu ˆ o and λb “ 1 (proposition d.17). ♢ finally, zhan et al. (2022) define the following regularity condition for general psrs. example 8 (regular psrs). a psr is called αpsr-regular if for all h p rhs there exists a core matrix kh p ruh`1ˆrankpdhq, which is a column-wise sub-matrix of dh such that rankpkhq “ rankpdhq and maxhprhs }k : psr . we show that any αpsr-regular psr is λb-stable with λb ď psr (proposition d.18). ♢ we emphasize that b-stability not only encompasses αpsr-regularity, but is also strictly more expressive. for example, decodable pomdps are not αpsr-regular unless with additional assumptions on k : h (zhan et al., 2022, section 6.5), whereas they are b-stable with λb “ 1 (example 7). also, any αrev-revealing pomdp is αpsr-regular with some α´1 psr potentially not polynomially bounded by α´1 rev (and other problem parameters) due to the restriction of kh being a column-wise sub-matrix of dh; by contrast it is b-stable with λb ď psr ă 8, but with α´1 rev (example 5). learning b-stable psrs in this section, we show that b-stable psrs can be learned sample-efficiently, achieved by three model-based algorithms simultaneously. we instantiate our results to pomdps in section 5. 
⁶it is straightforward to generalize this example to the case when $\mathcal{S}$ and $\mathcal{O}$ are infinite by replacing vectors with $L^1$ integrable functions, and matrices with linear operators between these spaces.
algorithm 1 optimistic maximum likelihood estimation (omle)
1: input: model class $\Theta$, parameter $\beta > 0$.
2: initialize: $\Theta^1 = \Theta$, $\mathcal{D} = \{\}$.
3: for iteration $k = 1, \ldots, K$ do
4:   set $(\theta^k, \pi^k) = \arg\max_{\theta \in \Theta^k, \pi} V_\theta(\pi)$.
5:   for $h = 0, \ldots, H-1$ do
6:     set exploration policy $\pi^k_{h,\rm exp} := \pi^k \circ_h \mathrm{Unif}(\mathcal{A}) \circ_{h+1} \mathrm{Unif}(\mathcal{U}_{\mathcal{A},h+1})$.
7:     execute $\pi^k_{h,\rm exp}$ to collect a trajectory $\tau^{k,h}$, and add $(\pi^k_{h,\rm exp}, \tau^{k,h})$ into $\mathcal{D}$.
8:   update confidence set $\Theta^{k+1} = \big\{\theta \in \Theta : \sum_{(\pi,\tau)\in\mathcal{D}} \log \mathbb{P}^\pi_\theta(\tau) \ge \max_{\theta'\in\Theta} \sum_{(\pi,\tau)\in\mathcal{D}} \log \mathbb{P}^\pi_{\theta'}(\tau) - \beta\big\}$.
9: output: $\widehat{\pi}_{\rm out} := \mathrm{Unif}(\{\pi^k\}_{k\in[K]})$.
optimistic maximum likelihood estimation (omle) the omle algorithm is proposed by liu et al. (2022a) for learning revealing pomdps and adapted⁷ by zhan et al. (2022) for learning regular psrs, achieving polynomial sample complexity (in relevant problem parameters) in both cases. we show that omle works under the broader condition of b-stability, with significantly improved sample complexities. algorithm and theoretical guarantee the omle algorithm (described in algorithm 1) takes in a class of psrs $\Theta$, and performs two main steps in each iteration $k \in [K]$: 1. (optimism) construct a confidence set $\Theta^k \subseteq \Theta$, which is a superlevel set of the log-likelihood of all trajectories within dataset $\mathcal{D}$ (line 8). the policy $\pi^k$ is then chosen as the greedy policy with respect to the most optimistic model within $\Theta^k$ (line 4). 2. (data collection) execute exploration policies $(\pi^k_{h,\rm exp})_{0 \le h \le H-1}$, where each $\pi^k_{h,\rm exp}$ is defined via the $\circ_h$ notation as follows: follow $\pi^k$ for the first $h-1$ steps, take a uniform action $\mathrm{Unif}(\mathcal{A})$ at step $h$, take an action sequence sampled from $\mathrm{Unif}(\mathcal{U}_{\mathcal{A},h+1})$ at step $h+1$, and behave arbitrarily afterwards (line 6). all collected trajectories are then added into $\mathcal{D}$ (line 7).
intuitively, the concatenation of the current policy $\pi^k$ with $\mathrm{Unif}(\mathcal{A})$ and $\mathrm{Unif}(\mathcal{U}_{\mathcal{A},h+1})$ in step 2 above is designed according to the structure of psrs to foster exploration. theorem 9 (guarantee of omle). suppose every $\theta \in \Theta$ is $\Lambda_{\rm B}$-stable (definition 4) and the true model $\theta^\star \in \Theta$ has rank $d_{\rm PSR} \le d$. then, choosing $\beta = C\log(\mathcal{N}_\Theta(1/KH)/\delta)$ for some absolute constant $C > 0$, with probability at least $1-\delta$, algorithm 1 outputs a policy $\widehat{\pi}_{\rm out} \in \Delta(\Pi)$ such that $V^\star - V_{\theta^\star}(\widehat{\pi}_{\rm out}) \le \varepsilon$, as long as the number of episodes
$$T = KH \ge \mathcal{O}\big( dA U_A H^2 \log(\mathcal{N}_\Theta(1/T)/\delta)\,\iota \cdot \Lambda_{\rm B}^2 / \varepsilon^2 \big),$$
where $\iota := \log(1 + K d U_A \Lambda_{\rm B} R_{\rm B})$, with $R_{\rm B} := \max_h \max\big\{1,\ \max_{\|v\|_1 = 1} \sum_{o,a} \|\mathbf{B}_h(o,a)v\|_1\big\}$. theorem 9 shows that omle is sample-efficient for any b-stable psrs—a broader class than in existing results for the same algorithm (liu et al., 2022a; zhan et al., 2022)—with much sharper sample complexities than existing work when instantiated to their settings. importantly, we achieve the first polynomial sample complexity that scales with a quadratic $\Lambda_{\rm B}^2$ dependence on the b-stability parameter (or regularity parameters alike⁸). instantiating to $\alpha_{\rm psr}$-regular psrs, using $\Lambda_{\rm B} \le \sqrt{U_A}\,\alpha_{\rm psr}^{-1}$ (example 8), our result implies an $\widetilde{\mathcal{O}}(dAU_A^2 \log\mathcal{N}_\Theta / (\alpha_{\rm psr}^2 \varepsilon^2))$ sample complexity (ignoring $H$ and $\iota$⁹). this improves significantly over the $\widetilde{\mathcal{O}}(d^4 A^4 U_A^9 \log(\mathcal{N}_\Theta \mathcal{O}) / (\alpha_{\rm psr}^6 \varepsilon^2))$ result of zhan et al. (2022).
⁷named crane in (zhan et al., 2022). ⁸uehara et al. (2022b) achieves an $A^m \sigma_1^{-2}$ dependence for learning the optimal memory-m policy in (their) $\sigma_1$-revealing pomdps, which is however easier than learning the globally optimal policy considered here. ⁹the log-factor $\iota$ contains the additional parameter $R_{\rm B}$ that is not always controlled by $\Lambda_{\rm B}$; this quantity also appears in zhan et al. (2022); liu et al. (2022b) but is controlled by their $\alpha_{\rm psr}^{-1}$ or $\gamma^{-1}$ respectively. nevertheless, for all of our pomdp instantiations, $R_{\rm B}$ is polynomially bounded by other problem parameters so that $\iota$ is a mild log-factor. further, our next algorithm explorative e2d avoids the dependence on $R_{\rm B}$ (theorem 10).
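the optimism-plus-superlevel-set structure of algorithm 1 can be sketched on a deliberately tiny toy problem. this is our own schematic, not the paper's psr construction: each "model" θ is just a bernoulli observation probability, the value of a model is identified with θ itself (so the optimistic choice is the largest θ in the confidence set), and "trajectories" are single binary observations from a hypothetical true model.

```python
import math
import random

# Log-likelihood of a Bernoulli parameter on a list of 0/1 observations.
def log_lik(theta, data):
    eps = 1e-9
    return sum(math.log(theta + eps) if o == 1 else math.log(1 - theta + eps)
               for o in data)

def omle(model_class, true_theta, iterations=50, beta=4.0, seed=0):
    rng = random.Random(seed)
    data = []
    conf_set = list(model_class)
    for _ in range(iterations):
        # Optimism: greedy w.r.t. the most optimistic model in the set
        # (analogue of line 4; here "value" is just theta itself).
        theta_k = max(conf_set)
        # Data collection: one observation from the true model (line 7).
        data.append(1 if rng.random() < true_theta else 0)
        # Superlevel set of the log-likelihood (analogue of line 8).
        best = max(log_lik(th, data) for th in model_class)
        conf_set = [th for th in model_class
                    if log_lik(th, data) >= best - beta]
    return theta_k, conf_set

models = [0.2, 0.5, 0.8]
chosen, conf = omle(models, true_theta=0.5)
```

the key property illustrated is that the superlevel-set update keeps the true model in the confidence set with high probability, while poorly fitting models are eventually excluded as data accumulates.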
overview of techniques the proof of theorem 9 (deferred to appendix g) builds upon a sharp analysis for b-stable psrs: 1) we use a more delicate choice of norm for bounding the errors (in the b operators) yielded from performance difference arguments; 2) we develop a generalized $\ell_2$-type eluder argument that is sharper than the $\ell_1$-eluder argument of liu et al. (2022a); zhan et al. (2022). a more detailed overview of techniques is presented in appendix b. explorative estimation-to-decisions (explorative e2d) estimation-to-decisions (e2d) is a general model-based algorithm that is sample-efficient for any interactive decision making problem (including mdps) with a bounded decision-estimation coefficient (dec), as established in the dec framework by foster et al. (2021). however, the e2d algorithm has not been instantiated on pomdps/psrs. we show that b-stable psrs admit a sharp dec bound, and thus can be learned sample-efficiently by a suitable e2d algorithm. edec & explorative e2d algorithm we consider the explorative dec (edec) proposed in the recent work of chen et al. (2022), which for a psr class $\Theta$ is defined as
$$\mathrm{edec}_\gamma(\Theta) = \sup_{\mu \in \Delta(\Theta)} \inf_{p_{\rm exp},\, p_{\rm out}} \sup_{\theta \in \Theta} \Big\{ \mathbb{E}_{\pi \sim p_{\rm out}}\big[ V_\theta(\pi_\theta) - V_\theta(\pi) \big] - \gamma\, \mathbb{E}_{\pi \sim p_{\rm exp}} \mathbb{E}_{\bar\theta \sim \mu}\, D_{\rm H}^2\big( \mathbb{P}^\pi_\theta, \mathbb{P}^\pi_{\bar\theta} \big) \Big\}$$
neural methods for logical reasoning over knowledge graphs alfonso amayuelas∗†, shuai zhang†, susie xi rao† & ce zhang† ∗epfl †eth zurich alfonso.amayuelas@alumni.epfl.ch {shuazhang, raox, ce.zhang}@inf.ethz.ch abstract reasoning is a fundamental problem for computers and is deeply studied in artificial intelligence. in this paper, we specifically focus on answering multi-hop logical queries on knowledge graphs (kgs). this is a complicated task because, in real-world scenarios, the graphs tend to be large and incomplete. most previous works have been unable to create models that accept full first-order logical (fol) queries, which include negative queries, and have only been able to process a limited set of query structures. additionally, most methods present logic operators that can only perform the logical operation they are made for. we introduce a set of models that use neural networks to create one-point vector embeddings to answer the queries. the versatility of neural networks allows the framework to handle fol queries with conjunction (∧), disjunction (∨) and negation (¬) operators. we demonstrate the performance of our models through extensive experiments on well-known benchmarking datasets. besides having more versatile operators, the models achieve a 10% relative increase over the best performing state of the art and more than 30% over the original method based on single-point vector embeddings. introduction knowledge graphs (kgs) are a type of data structure that can capture many kinds of relationships between entities (e.g.: moscow −−cityin−→ russia) and have been popularized since the creation of the semantic web and their introduction into google's search engine. they can contain many kinds of different information, and they can be widely used in question-answering systems, search engines, and recommender systems (palumbo et al., 2017; xiong et al., 2017a). reasoning is a fundamental skill of human brains.
for example, we can infer new knowledge based on known facts and logic rules, and discern patterns/relationships to make sense of seemingly unrelated information. it is a multidisciplinary topic and is studied in psychology, neuroscience, and artificial intelligence (fagin et al., 2003). the ability to reason about the relations between objects is central to generally intelligent behavior. we can define reasoning as the process of inferring new knowledge based on known facts and logic rules. knowledge graphs are a structure used for storing many kinds of information; therefore, the ability to answer complex queries and extract answers that are not directly encoded in the graph is of high interest to the ai community. to answer complex queries, the model receives a query divided into logical statements. full first-order logic (fol) is necessary to process a wider range of queries, which includes negative queries. fol includes the following logical operators: existential (∃), conjunction (∧), disjunction (∨), and negation (¬). the power of representation of our logical framework is the key to processing complex queries. however, most frameworks have only been able to process existential positive first-order logic (epfo), which means that negative queries cannot be processed. for example, one could ask a knowledge graph containing drugs and side effects the following question: "what drug can be used to treat pneumonia and does not cause drowsiness?". the first step to answer such a query is to translate it into logical statements: q = v? · ∃v : treat(pneumonia, v?) source code available on: https://github.com/amayuelas/nnkgreasoning figure 1: mlp framework for kg reasoning. representation of a sample query: "list the teams where brazilian football players who were awarded a ballon d'or played". (a) query represented by its logical statements and dependency graph.
∧ ¬cause(drowsiness, v?). (b) 2d representation of the answer entities in a one-point vector space used by the reasoning framework. once the query is divided into logical statements, we obtain the computation graph, a directed acyclic graph (dag) which defines the order of operations. afterwards, we can start traversing the graph. however, many real-world graphs are incomplete and therefore traversing them becomes very hard and even computationally impossible. there can be many possible answer entities, which requires modeling sets of entities. as such, embedding methods become a good solution to answer these queries. previous works (hamilton et al., 2018; ren et al., 2020; ren & leskovec, 2020) have created methods for embedding the query and the graph into a vector space. the idea of graph embeddings reduces the problem to simply using nearest-neighbor search to find the answers, without paying attention to the intermediate results. the embedding approach solves many of the problems of query-answering in knowledge graphs. in theory, we could answer the queries just by traversing the graph. in practice, graphs are large and incomplete, and answering arbitrary logical queries becomes a complicated task. the graph's incompleteness means that traversing its edges would not provide the correct answers. this work aims to create models that allow complex queries and extract the correct answers from large incomplete knowledge graphs. to this end, we present a set of models based on neural networks that embed the query and the entities into a one-point vector space. the system then computes the distance between the query and the entities to rank the answers according to their likelihood of answering the query. we use the versatility of neural networks to create the operators needed to process fol queries. we conduct experiments using well-known datasets for kg reasoning: fb15k, fb15k-237, and nell.
the experiments show that our models can effectively answer fol queries and provide a noticeable improvement over the state-of-the-art baselines. our models provide a relative improvement of 5% to 10% over the latest state-of-the-art method and about 30% to 40% when compared with the method that uses the same idea of one-point vector space embeddings (hamilton et al., 2018). the main contributions of this work are summarized as: (1). new embedding-based methods for logical reasoning over knowledge graphs: two new models, plus variants, for kg reasoning. these methods embed the query and the entities in the same vector space with single-point vectors. implementing the logical operators with neural networks provides the versatility to create any operator with virtually the same architecture. (2). improved performance over the current state of the art. experimental results show that the models presented in this paper outperform the selected baselines: graph query embeddings (gqe) (hamilton et al., 2018), query2box (q2b) (ren et al., 2020), and betae (ren & leskovec, 2020). (3). handling of negative queries. modelling queries with negation has been an open question in kg reasoning until recently. betae (ren & leskovec, 2020) introduced the first method able to do so. this work takes advantage of the good relationship inference capabilities of neural networks and uses them to create the negation operator. related work traditional tasks on graphs include link prediction (liben-nowell & kleinberg, 2007), knowledge base completion (wang et al., 2015), or basic query-answering (one-hop). they are all different versions of the same problem: is link (h, r, t) in the kg? or is t an answer to query (h, r, ?)?, where only one variable is missing. however, we face a more complicated problem, known as knowledge graph reasoning, that may involve several unobserved edges or nodes over massive and incomplete kgs.
in this case, queries can be path queries, conjunctive queries, disjunctive queries, or a combination of them. a formal definition of kg reasoning can be found in chen et al. (2020), as stated in definition 2.1. definition 2.1 (reasoning over knowledge graphs). defining a knowledge graph as g = ⟨e, r, t⟩, where e represents the set of entities, r the set of relations, and the edges in t link two nodes to form a triple (h, r, t) ∈ t, reasoning over a kg is defined as creating a triplet that does not exist in the original kg: g′ = {(h, r, t) | h ∈ e, r ∈ r, t ∈ e, (h, r, t) ∉ g}. most related to our work are embedding approaches for multi-hop queries over kgs: (hamilton et al., 2018), (ren et al., 2020), (ren & leskovec, 2020) and (das et al., 2016), as well as models for question answering (yasunaga et al., 2021), (feng et al., 2020). the main differences with these methods lie in the ability to handle full first-order logical queries and in using neural networks to define all logical operators, including the projection. we also deliver a more extensive range of network implementations. on a broader outlook, we identify a series of works that aim to solve knowledge graph reasoning with several different techniques, such as attention mechanisms (wang et al., 2018), reinforcement learning like deeppath (xiong et al., 2017b) or diva (chen et al., 2018), or neural logic networks (shi et al., 2020), (qu & tang, 2019). models both models presented here follow the idea behind graph query embedding – gqe (hamilton et al., 2018): learning to embed the queries into a low dimensional space. our models differ from it in that gqe represents logical query operations by geometric operators; in our case, we do not follow the direct geometric sense and these operators are all represented by neural networks, instead of just the intersection operator in gqe.
similarly, however, the operators are jointly optimized with the node embeddings to find the optimal representation. in order to answer a query, the system receives a query q, represented as a dag, where the nodes are the entities and the edges the relationships. starting with the embeddings ev1, ..., evn of its anchor nodes, it applies the logical operations represented by the edges to finally obtain an embedding q of the query (guu et al., 2015). formal problem definition a knowledge graph (g) is a heterogeneous graph with a set of entities – nodes – (v) and a set of relations – edges – (r). in heterogeneous graphs, there can be different kinds of relations, which are defined as binary functions r : v × v → {true, false} that connect two entities with a directed edge. the goal is to answer first-order logical (fol) queries, which we can define as follows: definition 3.1 (first-order logical queries). a first-order logical query q is formed by an anchor entity set va ⊆ v, an unknown target variable v? and a series of existentially quantified variables v1, ..., vk. in its disjunctive normal form (dnf), it is written as a disjunction of conjunctions: q[v?] = v? · ∃v1, ..., vk : c1 ∨ c2 ∨ ... ∨ cn, where ci represents a conjunctive query of one or several literals: ci = ei1 ∧ ei2 ∧ ... ∧ eim, and the literals represent a relation or its negation: eij = r(vi, vj) or ¬r(vi, vj), where vi, vj are entities and r ∈ r. the entity embeddings are initialized to zero and later learned as part of the training process, along with the operators' weights. (a) representation of the mlp for 2-input operators: projection, intersection. (b) representation of the mlp for the 1-input operator: negation. figure 2: multi-layer perceptron model (mlp) – network architecture. computation graph. the computation graph can be defined as the directed acyclic graph (dag) where the nodes correspond to embeddings and the edges represent the logical operations.
the computation graph can be derived from a query by representing relations as projections, intersections as merges, and negation as complement. this graph shows the order of operations to answer the queries. each branch can be computed independently and then merged until the sink node is reached. each node represents a point in the embedding space and each edge represents a logical operation, computed via a neural network in our case. the representation of a fol query as a computation graph can be seen as a heterogeneous tree where each leaf node corresponds to the anchor entities and the root is the final target variable, which is a set of entities. the logical operations corresponding to the edges are defined below: • projection. given an entity vi ∈ v and a relation type r ∈ r, it returns the set of adjacent entities with that relation. denoting by pr(vi, r) the set of adjacent entities through r, we define the projection as: pr(vi, r) = {v′ ∈ v : r(vi, v′) = true}. • intersection. the intersection can be defined as: i(v1, . . . , vn) = v1 ∩ v2 ∩ · · · ∩ vn. • negation. it calculates the complement of a set of entities t ⊆ v: n(t) = t̄ = v \ t, where the set can either be the embedding corresponding to an entity or an intermediate embedding which represents a set of them. a union operation is unnecessary, as will be discussed later in section 3.5. query2box (ren et al., 2020) shows that a union operator becomes intractable in distance-based metrics. multi-layer perceptron model (mlp) based on the good results of neural tensor networks (ntn) (socher et al., 2013) for knowledge base completion, we have extended a similar approach to multi-hop reasoning. we introduce three logical operators to compute the queries. each of them is represented by a simple neural network: a multi-layer perceptron. each perceptron contains a feed-forward network: a linear layer plus a relu rectifier. the number of layers remains a hyper-parameter.
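the k-layer feed-forward operators just described can be sketched as follows. this is a minimal illustrative sketch with random, untrained weights; the dimensions, initialization, and helper names are our own assumptions, not the paper's configuration.

```python
import numpy as np

# A 2-input neural operator (projection or intersection): a k-layer
# perceptron over the concatenation of two d-dimensional embeddings,
# with a linear layer plus ReLU per hidden layer, as described above.
def init_mlp(d, k, rng):
    # First layer consumes the concatenation of the two inputs (2d -> d);
    # remaining layers map d -> d.
    dims = [2 * d] + [d] * k
    return [(rng.standard_normal((dims[i], dims[i + 1])) / np.sqrt(dims[i]),
             np.zeros(dims[i + 1])) for i in range(k)]

def operator(params, e1, e2):
    h = np.concatenate([e1, e2])
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:      # ReLU between layers
            h = np.maximum(h, 0.0)
    return h

rng = np.random.default_rng(0)
d, k = 8, 3
proj = init_mlp(d, k, rng)           # weights of a projection operator P(s, r)
entity = rng.standard_normal(d)      # toy entity embedding
relation = rng.standard_normal(d)    # toy relation embedding
out = operator(proj, entity, relation)  # embedding of the (approximate) answer set
```

the same architecture with a d-to-d first layer would serve as the 1-input negation operator; in training, the weights and embeddings would be optimized jointly rather than drawn at random as here.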
figures 2a and 2b show what the model looks like. neural operators. we define the operators with a multi-layer perceptron. the model takes as input the embedding representation of the input entities and returns an approximate embedding of the answer. defining the operators with a neural network has the advantage of generalization. thus, we distinguish between 2-input operators, projection and intersection, and a 1-input operator, negation. - 2-input operator (figure 2a): projection p, and intersection i. the operator is composed of a multi-layer perceptron that takes 2 embeddings as input and returns 1 embedding as output. the training process makes the networks learn the weights to represent each operation. equation 2 expresses it formally: p(si, rj) = nnk(si, rj), ∀si ∈ s, ∀rj ∈ r; i(si, sj) = nnk(si, sj), ∀si, sj ∈ s, where si ∈ s is an embedding in the vector space s, rj ∈ r is a relation and nnk is a multi-layer perceptron with k layers. the intersection can take more than two entities as input, for instance in the 3i query structure. in this case we perform the operation recursively: we use the result of the previous intersection to compute the next one. - 1-input operator (figure 2b): negation n. the goal of this operator is to represent the negation of a set of entities. following the same neural network approach, we can represent it as in equation 3 below, where si ∈ s is a vector in the embedding space; it can be an entity or the result of a previous operation. figure 3: mlp-mixer. at the top, we show the block diagram of the mlp-mixer architecture. it is formed by a per-patch fully connected module, n mixer modules, an average pooling and a last fully connected module. the bottom figure shows the mixer module, which contains one channel-mixing mlp, each consisting of 2 fully-connected layers and a relu nonlinearity. it also includes skip-connections, dropout and layer norm on the channels.
nnk is a multi-layer perceptron with k layers and the same number of inputs as outputs: n(si) = nnk(si), ∀si ∈ s. multi-layer perceptron mixer model (mlp-mixer) the mlp-mixer (tolstikhin et al., 2021) is a neural architecture originally built for computer vision applications, which achieves competitive results when compared to convolutional neural networks (cnns) and attention-based networks. the mlp-mixer is a model based exclusively on multi-layer perceptrons (mlps). it contains two types of layers: (1) one with mlps applied independently to patches and (2) another one with mlps applied across patches. figure 3 presents a diagram of the architecture. mixer operators. we use the same procedure as in the mlp model. we use a full mlp-mixer block to train each of the 2 operators with 2 inputs: projection and intersection. since negation only has 1 input, the architecture cannot be accommodated for this use so far. - 2-input operator (figure 2a). represents projection p or intersection i with the mlp-mixer architecture: p(si, rj) = mlpmix(si, rj), ∀si ∈ s, ∀rj ∈ r; i(si, sj) = mlpmix(si, sj), ∀si, sj ∈ s, where mlpmix represents the mixer architecture, si an embedding in the entity vector space s, and rj ∈ r a relation. training objective, distance and inference training objective. the goal is to jointly train the logical operators and the node embeddings, which are learning parameters that are initialized randomly. our training objective is to minimize the distance between the query embedding and its answer entities, while maximizing the distance from the query to incorrect random entities, which can be done via negative samples. equation 5 expresses this training objective in mathematical terms: l = − log σ(γ − dist(v; q)) − (1/k) Σj log σ(dist(v′j; q) − γ), where q is the query, v ∈ [q] is an answer to the query (the positive sample), v′j ∉ [q] represents a random negative sample, and γ refers to the margin.
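the margin-based objective of equation 5 can be sketched numerically. the sketch below assumes the euclidean distance (as the paper adopts in the next paragraph) and uses toy embeddings of our own choosing; γ and the negative samples are illustrative.

```python
import math

# Sketch of the training objective (Equation 5): pull the positive answer
# embedding v within margin gamma of the query q, push k random negative
# samples beyond the margin.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def dist(a, b):
    # Euclidean distance between two embeddings.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def query_loss(q, v, negs, gamma=1.0):
    pos_term = -math.log(sigmoid(gamma - dist(v, q)))
    neg_term = -sum(math.log(sigmoid(dist(vj, q) - gamma))
                    for vj in negs) / len(negs)
    return pos_term + neg_term

q = [0.0, 0.0]                       # toy query embedding
close_answer = [0.1, 0.0]            # positive sample near the query
far_answer = [3.0, 0.0]              # positive sample far from the query
negs = [[5.0, 5.0], [-4.0, 2.0]]     # k = 2 toy negative samples
```

as expected, a positive sample close to the query embedding yields a lower loss than a distant one, which is what gradient descent on this objective exploits.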
both the margin γ and the number of negative samples k remain hyperparameters of the model. distance measure. when defining the training objective, we still need to specify the distance measure used to compare the entity vectors. unlike in previous works, we do not need a measure that compares boxes or distributions. the euclidean distance is enough for this purpose, as it calculates the distance between two points in a euclidean space: dist(v, q) = ∥v − q∥. inference. each operator provides an embedding. following the query's computation graph, we obtain the final answer embedding (or query representation). then, all entities are ranked according to the distance of this embedding to all entity embeddings via nearest-neighbor search, in constant time using locality-sensitive hashing (indyk & motwani, 1998). discussion on answering fol queries
πbo: augmenting acquisition functions with user beliefs for bayesian optimization carl hvarfner1, danny stoll2, artur souza3, marius lindauer4, frank hutter2,5 & luigi nardi1,6 1lund university, 2university of freiburg, 3federal university of minas gerais, 4leibniz university hannover, 5bosch center for artificial intelligence, 6stanford university {carl.hvarfner, luigi.nardi}@cs.lth.se, {stolld, fh}@cs.uni-freiburg.de, arturluis@dcc.ufmg.br, lindauer@tnt.uni-hannover.de abstract bayesian optimization (bo) has become an established framework and popular tool for hyperparameter optimization (hpo) of machine learning (ml) algorithms. while known for its sample-efficiency, vanilla bo cannot utilize readily available prior beliefs the practitioner has on the potential location of the optimum. thus, bo disregards a valuable source of information, reducing its appeal to ml practitioners. to address this issue, we propose πbo, an acquisition function generalization which incorporates prior beliefs about the location of the optimum in the form of a probability distribution, provided by the user. in contrast to previous approaches, πbo is conceptually simple and can easily be integrated with existing libraries and many acquisition functions. we provide regret bounds when πbo is applied to the common expected improvement acquisition function and prove convergence at regular rates independently of the prior. further, our experiments show that πbo outperforms competing approaches across a wide suite of benchmarks and prior characteristics. we also demonstrate that πbo improves on the state-of-the-art performance for a popular deep learning task, with a 12.5× time-to-accuracy speedup over prominent bo approaches. introduction the optimization of expensive black-box functions is a prominent task, arising across a wide range of applications.
bayesian optimization (bo) is a sample-efficient approach to cope with this task, and has been successfully applied to various problem settings, including hyperparameter optimization (hpo) (snoek et al., 2012), neural architecture search (nas) (ru et al., 2021), joint nas and hpo (zimmer et al., 2021), algorithm configuration (hutter et al., 2011), hardware design (nardi et al., 2019), robotics (calandra et al., 2014), and the game of go (chen et al., 2018). despite the demonstrated effectiveness of bo for hpo (bergstra et al., 2011; turner et al., 2021), its adoption among practitioners remains limited. in a survey covering neurips 2019 and iclr 2020 (bouthillier & varoquaux, 2020), manual search was shown to be the most prevalent tuning method, with bo accounting for less than 7% of all tuning efforts. as the understanding of hyperparameter settings in deep learning (dl) models increases (smith, 2018), so too does the tuning proficiency of practitioners (anand et al., 2020). as previously displayed (smith, 2018; anand et al., 2020; souza et al., 2021; wang et al., 2019), this knowledge manifests in choosing single configurations or regions of hyperparameters that presumably yield good results, demonstrating a belief over the location of the optimum. bo's inability to properly incorporate said beliefs is a reason why practitioners prefer manual search to bo (wang et al., 2019), despite its documented shortcomings (bergstra & bengio, 2012). to improve the usefulness of automated hpo approaches for ml practitioners, the ability to incorporate such knowledge is pivotal. well-established bo frameworks (snoek et al., 2012; hutter et al., 2011; the gpyopt authors, 2016; kandasamy et al., 2020; balandat et al., 2020) support user input to a limited extent, such as by biasing the initial design, or by narrowing the search space; however, this type of hard prior can lead to poor performance by missing important regions.
bo also supports a prior over functions p(f ) via the gaussian process kernel. however, this option for injecting knowledge is not aligned with the knowledge that experts possess: they often know which ranges of hyperparameter values tend to work best (perrone et al., 2019; smith, 2018; wang et al., 2019), and are able to specify a probability distribution to quantify these priors. for example, many users of the adam optimizer (kingma & ba, 2015) know that its best learning rate is often in the vicinity of 1 × 10−3. in practice, dl experiments are typically conducted in a low-budget setting of less than 50 full model trainings (bouthillier & varoquaux, 2020). as such, practitioners want to exploit their knowledge efficiently without wasting early model trainings on configurations they expect to likely perform poorly. unfortunately, this suits standard bo poorly, as bo requires a moderate number of function evaluations to learn about the response surface and make informed decisions that outperform random search. while there is a demand to increase knowledge injection possibilities to further the adoption of bo, the concept of encoding prior beliefs over the location of an optimum is still rather novel: while there are some initial works (ramachandran et al., 2020; li et al., 2020; souza et al., 2021), no approach exists so far that allows the integration of arbitrary priors and offers flexibility in the choice of acquisition function; theory is also lacking. we close this gap by introducing a novel, remarkably simple, approach for injecting arbitrary prior beliefs into bo that is easy to implement, agnostic to the surrogate model used and converges at standard bo rates for any choice of prior. our contributions after discussing our problem setting, related work, and background (section 2), we make the following contributions: 1. 
we introduce πbo, a novel generalization of myopic acquisition functions that accounts for user-specified prior distributions over possible optima, is demonstrably simple-to-implement, and can be easily combined with arbitrary surrogate models (section 3.1 & 3.2); 2. we formally prove that πbo inherits the theoretical properties of the well-established expected improvement acquisition function (section 3.3); 3. we demonstrate on a broad range of established benchmarks and in dl case studies that πbo can yield 12.5× time-to-accuracy speedup over vanilla bo (section 4). background and related work black-box optimization we consider the problem of optimizing a black-box function f across a set of feasible inputs $\mathcal{X} \subset \mathbb{R}^d$: $x^* \in \arg\min_{x \in \mathcal{X}} f(x)$. (1) we assume that f(x) is expensive to evaluate, and can potentially only be observed through a noisy estimate, y. in this setting, we wish to minimize f in an efficient manner, typically adhering to a budget which sets a cap on the number of points that can be evaluated. black-box optimization with probabilistic user beliefs in our work, we consider an augmented version of the optimization problem in eq. (1), where we have access to user beliefs in the form of a probability distribution on the location of the optimum. formally, we define the problem of black-box optimization with probabilistic user beliefs as solving eq. (1), given a user-specified prior probability on the location of the optimum defined as $\pi(x) = \mathbb{P}\big(f(x) = \min_{x' \in \mathcal{X}} f(x')\big)$, (2) where regions that the user expects to likely contain an optimum will have a high value. we note that, without loss of generality, we require π to be strictly positive on all of $\mathcal{X}$, i.e., any point in the search space might be an optimum. since the user belief π(x) can be inaccurate or even misleading, optimizing eq. (1) given (2) is a challenging problem. bayesian optimization we outline bayesian optimization (mockus et al., 1978; brochu et al., 2010; shahriari et al., 2016b).
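a user belief π(x) of the kind defined in eq. (2) can be sketched concretely. this is a hypothetical illustration of ours, not the paper's construction: a gaussian bump around the location the practitioner expects (think of a normalized log-learning-rate axis), mixed with a uniform floor so that π stays strictly positive on all of the search space, as required above.

```python
import math

# Hypothetical user prior over the optimum location on a 1-d search
# space [0, 1]: gaussian belief around `loc`, plus a uniform floor so
# that every point keeps nonzero prior mass (strict positivity).
def user_prior(x, loc=0.5, scale=0.1, floor=0.05):
    gauss = math.exp(-0.5 * ((x - loc) / scale) ** 2) \
            / (scale * math.sqrt(2.0 * math.pi))
    # Un-normalized mixture, but strictly positive everywhere on [0, 1].
    return (1.0 - floor) * gauss + floor

p_center = user_prior(0.5)   # where the user expects the optimum
p_edge = user_prior(0.99)    # far from the believed optimum
```

the floor term is what keeps a misleading belief recoverable: even if the bump is placed badly, no region is assigned zero probability of containing the optimum.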
model bo aims to globally minimize f by an initial experimental design $\mathcal{D}_0 = \{(x_i, y_i)\}_{i=1}^{M}$ and thereafter sequentially deciding on new points $x_n$ to form the data $\mathcal{D}_n = \mathcal{D}_{n-1} \cup \{(x_n, y_n)\}$ for the n-th iteration with $n \in \{1 \ldots N\}$. after each new observation, bo constructs a probabilistic surrogate model of f and uses that surrogate to evaluate an acquisition function $\alpha(x, \mathcal{D}_n)$. the combination of surrogate model and acquisition function encodes the policy for selecting the next point $x_{n+1}$. when constructing the surrogate, the most common choice is gaussian processes (rasmussen & williams, 2006), which model f as $p(f \mid \mathcal{D}_n) = \mathcal{GP}(m, k)$, with prior mean m (which is typically 0) and positive semi-definite covariance kernel k. the posterior mean $m_n$ and the variance $s_n^2$ are
$$m_n(x) = k_n(x)^\top (K_n + \sigma_n^2 I)^{-1} y, \qquad s_n^2(x) = k(x, x) - k_n(x)^\top (K_n + \sigma_n^2 I)^{-1} k_n(x),$$
where $(K_n)_{ij} = k(x_i, x_j)$, $k_n(x) = [k(x, x_1), \ldots, k(x, x_n)]^\top$ and $\sigma_n^2$ is the estimate of the observation noise variance $\sigma^2$. alternative surrogate models include random forests (hutter et al., 2011) and bayesian neural networks (springenberg et al., 2016). acquisition functions to obtain new candidates to evaluate, bo employs a criterion, called an acquisition function, that encapsulates an explore-exploit trade-off. by maximizing this criterion at each iteration, one or more candidate points are obtained and added to observed data. several acquisition functions are used in bo; the most common of these is expected improvement (ei) (jones et al., 1998). for a noiseless function, ei selects the next point $x_{n+1}$, where $f_n^*$ is the minimal objective function value observed by iteration n, as
$$x_{n+1} \in \arg\max_{x \in \mathcal{X}} \mathbb{E}\big[(f_n^* - f(x))^+\big] = \arg\max_{x \in \mathcal{X}} Z s_n(x)\Phi(Z) + s_n(x)\phi(Z),$$
where $Z = (f_n^* - m_n(x))/s_n(x)$, and $\Phi$ and $\phi$ denote the standard normal cdf and pdf. thus, ei provides a myopic strategy for determining promising points; it also comes with convergence guarantees (bull, 2011).
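the gp posterior equations and the noiseless ei criterion above can be sketched together. this is a minimal sketch under our own assumptions: a squared-exponential kernel (a common default; the section does not fix a kernel), toy data points, and illustrative query locations.

```python
import math
import numpy as np

# Squared-exponential kernel between two sets of points (rows = points).
def sq_exp_kernel(X1, X2, ell=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_posterior(X, y, Xstar, noise=1e-6):
    K = sq_exp_kernel(X, X) + noise * np.eye(len(X))   # K_n + sigma_n^2 I
    Ks = sq_exp_kernel(X, Xstar)                       # columns are k_n(x)
    sol = np.linalg.solve(K, Ks)                       # (K_n + sigma_n^2 I)^{-1} k_n(x)
    mean = sol.T @ y                                   # m_n(x)
    var = np.diag(sq_exp_kernel(Xstar, Xstar) - Ks.T @ sol)  # s_n^2(x)
    return mean, np.maximum(var, 0.0)

# Noiseless EI: EI(x) = Z s_n(x) Phi(Z) + s_n(x) phi(Z), Z = (f* - m_n)/s_n.
def expected_improvement(mean, std, f_best):
    if std == 0.0:
        return max(f_best - mean, 0.0)
    z = (f_best - mean) / std
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal cdf
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # pdf
    return z * std * Phi + std * phi

X = np.array([[0.0], [1.0], [2.0]])                    # toy observed inputs
y = np.array([0.5, -0.3, 0.4])                         # toy observations (minimizing)
mean, var = gp_posterior(X, y, np.array([[1.1], [3.0]]))
f_best = y.min()
ei_near_min = expected_improvement(mean[0], math.sqrt(var[0]), f_best)
ei_far = expected_improvement(mean[1], math.sqrt(var[1]), f_best)
```

the comparison illustrates the explore-exploit trade-off encoded by ei: the well-observed point near the incumbent has little remaining uncertainty and scores low, while the distant, uncertain point scores higher despite a worse posterior mean.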
similar myopic acquisition functions are upper confidence bound (ucb) (srinivas et al., 2012), probability of improvement (pi) (jones, 2001; kushner, 1964) and thompson sampling (ts) (thompson, 1933). a different class of acquisition functions is based on non-myopic criteria, such as entropy search (hennig & schuler, 2012), predictive entropy search (hernández-lobato et al., 2014) and max-value entropy search (wang & jegelka, 2017), which select points to minimize the uncertainty about the optimum, and the knowledge gradient (frazier et al., 2008), which aims to minimize the posterior mean of the surrogate at the subsequent iteration. our work applies to all acquisition functions in the first class, and we leave its extension to those in the second class for future work. related work there are two main categories of approaches that exploit prior knowledge in bo: approaches that use records of previous experiments, and approaches that incorporate assumptions on the black-box function provided either directly or indirectly by the user. as πbo exploits prior knowledge from users, we briefly discuss approaches which utilize previous experiments, and then comprehensively discuss the literature on exploiting expert knowledge. learning from previous experiments transfer learning for bo aims to automatically extract and use knowledge from prior executions of bo. these executions can come, for example, from learning and optimizing the hyperparameters of a machine learning algorithm on different datasets (van rijn & hutter, 2018; swersky et al., 2013; wistuba et al., 2015; perrone et al., 2019; feurer et al., 2015; 2018), or from optimizing the hyperparameters at different development stages (stoll et al., 2020). for a comprehensive overview of meta learning for hyperparameter optimization, please see the survey from vanschoren (2018). 
in contrast to these transfer learning approaches, πbo and the related work discussed below do not hinge on the existence of previous experiments, and can therefore be applied more generally. incorporating expert priors over function structure bo can leverage structural priors on how the objective function is expected to behave. traditionally, this is done via the surrogate model’s prior over functions, e.g., the kernel of the gp. however, there are lines of work that explore additional structural priors for bo to leverage. for instance, both smac (hutter et al., 2011) and irace (lópez-ibáñez et al., 2016) support structural priors in the form of log-transformations, li et al. (2018) propose to use knowledge about the monotonicity of the objective function as a prior for bo, and snoek et al. (2014) model non-stationary covariance between inputs by warping said inputs. oh et al. (2018) and siivola et al. (2018) both propose structural priors tailored to high-dimensional problems, addressing the issue of over-exploring the boundary described by swersky (2017). oh et al. (2018) propose a cylindrical kernel that expands the center of the search space and shrinks the edges, while siivola et al. (2018) propose adding derivative signs to the edges of the search space to steer bo towards the center. lastly, shahriari et al. (2016a) propose a bo algorithm for unbounded search spaces which uses a regularizer to penalize points based on their distance to the center of the user-defined search space. all of these approaches incorporate prior information on specific properties of the function or search space, and are thus not always applicable. moreover, they do not generally direct the search to desired regions of the search space, offering the user little control over the selection of points to evaluate. incorporating expert priors over function optimum few previous works have proposed to inject explicit prior distributions over the location of an optimum into bo.
in these cases, users explicitly define a prior that encodes their beliefs on where the optimum is more likely to be located. bergstra et al. (2011) suggest an approach that supports prior beliefs from a fixed set of distributions. however, this approach cannot be combined with standard acquisition functions. bopro (souza et al., 2021) employs a similar structure that combines the user-provided prior distribution with a data-driven model into a pseudo-posterior. from the pseudo-posterior, configurations are selected using the ei acquisition function, using the formulation in bergstra et al. (2011). while bopro is able to recover from misleading priors, its design restricts it to only use ei. moreover, it does not provide the convergence guarantees of πbo. li et al. (2020) propose to infer a posterior conditioned on both the observed data and the user prior through repeated thompson sampling and maximization under the prior. this method displays robustness against misleading priors but is lacking in empirical performance. additionally, it is restricted to only one specific acquisition function. ramachandran et al. (2020) use the probability integral transform to warp the search space, stretching high-probability regions and shrinking others. while the approach is model- and acquisition-function-agnostic, it requires invertible priors, and does not empirically display the ability to recover from misleading priors. in section 4, we demonstrate that πbo compares favorably against these methods for priors over the function optimum, showing improved empirical performance. additionally, we provide a complete comparison of all approaches in appendix c. in summary, πbo sets itself apart from the methods above by being simpler (and thus easier to implement in different frameworks), by being flexible with regard to different acquisition functions and different surrogate models, by the availability of theoretical guarantees, and, as we demonstrate in section 4, by better empirical results.
methodology we now present πbo, which allows users to specify their belief about the location of the optimum through any probability distribution. a conceptually simple approach, πbo can be easily implemented in existing bo frameworks and can be combined directly with the myopic acquisition functions listed above. πbo augments an acquisition function to emphasize promising regions under the prior, ensuring that such regions are explored frequently. as optimization progresses, the πbo strategy increasingly resembles that of vanilla bo, retaining its standard convergence rates (see section 3.3). πbo is publicly available as part of the smac (https://github.com/automl/smac3) and hypermapper (https://github.com/luinardi/hypermapper) hpo frameworks.
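the exact construction lives in the paper's section 3, which is not reproduced in this excerpt; as a purely hypothetical illustration of the idea (emphasize high-prior regions early, fade towards vanilla bo), one can weight any myopic acquisition by the prior raised to a decaying exponent:

```python
import numpy as np

def prior_weighted_acquisition(alpha_values, prior_values, n, beta=10.0):
    # Hypothetical sketch: multiply the vanilla acquisition alpha(x, D_n)
    # at candidate points by the user prior pi(x) raised to beta / n.
    # Early on (small n) the prior dominates the ranking of candidates;
    # as n grows the exponent decays and the strategy approaches vanilla BO.
    # `beta` is a hypothetical confidence hyperparameter, not from this text.
    return alpha_values * prior_values ** (beta / n)
```

because π is required to be strictly positive on all of x, no candidate is ever assigned zero acquisition value, so a misleading prior slows the search down rather than ruling regions out permanently.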
simplified state space layers for sequence modeling jimmy t.h. smith*, 1, 2, andrew warrington*, 2, 3, scott w. linderman2, 3 *equal contribution. 1institute for computational and mathematical engineering, stanford university. 2wu tsai neurosciences institute, stanford university. 3department of statistics, stanford university. {jsmith14,awarring,scott.linderman}@stanford.edu. abstract models using structured state space sequence (s4) layers have achieved state-ofthe-art performance on long-range sequence modeling tasks. an s4 layer combines linear state space models (ssms), the hippo framework, and deep learning to achieve high performance. we build on the design of the s4 layer and introduce a new state space layer, the s5 layer. whereas an s4 layer uses many independent single-input, single-output ssms, the s5 layer uses one multi-input, multi-output ssm. we establish a connection between s5 and s4, and use this to develop the initialization and parameterization used by the s5 model. the result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing s5 to match the computational efficiency of s4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. s5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult path-x task. introduction efficiently modeling long sequences is a challenging problem in machine learning. information crucial to solving tasks may be encoded jointly between observations that are thousands of timesteps apart. specialized variants of recurrent neural networks (rnns) (arjovsky et al., 2016; erichson et al., 2021; rusch & mishra, 2021; chang et al., 2019), convolutional neural networks (cnns) (bai et al., 2018; oord et al., 2016; romero et al., 2022b), and transformers (vaswani et al., 2017) have been developed to try to address this problem.
in particular, many efficient transformer methods have been introduced (choromanski et al., 2021; katharopoulos et al., 2020; kitaev et al., 2020; beltagy et al., 2020; gupta & berant, 2020; wang et al., 2020) to address the standard transformer’s quadratic complexity in the sequence length. however, these more efficient transformers still perform poorly on very long-range sequence tasks (tay et al., 2021). gu et al. (2021a) presented an alternative approach using structured state space sequence (s4) layers. an s4 layer defines a nonlinear sequence-to-sequence transformation via a bank of many independent single-input, single-output (siso) linear state space models (ssms) (gu et al., 2021b), coupled together with nonlinear mixing layers. each ssm leverages the hippo framework (gu et al., 2020a) by initializing with specially constructed state matrices. since the ssms are linear, each layer can be equivalently implemented as a convolution, which can then be applied efficiently by parallelizing across the sequence length. multiple s4 layers can be stacked to create a deep sequence model. such models have achieved significant improvements over previous methods, including on the long range arena (lra) (tay et al., 2021) benchmarks specifically designed to stress test long-range sequence models. extensions have shown good performance on raw audio generation (goel et al., 2022) and classification of long movie clips (islam & bertasius, 2022). we introduce a new state space layer that builds on the s4 layer, the s5 layer, illustrated in figure 1. s5 streamlines the s4 layer in two main ways. first, s5 uses one multi-input, multi-output (mimo) ssm in place of the bank of many independent siso ssms in s4. second, s5 uses an efficient and widely implemented parallel scan. this removes the need for the convolutional and frequencydomain approach used by s4, which requires a non-trivial computation of the convolution kernel. 
figure 1: the computational components of an s5 layer for offline application to a sequence. the s5 layer uses a parallel scan on a diagonalized linear ssm to compute the ssm outputs y1:l ∈ rl×h. a nonlinear activation function is applied to the ssm outputs to produce the layer outputs. a similar diagram for s4 is included in appendix b. the resulting state space layer has the same computational complexity as s4, but operates purely recurrently and in the time domain. we then establish a mathematical relationship between s4 and s5. this connection allows us to inherit the hippo initialization schemes that are key to the success of s4. unfortunately, the specific hippo matrix that s4 uses for initialization cannot be diagonalized in a numerically stable manner for use in s5. however, in line with recent work on the dss (gupta et al., 2022) and s4d (gu et al., 2022) layers, we found that a diagonal approximation to the hippo matrix achieves comparable performance. we extend a result from gu et al. (2022) to the mimo setting, which justifies the diagonal approximation for use in s5. we leverage the mathematical relationship between s4 and s5 to inform several other aspects of parameterization and initialization, and we perform thorough ablation studies to explore these design choices. the final s5 layer has many desirable properties. it is straightforward to implement (see appendix a), enjoys linear complexity in the sequence length, and can efficiently handle time-varying ssms and irregularly sampled observations (which is intractable with the convolution implementation of s4). s5 achieves state-of-the-art performance on a variety of long-range sequence modeling tasks, with an lra average of 87.4%, and 98.5% accuracy on the most difficult path-x task. background we provide the necessary background in this section prior to introducing the s5 layer in section 3.
linear state space models continuous-time linear ssms are the core component of both the s4 layer and the s5 layer. given an input signal u(t) ∈ ru, a latent state x(t) ∈ rp and an output signal y(t) ∈ rm, a linear continuous-time ssm is defined by the differential equation dx(t)/dt = ax(t) + bu(t), y(t) = cx(t) + du(t), (1) and is parameterized by a state matrix a ∈ rp×p, an input matrix b ∈ rp×u, an output matrix c ∈ rm×p and a feedthrough matrix d ∈ rm×u. for a constant step size ∆, the ssm can be discretized using, e.g., euler, bilinear or zero-order hold (zoh) methods to define the linear recurrence xk = axk−1 + buk, yk = cxk + duk, (2) where the discrete-time parameters are each a function, specified by the discretization method, of the continuous-time parameters. see iserles (2009) for more information on discretization methods. parallelizing linear state space models with scans we use parallel scans to efficiently compute the states of a discretized linear ssm. given a binary associative operator • (i.e. (a • b) • c = a • (b • c)) and a sequence of l elements [a1, a2, ..., al], the scan operation (sometimes referred to as all-prefix-sum) returns the sequence [a1, a1 • a2, . . . , a1 • a2 • · · · • al]. computing a length-l linear recurrence of a discretized ssm, xk = axk−1 + buk as in (2), is a specific example of a scan operation. as discussed in section 1.4 of blelloch (1990), the linear recurrence of the latent transitions in the discretized ssm above can be computed in a parallel time of o(t⊙ log l), assuming l processors, where t⊙ represents the cost of matrix-matrix multiplication. for a general matrix a ∈ rp×p, t⊙ is o(p3). this can be prohibitively expensive in deep learning settings. however, if a is a diagonal matrix, the parallel time becomes o(p log l) with l processors and only requires o(p l) space. (the full s5 implementation is available at: https://github.com/lindermanlab/s5.)
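to make the scan formulation concrete: for a diagonal discrete state matrix, each scan element is the pair (a, b uk) representing the affine map x ↦ a·x + b·uk, and the binary operator composes two such maps. a minimal numpy sketch (written sequentially for clarity; the associativity of the operator is exactly what lets frameworks such as jax, via jax.lax.associative_scan, evaluate the same recurrence in parallel):

```python
import numpy as np

def binop(e1, e2):
    # associative operator for the linear recurrence x_k = A x_{k-1} + B u_k
    # with diagonal A; each element (A_i, b_i) represents the affine map
    # x -> A_i * x + b_i (elementwise, since A is diagonal)
    A1, b1 = e1
    A2, b2 = e2
    return (A2 * A1, A2 * b1 + b2)

def scan_recurrence(Abar, Bu):
    # inclusive scan over [(Abar, Bu_1), (Abar, Bu_2), ...]; the second
    # component of the k-th prefix is exactly the state x_k
    acc = (np.ones_like(Abar), np.zeros_like(Abar))  # identity affine map
    xs = []
    for u in Bu:
        acc = binop(acc, (Abar, u))
        xs.append(acc[1])
    return np.stack(xs)
```

a quick check against the plain sequential recurrence confirms that the prefix products reproduce the states, and that the operator really is associative (so any bracketing, and hence a parallel evaluation order, gives the same answer).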
finally, we note that efficient parallel scans are implemented in a work-efficient manner, thus the total computational cost of the parallel scan with a diagonal matrix is o(p l) operations. see appendix h for more information on parallel scans. s4: structured state space sequence layers the s4 layer (gu et al., 2021a) defines a nonlinear sequence-to-sequence transformation, mapping from an input sequence u1:l ∈ rl×h to an output sequence u′1:l ∈ rl×h. an s4 layer contains a bank of h independent single-input, single-output (siso) ssms with n-dimensional states. each s4 ssm is applied to one dimension of the input sequence. this results in an independent linear transformation from each input channel to each preactivation channel. a nonlinear activation function is then applied to the preactivations. finally, a position-wise linear mixing layer is applied to combine the independent features and produce the output sequence u′1:l. figure 4a in the appendix illustrates the view of the s4 layer as a bank of independent ssms. figure 2a shows an alternative view of s4 as one large ssm with state size hn and block-diagonal state, input and output matrices. each s4 ssm leverages the hippo framework for online function approximation (gu et al., 2020a) by initializing the state matrices with a hippo matrix (most often the hippo-legs matrix). this was demonstrated empirically to lead to strong performance (gu et al., 2021b;a), and can be shown as approximating long-range dependencies with respect to an infinitely long, exponentially-decaying measure (gu et al., 2023). while the hippo-legs matrix is not stably diagonalizable (gu et al., 2021a), it can be represented as a normal plus low-rank (nplr) matrix. the normal component, referred to as hippo-n and denoted anormal_legs, can be diagonalized. thus, the hippo-legs matrix can be conjugated into a diagonal plus low-rank (dplr) form, which s4 then utilizes to derive an efficient form of the convolution kernel.
this motivates s4’s dplr parameterization. efficiently applying the s4 layer requires two separate implementations depending on context: a recurrent mode and a convolution mode. for online generation, the ssm is iterated recurrently, much like other rnns. however, when the entire sequence is available and the observations are evenly spaced, a more efficient convolution mode is used. this takes advantage of the ability to represent the linear recurrence as a one-dimensional convolution between the inputs and a convolution kernel for each of the ssms. fast fourier transforms (ffts) can then be applied to efficiently parallelize this application. figure 4a in the appendix illustrates the convolution approach of the s4 layer for offline processing. we note that while parallel scans could, in principle, allow a recurrent approach to be used in offline scenarios, applying the parallel scan to all h of the n-dimensional ssms would in general be much more expensive than the convolution approach s4 actually uses. the trainable parameters of each s4 layer are the h independent copies of the learnable ssm parameters and the o(h2) parameters of the mixing layer. for each of the h ∈ {1, ..., h} s4 ssms, given a scalar input signal u(h)(t) ∈ r, an s4 ssm uses an input matrix b(h) ∈ cn×1, a dplr-parameterized transition matrix a(h) ∈ cn×n, an output matrix c(h) ∈ c1×n, and a feedthrough matrix d(h) ∈ r1×1, to produce a signal y(h)(t) ∈ r. to apply the s4 ssms to discrete sequences, each continuous-time ssm is discretized using a constant timescale parameter ∆(h) ∈ r+. the learnable parameters of each ssm are the timescale parameter ∆(h) ∈ r+, the continuous-time parameters b(h), c(h), d(h), and the dplr matrix, parameterized by vectors λ(h) ∈ cn and p(h), q(h) ∈ cn representing the diagonal matrix and low-rank terms respectively.
for notational compactness we denote the concatenation of the h s4 ssm states at discrete time index k as x(1:h)_k = [(x(1)_k)⊤, . . . , (x(h)_k)⊤]⊤, and the h ssm outputs as yk = [y(1)_k, . . . , y(h)_k]⊤. (a) internal structure of a single s4 layer (gu et al., 2021a) when viewed as a block-diagonal system. (b) internal structure of a single s5 layer. figure 2: schematic of the internal structure of a discretized s4 layer (gu et al., 2021a) (top) and s5 layer (bottom). note d is omitted for simplicity. (a) we view an s4 layer as a single block-diagonal ssm with a latent state of size hn, followed by a nonlinearity and mixing layer to mix the independent features. (b) in contrast, the s5 layer uses a dense, mimo linear ssm with latent size p ≪ hn. the s5 layer in this section we present the s5 layer. we describe its structure, parameterization and computation, particularly focusing on how each of these differ from s4. s5 structure: from siso to mimo the s5 layer replaces the bank of siso ssms (or large block-diagonal system) in s4 with a multi-input, multi-output (mimo) ssm, as in (1), with a latent state size p, and input and output dimension h. the discretized version of this mimo ssm can be applied to a vector-valued input sequence u1:l ∈ rl×h, to produce a vector-valued sequence of ssm outputs (or preactivations) y1:l ∈ rl×h, using latent states xk ∈ rp. a nonlinear activation function is then applied to produce a sequence of layer outputs u′1:l ∈ rl×h. see figure 2b for an illustration. unlike s4, we do not require an additional position-wise linear layer, since these features are already mixed. we note here that compared to the hn latent size of the block-diagonal ssm in the s4 layer, s5’s latent size p can be significantly smaller, allowing for the use of efficient parallel scans, as we discuss in section 3.3.
s5 parameterization: diagonalized dynamics the parameterization of the s5 layer’s mimo ssm is motivated by the desire to use efficient parallel scans. as discussed in section 2.2, a diagonal state matrix is required to efficiently compute the linear recurrence using a parallel scan. thus, we diagonalize the system, writing the continuous-time state matrix as a = vλv−1, where λ ∈ cp×p denotes the diagonal matrix containing the eigenvalues and v ∈ cp×p contains the corresponding eigenvectors. therefore, we can diagonalize the continuous-time latent dynamics from (1) as d(v−1x(t))/dt = λv−1x(t) + v−1bu(t). defining x̃(t) = v−1x(t), b̃ = v−1b, and c̃ = cv gives the reparameterized system dx̃(t)/dt = λx̃(t) + b̃u(t), y(t) = c̃x̃(t) + du(t). this is a linear ssm with a diagonal state matrix. this diagonalized system can be discretized with a timescale parameter ∆ ∈ r+ using the zoh method to give another diagonalized system with parameters λ̄ = e^{λ∆}, b̄ = λ−1(λ̄ − i)b̃, c̄ = c̃, d̄ = d. in practice, we use a vector of learnable timescale parameters ∆ ∈ rp (see section 4.3) and restrict the feedthrough matrix d to be diagonal. the s5 layer therefore has the learnable parameters: b̃ ∈ cp×h, c̃ ∈ ch×p, diag(d) ∈ rh, diag(λ) ∈ cp, and ∆ ∈ rp. initialization prior work showed that the performance of deep state space models is sensitive to the initialization of the state matrix (gu et al., 2021b;a). we discussed in section 2.2 that state matrices must be diagonal for efficient application of parallel scans. we also discussed in section 2.3 that the hippo-legs matrix cannot be diagonalized stably, but that the hippo-n matrix can be. in section 4 we connect the dynamics of s5 to s4 to suggest why initializing with hippo-like matrices may also work well in the mimo setting. we support this empirically, finding that diagonalizing the hippo-n matrix leads to good performance, and perform ablations in appendix e to compare to other initializations.
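for a diagonal λ, the zoh expressions above reduce to elementwise operations; a minimal sketch (a single real, negative eigenvalue is chosen here only so the result can be checked against the exact ode solution, since zoh is exact for piecewise-constant inputs):

```python
import numpy as np

def zoh_discretize(Lam, B_tilde, delta):
    # ZOH discretization of the diagonalized SSM:
    #   Lam_bar = exp(Lam * delta)
    #   B_bar   = Lam^{-1} (Lam_bar - I) B_tilde   (elementwise, Lam diagonal)
    Lam_bar = np.exp(Lam * delta)
    B_bar = ((Lam_bar - 1.0) / Lam)[:, None] * B_tilde
    return Lam_bar, B_bar

def run_ssm(Lam_bar, B_bar, u_seq):
    # x_k = Lam_bar * x_{k-1} + B_bar u_k, elementwise in the diagonal basis
    x = np.zeros(Lam_bar.shape[0], dtype=Lam_bar.dtype)
    states = []
    for u in u_seq:
        x = Lam_bar * x + B_bar @ u
        states.append(x)
    return np.stack(states)
```

with λ = −1, ∆ = 0.1 and a constant unit input, the discrete states land exactly on the continuous solution x(t) = 1 − e^{−t} at the sample times, which is the defining property of zoh.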
we note that dss (gupta et al., 2022) and s4d (gu et al., 2022) layers also found strong performance in the siso setting by using a diagonalization of the hippo-n matrix. conjugate symmetry the complex eigenvalues of a diagonalizable matrix with real entries always occur in conjugate pairs. we enforce this conjugate symmetry by using half the number of eigenvalues and latent states. this ensures real outputs and reduces the runtime and memory usage of the parallel scan by a factor of two. this idea is also discussed in gu et al. (2022). s5 computation: fully recurrent compared to the large hn effective latent size of the block-diagonal s4 layer, the smaller latent dimension of the s5 layer (p) allows the use of efficient parallel scans when the entire sequence is available. the s5 layer can therefore be efficiently used as a recurrence in the time domain for both online generation and offline processing. parallel scans and the continuous-time parameterization also allow for efficient handling of irregularly sampled time series and other time-varying ssms, by simply supplying a different ak matrix at each step. we leverage this feature and apply s5 to irregularly sampled data in section 6.3. in contrast, the convolution of the s4 layer requires a time-invariant system and regularly spaced observations. matching the computational efficiency of s4 and s5 a key design desideratum for s5 was matching the computational complexity of s4 for both online generation and offline recurrence. the following proposition guarantees that their complexities are of the same order if s5’s latent size p = o(h). proposition 1. given an s4 layer with h input/output features, an s5 layer with h input/output features and a latent size p = o(h) has the same order-of-magnitude complexity as an s4 layer in terms of both runtime and memory usage. proof. see appendix c.1. we also support this proposition with empirical comparisons in appendix c.2.
relationship between s4 and s5 we now establish a relationship between the dynamics of s5 and s4. in section 4.1 we show that, under certain conditions, the outputs of the s5 ssm can be interpreted as a projection of the latent states computed by a particular s4 system. this interpretation motivates using hippo initializations for s5, which we discuss in more detail in section 4.2. in section 4.3 we discuss how the conditions required to relate the dynamics further motivate initialization and parameterization choices. different output projections of equivalent dynamics we compare the dynamics of s4 and s5 under some simplifying assumptions: assumption 1. we consider only h-dimensional to h-dimensional sequence maps. assumption 2. we assume the state matrix of each s4 ssm is identical, a(h) = a ∈ cn×n. assumption 3. we assume the timescales of each s4 ssm are identical, ∆(h) = ∆ ∈ r+. assumption 4. we assume that the same state matrix a is used in s5 as in s4 (also cf. assumption 2). note this also specifies the s5 latent size p = n. we also assume the s5 input matrix is the horizontal concatenation of the column input vectors used by s4: b ≜ [b(1) | . . . | b(h)]. we will discuss relaxing these assumptions shortly, but under these conditions it is straightforward to derive a relationship between the dynamics of s4 and s5: proposition 2. consider an s5 layer, with state matrix a, input matrix b and some output matrix c (cf. assumption 1); and an s4 layer, where each of the h s4 ssms has state matrix a (cf. assumptions 2, 4) and input vector b(h) (cf. assumption 4). if the s4 and s5 layers are discretized with the same timescales (cf. assumption 3), then the s5 ssm produces outputs, yk, equivalent to a linear combination of the latent states of the h s4 ssms, yk = cequiv x(1:h)_k, where cequiv = [c · · · c]. proof. see appendix d.2. importantly, the s5 ssm outputs are not equal to the outputs of the block-diagonal s4 ssm.
instead they are equivalent to the outputs of the block-diagonal s4 ssm with modified output matrix cequiv. under the assumptions, however, the underlying state dynamics are equivalent. recalling that initializing the s4 dynamics with hippo was key to performance (gu et al., 2021a), the relationship established in proposition 2 motivates using hippo initializations for s5, as we now discuss. diagonalizable initialization ideally, given the interpretation above, we would initialize s5 with the exact hippo-legs matrix. unfortunately, as discussed in section 2.3, this matrix is not stably diagonalizable, as is required for the efficient parallel scans used for s5. however, gupta et al. (2022) and gu et al. (2022) showed empirically that removing the low-rank terms and initializing with the diagonalized hippo-n matrix still performed well. gu et al. (2022) offered a theoretical justification for the use of this normal approximation for single-input systems: in the limit of infinite state dimension, the linear ode with hippo-n state matrix produces the same dynamics as an ode with the hippo-legs matrix. using linearity, it is straightforward to extend this result to the multi-input system that s5 uses: corollary 1 (extension of theorem 3 in gu et al. (2022)). consider alegs ∈ rn×n, anormal_legs ∈ rn×n, blegs ∈ rn×h, plegs ∈ rn as defined in appendix b.1.1. given vector-valued inputs u(t) ∈ rh, the ordinary differential equation dx′(t)/dt = anormal_legs x′(t) + (1/2) blegs u(t) converges to dx(t)/dt = alegs x(t) + blegs u(t) as n → ∞. we include a simple proof of this extension in appendix d.3. this extension motivates the use of hippo-n to initialize s5’s mimo ssm. note that s4d (the diagonal extension of s4) uses the same hippo-n matrix. thus, when under the assumptions in proposition 2, an s5 ssm in fact produces outputs that are equivalent to a linear combination of the latent states produced by s4d’s ssms.
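proposition 2 can be verified numerically: with a shared state matrix, shared timescale, and the s5 input matrix built by concatenating the s4 input vectors, the mimo recurrence's outputs equal cequiv applied to the stacked siso states. a small sketch using arbitrary discrete-time parameters (the common discretization step is left implicit, since both systems are assumed to share it):

```python
import numpy as np

rng = np.random.default_rng(1)
N, H, L = 4, 3, 25
Abar = 0.3 * rng.standard_normal((N, N))            # shared discretized state matrix
bs = [rng.standard_normal(N) for _ in range(H)]     # per-SSM input vectors b^(h)
Bbar = np.stack(bs, axis=1)                         # S5 input matrix [b^(1)|...|b^(H)]
C = rng.standard_normal((H, N))                     # S5 output matrix
U = rng.standard_normal((L, H))                     # vector-valued input sequence

# S5: one MIMO recurrence
x5 = np.zeros(N)
y5 = []
for k in range(L):
    x5 = Abar @ x5 + Bbar @ U[k]
    y5.append(C @ x5)
y5 = np.array(y5)

# S4 side: H independent SISO recurrences sharing Abar; stack their states
C_equiv = np.concatenate([C] * H, axis=1)           # C_equiv = [C ... C]
xs = [np.zeros(N) for _ in range(H)]
y_equiv = []
for k in range(L):
    for h in range(H):
        xs[h] = Abar @ xs[h] + bs[h] * U[k, h]
    y_equiv.append(C_equiv @ np.concatenate(xs))    # linear combination of states
y_equiv = np.array(y_equiv)
```

the agreement follows from linearity: the s5 state is the sum of the h siso states, so projecting that sum through c is the same as projecting the stacked states through [c · · · c].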
our empirical results in section 6 suggest that s5 initialized with the hippo-n matrix performs just as well as s4 initialized with the hippo-legs matrix. relaxing the assumptions we now revisit the assumptions required for proposition 2, since they only relate a constrained version of s5 to a constrained version of s4. regarding assumption 2, gu et al. (2021a) report that s4 models with tied state matrices can still perform well, though allowing different state matrices often yields higher performance. likewise, requiring a single scalar timescale across all of the s4 ssms, per assumption 3, is restrictive. s4 typically learns different timescale parameters for each ssm (gu et al., 2023) to capture different timescales in the data. to relax these assumptions, note that assumption 4 constrains s5 to have dimension p = n , and n is typically much smaller than the dimensionality of the inputs, h. proposition 1 established that s5 can match s4’s complexity with p = o(h). by allowing for larger latent state sizes, assumptions 2 and 3 can be relaxed, as discussed in appendix d.4. we also discuss how this relaxation motivates a block-diagonal initialization with hippo-n matrices on the diagonal. finally, to further relax the tied timescale assumptions, we note that in practice, we find improved performance by learning p different timescales (one per state). see appendix d.5 for further discussion of this empirical finding and the ablations in appendix e.1. related work s5 is most directly related to s4 and its other extensions, which we have discussed thoroughly. however, there is prior literature that uses similar ideas to those developed here. for example, prior work studied approximating nonlinear rnns with stacks of linear rnns connected by nonlinear layers, while also using parallel scans (martin & cundy, 2018). 
martin & cundy (2018) showed that several efficient rnns, such as qrnns (bradbury et al., 2017) and srus (lei et al., 2018), fall into a class of linear surrogate rnns that can leverage parallel scans. kaul (2020) also used parallel scans for an approach that approximates rnns with stacks of discrete-time single-input, multi-output (simo) ssms. however, s4 and s5 are the only methods to significantly outperform other comparable state-of-the-art nonlinear rnns, transformers and convolution approaches. our ablation study in appendix e.2 suggests that this performance gain over prior attempts at parallelized linear rnns is likely due to the continuous-time parameterization and the hippo initialization. experiments we now compare empirically the performance of the s5 layer to the s4 layer and other baseline methods. we use the s5 layer as a drop-in replacement for the s4 layer. the architecture consists of a linear input encoder, stacks of s5 layers, and a linear output decoder (gu et al., 2021a). for all experiments we choose the s5 dimensions to ensure similar computational complexities as s4, following the conditions discussed in section 3.3, as well as comparable parameter counts. the results we present show that the s5 layer matches the performance and efficiency of the s4 layer. we include in the appendix further ablations, baselines and runtime comparisons. long range arena the long range arena (lra) benchmark (tay et al., 2021) is a suite of six sequence modeling tasks, with sequence lengths from 1,024 to over 16,000. the suite was specifically developed to benchmark the performance of architectures on long-range modeling tasks (see appendix g for more details). table 1 presents s5’s lra performance in comparison to other methods. s5 achieves the highest average score among methods that have linear complexity in sequence length (most notably s4, s4d, and the concurrent works: liquid-s4 (hasani et al., 2023) and mega-chunk (ma et al., 2023)). 
most significantly, s5 achieves the highest score among all models (including mega (ma et al., 2023)) on the path-x task, which has by far the longest sequence length of the tasks in the benchmark. raw speech classification the speech commands dataset (warden, 2018) contains high-fidelity sound recordings of different human readers reciting a word from a vocabulary of 35 words. the task is to classify which word was spoken. we show in table 2 that s5 outperforms the baselines, outperforms previous s4 methods and performs similarly to the concurrent liquid-s4 method (hasani et al., 2023). as s4 and s5 methods table 1: test accuracy on the lra benchmark tasks (tay et al., 2021). ✗ indicates the model did not exceed random guessing. we include an expanded table, table 7, with full citations and error bars in the appendix. we follow the procedure reported in gu et al. (2021a; 2022) and report means across three seeds for s4, s4d (as reported by gu et al. (2021a; 2022)) and s5. bold scores indicate highest performance, underlined scores indicate second placed performance. we also include the results for the concurrent methods liquid-s4 (hasani et al., 2023) and mega (ma et al., 2023). unlike s4 methods and s5, the best mega model retains the transformer’s o(l2) complexity. model (input length) pathfinder (1,024) transformer luna-256 h-trans.-1d ccnn mega (o(l2)) mega-chunk (o(l)) s4d-legs s4-legs liquid-s4 table 2: test accuracy on 35-way speech commands classification task (warden, 2018). we include an expanded table, table 8, with error bars in the appendix. training examples are one-second 16khz audio waveforms. last column indicates 0-shot testing at 8khz (constructed by naive decimation). as in gu et al. (2022), the mean across three random seeds is reported. performance for the baselines inceptionnet through to s4d-lin are reported from gu et al. (2022). 
[table 2 rows: inceptionnet, resnet-1, xresnet-50, and convnet (nonaka & seita, 2021), s4-legs (gu et al., 2021a), s4d-legs (gu et al., 2022), liquid-s4 (hasani et al., 2023), and s5, with parameter counts; the per-row values did not survive extraction.] as s4 and s5 methods are parameterized in continuous time, these models can be applied to datasets with different sampling rates without the need for re-training, simply by globally re-scaling the timescale parameter ∆ by the ratio between the new and old sampling rates. the result of applying the best s5 model, trained on 16khz data, to the speech data sampled (via decimation) at 8khz without any additional fine-tuning is also presented in table 2. s5 improves over the baseline methods on this metric as well. variable observation interval | 7 | [108.249, 147.1490784, 284.3248521, 157.1116784] |
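the sampling-rate transfer described above rests on the continuous-time parameterization: halving the sampling rate just doubles the timescale ∆, and the discrete parameters are recomputed without retraining. a hedged numpy sketch of zero-order-hold discretization for a diagonal ssm (illustrative names, not the s5 code):

```python
import numpy as np

def zoh_discretize(lam, B, delta):
    """zero-order-hold discretization of the diagonal continuous-time ssm
    x'(t) = diag(lam) x(t) + B u(t), giving x_k = lam_bar * x_{k-1} + B_bar u_k."""
    lam_bar = np.exp(lam * delta)
    B_bar = ((lam_bar - 1.0) / lam)[:, None] * B
    return lam_bar, B_bar

# doubling delta corresponds to halving the sampling rate (16khz -> 8khz)
lam = np.array([-1.0, -0.5, -0.25])
B = np.ones((3, 1))
lb_16k, _ = zoh_discretize(lam, B, delta=1 / 16000)
lb_8k, _ = zoh_discretize(lam, B, delta=2 / 16000)
```

a sanity check on the design: one step at the halved rate composes two steps at the original rate, since exp(2λ∆) = exp(λ∆)², so the state transition at 8khz is exactly the square of the one at 16khz.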
y5W8tpojhtJ.pdf | 2,023 | 0 | neural collapse inspired feature-classifier alignment for few-shot class incremental learning yibo yang1∗, haobo yuan2∗, xiangtai li3, zhouchen lin3,4,5†, philip torr6, dacheng tao1 1jd explore academy 2school of computer science, wuhan university 3national key lab of general ai, school of intelligence science and technology, peking university 4institute for artificial intelligence, peking university 5peng cheng laboratory 6university of oxford †: corresponding author *: equal contribution abstract few-shot class-incremental learning (fscil) has been a challenging problem as only a few training samples are accessible for each novel class in the new sessions. finetuning the backbone or adjusting the classifier prototypes trained in the prior sessions would inevitably cause a misalignment between the feature and classifier of old classes, which explains the well-known catastrophic forgetting problem. in this paper, we deal with this misalignment dilemma in fscil inspired by the recently discovered phenomenon named neural collapse, which reveals that the last-layer features of the same class will collapse into a vertex, and the vertices of all classes are aligned with the classifier prototypes, which are formed as a simplex equiangular tight frame (etf). it corresponds to an optimal geometric structure for classification due to the maximized fisher discriminant ratio. we propose a neural collapse inspired framework for fscil. a group of classifier prototypes are pre-assigned as a simplex etf for the whole label space, including the base session and all the incremental sessions. during training, the classifier prototypes are not learnable, and we adopt a novel loss function that drives the features into their corresponding prototypes. theoretical analysis shows that our method holds the neural collapse optimality and does not break the feature-classifier alignment in an incremental fashion. 
experiments on the miniimagenet, cub-200, and cifar-100 datasets demonstrate that our proposed framework outperforms the state-of-the-art performance. code address: https://github.com/neuralcollapseapplications/fscil introduction learning incrementally and learning with few-shot data are common in real-world applications, and in many of them, such as robotics, the two demands emerge simultaneously. despite the great success in a closed label space, it is still challenging for a deep learning model to learn new classes continually with only limited samples (lecun et al., 2015). to this end, few-shot class-incremental learning (fscil) was proposed to tackle this problem (tao et al., 2020b). compared with few-shot learning (ravi & larochelle, 2017; vinyals et al., 2016), fscil transfers a trained model into new label spaces incrementally. it also differs from incremental learning (cauwenberghs & poggio, 2000; li & hoiem, 2017; rebuffi et al., 2017) in that only a few (usually 5) samples are accessible for each new class in the incremental sessions. for each session's evaluation, the model is required to infer test images coming from all the classes that have been encountered. the base session of fscil contains a large label space and sufficient training samples, while each incremental session only has a few novel classes and labeled images. this poses the notorious catastrophic forgetting problem (goodfellow et al., 2013) because the novel sessions have no access to the data of the previous sessions. due to its importance and difficulty, fscil has attracted much research attention. the initial solutions to fscil finetune the network on new-session data with distillation schemes to reduce the forgetting of old classes (tao et al., 2020b; dong et al., 2021). however, the few-shot data in novel sessions can easily induce over-fitting.
following studies favor training a backbone network on the base session as a feature extractor (zhang et al., 2021; hersche et al., 2022; akyürek et al., 2022). (figure 1: a popular choice in prior studies is to evolve the old-class prototypes via a delicate design of loss or regularizer to keep them separated from novel-class prototypes, but this causes misalignment. as a comparison, we pre-assign and fix an optimal feature-classifier alignment, and then train a model towards the same neural collapse optimality in each session to avoid target conflict.) for novel sessions, the backbone network is fixed and a group of novel-class prototypes (classifier vectors) is learned incrementally. but as shown in figure 1 (a), the newly added prototypes may lie close to the old-class prototypes, which impedes the ability to discriminate between old-class and novel-class samples in evaluation. as a result, adjusting the classifier prototypes is always necessary for two goals: (i) keep a sufficient distance between the old-class and the novel-class prototypes; (ii) prevent the adjusted old-class prototypes from shifting far away from their original positions. however, the two goals rely on sophisticated loss functions or regularizers (chen & lee, 2021; hersche et al., 2022; akyürek et al., 2022), and are hard to attain simultaneously. besides, as shown in figure 1 (a), there will be a misalignment between the adjusted classifier and the fixed features of old classes. a recent study proposes to reserve feature space for novel classes to circumvent their conflict with old classes (zhou et al., 2022a), but an optimal feature-classifier alignment is hard to guarantee with a learnable classifier (pernici et al., 2021). we point out that it is this misalignment dilemma between feature and classifier that causes the catastrophic forgetting of old classes.
if a backbone network is finetuned in novel sessions, the features of old classes will easily deviate from their classifier prototypes. alternatively, when a backbone network is fixed and a group of new prototypes for novel classes is learned incrementally, the adjustment of old-class prototypes will also induce misalignment with their fixed features. in this paper, we pose and study the following question: "can we look for and pre-assign an optimal feature-classifier alignment such that the model is optimized towards the fixed optimality, and thus avoids conflict among sessions?" motivations and contributions neural collapse is a recently discovered phenomenon that, at the terminal phase of training (after the training error reaches zero), the last-layer features of the same class will collapse into a single vertex, and the vertices of all classes will be aligned with their classifier prototypes and form a simplex equiangular tight frame (etf) (papyan et al., 2020). a simplex etf is a geometric structure of K vectors in R^d, d ≥ K − 1. all vectors have the same ℓ2 norm of 1, and any pair of two different vectors has an inner product of −1/(K−1), which corresponds to the largest possible angle of K equiangular vectors. particularly, when d = K − 1, a simplex etf reduces to a regular simplex such as a triangle or a tetrahedron. it describes an optimal geometric structure for classification due to the minimized within-class variance and the maximized between-class variance (martinez & kak, 2001), which indicates that the fisher discriminant ratio (fisher, 1936; rao, 1948) is maximized. following studies aim to theoretically explain this phenomenon (fang et al., 2021; han et al., 2022). it is known that imperfect training conditions, such as imbalance, prevent neural collapse and cause deteriorated performance (fang et al., 2021; yang et al., 2022b). training in an incremental fashion will also break the neural collapse optimality.
since neural collapse offers us an optimal structure where features and their classifier prototypes are aligned, we can pre-assign such a structure and learn the model towards the optimality. inspired by this insight, in this paper, we initialize a group of classifier prototypes Ŵ_ETF ∈ R^{d×(K0+K′)} as a simplex etf for the whole label space, where K0 is the number of classes in the base session and K′ is the number of classes in all the incremental sessions. as shown in figure 1 (b), it serves as the optimization target and is kept fixed throughout the training of all sessions. we append a projection layer after the backbone network and store the mean latent feature of each class output by the backbone in a memory. in the training of incremental sessions, we only finetune the projection layer using a novel loss function that drives the final features towards their corresponding target prototypes. without bells and whistles, our method achieves superior performance and relieves the catastrophic forgetting problem. the contributions of this paper can be summarized as follows: • to relieve the misalignment dilemma in fscil, we propose to pre-assign an optimal alignment inspired by neural collapse as a fixed target throughout the incremental learning. our model is trained towards the same optimality to avoid optimization conflict among sessions. • we fix the prototypes and apply a novel loss function that only finetunes a projection layer to drive the output features into their corresponding prototypes. theoretical and empirical analyses show that our method better holds the neural collapse optimality. • experiments on miniimagenet, cifar-100, and cub-200 demonstrate that our method is able to surpass the state-of-the-art performance. in particular, our method achieves an average accuracy improvement of more than 3.5% over a recent strong baseline on both miniimagenet and cifar-100. related work few-shot class-incremental learning (fscil).
as a variant of class-incremental learning (cil) (cauwenberghs & poggio, 2000; li & hoiem, 2017; rebuffi et al., 2017), fscil only has a few novel classes and training samples in each incremental session (tao et al., 2020b; dong et al., 2021), which increases the tendency of overfitting on novel classes (snell et al., 2017; sung et al., 2018). both cil and fscil require a delicate balance between adapting a model well to novel classes and forgetting less of the old classes (zhao et al., 2021). a popular choice is to use meta learning (yoon et al., 2020; chi et al., 2022; zhou et al., 2022b). some studies try to make base and incremental sessions compatible via pseudo-features (cheraghian et al., 2021b; zhou et al., 2022a), augmentation (peng et al., 2022), or looking for a flat minimum (shi et al., 2021). for training in incremental sessions, the new prototypes for novel classes should be separable from the old-class prototypes. meanwhile, the adjustment of old-class prototypes should not induce large shifts. current studies widely rely on evolving the prototypes (zhang et al., 2021; zhu et al., 2021a) or sophisticated designs of losses and regularizers (ren et al., 2019; hou et al., 2019; tao et al., 2020a; joseph et al., 2022; lu et al., 2022; hersche et al., 2022; chen & lee, 2021; akyürek et al., 2022; yang et al., 2022a). however, the two goals have an inherent conflict, and a careful effort to balance the loss terms is necessary. in contrast, our method pre-assigns and fixes a feature-classifier alignment as an optimality. a model is trained towards the same target in all sessions. we only use a single loss without any regularizer. neural collapse. neural collapse describes an elegant geometric structure of the last-layer features and classifier in a well-trained model (papyan et al., 2020). it has inspired later studies to theoretically explain this phenomenon.
based on a simplified model that only considers the last-layer optimization, neural collapse is proved to be the global optimality of balanced training with the ce (weinan & wojtowytsch, 2020; graf et al., 2021; lu & steinerberger, 2020; fang et al., 2021; zhu et al., 2021b; ji et al., 2022) and the mse (mixon et al., 2020; poggio & liao, 2020; zhou et al., 2022c; han et al., 2022; tirer & bruna, 2022) loss functions. recent studies try to induce neural collapse in imbalanced training by fixing a classifier (yang et al., 2022b; zhong et al., 2023) or using a novel loss (xie et al., 2023). our method is inspired by yang et al. (2022b), but we apply the classifier in an incremental fashion. galanti et al. (2022) show that neural collapse is still valid when transferring a model to new samples or classes. to the best of our knowledge, we are the first to study fscil from the neural collapse perspective, which offers our method sound interpretability. background few-shot class-incremental learning (fscil) in real-world applications, one often needs to adapt a model to data coming from a new label space with only a few labeled samples. fscil trains a model incrementally on a sequence of training datasets {D^(0), D^(1), . . . , D^(T)}, where D^(t) = {(x_i, y_i)}_{i=1}^{|D^(t)|}, D^(0) is the base session, and T is the number of incremental sessions. the base session D^(0) usually contains a large label space C^(0) and sufficient training images for each class c ∈ C^(0). in each incremental session D^(t), t > 0, there are only a few labeled images, and we have |D^(t)| = pq, where p is the number of classes and q is the number of samples per novel class, known as p-way q-shot. the label space C^(t) has no overlap with any other session, i.e., C^(t) ∩ C^(t′) = ∅, ∀t′ ≠ t. for any incremental session t > 0, we only have access to the data in D^(t), and the training sets of the previous sessions are not available.
for evaluation in session t, the test dataset comes from all the encountered classes in the previous and current sessions¹, i.e., the label space ∪_{i=0}^{t} C^(i). therefore, fscil suffers from severe data scarcity and imbalance. it requires a model to be adaptable to novel classes, and meanwhile keep its ability on old classes. neural collapse neural collapse refers to a phenomenon at the terminal phase of training (after the training error reaches zero) on balanced data (papyan et al., 2020). it reveals a geometric structure formed by the last-layer features and classifier that can be defined as: definition 1 (simplex equiangular tight frame) a simplex equiangular tight frame (etf) refers to a matrix composed of K vectors in R^d that satisfies: E = √(K/(K−1)) · U (I_K − (1/K) 1_K 1_K^T), (1) where E = [e_1, · · · , e_K] ∈ R^{d×K}, U ∈ R^{d×K} allows a rotation and satisfies U^T U = I_K, I_K is the identity matrix, and 1_K is an all-ones vector. all column vectors in E have the same ℓ2 norm, and any pair has an inner product of −1/(K−1), i.e., e_{k1}^T e_{k2} = (K/(K−1)) δ_{k1,k2} − 1/(K−1), (2) where δ_{k1,k2} = 1 when k1 = k2, and 0 otherwise. the neural collapse phenomenon includes the following four properties: (nc1): the last-layer features of the same class will collapse into their within-class mean, i.e., the within-class covariance Σ_W^(k) → 0, where Σ_W^(k) = avg_i{(µ_{k,i} − µ_k)(µ_{k,i} − µ_k)^T}, µ_{k,i} is the feature of sample i in class k, and µ_k is the within-class mean of class-k features; (nc2): the within-class means of all classes, centered by the global mean, will converge to the vertices of a simplex etf defined in definition 1, i.e., µ̂_k, 1 ≤ k ≤ K, satisfy eq.
(2), where µ̂_k = (µ_k − µ_g)/‖µ_k − µ_g‖ and µ_g is the global mean; (nc3): the within-class means centered by the global mean will be aligned with (parallel to) their corresponding classifier weights, which means the classifier weights will converge to the same simplex etf, i.e., µ̂_k = w_k/‖w_k‖, 1 ≤ k ≤ K, where w_k is the classifier weight of class k; (nc4): when (nc1)–(nc3) hold, the model prediction using logits can be simplified to the nearest class center², i.e., arg max_k ⟨µ, w_k⟩ = arg min_k ‖µ − µ_k‖, where ⟨·,·⟩ is the inner product operator and µ is the last-layer feature of a sample for prediction. neural collapse corresponds to an optimal feature-classifier alignment for classification due to the maximized fisher discriminant ratio (between-class variance to within-class variance). ¹different from task-incremental learning, we do not know which session a test sample comes from. ²we omit the bias term in the linear classifier layer for simplicity. method neural collapse tells us an optimal geometric structure for classification problems, where the last-layer features and classifier prototype of the same class are aligned, and those of different classes are maximally separated. however, this structure will be broken under imperfect training conditions, such as imbalanced training data (fang et al., 2021; yang et al., 2022b). as illustrated in figure 1 (a), training in an incremental fashion will also break the neural collapse optimality. inspired by this perspective, what we should do for fscil is to keep the neural collapse inspired feature-classifier alignment as sound as possible. concretely, we adopt a fixed classifier and a novel loss function, as described in section 4.1 and section 4.2, respectively. we introduce our framework for fscil in section 4.3. finally, in section 4.4, we conduct theoretical analysis to show how our method better holds the neural collapse optimality in an incremental fashion.
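the (nc4) property can be checked numerically: when all class centers share the same norm, ranking by inner product with the prototypes coincides with ranking by distance to the class centers. a small numpy sketch (illustrative names, not the paper's code), taking unit-norm centers equal to the prototypes as (nc2)/(nc3) prescribe:

```python
import numpy as np

rng = np.random.default_rng(0)

# unit-norm class centers that coincide with the classifier prototypes
W = rng.standard_normal((5, 8))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # rows: w_k = mu_k, ||w_k|| = 1

def predict_logits(mu):
    """arg max_k <mu, w_k>"""
    return int(np.argmax(W @ mu))

def predict_ncc(mu):
    """arg min_k ||mu - mu_k|| (nearest class center)"""
    return int(np.argmin(np.linalg.norm(W - mu, axis=1)))
```

the agreement follows from ‖µ − µ_k‖² = ‖µ‖² + ‖µ_k‖² − 2⟨µ, µ_k⟩: with ‖µ_k‖ constant in k, minimizing the distance is the same as maximizing the inner product.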
etf classifier assume that the base session contains a label space of K0 classes, each incremental session has p classes, and we have T incremental sessions in total. the whole label space of this fscil problem has K0 + K′ classes, where K′ = Tp, i.e., we need to learn a model that can recognize samples from K0 + K′ classes. we denote a backbone network as f, and then we have µ = f(x, θ_f), where µ ∈ R^d is the output feature of input x, and θ_f is the backbone network parameters. a popular choice in current studies learns f and W^(0) using the base session data, where W^(0) ∈ R^{d×K0} is the classifier prototypes for base classes. in incremental sessions t > 0, f is fixed as a feature extractor and only W^(t) ∈ R^{d×p} for novel classes is learnable. as shown in figure 1 (a), one needs to adjust {W^(0), · · · , W^(t)} via sophisticated losses or regularizers to ensure separation among these prototypes (akyürek et al., 2022; hersche et al., 2022). but this inevitably introduces misalignment between the adjusted prototypes and the fixed features of old classes, which is an underlying reason for the catastrophic forgetting problem (joseph et al., 2022). since neural collapse describes an optimal geometric structure of the last-layer features and classifier, we pre-assign such an optimality by fixing a learnable classifier as the structure instructed by neural collapse. following yang et al. (2022b), we adopt an etf classifier that initializes a classifier as a simplex etf and fixes it during training. the difference lies in that we use it in an incremental fashion. concretely, we randomly initialize classifier prototypes Ŵ_ETF ∈ R^{d×(K0+K′)} by eq. (1) for the whole label space, i.e., the union of classes in all sessions, ∪_{i=0}^{T} C^(i). we have K0 = |C^(0)| and K′ = Σ_{i=1}^{T} |C^(i)| = Tp. then any pair (k1, k2) of classifier prototypes in Ŵ_ETF satisfies: ŵ_{k1}^T ŵ_{k2} = ((K0+K′)/(K0+K′−1)) δ_{k1,k2} − 1/(K0+K′−1), (3) where ŵ_{k1} and ŵ_{k2} are two column vectors in Ŵ_ETF.
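the etf classifier of eq. (3) can be built directly from eq. (1). the following numpy sketch (illustrative, not the released code) constructs the prototypes from a random rotation and checks the pairwise inner products:

```python
import numpy as np

def simplex_etf(d, K, seed=0):
    """K unit-norm prototypes in R^d (d >= K) with pairwise inner product
    -1/(K-1), built as E = sqrt(K/(K-1)) * U (I_K - (1/K) 1 1^T)."""
    rng = np.random.default_rng(seed)
    # reduced QR of a random d x K matrix gives U with U^T U = I_K
    U, _ = np.linalg.qr(rng.standard_normal((d, K)))
    E = np.sqrt(K / (K - 1)) * U @ (np.eye(K) - np.ones((K, K)) / K)
    return E   # columns are the prototypes w_k

W = simplex_etf(d=16, K=10)
G = W.T @ W   # gram matrix: 1 on the diagonal, -1/(K-1) off the diagonal
```

the gram matrix makes eq. (3) visible at a glance: every prototype has unit norm, and every pair of distinct prototypes meets at the maximal equiangular separation −1/(K−1).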
our etf classifier ensures that the prototypes of the whole label space have the maximal pair-wise separation. it serves as a fixed target along the incremental training to avoid conflict among sessions. we only need to learn a model whose output features are aligned with this pre-assigned structure. dot-regression loss the gradient of the cross-entropy (ce) loss with respect to the last-layer feature is composed of a pull term that drives the feature towards the classifier prototype of the same class, and a push term that pushes it away from the prototypes of different classes. as pointed out by yang et al. (2022b), when the classifier prototypes are fixed as an optimality, the pull term is always accurate towards the solution, and we can drop the push gradient, which may be inaccurate. accordingly, we adopt a novel loss named dot-regression (dr) loss that can be formulated as (yang et al., 2022b): L(µ̂_i, Ŵ_ETF) = (1/2)(ŵ_{y_i}^T µ̂_i − 1)², (4) where µ̂_i is the normalized feature, i.e., µ̂_i = µ_i/‖µ_i‖, y_i is the label of input x_i, ŵ_{y_i} is the fixed prototype in Ŵ_ETF for class y_i, and we have ‖ŵ_{y_i}‖ = 1 by eq. (3). the total loss is an average over a batch of inputs x_i. the gradient of eq. (4) with respect to µ̂_i takes the form: (figure 2: an illustration of our nc-fscil. h_i is the intermediate feature from the backbone network f. µ̂_i is the normalized output feature after the projection layer g. Ŵ_ETF is the etf classifier that contains prototypes of the whole label space and serves as a fixed target throughout the incremental training. L denotes the dot-regression loss function. f is frozen in the incremental sessions (1 ≤ t ≤ T). a small memory of old-class features is widely adopted in prior studies such as cheraghian et al. (2021a), chen & lee (2021), akyürek et al. (2022), and hersche et al. (2022).) ∂L/∂µ̂_i = −(1 − cos∠(µ̂_i, ŵ_{y_i})) ŵ_{y_i}.
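the dr loss of eq. (4) and its pull-only gradient can be verified numerically. a minimal numpy sketch (illustrative, not the paper's code), assuming unit-norm µ̂ and ŵ so that ŵ^T µ̂ equals the cosine of their angle:

```python
import numpy as np

def dr_loss(mu_hat, w):
    """eq. (4): L = 1/2 (w^T mu_hat - 1)^2, with ||w|| = 1."""
    return 0.5 * (w @ mu_hat - 1.0) ** 2

def dr_grad(mu_hat, w):
    """pull-only gradient -(1 - cos<(mu_hat, w)) w for unit vectors,
    since w^T mu_hat = cos<(mu_hat, w)."""
    return -(1.0 - w @ mu_hat) * w
```

comparing the closed form against central finite differences of `dr_loss` confirms the gradient: the loss only pulls the feature towards its own prototype, with a force that vanishes as the angle closes.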
it is shown that the gradient pulls the feature µ̂_i towards the direction of ŵ_{y_i}, which is a pre-assigned target prototype. finally, the converged features will be aligned with Ŵ_ETF, and thus the geometric structure instructed by neural collapse is attained. the theoretical advantage of the dr loss has been proved in yang et al. (2022b). in experiments, we will compare the dr loss with the ce loss to show its effectiveness in fscil. nc-fscil based on the etf classifier and the dr loss, we now introduce our neural collapse inspired framework for few-shot class-incremental learning (nc-fscil). as shown in figure 2, our model is composed of two components, a backbone network f and a projection layer g. the backbone network f takes the training data x_i as input and outputs an intermediate feature h_i. the projection layer g can be a linear transformation or an mlp block, following hersche et al. (2022); peng et al. (2022). it projects the intermediate feature h_i into µ_i. finally, we perform an ℓ2 normalization on µ_i to get the output feature µ̂_i, i.e., µ̂_i = µ_i/‖µ_i‖, µ_i = g(h_i, θ_g), h_i = f(x_i, θ_f), (5) where θ_f and θ_g denote the parameters of the backbone network and the projection layer, respectively. we use the normalized output feature µ̂_i to compute the error signal by eq. (4). in the base session t = 0, we jointly train both f and g using the base session data. the empirical risk to minimize in the base session can be formulated as: min_{θ_f, θ_g} (1/|D^(0)|) Σ_{(x_i,y_i)∈D^(0)} L(µ̂_i, Ŵ_ETF), (6) where Ŵ_ETF is the pre-assigned etf classifier introduced in section 4.1, L is the dr loss introduced in section 4.2, and µ̂_i is a function of f and g as shown in eq. (5). in each incremental session 1 ≤ t ≤ T, we fix the backbone network f as a feature extractor and only finetune the projection layer g.
as a widely adopted practice in fscil studies, a small memory of samples or features of old classes can be retained to relieve the overfitting on novel classes (cheraghian et al., 2021a; chen & lee, 2021; akyürek et al., 2022; hersche et al., 2022). following hersche et al. (2022), we only keep a memory M^(t) of the mean intermediate feature h_c for each old class c. concretely, we have M^(t) = {h_c | c ∈ ∪_{j=0}^{t−1} C^(j)}, h_c = avg_i{f(x_i, θ_f) | y_i = c}, (7) where f has been fixed after the base session. then we use D^(t) as the input of f, and M^(t) as the input of g, to finetune the projection layer g. the empirical risk to minimize in incremental sessions can be formulated as: min_{θ_g} (1/(|D^(t)| + |M^(t)|)) [ Σ_{(x_i,y_i)∈D^(t)} L(µ̂_i, Ŵ_ETF) + Σ_{(h_c,y_c)∈M^(t)} L(µ̂_c, Ŵ_ETF) ], (8) where µ̂_i and µ̂_c are the output features of x_i and h_c, respectively, |D^(t)| is the number of training samples in session t, and we have |M^(t)| = Σ_{j=0}^{t−1} |C^(j)|. thanks to our pre-assigned alignment, we do not rely on any regularizer in our training. in the evaluation of session t, we predict an input x based on the inner product between its output feature µ̂ and the etf classifier prototypes: arg max_k ⟨µ̂, ŵ_k⟩, ∀ 1 ≤ k ≤ K0 + K′. theoretical supports we perform our theoretical analysis based on a simplified model that drops the backbone network and only keeps the last-layer features and classifier prototypes as independent optimization variables. this simplification has been widely adopted in prior studies to facilitate analysis (graf et al., 2021; fang et al., 2021; zhu et al., 2021b). we investigate the neural collapse optimality of an incremental problem of T sessions with our etf classifier.
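the feature replay memory of eq. (7) stores one mean backbone feature per encountered class. a minimal numpy sketch (hypothetical helper name, not the released code):

```python
import numpy as np

def class_mean_memory(features, labels):
    """h_c = avg_i { f(x_i) | y_i = c } for every class c seen so far,
    keyed by class label; features is an (n, d) array of backbone outputs."""
    return {int(c): features[labels == c].mean(axis=0)
            for c in np.unique(labels)}
```

this is the design choice that keeps the memory cost proportional to the number of classes rather than the number of samples: only one d-dimensional vector per old class is replayed through the projection layer in eq. (8).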
concretely, we consider the following problem: min_{M^(t)} (1/N^(t)) Σ_{k=1}^{K^(t)} Σ_{i=1}^{n_k} L(m^(t)_{k,i}, Ŵ_ETF), (9) where, for all 1 ≤ k ≤ K^(t) and 1 ≤ i ≤ n_k, m^(t)_{k,i} ∈ R^d denotes a feature variable that belongs to the i-th sample of class k in session t, n_k is the number of samples in class k, K^(t) is the number of classes in session t, and N^(t) is the number of samples in session t, i.e., N^(t) = Σ_{k=1}^{K^(t)} n_k. M^(t) ∈ R^{d×N^(t)} denotes the collection of the m^(t)_{k,i}. Ŵ_ETF ∈ R^{d×K} refers to the etf classifier for the whole label space as introduced in section 4.1, and we have K = Σ_{t=0}^{T} K^(t). L can be either the cross-entropy or the dot-regression loss function. theorem 1 let M̂^(t) denote the global minimizer of eq. (9) obtained by optimizing the model incrementally from t = 0, and let M̂ = [M̂^(0), · · · , M̂^(T)] ∈ R^{d×Σ_{t=0}^{T} N^(t)}. when L in eq. (9) is the ce or dr loss, for any column vector m̂_{k,i} in M̂ whose class label is k, we have: (m̂_{k,i}/‖m̂_{k,i}‖)^T ŵ_{k′} = δ_{k,k′} − (1 − δ_{k,k′})/(K − 1), (10) where K = Σ_{t=0}^{T} K^(t) denotes the total number of classes of the whole label space, δ_{k,k′} = 1 when k = k′ and 0 otherwise, and ŵ_{k′} is the prototype of class k′ in Ŵ_ETF. the proof of theorem 1 can be found in appendix a. eq. (10) indicates that the global minimizer M̂ of eq. (9) satisfies the neural collapse condition, i.e., features of the same class collapse into a single vertex, and the vertices of all classes are aligned with Ŵ_ETF as a simplex etf. it shows that the feature space is equally separated by the prototypes of all classes. more importantly, in problem eq. (9), the numbers of classes K^(t) among the T + 1 sessions and the numbers of samples n_k among the K classes can be imbalanced, which corresponds to the challenging demand of fscil. experiments in this section, we test our method on fscil benchmark datasets including miniimagenet (russakovsky et al., 2015), cifar-100 (krizhevsky et al., 2009), and cub-200 (wah et al., 2011), and compare it with state-of-the-art methods.
we also perform ablation studies to validate the effects of the etf classifier and the dr loss. finally, we show the feature-classifier structure achieved by our method. implementation details please refer to appendix b for our implementation details. table 1: performance of fscil in each session on miniimagenet and comparison with other studies. the top rows list class-incremental learning and few-shot learning results implemented by tao et al. (2020b); zhang et al. (2021) in the fscil setting. “average acc.” is the average accuracy of all sessions. “final improv.” calculates the improvement of our method in the last session. * indicates that the method saves the within-class feature mean of each class for training or inference. [table 1 rows: icarl (rebuffi et al., 2017), ncm (hou et al., 2019), d-cosine (vinyals et al., 2016), *topic (tao et al., 2020b), *idlvq (chen & lee, 2021), self-promoted (zhu et al., 2021a), cec (zhang et al., 2021), *limit (zhou et al., 2022b), *regularizer (akyürek et al., 2022), metafscil (chi et al., 2022), *c-fscil (hersche et al., 2022), data-free replay (liu et al., 2022), *alice (peng et al., 2022), *nc-fscil (ours), and improvement over alice; the per-session accuracy values did not survive extraction.] table 2: performance of fscil in each session on cifar-100 and comparison with other studies. the top rows list class-incremental learning and few-shot learning results implemented by tao et al. (2020b); zhang et al. (2021) in the fscil setting. “average acc.” is the average accuracy of all sessions. “final improv.” calculates the improvement of our method in the last session.
[table 2 rows: icarl (rebuffi et al., 2017), ncm (hou et al., 2019), d-cosine (vinyals et al., 2016), *topic (tao et al., 2020b), self-promoted (zhu et al., 2021a), cec (zhang et al., 2021), dsn (yang et al., 2022a), *limit (zhou et al., 2022b), metafscil (chi et al., 2022), *c-fscil (hersche et al., 2022), data-free replay (liu et al., 2022), *alice (peng et al., 2022), *nc-fscil (ours), and improvement over alice; the per-session accuracy values did not survive extraction.] performance on benchmarks our experiment results on miniimagenet, cifar-100, and cub-200 are shown in table 1, table 2, and table 4 (appendix c), respectively. we see that our method achieves the best performance in all sessions on both miniimagenet and cifar-100 compared with previous studies. alice (peng et al., 2022) is a recent study that achieves strong performance on fscil. compared with this challenging baseline, we have an improvement of 2.61% in the last session on miniimagenet, and 2.01% on cifar-100. we achieve an average accuracy improvement of more than 3.5% on both miniimagenet and cifar-100. although we do not surpass alice in the last session on cub-200, we still have the best average accuracy among all methods. as shown in the last rows of table 1 and table 2, the improvement of our method lasts and even becomes larger in the first several sessions. it indicates that our method is able to hold its superiority and relieve the forgetting of old sessions. ablation studies we consider three models to validate the effects of the etf classifier and the dr loss. all three models are based on the same framework introduced in section 4.3, including the backbone network, the projection layer, and the memory module. the first model uses a learnable classifier and the ce table 3: ablation studies on three datasets to investigate the effects of the etf classifier and the dr loss. “learnable+ce” uses a learnable classifier and the ce loss; “etf+ce” adopts our etf classifier with the ce loss; “etf+dr” uses both the etf classifier and the dr loss.
“final” refers to the accuracy of the last session; “average” is the average accuracy of all sessions; “pd” denotes the performance drop, i.e., the accuracy difference between the first and the last sessions. [table 3 rows: learnable+ce, etf+ce, and etf+dr, with final↑, average↑, and pd↓ columns per dataset; the values did not survive extraction.] (figure 3: average cosine similarity between features and classifier prototypes of different classes, i.e., avg_{k≠k′}{cos∠(m_k − m_g, w_{k′})}, where m_k is the within-class mean of class-k features, m_g denotes the global mean, and w_{k′} is the classifier prototype of class k′. statistics are performed among classes in each session (panels a and c) and among all classes encountered by the current session (panels b and d), on the train set (a and b) and the test set (c and d), for models trained after each session on miniimagenet. panels: (a) train (each), (b) train (accumulate), (c) test (each), (d) test (accumulate).) loss, which is the most adopted practice. the second model only replaces the classifier with our etf classifier and also uses the ce loss. the third model corresponds to our method using both the etf classifier and the dr loss. as shown in table 3, when a fixed etf classifier is used, the final session accuracies are significantly better, and the performance drops are much mitigated. adopting the dr loss further moderately improves the performance. it indicates that the success of our method is largely attributed to the etf classifier and the dr loss, as they pre-assign a neural collapse inspired alignment and drive a model towards the fixed optimality, respectively. feature-classifier structure we check the feature-classifier alignment instructed by neural collapse using our method, with “learnable+ce” as a comparison.
as shown in figure 3, the average cosine similarities between features and classifier prototypes of different classes, i.e., avg_{k≠k′}{cos∠(mk − mg, wk′)}, of our method are consistently lower than those of the baseline. most values of our method are negative and close to 0, which is in line with the guidance from neural collapse as derived in eq. (10). particularly in figure 3b and figure 3d, the average cosine similarities between mk − mg and wk′ (k ≠ k′) among all encountered classes increase fast with sessions for the baseline method, while ours stay relatively flat. this indicates that the baseline method reduces the feature-classifier margin of different classes as training proceeds incrementally, while our method enjoys a stable alignment. as shown in figure 4 and figure 5, we also calculate the average cosine similarities between the feature and classifier of the same class, i.e., avg_k{cos∠(mk − mg, wk)}, and the trace ratio of within-class covariance to between-class covariance, tr(σw)/tr(σb). these results together support that our method better holds the feature-classifier alignment and relieves the forgetting problem. conclusion in this paper, we propose to fix a learnable classifier as a geometric structure instructed by neural collapse for fscil. it pre-assigns an optimal feature-classifier alignment as a fixed target throughout incremental training, which avoids optimization conflict among sessions. accordingly, a novel loss function that drives features towards this pre-assigned optimality is adopted without any regularizer. both theoretical and empirical results support that our method is able to hold the alignment in an incremental fashion, and thus relieve the forgetting problem. in experiments of fscil, we achieve and even surpass the state-of-the-art performances on three datasets. acknowledgments z. lin was supported by national key r&d program of china (2022zd0160302), the major key project of pcl, china (no.
pcl2021a12), the nsf china (no. 62276004), qualcomm, and project 2020bd006 supported by pku-baidu fund. statements ethics statement. our study does not involve any of the potential issues such as human subjects, public health, privacy, fairness, security, etc. all authors of this paper confirm that they adhere to the iclr code of ethics. reproducibility statement. for our theoretical result theorem 1, we offer the proof in appendix a. all datasets used in this paper are public and have been cited. please refer to appendix b for the dataset descriptions and the implementation details of our experiments. our source code is released at https://github.com/neuralcollapseapplications/fscil. references afra feyza akyürek, ekin akyürek, derry wijaya, and jacob andreas. subspace regularizers for few-shot class incremental learning. in iclr, 2022. francisco m castro, manuel j marín-jiménez, nicolás guil, cordelia schmid, and karteek alahari. end-to-end incremental learning. in eccv, pp. 233–248, 2018. gert cauwenberghs and tomaso poggio. incremental and decremental support vector machine learning. in neurips, volume 13, 2000. kuilin chen and chi-guhn lee. incremental few-shot learning via vector quantization in deep embedded space. in iclr, 2021. ali cheraghian, shafin rahman, pengfei fang, soumava kumar roy, lars petersson, and mehrtash harandi. semantic-aware knowledge distillation for few-shot class-incremental learning. in cvpr, pp. 2534–2543, 2021a. ali cheraghian, shafin rahman, sameera ramasinghe, pengfei fang, christian simon, lars petersson, and mehrtash harandi. synthesized feature based few-shot class-incremental learning on a mixture of subspaces. in iccv, pp. 8661–8670, 2021b. zhixiang chi, li gu, huan liu, yang wang, yuanhao yu, and jin tang. metafscil: a meta-learning approach for few-shot class incremental learning. in cvpr, pp. 14166–14175, 2022. songlin dong, xiaopeng hong, xiaoyu tao, xinyuan chang, xing wei, and yihong gong.
few-shot class-incremental learning via relation knowledge distillation. in aaai, volume 35, pp. 1255–1263, 2021. cong fang, hangfeng he, qi long, and weijie j su. exploring deep neural networks via layer-peeled model: minority collapse in imbalanced training. proceedings of the national academy of sciences, 118(43), 2021. ronald a fisher. the use of multiple measurements in taxonomic problems. annals of eugenics, 7. tomer galanti, andrás györgy, and marcus hutter. on the role of neural collapse in transfer learning. ian j goodfellow, mehdi mirza, da xiao, aaron courville, and yoshua bengio. an empirical investigation of catastrophic forgetting in gradient-based neural networks. arxiv preprint arxiv:1312.6211, 2013. florian graf, christoph hofer, marc niethammer, and roland kwitt. dissecting supervised contrastive learning. in icml, pp. 3821–3830. pmlr, 2021. xy han, vardan papyan, and david l donoho. neural collapse under mse loss: proximity to and dynamics on the central path. in iclr, 2022. kaiming he, xiangyu zhang, shaoqing ren, and jian sun. deep residual learning for image recognition. michael hersche, geethan karunaratne, giovanni cherubini, luca benini, abu sebastian, and abbas rahimi. constrained few-shot class-incremental learning. in cvpr, pp. 9057–9067, 2022. saihui hou, xinyu pan, chen change loy, zilei wang, and dahua lin. learning a unified classifier incrementally via rebalancing. in cvpr, pp. 831–839, 2019. wenlong ji, yiping lu, yiliang zhang, zhun deng, and weijie j su. an unconstrained layer-peeled perspective on neural collapse. in iclr, 2022. kj joseph, salman khan, fahad shahbaz khan, rao muhammad anwer, and vineeth n balasubramanian. energy-based latent aligner for incremental learning. in cvpr, pp. 7452–7461, 2022. alex krizhevsky, geoffrey hinton, et al. learning multiple layers of features from tiny images. yann lecun, yoshua bengio, and geoffrey hinton. deep learning. nature, 521(7553):436–444. zhizhong li and derek hoiem.
learning without forgetting. ieee transactions on pattern analysis and machine intelligence. bin liu, yue cao, yutong lin, qi li, zheng zhang, mingsheng long, and han hu. negative margin matters: understanding margin in few-shot classification. in eccv, pp. 438–455, 2020. huan liu, li gu, zhixiang chi, yang wang, yuanhao yu, jun chen, and jin tang. few-shot class-incremental learning via entropy-regularized data-free replay. in eccv, 2022. bin lu, xiaoying gan, lina yang, weinan zhang, luoyi fu, and xinbing wang. geometer: graph few-shot class-incremental learning via prototype representation. in kdd, 2022. jianfeng lu and stefan steinerberger. neural collapse with cross-entropy loss. arxiv preprint. aleix m martinez and avinash c kak. pca versus lda. ieee transactions on pattern analysis and machine intelligence. dustin g mixon, hans parshall, and jianzong pi. neural collapse with unconstrained features. arxiv preprint. vardan papyan, xy han, and david l donoho. prevalence of neural collapse during the terminal phase of deep learning training. proceedings of the national academy of sciences, 117(40):24652–24663, 2020. can peng, kun zhao, tianren wang, meng li, and brian c lovell. few-shot class-incremental learning from an open-set perspective. in eccv, 2022. federico pernici, matteo bruni, claudio baecchi, francesco turchini, and alberto del bimbo. class-incremental learning with pre-allocated fixed classifiers. in icpr, pp. 6259–6266, 2021. tomaso poggio and qianli liao. explicit regularization and implicit bias in deep network classifiers trained with the square loss. arxiv preprint arxiv:2101.00072, 2020.
learning fair graph representations via automated data augmentations hongyi ling, zhimeng jiang, youzhi luo, shuiwang ji∗, na zou∗ texas a&m university college station, tx 77843, usa {hongyiling,zhimengj,yzluo,sji,nzou1}@tamu.edu abstract we consider fair graph representation learning via data augmentations. while this direction has been explored previously, existing methods invariably rely on certain assumptions on the properties of fair graph data in order to design fixed strategies on data augmentations. nevertheless, the exact properties of fair graph data may vary significantly in different scenarios. hence, heuristically designed augmentations may not always generate fair graph data in different application scenarios. in this work, we propose a method, known as graphair, to learn fair representations based on automated graph data augmentations. such fairness-aware augmentations are themselves learned from data. our graphair is designed to automatically discover fairness-aware augmentations from input graphs in order to circumvent sensitive information while preserving other useful information. experimental results demonstrate that our graphair consistently outperforms many baselines on multiple node classification datasets in terms of fairness-accuracy trade-off performance. in addition, results indicate that graphair can automatically learn to generate fair graph data without prior knowledge on fairness-relevant graph properties. our code is publicly available as part of the dig package (https://github.com/divelab/dig). introduction recently, graph neural networks (gnns) have attracted increasing attention due to their remarkable performance (gao et al., 2021; gao & ji, 2019; liu et al., 2021a;b; yuan et al., 2021) in many applications, such as knowledge graphs (hamaguchi et al., 2017), molecular property prediction (liu et al., 2022; 2020; han et al., 2022a) and social media mining (hamilton et al., 2017).
despite recent advances in graph representation learning (grover & leskovec, 2016; kipf & welling, 2017; 2016; gilmer et al., 2017; han et al., 2022b), these gnn models may inherit or even amplify bias from training data (dai & wang, 2021), thereby introducing prediction discrimination against certain groups defined by sensitive attributes, such as race and gender. such discriminative behavior may lead to serious ethical and societal concerns, thus limiting the applications of gnns to many real-world high-stake tasks, such as criminal justice (suresh & guttag, 2019), job hunting (mehrabi et al., 2021), healthcare (rajkomar et al., 2018), and credit scoring (feldman et al., 2015; petrasic et al., 2017). hence, it is highly desirable to learn fair graph representations without discriminatory biases (dong et al., 2022; zhang et al., 2022; kang et al., 2022; dai et al., 2022). a primary issue (mehrabi et al., 2021; olteanu et al., 2019) in fairness is that training data usually contain biases, which is the source of discriminative behavior of models. thereby, many existing works (agarwal et al., 2021; kose & shen, 2022; spinelli et al., 2021) propose to learn fair graph representations by modifying training data with fairness-aware graph data augmentations. these methods propose some graph data properties that are beneficial to fair representation learning, and then adopt heuristic graph data augmentation operations, including node feature masking and edge perturbation, to refine graph data. however, the proposed graph properties (spinelli et al., 2021; kose & shen, 2022) may not be appropriate for all graph datasets due to the diverse nature of graph data. for example, balanced inter/intra edges (kose & shen, 2022) may destroy topology structures of social networks, leading to the loss of important information. even if the proposed graph properties are effective, the best graph properties may vary significantly in different scenarios.
∗equal senior contributions.
hence, it is highly desirable to automatically discover dataset-specific fairness-aware augmentation strategies among different datasets with a single framework. to this end, a natural question is raised: can we achieve fair graph representation learning via automated data augmentations? in this work, we attempt to address this question via proposing graphair, a novel automated graph augmentation method for fair graph representation learning. a primary challenge is how to achieve fairness and informativeness simultaneously in the augmented data. as we intentionally avoid assuming prior knowledge on what types of graphs are considered fair, we propose to employ an adversary model to predict sensitive attributes from augmented graph data. a fair augmented graph should prevent the adversary model from identifying the sensitive attributes. in addition, we propose to retain useful information from original graphs by using contrastive learning to maximize the agreement between original and augmented graphs. experimental results demonstrate that graphair consistently outperforms many baselines on multiple node classification datasets in terms of fairness-accuracy trade-off performance. background and related work fair graph representation learning in this work, we study the problem of fair graph representation learning. let g = {a, x, s} be a graph with n nodes. here, a ∈ {0, 1}n×n is the adjacency matrix, and aij = 1 if and only if there exists an edge between nodes i and j. x = [x1, · · · , xn]t ∈ rn×d is the node feature matrix, where each xi ∈ rd is the d-dimensional feature vector of node i. s ∈ {0, 1}n is the vector containing sensitive attributes (e.g., gender or race) of nodes that should not be captured by machine learning models to make decisions. 
our target is to learn a fair graph representation model f : (a, x) → h ∈ rn×d′ , and the learned representation h = f (a, x) is fed into a classification model θ : h → ˆy ∈ {0, 1}n to predict the binary label of nodes in g. particularly, for an ideal fair model f , the output representation h should result in a prediction ˆy that satisfies the fairness criteria. in general, there exist several different definitions of fairness criteria, including group fairness (dwork et al., 2012; rahmattalabi et al., 2019; jiang et al., 2022b), individual fairness (kang et al., 2020; dong et al., 2021; petersen et al., 2021), and counterfactual fairness (agarwal et al., 2021; ma et al., 2022). in this work, we focus on group fairness, which is defined as p( ˆyi|si = 0) = p( ˆyi|si = 1), where ˆyi is the prediction for node i, and si is the sensitive attribute of node i. note that even though the sets of node attributes or features in x and s are disjoint, correlations may exist between (a, x) and s. hence, even if s is not explicitly exposed to f , f may implicitly infer parts of s from (a, x) and produce biased representation h, thereby making the prediction ˆy unfair. how to prevent models from intentionally fitting these correlations is the central problem to be solved in achieving fair graph representation learning. currently, several studies have proposed different strategies to achieve fair graph representation learning. an early study (rahman et al., 2019) proposes to train the model through fair random walks. some recent studies (li et al., 2020; laclau et al., 2021) propose to reduce prediction discrimination through optimizing adjacency matrices, which can improve fairness for link prediction tasks. in addition, adversarial learning is another popular strategy to achieve fairness on node representation learning tasks. 
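the group fairness criterion p(ŷi|si = 0) = p(ŷi|si = 1) defined above can be estimated from binary predictions with a plug-in statistic; a minimal sketch (the function name and the toy inputs are our illustrative assumptions, not part of graphair):

```python
import numpy as np

def demographic_parity_gap(y_hat, s):
    """Plug-in estimate of |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|.

    Zero means the group-fairness condition p(y_hat_i | s_i = 0) =
    p(y_hat_i | s_i = 1) holds empirically for binary predictions."""
    y_hat, s = np.asarray(y_hat, dtype=float), np.asarray(s)
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

# perfectly balanced positive rates across the two groups -> gap 0
print(demographic_parity_gap([1, 0, 1, 0], [0, 0, 1, 1]))  # 0.0
# predictions fully aligned with the sensitive attribute -> gap 1
print(demographic_parity_gap([1, 1, 0, 0], [0, 0, 1, 1]))  # 1.0
```

in practice this gap is reported alongside accuracy to quantify the fairness-accuracy trade-off mentioned earlier.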
many studies (fisher et al., 2020; dai & wang, 2021; bose & hamilton, 2019) adopt adversarial learning to filter out sensitive attribute information from the learned node representations. overall, most existing methods learn fair representations by altering the model training strategy with fairness regularization. however, a primary issue in fairness learning lies in the fact that training data usually possess bias. hence, an alternative and highly desirable solution is to modify data through data augmentations, thus enabling models to learn fair representations easily. in this work, we design a learnable graph augmentation method to reduce bias in graph data, leading to more effective fairness-aware representation learning on graphs. graph data augmentations inspired by the success of data augmentations in computer vision and natural language processing, graph data augmentation (zhao et al., 2022) attracts increasing attention in academia. most studies (you et al., 2020; zhu et al., 2020; wang et al., 2021; veličković et al., 2019; you et al., 2021; rong et al., 2020) are based on uniformly random modifications of graph adjacency matrices or node features, such as masking node features, dropping edges, or cropping subgraphs. in addition, recent studies (luo et al., 2023; zheng et al., 2020; luo et al., 2021; zhao et al., 2021; chen et al., 2020) design learnable data augmentation methods to enhance task-relevant information in augmented graphs. note that none of the above methods are fairness-aware and only a few studies have investigated fairness-aware graph augmentations. spinelli et al. (2021) argue that the tendency of nodes with the same sensitive attribute to connect leads to prediction discrimination. thereby, they propose a biased edge drop algorithm to reduce such tendency in graphs, resulting in fairness improvement on prediction tasks. agarwal et al.
(2021) design a graph data augmentation method in the contrastive learning framework by modifying sensitive attributes. kose & shen (2022) study correlations between sensitive attributes and learned node representations, and propose several graph augmentations to minimize an upper bound of the correlations to achieve fairness. however, these fairness-aware augmentation methods are all based on some strong assumptions or definitions about the properties that fair graph data should have. such assumptions or definitions may not hold in different scenarios, so in practice, empirical comparisons are needed to find out the best choice. in addition, these heuristic augmentation operations may accidentally remove most of the useful information from the graph. for instance, both edge drop algorithms proposed by kose & shen (2022) and spinelli et al. (2021) may drop most of the edges and destroy the graph structure in some cases. hence, in practice, these methods do not consistently achieve good performance on all datasets. fairness via automated data augmentations while previous fairness-aware graph data augmentations all rely on manually defined and fixed fairness-relevant augmentation strategies, we explore a more adaptive and effective method to discover fairness-aware graph augmentations by automated augmentation models. note that though automated graph augmentations have been applied to some graph representation tasks (luo et al., 2023; 2021; zhao et al., 2021), they have not been studied in fair graph representation learning. in this work, we propose graphair, an automated graph augmentation method for fair graph representation learning. graphair uses an automated augmentation model to generate new graphs with fair topology structures and node features while preserving the most informative components from input graphs.
the augmentation model is trained end-to-end with multiple optimization objectives in order to circumvent sensitive information while retaining other useful information simultaneously. to the best of our knowledge, graphair is the first automated graph augmentation method addressing group fairness with a theoretical guarantee of fairness and informativeness. automated graph augmentations we first present the details of the augmentation process. given an input graph g = {a, x, s}, we use the automated augmentation model g to generate a new graph g′ = {a′, x′, s} as
ta, tx = g(a, x), a′ = ta(a), x′ = tx(x). (2)
here, ta is the edge perturbation transformation, which maps a to the new adjacency matrix a′ by removing existing edges and adding new edges. tx is the node feature masking transformation, which produces the new node feature matrix x′ by setting some values of x to zero. ta and tx contain the exact transformations for each edge and node feature in g. in other words, the augmentation model g decides whether there is an edge connecting any two nodes in g and whether each value in x should be set to zero or not. in the augmentation model g, a gnn-based augmentation encoder genc : (a, x) → z ∈ rn×dr is first used to extract dr-dimensional embeddings z for nodes in g. we adopt the graph convolutional network (gcn) (kipf & welling, 2017) as the gnn encoder here. afterward, the exact transformations for each edge and node feature are performed as described below. edge perturbation. given the embedding z, a multi-layer perceptron (mlp) model mlpa first computes the hidden embeddings za ∈ rn×dr′ from z, then an inner-product decoder computes the edge probability matrix ã′ ∈ rn×n, where the value ã′ij at the i-th row, j-th column of ã′ denotes the predicted probability that an edge exists between the nodes i and j in g′.
finally, the output adjacency matrix a′ is obtained by sampling from the bernoulli distribution parameterized with the probabilities in ã′. formally, this process can be described as
za = mlpa(z), ã′ = σ(za zaᵀ), a′ij ∼ bernoulli(ã′ij) for i, j = 1, · · · , n, (3)
where σ(·) is the sigmoid function. node feature masking. given the embedding z, an mlp model mlpx first computes the mask probability matrix m̃ ∈ rn×d, where the value m̃ij at the i-th row, j-th column of m̃ denotes the predicted probability that the j-th feature of node i is not set to zero. afterward, the mask matrix m is sampled from the bernoulli distribution parameterized with the probabilities in m̃, and the new feature matrix x′ is obtained by multiplying x by m. this process can be formally described as
zx = mlpx(z), m̃ = σ(zx), mij ∼ bernoulli(m̃ij) for i, j = 1, · · · , n, x′ = m ⊙ x, (4)
where ⊙ is the hadamard product, and σ(·) is the sigmoid function. note that the bernoulli sampling for the adjacency matrix a′ and the mask matrix m is non-differentiable. to make the augmentation model g end-to-end trainable, we adopt the commonly-used trick to approximate the bernoulli sampling in eq. (3) and (4). specifically, we relax the bernoulli sampling procedure by the gumbel-softmax reparameterization trick (jang et al., 2017; maddison et al., 2017; 2014). given a probability p̃ computed from a parameterized model φ, the relaxed bernoulli sampling calculates a continuous approximation p̂ = σ((log p̃ − log(1 − p̃) + g)/τ), where τ is a temperature hyperparameter and g ∼ gumbel(0, 1) is a random variable sampled from the standard gumbel distribution. for the forward propagation, the discrete value p = ⌊p̂ + 1/2⌋ is used as the result sampled from the bernoulli distribution with the probability p̃. for the backward propagation, a straight-through gradient estimator (bengio et al., 2013) is used, which approximates the gradient as ∇φ p ≈ ∇φ p̂.
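the relaxed bernoulli sampling and rounding described above can be sketched in numpy as follows; this is a forward-pass illustration only (the sigmoid form of p̂ and the variable names are our reading of the text, not the authors' code), with the straight-through backward rule noted in a comment.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def relaxed_bernoulli(p_tilde, tau=0.5, eps=1e-10):
    """Gumbel relaxation of Bernoulli(p_tilde): returns (p_hat, p_hard).

    p_hat is the continuous approximation; p_hard = floor(p_hat + 1/2) is
    the discrete forward value. In training, a straight-through estimator
    uses p_hard in the forward pass and the gradient of p_hat backward."""
    u = rng.random(p_tilde.shape)
    g = -np.log(-np.log(u + eps) + eps)              # g ~ Gumbel(0, 1)
    logit = np.log(p_tilde + eps) - np.log(1.0 - p_tilde + eps)
    p_hat = sigmoid((logit + g) / tau)               # continuous approximation
    p_hard = np.floor(p_hat + 0.5)                   # p = floor(p_hat + 1/2)
    return p_hat, p_hard

# edge probabilities from the inner-product decoder: A~' = sigma(Z_A Z_A^T)
z_a = rng.standard_normal((6, 4))                    # stands in for MLP_A(Z)
a_tilde = sigmoid(z_a @ z_a.T)
p_hat, a_new = relaxed_bernoulli(a_tilde)
# a_new is a binary adjacency sample; p_hat carries the gradient signal.
```

with an autograd framework, the straight-through trick is typically written as `p_hard.detach() + p_hat - p_hat.detach()`, so the forward value is discrete while gradients flow through p_hat.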
adversarial training as our objective is to generate fair augmentations to reduce bias, the ideal augmentation model g should satisfy the fairness property. in other words, it should assign low probabilities to graph elements (edges, node features) that cause prediction bias. however, we cannot achieve it via supervised training because there is no ground truth indicating which graph elements lead to prediction bias and should be modified. to tackle this issue, we propose to use an adversarial learning based method to implicitly optimize the model to learn to mitigate bias in the input graph. specifically, we use an adversary model k : (a′, x′) → ŝ ∈ [0, 1]n to predict the sensitive attribute s from the new adjacency matrix a′ and new node feature matrix x′ generated by the augmentation model g. the adversary model k and the augmentation model g are jointly trained in an adversarial fashion. in this process, k is optimized to maximize the prediction accuracy of the sensitive attribute, while g is optimized to mitigate bias in a′ and x′ so that it is difficult for the adversary model k to identify sensitive attribute information from a′ and x′. formally, this adversarial training process can be described as the following optimization problem:
min_g max_k ladv = min_g max_k Σ_{i=1}^n [si log ŝi + (1 − si) log(1 − ŝi)],
where ŝi is the prediction of the sensitive attribute of node i by the adversary model k.¹ contrastive training we note that only using the adversarial training may cause the augmentation model g to collapse into trivial solutions. for instance, g may learn to always generate a complete graph and set all node features to zero, which contains no bias, since all nodes are equivalent.
¹here we use the negative binary cross-entropy loss, so the adversary model k aims to maximize ladv.
[figure 1: an overview of our framework.]
such augmented graphs are not informative at all because they lose all the information from the input graphs. to make the augmentation model g satisfy the informativeness property, i.e., preserving the most informative components of the input graph in the generated graphs, we additionally use a contrastive learning objective during training. given the input graph g = {a, x, s} and the augmented graph g′ = {a′, x′, s}, we first use a gnn-based representation encoder f to extract node representations h = f(a, x) and h′ = f(a′, x′) from g and g′, respectively. afterward, we optimize the augmentation model g and the representation encoder f jointly by minimizing a contrastive objective, which maximizes the similarity between the representations of the same node in h and h′. specifically, let hi and h′i denote the representation of node i in h and h′, respectively. for node i, we consider (hi, h′i) as a positive pair, and (hi, hj) and (hi, h′j) for any node j other than i as negative pairs. we define the representation similarity as sim(hi, h′j) = c(t(hi), t(h′j)), where c is the cosine similarity and t is a non-linear projection implemented with a two-layer mlp model. we follow zhu et al. (2020) to define the contrastive objective for any positive pair (hi, h′i) as
l(hi, h′i) = −log [ exp(sim(hi, h′i)/τ) / ( Σ_{j=1}^n exp(sim(hi, h′j)/τ) + Σ_{j=1}^n 1[j≠i] exp(sim(hi, hj)/τ) ) ],
where τ denotes the temperature parameter and 1[j≠i] ∈ {0, 1} is the indicator function whose value is 1 if and only if j ≠ i. the overall contrastive objective is computed over the positive pairs (hi, h′i) and (h′i, hi) for all nodes as
lcon = (1/2n) Σ_{i=1}^n [l(hi, h′i) + l(h′i, hi)].
to prevent the augmentation model g from generating graphs that deviate too much from input graphs, we add a reconstruction-based regularization term to the overall training objective.
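the contrastive objective above can be checked numerically; the sketch below applies cosine similarity to the representations directly (the paper additionally passes them through the projection t), so the function names and this simplification are our own assumptions:

```python
import numpy as np

def l2_normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

def contrastive_loss(H, H_prime, tau=0.5):
    """L_con = (1/2n) sum_i [l(h_i, h'_i) + l(h'_i, h_i)], where
    l(h_i, h'_i) = -log exp(sim(h_i,h'_i)/tau) /
        (sum_j exp(sim(h_i,h'_j)/tau) + sum_{j != i} exp(sim(h_i,h_j)/tau))."""
    def one_side(A, B):
        A, B = l2_normalize(A), l2_normalize(B)
        cross = np.exp(A @ B.T / tau)        # inter-view similarities
        intra = np.exp(A @ A.T / tau)        # intra-view similarities
        np.fill_diagonal(intra, 0.0)         # drop the j == i intra-view term
        return -np.log(np.diag(cross) / (cross.sum(1) + intra.sum(1)))
    return 0.5 * (one_side(H, H_prime) + one_side(H_prime, H)).mean()

H = np.eye(4)                                     # toy node representations
aligned = contrastive_loss(H, H)                  # matched positive pairs
shuffled = contrastive_loss(H, H[[1, 0, 3, 2]])   # mismatched positive pairs
assert aligned < shuffled   # agreement between the two views lowers the loss
```

as expected, the loss is lowest when each node's two views agree, which is what pushes the augmented graph to stay informative about the input.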
specifically, let lbce and lmse denote the binary cross-entropy loss and the mean squared error loss, respectively, and the regularization term is defined as
lreconst = lbce(a, ã′) + λ lmse(x, x′) = −Σ_{i=1}^n Σ_{j=1}^n [aij log ã′ij + (1 − aij) log(1 − ã′ij)] + λ∥x − x′∥²f,
where λ is a hyperparameter, and ∥·∥f denotes the frobenius norm of a matrix (golub & van loan, 1996). to sum up, the overall training process can be described as the following min-max optimization procedure,
min_{f,g} max_k l = min_{f,g} max_k (α ladv + β lcon + γ lreconst),
where α, β, γ are hyperparameters. the parameters of the augmentation model g, the adversary model k, and the representation encoder f are jointly optimized with this min-max optimization procedure. in each training step, we first update the parameters of f and g to minimize l while keeping k fixed, then update the parameters of k to maximize ladv while keeping f and g fixed. see figure 1 for an overview of our proposed graphair method. the training algorithm is summarized in appendix b. discussions graphair learns different fairness-aware augmentation strategies for different graph datasets by the automated augmentation model, thereby eliminating the negative effect of fixed fairness-relevant augmentation strategies (spinelli et al., 2021; agarwal et al., 2021; kose & shen, 2022). in addition, graphair mitigates bias by modifying both graph topology structures and node features, while some existing studies (spinelli et al., 2021) only consider one of them. we demonstrate these advantages through extensive empirical studies in sections 4.2 and 4.3. furthermore, we show in sections 3.5 and 3.6 that the used training objectives can be theoretically proven to help the augmentation model generate new graphs with fair topology structures and node features, and preserve the most informative components from the input graph simultaneously.
specifically, we use adversarial and contrastive learning to optimize the augmentation model to satisfy the fairness and informativeness properties, respectively. theoretical analysis of fairness
learning input-agnostic manipulation directions in stylegan with text guidance yoonjeon kim1, hyunsu kim2, junho kim2, yunjey choi2, eunho yang1,3∗ korea advanced institute of science and technology (kaist)1, naver ai lab2, aitrics3 yoonkim313@kaist.ac.kr hyunsu1125.kim@navercorp.com jhkim.ai@navercorp.com yunjey.choi@navercorp.com eunhoy@kaist.ac.kr abstract with the advantages of fast inference and human-friendly flexible manipulation, image-agnostic style manipulation via text guidance enables new applications that were not previously available. the state-of-the-art text-guided image-agnostic manipulation method embeds the representation of each channel of stylegan independently in the contrastive language-image pre-training (clip) space, and provides it in the form of a dictionary to quickly find out the channel-wise manipulation direction during inference time. however, in this paper we argue that this dictionary, which is constructed by controlling a single channel individually, is limited in accommodating the versatility of text guidance since the collective and interactive relations among multiple channels are not considered. indeed, we show that it fails to discover a large portion of manipulation directions that can be found by existing methods, which manually manipulate the latent space without texts. to alleviate this issue, we propose a novel method multi2one that learns a dictionary, whose entry corresponds to the representation of a single channel, by taking into account the manipulation effect coming from the interaction with multiple other channels. we demonstrate that our strategy resolves the inability of previous methods in finding diverse known directions from unsupervised methods and unknown directions from random text while maintaining the real-time inference speed and disentanglement ability.
introduction a wide range of generative models including adversarial networks (goodfellow et al., 2014; karras et al., 2018; 2019; 2020b; kim et al., 2022; kim & ha, 2021; karras et al., 2021), diffusion models (dhariwal & nichol, 2021), and auto-regressive models (dosovitskiy et al., 2020; chang et al., 2022) have demonstrated a notable ability to generate high-resolution images that are hardly distinguishable from real images. among these powerful models, style-based gan models (karras et al., 2019; 2020b) are equipped with a unique latent space which enables style and content mixing of given images, manipulation of local regions (wu et al., 2021), and interpolation between different classes of images (sauer et al., 2022). in this paper, we focus on image manipulation based on a pre-trained stylegan, considering the unique advantages mentioned above and its popularity. based on the steerability in the latent space of stylegan, researchers have put tremendous effort into finding a direction that causes a semantically equivalent change across all image samples. in this work, we refer to such a latent direction as a global direction. unlike a local direction, which is a sample-wise traversal direction found by iterative optimization using a single image (local basis (choi et al., 2021) and latent optimization of styleclip (patashnik et al., 2021)), a global direction allows fast inference and is applicable to any image once found using supervised (jahanian et al., 2019), unsupervised (shen & zhou, 2021; wang & ponce, 2021; härkönen et al., 2020; voynov & babenko, 2020), or text-guided methods (global mapper & globaldirection¹ of styleclip (patashnik et al., 2021)). ¹in order to distinguish it from global direction, which means finding input-agnostic directions, we express the method proposed in styleclip in this way. figure 1: (a) manipulation by the 70-th direction from ganspace generates ‘a man with wide smile’.
globaldirection (gd), highlighted in red, fails to reproduce a similar result even when provided with various text guidances. (b) manipulation results by randomly selected text, demonstrating that gd has insufficient manipulation ability. the same number of channels is manipulated in both methods. among them, the text-guided methods have a unique advantage in that they can naturally provide the flexibility of manipulation through the diversity of the given driving text, without human supervision to discover the direction in the latent space. however, in this paper we argue that, contrary to this common belief on text guidance, the standard method (patashnik et al., 2021) for text-based stylegan manipulation surprisingly fails to even find the manipulation directions that are known to be found by unsupervised approaches (härkönen et al., 2020; shen & zhou, 2021) (see fig. 1(a) for examples). in addition, we also show that this standard method does not properly perform manipulation on a large number of randomly selected texts (see fig. 1(b) for examples). we hypothesize that the failure is due to the naïve approach that only considers a change of image caused by a single channel in stylespace, neglecting diverse directions that are visible only when manipulating multiple channels as a whole. in order to address these issues, we propose a novel method, named multi2one, which learns a dictionary that can manipulate multiple channels corresponding to a given text. however, since there is no paired ground truth of texts and their corresponding manipulation directions, we embed the directions found by existing unsupervised methods into the clip space and learn a dictionary to reproduce them in the clip space. note that this has more meaning than simply reproducing the known directions derived by unsupervised methods.
as the dictionary learns the relationship between channels in stylespace and clip space, we can find manipulations that could not be found with unsupervised methods, using diverse text inputs. through extensive experiments, we confirm that, contrary to the state-of-the-art method (patashnik et al., 2021) which explicitly encodes every single channel, our multi-channel based strategy not only excels in the reconstruction of unsupervised directions but also in the discovery of text-guided directions. related work style-based generators generators of style-based models (karras et al., 2019; 2020b;a; 2021) are built upon the progressive structure (karras et al., 2018) that generates images of higher resolution in deeper blocks. the popularity of the stylegan structure, which has been employed in numerous studies, comes from its ability to generate high-fidelity images, transfer styles to other images, and manipulate images in the latent spaces using inversion methods (zhu et al., 2020; roich et al., 2021; tov et al., 2021; collins et al., 2020). the latent spaces of stylegan used for manipulation are the intermediate space w and stylespace s. unsupervised global directions image-agnostic directions are latent vectors that create a semantically equivalent shift when applied to the latent space of stylegans. in order to find such directions, sefa (shen & zhou, 2021) performs pca on the first weight that comes after the intermediate space w in pre-trained stylegan, deducing the principal components as the global directions. on the other hand, ganspace (härkönen et al., 2020) relies on randomly sampled latent codes in w. figure 2: a diagram depicting the framework of dictionary-based image manipulation via text guidance. our method, multi2one, is differentiated from the previous methods in that it learns the dictionary for the text input. the proposed novel dictionary allows more flexible and expansive discovery of the manipulation direction ˆs with better results.
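the dictionary-based lookup depicted in fig. 2 — scoring every style channel against the text embedding in clip space and reading off one score per channel — can be sketched in a few lines. this is a toy illustration with 2-d stand-in "clip" vectors and a 3-channel dictionary, not the actual styleclip or multi2one implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def manipulation_direction(dictionary, text_embedding):
    """One similarity score per style channel, i.e. s_hat = D^T clip(t).

    `dictionary` holds one CLIP-space vector per style channel (a row of D^T);
    channels whose embedded manipulation effect aligns with the text get
    large entries in the returned direction.
    """
    return [cosine(row, text_embedding) for row in dictionary]

# toy 3-channel dictionary in a 2-d "CLIP" space (illustrative values only)
D = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
t = [1.0, 0.0]  # toy text embedding clip(t)
s_hat = manipulation_direction(D, t)
# channel 0 aligns perfectly with the text, channel 1 not at all
```

the paper's argument is that entries of this dictionary are built one channel at a time, so directions that only emerge from moving many channels jointly are never represented.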
ϕ(·) in dglobaldirection is an abbreviation of ϕclip(·) of eq. (1). the dictionary learning process of multi2one to create dmulti2one is illustrated in sec. 4. the eigenvectors from the latent codes proved to be global directions that share an image-agnostic modification ability. text-guided image manipulations most of the text-guided image manipulation methods aim to find a local direction (kocasari et al., 2022; xia et al., 2021; patashnik et al., 2021), which is an image-specific direction that applies to a single sample of image. methods that find local directions using text guidance can be found in various models including gans, diffusion models (kim & ye, 2021; nichol et al., 2021; avrahami et al., 2022; liu et al., 2021), and vision transformers (chang et al., 2022). two unique approaches for finding a global direction using text guidance in stylegan are global mapper and globaldirection of styleclip (patashnik et al., 2021). global mapper finds an image-invariant direction for a single text by optimizing a fully connected layer. however, this method requires 10 hours of training time for every single text, making it less popular than globaldirection. on the other hand, the globaldirection method offers real-time manipulation at inference time using a dictionary-based framework that is applicable to any input image and text. limited coverage of styleclip globaldirection method in this section, we briefly review the globaldirection of styleclip (patashnik et al., 2021), which performs stylegan-based image manipulation using text guidance (sec. 3.1). then, we provide our key motivation that this state-of-the-art text-guided manipulation method is surprisingly insufficient to fully utilize the manipulative power of stylegan (sec. 3.2). text-guided stylegan image manipulation stylespace of stylegan: the generators of the stylegan family (karras et al., 2019; 2020a;b) have a number of latent spaces: z, w, w+ (abdal et al., 2019), and the stylespace s. the original latent space z is typically the standard normal distribution, and the generator transforms an input noise z ∼ n(0, i) into the intermediate latent spaces w, w+, and stylespace s (wu et al., 2021), sequentially. a recent study (wu et al., 2021) shows that stylespace s is the most disentangled, such that it can change a distinct visual attribute in a localized way. the number of style channels in s is 6048 excluding torgb channels for stylegan-ada (resolution 1024²), and recent methods (kocasari et al., 2022; patashnik et al., 2021; wu et al., 2021) modify the values of these channels to edit an image. our method also adopts stylespace s, and we use n to denote the total number of channels in s (that is, s = [s1, s2, ..., sn]ᵗ ∈ s, and si is a single parameter). for convenience of explanation, the pre-trained generator g is re-defined with respect to stylespace as x = gs(s), where x is a generated image. the goal of stylegan-based image manipulation via text is to find a direction ˆs = [ˆs1, ˆs2, ..., ˆsn]ᵗ in stylespace which generates an image xedited = gs(s + ˆs) suitable for the provided text guidance t. note that s is the inverted style vector of image x, found via stylegan inversion methods such as alaluf et al. (2021); roich et al. (2021); tov et al. (2021), used for manipulation purposes in most cases. the main benefit of considering stylespace in image manipulation is that, thanks to its well-disentangled property, it does not change the undesired regions of the image when modifying a small number of style channels. table 1: measurement of clip similarity score cos(·, ·) (↑) between the manipulated image and the clip representation ϕclip(α) of unsupervised direction α. (columns: cos(ϕclip(α), xunsup), cos(ϕclip(α), xgd).) text-guided manipulation by styleclip the globaldirection (patashnik et al., 2021) is a representative method of text-driven image manipulation that provides an input-agnostic direction with a level of disentanglement. intuitively, it computes the similarity between the input text and each style channel in the clip space (radford et al., 2021) to find the channels that should be modified given the text. in order to compute the similarity between the text and a style channel, both should be encoded into clip space. while the text guidance t is trivially encoded via the text encoder of clip as clip(t) ∈ ℝᵖ, style channels in stylespace s need additional pre-processing for the embedding. globaldirection proposes to embed the manipulation effect of the i-th style channel into clip space with the following mapping from stylespace: ϕclip(ei) := E_{s∈S}[clip(gs(s + ei)) − clip(gs(s))], (1) where the manipulation vector ei ∈ ℝⁿ is a zero-vector except for the i-th entry. adding the manipulation vector ei to the original style vector s indicates that only the i-th channel among n channels in stylespace is manipulated. note that ϕclip(ei) is also p-dimensional, since the clip encoder maps images generated by gs(·) into a p-dimensional clip space. the above mapping in eq. (1) is enumerated across all n channels in stylespace to create a dictionary dglobaldirection := [ϕclip(e1), ϕclip(e2), ..., ϕclip(en)] ∈ ℝᵖˣⁿ. finally, with this dictionary, the manipulation direction ˆs ∈ ℝⁿ by globaldirection is given as the similarity score measured by the following equation: ˆs = dᵀglobaldirection clip(t). this overall manipulation procedure is visualized in fig. 2. in the following section (sec. 3.2), we present evidence for the hypothesis that the single-channel encoding strategy with ϕclip(ei) used to create the dictionary d is the major bottleneck causing the limited coverage issues that styleclip suffers from. coverage analysis of styleclip globaldirection method | 3 | [
108.249,
291.9430784,
427.3523434,
302.5930978
] |
bXNl-myZkJl.pdf | 2,023 | 2 | more convnets in the 2020s: scaling up kernels beyond 51 × 51 using sparsity shiwei liu1,2, tianlong chen1∗, xiaohan chen1∗, xuxi chen1, qiao xiao2, boqian wu2, tommi k¨arkk¨ainen4, mykola pechenizkiy2, decebal constantin mocanu2,3,5, zhangyang wang1 1university of texas at austin, 2eindhoven university of technology, 3university of twente, 4university of jyv¨askyl¨a, 5university of luxembourg codes: https://github.com/vita-group/slak abstract transformers have quickly shined in the computer vision world since the emergence of vision transformers (vits). the dominant role of convolutional neural networks (cnns) seems to be challenged by increasingly effective transformer-based models. very recently, a couple of advanced convolutional models strike back with large kernels motivated by the local-window attention mechanism, showing appealing performance and efficiency. while one of them, i.e. replknet, impressively manages to scale the kernel size to 31×31 with improved performance, the performance starts to saturate as the kernel size continues growing, compared to the scaling trend of advanced vits such as swin transformer. in this paper, we explore the possibility of training extreme convolutions larger than 31×31 and test whether the performance gap can be eliminated by strategically enlarging convolutions. this study ends up with a recipe for applying extremely large kernels from the perspective of sparsity, which can smoothly scale up kernels to 61×61 with better performance. 
built on this recipe, we propose sparse large kernel network (slak), a pure cnn architecture equipped with sparse factorized 51×51 kernels that can perform on par with or better than state-of-the-art hierarchical transformers and modern convnet architectures like convnext and replknet, on imagenet classification as well as a wide range of downstream tasks including semantic segmentation on ade20k, object detection on pascal voc 2007, and object detection/segmentation on ms coco. introduction since invented (fukushima & miyake, 1982; lecun et al., 1989; 1998), convolutional neural networks (cnns) (krizhevsky et al., 2012a; simonyan & zisserman, 2015.; he et al., 2016; huang et al., 2017; howard et al., 2017; xie et al., 2017; tan & le, 2019) have quickly evolved as one of the most indispensable architectures of machine learning in the last decades. however, the dominance of cnns has been significantly challenged by transformer (vaswani et al., 2017) over the past few years. stemming from natural language processing, vision transformers (vits) (dosovitskiy et al., 2021; d’ascoli et al., 2021; touvron et al., 2021b; wang et al., 2021b; liu et al., 2021e; vaswani et al., 2021) have demonstrated strong results in various computer vision tasks including image classification (dosovitskiy et al., 2021; yuan et al., 2021b), object detection (dai et al., 2021; liu et al., 2021e), and segmentation (xie et al., 2021; wang et al., 2021a;c; cheng et al., 2021). meanwhile, works on understanding of vits have blossomed. plausible reasons behind the success of vits are fewer inductive bias (dosovitskiy et al., 2021), long-range dependence (vaswani et al., 2017), advanced architecture (yu et al., 2021), and more human-like representations (tuli et al., 2021), etc. recently, there is a rising trend that attributes the supreme performance of vits to the ability to capture a large receptive field. 
in contrast to cnns, which perform convolution in a small sliding window (e.g., 3×3 and 5×5) with shared weights, global attention or local attention with larger window sizes in vits (liu et al., 2021e) directly enables each layer to capture a large receptive field. inspired by this trend, some recent works on cnns (liu et al., 2022b; ding et al., 2022) strike back by designing advanced pure cnn architectures and plugging large kernels into them. for instance, replknet (ding et al., 2022) successfully scales the kernel size to 31×31, while achieving comparable results to swin transformer (liu et al., 2021e). (∗equal contribution.) figure 1: large depth-wise kernel (e.g., 51×51) paradigms of convnext, replknet, and slak. dark blue squares refer to the dense weights in convolutional kernels; light blue squares refer to the sparse weights in convolutional kernels. however, large kernels are notoriously difficult to train. even with the assistance of a parallel branch with small kernels, the performance of replknet starts to saturate as the kernel size continues increasing, compared to the scaling trend of advanced vits such as swin transformer. therefore, it remains mysterious whether we can exceed the transformer-based models by further scaling the kernel size beyond 31×31. in this paper, we attempt to answer this research question by leveraging sparsity, commonly observed in the human visual system. sparsity has been seen as one of the most important principles in the primary visual cortex (v1) (tong, 2003), where the incoming stimuli have been hypothesized to be sparsely coded and selected (desimone & duncan, 1995; olshausen & field, 1997; vinje & gallant, 2000).
we extensively study the trainability of large kernels and unveil three main observations: (i) existing methods that either naively apply larger kernels (liu et al., 2022b) or assist with structural re-parameterization (ding et al., 2022) fail to scale kernel sizes beyond 31×31; (ii) replacing one large m×m kernel with two rectangular, parallel kernels (m×n and n×m, where n < m) can smoothly scale the kernel size up to 61×61 with improved performance; (iii) constructing with sparse groups while expanding width significantly boosts the performance. built upon these observations, we propose slak – sparse large kernel network – a new pure cnn architecture equipped with an unprecedented kernel size of 51×51. evaluated across a variety of tasks including imagenet classification (deng et al., 2009), semantic segmentation on ade20k (zhou et al., 2019), object detection on pascal voc 2007 (everingham et al., 2007), and object detection/segmentation on coco (lin et al., 2014), slak performs better than or on par with cnn pioneers replknet and convnext (liu et al., 2022b), as well as sota attention-based models, e.g., swin (liu et al., 2021e) and cswin (dong et al., 2022) transformers, on imagenet. our analysis of the effective receptive field (erf) shows that when plugged into the recently proposed convnext, our method is able to cover a larger erf region than existing large-kernel paradigms. related work large kernel in attention. originally introduced for natural language processing (vaswani et al., 2017) and extended to computer vision by dosovitskiy et al. (2021), self-attention can be viewed as a global depth-wise kernel that enables each layer to have a global receptive field. swin transformer (liu et al., 2021e) is a vit variant that adopts local attention in a shifted-window manner.
compared with global attention, local attention (ramachandran et al., 2019; vaswani et al., 2021; chu et al., 2021; liu et al., 2021d; dong et al., 2022) can greatly improve memory and computation efficiency with appealing performance. since the size of attention windows is at least 7, it can be seen as an alternative class of large kernel. a recent work (guo et al., 2022b) proposes a novel large kernel attention module that uses stacked depthwise small convolution, dilated convolution, as well as pointwise convolution to capture both local and global structure. large kernel in convolution. large kernels in convolution date back to the 2010s (krizhevsky et al., 2012b; szegedy et al., 2015; 2017), if not earlier, where large kernel sizes such as 7×7 and 11×11 are applied. global convolutional network (gcn) (peng et al., 2017) enlarges the kernel size to 15 by employing a combination of 1×m + m×1 and m×1 + 1×m convolutions. however, the proposed method leads to performance degradation on imagenet. the family of inceptions (szegedy et al., 2016; 2017) allows for the utilization of varying convolutional kernel sizes to learn spatial patterns at different scales. with the popularity of vgg (simonyan & zisserman, 2014), it has been common over the past decade to use a stack of small kernels (1×1 or 3×3) to obtain a large receptive field (he et al., 2016; howard et al., 2017; xie et al., 2017; huang et al., 2017). until very recently, some works have started to revive the usage of large kernels in cnns. figure 2: dynamic sparsity. dynamic sparsity allows us to construct and train initially sparse neural networks (sparse kernels) from scratch. during training, it dynamically adjusts the sparse weights by pruning the least important weights and adding new ones. such a dynamic procedure gradually optimizes the sparse kernels to a good pattern and hence encourages a more elaborate capture of local features. li et al.
(2021) propose involution with 7×7 large kernels that uses distinct weights in the spatial extent while sharing weights across channels. however, the performance improvement plateaus when further expanding the kernel size. han et al. (2021b) find that dynamic depth-wise convolution (7×7) performs on par with the local attention mechanism if we substitute the latter with the former in swin transformer. liu et al. (2022b) imitate the design elements of swin transformer (liu et al., 2021e) and design convnext, equipped with 7×7 kernels, surpassing the performance of the former. replknet (ding et al., 2022) for the first time scales the kernel size to 31×31 by constructing a small kernel (e.g., 3×3 or 5×5) parallel to it, and achieves comparable performance to the swin transformer. a series of works on continuous convolutional kernels (romero et al., 2021; 2022) can be used on data of arbitrary resolutions, lengths, and dimensionalities. lately, chen et al. (2022) reveal large kernels to be feasible and beneficial for 3d networks too. prior works have explored the idea of paralleling (peng et al., 2017; guo et al., 2022a) or stacking (szegedy et al., 2017) two complementary m×1 and 1×m kernels. however, they limit the shorter edge to 1 and do not scale the kernel size beyond 51×51. different from those prior arts, we decompose a large kernel into two complementary non-square kernels (m×n and n×m), improving the training stability and memory scalability of large convolutional kernels. dynamic sparsity. sparsity is a long-standing research topic; recent attempts (mocanu et al., 2018; liu et al., 2021b;c; evci et al., 2020; mostafa & wang, 2019; dettmers & zettlemoyer, 2019; chen et al., 2021) train intrinsically sparse neural networks from scratch using only a small proportion of parameters and flops (as illustrated in figure 2).
dynamic sparsity enables training sparse models from scratch, hence the training and inference flops and memory requirements are only a small fraction of those of the dense models. different from post-training pruning (han et al., 2015; frankle & carbin, 2019), models built with dynamic sparsity can be trained from scratch to match their dense counterparts without involving any pre-training or dense training. dynamic sparsity stems from sparse evolutionary training (set) (mocanu et al., 2018; liu et al., 2021b), which randomly initializes the sparse connectivity between layers and dynamically adjusts it via a parameter prune-and-grow scheme during the course of training. the parameter prune-and-grow scheme allows the model's sparse structure to gradually evolve, achieving better performance than naively training a static sparse network (liu et al., 2021c). in this paper, our target is not to find sparse networks that can match the corresponding dense networks. motivated by the principle of resnext (xie et al., 2017; liu et al., 2022b) – "use more groups, expand width" – we instead attempt to leverage dynamic sparsity to scale neural architectures with extreme kernels. failures of existing approaches to go beyond 31×31 kernels we first study the performance of extreme kernel sizes larger than 31×31 using two existing large-kernel techniques, convnext (liu et al., 2022b) and replknet (ding et al., 2022). we take the recently-developed convnext on imagenet-1k as our benchmark to conduct this study. we adopt the efficient large-kernel implementation developed by megengine (meg, 2020) in this paper. we follow recent works (liu et al., 2022b; bao et al., 2021; liu et al., 2021e; ding et al., 2022; touvron et al., 2021b) using mixup (zhang et al., 2017), cutmix (yun et al., 2019), randaugment (cubuk et al., 2020), and random erasing (zhong et al., 2020) as data augmentations. table 1: test accuracy of 120-epoch convnext-t trained with various large kernel recipes on imagenet-1k. "naive" refers to directly enlarging the kernel size of convnext; "replknet" refers to training convnext with structural re-parameterization (ding et al., 2022). the original convnext is built with 7×7 kernels. (columns: kernel size; top-1 acc, #params, flops under "naive"; top-1 acc, #params, flops under "replknet".) stochastic depth (huang et al., 2016) and label smoothing (szegedy et al., 2016) are applied as regularization with the same hyper-parameters as used in convnext. we train models with adamw (loshchilov & hutter, 2019). it is important to note that all models are trained for a reduced length of 120 epochs in this section, just to sketch the scaling trends of large kernel sizes. later in section 5, we will adopt the full training recipe and train our models for 300 epochs, to enable fair comparisons with state-of-the-art models. please refer to appendix a for more details. liu et al. (2022b) show that naively increasing the kernel size from 3×3 to 7×7 brings considerable performance gains. very recently, replknet (ding et al., 2022) successfully scales convolutions up to 31×31 with structural re-parameterization (ding et al., 2019; 2021). we further increase the kernel size to 51×51 and 61×61 and see whether larger kernels can bring more gains. following the design in replknet, we set the kernel sizes of each stage as [51, 49, 47, 13] and [61, 59, 57, 13], and report test accuracies in table 1. as expected, naively enlarging the kernel size from 7×7 to 31×31 decreases the performance, whereas replknet can overcome this problem, improving the accuracy by 0.5%. unfortunately, this positive trend does not continue when we further increase the kernel size to 51×51. one plausible explanation is that although the receptive field may be enlarged by using extremely large kernels, it might fail to maintain the desirable property of locality.
since the stem cell in standard resnet (he et al., 2016) and convnext results in a 4× downsampling of the input images, extreme 51×51 kernels are already roughly equal to global convolution for the typical 224×224 imagenet input. therefore, this observation makes sense, as well-designed local attention (liu et al., 2021e;d; chu et al., 2021) usually outperforms global attention (dosovitskiy et al., 2021) in vits through a similar mechanism. motivated by this, we see the opportunity to address this problem by introducing locality while preserving the ability to capture global relations. a recipe for extremely large kernels beyond 31×31 in this section, we introduce a simple, two-step recipe for extremely large kernels beyond 31×31: step 1. decomposing a large kernel into two rectangular, parallel kernels. step 2. using sparse groups, expanding more width. decomposing a large kernel into two rectangular, parallel kernels smoothly scales the kernel size up to 61×61. although using convolutions of medium sizes (e.g., 31×31) seemingly can directly avoid this problem, we want to investigate whether we can further push the performance of cnns by using (global) extreme convolutions. our recipe here is to approximate the large m×m kernel with a combination of two parallel, rectangular convolutions whose kernel sizes are m×n and n×m (where n < m), respectively, as shown in figure 1. following ding et al. (2022), we keep a 5×5 layer parallel to the large kernels and sum up their outputs after a batch norm layer. this decomposition balances capturing long-range dependencies against extracting local detail features (with its shorter edge). moreover, existing techniques for large kernel training (liu et al., 2022b; ding et al., 2022) suffer from quadratic computational and memory overhead as the kernel size increases. in stark contrast, the overhead of our method increases just linearly with the kernel size (figure 4).
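the quadratic-versus-linear scaling claim can be checked with simple per-channel weight counts for a depth-wise kernel. a minimal sketch (counting only kernel weights, ignoring the parallel 5×5 branch and biases):

```python
def dense_params(m):
    """Weights in one dense depth-wise m x m kernel (per channel)."""
    return m * m

def decomposed_params(m, n=5):
    """Weights in the two parallel rectangular kernels, m x n and n x m."""
    return 2 * m * n

# the dense cost grows quadratically with m, the decomposed cost linearly
for m in (31, 51, 61):
    print(m, dense_params(m), decomposed_params(m))
```

for example, going from 31 to 61 multiplies the dense weight count by (61/31)² ≈ 3.9, while the decomposed count grows only by the factor 61/31 ≈ 2.0.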
the performance of kernel decomposition with n = 5 (see appendix e for the effect of n) is reported as the “decomposed” group in table 2. as the decomposition reduces learnable parameters and flops, it is no surprise that our network initially sacrifices accuracy slightly compared to the original replknet at medium kernel sizes, i.e., 31×31. however, as the convolution size continues to increase, our method can scale the kernel size up to 61×61 with improved performance. table 2: test accuracy of convnext-t trained with various large kernel recipes on imagenet-1k. all the models are trained for 120 epochs. (columns: kernel size; top-1 acc, #params, flops for each of the “decomposed”, “sparse groups”, and “sparse groups, expand more width” settings.) “use sparse groups, expand more width” significantly boosts the model capacity. the recently proposed convnext (liu et al., 2022b) revisits the principle introduced in resnext (xie et al., 2017) that splits convolutional filters into small but more groups. instead of using the standard group convolution, convnext simply employs depthwise convolutions with an increased width to achieve the goal of “use more groups, expand width”. in this paper, we attempt to extend this principle from a sparsity-inspired perspective – “use sparse groups, expand more width”. to be specific, we first replace the dense convolutions with sparse convolutions, where the sparse kernels are randomly constructed based on the layer-wise sparsity ratio of snip (lee et al., 2019)1, due to its strong performance on large-scale models (liu et al., 2022a). after construction, we train the sparse model with dynamic sparsity (mocanu et al., 2018; liu et al., 2021b), where the sparse weights are dynamically adapted during training by pruning the weights with the lowest magnitude and growing the same number of weights randomly. doing so enables dynamic adaptation of sparse weights, leading to better local features.
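the prune-and-grow step just described can be sketched on a flat weight vector. this is a toy illustration of the general magnitude-prune / random-grow scheme, not the paper's actual implementation (which operates on convolutional kernels with per-layer sparsity ratios):

```python
import random

def prune_and_grow(weights, mask, frac, rng):
    """One dynamic-sparsity update: deactivate the `frac` fraction of
    active weights with the smallest magnitude, then reactivate the same
    number of currently inactive positions chosen at random, so the
    overall sparsity level stays constant."""
    active = [i for i, on in enumerate(mask) if on]
    inactive = [i for i, on in enumerate(mask) if not on]
    k = int(len(active) * frac)
    # prune: smallest-|w| active positions are zeroed and masked out
    for i in sorted(active, key=lambda i: abs(weights[i]))[:k]:
        mask[i] = False
        weights[i] = 0.0
    # grow: random inactive positions are reactivated (starting from zero)
    for i in rng.sample(inactive, k):
        mask[i] = True
    return weights, mask

w = [0.9, -0.05, 0.4, 0.0, 0.0, -0.8]
m = [True, True, True, False, False, True]
w, m = prune_and_grow(w, m, frac=0.25, rng=random.Random(0))
# one weight (the smallest-magnitude active one) is pruned, one regrown
```

because `inactive` is computed before pruning, a position pruned in this step cannot be immediately regrown, which lets the sparse connectivity actually move over the course of training.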
as kernels are sparse throughout training, the corresponding parameter count and training/inference flops are only a fraction of those of the dense models. see appendix b for the full details of dynamic sparsity. to evaluate, we sparsify the decomposed kernels with 40% sparsity and report the performance in the “sparse groups” column. we can observe in the middle column of table 2 that dynamic sparsity notably reduces flops by more than 2.0 g, despite causing temporary performance degradation. we next show that the above high efficiency of dynamic sparsity can be effectively transferred to model scalability. dynamic sparsity allows us to scale the model size up in a computation-friendly manner. for example, using the same sparsity (40%), we can expand the model width by 1.3× while keeping the parameter count and flops roughly the same as the dense model. this brings us significant performance gains, increasing the performance from 81.3% to 81.6% with extreme 51×51 kernels. impressively, equipped with 61×61 kernels, our method outperforms the previous state of the art (liu et al., 2022b; ding et al., 2022) while saving 55% flops. large kernels generalize better than small kernels with our recipe. to demonstrate the benefits of large kernels, we also report the impact of each step for the small 7×7 kernel in table 2. we can clearly see that the performance consistently increases with kernel size, up to 51×51. applying each part of our proposed recipe to 7×7 kernels leads to either no gain or marginal gains compared to our 51×51 kernels. this break-down experiment justifies our claim: large kernels are the root of power, and our proposed recipe helps unleash such power from large kernels. building the sparse large kernel network (slak) so far, we have discovered our recipe, which can successfully scale the kernel size up to 51×51 without hurting performance.
built on this recipe, we next construct our own sparse large kernel network (slak), a pure cnn architecture equipped with extreme 51×51 kernels. slak is built based on the architecture of convnext. the design of the stage compute ratio and the stem cell are inherited from convnext. the number of blocks in each stage is [3, 3, 9, 3] for slak-t and [3, 3, 27, 3] for slak-s/b. the stem cell is simply a convolution layer with 4×4 kernels and stride 4. we first directly increase the kernel sizes of convnext to [51, 49, 47, 13] for each stage, and replace each m×m kernel with a combination of m×5 and 5×m kernels, as illustrated in figure 1. (footnote 1: the snip ratio is obtained by globally selecting the important weights across layers with the highest connection sensitivity score |g ⊙ w|, where w and g are the network weights and gradients, respectively.) we find that adding a batchnorm layer directly after each decomposed kernel, before summing the outputs up, is crucial. following the guideline of “use sparse groups, expand more width”, we further sparsify the whole network and expand the width of the stages by 1.3×, ending up with slak. even though there could be large room to improve slak's performance by tuning the trade-off between model width and sparsity (as shown in appendix d), we keep one set of hyperparameters (1.3× width and 40% sparsity) for all experiments, so slak works simply “out of the box” with no ad-hoc tuning at all. evaluation of slak to comprehensively verify the effectiveness of slak, we compare it with various state-of-the-art baselines on a large variety of tasks, including: imagenet-1k classification (deng et al., 2009), semantic segmentation on ade20k (zhou et al., 2019), object detection on pascal voc 2007 (everingham et al., 2007), and object detection/segmentation on coco. table 3: classification accuracy on imagenet-1k.
for slak models, we report both the theoretical, sparsity-aware parameter & flops numbers (in black), as well as the numbers measured assuming no sparsity-aware acceleration (in blue). (columns: model, image size, #params, flops, top-1 acc; rows: resnet-50 (he et al., 2016), resnext-50-32×4d (xie et al., 2017), resmlp-24 (touvron et al., 2021a), deit-s (touvron et al., 2021b), swin-t (liu et al., 2021e), tnt-s (han et al., 2021a), t2t-vitt-14 (yuan et al., 2021a), convnext-t (liu et al., 2022b), slak-t, mixer-b/16 (tolstikhin et al., 2021), resnet-101 (he et al., 2016), resnext101-32×4d (xie et al., 2017), pvt-large (wang et al., 2021b), t2t-vitt-19 (yuan et al., 2021a), swin-s (liu et al., 2021e), convnext-s (liu et al., 2022b), slak-s, deit-base/16 (touvron et al., 2021b), replknet-31b (ding et al., 2022), swin-b (liu et al., 2021e), convnext-b (liu et al., 2022b), slak-b, vit-base/16 (dosovitskiy et al., 2021), deit-b/16 (touvron et al., 2021b), swin-b (liu et al., 2021e), replknet-31b (ding et al., 2022), convnext-b (liu et al., 2022b), slak-b.) evaluation on imagenet-1k imagenet-1k contains 1,281,167 training images and 50,000 validation images. we use exactly the same training configurations as in section 4, except now training for the full 300 epochs, following convnext and swin transformer. we observed that models with batchnorm layers and ema see poor performance when trained over 300 epochs (also pointed out by liu et al. (2022b)), and resolved this by running one additional pass over the training data, the same as used in garipov et al. (2018); izmailov et al. (2018). please refer to appendix a for more details about the training configurations. we compare the performance of slak on imagenet-1k with various state-of-the-art models in table 3. with similar model sizes and flops, slak outperforms existing convolutional models such as resne(x)t (he et al., 2016; xie et al., 2017), replknet (ding et al., 2022), and convnext (liu et al., 2022b).
without using any complex attention modules or patch embedding, slak is able to achieve higher accuracy than state-of-the-art transformers, e.g., swin transformer (liu et al., 2021e) and pyramid vision transformer (wang et al., 2021b; 2022). perhaps more interestingly, directly replacing the 7×7 kernels of convnext-s with 51×51 kernels improves accuracy over the latter by 0.7%. moreover, table 3 shows that our model benefits more from larger input sizes: the performance improvement of slak-b over convnext-b at 384×384 input is twice that at 224×224, highlighting the advantages of large kernels for high-resolution training (liu et al., 2021d). we also examine whether slak can rival a stronger transformer model – cswin transformer (dong et al., 2022), a carefully designed hybrid architecture of transformers and convolutions that performs self-attention in both horizontal and vertical stripes. as shown in appendix c, slak consistently performs on par with cswin transformers, and is, to the best of our knowledge, the first pure convnet model to achieve this without any bells and whistles. evaluation on ade20k
tapex: table pre-training via learning a neural sql executor qian liu†∗, bei chen§, jiaqi guo♢∗, morteza ziyadi♡, zeqi lin§, weizhu chen♡, jian-guang lou§ †beihang university, ♢xi'an jiaotong university, §microsoft research asia, ♡microsoft azure ai qian.liu@buaa.edu.cn, jasperguo2013@stu.xjtu.edu.cn {bei.chen, morteza.ziyadi, zeqi.lin, wzchen, jlou}@microsoft.com abstract recent progress in language model pre-training has achieved great success by leveraging large-scale unstructured textual data. however, it remains a challenge to apply pre-training to structured tabular data due to the absence of large-scale, high-quality tabular data. in this paper, we propose tapex to show that table pre-training can be achieved by learning a neural sql executor over a synthetic corpus, which is obtained by automatically synthesizing executable sql queries and their execution outputs. tapex addresses the data scarcity challenge by guiding the language model to mimic a sql executor on the diverse, large-scale and high-quality synthetic corpus. we evaluate tapex on four benchmark datasets. experimental results demonstrate that tapex outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. this includes improvements of the weakly-supervised wikisql denotation accuracy to 89.5% (+2.3%), the wikitablequestions denotation accuracy to 57.5% (+4.8%), the sqa denotation accuracy to 74.5% (+3.5%), and the tabfact accuracy to 84.2% (+3.2%). to our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. our code can be found at https://github.com/microsoft/table-pretraining. introduction pre-trained language models (lms) such as bert (devlin et al., 2019) and bart (lewis et al., 2020) have achieved great success on a range of free-form natural language (nl) tasks.
by learning from a large amount of unstructured textual data, these models have demonstrated surprising capabilities in understanding nl sentences. inspired by this huge success, researchers have attempted to extend pre-training to structured tabular data (herzig et al., 2020; yin et al., 2020; yu et al., 2021a; wang et al., 2021b; deng et al., 2020; 2021; shi et al., 2021a). however, different from free-form nl sentences, tabular data often contains rich and meaningful structural information, for which existing pre-training approaches designed for unstructured data are not well suited. to apply pre-training techniques to structured tabular data, there exist two key challenges: (i) where to obtain a large-scale, high-quality pre-training corpus, and (ii) how to design an efficient pre-training task for table pre-training. for the first challenge, existing works generally collect parallel data including nl sentences and tables as the pre-training corpus, since downstream tasks often involve joint reasoning over both free-form nl sentences and tables. they either crawled tables and their surrounding nl sentences from the web (herzig et al., 2020; yin et al., 2020; deng et al., 2021), or synthesized nl sentences over available tables (yu et al., 2021a; shi et al., 2021a). however, as pointed out by yin et al. (2020), the raw data mined from the web is extremely noisy and requires complicated heuristics to clean. conversely, synthesis makes it easier to control data quality, but it usually requires experts to write hundreds of templates, which is both costly and often lacks diversity. regarding the pre-training task, existing works often employ different variants of masked language modeling (mlm) (devlin et al., 2019) to guide lms to learn better representations of tabular data. for example, tapas (herzig et al., 2020) used mlm with whole word masking, and tabert (yin et al., 2020) proposed masked column prediction (mcp) to encourage the model to recover the names and data types of masked columns. despite their success, they still largely treat tabular data as a structural format of text, which leads to the need for an extremely large corpus for table pre-training. all of this hinders the progress of table pre-training.

∗ work done during an internship at microsoft research asia.

figure 1: the schematic overview of our method. for the sake of brevity, the table content in the input is simplified with the symbol [table].

in this paper, we present a novel execution-centric table pre-training approach tapex (table pre-training via execution). it addresses the above challenges and achieves efficient table pre-training by approximating the structural reasoning process of formal languages over tables. the structural reasoning process is associated with the executability of tables, i.e., tables are inherently capable of supporting various reasoning operations (e.g., summing over a column in the table). in particular, tapex approximates the structural reasoning process of sql queries by pre-training lms to mimic the behavior of a sql execution engine on tables. as shown in figure 1, by sampling executable sql queries over tables, tapex first synthesizes a large-scale pre-training corpus. then it continues pre-training a language model to output the execution results of these sql queries, which are obtained from a sql execution engine. since the diversity of sql queries can be systematically guaranteed, we can easily synthesize a diverse, large-scale, and high-quality pre-training corpus. our key insight is that if a language model can be pre-trained to faithfully "execute" sql queries and produce correct results, it should have a deep understanding of tables. thus, the execution pre-training task could be more efficient for understanding and reasoning over tables.
to our knowledge, tapex is the first work to explore table pre-training via synthetic executable programs. tapex is conceptually simple and easy to implement. in this paper, we regard pre-training as a sequence generation task and employ an encoder-decoder model. specifically, we employ the pre-trained encoder-decoder language model bart (lewis et al., 2020) as the backbone. furthermore, we examine the effectiveness of tapex via two fundamental downstream tasks: table-based question answering (tableqa) and table-based fact verification (tablefv). to let downstream fine-tuning take full advantage of tapex, we reformulate these tasks in the encoder-decoder sequence generation paradigm. we evaluate tapex on four well-known benchmark datasets. experimental results clearly demonstrate that tapex brings significant and consistent improvements on these datasets. for example, tapex obtains an absolute improvement of 19.5% over bart on the wikitablequestions dataset. furthermore, tapex yields strong results even with a small pre-training corpus, demonstrating its high efficiency. finally, tapex achieves new state-of-the-art results on all experimental benchmarks, outperforming previous approaches by a large margin, including complicated table pre-training approaches with several heuristics in data processing. we will make our code, model, and data publicly available to facilitate future research. fine-tuning on downstream tasks before diving into the details of our proposed table pre-training, we start by describing how to tackle downstream task fine-tuning in the encoder-decoder sequence generation paradigm. in this section, we first present the background of two fundamental table-related downstream tasks: table-based question answering (tableqa) and table-based fact verification (tablefv). then we elaborate on our generative fine-tuning method in detail. figure 2: the illustration of the fine-tuning procedure in our method.
during fine-tuning, we feed the concatenation of an nl sentence and its corresponding table taken from the downstream task to the model, and train it to output the answer (e.g., "marisela moreno montero"). downstream task formulation as mentioned in § 1, downstream tasks always involve joint reasoning over free-form nl sentences and tables. therefore, examples of downstream tasks generally contain an nl sentence x and a (semi-)structured table t as the model input. each nl sentence consists of k tokens: x = x_1, x_2, ..., x_k. each table t consists of m rows {r_i}_{i=1}^{m}, in which each row r_i contains n cell values {s_(i,j)}_{j=1}^{n}. each cell s_(i,j) includes a list of tokens and corresponds to a table header c_j. as for the output, there are variations among different tasks. in this paper, we focus on tableqa and tablefv. tableqa aims to retrieve table content to answer the user's question, and thus its output is either a list of cell values or number(s) calculated over the selected table region by aggregation functions (e.g., sum). it is worth noting that for semi-structured tables, the answer may not be exactly a table cell value, but its normalized form (e.g., from 2k to 2,000), which makes downstream tasks more challenging (oguz et al., 2020). as for tablefv, the output is a binary decision, entailed or refuted, indicating whether the nl sentence follows the fact indicated by the table. generative fine-tuning in this section, we present a generative approach for downstream task fine-tuning. unlike previous works, we model both tableqa and tablefv as sequence generation tasks and leverage generative lms to generate the output autoregressively. taking tableqa as an example, given an nl question, our method generates the answer by decoding it in a word-by-word fashion. architecture our method theoretically applies to any lm capable of sequence generation, such as gpt-3 (brown et al., 2020) and unilm (bao et al., 2020).
in our experiments, we implemented our method based on bart (lewis et al., 2020), a widely used pre-trained encoder-decoder model. bart follows a standard sequence-to-sequence transformer architecture (vaswani et al., 2017), with relu activation functions replaced by gelu. it is pre-trained by corrupting sentences (i.e., randomly sampling length-variable spans and masking each one with a single [mask] token) and then optimizing a reconstruction loss. as for the number of layers, we employ the bart-large configuration in our experiments, i.e., 12 layers are used in both the encoder and the decoder. model input as illustrated in figure 2, the input contains an nl sentence and its corresponding table. encoding the nl sentence is relatively straightforward, while encoding the table is non-trivial since it exhibits underlying structure. in practice, we flatten the table into a sequence so that it can be fed directly into the model. by inserting several special tokens to indicate the table boundaries, a flattened table can be represented as t* = [head], c_1, ..., c_n, [row], 1, r_1, [row], 2, r_2, ..., [row], m, r_m. here [head] and [row] are special tokens indicating the regions of table headers and rows respectively, and the number after [row] indicates the row index. notably, we also separate headers or cells in different columns using a vertical bar (|). finally, we prefix the flattened table t* with the nl sentence x and feed them into the model encoder. model output attending over the encoder output, the decoder is responsible for modeling the outputs of both tableqa and tablefv. for tableqa, the output is the concatenation of the answer(s) separated by commas, and the decoder generates it autoregressively. in this way, our model can readily support (almost) all operators and their compositions in tableqa.
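the flattening scheme described above can be sketched as a small helper. the special-token spellings ([head], [row], the vertical-bar separator) follow the text; the function name and the python representation of the table are our own, for illustration:

```python
def flatten_table(headers, rows):
    """flatten a table into the sequence
    [head] c1 | ... | cn [row] 1 r1 [row] 2 r2 ...
    where cells within a row are separated by a vertical bar."""
    parts = ["[head]", " | ".join(headers)]
    for i, row in enumerate(rows, start=1):
        parts.append(f"[row] {i}")
        parts.append(" | ".join(row))
    return " ".join(parts)

flat = flatten_table(["player", "year"], [["adrian lewis", "2011"]])
# flat == "[head] player | year [row] 1 adrian lewis | 2011"
```

the nl sentence (or, during pre-training, the sql query) is then simply prefixed to this string before tokenization.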
for tablefv, as bart does for sequence classification tasks (lewis et al., 2020), the same input is fed into both the encoder and decoder, and a binary classifier on the hidden state of the last token in the decoder is used for the output. notably, our method can easily be extended to other table-related tasks in a similar way. figure 3: the illustration of the pre-training procedure in our method. during pre-training, we feed the concatenation of a sampled sql query and a sampled table to the model, and train it to output the corresponding execution result (e.g., "pairs"). fine-tuning strategy since our approach can perform various downstream tasks on the same architecture, it can easily perform multi-task learning. therefore, we explore two ways of fine-tuning: vanilla fine-tuning and multi-task fine-tuning. the former fine-tunes the model on each individual downstream task. the latter is inspired by tapas (herzig et al., 2020) and t5 (raffel et al., 2020): it first fine-tunes the model on related or similar intermediate downstream tasks and then continues to fine-tune it on the target downstream task. discussion our approach comes with several advantages: (i) flexibility: due to the powerful expressiveness of encoder-decoder models, our approach can readily adapt to (almost) any kind of output. (ii) convenience: our approach does not require any modification (e.g., table-specific masking) of pre-trained lms, and can be trained in an end-to-end manner. (iii) transferability: since we formulate downstream tasks as sequence generation tasks, different tasks can share the same training protocol, making it easy to perform multi-task fine-tuning. table pre-training via execution as mentioned in § 1, tapex achieves efficient table pre-training by training lms to mimic the behavior of a sql execution engine.
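the execution results that supervise this mimicry only require an off-the-shelf sql engine. a minimal sketch using python's built-in sqlite3 as a stand-in for the executor (the paper's executor choice, the helper name, and the fixed table name t here are illustrative assumptions):

```python
import sqlite3

def execute_sql_on_table(headers, rows, sql):
    """execute a sql query against an in-memory copy of the table and
    return the result cells as a comma-separated string, mirroring the
    answer format the decoder is trained to produce."""
    conn = sqlite3.connect(":memory:")
    cols = ", ".join(f'"{h}"' for h in headers)
    conn.execute(f"CREATE TABLE t ({cols})")
    placeholders = ", ".join("?" for _ in headers)
    conn.executemany(f"INSERT INTO t VALUES ({placeholders})", rows)
    result = conn.execute(sql).fetchall()
    conn.close()
    return ", ".join(str(cell) for row in result for cell in row)

ans = execute_sql_on_table(
    ["player", "year"],
    [("raymond van barneveld", 2009), ("adrian lewis", 2011)],
    "SELECT player FROM t WHERE year = 2011")
# ans == "adrian lewis"
```

each (sql query, flattened table) pair and its string-serialized execution result then forms one pre-training example.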
in this section, we illustrate how to conduct table pre-training from two aspects: the pre-training task and the pre-training corpus. pre-training task following the mlm task in nl pre-training, existing works usually use reconstruction tasks for table pre-training. they generally take corrupted tables and nl sentences as input and try to recover the corrupted parts, in order to strengthen the linking between nl sentences and tables. while these pre-training tasks perform well, they tend to be less efficient, since they usually require an extremely large pre-training corpus. to design efficient tasks for table pre-training, we argue that the key lies in the executability of tables. that is to say, structured tables enable us to perform discrete operations on them via programming languages such as sql, while unstructured text does not. taking this into account, tapex adopts sql execution as the only pre-training task. as illustrated in figure 3, the pre-training of tapex is similar to the procedure of the above generative fine-tuning. given an executable sql query and a table t, tapex first concatenates the sql query and the flattened table t* and feeds them into the model encoder. it then obtains the query's execution result through an off-the-shelf sql executor (e.g., mysql) to serve as the supervision for the model decoder. intuitively, the pre-training procedure is to encourage a language model to be a neural sql executor. we believe that if a language model can be trained to faithfully "execute" sql queries and produce correct results, then it should have a deep understanding of tables. pre-training corpus synthesizing the pre-training corpus is very important for table pre-training. generally, there are two key factors: the table source and the sql query sampling strategy.

[table 1: denotation accuracies on wikisql-weak (dev and test), comparing previous systems and pre-trained language models, some with execution-guided decoding, against bart and tapex.]
execution-guided decoding is proposed to leverage execution results of sql queries during inference (wang et al., 2018).

[table 2: denotation accuracies on wikitablequestions, comparing previous systems — pasupat & liang (2015), neelakantan et al. (2016), zhang et al. (2017), liang et al. (2018), dasigi et al. (2019), agarwal et al. (2019), wang et al. (2019b) — with the pre-trained language models bart and tapex.]

table source following previous work (yin et al., 2020), we choose publicly available semi-structured tables as the table source. however, rather than requiring millions of raw tables as in yin et al. (2020), tapex works well even with only a few thousand tables. therefore, instead of fetching noisy tables from the web and heuristically filtering them, we pick high-quality tables right from existing public datasets. concretely, we randomly select nearly 1,500 tables from the training set of wikitablequestions (pasupat & liang, 2015) as the table source for our pre-training corpus. notice that there is no overlap between the tables used in our pre-training and the tables used in the dev and test sets of all downstream tasks, so there is no data leakage problem. query sampling regarding the sampling of diverse sql queries, there are various choices in the literature: we can either sample sql queries according to a probabilistic context-free grammar (wang et al., 2021a), or instantiate sql templates over different tables (zhong et al., 2020a). in our experiments, we follow the latter, where sql templates are automatically extracted from the squall dataset (shi et al., 2020b). an example sql template is select num1 where text1 = val1, where num1 and text1 correspond to a numeric column and a text column respectively, and val1 refers to one of the cell values of the column text1. given a sql template, at each instantiation we uniformly sample headers and cell values from a sampled table to fill the template, forming a concrete sql query.
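the instantiation step can be sketched as follows. the placeholder spellings num1/text1/val1 follow the example template in the text; the table representation, function name, and sampling details are illustrative assumptions:

```python
import random

def instantiate_template(template, table, seed=0):
    """fill the placeholders of a sql template (e.g. the
    'select num1 where text1 = val1' template mentioned above) with a
    column and a cell value uniformly sampled from a concrete table."""
    rng = random.Random(seed)
    num_col = rng.choice(table["numeric_columns"])
    text_col = rng.choice(table["text_columns"])
    row = rng.choice(table["rows"])
    val = row[text_col]
    return (template.replace("num1", num_col)
                    .replace("text1", text_col)
                    .replace("val1", f"'{val}'"))

table = {"numeric_columns": ["year"], "text_columns": ["player"],
         "rows": [{"player": "adrian lewis", "year": 2011}]}
q = instantiate_template("select num1 where text1 = val1", table)
# q == "select year where player = 'adrian lewis'"
```

queries whose execution returns an empty result would then be discarded, as described below.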
notably, sql queries that execute with empty results are discarded, because empty results do not reflect much information about the executability of tables. this way, we can obtain a large-scale pre-training corpus with high quality. experiments in this section, we evaluate tapex on different downstream tasks to verify its effectiveness. dataset and evaluation we evaluate the performance of our approach on weakly-supervised wikisql (wikisql-weak) (zhong et al., 2017), wikitablequestions (pasupat & liang, 2015), sqa (iyyer et al., 2017), and tabfact (chen et al., 2020). compared to wikisql-weak, which only requires filtering and optionally aggregating table cell values, wikitablequestions requires more complicated reasoning capabilities. sqa is a conversational benchmark, which requires our approach to model the conversational context. dataset details can be found in appendix a. for tableqa datasets, the evaluation metric is denotation accuracy, which checks whether the predicted answer(s) are equal to the ground-truth answer(s). it is worth noting that we evaluate our approach on wikisql-weak with the answer annotations provided by tapas (herzig et al., 2020), since nearly 2% of the answers obtained from the official evaluation script are incorrect. for tabfact, the evaluation metric is accuracy, i.e., the percentage of correct predictions. implementation details we implement our approach based on fairseq (ott et al., 2019). during pre-training, we synthesize up to 5 million pairs of sql queries and their execution results for tapex.

[table 3: denotation accuracies on the sqa test set for pasupat & liang (2015), neelakantan et al. (2017), iyyer et al. (2017), liu et al. (2019), sun et al. (2019), mueller et al. (2019), yu et al. (2021b), herzig et al. (2020), eisenschlos et al. (2020), bart, and tapex. all is the denotation accuracy over all sentences, seq the denotation accuracy over all conversations, and qi the denotation accuracy of the i-th sentence in a conversation.]

[table 4: accuracies on tabfact (dev, test, test_simple, test_complex, test_small) for chen et al. (2020), zhong et al. (2020b), shi et al. (2020a), zhang et al. (2020), yang et al. (2020), eisenschlos et al. (2020), bart, and tapex, including the human performance.]

in the following, unless specified explicitly, all experimental results are by default evaluated under the 5 million setting. our pre-training procedure runs for up to 50,000 steps with a batch size of 256. it takes about 36 hours on 8 tesla v100 gpus to finish the pre-training. the best pre-training checkpoint is selected based on the loss on the validation set. for all downstream datasets, the fine-tuning procedure runs for up to 20,000 steps with a batch size of 128. for both pre-training and fine-tuning, the learning rate is 3×10⁻⁵. main results table 1, table 2, table 3 and table 4 summarize the experimental results of various models on wikisql-weak, wikitablequestions, sqa and tabfact respectively. for both the dev and test sets of all datasets, we report the median performance of our approach over five random runs. wikisql-weak as shown in table 1, tapex outperforms all the baselines by a large margin. on the test set of wikisql-weak, tapex registers a denotation accuracy of 89.5%, which is 3.7% higher than bart and 2.3% higher than the previous best performance. this is significant since the previous best model has already utilized execution-guided decoding. in short, tapex achieves a new state-of-the-art result on the well-known wikisql-weak benchmark. wikitablequestions on the more challenging wikitablequestions, tapex also achieves a new state-of-the-art denotation accuracy of 57.5%, surpassing the previous best system by 4.8% (table 2).
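the denotation-accuracy metric used in the evaluation setup (exact match between predicted and gold answer sets) can be sketched as follows — a simplified version; the official evaluation scripts additionally normalize forms such as numbers and dates:

```python
def denotation_accuracy(predictions, golds):
    """fraction of examples whose predicted answer set equals the
    gold answer set (order-insensitive comparison)."""
    correct = sum(set(p) == set(g) for p, g in zip(predictions, golds))
    return correct / len(golds)

acc = denotation_accuracy([["2,000"], ["won", "lost"]],
                          [["2,000"], ["lost", "won"]])
# acc == 1.0
```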
meanwhile, we find that bart alone only reaches a denotation accuracy of 38.0%, much worse than previous pre-training models. we conjecture that this degradation can be attributed to the relatively small amount of training data in wikitablequestions, which makes the adaptation of bart to tabular structures more challenging. however, tapex delivers a dramatic improvement of 19.5% over bart, indicating that in the low data regime the improvements introduced by tapex are often more significant.

[figure 4: the visualization of attention weights from other tokens to the cell "adrian lewis", on the flattened input "who are the only players listed that played in 2011 ? [head] player | year | round | result | opponent [row] 1 raymond van barneveld | 2009 | quarter-final | won | jelle klaasen [row] 2 raymond van barneveld | 2010 | 2nd round | won | brendan dolan [row] 3 adrian lewis | 2011 | final | won | gary anderson". intuitively, the darker the color, the more closely the word is associated with "adrian lewis".]

sqa table 3 presents the performance of various models on the test set of sqa, where tapex again obtains new state-of-the-art denotation accuracy at both the conversation level (48.4%) and the sentence level (74.5%). this improvement also came as a surprise to us, since sqa is a conversational dataset while our pre-training task is context-free. meanwhile, the substantial improvements of tapex over bart on sqa further verify the observation that tapex alleviates the low-resource issue. tabfact beyond tableqa, tapex also excels at tablefv. as shown in table 4, tapex achieves new state-of-the-art results on all subsets of tabfact. for example, it surpasses the previous best system by 4.0% on test_complex.
the results show that tapex endows bart with generic table understanding capabilities that can be adapted to different downstream tasks, regardless of whether these tasks are highly similar to the tapex pre-training task. overall results experimental results on four datasets show that tapex can broadly improve the model's ability to understand tables, especially in the low data regime. multi-task results as discussed in § 2.2, our approach can easily perform multi-task learning, thereby conferring benefits to downstream tasks. to verify this, we conducted multi-task fine-tuning experiments and obtained the following findings: (1) when initialized by bart, multi-task fine-tuning boosts the performance of the target task significantly; (2) when initialized by tapex, the gain of multi-task fine-tuning tends to be marginal, suggesting that most of the "skills" (loosely speaking) gained by multi-task learning can be acquired by our table pre-training. detailed results can be found in appendix b. analysis in this section, we carefully analyze our approach in terms of various aspects. besides, we perform an exploratory analysis to provide more insights for future work, which can be found in appendix c. sql execution by pre-training in order to understand how well tapex performs sql execution after pre-training, we analyze its performance on nearly 20,000 held-out sql queries over unseen tables. overall, the sql execution accuracy is relatively high: tapex correctly "executes" 89.6% of the sql queries (the full analysis of sql execution can be found in appendix d). in particular, tapex performs better on filter, aggregate and superlative operators, indicating that it is highly accurate in table cell selection and aggregation. regarding arithmetic and comparative operators, tapex also does a good job, demonstrating its numerical reasoning skills over tables. to summarize, tapex has learned to be a neural sql executor with good selection, aggregation and numerical capabilities.

table understanding by pre-training to provide insight into whether tapex helps downstream tasks understand tables better, we visualize and analyze the self-attention of tapex (without fine-tuning) on sampled wikitablequestions examples. as shown in figure 4, tapex seems to focus more on the row and the header to which a cell corresponds. taking the example from figure 4, the attention weights imply that "adrian lewis" is closely associated with the first column "player" and the entire third row, which are the positions of "adrian lewis" in the structured table. table reasoning by pre-training to understand whether tapex can improve table reasoning, we compared the performance of tapex to bart on 500 randomly selected questions and manually analyzed them in table 5. one can find that tapex significantly boosts the performance on all operators, implying that it does enhance bart's capabilities for joint reasoning over text and tables.

[table 5: the most common operators in the 500 randomly selected questions from the wikitablequestions dev set (select, filter, aggregate, superlative, arithmetic, comparative), each listed with an example question whose colored spans carry the operator semantics — among them "what is the years won for each team?", "how long did taiki tsuchiya last?", "what is the amount of matches drawn?", "what was the last baekje temple?", "what is the difference between white voters and black voters in 1948?", "besides tiger woods, what other player group won between 2007 and 2009?", and "what was the score for each winning game?" — together with the performance of bart and tapex on each operator.]

[figure 5: downstream task performance on wikitablequestions, sqa, tabfact, and wikisql-weak with different scales of pre-training corpus (0.1 to 5.0 million); scaling up the pre-training corpus of tapex generally brings positive effects across datasets.]

[figure 6: the amount of pre-training corpus (10⁻² to 10² million) vs. denotation accuracy on the wikitablequestions dev set, for grappa, tabert, tapas, bart, and tapex; tapex surpasses existing table pre-training approaches with a much smaller corpus, showing its high efficiency.]

the scale of pre-training corpus figure 5 illustrates downstream performance with different scales of the pre-training corpus. it can be seen that even though our pre-training corpus is synthetic, scaling it up generally brings positive effects. the observation is analogous to the one in language modeling (brown et al., 2020): the larger the pre-training corpus, the better the downstream performance. comparing across different datasets, we find that for simple tasks like wikisql-weak, the gains from scaling up the pre-training corpus become marginal, while they remain non-trivial for complex tasks like tabfact. meanwhile, both downstream datasets in the low data regime show a positive trend with an increasing pre-training corpus. in conclusion, scale matters when the downstream task is difficult or the downstream dataset is relatively small. the efficiency of pre-training as mentioned in § 1, the pre-training efficiency of existing table pre-training approaches is relatively low, as they usually require an extremely large corpus. therefore, taking wikitablequestions as an example, we compare the pre-training efficiency of tapex with tapas (herzig et al., 2020), tabert (yin et al., 2020) and grappa (yu et al., 2021a). it is worth noting that part of the pre-training corpus for grappa comes from human-annotated, high-quality parallel data. as shown in figure 6, tapex yields very promising performance with a much smaller pre-training corpus, indicating that our proposed sql execution pre-training task is more efficient than other table pre-training tasks.
limitations the first limitation of our approach is that it cannot ideally handle large tables. as mentioned above, we employ table flattening to represent a table. this works well when the table is relatively small, but becomes infeasible when the table is too large to fit in memory. in practice, we can compress tables by removing some unrelated rows or columns, which would decrease downstream performance. the second limitation is that the task of text-to-sql cannot benefit from our proposed table pre-training. we have tried to apply tapex to a text-to-sql task, where the input remains the same and the output becomes sql. however, tapex does not show a significant advantage over bart. we attribute this to two factors: first, our synthetic pre-training corpus does not contribute to grounding, one of the most important factors for semantic parsing (liu et al., 2021); second, the table reasoning capabilities (e.g., aggregation) learned by tapex may not be necessary for sql generation. for example, a model could still understand the nl phrase "total" as the aggregation function "sum", even though it is unaware of the mathematical meaning of "sum". 6 related work table pre-training the work most related to ours is table pre-training, whose key factors include the pre-training corpus and the pre-training task. as for the pre-training corpus, most previous works collect nl-table data to perform table pre-training. they either mined a large corpus of tables and their nl sentence contexts (yin et al., 2020; herzig et al., 2020), leveraged human-annotated parallel nl-table datasets for pre-training (deng et al., 2021; yu et al., 2021a), or synthesized an nl-table corpus using human-written templates (yu et al., 2021a; eisenschlos et al., 2020).
our work is different from theirs because we are the first to use purely synthetic sql-table data for table pre-training, which allows us to automatically synthesize a diverse, large-scale, and high-quality pre-training corpus. as for the pre-training task, existing works proposed several pre-training tasks, such as masked column prediction (yin et al., 2020), multi-choice cloze at the cell level (wang et al., 2021b) and structure grounding (deng et al., 2021). different from all of them, we present a novel sql execution task to perform table pre-training. joint understanding of tables and text as our experiments are mainly on tableqa and tablefv, our work is also closely related to previous methods for these tasks. for tableqa, previous works mostly formulate it as a weakly supervised semantic parsing task (liang et al., 2018; wang et al., 2019a; guo et al., 2021), which typically employs reinforcement learning to optimize semantic parsers over tables. although these parsers produce logical forms (e.g., sql), they are difficult to train due to the large search space and the presence of spurious programs (goldman et al., 2018). in addition, another promising line of work has emerged in recent advances (mueller et al., 2019; herzig et al., 2020), which aims at answering nl sentences without logical forms. this line of work predicts answer(s) by selecting cell values and optionally applying an aggregation operator to them. such models can be easily trained, but their modeling ability is limited. for example, it is hard to support compound aggregation operators such as max(year) - min(year). what makes our approach different from these works is that we employ generative models to handle tableqa, enjoying end-to-end training and flexibility simultaneously. for tablefv, previous works usually employ specialized architectures with limited scalability (shi et al., 2020a; yang et al., 2020; shi et al., 2021b). for example, zhong et al.
(2020b) leveraged a graph construction mechanism, a semantic parser, and a semantic composition model to capture the connections between the nl sentence and the table. while this approach works well for tablefv, it is not easily applied to other table-related tasks. compared with them, our approach works well for a variety of downstream tasks with the same architecture.

7 conclusion

in this paper, we present tapex, an execution-centric table pre-training approach whose corpus is automatically synthesized by sampling sql queries and their execution results. tapex addresses the data scarcity challenge in table pre-training by learning a neural sql executor on a diverse, large-scale, and high-quality synthetic corpus. experimental results on four downstream datasets demonstrate that tapex outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. our work opens the way to exploiting structured data by pre-training on synthetic executable programs, which is conceptually simple and has great potential to be extended to other research areas (e.g., knowledge bases).

acknowledgement

we would like to thank all the anonymous reviewers for their constructive feedback. the first author, qian, is supported by the academic excellence foundation of beihang university for phd students.

ethics statement

in this work, we present a novel pre-training approach for tabular data, which approximates the structural reasoning process of formal languages over tables to achieve efficient table pre-training. different from previous works that employ web crawling to construct a large-scale nl-table corpus for pre-training, our pre-training corpus is synthesized by sampling sql queries and their execution results on public tables. compared with previous works, our pre-training corpus is more controllable and of higher quality.
for example, compared with tabert, which crawls 26 million noisy tables from the web, our approach adopts 1,500 high-quality tables from public datasets, which greatly alleviates the potential privacy and bias issues raised by web crawling. we evaluate our approach on two fundamental table-related tasks: table-based question answering and table-based fact verification. the former enables non-expert users to query databases without learning programming languages, while the latter helps users verify whether a textual hypothesis is valid based on given tabular evidence. experimental results on four well-known benchmark datasets show that our approach achieves new state-of-the-art results on all of them, especially in the low data regime.

references

rishabh agarwal, chen liang, dale schuurmans, and mohammad norouzi. learning to generalize from sparse and underspecified rewards. in icml, 2019.

hangbo bao, li dong, furu wei, wenhui wang, nan yang, xiaodong liu, yu wang, songhao piao, jianfeng gao, m. zhou, and h. hon. unilmv2: pseudo-masked language models for unified language model pre-training. in icml, 2020.

tom brown, benjamin mann, nick ryder, melanie subbiah, jared d kaplan, prafulla dhariwal, arvind neelakantan, pranav shyam, girish sastry, amanda askell, sandhini agarwal, ariel herbert-voss, gretchen krueger, tom henighan, rewon child, aditya ramesh, daniel ziegler, jeffrey wu, clemens winter, chris hesse, mark chen, eric sigler, mateusz litwin, scott gray, benjamin chess, jack clark, christopher berner, sam mccandlish, alec radford, ilya sutskever, and dario amodei. language models are few-shot learners. in h. larochelle, m. ranzato, r. hadsell, m. f. balcan, and h. lin (eds.), advances in neural information processing systems, volume 33, pp. 1877–1901. curran associates, inc., 2020. url https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-paper.pdf.
wenhu chen, hongmin wang, jianshu chen, yunkai zhang, hong wang, shiyang li, xiyou zhou, and william yang wang. tabfact: a large-scale dataset for table-based fact verification. in international conference on learning representations, 2020. url https://openreview.net/forum?id=rkejrhnydh.

pradeep dasigi, matt gardner, shikhar murty, luke zettlemoyer, and eduard hovy. iterative search for weakly supervised semantic parsing. in proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 2669–2680, minneapolis, minnesota, june 2019. association for computational linguistics. doi: 10.18653/v1/n19-1273. url https://aclanthology.org/n19-1273.

xiang deng, huan sun, alyssa lees, you wu, and cong yu. turl: table understanding through representation learning. proc. vldb endow., 14(3):307–319, 2020. doi: 10.5555/3430915.3442430. url http://www.vldb.org/pvldb/vol14/p307-deng.pdf.

xiang deng, ahmed hassan awadallah, christopher meek, oleksandr polozov, huan sun, and matthew richardson. structure-grounded pretraining for text-to-sql. in proceedings of the 2021 conference of the north american chapter of the association for computational linguistics: human language technologies, pp. 1337–1350, online, june 2021. association for computational linguistics. doi: 10.18653/v1/2021.naacl-main.105. url https://www.aclweb.org/anthology/2021.naacl-main.105.

jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. bert: pre-training of deep bidirectional transformers for language understanding. in proceedings of the 2019 conference of the north american chapter of the association for computational linguistics: human language technologies, volume 1 (long and short papers), pp. 4171–4186, minneapolis, minnesota, june 2019. association for computational linguistics. doi: 10.18653/v1/n19-1423. url https://www.aclweb.org/anthology/n19-1423.
julian eisenschlos, syrine krichene, and thomas müller. understanding tables with intermediate pre-training. in findings of the association for computational linguistics: emnlp 2020, pp. 281–296, online, november 2020. association for computational linguistics. doi: 10.18653/v1/2020.findings-emnlp.27. url https://www.aclweb.org/anthology/2020.findings-emnlp.27.

omer goldman, veronica latcinnik, ehud nave, amir globerson, and jonathan berant. weakly supervised semantic parsing with abstract examples. in proceedings of the 56th annual meeting of the association for computational linguistics (volume 1: long papers), pp. 1809–1819, melbourne, australia, july 2018. association for computational linguistics. doi: 10.18653/v1/p18-1168. url https://www.aclweb.org/anthology/p18-1168.

jiaqi guo, jian-guang lou, ting liu, and dongmei zhang. weakly supervised semantic parsing by learning from mistakes. in findings of the association for computational linguistics: emnlp 2021, pp. 2603–2617, punta cana, dominican republic, november 2021. association for computational linguistics. url https://aclanthology.org/2021.findings-emnlp.222.

tonglei guo and huilin gao. using database rule for weak supervised text-to-sql generation. arxiv.

jonathan herzig, pawel krzysztof nowak, thomas müller, francesco piccinno, and julian eisenschlos. tapas: weakly supervised table parsing via pre-training. in proceedings of the 58th annual meeting of the association for computational linguistics, pp. 4320–4333, online, july 2020. association for computational linguistics. doi: 10.18653/v1/2020.acl-main.398. url https://www.aclweb.org/anthology/2020.acl-main.398.

mohit iyyer, wen-tau yih, and ming-wei chang. search-based neural structured learning for sequential question answering. in proceedings of the 55th annual meeting of the association for computational linguistics (volume 1: long papers), pp. 1821–1831, vancouver, canada, july 2017. association for computational linguistics.
doi: 10.18653/v1/p17-1167. url https://aclanthology.org/p17-1167.

mike lewis, yinhan liu, naman goyal, marjan ghazvininejad, abdelrahman mohamed, omer levy, veselin stoyanov, and luke zettlemoyer. bart: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. in proceedings of the 58th annual meeting of the association for computational linguistics, pp. 7871–7880, online, july 2020. association for computational linguistics. doi: 10.18653/v1/2020.acl-main.703. url https://www.aclweb.org/anthology/2020.acl-main.703.

chen liang, mohammad norouzi, jonathan berant, quoc v le, and ni lao. memory augmented policy optimization for program synthesis and semantic parsing. in proceedings of nips, 2018.

qian liu, bei chen, haoyan liu, jian-guang lou, lei fang, bin zhou, and dongmei zhang. a split-and-recombine approach for follow-up query analysis. in proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (emnlp-ijcnlp), pp. 5316–5326, hong kong, china, november 2019. association for computational linguistics. doi: 10.18653/v1/d19-1535. url https://www.aclweb.org/anthology/d19-1535.

qian liu, bei chen, jiaqi guo, jian-guang lou, bin zhou, and dongmei zhang. how far are we from effective context modeling? an exploratory study on semantic parsing in context. in ijcai, 2020.

qian liu, dejian yang, jiahui zhang, jiaqi guo, bin zhou, and jian-guang lou. awakening latent grounding from pretrained language models for semantic parsing. in findings of the association for computational linguistics: acl-ijcnlp 2021, pp. 1174–1189, online, august 2021. association for computational linguistics. doi: 10.18653/v1/2021.findings-acl.100. url https://aclanthology.org/2021.findings-acl.100.

sewon min, danqi chen, hannaneh hajishirzi, and luke zettlemoyer. a discrete hard em approach for weakly supervised question answering.
in proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (emnlp-ijcnlp), pp. 2851–2864, hong kong, china, november 2019. association for computational linguistics. doi: 10.18653/v1/d19-1284. url https://www.aclweb.org/anthology/d19-1284.

thomas mueller, francesco piccinno, peter shaw, massimo nicosia, and yasemin altun. answering conversational questions on structured data without logical forms. in proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (emnlp-ijcnlp), pp. 5902–5910, hong kong, china, november 2019. association for computational linguistics. doi: 10.18653/v1/d19-1603. url https://www.aclweb.org/anthology/d19-1603.

arvind neelakantan, quoc v. le, and ilya sutskever. neural programmer: inducing latent programs with gradient descent. in yoshua bengio and yann lecun (eds.), 4th international conference on learning representations, iclr 2016, san juan, puerto rico, may 2-4, 2016, conference track proceedings, 2016. url http://arxiv.org/abs/1511.04834.

arvind neelakantan, quoc v. le, martín abadi, andrew mccallum, and dario amodei. learning a natural language interface with neural programmer. in 5th international conference on learning representations, iclr 2017, toulon, france, april 24-26, 2017, conference track proceedings. openreview.net, 2017. url https://openreview.net/forum?id=ry2yorcge.

barlas oguz, xilun chen, vladimir karpukhin, stan peshterliev, dmytro okhonko, michael schlichtkrull, sonal gupta, yashar mehdad, and scott yih. unik-qa: unified representations of structured and unstructured knowledge for open-domain question answering. arxiv preprint arxiv:2012.14610, 2020.

myle ott, sergey edunov, alexei baevski, angela fan, sam gross, nathan ng, david grangier, and michael auli. fairseq: a fast, extensible toolkit for sequence modeling. in proceedings of the 2019 conference of the north american chapter of the association for computational linguistics (demonstrations), pp. 48–53, minneapolis, minnesota, june 2019. association for computational linguistics. doi: 10.18653/v1/n19-4009. url https://aclanthology.org/n19-4009.

panupong pasupat and percy liang. compositional semantic parsing on semi-structured tables. in proceedings of the 53rd annual meeting of the association for computational linguistics and the 7th international joint conference on natural language processing (volume 1: long papers), pp. 1470–1480, beijing, china, july 2015. association for computational linguistics. doi: 10.3115/v1/p15-1142. url https://aclanthology.org/p15-1142.

colin raffel, noam shazeer, adam roberts, katherine lee, sharan narang, michael matena, yanqi zhou, wei li, and peter j liu. exploring the limits of transfer learning with a unified text-to-text transformer. journal of machine learning research, 21:1–67, 2020.

peng shi, patrick ng, zhiguo wang, henghui zhu, alexander hanbo li, jun wang, cícero nogueira dos santos, and bing xiang. learning contextual representations for semantic parsing with generation-augmented pre-training. in thirty-fifth aaai conference on artificial intelligence, aaai 2021, thirty-third conference on innovative applications of artificial intelligence, iaai 2021, the eleventh symposium on educational advances in artificial intelligence, eaai 2021, virtual event, february 2-9, 2021, pp. 13806–13814. aaai press, 2021a. url https://ojs.aaai.org/index.php/aaai/article/view/17627.

qi shi, yu zhang, qingyu yin, and ting liu. learn to combine linguistic and symbolic information for table-based fact verification. in proceedings of the 28th international conference on computational linguistics, pp. 5335–5346, barcelona, spain (online), december 2020a.
international committee on computational linguistics. url https://www.aclweb.org/anthology/2020.coling-main.466.

qi shi, yu zhang, qingyu yin, and ting liu. logic-level evidence retrieval and graph-based verification network for table-based fact verification. in proceedings of the 2021 conference on empirical methods in natural language processing, pp. 175–184, online and punta cana, dominican republic, november 2021b. association for computational linguistics. url https://aclanthology.org/2021.emnlp-main.16.

tianze shi, chen zhao, jordan boyd-graber, hal daumé iii, and lillian lee. on the potential of lexico-logical alignments for semantic parsing to sql queries. in findings of the association for computational linguistics: emnlp 2020, pp. 1849–1864, online, november 2020b. association for computational linguistics. doi: 10.18653/v1/2020.findings-emnlp.167. url https://www.aclweb.org/anthology/2020.findings-emnlp.167.

yibo sun, duyu tang, nan duan, jingjing xu, x. feng, and bing qin. knowledge-aware conversational semantic parsing over web tables. in nlpcc, 2019.

ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n. gomez, lukasz kaiser, and illia polosukhin. attention is all you need. in isabelle guyon, ulrike von luxburg, samy bengio, hanna m. wallach, rob fergus, s. v. n. vishwanathan, and roman garnett (eds.), advances in neural information processing systems 30: annual conference on neural information processing systems 2017, december 4-9, 2017, long beach, ca, usa, pp. 5998–6008, 2017. url https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-abstract.html.

bailin wang, ivan titov, and mirella lapata. learning semantic parsers from denotations with latent structured alignments and abstract programs. in proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (emnlp-ijcnlp), pp.
3774–3785, hong kong, china, november 2019a. association for computational linguistics. doi: 10.18653/v1/d19-1391. url https://aclanthology.org/d19-1391.
continuously discovering novel strategies via reward-switching policy optimization

zihan zhou∗†1, wei fu∗2, bingliang zhang2, yi wu2,3
1 cs department, university of toronto, 2 iiis, tsinghua university, 3 shanghai qi zhi institute
footoredo@gmail.com, fuwth17@gmail.com, jxwuyi@gmail.com

abstract

we present reward-switching policy optimization (rspo), a paradigm to discover diverse strategies in complex rl environments by iteratively finding novel policies that are both locally optimal and sufficiently different from existing ones. to encourage the learning policy to consistently converge towards a previously undiscovered local optimum, rspo switches between extrinsic and intrinsic rewards via a trajectory-based novelty measurement during the optimization process. when a sampled trajectory is sufficiently distinct, rspo performs standard policy optimization with extrinsic rewards. for trajectories with high likelihood under existing policies, rspo utilizes an intrinsic diversity reward to promote exploration. experiments show that rspo is able to discover a wide spectrum of strategies in a variety of domains, ranging from single-agent navigation tasks and mujoco control to multi-agent stag-hunt games and the starcraft ii multi-agent challenge.

introduction

the foundation of deep learning successes is the use of stochastic gradient descent methods to obtain a local minimum of a highly non-convex learning objective. it has been a popular consensus, with theoretical justifications, that most local optima are very close to the global optimum (ma, 2020). consequently, algorithms for most classical deep learning applications only focus on the final performance of the learned local solution rather than on which local minimum is discovered. however, this assumption can be problematic in reinforcement learning (rl), where different local optima in the policy space can correspond to substantially different strategies.
therefore, discovering a diverse set of policies can be critical for many rl applications, such as producing natural dialogues in chatbots (li et al., 2016), improving the chance of finding a targeted molecule (pereira et al., 2021), generating novel designs (wang et al., 2019) or training a specialist robot for fast adaptation (cully et al., 2015). moreover, in the multi-agent setting, a collection of diverse local optima could further result in interesting emergent behaviors (liu et al., 2019; zheng et al., 2020; baker et al., 2020) and the discovery of multiple nash equilibria (tang et al., 2021), which in turn helps build strong policies that can adapt to unseen participating agents in a zero-shot manner in competitive (jaderberg et al., 2019; vinyals et al., 2019) and cooperative games (lupu et al., 2021). in order to obtain diverse strategies in rl, most existing works train a large population of policies in parallel (pugh et al., 2016; cully et al., 2015; parker-holder et al., 2020b). these methods often adopt a soft learning objective by introducing additional diversity intrinsic rewards or auxiliary losses. however, when the underlying reward landscape in the rl problem is particularly non-uniform, policies obtained by population-based methods often collapse to visually identical strategies (omidshafiei et al., 2020; tang et al., 2021). therefore, population-based methods may require a substantially large population size in order to fully explore the policy space, which can be computationally infeasible. moreover, the use of a soft objective also requires non-trivial and subtle hyper-parameter tuning to balance diversity and the actual performance in the environment, which largely prevents these existing methods from discovering both diverse and high-quality policies in practice (parker-holder et al., 2020b; lupu et al., 2021; masood & doshi-velez, 2019).
∗ equal contribution. † work done as a resident researcher at shanghai qi zhi institute.

another type of method directly explores diverse strategies in the reward space by performing multi-objective optimization over human-designed behavior characterizations (pugh et al., 2016; cully et al., 2015) or random search over linear combinations of the predefined objectives (tang et al., 2021; zheng et al., 2020; ma et al., 2020). although these multi-objective methods are particularly successful, a set of well-defined and informative behavior objectives may not be accessible in most scenarios. we propose a simple, generic and effective iterative learning algorithm, reward-switching policy optimization (rspo), for continuously discovering novel strategies under a single reward function without the need for any environment-specific inductive bias. rspo discovers novel strategies by solving a filtering-based objective, which restricts the rl policy to converge to a solution that is sufficiently different from a set of locally optimal reference policies. after a novel strategy is obtained, it becomes another reference policy for future rl optimization. therefore, by repeatedly running rspo, we can quickly derive diverse strategies in just a few iterations. in order to strictly enforce the novelty constraints in policy optimization, we adopt rejection sampling instead of optimizing a soft objective that converts the constraints into lagrangian penalties or intrinsic rewards, as many existing methods do. specifically, rspo only optimizes extrinsic rewards over trajectories that have sufficiently low likelihood w.r.t. the reference policies. meanwhile, to further utilize those rejected trajectories that are not distinct enough, rspo ignores the environment rewards on these trajectories and only optimizes diversity rewards to promote effective exploration.
intuitively, this process adaptively switches the training objective between extrinsic rewards and diversity rewards according to the novelty of each sampled trajectory, so we call it the reward-switching technique. we empirically validate rspo on a collection of highly multi-modal rl problems, ranging from multi-target navigation (mordatch & abbeel, 2018) and mujoco control (todorov et al., 2012) in the single-agent domain, to stag-hunt games (tang et al., 2021) and the starcraft ii multi-agent challenge (smac) (rashid et al., 2019) in the multi-agent domain. experiments demonstrate that rspo can reliably and efficiently discover surprisingly diverse strategies in all these challenging scenarios and substantially outperform existing baselines. the contributions can be summarized as follows:
1. we propose a novel algorithm, reward-switching policy optimization, for continuously discovering diverse policies. the iterative learning scheme and the reward-switching technique both significantly benefit the efficiency of discovering strategically different policies.
2. we propose a cross-entropy-based diversity metric for policy optimization and two additional diversity-driven intrinsic rewards for promoting diversity-driven exploration.
3. our algorithm is both general and effective across a variety of single-agent and multi-agent domains. specifically, our algorithm is the first to learn the optimal policy in the stag-hunt games without any domain knowledge, and successfully discovers 6 visually distinct winning strategies via merely 6 iterations on a hard map in smac.

related work

searching for diverse solutions in a highly multi-modal optimization problem has a long history, and various black-box methods have been proposed (miller & shaw, 1996; deb & saha, 2010; kroese et al., 2006). in reinforcement learning, one of the most popular paradigms is population-based training with multi-objective optimization.
representative works include the family of quality diversity (qd) algorithms (pugh et al., 2016), such as map-elites (cully et al., 2015), which are based on genetic methods and assume a set of human-defined behavior characterizations, and policy-gradient methods (ma et al., 2020; tang et al., 2021), which typically assume that a distribution of reward functions is accessible. there are also some recent works that combine qd algorithms and policy gradient algorithms (cideron et al., 2020; nilsson & cully, 2021). the dvd algorithm (parker-holder et al., 2020b) improves qd by optimizing population diversity (pd), a kl-divergence-based diversity metric, without the need for hand-designed behavior characterizations. similarly, lupu et al. (2021) propose to maximize trajectory diversity, i.e., an approximated jensen-shannon divergence with an action-discounting kernel, to train a diversified population. there are also works aiming to learn policies iteratively. psro (lanctot et al., 2017) focuses on learning nash equilibrium strategies in zero-sum games by maintaining a strategy oracle and repeatedly adding best responses to it. various improvements have been made upon psro by using different metrics to promote diverse oracle strategies (liu et al., 2021; nieves et al., 2021). hong et al. (2018) utilize the kl-divergence between the current policy and a past policy version as an exploration bonus, while we maximize the diversity w.r.t. a fixed set of reference policies, which is more stable and does not incur a cyclic training process. diversity-inducing policy gradient (dipg) (masood & doshi-velez, 2019) utilizes the maximum mean discrepancy (mmd) between policies as a soft learning objective to iteratively find novel policies. by contrast, our method utilizes a filtering-based objective via reward switching to strictly enforce all the diversity constraints. sun et al.
(2020) adopt a conceptually similar objective by early terminating episodes that do not incur sufficient novelty. however, sun et al. (2020) do not leverage any exploration technique for the rejected samples and may easily suffer from low sample efficiency in the challenging rl tasks we consider in this paper. there is another concurrent work with an orthogonal focus, which directly optimizes diversity with reward constraints (zahavy et al., 2021). we remark that enforcing a reward constraint can be problematic in multi-agent scenarios, where different nash equilibria can have substantially different pay-offs. in addition, the ridge rider algorithm (parker-holder et al., 2020a) proposes to follow the eigenvectors of the hessian matrix to discover diverse local optima with theoretical guarantees, but hessian estimates can be extremely inaccurate in complex rl problems. another stream of work uses unsupervised rl to discover diverse skills without the use of environment rewards, such as diayn (eysenbach et al., 2019) and ddlus (hartikainen et al., 2020). however, ignoring the reward signal can substantially limit the capability of discovering strategic behaviors. smerl (kumar et al., 2020) augments diayn with extrinsic rewards to induce diverse solutions for robust generalization. these methods primarily focus on learning low-level locomotion, while we tackle a much harder problem of discovering strategically and visually different policies. finally, our algorithm is also conceptually related to exploration methods (zheng et al., 2018; burda et al., 2019; simmons-edler et al., 2019), since it can even bypass seemingly inescapable local optima in challenging rl environments. empirical comparisons can be found in section 4. however, we emphasize that our paper tackles a much more challenging problem than standard rl exploration: we aim to discover as many distinct local optima as possible.
that is, even if the globally optimal solution is discovered, we still want to continuously seek sufficiently distinct local-optimum strategies. we remark that such an objective is particularly important for multi-agent games, where finding all the nash equilibria can be necessary for analyzing rational multi-agent behaviors (tang et al., 2021).

method

preliminary

we consider environments that can be modeled as a markov decision process (mdp) (puterman, 1994) m = (s, a, r, p, γ), where s and a are the state and action spaces respectively, r(s, a) is the reward function, p(s′|s, a) is the transition dynamics, and γ is the discount factor. we consider a stochastic policy π_θ parameterized by θ. reinforcement learning optimizes the policy w.r.t. the expected return j(π) = E_{τ∼π}[Σ_t γ^t r_t] over the trajectories sampled from π, where a trajectory τ denotes a sequence of state-action-reward triplets, i.e., τ = {(s_t, a_t, r_t)}. note that this formulation can be naturally applied to multi-agent scenarios with homogeneous agents sharing the state and action spaces, where learning a shared policy for all the agents is sufficient. rather than learning a single solution for j(θ), we aim to discover a diverse set of m policies, i.e., {π_{θ_k} | 1 ≤ k ≤ m}, such that all of these policies are locally optimal under j(θ) and mutually distinct w.r.t. some distance measure d(π_{θ_i}, π_{θ_j}), i.e.,

max_{θ_k} j(θ_k) ∀1 ≤ k ≤ m, subject to d(π_{θ_i}, π_{θ_j}) ≥ δ, ∀1 ≤ i < j ≤ m.   (1)

here d(·, ·) measures how different two policies are and δ is the novelty threshold. for conciseness, in the following content, we omit θ and use π_k to denote the policy with parameter θ_k.

iterative constrained policy optimization

directly solving eq. (1) suggests a population-based training paradigm, which requires a non-trivial optimization technique for the pairwise constraints and typically needs a large population size m.
herein, we adopt an iterative process to discover novel policies: in the k-th iteration, we optimize a single policy π_k under the constraint that π_k is sufficiently distinct from the previously discovered policies π_1, . . . , π_{k−1}. here, the term “iteration” denotes the process of learning a new policy. formally, we solve the following constrained optimization problem for iteration 1 ≤ k ≤ m:

θ_k = arg max_θ j(θ), subject to d(π_θ, π_j) ≥ δ, ∀1 ≤ j < k.   (2)

eq. (2) reduces the population-based objective to a standard constrained optimization problem for a single policy, which is much easier to solve. such an iterative procedure does not require the large population size m that is typically necessary in population-based methods, and, in practice, only a few iterations can result in a sufficiently diverse collection of policies. we remark that, in theory, directly solving the constrained problem in eq. (2) may lead to a solution that is not a local optimum w.r.t. the unconstrained objective j(θ), because a solution of eq. (2) can be located on the boundary of the constraint space (i.e., d(π_θ, π_j) = δ), which is undesirable according to our original goal. however, this issue can often be alleviated by properly setting the novelty threshold δ. the natural choice for measuring the policy difference is kl divergence, as done in the trust-region constraint (schulman et al., 2015; 2017). however, in our setting, where the difference between policies should be maximized, using kl as the diversity measure would inherently encourage learning a policy with small entropy, which is typically undesirable in rl problems (see app. f for a detailed derivation). therefore, we adopt the accumulative cross-entropy as our diversity measure, i.e.,

d(π_i, π_j) := h(π_i, π_j) = E_{τ∼π_i}[−Σ_t log π_j(a_t | s_t)].   (3)

trajectory filtering for enforcing diversity constraints

a popular approach to solve the constrained optimization problem in eq.
(2) is to use lagrangian multipliers to convert the constraints into penalties in the learning objective. formally, let β_1, . . . , β_{k−1} be a set of hyperparameters; the soft objective for eq. (2) is defined by

j_soft(θ) := j(π_θ) + Σ_{j<k} β_j d(π_θ, π_j).   (4)

such a soft objective substantially simplifies optimization and is widely adopted in rl applications. however, in our setting, since cross-entropy is a particularly dense function, including the diversity bonus as part of the objective may largely change the reward landscape of the original rl problem, which could make the final solution diverge from a locally optimal solution w.r.t. j(θ). therefore, it is often necessary to anneal the lagrangian multipliers β_j, which is particularly challenging in our setting with a large number of reference policies. moreover, since d(π_θ, π_j) is estimated over trajectory samples, it introduces substantially high variance into the learning objective, which becomes even more severe as more policies are discovered. consequently, we propose a trajectory filtering objective to alleviate the issues of the soft objective. let nll(τ; π) denote the negative log-likelihood of a trajectory τ w.r.t. a policy π, i.e., nll(τ; π) = −Σ_{(s_t,a_t)∈τ} log π(a_t|s_t). we apply rejection sampling over the sampled trajectories τ ∼ π_θ such that we train only on those trajectories satisfying all the constraints, i.e., nll(τ; π_j) ≥ δ for each reference policy π_j. formally, for each sampled trajectory τ, we define a filtering function φ(τ), which indicates whether we want to reject the sample τ, and use I[·] to denote the indicator function. the trajectory filtering objective j_filter(θ) can then be expressed as

j_filter(θ) = E_{τ∼π_θ}[φ(τ) Σ_t γ^t r_t], where φ(τ) := Π_{j=1}^{k−1} I[nll(τ; π_j) ≥ δ].   (5)

we call the objective in eq. (5) a filtering objective. we show in app. g that solving eq. (5) is equivalent to solving eq. (2) with an even stronger diversity constraint.
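a minimal sketch of the rejection step described above, assuming a trajectory is a list of (state, action) pairs and a policy maps a state to a dict of action probabilities; these interfaces and the helper names are hypothetical, not rspo's actual implementation.

```python
import math

def trajectory_nll(traj, policy):
    # nll(tau; pi) = -sum_t log pi(a_t | s_t) over the trajectory's (state, action) pairs.
    return -sum(math.log(policy(s)[a]) for s, a in traj)

def accept(traj, reference_policies, delta):
    # phi(tau) in eq. (5): keep tau only if it is sufficiently unlikely (nll >= delta)
    # under every previously discovered reference policy.
    return all(trajectory_nll(traj, pi_j) >= delta for pi_j in reference_policies)

def filtered_batch(trajs, reference_policies, delta):
    # rejection sampling: the extrinsic-reward objective trains only on accepted trajectories.
    return [t for t in trajs if accept(t, reference_policies, delta)]
```

for instance, under a uniform two-action reference policy every step contributes log 2 of nll, so longer trajectories clear a fixed threshold more easily, which is one reason the threshold δ must be set relative to trajectory length in practice.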
in addition, we remark that trajectory filtering shares a conceptually similar motivation with the clipping term in proximal policy optimization (schulman et al., 2017).

intrinsic rewards for diversity exploration

the main issue with eq. (5) is that trajectory filtering may reject a significant number of trajectories, especially in the early stage of policy learning, since the policy is typically initialized close to a random policy. hence, it is often the case that most of the data in a batch are discarded, which wastes samples severely and may even break learning due to the lack of feasible trajectories. can we make use of those rejected trajectories? we propose to additionally apply a novelty-driven objective to the rejected samples. formally, we use φj(τ) to denote whether τ satisfies the constraint w.r.t. πj, i.e., φj(τ) = i[nll(τ; πj) ≥ δ]. then we have the following switching objective:

j_switch(θ) = e_{τ∼πθ}[ φ(τ) Σ_t γ^t rt + λ Σ_j (1 − φj(τ)) nll(τ; πj) ]. (6)

the above objective simultaneously maximizes the extrinsic return on accepted trajectories and the cross-entropy on rejected trajectories. it can be proved that solving eq. (6) is also equivalent to solving eq. (2) with a stronger diversity constraint (see app. g). furthermore, eq. (6) can also be interpreted as introducing additional cross-entropy intrinsic rewards on rejected trajectories (i.e., those with φj(τ) = 0). more specifically, given nll(τ; π) = −Σ_{(st,at)∈τ} log π(at|st), an intrinsic reward r^int(st, at; πj) = −log πj(at | st) is applied to each state-action pair (st, at) of every rejected trajectory τ. conceptually, this suggests an even more general paradigm for diversity exploration: we can optimize extrinsic rewards on accepted trajectories while utilizing novelty-driven intrinsic rewards on rejected trajectories for more effective exploration, i.e., by encouraging the learning policy πθ to be distinct from a reference policy πj.
hence, we propose two different types of intrinsic rewards to promote diversity exploration: one is likelihood-based, which directly follows eq. (6) and focuses more on behavior novelty; the other is reward-prediction-based, which focuses more on achieving novel states and reward signals.

behavior-driven exploration. the behavior-driven intrinsic reward r^int_b is defined by

r^int_b(st, at; πj) = −log πj(at | st). (7)

r^int_b encourages the learning policy to output different actions from those of the reference policies and therefore to be more likely to be accepted. note that r^int_b can be directly interpreted as the lagrangian penalty utilized in the soft objective j_soft(θ).

reward-driven exploration. a possible limitation of behavior-driven exploration is that it may overly focus on visually indistinguishable action changes rather than high-level strategies. note that in rl problems with diverse reward signals, it is usually preferable to discover policies that achieve different types of rewards (simmons-edler et al., 2020; wang* et al., 2020). inspired by the curiosity-driven exploration method (pathak et al., 2017), we adopt a model-based approach for predicting novel reward signals. in particular, after obtaining each reference policy πj, we learn a reward prediction function f(s, a; ψj) trained by minimizing the expected mse loss l(ψj) = e_{τ∼πj, t}[ |f(st, at; ψj) − rt|² ] over the trajectories generated by πj. the reward prediction function f(s, a; ψj) is expected to predict the extrinsic environment reward more accurately on state-action pairs that are frequently visited by πj and less accurately on rarely visited pairs. to encourage policy exploration, we adopt the reward prediction error as our reward-driven intrinsic reward r^int_r. formally, given the transition triplet (st, at, rt),

r^int_r(st, at; πj) = |f(st, at; ψj) − rt|². (8)
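to make the two intrinsic rewards concrete, here is a small self-contained sketch. it is our own illustration: a linear least-squares fit stands in for the mse-trained predictor f(s, a; ψj), and the feature vectors are an assumed stand-in for a learned feature map of (s, a):

```python
import numpy as np

def behavior_intrinsic(ref_log_prob, s, a):
    # r^int_b(s_t, a_t; pi_j) = -log pi_j(a_t | s_t)
    return -ref_log_prob(s, a)

def fit_reward_predictor(features, rewards):
    # stand-in for training f(s, a; psi_j) by minimizing mse on pi_j's data:
    # here a linear model w . phi(s, a) fit by least squares
    w, *_ = np.linalg.lstsq(np.asarray(features, float),
                            np.asarray(rewards, float), rcond=None)
    return w

def reward_intrinsic(w, feature, r):
    # r^int_r(s_t, a_t; pi_j) = |f(s_t, a_t; psi_j) - r_t|^2
    return float((np.dot(feature, w) - r) ** 2)
```

the prediction error is small on pairs the reference policy visits often and large elsewhere, so it rewards reaching novel reward signals.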
we remark that reward-driven exploration can also be interpreted as approximately maximizing the f-divergence of the joint state occupancy measures between policies (liu et al., 2021). by combining these two intrinsic rewards, we approximately maximize the divergence of both actions and states between policies to effectively promote diversity. by default, we use the behavior-driven intrinsic reward for computational simplicity and optionally augment it with the reward-driven intrinsic reward in more challenging scenarios (see examples in section 4.2).

reward-switching policy optimization

we define the rspo reward r^rspo_t by

r^rspo_t = φ(τ) rt + λ Σ_j (1 − φj(τ)) r^int(st, at; πj), (9)

where λ is a scaling hyperparameter. note that extrinsic and intrinsic rewards are mutually exclusive: a trajectory τ is either included in j_filter or rejected to produce exploration bonuses. conceptually, our method adaptively "switches" between extrinsic and intrinsic rewards during policy gradient updates, hence the name reward-switching policy optimization (rspo). we also remark that the intrinsic reward constantly pushes the learning policy towards the feasible policy space, and the optimization objective eventually converges to j(θ) when no trajectory is rejected. in addition to the rspo algorithm above, we also introduce two implementation enhancements for better empirical performance, especially in performance-sensitive scenarios.

automatic threshold selection. we provide an empirical way of adjusting δ. in some environments, δ is sensitive to each reference policy. instead of tuning δ for each reference policy, we choose its corresponding threshold by δj = α · d(π_rnd, πj), where π_rnd is a fully random policy and α is a task-specific hyperparameter.
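putting the pieces together, the per-step reward switching of eq. (9) can be sketched as follows. this is again our own minimal illustration; `ref_intrinsics[j]` is a hypothetical callable computing r^int(s, a; πj):

```python
def rspo_rewards(traj, ref_nlls, ref_intrinsics, delta, lam):
    """Per-step switched reward:
    r^rspo_t = phi(tau) * r_t + lam * sum_j (1 - phi_j(tau)) * r^int(s_t, a_t; pi_j).

    traj: list of (s, a, r) steps; ref_nlls[j]: nll of tau under reference policy pi_j.
    """
    phis = [1.0 if nll_j >= delta else 0.0 for nll_j in ref_nlls]
    phi = 1.0
    for p in phis:          # phi(tau) = prod_j phi_j(tau)
        phi *= p
    switched = []
    for s, a, r in traj:
        # intrinsic bonus only w.r.t. the violated reference policies
        bonus = sum((1.0 - pj) * rint(s, a)
                    for pj, rint in zip(phis, ref_intrinsics))
        switched.append(phi * r + lam * bonus)
    return switched
```

an accepted trajectory keeps its extrinsic rewards untouched; a rejected one has them zeroed out and replaced by the diversity bonus.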
we remark that α is a constant across training iterations and is much easier to choose than manually tuning δ, which requires subtle variation throughout multiple training iterations. we use automatic threshold selection by default. detailed values of α and the methodology for tuning α can be found in app. d.1 and app. b.3 respectively.

smoothed switching for intrinsic rewards. the intrinsic reward has multiple switching indicators, i.e., φ1, . . . , φk−1. moreover, for different trajectories, different subsets of indicators are turned on and off, which may result in a varying scale of intrinsic rewards and hurt training stability. therefore, in some constraint-sensitive cases, we propose a smoothed-switching mechanism that can further improve practical performance: we maintain a running average ˜φj over all the sampled trajectories for each indicator φj(τ) and use these smoothed indicators to compute the intrinsic rewards defined in eq. (9). smoothed switching empirically improves training stability when a large number of reference policies exist, such as in stag-hunt games (see section 4.2).

experiments

to illustrate that our method can be applied to general rl applications, we experiment on 4 domains that feature multi-modality of solutions: a single-agent navigation problem in the particle-world (mordatch & abbeel, 2018), 2-agent markov stag-hunt games (tang et al., 2021), continuous control in mujoco (todorov et al., 2012), and the starcraft ii multi-agent challenge (smac) (vinyals et al., 2017; rashid et al., 2019). in particle-world and the stag-hunt games, all the local optima can be precisely calculated, so we can quantitatively evaluate the effectiveness of different algorithms by measuring how many distinct strategy modes are discovered. in mujoco control and smac, we qualitatively demonstrate that our method can discover a large collection of visually distinguishable strategies.
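the running-average smoothing can be sketched as follows; this is our own minimal illustration, and the momentum value is an assumption rather than a value from the paper:

```python
class SmoothedSwitch:
    """Maintains a running average ~phi_j of the 0/1 indicators phi_j(tau),
    used in place of the hard switch when weighting intrinsic rewards."""

    def __init__(self, num_refs, momentum=0.9):
        self.momentum = momentum
        self.phi_bar = [1.0] * num_refs  # start as fully "accepted"

    def update(self, indicators):
        # indicators: phi_j(tau) in {0, 1} for the latest sampled trajectory
        for j, ind in enumerate(indicators):
            self.phi_bar[j] = (self.momentum * self.phi_bar[j]
                               + (1.0 - self.momentum) * ind)
        return list(self.phi_bar)
```

replacing the hard indicators with these averages keeps the intrinsic-reward scale from jumping between batches when many reference policies are present.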
notably, we primarily present results from purely rl-based methods, which do not require prior knowledge of the possible local optima, for a fair comparison. we also remark that when a precise feature descriptor of local optima is available, it is also possible to apply evolutionary methods (nilsson & cully, 2021) to a subset of the scenarios we consider. for readers of further interest, a thorough study with discussions can be found in app. b.4. our implementation is based on ppo (schulman et al., 2017) and runs on a desktop machine with one cpu and one nvidia rtx 3090 gpu. all the algorithms are run for the same number of total environment steps and the same number of iterations (or the same population size). more details can be found in the appendix.

single-agent particle-world environment

we consider a sparse-reward navigation scenario called 4-goals (fig. 1). the agent starts from the center and receives a reward when reaching a landmark. we set up 3 difficulty levels. in the easy mode, the landmark locations are fixed. in the medium mode, the landmarks are placed randomly. in the hard mode, the landmarks are not only placed randomly but also have different sizes and rewards: specifically, the sizes and rewards of the four landmarks are 2×, 1×, 0.5×, 0.25× and 1×, 1.1×, 1.2×, 1.3× of the normal one, respectively. we remark that in the hard mode, the landmark size decreases at an exponential rate while the reward gain is only marginal, making it exponentially harder to discover policies towards the smaller landmarks. we compare rspo with several baselines: ppo with restarts (pg), diversity-inducing policy gradient (dipg) (masood & doshi-velez, 2019), population-based training with the cross-entropy objective (pbt-ce), dvd (parker-holder et al., 2020b), smerl (kumar et al., 2020), and random network distillation (rnd) (burda et al., 2019). rnd is designed to explore for the policy with the highest reward, so we only evaluate rnd in the hard mode.
figure 1: the agent (orange) and landmarks (blue) in 4-goals: (a) easy, (b) medium, (c) hard.

the number of distinct local optima discovered by different methods is presented in fig. 2a. rspo consistently discovers all 4 modes within 4 iterations, even without the use of any intrinsic reward over rejected trajectories (i.e., r^int_t = 0). dipg finds 4 strategies in 4 out of the 5 runs in the easy mode but performs no better than pg in the two harder modes. fig. 2b shows the highest expected return achieved over the policy population in the hard mode. rspo is the only algorithm that successfully learns the optimal policy towards the smallest ball.

figure 2: experiment results on 4-goals for m = 7 iterations averaged over 5 random seeds: (a) mean number of distinct strategies found on 4-goals; (b) rewards on hard mode. error bars are 95% confidence intervals.

figure 4: different strategies found in a 20-iteration run of rspo in monster-hunt. we plot the heatmap of the two agents' meeting points to indicate the types of the found strategies.

we also report the performance of rspo optimizing the soft objective in eq. (4) with the default behavior-driven intrinsic reward r^int_b (no-switch), which was only able to discover the policy towards the second largest landmark. comparing no-switch with the r^int_t = 0 variant, we conclude that the filtering-based objective can be critical for rspo to discover sufficiently different modes. we also remark that the no-switch variant differs from dipg only in the diversity metric used. this suggests that in environments with high state variance (e.g., landmarks with random sizes and positions), a state-based diversity metric may be less effective.

2-agent markov stag-hunt games

we further show the effectiveness of rspo on two grid-world stag-hunt games developed in tang et al. (2021), monster-hunt and escalation, both of which have very distinct nash equilibria (nes) for self-play rl methods to converge to.
moreover, the optimal ne with the highest rewards for both agents in these games requires risky cooperation, i.e., a big penalty is given to an agent if the other agent stops cooperating. this makes most self-play rl algorithms converge to the safe, non-cooperative ne strategies with lower rewards. it has been shown that none of the state-of-the-art exploration methods can discover the globally optimal solution without knowing the underlying reward structure (tang et al., 2021). we remark that since there are enormous numbers of nes in these environments, as shown in figs. 4 and 7b, population-based (pbt) methods require a significantly large population size for meaningful performance, which is computationally too expensive to run; therefore, we do not include the results of pbt baselines. we also apply the smoothed-switching heuristic for rspo in this domain. environment details can be found in app. c.2.

figure 3: the monster-hunt game.

the monster-hunt game. the monster-hunt game (fig. 3) contains a monster and two apples. when a single agent meets the monster, it gets a penalty of −2. when both agents meet the monster at the same time, they "catch" the monster and both get a bonus of 5. when a player meets an apple, it gets a bonus of 2. the optimal strategy, i.e., both agents moving towards the monster, is a risky cooperative ne since an agent receives a penalty if the other agent defects. the non-cooperative ne of eating apples is a safe ne that is easy to discover but has lower rewards. we adopt both behavior-driven and reward-driven intrinsic rewards in rspo to tackle monster-hunt. fig. 4 illustrates all the strategies discovered by rspo over 20 iterations, which cover a wide range of human-interpretable strategies, including the non-cooperative apple-eating strategy as well as the

table 1: types of strategies discovered by each method in monster-hunt over 20 iterations.
(table 1, flattened by extraction; check marks lost. columns: apple, corner, edge, chase; rows: rspo and its ablations (no switch, no r^int, r^int_b only), and the baselines rpg, maven, pg/dipg/rnd.)

figure 5: sample acceptance ratio when learning a policy distinct from the apple ne; the intrinsic reward is critical.

figure 6: escalation: two agents need to keep stepping on the light simultaneously.

figure 7: results on escalation averaged over 3 random seeds: (a) reward in iteration 2; (b) number of distinct strategies found. shaded areas and error bars are 95% confidence intervals.

optimal strategy, where both agents stay together and actively chase the monster. by visualizing the heatmap of where the two agents meet, we observe surprisingly diverse sub-optimal cooperative nes, where both agents move to a corner or an edge simultaneously, stay there, and wait for the monster to come. we remark that, due to the existence of such a great number of passive waiting strategies, which all have similar accumulated environment rewards and states, it becomes critical to include the reward-driven intrinsic reward to quickly bypass them and discover the optimal solution. we perform ablation studies on rspo by turning off reward switching (no switch) or the intrinsic reward (no r^int), or by using only the behavior-driven intrinsic reward (r^int_b only), and evaluate the performance of several baseline methods, including vanilla pg with restarts (pg), dipg, rnd, a popular multi-agent exploration method maven (mahajan et al., 2019), and reward-randomized policy gradient (rpg) (tang et al., 2021). we summarize the categories of strategies discovered by all these methods in table 1. apple denotes the non-cooperative apple-eating ne; chase denotes the optimal ne where both agents actively chase the monster; corner and edge denote the sub-optimal cooperative nes where both agents passively wait for the monster at a corner or an edge respectively. regarding the baselines, pg, dipg, and rnd never discover any strategy beyond the non-cooperative apple ne.
for rpg, even when using domain knowledge to change the reward structure of the game, it never discovers the edge ne. regarding the rspo variants, both reward switching and intrinsic rewards are necessary. fig. 5 shows that when the intrinsic reward is turned off, the proportion of accepted trajectories per batch stays low throughout training, which implies that the learning policy fails to escape the infeasible subspace. besides, as shown in table 1, using behavior-driven exploration alone fails to discover the optimal ne, which suggests the necessity of reward-driven exploration to maximize the divergence of both states and actions in problems with many equivalent local optima.

the escalation game. escalation (fig. 6) requires the two players to interact with a static light. when both players step on the light simultaneously, they both receive a bonus of 1; the light then moves to a random adjacent grid. the game continues only if both players choose to follow the light. if only one player steps on the light, it receives a penalty of −0.9l, where l is the number of previous cooperation steps. for each integer l, there is a corresponding ne where both players follow the light for l steps and then simultaneously stop cooperating. we run rspo with both diversity-driven intrinsic rewards and compare it with pg, dipg, rnd, and rpg. except for rpg, none of the baseline methods discovers any cooperative ne, while rspo directly learns the optimal cooperative ne (i.e., always cooperate) in the second iteration, as shown in fig. 7a. we also measure the total number of nes discovered by different methods over 10 iterations in fig. 7b. due to the existence of many spiky local optima, the smoothed-switching technique can be crucial here to stabilize the

table 2 (flattened by extraction; scores lost): population diversity scores in mujoco. columns: half-cheetah, hopper, walker2d, humanoid; rows: pg, dipg, pbt-ce, dvd, smerl, rspo.

table 3 (flattened by extraction; counts lost): number of visually distinct policies over 4 iterations in smac.
training process. we remark that even without the smoothed-switching technique, rspo achieves performance comparable to rpg; note that rpg requires a known reward function, while rspo does not assume any environment-specific domain knowledge.

continuous control in mujoco

we evaluate rspo in the continuous control domain, including half-cheetah, hopper, walker2d, and humanoid, and compare it with baseline methods including pg, dipg, dvd (parker-holder et al., 2020b), smerl (kumar et al., 2020), and population-based training with our cross-entropy objective (pbt-ce). all the methods are run over 5 iterations (a population size of 5, or a latent dimension of 5) across 3 seeds. we adopt population diversity, a determinant-based diversity criterion proposed in parker-holder et al. (2020b), to evaluate the diversity of the derived policies. results are summarized in table 2, where rspo achieves comparable performance in hopper and humanoid and substantially outperforms all the baselines in half-cheetah and walker2d. we remark that even with the same intrinsic reward, population-based training (pbt-ce) cannot discover sufficiently novel policies compared with iterative learning (rspo). in humanoid, smerl achieves a substantially lower return than the other baselines, so we do not report its population diversity score (more details can be found in app. b.6). we also visualize some interesting emergent behaviors discovered by rspo for half-cheetah and hopper in app. b.1; the different strategy modes discovered by rspo are visually distinguishable, while baseline methods often converge to very similar behaviors despite non-zero diversity scores.

starcraft multi-agent challenge

we further apply rspo to the starcraft ii multi-agent challenge (smac) (rashid et al., 2019), which is substantially more difficult due to partial observability, long horizons, and complex state/action spaces.
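for intuition, a determinant-based diversity score of this flavor can be sketched as follows. this is our own simplified illustration with an rbf kernel over policy behavioral embeddings; the embedding construction and bandwidth are assumptions, not the exact metric of parker-holder et al. (2020b):

```python
import numpy as np

def population_diversity(embeddings, bandwidth=1.0):
    """Determinant of the kernel matrix over policy behavioral embeddings:
    near 0 when policies behave alike, near 1 when they are well separated."""
    e = np.asarray(embeddings, dtype=float)
    # pairwise squared distances between embeddings, via broadcasting
    sq_dists = ((e[:, None, :] - e[None, :, :]) ** 2).sum(axis=-1)
    kernel = np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return float(np.linalg.det(kernel))
```

the determinant collapses to zero as soon as any two policies produce near-identical embeddings, which is what makes it a natural population-level diversity score.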
we conduct experiments on 2 maps: an easy map, 2m_vs_1z, and a hard map, 2c_vs_64zg, both of which have heterogeneous unit types leading to a multi-modal solution space. baseline methods include pg, dipg, and pbt-ce. the smerl and dvd algorithms are not included because they were originally designed for the continuous control domain and are not suitable for smac (see app. b.6 and app. b.2.2); instead, we include the trajdiv algorithm (lupu et al., 2021), which was designed for cooperative multi-agent games, as an additional baseline. we compare the number of visually distinct policies obtained by training a population of 4 (or training for 4 iterations), as shown in table 3. while pbt-based algorithms tend to discover policies with only slight distinctions, rspo effectively discovers different winning strategies demonstrating intelligent behaviors within just a few iterations, consistently across repetitions. we remark that there may not exist an appropriate quantitative diversity metric for such a sophisticated marl game in the existing literature (see app. b.2.2). visualizations and discussions can be found in app. b.1.

conclusion

we propose reward-switching policy optimization (rspo), a simple, generic, and effective iterative learning algorithm that can continuously discover novel strategies. rspo tackles a novelty-constrained optimization problem via adaptive switching between the extrinsic and intrinsic rewards used for policy learning. empirically, rspo successfully tackles a wide range of challenging rl domains under both single-agent and multi-agent settings. we leave further theoretical justification and sample-efficiency improvements as future work.

references

bowen baker, ingmar kanitscheider, todor markov, yi wu, glenn powell, bob mcgrew, and igor mordatch. emergent tool use from multi-agent autocurricula. in international conference on learning representations, 2020.
nolan bard, jakob n foerster, sarath chandar, neil burch, marc lanctot, h francis song, emilio parisotto, vincent dumoulin, subhodeep moitra, edward hughes, et al. the hanabi challenge: a new frontier for ai research. artificial intelligence, 280:103216, 2020. yuri burda, harrison edwards, a. storkey, and oleg klimov. exploration by random network distillation. iclr, 2019. junyoung chung, caglar gulcehre, kyunghyun cho, and yoshua bengio. empirical evaluation of gated recurrent neural networks on sequence modeling. arxiv preprint arxiv:1412.3555, 2014. geoffrey cideron, thomas pierrot, nicolas perrin, karim beguir, and olivier sigaud. qd-rl: efficient mixing of quality and diversity in reinforcement learning. arxiv preprint arxiv:2006.08505, 2020. antoine cully, jeff clune, danesh tarapore, and jean-baptiste mouret. robots that can adapt like animals. nature, 521(7553):503–507, 2015. k. deb and amit saha. finding multiple solutions for multimodal optimization problems using a multi-objective evolutionary approach. in gecco ’10, 2010. benjamin eysenbach, a. gupta, j. ibarz, and sergey levine. diversity is all you need: learning skills without a reward function. iclr, 2019. scott fujimoto, herke hoof, and david meger. addressing function approximation error in actor-critic methods. in international conference on machine learning, pp. 1587–1596. pmlr, 2018. tuomas haarnoja, aurick zhou, pieter abbeel, and sergey levine. soft actor-critic: off-policy maximum entropy deep reinforcement learning with a stochastic actor. in international conference on machine learning, pp. 1861–1870. pmlr, 2018. kristian hartikainen, xinyang geng, t. haarnoja, and sergey levine. dynamical distance learning for semi-supervised and unsupervised skill discovery. iclr, 2020. zhang-wei hong, tzu-yun shann, shih-yang su, y. chang, and chun-yi lee. diversity-driven exploration strategy for deep reinforcement learning. 2018.
max jaderberg, wojciech m czarnecki, iain dunning, luke marris, guy lever, antonio garcia castaneda, charles beattie, neil c rabinowitz, ari s morcos, avraham ruderman, et al. humanlevel performance in 3d multiplayer games with population-based reinforcement learning. science, 364(6443):859–865, 2019. dirk p. kroese, s. porotsky, and r. rubinstein. the cross-entropy method for continuous multiextremal optimization. methodology and computing in applied probability, 8:383–407, 2006. saurabh kumar, aviral kumar, sergey levine, and chelsea finn. one solution is not all you need: few-shot extrapolation via structured maxent rl. advances in neural information processing systems, 33, 2020. marc lanctot, v. zambaldi, a. gruslys, a. lazaridou, k. tuyls, j. pérolat, d. silver, and t. graepel. a unified game-theoretic approach to multiagent reinforcement learning. in nips, 2017. jiwei li, will monroe, alan ritter, dan jurafsky, michel galley, and jianfeng gao. deep reinforcement learning for dialogue generation. in proceedings of the 2016 conference on empirical methods in natural language processing, pp. 1192–1202, 2016. siqi liu, g. lever, j. merel, s. tunyasuvunakool, n. heess, and t. graepel. emergent coordination through competition. 2019. xiangyu liu, hangtian jia, ying wen, yaodong yang, yujing hu, yingfeng chen, changjie fan, and zhipeng hu. unifying behavioral and response diversity for open-ended learning in zero-sum games. arxiv preprint arxiv:2106.04958, 2021. andrei lupu, brandon cui, hengyuan hu, and jakob foerster. trajectory diversity for zero-shot coordination. in international conference on machine learning, pp. 7204–7213. pmlr, 2021. pingchuan ma, tao du, and wojciech matusik. efficient continuous pareto exploration in multi-task learning. in international conference on machine learning, pp. 6522–6531. pmlr, 2020. tengyu ma. why do local methods solve nonconvex problems? 
beyond the worst-case analysis of algorithms, 2020. anuj mahajan, tabish rashid, mikayel samvelyan, and s. whiteson. maven: multi-agent variational exploration. in neurips, 2019. m. a. masood and finale doshi-velez. diversity-inducing policy gradient: using maximum mean discrepancy to find a set of diverse policies. 2019. b. miller and michael j. shaw. genetic algorithms with dynamic niche sharing for multimodal function optimization. proceedings of ieee international conference on evolutionary computation, pp. 786–791, 1996. igor mordatch and p. abbeel. emergence of grounded compositional language in multi-agent populations. in aaai, 2018. nicolas perez nieves, yaodong yang, oliver slumbers, david mguni, and jun wang. modelling behavioural diversity for learning in open-ended games. icml, 2021. olle nilsson and antoine cully. policy gradient assisted map-elites. in proceedings of the genetic and evolutionary computation conference, pp. 866–875, 2021. shayegan omidshafiei, karl tuyls, wojciech m czarnecki, francisco c santos, mark rowland, jerome connor, daniel hennes, paul muller, julien pérolat, bart de vylder, et al. navigating the landscape of multiplayer games. nature communications, 11(1):1–17, 2020. jack parker-holder, luke metz, cinjon resnick, h. hu, a. lerer, alistair letcher, alexander peysakhovich, aldo pacchiano, and jakob foerster. ridge rider: finding diverse solutions by following eigenvectors of the hessian. neurips, 2020a. jack parker-holder, aldo pacchiano, krzysztof choromanski, and stephen roberts. effective diversity in population-based reinforcement learning. neurips, 2020b. deepak pathak, pulkit agrawal, alexei a efros, and trevor darrell. curiosity-driven exploration by self-supervised prediction. in international conference on machine learning, pp. 2778–2787. pmlr, 2017. tiago pereira, maryam abbasi, bernardete ribeiro, and joel p. arrais. diversity oriented deep reinforcement learning for targeted molecule generation.
journal of cheminformatics, 13(1):21, mar 2021. issn 1758-2946. doi: 10.1186/s13321-021-00498-z. url https://doi.org/10.1186/s13321-021-00498-z. justin k. pugh, l. b. soros, and k. stanley. quality diversity: a new frontier for evolutionary computation. frontiers robotics ai, 3:40, 2016. m. puterman. markov decision processes: discrete stochastic dynamic programming. in wiley series in probability and statistics, 1994. tabish rashid, philip hs torr, gregory farquhar, chia-man hung, tim gj rudner, nantas nardelli, shimon whiteson, christian schroeder de witt, jakob foerster, and mikayel samvelyan. the starcraft multi-agent challenge. in proceedings of the international conference on autonomous agents and multiagent systems (aamas), pp. 2186–2188, 2019. john schulman, sergey levine, p. abbeel, michael i. jordan, and p. moritz. trust region policy optimization. icml, 2015. john schulman, f. wolski, prafulla dhariwal, alec radford, and oleg klimov. proximal policy optimization algorithms. arxiv preprint arxiv:1707.06347, 2017. riley simmons-edler, ben eisner, daniel yang, anthony bisulco, eric mitchell, sebastian seung, and daniel lee. qxplore: q-learning exploration by maximizing temporal difference error. 2019. riley simmons-edler, ben eisner, daniel yang, anthony bisulco, eric mitchell, sebastian seung, and daniel lee. reward prediction error as an exploration objective in deep rl. in proceedings of the twenty-ninth international joint conference on artificial intelligence, ijcai-20, pp. 2816–2823, 7 2020. hao sun, zhenghao peng, bo dai, jian guo, dahua lin, and bolei zhou. novel policy seeking with constrained optimization. arxiv preprint arxiv:2005.10696, 2020. zhenggang tang, c. yu, boyuan chen, huazhe xu, xiaolong wang, fei fang, s. du, yu wang, and yi wu. discovering diverse multi-agent strategic behavior via reward randomization. iclr, 2021. e. todorov, t. erez, and y. tassa. mujoco: a physics engine for model-based control. 2012 ieee/rsj international conference on intelligent robots and systems, pp. 5026–5033, 2012.
oriol vinyals, timo ewalds, sergey bartunov, petko georgiev, alexander sasha vezhnevets, michelle yeo, alireza makhzani, heinrich küttler, john agapiou, julian schrittwieser, et al. starcraft ii: a new challenge for reinforcement learning. arxiv preprint arxiv:1708.04782, 2017. oriol vinyals, igor babuschkin, wojciech m czarnecki, michaël mathieu, andrew dudzik, junyoung chung, david h choi, richard powell, timo ewalds, petko georgiev, et al. grandmaster level in starcraft ii using multi-agent reinforcement learning. nature, 575(7782):350–354, 2019. rui wang, joel lehman, jeff clune, and kenneth o stanley. poet: open-ended coevolution of environments and their optimized solutions. in proceedings of the genetic and evolutionary computation conference, pp. 142–151, 2019. tonghan wang*, jianhao wang*, yi wu, and chongjie zhang. influence-based multi-agent exploration. in international conference on learning representations, 2020. chao yu, akash velu, eugene vinitsky, yu wang, alexandre bayen, and yi wu. the surprising effectiveness of mappo in cooperative, multi-agent games. arxiv preprint arxiv:2103.01955, 2021. tom zahavy, brendan o'donoghue, andre barreto, volodymyr mnih, sebastian flennerhag, and satinder singh. discovering diverse nearly optimal policies with successor features. arxiv preprint arxiv:2106.00669, 2021. stephan zheng, alexander trott, sunil srinivasa, nikhil naik, melvin gruesbeck, david c parkes, and richard socher. the ai economist: improving equality and productivity with ai-driven tax policies. arxiv preprint arxiv:2004.13332, 2020. zeyu zheng, junhyuk oh, and satinder singh. on learning intrinsic rewards for policy gradient methods. advances in neural information processing systems, 31:4644–4654, 2018.

a gif demonstrations on mujoco and smac

see https://sites.google.com/view/rspo-iclr-2022.
b additional results

b.1 visualization of discovered strategies

b.1.1 mujoco

figure 8: half-cheetah behaviors: (a) normal running, (b) handstand running, (c) upside-down running.

figure 9: hopper behaviors: (a) normal hopping, (b) charged hopping, (c) small-step hopping, (d) kneeling.

figure 10: walker behaviors: (a) normal running, (b) jumping, (c) striding running.

figure 11: humanoid behaviors: (a) two-feet mincing, (b) striding, (c) hand-helped balancing.

b.1.2 smac

we present screenshots of emergent strategies in the smac environment in fig. 14 and fig. 15 for 2c_vs_64zg and 2m_vs_1z respectively. we expect different emergent strategies to be both visually distinguishable and human-interpretable. the strategies induced by the baseline methods and rspo are summarized in table 4 and table 5. on the hard map 2c_vs_64zg, the agents control 2 colossi to fight against 64 zergs. the colossi have a wider attack range and can step over the cliff. fig. 14a shows an aggressive strategy where the two colossi stay together on the left side of the cliff to fire at all incoming enemies. fig. 14b shows a particularly intelligent strategy where the colossi make use of the terrain to play hit-and-run: when the game starts, the colossi stand on the cliff to snipe distant enemies, making them wander around under the cliff; hence, as the game proceeds, the enemies are clearly partitioned while the colossi always maintain a smart firing position on the cliff. fig. 14c shows a mirrored strategy, similar to fig. 14a, that aggressively cleans up incoming enemies from the right side. fig. 14d shows a conservative strategy in which the colossi stand still in the corner to keep minimal contact with the enemies and thus minimize the damage received. fig. 14e shows another smart strategy: one colossus blocks all incoming enemies at the mountain pass as a fire attractor, while the other hides behind it and snipes enemies from a distance.
we can see from the last two frames that the distant sniper does not lose any health points in the late stages. in fig. 14f, one colossus (#1) actively takes advantage of the terrain to walk along the cliff, so that enemies on the plateau must run around to attack it. in the meantime, it helps its teammate by sniping distant enemies. finally, the two colossi separately clean up all the remaining enemies for the win. on the easy map 2m_vs_1z, agents need to control 2 marines to defeat a zealot. the marines can shoot the zealot from a distance but the zealot can only perform close-range attacks. in fig. 15a, the marines swing horizontally to keep an appropriate fire distance from the zealot. in fig. 15b, the marines perform a parallel hit-and-run from top to bottom as the zealot approaches. in fig. 15c, the right-side marine stands still, and the left-side marine swings vertically to distract the zealot. in fig. 15d, the two marines alternately perform hit-and-run from bottom to top to distract the zealot.

table 4: strategies induced by baseline methods and rspo in smac map 2c_vs_64zg.
pg: left wave cleanup, right wave cleanup
dipg: cliff walk, corner
pbt-ce: left wave cleanup, right wave cleanup, fire attractor and distant sniper
trajdiv: cliff walk, right wave cleanup
rspo: left wave cleanup, cliff sniping and smart blocking, right wave cleanup, corner, fire attractor and distant sniper, cliff walk

table 5: strategies induced by baseline methods and rspo in smac map 2m_vs_1z.
pg: parallel hit-and-run, alternative distraction
dipg: parallel hit-and-run, one-sided swinging
pbt-ce: one-sided swinging, parallel hit-and-run, swinging
trajdiv: parallel hit-and-run
rspo: one-sided swinging, parallel hit-and-run, swinging, alternative distraction

b.2 quantitative evaluation

b.2.1 mujoco

the final performance in the mujoco environments is presented in table 6.
as mentioned in appendix c.3, in our implementation we fix the episode length to 512 so that the diversity intrinsic rewards can be easily computed, which may harm sample efficiency and the evaluation score. moreover, we use a hidden size of 64, which is usually smaller than in previous works (e.g., 256 in the sac paper (haarnoja et al., 2018)). hence, these results may not be directly comparable with other numbers in the existing literature. however, the policy in iter #1 is obtained by vanilla ppo and can therefore be used to assess the relative performance of the other runs. note that even if the presented scores are not state-of-the-art, visualization results show that our algorithm successfully learns the basic locomotion and diverse gaits in these environments. the results indeed demonstrate diverse local optima that are properly discovered by rspo, including many interesting emergent behaviors that may have never been reported in the literature (check the website in appendix a for details), which accords with our initial motivation.

table 6: final evaluation performance of rspo averaged over 32 episodes in the mujoco continuous control domain (half-cheetah, hopper, walker2d, humanoid; iter #1 through iter #5), averaged over 3 random seeds with standard deviations in brackets.

table 7: final evaluation winning rate of rspo averaged over 32 episodes in smac (per map, iter #1 through iter #6).

b.2.2 smac

the final evaluation winning rate in smac is presented in table 7. the final performance of rspo on the easy map 2m_vs_1z matches the state-of-the-art (yu et al., 2021). the median evaluation winning rate and standard deviation on the hard map 2c_vs_64zg is 98.4% (6.4%), which is slightly lower than the state-of-the-art 100% (0%). we note that the policies discovered by our algorithm are both diverse and high-quality winning strategies (local optima) showing intelligent emergent behaviors.
we note that population diversity, which we use for quantitative evaluation in mujoco, may not be an appropriate metric for such a sophisticated marl game with a large state space and a long horizon. diversity via determinant (population diversity; parker-holder et al., 2020b) was originally designed for the continuous control domain and directly adopts the action as the action embedding, while it remains unclear how to embed a discrete action space. for smac, we adopt the log-probability of the categorical distribution as the action embedding and evaluate rspo and selected baselines using population diversity on the hard map 2c_vs_64zg. we further train dvd with a population size of 4 as an additional baseline. the results are shown in table 8. the population diversity scores of baseline methods and rspo both reach the maximum value of 1.000. however, when we visualize all the learned policies, many of the policies induced by pbt-ce or dvd cannot be visually distinguished by humans. moreover, the policies induced by pg are visually identical but still achieve a population diversity score of 0.981. this indicates that a high population diversity score does not necessarily imply a diverse strategy pool in complex environments like smac. we hypothesize that this is due to the complex game dynamics in smac. for example, a unit performing the micro-strategies attack-then-move and move-then-attack looks the same to humans but will have very different action probabilities at each timestep. such subtle changes in policy outputs can significantly increase the population diversity score. since high scores may not directly reflect more diverse policies, it may not be reasonable to explicitly optimize population diversity as an auxiliary loss in smac. therefore, we omit the results of dvd in smac in the main body of our paper. table 8: population diversity on the hard map 2c_vs_64zg in smac.
(table 8 rows: pg, pbt-ce, dvd, rspo; columns: # distinct strategies, population diversity.)

figure 12: data efficiency with different α in humanoid. figure 13: learning curve with different λ_r^int on the 2m_vs_1z map in smac.

table 9: population diversity scores of the first 2 policies with different hyperparameters in humanoid. we have scaled the denominator of the rbf kernel in the population diversity matrix by a factor of 10, so that the differences are demonstrated more clearly.

to the best of our knowledge, a commonly accepted policy diversity metric for complex marl games remains an open question in the existing literature. in our practice, rendering and visualizing the evaluation trajectories remains the best approach to distinguishing different learned strategies. we emphasize that, qualitatively, ours is the first paper to report such a visually diverse collection of winning strategies on a hard map in smac. please check our website for policy visualizations (see appendix a).

b.3 sensitivity analysis

we have performed a sensitivity analysis over α, λ_b^int and λ_r^int, since they are critical to the performance of rspo. the default values used in our experiments can be found in table 12.

α is the most important hyperparameter in rspo because it determines which trajectories in a batch are accepted. we focus on the data efficiency, i.e., the proportion of accepted trajectories in a batch. in the sensitivity analysis, we run the second iteration of rspo in humanoid with α = 0.5, 1.0, 1.5, 2 respectively and compute the population diversity score of the 2 resulting policies. the result is shown in fig. 12 and the left part of table 9.
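for concreteness, the population diversity score used in this section can be sketched as the determinant of an rbf kernel matrix over per-policy behavior embeddings (for discrete actions, log action probabilities on probe states, as described above). this is a minimal numpy sketch, not the exact implementation of parker-holder et al. (2020b): the function name, probe-state setup, and σ are illustrative.

```python
import numpy as np

def population_diversity(embeddings, sigma=1.0):
    """dvd-style population diversity: determinant of an rbf kernel
    matrix over per-policy behavior embeddings (a sketch)."""
    emb = np.asarray(embeddings)                 # shape: (n_policies, d)
    sq_dists = np.sum((emb[:, None, :] - emb[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2 * sigma ** 2))     # psd, K_ii = 1 on the diagonal
    return np.linalg.det(K)                      # in [0, 1]; 1 = maximally diverse

# discrete actions: embed each policy by its log action probabilities
# on a fixed set of probe states (hypothetical random stand-ins here)
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 16))                # 4 policies, 16 probe logits each
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
score = population_diversity(log_probs, sigma=5.0)
```

identical policies give a singular kernel matrix (score 0), while policies far apart in embedding space give a near-identity kernel (score near 1), which matches the saturation at 1.000 reported in table 8.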
the result accords with our heuristic for adjusting α: with a small α (α = 0.5), rspo may accept all the trajectories at the beginning and converge to a similar policy, which is no better than the pg baseline; with a large α (α = 1.5 and α = 2), rspo may reject too many trajectories at the early stage of training and spend quite a lot of time on exploration, which sacrifices training time for the gain in the diversity score. in practice, we suggest starting with α = 1 and adjusting it such that the acceptance rate drops at the start of training and then quickly and smoothly converges to 1, as shown in fig. 12 (α = 1) and fig. 5. α should be decreased if too much data is rejected at the beginning of training and increased if data efficiency always stays high in the early stage of training.

λ_b^int and λ_r^int determine the scale of the intrinsic rewards. in our sensitivity analysis, λ_b^int is analyzed in the humanoid environment and λ_r^int is analyzed on the 2m_vs_1z map in smac. similarly, we run the second iteration of rspo with λ_b^int = 0.5, 1, 5, 10 in humanoid and with λ_r^int = 0, 0.05, 0.2 on 2m_vs_1z. the results are shown in fig. 13 and the right part of table 9.

table 10: the number of distinct strategies discovered by pga-map-elites and rspo (easy, medium, hard, escalation) and the highest achieved reward in 4-goals hard. numbers are averaged over 3 seeds.

with value and advantage normalization in ppo, the scale of intrinsic rewards may not significantly affect performance. specifically, the diversity scores in humanoid do not vary too much, and the induced policies on 2m_vs_1z are all visually distinct from the reference one. however, if the scale is much larger than the extrinsic rewards, it may cause learning instability, as shown in fig. 13. on the opposite side, if the intrinsic rewards are turned off, rspo may slow down convergence (fig.
13), fail to discover non-trivial local optima due to lack of exploration (table 1), or get stuck during exploration due to low data efficiency (fig. 5). we suggest starting with λ_b^int = 1 and λ_r^int = 0, and adjusting them such that the intrinsic rewards lead to fast and smooth convergence.

b.4 additional study with an evolutionary method

note that the main paper focuses on the discussion of rl-based solutions, which require minimal domain knowledge of the solution structure. evolutionary methods, another popular line of research, have also shown promising results in a variety of domains and are also able to discover interesting diverse behaviors (cully et al., 2015; hong et al., 2018). however, evolutionary methods typically assume a human-designed set of characteristic features for effective learning. here, for a complete empirical study, we also conduct additional experiments with a very recent evolution-based algorithm, pga-map-elites (nilsson & cully, 2021), which integrates map-elites (cully et al., 2015) into the policy gradient algorithm td3 (fujimoto et al., 2018). the pga-map-elites algorithm requires a human-defined behavioral descriptor (bd) to map a neural policy into a low-dimensional (discretized) space for behavior clustering. we run pga-map-elites on the 4-goals and the escalation environments, where the behavioral descriptors (bds) can be precisely defined to the best of our efforts. we remark that for the remaining cases, including the monster-hunt environment, the mujoco control domain, and the smac scenarios, the behaviors of interest always involve strong temporal characteristics, which makes the design of a good bd particularly non-trivial and a challenging open question in the existing literature. in particular, we define a 4-dimensional descriptor for the 4-goals environment, i.e., a one-hot vector indicating the id of the nearest landmark.
for the escalation environment, we use a 1-dimensional descriptor, which is a 0-1-normalized value of the number of cooperation steps within the episode. we set the number of behavior cells (niches) equal to the iteration number in rspo, specifically 7 for 4-goals and 10 for escalation. results are shown in table 10: pga-map-elites performs much worse on the 4-goals scenario, while it outperforms rspo on the escalation environment due to the informative bd. based on these results, we discuss some characteristics of evolutionary algorithms and rspo below: 1. we empirically observe that many policies produced by pga-map-elites are archived immediately, without having converged, particularly in 4-goals (hard) and escalation. when measuring population diversity, these unconverged policies would contribute a lot even though many of them may have unsatisfying behaviors/returns. by contrast, the objective of rspo aims to find diverse local optima. this also suggests a further research direction: bridging this convergence-diversity gap. 2. the quality of the bd can strongly influence the performance of evolutionary methods. note that the bd in escalation provides a particularly clear signal on whether a policy reaches a local optimum or not (i.e., each bd niche precisely corresponds to a policy mode), while rspo works directly on the deceptive extrinsic reward structure without knowing the structure of the nash equilibria (nes). this suggests the importance of bd design, which, however, remains an open challenge in general. 3. an improper bd may lead to a largely constrained behavior space. the success of pga-map-elites largely depends on the fact that the bd is known to effectively cover all the local optima of interest. however, for complex environments like smac, we do not even know in advance what kind of behaviors will emerge after training.
therefore, an open-ended bd would be desired, which becomes an even more challenging problem; note that there has not even been any effective diversity measurement for smac yet. therefore, a purely rl-based solution would be preferred. 4. without an informative bd, evolutionary methods typically require a large population size, which can cause practical issues. for example, maintaining a large unstructured archive can be computationally expensive. it can also be challenging to visually evaluate learned behaviors given a large population. to sum up, when an effective and informative bd is available, evolutionary methods can be a strong candidate to consider, although they may not fit every scenario of interest, while rspo is a generally applicable solution requiring minimal domain knowledge. it could also be beneficial to investigate how to incorporate informative domain priors into the rspo framework, which we leave as future work.

b.5 the point-v0 environment

parker-holder et al. (2020b) develop a continuous control environment which requires the agent to bypass a wall to reach the goal. a big penalty is given to the agent if it runs directly towards the goal and hits the wall. this environment indeed has a local optimum which can be overcome by diversity-driven exploration. however, while the authors of the dvd paper argued that es and nsr get stuck at the wall, the naive ppo algorithm can directly learn to bypass the wall and escape the local optimum. hence, in our experiment section, we consider much more challenging environments where a much larger number of local optima exist, such as stag-hunt games and smac.

b.6 smerl

in smerl (kumar et al., 2020), if the agent achieves a sufficiently high return in a trajectory, the trajectory is augmented with the intrinsic rewards of diayn (eysenbach et al., 2019) for skill differentiation and policy diversification.
in humanoid, it may be challenging for smerl to achieve such a high return, which turns off the intrinsic reward for promoting diversity. hence, we do not report its population diversity score in the main body of our paper. in stag-hunt games, smerl keeps producing low returns. note that since the smerl algorithm only starts promoting diversity after a sufficiently high reward is achieved, all the policies produced by smerl are visually identical non-cooperative strategies. what's more, diayn (eysenbach et al., 2019), which has the same intrinsic reward as smerl, was evaluated in tang et al. (2021) and shown to perform worse than rpg (tang et al., 2021), while the performance of rpg is in turn surpassed by rspo. hence, we omit the results of smerl in the stag-hunt games. we also evaluate smerl in smac with a latent dimension of 5, which only induces 1 strategy on each map across all possible latent variables. we hypothesize that the reason is that in such a complex marl environment with a large state space and a long horizon, the skill latent variable may be particularly challenging to infer from game states. moreover, the latent dimension is usually dominated by the state dimension, which makes the latent variables less effective.
gradient descent on neural networks typically occurs at the edge of stability (2021)

jeremy cohen, simran kaur, yuanzhi li, j. zico kolter (also bosch ai), and ameet talwalkar (also determined ai), carnegie mellon university. correspondence to: jeremycohen@cmu.edu

abstract

we empirically demonstrate that full-batch gradient descent on neural network training objectives typically operates in a regime we call the edge of stability. in this regime, the maximum eigenvalue of the training loss hessian hovers just above the value 2/(step size), and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales. since this behavior is inconsistent with several widespread presumptions in the field of optimization, our findings raise questions as to whether these presumptions are relevant to neural network training. we hope that our findings will inspire future efforts aimed at rigorously understanding optimization at the edge of stability.

introduction

neural networks are almost never trained using (full-batch) gradient descent, even though gradient descent is the conceptual basis for popular optimization algorithms such as sgd. in this paper, we train neural networks using gradient descent, and find two surprises. first, while little is known about the dynamics of neural network training in general, we find that in the special case of gradient descent, there is a simple characterization that holds across a broad range of network architectures and tasks. second, this characterization is strongly at odds with prevailing beliefs in optimization. in more detail, as we train neural networks using gradient descent with step size η, we measure the evolution of the sharpness, i.e. the maximum eigenvalue of the training loss hessian. empirically, the behavior of the sharpness is consistent across architectures and tasks: so long as the sharpness is less than the value 2/η, it tends to continually rise (§3.1).
we call this phenomenon progressive sharpening. the significance of the value 2/η is that gradient descent on quadratic objectives is unstable if the sharpness exceeds this threshold (§2). indeed, in neural network training, if the sharpness ever crosses 2/η, gradient descent quickly becomes destabilized; that is, the iterates start to oscillate with ever-increasing magnitude along the direction of greatest curvature. yet once this happens, gradient descent does not diverge entirely or stall. instead, it enters a new regime we call the edge of stability¹ (§3.2), in which (1) the sharpness hovers right at, or just above, the value 2/η; and (2) the train loss behaves non-monotonically, yet consistently decreases over long timescales. in this regime, gradient descent is constantly "trying" to increase the sharpness, but is constantly restrained from doing so. the net effect is that gradient descent continues to successfully optimize the training objective, but in such a way as to avoid further increasing the sharpness.²

figure 1: gradient descent typically occurs at the edge of stability. on three separate architectures, we run gradient descent at a range of step sizes η, and plot both the train loss (top row) and the sharpness (bottom row). for each step size η, observe that the sharpness rises to 2/η (marked by the horizontal dashed line of the appropriate color) and then hovers right at, or just above, this value.

in principle, it is possible to run gradient descent at step sizes η so small that the sharpness never rises to 2/η. however, these step sizes are suboptimal from the point of view of training speed, sometimes dramatically so. in particular, for standard architectures on the standard dataset cifar-10, such step sizes are so small as to be completely unreasonable; at all reasonable step sizes, gradient descent eventually enters the edge of stability (see §4).
thus, at least for standard networks on cifar-10, the edge of stability regime should be viewed as the "rule," not the "exception." as we describe in §5, the edge of stability regime is inconsistent with several pieces of conventional wisdom in optimization theory: convergence analyses based on l-smoothness or monotone descent, quadratic taylor approximations as a model for local progress, and certain heuristics for step size selection. we hope that our empirical findings will both nudge the optimization community away from widespread presumptions that appear to be untrue in the case of neural network training, and also point the way forward by identifying precise empirical phenomena suitable for further study. certain aspects of the edge of stability have been observed in previous empirical studies of full-batch gradient descent (xing et al., 2018; wu et al., 2018); our paper provides a unified explanation for these observations. furthermore, jastrzębski et al. (2020) proposed a simplified model for the evolution of the sharpness during stochastic gradient descent which matches our empirical observations in the special case of full-batch sgd (i.e. gradient descent). however, outside the full-batch special case, there is no evidence that their model matches experiments with any degree of quantitative precision, although their model does successfully predict the directional trend that large step sizes and/or small batch sizes steer sgd into regions of low sharpness. we discuss sgd at greater length in §6. to summarize, while the sharpness does not obey simple dynamics during sgd (as it does during gd), there are indications that the "edge of stability" intuition might generalize somehow to sgd, just in a way that does not center around the sharpness.

background: stability of gradient descent on quadratics

in this section, we review the stability properties of gradient descent on quadratic functions.
later, we will see that the stability of gradient descent on neural training objectives is partly well-modeled by the stability of gradient descent on the quadratic taylor approximation. on a quadratic objective function f(x) = (1/2) xᵀax + bᵀx + c, gradient descent with step size η will diverge³ if any eigenvalue of a exceeds the threshold 2/η. to see why, consider first the one-dimensional quadratic f(x) = (1/2) ax² + bx + c, with a > 0. this function has optimum x* = −b/a. consider running gradient descent with step size η starting from x_0. the update rule is x_{t+1} = x_t − η(a x_t + b), which means that the error x_t − x* evolves as (x_{t+1} − x*) = (1 − ηa)(x_t − x*). therefore, the error at step t is (x_t − x*) = (1 − ηa)^t (x_0 − x*), and so the iterate at step t is x_t = (1 − ηa)^t (x_0 − x*) + x*. if a > 2/η, then (1 − ηa) < −1, so the sequence {x_t} will oscillate around x* with ever-increasing magnitude, and diverge. now consider the general d-dimensional case. let (a_i, q_i) be the i-th largest eigenvalue/eigenvector pair of a. as shown in appendix b, when the gradient descent iterates {x_t} are expressed in the special coordinate system whose axes are the eigenvectors of a, each coordinate evolves separately. in particular, the coordinate for each eigenvector q_i, namely ⟨q_i, x_t⟩, evolves according to the dynamics of gradient descent on a one-dimensional quadratic objective with second derivative a_i.

¹ this nomenclature was inspired by the title of giladi et al. (2020).
² in the literature, the term "sharpness" has been used to refer to a variety of quantities, often connected to generalization (e.g. keskar et al. (2016)). in this paper, "sharpness" strictly means the maximum eigenvalue of the training loss hessian. we do not claim that this quantity has any connection to generalization.
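the one-dimensional analysis above is easy to check numerically. a minimal sketch (the curvature a = 20 and the two step sizes mirror the figure 2 setup; the function name is illustrative):

```python
def gd_quadratic_1d(a, b, eta, x0, steps):
    """iterate x_{t+1} = x_t - eta * (a * x_t + b) on f(x) = a x^2 / 2 + b x + c
    and return the error |x_t - x*| at each step, where x* = -b / a."""
    x_star = -b / a
    errors, x = [], x0
    for _ in range(steps):
        x = x - eta * (a * x + b)
        errors.append(abs(x - x_star))
    return errors

# with a = 20, the stability threshold is eta = 2 / a = 0.1:
# below it the error contracts by |1 - eta*a| < 1 each step,
# above it the error oscillates and grows by |1 - eta*a| > 1 each step.
stable = gd_quadratic_1d(a=20.0, b=0.0, eta=0.09, x0=1.0, steps=50)
unstable = gd_quadratic_1d(a=20.0, b=0.0, eta=0.11, x0=1.0, steps=50)
```

at eta = 0.09 the error shrinks geometrically by |1 − 0.09·20| = 0.8 per step, while at eta = 0.11 it grows by |1 − 0.11·20| = 1.2 per step, exactly as the closed-form (1 − ηa)^t factor predicts.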
³ for convex quadratics, this is "if and only if." however, if a has a negative eigenvalue, then gradient descent with any (positive) step size will diverge along the corresponding eigenvector.

therefore, if a_i > 2/η, then the sequence {⟨q_i, x_t⟩} will oscillate with ever-increasing magnitude; in this case, we say that the iterates {x_t} diverge along the direction q_i. to illustrate, figure 2 shows a quadratic function with eigenvalues a_1 = 20 and a_2 = 1. in figure 2(a), we run gradient descent with step size η = 0.09; since 0 < a_2 < a_1 < 2/η, gradient descent converges along both q_1 and q_2. in figure 2(b), we use step size η = 0.11; since 0 < a_2 < 2/η < a_1, gradient descent converges along q_2 yet diverges along q_1, so diverges overall.

figure 2: gradient descent on a quadratic with eigenvalues a_1 = 20 and a_2 = 1.

polyak momentum (polyak, 1964) and nesterov momentum (nesterov, 1983; sutskever et al., 2013) are notable variants of gradient descent which often improve the convergence speed. on quadratic functions, these two algorithms also diverge if the sharpness exceeds a certain threshold, which we call the "maximum stable sharpness," or mss. in particular, we prove in appendix b that gradient descent with step size η and momentum parameter β diverges if the sharpness exceeds:

mss_polyak(η, β) = (2 + 2β)/η,    mss_nesterov(η, β) = (1/η) · (2 + 2β)/(1 + 2β).    (1)

the polyak result previously appeared in goh (2017); the nesterov one seems to be new. note that this discussion only applies to full-batch gradient descent. as we discuss in §6, several recent papers have proposed stability analyses for sgd (wu et al., 2018; jastrzębski et al., 2020). neural network training objectives are not globally quadratic. however, the second-order taylor approximation around any point x_0 in parameter space is a quadratic function whose "a" matrix is the hessian at x_0.
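the polyak threshold in equation 1 can likewise be checked by simulating heavy-ball momentum on a one-dimensional quadratic (the nesterov case is analogous). a sketch; the step size, β, and the 5% margins around the threshold are illustrative:

```python
def polyak_diverges(a, eta, beta, steps=500):
    """simulate heavy-ball momentum x_{t+1} = x_t - eta * a * x_t
    + beta * (x_t - x_{t-1}) on f(x) = a x^2 / 2 and report whether
    the iterates blow up within `steps` iterations."""
    x_prev, x = 1.0, 1.0
    for _ in range(steps):
        x, x_prev = x - eta * a * x + beta * (x - x_prev), x
        if abs(x) > 1e6:
            return True
    return False

eta, beta = 0.01, 0.9
mss_polyak = (2 + 2 * beta) / eta      # equation 1: (2 + 2*0.9) / 0.01 = 380
below = polyak_diverges(a=0.95 * mss_polyak, eta=eta, beta=beta)  # sharpness just under the mss
above = polyak_diverges(a=1.05 * mss_polyak, eta=eta, beta=beta)  # sharpness just over the mss
```

with these values, a curvature 5% below the mss stays bounded while one 5% above it diverges, matching the threshold behavior described in the text.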
if any eigenvalue of this hessian exceeds 2/η, gradient descent with step size η would diverge if run on this quadratic function: the iterates would oscillate with ever-increasing magnitude along the corresponding eigenvector. therefore, at any point x_0 in parameter space where the sharpness exceeds 2/η, gradient descent with step size η would diverge if run on the quadratic taylor approximation to the training objective around x_0.

gradient descent on neural networks

in this section, we empirically characterize the behavior of gradient descent on neural network training objectives. section 4 will show that this characterization holds broadly.

progressive sharpening

when training neural networks, it seems to be a general rule that so long as the sharpness is small enough for gradient descent to be stable (< 2/η, for vanilla gradient descent), gradient descent has an overwhelming tendency to continually increase the sharpness. we call this phenomenon progressive sharpening. by "overwhelming tendency," we mean that gradient descent can occasionally decrease the sharpness (especially at the beginning of training), but these brief decreases always seem to be followed by a return to continual increase. jastrzębski et al. (2020) previously hypothesized (in their assumption 4) that a similar phenomenon may hold for sgd, but the evidence for, and the precise scope of, this effect are both currently far clearer for gradient descent than for sgd.

figure 3: so long as the sharpness is less than 2/η, it tends to continually increase during gradient descent. we train a network to completion (99% accuracy) using gradient descent with a very small step size. we consider both mse loss (left) and cross-entropy loss (right).

progressive sharpening is illustrated in figure 3. here, we use (full-batch) gradient descent to train a network on a subset of 5,000 examples from cifar-10, and we monitor the evolution of the sharpness during training.
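measuring the sharpness amounts to estimating the top hessian eigenvalue, which is commonly done with power iteration on hessian-vector products. the paper does not spell out its exact procedure here, so the sketch below uses an explicit symmetric matrix as a stand-in hessian; in practice the `hvp` oracle would come from double backpropagation through the training loss.

```python
import numpy as np

def sharpness(hvp, dim, iters=200, seed=0):
    """estimate the maximum hessian eigenvalue by power iteration,
    given only a hessian-vector-product oracle `hvp` (any callable
    v -> H @ v; in deep learning this comes from autodiff)."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)   # converges to the top eigenvector
    return float(v @ hvp(v))          # rayleigh quotient at convergence

# stand-in hessian with known top eigenvalue 20 (hypothetical values)
H = np.diag([20.0, 1.0, 0.5])
lam_max = sharpness(lambda v: H @ v, dim=3)
```

note that power iteration converges to the eigenvalue of largest magnitude, which coincides with the sharpness whenever the top eigenvalue dominates any negative curvature, as is typical late in training.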
the network is a fully-connected architecture with two hidden layers of width 200, and tanh activations. in figure 3(a), we train using the mean squared error loss for classification (hui & belkin, 2020), encoding the correct class with 1 and the other classes with 0. we use the small step size of η = 2/600, and stop when the training accuracy reaches 99%. we plot both the train loss and the sharpness, with a horizontal dashed line marking the stability threshold 2/η. observe that the sharpness continually rises during training (except for a brief dip at the beginning). this is progressive sharpening. for this experiment, we intentionally chose a step size η small enough that the sharpness remained beneath 2/η for the entire duration of training. cross-entropy. when training with cross-entropy loss, there is an exception to the rule that the sharpness tends to continually increase: with cross-entropy loss, the sharpness typically drops at the end of training. this behavior can be seen in figure 3(b), where we train the same network using the cross-entropy loss rather than mse. this drop occurs because once most data points are classified correctly, gradient descent tries to drive the cross-entropy loss to zero by scaling up the margins, as detailed in soudry et al. (2018). as we explain in appendix c, this causes the sharpness to drop. the effect of width. it is known that when networks parameterized in a certain way (the “ntk parameterization”) are made infinitely wide, the hessian moves a vanishingly small amount during training (jacot et al., 2018; lee et al., 2019; li & liang, 2018), which implies that no progressive sharpening occurs. in appendix d, we experiment with networks of varying width, under both ntk and standard parameterizations. we find that progressive sharpening occurs to a lesser degree as networks become increasingly wide. 
nevertheless, our experiments in §4 demonstrate that progressive sharpening occurs to a dramatic degree for standard architectures on the standard dataset cifar-10. we do not know why progressive sharpening occurs, or whether "sharp" solutions differ in any important way from "not sharp" solutions. these are important questions for future work. note that mulayoff & michaeli (2020) studied the latter question in the context of deep linear networks.

the edge of stability

in the preceding section, we ran gradient descent using step sizes η so small that the sharpness never reached the stability threshold 2/η. in figure 4(a), we start to train the same network at the larger step size of η = 0.01, and pause training once the sharpness rises to 2/η = 200. recall from §2 that in any region where the sharpness exceeds 2/η, gradient descent with step size η would be unstable if run on the quadratic taylor approximation to the training objective: the gradient descent iterates would oscillate with ever-increasing magnitude along the leading hessian eigenvector. empirically, we find that gradient descent on the real neural training objective behaves similarly, at first. namely, let q_1 be the leading hessian eigenvector at the iteration where the sharpness reaches 2/η. in figure 4(b), we resume training the network, and we monitor both the train loss and the quantity ⟨q_1, x_t⟩ for the next 215 iterations. observe that ⟨q_1, x_t⟩ oscillates with ever-increasing magnitude, similar to the divergent quadratic example in figure 2(b). at first, these oscillations are too small to affect the objective appreciably, and so the train loss continues to monotonically decrease. but eventually, these oscillations grow big enough that the train loss spikes.

figure 4: once the sharpness crosses 2/η, gradient descent becomes destabilized. we run gradient descent at η = 0.01. (a) the sharpness eventually reaches 2/η.
(b) once the sharpness crosses 2/η, the iterates start to oscillate along q_1 with ever-increasing magnitude. (c) somehow, gd does not diverge entirely; instead, the train loss continues to decrease, albeit non-monotonically.

once gradient descent becomes destabilized in this manner, classical optimization theory gives no clues as to what will happen next. one might imagine that gradient descent diverges entirely, or that it stalls while failing to make progress, or that it jumps to a flatter region and remains there. in reality, none of these outcomes occurs. in figure 4(c), we plot both the train loss and ⟨q_1, x_t⟩ for 1000 iterations after the sharpness first crossed 2/η. observe that gradient descent somehow avoids diverging entirely. instead, after initially spiking around iteration 215, the train loss continues to decrease, albeit non-monotonically. this numerical example is representative. in general, after the sharpness initially crosses 2/η, gradient descent enters a regime we call the edge of stability, in which (1) the sharpness hovers right at, or just above, the value 2/η; and (2) the train loss behaves non-monotonically over short timescales, yet decreases consistently over long timescales. indeed, in figure 5, we run gradient descent at a range of step sizes using both mse and cross-entropy loss. the left pane plots the train loss curves, with a vertical dotted line (of the appropriate color) marking the iteration where the sharpness first crosses 2/η. observe that the train loss decreases monotonically before this dotted line, but behaves non-monotonically afterwards. the middle pane plots the evolution of the sharpness, with a horizontal dashed line (of the appropriate color) at the value 2/η. observe that once the sharpness reaches 2/η, it ceases to increase further, and instead hovers right at, or just above, the value 2/η for the remainder of training.
(the precise meaning of “just above” varies: in figure 5, for mse loss, the sharpness hovers just a minuscule amount above 2/η, while for cross-entropy loss, the gap between the sharpness and 2/η is small yet non-minuscule.) at the edge of stability, gradient descent is “trying” to increase the sharpness further, but is being restrained from doing so. to demonstrate this, in figure 7, we train at step size η = 2/200 until reaching the edge of stability, and then at iteration 6,000 (marked by the vertical black line), we drop the step size to η = 2/300. observe that after the learning rate drop, the sharpness immediately starts to increase, and only stops increasing once gradient descent is back at the edge of stability. appendix o repeats this experiment on more architectures. intuitively, gradient descent with fixed step sizes acts like a constrained optimization algorithm: the use of step size η imposes an implicit 2/η constraint on the sharpness (nar & sastry, 2018), and at the edge of stability this constraint is “active.” observe from figure 5 that there do exist step sizes η (in purple) small enough that the sharpness never rises to 2/η. we call such a step size stable. however, observe that with cross-entropy loss, it takes 3700 iterations to train at the stable step size in purple, but only 1000 iterations to train at the larger step size in blue. in general, we always observe that stable step sizes are suboptimal in terms of convergence speed. in fact, in §4 we will see that for standard networks on cifar-10, stable step sizes are so suboptimally small that they are completely unreasonable.

figure 5: after the sharpness reaches 2/η, gradient descent enters the edge of stability. a network is trained with gradient descent at a range of step sizes (see legend), using both mse loss (top row) and cross-entropy (bottom row). left: the train loss curves, with a vertical dotted line at the iteration where the sharpness first crosses 2/η. center: the sharpness, with a horizontal dashed line at the value 2/η. right: sharpness plotted by time (= iteration × η) rather than iteration.

figure 6: momentum. we run gd with step size η = 0.01 and polyak or nesterov momentum at various β. for each algorithm, the horizontal dashed line marks the mss from equation 1.

figure 7: after a learning rate drop, progressive sharpening resumes. we start training at η = 2/200 (orange) and then after 6000 iterations (dotted vertical black line), we cut the step size to η = 2/300 (green). observe that as soon as the step size is cut, the sharpness starts to rise.

the “edge of stability” effect generalizes to gradient descent with momentum. in figure 6, we train using gradient descent with step size η = 0.01, and varying amounts of either polyak or nesterov momentum. observe that in each case, the sharpness rises until reaching the mss given by equation 1, and then plateaus there. appendix n has more momentum experiments. in appendix p, we briefly examine the evolution of the next few hessian eigenvalues during gradient descent. we find that each of these eigenvalues rises until plateauing near 2/η.

prior work. aspects of the edge of stability have been observed previously in the literature. wu et al. (2018) noted that the sharpness at the solution reached by full-batch gradient descent was not just less than 2/η, as was expected due to stability considerations, but was mysteriously approximately equal to 2/η. in retrospect, we can attribute this observation to progressive sharpening. xing et al. (2018) observed that full-batch gradient descent eventually enters a regime (the edge of stability) in which the training loss behaves non-monotonically, and the iterates oscillate along the direction of largest curvature; however, they did not relate this regime to the sharpness. lewkowycz et al.
(2020) found that in neural network training, if the sharpness at initialization is larger than 2/η, then after becoming initially destabilized, gradient descent does not always diverge entirely (as the quadratic taylor approximation would suggest), but rather sometimes “catapults” into a flatter region that is flat enough to stably accommodate the step size. it seems plausible that whichever properties of neural training objectives permit this so-called “catapult” behavior may also be the same properties that permit successful optimization at the edge of stability. indeed, optimization at the edge of stability can conceivably be viewed as a never-ending series of micro-catapults. as we discuss at greater length in §6, several papers (jastrzębski et al., 2017; 2019) have observed that large step sizes steer stochastic gradient descent into less sharp regions of the loss landscape, and jastrzębski et al. (2020) attributed this effect to the stability properties of sgd. finally, our precise characterization of the behavior of the sharpness during full-batch gradient descent adds to a growing body of work that empirically investigates the hessian spectrum of neural networks (sagun et al., 2017; ghorbani et al., 2019; li et al., 2020a; papyan, 2018; 2019; 2020).

the gradient flow trajectory

in the right pane of figure 5, we plot the evolution of the sharpness during gradient descent, with “time” = iteration × η, rather than iteration, on the x-axis. this allows us to directly compare the sharpness after, say, 100 iterations at η = 0.01 to the sharpness after 50 iterations at η = 0.02; both are time 1. observe that when plotted by time, the sharpnesses for gradient descent at different step sizes coincide until the time where each reaches 2/η.
this is because for this network, gradient descent at η = 0.01 and gradient descent at η = 0.02 initially travel the same path (moving at a speed proportional to η) until each reaches the point on that path where the sharpness hits 2/η. this path is the gradient flow trajectory. the gradient flow solution at time t is defined as the limit as η → 0 of the gradient descent iterate at iteration t/η (if this limit exists). the empirical finding of interest is that for this particular network, gradient descent tracks the gradient flow trajectory not only in the limit of infinitesimally small step sizes, but at any step size η, up until the point on the trajectory where the sharpness reaches 2/η. we can numerically approximate gradient flow trajectories by using the runge-kutta rk4 algorithm (press et al., 1992) to numerically integrate the gradient flow ode. empirically, for many but not all networks studied in this paper, we find that gradient descent at any step size η closely tracks the runge-kutta trajectory until reaching the point on that trajectory where the sharpness hits 2/η. (this sometimes occurs even for networks with relu activations or max-pooling, which give rise to training objectives that are not continuously differentiable, so the gradient flow trajectory is not necessarily guaranteed to exist.) for such networks, the gradient flow trajectory provides a coherent framework for reasoning about which step sizes will eventually enter the edge of stability. let λ0 be the sharpness at initialization, and let λmax be the maximum sharpness along the gradient flow trajectory. if η < 2/λmax, then gradient descent will stably track the gradient flow trajectory for the entire duration of training, and will never enter the edge of stability.
on the other hand, if η ∈ [2/λmax, 2/λ0], then gradient descent will stably track the gradient flow trajectory only until reaching the point on that trajectory where the sharpness hits 2/η; shortly afterwards, gradient descent will become destabilized, depart the gradient flow trajectory, and enter the edge of stability.

further experiments
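the dichotomy above (integrate the gradient flow ode, read off λ0 and λmax, and compare η against 2/λmax and 2/λ0) can be sketched numerically. the snippet uses the classical rk4 scheme on a toy quadratic whose exact flow is known; the function names and the toy objective are illustrative assumptions, not the paper's network setup.

```python
import numpy as np

def rk4_gradient_flow(grad, x0, t_end, steps):
    """Integrate the gradient flow ODE dx/dt = -grad(x) with classical RK4."""
    x = np.array(x0, dtype=float)
    h = t_end / steps
    for _ in range(steps):
        k1 = -grad(x)
        k2 = -grad(x + 0.5 * h * k1)
        k3 = -grad(x + 0.5 * h * k2)
        k4 = -grad(x + h * k3)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def gd_regime(eta, lam0, lam_max):
    """Which regime a step size lands in, given the sharpness at
    initialization (lam0) and the maximum sharpness along the gradient
    flow trajectory (lam_max), assuming lam0 <= lam_max."""
    if eta < 2.0 / lam_max:
        return "tracks gradient flow throughout"
    if eta <= 2.0 / lam0:
        return "enters the edge of stability"
    return "unstable from initialization"

# Sanity check of the integrator: for L(x) = 0.5 * a * x^2 the exact
# flow is x(t) = x0 * exp(-a * t).
a = 3.0
x = rk4_gradient_flow(lambda x: a * x, x0=[1.0], t_end=2.0, steps=200)
assert abs(x[0] - np.exp(-a * 2.0)) < 1e-8

assert gd_regime(0.005, lam0=50.0, lam_max=300.0) == "tracks gradient flow throughout"
assert gd_regime(0.01,  lam0=50.0, lam_max=300.0) == "enters the edge of stability"
```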
certified defences against adversarial patch attacks on semantic segmentation

maksym yatsura1,2∗, kaspar sakmann1, n. grace hua1, matthias hein2,3, jan hendrik metzen1 1bosch center for artificial intelligence, robert bosch gmbh, 2university of tübingen, 3tübingen ai center

abstract

adversarial patch attacks are an emerging security threat for real-world deep learning applications. we present demasked smoothing, the first approach (to our knowledge) to certify the robustness of semantic segmentation models against this threat model. previous work on certifiably defending against patch attacks has mostly focused on the image classification task and often required changes in the model architecture and additional training, which is undesirable and computationally expensive. in demasked smoothing, any segmentation model can be applied without particular training, fine-tuning, or restriction of the architecture. using different masking strategies, demasked smoothing can be applied both for certified detection and certified recovery. in extensive experiments we show that demasked smoothing can on average certify 63% of the pixel predictions for a 1% patch in the detection task and 46% against a 0.5% patch for the recovery task on the ade20k dataset.

introduction

physically realizable adversarial attacks are a threat for safety-critical (semi-)autonomous systems such as self-driving cars or robots. adversarial patches (brown et al., 2017; karmon et al., 2018) are the most prominent example of such an attack. their realizability has been demonstrated repeatedly, for instance by lee & kolter (2019): an attacker places a printed version of an adversarial patch in the physical world to fool a deep learning system. while empirical defenses (hayes, 2018; naseer et al., 2019; selvaraju et al., 2019; wu et al., 2020) may offer robustness against known attacks, they do not provide any guarantees against unknown future attacks (chiang et al., 2020).
thus, certified defenses for the patch threat model, which allow guaranteed robustness against all possible attacks for the given threat model, are crucial for safety-critical applications. research on certifiable defenses against adversarial patches can be broadly categorized into certified recovery and certified detection. certified recovery (chiang et al., 2020; levine & feizi, 2020; zhang et al., 2020; xiang et al., 2021; metzen & yatsura, 2021; lin et al., 2021; xiang et al., 2022a; salman et al., 2021; chen et al., 2022) has the objective to make a correct prediction on an input even in the presence of an adversarial patch. in contrast, certified detection (mccoyd et al., 2020; xiang & mittal, 2021b; han et al., 2021; huang & li, 2021) provides a weaker guarantee by only aiming at detecting inputs containing adversarial patches. while certified recovery is more desirable in principle, it typically comes at a high cost of reduced performance on clean data. in practice, certified detection might be preferable because it allows maintaining high clean performance. most existing certifiable defenses against patches are focused on image classification, with the exception of detectorguard (xiang & mittal, 2021a) and objectseeker (xiang et al., 2022b) that certifiably defend against patch hiding attacks on object detectors. moreover, existing defences are not easily applicable to arbitrary downstream models, because they assume either that the downstream model is trained explicitly for being certifiably robust (levine & feizi, 2020; metzen & yatsura, 2021), or that the model has a certain network architecture such as bagnet (zhang et al., 2020; metzen & yatsura, 2021; xiang et al., 2021) or a vision transformer (salman et al., 2021; huang & li, 2021). a notable exception is patchcleanser (xiang et al., 2022a), which can be combined with arbitrary downstream models but is restricted to image classification. 
∗correspondence to: maksym yatsura <maksym.yatsura@de.bosch.com>

figure 1: (a) a simple patch attack on the swin transformer (liu et al., 2021) manages to switch the prediction for a big part of the image. (b) masking the patch. (c) a sketch of demasked smoothing for certified image segmentation. first, we generate a set of masked versions of the image such that each possible patch can only affect a certain number of masked images. then we use image inpainting to partially recover the information lost during masking and then apply an arbitrary segmentation method. the output is obtained by aggregating the segmentations pixelwise. the masking strategy and aggregation method depend on the certification mode (detection or recovery).

adversarial patch attacks were also proposed for the image segmentation problem (nesti et al., 2022), mostly for attacking cnn-based models that use a localized receptive field (zhao et al., 2017). however, recently self-attention based vision transformers (dosovitskiy et al., 2021) have achieved a new state of the art in the image segmentation task (liu et al., 2021; bousselham et al., 2021). their output may become more vulnerable to adversarial patches if they manage to manipulate the global self-attention (lovisotto et al., 2022). we demonstrate how significant parts of the segmentation output may be affected by a small patch for the swin transformer (liu et al., 2021) in figure 1a. full details on the attack are available in appendix d. we point out that preventive certified defences are important because newly developed attacks can immediately be used to compromise safety-critical applications unless they are properly defended. in this work, we propose the novel framework demasked smoothing (figure 1c) to obtain the first (to the best of our knowledge) certified defences against patch attacks on semantic segmentation models.
similarly to previous work (levine & feizi, 2020), we mask different parts of the input (figure 1b) and provide guarantees with respect to every possible patch that is not larger than a certain pre-defined size. while prior work required the classification model to deal with such masked inputs, we leverage recent progress in image inpainting (dong et al., 2022) to reconstruct the input before passing it to the downstream model. this decoupling of image demasking from the segmentation task allows us to support arbitrary downstream models. moreover, we can leverage state-of-the-art methods for image inpainting. we also propose different masking schemes tailored for the segmentation task that provide input dense enough for the demasking model to understand the scene while still satisfying the guarantees with respect to the adversarial patch. we summarize our contributions as follows:

• we propose demasked smoothing, which is the first (to the best of our knowledge) certified recovery or certified detection based defence against adversarial patch attacks on semantic segmentation models (section 4).

• demasked smoothing can perform certified detection and recovery with any off-the-shelf segmentation model without requiring fine-tuning or any other adaptation.

• we implement demasked smoothing and evaluate it for different certification objectives and masking schemes (section 5). we can certify 63% of all pixels in certified detection for a 1% patch and 46% in certified recovery for a 0.5% patch for the beit-b (bao et al., 2022) segmentation model on the ade20k (zhou et al., 2017) dataset.

related work

certified recovery. the first certified recovery defence for classification models against patches was proposed by chiang et al. (2020), who adapted interval-bound propagation (gowal et al., 2019) to the patch threat model. levine & feizi (2020) proposed de-randomized smoothing (drs), which provides a significant accuracy improvement when compared to chiang et al.
(2020) and scales to the imagenet dataset. in drs, a base classifier is trained on images where everything but a small local region is masked (ablated). at inference time, a majority vote over all specified ablations is taken as the final classification. if this vote has a large enough margin over the runner-up class, the prediction cannot be shifted by any patch that does not exceed a pre-defined size. a similar approach was adopted in randomized cropping (lin et al., 2021). a general drawback of these approaches is that the classifier needs to be trained to process masked/cropped inputs, which (in contrast to our work) prohibits the usage of arbitrary pretrained models. a further line of work studies network architectures that are particularly suited for certified recovery. for instance, models with small receptive fields such as bagnets (brendel & bethge, 2019) have been explored, either by combining them with some fixed postprocessing (zhang et al., 2020; xiang et al., 2021) or by training them end-to-end for certified recovery (metzen & yatsura, 2021). salman et al. (2021) propose to apply drs to vision transformers (vits). in contrast to the aforementioned works, our demasked smoothing can be applied to models with arbitrary architecture. this is a property shared with patchcleanser (xiang et al., 2022a), which, however, is limited to image classification; it is not clear how it can be extended to semantic segmentation, where a class needs to be assigned to every pixel, including the masked ones. certified recovery against patches has also been extended to object detection, specifically to defend against patch hiding attacks. two notable works in this direction are detectorguard (xiang & mittal, 2021b), an extension of patchguard (xiang et al., 2021) to object detection, and objectseeker (xiang et al., 2022b).
randomized smoothing (cohen et al., 2019) has been applied to certify semantic segmentation models against ℓ2-norm bounded adversarial attacks (fischer et al., 2021). however, to the best of our knowledge, no certified defence against patch attacks for semantic segmentation has been proposed so far.

certified detection. an alternative to certified recovery is certified detection. here, an adversarial patch is allowed to change the model prediction. however, if it succeeds in doing so, there is a mechanism that detects this attack certifiably with zero false negatives. minority reports (mccoyd et al., 2020) was the first certified detection method against patches, which is based on sliding a mask over the input in a way that ensures that there will be one mask position that completely hides the patch. patchguard++ (xiang & mittal, 2021b) is an extension of minority reports where the sliding mask is not applied on the input but on the feature maps of a bagnet-type feature extractor. this reduces inference time drastically since the feature extractor needs to be executed only once per input. scalecert (han et al., 2021) tries to identify “superficial important neurons”, which allows pruning the network in a way that the prediction needs to be made for fewer masked inputs. lastly, patchveto (huang & li, 2021) is a recently proposed method for certified detection that is tailored towards vit models. it implements masking by removing certain input patches of the vit. in this work, we propose a novel method for certified detection in the semantic segmentation task that can be used for any pretrained model.

image reconstruction. the problem of learning to reconstruct the full image from inputs where parts have been masked out was pioneered by vincent et al. (2010). it recently attracted attention as a proxy task for self-supervised pre-training, especially for the vits (bao et al., 2022; he et al., 2021).
recent approaches to this problem use fourier convolutions (suvorov et al., 2022) and visual transformers (dong et al., 2022). spg-net (song et al., 2018) trains a subnetwork to reconstruct the full semantic segmentation directly from the masked input as a part of the image inpainting pipeline. in this work, we use the state-of-the-art zits (dong et al., 2022) inpainting method.

problem setup

semantic segmentation

in this work, we focus on the semantic segmentation task. let x be a set of rectangular images. let x ∈ x be an image with height h, width w and number of channels c. we denote y to be a finite label set. the goal is to find the segmentation map s ∈ y^{h×w} for x. for each pixel x_{i,j}, the corresponding label s_{i,j} denotes the class of the object to which x_{i,j} belongs. we denote s to be a set of segmentation maps and f : x → s to be a segmentation model.

threat model

let us consider an untargeted adversarial patch attack on a segmentation model. consider an image x ∈ [0, 1]^{h×w×c} and its ground truth segmentation map s. assume that the attacker can modify an arbitrary rectangular region of the image x which has a size of h′ × w′. we refer to this modification as a patch. let l ∈ {0, 1}^{h×w} be a binary mask that defines the patch location in the image, in which ones denote the pixels belonging to the patch. let l be the set of all possible patch locations for a given image x. let p ∈ [0, 1]^{h×w×c} be the modification itself. then we define an operator a as a(x, p, l) = (1 − l) ⊙ x + l ⊙ p, where ⊙ is the element-wise product. the operator a applies the h′ × w′ subregion of p defined by the binary mask l to the image x while keeping the rest of the image unchanged. we denote p := [0, 1]^{h×w×c} × l to be the set of all possible patch configurations (p, l) that define an h′ × w′ patch. let s ∈ s be the ground truth segmentation for x. let q(f(x), s) be some quality metric such as global pixel accuracy or mean intersection over union (miou).
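the operator a is a direct element-wise formula, so it can be transcribed into numpy almost verbatim; the only detail is broadcasting the binary mask l over the channel axis. a minimal sketch (the helper name is ours):

```python
import numpy as np

def apply_patch(x, p, l):
    """a(x, p, l) = (1 - l) * x + l * p : paste the region of p where
    the binary mask l is 1 into x, leaving the rest of x unchanged."""
    l = l[..., None]                     # (h, w) -> (h, w, 1) for channels
    return (1 - l) * x + l * p

h, w, c = 4, 4, 3
x = np.zeros((h, w, c))                  # clean image
p = np.ones((h, w, c))                   # adversarial modification
l = np.zeros((h, w), dtype=int)
l[1:3, 1:3] = 1                          # a 2x2 patch location

out = apply_patch(x, p, l)
assert out[1, 1, 0] == 1.0               # inside the patch: taken from p
assert out[0, 0, 0] == 0.0               # outside the patch: original x
assert out.sum() == 2 * 2 * c            # exactly the patch pixels changed
```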
the goal of an attacker is to find (p⋆, l⋆) s. t. (p⋆, l⋆) = arg min (p, l)∈p q(f (a(x, p, l)), s) defence objective in this paper, we propose certified defences against patch attacks. it means that we certify against any possible attack from p including (p⋆, l⋆). we consider two robustness objectives. certified recovery for a pixel xi,j our goal is to verify that the following statement is true ∀ (p, l) ∈ p : f (a(x, p, l))i,j = f (x)i,j certified detection we consider a verification function v defined on x such that v(x) ∈ {0, 1}h×w . if v(x)i,j = 1, then the adversarial patch attack on xi,j can be detected by applying the function v to the attacked image x′ = a(x, p, l). v(x)i,j = 1 ⇒ (cid:104) ∀ (p, l) ∈ p : v(a(x, p, l))i,j = 1 → f (a(x, p, l))i,j = f (x)i,j v(x′)i,j = 0 means an alert on pixel x′ i,j. however, if x′ is not an adversarial example, then this is a false alert. in that case the fraction of pixels for which we return false alert is called false alert ratio (far). the secondary objective is to keep far as small as possible. depending on the objective our goal is to certify one of the conditions 1, 2 for each pixel xi,j. this provides us an upper bound on an attacker’s effectiveness under any adversarial patch attack from p. demasked smoothing demasked smoothing (figure 1c) consists of several steps. first, we apply a predefined set of masks with specific properties to the input image to obtain a set of masked images. then we reconstruct the masked regions of each image based on the available information with an inpainting model g. after that we apply a segmentation model f to the demasked results. finally, we aggregate the segmentation outcomes and make a conclusion for the original image with respect to the statements (1) or (2). input masking motivation. like in previous work (section 2) we apply masking patterns to the input image and use predictions on masked images to aggregate the robust result. 
if an adversarial patch is completely masked, it has no effect on further processing. however, in semantic segmentation, we predict not a single whole-image label like in the classification task, but a separate label for each pixel. thus, making predictions on a masked image must allow us to predict the labels also for the masked pixels.

preliminaries. consider an image x ∈ [0, 1]^{h×w×c}. we define "∗" to be a special masking symbol that does not correspond to any pixel value and has the property ∀z ∈ r : z × ∗ = ∗. please note that ∗ needs to be different from 0, since 0 is a valid pixel value in unmasked inputs. let m ∈ {∗, 1}^{h×w} be a mask. we call the element-wise product x ⊙ m a masking of x. in a masking, a subset of pixels becomes ∗ and the rest remains unchanged.

figure 2: (a) original image; examples of the column masks: t = 2 (b, c), the 3-mask: t = 3 (d, e), and the 4-mask: t = 4 (f, g), with the number of masks k = 5, 7, 9 respectively; and the detection column (h) and detection row (i) masks. the number on a block denotes in which mask it is visible (there is only one such mask for each block). for each mask set, we show one of the locations l in which an adversarial patch (p, l) affects t different maskings.

we consider the threat model p with patches of size h′ × w′ (section 3.2). to define the structure of our masks, we break m into an array b of non-intersecting blocks, each having the same size h′ × w′ as the adversarial patch. we index the blocks as b[q, r], 1 ≤ q ≤ ⌈h/h′⌉, 1 ≤ r ≤ ⌈w/w′⌉. we say that the block b[q, r] is visible in a mask m if ∀(i, j) ∈ b[q, r] : m_{i,j} = 1. consider an array m of k masks. we define each mask m[k] by the set of blocks that are visible in it. for certified recovery, each block is visible in exactly one mask and masked in the others. we say that a mask m is affected by a patch (p, l) if a(x, p, l) ⊙ m ̸= x ⊙ m. we define t(m) = max_{(p,l)∈p} |{m ∈ m : a(x, p, l) ⊙ m ̸= x ⊙ m}|.
that is, t(m) is the largest number of masks affected by some patch. if m is defined, we refer to the value t(m) as t for simplicity.

certified recovery. we define column masking m for which t = 2. we assign every k-th block column to be visible in the mask m[k] (figure 2c). any (p, l) ∈ p can intersect at most two adjacent columns, since (p, l) has the same width as a column. thus, it can affect at most two masks (figure 2b). a similar scheme can be proposed for the rows. due to the block size in b, the patch (p, l) cannot intersect more than four blocks at once. we define a mask set that we call 3-mask s.t. for any four adjacent blocks, two are visible in the same mask (figure 2d). hence, a patch for 3-mask can affect no more than 3 masks, t = 3. to achieve t = 4, any assignment of visible blocks to the masks works. we consider a 4-mask that allows uniform coverage of the visible blocks in the image (figure 2f). see details on the masking schemes in appendix b.

certified detection. we define md to be a set of masks for certified detection (we use the subscript d for distinction). md should have the property:

∀ (p, l) ∈ p ∃ m ∈ md : a(x, p, l) ⊙ m = x ⊙ m

i.e., for every patch there exists at least one mask not affected by this patch. for a patch of size h′ × w′, we consider k = w − w′ + 1 masks such that the mask md[k] masks a column of width w′ starting at the horizontal position k in the image (figure 2h). to obtain the guarantee for the same p with a smaller k, we consider a set of strided columns of width w′′ ≥ w′ and stride w′′ − w′ + 1 that also satisfy the condition (see the proof adapted from xiang et al. (2022a) in appendix a). a similar scheme can be proposed for the rows (figure 2i). alternatively, we could use a set of block masks of size h′ × w′, but then the number of masks grows quadratically with the image resolution. hence, in the experiments we focus on the column and the row masking schemes.
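both masking properties can be checked mechanically. the sketch below (hypothetical helper names, 1-d column masks only, whereas the paper's masks are 2-d) builds the recovery column masks and the sliding detection columns, then verifies t = 2 for the former and the full-coverage property for the latter:

```python
import numpy as np

def recovery_column_masks(n_cols, k):
    """k recovery masks over block columns: block column q is visible
    only in mask q mod k.  True = visible."""
    masks = np.zeros((k, n_cols), dtype=bool)
    for q in range(n_cols):
        masks[q % k, q] = True
    return masks

def detection_column_masks(w, w_patch):
    """One detection mask per horizontal position: mask k hides the
    pixels [k, k + w_patch).  True = hidden."""
    n = w - w_patch + 1
    masks = np.zeros((n, w), dtype=bool)
    for k in range(n):
        masks[k, k:k + w_patch] = True
    return masks

# Recovery: a patch as wide as a block column overlaps at most two
# adjacent columns, which are visible in two different masks -> t = 2.
w, w_patch, k = 40, 4, 5
masks = recovery_column_masks(n_cols=w // w_patch, k=k)

def n_affected(left):
    """How many masks a patch [left, left + w_patch) affects."""
    cols = {left // w_patch, (left + w_patch - 1) // w_patch}
    return len({int(np.argmax(masks[:, c])) for c in cols})

assert max(n_affected(left) for left in range(w - w_patch + 1)) == 2

# Detection: for every patch position there is a mask hiding it
# entirely, so at least one masking is unaffected by any patch.
d_masks = detection_column_masks(w=12, w_patch=3)
for left in range(12 - 3 + 1):
    assert any(d_masks[m, left:left + 3].all() for m in range(d_masks.shape[0]))
```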
let g be a demasking model, g(x ⊙ m) ∈ [0, 1]^{h×w×c}. the goal of g is to make the reconstruction g(x ⊙ m) as close as possible (in some metric) to the original image x. for a segmentation model f we define a segmentation array s(m, x, g, f), s[k] := f(g(x ⊙ m[k])), 1 ≤ k ≤ K, where K denotes the number of masks.

certification
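the excerpt breaks off at the certification step, but the setup above is built for the standard de-randomized-smoothing argument: if a patch can affect at most t of the k maskings, a pixel's prediction is certified when the winning label's vote count exceeds the runner-up's by more than 2t, since an adversary can at most remove t votes from the winner and add t to another class. a hedged sketch of that margin check (not the paper's exact certification procedure):

```python
from collections import Counter

def certify_pixel(labels, t):
    """labels: the per-masking predictions for one pixel.
    A patch can change at most t of them, moving up to t votes from
    the winner to the runner-up; certify if the margin survives that."""
    counts = Counter(labels)
    (top, n_top), *rest = counts.most_common()
    n_second = rest[0][1] if rest else 0
    return (top, n_top - n_second > 2 * t)

# 7 maskings, patch affects at most t = 2 of them.
assert certify_pixel(["road"] * 7, t=2) == ("road", True)                 # margin 7 > 4
assert certify_pixel(["road"] * 5 + ["car"] * 2, t=2) == ("road", False)  # margin 3 <= 4
assert certify_pixel(["road"] * 6 + ["car"], t=2) == ("road", True)       # margin 5 > 4
```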
bag of instances aggregation boosts self-supervised distillation

haohang xu1,2∗ jiemin fang3,4∗ xiaopeng zhang2 lingxi xie2 xinggang wang4 wenrui dai1 hongkai xiong1 qi tian2 1shanghai jiao tong university 2huawei inc. 3institute of artificial intelligence, huazhong university of science & technology 4school of eic, huazhong university of science & technology {xuhaohang, daiwenrui,xionghongkai}@sjtu.edu.cn {jaminfong, xgwang}@hust.edu.cn {zxphistory, 198808xc}@gmail.com tian.qi1@huawei.com

abstract

recent advances in self-supervised learning have experienced remarkable progress, especially for contrastive learning based methods, which regard each image as well as its augmentations as an individual class and try to distinguish them from all other images. however, due to the large quantity of exemplars, this kind of pretext task intrinsically suffers from slow convergence and is hard to optimize. this is especially true for small-scale models, for which we find that performance drops dramatically compared with the supervised counterpart. in this paper, we propose a simple but effective distillation strategy for unsupervised learning. the highlight is that the relationship among similar samples counts and can be seamlessly transferred to the student to boost the performance. our method, termed bingo, which is short for bag of instances aggregation, targets transferring the relationship learned by the teacher to the student. here a bag of instances indicates a set of similar samples constructed by the teacher and grouped within a bag, and the goal of distillation is to aggregate compact representations over the student with respect to the instances in a bag.
notably, bingo achieves new state-of-the-art performance on small-scale models, i.e., 65.5% and 68.9% top-1 accuracies with linear evaluation on imagenet, using resnet-18 and resnet-34 as backbone, respectively, surpassing baselines (52.5% and 57.4% top-1 accuracies) by a significant margin. the code is available at https://github.com/haohang96/bingo.

introduction

convolutional neural networks (cnns) have achieved great success in the field of computer vision, including image classification (he et al., 2016), object detection (ren et al., 2015) and semantic segmentation (chen et al., 2017). however, most of the time, cnns cannot succeed without enormous human-annotated data. recently, self-supervised learning, typified by contrastive learning (he et al., 2020; chen et al., 2020a), has been tackling the annotation-hungry challenge and has achieved great success. most current self-supervised methods, however, focus on large networks, e.g., resnet-50 (he et al., 2016) with more than 20m parameters, while real-life deployment usually involves computation-limited scenarios, e.g., mobile/edge devices. due to the lack of annotations in unsupervised tasks, learning from unlabeled data is challenging. recent contrastive learning methods (he et al., 2020; chen et al., 2020a) tackle this problem by narrowing the gaps between embeddings of different augmentations of the same image. techniques like a momentum encoder for stable updating, a memory bank for storing negative pairs, complicated data augmentation strategies, etc., have been proposed to avoid collapse and promote performance.

*equal contributions. the work was done during the internship of haohang xu and jiemin fang at huawei inc.

figure 1: overall performance comparisons between bingo and other unsupervised distillation methods. (a) linear classification accuracy on imagenet over different student architectures distilled by a resnet-50×2 teacher model. (b) semi-supervised learning by fine-tuning on 1% and 10% labeled images on imagenet using a resnet-18 student and a resnet-152 teacher model.

with the above techniques, contrastive learning methods show promising performance. however, contrastive learning requires discriminating all instances; due to the large quantity of exemplars, this kind of pretext task intrinsically suffers from slow convergence and is hard to optimize. this issue becomes severe for small-scale models, which carry too few parameters to fit the enormous data. inspired by supervised learning, where knowledge from large models can effectively promote the learning ability of small models through distillation, exploring knowledge distillation for unsupervised small models becomes an important topic. compress (fang et al., 2020) and seed (fang et al., 2020) are two typical methods for unsupervised distillation, which propose to transfer knowledge from the teacher in terms of similarity distributions among different instances. however, as the similarity distribution is computed by randomly sampling instances from a dynamically maintained queue, this kind of knowledge is mostly constructed from instances with low relation, which fails to effectively model the similarity of highly related samples. to solve this issue, we propose a new self-supervised distillation method, which transfers knowledge by aggregating bags of related instances, named bingo. in our empirical studies, transferring knowledge based on highly related samples boosts performance more effectively than previous relation-agnostic methods. specifically, we select an unsupervised pretrained large model as the teacher. first, we map the conventional instance-wise dataset into a bag-wise one. each original instance is set as the anchor instance of its bag.
by matching similarities of all the other instances’ embeddings produced by the teacher model, we feed instances which show high similarity with the anchor instance into the bag. then we apply the bagged dataset to the small model distillation process. to this end, we propose a bag-aggregation distillation loss, which consists of two components: inter-sample distillation and intra-sample distillation. for intra-sample distillation, embeddings of the student and teacher from two augmentations of the same instance are pushed together; for inter-sample distillation, embeddings of all instances in one bag are pushed to be more similar to the anchor one. equipped with the two proposed distillation losses, the bag-based knowledge from the teacher can be well transferred to the student, which shows significant advantages over previous relation-agnostic approaches (fang et al., 2020; abbasi koohpayegani et al., 2020). our contributions can be summarized as follows. • we propose a new self-supervised distillation method, which bags related instances by matching similarities of instance embeddings produced by the teacher. the bagged dataset can effectively boost small model distillation by aggregating instance embeddings in bags. the proposed relation-guided method shows stronger performance than previous relation-agnostic ones. • bingo promotes the performance of both resnet-18 and -34 to new state-of-the-art (sota) levels in unsupervised scenarios. it is worth noting that the distilled models also present far better performance than previous sota methods on other tasks, i.e., knn classification and semi-supervised learning. • bingo provides a new paradigm for unsupervised distillation, in which knowledge between instances with high relation can be more effective than relation-agnostic knowledge. this may be inspiring for further explorations of knowledge transfer in unsupervised scenarios.
related work self-supervised learning as a generic framework to learn representations with unlabeled data, self-supervised learning has experienced remarkable progress over the past few years. by constructing a series of pretext tasks, self-supervised learning aims at extracting discriminative representations from input data. previous methods obtain self-supervised representations mainly via a corrupting and recovering manner, from the perspectives of spatial ordering (noroozi & favaro, 2016), rotation changes (komodakis & gidaris, 2018), in-painting (pathak et al., 2016), colorization (zhang et al., 2016), etc. recently, contrastive learning based methods (he et al., 2020; chen et al., 2020a) have emerged and significantly promoted the performance of self-supervised learning; they aim at maximizing the mutual information between two augmented views of an image. a series of subsequent works (grill et al., 2020; xu et al., 2020b; dwibedi et al., 2021) further improve the performance to a very high level. khosla et al. (2020) applies contrastive learning to supervised learning, selecting positive samples from the same category. caron et al. (2020) proposes to align the distributions of one instance’s different views over other categories. however, few of these works pay attention to self-supervised learning on small-scale models, which is of critical importance for deploying self-supervised models on lightweight devices. we propose an effective method to boost the self-supervised learning of small models, which takes advantage of relation-based knowledge between data and shows superior performance over previous methods. knowledge distillation knowledge distillation aims to transfer knowledge from one model (teacher) to another (student), usually from a large to a small one, and is commonly used for improving the performance of lightweight models. hinton et al.
(2015) first proposes knowledge distillation via minimizing the kl-divergence between the student’s and teacher’s logits, which uses the predicted class probabilities from the teacher as soft labels to guide the student model. instead of mimicking the teacher’s logits, romero et al. (2014) transfers the knowledge by minimizing the ℓ2 distance between intermediate outputs of the teacher and student model. to solve the dimension mismatch, romero et al. (2014) uses a randomly initialized projection layer to enlarge the dimension of a narrower student model. based on romero et al. (2014), zagoruyko & komodakis (2016) utilizes knowledge stored in the attention map generated by the teacher model, and pushes the student model to pay attention to the areas the teacher focuses on. zhou et al. (2021) proposes weighted soft labels to adaptively improve the bias-variance tradeoff of each sample. besides the perspectives of soft labels and intermediate features, the relation between samples is also important knowledge. park et al. (2019) and liu et al. (2019) train the student model by aligning its pair-wise similarity graph with the teacher’s. recently, some works extend the above distillation methods into self-supervised learning scenarios. tian et al. (2019) uses the contrastive loss to learn cross-modality consistency. xu et al. (2020a), fang et al. (2020) and abbasi koohpayegani et al. (2020) share a similar methodology with caron et al. (2020) of aligning feature distributions between views of the same instances. the distribution is computed as the pair-wise similarities between the student’s outputs and features stored in a memory bank. however, the above relation-based self-supervised distillation methods only compute the similarity between the anchor sample and instances randomly sampled from a maintained queue, which ignores the relation between the sampled and anchor instances. choi et al.
(2021) uses the teacher model to produce cluster assignments, and encourages the student model to mimic the output of the trainable teacher model on-the-fly, which achieves promising results. gao et al. (2021) strengthens the student model by adding a regularization loss to the original contrastive loss, which aims at minimizing the ℓ2 distance between the student’s and teacher’s embeddings. navaneet et al. (2021) also achieves competitive results with feature regression in self-supervised distillation. we propose to transfer the relation knowledge between models via a new type of dataset, which bags related instances. by aggregating the bagged instances, the relation knowledge can be effectively transferred. approach in this section, we introduce the proposed bingo in detail. first, we discuss how to bag samples in the instance-wise dataset. after the samples are bagged, the bag-aggregation based knowledge distillation is introduced. we also discuss how to compute the bag-aggregation loss and how it improves the performance of the lightweight model. the overall framework is illustrated in fig. 2. (figure 2: an overview of the proposed method. the samples are first bagged via feature similarity. then the related instances in a bag are aggregated via the intra-sample and inter-sample distillation losses. the figure on the top-right is an intuitive explanation of how bag aggregation works.) bagging instances with similarity matching given the unlabeled training set x = {x1, x2, ..., xn}, we define the corresponding bag-wise training set as ω = {ω1, ω2, ..., ωn}, where each bag ωi consists of a set of instances. to convert the instance-wise dataset to a bag-wise one, we first feed x into a pretrained teacher model ft and get the corresponding features v = {v1, v2, ..., vn}, where vi = ft(xi). for each anchor sample xa in the dataset, we find positive samples which share high similarity with the anchor sample.
then the anchor sample as well as the similar samples are combined to form a bag. the samples in one bag have a compact representation in the embedding space. several mapping functions can be used to find similar samples: k-nearest neighbors for each anchor sample xa in the instance-wise dataset, we first compute the pairwise similarities with all samples in the dataset, sa = {va · vi | i = 1, 2, ..., n}. the bag ωa corresponding to xa is defined as ωa = top-rank(sa, k), where top-rank(·, k) returns the indices of the top k items in a set. k-means clustering given the training feature set v = {v1, v2, ..., vn}, we first assign a pseudo-label qi to each sample i, where qi ∈ {q1, ..., qk}. the clustering process is performed by minimizing the term −vi^t c_{qi}, where c_{qi} denotes the center feature of all features belonging to the label qi, i.e., c_{qi} = Σ_{j : qj = qi} vj. the bag ωa of anchor sample xa is defined as ωa = {i | qi = qa, ∀i = 1, 2, ..., n}. ground truth label if the ground truth label is available, we can also bag samples with the human-annotated semantic labels. given the label set y = {y1, y2, ..., yn}, we can bag related instances of the anchor sample xa via ωa = {i | yi = ya, ∀i = 1, 2, ..., n}. in this paper, we use k-nearest neighbors as the bagging strategy. more details about the performance of the k-means clustering based bagging strategy can be found in the appendix. note that bagging instances via the ground truth label is only used to measure the upper bound of the proposed method.
to aggregate the representations within a bag into more compact embeddings, we minimize the following target function: min_{θs} l = E_{xi∼ωa}[ℓ(fs(xi), ft(xa))], where ℓ is a metric function measuring the distance between two embeddings – many metrics can be selected, such as cosine similarity, euclidean distance, etc. here we use the normalized cosine similarity, i.e., the contrastive loss commonly used in self-supervised learning, to measure the distance between xi and the anchor sample xa. the target function in eq. 5 can be divided into two components: l = ℓ(fs(t1(xa)), ft(t2(xa))) + E_{xi∼ωa\{xa}}[ℓ(fs(t3(xi)), ft(t2(xa)))], where the first term focuses on pulling different views (augmentations) of the same sample together, and the second term aims at pulling different samples within the same bag closer together. the three separate data augmentation operators t1, t2, t3 are randomly sampled from the same family of moco-v2 augmentations t, which is also adopted in seed (fang et al., 2020) and disco (gao et al., 2021). we term the first term lintra and the second term linter. intra-sample distillation the intra-sample distillation loss is a variant of the conventional contrastive loss. contrastive learning aims to learn representations by discriminating the positive key among negative samples. given two augmented views x and x′ of one input image, moco (chen et al., 2020c) uses an online encoder fq and a momentum encoder fk to generate embeddings of the positive pair: q = fq(x), k = fk(x′). the contrastive loss is defined as lcontrast = − log [ exp(q · k+/τ) / Σ_{i=0}^{n} exp(q · ki/τ) ]. during distillation, we simply replace fq and fk by the student model fs and the teacher model ft, while weights of the teacher model ft are pretrained and are not updated during distillation.
the intra-sample distillation loss can be formulated as lintra = − log [ exp(fs(t1(xa)) · ft(t2(xa))/τ) / Σ_{i=0}^{n} exp(fs(t1(xa)) · k−i/τ) ], where τ is the temperature parameter. we select the negative samples k− from a memory bank, which is widely used in moco (he et al., 2020) and many subsequent contrastive learning methods. the memory bank is a queue of data embeddings whose size is much larger than a typical minibatch size. after each forward iteration, items in the queue are progressively replaced by the current output of the teacher network. inter-sample distillation given the anchor sample xa and a positive sample xp in the bag ωa, it is natural to map highly related samples to more similar representations. in other words, we want the bag filled with related samples to be more compact. inspired by eq. 8, we define the inter-sample distillation loss as linter = − log [ exp(fs(t3(xp)) · ft(t2(xa))/τ) / Σ_{i=0}^{n} exp(fs(t3(xp)) · k−i/τ) ]. the intra- and inter-sample distillation losses serve different roles. the intra-sample distillation works like conventional distillation (hinton et al., 2015; romero et al., 2014), which aims at minimizing distances between outputs of the teacher and student model given the same input. in contrast, the inter-sample distillation mainly focuses on transferring the data relation knowledge, obtained from the pretrained teacher model, using the bag-wise dataset as the carrier. experiments | 5 | [
108.299,
572.8436768,
200.0834953,
584.7988768
] |
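the k-nearest-neighbor bagging and the contrastive distillation terms described in this row can be sketched in a few lines. this is a minimal numpy illustration under stated assumptions, not the authors' implementation; the function names, toy dimensions, and single-positive setup are ours:

```python
import numpy as np

def bag_by_knn(feats, k):
    """bag instances by k-nearest neighbors in the teacher embedding space.
    feats: (n, d) l2-normalized teacher features; returns (n, k) bag indices."""
    sims = feats @ feats.T                    # pairwise cosine similarities s_a
    return np.argsort(-sims, axis=1)[:, :k]   # top-rank(s_a, k); column 0 is the anchor itself

def info_nce(q, k_pos, negatives, tau=0.2):
    """contrastive loss -log( exp(q.k+/tau) / sum_i exp(q.k_i/tau) ), the shared
    form of both the intra-sample term (query from an anchor view) and the
    inter-sample term (query from a bagged positive)."""
    logits = np.concatenate(([q @ k_pos], negatives @ q)) / tau
    logits -= logits.max()                    # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

in the full method the query would come from the student encoder and the key from the frozen teacher; here both are plain vectors to keep the sketch self-contained.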
RJkAHKp7kNZ.pdf | 2,022 | 0 | vision-based manipulators need to also see from their hands kyle hsu∗, moo jin kim∗, rafael rafailov, jiajun wu, chelsea finn stanford university {kylehsu,moojink,rafailov,jiajunwu,cbfinn}@cs.stanford.edu abstract we study how the choice of visual perspective affects learning and generalization in the context of physical manipulation from raw sensor observations. compared with the more commonly used global third-person perspective, a hand-centric (eye-in-hand) perspective affords reduced observability, but we find that it consistently improves training efficiency and out-of-distribution generalization. these benefits hold across a variety of learning algorithms, experimental settings, and distribution shifts, and for both simulated and real robot apparatuses. however, this is only the case when hand-centric observability is sufficient; otherwise, including a third-person perspective is necessary for learning, but also harms out-of-distribution generalization. to mitigate this, we propose to regularize the third-person information stream via a variational information bottleneck. on six representative manipulation tasks with varying hand-centric observability adapted from the meta-world benchmark, this results in a state-of-the-art reinforcement learning agent operating from both perspectives improving its out-of-distribution generalization on every task. while some practitioners have long put cameras in the hands of robots, our work systematically analyzes the benefits of doing so and provides simple and broadly applicable insights for improving end-to-end learned vision-based robotic manipulation.1 figure 1: illustration suggesting the role that visual perspective can play in facilitating the acquisition of symmetries with respect to certain transformations on the world state s. t0: planar translation of the end-effector and cube. t1: vertical translation of the table surface, end-effector, and cube. t2: addition of distractor objects.
o3: third-person perspective. oh: hand-centric perspective. introduction physical manipulation is so fundamental a skill for natural agents that it has been described as a “rosetta stone for cognition” (ritter & haschke, 2015). how can we endow machines with similar ∗co-first authorship. order determined by coin flip. 1project website: https://sites.google.com/view/seeing-from-hands. mastery over their physical environment? one promising avenue is to use a data-driven approach, in which the mapping from raw sensor observations of the environment (and other readily available signals, e.g. via proprioception) to actions is acquired inductively. helpful inductive biases in modern machine learning techniques such as over-parameterized models and stochastic gradient descent have enabled surprising (and poorly understood) generalization capabilities in some applications (neyshabur et al., 2014; belkin et al., 2019; zhang et al., 2021). despite this, visuomotor policies learned end-to-end remain brittle relative to many common real-world distribution shifts: subtle changes in lighting, texture, and geometry that would not faze a human cause drastic performance drops (julian et al., 2020). while a wide variety of algorithms have been proposed to improve the learning and generalization of object manipulation skills, in this paper we instead consider the design of the agent’s observation space, a facet of the learning pipeline that has been underexplored (section 5). indeed, in some applications of machine learning, e.g., image classification or text summarization, the disembodied nature of the task affords relatively little flexibility in this regard. yet, even in these settings, simple data processing techniques such as normalization and data augmentation can have noticeable effects on learning and generalization (perez & wang, 2017). 
the role of data can only be more profound in an embodied setting: any sensors capable of being practically instrumented will only provide a partial observation of the underlying world state. while partial observability is typically regarded as a challenge that only exacerbates the difficulty of a learning problem (kaelbling et al., 1998), we may also consider how partial observations can facilitate the acquisition of useful symmetries. the natural world gives clear examples of this. for instance, because cutaneous touch is inherently restricted to sensing portions of the environment in direct contact with the agent, tactile sensing by construction exhibits invariances to many common transformations on the underlying world state; grasping an apple from the checkout counter (without looking at it) is largely the same as doing so from one’s kitchen table. due in part to the nascent state of tactile sensing hardware (yuan et al., 2017) and simulation (agarwal et al., 2020), in this work we investigate the above insight in vision, the ubiquitous sensory modality in robotic learning. in particular, we focus on the role of perspective as induced from the placement of cameras. to roughly imitate the locality of cutaneous touch, we consider the hand-centric (eye-in-hand) perspective arising from mounting a camera on a robotic manipulator’s wrist. we also consider the more commonly used third-person perspective afforded by a fixed camera in the world frame. the main contribution of this work is an empirical study of the role of visual perspective in learning and generalization in the context of physical manipulation. we first perform a head-to-head comparison between hand-centric and third-person perspectives in a grasping task that features three kinds of distribution shifts. 
we find that using the hand-centric perspective, with no other algorithmic modifications, reduces the aggregate out-of-distribution failure rate by 92%, 99%, and 100% (relative) in the imitation learning, reinforcement learning, and adversarial imitation learning settings in simulation, and by 45% (relative) in the imitation learning setting on a real robot apparatus. despite their apparent superiority, hand-centric perspectives cannot be used alone for tasks in which their limited observability is a liability during training. to realize the benefits of hand-centric perspectives more generally, we propose using both hand-centric and third-person perspectives in conjunction for full observability while regularizing the latter with a variational information bottleneck (alemi et al., 2016) to mitigate its detrimental effects on out-of-distribution generalization. we instantiate this simple and broadly applicable principle in drq-v2 (yarats et al., 2021), a state-of-the-art vision-based reinforcement learning algorithm, and find that it reduces the aggregate out-of-distribution failure rate compared to using both perspectives naively by 64% (relative) across six representative manipulation tasks with varying levels of hand-centric observability adapted from the meta-world benchmark (yu et al., 2020). problem setup preliminaries: mdps and pomdps. we frame the physical manipulation tasks considered in this work as discrete-time infinite-horizon markov decision processes (mdps). an mdp is a 6-tuple m = (s, a, p, r, γ, µ), where s is a set of states, a is a set of actions, p : s × a → Δ(s) is a transition (or dynamics) function, r : s × a → ℝ is a reward function, γ ∈ (0, 1) is a discount factor, and µ ∈ Δ(s) is an initial state distribution. an mdp whose state cannot be directly observed can be formalized as a partially observable mdp (pomdp), an 8-tuple (s, a, p, r, γ, µ, ω, o) that extends the underlying mdp with two ingredients: a set of observations ω and an observation function, in general o : s × a → Δ(ω). we consider only a restricted class of pomdps in which the observation function is limited to be deterministic, o : s × a → ω. to solve a pomdp, we optimize a policy π : ω → Δ(a) to maximize the expected return r(π ∘ o) = E[Σ_{t=0}^{∞} γ^t r(st, at)], where π ∘ o maps a state to an action distribution via composing the policy and observation function. | 2 | [
123.445,
654.1880784,
486.3087132,
664.3698556
] |
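the variational information bottleneck mentioned in this row amounts to encoding the third-person image as a stochastic gaussian latent and penalizing its kl divergence from a standard normal prior. a minimal numpy sketch of those two pieces follows; the surrounding encoder/policy, the names, and the weighting coefficient β are illustrative assumptions, not the paper's code:

```python
import numpy as np

def vib_kl(mu, log_var):
    """kl( N(mu, diag(exp(log_var))) || N(0, I) ), averaged over the batch.
    mu, log_var: (batch, latent_dim) outputs of the third-person encoder head."""
    kl = 0.5 * (np.exp(log_var) + mu ** 2 - 1.0 - log_var)
    return kl.sum(axis=1).mean()

def sample_latent(mu, log_var, rng):
    """reparameterized sample z = mu + sigma * eps fed to the downstream policy."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

the training objective would then add β · vib_kl(mu, log_var) to the task loss, discouraging the policy from relying on third-person details that do not survive distribution shift.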
YiBa9HKTyXE.pdf | 2,022 | 0 | permutation-based sgd: is random optimal? shashank rajput∗ kangwook lee dimitris papailiopoulos university of wisconsin-madison abstract a recent line of ground-breaking results for permutation-based sgd has corroborated a widely observed phenomenon: random permutations offer faster convergence than with-replacement sampling. however, is random optimal? we show that this depends heavily on what functions we are optimizing, and the convergence gap between optimal and random permutations can vary from exponential to nonexistent. we first show that for 1-dimensional strongly convex functions, with smooth second derivatives, there exist permutations that offer exponentially faster convergence compared to random. however, for general strongly convex functions, random permutations are optimal. finally, we show that for quadratic, strongly-convex functions, there are easy-to-construct permutations that lead to accelerated convergence compared to random. our results suggest that a general convergence characterization of optimal permutations cannot capture the nuances of individual function classes, and can mistakenly indicate that one cannot do much better than random. introduction finite sum optimization seeks to solve the following: min_x f(x) := (1/n) Σ_{i=1}^{n} fi(x). stochastic gradient descent (sgd) approximately solves finite sum problems by iteratively updating the optimization variables according to the following rule: xt+1 := xt − α∇fσt(xt), where α is the step size and σt ∈ [n] = {1, 2, . . . , n} is the index of the function sampled at iteration t. there exist various ways of sampling σt, with the most common being with- and without-replacement sampling. in the former, σt is uniformly chosen at random from [n], and for the latter, σt represents the t-th element of a random permutation of [n]. we henceforth refer to these two sgd variants as vanilla and permutation-based, respectively.
although permutation-based sgd has been widely observed to perform better in practice (bottou, 2009; recht & r´e, 2012; 2013), the vanilla version has attracted the vast majority of theoretical analysis. this is because of the fact that at each iteration, in expectation the update is a scaled version of the true gradient, allowing for simple performance analyses of the algorithm, e.g., see (bubeck et al., 2015). permutation-based sgd has resisted a tight analysis for a long time. however, a recent line of breakthrough results provides the first tight convergence guarantees for several classes of convex functions f (nagaraj et al., 2019; safran & shamir, 2019; rajput et al., 2020; mishchenko et al., 2020; ahn et al., 2020; nguyen et al., 2020). these recent studies mainly focus on two variants of permutation-based sgd where (1) a new random permutation is sampled at each epoch (also known as random reshuffle) (nagaraj et al., 2019; safran & shamir, 2019; rajput et al., 2020), and (2) a random permutation is sampled once and is reused throughout all sgd epochs (single shuffle) (safran & shamir, 2019; mishchenko et al., 2020; ahn et al., 2020). ∗correspondence to shashank rajput ⟨rajput3@wisc.edu⟩ perhaps interestingly, random reshuffle and single shuffle exhibit different convergence rates and a performance gap that varies across different function classes. in particular, when run for k epochs, the convergence rate for strongly convex functions is Õ(1/(nk²)) for both random reshuffle and single shuffle (nagaraj et al., 2019; ahn et al., 2020; mishchenko et al., 2020). however, when run specifically on strongly convex quadratics, random reshuffle experiences an acceleration of rates, whereas single shuffle does not (safran & shamir, 2019; rajput et al., 2020; ahn et al., 2020; mishchenko et al., 2020).
all the above rates have been coupled with matching lower bounds, at least up to constants and sometimes log factors (safran & shamir, 2019; rajput et al., 2020). from the above we observe that reshuffling at the beginning of every epoch may not always help. but then there are cases where random reshuffle is faster than single shuffle, implying that certain ways of generating permutations are more suited for certain subfamilies of functions. the goal of our paper is to take a first step into exploring the relationship between convergence rates and the particular choice of permutations. we are particularly interested in understanding if random permutations are as good as optimal, or if sgd can experience faster rates with carefully crafted permutations. as we see in the following, the answer to the above is not straightforward, and depends heavily on the function class at hand. our contributions: we define permutation-based sgd to be any variant of the iterates in (2), where a permutation of the n functions, at the start of each epoch, can be generated deterministically, randomly, or with a combination of the two. for example, single shuffle, random reshuffle, and incremental gradient descent (igd) are all permutation-based sgd variants (see algorithm 1). we first want to understand—even in the absence of computational constraints in picking the optimal permutations—what is the fastest rate one can get for permutation-based sgd? in other words, are there permutations that are better than random in the eyes of sgd? algorithm 1 permutation-based sgd variants. input: initialization x¹₀, step size α, epochs k. 1: σ = a random permutation of [n]. 2: for k = 1, . . . , k do. 3: if igd then σk = a fixed (deterministic) permutation of [n]. 4: else if random reshuffle then σk = a fresh random permutation of [n]. 5: else if single shuffle then σk = σ. 6: end if. 7: if flipflop and k is even then σk = reverse of σk−1. 8: end if. 9: for i = 1, . . . , n: xᵏᵢ := xᵏᵢ₋₁ − α∇f_{σk(i)}(xᵏᵢ₋₁). table 1: convergence rates of random reshuffle (rr), single shuffle (ss) and incremental gradient descent (igd) on strongly convex quadratics: plain vs. with flipflop (thms. 4, 5, and 6); lower bounds for the “plain” versions are taken from (safran & shamir, 2019). when n ≫ k, that is, when the training set is much larger than the number of epochs, which arguably is the case in practice, the convergence rates of random reshuffle, single shuffle, and incremental gradient descent are lower bounded as shown in table 1 (e.g., Ω(1/k²)). on the other hand, by combining these methods with flipflop the convergence rates become faster, e.g., Õ(1/(nk²)) and Ω(1/k³) (see table 1). perhaps surprisingly, we show that there exist permutations that may offer up to exponentially faster convergence than random permutations, but for a limited set of functions. specifically, we show this for 1-dimensional functions (theorem 1). however, such exponential improvement is no longer possible in higher dimensions (theorem 2), or for general strongly convex objectives (theorem 3), where random is optimal. the above results highlight that an analysis of how permutations affect convergence rates needs to be nuanced enough to account for the structure of the functions at hand. otherwise, absent further assumptions, random permutations may just appear to be as good as optimal. in this work, we further identify a subfamily of convex functions where there exist easy-to-generate permutations that lead to accelerated convergence. we specifically introduce a new technique, flipflop, which can be used in conjunction with existing permutation-based methods, e.g., random reshuffle, single shuffle, or incremental gradient descent, to provably improve their convergence on quadratic functions (theorems 4, 5, and 6). the way that flipflop works is rather simple: every even epoch uses the flipped (or reversed) version of the previous epoch’s permutation. the intuition behind why flipflop leads to faster convergence is as follows.
towards the end of an epoch, the contribution of earlier gradients gets attenuated. to counter this, we flip the permutation for the next epoch so that every function’s contribution is diluted (approximately) equally over the course of two consecutive epochs. flipflop demonstrates that finding better permutations for specific classes of functions might be computationally easy. we summarize flipflop’s convergence rates in table 1 and report the results of numerical verification in section 6.2. note that in this work, we focus on the dependence of the error on the number of iterations, and in particular, the number of epochs. however, we acknowledge that its dependence on other parameters like the condition number is also very important. we leave such analysis for future work. | 2 | [
108,
488.8080784,
504.0044533982,
520.6886784
] |
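the flipflop rule described in this row is easy to prototype. below is a small numpy sketch of permutation-based sgd on a finite sum, with single shuffle, random reshuffle, and flipflop-over-single-shuffle as options; the function names and the toy 1-d objective are illustrative assumptions, not the paper's code:

```python
import numpy as np

def permutation_sgd(grads, x0, alpha, epochs, scheme="flipflop", seed=0):
    """permutation-based sgd on f(x) = (1/n) sum_i f_i(x).
    grads: list of per-component gradient functions f_i'."""
    rng = np.random.default_rng(seed)
    n = len(grads)
    x = float(x0)
    sigma = rng.permutation(n)               # drawn once (single shuffle)
    for k in range(epochs):
        if scheme == "rr":                   # random reshuffle: fresh permutation
            perm = rng.permutation(n)
        elif scheme == "ss":                 # single shuffle: reuse sigma
            perm = sigma
        elif scheme == "flipflop":           # flipflop: reverse sigma every other epoch
            perm = sigma if k % 2 == 0 else sigma[::-1]
        else:
            raise ValueError(scheme)
        for i in perm:                       # one pass over the n components
            x -= alpha * grads[i](x)
    return x
```

flipflop wraps random reshuffle or igd analogously, by reversing whatever permutation the previous epoch used.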
JXhROKNZzOc.pdf | 2,022 | 2 | squant: on-the-fly data-free quantization via diagonal hessian approximation cong guo1,2, yuxian qiu1,2, jingwen leng1,2, ∗, xiaotian gao3, chen zhang4, yunxin liu5, fan yang3, yuhao zhu6 & minyi guo1,2, ∗ 1 shanghai jiao tong university, 2 shanghai qi zhi institute 3 microsoft research, 4 damo academy, alibaba group 5 institute for ai industry research (air), tsinghua university, 6 university of rochester {guocong, qiuyuxian, leng-jw}@sjtu.edu.cn {xiaotian.gao, fanyang}@microsoft.com mingchong.zc@alibaba-inc.com, liuyunxin@air.tsinghua.edu.cn yzhu@rochester.edu, guo-my@cs.sjtu.edu.cn abstract quantization of deep neural networks (dnn) has been proven effective for compressing and accelerating dnn models. data-free quantization (dfq) is a promising approach that works without the original datasets in privacy-sensitive and confidential scenarios. however, current dfq solutions degrade accuracy, need synthetic data to calibrate networks, and are time-consuming and costly. this paper proposes an on-the-fly dfq framework with sub-second quantization time, called squant, which can quantize networks on inference-only devices with low computation and memory requirements. with a theoretical analysis of the second-order information of the dnn task loss, we decompose and approximate the hessian-based optimization objective into three diagonal sub-items, which correspond to three dimensions of the weight tensor: element-wise, kernel-wise, and output channel-wise. then, we progressively compose the sub-items and propose a novel data-free optimization objective in the discrete domain, minimizing the constrained absolute sum of error (case in short), which surprisingly does not need any dataset and is not even aware of the network architecture. we also design an efficient algorithm without back-propagation to further reduce the computation complexity of the objective solver.
finally, without fine-tuning or synthetic datasets, squant accelerates the data-free quantization process to a sub-second level with > 30% accuracy improvement over the existing data-free post-training quantization works on the evaluated models under 4-bit quantization. we have open-sourced the squant framework1. introduction with the widespread application of dnn, more and more dnn models are deployed in both computation-constrained and memory-constrained environments, e.g., smartphones, iot devices, and self-driving cars. the desire for lightweight and energy-efficient dnn deployment solutions is increasing. quantization is one of the most promising techniques to convert weights and activations to lower bit formats and simultaneously reduce computational time and memory consumption. there are two kinds of quantization: post-training quantization (ptq) (banner et al., 2018; choukroun et al., 2019; zhao et al., 2019; nagel et al., 2020) and quantization-aware training (qat) (gupta et al., 2015; jacob et al., 2018; wang et al., 2019; zhuang et al., 2021). qat requires simulating quantization in the training process, which entails time-consuming retraining and hyper-parameter tuning. in contrast, ptq directly quantizes well-trained models without retraining. however, ptq methods still need training datasets to calibrate (nagel et al., 2020) quantized models, and such datasets are often unavailable due to privacy and security issues, e.g., in medical and confidential scenarios. ∗jingwen leng and minyi guo are corresponding authors of this paper. 1https://github.com/clevercool/squant in contrast, data-free quantization (dfq) has recently been presented as a promising way to quantize models without original datasets (nagel et al., 2019; cai et al., 2020; zhang et al., 2021; xu et al., 2020; liu et al., 2021; qin et al., 2021; choi et al., 2020). from a deployment perspective, dfq is the most attractive quantization method since we can apply it to any trained model as a black box postprocessing step. however, current dfq methods cannot achieve high accuracy and fast processing time simultaneously.
from a deployment perspective, dfq is the most attractive quantization method since we can apply it to any trained models as a black box postprocessing step. however, current dfq methods cannot achieve high accuracy and fast processing time simultaneously. traditionally, dfq (nagel et al., 2019) uses rounding quantization, leading to the rounding-to-nearest strategy. such a strategy causes significant accuracy loss, especially in low-bit settings. to bridge the accuracy gap between data-free and data-driven quantization, researchers propose a series of data-generative dfq methods. they use gradient-based methods to generate fake datasets for trained models. with the synthetic data, they can employ a data-driven calibration and fine-tuning strategy to improve accuracy. however, data generation typically adopts the time-consuming gradient-based methods, which require multiple iterations to generate each input. for example, prior works often spend hours generating a calibration dataset and fine-tuning the network (xu et al., 2020; liu et al., 2021; zhang et al., 2021). to solve this dilemma, we propose squant, a fast and accurate data-free quantization framework for convolutional neural networks, employing the constrained absolute sum of error (case) of weights as the rounding metric. by leveraging hessian information of network loss due to quantization, we propose a novel diagonal hessian approximation, which decomposes the optimization objective into three data-free sub-items: element-wise, kernel-wise, and output channel-wise, each of which corresponds to a single or a set of dimensions of the weight tensor. we progressively compose and optimize these three sub-items in the discrete space. the final approximate objective eliminates the requirement of data generation. we propose a progressive algorithm with linear complexity to solve the optimization objective, further accelerating dfq time to a sub-second level. 
for example, squant only needs an average of 4 ms and 84 ms for quantizing a layer and the overall network of resnet18, respectively. as it does not require back-propagation nor fine-tuning, squant can run on inference-only devices with limited computation and memory resources on the fly. that opens up new opportunities and scenarios for adopting quantization. compared with state-of-the-art dfq methods, squant achieves higher accuracy on all evaluated models under the 4/6/8-bit settings. squant only introduces 0.1% accuracy loss on average under the 8-bit setting. under fewer bit precisions, the advantage of squant further expands. squant only introduces 1.8% accuracy loss on average under the 6-bit setting. under the 4-bit setting, squant can achieve more than 30% accuracy improvement compared with data-free ptq methods. in a word, squant pushes the accuracy and processing time of dfq to a new frontier. preliminaries notations we specifically use x, y and w to denote the input, output, and weight variables, respectively. constant and scalar are denoted by italic letters, e.g., c, m. column vector and flattened matrix are denoted by bold lowercase letters, e.g., w, and matrices (or tensors) are represented by uppercase letters, e.g., w. the subscript and superscript can further represent the element indices and the layer of a network, respectively, e.g., w(cid:96) i, j. e[·] denotes the expectation operator, and the network loss function is represented by l (·). for convenience in this paper, we call the row of fc (fully connected layer) weight as the output channel and the column of fc weight as the input channel, which are the counterparts to conv (convolution layer) weight. we use m, n, and k to denote output channel size, input channel size, and kernel height × kernel width, respectively. specifically, fc has the shape of (m, n, 1). 
quantization most previous works adopt the rounding-to-nearest approach for quantizing deep neural networks by rounding elements w to the nearest quantization grid values with a fixed-point data type. the quantization and dequantization for a quantized element ŵ can be described as ŵ = s · clip(⌊w/s⌉, min, max), where s denotes the quantization scale parameter, and min and max are the lower and upper thresholds for the clipping function clip(·). the operator ⌊·⌉ represents rounding-to-nearest, i.e., minimizing the mean squared error (mse) between the quantized and the original value. hessian-based optimization for neural networks the hessian-based approach is one of the most promising optimizations to further improve the quantization (dong et al., 2019b;a; nagel et al., 2020; shen et al., 2020; qian et al., 2020; wu et al., 2020; hubara et al., 2020; li et al., 2021; yao et al., 2021) and pruning (yu et al., 2021) performance of dnn models. some of those works exploit the hessian matrix to approximate the loss degradation due to the quantization perturbation of the weight, Δw, by E[L(x, y, w + Δw) − L(x, y, w)] ≈ E[Δw · g^w + ½ · Δw · H^w · Δw^T], where the equation comes from the second-order taylor series expansion, g^w is the gradient and H^w is the full network hessian matrix w.r.t. the original weight w. since a well-trained model has already converged, the gradient term will be close to 0 and thus can be safely ignored. however, computing H^w is infeasible because of the large memory overhead and computation complexity.
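the quantize–dequantize round trip described above can be sketched as follows (a minimal numpy illustration; the toy weights, the scale s, and the 4-bit signed clipping range are assumptions, not a calibrated configuration):

```python
import numpy as np

def quantize_dequantize(w, s, qmin=-8, qmax=7):
    """rounding-to-nearest fake quantization: w_hat = s * clip(round(w / s), qmin, qmax)."""
    q = np.clip(np.round(w / s), qmin, qmax)  # integer grid values (4-bit signed range here)
    return s * q                              # dequantized weights

w = np.array([0.12, -0.37, 0.90, -1.55])      # toy weights (assumed)
s = 0.25                                      # quantization scale (assumed, not calibrated)
w_hat = quantize_dequantize(w, s)             # -> [0.0, -0.25, 1.0, -1.5]
```

rounding each element to the nearest grid point minimizes the per-element mse between w and ŵ, which is exactly the rounding-to-nearest strategy that squant argues is suboptimal at low bit widths.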
to tackle this problem, we approximate H^w as a layer-wise hessian matrix H^{w^ℓ} under the assumption of cross-layer independence (dong et al., 2017; nagel et al., 2020), i.e., H^{w^ℓ} = x^ℓ x^{ℓT} ⊗ ∇²_{y^ℓ}L, where ⊗ denotes the kronecker product of two matrices and ∇²_{y^ℓ}L is the hessian of the task loss w.r.t. y^ℓ. for the m-th output channel of conv or fc, H^{w^ℓ} can be approximately simplified output channel-wise (nagel et al., 2020; yu et al., 2021; wu et al., 2020; qian et al., 2020) into H^{w^ℓ}_m = ∇²_{y^ℓ}L_{m,m} · x^ℓ x^{ℓT} = L_m · x^ℓ x^{ℓT}, where ∇²_{y^ℓ}L is approximately a diagonal matrix. then the final optimization objective is arg min_{Δw^ℓ_{m,:}} E[Δw^ℓ_{m,:} H^{w^ℓ}_m Δw^{ℓT}_{m,:}] ≈ arg min_{Δw^ℓ_{m,:}} E[Δw^ℓ_{m,:} x^ℓ x^{ℓT} Δw^{ℓT}_{m,:}] = arg min_{Δw^ℓ_{m,:}} E[(Δw^ℓ_{m,:} x^ℓ)(Δw^ℓ_{m,:} x^ℓ)^T], (4) which is the mse between the output activation produced from the original and quantized weights. each sub-problem deals with a single output channel Δw^ℓ_{m,:}. we will further approximate eq. (4) to remove any input data dependency from the optimization objective in sec. 3.2. methodology overview | 2 | [
108.249, 274.9370784, 178.45549, 284.8996784] |
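the identity behind the per-output-channel objective in the squant excerpt above — the quadratic form with the channel hessian E[x x^T] equals the mse of the channel's output — can be checked numerically (a minimal numpy sketch; the channel size, sample count, and perturbation magnitude are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 16, 100                       # input-channel size and number of samples (assumed)
x = rng.normal(size=(n, t))          # columns are input activations x^l
dw = 0.01 * rng.normal(size=n)       # quantization perturbation of one output channel

# hessian form of the per-channel objective: dw · E[x x^T] · dw^T
H_m = x @ x.T / t
obj_hessian = dw @ H_m @ dw

# equivalent output-mse form: E[(dw · x)^2]
obj_mse = np.mean((dw @ x) ** 2)
# the two objectives coincide up to floating-point error
```

this equality is what lets the hessian-based objective be read as "match the original output activations per channel", which squant then approximates further to drop the input data entirely.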
-RwZOVybbj.pdf | 2,023 | 1 | risk-aware reinforcement learning with coherent risk measures and non-linear function approximation arun verma†, thanh lam†, bryan kian hsiang low†, patrick jaillet‡ †department of computer science, national university of singapore, republic of singapore ‡department of electrical engineering and computer science, mit, usa {chithanh, arun, lowkh}@comp.nus.edu.sg, jaillet@mit.edu abstract we study the risk-aware reinforcement learning (rl) problem in the episodic finite-horizon markov decision process with unknown transition and reward functions. in contrast to the risk-neutral rl problem, we consider minimizing the risk of having low rewards, which arises due to the intrinsic randomness of the mdps and imperfect knowledge of the model. our work provides a unified framework to analyze the regret of risk-aware rl policies with coherent risk measures in conjunction with non-linear function approximation, which gives the first sub-linear regret bounds in this setting. finally, we validate our theoretical results via empirical experiments on synthetic and real-world data. introduction reinforcement learning (rl) (sutton & barto, 2018) is a control-theoretic problem in which an agent interacts with an unknown environment and aims to maximize its expected total reward. due to the intrinsic randomness of the environment, even a policy with high expected total reward may occasionally produce very low rewards. this uncertainty is problematic in many real-life applications like competitive games (mnih et al., 2013) and healthcare (liu et al., 2020), where the agent (or decision-maker) needs to be risk-averse. for example, the drug responses of patients are stochastic due to the patients' varying physiology or genetic profiles (mcmahon & insel, 2012); therefore, it is desirable to select a set of treatments that yield high effectiveness and minimize the possibility of adverse effects (beutler et al., 2016; fatemi et al., 2021).
the existing rl policies that maximize the risk-neutral total reward can not lead to an optimal risk-aware rl policy for problems where the total reward has uncertainty (yu et al., 2018). therefore, our goal is to design an rl algorithm that learns a risk-aware rl policy to minimize the risk of having a small expected total reward. then, how should we learn a risk-aware rl policy? a natural approach is to directly learn a risk-aware rl policy that minimizes the risk of having a small expected total reward (howard & matheson, 1972). for quantifying such a risk, one can use risk measures like entropic risk (föllmer & knispel, 2011), value-at-risk (var) (dempster, 2002), conditional value-at-risk (cvar) (rockafellar et al., 2000), or entropic value-at-risk (evar) (ahmadi-javid, 2012). these risk measures capture the total reward volatility and quantify the possibility of rare but catastrophic events. the entropic risk measure can be viewed as a mean-variance criterion, where the risk is expressed as the variance of total reward (fei et al., 2021). alternatively, var, cvar, and evar use quantile criteria, which are often preferable for better risk management over the mean-variance criterion (chapter 3 of kisiala (2015)). among these risk measures, coherent risk measures1 such as cvar and evar are preferred as they enjoy compelling theoretical properties such as coherence (rockafellar et al., 2000). the risk-aware rl algorithms with cvar as a risk measure (bäuerle & ott, 2011; yu et al., 2018; rigter et al., 2021) exist in the literature. however, apart from being customized only for cvar, these algorithms suffer two significant shortcomings. 
first, most of them focus on the tabular mdp setting and need multiple complete traversals of the state space (bäuerle & ott, 2011; rigter et al., 2021). (footnote 1: apart from cvar and evar, risk measures like g-entropic risk measures, tail value-at-risk, the proportional hazard (ph) risk measure, the wang risk measure, and the superhedging price also belong to the coherent risk family. more details about various coherent risk measures are given in appendix c.) these traversals are prohibitively expensive for problems with large state spaces and impossible for problems with continuous state spaces, thus limiting these algorithms' applicability in practice. second, the existing algorithms considering continuous or infinite state spaces assume that the mdp is known, i.e., the probability transitions and reward of each state are known a priori to the algorithm. in such settings, the agent does not need to explore or generalize to unseen scenarios. therefore, the problem considered in yu et al. (2018) is a planning problem rather than a learning problem. this paper alleviates both shortcomings by proposing a new risk-aware rl algorithm in which the mdp is unknown and non-linear function approximation is used to address continuous state spaces. recent works (jin et al., 2020; yang et al., 2020) have proposed rl algorithms with function approximation and finite-sample regret guarantees, but they only focus on the risk-neutral rl setting. extending their results to a risk-aware rl setting is non-trivial due to two major challenges. first, the existing analyses heavily rely on the linearity of the expectation in the risk-neutral bellman equation. this linearity property does not hold in the risk-aware rl setting when a coherent risk measure replaces the expectation in the bellman equation. then, how can we address this challenge? we overcome this challenge by a non-trivial application of the super-additivity property of coherent risk measures (see lemma 3 and its application in appendix 4).
the risk-neutral rl algorithms only need one sample of the next state to construct an unbiased estimate of the bellman update (yang et al., 2020) as one can unbiasedly estimate the expectation in the risk-neutral bellman equation with a single sample. however, this does not hold in the risk-aware rl setting. furthermore, whether one can construct an unbiased estimate of an arbitrary risk measure using only one sample is unknown. this problem leads to the second major challenge: how can we construct an unbiased estimate of the risk-aware bellman update? to resolve this challenge, we assume access to a weak simulator3 that can sample different next states given the current state and action and use these samples to construct an unbiased estimator. such an assumption is mild and holds in many real-world applications, e.g., a player can anticipate the opponent’s next moves and hence the possible next states of the game. after resolving both challenges, we propose an algorithm that uses a risk-aware value iteration procedure based on the upper confidence bound (ucb) and has a finite-sample sub-linear regret upper bound. specifically, our contributions are as follows: • we first formalize the risk-aware rl setting with coherent risk measures, namely the risk-aware objective function and the risk-aware bellman equation in section 3. we then introduce the notion of regret for a risk-aware rl policy. • we propose a general risk-aware rl algorithm named risk-aware upper confidence bound (ra-ucb) for an entire class of coherent risk measures in section 4. ra-ucb uses ucb-based value functions with non-linear function approximation and also enjoys a finite-sample sub-linear regret upper bound guarantee. • we provide a unified framework to analyze regret for any coherent risk measure in section 4.1. 
the novelty in our analysis is in the decomposition of the risk-aware rl policy's regret by the super-additivity property of coherent risk measures (shown in the proof of lemma 4 in appendix d.2). • our empirical experiments on synthetic and real datasets validate the different performance aspects of our proposed algorithm in section 5. related work risk-aware mdps were first introduced in the seminal work of howard & matheson (1972) with the use of an exponential utility function known as the entropic risk measure. since then, risk-aware mdps have been studied with different risk criteria: optimizing moments of the total reward (jaquette, 1973), exponential utility or entropic risk (borkar, 2001; 2002; bäuerle & rieder, 2014; fei et al., 2020; 2021; moharrami et al., 2022), the mean-variance criterion (sobel, 1982; li & ng, 2000; la & ghavamzadeh, 2013; tamar et al., 2016), and conditional value-at-risk (boda & filar, 2006; artzner et al., 2007; bäuerle & mundt, 2009; bäuerle & ott, 2011; tamar et al., 2015; yu et al., 2018; rigter et al., 2021). vadori et al. (2020) focuses on the variability or uncertainty of the rewards. (footnote 2: super-additivity in the reward maximization setting becomes sub-additivity in the cost minimization setting.) (footnote 3: note that the weak simulator can only sample possible next states and returns no information regarding the rewards. in this sense, our simulator is weaker than the archetypal simulators often assumed in the rl literature.) many of these existing works assume the mdps are known a priori (known reward and transition kernels) (yu et al., 2018), focus on the optimization problem (bäuerle & ott, 2011; yu et al., 2018), or study asymptotic behaviors of algorithms (e.g., does an optimal policy exist, and if so, is it markovian, etc.) (bäuerle & ott, 2011; bäuerle & rieder, 2014). the closest works to ours are fei et al. (2021); fei & xu (2022), which consider risk-aware reinforcement learning in the function approximation and regret minimization setting.
however, they use the entropic risk measure. in contrast, our work considers a significantly different family of risk measures, namely the coherent risk measures. they are preferable and widely used for risk management (kisiala, 2015). the analysis in fei et al. (2021); fei & xu (2022) utilizes a technique called exponentiated bellman equation, which is uniquely applicable to the entropic risk measure (or more generally the exponential utility family) and cannot be readily extended to coherent risk measures. therefore, our analysis differs significantly from that in fei et al. (2021); fei & xu (2022). tamar et al. (2015) proposes an actor-critic algorithm for the entire class of coherent risk measures but does not provide any theoretical analysis of the regret. safe rl and constrained mdps represent a parallel approach to obtaining risk-aware policies in the presence of uncertainty. unlike risk-aware mdps, safe rl does not modify the optimality criteria. instead, the risk-aversion is captured via constraints on the rewards or risks (chow & pavone, 2013; chow et al., 2017), or as chance constraints (ono et al., 2015; chow et al., 2017). compared with risk-aware mdps, the constrained mdps approach enjoys less compelling theoretical properties. the existence of a global optimal markov policy using the constrained mdps is unknown, and many existing algorithms only return locally optimal markov policies using gradient-based techniques. it makes these methods extremely susceptible to policy initialization (chow et al., 2017), and hence the best theoretical result one can get in this setting is convergence to a locally optimal policy (chow et al., 2017). in contrast, our result in this paper considers the regret (or sub-optimality) with respect to the global optimal policy. distributional rl (bellemare et al., 2022) attempts to model the state-value distribution, and any risk measure can be characterized by such distribution. 
therefore, distributional rl represents a more ambitious approach in which the agent needs to estimate the entire value distribution. existing distributional rl algorithms need to make additional distributional assumptions to work with distributional estimates such as quantiles (dabney et al., 2018) or empirical distributions (rowland et al., 2018). in contrast, our risk-aware rl framework only considers the risk measures that apply to the random state-value. as a trade-off, the demand for data and computational resources to estimate the value distribution at every state can be prohibitively expensive for even moderate-sized problems. we establish more detailed connections between risk-aware rl and distribution rl in appendix a. coherent risk measures let z ∈ l1(ω, f, p)4 be a real-valued random variable with a finite mean and the cumulative distribution function fz(z) = p(z ≤ z). for z ′ ∈ l1(ω, f, p), a function ρ : l1(ω, f, p) → r ∪ {+∞} is a coherent risk measure if it satisfies the following properties: 1. normalized: ρ(0) = 0. 2. monotonic: if p(z ≤ z ′) = 1, then ρ(z) ≤ ρ(z ′). 3. super-additive: ρ(z + z ′) ≥ ρ(z) + ρ(z ′). 4. positively homogeneous: for α ≥ 0, we have ρ(αz) = αρ(z). 5. translation invariant: for a constant variable a with value a, we have ρ(z + a) = ρ(z) + a. since our reward maximization setting contrasts with the cost minimization setting often considered in the literature, we aim to maximize the risk applied to the random reward, i.e., maximizing ρ(z). consequently, the properties of risk measure are upended compared to those usually presented in cost minimization setting (föllmer & schied, 2010). for example, super-additivity in the reward maximization setting becomes sub-additivity in the cost minimization setting. empirical estimation of the risk. the risk of a random variable ρ(z) is completely determined by the distribution of z (fz). 
in practice, we do not know the distribution fz; instead, we can observe m independent and identically distributed (iid) samples {z_i}_{i=1}^m from the distribution fz. (footnote 4: in our risk-aware rl setting, the random variable z represents the random total reward of the agent.) then we can use these samples to get an empirical estimator of ρ(z), which is denoted by ρ̂(z_1, . . . , z_m). problem setting we consider an episodic finite-horizon markov decision process (mdp), denoted by a tuple m = (s, a, h, p, r), where s and a are the sets of possible states and actions, respectively, h ∈ z+ is the episode length, p = {p_h}_{h∈[h]} are the state transition probability measures, and r = {r_h : s × a → [0, 1]}_{h∈[h]} are the deterministic reward functions. we assume s is a measurable space of possibly infinite cardinality, and a is a finite set. for each h ∈ [h], p_h(·|x, a) denotes the probability transition kernel when the agent takes action a at state x in time step h. an agent interacts with the mdp as follows. there are t episodes. in the t-th episode, the agent begins at state x^t_1 chosen arbitrarily by the environment. in each step h ∈ [h], the agent observes a state x^t_h ∈ s, selects an action a^t_h ∈ a, and receives a reward r_h(x^t_h, a^t_h). the mdp then transitions to the next state following the probability transition kernel x^t_{h+1} ∼ p_h(·|x^t_h, a^t_h). the episode terminates when the agent reaches state x_{h+1} at time step h + 1; in this last time step, the agent takes no action and receives no reward. a policy π of an agent is a sequence of h functions, i.e., π = {π_h}_{h∈[h]}, in which each π_h(·|x) is a probability distribution over a. here, π_h(a|x) indicates the probability that the agent takes action a at state x in time step h. any policy π and an initial state x_1 determine a probability measure p^π_{x_1} and an associated stochastic process {(x_h, a_h), h ∈ [h]}. let e^π_{x_1}[·] denote the expectation operator with respect to p^π_{x_1}.
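an empirical estimator ρ̂ built from m iid samples can be sketched for cvar, the coherent risk measure used later in the paper's experiments (the estimator form below — the mean of the worst α-fraction of samples, in the reward-maximization convention — is a common choice and an assumption here, not necessarily the paper's exact definition):

```python
import numpy as np

def empirical_cvar(samples, alpha):
    """empirical cvar_alpha in the reward-maximization convention:
    the mean of the worst (lowest) alpha-fraction of outcomes."""
    z = np.sort(np.asarray(samples, dtype=float))  # ascending: worst rewards first
    k = max(1, int(np.ceil(alpha * len(z))))       # number of tail samples kept
    return z[:k].mean()

z = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
empirical_cvar(z, alpha=0.2)  # mean of the two worst outcomes: 1.5
```

as α → 1 the estimator recovers the plain sample mean, i.e., the risk-neutral case; smaller α puts all the weight on the bad tail, which is why cvar is a quantile-style criterion.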
the standard risk-neutral mdp objective is max_π e^π_{x_1}[ Σ_{h=1}^h r_h(x_h, a_h) ]. (1) risk-aware episodic mdp the risk-neutral objective defined in eq. (1) does not account for the risk incurred due to the stochasticity in the state transitions and the agent's policy. markov risk measures (ruszczyński, 2010) are proposed to model and analyze such risks. the risk-aware mdp objective is defined as max_π j^π, (2) where ρ is a coherent one-step conditional risk measure (ruszczyński, 2010, definition 6), and {x_1, a_1, x_2, a_2, . . . } is a trajectory of states and actions from the mdp under policy π. here, j^π is defined as a nested and multi-stage composition of ρ, rather than through a single-stage risk measure on the cumulative reward ρ(Σ_{h=1}^h r_h(x_h, a_h)). the choice of the risk-aware objective function in eq. (2) has two advantages. firstly, it guarantees the existence of an optimal policy, and furthermore, this optimal policy is markovian. please refer to theorem 4 in ruszczyński (2010) for a rigorous treatment of the existence of the optimal markov policy. secondly, the above risk-aware objective satisfies the time consistency property. this property ensures that we do not contradict ourselves in our risk evaluation: a sequence that is better today should continue to be better tomorrow, i.e., our risk preference stays the same over time. note that in standard rl, where the risk measure is replaced with expectation, this property is trivially satisfied. in contrast, a single-stage (i.e., static) risk measure applied on the cumulative reward ρ(Σ_{h=1}^h r_h(x_h, a_h)) does not enjoy this time consistency property (ruszczyński, 2010). more detailed discussions about this are in appendix b. bellman equation and regret the risk-aware bellman equation is developed for the risk-aware objective defined in eq. (2) (ruszczyński, 2010).
more specifically, let us define the risk-aware state- and action-value functions with respect to the markov risk measure ρ as v^π_h(x) = r_h(x, π_h(x)) + ρ(v^π_{h+1}(x′)) and q^π_h(x, a) = r_h(x, a) + ρ(v^π_{h+1}(x′)), where x′ is the random next state. we also define the optimal policy π⋆ to be the policy that yields the optimal value function v⋆_h(x) = sup_π v^π_h(x). the advantage of the formulation given in eq. (2) is that one can show that the optimal policy exists, and it is markovian (theorem 4 of ruszczyński (2010)). for notational convenience, for any measurable function v : s → [0, h], we define the operator d^ρ_h as (d^ρ_h v)(x, a) := ρ(v(x′)), (3) where the risk measure ρ is taken over the random variable x′ ∼ p_h(·|x, a). then, the risk-aware bellman equation associated with a policy π takes the form v^π_h(x) = ⟨q^π_h(x, ·), π_h(·|x)⟩_a, q^π_h(x, a) = (r_h + d^ρ_h v^π_{h+1})(x, a), where ⟨·, ·⟩_a denotes the inner product over a and (f + g)(x) = f(x) + g(x) for functions f and g. similarly, the bellman optimality equation is given by q⋆_h(x, a) = (r_h + d^ρ_h v⋆_{h+1})(x, a), v⋆_h(x) = max_{a∈a} q⋆_h(x, a), (4) which implies that the optimal policy π⋆ is the greedy policy with respect to the optimal action-value function {q⋆_h}_{h∈[h]}. in the episodic mdp setting, the agent interacts with the environment through t episodes to learn the optimal policy. at the beginning of episode t, the agent selects a policy π_t, and the environment chooses an initial state x^t_1. the difference between v^{π_t}_1(x^t_1) and v⋆_1(x^t_1) quantifies the sub-optimality of π_t, which serves as the regret of the agent at episode t. the total regret after t episodes is defined as r_t(ρ) = Σ_{t=1}^t ( v⋆_1(x^t_1) − v^{π_t}_1(x^t_1) ). we use the widely adopted notion of regret in the risk-neutral setting (jin et al., 2020; yang et al., 2020) and risk-aware setting (fei et al., 2020; 2021). here, the policy's regret depends on the risk measure ρ via the optimal policy π⋆.
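the one-step risk-aware backup q_h(x, a) = r_h(x, a) + (d^ρ_h v_{h+1})(x, a), with ρ estimated from weak-simulator draws, can be sketched on a toy example (cvar stands in for ρ; the transition kernel, next-state values, and sample count m are illustrative assumptions):

```python
import numpy as np

def empirical_cvar(samples, alpha):
    z = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(alpha * len(z))))
    return z[:k].mean()

def risk_bellman_backup(r, p_next, v_next, alpha, m, rng):
    """one application of q(x, a) = r(x, a) + (d^rho_h v)(x, a) with rho = cvar_alpha,
    estimated from m next-state draws of a (weak) simulator."""
    draws = rng.choice(len(p_next), size=m, p=p_next)  # sample x' ~ p_h(.|x, a)
    return r + empirical_cvar(v_next[draws], alpha)

rng = np.random.default_rng(0)
v_next = np.array([0.0, 1.0, 5.0])   # values of three next states (assumed)
p_next = np.array([0.1, 0.3, 0.6])   # transition kernel p_h(.|x, a) (assumed)
q = risk_bellman_backup(r=1.0, p_next=p_next, v_next=v_next, alpha=0.25, m=200, rng=rng)
# the cvar backup is typically more pessimistic than the risk-neutral r + p_next @ v_next
```

note the simulator here only yields next states, never rewards, matching the weak-simulator assumption; the risk-neutral backup is recovered by replacing the cvar estimate with the sample mean.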
a good policy should have sub-linear regret, i.e., limt →∞ rt /t = 0, which implies that the policy will eventually learn to select the best risk-averse actions. remark 1. given two risk measures ρ1 and ρ2 with rt (ρ1) < rt (ρ2), does not imply ρ1 is a better choice of risk measure for the given problem. because the optimal policies for ρ1 and ρ2 can be different, their regrets are not directly comparable. therefore, we cannot use regret as a measure to compare or select the risk measure. weak simulator assumption one key challenge for the risk-aware rl policy is that the empirical estimation of risk is more complex than the estimation of expectation in risk-neutral rl (yu et al., 2018). in this paper, we assume the existence of a weak simulator that we can use to draw samples from the probability transition kernel ph(·|x, a) for any h ∈ [h], x ∈ s, a ∈ a. this assumption is much weaker than the archetypal simulator assumptions often seen in the rl literature, as they also allow to query reward of a given state and action rh(x, a). to the best of our knowledge, all existing works in risk-aware rl with coherent risk measures require some assumptions on the transition probabilities to facilitate the risk estimation procedure. among these assumptions, our weak simulator assumption is the weakest. estimating non-linear functions we use reproducing kernel hilbert space (rkhs) as the class of non-linear functions to represent the optimal action-value function q∗ h. for notational convenience, let us denote z = (x, a) and z = s × a. following the standard setting, we assume that z is a compact subset of rd for fixed dimension d. let h denote the rkhs defined on z with the kernel function k : z × z → r. let ⟨·, ·⟩h and ∥·∥h be the inner product and the rkhs norm on h, respectively. since h is an rkhs, there exists a feature map ϕ : z → h such that ϕ(z) = k(z, ·) and f (z) = ⟨ϕ(z), f ⟩h for all f ∈ h and for all z ∈ z, this is known as the reproducing kernel property. 
(footnote 5: since a is a finite set, the inner product over a is the canonical inner product on euclidean vector space.) risk-aware rl algorithm with coherent risk measures we now introduce our algorithm named risk-aware upper confidence bound (ra-ucb), which is built upon the celebrated value iteration algorithm (sutton & barto, 2018). ra-ucb first estimates the value function using kernel least-squares regression. then, it computes an optimistic bonus that gets added to the estimated value function to encourage exploration. finally, it executes the greedy policy with respect to the estimated value function in the next episode. ra-ucb: risk-aware upper confidence bound. 1: input: hyperparameters of the coherent risk measure ρ (e.g., confidence level α ∈ (0, 1) for cvar). 2: for episode t = 1, 2, . . . , t do 3: receive the initial state x^t_1 and initialize v^t_{h+1} as the zero function. 4: for step h = h, . . . , 1 do 5: for τ ∈ [t − 1], draw m samples from the weak simulator and construct the response vector y^t_h using eq. (7). 6: compute µ^t_h and σ^t_h using eq. (8). 7: compute q^t_h and v^t_h using eq. (9). 8: end for 9: for step h = 1, . . . , h do 10: take action a^t_h ← arg max_{a∈a} q^t_h(x^t_h, a). 11: observe reward r_h(x^t_h, a^t_h) and the next state x^t_{h+1}. 12: end for 13: end for recall that we defined z = (x, a) and z = s × a in section 3.4. we define the following gram matrix k^t_h ∈ r^{(t−1)×(t−1)} and a function k^t_h : z → r^{t−1} associated with the rkhs h as k^t_h = [k(z^τ_h, z^{τ′}_h)]_{τ,τ′∈[t−1]} and k^t_h(z) = [k(z^1_h, z), . . . , k(z^{t−1}_h, z)]^⊤. given the observed histories and the weak simulator, we define the response vector y^t_h ∈ r^{t−1} as [y^t_h]_τ = r_h(x^τ_h, a^τ_h) + ρ̂({v^t_{h+1}(x′_{(i)})}^m_{i=1}), τ ∈ [t − 1], (7) where {x′_{(i)}}^m_{i=1} are m next states drawn from the weak simulator p_h(·|x^τ_h, a^τ_h). this step contains one of the key differences between ra-ucb and its risk-neutral counterpart, with the presence of the empirical risk estimator in the definition of the response vector y^t_h.
with the newly introduced notations, we define two functions µ^t_h : z → r and σ^t_h : z → r as µ^t_h(z) = k^t_h(z)^⊤ (k^t_h + λ · i)^{−1} y^t_h and σ^t_h(z) = λ^{−1/2} · [k(z, z) − k^t_h(z)^⊤ (k^t_h + λi)^{−1} k^t_h(z)]^{1/2}. (8) the terms µ^t_h and σ^t_h have several important connections with other literature. more specifically, µ^t_h resembles the posterior mean of a gaussian process regression problem (rasmussen, 2003) with y^t_h as its target, and σ^t_h resembles the posterior standard deviation; σ^t_h also reduces to the ucb term used in linear bandits when the feature map ϕ is finite-dimensional (lattimore & szepesvári, 2020). we then define our estimates of the value functions q^t_h and v^t_h as follows: q^t_h(x, a) := min{ µ^t_h(x, a) + β · σ^t_h(x, a), h − h + 1 }, v^t_h(x) := max_{a∈a} q^t_h(x, a), (9) where β > 0 is an exploration versus exploitation trade-off parameter. thus, v^t_{h+1} is the value function estimated by our algorithm at episode t. to get some insight into the algorithm, notice that eq. (7) implements the one-step bellman optimality update in eq. (4). to see this, let x′ ∼ p_h(·|x^τ_h, a^τ_h) be the random variable representing the next state. recall that v^t_{h+1}(x′) is also a random variable, where the randomness comes from x′. here, we can start looking at ρ(v^t_{h+1}(x′)), i.e., the risk measure ρ applied on the random variable v^t_{h+1}(x′). intuitively, this can be interpreted as the risk-adjusted value of the next state. the second term in eq. (7) above, ρ̂({v^t_{h+1}(x′_{(i)})}^m_{i=1}), is an empirical estimate of ρ(v^t_{h+1}(x′)). the choice of the response vector in eq. (7) represents the primary novelty in our algorithm design. this choice enables a new regret decomposition and an upper bound using the concentration inequality of the risk estimator. more details are presented in appendix d.1. main theoretical results this section presents our main theoretical result, i.e., the regret upper bound guarantee of ra-ucb.
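the kernel least-squares mean µ and exploration bonus σ defined above can be sketched with an rbf kernel (the kernel choice, lengthscale, 2-d features, stand-in targets, and truncation level are all illustrative assumptions):

```python
import numpy as np

def rbf(a, b, ell=0.5):
    """squared-exponential kernel matrix between the rows of a and b."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * ell ** 2))

rng = np.random.default_rng(0)
Z = rng.uniform(size=(20, 2))     # past state-action features z^tau_h (assumed 2-d)
y = np.sin(Z.sum(axis=1))         # stand-in response vector y^t_h
lam = 1.0                         # regularization lambda

K = rbf(Z, Z)                               # gram matrix K^t_h
A_inv = np.linalg.inv(K + lam * np.eye(len(Z)))

def mu_sigma(z):
    kz = rbf(Z, z[None, :])[:, 0]           # vector k^t_h(z)
    mu = kz @ A_inv @ y                     # kernel least-squares estimate mu^t_h(z)
    var = rbf(z[None, :], z[None, :])[0, 0] - kz @ A_inv @ kz
    return mu, np.sqrt(max(var, 0.0) / lam) # exploration bonus sigma^t_h(z)

beta, trunc = 1.0, 10.0                     # trade-off parameter and H - h + 1 (assumed)
mu, sigma = mu_sigma(np.array([0.5, 0.5]))
q_ucb = min(mu + beta * sigma, trunc)       # optimistic value estimate q^t_h(z)
```

points far from every observed z get a large bonus σ, so the truncated optimistic estimate drives the greedy policy toward under-explored state-action pairs, exactly the ucb mechanism the algorithm relies on.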
we first outline the key assumption that enables the efficient approximation of the value function. assumption 1. let r > 0 be a fixed constant, h be the rkhs, and b(r) = {f ∈ h : ∥f∥_h ≤ r} be the rkhs-norm ball with radius r. we assume that for any h ∈ [h] and any q : s × a → [0, h], we have t⋆_h q ∈ b(rh), where t⋆_h is the bellman optimality operator defined in eq. (4). this assumption postulates that the risk-aware bellman optimality operator maps any bounded action-value function to a function in an rkhs h with a bounded norm. this assumption ensures that for all h ∈ [h], the optimal action-value function q⋆_h lies inside b(rh). consequently, there is no approximation error when using functions from h to approximate q⋆_h. it can be viewed as equivalent to the realizability assumption in supervised learning. similar assumptions are made in jin et al. (2020); yang et al. (2020); zanette et al. (2020). please refer to du et al. (2019) for a discussion on the necessity of this assumption. given this assumption, it is clear that the complexity of h plays a central role in the regret bound of ra-ucb. following the seminal work of srinivas et al. (2009), we characterize the intrinsic complexity of h with the notion of maximum information gain, defined as γ_k(t, λ) = sup_{d⊆z, |d|≤t} { log det(i + k_d/λ) }, (10) where k is the kernel function, λ > 0 is a parameter, and k_d is the gram matrix. the maximum information gain depends on how fast the eigenvalues of h decay to zero and can be viewed as a proxy for the dimension of h when h is infinite-dimensional. note that γ_k(t, λ) is a problem-dependent quantity that depends on the kernel k, state space s, and action space a. furthermore, let us first define the action-value function class q_ucb(h, r, b) as q_ucb(h, r, b) = { q : q(z) = min{ f(z) + β · λ^{−1/2} [k(z, z) − k_d(z)^⊤ (k_d + λi)^{−1} k_d(z)]^{1/2}, h − h + 1 }_+, f ∈ h, ∥f∥_h ≤ r, β ∈ [0, b], |d| ≤ t }.
with the appropriate choice of $R$ and $B$, the set $\mathcal{Q}_{\mathrm{ucb}}(h, R, B)$ contains every possible $Q_h^t$ that can be constructed by ra-ucb. the function class $\mathcal{Q}_{\mathrm{ucb}}$ therefore resembles the hypothesis space of supervised learning, and, as we will see, its complexity, in particular the covering number of $\mathcal{Q}_{\mathrm{ucb}}$, plays a crucial role in the regret bound of ra-ucb. theorem 1. let $\lambda = 1 + 1/T$ and $\beta = B_T$ in ra-ucb, and let $\gamma_k(T, \lambda)$ be the maximum information gain defined in eq. (10). define a constant $B_T > 0$ that satisfies $B_T = \Theta\big(H(\sqrt{\gamma_k(T, \lambda)} + \sqrt{\log N_\infty(\epsilon, \mathcal{H}, B_T)})\big)$. suppose that the empirical risk estimator $\hat\rho$ achieves the rate $\xi(m, \delta)$, i.e., $\mathbb{P}\big[ |\rho(Z) - \hat\rho(\{Z_i\}_{i=1}^m)| \le \xi(m, \delta) \big] \ge 1 - \delta$. then, under assumption 1, with probability at least $1 - (T^2 H^2)^{-1}$, the regret of ra-ucb is bounded by the sum of two terms. the proof of theorem 1 is in appendix d.1. the first term resembles the risk-neutral regret bound (yang et al., 2020, theorem 4.2). interestingly, our bound distinguishes itself from the risk-neutral setting through the presence of the second term, which quantifies how fast one can estimate the risk from observed samples. it originates from the risk-aware bellman optimality equation, in which the one-step update requires knowledge of the risk-to-go starting from the next state (see eq. (4) for more detail). this risk-to-go quantity is approximated by its empirical counterpart, and the discrepancy gives rise to the second term in the regret. due to the weak simulator assumption, we have good control over the second term. in the following result, we derive the number of samples sufficient to achieve order-optimal regret for the conditional value-at-risk (cvar), one of the most commonly used coherent risk measures. more details on cvar and its properties are given in appendix c.1. corollary 1. let $\rho$ be the cvar measure defined in eq. (13) and $\hat\rho$ be the cvar estimator defined in eq. (14).
then, under the same conditions as in theorem 1, the algorithm ra-ucb achieves a regret of $R_T = O\big(B_T H \sqrt{T \gamma_k(T, \lambda)}\big)$, using a total number of samples (across all $T$ episodes) from the weak simulator that scales with $T \gamma_k(T, \lambda)$. the detailed proof of corollary 1 is in appendix d.5. as an example, for the commonly used squared exponential (se) kernel, we get $B_T = O\big(H \cdot \sqrt{\log(TH)} \cdot (\log T)^d\big)$ (yang et al., 2020, corollary 4) and $\gamma_k(T, \lambda) = O\big((\log T)^{d+1}\big)$ (srinivas et al., 2009), and thus ra-ucb incurs a regret of $R_T = \tilde{O}\big(H^2 \sqrt{T} (\log T)^{1.5d+1}\big)$. this yields the first sub-linear regret upper bound for a risk-aware rl policy with coherent risk measures. experiments in this section, we empirically demonstrate the effectiveness of ra-ucb. we run experiments on synthetic and real-world data with cvar, a commonly used coherent risk measure, as the risk measure. we analyze the influence of the risk-aversion parameter $\alpha$ (the confidence level for cvar) on the total reward as well as on the behavior of the learned policies. the code for these experiments is available in the supplementary material. synthetic experiment: robot navigation the robot navigation environment is a continuous version of the cliff walking problem considered in example 6.6 of sutton & barto (2018), visualized in fig. 1. in this synthetic experiment, a robot must navigate inside a room full of obstacles to reach its goal destination. the robot navigates by choosing from 4 actions {up, down, left, right}. since the floor is slippery, the direction of movement is perturbed by $r \cdot \phi$, where $\phi \sim U(-\pi, \pi)$ and $r \in [0, 1]$ represent the angle and magnitude of the perturbation. the robot receives a positive reward of 10 for reaching the destination and a negative reward for being close to obstacles; the negative reward grows exponentially as the robot approaches an obstacle. we set the horizon of each episode to $H = 30$.
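the paper's cvar estimator of eq. (14) is not reproduced in this excerpt, but the standard plug-in estimator conveys the idea: for a reward-valued random variable, cvar at level $\alpha$ averages the worst $\alpha$-fraction of outcomes. a sketch (the helper name `empirical_cvar` and the sorting-based form are my own, not necessarily the paper's eq. (14)):

```python
import numpy as np

def empirical_cvar(z, alpha):
    # plug-in CVaR_alpha for rewards: mean of the worst alpha-fraction of samples
    z = np.sort(np.asarray(z, dtype=float))       # ascending: worst outcomes first
    k = max(1, int(np.ceil(alpha * len(z))))
    return z[:k].mean()
```

smaller $\alpha$ is more risk-averse, and $\alpha = 1$ recovers the plain mean, matching the role the risk parameter plays in the experiments below.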
the robot does not know the perturbation parameters ($r = 0.3$) or the obstacles' positions, so it has to learn them online by interacting with the environment. we approximate the state-action value function using the rbf kernel and the KernelRidge regressor from scikit-learn. figure 1: illustration of the continuous version of the cliff walking problem. the robot starts at (0, 0) and must navigate to the goal area (in green). the robot gets negative rewards for being close to the obstacles and receives a reward of 10 upon reaching the goal. figure 2: estimated distribution of the cumulative reward when following the learned policy for different risk parameters. for α = 0.9 (leftmost plot), the policy is more risk-tolerant, which causes the average reward to be higher, but occasionally small. as we decrease α, the policy becomes more risk-averse, favoring safer paths with smaller average rewards and higher worst-case rewards. in fig. 2, we show the histograms of the cumulative rewards that the robot receives in 50 episodes by following the learned policy with different values of the risk parameter α ∈ {0.9, 0.5, 0.1}. for smaller values of α, the learned policy successfully mitigates the tail risk of the distribution, illustrated by the rightmost histogram having a smallest reward of at least 3.0, whereas the reward can go as low as near 0 for the other two policies. as we increase α, the policy becomes more risk-tolerant, leading to a higher average reward at the expense of occasional bad rewards. in this experiment, we use m = 100 samples from the weak simulator to estimate the risk in eq. (7). real-world experiment: trading this trading setup is a generalization of the betting game environment (bäuerle & ott, 2011; rigter et al., 2021). this experiment considers a simplified foreign exchange trading environment based on real historical exchange rates and volumes between eur and usd over the 12 months of 2017.
for simplicity, we fixed the trade volume for each hour at 10000. there are two actions in the environment: buy or sell. the state of the environment includes the current position, which is either long or short, and a vector of signal features containing the historical prices and trading volumes over a short period of time. we customize this environment based on the forexenv in the python package gym-anytrading.6 in fig. 3, we show a histogram of the cumulative terminal wealth achieved by the agents in 100 episodes with different risk parameters, plotted in different colors. similar to the robot experiment, we demonstrate that for a smaller value of α the policy is risk-averse and successfully mitigates the tail of the distribution: the worst-case wealth for α = 0.1 (in green) is higher than for α = 0.5 (in red) or α = 0.9 (in blue). in this experiment, we use m = 100 samples from the weak simulator to estimate the risk in eq. (7). additional experiments with other risk measures like var and evar are given in appendix e. figure 3: the normalized terminal wealth following the learned policy for different risk parameters. the vertical lines represent the average rewards. when α = 0.9 (the blue bar), the policy is more risk-tolerant, which causes the average reward to be higher at the expense of occasional low rewards. the policy becomes more risk-averse as we decrease the value of α, favoring safe paths with lower average-case rewards and higher worst-case rewards. computational complexity of ra-ucb: we need to solve $H$ kernel ridge regression problems in each episode. in the $t$-th episode, the cost of each regression problem is dominated by two operations: first, the inversion of the gram matrix $K_h^t$ of size $(t-1) \times (t-1)$ in eq. (8), which has $O(t^3)$ time complexity and $O(t^2)$ space complexity; second, the construction of the response vector in eq. (7), which has $O(mt)$ time and space complexity.
therefore, the time and space complexity of the $t$-th episode are $O(H(t^3 + mt))$ and $O(H(t^2 + mt))$, respectively. conclusion | 8 | [
108.299,
193.7236768,
195.3774711,
205.6788768
] |
lqU2cs3Zca.pdf | 2,021 | 2 | signatory: differentiable computations of the signature and logsignature transforms, on both cpu and gpu patrick kidger, terry lyons mathematical institute, university of oxford the alan turing institute, british library {kidger, tlyons}@maths.ox.ac.uk abstract signatory is a library for calculating and performing functionality related to the signature and logsignature transforms. the focus is on machine learning, and as such it includes features such as cpu parallelism, gpu support, and backpropagation. to our knowledge it is the first gpu-capable library for these operations. signatory implements new features not available in previous libraries, such as efficient precomputation strategies. furthermore, several novel algorithmic improvements are introduced, producing substantial real-world speedups even on the cpu without parallelism. the library operates as a python wrapper around c++, and is compatible with the pytorch ecosystem. it may be installed directly via pip. source code, documentation, examples, benchmarks and tests may be found at https://github.com/patrick-kidger/signatory. the license is apache-2.0. introduction the signature transform, sometimes referred to as the path signature or simply the signature, is a central object in rough path theory (lyons, 1998; 2014). it is a transformation on differentiable paths1, and may be thought of as loosely analogous to the fourier transform. however whilst the fourier transform extracts information about frequency, treats each channel separately, and is linear, the signature transform extracts information about order and area, explicitly considers combinations of channels, and is in a precise sense 'universally nonlinear' (bonnier et al., 2019, proposition a.6). the logsignature transform (liao et al., 2019) is a related transform, which we will also consider.
in both cases, by treating sequences of data as continuous paths, the (log)signature transform may be applied to problems with sequential structure, such as time series. indeed there is a significant body of work using the (log)signature transform in machine learning, with examples ranging from handwriting identification to sepsis prediction; see for example morrill et al. (2019); fermanian (2019); király & oberhauser (2019); toth & oberhauser (2020); morrill et al. (2020b). earlier work often used the signature and logsignature transforms as a feature transformation; see levin et al. (2013); chevyrev & kormilitzin (2016); yang et al. (2016a;b); kormilitzin et al. (2016); li et al. (2017); perez arribas et al. (2018) for a range of examples. in this context, when training a model on top, it is sufficient to simply preprocess the entire dataset with the signature or logsignature transform, and then save the result. however, recent work has focused on embedding the signature and logsignature transforms within neural networks, including bonnier et al. (2019); liao et al. (2019); moor et al. (2020); morrill et al. (2020a); kidger et al. (2020) among others. in this context, the signature and logsignature transforms are evaluated many times throughout a training procedure, and as such efficient and differentiable implementations are crucial. previous libraries (lyons, 2017; reizenstein & graham, 2018) have been cpu-only and single-threaded, and quickly become the major source of slowdown when training and evaluating these networks. 1and may be extended to paths of bounded variation, or merely finite p-variation (lyons et al., 2004). contributions we introduce signatory, a cpu- and gpu-capable library for calculating and performing functionality related to the signature and logsignature transforms. to our knowledge it is the first gpu-capable library for these operations. the focus is on machine learning applications.
signatory is significantly faster than previous libraries (whether run on the cpu or the gpu), due to a combination of parallelism and novel algorithmic improvements. in particular the latter include both uniform and asymptotic rate improvements over previous algorithms. additionally, signatory provides functionality not available in previous libraries, such as precomputation strategies for efficient querying of the (log)signature transform over arbitrary overlapping intervals. the library integrates with the open source pytorch ecosystem and runs on linux or windows. documentation, examples, benchmarks and tests form a part of the project. much of the code is written in c++ primitives, and the cpu implementation utilises openmp. the backward operations are handwritten for both speed and memory efficiency, and do not rely on the autodifferentiation provided by pytorch. the source code is located at https://github.com/patrick-kidger/signatory, documentation and examples are available at https://signatory.readthedocs.io, and the project may be installed directly via pip. this paper is not a guide to using signatory; for that we refer to the documentation. this is meant as a technical exposition of its innovations. applications signatory has already seen a rapid uptake amongst the signature community. recent work using signatory includes morrill et al. (2020b); perez arribas et al. (2020), who involve signatures in neural differential equations, and moor et al. (2020); min & ichiba (2020), who study deep signature models (bonnier et al., 2019). meanwhile ni et al. (2020) apply signatory to hybridise signatures with gans, and morrill et al. (2020a) create a generalised framework for the "signature method". as a final example, signatory is now itself a dependency for other libraries (kidger, 2020). background we begin with some exposition on the theory of the signature and logsignature transforms, giving definitions first and offering intuition afterwards.
also see reizenstein & graham (2018) for an introduction focusing on computational concerns, and lyons et al. (2004) and hodgkinson et al. (2020) for pedagogical introductions to the motivating theory of rough paths. the signature transform definition 1. let $\mathbb{R}^{d_1} \otimes \mathbb{R}^{d_2} \otimes \cdots \otimes \mathbb{R}^{d_n}$ denote the space of all real tensors with shape $d_1 \times d_2 \times \cdots \times d_n$. there is a corresponding binary operation $\otimes$, called the tensor product, which maps a tensor of shape $(d_1, \ldots, d_n)$ and a tensor of shape $(e_1, \ldots, e_m)$ to a tensor of shape $(d_1, \ldots, d_n, e_1, \ldots, e_m)$ via $(a_{i_1, \ldots, i_n}, b_{j_1, \ldots, j_m}) \mapsto a_{i_1, \ldots, i_n} b_{j_1, \ldots, j_m}$. for example, when applied to two vectors it reduces to the outer product. definition 2. let $N \in \mathbb{N}$. let $(\mathbb{R}^d)^{\otimes k} = \mathbb{R}^d \otimes \cdots \otimes \mathbb{R}^d$, and $v^{\otimes k} = v \otimes \cdots \otimes v$ for $v \in \mathbb{R}^d$, in each case with $k - 1$ many $\otimes$. the signature transform to depth $N$ is defined as $$\mathrm{Sig}^N : \big\{ f \in C([0,1]; \mathbb{R}^d) \ \big|\ f \text{ differentiable} \big\} \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k},$$ $$\mathrm{Sig}^N(f) = \left( \underset{0 \le t_1 < \cdots < t_k \le 1}{\int \cdots \int} \frac{\mathrm{d}f}{\mathrm{d}t}(t_1) \otimes \cdots \otimes \frac{\mathrm{d}f}{\mathrm{d}t}(t_k) \, \mathrm{d}t_1 \cdots \mathrm{d}t_k \right)_{1 \le k \le N}.$$ most texts define the signature transform using the notation of stochastic calculus. here, we sacrifice some generality (that is not needed in this context) in favour of more widely-used notation.2 the signature transform may naturally be extended to sequences of data. definition 3. the space of sequences of data over a set $V$ is $\mathcal{S}(V) = \{ x = (x_1, \ldots, x_L) \mid L \in \mathbb{N},\ x_i \in V \text{ for all } i \}$. an interval of $(x_1, \ldots, x_L) \in \mathcal{S}(V)$ is $(x_i, \ldots, x_j) \in \mathcal{S}(V)$ for some $1 \le i < j \le L$. definition 4. let $x = (x_1, \ldots, x_L) \in \mathcal{S}(\mathbb{R}^d)$ with $L \ge 2$. let $f : [0,1] \to \mathbb{R}^d$ be the unique continuous piecewise affine function such that $f(\tfrac{i-1}{L-1}) = x_i$ for all $i$, and which is affine on the pieces in between. let $N \in \mathbb{N}$. then define $\mathrm{Sig}^N(x) = \mathrm{Sig}^N(f)$. in this way we interpret $\mathrm{Sig}^N$ as a map $\mathrm{Sig}^N : \mathcal{S}(\mathbb{R}^d) \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$. note that the choice of $\tfrac{i-1}{L-1}$ is unimportant; any $L$ points in $[0,1]$ would suffice, and in fact the definition is invariant to this choice (bonnier et al., 2019, definition a.10). the grouplike structure with $a_0 = b_0 = 1 \in \mathbb{R}$ on the right hand side, define $\boxtimes$ by3 $$\boxtimes : \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k} \times \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k} \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}, \qquad (a_1, \ldots, a_N) \boxtimes (b_1, \ldots, b_N) = \left( \sum_{j=0}^{k} a_j \otimes b_{k-j} \right)_{1 \le k \le N}.$$ chen's identity (lyons et al., 2004, theorem 2.9) states that the image of the signature transform forms a noncommutative group with respect to $\boxtimes$. that is, given a sequence of data $(x_1, \ldots, x_L) \in \mathcal{S}(\mathbb{R}^d)$ and some $j \in \{2, \ldots, L-1\}$, then $\mathrm{Sig}^N((x_1, \ldots, x_L)) = \mathrm{Sig}^N((x_1, \ldots, x_j)) \boxtimes \mathrm{Sig}^N((x_j, \ldots, x_L))$. furthermore the signature of a sequence of length two may be computed explicitly from the definition. letting $$\exp : \mathbb{R}^d \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}, \qquad \exp : v \mapsto \left( \frac{v^{\otimes n}}{n!} \right)_{1 \le n \le N},$$ then $\mathrm{Sig}^N((x_1, x_2)) = \exp(x_2 - x_1)$. with chen's identity, this implies that the signature transform may be computed by evaluating $\mathrm{Sig}^N((x_1, \ldots, x_L)) = \exp(x_2 - x_1) \boxtimes \exp(x_3 - x_2) \boxtimes \cdots \boxtimes \exp(x_L - x_{L-1})$. the logsignature, inverted signature, and inverted logsignature the group inverse we denote $(\cdot)^{-1}$. additionally a notion of logarithm may be defined (liao et al., 2019), where $\log : \mathrm{image}(\mathrm{Sig}^N) \to \prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$. 2additionally, many texts also include a $k = 0$ term, which is defined to equal one. we omit this as it does not carry any information, and is therefore irrelevant to the task of machine learning. 3most texts use $\otimes$ rather than $\boxtimes$ to denote this operation, as it may be regarded as a generalisation of the tensor product. that will not be important to us, however, so we use differing notation to aid interpretation. this then defines the notions of inverted signature transform, logsignature transform and inverted logsignature transform as $\mathrm{InvertSig}^N(x) = \mathrm{Sig}^N(x)^{-1}$, $\mathrm{LogSig}^N(x) = \log(\mathrm{Sig}^N(x))$, and $\mathrm{InvertLogSig}^N(x) = \log(\mathrm{Sig}^N(x)^{-1})$ respectively.
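the product-of-exponentials formula from chen's identity is straightforward to realise numerically. below is a depth-2-only sketch in numpy (truncating every tensor series at depth N = 2; the function names are my own, and this is not signatory's implementation):

```python
import numpy as np

def tensor_exp2(v):
    # depth-2 truncation of exp: (v, v (x) v / 2!)
    return v.copy(), np.outer(v, v) / 2.0

def chen2(a, b):
    # depth-2 Chen product: (a1 + b1, a2 + b2 + a1 (x) b1)
    return a[0] + b[0], a[1] + b[1] + np.outer(a[0], b[0])

def signature2(path):
    # depth-2 signature of the piecewise linear path through the rows of `path`,
    # via sig(x) = exp(x2 - x1) [chen] exp(x3 - x2) [chen] ... [chen] exp(xL - x_{L-1})
    sig = tensor_exp2(path[1] - path[0])
    for i in range(1, len(path) - 1):
        sig = chen2(sig, tensor_exp2(path[i + 1] - path[i]))
    return sig
```

for the two-segment path (0,0) → (1,0) → (1,1), the level-2 term carries the order information off the diagonal: the first channel fully increments before the second, so the (1,2) entry is 1 while the (2,1) entry is 0.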
we emphasise that the inverted signature or logsignature transforms are not the inverse maps of the signature or the logsignature transforms. the logsignature transform extracts the same information as the signature transform, but represents the information in a much more compact way, as $\mathrm{image}(\log)$ is a proper subspace4 of $\prod_{k=1}^{N} (\mathbb{R}^d)^{\otimes k}$. its dimension is $$w(d, N) = \sum_{k=1}^{N} \frac{1}{k} \sum_{i \mid k} \mu\!\left(\frac{k}{i}\right) d^{i},$$ which is known as witt's formula (lothaire, 1997), where $\mu$ is the möbius function. signatures in machine learning | 3 | [
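witt's formula is simple to evaluate; the sketch below (the helper names `mobius` and `logsig_dim` are my own) computes the logsignature dimension, e.g. w(2, 3) = 5 versus 2 + 4 + 8 = 14 channels for the depth-3 signature of a 2-dimensional path:

```python
def mobius(n):
    # Möbius function mu(n) by trial division: 0 if n has a squared prime
    # factor, else (-1)^(number of distinct prime factors)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def logsig_dim(d, N):
    # Witt's formula: w(d, N) = sum_{k=1}^{N} (1/k) sum_{i | k} mu(k / i) d^i
    total = 0
    for k in range(1, N + 1):
        s = sum(mobius(k // i) * d ** i for i in range(1, k + 1) if k % i == 0)
        total += s // k       # the inner sum is always divisible by k
    return total
```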
108.249,
525.6310784,
287.9790893,
535.5936784
] |
t98k9ePQQpn.pdf | 2,022 | 2 | optimal transport for long-tailed recognition with learnable cost matrix hanyu peng, mingming sun, ping li cognitive computing lab baidu research no.10 xibeiwang east road, beijing 100193, china 10900 ne 8th st. bellevue, washington 98004, usa {penghanyu,sunmingming01,liping11}@baidu.com abstract the long-tailed recognition problem has recently been attracting a great deal of attention. in contrast to conventional recognition, it assumes that the label distribution of the training set is severely skewed, which predictably poses challenges to the generalisation behaviour of the model. approaches to these challenges fall into two groups: first, training-aware methods, which aim to enhance the generalisability of the model by exploiting its potential during the training period; and second, post-hoc correction, freely combinable with training-aware methods, which refines the predictions as far as possible in the post-processing stage and offers the advantages of simplicity and effectiveness. this paper introduces an alternative direction for post-hoc correction that goes beyond statistical methods. mathematically, we approach this issue from the perspective of optimal transport (ot); however, choosing the exact cost matrix when applying ot is challenging and requires expert knowledge of the various tasks. to overcome this limitation, we propose to employ a linear mapping to learn the cost matrix adaptively, without the need for manual configuration. testing our methods in practice, along with high efficiency and excellent performance, our method surpasses all previous methods and has the best performance to date.
introduction classification problems in the real world are generally challenged by long-tailed label distributions, i.e., having a small number of samples for a majority of labels and a dominant number of samples for a minority of labels (van horn & perona, 2017; buda et al., 2018; liu et al., 2019). this is also known as imbalanced recognition, which has been widely studied in the past decades (cardie & nowe, 1997; chawla et al., 2002; qiao & liu, 2009; cui et al., 2019). these distribution biases pose a significant challenge to predictive modeling; conceivably, models often suffer from poor generalisation and undesirable estimation bias (cao et al., 2019; kang et al., 2020; zhou et al., 2020). recently, a renewed interest in the long-tailed recognition problem has emerged in the context of neural networks, as numerous publications in the literature endeavour to resolve the problem, albeit in different ways, including decoupling (kang et al., 2020), meta-learning (ren et al., 2020; wang et al., 2020; li et al., 2021), post-hoc correction (tang et al., 2020; hong et al., 2021), etc. (liu et al., 2019; cao et al., 2019; tang et al., 2020). one representative post-hoc correction method, logit adjustment (menon et al., 2021), provides a statistical correction to the prediction and has received widespread attention for its simplicity and validity. its downside is that it is conducted on individual samples, so the rectified marginal distribution may not match the desired distribution. having identified this exact flaw of logit adjustment, we model the problem explicitly as an equality constraint, while minimising the difference between the refined distribution and the original one; this minimisation is based on inner-product similarity. going a little further, the resulting problem can be linked to ot.
drawing on this linkage, we develop it further by proposing a linear mapping to automatically learn the cost matrix, thereby circumventing the requirement for expert knowledge to configure this matrix. in summary, our contributions are as follows: • we propose an alternative direction based on convex optimisation to do post-hoc correction, which goes beyond the previous direction based on the statistical view. • imposing marginal distributions to align with the ideal ones, we derive an optimisation problem tied to ot that is solved using sinkhorn iterations. furthermore, for better learning of the cost matrix, we present a linear mapping enabling elegant learning with a one-layer network. • the experimental evidence shows high efficiency and the best performance on three benchmarks, verifying that addressing the post-hoc problem via ot is helpful and effective. preliminaries in this section, we begin with notational definitions, followed by an introduction to the long-tailed recognition problem. finally, we briefly review ot and logit adjustment (menon et al., 2021). notations: in what follows, for two matrices $X, Y \in \mathbb{R}^{N \times K}$, we denote $\langle X, Y \rangle = \sum_{n=1}^{N} \sum_{k=1}^{K} X_{nk} Y_{nk}$ as the frobenius dot-product. $\delta(\cdot)$ stands for the dirac function, and $p(\cdot)$ represents a probability distribution. $U(r, c) = \{ P \in \mathbb{R}_+^{N \times K} \mid P \mathbf{1}_K = r,\ P^\top \mathbf{1}_N = c \}$, where $\mathbf{1}_N$ and $\mathbf{1}_K$ are $N$-dimensional and $K$-dimensional vectors whose elements are all 1; $r$ and $c$ refer to vectors of size $N$ and $K$, and $U(r, c)$ includes all matrices with row and column sums $r$ and $c$ respectively. problem formulation having a collection of training samples $\{(x_n^s, y_n^s)\}_{n=1}^{N_s}$, validation samples $\{(x_n^v, y_n^v)\}_{n=1}^{N_v}$ and test samples $\{(x_n^t, y_n^t)\}_{n=1}^{N_t}$ for classification with $K$ labels and input $x \in \mathbb{R}^D$, long-tailed recognition assumes that the class-prior distribution for the training data $p(y^s)$ is different from that for the validation data $p(y^v)$ and the test data $p(y^t)$.
specifically, long-tailed recognition means the distribution $p(y^s)$ is highly skewed, that is, some classes have a dominant number of samples, while tail labels own a very small number of samples. we can use the imbalance ratio to measure the skewness of the training data set, which can be defined as $r = N^s_{\max} / N^s_{\min}$, where $N^s_{\max}$ and $N^s_{\min}$ denote the largest and smallest per-class numbers of samples in the training data set, respectively. in this paper, we assume that the marginal distribution of the test set is known; we consider it as an implicit prior knowledge to be applied. stepping back, even if we do not know the marginal distribution of the test dataset in advance, there are still ways to estimate it relatively precisely, such as the methods in hendrycks et al. (2018); azizzadenesheli et al. (2019). obviously, most models trained on an imbalanced training data set would suffer from extremely limited generalisation ability. hence the ultimate goal is to learn a model that minimises the empirical risk: $$J(\phi(x_n^s), y_n^s) = \sum_{n=1}^{N_s} \ell(\phi(x_n^s), y_n^s),$$ where $\phi(x_n^s) \in \mathbb{R}^K$ denotes the logits of the associated sample, $\phi(\cdot) : \mathbb{R}^D \to \mathbb{R}^K$ represents the mapping via neural networks, and $\ell$ stands for the loss function, typically cross entropy for classification problems. reminders on optimal transport ot is used to calculate the cost of transporting one probability measure to another. we next present a brief introduction to ot to help us better view the long-tailed problem from an ot perspective. for two random variables $X$ and $Y$, we denote their corresponding probability measures as $r$ and $c$. besides, $c(x, y) : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}_+$ stands for the cost function, which measures the expense of transporting $x$ to $y$. based on these, we can define the ot distance between $X$ and $Y$ as $$d(r, c) = \min_{\pi \in \Pi(r, c)} \int_{\mathcal{X} \times \mathcal{Y}} c(x, y)\, \pi(x, y)\, \mathrm{d}x\, \mathrm{d}y,$$ where $\Pi(r, c)$ is the set of joint probability measures with marginals $r$ and $c$, i.e., $\int_{\mathcal{Y}} \pi(x, y)\, \mathrm{d}y = r(x)$ and $\int_{\mathcal{X}} \pi(x, y)\, \mathrm{d}x = c(y)$.
when we extend the above to the discrete situation, we consider the following discrete distributions: $$r = \sum_i p_i(x_i)\, \delta(x_i), \qquad c = \sum_j p_j(y_j)\, \delta(y_j),$$ where $p_i(x_i)$ and $p_j(y_j)$ represent the probability mass assigned to the samples $x_i$ and $y_j$ respectively. in this context, the ot distance can be expressed as $$d_M(r, c) = \min_{P \in U(r, c)} \langle P, M \rangle,$$ where $M$ stands for the cost matrix constructed by $M_{ij} = c(x_i, y_j)$. the goal of ot is to find a transportation matrix $P$ that minimizes the distance $d_M(r, c)$. as we can see, ot is a distance measure between two probability distributions under some cost matrix (villani, 2008). however, when we use network simplex or interior point methods to solve the above optimisation problem, it often comes at the cost of heavy computational demands. to tackle this issue, ot with an entropy constraint has been proposed, which allows the optimisation at a small computational cost and with sufficient smoothness (burges et al., 2013). by adding a lagrangian multiplier for the entropy constraint, the new formulation can be defined as the dual-sinkhorn divergence $d^\lambda_M(r, c) = \langle P^\lambda, M \rangle$, where $$P^\lambda = \operatorname*{arg\,min}_{P \in U(r, c)} \langle P, M \rangle - \lambda\, h(P),$$ with $\lambda \in [0, +\infty]$ and $h(P) = -\sum_{n=1}^{N} \sum_{k=1}^{K} P_{nk} \log P_{nk}$; besides, it can be calculated with matrix scaling algorithms at a cheaper computational cost. the following lemma guarantees the convergence and uniqueness of the solution. lemma 1. for $\lambda > 0$, the solution $P^\lambda$ is unique and has the form $P^\lambda = \mathrm{diag}(u)\, K\, \mathrm{diag}(v)$, where $u$ and $v$ are two non-negative vectors uniquely defined up to a multiplicative factor and $K = e^{-M/\lambda}$ is the element-wise exponential of $-M/\lambda$. the above lemma states the uniqueness of $P^\lambda$ (sinkhorn, 1974), and $P^\lambda$ can be efficiently computed via sinkhorn's fixed point iteration $u \leftarrow r ./ K v$, $v \leftarrow c ./ K^\top u$. a quick recap of logit adjustment we give a brief introduction to logit adjustment (menon et al., 2021; hong et al., 2021).
for the model $\phi(\cdot)$, trained with the standard cross-entropy loss on the imbalanced training data set and evaluated on test data, the test logit is adjusted as follows: $$\phi(x_n^t) \leftarrow \phi(x_n^t) - \log p(y^s).$$ this simple procedure is derived from the bayes optimal rule. it is apparent that logit adjustment involves a post-hoc correction on each individual sample, which does not necessarily guarantee that the marginal distribution over the whole dataset matches the desired distribution. methodology the first part of this section explores post-hoc correction from an ot perspective, and proceeds to the automatic learning of the cost matrix via a linear mapping. lastly, we demonstrate how it can be achieved simply with a one-layer neural network. post-hoc correction formalised from an ot perspective since logit adjustment applies the adjustment at the individual-sample level, it does not ensure that the marginal distribution of the overall data set fulfils our desired distribution. in this respect, we state the constraint explicitly as an equation: $$Y^\top \mathbf{1}_N = \mu,$$ where $Y \in \mathbb{R}^{N \times K}$ indicates the refined prediction in matrix form, and $\mu$ represents the expected distribution on the test set. alternatively, it is desirable to preserve another characteristic of $Y$, namely remaining as similar to the original prediction as possible. we measure this with an inner-product based similarity, which is a straightforward yet useful similarity measure: $$\operatorname*{maximize}_{Y} \ \langle Y, c(\hat{Z}) \rangle,$$ where $\hat{Z}$ represents the original prediction in matrix form, and $c(\cdot)$ denotes some transformation of $\hat{Z}$; it can be a simple function, like the logarithmic function $\log(Z)$ or the power function $Z^\alpha$. here we select $-\log(\cdot)$ as the cost function. this choice was driven by the requirement that the cost matrix be positive, and the transformation of the original prediction by $-\log(\cdot)$ satisfies this condition.
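the adjustment above amounts to one vectorised subtraction; a minimal numpy sketch (the name `logit_adjust` is my own):

```python
import numpy as np

def logit_adjust(logits, class_priors):
    # post-hoc logit adjustment: subtract log p(y^s) class-wise from the test logits
    return logits - np.log(class_priors)
```

a tail class with a small training prior receives a large boost, which can flip the prediction on individual samples while leaving the dataset-level marginal unconstrained.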
in addition, as the log likelihood represents the local probability density of the associated samples, it can also be used in place of $\hat{Z}$ for the similarity approximation. in brief, the resulting numerical form can be stated formally as follows: $$\operatorname*{minimize}_{Y} \ \langle Y, -\log \hat{Z} \rangle \quad \text{subject to} \quad Y \mathbf{1}_K = \mathbf{1}_N, \ Y^\top \mathbf{1}_N = \mu.$$ the extra constraint on $Y$ is imposed simply because the tuned estimate has to fulfil the basic probabilistic requirement that each row sums to one. comparing eq. (9-10) with eq. (4), we can see that if we substitute $P$ with $Y$, and substitute $r$ and $c$ with $\mathbf{1}_N$ and $\mu$ respectively, the above optimisation problem is actually a special case of ot. in the preliminaries, entropy-regularised ot (eot) was introduced; by adding entropy regularisation to ot, the given equation can be solved efficiently by the sinkhorn algorithm. specifically, the equation is $$\operatorname*{minimize}_{Y} \ \langle Y, -\log \hat{Z} \rangle - \lambda\, h(Y) \quad \text{subject to} \quad Y \mathbf{1}_K = \mathbf{1}_N, \ Y^\top \mathbf{1}_N = \mu.$$ the associated algorithmic flow for solving eq. (11) is outlined in detail in algorithm 1. algorithm 1: solve the ot-related problem in the post-hoc correction efficiently via the sinkhorn algorithm. input: cost matrix $M = -\log(\hat{Z})$, trade-off parameter $\lambda$, max number of iterations $N_T$, iteration number $t$, error threshold $\epsilon$, current error $\sigma$, row and column sums $r = \mathbf{1}_N$ and $c = \mu$; $|\cdot|$ denotes the vector norm. result: refined predictions $Y$. 1: initialise $K = e^{-M/\lambda}$, $u_{\mathrm{old}} = \mathbf{1}_N$, $v = \mathbf{1}_K$, $t = 0$; 2: while $t \le N_T$ and $\sigma \ge \epsilon$ do 3: $u = r ./ K v$; 4: $v = c ./ K^\top u$; 5: $\sigma = |u_{\mathrm{old}} - u|$; 6: $u_{\mathrm{old}} = u$; 7: $t = t + 1$; 8: end while 9: output $Y = \mathrm{diag}(u)\, K\, \mathrm{diag}(v)$. assigning $\lambda = 1$, we observe that our objective function equates to the kl divergence, illustrating the generality of our approach. darp (kim et al., 2020a) has previously applied this to long-tailed semi-supervised classification. remark we would like to illustrate the non-applicable scenarios of our method. firstly, our method requires a large number of samples for evaluation.
this is because if the batch size is small, we cannot guarantee that the desired marginal distribution is satisfied within the batch; in some online scenarios, a sample-wise correction method is more suitable. in addition, our method assumes that the marginal distribution is already known; here we assume it is consistent with a uniform distribution. cost function learning via linear mapping simple functions are likely to be sub-optimal for real data sets; this suggests designing a better cost function to better fit the long-tailed recognition problem. however, manually designed cost functions require expert knowledge in different domains. thus, we propose to use a linear mapping to automatically learn the cost function, which removes the need for configuration. more specifically, the predictions $\tilde{Z}$ are generated by a softmax operation leveraging a linear transformation matrix $W \in \mathbb{R}^{K \times K}$, $$\tilde{Z}_{nk} = \frac{\exp(W^\top \phi(x_n))_k}{\sum_{k'=1}^{K} \exp(W^\top \phi(x_n))_{k'}},$$ and $W$ is learned so that the objective function in eq. (12) is minimised. the resulting formula can be realised using a simple one-layer network with weight parameter $W$, together with the error function in eq. (12). we initialise $W$ with the identity matrix and use a small learning rate to learn $W$. one could also absorb the term $-\log p(y^s)$ as a fixed bias parameter into the network. motivated by this description, we can use a general gradient descent algorithm, such as sgd, to optimise the error function. taking the practical implementation into account, the direct calculation of eq. (12) can be done efficiently using sinkhorn iterations (burges et al., 2013; frogner et al., 2015; peyré et al., 2019) in mini-batch training. besides, we term the proposed method otlm (optimal transport via linear mapping); figure 1 illustrates an overview of otlm. figure 1: our proposed framework otlm. the logit $\phi(x)$ is fed into a single-layer feed-forward network to infer $\tilde{Z}$. the cost matrix $M$ is set to $-\log \tilde{Z}$.
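the learnable part is just a K × K matrix applied before the softmax; a numpy forward-pass sketch (the name `linear_mapped_probs` is my own; in the paper W is trained by sgd through the sinkhorn loop):

```python
import numpy as np

def linear_mapped_probs(logits, W):
    # ~Z_{nk} = softmax_k( (W^T phi(x_n))_k ), with W initialised to the identity
    s = logits @ W
    s = s - s.max(axis=1, keepdims=True)     # shift for numerical stability
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)
```

with the identity initialisation this reduces to the ordinary softmax, so learning starts from the unmodified cost matrix $-\log$ softmax.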
by solving the optimal transport problem via the sinkhorn algorithm, we can obtain the refined prediction y and compute the gradients w.r.t. w. the optimisation in algorithm 1 was conducted on the overall data set with a comparatively large number of samples. the scenario is now quite different, as mini-batch training is more favoured when it comes to neural networks. for this reason, the optimisation workflow in algorithm 1, as pointed out by peyré et al. (2019); viehmann (2019), suffers from a batch stabilisation problem. to this end, we apply a logarithmic transformation to steps 3–4 in algorithm 1: log u = log r − log(mv) = log r − logsumexp(log m + log v), and log v = log c − log(m^⊤ u) = log c − logsumexp(log m^⊤ + log u), with the log-sum-exp operation logsumexp(x) = log(Σ_i exp x_i). assuming convergence is achieved in sinkhorn’s loops, we need not manually compute the derivative of the loss function in eq. (12) w.r.t. w. as long as we make sure that the entire optimisation process is differentiable, modern deep learning libraries can implement the end-to-end derivatives automatically, since the result of sinkhorn’s loops is a mere composition of elementary operations. performance degradation caused by imbalanced distributions can be addressed with meta-learning based methods (ren et al., 2020; wang et al., 2020; li et al., 2021). recent works have illustrated that neural networks can learn more meaningful representations from a validation data set {(x^v_n, y^v_n)}^{n_v}_{n=1}. we also take a validation data set to optimise the parameter w, but with the still significant difference that we require no labelling information, thus avoiding a large expense of labelling samples.
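the log-stabilised sinkhorn updates described above can be sketched as follows. this is a minimal numpy sketch: the entropic-regularisation parameter eps, the iteration count, and the function names are illustrative assumptions rather than the paper's code, but the two updates mirror the stabilised form log u = log r − logsumexp(log k + log v) and its transpose.

```python
import numpy as np

def logsumexp(x, axis):
    """Numerically stable log-sum-exp along an axis."""
    m = x.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis=axis) + np.log(np.exp(x - m).sum(axis=axis))

def sinkhorn_log(M, r, c, eps=0.1, n_iters=200):
    """Log-domain Sinkhorn iterations for cost matrix M with row/column
    marginals r and c; log K = -M / eps is the log of the Gibbs kernel.
    Returns the transport plan P = diag(u) K diag(v)."""
    logK = -M / eps
    log_v = np.zeros_like(c)
    for _ in range(n_iters):
        log_u = np.log(r) - logsumexp(logK + log_v[None, :], axis=1)
        log_v = np.log(c) - logsumexp(logK.T + log_u[None, :], axis=1)
    return np.exp(log_u[:, None] + logK + log_v[None, :])
```

because every step is a composition of elementary differentiable operations, an automatic-differentiation framework can backpropagate through the loop directly, which is exactly the property the text relies on.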
experiments in this section, we first conduct experiments comparing our approach with extant post-hoc correction methods on three data sets, including cifar-100-lt (cao et al., 2019), imagenet-lt (liu et al., 2019), and inaturalist (horn et al., 2018), with varying backbones. finally, we empirically compare our algorithm with alternative cutting-edge long-tailed recognition methods. observe that our algorithm can plausibly be coupled with any training-aware long-tailed recognition method. as an illustration of the potency and strong generalisation of our approach, we conduct post-hoc correction of the predictions of two methods that are fairly typical of the training stage across all data sets: the first is based on cross entropy (ce), and the other is the recently proposed ride (wang et al., 2021), which is based on multiple experts. we also conduct experiments on semi-supervised learning in the appendix, to some extent imitating the online situation. all our experiments are implemented on the paddlepaddle deep learning platform. data sets and baselines the baseline methods and data sets are briefly described here, with implementation details placed in the appendix. we assume that the marginal distribution is uniform due to the characteristics of the data sets. baselines: we compare our methods with (i) post-hoc correction methods including logit adjustment (menon et al., 2021) and darp (kim et al., 2020a), and (ii) state-of-the-art methods including focal loss (lin et al., 2017), ldam (cao et al., 2019), bbn (zhou et al., 2020), balanced softmax (ren et al., 2020), causal norm (tang et al., 2020), lade (hong et al., 2021), m2m (kim et al., 2020b), decouple (kang et al., 2020), lfme (xiang et al., 2020), and ride (wang et al., 2021). long-tailed data sets: we conduct experiments on three data sets: cifar-100-lt, imagenet-lt, and inaturalist.
we build the imbalanced version of cifar-100 by downsampling samples per class following the profile in liu et al. (2019); kang et al. (2020), with imbalance ratios 10, 50, and 100. for all the benchmarks, we evaluate the performance on the test data set using a model trained on the training data set, and report the results using top-1 accuracy. main results table 1 reports the results on the cifar-100-lt data set. for ce-based methods, it shows that logit adjustment is indeed a simple yet effective method for post-hoc correction: remarkably, it outperforms the baseline by 2.6%, 4.1%, and 4.1% under imbalance ratios of 10, 50, and 100 respectively. turning to the results of the ot approach, the superior numbers stress the advantages of our approach over darp and logit adjustment. the ot algorithm further improves the performance by 0.62%, 0.43%, and 0.70%, while otlm outperforms logit adjustment by 0.55%, 0.73%, and 0.99% under the three imbalance ratios on the cifar-100-lt data set. for ride-based training-aware methods, our method also improves accuracy by 0.6% and 1.23%. table 1: comparison on the top-1 accuracy with post-hoc correction methods on the cifar-100-lt data set using a resnet-32 backbone, and on the imagenet-lt and inaturalist data sets using resnext-50-32x4d and resnet-50 backbones respectively. better results are marked in bold; the small red font indicates the increase in accuracy when our method is compared to logit adjustment and ride; '-' means the results are not reported; ce indicates that a cross-entropy loss function is used in the training stage. imbalance ratio 10: baseline (ce) 59.00; logit adjustment+ce 61.63; darp+ce 61.78; ot+ce 62.25 (+0.62); otlm+ce 62.18 (+0.55); further rows: ride, ot+ride, otlm+ride. table 1 also provides the results on the imagenet-lt and inaturalist data sets.
it again draws our attention that for ce-based training, ot, which is based on convex optimisation, consistently outperforms logit adjustment by a large margin (1.76%) on the inaturalist data set. this accuracy boost indicates the great potential of optimisation-based methods for post-hoc correction. besides, not surprisingly, otlm can further enhance the prediction accuracy: it outperforms logit adjustment by 0.73% and 2.86% on imagenet-lt and inaturalist respectively. as for ride, the gain in accuracy is also significant, about 3% on inaturalist. comparison with the state-of-the-art methods having demonstrated the validity of ot and otlm, we turn to a comparison of our performance with existing methods that achieve the most advanced results on the three benchmarks. as shown in table 2, the experimental results on the imagenet-lt and inaturalist data sets are remarkable: our method outperforms ride by 1.0% and 3.0%, respectively. this is notable since inaturalist is a very difficult and fine-grained data set consisting of 8,142 categories. these performance gains come at a fraction of the cost, as we show in the following subsection. table 2: comparison on the top-1 accuracy with state-of-the-art methods on the cifar-100-lt data set using a resnet-32 backbone, and on the imagenet-lt and inaturalist data sets using resnext-50-32x4d and resnet-50 backbones respectively. '−' denotes results that are not reported; underlined results are the ones being compared; best results are marked in bold; the small red font denotes performance gain. computation cost of ot and otlm as we have highlighted, the additional computational cost of ot and otlm is particularly small compared to that of training-aware methods. table 3 provides the exact times of the evaluations on the imagenet-lt and inaturalist data sets. please note that the times here are measured from the start of each method to the final best performance.
comparing ot and otlm, one optimisation iteration of otlm is in fact much faster than that of ot, because otlm runs on the gpu. except for otlm, which was run on an nvidia v100 card, the results come from a 28-core machine (2.20 ghz xeon). firstly, we can observe that none of the post-hoc correction methods here is particularly time-consuming. for comparison, imagenet-lt and inaturalist can be trained on 4 nvidia cards (v100) for extremely long periods of time: using resnext-50-32x4d on the inaturalist training data set, for example, with a batch size of 256, the training time to perform 1 iteration is approximately 850 seconds. table 3: time for different methods to execute on the imagenet-lt and inaturalist data sets. the times here are measured from the start of each method to the final best performance; running times are reported in seconds. columns: data set | logit adjustment | ot | otlm. coupled with the prior results, we can observe that ot offers the best trade-off between performance and efficiency among post-hoc correction methods. also, comparing the performance of each method, we find that ot and otlm provide the best balance of performance and running cost. convergence behavior of otlm | 7 | [
108.249, 516.9290784, 291.5599318, 526.8916784 ] |
57PipS27Km.pdf | 2,022 | 2 | continuous-time meta-learning with forward mode differentiation tristan deleu∗ david kanaa leo feng giancarlo kerg pierre-luc bacon 2 yoshua bengio 1,2 guillaume lajoie 2 mila – université de montréal abstract drawing inspiration from gradient-based meta-learning methods with infinitely small gradient steps, we introduce continuous-time meta-learning (comln), a meta-learning algorithm where adaptation follows the dynamics of a gradient vector field. specifically, representations of the inputs are meta-learned such that a task-specific linear classifier is obtained as a solution of an ordinary differential equation (ode). treating the learning process as an ode offers the notable advantage that the length of the trajectory is now continuous, as opposed to a fixed and discrete number of gradient steps. as a consequence, we can optimize the amount of adaptation necessary to solve a new task using stochastic gradient descent, in addition to learning the initial conditions as is standard practice in gradient-based meta-learning. importantly, in order to compute the exact meta-gradients required for the outer-loop updates, we devise an efficient algorithm based on forward mode differentiation, whose memory requirements do not scale with the length of the learning trajectory, thus allowing longer adaptation in constant memory. we provide analytical guarantees for the stability of comln, we show empirically its efficiency in terms of runtime and memory usage, and we illustrate its effectiveness on a range of few-shot image classification problems. introduction among the existing meta-learning algorithms, gradient-based methods as popularized by model-agnostic meta-learning (maml, finn et al., 2017) have received a lot of attention over the past few years. they formulate the problem of learning a new task as an inner optimization problem, typically based on a few steps of gradient descent.
an outer meta-optimization problem is then responsible for updating the meta-parameters of this learning process, such as the initialization of the gradient descent procedure. however since the updates at the outer level typically require backpropagating through the learning process, this class of methods has often been limited to only a few gradient steps of adaptation, due to memory constraints. although solutions have been proposed to alleviate the memory requirements of these algorithms, including checkpointing (baranchuk, 2019), using implicit differentiation (rajeswaran et al., 2019), or reformulating the meta-learning objective (flennerhag et al., 2018), they are generally either more computationally demanding, or only approximate the gradients of the meta-learning objective (nichol et al., 2018; flennerhag et al., 2020). in this work, we propose a continuous-time formulation of gradient-based meta-learning, called continuous-time meta-learning (comln), where the adaptation is the solution of a differential equation (see figure 1). moving to continuous time allows us to devise a novel algorithm, based on forward mode differentiation, to efficiently compute the exact gradients for meta-optimization, no matter how long the adaptation to a new task might be. we show that using forward mode differentiation leads to a stable algorithm, unlike the counterpart of backpropagation in continuous time called the adjoint method (frequently used in the neural ode literature; chen et al., 2018) which tends to be unstable in conjunction with gradient vector fields. 
(∗correspondence to: tristan deleu <deleutri@mila.quebec>; 1cifar senior fellow, 2cifar ai chair. code is available at: https://github.com/tristandeleu/jax-comln) figure 1: illustration of the adaptation process in (a) a gradient-based meta-learning algorithm, such as anil (raghu et al., 2019), where the adapted parameters w_t are given after t steps of gradient descent, w_{t+1} = w_t − α∇l(w_t), and in (b) continuous-time meta-learning (comln), where the adapted parameters w(t) are the result of following the dynamics of the gradient vector field dw/dt = −∇l(w(t)) up to time t. moreover, as the length of the adaptation trajectory is a continuous quantity, as opposed to a discrete number of gradient steps fixed ahead of time, we can treat the amount of adaptation in comln as a meta-parameter—on par with the initialization—which we can meta-optimize using stochastic gradient descent. we verify empirically that our method is both computationally and memory efficient, and we show that comln outperforms other standard meta-learning algorithms on few-shot image classification datasets. background in this work, we consider the problem of few-shot classification, that is, the problem of learning a classification model with only a small number of training examples. more precisely, for a classification task τ, we assume that we have access to a (small) training dataset d^train_τ = {(x_m, y_m)}_{m=1}^{m} to fit a model on task τ, and a distinct test dataset d^test_τ to evaluate how well this adapted model generalizes on that task. in the few-shot learning literature, it is standard to consider the problem of k-shot n-way classification, meaning that the model has to classify among n possible classes, and there are only k examples of each class in d^train_τ, so that overall the number of training examples is m = kn. we use the convention that the target labels y_m ∈ {0, 1}^n are one-hot vectors.
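the contrast drawn in figure 1 can be illustrated on a toy quadratic loss. this is a sketch under assumed settings (euler integration, l(w) = 0.5‖w‖², which has the closed-form flow w(t) = w0·exp(−t)); it is not comln's actual solver, which also meta-learns the representation and computes exact meta-gradients by forward mode differentiation.

```python
import numpy as np

def grad_loss(w):
    # toy quadratic loss l(w) = 0.5 * ||w||^2, so grad l(w) = w
    return w

def discrete_adaptation(w0, alpha, n_steps):
    """(a) a fixed number of gradient steps: w_{t+1} = w_t - alpha * grad l(w_t)."""
    w = np.array(w0, dtype=float)
    for _ in range(n_steps):
        w = w - alpha * grad_loss(w)
    return w

def continuous_adaptation(w0, horizon, dt=1e-3):
    """(b) Euler integration of dw/dt = -grad l(w(t)) up to time t = horizon;
    the horizon is a continuous quantity that could itself be optimized."""
    w = np.array(w0, dtype=float)
    for _ in range(int(round(horizon / dt))):
        w = w - dt * grad_loss(w)
    return w
```

the key difference is that n_steps in (a) is a discrete hyperparameter fixed ahead of time, while horizon in (b) is a real number that can be treated as a meta-parameter and adjusted by gradient descent.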
gradient-based meta-learning gradient-based meta-learning methods aim to learn an initialization such that the model is able to adapt to a new task via gradient descent. such methods are often cast as a bi-level optimization process: adapting the task-specific parameters θ in the inner loop, and training the (task-agnostic) meta-parameters φ and initialization θ_0 in the outer loop. the meta-learning objective is: min_{θ_0, φ} e_τ [ l(θ^τ_t, φ; d^test_τ) ] s.t. θ^τ_{s+1} = θ^τ_s − α∇_θ l(θ^τ_s, φ; d^train_τ) for s = 0, . . . , t − 1, ∀τ ∼ p(τ), where t is the number of inner loop updates. for example, in the case of maml (finn et al., 2017), there is no additional meta-parameter other than the initialization (φ ≡ ∅); in anil (raghu et al., 2019), θ are the parameters of the last layer, and φ are the parameters of the shared embedding network; in cavia (zintgraf et al., 2019), θ are referred to as context parameters. during meta-training, the model is trained over many tasks τ. the task-specific parameters θ are learned via gradient descent on d^train_τ. the meta-parameters are then updated by evaluating the error of the trained model on the test dataset d^test_τ. at meta-testing time, the meta-trained model is adapted on d^train_τ, i.e. applying (2) with the learned meta-parameters θ_0 and φ. local sensitivity analysis of ordinary differential equations | 2 | [
108.249, 698.0240784, 444.0841081, 707.9866784 ] |
cP5IcoAkfKa.pdf | 2,021 | 1 | large batch simulation for deep reinforcement learning brennan shacklett1∗ erik wijmans2 aleksei petrenko3,4 manolis savva5 dhruv batra2 vladlen koltun3 kayvon fatahalian1 1stanford university 2georgia institute of technology 3intel labs 4university of southern california 5simon fraser university abstract we accelerate deep reinforcement learning-based training in visually complex 3d environments by two orders of magnitude over prior work, realizing end-to-end training speeds of over 19,000 frames of experience per second on a single gpu and up to 72,000 frames per second on a single eight-gpu machine. the key idea of our approach is to design a 3d renderer and embodied navigation simulator around the principle of “batch simulation”: accepting and executing large batches of requests simultaneously. beyond exposing large amounts of work at once, batch simulation allows implementations to amortize in-memory storage of scene assets, rendering work, data loading, and synchronization costs across many simulation requests, dramatically improving the number of simulated agents per gpu and overall simulation throughput. to balance dnn inference and training costs with faster simulation, we also build a computationally efficient policy dnn that maintains high task performance, and modify training algorithms to maintain sample efficiency when training with large mini-batches. by combining batch simulation and dnn performance optimizations, we demonstrate that pointgoal navigation agents can be trained in complex 3d environments on a single gpu in 1.5 days to 97% of the accuracy of agents trained on a prior state-of-the-art system using a 64-gpu cluster over three days. we provide open-source reference implementations of our batch 3d renderer and simulator to facilitate incorporation of these ideas into rl systems. introduction speed matters. 
it is now common for modern reinforcement learning (rl) algorithms leveraging deep neural networks (dnns) to require billions of samples of experience from simulated environments (wijmans et al., 2020; petrenko et al., 2020; openai et al., 2019; silver et al., 2017; vinyals et al., 2019). for embodied ai tasks such as visual navigation, where the ultimate goal for learned policies is deployment in the real world, learning from realistic simulations is important for successful transfer of learned policies to physical robots. in these cases simulators must render detailed 3d scenes and simulate agent interaction with complex environments (kolve et al., 2017; dosovitskiy et al., 2017; savva et al., 2019; xia et al., 2020; gan et al., 2020). evaluating and training a dnn on billions of simulated samples is computationally expensive. for instance, the dd-ppo system (wijmans et al., 2020) used 64 gpus over three days to learn from 2.5 billion frames of experience and achieve near-perfect pointgoal navigation in 3d scanned environments of indoor spaces. at an even larger distributed training scale, openai five used over 50,000 cpus and 1000 gpus to train dota 2 agents (openai et al., 2019). unfortunately, experiments at this scale are out of reach for most researchers. this problem will only grow worse as the field explores more complex tasks in more detailed environments. many efforts to accelerate deep rl focus on improving the efficiency of dnn evaluation and training – e.g., by “centralizing” computations to facilitate efficient batch execution on gpus or tpus (espeholt et al., 2020; petrenko et al., 2020) or by parallelizing across gpus (wijmans et al., 2020). 
however, most rl platforms still accelerate environment simulation by running many copies of off-the-shelf, unmodified simulators, such as simulators designed for video game engines (bellemare et al., 2013; kempka et al., 2016; beattie et al., 2016; weihs et al., 2020), on large numbers of cpus or gpus. (∗correspondence to bps@cs.stanford.edu) figure 1: we train agents to perform pointgoal navigation in visually complex gibson (xia et al., 2018) and matterport3d (chang et al., 2017) environments such as the ones shown here. these environments feature detailed scans of real-world scenes composed of up to 600k triangles and high-resolution textures. our system is able to train agents using 64×64 depth sensors (a high-resolution example is shown on the left) in these environments at 19,900 frames per second, and agents with 64×64 rgb cameras at 13,300 frames per second on a single gpu. this approach is a simple and productive way to improve simulation throughput, but it makes inefficient use of computation resources. for example, when rendering complex environments (kolve et al., 2017; savva et al., 2019; xia et al., 2018), a single simulator instance might consume gigabytes of gpu memory, limiting the total number of instances to far below the parallelism afforded by the machine. further, running many simulator instances (in particular when they are distributed across machines) can introduce overhead in synchronization and communication with other components of the rl system. inefficient environment simulation is a major reason rl platforms typically require scale-out parallelism to achieve high end-to-end system throughput. in this paper, we crack open the simulation black box and take a holistic approach to co-designing a 3d renderer, simulator, and rl training system.
our key contribution is batch simulation for rl: designing high-throughput simulators that accept large batches of requests as input (aggregated across different environments, potentially with different assets) and efficiently execute the entire batch at once. exposing work en masse facilitates a number of optimizations: we reduce memory footprint by sharing scene assets (geometry and textures) across rendering requests (enabling orders of magnitude more environments to be rendered simultaneously on a single gpu), amortize rendering work using gpu commands that draw triangles from multiple scenes at once, hide latency of scene i/o, and exploit batch transfer to reduce data communication and synchronization costs between the simulator, dnn inference, and training. to further improve end-to-end rl speedups, the dnn workload must be optimized to match high simulation throughput, so we design a computationally efficient policy dnn that still achieves high task performance in our experiments. large-batch simulation increases the number of samples collected per training iteration, so we also employ techniques from large-batch supervised learning to maintain sample efficiency in this regime. we evaluate batch simulation on the task of pointgoal navigation (anderson et al., 2018) in 3d scanned gibson and matterport3d environments, and show that end-to-end optimization of batched rendering, simulation, inference, and training yields a 110× speedup over state-of-the-art prior systems, while achieving 97% of the task performance for depth-sensor-driven agents and 91% for rgb-camera-driven agents. 
concretely, we demonstrate sample generation and training at over 19,000 frames of experience per second on a single gpu.1 in real-world terms, a single gpu is capable of training a virtual agent on 26 years of experience in a single day.2 (1 samples of experience used for learning, not ‘frameskipped’ metrics typically used in atari/dmlab. 2 calculated based on the rate a physical robot (locobot (carnegie mellon university, 2019)) collects observations when operating constantly at maximum speed (0.5 m/s) and capturing 1 frame every 0.25 m.) this new performance regime significantly improves the accessibility and efficiency of rl research in realistic 3d environments, and opens new possibilities for more complex embodied tasks in the future. related work systems for high-performance rl. existing systems for high-performance rl have primarily focused on improving the efficiency of dnn components of the workload (policy inference and optimization) and use a simulator designed for efficient single-agent simulation as a black box. for example, impala and ape-x used multiple worker processes to asynchronously collect experience for a centralized learner (espeholt et al., 2018; horgan et al., 2018). seed rl and sample factory built upon this idea and introduced inference workers that centralize network inference, thereby allowing it to be accelerated by gpus or tpus (espeholt et al., 2020; petrenko et al., 2020). dd-ppo proposed a synchronous distributed system for similar purposes (wijmans et al., 2020). a number of efficient implementations of these ideas have been proposed as part of rl frameworks or in other deep learning libraries (liang et al., 2018; stooke & abbeel, 2019; küttler et al., 2019). we extend the idea of centralizing inference and learning to simulation by cracking open the simulator black box and designing a new simulation architecture for rl workloads.
our large-batch simulator is a drop-in replacement for large numbers of (non-batched) simulation workers, making it synergistic with existing asynchronous and synchronous distributed training schemes. it reduces the number of processes and communication overhead needed for asynchronous methods and eliminates separate simulation worker processes altogether for synchronous methods. we demonstrate this by combining our system with dd-ppo (wijmans et al., 2020). concurrently with our work, cule, a gpu-accelerated reimplementation of the atari learning environment (ale), demonstrates the benefits of centralized batch simulation (dalton et al., 2020). while both our work and cule enable wide-batch execution of their respective simulation workloads, our focus is on high-performance batch rendering of complex 3d environments. this involves optimizations (gpu-driven pipelined geometry culling, 3d asset sharing, and asynchronous data transfer) not addressed by cule due to the simplicity of rendering atari-like environments. additionally, like cule, we observe that the large training batches produced by batch simulation reduce rl sample efficiency. our work goes further and leverages large-batch optimization techniques from the supervised learning literature to mitigate the loss of sample efficiency without shrinking batch size. large mini-batch optimization. a consequence of large batch simulation is that more experience is collected between gradient updates. this provides the opportunity to accelerate learning via large mini-batch optimization. in supervised learning, using large mini-batches during optimization typically decreases the generalization performance of models (keskar et al., 2017). goyal et al. (2017) demonstrated that model performance can be improved by scaling the learning rate proportionally with the batch size and “warming-up” the learning rate at the start of training. you et al. 
(2017) proposed an optimizer modification, lars, that adaptively scales the learning rate at each layer, and applied it to sgd to improve generalization further. in reinforcement learning and natural language processing, the adam optimizer (kingma & ba, 2015) is often used instead of sgd. lamb (you et al., 2020) combines lars (you et al., 2017) with adam (kingma & ba, 2015). we do not find that large mini-batch optimization harms generalization in reinforcement learning, but we do find it decreases sample efficiency. we adapt the techniques proposed above – learning rate scaling (you et al., 2017) and the lamb optimizer (you et al., 2020) – to improve sample efficiency. simulators for machine learning. platforms for simulating realistic environments for model training fall into two broad categories: those built on top of pre-existing game engines (kolve et al., 2017; dosovitskiy et al., 2017; lee et al., 2019; gan et al., 2020; james et al., 2020), and those built from scratch using open-source 3d graphics and physics libraries (savva et al., 2017; 2019; xia et al., 2018; 2020; xiang et al., 2020; zeng et al., 2020). while improving simulator performance has been a focus of this line of work, it has been evaluated in a narrow sense (i.e. frame rate benchmarks for predetermined agent trajectories), not accounting for the overall performance of end-to-end rl training. we instead take a holistic approach to co-design rendering and simulation modules and their interfaces to the rl training system, obtaining significant gains in end-to-end throughput over the state of the art. system design & implementation batch simulation accelerates rollout generation during rl training by processing many simulated environments simultaneously in large batches. fig. 2 illustrates how batch simulation interacts with policy inference to generate rollouts. 
simulation for sensorimotor agents, such as the pointgoal navigation task targeted by our implementation, can be separated into two tasks: determining the next environment state given an agent’s actions and rendering its sensory observations. therefore, our design utilizes two components: a batch simulator that performs geodesic distance and navigation mesh (snook, 2000) computations on the cpu, and a batch renderer that renders complex 3d environments on the gpu. during rollout generation, batches of requests are passed between these components – given n agents, the simulator produces a batch of n environment states. next, the renderer processes the batch of environment states by simultaneously rendering n frames and exposing the result directly in gpu memory. agent observations (from both the simulator and the renderer) are then provided as a batch to policy inference to determine the next actions for the n agents.
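the simulate, render, and infer handoff just described can be sketched as a single batched step. the stubs below are toy stand-ins for the cpu batch simulator, gpu batch renderer, and policy dnn; all names and the 1-d "environment" are illustrative assumptions, not the system's api.

```python
import numpy as np

def rollout_step(states, simulate, render, policy):
    """One batched iteration of the rollout loop: advance all N
    environments, render all N observations in one request, and run a
    single batched policy inference to pick the next N actions."""
    states = simulate(states)      # batch of N environment states
    obs = render(states)           # one batched render request
    actions = policy(obs)          # one batched inference call
    return states, actions

# toy stand-ins for the real components
toy_simulate = lambda s: s + 1.0
toy_render = lambda s: np.stack([s, 2.0 * s], axis=1)
toy_policy = lambda o: (o.sum(axis=1) > 3.0).astype(int)
```

the point of the structure is that every arrow in the loop carries a batch of n items, so per-request synchronization and communication costs are paid once per batch rather than once per environment.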
large batch sizes (values of n on the order of hundreds to thousands of environments) provide opportunities for implementations to efficiently utilize parallel execution resources (e.g., gpus) as well as amortize processing, synchronization, and data communication costs across many environments. the remainder of this section describes the design and key implementation details of our system’s batch simulator and batch renderer, as well as contributions that improve the efficiency of policy inference and optimization in this regime. batch environment simulation our cpu-based batch simulator executes geodesic distance and navigation mesh computations in parallel for a large batch of environments. due to differences in navigation mesh complexity across environments, the time to perform simulation may differ per environment. this variance is the source of workload imbalance problems in parallel synchronous rl systems (wijmans et al., 2020; savva et al., 2019) and one motivation for recent asynchronous designs (petrenko et al., 2020; espeholt et al., 2020; 2018). to ensure good workload balance, our batch simulator operates on large batches that contain significantly more environments than the number of available cpu cores and dynamically schedules work onto cores using a pool of worker threads (simulation for each environment is carried out sequentially). worker threads report simulation results into a designated per-environment slot in a results buffer that is communicated to the renderer via a single batched request when all environment simulation for a batch is complete. to minimize cpu memory usage, the simulator only loads navigation meshes and does not utilize the main rendering assets. 
batch rendering a renderer for producing rl agent observations in scanned real-world environments must efficiently synthesize many low-resolution renderings (e.g., 64×64 pixels) of scenes featuring high-resolution textures and complex meshes.3 (3 the matterport3d dataset contains up to 600k triangles per 3d scan.) low-resolution output presents challenges for gpu acceleration. rendering images one at a time produces too little rendering work to efficiently utilize a modern gpu rendering pipeline’s parallel processing resources. rendering many environments concurrently but individually (e.g., from different worker threads or processes) exposes more rendering work to the gpu, but incurs the overhead of sending the gpu many fine-grained rendering commands. to address the problem of rendering many small images efficiently, our renderer combines the gpu commands required to render observations for an entire simulation batch of n environments into a single rendering request to the gpu – effectively drawing the entire batch as a single large frame (individual environment observations are tiles in the image). this approach exposes large amounts of rendering work to the gpu and amortizes gpu pipeline configuration and rendering overhead over an entire batch. our implementation makes use of modern gpu pipeline features (khronos group, 2017) that allow rendering tasks that access different texture and mesh assets to proceed as part of a single large operation (avoiding gpu pipeline flushes due to pipeline state reconfiguration). scene asset sharing. efficiently utilizing a gpu requires batches to be large (we use n up to 1024). however, geometry and texture assets for a single environment may be gigabytes in size, so naively loading unique assets for each environment in a large batch would exceed available gpu memory. our implementation allows multiple environments in a batch to reference the same 3d scene assets in
specifically, our system materializes k unique assets in gpu memory (k ≪ n) and constructs batches of n environments that reference these assets. asset reuse decreases the diversity of training experiences in a batch, so to preserve diversity we limit the ratio of n to k in any one batch to 32, and continuously rotate the set of k assets in gpu memory. the renderer refreshes the set of k assets by asynchronously loading new scene assets into gpu memory during the main rollout generation and learning loop. as episodes complete, new environments are constructed to reference the newly loaded assets, and assets no longer referenced by active environments are removed from gpu memory. this design allows policy optimization to learn from an entire dataset of assets without exceeding gpu memory or incurring the latency costs of frequent asset loading. pipelined geometry culling. when rendering detailed geometry to low-resolution images, most scene triangles cover less than one pixel. as a result, rendering performance is determined by the rate the gpu’s rasterization hardware processes triangles, not the rate the gpu can shade covered pixels. to reduce the number of triangles the gpu pipeline must process, the renderer uses idle gpu cores to identify and discard geometry that lies outside the agent’s view—a process known as frustum culling (akenine-möller et al., 2018). our implementation pipelines frustum culling operations (implemented using gpu compute shaders) with rendering for different environments in a batch. this pipelined design increases gpu utilization by concurrently executing culling work on the gpu’s programmable cores and rendering work on the gpu’s rasterization hardware. policy dnn architecture high-throughput batch simulation creates a need for high-throughput policy dnn inference. therefore, we develop a policy dnn architecture designed to achieve an efficient balance between high task performance and low computational cost.
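the scene-asset sharing scheme above (k resident assets, at most 32 environments per asset, rotation as episodes end) can be sketched with a small reference-counted pool. this is a minimal illustration with names of our own invention; the real system streams multi-gigabyte assets to gpu memory asynchronously, whereas here "loading" is just pulling the next id from a cycle over the dataset.

```python
import itertools

class AssetPool:
    """Sketch of sharing k scene assets among n environments (k << n),
    capping environments-per-asset to preserve batch diversity."""
    MAX_SHARE = 32

    def __init__(self, all_assets, k):
        self.stream = itertools.cycle(all_assets)   # rotation over dataset
        self.active = [next(self.stream) for _ in range(k)]
        self.refcount = {a: 0 for a in self.active}

    def assign(self):
        # give a new environment any resident asset still under the cap
        for a in self.active:
            if self.refcount[a] < self.MAX_SHARE:
                self.refcount[a] += 1
                return a
        raise RuntimeError("increase k: batch exceeds k * MAX_SHARE envs")

    def release(self, asset):
        # called on episode end; unreferenced assets are evicted and
        # replaced by the next asset from the rotation
        self.refcount[asset] -= 1
        if self.refcount[asset] == 0:
            new = next(self.stream)
            self.active[self.active.index(asset)] = new
            del self.refcount[asset]
            self.refcount[new] = 0
```

the cap enforces the paper's n/k ≤ 32 constraint, and the eviction-on-zero-references rule is what lets training eventually visit the entire dataset without ever holding more than k assets resident.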
prior work in pointgoal navigation (wijmans et al., 2020) used a policy dnn design in which a visual encoder cnn processes the agent’s visual sensory information, followed by an lstm (hochreiter & schmidhuber, 1997) that determines the policy’s actions. our policy dnn uses this core design augmented with several performance optimizations. first, we reduce the dnn’s effective input resolution from 128×128 (wijmans et al., 2020) to 64×64. beyond this simple optimization, we choose a shallow visual encoder cnn – a nine-layer resnet (he et al., 2016) (resnet18 with every other block removed), rather than the 50-layer (or larger) resnets used by prior work. to counteract reduced task performance from the resnet’s relatively low capacity, all stages include squeeze-excite (se) blocks (hu et al., 2018) with r=16. additionally, we use a spacetodepth stem (ridnik et al., 2020), which we find performs on par with the standard conv+maxpool stem while using less gpu memory and compute. finally, we avoid the use of normalization layers in the resnet, as these require spatial reductions over the feature maps, preventing layer-fusion optimizations. instead, the cnn utilizes fixup initialization (zhang et al., 2019) to improve training stability. fixup initialization replaces expensive normalization layers after each convolution with cheap elementwise multiplication and addition. large mini-batch policy optimization in on-policy reinforcement learning, policy optimization utilizes trajectories of experience to reduce bias and to enable backpropagation through time. when generating trajectories of length l with a simulation batch size of n, a rollout will have n×l steps of experience. therefore, a consequence of simulation with large n is that more experience is collected per rollout.
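the spacetodepth stem mentioned above is a pure rearrangement: each block×block spatial patch is moved into the channel dimension, so an h×w×c input becomes (h/block)×(w/block)×(c·block²) without any convolution. a minimal sketch on nested python lists (the real op is a fused gpu kernel on tensors) makes the index bookkeeping explicit:

```python
def space_to_depth(img, block=2):
    """Sketch of a SpaceToDepth stem: fold each block x block spatial
    patch into the channel dimension. `img` is a nested list indexed
    [row][col][channel]."""
    h, w, c = len(img), len(img[0]), len(img[0][0])
    assert h % block == 0 and w % block == 0
    out = []
    for i in range(0, h, block):
        row = []
        for j in range(0, w, block):
            # concatenate the channels of the block's pixels, row-major
            patch = [img[i + di][j + dj][k]
                     for di in range(block) for dj in range(block)
                     for k in range(c)]
            row.append(patch)
        out.append(row)
    return out
```

with block=2 a 64×64×3 observation becomes 32×32×12, halving spatial extent immediately and avoiding the memory traffic of a strided conv+maxpool stem.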
large n presents the opportunity to utilize large mini-batches to improve the throughput of policy optimization; however, throughput must be balanced against generalization and sample efficiency to ensure that reduced task performance does not offset the throughput gains. although large mini-batch training is known to hurt generalization in supervised learning (keskar et al., 2017), we do not see evidence of this for rl. conversely, we do find that sample efficiency for pointgoal navigation is harmed by naively increasing n. fortunately, we are able to mitigate this loss of sample efficiency using techniques from the large mini-batch optimization literature. first, we scale the learning rate by $\sqrt{B/B_{\text{base}}}$, where $B_{\text{base}}=256$ and $B$, the training batch size, is $N \times L$ divided by the number of mini-batches per training iteration. we find it beneficial to use the scaled learning rate immediately instead of ‘warming up’ the learning rate (goyal et al., 2017). second, we use and adapt the lamb optimizer (you et al., 2020). lamb is a modification to adam (kingma & ba, 2015) that applies lars (you et al., 2017) to the step direction estimated by adam to better handle high learning rates. since the adam optimizer is often used with ppo (schulman et al., 2017), lamb is a natural choice. given the adam step direction $s_t^{(k)}$ for weights $\theta_t^{(k)}$,
$$\theta_{t+1}^{(k)} = \theta_t^{(k)} - \eta_t \, r_t^{(k)} \left( s_t^{(k)} + \lambda \theta_t^{(k)} \right), \qquad r_t^{(k)} = \frac{\phi(\|\theta_t^{(k)}\|)}{\|s_t^{(k)} + \lambda \theta_t^{(k)}\|},$$
where $\eta_t$ is the learning rate and $\lambda$ is the weight decay coefficient. we set $\phi(\|\theta_t^{(k)}\|) = \min\{\|\theta_t^{(k)}\|, 10.0\}$ and introduce an additional clip on the trust ratio $r_t^{(k)}$:
$$r_t^{(k)} = \min\left\{ \max\left\{ \frac{\phi(\|\theta_t^{(k)}\|)}{\|s_t^{(k)} + \lambda \theta_t^{(k)}\|},\ \rho \right\},\ \frac{1}{\rho} \right\}.$$
we find the exact value of $\rho$ to be flexible (we observed similar results with $\rho \in \{10^{-2}, 10^{-3}, 10^{-4}\}$) and also observed that this clip is only influential at the start of training, suggesting that there is an initialization scheme where it is unnecessary.
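the two optimizer adjustments above are easy to sketch numerically. note the hedges: the square-root learning-rate scale follows directly from the text, but the exact bounds of the trust-ratio clip are our reading of the garbled extraction (we assume a symmetric clamp into [ρ, 1/ρ]); the real optimizer also operates per-layer on gpu tensors, not on scalar norms.

```python
import math

def scaled_lr(base_lr, batch_size, base_batch=256):
    # square-root learning-rate scaling, applied immediately (no warm-up)
    return base_lr * math.sqrt(batch_size / base_batch)

def trust_ratio(theta_norm, update_norm, rho=1e-3, phi_cap=10.0):
    """LARS-style trust ratio with an additional clip.
    theta_norm  = ||theta||              (per-layer weight norm)
    update_norm = ||s + lambda * theta|| (norm of the regularized Adam step)
    The [rho, 1/rho] clamp bounds are an assumption, not confirmed by
    the source text."""
    phi = min(theta_norm, phi_cap)      # phi(||theta||) = min(||theta||, 10)
    r = phi / update_norm
    return min(max(r, rho), 1.0 / rho)
```

for example, going from the base batch of 256 to a batch of 1024 doubles the learning rate, and a tiny weight norm at initialization is prevented from collapsing the step size by the lower clamp ρ.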
exact value of ρ to be flexible results we evaluate the impact of our contributions on end-to-end training speed and task performance by training pointgoal navigation agents in the complex gibson (xia et al., 2018) and matterport3d (chang et al., 2017) environments. the fastest published end-to-end training performance in these environments is achieved with the synchronous rl implementation presented with dd-ppo (wijmans et al., 2020). therefore, both our implementation and the baselines we compare against are synchronous ppo-based rl systems. experimental setup pointgoal navigation task. we train and evaluate agents via the same procedure as wijmans et al. (2020): agents are trained for pointgoalnav (anderson et al., 2018) with either a depth sensor or an rgb camera. depth agents are trained on gibson-2plus (xia et al., 2018) and, consistent with wijmans et al. (2020), rgb agents are also trained on matterport3d (chang et al., 2017). rgb camera simulation requires textures for the renderer, increasing the gpu memory consumed by each scene significantly. both classes of agent are trained on 2.5 billion simulated samples of experience. agents are evaluated on the gibson dataset (xia et al., 2018). we use two metrics: success, whether or not the agent reached the goal, and spl (anderson et al., 2018), a measure of both success and efficiency of the agent’s path. we perform policy evaluation using habitat-sim (savva et al., 2019), unmodified for direct comparability to prior work. batch processing simulator (bps). we provide an rl system for learning pointgoalnav built around the batch simulation techniques and system-wide optimizations described in section 3. the remainder of the paper refers to this system as bps (batch processing simulator). to further accelerate the policy dnn workload, bps uses half-precision inference and mixed-precision training. baseline. the primary baseline for this work is wijmans et al. 
(2020)’s open-source pointgoalnav implementation, which uses habitat-sim (savva et al., 2019) – the prior state of the art in high-performance simulation of realistic environments such as gibson. unlike bps, multiple environments are simulated simultaneously using parallel worker processes that render frames at 256×256 pixels before downsampling to 128×128 for the visual encoder. the fastest published configuration uses a resnet50 visual encoder. subsequent sections refer to this implementation as wijmans20. ablations. as an additional baseline, we provide wijmans++, which uses the optimized seresnet9-based policy dnn (including performance optimizations and resolution reduction relative to wijmans20) developed for bps, but otherwise uses the same system design and simulator as wijmans20 (with a minor modification to not load textures for depth agents). wijmans++ serves to isolate the impact of two components of bps: first, the low-level dnn efficiency improvements, and, more importantly, the performance of batch simulation versus wijmans20’s independent simulation worker design. additionally, to ablate the effect of our encoder cnn architecture optimizations, we include a variant of bps, bps-r50, that uses the same resnet50 visual encoder and input resolution as wijmans20, while maintaining the other optimizations of bps. multi-gpu training. to support multi-gpu training, all three systems replace standard ppo with dd-ppo (wijmans et al., 2020). dd-ppo scales rollout generation and policy optimization across all available gpus, scaling the number of environments simulated and the number of samples gathered between training iterations proportionally. we report results with eight gpus. table 1: system performance.
average frames per second (fps, measured as samples of experience processed per second) achieved by each system. bps achieves a speedup of 110× over wijmans20 on depth experiments (19,900 vs. 180 fps) and 95× on rgb experiments (13,300 vs. 140 fps) on an rtx 3090 gpu. oom (out of memory) indicates that the rtx 2080ti could not run wijmans20 with the published dd-ppo system parameters due to insufficient gpu memory. table 2: policy performance. spl and success of agents produced by bps and wijmans20. the performance of the bps agent is within the margin of error of the wijmans20 agent for depth experiments on the validation set, and within five percent on rgb. bps agents are trained on eight gpus with aggregate batch size n=1024. determining batch size. the per-gpu batch size, n, controls a trade-off between memory usage, sample efficiency, and speed. for bps, n designates the batch size for simulation, inference, and training. for wijmans20 and wijmans++, n designates the batch size for inference and training, as well as the number of simulation processes. wijmans20 sets n=4 for consistency with wijmans et al. (2020). to maximize performance of single-gpu runs, bps uses the largest batch size that fits in gpu memory, subject to the constraint that no one scene asset can be shared by more than 32 environments in the batch. in eight-gpu configurations, dd-ppo scales the number of parallel rollouts with the number of gpus, so to maintain reasonable sample efficiency bps limits per-gpu batch size to n=128, with k=4 active scenes per gpu. wijmans++ depth experiments use n=64 (limited by system memory due to n separate processes running habitat-sim). batch size in wijmans++ rgb experiments is limited by gpu memory (n ranges from 6 to 20 depending on the gpu). appendix b provides the batch sizes used in all experiments. benchmark evaluation.
we report end-to-end performance benchmarks in terms of average frames per second (fps) achieved by each system. we measure fps as the number of samples of experience processed over 16,000 inference batches divided by the time to complete rollout generation and training for those samples. in experiments that run at 128×128 pixel sensor resolution, rendering occurs at 256×256 and is downsampled for the policy dnn to match the behavior of wijmans20 regardless of system, while 64×64 resolution experiments render without downsampling. results are reported across three models of nvidia gpus: tesla v100, geforce rtx 2080ti, and geforce rtx 3090. (the different gpus are also accompanied by different cpus, see appendix c.) end-to-end training speed single-gpu performance. on a single gpu, bps trains agents 45× (9,000 vs. 190 fps, tesla v100) to 110× (19,900 vs. 180 fps, rtx 3090) faster than wijmans20 (table 1). the greatest speedup was achieved using the rtx 3090, which trains depth agents at 19,900 fps and rgb agents at 13,300 fps – a 110× and 95× increase over wijmans20, respectively. this 6,600 fps performance drop from depth to rgb is not caused by the more complex rendering workload, because the additional cost of fetching rgb textures is masked by the dominant cost of geometry processing. instead, due to memory constraints, bps must reduce the batch size (n) for rgb tasks, reducing the performance of all components (further detail in section 4.4).
figure 3: spl vs. wall-clock time (rgb agents) on a rtx 3090 over 48 hours (time required to reach 2.5 billion samples with bps). bps exceeds 80% spl in 10 hours and achieves a significantly higher spl than the baselines.
figure 4: spl vs. wall-clock time (bps training depth agents over 2.5 billion samples on 8 tesla v100s) for various batch sizes (n). n=256 finishes after 2× the wall-clock time as n=1024, but both achieve statistically similar spl.
to assess how much of the bps speedup is due to the se-resnet9 visual encoder and lower input resolution, we also compare bps-r50 and wijmans20, which have matching encoder architecture and resolution. for depth agents training on the rtx 3090, bps-r50 still achieves a greater than 10× performance improvement over wijmans20 (2,300 vs. 180 fps), demonstrating the benefits of batch simulation even in dnn-heavy workloads. bps-r50 is only 6× faster than wijmans20 on the rtx 2080ti, since the resnet50 encoder’s larger memory footprint requires batch size to be reduced from n=128 on the rtx 3090 (24 gb ram) to n=64 on the rtx 2080ti (11 gb ram). similarly, increasing dnn input resolution increases memory usage, forcing batch size to be decreased and reducing performance (table a1). the bps batch simulation architecture is significantly faster than the wijmans++ design that uses multiple worker processes. when training depth agents, bps outperforms wijmans++ by 4.5× to 7.8×, with a greater speedup of 6× to 13× for rgb agents. since bps and wijmans++ use the same policy dnn and input resolution, this comparison isolates the performance advantage of batch simulation and rendering against an optimized version of the multiple-worker-process-based design: wijmans++ is up to 15× faster than wijmans20. the relative speedup of bps for rgb agents is larger because wijmans++ does not share environment assets between simulator instances. textures needed for rgb rendering significantly increase the memory footprint of each simulator instance and limit wijmans++ to as few as n=6 workers (compared to n=64 for depth agents). conversely, bps shares 3d assets across environments and maintains a batch size of at least n=128 for rgb agents. multi-gpu performance. bps achieves high end-to-end throughput when running in eight-gpu configurations: up to 72,000 fps for depth agents on eight rtx 2080ti.
relative to wijmans20, bps is 29× to 34× faster with eight tesla v100s and 45× faster with eight rtx 2080ti. these speedups are lower than in the single-gpu configurations, because bps reduces the per-gpu batch size in eight-gpu configurations to avoid large aggregate batches that harm sample efficiency. this leads to imperfect multi-gpu scaling for bps: for depth agents, each rtx 2080ti is approximately 4,000 fps slower in an eight-gpu configuration than in a single-gpu configuration. eight-gpu scaling for depth is lower on the tesla v100s (3.7×) compared to the 2080ti (5.6×) because larger batch sizes are needed to utilize the large number of parallel compute units on the tesla v100. policy task performance to understand how the system design and visual encoder architecture of bps impact learning, we evaluate the task performance of agents trained with bps in an eight-gpu configuration with an aggregate batch size of n=1024. for depth agents, the reduction in encoder cnn depth results in a 1% and 3% decrease in spl on val and test respectively, with a negligible success change on val and a 0.9 success decrease on test (table 2, row 1 vs. 2). for rgb agents, bps suffers a performance loss of 3.8/1.3 spl/success on val and 8.3/2.0 spl/success on test (table 2, row 3 vs. 4). despite this performance reduction, the rgb agent trained by bps would have won the 2019 habitat challenge by 4 spl and is only beaten by wijmans20’s resnet50-based policy on test. spl vs. training time. bps significantly outperforms the baselines in terms of wall-clock training time to reach a given spl. after 10 hours of training on a single rtx 3090, bps reaches over 80% spl (on val) while wijmans20 and wijmans++ reach only 40% and 65% spl respectively (fig. 3). furthermore, bps converges within 1% of peak spl at approximately 20 hours; conversely, neither baseline reaches convergence within 48 hours. bps converges to a lower final spl in fig.
3 than in table 2, likely due to the tested single-gpu configuration differing in batch size and scene asset swapping frequency compared to the eight-gpu configuration used to produce table 2. effect of batch size. the end-to-end training efficiency of bps is dependent on batch size (n): larger n will increase throughput and reduce the wall-clock time to reach a given number of samples, but may harm sample efficiency and final task performance at convergence. we evaluate this relationship by training depth agents with bps across a range of n. as shown in fig. 4, all experiments converge within 1% of the peak spl achieved; however, n=256 halves total throughput compared to n=1024 (the setting used elsewhere in the paper for eight-gpu configurations). at the high end, n=4096 yields slightly worse spl than n=1024 and is only 20% faster. larger batch sizes also require more memory for rollout storage and training, which is prohibitive for rgb experiments that require significant gpu memory for texture assets. in terms of sample efficiency alone, fig. a1 shows that smaller batch sizes have a slight advantage (without considering training speed). runtime breakdown fig. 5 provides a breakdown of time spent in each of the main components of the bps system (µs per frame). nearly 60% of bps runtime on the rtx 3090 gpu (for both depth and rgb) is spent in dnn inference and training, even when rendering complex 3d environments and using a small, low-cost policy dnn. this demonstrates the high degree of simulation efficiency achieved by bps. furthermore, the results in table a2 for bps-r50 show that, with the larger visual encoder, over 90% of per-frame time (on depth tasks) is spent in the dnn workload (70% on learning).
figure 5: bps runtime breakdown. inference represents policy evaluation cost during rollout generation. learning represents the total cost of policy optimization.
batch size (n) heavily impacts dnn performance.
dnn operations for depth (n=1024) are 2× faster than rgb (n=256) on the rtx 3090, because rgb must use a smaller batch size to fit texture assets in gpu memory. the larger batch size improves gpu utilization for all system components. a similar effect is visible when comparing the single-gpu and eight-gpu v100 breakdowns. bps reduces the per-gpu batch size from n=1024 to n=128 in eight-gpu experiments to maintain an aggregate batch size of 1024 for sample efficiency. further work in policy optimization to address this learning limitation would improve multi-gpu scaling by allowing larger aggregate batch sizes. discussion
6kCiVaoQdx9.pdf | 2,022 | 0 | few-shot learning as cluster-induced voronoi diagrams: a geometric approach chunwei ma1, ziyun huang2, mingchen gao1, jinhui xu1 1department of computer science and engineering, university at buffalo 2computer science and software engineering, penn state erie 1{chunweim,mgao8,jinhui}@buffalo.edu 2{zxh201}@psu.edu abstract few-shot learning (fsl) is the process of rapid generalization from abundant base samples to inadequate novel samples. despite extensive research in recent years, fsl is still not able to generate satisfactory solutions for a wide range of real-world applications. to confront this challenge, we study the fsl problem from a geometric point of view in this paper. one observation is that the widely embraced protonet model is essentially a voronoi diagram (vd) in the feature space. we retrofit it by making use of a recent advance in computational geometry called cluster-induced voronoi diagram (civd). starting from the simplest nearest neighbor model, civd gradually incorporates cluster-to-point and then cluster-to-cluster relationships for space subdivision, which is used to improve the accuracy and robustness at multiple stages of fsl. specifically, we use civd (1) to integrate parametric and nonparametric few-shot classifiers; (2) to combine feature representation and surrogate representation; and (3) to leverage feature-level, transformation-level, and geometry-level heterogeneities for a better ensemble. our civd-based workflow enables us to achieve new state-of-the-art results on mini-imagenet, cub, and tiered-imagenet datasets, with ∼2%−5% improvements upon the next best. to summarize, civd provides a mathematically elegant and geometrically interpretable framework that compensates for extreme data insufficiency, prevents overfitting, and allows for fast geometric ensemble of thousands of individual vds. these together make fsl stronger.
all four authors are corresponding authors. introduction recent years have witnessed a tremendous success of deep learning in a number of data-intensive applications; one critical reason for which is the vast collection of hand-annotated high-quality data, such as the millions of natural images for visual object recognition (deng et al., 2009). however, in many real-world applications, such large-scale data acquisition might be difficult and comes at a premium, such as in rare disease diagnosis (yoo et al., 2021) and drug discovery (ma et al., 2021b; 2018). as a consequence, few-shot learning (fsl) has recently drawn growing interest (wang et al., 2020). generally, few-shot learning algorithms can be categorized into two types, namely inductive and transductive, depending on whether estimating the distribution of query samples is allowed. a typical transductive fsl algorithm learns to propagate labels among a larger pool of query samples in a semi-supervised manner (liu et al., 2019); notwithstanding its normally higher performance, in many real-world scenarios a query sample (e.g. a patient) also comes individually and is unique, for instance, in personalized pharmacogenomics (sharifi-noghabi et al., 2020). thus, we in this paper adhere to the inductive setting and make on-the-fly predictions for each newly seen sample. few-shot learning is challenging and substantially different from conventional deep learning, and has been tackled by many researchers from a wide variety of angles. despite the extensive research on the algorithmic aspects of fsl (see sec. 2), two challenges still pose an obstacle to successful fsl: (1) how to sufficiently compensate for the data deficiency in fsl? and (2) how to make the most use of the base samples and the pre-trained model?
for the first question, data augmentation has been a successful approach to expand the size of data, either by generative adversarial networks (gans) (goodfellow et al., 2014) (li et al., 2020b; zhang et al., 2018) or by variational autoencoders (vaes) (kingma & welling, 2014) (zhang et al., 2019; chen et al., 2019b). however, in each way, the authenticity of either the augmented data or the feature is not guaranteed, and the out-of-distribution hallucinated samples (ma et al., 2019) may hinder the subsequent fsl. recently, liu et al. (2020b) and ni et al. (2021) investigate support-level, query-level, task-level, and shot-level augmentation for meta-learning, but the diversity of fsl models has not been taken into consideration. for the second question, yang et al. (2021) borrows the top-2 nearest base classes for each novel sample to calibrate its distribution and to generate more novel samples. however, when there is no proximal base class, this calibration may utterly alter the distribution. another line of work (sbai et al., 2020; zhou et al., 2020) learns to select and design base classes for a better discrimination on novel classes, which all introduce an extra training burden. as a matter of fact, we still lack a method that makes full and effective use of the base classes and the pretrained model. in this paper, we study the fsl problem from a geometric point of view. in metric-based fsl, despite being surprisingly simple, nearest neighbor-like approaches, e.g. protonet (snell et al., 2017) and simpleshot (wang et al., 2019), have achieved remarkable performance that is even better than many sophisticatedly designed methods. geometrically, what a nearest neighbor-based method does, under the hood, is partition the feature space into a voronoi diagram (vd) that is induced by the feature centroids of the novel classes.
although it is highly efficient and simple, a voronoi diagram coarsely draws the decision boundary via linear bisectors separating two centers, and may lack the ability to subtly delineate the geometric structure that arises in fsl.
table 1: the underlying geometric structures for various fsl methods.
method | geometric structure
protonet (snell et al., 2017) | voronoi diagram
s2m2_r (mangla et al., 2020) | spherical vd
dc (yang et al., 2021) | power diagram
deepvoro-- (ours) | civd
deepvoro/deepvoro++ (ours) | ccvd
to resolve this issue, we adopt a novel technique called cluster-induced voronoi diagram (civd) (chen et al., 2013; 2017; huang & xu, 2020; huang et al., 2021), a recent breakthrough in computational geometry. civd generalizes the vd from a point-to-point distance-based diagram to a cluster-to-point influence-based structure. it enables us to determine the dominating region (or voronoi cell) not only for a point (e.g. a class prototype) but also for a cluster of points, and is guaranteed to have a (1+ε)-approximation with a nearly linear size of diagram for a wide range of locally dominating influence functions. civd provides us a mathematically elegant framework to depict the feature space and draw the decision boundary more precisely than vd without losing the resistance to overfitting. accordingly, in this paper, we show how civd is used to improve multiple stages of fsl and make several contributions as follows. 1. we first categorize different types of few-shot classifiers as different variants of the voronoi diagram: the nearest neighbor model as a voronoi diagram, the linear classifier as a power diagram, and the cosine classifier as a spherical voronoi diagram (table 1). we then unify them via civd, which enjoys the advantages of multiple models, either parametric or nonparametric (denoted as deepvoro--). 2. going from cluster-to-point to cluster-to-cluster influence, we further propose the cluster-to-cluster voronoi diagram (ccvd), as a natural extension of civd.
based on ccvd, we present deepvoro, which enables fast geometric ensemble of a large pool of thousands of configurations for fsl. 3. instead of using base classes for distribution calibration and data augmentation (yang et al., 2021), we propose a novel surrogate representation, the collection of similarities to base classes, and thus promote deepvoro to deepvoro++, which integrates feature-level, transformation-level, and geometry-level heterogeneities in fsl. extensive experiments have shown that, although a fixed feature extractor is used without independently pretrained or epoch-wise models, our method achieves new state-of-the-art results on all three benchmark datasets including mini-imagenet, cub, and tiered-imagenet, and improves by up to 2.18% on 5-shot classification, 2.53% on 1-shot classification, and up to 5.55% with different network architectures.
figure 1: schematic illustrations of voronoi diagram (vd) and surrogate representation on the multidigitmnist dataset (sun, 2019). left and central panels demonstrate the vd of base classes and novel classes (5-way 1-shot) in r2, respectively. the colored squares stand for the 1-shot support samples. in the right panel, for each support sample, the surrogate representation (dotted line) exhibits a unique pattern which those of the query samples (colored lines) also follow. (see appendix c for details.)
related work few-shot learning. there are a number of different lines of research dedicated to fsl. (1) metric-based methods employ a certain distance function (cosine distance (mangla et al., 2020; xu et al., 2021), euclidean distance (wang et al., 2019; snell et al., 2017), or earth mover’s distance (zhang et al., 2020a;b)) to bypass the optimization and avoid possible overfitting. (2) optimization-based approaches (finn et al., 2017) manage to learn a good model initialization that accelerates the optimization in the meta-testing stage.
(3) self-supervised-based methods (zhang et al., 2021b; mangla et al., 2020) incorporate supervision from the data itself to learn a more robust feature extractor. (4) ensembling is another powerful technique that boosts performance by integrating multiple models (ma et al., 2021a). for example, dvornik et al. (2019) trains several networks simultaneously and encourages robustness and cooperation among them. however, due to the high computational load of training deep models, this ensemble is restricted in the number of networks, which is typically <20. in liu et al. (2020c), instead, the ensemble consists of models learned at each epoch, which may potentially limit the diversity of ensemble members. geometric understanding of deep learning. the geometric structure of deep neural networks is first hinted at by raghu et al. (2017), who reveal that piecewise linear activations subdivide the input space into convex polytopes. then, balestriero et al. (2019) points out that the exact structure is a power diagram (aurenhammer, 1987), which is subsequently applied to recurrent neural networks (wang et al., 2018) and generative models (balestriero et al., 2020). the power/voronoi diagram subdivision, however, is not necessarily the optimal model for describing the feature space. recently, chen et al. (2013; 2017); huang et al. (2021) use an influence function f(c, z) to measure the joint influence of all objects in c on a query z to build a cluster-induced voronoi diagram (civd). in this paper, we utilize civd to magnify the expressivity of geometric modeling for fsl. methodology preliminaries few-shot learning aims at discriminating between novel classes cnovel with the aid of a larger amount of samples from base classes cbase, cnovel ∩ cbase = ∅. the whole learning process usually follows the meta-learning scheme. formally, given a dataset of base classes d = {(xi, yi)}, xi ∈ d, yi ∈ cbase, with d being an arbitrary domain, e.g.
natural image, a deep neural network $z = \phi(x)$, $z \in \mathbb{R}^n$, which maps from the image domain $\mathcal{D}$ to the feature domain $\mathbb{R}^n$, is trained using a standard gradient descent algorithm, after which $\phi$ is fixed as a feature extractor. this process is referred to as the meta-training stage that squeezes out the commonsense knowledge from $\mathcal{D}$. for a fair evaluation of the learning performance on a few samples, the meta-testing stage is typically formulated as a series of $K$-way $N$-shot tasks (episodes) $\{\mathcal{T}\}$. each such episode is further decomposed into a support set $S = \{(x_i, y_i)\}_{i=1}^{K \times N}$, $y_i \in C_{\mathcal{T}}$, and a query set $Q = \{(x_i, y_i)\}_{i=1}^{K \times Q}$, $y_i \in C_{\mathcal{T}}$, in which the episode classes $C_{\mathcal{T}}$ are a randomly sampled subset of $C_{novel}$ with cardinality $K$, and each class contains only $N$ and $Q$ random samples in the support set and query set, respectively. for few-shot classification, we introduce here two widely used schemes as follows. for simplicity, all samples here are from $S$ and $Q$, without data augmentation applied. nearest neighbor classifier (nonparametric). in snell et al. (2017); wang et al. (2019) etc., a prototype $c_k$ is acquired by averaging over all supporting features for a class $k \in C_{\mathcal{T}}$:
$$c_k = \frac{1}{N} \sum_{(x,y) \in S,\, y=k} \phi(x).$$
then each query sample $x \in Q$ is classified by finding the nearest prototype: $\hat{y} = \arg\min_k d(z, c_k)$, where $d(z, c_k) = \|z - c_k\|_2^2$, in which we use the euclidean distance as the distance metric $d$. linear classifier (parametric). another scheme uses a linear classifier with cross-entropy loss optimized on the supporting samples:
$$\mathcal{L}(W, b) = \sum_{(x,y) \in S} -\log p(y \mid \phi(x); W, b) = \sum_{(x,y) \in S} -\log \frac{\exp(w_y^T \phi(x) + b_y)}{\sum_k \exp(w_k^T \phi(x) + b_k)},$$
in which $w_k, b_k$ are the linear weight and bias for class $k$, and the predicted class for a query $x \in Q$ is $\hat{y} = \arg\max_k p(y = k \mid z; w_k, b_k)$. few-shot learning as cluster-induced voronoi diagrams in this section, we first introduce the basic concepts of voronoi tessellations, and then show how parametric/nonparametric classifier heads can be unified by vd.
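the nonparametric head described above is short enough to sketch end-to-end. this is a minimal illustration (plain python lists standing in for feature vectors, names ours): prototypes are per-class means of the support features, and a query is assigned to the class of its nearest prototype under squared euclidean distance.

```python
def prototypes(support):
    """ProtoNet-style prototypes: per-class mean of support features.
    `support` maps a class label to a list of feature vectors."""
    return {k: [sum(col) / len(feats) for col in zip(*feats)]
            for k, feats in support.items()}

def classify(z, protos):
    """Assign query feature z to the class of the nearest prototype."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(protos, key=lambda k: sq_dist(z, protos[k]))
```

geometrically, `classify` is exactly the voronoi-cell lookup the paper describes: the prototypes induce a vd of the feature space, and each query lands in one cell.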
definition 3.1 (power diagram and voronoi diagram). let ω = {ω_1, ..., ω_K} be a partition of the space R^n, and c = {c_1, ..., c_K} be a set of centers such that ∪_{r=1}^K ω_r = R^n and ∩_{r=1}^K ω_r = ∅. additionally, each center is associated with a weight ν_r ∈ {ν_1, ..., ν_K} ⊆ R_+. then the set of pairs {(ω_1, c_1, ν_1), ..., (ω_K, c_K, ν_K)} is a power diagram (pd), where each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with r(z) = argmin_{k∈{1,...,K}} d(z, c_k)^2 − ν_k. if the weights are equal for all k, i.e. ν_k = ν_{k'}, ∀k, k' ∈ {1, ..., K}, then a pd collapses to a voronoi diagram (vd).

by definition, it is easy to see that the nearest neighbor classifier naturally partitions the space into K cells with centers {c_1, ..., c_K}. here we show that the linear classifier is also a vd under a mild condition.

theorem 3.1 (voronoi diagram reduction). the linear classifier parameterized by W, b partitions the input space R^n into a voronoi diagram with centers {c̃_1, ..., c̃_K} given by c̃_k = (1/2) w_k, provided that b_k = −(1/4)||w_k||_2^2. proof. see appendix b for details.

3.2.1 from voronoi diagram to cluster-induced voronoi diagram

now that both the nearest neighbor and the linear classifier have been unified by vd, a natural idea is to integrate them. the cluster-induced voronoi diagram (civd) (chen et al., 2017; huang et al., 2021) is a generalization of vd that allows multiple centers in a cell, and has been successfully used for clinical diagnosis from biomedical images (wang et al., 2015), providing an ideal tool for the integration of parametric/nonparametric classifiers for fsl. formally: definition 3.2 (cluster-induced voronoi diagram (civd) (chen et al., 2017; huang et al., 2021)). let ω = {ω_1, ..., ω_K} be a partition of the space R^n, and c = {C_1, ..., C_K} be a set (possibly a multiset) of clusters.
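the reduction in theorem 3.1 can be checked numerically. note the bias condition b_k = −(1/4)||w_k||_2^2 used below is our reconstruction of the garbled statement (it is the condition under which argmax_k w_k^T z + b_k equals argmin_k ||z − w_k/2||^2), so treat this as an illustrative check rather than the paper's exact theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 8))               # 5 classes, 8-dim features
b = -0.25 * (W ** 2).sum(axis=1)          # assumed bias condition of theorem 3.1
C = 0.5 * W                               # reduced voronoi centers c~_k = w_k / 2

Z = rng.normal(size=(100, 8))             # random query features
linear_pred = (Z @ W.T + b).argmax(axis=1)
voronoi_pred = np.linalg.norm(Z[:, None, :] - C[None], axis=2).argmin(axis=1)
print((linear_pred == voronoi_pred).all())  # the two partitions coincide
```

expanding ||z − w_k/2||^2 = ||z||^2 − w_k^T z + ||w_k||^2/4 shows the argmin drops the shared ||z||^2 term and becomes argmax_k w_k^T z − ||w_k||^2/4, i.e. the linear head under this bias.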
the set of pairs {(ω_1, C_1), ..., (ω_K, C_K)} is a cluster-induced voronoi diagram (civd) with respect to the influence function F(C_k, z), where each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with r(z) = argmax_{k∈{1,...,K}} F(C_k, z). here c can be either a given set of clusters or even the whole power set of a given point set, and the influence function is defined as a function over the collection of distances from each member in a cluster C_k to a query point z: definition 3.3 (influence function). the influence from C_k, k ∈ {1, ..., K}, on z ∉ C_k is F(C_k, z) = F({d(c_k^(i), z) | c_k^(i) ∈ C_k}_{i=1}^{|C_k|}). in this paper F is assumed to have the following form: F(C_k, z) = −sign(α) Σ_{i=1}^{|C_k|} d(c_k^(i), z)^α. the sign function here makes sure that F is a monotonically decreasing function with respect to the distance d. the hyperparameter α controls the magnitude of the influence; for example, α = −(n−1) for the gravity force in n-dimensional space, and α = −2 for the electric force.

since the nearest neighbor centers {c_k}_{k=1}^K and the centers introduced by the linear classifier {c̃_k}_{k=1}^K are obtained from different schemes and could both be informative, we merge the corresponding centers for a novel class k into a new cluster C_k = {c_k, c̃_k}, and use the resulting c = {C_1, ..., C_K} to establish a civd. in this way, the final partition may enjoy the advantages of both parametric and nonparametric classifier heads. we name this approach deepvoro--.

few-shot classification via surrogate representation

in the nearest neighbor classifier head, the distance from a query feature z to each of the prototypes {c_k}_{k=1}^K is the key discrimination criterion for classification. we rewrite {d(z, c_k)}_{k=1}^K as a vector d ∈ R^K such that d_k = d(z, c_k). these distances are acquired by measuring the distance between two points in high dimension: z, c_k ∈ R^n.
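a sketch of definitions 3.2/3.3 with the paper's power-law influence and α = −2; the two-member clusters mimic deepvoro--'s merge of a prototype with a reduced linear-classifier center, but the coordinates are toy values of our own:

```python
import numpy as np

def influence(cluster, z, alpha=-2.0):
    # F(C_k, z) = -sign(alpha) * sum_i d(c_k^(i), z)^alpha  (definition 3.3)
    d = np.linalg.norm(cluster - z, axis=1)
    return -np.sign(alpha) * np.sum(d ** alpha)

def civd_assign(z, clusters, alpha=-2.0):
    # cell of z = cluster with the maximal influence on z (definition 3.2)
    return int(np.argmax([influence(c, z, alpha) for c in clusters]))

# each class contributes a 2-member cluster {c_k, c~_k}:
# a prototype plus a (hypothetical) reduced linear-classifier center
clusters = [np.array([[0., 0.], [0.2, 0.1]]),
            np.array([[3., 3.], [2.9, 3.2]])]
print(civd_assign(np.array([0.5, 0.5]), clusters))  # influenced most by class 0
```

with α < 0 the influence is a sum of inverse powers of distance, so nearby members of a cluster dominate the assignment, which is how both heads get a vote.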
however, the notorious behavior of high dimensions is that the ratio between the nearest and farthest points in a point set p approaches 1 (aggarwal et al., 2001), making {d(z, c_k)}_{k=1}^K less discriminative for classification, especially for the fsl problem with sample size N·K ≪ n. hence, in this paper, we seek a surrogate representation. in the human perception and learning system, similarity among familiar and unfamiliar objects plays a key role in object categorization and classification (murray et al., 2002), and it has been experimentally verified by functional magnetic resonance imaging (fmri) that a large region in the occipitotemporal cortex processes the shapes of both meaningful and unfamiliar objects (op de beeck et al., 2008). in our method, a connection will be built between each unfamiliar novel class in c_novel and each related, well-perceived familiar class in c_base. so the first step is to identify the most relevant base classes for a specific task T. concretely: definition 3.4 (surrogate classes). in episode T, given the set of prototypes {c_k}_{k=1}^K for the support set S and the set of prototypes {c'_t}_{t=1}^{|c_base|} for the base set D, the surrogate classes for the episode classes C_T are given as: c_surrogate(T) = ∪_{k=1}^K top-R_{t∈{1,...,|c_base|}} d(c_k, c'_t), in which the top-R function returns the R base class indices with the smallest distances to c_k, and the center for a base class t is given as c'_t = (1/|{(x,y) | x∈D, y=t}|) Σ_{x∈D,y=t} φ(x). here R is a hyperparameter. the rationale behind this selection, instead of simply using the whole set of base classes c_base, is that the episode classes C_T overlap with only a portion of the base classes (zhang et al., 2021a), and the discriminative similarities are likely to be overwhelmed by the background signal, especially when the number of base classes is large.
after the surrogate classes are found, we re-index their feature centers as {c'_j}_{j=1}^{R̃}, R̃ ≤ R·K. then, both the support centers {c_k}_{k=1}^K and the query feature z are represented by the collection of similarities to these surrogate centers: d'_k = (d(c_k, c'_1), ..., d(c_k, c'_{R̃})), k = 1, ..., K, and d' = (d(z, c'_1), ..., d(z, c'_{R̃})), where d'_k, d' ∈ R^{R̃} are the surrogate representations for the novel class k and the query feature z, respectively. by surrogate representation, the prediction is found through ŷ = argmin_k d(d', d'_k) = argmin_k ||d' − d'_k||_2^2. this set of discriminative distances is rewritten as d'' ∈ R^K such that d''_k = d(d', d'_k). an illustration of the surrogate representation is shown in figure 1 on multidigitmnist, a demonstrative dataset.

integrating feature representation and surrogate representation. until now, we have two discriminative systems, i.e., the feature-based d ∈ R^K and the surrogate-based d'' ∈ R^K. a natural idea is to combine them to form the following final criterion: d̃ = β d/||d||_1 + γ d''/||d''||_1, where d and d'' are normalized by their manhattan norms, ||d||_1 = Σ_{k=1}^K d_k and ||d''||_1 = Σ_{k=1}^K d''_k, respectively, and β and γ are two hyperparameters adjusting the weights of the feature representation and the surrogate representation.

deepvoro: integrating multi-level heterogeneity of fsl

in this section we present deepvoro, a fast geometric ensemble framework that unites our contributions to multiple stages of fsl, and show how it can be promoted to deepvoro++ by incorporating the surrogate representation.

compositional feature transformation.
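the surrogate pipeline of this subsection can be sketched as follows. the toy prototypes, the surrogate base centers, the helper names, and the choice β = γ = 1 are all ours; the combination formula d̃ = β d/||d||_1 + γ d''/||d''||_1 is our reading of the garbled criterion above:

```python
import numpy as np

def surrogate_repr(point, surrogate_centers):
    # similarity profile of a point against the R~ surrogate base centers
    return np.linalg.norm(surrogate_centers - point, axis=1)

def combined_score(z, protos, surrogate_centers, beta=1.0, gamma=1.0):
    # feature-based distances d and surrogate-based distances d'',
    # each normalized by its manhattan norm, then mixed by beta / gamma
    d = np.linalg.norm(protos - z, axis=1)
    dq = surrogate_repr(z, surrogate_centers)
    dpp = np.array([np.linalg.norm(dq - surrogate_repr(c, surrogate_centers))
                    for c in protos])
    return beta * d / d.sum() + gamma * dpp / dpp.sum()

protos = np.array([[0., 0.], [4., 0.]])            # 2-way episode prototypes
bases = np.array([[0., 1.], [4., 1.], [2., 5.]])   # toy surrogate base centers
score = combined_score(np.array([0.5, 0.]), protos, bases)
print(int(score.argmin()))                         # predicted class
```

the prediction is the argmin of the combined score, so a query is matched both by where it sits in feature space and by how it relates to familiar base classes.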
it is believed that fsl algorithms favor features with more gaussian-like distributions, and thus various kinds of transformations are used to improve the normality of the feature distribution, including power transformation (hu et al., 2021), tukey's ladder of powers transformation (yang et al., 2021), and l2 normalization (wang et al., 2019). while these transformations are normally used independently, here we propose to combine several transformations sequentially in order to enlarge the expressivity of the transformation function and to increase the polymorphism of the fsl process. specifically, for a feature vector z, three kinds of transformations are considered: (i) l2 normalization. by projection onto the unit sphere in R^n, the feature is normalized as f(z) = z/||z||_2. (ii) linear transformation. since all the features are now located on the unit sphere, we can do scaling and shifting via a linear transformation g_{w,b}(z) = wz + b. (iii) tukey's ladder of powers transformation. finally, tukey's ladder of powers transformation is applied to the feature: h_λ(z) = z^λ if λ ≠ 0, and h_λ(z) = log(z) if λ = 0. by the composition of l2 normalization, linear transformation, and tukey's ladder of powers transformation, the transformation function becomes (h_λ ∘ g_{w,b} ∘ f)(z), parameterized by w, b, λ.

multi-level heterogeneities in fsl. now we are ready to articulate the hierarchical heterogeneity existing in different stages of fsl. (i) feature-level heterogeneity: data augmentation has been exhaustively explored for expanding the data size of fsl (ni et al., 2021), including but not limited to rotation, flipping, cropping, erasing, solarization, color jitter, mixup (zhang et al., 2017), etc. the modification of an image x changes the position of its feature z in the feature space. we denote all possible transformations of an image as a set of functions {T}.
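a sketch of the compositional transformation (h_λ ∘ g_{w,b} ∘ f). scalar w and b stand in for whatever parameterization the paper actually uses, and the small positive b is our assumption to keep entries positive before the power/log step:

```python
import numpy as np

def l2_normalize(z):
    # f: project the feature onto the unit sphere
    return z / np.linalg.norm(z)

def linear(z, w, b):
    # g_{w,b}: elementwise scaling and shifting
    return w * z + b

def tukey(z, lam):
    # h_lambda: tukey's ladder of powers (z^lambda if lambda != 0, log otherwise)
    return np.log(z) if lam == 0 else np.power(z, lam)

def transform(z, w=1.0, b=1e-6, lam=0.5):
    # composition (h_lambda . g_{w,b} . f)(z); b > 0 keeps entries positive
    return tukey(linear(l2_normalize(z), w, b), lam)

z = np.array([3., 4.])
print(transform(z))  # square root of the shifted unit-norm feature
```

each (w, b, λ) triple yields a different view of the same feature, which is exactly the transformation-level heterogeneity exploited below.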
(ii) transformation-level heterogeneity: after the feature z is obtained, a parameterized transformation is applied to it, and the resulting features can be quite different across these parameters (see figure f.1). we denote the set of all possible transformations by {P_{w,b,λ}}. (iii) geometry-level heterogeneity: even with the feature given, the few-shot classification model can still be diverse: whether a vd- or pd-based model is used, whether the feature or the surrogate representation is adopted, and the setting of R will also change the degree of locality. we denote all possible models as {M}.

deepvoro for fast geometric ensemble of vds. with the above three-level heterogeneity, the fsl process can be encapsulated as (M ∘ P_{w,b,λ} ∘ φ ∘ T)(x), and all possible configurations for a given episode T with a fixed φ form the cartesian product of these three sets: {T} × {P_{w,b,λ}} × {M}. indeed, when a held-out validation dataset is available, it can be used to find the optimal combination, but by virtue of ensemble learning, multiple models can still contribute positively to fsl (dvornik et al., 2019). since the cardinality of the resulting configuration set can be very large, the fsl model M as well as the ensemble algorithm is required to be highly efficient. the vd is a nonparametric model and no training is needed during the meta-testing stage, making it suitable for fast geometric ensemble. while civd models the cluster-to-point relationship via an influence function, here we further extend it so that the cluster-to-cluster relationship can be considered. this motivates us to define the cluster-to-cluster voronoi diagram (ccvd): definition 3.5 (cluster-to-cluster voronoi diagram). let ω = {ω_1, ..., ω_K} be a partition of the space R^n, and c = {C_1, ..., C_K} be a set of totally ordered sets with the same cardinality L (i.e. |C_1| = |C_2| = ... = |C_K| = L).
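the configuration pool is just a cartesian product of the three heterogeneity levels; a sketch with hypothetical members of each level (the pool sizes in the paper, 512 or 1280, arise the same way):

```python
from itertools import product

# hypothetical members of each heterogeneity level
augmentations   = ["identity", "rotate90", "flip", "crop"]    # {T}
transformations = [(1.0, 0.0, 0.5), (1.0, 0.1, 0.0)]          # {P_{w,b,lambda}}
models          = ["VD-feature", "VD-surrogate-R5"]           # {M}

pool = list(product(augmentations, transformations, models))
print(len(pool))  # |{T}| * |{P}| * |{M}| = 4 * 2 * 2 = 16
```

each element of the pool is one configuration ρ_i; running all of them over the support set produces the ordered clusters consumed by the ccvd below.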
the set of pairs {(ω_1, C_1), ..., (ω_K, C_K)} is a cluster-to-cluster voronoi diagram (ccvd) with respect to the influence function F(C_k, C(z)), and each cell is obtained via ω_r = {z ∈ R^n : r(z) = r}, r ∈ {1, ..., K}, with r(z) = argmax_{k∈{1,...,K}} F(C_k, C(z)), where C(z) is the cluster (also a totally ordered set with cardinality L) to which the query point z belongs; that is, all points in this cluster (the query cluster) will be assigned to the same cell. similarly, the influence function is defined over two totally ordered sets C_k = {c_k^(i)}_{i=1}^L and C(z) = {z^(i)}_{i=1}^L: F(C_k, C(z)) = −sign(α) Σ_{i=1}^L d(c_k^(i), z^(i))^α.

with this definition, we are now able to streamline our aforementioned novel approaches into a single ensemble model. suppose there are in total L possible settings in our configuration pool {T} × {P_{w,b,λ}} × {M}. for all configurations {ρ_i}_{i=1}^L, we apply them to the support set S to generate the K totally ordered clusters {{c_k^(ρ_i)}_{i=1}^L}_{k=1}^K, including each center c_k^(ρ_i) derived through configuration ρ_i, and to a query sample x to generate the query cluster C(z) = {z^(ρ_1), ..., z^(ρ_L)}; we then plug these two into definition 3.5 to construct the final voronoi diagram. when only the feature representation is considered in the configuration pool, i.e. ρ_i ∈ {T} × {P_{w,b,λ}}, our fsl process is named deepvoro; if the surrogate representation is also incorporated, i.e. ρ_i ∈ {T} × {P_{w,b,λ}} × {M}, deepvoro is promoted to deepvoro++, which allows for higher geometric diversity. see appendix a for a summary of the notations and acronyms.

experiments

table 2: summary of the datasets used in the paper. [table 2 residue: for each of multidigitmnist, mini-imagenet, cub, and tiered-imagenet, the table lists base classes, novel classes, images per class, and image size.]
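definition 3.5 can be sketched directly: each row below is the center produced by one configuration ρ_i, and the influence pairs the i-th query view with the i-th support center (same configuration). the data are toy values of our own:

```python
import numpy as np

def ccvd_influence(Ck, Cz, alpha=-2.0):
    # F(C_k, C(z)) = -sign(alpha) * sum_i d(c_k^(i), z^(i))^alpha: the i-th
    # support center is matched with the i-th query view (same configuration)
    d = np.linalg.norm(Ck - Cz, axis=1)
    return -np.sign(alpha) * np.sum(d ** alpha)

def ccvd_assign(Cz, class_clusters, alpha=-2.0):
    # the whole query cluster C(z) is assigned to one cell (definition 3.5)
    return int(np.argmax([ccvd_influence(Ck, Cz, alpha) for Ck in class_clusters]))

# L = 3 configurations, 2 classes; rows are per-configuration centers c_k^(rho_i)
class_clusters = [np.array([[0., 0.], [0.1, 0.], [0., 0.1]]),
                  np.array([[5., 5.], [5.1, 5.], [5., 5.1]])]
Cz = np.array([[0.4, 0.2], [0.5, 0.2], [0.4, 0.3]])  # query cluster C(z)
print(ccvd_assign(Cz, class_clusters))               # the query falls in cell 0
```

because every configuration is a fixed transformation followed by a nearest-center rule, the whole ensemble needs no meta-testing-time training, which is the efficiency claim made for deepvoro.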
the main goals of our experiments are to: (1) validate the strength of civd to integrate parametric and nonparametric classifiers and confirm the necessity of voronoi reduction; (2) investigate how different levels of heterogeneity individually or collaboratively contribute to the overall result, and compare them with the state-of-the-art methods; (3) reanalyze this ensemble when the surrogate representation comes into play, and see how it can ameliorate the extreme shortage of support samples. see table 2 for a summary and appendix d for detailed descriptions of mini-imagenet (vinyals et al., 2016), cub (welinder et al., 2010), and tiered-imagenet (ren et al., 2018), the datasets used in this paper.

deepvoro--: integrating parametric and nonparametric methods via civd. to verify our proposed civd model for the integration of parametric/nonparametric fsl classifiers, we first run three standalone models: logistic regressions with power/voronoi diagrams as the underlying geometric structure (power-lr/voronoi-lr), and a vanilla voronoi diagram (vd, i.e. the nearest neighbor model), and then integrate vd with either power-lr or voronoi-lr (see appendix e for details). interestingly, vd with power-lr never reaches the best result, suggesting that ordinary lr cannot be integrated with vd due to their intrinsically distinct geometric structures. after the proposed voronoi reduction (theorem 3.1), however, vd+voronoi-lr is able to improve upon both models in most cases, suggesting that civd can ideally integrate parametric and nonparametric models for better fsl.

table 3: the 5-way few-shot accuracy (in %) with 95% confidence intervals of deepvoro and deepvoro++ compared against the state-of-the-art results on three benchmark datasets. ¶ the results of dc and s2m2_r are reproduced based on open-sourced implementations using the same random seed as deepvoro.
[table 3 residue: rows are maml (finn et al., 2017), meta-sgd (li et al., 2017), meta variance transfer (park et al., 2020), metagan (zhang et al., 2018), delta-encoder (schwartz et al., 2018), matching net (vinyals et al., 2016), prototypical net (snell et al., 2017), baseline++ (chen et al., 2019a), variational few-shot (zhang et al., 2019), trinet (chen et al., 2019b), leo (rusu et al., 2018), dco (lee et al., 2019), negative-cosine (liu et al., 2020a), mtl (wang et al., 2021), constellationnet (xu et al., 2021), afhn (li et al., 2020b), am3+traml (li et al., 2020a), e3bm (liu et al., 2020c), simpleshot (wang et al., 2019), r2-d2 (liu et al., 2020b), robust-dist++ (dvornik et al., 2019), iept (zhang et al., 2021b), melr (fei et al., 2021), s2m2_r¶ (mangla et al., 2020), m-svm+mm+ens+val (ni et al., 2021), deepemd (zhang et al., 2020a), deepemd-v2 (zhang et al., 2020b), dc¶ (yang et al., 2021), pt+ncm (hu et al., 2021), deepvoro, and deepvoro++; columns report 5-way 1-shot and 5-way 5-shot accuracy on tiered-imagenet, mini-imagenet, and cub.]

deepvoro: improving fsl by hierarchical heterogeneities. in this section, we only consider two levels of heterogeneity for the ensemble: feature-level and transformation-level. for the feature-level ensemble, we utilize three kinds of image augmentations: rotation, flipping, and central cropping, summing up to 64 distinct ways of data augmentation (appendix f). for the transformation-level ensemble, we use the proposed compositional transformations with 8 different combinations of λ and b that encourage diverse feature transformations (appendix f.1) without loss of accuracy (figure 2). the size of the resulting configuration pool thus becomes 512, and deepvoro's performance is shown in table 3. clearly, deepvoro outperforms all previous methods, especially on 5-way 5-shot fsl. specifically, deepvoro is better than the next best by 2.18% (ni et al. (2021)) on mini-imagenet, by 1.47% (hu et al. (2021)) on cub, and by 1.02% (yang et al. (2021)) on tiered-imagenet.
note that this is an estimated improvement because not all competing methods here are tested with the same random seed and number of episodes. more detailed results can be found in appendix f. by virtue of ccvd and using the simplest vd as the building block, deepvoro is arguably able to yield a consistently better result through the ensemble of a massive pool of independent vds. deepvoro also exhibits high resistance to outliers, as shown in figure k.16.

deepvoro++: further improvement of fsl via surrogate representation. in the surrogate representation, the number of neighbors R for each novel class and the weight β balancing the surrogate/feature representations are two hyperparameters. with an available validation set, a natural question is whether the hyperparameters can be found through optimization on the validation set, which requires good generalization of the hyperparameters across different novel classes. from figure k.13, the accuracy of vd with varying hyperparameters shows good agreement between the testing and validation sets. with this in mind, we select 10 combinations of β and R, guided by the validation set, in conjunction with 2 different feature transformations and 64 different image augmentations, adding up to a large pool of 1280 configurations for the ensemble (denoted by deepvoro++). as shown in table 3, deepvoro++ achieves the best results for 1-shot fsl: 2.53% higher than zhang et al. (2020b), 2.38% higher than hu et al. (2021), and 1.09% higher than zhang et al. (2020b), on the three datasets, respectively, justifying the efficacy of our surrogate representation. see appendix g for a more detailed analysis.

table 4: deepvoro ablation experiments with feature(feat.)/transformation(trans.)/geometry(geo.)-level heterogeneities on the mini-imagenet 5-way few-shot dataset. l denotes the size of the configuration pool, i.e. the number of ensemble members. ♯ these lines show the average vd accuracy without ccvd integration. [table 4 residue: rows are deepvoro-civd, deepvoro (ccvd), and deepvoro++ (ccvd with surrogate representation); columns report the geometric structure, the tunable parameters (rotation etc.; w, b, λ), the feat./trans./geo. heterogeneity levels, l, and the 5-way 1-shot and 5-way 5-shot accuracy.]

figure 2: the 5-way few-shot accuracy of vd with different λ and b on the mini-imagenet and cub datasets.

ablation experiments and running time. table 4 varies the level of heterogeneity (see tables f.4 and g.5 for all datasets). the average accuracy of vds without ccvd integration is marked by ♯, and is significantly lower than that of the fully-fledged ensemble. table 5 presents the running time of deepvoro(++) benchmarked on a 20-core intel© core™ i7 cpu with numpy (v1.20.3); its efficiency is comparable to dc/s2m2_r, even with >1000× diversity.

experiments with different backbones, meta-training protocols, and domains. because different feature extraction backbones, meta-training losses, and degrees of discrepancy between the source/target domains will all affect the downstream fsl, we here examine the robustness of deepvoro/deepvoro++ under a number of different circumstances; details are shown in appendices h, i, and j. notably, deepvoro/deepvoro++ attains the best performance by up to 5.55%, and is therefore corroborated as a superior method for fsl, regardless of the backbone, training loss, or domain.

table 5: running time comparison. [table 5 residue: rows are dc, s2m2_r, deepvoro, and deepvoro++; the column reports time (min).]

conclusion

in this paper, our contribution is threefold. we first theoretically unify parametric and nonparametric few-shot classifiers into a general geometric framework (vd) and show an improved result by virtue of this integration (civd). by extending civd to ccvd, we present a fast geometric ensemble method (deepvoro) that takes into consideration thousands of fsl configurations with high efficiency.
to deal with the extreme data insufficiency in one-shot learning, we further propose a novel surrogate representation which, when incorporated into deepvoro, promotes the performance of one-shot learning to a higher level (deepvoro++). in future studies, we plan to extend our geometric approach to meta-learning-based fsl and lifelong fsl.

acknowledgments

this research was supported in part by nsf through grant iis-1910492.

reproducibility statement

our code as well as the data splits, random seeds, hyperparameters, and scripts for reproducing the results in the paper are available at https://github.com/horsepurve/deepvoro.

references

charu c aggarwal, alexander hinneburg, and daniel a keim. on the surprising behavior of distance metrics in high dimensional space. in international conference on database theory, pp. 420–434. springer, 2001.

franz aurenhammer. power diagrams: properties, algorithms and applications. siam journal on computing, 1987.

randall balestriero, romain cosentino, behnaam aazhang, and richard baraniuk. the geometry of deep networks: power diagram subdivision. advances in neural information processing systems, 32:15832–15841, 2019.

randall balestriero, sebastien paris, and richard g baraniuk. max-affine spline insights into deep generative networks. in international conference on learning representations 2020, 2020.

danny z. chen, ziyun huang, yangwei liu, and jinhui xu. on clustering induced voronoi diagrams. in 54th annual ieee symposium on foundations of computer science, focs 2013, 26-29 october, 2013, berkeley, ca, usa, pp. 390–399. ieee computer society, 2013. doi: 10.1109/focs.2013.49.

danny z. chen, ziyun huang, yangwei liu, and jinhui xu. on clustering induced voronoi diagrams, 2017.

wei-yu chen, yen-cheng liu, zsolt kira, yu-chiang frank wang, and jia-bin huang. a closer look at few-shot classification. in international conference on learning representations, 2019a. url https://openreview.net/forum?id=hkxlxnacfq.
zitian chen, yanwei fu, yinda zhang, yu-gang jiang, xiangyang xue, and leonid sigal. multi-level semantic feature augmentation for one-shot learning. ieee transactions on image processing, 28(9):4594–4605, 2019b.

jia deng, wei dong, richard socher, li-jia li, kai li, and li fei-fei. imagenet: a large-scale hierarchical image database. in 2009 ieee conference on computer vision and pattern recognition, pp. 248–255. ieee, 2009.

nikita dvornik, cordelia schmid, and julien mairal. diversity with cooperation: ensemble methods for few-shot classification. in proceedings of the ieee/cvf international conference on computer vision, pp. 3723–3731, 2019.

nanyi fei, zhiwu lu, tao xiang, and songfang huang. melr: meta-learning via modeling episode-level relationships for few-shot learning. in international conference on learning representations, 2021. url https://openreview.net/forum?id=d3pcgldmx0.

chelsea finn, pieter abbeel, and sergey levine. model-agnostic meta-learning for fast adaptation of deep networks. in international conference on machine learning, pp. 1126–1135. pmlr, 2017.

ian goodfellow, jean pouget-abadie, mehdi mirza, bing xu, david warde-farley, sherjil ozair, aaron courville, and yoshua bengio. generative adversarial nets. advances in neural information processing systems, 27, 2014.

kaiming he, xiangyu zhang, shaoqing ren, and jian sun. deep residual learning for image recognition. in proceedings of the ieee conference on computer vision and pattern recognition, pp. 770–778, 2016.

andrew g howard, menglong zhu, bo chen, dmitry kalenichenko, weijun wang, tobias weyand, marco andreetto, and hartwig adam. mobilenets: efficient convolutional neural networks for mobile vision applications. arxiv preprint arxiv:1704.04861, 2017.

yuqing hu, vincent gripon, and stéphane pateux. leveraging the feature distribution in transfer-based few-shot learning. artificial neural networks and machine learning – icann 2021, pp. 487–499, 2021. issn 1611-3349.
doi: 10.1007/978-3-030-86340-1_39. url http://dx.doi.org/10.1007/978-3-030-86340-1_39.

gao huang, zhuang liu, laurens van der maaten, and kilian q weinberger. densely connected convolutional networks. in proceedings of the ieee conference on computer vision and pattern recognition, pp. 4700–4708, 2017.

ziyun huang and jinhui xu. an efficient sum query algorithm for distance-based locally dominating functions.

ziyun huang, danny z. chen, and jinhui xu. influence-based voronoi diagrams of clusters, 2021.

diederik p. kingma and max welling. auto-encoding variational bayes. in 2nd international conference on learning representations, iclr 2014, 2014.

kwonjoon lee, subhransu maji, avinash ravichandran, and stefano soatto. meta-learning with differentiable convex optimization. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 10657–10665, 2019.

aoxue li, weiran huang, xu lan, jiashi feng, zhenguo li, and liwei wang. boosting few-shot learning with adaptive margin loss. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 12576–12584, 2020a.

kai li, yulun zhang, kunpeng li, and yun fu. adversarial feature hallucination networks for few-shot learning. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 13470–13479, 2020b.

zhenguo li, fengwei zhou, fei chen, and hang li. meta-sgd: learning to learn quickly for few-shot learning, 2017.

bin liu, yue cao, yutong lin, qi li, zheng zhang, mingsheng long, and han hu. negative margin matters: understanding margin in few-shot classification. in european conference on computer vision, pp. 438–455. springer, 2020a.

jialin liu, fei chao, and chih-min lin. task augmentation by rotating for meta-learning. arxiv preprint.

yanbin liu, juho lee, minseop park, saehoon kim, eunho yang, sungju hwang, and yi yang. learning to propagate labels: transductive propagation network for few-shot learning. in international conference on learning representations, 2019.
yaoyao liu, bernt schiele, and qianru sun. an ensemble of epoch-wise empirical bayes for few-shot learning. in european conference on computer vision, pp. 404–421. springer, 2020c.

chunwei ma, yan ren, jiarui yang, zhe ren, huanming yang, and siqi liu. improved peptide retention time prediction in liquid chromatography through deep learning. analytical chemistry, 90(18):10881–10888, 2018.

chunwei ma, zhanghexuan ji, and mingchen gao. neural style transfer improves 3d cardiovascular mr image segmentation on inconsistent data. in international conference on medical image computing and computer-assisted intervention, pp. 128–136. springer, 2019.

chunwei ma, ziyun huang, jiayi xian, mingchen gao, and jinhui xu. improving uncertainty calibration of deep neural networks via truth discovery and geometric optimization. in uncertainty in artificial intelligence, pp. 75–85. pmlr, 2021a.

jianzhu ma, samson h fong, yunan luo, christopher j bakkenist, john paul shen, soufiane mourragui, lodewyk fa wessels, marc hafner, roded sharan, jian peng, et al. few-shot learning creates predictive models of drug response that translate from high-throughput screens to individual patients. nature cancer, 2(2):233–244, 2021b.

puneet mangla, nupur kumari, abhishek sinha, mayank singh, balaji krishnamurthy, and vineeth n balasubramanian. charting the right manifold: manifold mixup for few-shot learning. in proceedings of the ieee/cvf winter conference on applications of computer vision, pp. 2218–2227, 2020.

scott o murray, daniel kersten, bruno a olshausen, paul schrater, and david l woods. shape perception reduces activity in human primary visual cortex. proceedings of the national academy of sciences, 99(23):15164–15169, 2002.

renkun ni, micah goldblum, amr sharaf, kezhi kong, and tom goldstein. data augmentation for meta-learning. in international conference on machine learning (icml), pp. 8152–8161. pmlr, 2021.

hans p. op de beeck, katrien torfs, and johan wagemans.
perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway. journal of neuroscience, 28(40):10111–10123, 2008. doi: 10.1523/jneurosci.2511-08.2008.

seong-jin park, seungju han, ji-won baek, insoo kim, juhwan song, hae beom lee, jae-joon han, and sung ju hwang. meta variance transfer: learning to augment from the others. in international conference on machine learning, pp. 7510–7520. pmlr, 2020.

maithra raghu, ben poole, jon kleinberg, surya ganguli, and jascha sohl-dickstein. on the expressive power of deep neural networks. in international conference on machine learning (icml), pp. 2847–2854. pmlr, 2017.

mengye ren, sachin ravi, eleni triantafillou, jake snell, kevin swersky, josh b. tenenbaum, hugo larochelle, and richard s. zemel. meta-learning for semi-supervised few-shot classification. in international conference on learning representations, 2018. url https://openreview.net/forum?id=hjcszz-cz.

olga russakovsky, jia deng, hao su, jonathan krause, sanjeev satheesh, sean ma, zhiheng huang, andrej karpathy, aditya khosla, michael bernstein, et al. imagenet large scale visual recognition challenge. international journal of computer vision, 115(3):211–252, 2015.

andrei a rusu, dushyant rao, jakub sygnowski, oriol vinyals, razvan pascanu, simon osindero, and raia hadsell. meta-learning with latent embedding optimization. in international conference on learning representations, 2018.

othman sbai, camille couprie, and mathieu aubry. impact of base dataset design on few-shot image classification. in european conference on computer vision, pp. 597–613. springer, 2020.

eli schwartz, leonid karlinsky, joseph shtok, sivan harary, mattias marder, abhishek kumar, rogério schmidt feris, raja giryes, and alexander m bronstein. delta-encoder: an effective sample synthesis method for few-shot object recognition. in neurips, 2018.

hossein sharifi-noghabi, shuman peng, olga zolotareva, colin c collins, and martin ester.
aitl: adversarial inductive transfer learning with input and output space adaptation for pharmacogenomics. bioinformatics, 36:i380–i388, 07 2020.

jake snell, kevin swersky, and richard zemel. prototypical networks for few-shot learning. advances in neural information processing systems (nips), 30:4077–4087, 2017.

shao-hua sun. multi-digit mnist for few-shot learning, 2019. url https://github.com/shaohua0116/multidigitmnist.

oriol vinyals, charles blundell, timothy lillicrap, daan wierstra, et al. matching networks for one shot learning. advances in neural information processing systems, 29:3630–3638, 2016.

haoxiang wang, han zhao, and bo li. bridging multi-task learning and meta-learning: towards efficient training and effective adaptation. in international conference on machine learning. pmlr, 2021.

jiazhuo wang, john d. mackenzie, rageshree ramachandran, and danny z. chen. neutrophils identification by deep learning and voronoi diagram of clusters. in medical image computing and computer-assisted intervention (miccai) 2015, pp. 226–233, cham, 2015.

yan wang, wei-lun chao, kilian q weinberger, and laurens van der maaten. simpleshot: revisiting nearest-neighbor classification for few-shot learning. arxiv preprint arxiv:1911.04623, 2019.

yaqing wang, quanming yao, james t. kwok, and lionel m. ni. generalizing from a few examples: a survey on few-shot learning. acm comput. surv., 53(3), june 2020.

zichao wang, randall balestriero, and richard baraniuk. a max-affine spline perspective of recurrent neural networks. in international conference on learning representations, 2018.

peter welinder, steve branson, takeshi mita, catherine wah, florian schroff, serge belongie, and pietro perona. caltech-ucsd birds 200. california institute of technology, 2010.

weijian xu, yifan xu, huaijin wang, and zhuowen tu. attentional constellation nets for few-shot learning. in international conference on learning representations, 2021.

shuo yang, lu liu, and min xu.
free lunch for few-shot learning: distribution calibration. in international conference on learning representations, 2021. tae keun yoo, joon yul choi, and hong kyu kim. feasibility study to improve deep learning in oct diagnosis of rare retinal diseases with few-shot classification. medical & biological engineering & computing, 59(2):401–415, 2021. sergey zagoruyko and nikos komodakis. wide residual networks. in british machine vision conference 2016. british machine vision association, 2016. baoquan zhang, xutao li, yunming ye, zhichao huang, and lisai zhang. prototype completion with primitive knowledge for few-shot learning. in proceedings of the ieee/cvf conference on computer vision and pattern recognition, pp. 3754–3762, 2021a. chi zhang, yujun cai, guosheng lin, and chunhua shen. deepemd: few-shot image classification with differentiable earth mover’s distance and structured classifiers. in ieee/cvf conference on computer vision and pattern recognition (cvpr), june 2020a. chi zhang, yujun cai, guosheng lin, and chunhua shen. deepemd: differentiable earth mover’s distance for few-shot learning, 2020b. hongyi zhang, moustapha cisse, yann n dauphin, and david lopez-paz. mixup: beyond empirical risk minimization. arxiv preprint arxiv:1710.09412, 2017. jian zhang, chenglong zhao, bingbing ni, minghao xu, and xiaokang yang. variational few-shot learning. in proceedings of the ieee/cvf international conference on computer vision. manli zhang, jianhong zhang, zhiwu lu, tao xiang, mingyu ding, and songfang huang. iept: instance-level and episode-level pretext tasks for few-shot learning. in international conference on learning representations, 2021b. ruixiang zhang, tong che, zoubin ghahramani, yoshua bengio, and yangqiu song. metagan: an adversarial approach to few-shot learning. neurips, 2:8, 2018. linjun zhou, peng cui, xu jia, shiqiang yang, and qi tian. learning to select base classes for few-shot classification.
in 2020 ieee/cvf conference on computer vision and pattern recognition (cvpr), jun 2020.

a notations and acronyms

table a.1: notations and acronyms for vd, pd, civd, and ccvd, four geometric structures discussed in the paper.
• voronoi diagram (vd): ck, center for a voronoi cell ωk, k ∈ {1, .., k}; ωk, dominating region for center ck, k ∈ {1, .., k}.
• power diagram (pd): ck, center for a power cell ωk, k ∈ {1, .., k}; νk, weight for center ck, k ∈ {1, .., k}; ωk, dominating region for center ck, k ∈ {1, .., k}.
• cluster-induced voronoi diagram (civd): ck, cluster as the ”center” for a civd cell ωk, k ∈ {1, .., k}; ωk, dominating region for cluster ck; f, influence function f(ck, z) from cluster ck to query point z; α, magnitude of the influence.
• cluster-to-cluster voronoi diagram (ccvd): ck, cluster as the ”center” for a ccvd cell ωk, k ∈ {1, .., k}; ωk, dominating region for cluster ck; c(z), the cluster that query point z belongs to; f, influence function f(ck, c(z)) from ck to query cluster c(z); α, magnitude of the influence.

table a.2: summary and comparison of geometric structures, centers, tunable parameters, and the numbers of tunable parameters (denoted by #) for deepvoro--, deepvoro, and deepvoro++. parameters for feature-level, transformation-level, and geometry-level heterogeneity are shown in yellow, blue, and red, respectively. see sec. f for implementation details. †here pd is reduced to vd by theorem 3.1. ‡for every λ (or r), the b (or β) value with the highest validation accuracy is introduced into the configuration pool.
• deepvoro-- (civd): centers ck = {ck, ˜ck}, with ck from vd and ˜ck from pd†.
• deepvoro (ccvd): centers ck = {ck(ρi)}, ρi ∈ {t} × {pw,b,λ}; tunable parameters: angle of rotation (#4), flipping or not (#2), scaling & cropping (#8), w = 1, the scale factor in linear transformation (#−), b, the shift factor in linear transformation (#4), λ, the exponent in powers transformation (#2); #configurations l = 512.
• deepvoro++ (ccvd): centers ck = {ck(ρi)}, ρi ∈ {t} × {pw,b,λ} × {m}; tunable parameters: angle of rotation (#4), flipping or not (#2), scaling & cropping (#8), w = 1, the scale factor in linear transformation (#−), b, the shift factor in linear transformation (#1‡), λ, the exponent in powers transformation (#2), r, the number of top-r nearest base prototypes for a novel prototype (#−), β, the weight for surrogate representation (#1‡), and the weight for feature representation; #configurations l = 1280.

b power diagram subdivision and voronoi reduction

b.1 proof of theorem 3.1

lemma b.1. the vertical projection from the lower envelope of the hyperplanes {πk(z) : y = wk^t z + bk}, k = 1, ..., k, onto the input space rn defines the cells of a pd.

theorem 3.1 (voronoi diagram reduction). the linear classifier parameterized by w, b partitions the input space rn to a voronoi diagram with centers {˜c1, ..., ˜ck} given by ˜ck = (1/2)wk if bk = −(1/4)||wk||2.

proof. we first articulate lemma b.1 and find the exact relationship between the hyperplane πk(z) and the center of its associated cell in rn. by definition 3.1, the cell for a point z ∈ rn is found by comparing d(z, ck)2 − νk for different k, so we define the power function p(z, s) expressing this value:

p(z, s) = (z − u)2 − r2, (11)

in which s ⊆ rn is a sphere with center u and radius r. in fact, the weight ν associated with a center in definition 3.1 can be interpreted as the square of the radius r2. next, let u denote the paraboloid y = z2, and let π(s) be the transform that maps a sphere s with center u and radius r into the hyperplane

π(s) : y = 2z · u − u · u + r2. (12)

it can be proved that π is a bijective mapping between arbitrary spheres in rn and nonvertical hyperplanes in rn+1 that intersect u (aurenhammer, 1987).
further, let z′ denote the vertical projection of z onto u and z′′ denote its vertical projection onto π(s); then the power function can be written as

p(z, s) = d(z, z′) − d(z, z′′), (13)

which implies the following relationship between spheres in rn and hyperplanes in rn+1 (lemma 4 in aurenhammer (1987)): let s1 and s2 be nonco-centric spheres in rn; then the bisector of their power cells is the vertical projection of π(s1) ∩ π(s2) onto rn. now, we have a direct relationship between a sphere s and its hyperplane π(s), and comparing equation (12) with the hyperplanes used in logistic regression {πk(z) : y = wk^t z + bk}, k = 1, ..., k, gives us u = (1/2)wk and bk = −u · u + r2. although there is no guarantee that bk + (1/4)||wk||2 is always positive for an arbitrary logistic regression model, we can impose a constraint on r2 to keep it zero during the optimization, which implies bk = −(1/4)||wk||2. in this way, the radii of all k spheres become identical (all zero). after the optimization of the logistic regression model, the centers {(1/2)wk}, k = 1, ..., k, will be used for civd integration.

c details about the demonstrative example on the multidigitmnist dataset

the multidigitmnist (sun, 2019) dataset is created by concatenating two (or three) digits of different classes from mnist for few-shot image classification. here we use the doublemnist dataset (i.e. two digits in an image), consisting of 100 classes (00 to 99) with 1000 images of size 64 × 64 × 1 per class; the classes are further split into 64, 20, and 16 classes for training, testing, and validation, respectively. to better embed into the r2 space, we pick a ten-class subset (00, 01, 12, 13, 04, 05, 06, 77, 08, and 09) as the base classes for meta-training, and another five-class subset (02, 49, 83, 17, and 36) for one episode. the feature extractor is a 4-layer convolutional network with an additional fully-connected layer for 2d embedding.
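as a side note on the voronoi reduction proved in appendix b above: the claim that a linear classifier with bk = −(1/4)||wk||2 assigns points exactly like a voronoi diagram with centers ˜ck = (1/2)wk can be checked numerically. the following is a minimal numpy sketch with toy data and variable names of our own, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
K, n, m = 5, 3, 200            # classes, input dimension, query points
W = rng.normal(size=(K, n))    # rows are the w_k of the linear classifier
b = -0.25 * (W ** 2).sum(1)    # bias fixed by theorem 3.1: b_k = -1/4 ||w_k||^2
Z = rng.normal(size=(m, n))    # query points

# linear-classifier assignment: argmax_k  w_k^T z + b_k
linear_cells = np.argmax(Z @ W.T + b, axis=1)

# voronoi assignment to centers c~_k = w_k / 2: argmin_k ||z - w_k/2||^2
centers = 0.5 * W
dists = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
voronoi_cells = np.argmin(dists, axis=1)

assert np.array_equal(linear_cells, voronoi_cells)
```

the assertion holds because ||z − wk/2||² = ||z||² − (wkᵀz − ¼||wk||²), so minimizing the distance over k is the same as maximizing wkᵀz + bk once the z-independent term ||z||² is dropped.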
in the figure 1 left panel, the vd is obtained by setting the centroid of each base class as the voronoi center. for each novel class, the voronoi center is simply the 1-shot support sample (figure 1 central panel). the surrogate representation is computed as the collection of distances from a support/query sample to each of the base classes, as shown in the figure 1 right panel. interestingly, the surrogate representations for a novel class, whether for a support sample (dotted line) or a query sample (colored line), generally follow a certain pattern (akin within a class, distinct across classes), making them ideal surrogates for distinguishing between different novel classes. in our paper, we design a series of algorithms answering multiple questions regarding this surrogate representation: how to select base classes for the calculation of the surrogate representation, how to combine it with the feature representation, and how to integrate it into the overall ensemble workflow.

d main datasets

for a fair and thorough comparison with previous works, three widely-adopted benchmark datasets are used throughout this paper. (1) mini-imagenet (vinyals et al., 2016) is a shrunk subset of ilsvrc-12 (russakovsky et al., 2015), consisting of 100 classes: 64 classes for training, 20 classes for testing, and 16 classes for validation. each class has 600 images of size 84 × 84 × 3. (2) cub (welinder et al., 2010) is another benchmark dataset for fsl, especially fine-grained fsl, including 200 species (classes) of birds. cub is an unbalanced dataset with 58 images on average per class, also of size 84 × 84 × 3. we split all classes into 100 base classes, 50 novel classes, and 50 validation classes, following previous works (chen et al., 2019a). (3) tiered-imagenet (ren et al., 2018) is another subset of ilsvrc-12 (russakovsky et al., 2015) but has more images, 779,165 images in total. all images are categorized into 351 base classes, 97 validation classes, and 160 novel classes.
the number of images in each class is not always the same, 1,281 on average. the image size is also 84 × 84 × 3.

e deepvoro--: integrating parametric and nonparametric methods via civd

table e.3: cluster-induced voronoi diagram (civd) for the integration of parametric logistic regression (lr) and nonparametric nearest neighbor (i.e. voronoi diagram, vd) methods. the results from s2m2 r and dc are also included in this table but excluded for comparison. best result is marked in bold. columns: 5-way 1-shot and 5-way 5-shot accuracy on mini-imagenet, cub, and tiered-imagenet; rows: s2m2 r, dc, power-lr, voronoi-lr, vd, and the civd-based deepvoro (vd + power-lr, vd + voronoi-lr). [numeric entries not recoverable from the extraction]

e.1 experimental setup and implementation details

in this section, we first establish three few-shot classification models with different underlying geometric structures, two logistic regression (lr) models and one nearest neighbor model: (1) power diagram-based lr (power-lr), (2) voronoi diagram-based lr (voronoi-lr), and (3) voronoi diagram (vd). then, the main purposes of our analysis are (1) to examine how the performance is affected by the proposed voronoi reduction method in sec. 3.2, and (2) to inspect whether vd can be integrated with power/voronoi diagram-based lrs. the feature transformation used throughout this section is pw,b,λ with w = 1.0, b = 0.0, λ = 0.5. for power-lr, we train it directly on the transformed k-way n-shot support samples using pytorch, with the adam optimizer, a batch size of 64, and a learning rate of 0.01. for voronoi-lr, the vanilla lr is retrofitted as shown in algorithm 1, in which the bias is given by theorem 3.1 to make sure that the parameters induce a vd in each iteration. in our civd model in definition 3.2, we use a cluster instead of a single prototype to stand for a novel class. here this cluster contains two points, i.e. ck = {ck, ˜ck}, in which ck is obtained from vd, and ˜ck is acquired from power-lr or voronoi-lr.
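the retrofitted voronoi-lr referenced above can be sketched as plain softmax regression in which only w is optimized and the bias is recomputed from theorem 3.1 at every step. below is a hedged numpy illustration on toy data (labels come from a random linear teacher; all names are ours, and the paper trains in pytorch rather than with this hand-written gradient):

```python
import numpy as np

rng = np.random.default_rng(1)
K, n, N = 3, 4, 30                          # ways, feature dim, support size
X = rng.normal(size=(N, n))
y = np.argmax(X @ rng.normal(size=(n, K)), axis=1)  # toy teacher labels
Y = np.eye(K)[y]                            # one-hot labels

W = 0.01 * rng.normal(size=(K, n))          # only W is optimized
lr = 0.2

def loss_and_grad(W):
    b = -0.25 * (W ** 2).sum(1)             # theorem 3.1: b_k = -1/4 ||w_k||^2
    logits = X @ W.T + b
    logits -= logits.max(1, keepdims=True)  # numerical stability
    P = np.exp(logits); P /= P.sum(1, keepdims=True)
    loss = -np.log(P[np.arange(N), y] + 1e-12).mean()
    # gradient flows through both the logits and the W-dependent bias:
    # d logits_{ik} / d W_{kj} = X_{ij} - 0.5 * W_{kj}
    G = (P - Y) / N                         # d loss / d logits
    grad = G.T @ X - 0.5 * G.sum(0)[:, None] * W
    return loss, grad

losses = []
for _ in range(200):
    l, g = loss_and_grad(W)
    losses.append(l)
    W -= lr * g
assert losses[-1] < losses[0]               # training reduces the loss
```

because the bias is a deterministic function of w, the model keeps inducing a voronoi diagram at every iteration while still being trainable by ordinary gradient descent.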
the question we intend to answer here is whether power-lr or voronoi-lr is the suitable model for the integration.

algorithm 1: voronoi diagram-based logistic regression.
data: support set s
result: w
1 initialize w ← w(0);
2 for epoch ← 1, ..., #epoch do
3   bk ← −(1/4)||wk||2 ;  / apply theorem 3.1
4   compute loss l(w, b) ;  / forward propagation
5   update w ;  / backward propagation
6 end
7 return w

figure f.1: the t-sne visualizations of (a) original features, (b) l2 normalization, (c) tukey’s ladder of powers transformation with λ = 0.5, and (d) compositional transformation with λ = 0, w = 1, b = 0.04, of 5 novel classes from the mini-imagenet dataset.

e.2 results

the results are shown in table e.3. interestingly, when integrated with vd, power-lr never reaches the best result, suggesting that vd and lr are intrinsically different geometric models and cannot be simply integrated without additional effort. on the mini-imagenet and tiered-imagenet datasets, the best results are achieved by either voronoi-lr or vd + voronoi-lr, showing that civd coupled with the proposed voronoi reduction can ideally integrate parametric and nonparametric few-shot models. notably, on these two datasets, when power-lr is reduced to voronoi-lr, although the number of parameters is decreased (b is directly given by theorem 3.1 and not involved in the optimization), the performance is always better; for example, it increases from 65.45% to 65.58% on 5-way 1-shot mini-imagenet data. on the cub dataset, results of different models are similar, probably because cub is a fine-grained dataset and all classes are similar to each other (all birds).

f deepvoro: improving fsl via hierarchical heterogeneities

f.1 experimental setup and implementation details

in this section we describe the feature-level and transformation-level heterogeneities that are used for ensemble in order to improve fsl. see the next section for geometry-level heterogeneity.

feature-level heterogeneity.
considering the reproducibility of the methodology, we only employ deterministic data augmentation upon the images, without randomness involved. specifically, three kinds of data augmentation techniques are used. (1) rotation is an important augmentation method widely used in self-supervised learning (mangla et al., 2020). rotating the original images by 0°, 90°, 180°, and 270° gives us four ways of augmentation. (2) after rotation, we can flip the images horizontally, giving rise to two additional choices after each rotation degree. (3) central cropping after scaling can alter the resolution and focus area of the image. scaling the original images to (84 + b) × (84 + b), with b increasing from 0 to 70 in steps of 10, brings us eight ways of augmentation. finally, different combinations of the three types result in 64 kinds of augmentation methods (i.e. |{t}| = 64).

transformation-level heterogeneity. in our compositional transformation, the function (hλ ◦ gw,b ◦ f)(z) is parameterized by w, b, λ. since g is appended after the l2 normalization f, the vector that comes into g is always a unit vector, so we simply set w = 1. for the different combinations of λ and b, we test different values with either λ = 0 or λ ≠ 0 on the hold-out validation set (as shown in figures 2 and k.12), and pick the top-8 combinations with the best performance on the validation set.

ensemble schemes. now, in our configuration pool {t} × {pw,b,λ}, there are 512 possible configurations ρ(1), ..., ρ(512). for each ρ, we apply it on both the testing and the validation sets. with this large pool of ensemble candidates, how and whether to select a subset of size l′ is still a nontrivial problem. here we explore three different schemes. (1) full (vanilla) ensemble. all 512 candidates are taken into consideration and then plugged into definition 3.5 to build the civd for space partition. (2) random ensemble. a randomly selected subset of size l′ < l is used for ensemble.
(3) guided ensemble. we expect that the performance of the 512 configurations on the validation set can be used to guide the selection of the l′ configurations used on the testing set, provided that there is good correlation between the testing set and the validation set. specifically, we rank the configurations with regard to their performance on the validation set, and add them sequentially into the selected set until a maximum ensemble performance is reached on the validation set; we then use this configuration set for the final ensemble. since vd is nonparametric and fast, we adopt vd as the building block and only use vd for each ρ for the remaining part of the paper. the α value in the influence function (definition 3.3) is set at 1 throughout the paper, for simplicity of computation. for a fair comparison, we downloaded the trained models1 used by mangla et al. (2020) and yang et al. (2021). the performance of fsl algorithms is typically evaluated by a sequence of independent episodes, so the data split and the random seed for the selection of novel classes as well as the support/query set in each episode will all lead to different results. to ensure the fairness of our evaluation, dc (yang et al., 2021) and s2m2 r (mangla et al., 2020) are reevaluated with the same data split and random seed as deepvoro. the results are obtained by running 2000 episodes, and the average accuracy as well as 95% confidence intervals are reported.

f.2 results

table f.4: ablation study of deepvoro’s performance with different levels of ensemble. the number of ensemble members is given in parentheses. rows: no ensemble, vanilla ensemble (8), vanilla ensemble (64), vanilla ensemble (512), random ensemble (512), guided ensemble (512); columns cover feature-level and transformation-level ensembles on tiered-imagenet, mini-imagenet, and cub. 1-shot 5-shot 1-shot | 17 | [
362.2062569462,
297.8813626029,
376.6968559862,
303.6776022189
] |
vrW3tvDfOJQ.pdf | 2,022 | 2 | sample efficient deep reinforcement learning via uncertainty estimation vincent mai, kaustubh mani and liam paull ∗ robotics and embodied ai lab mila - quebec institute of artificial intelligence universit´e de montr´eal, quebec, canada {vincent.mai,kaustubh.mani,liam.paull}@umontreal.ca abstract in model-free deep reinforcement learning (rl) algorithms, using noisy value estimates to supervise policy evaluation and optimization is detrimental to the sample efficiency. as this noise is heteroscedastic, its effects can be mitigated using uncertainty-based weights in the optimization process. previous methods rely on sampled ensembles, which do not capture all aspects of uncertainty. we provide a systematic analysis of the sources of uncertainty in the noisy supervision that occurs in rl, and introduce inverse-variance rl, a bayesian framework which combines probabilistic ensembles and batch inverse variance weighting. we propose a method whereby two complementary uncertainty estimation methods account for both the q-value and the environment stochasticity to better mitigate the negative impacts of noisy supervision. our results show significant improvement in terms of sample efficiency on discrete and continuous control tasks. introduction deep reinforcement learning (drl) methods have proven to be powerful at solving sequential decision-making tasks across domains (silver et al., 2016; openai et al., 2019). combining the flexibility of the reinforcement learning framework with the representational power of deep neural networks enables policy optimization in complex and high-dimensional environments with unknown dynamics models to maximize the expected cumulative reward (sutton & barto, 2018). an important limitation of drl methods is their sample inefficiency: an enormous amount of data is necessary and makes training expensive. 
this makes applying drl in the real world challenging, for example in robotics (sünderhauf et al., 2018; dulac-arnold et al., 2019). in tasks like manipulation, sample collection is a slow and costly process (liu et al., 2021). it is even more expensive in risk-averse applications like autonomous driving (kothari et al., 2021). among the current state-of-the-art approaches to improve learning efficiency, a promising direction is to exploit the prevalence of uncertainty in the underlying drl algorithm. by adopting a bayesian framework, we can consider the sampled quantities in drl as random variables and leverage information about their distributions to improve the learning process (osband et al., 2018). in this paper, we consider the particular problem of unreliable supervision in the temporal difference update and the policy optimization process. in drl, value predictions are used to supervise the training: in temporal difference-based algorithms, they are included in bootstrapped target values which are used as labels; in actor-critic frameworks, the policy is trained to optimize them. that these value predictions are noisy slows the learning and brings instability (kumar et al., 2019; 2020). the amount of noise in the supervision depends on the uncertainty of the value prediction, which evolves during the training process and depends on the state (and action) evaluated. it is therefore heteroscedastic. while there is an extensive body of literature focused on using the uncertainty of the value prediction to guide the exploration/exploitation trade-off (dearden et al., 1998; strens, 2001; osband et al., 2016; pathak et al., 2017; chen et al., 2017; osband et al., 2018; fortunato et al., 2019; osband et al., 2019; flennerhag et al., 2020; clements et al., 2020; jain et al., 2021; aravindan & lee, 2021), there are very few works focused on leveraging it to mitigate the impact of unreliable supervision.
∗canada cifar ai chair
distributional rl (bellemare et al., 2017) considers the value function as a distribution to be learned as such. it is orthogonal to our proposition: we consider the uncertainty of the labels used to learn a scalar value function. in the offline rl setting, where the dataset is limited, uncertainty-weighted actor-critic (uwac) (wu et al., 2021) uses inverse-variance weighting to discard out-of-distribution state-action pairs, using monte carlo dropout (gal & ghahramani, 2016) for uncertainty estimation. closer to our work, lee et al. (2021) propose sunrise, in which each sample of the bellman backup in the td update step is weighted to lower the importance of the targets which have a high standard deviation. the weights w(s′, a′) are computed based on a sigmoid of the negative standard deviation ˆqstd(s′, a′) scaled by a temperature hyperparameter t, and then offset such that they are between 0.5 and 1: w(s′, a′) = σ(−ˆqstd(s′, a′) · t) + 0.5. the uncertainty of the target is estimated by sampled ensembles. while sunrise proposes other contributions such as an exploration bonus, the heuristic weighting scheme and the limitations of sampled ensembles in capturing the predictive uncertainty leave space for improvement in the mitigation of the effects of unreliable supervision. we propose inverse-variance reinforcement learning (iv-rl). iv-rl also uses weights to reduce the importance of uncertain targets in training. it does so by addressing the problem from two viewpoints. first, we use variance networks (kendall & gal, 2017), whose loss function for regression is the negative log-likelihood instead of the l2 distance. for a given state-action pair (s, a), the network learns the target’s noise, due for example to the stochasticity of the environment or the update of the policy. it then naturally down-weights the highly noisy samples in the training process.
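the sunrise weight above maps the ensemble standard deviation of the target to a multiplier in (0.5, 1]: certain targets keep full weight, and highly uncertain ones are floored at 0.5 rather than discarded. a small sketch (the function name is ours):

```python
import numpy as np

def sunrise_weight(q_std, temperature):
    """w = sigmoid(-q_std * T) + 0.5; sigmoid(-x) = 1 / (1 + e^x)."""
    return 1.0 / (1.0 + np.exp(q_std * temperature)) + 0.5

q_std = np.array([0.0, 0.5, 2.0, 10.0])   # increasing target uncertainty
w = sunrise_weight(q_std, temperature=1.0)

assert np.isclose(w[0], 1.0)              # zero uncertainty -> full weight
assert np.all(np.diff(w) < 0)             # weight decreases with uncertainty
assert np.all((w > 0.5) & (w <= 1.0))     # floored at 0.5, capped at 1
```

the 0.5 floor is what makes the scheme heuristic compared with inverse-variance weighting: even arbitrarily noisy targets retain half their influence on the update.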
second, we use variance ensembles (lakshminarayanan et al., 2017) to estimate the uncertainty of the target due to the prediction of q(s′, a′) during the temporal-difference update. we merge the predicted variances of several variance networks through a mixture of gaussians, which has been shown to be a reliable method to capture predictive uncertainty (ovadia et al., 2019). we then use batch inverse-variance (biv) (mai et al., 2021), which has been shown to significantly improve the performance of supervised learning with neural networks in the case of heteroscedastic regression. biv is normalized, which makes it ideal to cope with different and time-varying scales of variance. we show analytically that these two different variance predictions for the target are complementary and their combination leads to consistent and significant improvements in the sample efficiency and overall performance of the learning process. in summary, our contribution is threefold:
1. we present a systematic analysis of the sources of uncertainty in the supervision of model-free drl algorithms. we show that the variance of the supervision noise can be estimated with two complementary methods: negative log-likelihood and variance ensembles.
2. we introduce iv-rl, a framework that accounts for the uncertainty of the supervisory signal by weighting the samples in a mini-batch during the agent’s training. iv-rl uses biv, a weighting scheme that is robust to poorly calibrated variance estimation.1
3. our experiments show that iv-rl can lead to significant improvements in sample efficiency when applied to deep q-networks (dqn) (mnih et al., 2013) and soft actor-critic (sac) (haarnoja et al., 2018).
in section 2, we introduce biv as a weighting scheme for heteroscedastic regression, and variance ensembles as an uncertainty estimation method. we analyse the sources of uncertainty in the target in section 3, where we also introduce our iv-rl framework.
we finally present our experimental results in section 4.

background and preliminaries

batch inverse-variance weighting

in supervised learning with deep neural networks, it is assumed that the training dataset consists of inputs xk and labels yk. however, depending on the label generation process, the label may be noisy. in regression, we can model the noise as a normal distribution around the true label: ˜yk = yk + δk with δk ∼ n(0, σk²). in some cases, the label generation process leads to different variances for the label noises. when these variances can be estimated, each sample is a triplet (xk, ˜yk, σk²).
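given such triplets, the inverse-variance weights wk = 1/(σk² + ξ) and the effective-batch-size diagnostic used below can be sketched numerically. the following is a toy illustration with variable names of our own, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 64                                    # mini-batch size
pred = rng.normal(size=K)                 # stand-in for f_theta(x_k)
y_true = rng.normal(size=K)
sigma2 = rng.uniform(0.01, 4.0, size=K)   # heteroscedastic label variances
y_noisy = y_true + rng.normal(size=K) * np.sqrt(sigma2)

def biv_loss(pred, y, sigma2, xi):
    """normalized weighted sum with weights w_k = 1 / (sigma_k^2 + xi)."""
    w = 1.0 / (sigma2 + xi)
    return np.sum(w * (pred - y) ** 2) / np.sum(w)

def effective_batch_size(sigma2, xi):
    """EBS = (sum_k w_k)^2 / sum_k w_k^2."""
    w = 1.0 / (sigma2 + xi)
    return np.sum(w) ** 2 / np.sum(w ** 2)

# a larger xi flattens the weights: EBS grows toward K,
# and the BIV loss tends to the plain (unweighted) L2 loss
ebs_small = effective_batch_size(sigma2, xi=1e-3)
ebs_large = effective_batch_size(sigma2, xi=1e3)
assert ebs_small < ebs_large <= K
assert np.isclose(biv_loss(pred, y_noisy, sigma2, xi=1e9),
                  np.mean((pred - y_noisy) ** 2), rtol=1e-3)
```

the two assertions mirror the limiting behaviors described in the text: ξ = 0 lets the lowest-variance samples dominate (small effective batch), while a very large ξ recovers uniform weighting and the ordinary l2 loss.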
it provides control of the effective mini-batch size ebs, according to: ebs = k wk k w2 k (cid:80)k k (cid:80)k k for example, imagine a mini-batch where most samples have very high variances, and only one has a very low variance. if ξ = 0, this one low-variance sample is effectively the only one to count in the mini-batch, and ebs tends towards 1. increasing ξ would give more relative importance to the other samples, thus increasing ebs. with a very high ξ compared to the variances, all weights are equal, and ebs tends towards k; in this case, the biv loss tends towards l2. tuning the ξ parameter the simplest way to set ξ is to choose a constant value as an additional hyperparameter. however, the best value is difficult to evaluate a priori and can change when the profile of variances changes during a task, as is the case in drl. it is instead possible to numerically compute the value of ξ which ensures a minimal ebs for each mini-batch. this method allows ξ to automatically adapt to the different scales of variance while ensuring a minimal amount of information from the dataset is accounted for by the algorithm. the minimal ebs is also a hyper-parameter, but it is easier to set and to transfer among environments, as it is simply a fraction of the original batch size. as such, it can be set as a batch size ratio. estimating the uncertainty of a neural network prediction | 2 | [
108.249,
261.8160784,
429.0638626,
271.7786784
] |
O3bqkf_Puys.pdf | 2,021 | 0 | pstnet: point spatio-temporal convolution on point cloud sequences hehe fan1, xin yu2, yuhang ding3, yi yang2 & mohan kankanhalli1 1school of computing, national university of singapore 2reler, university of technology sydney 3baidu research abstract point cloud sequences are irregular and unordered in the spatial dimension while exhibiting regularities and order in the temporal dimension. therefore, existing grid based convolutions for conventional video processing cannot be directly applied to spatio-temporal modeling of raw point cloud sequences. in this paper, we propose a point spatio-temporal (pst) convolution to achieve informative representations of point cloud sequences. the proposed pst convolution first disentangles space and time in point cloud sequences. then, a spatial convolution is employed to capture the local structure of points in the 3d space, and a temporal convolution is used to model the dynamics of the spatial regions along the time dimension. furthermore, we incorporate the proposed pst convolution into a deep network, namely pstnet, to extract features of point cloud sequences in a hierarchical manner. extensive experiments on widely-used 3d action recognition and 4d semantic segmentation datasets demonstrate the effectiveness of pstnet to model point cloud sequences. introduction modern robotic and automatic driving systems usually employ real-time depth sensors, such as lidar, to capture the geometric information of scenes accurately while being robust to different lighting conditions. a scene geometry is thus represented by a 3d point cloud, i.e., a set of measured point coordinates {(x, y, z)}. moreover, when rgb images are available, they are often used as additional features associated with the 3d points to enhance the discriminativeness of point clouds. 
however, unlike conventional grid based videos, dynamic point clouds are irregular and unordered in the spatial dimension, while points are not consistent and even flow in and out over time. therefore, existing 3d convolutions on grid based videos (tran et al., 2015; carreira & zisserman, 2017; hara et al., 2018) are not suitable to model raw point cloud sequences, as shown in fig. 1.

figure 1: illustration of grid based and point based convolutions on spatio-temporal sequences. (a) for a grid based video, each grid represents a feature of a pixel, where c, l, h and w denote the feature dimension, the number of frames, height and width, respectively. a 3d convolution encodes an input to an output of size c′ × l′ × h′ × w′. (b) a point cloud sequence consists of a coordinate part (3 × l × n) and a feature part (c × l × n), where n indicates the number of points in a frame. our pst convolution encodes an input to an output composed of a coordinate tensor (3 × l′ × n′) and a feature tensor (c′ × l′ × n′). usually, l′ ≤ l and n′ ≤ n so that networks can model point cloud sequences in a spatio-temporally hierarchical manner. note that points in different frames are not consistent, and thus it is challenging to capture the spatio-temporal correlation.

to model the dynamics of point clouds, one solution is to convert point clouds to a sequence of 3d voxels and then apply 4d convolutions (choy et al., 2019) to the voxel sequence. however, directly performing convolutions on voxel sequences requires a large amount of computation. furthermore, quantization errors are inevitable during voxelization, which may restrict applications that require precise measurement of scene geometry. another solution, meteornet (liu et al., 2019e), extends the static point cloud method pointnet++ (qi et al., 2017b) to process raw point cloud sequences by appending a 1d temporal dimension to the 3d points. however, simply concatenating coordinates and time together and treating point cloud sequences as unordered 4d point sets neglects the temporal order of timestamps, which may not properly exploit the temporal information and leads to inferior performance. moreover, the scales of spatial displacements and temporal differences in point cloud sequences may not be compatible, and treating them equally is not conducive to network optimization. besides, meteornet only considers spatial neighbors and neglects the local dependencies of neighboring frames. with the whole sequence length as its temporal receptive field, meteornet cannot construct a temporal hierarchy. as points are not consistent and even flow in and out of a region, especially for long sequences and fast motion, embedding points in a spatially local area along an entire sequence handicaps capturing accurate local dynamics of point clouds. in this paper, we propose a point spatio-temporal (pst) convolution to directly process raw point cloud sequences. as dynamic point clouds are spatially irregular but ordered in the temporal dimension, we
in this paper, we propose a point spatio-temporal (pst) convolution to directly process raw point cloud sequences. as dynamic point clouds are spatially irregular but ordered in the temporal dimension, we decouple the spatial and temporal information to model point cloud sequences. specifically, pst convolution consists of (i) a point based spatial convolution that models the spatial structure of 3d points and (ii) a temporal convolution that captures the temporal dynamics of point cloud sequences. in this fashion, pst convolution significantly facilitates the modeling of dynamic point clouds and reduces the impact of the spatial irregularity of points on temporal modeling. because points in point cloud sequences emerge inconsistently across frames, it is challenging to perform convolution on them. to address this problem, we introduce a point tube to preserve the spatio-temporal local structure.

figure 1: illustration of grid based and point based convolutions on spatio-temporal sequences. (a) for a grid based video, each grid represents a feature of a pixel, where c, l, h and w denote the feature dimension, the number of frames, height and width, respectively. a 3d convolution encodes an input to an output of size c′ × l′ × h′ × w′. (b) a point cloud sequence consists of a coordinate part (3 × l × n) and a feature part (c × l × n), where n indicates the number of points in a frame. our pst convolution encodes an input to an output composed of a coordinate tensor (3 × l′ × n′) and a feature tensor (c′ × l′ × n′). usually, l′ ≤ l and n′ ≤ n, so that networks can model point cloud sequences in a spatio-temporally hierarchical manner. note that points in different frames are not consistent, and thus it is challenging to capture the spatio-temporal correlation.
to enhance the feature extraction ability, we incorporate the proposed pst convolution into a spatio-temporally hierarchical network, namely pstnet. moreover, we extend our pst convolution to a transposed version to address point-level prediction tasks. different from the convolutional version, the pst transposed convolution is designed to interpolate temporal dynamics and spatial features. extensive experiments on widely-used 3d action recognition and 4d semantic segmentation datasets demonstrate the effectiveness of the proposed pst convolution and the superiority of pstnet in modeling point cloud sequences. the contributions of this paper are fourfold:

• to the best of our knowledge, we make the first attempt to decompose spatial and temporal information in modeling raw point cloud sequences, and propose a generic point based convolutional operation, named pst convolution, to encode raw point cloud sequences.

• we propose a pst transposed convolution to decode raw point cloud sequences via interpolating the temporal dynamics and spatial features for point-level prediction tasks.

• we construct convolutional neural networks based on the pst convolutions and transposed convolutions, dubbed pstnet, to tackle sequence-level classification and point-level prediction tasks. to the best of our knowledge, our pstnet is the first deep neural network to model raw point cloud sequences in a both spatially and temporally hierarchical manner.

• extensive experiments on four datasets indicate that our method improves the accuracy of 3d action recognition and 4d semantic segmentation.

related work

learning representations on grid based videos. impressive progress has been made on generating compact and discriminative representations for rgb/rgbd videos due to the success of deep neural networks. for example, two-stream convolutional neural networks (simonyan & zisserman, 2014; wang et al., 2016) utilize a spatial stream and an optical flow stream for video modeling.
to summarize the temporal dependencies of videos, recurrent neural networks (ng et al., 2015; fan et al., 2018) and pooling techniques (fan et al., 2017) are employed. in addition, by stacking multiple 2d frames into a 3d tensor, 3d convolutional neural networks (tran et al., 2015; carreira & zisserman, 2017; tran et al., 2018; hara et al., 2018) are widely used to learn spatio-temporal representations for videos and achieve promising performance. besides, interpretable video or action reasoning methods (zhuo et al., 2019; fan et al., 2021b) have been proposed that explicitly parse changes in videos.

static point cloud processing. static point cloud analysis has been widely investigated for many problems, such as classification, object part segmentation, scene semantic segmentation (qi et al., 2017a;b; li et al., 2018b; wu et al., 2019; thomas et al., 2019; wang et al., 2019), reconstruction (dai et al., 2017; yu et al., 2018) and object detection (chen et al., 2017; qi et al., 2019). most recent works aim to directly manipulate point sets without transforming coordinates into regular voxel grids. since a point cloud is essentially a set of unordered points and is invariant to permutations of its points, static point cloud processing methods mainly focus on designing effective point based spatial correlation operations that do not rely on point orderings.

dynamic point cloud modeling. compared with static point cloud processing, dynamic point cloud modeling is a fairly new task. fast and furious (faf) (luo et al., 2018) converts a point cloud frame into a bird's-eye-view voxel grid and then extracts features via 3d convolutions. minkowskinet (choy et al., 2019) converts a point cloud sequence into a 4d occupancy grid and then applies 4d spatio-temporal convnets. pointrnn (fan & yang, 2019) leverages point based recurrent neural networks for raw point cloud sequence forecasting.
meteornet (liu et al., 2019e) extends 3d points to 4d points and then appends a temporal dimension to pointnet++ to process these 4d points. 3dv (wang et al., 2020) first integrates 3d motion information into a regular compact voxel set and then applies pointnet++ to extract representations from the set for 3d action recognition. p4transformer (fan et al., 2021a) employs a transformer to avoid point tracking for raw point cloud sequence modeling. niemeyer et al. (2019) learned a temporal and spatial vector field for 4d reconstruction. prantl et al. (2020) learned stable and temporally coherent feature spaces for point based super-resolution. caspr (rempe et al., 2020) learns to encode spatio-temporal changes in object shape from point clouds for reconstruction and camera pose estimation. in this paper, we propose a point based convolution to model raw point cloud sequences in a spatio-temporally hierarchical manner.

proposed point spatio-temporal convolutional network

in this section, we first briefly review the grid based 3d convolution operation, as our point spatio-temporal (pst) convolution is motivated by it. then, we introduce in detail how the proposed pst convolution extracts features from dynamic point clouds. in order to address dense point prediction tasks, e.g., semantic segmentation, we develop a pst transposed convolution. finally, we incorporate our operations into deep hierarchical networks to address different dynamic point cloud tasks.

motivation from grid based 3d convolution

the power of convolutional neural networks (cnns) comes from local structure modeling by convolutions and global representation learning via hierarchical architectures.
given an input feature map f ∈ ℝ^{c×l×h×w}, where c, l, h and w denote the input feature dimension, length, height and width, 3d convolution is designed to capture the spatio-temporal local structure of f, written as:

f'^{(x,y)}_t = \sum_k \sum_i \sum_j w^{(i,j)}_k \cdot f^{(x+i,y+j)}_{t+k},    (1)

where w ∈ ℝ^{c'×c×l×h×w} is the convolution kernel, (l, h, w) is the kernel size, c' represents the output feature dimension and · is matrix multiplication. w^{(i,j)}_k ∈ ℝ^{c'×c} denotes the weight at kernel position (k, i, j) and f^{(x+i,y+j)}_{t+k} ∈ ℝ^{c×1} denotes the feature of the pixel at input position (t+k, x+i, y+j). usually, cnns employ small kernel sizes, e.g., (3, 3, 3), which are much smaller than the input size, to effectively model the local structure. to construct hierarchical cnns, a stride (> 1) is used to subsample input feature maps during the convolution operation. in this fashion, subsequent convolutional layers have relatively larger receptive fields.

point spatio-temporal (pst) convolution

let p_t ∈ ℝ^{3×n} and f_t ∈ ℝ^{c×n} denote the point coordinates and features of the t-th frame in a point cloud sequence, where n and c denote the number of points and feature channels, respectively. given a point cloud sequence ([p_1; f_1], [p_2; f_2], ..., [p_l; f_l]), the proposed pst convolution encodes the sequence to ([p'_1; f'_1], ..., [p'_{l'}; f'_{l'}]), where l and l' indicate the numbers of input and output frames, and p' and f' represent the encoded coordinates and features.

3.2.1 decomposing space and time in point cloud sequence modeling

because point clouds are irregular and unordered, grid based 3d convolution cannot be directly applied to point cloud sequences. as point cloud sequences are spatially irregular and unordered but temporally ordered, this motivates us to decouple these two dimensions in order to reduce the impact of the spatial irregularity of points on temporal modeling.
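as a concrete reference point, the grid based 3d convolution reviewed above can be sketched in a few lines of numpy (a naive stride-1, no-padding implementation; the function name `conv3d_naive` is ours, not from the paper):

```python
import numpy as np

def conv3d_naive(f, w):
    """Naive grid based 3D convolution: for every output position (t, x, y),
    sum W_k^{(i,j)} . f_{t+k}^{(x+i, y+j)} over the kernel offsets (k, i, j).
    f: (c, L, H, W) input feature map; w: (c_out, c, l, h, wk) kernel.
    Returns (c_out, L-l+1, H-h+1, W-wk+1), i.e. stride 1, no padding."""
    c_out, c, l, h, wk = w.shape
    _, L, H, W = f.shape
    out = np.zeros((c_out, L - l + 1, H - h + 1, W - wk + 1))
    for t in range(out.shape[1]):
        for x in range(out.shape[2]):
            for y in range(out.shape[3]):
                patch = f[:, t:t + l, x:x + h, y:y + wk]   # (c, l, h, wk)
                out[:, t, x, y] = np.einsum('ocijk,cijk->o', w, patch)
    return out

# with an all-ones input and kernel, every output equals c * l * h * wk = 54
f = np.ones((2, 5, 6, 6))
w = np.ones((4, 2, 3, 3, 3))
out = conv3d_naive(f, w)
```

with stride 1 and no padding, each dimension shrinks by the kernel extent minus one; real cnns use strides greater than 1 here to build the hierarchy described above.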
moreover, the scales of spatial displacements and temporal differences in point cloud sequences may not be compatible, and treating them equally is not conducive to network optimization. by decoupling the spatio-temporal information in point cloud sequences, we not only make the spatio-temporal modeling easier but also significantly improve the ability to capture the temporal information. therefore, our pst convolution is formulated as:

f'^{(x,y,z)}_t = \sum_k \sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} w^{(\delta_x,\delta_y,\delta_z)}_k \cdot f^{(x+\delta_x,y+\delta_y,z+\delta_z)}_{t+k} = \sum_k \sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} t^{(\delta_x,\delta_y,\delta_z)}_k \cdot \big( s^{(\delta_x,\delta_y,\delta_z)}_k \cdot f^{(x+\delta_x,y+\delta_y,z+\delta_z)}_{t+k} \big),    (2)

where (x, y, z) ∈ p_t, (δ_x, δ_y, δ_z) represents a displacement and r is the spatial search radius. the convolution kernel w ∈ ℝ^{c'×c×l}, defined for each displacement within radius r, is decomposed into a spatial convolution kernel s ∈ ℝ^{c_m×c×l} and a temporal convolution kernel t ∈ ℝ^{c'×c_m×l}, where c_m is the dimension of the intermediate feature. moreover, because space and time are orthogonal and independent of each other, we further decompose spatial and temporal modeling as:

f'^{(x,y,z)}_t = \sum_k \sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} t_k \cdot s^{(\delta_x,\delta_y,\delta_z)} \cdot f^{(x+\delta_x,y+\delta_y,z+\delta_z)}_{t+k},    (3)

where s ∈ ℝ^{c_m×c} (one per displacement) and t ∈ ℝ^{c'×c_m×l}. the above decomposition can also be expressed as applying the temporal convolution first and then the spatial convolution to input features. however, doing so requires point tracking to capture point motion. because it is difficult to obtain accurate point trajectories, and tracking points usually relies on point colors and may fail on colorless point clouds, we opt to model the spatial structure of irregular points first and then capture the temporal information from the spatial regions. in this fashion, eq. (3) can be rewritten as:

m^{(x,y,z)}_t = \sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} s^{(\delta_x,\delta_y,\delta_z)} \cdot f^{(x+\delta_x,y+\delta_y,z+\delta_z)}_t, \quad f'^{(x,y,z)}_t = \sum_k t_k \cdot m^{(x,y,z)}_{t+k},    (4)

the spatial convolution captures spatial local structures of point clouds in 3d space, while the temporal convolution aims to describe the temporal local dynamics of point cloud sequences along the time dimension.
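the spatial-then-temporal decomposition above can be sketched as a toy numpy implementation (our own simplification, not the paper's code: `spatial_kernel` realizes the lightweight linear displacement-dependent kernel the paper specifies later, and `pst_conv` applies the spatial aggregation followed by the temporal convolution; all names are ours):

```python
import numpy as np

def spatial_kernel(delta, theta_d, theta_s):
    """Displacement-dependent kernel f(delta; theta) = (theta_d . delta^T . 1)
    element-wise-times theta_s, giving a unique (cm, c) matrix per displacement.
    theta_d: (cm, 3) displacement transform; theta_s: (cm, c) sharing kernel."""
    col = theta_d @ np.asarray(delta, dtype=float).reshape(3, 1)   # (cm, 1)
    return (col @ np.ones((1, theta_s.shape[1]))) * theta_s        # (cm, c)

def pst_conv(points, feats, anchors, theta_d, theta_s, T, r):
    """Decoupled PST convolution sketch: (1) spatial step aggregates
    kernel(delta) . f over neighbors within radius r of each anchor (M_t);
    (2) temporal step combines M over l consecutive frames with kernel T.
    points: (L, N, 3); feats: (L, N, C); anchors: (A, 3);
    theta_d: (Cm, 3); theta_s: (Cm, C); T: (l, C_out, Cm).
    Returns (L - l + 1, A, C_out)."""
    L, l = points.shape[0], T.shape[0]
    M = np.zeros((L, anchors.shape[0], theta_d.shape[0]))
    for t in range(L):
        for a, p in enumerate(anchors):
            for q, ft in zip(points[t], feats[t]):
                delta = q - p
                if np.linalg.norm(delta) <= r:          # spatial aggregation
                    M[t, a] += spatial_kernel(delta, theta_d, theta_s) @ ft
    out = np.zeros((L - l + 1, anchors.shape[0], T.shape[1]))
    for t in range(out.shape[0]):
        for k in range(l):                              # temporal convolution
            out[t] += M[t + k] @ T[k].T
    return out

# toy run: 5 frames of 10 points with 4-dim features, 2 anchor points
rng = np.random.default_rng(0)
points = rng.normal(size=(5, 10, 3))
feats = rng.normal(size=(5, 10, 4))
out = pst_conv(points, feats, points[0, :2],
               rng.normal(size=(6, 3)), rng.normal(size=(6, 4)),
               rng.normal(size=(3, 8, 6)), r=1.0)
```

the spatial step produces one intermediate feature m per anchor and frame; the temporal step then mixes l consecutive frames, which is what makes the operation spatio-temporally hierarchical when stacked with strides.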
in order to capture the distributions of neighboring points, one would need to learn a convolution kernel s for every displacement. however, this is impossible because point displacements are not discrete. to address this problem, we convert the kernel to a function of displacements, defined as:

m^{(x,y,z)}_t = \sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} f\big((\delta_x,\delta_y,\delta_z); \theta\big) \cdot f^{(x+\delta_x,y+\delta_y,z+\delta_z)}_t,    (5)

where f : ℝ^{1×3} → ℝ^{c_m×c} is a function of (δ_x, δ_y, δ_z), parametrized by θ, that generates a different ℝ^{c_m×c} matrix for each displacement. there can be many ways to implement the f function. in principle, the f function should be both computation-efficient and memory-efficient so that pst convolution is able to encode long sequences. existing static point cloud convolution methods (li et al., 2018b; wu et al., 2019; wang et al., 2019) usually contain multilayer perceptrons (mlps), which are computation and memory consuming. in this paper, we design a lightweight implementation of f, which contains only a few parameters. specifically, we decompose the f function as f((δ_x,δ_y,δ_z); θ) = (θ_d · (δ_x,δ_y,δ_z)^T · 1) ⊙ θ_s, where θ = [θ_d, θ_s], θ_d ∈ ℝ^{c_m×3} is a displacement transform kernel, θ_s ∈ ℝ^{c_m×c} is a sharing kernel, 1 = (1, ..., 1) ∈ ℝ^{1×c} is for broadcasting, and ⊙ is the element-wise product. the sharing kernel θ_s increases the point feature dimension to improve the feature representation ability. the displacement kernel θ_d captures the spatial local structure based on displacements. in this fashion, f generates a unique spatial kernel for each displacement, so that the lightweight spatial convolution is able to increase the feature dimension while capturing spatial local structure like conventional convolutions.

figure 2: illustration of the proposed point spatio-temporal (pst) convolution. the input contains l = 5 frames, with n = 8 points per frame.
(a) based on the temporal kernel size l = 3, temporal stride s_t = 2 and temporal padding p = 1, the 1st, 3rd and 5th frames are selected as temporal anchor frames. according to a spatial subsampling rate s_s = 4, 2 spatial anchor points are sampled by fps in each anchor frame. the sampled anchor points are then transferred to the ⌊l/2⌋ = 1 nearest neighboring frames. a point tube is constructed with a spatial search radius r around the anchor points. (b) the spatial convolution encodes the local structure around each anchor point. (c) the temporal convolution encodes the l spatial features into a spatio-temporal feature. the original l × n = 5 × 8 sequence is encoded as an l′ × n′ = 3 × 2 sequence.

3.2.2 point tube

grid based 3d convolutions can be easily performed on regular conventional videos by sliding along the length, height and width. because point cloud sequences are irregular and unordered in 3d space and points emerge inconsistently across different frames, it is challenging to perform convolution on them. to address this problem, we introduce a point tube to preserve the spatio-temporal local structure. in contrast to pixel cubes in 3d convolution, in which pixels are distributed regularly, point tubes are dynamically generated based on the input sequences, so that dense areas have more tubes than sparse ones. specifically, the point tube is constructed as follows:

temporal anchor frame selection. for an entire point cloud sequence, we need to select anchor frames to generate our tubes. temporal anchor frames are automatically selected based on the temporal kernel size (l), temporal stride (s_t) and temporal padding (p), where l is set to an odd number so that an anchor frame is located in the middle of a point tube. moreover, we set ⌊l/2⌋ ≥ p to avoid selecting a padding frame as an anchor frame.

spatial anchor point sampling.
once a temporal anchor frame is selected, we need to choose spatial anchor points that can represent the distribution of all the points in the frame. given a spatial subsampling rate s_s, this operation subsamples n points to n′ = ⌊n/s_s⌋ points. we employ farthest point sampling (fps) (qi et al., 2017b) to sample points in each anchor frame. point tubes are generated according to the sampled anchor points.

transferring spatial anchor points. 3d convolutions can effectively capture local changes within a cube of kernel size (l, h, w) rather than tracking a specific pixel. following this idea, we propagate the positions of sampled anchor points to the neighboring frames without tracking, and they are regarded as the anchor points in those frames. specifically, each anchor point is transferred to the ⌊l/2⌋ nearest frames. the original and transferred anchor points form the central axis of a tube.

spatial neighbor. this step finds the spatial neighbors of every anchor point in each frame for performing the spatial convolution. as indicated in eq. (5), a radius neighborhood r is used to search neighbors within the tube slice, where the local structure of points is depicted. note that padding is usually used in grid based convolution to align feature maps; however, our point based spatial convolution is not conducted on grids, and thus spatial padding is not employed.

by performing pst convolution on point tubes, our network is able to capture the dynamic changes in local areas. the temporal kernel size l and spatial search radius r allow our pst convolution to capture temporal and spatial local structure, respectively. frame subsampling (according to s_t) and point subsampling (according to s_s) make our pstnet both temporally and spatially hierarchical. global movement can be summarized by merging the information from these tubes in a spatio-temporally hierarchical manner. an illustration of pst convolution is shown in fig.
2.

point spatio-temporal transposed convolution

after pst convolution, the original point cloud sequence is both spatially and temporally subsampled. however, for point-level prediction tasks, we need to provide point features for all the original points. thus, we develop a pst transposed convolution. suppose ([p′_1; f′_1], ..., [p′_{l′}; f′_{l′}]) is the encoded sequence of the original one ([p_1; f_1], [p_2; f_2], ..., [p_l; f_l]). pst transposed convolution propagates the features (f′_1, ..., f′_{l′}) to the original point coordinates (p_1, p_2, ..., p_l), thus outputting new features (f′′_1, ..., f′′_l), where f′′_t ∈ ℝ^{c′′×n} and c′′ denotes the new feature dimension. to this end, pst transposed convolution first recovers the temporal length by a temporal transposed convolution:

t′ · f′^{(x,y,z)}_t = [m′^{(x,y,z)}_1, ..., m′^{(x,y,z)}_l],    (6)

where t′ ∈ ℝ^{l×c′_m×c′} is the temporal transposed convolution kernel, and then increases the number of points by assigning temporal features to the original points. inspired by (qi et al., 2017b), the interpolated features are weighted by the inverse distance between an original point and its neighboring anchor points:

f′′^{(x,y,z)}_t = s′ · \frac{\sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} w(\delta_x,\delta_y,\delta_z)\, m′^{(x+\delta_x,y+\delta_y,z+\delta_z)}_t}{\sum_{\|(\delta_x,\delta_y,\delta_z)\| \le r} w(\delta_x,\delta_y,\delta_z)}, \quad w(\delta_x,\delta_y,\delta_z) = \frac{1}{\|(\delta_x,\delta_y,\delta_z)\|_2},    (7)

where s′ ∈ ℝ^{c′′×c′_m} is a sharing kernel to enhance the interpolated features and change the feature dimension from c′_m to c′′. an illustration of pst transposed convolution is shown in appendix a.

pstnet architectures

pstnet for 3d action recognition. we employ 6 pst convolution layers and a fully-connected (fc) layer for 3d action recognition. in the 1st, 2nd, 4th and 6th layers, the spatial subsampling rate is set to 2 to halve the spatial resolution.
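returning to the transposed convolution above, its inverse-distance interpolation step can be sketched as follows (a minimal numpy sketch; the radius restriction and the sharing kernel s′ are omitted for brevity, and the function name is ours):

```python
import numpy as np

def interpolate_features(orig_points, anchor_points, anchor_feats):
    """Inverse-distance feature propagation: each original point receives a
    weighted average of anchor features, with weights w(delta) = 1/||delta||_2.
    orig_points: (P, 3); anchor_points: (A, 3); anchor_feats: (A, C)."""
    out = np.zeros((orig_points.shape[0], anchor_feats.shape[1]))
    for i, p in enumerate(orig_points):
        d = np.linalg.norm(anchor_points - p, axis=1)
        w = 1.0 / np.maximum(d, 1e-8)        # guard against zero distance
        out[i] = (w[:, None] * anchor_feats).sum(axis=0) / w.sum()
    return out

# a point midway between two anchors receives the mean of their features
anchors = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
feats = np.array([[0.0], [2.0]])
mid = interpolate_features(np.array([[1.0, 0.0, 0.0]]), anchors, feats)
```

because the weights grow without bound as an original point approaches an anchor, a point lying exactly on an anchor effectively inherits that anchor's feature, which is the intended limiting behavior of inverse-distance weighting.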
because point clouds are irregular and unordered, spatial receptive fields cannot be relatively enlarged by spatial subsampling. to address this problem, we progressively increase the spatial search radius. in the 2nd and 4th layers, the temporal stride is set to 2 to halve the temporal resolution. in the 2nd-4th layers, the temporal kernel size is set to 3 to capture temporal correlation. temporal paddings are added when necessary. after the convolution layers, average and max pooling are used for spatial pooling and temporal pooling, respectively. finally, the fc layer maps the global feature to action predictions.

pstnet for 4d semantic segmentation. we use 4 pst convolution layers and 4 pst transposed convolution layers for 4d semantic segmentation. the spatial subsampling rate is set to 4 to reduce the spatial resolution in the 1st-3rd pst convolutions and to 2 in the fourth pst convolution. similar to 3d action recognition, the spatial search radius progressively increases to grow spatial receptive fields along the network layers. in the 3rd pst convolution and the 2nd transposed convolution, the temporal kernel size is set to 3 to reduce and increase the temporal dimension, respectively. skip connections are added between the corresponding convolution layers and transposed convolution layers. after the last pst transposed convolution layer, a 1d convolution layer is appended for semantic predictions. in addition, batch normalization and relu activation are inserted between pst convolution and transposed convolution layers. our pstnet architectures are illustrated in appendix b.

experiments

3d action recognition

to show the effectiveness in sequence-level classification, we apply pstnet to 3d action recognition. following (liu et al., 2019e; wang et al., 2020), we sample 2,048 points for each frame. point cloud sequences are split into multiple clips (with a fixed number of frames) as inputs. for training, sequence-level labels are used as clip-level labels. for evaluation, the mean of the clip-level predicted probabilities is used as the sequence-level prediction. point colors are not used.

table 1: action recognition accuracy (%) on the msr-action3d dataset. compared methods (input): vieira et al. (2012) (depth), kläser et al. (2008) (depth), actionlet (wang et al., 2012) (skeleton), pointnet++ (qi et al., 2017b) (point), meteornet (liu et al., 2019e) (point), pstnet (ours) (point). [per-method accuracy and frame counts not recovered]

figure 3: influence of temporal kernel size and (initial) spatial search radius on msr-action3d with 24 frames.

figure 4: visualization of pst convolution's output. top: input point cloud sequences, where color encodes depth. bottom: output of pst convolution, where brighter color indicates higher activation. interestingly, pst convolution outputs high activation for salient motion. best viewed in color.

4.1.1 msr-action3d

the msr-action3d (li et al., 2010) dataset consists of 567 kinect depth videos, with 20 action categories and 23k frames in total. we use the same training/test split as previous works (wang et al., 2012; liu et al., 2019e). we run each experiment 10 times and report the mean.

comparison with the state-of-the-art. we compare our pstnet with skeleton-based, depth-based and point-based 3d action recognition methods on this dataset. as shown in table 1, the proposed pstnet significantly outperforms all the state-of-the-art methods, demonstrating the superiority of our pst convolution for feature extraction. moreover, when encoding long point sequences, our spatio-temporally hierarchical pstnet is more effective than the spatially hierarchical meteornet. specifically, from 16 frames to 24 frames, meteornet only achieves a slight improvement of 0.29%, while our method increases the accuracy by 1.30%.

what does pst convolution learn? to investigate what pst convolution learns, we visualize the output of the middle layer (i.e., the 3rd layer) in fig. 4.
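the clip-to-sequence evaluation protocol described earlier (average the clip-level predicted probabilities, then take the argmax as the sequence-level prediction) can be sketched as (our own illustration, not the paper's code):

```python
import numpy as np

def sequence_prediction(clip_logits):
    """Sequence-level prediction from clip-level outputs: softmax each clip's
    logits, average the probabilities over clips, then take the argmax.
    clip_logits: (num_clips, num_classes)."""
    e = np.exp(clip_logits - clip_logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)      # per-clip probabilities
    return int(probs.mean(axis=0).argmax())       # average, then argmax

# two clips over three classes: clip 1 strongly prefers class 2,
# clip 2 weakly prefers class 0, so the averaged prediction is class 2
logits = np.array([[0.0, 0.0, 3.0], [1.0, 0.0, 0.5]])
pred = sequence_prediction(logits)
```

averaging probabilities (rather than logits or hard votes) lets confident clips dominate uncertain ones, which is why a single decisive clip can carry the sequence-level decision.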
as relu is employed as the activation function, larger values indicate higher activation. as expected, pstnet outputs higher activation on moving areas, demonstrating that our pstnet captures the most informative clues for action reasoning.

table 2: action recognition accuracy (%) on the ntu rgb+d 60 and ntu rgb+d 120 datasets. compared methods (input): skelemotion (caetano et al., 2019), gca-lstm (liu et al., 2017), fsnet (liu et al., 2019b), two stream attention lstm (liu et al., 2018), body pose evolution map (liu & yuan, 2018), agc-lstm (si et al., 2019), as-gcn (li et al., 2019), va-fusion (zhang et al., 2019), 2s-agcn (shi et al., 2019b) and dgnn (shi et al., 2019a), all skeleton; hon4d (oreifej & liu, 2013), snv (yang & tian, 2014), hog2 (ohn-bar & trivedi, 2013), li et al. (2018a), wang et al. (2018), mvdi (xiao et al., 2019) and the ntu rgb+d 120 baseline (liu et al., 2019a), all depth; pointnet++ (appearance) (qi et al., 2017b), point; 3dv (motion) (wang et al., 2020), voxel; 3dv-pointnet++ (wang et al., 2020), voxel + point; pstnet (ours), point. columns: ntu rgb+d 60 cross-view and cross-subject, ntu rgb+d 120 cross-setup and cross-subject. [accuracy values not recovered]

the ntu rgb+d 60 (shahroudy et al., 2016) is the second largest dataset for 3d action recognition. it consists of 56k videos, with 60 action categories and 4m frames in total. the videos are captured using kinect v2, with 3 cameras and 40 subjects (performers). the dataset defines two types of evaluation, i.e., cross-subject and cross-view. the cross-subject evaluation splits the 40 performers into training and test groups; each group consists of 20 performers. the cross-view evaluation uses all the samples from camera 1 for testing and samples from cameras 2 and 3 for training. the ntu rgb+d 120 (liu et al., 2019a) dataset, the largest dataset for 3d action recognition, is an extension of ntu rgb+d 60. it consists of 114k videos, with 120 action categories and 8m frames in total.
the videos are captured with 106 performers and 32 collection setups (locations and backgrounds). besides the cross-subject evaluation, the dataset defines a new evaluation setting, i.e., cross-setup, where 16 setups are used for training and the others are used for testing.

comparison with state-of-the-art methods. as indicated in table 2, pstnet outperforms all the other approaches in all evaluation settings. particularly, as indicated by the cross-setup evaluation on ntu rgb+d 120, pstnet outperforms the second best 3dv-pointnet++ (wang et al., 2020) by 4.6%. moreover, compared to 3dv, which extracts motion from voxels, pstnet directly models the dynamic information of raw point cloud sequences and is thus efficient.

computational efficiency. we provide a running time comparison with the second best 3dv-pointnet++ (wang et al., 2020). the average running time per video is shown in fig. 5. experiments are conducted using 1 nvidia rtx 2080ti gpu on ntu rgb+d 60. impressively, compared to 3dv-pointnet++, pstnet reduces running time by about 2s, showing that pstnet is efficient.

4d semantic segmentation

to demonstrate that our pst convolution can be generalized to point-level dense prediction tasks, we apply pstnet to 4d semantic segmentation. following the works (choy et al., 2019; liu et al., 2019e), we conduct experiments on video clips with a length of 3 frames. note that, although semantic segmentation can be achieved from a single frame, exploiting temporal consistency facilitates exploring the structure of scenes, thus improving segmentation accuracy and robustness to noise. the widely-used standard mean intersection over union (miou) is adopted as the evaluation metric.

synthia 4d (choy et al., 2019) uses the synthia dataset (ros et al., 2016) to create 3d video sequences. the synthia 4d dataset includes 6 sequences of driving scenarios. each sequence consists of 4 stereo rgb-d images taken from the top of a moving car. following (liu et al., 2019e), we use the same training/validation/test split, with 19,888/815/1,886 frames, respectively.

table 3: semantic segmentation results on the synthia 4d dataset. compared methods (input): 3d minknet14 (choy et al., 2019) (voxel), 4d minknet14 (choy et al., 2019) (voxel), pointnet++ (qi et al., 2017b) (point), meteornet (liu et al., 2019e) (point), pstnet (l = 1) (point), pstnet (l = 3) (point). columns: #frames, #params (m) and miou (%). [values not recovered]

figure 5: comparison of recognition running time per video on ntu rgb+d 60.

as seen in table 3, pstnet (l = 3) exploits the temporal information and outperforms the state-of-the-art. moreover, our method saves 0.11m parameters, a relative 6%, compared to the second best method meteornet (liu et al., 2019e). we visualize a few segmentation examples in appendix n.

ablation study

clip length. usually, information is not equally distributed along time in sequences. short point cloud clips may miss key frames and thus confuse the models with noise. therefore, as shown in table 1, increasing the clip length (i.e., the number of frames) benefits models for action recognition.

temporal kernel size. the temporal kernel size l controls the temporal dynamics modeling of point cloud sequences. fig. 3(a) shows the accuracy on msr-action3d with different l. (a) when l is set to 1, temporal correlation is not captured. however, pstnet can still observe 24 frames, and the pooling operation allows pstnet to capture the pose information of an entire clip. moreover, some actions (e.g., "golf swing") can be easily recognized from a certain pose, and thus pstnet with l = 1 can still achieve satisfactory accuracy. (b) when l is greater than 1, pstnet models temporal dynamics and therefore improves accuracy on actions that rely on motion or trajectory reasoning (e.g., "draw x", "draw tick" and "draw circle"). (c) when l is greater than 3, the accuracy decreases. this mainly depends on the motion in the sequences.
because most actions in msr-action3d are fast (e.g., "high arm wave"), using a smaller temporal kernel size facilitates capturing fast motion, and long-range temporal dependencies can be captured in high-level layers. since we aim to present generic point based convolution operations, we do not tune the kernel size for each action but use the same size throughout. when pstnet is used for 4d semantic segmentation, we observe that pstnet (l = 3) improves miou by 1.45% compared to pstnet (l = 1), which neglects temporal structure (shown in table 3).

spatial search radius. the spatial search radius r controls the range of the spatial structure to be modeled. as shown in fig. 3(b), using too small an r cannot capture sufficient structure information, while using a large r decreases the discriminativeness of the spatial local structure.

conclusion

in this paper, we propose a point spatio-temporal (pst) convolution to learn informative representations from raw point cloud sequences, and a pst transposed convolution for point-level dense prediction tasks. we demonstrate that, by incorporating the proposed convolutions into deep networks, dubbed pstnets, our method can address various point-based tasks. extensive experiments demonstrate that our pstnet significantly improves 3d action recognition and 4d semantic segmentation performance by effectively modeling point cloud sequences.

acknowledgments

this research is supported by the agency for science, technology and research (a*star) under its ame programmatic funding scheme (#a18a2b0046).

references

paul j. besl and neil d. mckay. a method for registration of 3-d shapes. ieee trans. pattern anal. carlos caetano, jessica sena de souza, françois brémond, jefersson a. dos santos, and william robson schwartz. skelemotion: a new representation of skeleton joint sequences based on motion information for 3d action recognition.
in 16th ieee international conference on advanced video and signal based surveillance, avss, 2019. joão carreira and andrew zisserman. quo vadis, action recognition? a new model and the kinetics dataset. in cvpr, 2017. xiaozhi chen, huimin ma, ji wan, bo li, and tian xia. multi-view 3d object detection network for autonomous driving. in cvpr, 2017. christopher b. choy, junyoung gwak, and silvio savarese. 4d spatio-temporal convnets: minkowski convolutional neural networks. in cvpr, 2019. angela dai, angel x. chang, manolis savva, maciej halber, thomas a. funkhouser, and matthias nießner. scannet: richly-annotated 3d reconstructions of indoor scenes. in cvpr, 2017. hehe fan and yi yang. pointrnn: point recurrent neural network for moving point cloud processing. hehe fan, xiaojun chang, de cheng, yi yang, dong xu, and alexander g. hauptmann. complex event detection by identifying reliable shots from untrimmed videos. in iccv, 2017. hehe fan, zhongwen xu, linchao zhu, chenggang yan, jianjun ge, and yi yang. watching a small portion could be as good as watching all: towards efficient video classification. in ijcai, 2018. hehe fan, yi yang, and mohan s. kankanhalli. point 4d transformer networks for spatio-temporal modeling in point cloud videos. in cvpr, 2021a. hehe fan, tao zhuo, xin yu, yi yang, and mohan s. kankanhalli. understanding atomic hand-object interaction with human intention. ieee transactions on circuits and systems for video technology (tcsvt), 2021b. doi: 10.1109/tcsvt.2021.3058688. kensho hara, hirokatsu kataoka, and yutaka satoh. can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? in cvpr, 2018. alexander kläser, marcin marszalek, and cordelia schmid. a spatio-temporal descriptor based on 3d-gradients. in bmvc, 2008. junnan li, yongkang wong, qi zhao, and mohan s. kankanhalli. unsupervised learning of view-invariant action representations. in neurips, 2018a. maosen li, siheng chen, xu chen, ya zhang, yanfeng wang, and qi tian.
actional-structural graph convolutional networks for skeleton-based action recognition. in cvpr, 2019. wanqing li, zhengyou zhang, and zicheng liu. action recognition based on a bag of 3d points. in cvpr workshops, 2010. yangyan li, rui bu, mingchao sun, wei wu, xinhan di, and baoquan chen. pointcnn: convolution on x-transformed points. in neurips, 2018b. jun liu, gang wang, ping hu, ling-yu duan, and alex c. kot. global context-aware attention lstm networks for 3d action recognition. in cvpr, 2017. jun liu, gang wang, ling-yu duan, kamila abdiyeva, and alex c. kot. skeleton-based human action recognition with global context-aware attention lstm networks. ieee trans. image processing, 27(4):1586–1599, 2018. doi: 10.1109/tip.2017.2785279. jun liu, amir shahroudy, mauricio perez, gang wang, ling-yu duan, and alex c. kot. ntu rgb+d 120: a large-scale benchmark for 3d human activity understanding. ieee trans. pattern anal. mach. intell., 2019a. doi: 10.1109/tpami.2019.2916873. jun liu, amir shahroudy, gang wang, ling-yu duan, and alex c. kot. skeleton-based online action prediction using scale selection network. ieee trans. pattern anal. mach. intell., 2019b. doi: 10.1109/tpami.2019.2898954. lu liu, tianyi zhou, guodong long, jing jiang, lina yao, and chengqi zhang. prototype propagation networks (ppn) for weakly-supervised few-shot learning on category graph. in ijcai, 2019c. lu liu, william hamilton, guodong long, jing jiang, and hugo larochelle. a universal representation transformer layer for few-shot image classification. in iclr, 2021. mengyuan liu and junsong yuan. recognizing human actions as the evolution of pose estimation maps. in cvpr, 2018. xingyu liu, charles r. qi, and leonidas j. guibas. flownet3d: learning scene flow in 3d point clouds. in cvpr, 2019d. xingyu liu, mengyuan yan, and jeannette bohg. meteornet: deep learning on dynamic 3d point cloud sequences. in iccv, 2019e.
yanbin liu, juho lee, minseop park, saehoon kim, eunho yang, sung ju hwang, and yi yang. learning to propagate labels: transductive propagation network for few-shot learning. in iclr, 2019f. wenjie luo, bin yang, and raquel urtasun. fast and furious: real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net. in cvpr, 2018. joe yue-hei ng, matthew j. hausknecht, sudheendra vijayanarasimhan, oriol vinyals, rajat monga, and george toderici. beyond short snippets: deep networks for video classification. in cvpr, 2015. michael niemeyer, lars m. mescheder, michael oechsle, and andreas geiger. occupancy flow: 4d reconstruction by learning particle dynamics. in iccv, 2019. eshed ohn-bar and mohan m. trivedi. joint angles similarities and hog2 for action recognition. in cvpr workshops, 2013. omar oreifej and zicheng liu. hon4d: histogram of oriented 4d normals for activity recognition from depth sequences. in cvpr, 2013. lukas prantl, nuttapong chentanez, stefan jeschke, and nils thuerey. tranquil clouds: neural networks for learning temporally coherent features in point clouds. in iclr, 2020. charles ruizhongtai qi, hao su, kaichun mo, and leonidas j. guibas. pointnet: deep learning on point sets for 3d classification and segmentation. in cvpr, 2017a. charles ruizhongtai qi, li yi, hao su, and leonidas j. guibas. pointnet++: deep hierarchical feature learning on point sets in a metric space. in neurips, 2017b. charles ruizhongtai qi, or litany, kaiming he, and leonidas j. guibas. deep hough voting for 3d object detection in point clouds. in iccv, 2019. davis rempe, tolga birdal, yongheng zhao, zan gojcic, srinath sridhar, and leonidas j. guibas. caspr: learning canonical spatiotemporal point cloud representations. in neurips, 2020. germán ros, laura sellart, joanna materzynska, david vázquez, and antonio m. lópez. the synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes. in cvpr, 2016.
amir shahroudy, jun liu, tian-tsong ng, and gang wang. ntu rgb+d: a large scale dataset for 3d human activity analysis. in cvpr, 2016. lei shi, yifan zhang, jian cheng, and hanqing lu. skeleton-based action recognition with directed graph neural networks. in cvpr, 2019a. lei shi, yifan zhang, jian cheng, and hanqing lu. two-stream adaptive graph convolutional networks for skeleton-based action recognition. in cvpr, 2019b. chenyang si, wentao chen, wei wang, liang wang, and tieniu tan. an attention enhanced graph convolutional lstm network for skeleton-based action recognition. in cvpr, 2019. karen simonyan and andrew zisserman. two-stream convolutional networks for action recognition in videos. in neurips, 2014. hugues thomas, charles ruizhongtai qi, jean-emmanuel deschaud, beatriz marcotegui, françois goulette, and leonidas j. guibas. kpconv: flexible and deformable convolution for point clouds. in iccv, 2019. du tran, lubomir d. bourdev, rob fergus, lorenzo torresani, and manohar paluri. learning spatiotemporal features with 3d convolutional networks. in iccv, 2015. du tran, heng wang, lorenzo torresani, jamie ray, yann lecun, and manohar paluri. a closer look at spatiotemporal convolutions for action recognition. in cvpr, 2018. antônio wilson vieira, erickson r. nascimento, gabriel l. oliveira, zicheng liu, and mario fernando montenegro campos. stop: space-time occupancy patterns for 3d action recognition from depth map sequences. in progress in pattern recognition, image analysis, computer vision, and applications - 17th iberoamerican congress, ciarp, 2012. jiang wang, zicheng liu, ying wu, and junsong yuan. mining actionlet ensemble for action recognition with depth cameras. in cvpr, 2012. limin wang, yuanjun xiong, zhe wang, yu qiao, dahua lin, xiaoou tang, and luc van gool. temporal segment networks: towards good practices for deep action recognition. in eccv, 2016. pichao wang, wanqing li, zhimin gao, chang tang, and philip o. ogunbona.
depth pooling based large-scale 3-d action recognition with convolutional neural networks. ieee trans. multimedia, 20(5):1051–1061, 2018. doi: 10.1109/tmm.2018.2818329. yancheng wang, yang xiao, fu xiong, wenxiang jiang, zhiguo cao, joey tianyi zhou, and junsong yuan. 3dv: 3d dynamic voxel for action recognition in depth video. in cvpr, 2020. yue wang, yongbin sun, ziwei liu, sanjay e. sarma, michael m. bronstein, and justin m. solomon. dynamic graph cnn for learning on point clouds. acm trans. graph., 38(5):146:1–146:12, 2019. doi: 10.1145/3326362. wenxuan wu, zhongang qi, and fuxin li. pointconv: deep convolutional networks on 3d point clouds. in cvpr, 2019. yang xiao, jun chen, yancheng wang, zhiguo cao, joey tianyi zhou, and xiang bai. action recognition for depth video using multi-view dynamic images. inf. sci., 480:287–304, 2019. doi: 10.1016/j.ins.2018.12.050. xiaodong yang and yingli tian. super normal vector for activity recognition using depth sequences. lequan yu, xianzhi li, chi-wing fu, daniel cohen-or, and pheng-ann heng. pu-net: point cloud upsampling network. in cvpr, 2018. pengfei zhang, cuiling lan, junliang xing, wenjun zeng, jianru xue, and nanning zheng. view adaptive neural networks for high performance skeleton-based human action recognition. ieee trans. pattern anal. mach. intell., 41(8):1963–1978, 2019. doi: 10.1109/tpami.2019.2896631. linchao zhu and yi yang. label independent memory for semi-supervised few-shot video classification. tao zhuo, zhiyong cheng, peng zhang, yongkang wong, and mohan s. kankanhalli. explainable video action reasoning via prior knowledge and state transitions. in acm multimedia, 2019. a pst transposed convolution figure 6: illustration of the proposed pst transposed convolution. the input contains l′ = 3 frames, with n′ = 2 points per frame.
the pst transposed convolution generates new features for the original point cloud sequence, which contains l = 5 frames, with n = 8 points per frame. the temporal kernel size l is 3 and the temporal stride st is 2. (a) temporal transposed convolution. (b) temporal interpolation. (c) spatial interpolation. we illustrate an example of pst transposed convolution in fig. 6. given a convolved sequence ([p′1, p′2, p′3]) and its original coordinate sequence (p1, p2, p3, p4, p5), pst transposed convolution aims to generate new features (f′′1, . . . , f′′5) according to the original point coordinates. first, with the temporal kernel size l = 3, each input point feature is decoded to 3 features by a temporal transposed convolution. second, 5 frames are interpolated in accordance with a temporal stride st = 2, and the input points with the decoded features are assigned to interpolated frames. in this way, the temporal cross-correlation is constructed. third, given the original coordinates of a point cloud frame (e.g., p1) and the corresponding interpolated frame (i.e., [p′1]), we generate features (i.e., f′′1) for all the original point coordinates. b pstnet architectures b.1 pstnet for 3d action recognition figure 7: hierarchical pstnet architecture for 3d action recognition. as shown in fig. 7, the architecture of pstnet for 3d action recognition consists of 6 pst convolutions, i.e., pstconv1, pstconv2a, pstconv2b, pstconv3a, pstconv3b, pstconv4, and a fully-connected (fc) layer. in pstconv1, pstconv2a, pstconv3a and pstconv4, the spatial subsampling rate is set to 2 to halve the spatial resolution. in pstconv2a and pstconv3a, the temporal stride is set to 2 to halve the temporal resolution. ro denotes the initial spatial search radius.
the temporal padding p is split as [p1, p2], where p1 and p2 denote the left padding and the right padding, respectively. after these convolution layers, average and max poolings are used for spatial pooling and temporal pooling, respectively. the fc layer is then used as a classifier. b.2 pstnet for 4d semantic segmentation figure 8: hierarchical pstnet architecture for 4d semantic segmentation. as shown in fig. 8, we employ 4 pst convolutional layers, i.e., pstconv1, pstconv2, pstconv3, and pstconv4, and 4 pst transposed convolutional layers, i.e., pstconvtrans1, pstconvtrans2, pstconvtrans3, and pstconvtrans4 for 4d semantic segmentation. the spatial subsampling rate is set to 4 to reduce the spatial resolution in each pst convolution layer. following (choy et al., 2019; liu et al., 2019e), we conduct 4d semantic segmentation on clips with a length of 3 frames, i.e., l is fixed to 3. in pstconv3 and pstconvtrans2, the temporal kernel size is set to 3. skip connections are added from input to pstconvtrans1, pstconv1 to pstconvtrans2, pstconv2 to pstconvtrans3, and pstconv3 to pstconvtrans4. after pstconvtrans4, a 1d convolution layer is appended for semantic predictions. c implementation details to build a unified network and train it with mini-batches, the number of spatial neighbors needs to be fixed. we follow existing point-based works (qi et al., 2017b; liu et al., 2019e) to randomly sample a fixed number of neighbors. for 3d action recognition, the number is set to 9. for 4d semantic segmentation, we follow (liu et al., 2019e) to set the number to 32. if the actual number of neighbors of a point is less than the set one (e.g., 9), we randomly repeat some neighbors. we train our models for 35 epochs with the sgd optimizer. the learning rate is set to 0.01 and decays by a factor of 0.1 at the 10th and the 20th epoch.
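the fixed-size neighbor sampling described above (sample a fixed number of neighbors, randomly repeating some when a point has fewer than the set number) can be sketched in a few lines; the function name and interface below are our own illustration, not the released code:

```python
import numpy as np

def sample_neighbors(neighbor_idx, k, rng=None):
    """return exactly k neighbor indices: subsample without replacement
    when enough neighbors exist, otherwise keep all and randomly repeat
    some to pad up to k (the strategy described in appendix c)."""
    rng = np.random.default_rng() if rng is None else rng
    neighbor_idx = np.asarray(neighbor_idx)
    if neighbor_idx.size >= k:
        return rng.choice(neighbor_idx, size=k, replace=False)
    pad = rng.choice(neighbor_idx, size=k - neighbor_idx.size, replace=True)
    return np.concatenate([neighbor_idx, pad])
```

per the text above, k would be 9 for 3d action recognition and 32 for 4d semantic segmentation.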
msr-action3d (li et al., 2010): following (liu et al., 2019e), batch size and frame sampling stride are set to 16 and 1, respectively. we set the initial spatial search radius ro to 0.5. ntu rgb+d 60/120 (shahroudy et al., 2016; liu et al., 2019a): following (wang et al., 2020), batch size is set to 32. we set the clip length, frame sampling stride and the initial spatial search radius ro to 23, 2 and 0.1, respectively. synthia 4d (choy et al., 2019): following (liu et al., 2019e), batch size and frame sampling stride are set to 12 and 1, respectively. we set the spatial search radius ro to 0.9. we have implemented our pstnets using both pytorch and paddlepaddle, which show similar performance. d influence of temporal kernel size on different actions we show the influence of temporal kernel size on different actions in msr-action3d in fig. 9. clip length is 24. when the temporal kernel size l = 1, temporal correlation is not exploited. however, pstnet can leverage pose information for recognition, especially for actions whose poses are unique in the dataset. for example, pstnet with l = 1 can correctly recognize the “jogging” action because the action’s pose is discriminative in the msr-action3d dataset. however, pstnet with motion modeling (l ≥ 3) can distinguish similar actions such as “draw x”, “draw tick” and “draw circle”. the results support our intuition that pstnet effectively captures dynamics in point cloud sequences. (a) draw x (b) draw tick (c) draw circle (d) high arm wave (e) hammer (f) hand catch (g) high throw (h) tennis swing (i) bend (j) forward kick (k) side kick (l) jogging (m) tennis serve (n) two hand wave (o) golf swing (p) pick up & throw (q) horizontal arm wave (r) forward punch (s) hand clap (t) side-boxing figure 9: influence of temporal kernel size on different actions of msr-action3d. e feature visualization (a) meteornet (b) pstnet figure 10: feature visualizations on msr-action3d using t-sne.
each sequence is visualized as a point and sequences belonging to the same action have the same color. pstnet features are semantically separable compared to meteornet’s, suggesting that it learns better representations for point cloud sequences. we qualitatively evaluate pstnet’s ability to encode point cloud sequences by visualizing the learned features. we compare pstnet with the state-of-the-art meteornet (liu et al., 2019e). features are extracted from the layer before the classifier (fc). these features are then projected to a 2-dimensional space using t-sne. as shown in fig. 10, pstnet features are more compact and discriminative than meteornet’s. f synthia 4d semantic segmentation result details we list the segmentation result for each class in table 4. the synthia 4d dataset contains 12 categories. our pstnet achieves the best accuracy on five of them, demonstrating the effectiveness of pstnet. table 4: semantic segmentation result (miou %) details on the synthia 4d dataset. method # frames bldn road sdwlk fence vegittn pole car t. sign pedstrn bicycl lane t. light average pointnet++ (qi et al., 2017b) meteornet (liu et al., 2019e) pstnet (l = 1) pstnet (l = 3) g pst convolution without disentangling space and time to confirm the effectiveness of our disentangling structure for spatio-temporal modeling, we perform 3d action recognition on msr-action3d with a non-decomposing convolution. to combine space and time, 3d displacements and time differences are encoded together to generate the convolution kernel. we evaluate our pstnet with different clip lengths. as shown in fig. 11, the disentangling pst convolution achieves better accuracy than the non-decomposing structure, demonstrating the effectiveness of disentangling space and time for point cloud sequence modeling. figure 11: comparison of pst convolution with (w/) and without (w/o) disentangling structure on msr-action3d.
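the contrast drawn in the ablation above can be made concrete with a minimal numpy sketch: a non-decomposing kernel acts on the joint 4d displacement (δx, δy, δz, δt), whereas the disentangled design applies a spatial kernel to (δx, δy, δz) and a separate temporal kernel indexed by the frame offset. the weight shapes and function names below are our illustrative assumptions, not the released implementation:

```python
import numpy as np

def joint_kernel(disp4, theta):
    """non-decomposing: a single kernel applied to the joint
    spatio-temporal displacement (dx, dy, dz, dt). theta: (c, 4)."""
    return theta @ disp4

def disentangled_kernel(disp3, dt, theta_spatial, theta_temporal):
    """disentangled (pst-style): a spatial kernel on (dx, dy, dz),
    modulated by a temporal kernel indexed by the integer frame offset
    dt. theta_spatial: (c, 3); theta_temporal: (l, c)."""
    return theta_temporal[dt] * (theta_spatial @ disp3)
```

keeping space and time factored in this way is the design that fig. 11 above shows to be the more accurate one on msr-action3d.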
h impact of displacement kernel and sharing kernel in this paper, we design a lightweight implementation for the f function in eq. (5), i.e., f((δx, δy, δz); θ) = θd · (δx, δy, δz)t · 1 ⊙ θs, where θd is a displacement transform kernel and θs is a sharing kernel. in this section, we investigate the influence of these two kernels. we conduct 3d action recognition on msr-action3d. the experimental results are shown in table 5. table 5: impact of displacement kernel and sharing kernel on 3d action recognition accuracy (%). the msr-action3d dataset is used. θs 16-frame 24-frame without θs, the spatial convolution at (x, y, z) degenerates into Σ_{‖(δx,δy,δz)‖≤r} θd · (δx, δy, δz)t. in this way, only point positions are used while point features are ignored; therefore, the accuracy decreases dramatically. without θd, the spatial convolution at (x, y, z) becomes Σ_{‖(δx,δy,δz)‖≤r} θs · f t_(x+δx,y+δy,z+δz). in this fashion, the points in a neighborhood are treated equally: point features are leveraged but their positions are ignored. because the spatial structure is not well captured, the accuracy decreases compared to the case where both θd and θs are used. i computational efficiency and memory usage in this section, we evaluate the computational efficiency and memory usage, i.e., the running time and number of parameters, of our method. we conduct 3d action recognition with a clip length of 16 on msr-action3d using 1 nvidia quadro rtx 6000 gpu. as shown in table 6, the proposed pstnet is 41.5% faster than meteornet (liu et al., 2019e). this is because meteornet is based on mlps, which are less efficient than convolutions. table 6: comparison of parameters, running time and accuracy (%) on 3d action recognition. the msr-action3d dataset is used. clip length is 16.
method # parameters (m) running time per clip (ms) accuracy (%) meteornet (liu et al., 2019e) pstnet (lightweight) pstnet (with pointconv) second, we study the impact of the number of pstnet layers on running time and parameters. as shown in fig. 12, when pstnet becomes deep, the number of parameters grows significantly. this is because, as in most deep neural networks, the point feature dimension increases exponentially in pstnet to improve the feature representation ability, which requires more parameters. however, the running time does not increase significantly. this is because our pstnet is spatio-temporally hierarchical: points are exponentially reduced along both the spatial and temporal dimensions, thus saving running time. figure 12: impact of the number of pstnet layers on running time and parameters. figure 13: running time proportion of main operations in pstnet. third, we investigate the running time of the main operations in pstnet, including convolution, farthest point sampling and spatial neighbor search. we show the running time proportion of these operations in fig. 13. finally, because the f function in eq. (5) can be implemented in different ways, we replace our lightweight spatial convolution with pointconv (wu et al., 2019) to investigate the influence of the spatial modeling in our pstnet. as shown in table 6, using pointconv only improves accuracy slightly compared to our lightweight spatial convolutions, but significantly increases parameters and running time. this is because pointconv utilizes mlp based operations to process points and their features, which are less efficient than our lightweight convolutions. j irregular frame sampling our pstnet can also process irregularly sampled point cloud sequences by frame interpolation. in this section, we randomly remove some frames in sequences. specifically, we conduct 3d action recognition on msr-action3d with 24 frames. we randomly remove 8 frames from each clip.
then, we use replication and iterative closest point (icp) (besl & mckay, 1992) to interpolate missing frames, respectively. we compare our pstnet with meteornet (liu et al., 2019e), which explicitly encodes time and does not need interpolation. table 7: 3d action recognition on msr-action3d with irregularly sampled frames. clip length is originally 24 and then 8 frames are randomly removed from each clip. method meteornet (liu et al., 2019e) pstnet (replication interpolation) pstnet (icp interpolation) accuracy (%) as shown in table 7, our pstnet still achieves the best accuracy, indicating that pstnet is able to model point cloud sequences with irregular frame sampling. k scene flow estimation to evaluate our method on lidar data where points in a frame can be highly irregular, we apply our method to scene flow estimation. we follow the setting proposed by meteornet (liu et al., 2019e), i.e., given a point cloud sequence, estimating a flow vector for every point in the last frame. however, because our point tube is constructed according to the middle frame, which is not applicable to last-frame scene flow estimation, we adopt temporal anchor frame selection and spatial anchor point transferring. specifically, we select the last frame in each tube as the anchor frame. then, after spatial sampling, each anchor point is transferred to its previous nearest l frames. following meteornet, we first train our model on the flyingthings3d dataset according to the synthetic method in (liu et al., 2019e), and then fine-tune the model on the kitti scene flow dataset (liu et al., 2019e). point tracking is not used. table 8: scene flow estimation accuracy on the kitti scene flow dataset. method input # frames end-point-error flownet3d (liu et al., 2019d) meteornet (liu et al., 2019e) pstnet (ours) points points points as shown in table 8, our pstnet achieves promising accuracy on scene flow estimation.
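the replication-based interpolation of missing frames used in the irregular-sampling experiment above can be sketched as follows; this is a minimal version under our own interface, and the icp variant is not shown:

```python
def replicate_missing_frames(frames, present, total):
    """fill a clip of `total` frames by copying each missing frame from
    its temporally nearest present frame (replication interpolation).
    frames: dict {frame index: list of (x, y, z) points};
    present: indices of the frames that were not removed."""
    return [frames[min(present, key=lambda j: abs(j - i))] for i in range(total)]
```

for example, if frames 1 and 2 of a 4-frame clip were removed, frame 1 would be copied from frame 0 and frame 2 from frame 3.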
l motion recognition on the synthetic moving mnist point cloud dataset in the real world, it is impossible to obtain point ids. to evaluate our pstnet in the scenario where points are correlated and can be accurately tracked across frames, we conduct motion recognition on a synthetic moving mnist point cloud dataset. to synthesize moving mnist digit sequences, we use a generation process similar to that described in (fan & yang, 2019). each synthetic sequence consists of 16 consecutive point cloud frames. each frame contains one handwritten digit moving and bouncing inside a 64 × 64 area. the digits are chosen randomly from the mnist dataset. we use the same training/test split as the original mnist dataset. for each digit, we sample 128 points. point order is maintained across frames so that tracking can be employed. we design 144 motions, including • two kinds of digit distortion: scaling digits horizontally or vertically, i.e., (|0.4 − 0.05t| + 0.6) × (x − cx) + cx or (|0.4 − 0.05t| + 0.6) × (y − cy) + cy, where (cx, cy) is the digit center and t ∈ [1, 16]. a few motion examples are shown in fig. 14. because motion and appearance are independent of each other, it is challenging to recognize motion while avoiding interference from digit appearance. figure 14: motion examples in the synthetic moving mnist point cloud dataset. to exploit point correlation, anchor points and their neighbors are selected in the first frame, which are then propagated to other frames. in this fashion, anchors and their neighbors are tracked across frames. table 9: motion recognition accuracy on the moving mnist point cloud dataset. method pstnet (original) pstnet (with tracking) accuracy (%) as shown in table 9, both the original pstnet and the tracking based pstnet achieve promising accuracy on the moving mnist point cloud dataset. 
our original pstnet achieves similar accuracy to the tracking based pstnet in this simulated case, demonstrating that our pstnet does not heavily rely on point ids or tracking. m visualization of the output of each pst convolution layer in pstnet we visualize the output of each layer in the pstnet architecture trained on msr-action3d in fig. 15. because we use relu as the activation function, all outputs are greater than zero and large outputs indicate high activation. to visualize outputs, we squeeze output vectors to scalars via the l1 norm. we observe that: • for pstconv1, because the temporal kernel size l = 1, it does not capture the temporal correlation. in this case, pstconv1 focuses on the appearance and therefore outputs high activation on performer contours. • for pstconv2a to pstconv4, when modeling the spatio-temporal structure of point cloud sequences, pst convolutions mainly focus on moving areas so as to achieve informative representations of actions. the visualization results support our intuition that the proposed pst convolution effectively captures the dynamics of point cloud sequences. n visualization of 4d semantic segmentation we visualize segmentation results from the synthia 4d dataset in fig. 16 and fig. 17. our pstnet can accurately segment most objects. o limitation like most deep neural networks, the proposed pstnet relies on large-scale labeled datasets for training. a potential improvement is to integrate some learning methods, e.g., few-shot learning (liu et al., 2019c;f; zhu & yang, 2020; liu et al., 2021), into point cloud sequence modeling to reduce the reliance on human-annotated data. for the input point cloud sequence, color encodes depth; for the outputs, brighter color indicates higher activation.
figure 15: visualization of the output of each layer in pstnet. with temporal strides, points and frames progressively decrease. | 20 | [
505.3713216,
450.019346692,
515.3339216,
484.905134047
] |
7C9aRX2nBf2.pdf | 2,023 | 1 | sequential latent variable models for few-shot high-dimensional time-series forecasting xiajun jiang∗, ryan missel∗, zhiyuan li & linwei wang golisano college of computing and information sciences rochester institute of technology rochester, ny 14623, usa {xj7056,rxm7244,zl7904,linwei.wang}@rit.edu abstract modern applications increasingly require learning and forecasting latent dynamics from high-dimensional time-series. compared to univariate time-series forecasting, this adds a new challenge of reasoning about the latent dynamics of an unobserved abstract state. sequential latent variable models (slvms) present an attractive solution, although existing works either struggle with long-term forecasting or have difficulty learning across diverse dynamics. in this paper, we first present a conceptual framework of slvms to unify existing works, contrast their fundamental limitations, and identify an intuitive solution to long-term forecasting for diverse dynamics via meta-learning. we then present a few-shot forecasting framework for high-dimensional time-series: instead of learning a single dynamic function, we leverage data of diverse dynamics and learn to adapt latent dynamic functions to few-shot support series. this is realized via bayesian meta-learning underpinned by: 1) a latent dynamic function conditioned on knowledge derived from few-shot support series, and 2) a meta-model that learns to extract such dynamic-specific knowledge via feed-forward embedding of support set. we compared the presented framework with a comprehensive set of baseline models 1) trained globally on the large meta-training set with diverse dynamics, 2) trained individually on single dynamics with and without fine-tuning to k-shot support series, and 3) extended to few-shot meta-formulations. 
we demonstrated that the presented framework is agnostic to the latent dynamic function of choice and, at meta-test time, is able to forecast for new dynamics given a variable number of support series. introduction in many applications, an ultimate goal is to forecast the future states or trajectories of a dynamic system from its high-dimensional observations such as series of images. compared to the relatively well-studied univariate time-series forecasting (makridakis et al., 2018; oreshkin et al., 2020; salinas et al., 2020), high-dimensional time-series forecasting raises new challenges: it requires the extraction of the dynamics of an abstract latent state that is not directly observed (botev et al., 2021). sequential latent variable models (slvms) provide an attractive solution that, unlike autoregressive models, abstracts a latent dynamic function zi = f(z<i; θz) with state zi and parameter θz, along with zi’s emission to observations xi = g(zi) (chung et al., 2015). this pair of learned models can support long-term forecasting given only initial frames of observations, as well as controlled generation of new dynamics. critical bottlenecks however remain in reaching these goals. the earlier formulation of slvms relies on a natural extension of the static lvms: as illustrated in fig. 1a, the latent state zi is modeled as the latent variable for the generation of xi, and a sequential encoder is used to facilitate the inference of zi from current and past observations x≤i (chung et al., 2015; krishnan et al., 2017). ∗both authors contributed equally to this work. 1source code available at https://github.com/john-x-jiang/meta_ssm. figure 1: sequential latent-variable models for forecasting high-dimensional sequences. recent works have argued to instead model and infer the parameter of the latent dynamic function, often modeled as time-varying linear coefficients θz,i (karl et al., 2017; fraccaro et al., 2017; rangapuram et al., 2018; klushyn et al., 2021).
this results in an lvm formulation as illustrated in fig. 1b1, where the latent variable θz,i is inferred at each i from observations x≤i. while strong at time-series reconstruction and classification, these slvms have a fundamental limitation that makes them less suited for long-term forecasting: the latent dynamic function has a limited ability to forecast without near-term observations to support the inference of zi or θz,i. this limitation in the mainstream slvms raises a natural question: are we able to relax the assumption of a linear dynamic function and directly infer its θz? works adopting this idea have emerged: as illustrated in fig. 1b2, by modeling a single θz – either deterministic (rubanova et al., 2019; botev et al., 2021) or stochastic (yildiz et al., 2020) – f(z<i; θz) can be asked to predict a time sequence using only an inferred initial state. this formulation has shown strong long-term forecasting, although with its own fundamental limitation: it learns a single dynamic function global to all training sequences. this would not only require all training data to share identical latent dynamics, but also make it difficult to forecast test sequences with dynamics different from or unknown to the training. in this paper, we answer this important open question of long-term forecasting for diverse dynamics. we first present a conceptual framework of slvms to unify existing works, and identify an intuitive solution to the underlying critical gap via meta-learning: instead of learning a single dynamic function, we can learn to pull knowledge across datasets of different dynamics and learn to adapt a dynamic function to few-shot high-dimensional time-series. we then present a bayesian meta-learning framework as illustrated in fig. 1c: instead of being a single fixed function as in fig.
1b2, we let the latent dynamic function be conditioned on knowledge derived from few-shot support time-series via a feed-forward set-embedding meta-model; given k-shot time-series of specific dynamics, the model is asked to forecast for query time-series using only the initial frames, meta-learned across dynamics. we develop this framework to be agnostic to the latent dynamic function of choice, and with the flexibility to forecast with a variable size of k. we evaluated the presented framework on benchmark image sequences with mixed physics including bouncing balls (fraccaro et al., 2017), pendulum (botev et al., 2021), and mass-spring (botev et al., 2021). we further applied it to forecasting the complex physics of turbulent flow (wang et al., 2021) and electrical dynamics over 3d geometrical meshes of the heart. we compared the presented work with slvms representative of each of the formulations in fig. 1a-b, along with a recent autoregressive model designed to forecast for diverse dynamics (donà et al., 2020). each baseline model was trained on 1) the large meta-training set with diverse dynamics, and 2) each dynamics individually, both with and without fine-tuning to k-shot support data. representative slvms were further tested in their feed-forward or optimization-based meta-extensions. results demonstrated clear margins of improvement by the presented work in forecasting diverse dynamics, with an added ability to recognize clusters of distinct dynamics and allow controlled time-series generation given only initial conditions. related works & background sequential lvms: among the first slvms is the variational recurrent neural network (vrnn) (chung et al., 2015), followed by a series of deep state-space models (ssms) (krishnan et al., 2017; maddison et al., 2017; li et al., 2019) focused on modeling the dependence of the posterior and transitional density of the latent state zk on past latent states z<k and observations x<k (fig.
1a) – resembling deep extensions of the classic kalman filter (krishnan et al., 2017) and particle filter (maddison et al., 2017). an alternative line of deep ssms aims to infer the parameters of the latent dynamic function instead (karl et al., 2017; fraccaro et al., 2017; rangapuram et al., 2018; klushyn et al., 2021). existing approaches along this line assumed linear latent dynamics, where the linear transition matrix at each time frame k is modeled as a linear combination of a set of global matrices. the linear coefficients are modeled to be time-varying and inferred from observations x≤k, as illustrated in fig. 1b1. in both formulations, the latent dynamic function's reliance on inferred time-varying variables reduces its ability to forecast without near-term observations. in parallel, a set of models (fig. 1b2) has been presented that aim to learn a latent dynamic function that forecasts a sequence using only an inferred initial state, in stochastic (rubanova et al., 2019; yildiz et al., 2020) or deterministic forms (botev et al., 2021). the resulting latent dynamic function is strong at forecasting, albeit with only a single function learned at a time. we build on this formulation and advance it by learning to learn a dynamic-specific function from few-shot observations. modeling switching dynamics in slvms, often based on the formulation in fig. 1a, shares the presented idea of using context variables to control the latent dynamics (becker-ehmck et al., 2019; linderman et al., 2017). these works, however, are concerned with the switching of dynamics within a time-series, whereas we are interested in learning to learn dynamics from k-shot support series. sequential neural processes (snps), based on the slvm formulation in fig. 1a (singh et al., 2019; qin et al., 2019), are underpinned by bayesian meta-learning similar to the presented work. they are originally designed for supervised learning of a regression function over time instead of forecasting.
in this work, we will extend snp to realize a meta-version of the slvm formulation in fig. 1a, as a counterpart to be compared with the presented meta-slvm in fig. 1c. autoregressive dynamics: autoregressive models are also popular for modeling and forecasting dynamics, especially for approximating physics-based simulations (wang et al., 2020; pfaff et al., 2020). some recent works have focused on generalizing across dynamics by, for instance, disentangling spatial and temporal modeling (donà et al., 2020) or learning dynamic-specific functions in addition to a global dynamic function (yin et al., 2021). a recent autoregressive model considered "meta-learning" dynamics by using a task embedding to condition the forecaster (wang et al., 2021), although this task encoder is trained separately from the forecasting model via weak supervision, and it infers the task from the observed frames of a forecasting series. moreover, autoregressive models cannot support controlled generation of time-series, as we will demonstrate in section 5. general few-shot learning: few-shot learning has seen substantial progress with static data, including weight initialization (finn et al., 2017; yoon et al., 2018), model optimizers (ravi & larochelle, 2016), and feed-forward models to condition (garnelo et al., 2018) or parameterize the primary networks (bertinetto et al., 2016; sung et al., 2018). among these, feed-forward meta-models replace test-time optimization with simple feed-forward passes using support data. this approach also has an interesting high-level relation to exemplar vae (norouzi et al., 2020), where the few-shot support samples can be viewed as the exemplars. it thus constitutes the basis of the presented few-shot forecasting methods. few-shot time-series forecasting: meta-learning is well studied in univariate time-series forecasting (montero-manso et al., 2020), including recent deep-learning advances (oreshkin et al., 2021).
few-shot forecasting for high-dimensional time-series, however, has not been attempted to our knowledge. unifying conceptual framework for learning latent dynamics we first describe an lvm framework that unifies existing works under two choices of probabilistic graphical models (pgms). it includes a dynamic function of latent zk and its emission to data xk: zk = f(z<k; θz), xk = g(zk), where θz represents the parameters of the latent dynamic function. system states as latent variables: one natural choice of the latent variable is the latent state zk underlying the observations xk. this gives rise to the pgm as illustrated in fig. 1a, where the marginal likelihood of an observed sequence x0:t can be expressed as: p(x0:t) = ∫ ∏i p(xi|zi) p(zi|z<i, x<i) dz0:t, (1) where p(xi|zi) describes the emission and p(zi|z<i, x<i) describes the latent dynamics. to facilitate inference, a variational approximation of the posterior density q(z0:t|x0:t) is often modeled as q(z0:t|x0:t) = ∏i=1..t q(zi|z<i, x≤i). the evidence lower bound (elbo) of equation (1) is: log p(x0:t) ≥ Σi=1..t E_q(zi|z<i,x≤i)[log p(xi|zi)] − kl(q(zi|z<i, x≤i) || p(zi|z<i, x<i)). (2) existing works adopting this pgm (chung et al., 2015; krishnan et al., 2017; li et al., 2019) differ primarily in how p(zi|z<i, x<i) and q(zi|z<i, x≤i) are modeled. the first term above encourages reconstruction using the inferred q(zi|z<i, x≤i) at each time frame i; this weakens the latent dynamic function underlying p(zi|z<i, x<i), which is constrained only by the kl-divergence term. this leads to a limited ability to forecast without near-term x≤i to support the inference of q(zi|z<i, x≤i). system parameters as latent variables: an alternative choice of the latent variable is the parameters of the lvm equations themselves, especially θz of the latent dynamic function. this gives rise to the pgm in fig.
1b, where the marginal likelihood of x0:t can now be expressed as: p(x0:t) = ∫θz ∏i p(xi|zi)|zi=f(z<i;θz) p(θz) dθz, (3) where the observations are explained by an initial latent state z0 and the parameter θz of the latent dynamic function. with a variational approximation of the posterior density q(θz, z0) and an assumption of their prior densities p(z0) and p(θz), the elbo of equation (3) becomes: log p(x0:t) ≥ E_q(θz,z0)[log p(x0:t|z0, θz)] − kl(q(z0)||p(z0)) − kl(q(θz)||p(θz)). (4) this covers different lines of existing works depending on how q(θz) and p(θz) are modeled. in a series of works (karl et al., 2017; fraccaro et al., 2017; rangapuram et al., 2018; klushyn et al., 2021), θz is modeled as time-varying system parameters θz,0:t. this involves intricate temporal modeling of q(θz,i|x≤i) and p(θz,i|z≤i) over time, as illustrated in fig. 1b1. because the latent dynamic function relies on time-varying θz,0:t, its forecasting again relies on near-term observations to support the inference of θz,i. alternatively, q(θz) can simply be assumed to be global across observations, and the dynamic function becomes a bayesian neural network as presented by yildiz et al. (2020). as a more special case, θz can be deterministic, which leads to the latent ode model presented by rubanova et al. (2019). if we further assume z0 to be deterministic, we arrive at the set of deterministic encoding-decoding networks with latent dynamic functions examined by botev et al. (2021). this set of formulations, as summarized in fig. 1b2, shares the advantage of strong long-term forecasting, albeit with a fundamental limitation in learning a single dynamic function at a time. in section 5, we will include representative models from each pgm to provide empirical evidence for the identified limitations. with this basis, we derive an intuitive solution to the identified critical gaps by extending the pgm in fig. 1b2 to the presented pgm in fig.
1c: instead of learning a single dynamic function, we will learn to adapt a latent dynamic function to few-shot support time-series. few-shot forecasting via bayesian meta-learning consider a dataset d of high-dimensional time-series with m similar but distinct underlying dynamics: d = {dj}j=1..m. for each dj, we consider disjoint few-shot support series dsj = {xs,1 0:t, ..., xs,k 0:t} and query series dqj = {xq,1 0:t, ..., xq,l 0:t}, where k ≪ l. instead of maximizing the marginal likelihood of x0:t for all x0:t ∈ d as in equation (3), we formulate a meta-objective to learn to maximize the marginal likelihood of xq 0:t for all query series xq 0:t ∈ dqj when conditioned on support series dsj, for all dynamics j ∈ {1, 2, ..., m}: p(xq 0:t | dsj) = ∫c p(xq 0:t | c) p(c | dsj) dc, xq 0:t ∈ dqj, (5) where p(xq 0:t | c), though similar to equation (3), is now conditioned on (thus adapted to) knowledge derived from support series of a specific dynamics. p(c | dsj) is the meta-model describing how to extract such dynamic-specific knowledge from the few-shot support set dsj. set-conditioned latent dynamic functions: we model p(xq 0:t | c) based on equation (3) as: p(xq 0:t | c) = ∏i pθx(xi|zi)|zi=f(zi−1, c; θz), (6) where the latent dynamic function is parameterized by θz but conditioned on the embedding c from the support set. to focus on c, we assume θz to be deterministic and global as in (rubanova et al., 2019; botev et al., 2021). as an example, we can describe zi = z̃i−1 + ∆zi, where ∆zi is obtained by conditioning gated recurrent units (grus) (chung et al., 2014) on c, as detailed in appendix b. this conditioning can be generalized to other functional forms of f(·), as we will demonstrate in experiments. meta-model for amortized variational inference: we model pζ(c | dsj) with a meta-model parameterized by ζ in the form of a feed-forward embedding of the support set dsj.
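the rollout just described – an initial latent state propagated by zi = z̃i−1 + ∆zi, with the increment conditioned on the support embedding c – can be sketched in plain python; the update function below is an illustrative stand-in for the paper's gru-based ∆zi, not its actual implementation:

```python
import math

def dynamics_step(z, c, theta):
    # Illustrative stand-in for the GRU-based increment Delta z_i:
    # the update depends on the current state z AND the support-set
    # embedding c that encodes the specific dynamics.
    return [zi + math.tanh(theta * zi + ci) for zi, ci in zip(z, c)]

def rollout(z0, c, theta, horizon):
    """Propagate z_i = z_{i-1} + Delta z_i(z_{i-1}, c) for `horizon` steps."""
    traj = [z0]
    for _ in range(horizon):
        traj.append(dynamics_step(traj[-1], c, theta))
    return traj

# Two support embeddings (two different dynamics) yield two different
# forecasts from the same initial latent state.
z0 = [0.1, -0.2]
traj_a = rollout(z0, c=[0.5, 0.5], theta=0.3, horizon=20)
traj_b = rollout(z0, c=[-0.5, -0.5], theta=0.3, horizon=20)
```

the point of the sketch is only that the same dynamic function f, once conditioned on a different c, produces a different trajectory without any per-sequence retraining.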
specifically, each support sequence xs 0:t ∈ dsj is first encoded through a neural function hϕ(xs 0:t) with blocks of interlaced spatial convolution and temporal compression layers. to extract knowledge shared by the set, the embeddings from all sequences in dsj are aggregated by an averaging function: (1/k) Σ_{xs 0:t ∈ dsj} hϕ(xs 0:t), where k is the size of the support set. the value of k can be fixed or variable in our framework. this set embedding parameterizes pζ(c | dsj) ∼ n(µc, σ²c) via separate linear layers. to enable inference, we approximate the posterior density p(c | dsj, xq 0:t) as qζ(c | dsj ∪ xq 0:t), sharing the same meta set-embedding model by augmenting dsj with xq 0:t. the elbo of equation (5) across all dynamics d = {dj}j=1..m can then be derived as: Σj Σ_{xq 0:t ∈ dqj} log p(xq 0:t | dsj) ≥ Σj Σ_{xq 0:t ∈ dqj} E_{qϕ(zq 0), qζ(c | dsj ∪ xq 0:t)}[log pθx(xq 0:t | zq 0, c)] − kl(qϕ(zq 0) || p(z0)) − kl(qζ(c | dsj ∪ xq 0:t) || pζ(c | dsj)), (7) where qϕ(zq 0) is parameterized by an encoder with lz0 = 2 in all experiments. p(z0) is assumed to be n(0, i). the likelihood term is estimated with the reparameterization trick (kingma & welling, 2013), and the kl-divergence terms are calculated analytically. the optimization of equation (7) is realized via episodic training where, in each training episode, the data in each dynamic set dj is divided into a disjoint support set dsj and query set dqj. for each query series across all dynamics, starting with an initial latent state z0 (inferred from lz0 frames) and the k-shot support embedding c, the latent dynamic function is asked to propagate forward to forecast the entire sequence of z0:t and the corresponding high-dimensional observations x0:t. experiments on benchmark image sequences data: we first considered benchmark images generated with controllable physics, including bouncing ball (fraccaro et al., 2017), hamiltonian pendulum (botev et al., 2021), and hamiltonian mass-spring systems (botev et al., 2021).
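the averaging set embedding described above can be sketched as follows; the per-sequence encoder hϕ here is a toy stand-in (mean and range of each series) for the convolutional encoder, chosen only to show the permutation invariance and variable-k behavior of the aggregation:

```python
def h_phi(sequence):
    # Toy per-sequence encoder standing in for the spatial/temporal
    # convolutional network: summarize each series by its mean and range.
    return [sum(sequence) / len(sequence), max(sequence) - min(sequence)]

def set_embedding(support_set):
    """Average the per-sequence embeddings: (1/k) * sum_j h_phi(x_j).

    Averaging makes the embedding independent of the ordering of the
    support series and of the support-set size k.
    """
    embs = [h_phi(x) for x in support_set]
    k = len(embs)
    return [sum(dim) / k for dim in zip(*embs)]

support = [[0.0, 1.0, 2.0], [1.0, 3.0, 5.0], [2.0, 2.0, 2.0]]
c = set_embedding(support)  # -> [2.0, 2.0]
```

in the paper's framework this averaged embedding would then be mapped to (µc, σc) by linear layers; here we stop at the aggregation to keep the sketch self-contained.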
details of data generation are available in appendix g. to intentionally create data with diverse dynamics, we included 1) a bouncing-ball dataset with 16 different directions of gravity, each with 3000 samples simulated using a combination of different initial positions and velocities (gravity-16); and 2) a mixed-physics dataset consisting of bouncing balls under 4 gravity directions, and pendulums and mass springs each with four different friction coefficients of 0, 0.05, 0.1, 0.15 (mixed-physics). each physics setting with a unique parameter includes 3000 samples. models: we considered baseline models representative of each formulation outlined in fig. 1. this includes vrnn (chung et al., 2015) and dkf (krishnan et al., 2017) representing fig. 1a, dvbf (karl et al., 2017) and kvae (fraccaro et al., 2017) representing fig. 1b1, and three models representing fig. 1b2 with latent dynamic functions as residual grus (gru-res), neural ordinary differential equations (node), and residual recurrent generative networks (rgn-res) (botev et al., 2021). we also considered a recent autoregressive model designed to tackle forecasting of diverse dynamics (donà et al., 2020). all baseline models were 1) trained using the entire meta-training data consisting of mixed dynamics, 2) trained as in 1) and further fine-tuned to the meta-test k-shot support set (k = 15) (except for donà et al. (2020), as we were uncertain about a proper approach to fine-tuning due to its specialized architecture), and 3) trained individually on each single dynamics, with and without fine-tuning to the meta-test k-shot support set (k = 15). for each of the global latent dynamic models (gru-res, node, and rgn-res), we extended it into our few-shot framework. while few-shot learning with the rest of the slvms is not yet reported in the literature, we further selected dkf as a representative of the slvm in fig. 1a and extended it into a feed-forward meta-formulation via a variant of the snp (meta-dkf).
we also attempted optimization-based meta-learning with maml (finn et al., 2017) on the dkf and gru-res models, although challenges of stability and convergence as noted in the literature (mehta et al., 2021; antoniou et al., 2018) were encountered, suggesting that maml extensions to slvms may not be trivial due to issues such as vanishing gradients over the complex computation graph. all gru-res, node, and rgn-res based models were trained to forecast a sequence of 20 frames using only the first 3 frames. we investigated k-shot forecasting when k is fixed at different values of k = 1, 5, 10, 15, or allowed to be variable at both meta-training and meta-test with 15 as the upper limit. for vrnn, dkf, dvbf, and kvae, we used their public implementations for training and evaluation. network components shared with the meta-models were scaled to have comparable parameter counts. because of their reliance on observed time frames to support prediction, 8 observed frames were exposed to the encoder to reconstruct those 8 frames and forecast the additional 12 frames.

table 1: comparison of the presented meta-models with all baselines trained on the meta-training set for gravity-16 data. the improvement of meta-gru-res (best-performing) over its closest baseline is statistically significant in all metrics (p < 0.01, paired t-test). [columns: pgm type, model, mse↓, vpt-mse↑, dist↓, vpt-dist↑; models compared: meta-gru-res, meta-node, meta-rgn-res (fig. 1c); gru-res, node, rgn-res, dvbf (each ± finetune), kvae, meta-dkf, dkf, vrnn (each ± finetune), and the autoregressive model of donà et al.; numeric entries not preserved in this extraction.]

metrics: we considered four quantitative metrics on meta-test series. we included the commonly used mean squared error (mse) of forecasted images, and the recently-proposed metric of valid prediction time (vpt), which measures how long the predicted object's trajectory remains close to the ground-truth trajectory based on the mse (vpt-mse) (botev et al., 2021).
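a vpt-style metric can be computed as the first forecast step at which the per-frame error exceeds a tolerance, normalized by the horizon length; a rough sketch follows (the threshold value here is an assumption, not the one used in the paper):

```python
def valid_prediction_time(errors, threshold=0.02):
    """Fraction of the forecast horizon before the per-frame error
    (e.g. per-frame MSE, or object-distance for vpt-dist) first
    exceeds `threshold`. Returns 1.0 if the forecast stays within
    tolerance for the whole horizon."""
    for i, e in enumerate(errors):
        if e > threshold:
            return i / len(errors)
    return 1.0

# Error stays below tolerance for the first 3 of 5 frames -> VPT = 0.6.
vpt = valid_prediction_time([0.001, 0.005, 0.01, 0.08, 0.2], threshold=0.02)
```

the same function covers both vpt-mse and vpt-dist by swapping the error sequence fed to it.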
because pixel-level mse does not necessarily capture the quality of the predicted dynamics well, due to the small object size in the image, we further introduced two new metrics: the distance (dist) between the ground-truth and predicted locations of the moving object, and the vpt determined based on this distance error (vpt-dist). comparison with baseline models trained on full dynamics: for gravity-16 data, we used 10 gravity directions in meta-training, 2 in meta-validation, and 4 in meta-testing. table 1 summarizes the quantitative test performance of the three k-shot meta-models obtained with k = 15, in comparison to each of the baseline models trained on the full meta-training set. we include complete results across all models in appendix d with table 4. visual examples for these quantitative results are in appendix d with fig. 7 (shaded blue): all the baseline models, including their fine-tuned and meta-versions, struggled with limited forecasting ability, especially evidenced by the error in predicting the movement of the ball over time (dist and vpt-dist). for dkf/vrnn/kvae and meta-dkf, there was strong reconstruction and near-term forecasting from partially observed frames (marked by red vertical lines), but incorrect forecasting further away from the observed frames. gru-res/node/rgn-res and their fine-tuned versions exhibited difficulty describing mixed gravity.

figure 2: a: comparison with baselines trained on mixed-physics. b: forecasting examples.

table 2: comparison with baselines trained on single dynamics in meta-training data on gravity-16. [columns: model, dynamics (known/unknown to training), mse↓, vpt-mse↑, dist↓, vpt-dist↑; models compared: meta-gru-res, gru-res (± finetune), meta-dkf, dkf (± finetune), kvae, and donà et al.; numeric entries not preserved in this extraction.]

for mixed-physics data, for each of the three physics, we included three dynamic settings in meta-training and left out one in meta-testing. fig.
2a summarizes the test results of the presented meta-gru-res (with variable k) alongside representative baseline models. visual examples are shown in fig. 2b. as shown, meta-dkf, dkf, and dvbf again demonstrated limited ability for long-term forecasting across all physics. kvae, vrnn, and the fine-tuned global latent gru-res were more successful with the mass-spring and pendulum systems with relatively simpler dynamics, yet they struggled with the gravity system. the presented meta-gru-res model consistently outperformed all the baselines across all dynamics, with a larger gain on more complex dynamics. comparison with baseline models trained on single dynamics: table 2 summarizes the performance of representative baseline models when trained on a single gravity direction of the gravity-16 data, in comparison to meta-gru-res. as shown, on test dynamics both known and unknown to the training, the meta-models outperformed the single-dynamics baselines, suggesting the added benefits of learning across dynamics. this margin of improvement remained even when the single-dynamics baselines were fine-tuned to the k-shot support series of the unknown test dynamics. visual examples of these baselines are also shown in appendix d with fig. 7 (orange shade). ablation study: table 3 summarizes the effect of k on k-shot forecasting using the meta-gru-res model. as expected, model performance improved as k increased. even with k = 5, however, the performance was significantly better than all the baseline models summarized in table 1.

table 3: performance metrics of meta-gru-res models with fixed vs. variable k values. [columns: k, mode (fixed at k = 1, 5, 10, 15, or variable), mse↓, vpt-mse↑, dist↓, vpt-dist↑; numeric entries not preserved in this extraction.]

figure 3: a: t-sne plot of support-set embedding c from stochastic (left) and deterministic (right) meta-models. b: generated forecasting by sampling the distribution of c given the same z0.

allowing k to be variable had no noticeable effect on model performance.
this flexibility highlights the practicality of the presented framework in forecasting with any given size of support series. latent embedding and generation of diverse dynamics: fig. 3a shows the distribution of the latent embedding c obtained from randomly-selected support sets, in comparison to a deterministic version of the presented meta-model on mixed-physics data. as shown, the presented framework was able to recognize and separate the three dynamics using the k-shot support set: given an initial z0, it was then able to generate different time-series within the same dynamics as well as across dynamics by sampling the distribution of c (fig. 3b). this was not possible with its deterministic counterpart. experiments on complex physics simulations
synchromesh: reliable code generation from pre-trained language models gabriel poesia∗† stanford university poesia@stanford.edu oleksandr polozov∗‡ x, the moonshot factory polozov@google.com vu le, ashish tiwari, gustavo soares, christopher meek, sumit gulwani microsoft research, redmond {levu,astiwar,gustavo.soares,meek,sumitg}@microsoft.com abstract large pre-trained language models have been used to generate code, providing a flexible interface for synthesizing programs from natural language specifications. however, they often violate syntactic and semantic rules of their output language, limiting their practical usability. in this paper, we propose synchromesh: a framework for substantially improving the reliability of pre-trained models for code generation. synchromesh comprises two components. first, it retrieves few-shot examples from a training bank using target similarity tuning (tst), a novel method for semantic example selection. tst learns to recognize utterances that describe similar target programs despite differences in surface natural language features. then, synchromesh feeds the examples to a pre-trained language model and samples programs using constrained semantic decoding (csd): a general framework for constraining the output to a set of valid programs in the target language. csd leverages constraints on partial outputs to sample complete correct programs, and needs neither re-training nor fine-tuning of the language model. we evaluate our methods by synthesizing code from natural language descriptions using gpt-3 and codex in three real-world languages: sql queries, vega-lite visualizations and smcalflow programs. these domains showcase rich constraints that csd is able to enforce, including syntax, scope, typing rules, and contextual logic. we observe substantial complementary gains from csd and tst in prediction accuracy and in effectively preventing run-time errors.
introduction

large language models (llms) trained on massive corpora of unsupervised data have been shown to perform a wide range of tasks, including natural language generation, semantic parsing and sentiment analysis (brown et al., 2020; devlin et al., 2019; raffel et al., 2020). this can be achieved without task-specific training, but rather by adapting the model to each task at test-time using textual prompts, which can contain examples and natural language descriptions. in many cases, this methodology was shown to provide good performance, reducing the need to annotate large datasets for each task of interest (brown et al., 2020; shin et al., 2021). an important application of llms is in synthesizing programs from natural language descriptions (austin et al., 2021; chen et al., 2021). but this task is still challenging for llms. first, they can commit conceptual errors, generating code that misses the intent behind the given description. for example, when asked to reverse an array, the model might generate code that simply swaps the first and last elements. indeed, users of current natural language-to-code systems report that models often produce code that is unrelated to their query (xu et al., 2021).

∗equal contribution. †work done during an internship at microsoft with the prose team. ‡work done while at microsoft research (polozov@microsoft.com).

figure 1: overview of the synchromesh framework. given the user's query, high-relevance examples are first retrieved with target similarity tuning (tst). then, a program is incrementally sampled via constrained semantic decoding (csd), which queries a completion engine (ce) to enforce constraints during code generation without re-training or fine-tuning the language model.

even when they capture the right intent, llms can still make implementation errors: the generated code can fail to execute.
for reversing an array, a model might generate a loop with the correct structure but with an off-by-one error, causing a runtime exception. these errors are common even with very large models. for example, austin et al. (2021) tested models with up to 137b parameters on generating short python programs from natural language. still, 47% of the failures were due to syntax, typing or run-time errors (as opposed to running but producing incorrect output). this is in line with theoretical results in merrill et al. (2021) showing that programming language semantics cannot be fully inferred from ungrounded data. together, both observations suggest that simply scaling up llms might be ineffective for obtaining reliable performance, especially for longer programs. in this paper, we address both conceptual and implementation errors with synchromesh, a framework for reliable code generation from pre-trained models. since llms are highly sensitive to which few-shot examples are given in their prompt, we propose target similarity tuning (tst): a method for dynamically selecting semantically relevant examples for a given description. tst mitigates conceptual errors by learning to select examples with similar intent, even when their natural language descriptions seem unrelated in form. given relevant examples, we then generate programs with constrained semantic decoding (csd), a novel method for enforcing rich syntactic and semantic constraints during code generation on top of a frozen language model. rich language-specific constraints, ranging from syntax validity to scoping and type-checking, can be implemented under the simple abstraction of completion engines (ce). csd aligns these constraints with the language model's token vocabulary by leveraging brzozowski language derivatives (brzozowski, 1964). this guarantees that all sampled programs satisfy the implemented constraints, preventing whole classes of implementation errors by construction.
the pipeline is illustrated in figure 1. we demonstrate the generality of synchromesh in three real-world languages: sql (database queries), vega-lite (data visualization) and smcalflow (calendar applications). in experiments with gpt-3 and codex, we observe that synchromesh can eliminate whole classes of errors that make outputs from unconstrained models either fail to execute or produce trivial results (e.g., empty charts). furthermore, eliminating invalid programs consistently improves prediction accuracy. in summary, we make the following contributions:

• we propose target similarity tuning for selecting few-shot examples based on the similarity of the programs they describe, improving relevance and downstream performance.
• we introduce completion engines as an abstraction that can implement rich classes of semantic program constraints, as we demonstrate in sql, vega-lite and smcalflow.
• we introduce a general, constraint-observing decoding algorithm, which aligns programming language constraints with the language model's token vocabulary.
• we evaluate our method in three natural language-to-code tasks. csd and tst both show strong complementary gains in output validity and prediction accuracy across domains.

target similarity tuning

in this section, we first overview the challenge posed by conceptual errors in programs synthesized by llms. we then introduce tst, which improves performance through more relevant example selection.

figure 2: example of target similarity tuning improving example selection for synthesizing a sql query. in (a), the prompt example missed the key query structure (grouping and counting). with this example, gpt-3 generates an invalid query (b). with tst, we retrieve a relevant example which gpt-3 successfully adapts to answer the user's question (c).

throughout, we will use a real example of synthesizing a sql database query to answer a question posed in natural language. suppose a data analyst has a relational database of airports and wants to answer the following question: "which city has the highest number of airports?" one procedure for turning this description into a sql query is to use an llm such as gpt-3 (brown et al., 2020) or codex (chen et al., 2021). to prompt the model for the task at hand, we would feed it a natural language description of the task and a selection of input-output examples. given the analyst's question, how do we select the most relevant examples from a training pool? liu et al. (2021a) proposed to retrieve examples with similar natural language descriptions using a pre-trained paraphrase detection model. figure 2a shows the most similar example from the spider natural language-to-sql dataset (yu et al., 2018) according to sentence-bert (reimers & gurevych, 2019). the query "which city has the highest elevation?" is similar on a surface level: it also asks "which city has the highest ___?". this training query asks about "elevation", a property that is readily available as a column in the airports table. figure 2b shows gpt-3's output when given this and a few other examples. the model attempts to mimic the top example, referring to a nonexistent column "numberofairports". the issue is that we picked the example in the prompt based on description similarity and not sql query similarity. in fact, the sql query in the chosen example had a simplistic structure that was significantly different from the structure of the desired sql query, and this contributed to the failure at point (b) in figure 2. we want to retrieve examples that have relevant program structures for the test query. we do so using our fine-tuning scheme called target similarity tuning (tst). formally, suppose d is a dataset of programs and associated utterances, with di = (pi, ui). let s(pa, pb) ∈ [0, 1] denote a normalized similarity metric between programs.
if fθ is a pre-trained similarity model for natural language sentences, tst consists of fine-tuning fθ to predict the similarity between target programs, given by s, from their descriptions. precisely, we minimize the mean-squared error loss: l_tst(θ) := E_{i,j∼d} [fθ(ui, uj) − s(pi, pj)]². we define s using the classical tree edit distance algorithm from zhang & shasha (1989) to compare abstract syntax trees (asts). figure 2c shows gpt-3's output when given examples selected with tst. now, the output query is correct: it performs a "group by" on the "city" column, and sorts by the count of records in each group. this structure was already present in the top example selected by tst, corresponding to "return the team with the most technicians". even if the analyst's question and this utterance are drastically different in natural language, they share similarity in the sql query that they describe. the tst objective is able to properly capture this fact. as our experiments show in section 4, tst significantly boosts the performance of both gpt-3 and codex.

constrained semantic decoding

we now present constrained semantic decoding (csd) as an approach to eliminate implementation errors from code generated by llms. we first illustrate csd with an example, and then formalize it using the abstraction of ces.

figure 3: example of csd generating a sql query. given the prompt, gpt-3 makes a mistake (a) when generating the join condition. csd is able to prevent this error by (b) keeping track of table aliases and constraining the model to respect the database schema.

the example in figure 2 showed that tst can help llms generate the correct program. in general, however, tst only helps llms by guiding them toward the correct structure; the model still needs to fill in all the specific implementation details correctly. figure 3 shows a case where the model cannot simply adapt one example from the prompt.
here, the user’s query is “which city has the highest number of departing flights?” this query is similar to the previous one – in fact, tst retrieves the same top-1 example as before. but now the correct sql query needs to join the “airports” and “flights” tables. gpt-3 generates the join condition flights.airportcode = airports.sourceairport, but this condition has a subtle error: the column names of the two tables are swapped. thus, this query fails to execute. in general, unconstrained language models often make such implementation errors: using undeclared variables, losing track of nesting levels when producing complex expressions, or calling functions using arguments of the wrong type. even the smallest of such errors prevents generated code from executing. csd prevents implementation errors by construction (as opposed to repairing them after the fact). imagine we have access to an oracle, which we call a ce, that can take a partial program and return all tokens that can extend that partial program toward a complete correct program. when the llm is generating the program token by token, csd ensures that the next token is sampled from the set returned by the ce. in figure 3, after generating “t1.” inside the “on” clause, our sql ce resolves the alias and constrains the model to output one of the columns from the “flights” table. this fixes the error seen previously during generation and produces the correct sql query. completion engines we now formally define ces. let σ be a base alphabet, and σl ⊆ σ∗ be the (potentially infinite) set of tokens of the target language. our goal is to sample programs from a language l ⊆ σl∗ – the set of valid programs. a ce cl is a partial function from σl∗ to a set of tokens. we use a regular expression over σ to represent a set of tokens. the strings in the domain of cl are called completion points, and a ce satisfies the following axioms: (a1) the empty string and every p ∈ l must be completion points.
for every p ∈ l, cl(p) = '$', where '$' denotes the regular expression matching the stop token. (a2) if s ∈ σl∗ is a completion point and t fully matches cl(s), then their concatenation st must also be a completion point. (a3) the ce is exhaustive; that is, if s is a completion point and s = tt0, where t0 is a token, then t should be a completion point and cl(t) should match t0. furthermore, we assume that ces are only called after maximal matches. for example, if a partial program ends in an identifier, the ce can assume that the identifier is complete. our ces are implemented in two layers: a context-free layer, which enforces syntactic validity, and a context-sensitive layer, which encodes semantic constraints that depend on language semantics and the user’s context (e.g., the database). below, we describe an automatic method for constructing context-free ces directly from the target language’s grammar. the context-sensitive layer of an engine is specific to the target language. table 1 provides an overview of several constraints implemented by our ces for sql, vega-lite and smcalflow, three rich languages with different syntactic and semantic structures. a detailed description of the three ces can be found in appendix c.

language | constraint | example of partial program | valid (✓) / invalid (✗) next tokens
sql | a valid identifier must follow after "as". | select name, role from user as ∧ | u ✓, t1 ✓ / 2 ✗
sql | column names must come from schema, even behind aliases. | select u.name from user as u where u.∧ | name ✓, dob ✓ / birthday ✗
vega-lite | data fields must be used with compatible types. | {"x": {"field": "category", "type": ∧ | "nominal" ✓ / "temporal" ✗
vega-lite | do not facet on a field with too many distinct values (breaks rendering). | {"column": {"field": ∧ | "category" ✓ / "zipcode" ✗
smcalflow | type-check parameters of all api functions. | (yield (placehasfeature ( ∧ | takeout ✓ / iswindy ✗, list.apply ✗
smcalflow | track declared variables and their types. | (let (x 85) (yield (incelsius ∧ | x ✓ / y ✗

table 1: examples of constraints implemented in our ces for sql, vega-lite and smcalflow. given a partial program, ces return a regular expression that matches the valid tokens that can follow. here, we show positive and negative token examples for each such regular expression. this abstraction allows domain experts to encode a wide range of expressive code generation constraints.

deriving completions from grammars computer language parsers are often automatically generated from a grammar. the grammar contains enough information to derive the context-free layer of ces. to facilitate this process, we created a library that extends any parser generated by antlr (parr & fisher, 2011), a popular ll(∗) top-down parser generator, to provide token-level completions. namely, we (i) let the antlr-generated parser process the given program prefix p, (ii) retrieve its state in the augmented transition network (atn) at the last program token, and (iii) traverse the atn from that state to enumerate all possible next token productions. this process yields (a) a list of productions and token types {τj}j=1..k that are allowed to follow p and (b) a partial ast tp. each ce takes {τj} and tp as input to generate semantic context-sensitive constraints. from completion engines to a decision procedure we use ces to guide sampling from an llm. a key component of our algorithm for constrained sampling is a decision procedure for membership in the prefix-closure of the set l of all valid programs. the prefix-closure lc of a language l contains all programs in l as well as all of their prefixes. intuitively, lc contains all partial programs that can be completed to a valid program.
given a ce cl, our first goal is to build a decision procedure for lc: given a string s, does it belong to lc? we decide whether s ∈ lc by repeatedly calling cl on certain prefixes p of s and matching the regular expression cl(p) with suffixes of s. we start with p being the empty string. we find the maximal prefix of s that matches the regular expression cl(p), remove it from s and add it to p, and repeat until the match fails. there are two cases: either s is empty now, which means the input string was a completion point and hence it is in lc, or s is now the remainder left after removing the largest prefix that was a completion point. for the second case, we must check: does there exist a completion string c such that sc fully matches the regular expression cl(p)? this question can be efficiently answered by brzozowski derivatives (brzozowski, 1964). formally, the derivative of a formal language s with respect to a string u is another formal language u⁻¹s = {v : uv ∈ s}. in other words, it is precisely the set of strings that can complete u to some string in s. if u⁻¹s = ∅, then no string in s starts with u. brzozowski derivatives are efficient to compute for our regular languages (or the regular expressions defining them) – we describe a simple linear-time algorithm in the appendix. given the derivative of cl(p), answering whether s can be completed to belong to cl(p) reduces to performing a simple regular expression match. this operation answers the case when the remainder is non-empty and completes our decision procedure for lc. the constrained semantic decoding algorithm using the decision procedure for lc, we can now describe the constrained semantic decoding algorithm. suppose s ∈ lc is the language model’s output so far (we start with ε). if σm is the model’s vocabulary, we can compute the set of valid next tokens vm(s) = {t ∈ σm : st ∈ lc} by using our decision procedure for each token in the vocabulary σm.
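a toy end-to-end sketch of these two pieces, brzozowski derivatives for the completion test and a greedy constrained decoding loop, is given below; the tuple-based regex encoding and names like `in_lc` are illustrative assumptions, not the framework's implementation:

```python
# regexes: ("empty",), ("eps",), ("chr", c), ("cat", r1, r2),
#          ("alt", r1, r2), ("star", r)
EMPTY, EPS = ("empty",), ("eps",)

def lit(s):
    """regex matching exactly the string s."""
    r = EPS
    for ch in reversed(s):
        r = ("cat", ("chr", ch), r)
    return r

def nullable(r):
    """does r match the empty string?"""
    t = r[0]
    if t in ("eps", "star"):
        return True
    if t == "cat":
        return nullable(r[1]) and nullable(r[2])
    if t == "alt":
        return nullable(r[1]) or nullable(r[2])
    return False  # empty, chr

def deriv(r, c):
    """brzozowski derivative: the language of suffixes after consuming c."""
    t = r[0]
    if t in ("empty", "eps"):
        return EMPTY
    if t == "chr":
        return EPS if r[1] == c else EMPTY
    if t == "cat":
        d = ("cat", deriv(r[1], c), r[2])
        return ("alt", d, deriv(r[2], c)) if nullable(r[1]) else d
    if t == "alt":
        return ("alt", deriv(r[1], c), deriv(r[2], c))
    return ("cat", deriv(r[1], c), r)  # star

def nonempty(r):
    """does r match at least one string?"""
    t = r[0]
    if t == "empty":
        return False
    if t == "cat":
        return nonempty(r[1]) and nonempty(r[2])
    if t == "alt":
        return nonempty(r[1]) or nonempty(r[2])
    return True  # eps, chr, star

def can_complete(s, r):
    """does some completion c exist with s+c fully matching r?
    (the derivative of r w.r.t. s is a non-empty language.)"""
    for ch in s:
        r = deriv(r, ch)
    return nonempty(r)

def csd_decode(vocab, logprobs, in_lc, stop="$", max_len=100):
    """greedy constrained decoding: always emit the most likely token
    whose addition keeps the output inside the prefix-closure lc.
    raises ValueError via max() on an empty set if no token is valid."""
    s = ""
    for _ in range(max_len):
        valid = [t for t in vocab if in_lc(s + t)]
        tok = max(valid, key=lambda t: logprobs(s)[t])
        if tok == stop:
            break
        s += tok
    return s
```

in the real system the membership oracle `in_lc` is the prefix-closure decision procedure built from the ce, and proper sampling over the renormalized distribution replaces the greedy argmax.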
in other words, we maintain the invariant that the model’s current partial output s is in lc, and make progress by using the model to sample from vm(s), instead of the unconstrained σm. once we have a complete program, we are guaranteed that it will satisfy all constraints enforced by the ce. one subtlety to note is that language models and programming languages have drastically different tokenizations; i.e., cl and the llm work with different tokens. for instance, a long string literal is a single sql token, but might span multiple tokens for the language model. similarly, a single token from the language model’s vocabulary might close multiple parentheses at once. in general, token boundaries between the two can be arbitrarily misaligned. each decision of whether st belongs to lc can potentially cross multiple completion points, or might not even finish a maximal match to the previous completion point (see the appendix for an example prediction in vega-lite where this happens multiple times). nevertheless, our csd algorithm described here naturally handles this alignment problem. hence, in synchromesh, ces do not need to be aware of this issue – they can be fully implemented in terms of the target language’s tokens.¹ our implementation applies substantial optimizations that leverage the structure of byte-pair encoding vocabularies (namely, that many tokens are prefixes of longer tokens) and reuses computation. we detail these optimizations in appendix e. in our experiments with gpt-3, csd adds an average of 8% overhead to the sampling procedure – a relatively small impact to trade for output correctness. experiments we evaluate synchromesh on three tasks of synthesizing code from natural language descriptions. for sql, we use the spider dataset (yu et al., 2018). for vega-lite, we use the nlv corpus (srinivasan et al., 2021). for smcalflow, we use the dataset that introduced the language (andreas et al., 2020).
in nlv, which has visualizations over 3 different datasets, we alternate using each dataset as a test set, only using training examples from the other two datasets. in spider and smcalflow, we use the training/validation split given in each dataset. example selection model to select examples, we use sentence-bert (reimers & gurevych, 2019) to fetch the 5 closest examples by cosine similarity. when using tst, we fine-tuned the model with the tst objective on both the spider and smcalflow training sets. the nlv corpus is smaller and does not provide a clear train-test split to fairly evaluate tst. holding out one dataset and fine-tuning on the remaining two yields synchromesh accuracies of over 90%; however, we attribute that performance to the fact that nlv has only 10 distinct visualizations and the same participants labeled all three datasets. for that reason, we omit vega-lite from the tst experiments. language models we used the two largest models from the gpt-3 family (brown et al., 2020, with 13b and 175b parameters), as well as the largest codex model (chen et al., 2021). codex shares the same architecture as 175b gpt-3, but its training set contained a larger portion of source code in a variety of languages. our only access to the models was through the public openai http api, which allowed us to apply constraints by adding a bias to logits. we describe the necessary adaptations of csd to this setting in appendix f. metrics for vega-lite and smcalflow, we report the exact-match accuracy between predictions and ground truth (field order is disregarded in vega-lite). in sql, we instead measure execution accuracy, comparing query results.
footnote 1: csd aligns the llm’s stream of vocabulary (sub-)tokens to the ce’s stream of valid program completion points, akin to a clutch that dynamically aligns the speeds of differently-sized gears in a manual transmission. such a mechanism is known as a synchromesh (jewkes et al., 1969), which gives our framework its name.
[table 2 – numeric entries lost in extraction. rows: gpt-3 13b, gpt-3 175b, and codex 175b, each alone and with + csd, + tst, and + csd + tst, plus supervised baselines marked with (s): andreas et al. (2020), srinivasan et al. (2021), rubin & berant (2021), and scholak et al. (2021). columns: sql (exec., valid, dist.), vega-lite (acc., valid, dist.), and smcalflow (acc., valid, dist.).]
table 2: results of each language model on all domains with and without csd and tst. for sql, we run the resulting query and report execution match accuracy (exec.). for vega-lite and smcalflow, we instead report exact match accuracy (acc.). edit distance (dist.) measures the average relative edit distance between the prediction and the ground truth. we also report the fraction of valid model outputs (those that parse, type-check and execute). for context only, we show recent results from supervised models (trained on the datasets we use), marked with (s).
for a more fine-grained signal, we additionally measure the edit distance between the predicted and ground-truth asts using the normalized tree edit distance (zhang & shasha, 1989). results table 2 and figure 4 summarize our main results evaluating synchromesh. key observations are: synchromesh improves reliability on top of all pre-trained llms. first, it improves top-1 accuracy (exact or execution-measured) over any pre-trained llm in all domains. smcalflow benefits the most, likely because this domain-specific language is absent from the llm pre-training corpus. for sql and smcalflow, the absolute gain is almost the same for equally-sized gpt-3 and codex. second, synchromesh dramatically improves validity. in sql, it eliminates execution errors from 29% of the queries generated by gpt-3 13b (as validity improves from 43% to 72%). even codex benefits, with 12% more queries executing successfully after synchromesh augmentation. in vega-lite and smcalflow, synchromesh improves reliability even more substantially.
gpt-3 13b only produces valid charts for 55% of the queries in nlv; all errors are eliminated with synchromesh. this is nearly paralleled in smcalflow, in which all models produce well-typed programs 97% of the time or more with synchromesh. synchromesh brings the output closer to ground truth. error prevention alone is trivial (e.g., with a constant error-free prediction), but not while simultaneously improving accuracy or edit distance to the ground truth, as synchromesh does. again, we observe improvements in all domains, and the most in smcalflow. for gpt-3 175b, the average edit distance is reduced from 0.41 to 0.18. tst and csd bring complementary benefits. our ablation studies reported in table 2 show that their combination performs better than either one separately. tst helps llms generate programs in the “vicinity” of the correct one, and csd helps by “guiding” the models toward the correct one. synchromesh adds more value for longer programs. program synthesis is hardest when the target program is complex. does synchromesh improve synthesis of longer programs, or are its benefits coming from fixes to small programs? figure 4(a) shows accuracy and figure 4(b) validity for smcalflow broken down by the length of the ground-truth program (we show results for sql in the appendix).
figure 4: (a) accuracy and (b) validity of codex predictions with and without synchromesh on smcalflow as a function of the ground-truth program length. we map program lengths to percentiles, and round to the closest multiple of 10%. error bands correspond to standard error. (c) evaluation of the “generate-then-test” approach with codex, showing the probability of at least one prediction being a valid program (valid@k) for up to 5 samples.
figure 5: illustration of implementation and conceptual errors in vega-lite. csd can avoid generating the invalid vega-lite mark type “scatterplot”, though conceptual errors can still remain.
here, program lengths are shown as their percentile. with synchromesh, we see that accuracy decays at a slower pace, and validity remains high throughout, when compared to codex alone. this indicates that synchromesh improves the ability of base models to generate longer programs. llms augmented with synchromesh approach, but still underperform, supervised models. for context, we include state-of-the-art results at the time of writing for each task in table 2. we note that these methods fine-tune or train the underlying language-to-code model on each task, and thus are not directly comparable to llms with synchromesh. that said, we observe that base llms (even codex) substantially underperform supervised models (19% worse for sql; 27% worse for smcalflow), and synchromesh helps narrow that gap (now 11% worse for sql; 9% worse for smcalflow). synchromesh outperforms “generate-then-test”. csd enforces program constraints during generation. instead, prior work has leveraged a “generate-then-test” approach: take multiple samples and filter out those that produce errors or violate constraints (chen et al., 2021). figure 4(c) evaluates this approach with codex, the highest-performing base llm. we sample from codex with a temperature τ = 0.7 to obtain diverse but high-quality samples. we then compute the “valid@k” metric by using the “pass@k” estimator from chen et al. (2021) to calculate the probability of at least one valid sample among k, with k ≤ 5. in sql, codex needs 3 samples to match synchromesh (valid@k = 85%). in smcalflow and vega-lite, synchromesh is able to virtually eliminate errors with 1 sample, while “valid@5” for codex is still below 93%. this provides evidence that even the best llms benefit from incremental validation, especially in less popular languages. discussion
pareto policy pool for model-based offline reinforcement learning yijun yang1,4, jing jiang1, tianyi zhou2,3, jie ma1, yuhui shi4 1australian artificial intelligence institute, university of technology sydney 2university of washington, seattle, 3university of maryland, college park 4department of computer science and engineering, southern university of science and technology {yijun.yang-1, jie.ma-5}@student.uts.edu.au, jing.jiang@uts.edu.au, tianyizh@uw.edu, shiyh@sustech.edu.cn abstract online reinforcement learning (rl) can suffer from poor exploration, sparse reward, insufficient data, and overhead caused by inefficient interactions between an immature policy and a complicated environment. model-based offline rl instead trains an environment model using a dataset of pre-collected experiences so online rl methods can learn in an offline manner by solely interacting with the model. however, the uncertainty and accuracy of the environment model can drastically vary across different state-action pairs, so the rl agent may achieve a high model return but perform poorly in the true environment. unlike previous works that need to carefully tune the trade-off between the model return and uncertainty in a single objective, we study a bi-objective formulation for model-based offline rl that aims at producing a pool of diverse policies on the pareto front performing different levels of trade-offs, which provides the flexibility to select the best policy for each realistic environment from the pool. our method, “pareto policy pool (p3)”, does not need to tune the trade-off weight but can produce policies allocated at different regions of the pareto front. for this purpose, we develop an efficient algorithm that solves multiple bi-objective optimization problems with distinct constraints defined by reference vectors targeting diverse regions of the pareto front. we theoretically prove that our algorithm can converge to the targeted regions.
in order to obtain more pareto optimal policies without linearly increasing the cost, we leverage the achieved policies as initialization to find more pareto optimal policies in their neighborhoods. on the d4rl benchmark for offline rl, p3 substantially outperforms several recent baseline methods over multiple tasks, especially when the quality of pre-collected experiences is low. introduction offline reinforcement learning (offline rl) (levine et al., 2020) or batch rl (lange et al., 2012) can train an agent without interacting with the environment by instead using pre-collected experiences from other agents/policies. recently, offline rl has been attracting growing interest due to the availability of large-scale datasets of diverse experiences in many rl applications, e.g., autonomous driving (shin & kim, 2019), healthcare (yu et al., 2019), robot control (schulman et al., 2016), etc. however, rl algorithms developed for the online/interactive setting usually perform poorly in the offline setting (fujimoto et al., 2019; janner et al., 2019) due to the data distribution shift caused by (1) the difference between the policy-in-training and the behavior policies used to collect the data; and (2) the difference between the realistic environment in which we will deploy the policy and the environments used to collect the data. these differences can result in function approximation error and biased policy learning (levine et al., 2020). to address these challenges, model-based rl approaches (yu et al., 2020; kidambi et al., 2020; rafailov et al., 2021; yu et al., 2021) first learn an environment model from a dataset of logged experiences using supervised learning and then conduct online rl by interacting with the model. the learned environment model fully exploits the pre-collected experiences and can avoid/reduce the costly interactions with the realistic environment required by rl, hence improving the sample efficiency.
that being said, due to the large size of the state-action space, model-based offline rl approaches can still suffer from “model exploitation” (levine et al., 2020): when the dataset does not contain sufficient samples for some state-action pairs, the epistemic uncertainty of the model on these “out-of-distribution” pairs can lead to a poor policy in offline rl.
figure 1: model return-uncertainty trade-off of p3 (ours), mopo (yu et al., 2020), and morel (kidambi et al., 2020) on an offline rl dataset “halfcheetah-random” from the d4rl benchmark (fu et al., 2020). p3 achieves policies with different levels of model return-uncertainty trade-off. mopo and morel are run three times with regularization weights {0.1, 0.3, 0.5} and {1.0, 3.0, 5.0}, respectively. detailed discussion in sec. 1.
for example, an rl agent may easily exploit the model by repeatedly visiting the out-of-distribution states where the model erroneously issues higher rewards than the true environment dynamics, and thus the rl objective is biased by an overly optimistic evaluation of the agent’s performance. a widely-studied strategy to tackle this problem is applying regularization to the reward function, which keeps the model uncertainty small when maximizing the reward issued by the model (yu et al., 2020; kidambi et al., 2020; rafailov et al., 2021). by carefully tuning the regularization weight, this can achieve a preferable trade-off between the model reward and model uncertainty. however, it is usually challenging to tune the weight without access to the realistic environment that the learned policy will be deployed to. moreover, even with access to the realistic environment, hyperparameter tuning methods such as bayesian optimization (frazier, 2018) need to run multiple instances of offline rl and can be computationally expensive. in addition, the offline-trained policy may not directly generalize to different environments.
in this paper, we address a challenging problem: how can we balance the model reward and model uncertainty during offline rl so that it produces diverse policies that can be adapted to different realistic environments? instead of formulating the problem as a single-objective optimization with regularization as in previous works, a primary contribution of this paper is to treat the model reward/return and model uncertainty as two separate objectives and develop an efficient bi-objective optimization method producing a pool of diverse policies on the pareto front, which correspond to different trade-offs between the two objectives. therefore, when deployed to a new realistic environment, we can accordingly choose the best policy from the pool. our method is called “pareto policy pool (p3),” and an example of the policies achieved by p3 is shown in fig. 1. when deployed to the realistic environment, pareto policy 1 is overly optimized for the model return, so it runs fast at the beginning but quickly falls to the ground due to the aforementioned “model exploitation”. on the contrary, pareto policy 4, which favors small model uncertainty, is overly conservative and keeps standing still for 1000 time-steps because it avoids taking exploratory actions that potentially increase the uncertainty. as expected, pareto policies 2 and 3, with more balanced trade-offs between the model return and uncertainty, perform better and achieve higher returns in the test environment. these results imply that model-based offline rl’s performance significantly relies on the trade-off between model return and uncertainty in the optimization objectives. however, it is usually challenging or intractable to determine the optimal trade-off before deployment and to control it during training.
moreover, for existing methods adopting a regularized single objective, i.e., scalarization (boyd & vandenberghe, 2004, chapter 4.7), even trying all possible regularization weights cannot fully recover the pareto front and thus cannot guarantee finding the optimal trade-off. for example, in fig. 1, by running multiple instances with different regularization weights, mopo (yu et al., 2020) and morel (kidambi et al., 2020) can only generate a few separated solutions, and it is difficult to find one with an advantageous trade-off among them. in contrast, our method aims at efficiently generating a rich pool of diverse and representative policies covering the entire pareto front without tuning the trade-off during training. thereby, when deployed to a new realistic environment, the best policy for the new environment can be efficiently selected from the pool. hence, p3 provides a simple and principled approach that addresses the two major challenges in model-based offline rl, i.e., “model exploitation” and generalization to different unseen states in order to achieve high returns. due to the complicated shape of the pareto front, which is unknown during training, finding a pool of diverse policies covering the whole pareto front raises several non-trivial algorithmic challenges: (1) how to find policies located at different regions of the pareto front associated with different levels of model return-uncertainty trade-off? (2) how to avoid training each policy from scratch, so the computational cost does not increase linearly with the number of policies? inspired by recent works in multi-objective optimization (cheng et al., 2016; ma et al., 2020; xu et al., 2020), we explore different regions of the pareto front by generating multiple diverse reference vectors in the bi-objective space, each defining a constrained bi-objective optimization whose solution resides in a local region of the pareto front.
by solving these constrained bi-objective optimization problems, we can obtain a diverse set of policies covering the whole pareto front. for solving each problem, we extend the mgda algorithm (désidéri, 2012) to a two-stage gradient-based method that provably converges to the pareto front region targeted by the reference vector. in order to achieve more policies on the pareto front without linearly increasing the cost, we start from the previously obtained policies as initialization and explore their neighborhoods on the pareto front by perturbing their corresponding reference vectors, resulting in a dense set of pareto policies in each local region. in experiments, we evaluate p3 and compare it with several state-of-the-art model-based/free offline rl methods on the standard d4rl gym benchmark (fu et al., 2020). p3 achieves the highest average score over all datasets and significantly outperforms the baseline methods in 5 out of the 9 low/medium-quality datasets, showing the advantages of p3 in learning from non-expert experiences. we also present a thorough ablation study to identify the most important components in p3. 2 related work due to lack of space, we focus our discussion here on directly related works and present a more detailed overview of related work in appendix a.2. to address the aforementioned “model exploitation”, recent works rely on applying uncertainty regularization to the model return (yu et al., 2020; kidambi et al., 2020; rafailov et al., 2021), whose regularization weight can be difficult and costly to tune. in contrast, our work reframes policy learning under the environment model as a bi-objective optimization problem and produces a diverse set of policies on the pareto front performing different levels of model return-uncertainty trade-off, which provides flexibility to select the best policy in the inference stage.
3 preliminaries we consider an episodic markov decision process (mdp) m = {s, a, p, r, h, ρ0}, where s is the state space, a is the space of actions, p : s × a → ∆(s) is the transition probability with ∆(s) a probability distribution over s, r : s × a → r is the reward function so r(sh, ah) is the immediate reward for taking action ah at state sh, h is the horizon of the process, and ρ0 is the distribution of the initial state s0. rl aims at learning a policy π : s → a maximizing the expected return in eq. (1), where π in this paper takes a deterministic action ah based on the state sh: maxπθ e[ Σh r(sh, ah) ] (1). in model-based offline rl, the agent instead interacts with an environment model rather than the realistic one. we train an environment model m̂ = {ŝ, a, p̂, r̂, h, ρ̂0} using a pre-collected dataset d ≜ {(sh, ah, sh+1, rh) | πb} of experiences by behavior policies, hand-designed controllers, or human demonstrators. by interacting with the model, online rl methods can learn in an offline manner. however, when d does not contain sufficient samples for some state-action pairs, the epistemic uncertainty of the model on these “out-of-distribution (ood)” pairs can result in a poor policy. for example, an rl agent may easily exploit the model by repeatedly visiting the ood states where the model erroneously issues r̂ higher than the true environment reward r, and thus the rl objective is biased by an overly optimistic evaluation of the agent’s performance. a widely-studied strategy to tackle this problem is applying regularization to r̂, which keeps the model uncertainty small while maximizing the model reward (yu et al., 2020; kidambi et al., 2020; rafailov et al., 2021).
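the expectations in these objectives are estimated in practice by rolling out the policy inside the learned model; a single-rollout sketch, where `model_step` is a hypothetical stand-in for sampling (ŝ_{h+1}, r̂_h) from m̂:

```python
def model_return(policy, model_step, s0, horizon):
    """single-rollout monte-carlo estimate of e[ sum_h r(s_h, a_h) ]
    under the learned model (average many rollouts in practice)."""
    s, total = s0, 0.0
    for _ in range(horizon):
        a = policy(s)          # deterministic action a_h = pi(s_h)
        s, r = model_step(s, a)  # model samples next state and reward
        total += r
    return total
```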
a practical implementation of the regularized reward function is developed by yu et al. (2020): r̃h = r̂h − λu(ŝh, ah), where u(ŝh, ah) denotes the estimate of the model uncertainty at the state-action pair (ŝh, ah) and λ controls the trade-off between r̂ and u, which has to be carefully tuned in practice. based on the regularized reward function, yu et al. (2020) proposed a modified policy optimization objective: maxπ e[ Σh=0..h ( r̂(ŝh, ah) − λu(ŝh, ah) ) ] (2). despite being intuitive, this method’s performance is sensitive to the regularization weight λ (kidambi et al., 2020; yu et al., 2021), and tuning λ is usually challenging without access to the realistic environment that the learned policy will be deployed to. moreover, even granted that access, hyperparameter tuning methods such as bayesian optimization (frazier, 2018) require running many instances of offline rl, which can be computationally prohibitive. instead of optimizing a single regularized objective, our “pareto policy pool (p3)” proposed later treats r̂ and u as two separate objectives and develops an efficient bi-objective optimization method that does not need to tune the trade-off deliberately but produces a pool of diverse policies, which are learned under different trade-offs between the two objectives. thereby, when deployed to a realistic environment, the best policy can be chosen from the pool. pareto policy pool for model-based offline rl problem formulation in order to estimate the model uncertainty accurately and alleviate the model exploitation problem, we follow previous works (janner et al., 2019; yu et al., 2020; kidambi et al., 2020; rafailov et al., 2021; yu et al., 2021) and construct a bootstrap ensemble of k environment models {m̂i}i=1..k.
each model $\hat{M}_i = \{\hat{S}, A, \hat{P}_i, \hat{r}_i, H, \hat{\rho}_0\}$ is a two-head feed-forward neural network that takes a state-action pair $(\hat{s}_h, a_h)$ as input and outputs the mean $\mu_i$ and standard deviation $\sigma_i$ of $[\hat{s}_{h+1}, \hat{r}_h]$, i.e., the next state concatenated with the reward. more details about our model are given in appendix a.5. as demonstrated in yu et al. (2020), this ensemble is effective in estimating the model uncertainty as the maximal standard deviation over all models, i.e., $u(\hat{s}_h, a_h) = \max_{i \in [K]} \|\sigma_i(\hat{s}_h, a_h)\|_2$. moreover, by randomly selecting a model in each step to provide the reward $\hat{r}(\hat{s}_h, a_h) = \hat{r}_h$, we can effectively mitigate the model exploitation problem. unlike previous works combining $\hat{r}$ and $u$ into a single objective, we treat them as two separate objectives and aim at solving the bi-objective optimization below:

$$\max_\theta J_{\hat{\rho}_0}(\pi_\theta, \hat{M}) = \max_\theta \big(J^{\hat{r}}_{\hat{\rho}_0}(\pi_\theta, \hat{M}),\ J^{u}_{\hat{\rho}_0}(\pi_\theta, \hat{M})\big)^T = \max_\theta \mathbb{E}\Big[\sum_{h=0}^{H-1} \big(\hat{r}(\hat{s}_h, a_h),\ \exp(-u(\hat{s}_h, a_h)/\kappa)\big)^T\Big], \quad (3)$$

where $\kappa$ is a temperature applied to $u(\hat{s}_h, a_h)$. for simplicity, in the rest of this paper we drop $\hat{\rho}_0$ and $\hat{M}$ from the notation. in eq. (3), $J^{\hat{r}}$ aims to maximize the expected model return, and $J^{u}$ is designed to minimize the expected cumulative model uncertainty. however, the bi-objective optimization naturally has multiple (possibly infinitely many) optimal solutions instead of one single policy, and each optimal policy realizes a different trade-off between the two objectives. for example, a policy favoring small model uncertainty may be overly conservative and avoid taking exploratory actions, so its model return can be low. in contrast, a policy pursuing high model return might fail in realistic environments at a state that the model is highly uncertain about. examples of these policies are given in fig. 1. formally, for any two policies $\pi_i$ and $\pi_j$, $\pi_i$ dominates $\pi_j$ if and only if $J(\pi_i) \ge J(\pi_j)$ elementwise and $J(\pi_i) \ne J(\pi_j)$.
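the ensemble uncertainty, the bi-objective vector of eq. (3), and the dominance test above can be sketched as follows (illustrative only; per-step model outputs are passed in as plain arrays rather than produced by trained networks):

```python
import numpy as np

def ensemble_uncertainty(sigmas):
    """u(s, a) = max_i ||sigma_i(s, a)||_2 over the K ensemble models."""
    return max(np.linalg.norm(s) for s in sigmas)

def bi_objective(r_hats, us, kappa=1.0):
    """per-trajectory estimate of (J^r, J^u) from eq. (3): model return and
    the exp(-u/kappa)-transformed uncertainty term, both to be maximized."""
    return np.array([sum(r_hats), sum(np.exp(-u / kappa) for u in us)])

def dominates(ji, jj):
    """pi_i dominates pi_j iff J(pi_i) >= J(pi_j) elementwise and they differ."""
    ji, jj = np.asarray(ji), np.asarray(jj)
    return bool(np.all(ji >= jj) and np.any(ji > jj))

sigmas = [np.array([0.3, 0.4]), np.array([0.1, 0.1])]
print(ensemble_uncertainty(sigmas))          # ||(0.3, 0.4)||_2 = 0.5
print(dominates([2.0, 2.0], [1.5, 1.5]))     # True
```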
a policy $\pi^*$ is pareto optimal if no policy dominates $\pi^*$, i.e., no objective can be improved without detriment to another objective at $\pi^*$. all pareto optimal policies constitute the pareto set, and the pareto front is the image of the pareto set in the space spanned by all the objectives. unfortunately, it is almost infeasible in practice to find the whole pareto set, and the shape of the pareto front is also unknown. as discussed in sec. 3, it is difficult to determine the optimal trade-off between $J^{\hat{r}}$ and $J^{u}$ because we cannot access the realistic environment during training; it is also challenging and costly to control the trade-off. hence, a straightforward strategy is to find a diverse set of policies on the pareto front realizing different levels of trade-off and select the best one when deployed to a realistic environment. however, how to find these pareto optimal policies is still an open challenge. in addition, it is expensive to train each pareto policy from scratch: after locating a few diverse policies on the pareto front, can we start from them to find more pareto policies and thereby avoid linearly increasing computational costs? to overcome these challenges, we develop "pareto policy pool (p3)", which can efficiently and precisely find a diverse set of policies on the pareto front of the return-uncertainty bi-objective optimization.

4.2 pareto policy pool

fig. 2 illustrates the main idea of p3, whose detailed procedure is given in alg. 1. in order to find diverse pareto policies and precisely control the trade-off of each policy, p3 generates multiple reference vectors $\{v_i\}_{i=1}^{n}$ in the objective space, each forming a constraint on the bi-objective optimization in eq. (3) and targeting a different region of the pareto front (sec. 4.2.1). thereby, the pareto policies $\{\pi_i\}_{i=1}^{n}$ obtained by solving the $n$ constrained bi-objective optimizations are diverse in terms of the objective trade-off.
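given the objective vectors of a set of candidate policies, the pareto set definition above reduces to a simple non-domination filter; a small sketch (with the elementwise dominance test written inline, both objectives maximized):

```python
import numpy as np

def pareto_indices(objectives):
    """indices of non-dominated objective vectors, i.e., the empirical
    pareto set among the given candidates."""
    pts = np.asarray(objectives, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(pts) if j != i)
        if not dominated:
            keep.append(i)
    return keep

# (1.5, 1.5) is dominated by (2, 2); the other three points form the front
points = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0), (1.5, 1.5)]
print(pareto_indices(points))  # [0, 1, 2]
```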
for solving each of these constrained problems, we develop an efficient two-stage optimization algorithm, alg. 2, which starts from an initial point and moves into a region fulfilling the constraint (correction stage), and then applies a multi-objective optimization algorithm to find a pareto policy in the targeted region. to avoid training each pareto policy from scratch, which leads to linearly increasing costs for more than $n$ policies, we develop a local extension method on the pareto front in sec. 4.2.2. this "pareto extension" first generates more reference vectors by perturbing each vector in $\{v_i\}_{i=1}^{n}$ and then uses the associated pareto policies $\{\pi_i\}_{i=1}^{n}$ to warm-start the bi-objective optimization constrained by the new vectors (lines 9-13 in alg. 1). these locally extended policies, together with the $n$ policies, compose a pool of diverse policies on the pareto front, from which we can select the best policy when deploying to realistic environments.

figure 2: illustration of p3 by solving a benchmark problem from (lin et al., 2019).

algorithm 1 pareto policy pool (p3) for model-based offline rl
1: input: dataset $D$, constraint threshold $\bar{\psi} < 0$, step size $\eta$, number of reference vectors $n$, $t_g \gg t_l$
2: initialize: environment models, pareto policy pool $P = \emptyset$, $0 < \tau_a < \tau_b < 1$ for eq. (5), $0 < \epsilon < \tau_a$ for eq. (8), number of updates $t = n(t_g + 2t_l)$
3: train the model on $D$ using supervised learning;
4: generate $n$ reference vectors $\{v_1, \ldots, v_n\}$ by eq. (5);
5: for $i \in \{1, \ldots, n\}$ (in parallel) do    ▷ diverse pareto policies
6:     initialize a policy $\pi_i$;
7:     for $j = 0, 1, \ldots, t_g - 1$ do
8:         update the parameters of $\pi_i$ by alg. 2 with $v_i$;
9:     generate $\{v_i^+, v_i^-\}$ from $v_i$ by eq. (8);    ▷ local pareto extension
10:    for $v' \in \{v_i^+, v_i^-\}$ do
11:        for $j' = 0, 1, \ldots, t_l - 1$ do
12:            update the parameters of $\pi_i$ by alg. 2 with $v'$;
13:        $P = P \cup \{\pi_i\}$;    ▷ store pareto policies into the pool
14: output: $P$;

4.2.1 diverse pareto policies by reference vectors

inspired by recent works in multi-objective optimization (cheng et al., 2016; lin et al., 2019; ma et al., 2020; xu et al., 2020), we explore the pareto front by generating a diverse set of reference vectors, each defining a constrained bi-objective optimization problem. as shown in fig. 2, we generate $n$ uniformly distributed reference vectors $\{v_i\}_{i=1}^{n}$ in a 0-1 normalized objective space by eq. (5), i.e.,

$$v_i \triangleq \big(\tau_b - (i-1)\tau_c,\ \tau_a + (i-1)\tau_c\big), \qquad \tau_c = \frac{\tau_b - \tau_a}{n-1}, \quad (5)$$

where $\tau_a$ and $\tau_b$ control the range covered by the reference vectors. each $v_i = (v_i^1, v_i^2)$ defines a constrained bi-objective optimization problem based on eq. (3) whose solution resides in the targeted region of the pareto front:

$$\max_\theta J(\pi_\theta) \triangleq \big(J^{\hat{r}}(\pi_\theta),\ J^{u}(\pi_\theta)\big)^T \quad \text{s.t.} \quad \psi(\pi_\theta, v_i) \triangleq -D_{\mathrm{KL}}\!\left(v_i \,\middle\|\, \frac{J(\pi_\theta)}{\|J(\pi_\theta)\|_1}\right) \ge \bar{\psi}, \quad (6)$$

where $\psi(\pi_\theta, v_i)$ defines a similarity metric between the reference vector $v_i$ and the objective vector $J(\pi_\theta) > 0$. when the threshold $\bar{\psi}$ is large, $J(\pi_\theta)$ is constrained to be close to $v_i$, and the targeted region on the pareto front is small. hence, solving the constrained bi-objective optimization for the diverse set of reference vectors produces a diverse set of pareto policies associated with different trade-offs between $J^{\hat{r}}$ and $J^{u}$. in the following, we develop an efficient two-stage gradient-based method, alg. 2, for solving the inequality-constrained optimization problem in eq. (6).
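the reference-vector grid of eq. (5) and the kl-based similarity used as the constraint of eq. (6) can be sketched as follows (a sketch only: it assumes both the reference vector and the objective vector are normalized to sum to one before the kl term is computed):

```python
import numpy as np

def reference_vectors(n, tau_a, tau_b):
    """n uniformly spaced reference vectors in the 0-1 normalized objective
    space, per eq. (5): v_i = (tau_b - (i-1)tau_c, tau_a + (i-1)tau_c)."""
    tau_c = (tau_b - tau_a) / (n - 1)
    return [np.array([tau_b - i * tau_c, tau_a + i * tau_c]) for i in range(n)]

def similarity(j, v):
    """psi(pi, v) = -KL(v || J_normalized): zero when the objective vector is
    exactly aligned with the reference direction, increasingly negative
    otherwise (illustrative form of the constraint in eq. (6))."""
    j_hat = np.asarray(j, float) / np.sum(j)
    v_hat = np.asarray(v, float) / np.sum(v)
    return -np.sum(v_hat * np.log(v_hat / j_hat))

vs = reference_vectors(5, tau_a=0.2, tau_b=0.8)
# endpoints sweep from return-favoring (0.8, 0.2) to uncertainty-favoring (0.2, 0.8)
print(similarity([0.5, 0.5], vs[0]) < 0)  # True: misaligned objective vector
```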
alg. 2 first finds a solution meeting the constraint by alternating between ascending the two objectives (correction stage), and then treats the constraint as a third objective and applies an existing multi-objective optimization algorithm to find a pareto policy in the region targeted by the constraint (ascending stage). in the first stage, the algorithm checks how the constraint is violated (line 3), e.g., whether $J^{\hat{r}}$ or $J^{u}$ is too small, and accordingly chooses one of them on which to apply gradient ascent towards meeting the constraint. once it finds a feasible solution fulfilling the constraint, it switches to the second stage, which reframes the problem as a tri-objective optimization by turning the constraint into a third objective, i.e., $\max_\theta F(\pi_\theta) \triangleq (J^{\hat{r}}(\pi_\theta), J^{u}(\pi_\theta), \psi(\pi_\theta, v_i))^T$, and applies mgda (désidéri, 2012) to find a pareto solution of this problem. each step of mgda finds a convex combination $\alpha_t^T \nabla_\theta F(\pi_{\theta_t})$ of all objectives' gradients $\nabla_\theta F(\pi_{\theta_t}) \triangleq (\nabla_\theta J^{\hat{r}}(\pi_{\theta_t}), \nabla_\theta J^{u}(\pi_{\theta_t}), \nabla_\theta \psi(\pi_{\theta_t}, v_i))^T$ such that no objective is decreasing along the combined direction. this is equivalent to solving the following min-norm problem:

$$\min_{\alpha_t} \big\| \alpha_t^T \nabla_\theta F(\pi_{\theta_t}) \big\|_2^2 \quad \text{s.t.} \quad \textstyle\sum_i \alpha_{i,t} = 1,\ \ \alpha_{i,t} \ge 0, \quad (7)$$

which can be efficiently solved by the frank-wolfe algorithm (jaggi, 2013). mgda then takes one step along the combined direction to update the policy, i.e., $\theta_{t+1} = \theta_t + \eta \alpha_t^T \nabla_\theta F(\pi_{\theta_t})$. since mgda always improves each objective, the constraint will not be violated, and the algorithm will eventually find a pareto policy in the targeted region. in practice, we use openai's es (salimans et al., 2017) to estimate the gradient of each objective (see appendix a.3) for its efficiency and stability.

algorithm 2 a two-stage method for solving constrained bi-objective optimization
1: input: $\pi_{\theta_t}$, $v_i$, $\bar{\psi}$
2: if $\psi(\pi_{\theta_t}, v_i) < \bar{\psi}$ then    ▷ correction stage
3:     if $J^{\hat{r}}(\pi_{\theta_t}) / J^{u}(\pi_{\theta_t}) < v_i^1 / v_i^2$ then
4:         compute $\nabla_\theta J^{\hat{r}}(\pi_{\theta_t})$; $\theta_{t+1} = \theta_t + \eta \nabla_\theta J^{\hat{r}}(\pi_{\theta_t})$;
5:     else
6:         compute $\nabla_\theta J^{u}(\pi_{\theta_t})$; $\theta_{t+1} = \theta_t + \eta \nabla_\theta J^{u}(\pi_{\theta_t})$;
7: else    ▷ ascending stage
8:     compute $\nabla_\theta F(\pi_{\theta_t})$;
9:     find $\alpha_t^*$ according to eq. (7);
10:    $\theta_{t+1} = \theta_t + \eta \alpha_t^{*T} \nabla_\theta F(\pi_{\theta_t})$;
11: $t \leftarrow t + 1$
12: output: $\pi_{\theta_t}$

alg. 2 provably converges to the pareto-front region targeted by the reference vector even with approximate gradients.

assumption 1. the $m$ objectives $\{f_i\}_{i=1}^{m}$ of a multi-objective function are differentiable and their gradients are lipschitz continuous with constants $l_i > 0$.

assumption 2. the es gradient estimate $\nabla f_{i,\nu}(x_t)$ is unbiased, i.e., $\mathbb{E}_{\varepsilon_t \sim \mathcal{N}(0, I)}[\nabla f_{i,\nu}(x_t)] = \nabla f_i(x_t)$, and has bounded variance, $\mathrm{var}(\nabla f_{i,\nu}(x_t)) \le \sigma^2$.

lemma 1. any mutually independent objectives $f_i \ge 0$ satisfying assumptions 1 & 2 obey $\mathbb{E}_{\varepsilon_t}[f_i(x_{t+1})] - f_i(x_t) \le -\big(\eta - \frac{l_i \eta^2}{2}\big) \|\bar{d}_t\|^2 + \frac{l_i \eta^2}{2} \sigma^2$, where $\bar{d}_t = \sum_{i=1}^{m} \alpha_{i,t} \nabla f_i(x_t)$.

lemma 1 implies that when $\eta < \frac{2 \|\bar{d}_t\|^2}{l_i (\|\bar{d}_t\|^2 + \sigma^2)}$, the right-hand side of the bound is negative, so the ascending stage leads to a monotonically non-increasing sequence of the objectives.

theorem 1 (non-convex convergence rate). let assumptions 1 & 2 hold, and let $\Delta = f_i(x_0) - f_i(x^*)$, $\beta = \eta - \frac{l_i \eta^2}{2}$, and $\gamma = \frac{l_i \eta^2}{2}$. for an arbitrary objective $f_i$, given any $\epsilon > 0$, after $T = O\big(\frac{\Delta}{\beta \epsilon - \gamma \sigma^2}\big)$ iterations of the ascending stage, we have $\frac{1}{T} \sum_{t=0}^{T-1} \|\bar{d}_t\|^2 \le \epsilon$.

complete proofs are provided in appendix a.1. theorem 1 provides the convergence rate of the ascending stage for non-convex objectives. for the correction stage, which performs single-objective es gradient ascent, nesterov & spokoiny (2017) have proved its convergence rate to a stationary point of a non-convex objective.

4.2.2 local extension of pareto front

although solving eq. (6) for a diverse set of reference vectors using the algorithm in sec. 4.2.1 can produce a diverse set of pareto policies, the computational cost increases linearly with the number of policies. in practice, a few pareto policies cannot cover all possible trade-offs and thus may lead to sub-optimal policy choices in deployment. in order to efficiently obtain more policies with different fine-grained levels of model return-uncertainty trade-off, we propose a "pareto extension" that starts from the diverse set of pareto policies and locally searches for more policies near them on the pareto front.
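as a concrete aside on the ascending stage described above, here is an illustrative sketch (not the authors' implementation) of its two ingredients: an antithetic es gradient estimator in the style of salimans et al. (2017), and a frank-wolfe solver for the min-norm problem of eq. (7); the toy objective and all constants are hypothetical:

```python
import numpy as np

def es_gradient(f, x, nu=0.05, n=2000, rng=None):
    """antithetic es estimate: grad f(x) ~ 1/(2*nu*n) sum_k eps_k
    [f(x + nu*eps_k) - f(x - nu*eps_k)]; unbiased with bounded variance,
    matching assumption 2."""
    rng = rng or np.random.default_rng(0)
    eps = rng.standard_normal((n, x.size))
    diffs = np.array([f(x + nu * e) - f(x - nu * e) for e in eps])
    return (eps * diffs[:, None]).sum(axis=0) / (2 * nu * n)

def min_norm_weights(grads, iters=50):
    """frank-wolfe solver for eq. (7): min_alpha ||sum_i alpha_i g_i||^2
    over the probability simplex."""
    G = grads @ grads.T                       # gram matrix <g_i, g_j>
    alpha = np.full(len(grads), 1.0 / len(grads))
    for _ in range(iters):
        inner = G @ alpha                     # <g_i, sum_j alpha_j g_j>
        t = int(np.argmin(inner))             # best vertex of the simplex
        v_sq = alpha @ inner                  # ||sum_j alpha_j g_j||^2
        denom = v_sq - 2.0 * inner[t] + G[t, t]
        if denom <= 1e-12:                    # already at the min-norm point
            break
        gamma = float(np.clip((v_sq - inner[t]) / denom, 0.0, 1.0))
        alpha = (1.0 - gamma) * alpha
        alpha[t] += gamma
    return alpha

# two analytic toy gradients; the combined direction improves both objectives
grads = np.array([[2.0, 0.0], [0.0, 1.0]])
alpha = min_norm_weights(grads)               # ~ [0.2, 0.8]
direction = alpha @ grads                     # ~ (0.4, 0.8)
```

in p3, the analytic rows of `grads` would instead be `es_gradient` estimates of each component of $F$, and the update would be $\theta_{t+1} = \theta_t + \eta\, \alpha^T \nabla F$.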
this warm-start strategy avoids training more policies from scratch and saves a significant amount of computation in practice. as illustrated in fig. 2 and alg. 1, we perturb each reference vector $v_i$ to obtain additional reference vectors $v_i^+$ and $v_i^-$, i.e.,

$$v_i^+ = v_i + \epsilon, \qquad v_i^- = v_i - \epsilon. \quad (8)$$

these new vectors create more constrained bi-objective optimization problems of the same form as eq. (6). instead of using a random initialization, we start from and fine-tune $\{\pi_i\}_{i=1}^{n}$ for a few iterations to obtain the pareto policies of these new optimization problems. these policies, together with $\{\pi_i\}_{i=1}^{n}$, constitute a pool of pareto policies. when deployed to a realistic environment, we first evaluate the pooled policies for a few steps and then select the one achieving the highest return to deploy. more details about our selection strategy are given in appendix a.4. as long as the pool contains sufficiently diverse policies, we can find a promising policy that significantly outperforms the policies trained by other model-based offline rl methods.

5 experiments

this section aims to answer the following questions by evaluating p3 against other offline rl methods on datasets from the d4rl gym benchmark (fu et al., 2020). (1) comparison to prior work: does p3 outperform other state-of-the-art offline rl methods? moreover, when the dataset does not contain high-quality samples for some state-action pairs, can the policies generated by p3 generalize to unseen states in order to achieve high returns? (2) ablation study: in the experiments, we apply several techniques used in previous work (see appendix a.6 for more details); how do they affect performance? (3) effectiveness of p3: why is it challenging to find the optimal trade-off between the model return and its uncertainty? we empirically explain how p3 efficiently alleviates this problem by generating a rich pool of diverse and representative policies.
to answer question (1), we compare p3's performance with state-of-the-art offline rl algorithms, including bcq (fujimoto et al., 2019), bear (kumar et al., 2019), cql (kumar et al., 2020), uwac (wu et al., 2021), td3+bc (fujimoto & gu, 2021), mopo (yu et al., 2020), morel (kidambi et al., 2020), and combo (yu et al., 2021). for fairness of comparison, we re-run these algorithms using author-provided implementations¹ and train each algorithm for 1000 epochs. we also carefully tune the hyperparameters of baselines such as bcq, cql, td3+bc, mopo, and morel by grid search and choose the best ones for each benchmark dataset; most of them achieve higher scores than their previously reported versions. for the other baselines, such as uwac and combo, we adopt the hyperparameters from their original papers, assuming the authors have chosen the best hyperparameters they could find. more details on the experimental setting can be found in appendix a.6.

our experimental results are provided in table 1. p3 clearly achieves the highest average score across all datasets and significantly outperforms the baseline methods on 5 out of the 9 low/medium-quality datasets. moreover, we find that the low/medium-quality datasets in the d4rl benchmark are "imbalanced": as illustrated in fig. 8, there are a large number of bad samples, some mediocre samples, and a few good samples, causing problems with learning accurate behavior policies or generalizable environment models (buckman et al., 2021; zhang et al., 2021). therefore, many offline rl algorithms, especially model-free algorithms that heavily rely on accurate recovery of the behavior policy (fujimoto et al., 2019; kumar et al., 2020; fujimoto & gu, 2021), perform poorly on these datasets. according to the results in table 1, p3 achieves sota performance and outperforms the other baselines by a large margin on the random, medium, and medium-replay datasets, indicating the advantages of p3 in learning from low-quality experiences.

¹as noted in https://github.com/aravindr93/mjrl/issues/35, we remark that the implementation provided by morel's authors achieves lower results than their reported ones.

table 1: results on d4rl gym experiments. normalized score (mean±std) over the final 10 evaluations and 5 seeds. ∗ marks previously reported results. dataset quality gradually improves from random to medium-expert. [table values omitted in this extraction: columns bcq, bear, cql, uwac∗, td3+bc, mopo, mopo∗, morel, combo∗, and p3+fqe; rows halfcheetah/hopper/walker2d for the random, medium, medium-replay, medium-expert, and expert datasets, with per-quality means and a total mean.]

figure 3: learning curves on low-quality datasets. returns are averaged over 10 evaluations and 5 seeds. the shaded area depicts the standard deviation over 5 seeds. p3 outperforms two recent model-based offline rl methods (i.e., mopo and morel) and the sota model-free method (i.e., td3+bc). full results for all datasets are in fig. 7 of the appendix.

we use online policy evaluation to select the best policy during the test phase of p3, which can be computationally intensive or cause overheads when deployed to realistic environments. to overcome this drawback, we replace the online evaluation with fitted q evaluation (fqe) (le et al., 2019), an offline policy evaluation method which (approximately) evaluates policies using the offline data only. the implementation details and experimental results are reported in appendix a.4 and table 1, respectively. we find, somewhat surprisingly, that "p3+fqe (offline policy evaluation)" only slightly degrades performance relative to "p3+online policy evaluation", and still outperforms all baselines on the low/medium datasets, suggesting that fqe enables more efficient inference for p3 so that it has exactly the same inference cost as the other baselines.
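fqe is only used here as an off-the-shelf evaluator, but its core recursion is simple; as a rough illustration (a tabular toy only; the chain mdp, state/action counts, and horizon are hypothetical, and the real method fits function approximators), it regresses $Q_h(s,a) \leftarrow r + Q_{h+1}(s', \pi(s'))$ backwards over the horizon using only logged transitions:

```python
import numpy as np

def fqe(transitions, policy, n_states, n_actions, horizon):
    """tabular fitted-q-evaluation sketch: repeatedly regress
    q(s, a) <- r + q_next(s', pi(s')) using only logged (s, a, r, s')."""
    q_next = np.zeros((n_states, n_actions))
    for _ in range(horizon):
        total = np.zeros((n_states, n_actions))
        count = np.zeros((n_states, n_actions))
        for s, a, r, s2 in transitions:
            total[s, a] += r + q_next[s2, policy(s2)]
            count[s, a] += 1
        q_next = total / np.maximum(count, 1)   # empirical regression target
    return q_next

# logged data from a 3-state chain: 0 -> 1 -> 2 (absorbing), reward 1 per move
data = [(0, 0, 1.0, 1), (1, 0, 1.0, 2), (2, 0, 0.0, 2)]
q = fqe(data, policy=lambda s: 0, n_states=3, n_actions=1, horizon=2)
print(q[0, 0])  # 2.0
```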
to answer question (2), we conduct a thorough ablation study of five variants of p3, each removing or changing one component used in p3. table 2 reports their results on the d4rl gym benchmark for 9 representative environment-dataset combinations. in fig. 9 of the appendix, we visualize the pareto policies obtained by p3 to highlight the effectiveness and superiority of our method. among the five variants of p3, "scalarization" replaces our proposed alg. 2 with the scalarization method (boyd & vandenberghe, 2004, chapter 4.7); "no statenorm" removes state normalization (mania et al., 2018; fujimoto & gu, 2021); "no rankshaping" removes the rank-based scale shaping (wierstra et al., 2014); "no paretoextension" removes the pareto extension proposed in sec. 4.2.2; "no behaviorcloning" removes the behavior cloning initialization (kumar et al., 2020; kidambi et al., 2020). more details on these variants are provided in appendix a.9.

table 2: ablation study. normalized score (mean±std) of p3 variants over the final 10 evaluations and 5 seeds on three representative d4rl dataset types, i.e., random, medium-replay, and medium-expert, corresponding to low-, medium-, and high-quality data, respectively. [table values omitted in this extraction: rows halfcheetah/hopper/walker2d per dataset type; columns for the five p3 variants, our version of p3, mopo, and td3+bc.]

figure 4: model-based offline rl's performance in the deployed environment (heatmap) under different trade-offs between the model return (y-axis) and uncertainty (x-axis). each red circle is a pareto policy from the pool generated by p3. zoom in for more details. more results are shown in fig. 10 of the appendix.

according to the results in table 2 and fig. 9, we draw the following conclusions. (1) except for "scalarization" and "no paretoextension", the variants perform comparably to our p3 while still outperforming the previous results achieved by model-based (mopo) and model-free (td3+bc) rl algorithms on the low/medium-quality datasets, reflecting that these widely used techniques can improve p3's performance but are not crucial to our appealing results. (2) "scalarization" shows a noticeable degradation in performance and cannot obtain a dense set of diverse policies, as shown in fig. 9. this can be explained as follows: the scalarization method only finds a few separated policies, and it is difficult to find one with an advantageous trade-off among them. in addition, we remark that the computational cost of multiple training runs with different weight assignments is similar to the cost of running p3. (3) "no paretoextension" degrades p3's performance on all 9 environment-dataset combinations, corroborating that a dense set of policies on the pareto front is essential to our results.

to answer question (3), in fig. 4 we study how the p3 policies with different uncertainty-return trade-offs perform in the deployed environment. for the low/medium-quality datasets (the left three plots in fig. 4), the optimal policies with high realistic returns (bright areas in the heatmap) spread across almost the whole pareto front. therefore, to find the best policy, it is essential to explore the whole pareto front and select from a diverse set of pareto optimal/stationary policies, as p3 does. this explains why p3 performs best on all the low/medium datasets. on the contrary, for the high-quality datasets (the right two plots in fig. 4), the optimal policies with high realistic returns gather within a small region of the pareto front associated with a single trade-off level. therefore, by carefully tuning the trade-off weight, previous methods can still find the optimal policy without visiting the whole pareto front; hence, we observe a smaller advantage for p3 on the high-quality datasets.
the underlying reason is that the mdp models are very confident about high-return (realistic) state-action pairs when most samples in the training data are high-return (high-quality), while they can be uncertain about many high-return pairs when the training data covers only a few high-return samples (low/medium quality). it is worth noting that collecting high-quality datasets is usually expensive or infeasible in practice, and many applications lack sufficient high-quality data. in these imperfect but practical scenarios, p3 performs significantly better and more stably than existing model-based offline rl methods that learn only a single policy.

6 conclusion

in this paper, we find that the performance of model-based offline rl significantly relies on the trade-off between the model return and its uncertainty, while determining the optimal trade-off is challenging without access to the realistic environment. to address this problem, we study a bi-objective formulation for model-based offline rl and develop an efficient method that produces a pool of diverse policies on the pareto front realizing different levels of trade-off, which provides the flexibility to select the best policy in the inference stage. we extensively validate the efficacy of our method on the d4rl benchmark, where it largely outperforms several recent baselines and exhibits promising results on low-quality datasets.

acknowledgments

yijun yang and yuhui shi are supported in part by the shenzhen fundamental research program under grant no. jcyj20200109141235597, the national science foundation of china under grant no. 61761136008, the shenzhen peacock plan under grant no. kqtd2016112514355531, and the program for guangdong introducing innovative and entrepreneurial teams under grant no. 2017zt07x386.

reproducibility statements

code is available at https://github.com/overeuro/p3. we provide a full description of all our experiments in section 5 and appendix a.6.

references

abbas abdolmaleki, sandy h.
huang, giulia vezzani, bobak shahriari, jost tobias springenberg, shruti mishra, dhruva tb, arunkumar byravan, konstantinos bousmalis, andrás györgy, csaba szepesvári, raia hadsell, nicolas heess, and martin a. riedmiller. on multi-objective policy optimization as a tool for reinforcement learning. corr, 2021. felix berkenkamp, matteo turchetta, angela schoellig, and andreas krause. safe model-based reinforcement learning with stability guarantees. in neurips, 2017. rinu boney, juho kannala, and alexander ilin. regularizing model-based planning with energy-based models. in corl, 2019. stephen boyd and lieven vandenberghe. convex optimization. cambridge university press, 2004. jacob buckman, carles gelada, and marc g. bellemare. the importance of pessimism in fixed-dataset policy optimization. in iclr, 2021. ran cheng, yaochu jin, markus olhofer, and bernhard sendhoff. a reference vector guided evolutionary algorithm for many-objective optimization. ieee trans. evol. comput., 2016. krzysztof choromanski, aldo pacchiano, jack parker-holder, yunhao tang, deepali jain, yuxiang yang, atil iscen, jasmine hsu, and vikas sindhwani. provably robust blackbox optimization for reinforcement learning. in corl, 2020. kurtland chua, roberto calandra, rowan mcallister, and sergey levine. deep reinforcement learning in a handful of trials using probabilistic dynamics models. in neurips, 2018. ignasi clavera, jonas rothfuss, john schulman, yasuhiro fujita, tamim asfour, and pieter abbeel. model-based reinforcement learning via meta-policy optimization. in corl, 2018. kalyanmoy deb, samir agrawal, amrit pratap, and t. meyarivan. a fast and elitist multiobjective genetic algorithm: nsga-ii. ieee trans. evol. comput., 2002. jean-antoine désidéri. multiple-gradient descent algorithm (mgda) for multiobjective optimization.
one transformer can understand both 2d & 3d molecular data

tianlang chen2,5∗, tie-yan liu3, shengjie luo1, shuxin zheng3, yixian xu2∗, liwei wang1,4†, di he1†

1national key laboratory of general artificial intelligence, school of intelligence science and technology, peking university 2school of eecs, peking university 3microsoft research 4center for data science, peking university 5shanghai artificial intelligence laboratory

luosj@stu.pku.edu.cn, tlchen@pku.edu.cn, xyx050@stu.pku.edu.cn, {shuz, tyliu}@microsoft.com, {wanglw, dihe}@pku.edu.cn

abstract

unlike vision and language data, which usually have a unique format, molecules can naturally be characterized using different chemical formulations. one can view a molecule as a 2d graph or define it as a collection of atoms located in a 3d space. for molecular representation learning, most previous works designed neural networks only for a particular data format, making the learned models likely to fail for other data formats. we believe a general-purpose neural network model for chemistry should be able to handle molecular tasks across data modalities. to achieve this goal, in this work we develop a novel transformer-based molecular model called transformer-m, which can take molecular data of 2d or 3d formats as input and generate meaningful semantic representations. using the standard transformer as the backbone architecture, transformer-m develops two separate channels to encode 2d and 3d structural information and incorporates them with the atom features in the network modules. when the input data is in a particular format, the corresponding channel will be activated, and the other will be disabled. by training on 2d and 3d molecular data with properly designed supervised signals, transformer-m automatically learns to leverage knowledge from different data modalities and correctly capture the representations. we conducted extensive experiments for transformer-m.
all empirical results show that transformer-m can simultaneously achieve strong performance on 2d and 3d tasks, suggesting its broad applicability. the code and models will be made publicly available at https://github.com/lsj2408/transformer-m. introduction deep learning approaches have revolutionized many domains, including computer vision (he et al., 2016), natural language processing (devlin et al., 2019; brown et al., 2020), and games (mnih et al., 2013; silver et al., 2016). recently, researchers have started investigating whether the power of neural networks could help solve important scientific problems in chemistry, e.g., predicting the property of molecules and simulating the molecular dynamics from large-scale training data (hu et al., 2020a; 2021; zhang et al., 2018; chanussot et al., 2020). one key difference between chemistry and conventional domains such as vision and language is the multimodality of data. in vision and language, a data instance is usually characterized in a particular form. for example, an image is defined as rgb values in a pixel grid, while a sentence is defined as tokens in a sequence. in contrast, molecules naturally have different chemical formulations. a molecule can be represented as a sequence (weininger, 1988), a 2d graph (wiswesser, 1985), or a collection of atoms located in a 3d space. 2d and 3d structures are the most popularly used formulations as many valuable properties and statistics can be obtained from them (chmiela et al., 2017; stokes et al., 2020). however, as far as we know, most previous works focus on designing neural network models for either 2d or 3d structures, making the model learned in one form fail to be applied in tasks of the other form. we argue that a general-purpose neural network model in chemistry should at least be able to handle molecular tasks across data modalities. 
in this paper, we take the first step toward this goal by developing transformer-m, a versatile transformer-based molecular model that performs well for both 2d and 3d molecular representation learning.

∗these two authors contributed equally to this project. †correspondence to: di he <dihe@pku.edu.cn> and liwei wang <wanglw@pku.edu.cn>.

note that for a molecule, its 2d and 3d forms describe the same collection of atoms but use different characterizations of the structure. therefore, the key challenge is to design a model that is expressive and compatible in capturing structural knowledge in different formulations and to train the parameters to learn from both kinds of information. transformer is more favorable than other architectures as it can explicitly plug structural signals into the model as bias terms (e.g., positional encodings (vaswani et al., 2017; raffel et al., 2020)). we can conveniently set 2d and 3d structural information as different bias terms through separate channels and incorporate them with the atom features in the attention layers. architecture. the backbone network of our transformer-m is composed of standard transformer blocks. we develop two separate channels to encode 2d and 3d structural information. the 2d channel uses degree encoding, shortest path distance encoding, and edge encoding extracted from the 2d graph structure, following ying et al. (2021a). the shortest path distance encoding and edge encoding reflect the spatial relations and bond features of a pair of atoms and are used as bias terms in the softmax attention. the degree encoding is added to the atom features in the input layer. for the 3d channel, we follow shi et al. (2022) in using the 3d distance encoding to encode the spatial distance between atoms in the 3d geometric structure. each atom pair's euclidean distance is encoded via the gaussian basis kernel function (scholkopf et al., 1997) and is used as a bias term in the softmax attention.
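the gaussian basis kernel encoding just described can be sketched as follows (an illustrative sketch only: the kernel count, centers, and width below are fixed hypothetical values, whereas the actual model learns such parameters and projects the features to per-head attention biases):

```python
import numpy as np

def gaussian_basis(dist, centers, width):
    """encode a scalar distance with K gaussian kernels:
    phi_k(d) = exp(-(d - mu_k)^2 / (2 * width^2))."""
    return np.exp(-((dist - centers) ** 2) / (2.0 * width ** 2))

def pairwise_distance_features(coords, n_kernels=16, cutoff=5.0):
    """K-dim feature for every atom pair, from 3d coordinates."""
    centers = np.linspace(0.0, cutoff, n_kernels)
    width = cutoff / n_kernels
    # all pairwise euclidean distances, shape (n_atoms, n_atoms)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    return gaussian_basis(d[..., None], centers, width)

coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
feats = pairwise_distance_features(coords)
print(feats.shape)  # (3, 3, 16)
```

each pair's feature vector peaks at the kernels whose centers are nearest to the pair's distance, giving a smooth, differentiable encoding of geometry.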
for each atom, we sum up the 3d distance encodings between it and all other atoms and add the sum to the atom features in the input layer. see figure 1 for an illustration. training. except for the parameters in the two structural channels, all other parameters in transformer-m (e.g., self-attention and feed-forward networks) are shared across data modalities. we design a joint-training approach for transformer-m to learn its parameters. during training, when the instances in a batch are only associated with 2d graph structures, the 2d channel will be activated, and the 3d channel will be disabled. similarly, when the instances in a batch use 3d geometric structures, the 3d channel will be activated, and the 2d channel will be disabled. when both 2d and 3d information are given, both channels will be activated. in such a way, we can collect 2d and 3d data from separate databases and train transformer-m with different training objectives, making the training process more flexible. we expect a single model to learn to identify and incorporate information from different modalities and efficiently utilize the parameters, leading to better generalization performance. experimental results. we use the pcqm4mv2 dataset in the ogb large-scale challenge (ogb-lsc) (hu et al., 2021) to train our transformer-m, which consists of 3.4 million molecules in both 2d and 3d forms. the model is trained to predict the pre-computed homo-lumo gap of each data instance in different formats, with a pretext 3d denoising task specifically for 3d data. with the pre-trained model, we directly use or fine-tune the parameters for various molecular tasks of different data formats. first, we show that on the validation set of the pcqm4mv2 task, which only contains 2d molecular graphs, our transformer-m surpasses all previous works by a large margin. the improvement is credited to the joint training, which effectively mitigates the overfitting problem.
second, on pdbbind (wang et al., 2004; 2005b) (2d&3d), the fine-tuned transformer-m achieves state-of-the-art performance compared to strong baselines. lastly, on the qm9 (ramakrishnan et al., 2014) (3d) benchmark, the fine-tuned transformer-m models achieve competitive performance compared to recent methods. all results show that our transformer-m has the potential to be used as a general-purpose model in a broad range of applications in chemistry. related works neural networks for learning 2d molecular representations. graph neural networks (gnns) are widely used in molecular graph representation learning (kipf & welling, 2016; hamilton et al., 2017; gilmer et al., 2017; xu et al., 2019; veličković et al., 2018). a gnn learns node and graph representations by recursively aggregating (i.e., message passing) and updating the node representations from neighbor representations. different architectures are developed by using different aggregation and update strategies. we refer the readers to wu et al. (2020) for a comprehensive survey. recently, many works extended the transformer model to graph tasks (dwivedi & bresson, 2020; kreuzer et al., 2021; ying et al., 2021a; luo et al., 2022; kim et al., 2022; rampášek et al., 2022; park et al., 2022; hussain et al., 2022; zhang et al., 2023). seminal works include graphormer (ying et al., 2021a), which developed graph structural encodings and used them in a standard transformer model. neural networks for learning 3d molecular representations. learning molecular representations with 3d geometric information is essential in many applications, such as molecular dynamics simulation. recently, researchers have designed architectures to preserve invariant and equivariant properties under several necessary transformations like rotation and translation. schütt et al. (2017) used continuous-filter convolutional layers to model quantum interactions in molecules. thomas et al.
(2018) used filters built from spherical harmonics to construct a rotation- and translation-equivariant neural network. klicpera et al. (2020) proposed directional message passing, which ensures the embeddings are rotationally equivariant. liu et al. (2022); wang et al. (2022) use spherical coordinates to capture geometric information and achieve equivariance. hutchinson et al. (2021); thölke & de fabritiis (2021) built transformer models preserving equivariant properties. shi et al. (2022) extended ying et al. (2021a) to a 3d transformer model which attains better results on large-scale molecular modeling challenges (chanussot et al., 2020). multi-view learning for molecules. the 2d graph structure and 3d geometric structure can be considered different views of the same molecule. inspired by the contrastive pre-training approach in vision (chen et al., 2020; he et al., 2020; radford et al., 2021), many works studied pre-training methods for molecules by jointly using the 2d and 3d information. stärk et al. (2022) used two encoders to encode the 2d and 3d molecular information separately while maximizing the mutual information between the representations. liu et al. (2021a) derived the graphmvp framework, which uses contrastive learning and reconstruction to pre-train a 2d encoder and a 3d encoder. zhu et al. (2022) unified the 2d and 3d pre-training methods above and proposed a 2d gnn model that can be enhanced by 3d geometric features. different from these works, we aim to develop a single model which is compatible with both 2d and 3d molecular tasks. furthermore, all the above works train models using paired 2d and 3d data, while such paired data is not required to train our model. general-purpose models. building a single agent that works for multiple tasks, even across modalities, is a recent trend in deep learning.
in the early years, researchers found that a single multilingual translation model can translate tens of languages using the same weights and perform better than a bilingual translation model for rare languages (lample & conneau, 2019; conneau et al., 2019; xue et al., 2020; liu et al., 2020). large-scale language models (devlin et al., 2019; brown et al., 2020) are another example, as they can be applied to different downstream tasks using in-context learning or fine-tuning. reed et al. (2022) further pushed the boundary by building a single generalist agent, gato. this agent uses the same network with the same weights but can play atari, caption images, and hold conversations like a human. our work also lies in this direction. we focus on developing a general-purpose model in chemistry, which can take molecules in different formats as input and perform well on various molecular tasks with a small amount of additional training data. 3 transformer-m in this section, we introduce transformer-m, a versatile transformer serving as a general architecture for 2d and 3d molecular representation learning. first, we introduce notations and recap the preliminaries of the backbone transformer architecture (section 3.1). after that, we present the proposed transformer-m model with two structural channels for different data modalities (section 3.2). notations and the backbone transformer a molecule m is made up of a collection of atoms held together by attractive forces. we denote x ∈ rn×d as the atom feature matrix, where n is the number of atoms and d is the feature dimension. the structure of m can be represented in different formulations, such as the 2d graph structure and the 3d geometric structure. for the 2d graph structure, atoms are explicitly connected by chemical bonds, and we define m2d = (x, e), where e(i,j) ∈ e denotes the edge feature (i.e., the type of the bond) between atoms i and j if the edge exists.
for the 3d geometric structure, for each atom i, its position ri in the cartesian coordinate system is provided. we define m3d = (x, r), where r = {r1, ..., rn} and ri ∈ r3. our goal is to design a parametric model which can take either m2d or m3d (or both of them) as input, obtain contextual representations, and make predictions on downstream tasks. transformer layer. the backbone architecture we use in this work is the transformer model (vaswani et al., 2017). a transformer is composed of stacked transformer blocks. a transformer block consists of two layers: a self-attention layer followed by a feed-forward layer, with both layers having normalization (e.g., layernorm (ba et al., 2016)) and skip connections (he et al., 2016). denote x(l) as the input to the (l + 1)-th block and define x(0) = x. for an input x(l), the (l + 1)-th block works as follows:

$$A^h(X^{(l)}) = \mathrm{softmax}\left(\frac{X^{(l)} W_Q^{l,h} \left(X^{(l)} W_K^{l,h}\right)^{\top}}{\sqrt{d}}\right); \quad \hat{X}^{(l)} = X^{(l)} + \sum_{h=1}^{H} A^h(X^{(l)})\, X^{(l)} W_V^{l,h} W_O^{l,h}; \quad X^{(l+1)} = \hat{X}^{(l)} + \mathrm{GELU}\left(\hat{X}^{(l)} W_1^{l}\right) W_2^{l},$$

where $W_O^{l,h} \in \mathbb{R}^{d_H \times d}$, $W_Q^{l,h}, W_K^{l,h}, W_V^{l,h} \in \mathbb{R}^{d \times d_H}$, $W_1^{l} \in \mathbb{R}^{d \times r}$, and $W_2^{l} \in \mathbb{R}^{r \times d}$. $H$ is the number of attention heads, $d_H$ is the dimension of each head, and $r$ is the dimension of the hidden layer. $A^h(X)$ is usually referred to as the attention matrix. positional encoding. another essential component in the transformer is positional encoding. note that the self-attention layer and the feed-forward layer do not make use of the order of input elements (e.g., word tokens), making the model unable to capture the structural information. the original paper (vaswani et al., 2017) developed effective positional encodings to encode the sentence structural information and explicitly integrated them as bias terms into the model. soon, many works realized that positional encoding plays a crucial role in extending the standard transformer to more complicated data structures beyond language.
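the transformer block described above can be sketched in numpy for a single attention head; layer normalization and the multi-head sum are omitted for brevity, and the shapes follow the definitions in the text:

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with max-subtraction for numerical stability."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def gelu(x):
    """tanh approximation of the GELU activation."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def transformer_block(X, Wq, Wk, Wv, Wo, W1, W2):
    """One single-head transformer block: self-attention + FFN, each with a skip."""
    d = X.shape[1]
    A = softmax(X @ Wq @ (X @ Wk).T / np.sqrt(d))  # attention matrix A(X)
    X_hat = X + A @ X @ Wv @ Wo                    # self-attention sub-layer
    return X_hat + gelu(X_hat @ W1) @ W2           # feed-forward sub-layer
```

each row of the attention matrix is a probability distribution over atoms, which is exactly where the structural bias terms of transformer-m are injected.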
by carefully designing structural encodings using domain knowledge, the transformer has been successfully applied to the image and graph domains and has achieved impressive performance (dosovitskiy et al., 2020; liu et al., 2021b; ying et al., 2021a). transformer-m and training strategy as we can see, the two molecular formulations defined in section 3.1 use the same atom feature space but different characterizations of the structure (graph structure e vs. geometric structure r). therefore, the key challenge is to design a compatible architecture that can utilize either structural information in e or r (or both) and incorporate them with the atom features in a principled way. the transformer is a suitable backbone to achieve this goal, as we can encode structural information as bias terms and properly plug them into different modules. furthermore, with the transformer, we can treat e and r in a unified way by decomposing the structural information into pair-wise and atom-wise encodings. without loss of generality, we choose to use the encoding strategies in the graph and geometric transformers proposed by ying et al. (2021a); shi et al. (2022). for the sake of completeness, we briefly introduce those structural encodings and show how to leverage them in transformer-m. note that our design methodology also works with other encoding strategies (hussain et al., 2022; park et al., 2022; thölke & de fabritiis, 2021). see appendix b.5 for the detailed results. encoding pair-wise relations in e. we use two terms to encode the structural relations between any atom pairs in the graph. first, we encode the shortest path distance (spd) between two atoms to reflect their spatial relation. let $\Phi^{\mathrm{SPD}}_{ij}$ denote the spd encoding between atoms i and j, which is a learnable scalar determined by the distance of the shortest path between i and j. second, we encode the edge features (e.g., the chemical bond types) along the shortest path between i and j to reflect the bond information.
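the spd itself is just the number of bonds on the shortest path between two atoms; the learnable encoding is indexed by this integer. a minimal sketch of computing the distances with breadth-first search (the adjacency-list representation is our own choice):

```python
from collections import deque

def shortest_path_distances(adj):
    """All-pairs shortest path distances (in bonds) of a molecular graph via BFS.

    adj: adjacency list, adj[i] = list of neighbors of atom i.
    Returns dist[i][j]; unreachable pairs stay at -1.
    """
    n = len(adj)
    dist = [[-1] * n for _ in range(n)]
    for s in range(n):
        dist[s][s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if dist[s][v] == -1:       # first visit = shortest distance
                    dist[s][v] = dist[s][u] + 1
                    q.append(v)
    return dist
```

bfs suffices because molecular graphs are unweighted; the resulting integer matrix indexes the learnable scalar table for the spd bias.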
for most molecules, there exists only one distinct shortest path between any two atoms. denote the edges in the shortest path from i to j as $\mathrm{SP}_{ij} = (e_1, e_2, ..., e_N)$; the edge encoding between i and j is then defined as $\Phi^{\mathrm{edge}}_{ij} = \frac{1}{N}\sum_{n=1}^{N} e_n (w_n)^{\top}$, where $w_n$ are learnable vectors of the same dimension as the edge feature. denote $\Phi^{\mathrm{SPD}}$ and $\Phi^{\mathrm{edge}}$ as the matrix forms of the spd encoding and edge encoding, both of which are of shape n × n. encoding pair-wise relations in r. we encode the euclidean distance to reflect the spatial relation between any pair of atoms in the 3d space. for each atom pair (i, j), we first process their euclidean distance with the gaussian basis kernel function (schölkopf et al., 1997): $\psi^{k}_{(i,j)} = \exp\left(-\frac{1}{2}\left(\frac{\gamma_{(i,j)}\|r_i - r_j\| + \beta_{(i,j)} - \mu^{k}}{|\sigma^{k}|}\right)^{2}\right)$, $k = 1, ..., K$, where K is the number of gaussian basis kernels. then the 3d distance encoding is obtained according to $\Phi^{\mathrm{3D\,distance}}_{(i,j)} = \mathrm{GELU}\left(\psi_{(i,j)} W_D^{1}\right) W_D^{2}$, where $\psi_{(i,j)} = [\psi^{1}_{(i,j)}; ...; \psi^{K}_{(i,j)}]^{\top}$, and $W_D^{1} \in \mathbb{R}^{K \times K}$, $W_D^{2} \in \mathbb{R}^{K \times 1}$ are learnable parameters. $\gamma_{(i,j)}, \beta_{(i,j)}$ are learnable scalars indexed by the pair of atom types, and $\mu^{k}, \sigma^{k}$ are the learnable kernel center and learnable scaling factor of the k-th gaussian basis kernel. denote $\Phi^{\mathrm{3D\,distance}}$ as the matrix form of the 3d distance encoding, whose shape is n × n. figure 1: an illustration of our transformer-m model architecture. we build two channels on the backbone transformer. the red channel is activated for data with 2d graph structures to incorporate degree, shortest path distance, and edge information. the purple channel is activated for data with 3d geometric structures to leverage euclidean distance information. different encodings are located in appropriate modules. integrating $\Phi^{\mathrm{SPD}}$, $\Phi^{\mathrm{edge}}$ and $\Phi^{\mathrm{3D\,distance}}$ in transformer-m. all pair-wise encodings defined above capture the interatomic information, which is in a similar spirit to the relative positional encoding for sequential tasks (raffel et al., 2020).
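the gaussian basis kernel features can be sketched as follows; here γ and β are scalars for simplicity (the text indexes them by the pair of atom types), and the −1/2 factor follows the standard gaussian form:

```python
import numpy as np

def gaussian_basis(R, gamma, beta, mu, sigma):
    """psi[i, j, k] = exp(-0.5 * ((gamma*||ri - rj|| + beta - mu_k) / |sigma_k|)**2)

    R:  n x 3 atom positions; mu, sigma: length-K kernel centers / widths.
    gamma, beta: scalars here (per-atom-type-pair in the paper).
    Returns an n x n x K feature tensor.
    """
    diff = R[:, None, :] - R[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                       # n x n distances
    z = (gamma * dist[..., None] + beta - mu) / np.abs(sigma)  # broadcast to n x n x K
    return np.exp(-0.5 * z**2)
```

each kernel responds most strongly to pair distances near its center μ^k, so the K-dimensional feature is a soft histogram of the interatomic distance.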
therefore, we similarly locate those pair-wise signals in the self-attention module to provide complementary information to the dot-product term $XW_Q(XW_K)^{\top}$. for simplicity, we omit the index of attention head h and layer l, and the modified attention matrix is defined as:

$$A(X) = \mathrm{softmax}\Bigg(\frac{XW_Q(XW_K)^{\top}}{\sqrt{d}} + \underbrace{\Phi^{\mathrm{SPD}} + \Phi^{\mathrm{edge}}}_{\text{2D pair-wise channel}} + \underbrace{\Phi^{\mathrm{3D\,distance}}}_{\text{3D pair-wise channel}}\Bigg) \qquad (4)$$

encoding atom-wise structural information in e. for atom i, eqn. (4) computes the normalized weights according to the semantic (first term) and spatial relations (last three terms) between i and other atoms. however, the information is still not sufficient. for example, the importance (i.e., centrality) of each atom is missing in the attention. for each atom i, we use its degree as the centrality information. formally, let $\psi^{\mathrm{degree}}_{i}$ denote the degree encoding of atom i, which is a d-dimensional learnable vector determined by the degree of the atom. denote $\Psi^{\mathrm{degree}} = [\psi^{\mathrm{degree}}_{1}, \psi^{\mathrm{degree}}_{2}, ..., \psi^{\mathrm{degree}}_{n}]$ as the centrality encoding of all the atoms, which is of shape n × d. encoding atom-wise structural information in r. similar to the 2d atom-wise centrality encoding, for geometric data, we encode the centrality of each atom in the 3d space. for each atom i, we sum up the 3d distance encodings between it and all other atoms. let $\psi^{\mathrm{sum\ of\ 3D\ distance}}_{i}$ denote the centrality encoding of atom i; we have $\psi^{\mathrm{sum\ of\ 3D\ distance}}_{i} = \sum_{j \in [n]} \psi_{(i,j)} W_D^{3}$, where $W_D^{3} \in \mathbb{R}^{K \times d}$ is a learnable weight matrix. similarly, we define $\Psi^{\mathrm{sum\ of\ 3D\ distance}}$ as the encoding of all atoms, whose shape is n × d. integrating $\Psi^{\mathrm{degree}}$ and $\Psi^{\mathrm{sum\ of\ 3D\ distance}}$ in transformer-m. we add the atom-wise encodings of the 2d and 3d structures to the atom features in the input layer.
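the modified attention can be sketched directly from this definition; the single `bias` argument stands for the sum of the activated pair-wise channels, and the shapes are illustrative:

```python
import numpy as np

def biased_attention(X, Wq, Wk, bias):
    """softmax(X Wq (X Wk)^T / sqrt(d) + bias), with structural bias terms."""
    d = X.shape[1]
    logits = X @ Wq @ (X @ Wk).T / np.sqrt(d) + bias
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))  # stable softmax
    return e / e.sum(axis=-1, keepdims=True)
```

because the bias enters before the softmax, a very negative entry effectively masks an atom pair while a positive entry raises its attention weight, which is how the structural channels steer the attention.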
formally, the input $X^{(0)}$ is modified as:

$$X^{(0)} = X + \underbrace{\Psi^{\mathrm{degree}}}_{\text{2D atom-wise channel}} + \underbrace{\Psi^{\mathrm{sum\ of\ 3D\ distance}}}_{\text{3D atom-wise channel}},$$

in this simple way, the structural information of molecules in both 2d and 3d formats is integrated into one transformer model. it is easy to check that transformer-m preserves equivariant properties for both data formats. training. the next step is to learn the parameters in transformer-m to capture meaningful representations from each data format. to achieve this goal, we develop a simple and flexible joint training method to learn transformer-m. we first collect datasets in different formats (2d/3d), define supervised/self-supervised tasks (e.g., energy regression) on each format, and train the model on all the data toward each objective, respectively. to be concrete, during training, if a data instance comes from a dataset in the 2d format, the 2d channel is activated, and the 3d channel is disabled. the model parameters will be optimized to minimize the corresponding (i.e., 2d) objective. when a data instance comes from a dataset in the 3d format, only the 3d channel is activated, and the model will learn to minimize the 3d objective. both channels are activated if the model takes molecules in both 2d and 3d formats as input. compared with multi-view learning approaches, we can train transformer-m using unpaired 2d and 3d data, making the training process more flexible. transformer-m may also generalize better due to the joint training. several previous works (liu et al., 2021a) observed that the 2d graph structure and 3d geometric structure contain complementary chemical knowledge. for example, the 2d graph structure only contains bonds with bond type, while the 3d geometric structure contains fine-grained information such as lengths and angles.
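the atom-wise channels amount to two additive terms on the input features; a minimal sketch, where the degree lookup table `W_deg` is our own illustrative parameterization of the learnable degree encoding:

```python
import numpy as np

def input_encoding(X, degrees, psi_3d, W_deg, W3):
    """X(0) = X + Psi_degree + Psi_sum_of_3D_distance.

    X:       n x d atom features
    degrees: n integer atom degrees; W_deg: lookup table of shape max_deg x d
    psi_3d:  n x n x K Gaussian basis features; W3: K x d projection
    """
    psi_degree = W_deg[degrees]            # n x d degree encodings (2D channel)
    psi_sum_3d = psi_3d.sum(axis=1) @ W3   # sum over atoms j, then project (3D channel)
    return X + psi_degree + psi_sum_3d
```

disabling a channel simply corresponds to dropping its additive term, matching the mode-dependent training described above.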
as another example, the 3d geometric structures are usually obtained from computational simulations like density functional theory (dft) (burke, 2012), which could have approximation errors. the 2d graphs are constructed by domain experts, which, to some extent, provide references to the 3d structure. by jointly training using 2d and 3d data with parameter sharing, our model can learn more chemical knowledge instead of overfitting to data noise and perform better on both 2d and 3d tasks. future directions. as an initial attempt, our transformer-m opens up a way to develop general-purpose molecular models that handle diverse chemical tasks in different data formats. we believe it is a starting point with more possibilities to explore in the future. for example, in this work, we take a simple approach and linearly combine the structural information of 2d and 3d structures, and we believe there should be other efficient ways to fuse such encodings. our model can also be combined with previous multi-view contrastive learning approaches. it is worth investigating how to pre-train our model using those methods. experiments in this section, we empirically study the performance of transformer-m. first, we pre-train our model on the pcqm4mv2 training set from the ogb large-scale challenge (hu et al., 2021) (section 4.1). with the pre-trained model, we conduct experiments on molecular tasks in different data formats and evaluate the versatility and effectiveness of our transformer-m. due to space limitations, we study three representative tasks, pcqm4mv2 (2d, section 4.2), pdbbind (2d & 3d, section 4.3) and qm9 (3d, section 4.4). ablation studies are presented in section 4.5. all code is implemented based on the official codebase of graphormer (ying et al., 2021a) in pytorch (paszke et al., 2019). large-scale pre-training our transformer-m is pre-trained on the training set of pcqm4mv2 from the ogb large-scale challenge (hu et al., 2021).
the total number of training samples is 3.37 million. each molecule is associated with its 2d graph structure and 3d geometric structure. the homo-lumo energy gap of each molecule is provided as its label, which is obtained by dft-based methods (burke, 2012). we follow ying et al. (2021a) and employ a 12-layer transformer-m model. the dimension of hidden layers and feed-forward layers is set to 768. the number of attention heads is set to 32. the number of gaussian basis kernels is set to 128. to train transformer-m, we provide three modes for each data instance: (1) activate the 2d channels and disable the 3d channels (2d mode); (2) activate the 3d channels and disable the 2d channels (3d mode); (3) activate both channels (2d+3d mode). the mode of each data instance during training is randomly drawn on the fly according to a pre-defined distribution, implemented similarly to dropout (srivastava et al., 2014). in this work, we use two training objectives. the first one is a supervised learning objective, which aims to predict the homo-lumo energy gap of each molecule. besides, we also use a self-supervised learning objective called 3d position denoising (godwin et al., 2022; zaidi et al., 2022), which is particularly effective. during training, if a data instance is in the 3d mode, we add gaussian noise to the position of each atom and require the model to predict the noise from the noisy input. the model is optimized to minimize a linear combination of the two objectives above. details of settings are in appendix b.1. table 1: results on pcqm4mv2 validation set in ogb large-scale challenge (hu et al., 2021). the evaluation metric is the mean absolute error (mae) [ev]. we report the official results of baselines from ogb and use ∗ to indicate our implemented results. bold values indicate the best performance. 
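the 3d position denoising pretext task can be sketched as follows; the noise scale σ here is an illustrative hyperparameter, not the value used in the paper:

```python
import numpy as np

def denoising_example(R, sigma=0.2, rng=None):
    """3D position denoising: corrupt coordinates with Gaussian noise; the model
    is trained to predict the added noise from the noisy input.

    R: n x 3 atom positions. Returns (noisy positions, regression target = noise).
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(scale=sigma, size=R.shape)
    return R + noise, noise
```

the model sees only the noisy coordinates and regresses the per-atom noise vectors, which acts as a self-supervised signal on 3d data alongside the supervised gap prediction.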
[table 1 compares mlp-fingerprint (hu et al., 2021), gcn (kipf & welling, 2016), gin (xu et al., 2019), gine-vn (brossard et al., 2020; gilmer et al., 2017), gcn-vn (kipf & welling, 2016; gilmer et al., 2017), gin-vn (xu et al., 2019; gilmer et al., 2017), deepergcn-vn (li et al., 2020; gilmer et al., 2017), graphgps-small (rampášek et al., 2022), coatgin (cui, 2022), tokengt (kim et al., 2022), grpe-base (park et al., 2022), egt (hussain et al., 2022), grpe-large (park et al., 2022), graphormer (ying et al., 2021a; shi et al., 2022), graphgps-base (rampášek et al., 2022), and transformer-m (ours); the mae values appear in the table.] 4.2 pcqm4mv2 performance (2d) after the model is pre-trained, we evaluate our transformer-m on the validation set of pcqm4mv2. note that the validation set of pcqm4mv2 consists of molecules in the 2d format only. therefore, we can use it to evaluate how well transformer-m performs on 2d molecular data. the goal of the task is to predict the homo-lumo energy gap, and the evaluation metric is the mean absolute error (mae). as our training objectives include the homo-lumo gap prediction task, we did not fine-tune the model parameters on any data. during inference, only the 2d channels are activated. we choose several strong baselines covering message passing neural network (mpnn) variants and graph transformers. detailed descriptions of baselines are presented in appendix b.2. the results are shown in table 1. it can be easily seen that our transformer-m surpasses all baselines by a large margin, e.g., an 8.2% relative mae reduction compared to the previous best model (rampášek et al., 2022), establishing a new state-of-the-art on the pcqm4mv2 dataset. note that our general architecture is the same as the graphormer model (ying et al., 2021a). the only difference between transformer-m and the graphormer baseline is that graphormer is trained on 2d data only, while transformer-m is trained using both 2d and 3d structural information.
therefore, we can conclude that transformer-m performs well on 2d molecular data, and the 2d-3d joint training with shared parameters indeed helps the model learn more chemical knowledge. 4.3 pdbbind performance (2d & 3d) to verify the compatibility of our transformer-m, we further fine-tune our model on the pdbbind dataset (version 2016, wang et al. (2004; 2005b)), one of the most widely used datasets for structure-based virtual screening (jiménez et al., 2018; stepniewska-dziubinska et al., 2018; zheng et al., 2019). the pdbbind dataset consists of protein-ligand complexes as data instances, which are obtained in bioassay experiments associated with the pka (or −log kd, −log ki) affinity values. for each data instance, the 3d geometric structures are provided and the 2d graph structures are constructed via pre-defined rules. the task requires models to predict the binding affinity of protein-ligand complexes, which is vital for drug discovery. after being pre-trained on the pcqm4mv2 training set, our transformer-m model is fine-tuned and evaluated on the core set of the pdbbind dataset. we compare our model with competitive baselines covering classical methods, cnn-based methods, and gnns. all experiments are repeated five times with different seeds, and average performance is reported. due to space limitations, we present the details of baselines and experiment settings in appendix b.3. the results are presented in table 2. our transformer-m consistently outperforms all the baselines on all evaluation metrics by a large margin, e.g., a 3.3% absolute improvement on pearson's correlation coefficient (r). it is worth noting that data instances of the pdbbind dataset are protein-ligand complexes, while our model is pre-trained on simple molecules, demonstrating the transferability of transformer-m. table 2: results on pdbbind core set (version 2016) (wang et al., 2004; 2005b).
the evaluation metrics include pearson's correlation coefficient (r), mean absolute error (mae), root-mean squared error (rmse), and standard deviation (sd). we report the official results of baselines from li et al. (2021). bold values indicate the best performance. [table 2 reports rmse, sd, and mae for each method on the pdbbind core set.] table 3: results on qm9 (ramakrishnan et al., 2014). the evaluation metric is the mean absolute error (mae). we report the official results of baselines from thölke & de fabritiis (2021); godwin et al. (2022); jiao et al. (2022). bold values indicate the best performance. [table 3 compares edgepred (hamilton et al., 2017), attrmask (hu et al., 2019), infograph (sun et al., 2019), graphcl (you et al., 2020), gpt-gnn (hu et al., 2020b), graphmvp (jing et al., 2021), gem (fang et al., 2021), 3d infomax (stärk et al., 2022), pospred (jiao et al., 2022), 3d-mgp (jiao et al., 2022), schnet (schütt et al., 2017), physnet (unke & meuwly, 2019), cormorant (anderson et al., 2019), dimenet++ (klicpera et al., 2020), painn (schütt et al., 2021), lietf (hutchinson et al., 2021), torchmd-net (thölke & de fabritiis, 2021), egnn (satorras et al., 2021), noisynode (godwin et al., 2022), and transformer-m (ours) on targets including $\epsilon_{\mathrm{HOMO}}$, $\epsilon_{\mathrm{LUMO}}$, $\Delta\epsilon$, $G$, $U$, and $c_v$.] qm9 performance (3d) we use the qm9 dataset (ramakrishnan et al., 2014) to evaluate our transformer-m on molecular tasks in the 3d data format. qm9 is a quantum chemistry benchmark consisting of 134k stable small organic molecules. these molecules correspond to the subset of all 133,885 species out of the gdb-17 chemical universe of 166 billion organic molecules. each molecule is associated with 12 targets covering its energetic, electronic, and thermodynamic properties. the 3d geometric structure of the molecule is used as input. following thölke & de fabritiis (2021), we randomly choose 10,000 and 10,831 molecules for validation and test evaluation, respectively. the remaining molecules are used to fine-tune our transformer-m model.
we observed that several previous works used different data splitting ratios or did not describe their evaluation details. for a fair comparison, we choose baselines that use similar splitting ratios in the original papers. the details of baselines and experiment settings are presented in appendix b.4. the results are presented in table 3. it can be seen that our transformer-m achieves competitive performance compared to those baselines, suggesting that the model is compatible with 3d molecular data. in particular, transformer-m performs best on homo, lumo, and homo-lumo gap predictions. this indicates that the knowledge learned in the pre-training task transfers better to similar tasks. note that the model does not perform as well on some other tasks. we believe transformer-m can be improved in several aspects, including employing a carefully designed output layer (thölke & de fabritiis, 2021) or pre-training with more self-supervised training signals. ablation study
adaptive robust evidential optimization for open set detection from imbalanced data hitesh sapkota & qi yu rochester institute of technology {hxs1943, qi.yu}@rit.edu abstract open set detection (osd) aims at identifying data samples of an unknown class (i.e., open set) from those of known classes (i.e., closed set) based on a model trained from closed set samples. however, a closed set may involve a highly imbalanced class distribution. accurately differentiating open set samples from those of a minority class in the closed set poses a fundamental challenge, as the model may be equally uncertain when recognizing samples from the minority class. in this paper, we propose adaptive robust evidential optimization (areo), which offers a principled way to quantify sample uncertainty through evidential learning while optimally balancing the model training over all classes in the closed set through adaptive distributionally robust optimization (dro). to prevent the model from primarily focusing on the most difficult samples, as standard dro does, adaptive dro training is performed, governed by a novel multi-scheduler learning mechanism to ensure an optimal model training behavior that gives sufficient attention to the difficult samples and the minority class while remaining capable of learning common patterns from the majority classes. our experimental results on multiple real-world datasets demonstrate that the proposed model outputs uncertainty scores that can clearly separate samples from closed and open sets, respectively, and the detection results outperform the competitive baselines. introduction in many practical scenarios (e.g., drug discovery, anomaly detection, etc.), it is likely to encounter unknown samples, and it is desirable that the model can properly detect these samples as unknown.
various approaches have been proposed to tackle the unknown sample detection problem (bendale & boult, 2016; sun et al., 2020), using techniques such as weibull-calibrated svm (w-svm) (scheirer et al., 2013), reconstruction error (zhang & patel, 2017), nearest neighbor (júnior et al., 2016), and quasi-linear function (cevikalp & yavuz, 2017). as a representative example, the openmax framework removes softmax from the last layer of a neural network and includes an additional layer to produce the probability of a sample being unknown. this essentially redistributes the probability mass to (k + 1) classes (with unknown being a new class). multiple efforts follow this direction (sun et al., 2020; neal et al., 2018). while this technique is viable to detect open-set samples, the additional layer is included during the testing phase. as a result, the training still follows the closed set assumption. recent advances in uncertainty quantification provide a more systematic way to break the closed set limitation by explicitly modeling the uncertainty mass that corresponds to the unknown class. one representative work is the evidential deep learning (edl) model (sensoy et al., 2018), which treats the predicted multi-class probability as a multinomial opinion according to subjective logic (jøsang, 2016). similar to edl, prior networks (pns) (malinin & gales, 2018) explicitly consider the distributional uncertainty that quantifies the distributional mismatch. the posterior network further improves pns by leveraging normalizing flows for density estimation in the latent space to predict a posterior distribution, which can be used to identify out-of-distribution (ood) samples from in-distribution ones (charpentier et al., 2020).
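the subjective-logic view behind edl can be made concrete: with non-negative per-class evidence $e_k$ and dirichlet parameters $\alpha_k = e_k + 1$, the belief masses and the uncertainty mass (which corresponds to the unknown class) follow the standard edl computation of sensoy et al. (2018); a minimal sketch:

```python
def edl_opinion(evidence):
    """Map non-negative per-class evidence to belief masses and uncertainty.

    alpha_k = e_k + 1,  S = sum_k alpha_k,
    belief_k = e_k / S,  uncertainty u = K / S  (so sum(beliefs) + u = 1).
    """
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    beliefs = [e / S for e in evidence]
    return beliefs, K / S
```

with zero evidence for every class the uncertainty mass is 1, which is why a sample that produces little evidence can be flagged as belonging to the open set.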
despite the promising progress in osd that focuses on differentiating samples from the closed and open sets, respectively, limited attention has been devoted to the situation where the closed set involves highly imbalanced classes, which may be quite common in many practical settings. for example, for anomaly detection, the known types of anomalies available for model training are usually unevenly distributed into multiple categories (e.g., car accident vs. shooting). similarly, for computer-aided medical diagnosis, the known diseases (to the model) may be highly imbalanced based on the available cases. thus, following the standard empirical risk minimization (erm) framework for training, the model may not learn properly from the minority class due to the lack of positive samples. as a result, it is more likely to misidentify a minority-class sample as an unknown-class sample during osd, leading to a high false-positive rate. distributionally robust optimization (dro) offers an effective means to handle the imbalanced class distribution in the closed set setting (qi et al., 2020; zhu et al., 2019). in dro, the worst-case weighted loss is optimized, where the weights are searched in a given neighborhood (referred to as the uncertainty set) of the empirical sample distribution such that the overall loss is maximized. by expanding the uncertainty set, the model is encouraged to assign higher weights to difficult samples. as a result, samples from the minority class will be given more emphasis during model training if not properly learned (which incurs a larger loss). another common solution to handle imbalanced class distribution in the closed set is through oversampling to achieve a more balanced class distribution (chawla et al., 2002). while both oversampling and dro may help to improve the closed set performance, neither of them is adequate to address osd from imbalanced data.
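the dro idea can be illustrated with a kl-regularized uncertainty set, for which the worst-case sample weights have a well-known closed form, $w_i \propto \exp(\ell_i / \lambda)$; a smaller λ corresponds to a larger effective uncertainty set and shifts weight toward difficult samples. this is a sketch of the general mechanism, not areo's exact construction:

```python
import math

def dro_weights(losses, lam):
    """Worst-case sample weights for a KL-regularized uncertainty set:
    w_i = exp(loss_i / lam) / sum_j exp(loss_j / lam).
    lam -> inf recovers uniform (ERM-like) weights; small lam focuses on hard samples.
    """
    scaled = [l / lam for l in losses]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    Z = sum(exps)
    return [e / Z for e in exps]
```

this also shows the failure mode the paper targets: as λ shrinks, nearly all of the weight collapses onto the single hardest sample, which motivates scheduling the uncertainty-set size during training instead of fixing it.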
a fundamental challenge lies in the interplay between samples from the minority class and the difficult samples from the majority classes. as a result, simply oversampling the minority class may neglect these difficult samples. similarly, applying dro with a flexible uncertainty set may put too much emphasis on these difficult samples and ignore the minority class as well as some representative samples from the majority classes, which affects proper model training. in fact, directly applying these models for osd may lead to even worse detection performance, as evidenced by our experimental results. a few recent approaches try to address osd under the class-imbalanced setting. liu et al. (2019) leverage the visual similarity across the centroids of closed set classes to allow more effective training from the minority class samples. however, it is possible that the samples from the minority class may look quite different from most other samples, making such a strategy less effective. further, wang et al. (2022) try to push minority class samples away from open set ones in the feature space using contrastive learning. however, the final osd depends heavily on the selection of open set samples, as evidenced by our experimental results. to systematically tackle the fundamental challenge outlined above, we propose adaptive robust evidential optimization (areo), which offers a principled way to quantify sample uncertainty through evidential learning while optimally balancing the model training over all classes in the closed set through novel adaptive dro learning. to prevent the model from primarily focusing on the most difficult samples, as standard dro does, the adaptive learning strategy gradually increases the size of the uncertainty set using a multi-scheduler function (msf), which allows the model to learn from easy to hard samples. a class-ratio biased loss is further assigned to the minority class to ensure proper learning from its limited samples.
our main contribution is fourfold:
• a novel extension of dro to evidential learning, which enables principled uncertainty quantification under the class-imbalanced setting, critical for many applications, including osd;
• adaptive dro training governed by a uniquely designed multi-scheduler learning mechanism to ensure an optimal model training behavior that gives sufficient attention to the difficult samples and the minority class while remaining capable of learning common patterns from the majority classes;
• a theoretical connection to a boosting model (i.e., adaboost), which ensures the nice convergence and generalization properties of areo;
• state-of-the-art osd performance on various datasets.
2 related work
open set detection. various svm-based techniques (scheirer et al., 2013; jain et al., 2014; scheirer et al., 2014) have been proposed for osd. for instance, scheirer et al. (2013) proposed an svm-based model, which performs detection using a weibull-calibrated svm (w-svm) by leveraging extreme value theory (evt). reconstruction-based approaches have been proposed (zhang & patel, 2017), where a threshold defined over the reconstruction error is used to decide whether a sample is from a known or an unknown class. other traditional models, such as the nearest neighbor (júnior et al., 2016) and quasi-linear function (cevikalp & yavuz, 2017), have also been explored. deep learning models have been increasingly applied to open set detection (yoshihashi et al., 2019; sun et al., 2020; bendale & boult, 2016). as an example, openmax replaces the softmax function, and the softmax probability is redistributed to produce the probability of a sample being unknown (bendale & boult, 2016). sun et al. (2020) proposed vae-based open set recognition, where the probability of a sample belonging to each of the known classes is used as a proxy to detect whether the sample is known or unknown.
each known class distribution is modeled as a gaussian using the training data. some recent approaches aim to learn a more compact representation of closed set samples (cevikalp et al., 2021; yang et al., 2020) or push the open set class samples to a specific region in an embedding space for better recognition (chen et al., 2021). special loss functions (dhamija et al., 2018) and generative processes (perera et al., 2020) have also been leveraged to separate open set samples from closed set ones. recently, systematic approaches have been presented to break the closed set limitation by explicitly modeling the uncertainty mass belonging to the unknown distribution. one representative work in this line is the evidential deep learning (edl) model (sensoy et al., 2018). similarly, malinin & gales (2018) propose prior networks (pns) that explicitly consider the distributional uncertainty to quantify the distribution mismatch. despite offering a natural way to quantify uncertainty, both of these methods require ood data samples for model training, which is less practical. charpentier et al. (2020) propose posterior networks, which leverage normalizing flows for density estimation in the latent space in order to predict the posterior distribution using only in-distribution samples. despite the significant progress in osd, limited attention has been drawn to the scenario where the closed set involves highly imbalanced classes. a few recent works try to tackle this fundamental challenge of osd under the class-imbalanced setting. liu et al. (2019) propose a technique based on the assumption that visual similarity exists between head and tail classes in the closed set. a model is designed to leverage this similarity to make it more robust for recognizing minority class samples. however, such an assumption may not universally hold, which limits the applicability of the model in general settings. further, wang et al.
(2022) leverage contrastive learning to push the minority class samples away from the open set ones in the feature space during the training process. however, the final osd performance depends heavily on the training open set samples. distributionally robust optimization. distributionally robust optimization is grounded in statistical learning theory: the worst-case weighted loss is optimized by searching for the weights within a given uncertainty set (duchi & namkoong, 2019; zhu et al., 2019; namkoong & duchi, 2016). dro offers a systematic way to handle an imbalanced class distribution and has been commonly used in the supervised learning setting (qi et al., 2020; zhu et al., 2019) as well as in multiple instance learning (sapkota et al., 2021). in a similar vein, li et al. (2020) propose tilted empirical risk minimization (term), which redefines erm through the introduction of a hyperparameter $t$. depending on the value of $t$, different variants of the loss (maximum, minimum, and average) are recovered, thereby providing a unified way to perform effective training in the presence of outliers and class imbalance. while dro may help to improve the closed set performance, it is not sufficient to address the osd problem with imbalanced data. this is because dro with a flexible uncertainty set may put too much emphasis on the difficult samples and ignore the ones from the minority class as well as representative samples from the majority classes. our proposed areo model offers an adaptive learning strategy that learns from easy samples in the early training phase and gradually shifts the focus to the difficult samples. furthermore, the class-ratio biased loss ensures proper learning from the limited samples in the minority class.
3 methodology
3.1 preliminaries
evidential learning for osd. let $\mathcal{D}_N = \{X, Y\} = \{(x_1, y_1), \ldots, (x_N, y_N)\}$ be a set of training samples in the closed set.
each $x_n \in \mathbb{R}^D$ is a $D$-dimensional feature vector and $y_n \in \{0,1\}^C$ is the one-hot encoding of its class label: $y_{nj} = 1$ and $y_{nk} = 0$ for all $k \neq j$, with $j$ being the true label. following the principle of subjective logic (sl) (jøsang, 2016), we consider a total of $C+1$ mass values, with $C$ being the number of classes. we assign a belief mass $b_c$, $\forall c \in [C]$, to each singleton, which corresponds to one class in the closed set; the remaining mass is referred to as the uncertainty mass, denoted by $u$. the belief masses and the uncertainty mass are all non-negative and sum to one: $u + \sum_{c=1}^{C} b_c = 1$, $u \ge 0$ and $b_c \ge 0$. they can be evaluated as $b_c = \frac{e_c}{S}$ and $u = \frac{C}{S}$, where $S = \sum_{c=1}^{C}(e_c + 1)$ and $e_c \ge 0$ is the evidence derived for the $c$th singleton, which can be generated by a neural network with a non-negative output. the belief mass assignment in the above expression corresponds to a dirichlet distribution with the concentration parameters $\alpha_c = e_c + 1$: $\mathrm{Dir}(\mathbf{p}|\boldsymbol{\alpha}) = \frac{1}{B(\boldsymbol{\alpha})}\prod_{c=1}^{C} p_c^{\alpha_c - 1}$ for $\mathbf{p} \in \mathcal{S}_C$ and $0$ otherwise, where $\mathcal{S}_C$ is the $(C-1)$-simplex and $B(\boldsymbol{\alpha})$ is the beta function. given the evidences, the expected probability for the $c$th singleton is given by $\mathbb{E}[p_c] = \frac{\alpha_c}{S}$. consider a sample $x_n$ and let $f(x_n, \theta)$ denote the evidence vector generated by an evidential neural network parameterized by $\theta$. this allows us to fully characterize the dirichlet distribution, whose mean vector gives rise to the probability of assigning $x_n$ to each class. there are multiple ways to design a loss function to train the evidential neural network (sensoy et al., 2018). a simple but effective option is the sum-of-squares loss: $\mathcal{L}^{EL}_n(\theta) = \|y_n - \mathbb{E}[\mathbf{p}_n]\|_2^2 + \lambda_t \mathrm{KL}\big[\mathrm{Dir}(\mathbf{p}_n|\tilde{\boldsymbol{\alpha}}_n)\,\|\,\mathrm{Dir}(\mathbf{p}_n|(1,\ldots,1)^\top)\big]$ (3), where $\lambda_t = \min(1, \frac{t}{10})$ is the annealing coefficient at epoch $t$ and $\tilde{\boldsymbol{\alpha}}_n = y_n + (1 - y_n) \odot \boldsymbol{\alpha}_n$.
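as a concrete illustration of the quantities above, the following minimal sketch maps an evidence vector to belief masses, uncertainty mass, and expected class probabilities (the function name is ours, not from the paper):

```python
def evidential_outputs(evidence):
    """Map non-negative per-class evidence e_c to subjective-logic quantities:
    belief masses b_c = e_c / S, uncertainty mass u = C / S, and expected
    class probabilities E[p_c] = alpha_c / S, where alpha_c = e_c + 1 and
    S = sum_c alpha_c is the Dirichlet strength."""
    alpha = [e + 1.0 for e in evidence]   # Dirichlet concentration parameters
    S = sum(alpha)                        # Dirichlet strength
    beliefs = [e / S for e in evidence]
    uncertainty = len(evidence) / S       # u = C / S
    probs = [a / S for a in alpha]        # mean of the Dirichlet distribution
    return beliefs, uncertainty, probs
```

by construction $u + \sum_c b_c = 1$, and with zero evidence the uncertainty mass is exactly one, which is why open set samples, for which a well-trained network produces little evidence, receive a high $u$.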
besides being used as a powerful model for closed set classification, a unique benefit of evidential learning is that it offers a principled way to quantify the uncertainty mass, which is explicitly allocated to account for what is 'unknown' to the model. intuitively, a properly trained evidential model will output a high total evidence for data samples whose features were sufficiently exposed to the model during training. in contrast, it should predict a low total evidence for less representative samples in the training data. for these samples, the corresponding uncertainty mass $u$ will be large (as the total mass sums to one). as a result, the uncertainty mass fits squarely with detecting open set samples, which have not been exposed to a model trained using the closed set samples. distributionally robust optimization. distributionally robust optimization (dro) handles minority and/or difficult samples by optimizing the worst-case loss, where the weight assigned to each sample is constrained by an uncertainty set. let $\ell_n(\theta)$ be the loss for sample $x_n$ under the network parameterized by $\theta$. then the corresponding dro loss is given as $\mathcal{L}^{DRO}(\theta) = \max_{\mathbf{p} \in \mathcal{P}^{DRO}} \sum_{n=1}^{N} p_n \ell_n(\theta)$, where the uncertainty set that defines the admissible weights $\mathbf{p}$ is $\mathcal{P}^{DRO} := \{\mathbf{p} \in \mathbb{R}^N : \mathbf{p}^\top \mathbf{1} = 1,\ \mathbf{p} \ge 0,\ D_f(\mathbf{p}\,\|\,\mathbf{1}/N) \le \eta\}$, where $D_f(\mathbf{p}\|\mathbf{q})$ is an $f$-divergence between two distributions $\mathbf{p}$ and $\mathbf{q}$ and $\eta$ controls the size of the uncertainty set. when $\eta$ is large, the weight distribution $\mathbf{p}$ can deviate substantially from the uniform distribution, making it possible to assign a very high weight to certain data samples. in contrast, a small $\eta$ constrains $\mathbf{p}$ to be close to the uniform distribution, so all samples share a similar weight.
3.2 distributionally robust evidential optimization
standard evidential learning does not explicitly consider an imbalanced class distribution.
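the effect of $\eta$ can be made concrete with a small sketch. assuming a kl-divergence uncertainty set centered at the uniform distribution (the paper's choice of $f$-divergence and solver may differ; this is our own illustration), the maximizer has the exponential-tilting form $p_n \propto \exp(\ell_n/\alpha)$, and the temperature $\alpha$ can be found by bisection so the kl constraint is tight:

```python
import math

def dro_weights(losses, eta):
    """Worst-case weights maximizing sum_n p_n * l_n subject to
    KL(p || uniform) <= eta. Illustrative solver, not the paper's."""
    n = len(losses)

    def tilt(alpha):
        # p_n proportional to exp(l_n / alpha), computed stably
        m = max(losses)
        w = [math.exp((l - m) / alpha) for l in losses]
        z = sum(w)
        return [wi / z for wi in w]

    def kl_to_uniform(p):
        return sum(pi * math.log(pi * n) for pi in p if pi > 0)

    lo, hi = 1e-6, 1e6   # small alpha = aggressive tilting; large = near-uniform
    if kl_to_uniform(tilt(lo)) <= eta:
        return tilt(lo)   # constraint never binds: concentrate on hardest samples
    for _ in range(200):  # geometric bisection on the temperature
        mid = math.sqrt(lo * hi)
        if kl_to_uniform(tilt(mid)) > eta:
            lo = mid
        else:
            hi = mid
    return tilt(hi)
```

a tiny $\eta$ yields near-uniform weights, while a large $\eta$ concentrates nearly all mass on the hardest sample, mirroring the two extremes formalized in lemmas 1 and 2 below.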
further, it also does not focus on the difficult samples resulting from multi-modality, where a single class can contain multiple types of samples. as a result, minority-class and/or difficult samples are usually assigned a higher uncertainty mass due to a lack of sufficient training data. while this may not significantly impact the closed set performance (i.e., accuracy), it poses a more severe issue for osd, as difficult/minority class samples become as uncertain as the open set samples. to address this challenge, one straightforward approach is to integrate evidential learning with dro for robust uncertainty mass quantification on minority class/difficult samples in the closed set. intuitively, since the model explicitly focuses on learning from minority class/difficult samples, it produces a low uncertainty mass for these samples while the uncertainty mass remains high for open set samples. this integration of dro and evidential learning allows us to define a distributionally robust evidential loss (drel): $\mathcal{L}^{DREL}(\theta) = \max_{\mathbf{p} \in \mathcal{P}^{DRO}} \sum_{n=1}^{N} p_n \mathcal{L}^{EL}_n(\theta)$ (6).
figure 1: examples of scheduler functions. (a) cosine; (b) offset cosine; (c) exponential; (d) multi-scheduler.
the details of solving (6) are provided in appendix b. depending on $\eta$ in the uncertainty set, we can decide whether to assign an equal weight to all data samples or focus on the most difficult ones. the lemma below reveals the relationship between drel and the standard evidential loss. lemma 1. with $\eta \to 0$, the edl loss under dro reduces to the standard edl loss. when $\eta$ is very small, the model gives similar weights to all samples, which allows them to participate equally in the training process. at the other extreme, we can direct the model to fully focus on the most difficult sample with the maximum loss, as summarized in the lemma below. lemma 2.
with $\eta \to \infty$, the loss under dro becomes equivalent to a maximum-loss based approach focusing only on the hardest sample. the above lemma implies that a highly flexible uncertainty set may cause the model to put too much emphasis on difficult samples. since these difficult samples may come from the majority classes, simply setting a large $\eta$ will not necessarily direct the model's attention to the samples from the minority class. furthermore, using a flexible uncertainty set in the initial phase of model training may misguide the model to neglect a large number of representative data samples. as a result, the model will not be able to capture the common patterns exhibited in most of the training samples. as such, the direct integration of dro and edl does not work well, which is also justified experimentally through the comparison of the proposed technique with the dro baseline.
3.3 adaptive robust evidential optimization (areo)
the key idea to address the limitations of distributionally robust evidential optimization is to gradually increase the size of the uncertainty set, which allows the model to learn from easy to hard samples from the closed set classes. scheduler functions (sf) provide a natural way to achieve the desired training behavior. figure 1 (a-c) shows three typical sfs: cosine in (a): $\cos\big(\frac{\pi t}{2T}\big)$; offset cosine in (b): $\frac{1}{2}\big(1 + \cos\big(\frac{\pi t}{T}\big)\big)$; and exponential in (c): $\exp\big(-\frac{t}{\beta}\big)$, where $t$ denotes the index of the training epoch, $T$ is the terminating epoch, and $\beta$ is a specific parameter of the exponential function. it can be seen that while the general trends of the different sfs are similar, they exhibit some key differences that may lead to quite distinct model training behaviors. for example, a cosine function helps ensure that the uncertainty set stays small for a relatively longer time at the beginning of model training. this ensures that the model learns from the representative samples in the majority classes (according to lemma 1).
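the atomic schedulers, and their combination into a multi-scheduler function as defined below, can be sketched as follows. the exact atomic forms are assumptions on our part (plausible stand-ins with the shapes described above: all start at 1 and decay over the $T$ epochs), not verbatim from the paper:

```python
import math

def cosine_sf(beta, t, T):
    return math.cos(math.pi * t / (2.0 * T))        # stays near 1 early, then drops

def offset_cosine_sf(beta, t, T):
    return 0.5 * (1.0 + math.cos(math.pi * t / T))  # long initial and final phases

def exponential_sf(beta, t, T):
    return math.exp(-t / beta)                       # decays quickly for small beta

def msf(weights, betas, t, T,
        atoms=(cosine_sf, offset_cosine_sf, exponential_sf)):
    """Multi-scheduler function: a convex combination of atomic schedulers."""
    assert abs(sum(weights) - 1.0) < 1e-9 and all(w >= 0 for w in weights)
    return sum(w * sf(b, t, T) for w, sf, b in zip(weights, atoms, betas))
```

the mixing weights and the atomic parameters are exactly what the outer loop of the bi-level optimization described later learns for each dataset.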
in contrast, an exponential function can change the size of the uncertainty set very rapidly, which gives the model more time to learn from the difficult samples in the later phase (according to lemma 2). the offset cosine function can offer both a relatively long initial learning phase and a long later learning phase. however, choosing an sf that best matches the nature of a given dataset poses a key challenge. furthermore, a single sf may not be rich enough to express the desired training behavior for a complex dataset. to address this key challenge, we propose multi-scheduler learning, which automatically constructs a composite scheduler function learned for each given dataset to deliver the optimal training behavior. more specifically, the multi-scheduler function (msf) is formulated as a convex combination of a set of atomic sfs: $\mathrm{msf}(\mathbf{w}, \boldsymbol{\beta}, t, T) = \sum_{m=1}^{M} w_m \mathrm{sf}_m(\beta_m, t, T)$, with $\sum_{m=1}^{M} w_m = 1$ and $w_m \ge 0\ \forall m \in [M]$, where $\mathbf{w}$ are the mixing weights and $\boldsymbol{\beta}$ is a set of specific parameters for the atomic sfs. figure 1 (d) visualizes an example msf that combines cosine and exponential functions with different mixing weights and fixed $\beta = 20$, $T = 600$. as can be seen, the msf is much more expressive than either of its component sfs, which makes it capable of representing a much broader range of training behaviors. by leveraging the proposed msf to control the size of the uncertainty set, we can achieve adaptive robust training. let $\eta_0$ be the initial size of the uncertainty set; the size of the set at epoch $t$ is $\eta_t = \eta_{t-1} / \mathrm{msf}(\mathbf{w}, \boldsymbol{\beta}, t, T)$. based on this adaptive uncertainty set, we define the adaptive robust evidential loss (arel) as $\mathcal{L}^{AREL}(\theta) = \max_{\mathbf{p} \in \mathcal{P}^{ARO}} \sum_{n=1}^{N} p_n \mathcal{L}^{EL}_n(\theta)$ (9), where $\mathcal{L}^{EL}_n$ is the evidential loss for sample $x_n$ and $\mathcal{P}^{ARO}$ is the adaptive robust uncertainty set, with $\mathcal{L}^{EL}_n$ given by eq.
(3). under the adaptive robust optimization framework, $\mathcal{P}^{ARO} := \{\mathbf{p} \in \mathbb{R}^N : \mathbf{p}^\top \mathbf{1} = 1,\ \mathbf{p} \ge 0,\ D_f(\mathbf{p}\,\|\,\mathbf{1}/N) \le \eta_t\}$ (10). as $\eta_t$ increases, the model gradually shifts its focus from easier samples to more difficult ones. in this way, the model is first trained to capture the common patterns in the data and then fine-tuned by attending to the difficult samples. however, with imbalanced classes there may be a good number of difficult samples from the majority classes, so solely controlling the size of the uncertainty set does not guarantee sufficient training on the minority class. to address this, we further leverage the label of the minority class $c$ to formulate a ratio-biased weight augmentation on samples from this class. let $p(c) = \sum_{n: y_{nc}=1} p_n$ be the total weight for minority class $c$ obtained by solving (9). the total minority weight is first adjusted as $\tilde{p}(c) = p(c)$ if $p(c) \ge \frac{1}{C}$, and $\tilde{p}(c) = \min\big(\frac{1}{C},\ p(c)^{\mathrm{msf}(\mathbf{w}', \boldsymbol{\beta}', t, T)}\big)$ otherwise; the per-sample weights are then adjusted as $\tilde{p}_n = \frac{\tilde{p}(c)}{p(c)} p_n$ if $y_{nc} = 1$ and $\tilde{p}_n = \frac{1 - \tilde{p}(c)}{1 - p(c)} p_n$ otherwise (11). as the msf monotonically decreases over the training epochs, the total weight for the minority class samples will eventually reach $\frac{1}{C}$, making the class equally weighted with the other $(C-1)$ classes. remark. our approach treats a class as a minority class if there is an obvious gap between $\frac{1}{C}$ and the percentage of its samples among the total samples from all $C$ classes. any class that is not a minority one is regarded as a majority class. our approach can handle multiple minority classes by applying the ratio-biased weight augmentation (given by eq. (11)) to each minority class. the adaptive robust training is achieved through a bi-level optimization, where the inner loop optimizes the model parameters ($\theta$) and the outer loop optimizes the msf parameters $\mathbf{w} = \{\mathbf{w}, \mathbf{w}', \boldsymbol{\beta}, \boldsymbol{\beta}'\}$: $\min_{\mathbf{w}} \mathcal{L}^{AREL}_{val}(\theta^*, \mathbf{w})$, s.t.
$\theta^* = \arg\min_{\theta} \mathcal{L}^{AREL}_{train}(\theta, \mathbf{w})$, where $\mathcal{L}^{AREL}_{train}$ and $\mathcal{L}^{AREL}_{val}$ are the training and validation losses, respectively. the outer-loop optimization can be solved by computing hypergradients (maclaurin et al., 2015; pedregosa, 2016) or through population-based methods (jaderberg et al., 2017); the former may easily get stuck in a local optimum (tao et al., 2020). to this end, we extend the existing population-based method to learn an optimal msf; the details are given in appendix b.
3.4 theoretical analysis
we establish the key theoretical properties of areo, including the convergence speed of model training and the generalization capability, by formally demonstrating the equivalence between areo and adaboost under a non-convex robust uncertainty loss. the key idea is to leverage the equivalence between adaboost and gradient-descent search for an optimal function in the linear span of a set of (weak) learners (mohri et al., 2012; blanchet et al., 2019). let $\mathcal{F} = \{f_1, \ldots, f_K\}$ be a set of different classifiers; the linear span generated by $\mathcal{F}$ is $\mathrm{LS}(\mathcal{F}) = \big\{ f : f = \sum_{k=1}^{K} \sigma_k f_k \big\}$. areo training consists of alternating updates between optimizing the worst-case probability and the prediction function $f$. the update of the prediction function can be regarded as finding a sub-gradient $g_t \in \partial \mathcal{L}^{AREL}(f_t)$ and updating with the projection $\Pi_{\mathrm{LS}(\mathcal{F})}(g_t) = \arg\min_{f \in \mathrm{LS}(\mathcal{F})} \|f - g_t\|_{\mathcal{D}_N}$, where $\mathcal{D}_N$ is the training data. letting $\ell_n(f_t)$ be the loss associated with data sample $x_n$, the update of $\mathbf{p}$ involves the optimization of the following objective with $f_t$ fixed: $\mathcal{L}^{AREL}(f_t) = \max_{\mathbf{p} \in \mathcal{P}^{ARO}} \sum_{n=1}^{N} p_n \ell_n(f_t)$, where the uncertainty set is given by (10). the corresponding lagrangian of the above optimization problem is $\sum_{n=1}^{N} p_n \ell_n(f_t) - \alpha\big(\sum_{n=1}^{N} p_n \log p_n - \eta_t\big)$. it should be noted that finding the optimal $f$ is non-trivial because the optimization involves the non-convex loss $\mathcal{L}^{AREL}$. this creates difficulty in showing the equivalence between areo and adaboost.
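before continuing the analysis, the ratio-biased weight adjustment of eq. (11) above can be sketched in a few lines. this is our reading of the formula, under the assumption that the minority weight $p(c)$ is raised to the power $\mathrm{msf}(\mathbf{w}', \boldsymbol{\beta}', t, T)$ and capped at $1/C$; the function name and arguments are illustrative:

```python
def adjust_minority_weights(p, minority_mask, C, msf_val):
    """Redistribute DRO weights so minority class c approaches total weight 1/C.
    p: current sample weights (sum to 1); minority_mask: True for samples of
    class c; msf_val: msf(w', beta', t, T) in (0, 1], decaying over epochs."""
    p_c = sum(pi for pi, m in zip(p, minority_mask) if m)  # total minority weight
    if p_c >= 1.0 / C:
        p_c_new = p_c                                # already sufficiently weighted
    else:
        p_c_new = min(1.0 / C, p_c ** msf_val)       # exponent decays -> cap at 1/C
    # rescale minority and non-minority weights so the total still sums to one
    return [p_c_new / p_c * pi if m else (1.0 - p_c_new) / (1.0 - p_c) * pi
            for pi, m in zip(p, minority_mask)]
```

early in training (msf near 1) the weights are left essentially unchanged; late in training (msf near 0) the minority class receives exactly its balanced share $1/C$.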
to ensure the convergence of $f$ to a stationary point, we adapt the probabilistic gradient estimator technique (page) (li et al., 2021) to our adaptive robust evidential optimization setting. this convergence guarantee is the stepping stone for showing the equivalence between areo and adaboost, given by the theorem below. theorem 3. under the assumption of a finite exponential moment for $\ell_n(f)$, with $\alpha \ge 0$ sufficiently large and $\eta_t = \beta^* \psi'(\beta^*) - \psi(\beta^*)$, the worst-case probability $\mathbf{p}^*$ is given by $p^*_n = \frac{\exp(\ell_n(f_t)/\alpha^*)}{\sum_{j=1}^{N} \exp(\ell_j(f_t)/\alpha^*)}$, where $\beta^* = \frac{1}{\alpha^*}$, $\alpha^* \ge 0$ is the optimal $\alpha$, and $\psi(\beta) = \log \frac{1}{N}\sum_{n=1}^{N} \exp(\beta \ell_n(f_t))$. the alternating optimization between $f$ and $\mathbf{p}$ with the above worst-case probability solution exactly recovers the adaboost algorithm proposed in (freund & schapire, 1997). remark. there are several key benefits of connecting areo with adaboost. first, adaboost is less prone to overfitting even when run for a large number of iterations (mease & wyner, 2008). inheriting such a property is crucial for osd, as an overfitted evidential model can produce highly confident wrong predictions. this implies that a low uncertainty may be predicted for samples that the model is less familiar with, resulting in a false negative detection of an open set sample. furthermore, since the target function is expressed as a linear combination of a set of weak learners, the optimal function can be regarded as maximizing the $l_1$ geometric margin over the training samples to ensure good generalization capability, like other maximum-margin classifiers (mohri et al., 2012). this ensures a decent closed set performance from areo (as shown by our experiments). the proof of theorem 3 is provided in appendix c.
4 experiments
we perform extensive experimentation to evaluate the effectiveness of the proposed areo model. we first describe five real-world image datasets, where a minority class is introduced to create an imbalanced setting.
we then assess the osd performance of the proposed technique by comparing with competitive baselines. finally, we conduct a qualitative analysis, which uncovers deeper insights into the performance advantage of the proposed model.
4.1 datasets
our experiments involve five real-world image datasets: cifar10, cifar100 (krizhevsky, 2009), imagenet (deng et al., 2009), mnist (deng, 2012), and the architectural heritage elements dataset (ahed) (llamas, 2017). in our experimentation, model training is performed solely on the closed set samples. during the detection phase, testing samples from the closed set classes are assessed against samples from the open set classes. for all datasets, a randomly selected 20% of the training set is used for hyperparameter optimization. a brief description of each dataset is given below. for the detailed description and the data sample distribution over majority and minority classes, please refer to the appendix.
• mnist: five classes are treated as the open set and the rest as the closed set. to make the dataset imbalanced, we consider class '3' as the minority class and randomly retain 30% of its data samples relative to the other (majority) classes. the same imbalance ratio is applied to both the training and testing sets. in addition to the mnist open set classes described above, we follow other existing works (sun et al., 2020) and further test the osd performance on additional open set samples from three more sources: (1) mnist-noise, (2) noise, and (3) omniglot (lake et al., 2015).
• cifar10: five classes are assigned to the open set and five to the closed set. 'bird' is made the minority class using the same strategy introduced above. in addition to the open set classes from cifar10 itself, we further assess the osd performance with cifar+10 and cifar+50.
• cifar100: the 'living being' related superclasses are assigned to the closed set and the remaining superclasses to the open set.
we make the 'insect'-related classes the minority.
• imagenet: five classes are assigned to the open set and five to the closed set. we make 'king crab' the minority class.
• architectural heritage elements dataset (ahed): five classes are assigned to the open set and five to the closed set. this is an inherently highly imbalanced dataset, where the number of data points is unevenly distributed across the different classes. the class 'portal' is the minority one.
4.2 experimental settings
evaluation metric. to assess the model performance, we report the mean average precision (map) score, which summarizes the precision-recall curve as a weighted mean of the precision achieved at each threshold, with the increase in recall from the previous threshold as the weight. specifically, in osd we treat the open set samples as positive and the closed set samples as negative, and compute the map score based on the uncertainty scores produced by the trained model. different from auroc, map places more emphasis on the initial part of the curve, rewarding a model that ranks the open set samples at the top based on their predicted uncertainty scores. this metric works well in practice, as the main focus may be devoted to the first few predicted candidate samples, especially when there is a long candidate list. a theoretical result shows that map is approximately the auroc times the initial precision of the model (su et al., 2015). therefore, we focus on reporting the map performance and leave the auroc results to appendix d. it is worth noting that our auroc results show a trend consistent with the map results. network architecture. for all datasets, the evidential neural network is a lenet5 network with tanh activations in the feature extractor and relu in the fully connected layers. for training, we use the adam optimizer with a learning rate of 0.001 and l2 regularization with a coefficient of 0.001.
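per dataset, the map computation described above reduces to average precision over the uncertainty ranking with open set samples as positives; a minimal sketch (ours, ignoring ties):

```python
def average_precision(uncertainty, is_open):
    """Average precision with open set samples treated as positives,
    ranked by predicted uncertainty (higher = more likely open set)."""
    order = sorted(range(len(uncertainty)), key=lambda i: -uncertainty[i])
    hits, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if is_open[i]:
            hits += 1
            ap += hits / rank      # precision at each recall increment
    return ap / sum(is_open)
```

a detector that ranks every open set sample above every closed set sample scores 1.0; pushing minority-class closed set samples down the uncertainty ranking therefore directly improves this score.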
the detailed hyperparameter settings are provided in the appendix.
4.3 performance comparison
in our comparison study, we include the baselines most relevant to our model: edl, edl augmented with oversampling using smote (chawla et al., 2002) (referred to as aedl), and edl with standard dro training (referred to as dro). further, we compare with posterior networks (charpentier et al., 2020) and their robust form, postnet (rs), proposed by kopetzki et al. (2021). in addition, we compare with representative baselines with outstanding osd performance: openmax (bendale & boult, 2016), cgdl (sun et al., 2020), and oltr (liu et al., 2019). please refer to appendix d for a more detailed description of the baselines used in our comparison study, along with additional results and an ablation study. table 1 presents the osd performance comparison between the different models on all five datasets. areo consistently outperforms all the baselines across all the datasets. for certain datasets, the performance advantage over the second-best model is close to or more than 10%. this clearly demonstrates the benefit of conducting evidential learning through adaptive dro training to achieve optimally balanced learning from all classes and different types of data samples. we also observe that edl consistently performs better than other non-evidential-learning based models, such as openmax, in most cases. the better osd performance of edl is attributed to its explicit modeling of the uncertainty mass, which works naturally for detecting open set samples.
table 1: osd (map) performance on all datasets.
figure 2: (a) top row: minority class; bottom row: majority classes; (b) sample ranking.
in contrast, directly applying dro with a flexible uncertainty set, which aims to address the imbalanced class distribution, leads to rather poor osd performance for the reasons analyzed in the prior sections. similarly, aedl does not perform better than the standard edl due to the lack of fine-tuning on the difficult examples from the majority classes, which become inseparable from the open set samples given their high predicted uncertainty scores. table 10 in the appendix also shows the closed set performance as a reference. for deeper insight into the superior osd performance of areo, please refer to the appendix.
4.4 qualitative examples
we perform a qualitative analysis to further assess the effectiveness of areo. figure 2 (a), top row, shows representative testing samples from the minority class ('bird') in cifar10. these images appear difficult even for humans to identify as birds, since only a small part of the bird is visible. thus, edl, aedl, and dro assign a relatively high uncertainty score to them. as a result, many open set samples may be assigned a relatively lower uncertainty score, leading to false negative detections on those samples. figure 2 (b) shows the ranking of these samples according to the uncertainty scores (a lower ranking indicates a lower uncertainty). in contrast, areo assigns much lower rankings to these bird objects. this analysis justifies the effectiveness of areo in detecting minority class data samples in the closed set. similarly, figure 2 (a), bottom row, shows representative images from some majority classes.
again, areo is able to recognize these difficult samples and assign them a relatively low uncertainty score, avoiding their misidentification as open set samples, as shown in figure 2 (b).
5 conclusion
in this paper, we focus on open set detection from imbalanced closed set data. to address the fundamental challenge posed by the interplay between minority-class samples and difficult samples from the majority classes, we propose an important extension of dro to the evidential learning setting, leading to a novel adaptive robust evidential optimization (areo) model. as an evidential learning model, areo effectively breaks the closed set assumption by explicitly modeling the uncertainty mass, which is uniquely suitable for detecting open set samples. an adaptive dro training process is achieved through multi-scheduler learning to obtain an optimal training behavior. the experimentation conducted on five real-world datasets with diverse types of open set data samples justifies the effectiveness of the proposed model.
acknowledgement
this research was supported in part by an nsf iis award iis-1814450 and an onr award n0001418-1-2875. the views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agency.
references
abhijit bendale and terrance e. boult. towards open set deep networks. 2016 ieee conference on computer vision and pattern recognition (cvpr), pp. 1563–1572, 2016.
josé h. blanchet, yang kang, fan zhang, and zhangyi hu. a distributionally robust boosting algorithm. 2019 winter simulation conference (wsc), pp. 3728–3739, 2019.
hakan cevikalp and hasan serhan yavuz. fast and accurate face recognition with image sets. 2017 ieee international conference on computer vision workshops (iccvw), pp. 1564–1572, 2017.
hakan cevikalp, bedirhan uzun, okan köpüklü, and gurkan ozturk. deep compact polyhedral conic classifier for open and closed set recognition. pattern recognition, 119:108080, 2021.
bertrand charpentier, daniel zügner, and stephan günnemann. posterior network: uncertainty estimation without ood samples via density-based pseudo-counts. arxiv, abs/2006.09239, 2020.
nitesh v. chawla, kevin w. bowyer, lawrence o. hall, and w. philip kegelmeyer. smote: synthetic minority over-sampling technique. j. artif. int. res., 16(1):321–357, june 2002. issn 1076-9757.
guangyao chen, peixi peng, xiangqian wang, and yonghong tian. adversarial reciprocal points learning for open set recognition. ieee transactions on pattern analysis and machine intelligence, 2021. doi: 10.1109/tpami.2021.3106743.
jia deng, wei dong, richard socher, li-jia li, kai li, and li fei-fei. imagenet: a large-scale hierarchical image database. in 2009 ieee conference on computer vision and pattern recognition, pp. 248–255. ieee, 2009.
li deng. the mnist database of handwritten digit images for machine learning research. ieee signal
akshay raj dhamija, manuel günther, and terrance e. boult. reducing network agnostophobia.
john duchi and hongseok namkoong. variance-based regularization with convex objectives. j.
yoav freund and robert e. schapire. a decision-theoretic generalization of on-line learning and an application to boosting. in colt 1997, 1997.
max jaderberg, valentin dalibard, simon osindero, wojciech m. czarnecki, jeff donahue, ali razavi, oriol vinyals, tim green, iain dunning, karen simonyan, chrisantha fernando, and koray kavukcuoglu. population based training of neural networks. arxiv, abs/1711.09846, 2017.
lalit p. jain, walter j. scheirer, and terrance e. boult. multi-class open set recognition using probability of inclusion. in eccv, 2014.
minki jeong, seokeon choi, and changick kim. few-shot open-set recognition by transformation consistency. in cvpr, 2021.
audun jøsang. subjective logic. springer, 2016.
pedro ribeiro mendes júnior, roberto medeiros de souza, rafael de oliveira werneck, bernardo v. stein, daniel v. pazinato, waldir r.
de almeida, otávio augusto bizetto penatti, ricardo da silva torres, and anderson rocha. nearest neighbors distance ratio open-set classifier. machine learning, 106:359–386, 2016. shu kong and deva ramanan. opengan: open-set recognition via open data generation. corr, anna-kathrin kopetzki, bertrand charpentier, daniel zügner, sandhya giri, and stephan günnemann. evaluating robustness of predictive uncertainty estimation: are dirichlet-based models reliable? in icml, 2021. alex krizhevsky. learning multiple layers of features from tiny images. master’s thesis, department of computer science, university of toronto, 2009. brenden m. lake, ruslan salakhutdinov, and joshua b. tenenbaum. human-level concept learning through probabilistic program induction. science, 350:1332 – 1338, 2015. y. lecun, l. bottou, y. bengio, and p. haffner. gradient-based learning applied to document tian li, ahmad beirami, maziar sanjabi, and virginia smith. tilted empirical risk minimization. zhize li, hongyan bao, xiangliang zhang, and peter richtarik. page: a simple and optimal probabilistic gradient estimator for nonconvex optimization. in marina meila and tong zhang (eds.), proceedings of the 38th international conference on machine learning, volume 139 of proceedings of machine learning research, pp. 6286–6295. pmlr, 18–24 jul 2021. url https://proceedings.mlr.press/v139/li21a.html. bo liu, hao kang, haoxiang li, gang hua, and nuno vasconcelos. few-shot open-set recognition using meta-learning. 2020 ieee/cvf conference on computer vision and pattern recognition (cvpr), pp. 8795–8804, 2020. ziwei liu, zhongqi miao, xiaohang zhan, jiayun wang, boqing gong, and stella x. yu. large-scale long-tailed recognition in an open world. 2019 ieee/cvf conference on computer vision and pattern recognition (cvpr), pp. 2532–2541, 2019. jose llamas. architectural heritage elements image dataset, 2017. david g. luenberger. optimization by vector space methods. 
john wiley and sons, inc., usa, 1st dougal maclaurin, david duvenaud, and ryan p. adams. gradient-based hyperparameter optimization through reversible learning. in proceedings of the 32nd international conference on international conference on machine learning - volume 37, icml’15, pp. 2113–2122. jmlr.org, 2015. andrey malinin and mark gales. predictive uncertainty estimation via prior networks. in proceedings of the 32nd international conference on neural information processing systems, nips’18, pp. 7047–7058, red hook, ny, usa, 2018. curran associates inc. | 10 | [
108,
241.8430784,
505.746094192,
273.9518182
] |
1YLJDvSx6J4.pdf | 2,021 | 0 | learning from protein structure with geometric vector perceptrons bowen jing∗, stephan eismann∗, patricia suriana, raphael j.l. townshend, ron o. dror stanford university {bjing, seismann, psuriana, raphael, rondror}@cs.stanford.edu abstract learning on 3d structures of large biomolecules is emerging as a distinct area in machine learning, but there has yet to emerge a unifying network architecture that simultaneously leverages the geometric and relational aspects of the problem domain. to address this gap, we introduce geometric vector perceptrons, which extend standard dense layers to operate on collections of euclidean vectors. graph neural networks equipped with such layers are able to perform both geometric and relational reasoning on efficient representations of macromolecules. we demonstrate our approach on two important problems in learning from protein structure: model quality assessment and computational protein design. our approach improves over existing classes of architectures on both problems, including state-of-the-art convolutional neural networks and graph neural networks. we release our code at https://github.com/drorlab/gvp. introduction many efforts in structural biology aim to predict, or derive insights from, the structure of a macromolecule (such as a protein, rna, or dna), represented as a set of positions associated with atoms or groups of atoms in 3d euclidean space. these problems can often be framed as functions mapping the input domain of structures to some property of interest—for example, predicting the quality of a structural model or determining whether two molecules will bind in a particular geometry.
thanks to their importance and difficulty, such problems, which we broadly refer to as learning from structure, have recently developed into an exciting and promising application area for deep learning (graves et al., 2020; ingraham et al., 2019; pereira et al., 2016; townshend et al., 2019; won et al., 2019). successful applications of deep learning are often driven by techniques that leverage the problem structure of the domain—for example, convolutions in computer vision (cohen & shashua, 2017) and attention in natural language processing (vaswani et al., 2017). what are the relevant considerations in the domain of learning from structure? using proteins as the most common example, we have on the one hand the arrangement and orientation of the amino acid residues in space, which govern the dynamics and function of the molecule (berg et al., 2002). on the other hand, proteins also possess relational structure in terms of their amino-acid sequence and the residue-residue interactions that mediate the aforementioned protein properties (hammes-schiffer & benkovic, 2006). we refer to these as the geometric and relational aspects of the problem domain, respectively. recent state-of-the-art methods for learning from structure leverage one of these two aspects. commonly, such methods employ either graph neural networks (gnns), which are expressive in terms of relational reasoning (battaglia et al., 2018), or convolutional neural networks (cnns), which operate directly on the geometry of the structure. here, we present a unifying architecture that bridges these two families of methods to leverage both aspects of the problem domain. we do so by introducing geometric vector perceptrons (gvps), a drop-in replacement for standard multi-layer perceptrons (mlps) in aggregation and feed-forward layers of gnns. gvps operate directly on both scalar and geometric features—features that transform as a vector under a rotation of spatial coordinates. 
gvps therefore allow for the embedding of geometric information at nodes and edges without reducing such information to scalars that may not fully capture complex geometry. (∗equal contribution.) we postulate that our approach makes it easier for a gnn to learn functions whose significant features are both geometric and relational. our method (gvp-gnn) can be applied to any problem where the input domain is a structure of a single macromolecule or of molecules bound to one another. in this work, we specifically demonstrate our approach on two problems connected to protein structure: computational protein design and model quality assessment. computational protein design (cpd) is the conceptual inverse of protein structure prediction, aiming to infer an amino acid sequence that will fold into a given structure. model quality assessment (mqa) aims to select the best structural model of a protein from a large pool of candidate structures and is an important step in structure prediction (cheng et al., 2019). our method outperforms existing methods on both tasks. related work ml methods for learning from protein structure largely fall into one of three types, operating on sequential, voxelized, or graph-structured representations of proteins. we briefly discuss each type and introduce state-of-the-art examples for mqa and cpd to set the stage for our experiments later. sequential representations in traditional models of learning from protein structure, each amino acid is represented as a feature vector using hand-crafted representations of the 3d structural environment. these representations include residue contacts (olechnovič & venclovas, 2017), orientations or positions collectively projected to local coordinates (karasikov et al., 2019), physics-inspired energy terms (o’connell et al., 2018; uziela et al., 2017), or context-free grammars of protein topology (greener et al., 2018).
the structure is then viewed as a sequence or collection of such features which can be fed into a 1d convolutional network, rnn, or dense feedforward network. although these methods only indirectly represent the full 3d structure of the protein, a number of them, such as proq4 (hurtado et al., 2018), voromqa (olechnovič & venclovas, 2017), and sbrod (karasikov et al., 2019), are competitive in assessments of mqa. voxelized representations in lieu of hand-crafted representations of structure, 3d convolutional neural networks (cnns) can operate directly on the positions of atoms in space, encoded as occupancy maps in a voxelized 3d volume. the hierarchical convolutions of such networks are easily compatible with the detection of structural motifs, binding pockets, and the specific shapes of other important structural features, leveraging the geometric aspect of the domain. a number of cpd methods (anand et al., 2020; zhang et al., 2019; shroff et al., 2019) and the mqa methods 3dcnn (derevyanko et al., 2018) and ornate (pagès et al., 2019) exemplify the power of this approach. graph-structured representations a protein structure can also be represented as a proximity graph over amino acid nodes, reducing the challenge of representing a collective structural neighborhood in a single feature vector to that of representing individual edges. graph neural networks (gnns) can then perform complex relational reasoning over structures (battaglia et al., 2018)—for example, identifying key relationships among amino acids, or flexible structural motifs described as a connectivity pattern rather than a rigid shape. recent state-of-the-art gnns include structured transformer (ingraham et al., 2019) on cpd, proteinsolver (strokach et al., 2020) on cpd and mutation stability prediction, and graphqa (baldassarre et al., 2020) on mqa.
these methods vary in their representation of geometry: while some, such as proteinsolver and graphqa, represent edges as a function of their length, others, such as structured transformer, indirectly encode the 3d geometry of the proximity graph in terms of relative orientations and other scalar features. methods our architecture seeks to combine the strengths of cnn and gnn methods in learning from biomolecular structure by improving the latter’s ability to reason geometrically. the gnns described in the previous section encode the 3d geometry of the protein by encoding vector features (such as node orientations and edge directions) in terms of rotation-invariant scalars, often by defining a local coordinate system at each node. we instead propose that these features be directly represented as geometric vectors—features in r3 which transform appropriately under a change of spatial coordinates—at all steps of graph propagation. (figure 1: (a) schematic of the geometric vector perceptron illustrating algorithm 1. given a tuple of scalar and vector input features (s, v), the perceptron computes an updated tuple (s′, v′). s′ is a function of both s and v. (b) illustration of the structure-based prediction tasks. in computational protein design (top), the goal is to predict an amino acid sequence that would fold into a given protein backbone structure. individual atoms are represented as colored spheres. in model quality assessment (bottom), the goal is to predict the quality score of a candidate structure, which measures the similarity of the candidate with respect to the experimentally determined structure (in gray).) this conceptual shift has two important ramifications. first, the input representation is more efficient: instead of encoding the orientation of a node by its relative orientation with all of its neighbors, we only have to represent one absolute orientation per node.
second, it standardizes a global coordinate system across the entire structure, which allows geometric features to be directly propagated without transforming between local coordinates. for example, representations of arbitrary positions in space—including points that are not themselves nodes—can be easily propagated across the graph by euclidean vector addition. we postulate this allows the gnn to more easily access global geometric properties of the structure. the key challenge with this representation, however, is to perform graph propagation in a way that simultaneously preserves the full expressive power of the original gnn while maintaining the rotation invariance provided by the scalar representations. we do so by introducing a new module, the geometric vector perceptron, to replace dense layers in a gnn. geometric vector perceptrons the geometric vector perceptron is a simple module for learning vector-valued and scalar-valued functions over geometric vectors and scalars. that is, given a tuple (s, v) of scalar features s ∈ rn and vector features v ∈ rν×3, we compute new features (s′, v′) ∈ rm × rµ×3. the computation is illustrated in figure 1a and formally described in algorithm 1. at its core, the gvp consists of two separate linear transformations wm, wh for the scalar and vector features, followed by nonlinearities σ, σ+. however, before the scalar features are transformed, we concatenate the l2 norm of the transformed vector features vh; this allows us to extract rotation-invariant information from the input vectors v. an additional linear transformation wµ is inserted just before the vector nonlinearity to control the output dimensionality independently of the number of norms extracted. the gvp is conceptually simple, yet provably possesses the desired properties of invariance/equivariance and expressiveness.
first, the vector and scalar outputs of the gvp are equivariant and invariant, respectively, with respect to an arbitrary composition r of rotations and reflections in 3d euclidean space — i.e., if gvp(s, v) = (s′, v′) then gvp(s, r(v)) = (s′, r(v′)).
algorithm 1 geometric vector perceptron
input: scalar and vector features (s, V) ∈ rn × rν×3.
output: scalar and vector features (s′, V′) ∈ rm × rµ×3.
h ← max(ν, µ)
gvp:
Vh ← Wh V ∈ rh×3
Vµ ← Wµ Vh ∈ rµ×3
sh ← ‖Vh‖2 (row-wise) ∈ rh
vµ ← ‖Vµ‖2 (row-wise) ∈ rµ
sh+n ← concat(sh, s) ∈ rh+n
sm ← Wm sh+n + b ∈ rm
s′ ← σ(sm) ∈ rm
V′ ← σ+(vµ) ⊙ Vµ (row-wise multiplication) ∈ rµ×3
return (s′, V′)
this is due to the fact that the only operations on vector-valued inputs are scalar multiplication, linear combination, and the l2 norm. we include a formal proof in appendix a. in addition, the gvp architecture can approximate any continuous rotation- and reflection-invariant scalar-valued function of v. more precisely, let gs be a gvp defined with n, µ = 0—that is, one which transforms vector features to scalar features. then for any function f : rν×3 → r invariant with respect to rotations and reflections in 3d, there exists a functional form gs able to ε-approximate f, given mild assumptions. theorem. let r describe an arbitrary rotation and/or reflection in r3. for ν ≥ 3 let ων ⊂ rν×3 be the set of all v = [v1, . . . , vν]ᵀ ∈ rν×3 such that v1, v2, v3 are linearly independent and 0 < ‖vi‖2 ≤ b for all i and some finite b > 0. then for any continuous f : ων → r such that f(r(v)) = f(v) and for any ε > 0, there exists a form f̂(v) = wᵀgs(v) such that |f̂(v) − f(v)| < ε for all v ∈ ων. we include a formal proof in appendix a. as a corollary, a gvp with nonzero n, µ is also able to approximate similarly-defined functions over the full input domain rn × rν×3.
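the update in algorithm 1 can be sketched in a few lines of numpy. dimension names follow the algorithm; the concrete nonlinearities σ = tanh and σ+ = sigmoid here are illustrative assumptions, not the ones prescribed by the paper, and the equivariance/invariance property stated above can be checked numerically with a random rotation.

```python
import numpy as np

def gvp(s, V, Wh, Wu, Wm, b,
        sigma=np.tanh,                                  # scalar nonlinearity (assumed)
        sigma_plus=lambda x: 1 / (1 + np.exp(-x))):     # vector gate (assumed)
    """One geometric vector perceptron step, following Algorithm 1.

    s: (n,) scalar features; V: (nu, 3) vector features.
    Wh: (h, nu), Wu: (mu, h), Wm: (m, h + n), b: (m,).
    Returns (s_out, V_out) with shapes (m,) and (mu, 3).
    """
    Vh = Wh @ V                               # transformed vectors, (h, 3)
    Vu = Wu @ Vh                              # (mu, 3)
    sh = np.linalg.norm(Vh, axis=-1)          # row-wise L2 norms, (h,)
    vu = np.linalg.norm(Vu, axis=-1)          # (mu,)
    sm = Wm @ np.concatenate([sh, s]) + b     # mix norms with scalar features
    s_out = sigma(sm)                         # rotation-invariant scalar output
    V_out = sigma_plus(vu)[:, None] * Vu      # gated, rotation-equivariant vectors
    return s_out, V_out
```

because vectors enter only through linear combination and row-wise norms, rotating the rows of v rotates v_out identically while s_out is unchanged, which is exactly the property proved in appendix a of the paper.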
in addition to the gvp layer itself, we use a version of dropout that drops entire vector channels at random (as opposed to coordinates within vector channels). we also introduce layer normalization for the vector features as v ← v / √((1/ν) Σi ‖vi‖2²); that is, we scale the row vectors of v such that their root-mean-square norm is one. this vector layer norm has no trainable parameters, but we continue to use normal layer normalization on scalar channels with trainable parameters γ, β. we study our hypothesis that gvps augment the geometric reasoning ability of gnns on a synthetic dataset (appendix b). the synthetic dataset allows us to control the function underlying the ground-truth label in order to explicitly separate geometric and relational aspects in different tasks. the gvp-augmented gnn (or gvp-gnn) matches a cnn on a geometric task and a standard gnn on a relational task. however, when we combine the two tasks in one objective, the gvp-gnn does significantly better than either a gnn or a cnn. representations of proteins the main empirical validation of our architecture is its performance on two real-world tasks: computational protein design (cpd) and model quality assessment (mqa). these tasks, as illustrated in figure 1b and described in detail in section 4, are complementary in that one (cpd) predicts a property for each amino acid while the other (mqa) predicts a global property. (footnote: the nonlinearity σ+ is a scaling by σ+ applied to the l2 norm.) we represent a protein structure input as a proximity graph with a minimal number of scalar and vector features to specify the 3d structure of the molecule. a protein structure is a sequence of amino acids, where each amino acid consists of four backbone atoms and a set of sidechain atoms located in 3d euclidean space. we represent only the backbone because the sidechains are unknown in cpd, and our mqa benchmark corresponds to the assessment of backbone structure only. let xi be the position of atom x in the ith amino acid (e.g.
ni is the position of the nitrogen atom in the ith amino acid). we represent backbone structure as a graph g = (v, e) where each node vi ∈ v corresponds to an amino acid and has embedding hv(i) with the following features:
• scalar features {sin, cos} ◦ {φ, ψ, ω}, where φ, ψ, ω are the dihedral angles computed from ci−1, ni, cαi, ci, and ni+1.
• the forward and reverse unit vectors in the directions of cαi+1 − cαi and cαi−1 − cαi, respectively.
• the unit vector in the imputed direction of cβi − cαi. this is computed by assuming tetrahedral geometry and normalizing (n × c)/‖n × c‖2 − (n + c)/‖n + c‖2, where n = ni − cαi and c = ci − cαi. this vector, along with the forward and reverse unit vectors, unambiguously defines the orientation of each amino acid residue.
• a one-hot representation of amino acid identity, when available. | 4 | [
135.397,
462.8540784,
399.5354138,
472.8166784
] |
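the backbone node features above (the dihedral angles φ, ψ, ω and the imputed cβ direction) reduce to short vector computations. the sketch below is an illustrative reconstruction from the formulas in the text, not the authors' released code; the function names are ours, and the dihedral routine uses the standard atan2 formulation.

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle (radians) defined by four atom positions,
    via the standard atan2 formulation."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1   # component of b0 normal to the b1 axis
    w = b2 - np.dot(b2, b1) * b1   # component of b2 normal to the b1 axis
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))

def imputed_cb_direction(N, Ca, C):
    """Unit vector toward the imputed Cβ, assuming tetrahedral geometry:
    normalize((n × c)/||n × c|| - (n + c)/||n + c||), n = N - Cα, c = C - Cα."""
    n = N - Ca
    c = C - Ca
    cross = np.cross(n, c)
    v = cross / np.linalg.norm(cross) - (n + c) / np.linalg.norm(n + c)
    return v / np.linalg.norm(v)
```

per the text, the scalar node features are then the sines and cosines of φ, ψ, ω, while the cβ direction enters the node embedding as a vector feature.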
8tYRqb05pVn.pdf | 2,023 | 1 | linearly mapping from image to text space jack merullo, louis castricato, carsten eickhoff, ellie pavlick department of computer science brown university providence, ri, usa {jack merullo,louis castricato,carsten,ellie pavlick}@brown.edu abstract the extent to which text-only language models (lms) learn to represent features of the non-linguistic world is an open question. prior work has shown that pretrained lms can be taught to caption images when a vision model’s parameters are optimized to encode images in the language space. we test a stronger hypothesis: that the conceptual representations learned by frozen text-only models and vision-only models are similar enough that this can be achieved with a linear map. we show that the image representations from vision models can be transferred as continuous prompts to frozen lms by training only a single linear projection. using these to prompt the lm achieves competitive performance on captioning and visual question answering tasks compared to models that tune both the image encoder and text decoder (such as the magma model). we compare three image encoders with increasing amounts of linguistic supervision seen during pretraining: beit (no linguistic information), nf-resnet (lexical category information), and clip (full natural language descriptions). we find that all three encoders perform equally well at transferring visual property information to the language model (e.g., whether an animal is large or small), but that image encoders pretrained with linguistic supervision more saliently encode category information (e.g., distinguishing hippo vs. elephant) and thus perform significantly better on benchmark language-and-vision tasks. our results indicate that lms encode conceptual information structurally similarly to vision-based models, even those that are solely trained on images.
code is available here: https://github.com/jmerullo/limber introduction much recent work in nlp has revolved around studying the limits on representational capacity incurred by training on form-only text data, as discussed in bender & koller (2020). tied to this argument is the idea that without explicit grounding, language models are not inclined to learn conceptual representations of language that reflect the rich conceptual knowledge that humans gain from interacting with the physical, non-linguistic world. despite this, there have been remarkable findings in large language models’ abilities to generalize to and reason about non-linguistic phenomena (tsimpoukelli et al., 2021; eichenberg et al., 2021; li et al., 2021; patel & pavlick, 2022). thus, an open question in the field is to what extent (if at all) a language model trained on text-only data can learn aspects of the physical world. in this paper, we test a specific hypothesis about the relationship between language model and image encoder representations: that these conceptual representations can be approximately mapped to one another through a linear transformation. (figure 1: we train linear projections from image representations into the input space of a language model to produce captions describing images. we find that lms can describe the contents of most image representations, but performance varies based on the type of image encoder used.) to do this, we train a single linear layer to project from the representation space of images into the language space of a generative lm without tuning any other model parameters, which we call limber: linearly mapping between representation spaces. that is, we linearly transform an image representation into “soft prompts”–vector(s) in the embedding space that do not correspond to discrete language tokens (lester et al., 2021). the weights of this linear projection are tuned for an image captioning task (illustrated in figure 1).
we can then evaluate its performance on vision-language (vl) tasks at test time by exploring the text the lm generates. because of the simplicity of the linear transformation, we would expect that if the conceptual representation spaces of the two models are structured similarly, this transfer will be successful and the lm will have little trouble describing the contents of images. we use three different image encoders with increasing levels of linguistic supervision in pretraining: beit (bao et al., 2021), normalizer free resnet50 (nfrn50) (brock et al., 2021), and clip (radford et al., 2021) to train different projections into the lm. by linguistic supervision, we refer to the extent to which the image encoder was exposed to language data during its pretraining, thus influencing the expected representational similarity between it and an lm. while clip was pretrained to align images with full natural language captions in a shared image-text representation space, beit had no exposure to language and was trained by predicting the contents of masked out sections of images. nfrn50 falls somewhere between these extremes: having been pretrained on an image classification task for identifying the subject of an image over the set of classes in imagenet1k russakovsky et al. (2015). although there is no natural language in this task, the pretraining objective encourages the model to map visual features along lexical categorical concepts (the image classes) derived from the wordnet hierarchy (miller, 1995). we show that prompting an lm with any of the three image encoders effectively transfers semantic content in the image that the lm describes with natural language. however, performance also appears proportional to the strength of the linguistic supervision the image encoder had. while clip and nfrn50 perform competitively with tuning the models freely (e.g., tsimpoukelli et al. (2021), eichenberg et al. 
(2021)), beit appears to transfer mostly coarse-grained visual properties and struggles with encouraging the lm to generate exact lexical categories. we interpret this as evidence that models trained on either language or vision data learn conceptual spaces that are structurally similar to each other, but that the exact degree of similarity depends on the type of supervision the image encoder receives. in summary, we show: (1) that visual semantic information can be linearly mapped to language models in the form of soft prompts without tuning any model parameters. (2) that this mapping allows generative models to describe images and answer questions about images at a level that is comparable to what is achieved by multimodal models which tune image and language representations jointly. and (3) by training our prompting pipeline with different image encoder backbones, we demonstrate that linguistic supervision in pretraining plays a key role in concept formation in models and thus, the transferability of visual features from vision to text spaces. related work our approach takes inspiration from recent work in adapting pretrained language models for accepting representations of images as inputs. particularly, the frozen and magma models (tsimpoukelli et al., 2021; eichenberg et al., 2021), as well as sung et al. (2022); alayrac et al. (2022); mokady et al. (2021); luo et al. (2022); lin et al. (2021); zhai et al. (2022), which show that pretrained image and text networks can be tuned together on an image captioning task and applied to downstream vision-language (vl) tasks. these approaches either fine-tune the pretrained models, or train non-linear mlp projection/fusion networks between modalities, making interpretation of the representations difficult compared to our approach. scialom et al. 
(2020) show a learned linear transformation is sufficient for bert to encode image region representations which are then fed to a text decoder to generate questions about the image, but it is not well understood what abstractions lms are able to transfer from a transformation of this type, or if a text decoder can operate on linear transformations of visual encodings directly. pretrained/from scratch lms have typically been used in the past for image captioning applications in which an image representation is fed into the lm as input (desai & johnson, 2021; shen et al., 2021; devlin et al., 2015). gui et al. (2022); yuan et al. pretrain vision-language models from scratch using image-caption data. zeng et al. (2022); xie et al. (2022); wang et al. (2022) augment multimodal performance by feeding text prompts derived from vl models into an lm, in order to incorporate knowledge learned by lm training. these show lms can interface with visual inputs described in text; our work questions whether the visual input can be fed directly into the lm, without bridging through language first. the success of aforementioned models on vl tasks indicates there is a representational similarity learned by text and image models independently, which we investigate in this paper. our work is also highly related to the idea of model “stitching” (lenc & vedaldi, 2015) in which two different models are attached at a certain layer. limber can be described as stitching the output of an image encoder to the input of an lm in the form of soft prompts (lester et al., 2021). stitching offers distinct advantages in evaluating the representational similarity between two models, as described in bansal et al. (2021), over other conventional methods like rsa and cka (kriegeskorte et al., 2008; kornblith et al., 2019). 
for example, limber allows us to show not just that clip encodings are more similar to text encodings than beit representations, but that beit representations are nevertheless able to transfer visual property information to the lm (§5.3). there has been considerable interest in recent work in establishing whether lms model aspects of the non-linguistic world in order to model language. lu et al. (2021) show that the weights of a pretrained lm can generalize to tasks with different modalities. hao et al. (2022) similarly show that lms can act as interfaces for multiple modalities. li et al. (2021) show that models of entities and situations can be derived from contextual word representations. patel & pavlick (2022) show that very large lms (gpt-3 scale (brown et al., 2020)) can learn in-context non-linguistic conceptual domains depicted in text. our work differs from these in that we have an lm interface directly with non-text data without changing model weights and show that, although fundamentally different, the representation space of a text-only lm shares non-trivial similarities to that of several vision-based models. method: linearly mapping from image to text representations while previous work has shown success in mapping images to language model soft prompts as a method for multimodal pretraining (e.g., frozen, magma; see section 2), there have been no attempts to restrict the mechanism behind this mapping and understand how it works. our basic approach is to train a single linear layer p to project from the hidden size hi of a pretrained image encoder into the input space el of a generative language model for an image captioning task. the projected inputs do not correspond to discrete language tokens, and can be thought of as soft prompts (lester et al., 2021) representing the image. for brevity, we refer to training p as linearly mapping between representation spaces (i.e., limber)
our approach can also be viewed as paring down the method used in tsimpoukelli et al. (2021) and eichenberg et al. (2021), such that the only trained parameters reside in the projection p. by freezing the image encoder e and lm on either side of the projection, we can examine the similarities between the representation spaces of the two as a function of the ability of the lm to describe an image input or perform some task relating to it. we expect that, if a language model represents visual conceptual information structurally similarly to that learned by a vision encoder, then a simple linear transformation to the language space is all that is required to transfer visual features into the language model. before describing the training procedure, we will describe the basic components of the model, and the variations we chose. language model lm & image encoders e we hypothesize that the conceptual representations learned by an lm are equivalent, up to a linear transformation, to the representations from an image encoder e. the language model used is the 6 billion parameter decoder-only gpt-j model (wang & komatsuzaki, 2021). p is trained to project from hi to the input space el = 4096 of the lm. we train several models with different e’s to determine the compatibility between encodings from e and the lm. (footnote: we avoid specifying images or text in our backronym because one could linearly map between any two representation spaces of any modalities, e.g. video-to-text or text-to-text.) we also test how the choice of e influences performance on this task, specifically with regard to the degree of linguistic supervision e saw in pretraining, as described in section 1. from e we extract an image encoding of dimensionality hi representing the image. we then project that encoding to an el ∗ k sequence of soft prompts, which we hereafter refer to as image prompts. k is determined by the architecture of e.
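concretely, the projection p is just a matrix applied to each of the k encoder feature vectors; everything on either side stays frozen. a minimal numpy sketch follows, using the nfrn50-style dimensions from the paper; the random feature values and initialization scale are placeholders, not the trained weights:

```python
import numpy as np

d_img, d_lm, k = 2048, 4096, 2    # encoder hidden size, GPT-J input size, prompt length
rng = np.random.default_rng(0)
P = rng.normal(scale=0.02, size=(d_img, d_lm))   # the only trained parameters

def image_prompts(feature_map, P):
    """Map a frozen encoder's (k, d_img) features to (k, d_lm) soft prompts
    in the LM's input embedding space."""
    return feature_map @ P

feats = rng.normal(size=(k, d_img))   # stand-in for frozen-encoder output
prompts = image_prompts(feats, P)     # prepended to embed("a picture of")
assert prompts.shape == (k, d_lm)
```

during training, the captioning loss backpropagates through the frozen lm and encoder but updates only p, so the learned map directly measures how linearly compatible the two representation spaces are.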
for example, for consistency with the magma model, we use the 12x12x3072d feature map before pooling from clip, which we flatten to k = 12 ∗ 12 = 144. the encoders we experiment with are (1) clip rn50x16 (radford et al., 2021), k = 144, hi = 3072. because clip is trained to learn multimodal image-text embeddings, we expect that it will be easier for the model to learn a projection into language space than for a vision-only encoder. (2) nfrn50 (brock et al., 2021), k = 2, hi = 2048. we train three variants using nfresnet50: one pretrained and frozen during caption training (nfrn50), one tuned during caption training (nfrn50 tuned; note that the lm is still frozen), and one randomly initialized (nfrn50 random). the nfrn50 models are pretrained on an image classification task on data that is labeled according to wordnet hypo/hypernym structure. this signal trains the model to separate object classes according to these words. for this reason, we consider it to have indirect access to linguistic supervision. (3) beit-large (bao et al., 2021), k = 196, hi = 1024. beit is pretrained using a self-supervised masked visual token modeling task and does not have access to any labeled data which may give the model an inductive bias towards a linguistic structure. we use the 16-pixel patch version that was pretrained only on imagenet22k. we additionally test two variants of this model: beit random, which is randomly initialized, and beit ft, which was pretrained on the same task and then finetuned for image classification on the same dataset. we use this model to show that it is indeed the linguistic supervision of the pretraining objective which induces better performance in the captioning task.
all models are trained with the same basic hyperparameters and settings as described in the magma paper (see appendix a for details) on the conceptual captions 3m dataset (cc3m, sharma et al. (2018)) for 15,000 training steps. baselines as baselines, we use nfrn50 random, nfrn50 tuned, and train our own instance of magmabase. please note that nfrn50 tuned is a stand-in for the frozen model: it is architecturally the same, but differs in that we use the hyperparameters used to train the magma model. nfrn50 random allows us to test the efficacy of limber when the image encoder backbone has not learned any useful visual features. the magma model we train uses the clip rn50x16 image encoder (radford et al., 2021), gpt-j as the lm, and adapters in sequence in the attention blocks with a downsample factor of 4. limitations due to computational constraints, we did not control for the prompt length (k) for each image encoder. tsimpoukelli et al. (2021) experiment with a small range of values of k for the frozen model and show that while there are some differences, k is mostly a factor in hyperparameter tuning and should not strongly affect the comparison between models. we use much higher values of k for clip and beit, and this is therefore not strongly controlled for in our study. we consider lm runoff another potential confound. in some cases, if the lm recognizes and generates a relevant word for one concept (e.g., “the beach”), it might continue generating relevant information due to a strong linguistic prior for that info showing up (e.g., “building a sandcastle”), giving the illusion that it is recognizing every element in an image (even if it never saw “the sandcastle”). regardless, the scope of this problem is very limited, and across multiple large datasets our results show that recovery of image information is still possible, even if its full and precise extent is impossible to know.
we also include a ‘blind’ model in the visual question answering analysis to further control for this. figure 2: curated examples of captioning and zero-shot vqa illustrating the ability of each model to transfer information to the lm without tuning either model. we use these examples to also illustrate common failure modes for beit prompts of sometimes generating incorrect but conceptually related captions/answers. performance on vision-language tasks we first verify that image representations that are linearly projected into the input space of the lm carry semantic information about the content of the image that the lm can make sense of. since we only tune a single projection between the image encoder and text decoder, the prompt tokens in the lm are equivalent to the image representation up to that linear transformation. if lms are learning a conceptual space that reflects that of the non-linguistic, purely visually grounded space of the image encoder, the lm should be able to capture the image information and describe it in text. data we evaluate on image prompts generated by each image encoder on multiple image captioning datasets: mscoco (lin et al., 2014) and nocaps (agrawal et al., 2019), as well as the vqa2 (goyal et al., 2017) visual question-answering dataset. following convention from simvlm and magma, we input the prefix “a picture of” after every image to prompt the model. like in previous work, we find that this is a favorable prompt which tends to increase performance. metrics for image captioning, we report cider-d (vedantam et al., 2015), clipscore, and refclipscore (hessel et al., 2021). cider-d rewards generating accurate words which are more likely to be visually informative, and clipscore can evaluate similarity between an image and caption without references, which helps us give credit for captions that vary greatly from the ground truth but are similar in semantic content (e.g. describing a pool as a lake).
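as an illustration of reference-free scoring, clipscore (hessel et al., 2021) rescales the cosine similarity between clip embeddings of the image and the candidate caption; the sketch below assumes precomputed embeddings as plain float lists and uses the paper's rescaling weight of 2.5:

```python
import math

# reference-free caption scoring in the spirit of clipscore (hessel et al.,
# 2021): a rescaled cosine similarity between the clip embedding of the image
# and that of the candidate caption. the weight 2.5 follows the paper;
# treating embeddings as plain lists of floats is a simplification here.
def clipscore(image_emb, caption_emb, w=2.5):
    dot = sum(a * b for a, b in zip(image_emb, caption_emb))
    norms = (math.sqrt(sum(a * a for a in image_emb))
             * math.sqrt(sum(b * b for b in caption_emb)))
    return w * max(dot / norms, 0.0)
```

because no reference caption enters the score, a caption calling a pool a lake is penalized only mildly, which is exactly the credit the text describes.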
we report additional captioning metrics in appendix b. for visual question answering, we follow the few-shot procedure used in eichenberg et al. (2021) in which we prompt the models with the “[image] q: [q] a:” format. we take the first word of the generation and, like in the magma paper, truncate to the length of the longest ground truth answer. we also use the normalization procedure and accuracy metric described in the vqa repo.2 results our main results can be seen in table 1. as evidenced by comparing magma and clip, and nfrn50 tuned and frozen, we find that there is relatively little benefit in training parameters in either encoder or decoder. note that the magma model we implemented is identical to the frozen clip model, with the exception that magma tunes the image encoder and lm. on captioning and vqa tasks, performance of the jointly-tuned models (magma, nfrn50 tuned) is not consistently better, and is often worse, than just training the projection with frozen models. this trend persists across over 10 automatic captioning metrics, which are described in appendix b. our results indicate that there is in fact a relationship between the linguistic supervision of the pretraining task 2https://github.com/gt-vision-lab/vqa [table 1: numeric entries not recoverable from extraction; rows cover coco and nocaps captioning (cider-d) and n-shot vqa accuracy for blind, nfrn50 random, nfrn50, nfrn50 tuned, beit random, beit, beit ft., clip, and the magma variants] table 1: captioning performance and visual question answering (vqa) accuracy for all variations on model architecture and image encoders used. on captioning, we see a consistent increasing trend in performance that correlates with an increase in linguistic supervision. however beit (the only vision-only model) performs far above a randomly initialized nfrn50 model and is on par with the other models on clipscore (clip-s) and refclipscore (ref-s) (hessel et al., 2021).
and performance on transferring to the lm: clip outperforms nfrn50, which outperforms beit. to confirm this, we apply limber to a beit model finetuned on image classification (beit ft.), and find that this model improves performance drastically, even outperforming clip on nocaps, and improving over beit on all metrics, including clipscore by 9-10 points. this suggests that the linguistic supervision in the pretraining task, rather than the architecture, is the important factor for successful transfer. we also see that beit performs at the level of our random baselines on vqa, suggesting a deficiency in relating visual information to more complex visual-linguistic reasoning tasks. notably, we find that even vanilla beit, which has no linguistic supervision in pretraining, still transfers well to the lm for captioning, far outperforming random nfrn50 across the board, which had no pretraining to learn visual features. we do find that beit captions use vaguer language and/or semantically related-but-incorrect descriptions of objects (figure 2; more examples in appendix b). we see this reflected in the clipscores of the captions as well, which reward semantic similarity rather than precise lexical overlap with a reference caption. beit captions score 62 and 63.6 for nocaps and coco respectively; on average only 4.5 points behind nfrn50 but 14.3 ahead of random nfrn50. perhaps the greatest failure of beit prompts is the inability to transfer details that the lm can use to answer questions about images (at 4-shot vqa, beit scores 31.72% while a ‘blind’ lm with no image input scores 36.99%). we hypothesize this is because beit representations do not encode visual information that corresponds well to lexical categories. in section 5, we provide evidence in favor of this hypothesis, and investigate the granularity of detail that prompts from each frozen encoder transfer to the lm.
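the vqa accuracy metric referenced above credits a predicted answer in proportion to how many of the ten annotators gave it; a minimal sketch (omitting the official string normalization):

```python
# sketch of the standard vqa2-style accuracy metric: a predicted answer gets
# credit min(#annotators who gave it / 3, 1) over the ten ground-truth
# answers. the official normalization (articles, punctuation, number words)
# applied in the vqa repo is omitted here for brevity.
def vqa_accuracy(predicted, ground_truth_answers):
    matches = sum(1 for a in ground_truth_answers if a == predicted)
    return min(matches / 3.0, 1.0)
```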
transfer of visual concepts examining the conditions that cause an image prompt to succeed or fail to transfer to the lm can help us understand the differences between the text and image representation spaces. doing so can also help us understand why beit prompts perform so poorly for vqa despite performing decently for captioning. in section 5.1, we analyze the ability to accurately generate specific lexical categories in captions when they appear in images (e.g., mentioning “squirrel” when given a picture of one). figure 3: on average, recall of nouns in generated captions follows the standard pattern (clip>nfrn50>beit). however, judging by wu-palmer similarity, beit performs nearly the same or better than nfrn50 and clip on 4/5 of the noun categories. this indicates that although beit struggles to transfer the exact correct concept, it is transferring a related one based on visual similarity. on the right we show this effect for individual vehicle words. beit may have never learned to distinguish the ‘bus’ concept, but the lm still understands to generate a highly related concept, i.e., another vehicle. average random wu-palmer similarity is around .4 consistently. following that, in section 5.3 we focus on mistakes the models make: when the lm generates a bad caption, does it generate a caption that describes entities with similar visual properties? for example, a caption generated from an image of a “small”, “woodland”, and “furry” animal might not mention the actual animal depicted (e.g., a squirrel); but does it instead mention a different but similar furry animal (e.g., a rabbit)? we find that only linguistically informed image encoders (nfrn50, clip) tend to strongly encode concepts aligning to lexical categories, but all pretrained models including beit encode property information approximately equally well, and far better than a randomly initialized image encoder baseline. 
transfer of lexical categorical concepts using the coco validation set, we count the top 50 nouns, modifiers (e.g., adjectives), and relations (e.g., verbs, prepositional phrases) that appear in the ground truth captions and calculate how often they appear in the generated captions that were used to calculate the scores in table 1. metrics we calculate the precision/recall/f1 for each word, broken down along conceptual categories. to test our hypothesis that beit transfers coarser information, we also report the wu-palmer similarity (wup) (wu & palmer, 1994) between the ground truth word and the most similar word in the generated caption. the wup score works by calculating the distance between the ground truth word and the generated word in the wordnet taxonomy, offering a way to measure ‘how close’ a word was to the correct answer. results in figure 3, we show that beit’s recall for nouns in categories like ‘people’, ‘environment’, ‘vehicles’, and ‘objects’ is lower than nfrn50 or clip, but is comparable in terms of wup similarity in many categories. unlike nfrn50 and clip’s pretraining, beit’s pretraining does not encourage it to learn conceptual differences between two similar looking objects that use different words. compared to prompts from a randomly initialized nfrn50, for which very few consistent patterns emerge, the lm can still extract the broad conceptual meaning behind beit prompts, as evidenced by high wup similarity (and clipscore results in table 1). we interpret these results as supporting the hypothesis that beit prompts transfer conceptual information from the purely visual to purely text space, but only in terms of coarse-grained conceptual information corresponding to visual properties, not lexical categories. our full analysis, including additional metrics and results for each individual word from the top 50 nouns, modifiers, and relations can be found in appendix b.
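the wup computation can be sketched over a toy taxonomy standing in for wordnet (the taxonomy below is hypothetical): wup(a, b) = 2 · depth(lcs(a, b)) / (depth(a) + depth(b)), where lcs is the deepest ancestor shared by both words:

```python
# toy illustration of wu-palmer similarity over a tiny hand-built taxonomy
# standing in for wordnet; the taxonomy below is invented for illustration.
PARENT = {
    "entity": None, "vehicle": "entity", "animal": "entity",
    "bus": "vehicle", "car": "vehicle", "dog": "animal",
}

def path_to_root(word):
    path = [word]
    while PARENT[path[-1]] is not None:
        path.append(PARENT[path[-1]])
    return path  # word, ..., root

def depth(word):
    return len(path_to_root(word))  # the root has depth 1

def wup(a, b):
    ancestors_a = set(path_to_root(a))
    lcs = next(w for w in path_to_root(b) if w in ancestors_a)
    return 2 * depth(lcs) / (depth(a) + depth(b))
```

under this taxonomy, mistaking a bus for a car scores 2/3, while mistaking a bus for a dog scores only 1/3, matching the intuition that beit often lands on a related vehicle word.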
probing to rule out the possibility that beit representations are encoding lexical concept information, but are merely unable to linearly transfer it to the lm due to representational differences, we train linear probes on several datasets for image classification. we find that beit typically does not encode finegrained information as well as nfrn50 or clip, though it far outperforms the randomly initialized nfrn50 baseline. we discuss training details and results in appendix e. transfer of coarse-grained perceptual concepts to better understand what beit encodes, if not word category information, we further investigate where errors arise, and how the structures of the embedding spaces for each frozen image encoder differ. for the sake of this analysis, we constrain the task to generating captions for pictures of animals. the reason for this narrower scope is that the captions are easier to analyze: the caption describing a picture of an animal should virtually always mention the name of that animal, and the word used to describe the animal is mostly unambiguous. figure 4: (a) left: wu-palmer similarity for captions in which the models don’t mention the animal shows that beit, nfrn50, and clip are all similarly close, meaning that even if they predict the wrong animal, it is on average very taxonomically similar. right: when the model mistakes one animal for another in the dataset, how similar are the awa properties for the true animal and the one it mistakes it most for? the average number of overlapping properties shows that animals predicted from beit are at least as similar to the real animal as those from nfrn50 and clip. the median is shown as the solid orange line while the dashed green line shows the mean. (b) umap projections of awa images: while nfrn50 and clip cluster tightly along lexical categories (color coded by animal), beit clusters most distinctly along animals that live in water/the ocean; encodings from the randomly initialized nfrn50 mostly overlap in one cluster.
data for this task we use the animals with attributes 2 (awa) dataset (xian et al., 2019) which contains 37k total images covering 50 animal classes. each animal class also comes with annotations for 85 properties describing the animals (e.g., ‘claws’, ‘stripes’, ‘jungle’), which allow us to analyze if prompts from certain encoders consistently make mistakes along any of these dimensions. metrics when an image prompt produces a caption, we can measure the similarity of any animals mentioned to the wordnet synset of the ground truth animal label. we can also measure similarity using the annotated properties provided by the awa dataset. for a given animal (e.g., “squirrel”), we can look at the other animal in the dataset that it is most often mistaken for (e.g., “rabbit”) and compare the proportion of properties that they share. results we generate captions for each image using prompts from each frozen image encoder. we consider a caption to be ‘correct’ if it contains the name of the animal the image depicts. clip and nfrn50 are correct most often: 59% and 43% of the time respectively. beit and the randomly initialized nfrn50 only achieve 13% and 0.4% accuracy, respectively. this aligns with previous observations that beit struggles with encoding fine-grained lexical level concepts. by looking at failure cases for each model, we can establish whether each model is predicting the presence of a similar animal or not. in figure 4a, we show that when captions generated from each model mistake one animal for another, the mistaken animals are highly similar to the ground truth animal when measuring both wu-palmer similarity (averages: beit: 0.8, nfrn50: 0.81, clip: 0.8) and overlap of awa properties (averages: beit: 0.62, nfrn50: 0.68, clip: 0.59). although beit prompts do not transfer the exact animal concept to the lm, the coarse grained perceptual information is transferred and ‘understood’ by the lm.
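the property-overlap comparison can be sketched as follows; the property sets here are invented stand-ins for the 85 awa annotations, and overlap is measured as intersection over union (one reasonable reading of "proportion of properties that they share"):

```python
# sketch of the awa property-overlap analysis: each animal class has a set of
# binary properties; when the model mistakes animal a for animal b, we
# measure the proportion of properties they share (intersection over union).
# the property sets below are hypothetical, not the actual awa annotations.
PROPS = {
    "squirrel": {"small", "furry", "claws", "tree"},
    "rabbit":   {"small", "furry", "claws", "grass"},
    "whale":    {"big", "water", "flippers"},
}

def property_overlap(a, b):
    return len(PROPS[a] & PROPS[b]) / len(PROPS[a] | PROPS[b])
```

a high overlap between a mistaken animal and the ground truth (squirrel vs. rabbit here) indicates the encoder transferred the right perceptual properties even when the lexical category was wrong.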
in figure 4b we create umap projections of the encodings for each image in awa and indeed find that nfrn50 and clip cluster according to tight lexical categories (the animal types), beit clusters most tightly by perceptual features, such as habitat, having flippers, etc. discussion & future work we connect image representations as inputs to an lm with an extremely simple transformation: a linear projection. we interpret the success of generating text relevant to an image with this approach as indicative of an underexplored representational similarity between language and vision representations. depending on the linguistic guidance in the pretraining task used to pretrain the image encoder, we see varying performance. using a vision-only encoder (beit) leads to generations from the lm that are often incorrect but close under measures of perceptual relatedness. unless finetuned with image classification, beit has no inductive bias in pretraining to distinguish concepts that we might normally distinguish with language, especially when they are perceptually very similar. the fact that only image encoders trained with linguistic supervision can do this suggests interesting future work on the role of language in category formation. despite strong performance with a linear transformation, the representation spaces of these models seem to contain differences that cannot be approximated in the language space. multimodal models, ideally, will learn richer representations by taking advantage of these differences. it is useful to think about how current multimodal pretraining objectives succeed and/or fail at doing this. limber can serve as a strong baseline for future multimodal models, as it provides a point of comparison for a minimal mapping between vision and language. we see limber as a useful tool for understanding how representations trained from different modalities can be similar or different.
for concepts that are represented similarly, can we take advantage of this fact to reduce the amount of data required for learning good text representations? where they are different, can multimodal models learn richer representations by incorporating information from both? for example, vision data may help with reporting bias in text corpora (paik et al., 2021). answering these questions can help us understand the limits of text-only pretraining, as well as how to better ground lms to non-linguistic data. conclusion
clifford neural layers for pde modeling johannes brandstetter microsoft research ai4science johannesb@microsoft.com rianne van den berg microsoft research ai4science rvandenberg@microsoft.com max welling microsoft research ai4science maxwelling@microsoft.com jayesh k. gupta microsoft autonomous systems and robotics research jayesh.gupta@microsoft.com abstract partial differential equations (pdes) see widespread use in sciences and engineering to describe simulation of physical processes as scalar and vector fields interacting and coevolving over time. due to the computationally expensive nature of their standard solution methods, neural pde surrogates have become an active research topic to accelerate these simulations. however, current methods do not explicitly take into account the relationship between different fields and their internal components, which are often correlated. viewing the time evolution of such correlated fields through the lens of multivector fields allows us to overcome these limitations. multivector fields consist of scalar, vector, as well as higher-order components, such as bivectors and trivectors. their algebraic properties, such as multiplication, addition and other arithmetic operations can be described by clifford algebras. to our knowledge, this paper presents the first usage of such multivector representations together with clifford convolutions and clifford fourier transforms in the context of deep learning. the resulting clifford neural layers are universally applicable and will find direct use in the areas of fluid dynamics, weather forecasting, and the modeling of physical systems in general. we empirically evaluate the benefit of clifford neural layers by replacing convolution and fourier operations in common neural pde surrogates by their clifford counterparts on 2d navier-stokes and weather modeling tasks, as well as 3d maxwell equations.
for similar parameter count, clifford neural layers consistently improve generalization capabilities of the tested neural pde surrogates. source code for our pytorch implementation is available at https://microsoft.github.io/cliffordlayers/ introduction most scientific phenomena are described by the evolution and interaction of physical quantities over space and time. the concept of fields is one widely used construct to continuously parameterize these quantities over chosen coordinates (mcmullin, 2002). prominent examples include (i) fluid mechanics, which has applications in domains ranging from mechanical and civil engineering, to geophysics and meteorology, and (ii) electromagnetism, which provides mathematical models for electric, optical, or radio technologies. the underlying equations of these examples are famously described in various forms of the navier-stokes equations and maxwell’s equations. for the majority of these equations, solutions are analytically intractable, and obtaining accurate predictions necessitates falling back on numerical approximation schemes often with prohibitive computation costs. deep learning’s success in various fields has led to a surge of interest in scientific applications, especially at augmenting and replacing numerical solving schemes in fluid dynamics with neural networks (li et al., 2020; kochkov et al., 2021; lu et al., 2021; rasp & thuerey, 2021; keisler, 2022; weyn et al., 2020; sønderby et al., 2020; pathak et al., 2022). taking weather simulations as our motivating example to ground our discussion, two different kinds of fields emerge: scalar fields such as temperature or humidity, and vector fields such as wind velocity or pressure gradients. current deep learning based approaches treat different vector field (a) scalar pressure field (b) vector wind velocity field figure 1: fields of the earth’s shallow water model. vector components of the wind velocities (right) are strongly related, i.e. 
they form a vector field. additionally, the wind vector field and the scalar pressure field (left) are related since the gradient of the pressure field causes air movement and subsequently influences the wind components. we therefore aim to describe scalar and vector field as one multivector field, which models the dependencies correctly. components the same as scalar fields, and stack all scalar fields along the channel dimension, thereby omitting the geometric relations between different components, both within vector fields as well as between individual vector and scalar fields. this practice leaves out important inductive bias information present in the input data. for example, wind velocities in the x- and y- directions are strongly related, i.e. they form a vector field. additionally, the wind vector field and the scalar pressure field are related since the gradient of the pressure field causes air movement and subsequently influences the wind components. in this work, we therefore build neural pde surrogates which model the relation between different fields (e.g. wind and pressure field) and field components (e.g. x- and y- component of the wind velocities). figure 1 shows an example of a wind vector field as per the earth’s shallow water model in two dimensions, and the related scalar pressure field. clifford algebras (suter, 2003; hestenes, 2003; 2012; dorst et al., 2010; renaud, 2020) are at the core intersection of geometry and algebra, introduced to simplify spatial and geometrical relations between many mathematical concepts. for example, clifford algebras naturally unify real numbers, vectors, complex numbers, quaternions, exterior algebras, and many more. most notably, in contrast to standard vector analysis where primitives are scalars and vectors, clifford algebras have additional spatial primitives for representing plane and volume segments. 
an expository example is the cross product of two vectors in 3 dimensions, which naturally translates to a plane segment spanned by these two vectors. the cross product is often represented as a vector due to its 3 independent components, but the cross product has a sign flip under reflection that a true vector does not. in clifford algebras, different spatial primitives can be summarized into objects called multivectors, as illustrated in figure 2. in this work, we replace operations over feature fields in deep learning architectures by their clifford algebra counterparts, which operate on multivector feature fields. operations on, and mappings between, multivectors are defined by clifford algebras. for example, we will endow a convolutional kernel with multivector components, such that it can convolve over multivector feature maps. scalar {1} vectors {e1, e2, e3} bivectors {e1e2, e1e3, e2e3} trivector {e1e2e3} multivector figure 2: multivector components of clifford algebras. background: clifford algebras we introduce important mathematical concepts and discuss three clifford algebras, cl2,0(r), cl0,2(r), cl3,0(r), which we later use for the layers introduced in section 3. a more detailed introduction as well as connections to complex numbers and quaternions is given in appendix a. clifford algebras. consider the vector space rn with standard euclidean product ⟨., .⟩, where n = p + q, and p and q are non-negative integers. a real clifford algebra clp,q(r) is an associative algebra1 generated by p + q orthonormal basis elements e1, . . . , ep+q of the generating vector space rn, such that the following quadratic relations hold: ei^2 = +1 for 1 ≤ i ≤ p; ej^2 = −1 for p < j ≤ p + q; eiej = −ejei for i ≠ j (1). the pair (p, q) is called the signature and defines a clifford algebra clp,q(r), together with the basis elements that span the vector space gp+q of clp,q(r).
vector spaces of clifford algebras have scalar elements and vector elements, but can also have elements consisting of multiple basis elements of the generating vector space rn, which can be interpreted as plane and volume segments. exemplary low-dimensional clifford algebras are: (i) cl0,0(r) which is a one-dimensional algebra that is spanned by the basis element {1} and is therefore isomorphic to r, the field of real numbers; (ii) cl0,1(r) which is a two-dimensional algebra with vector space g1 spanned by {1, e1} where the basis vector e1 squares to −1, and is therefore isomorphic to c, the field of complex numbers; (iii) cl0,2(r) which is a 4-dimensional algebra with vector space g2 spanned by {1, e1, e2, e1e2}, where e1, e2, e1e2 all square to −1 and anti-commute. thus, cl0,2(r) is isomorphic to the quaternions h. grade, dual, geometric product. the grade of a clifford algebra basis element is the dimension of the subspace it represents. for example, the basis elements {1, e1, e2, e1e2} of the vector space g2 of the clifford algebra cl2,0(r) have the grades {0, 1, 1, 2}. using the concept of grades, we can divide clifford algebras into linear subspaces made up of elements of each grade. the grade subspace of smallest dimension is m0, the subspace of all scalars (elements with 0 basis vectors of the generating vector space). elements of m1 are called vectors, elements of m2 are bivectors, and so on. in general, a vector space gp+q of a clifford algebra clp,q(r) can be written as the direct sum of all of these subspaces: gp+q = m0 ⊕ m1 ⊕ . . . ⊕ mp+q . the elements of a clifford algebra are called multivectors, containing elements of subspaces, i.e. scalars, vectors, bivectors, . . . , k-vectors. the basis element with the highest grade is called the pseudoscalar2, which in r2 corresponds to the bivector e1e2, and in r3 to the trivector e1e2e3. 
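the grade decomposition can be made concrete for cl2,0(r); a minimal sketch (representation chosen here for illustration) storing a multivector as coefficients over the basis {1, e1, e2, e1e2} and projecting onto a grade subspace mk:

```python
# concrete view of the grade decomposition g2 = m0 + m1 + m2 for cl2,0(r):
# a multivector is stored as coefficients over the basis {1, e1, e2, e1e2},
# and projecting onto the grade-k subspace mk just selects coefficients.
BASIS_GRADES = {"1": 0, "e1": 1, "e2": 1, "e1e2": 2}

def grade_project(mv, k):
    """keep only the grade-k part of a multivector (dict: basis -> coeff)."""
    return {b: c for b, c in mv.items() if BASIS_GRADES[b] == k}
```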
the dual a∗ of a multivector a is defined as a∗ = a ip+q , where ip+q represents the respective pseudoscalar of the clifford algebra. this definition allows us to relate different multivectors to each other, which is a useful property when defining clifford fourier transforms. for example, for clifford algebras in r2 the dual of the scalar is the bivector, and in r3, the dual of the scalar is the trivector. finally, the geometric product is a bilinear operation on multivectors. for arbitrary multivectors a, b, c ∈ gp+q, and scalar λ, the geometric product has the following properties: (i) closure, i.e. ab ∈ gp+q; (ii) associativity, i.e. (ab)c = a(bc); (iii) commutative scalar multiplication, i.e. λa = aλ; (iv) distributivity over addition, i.e. a(b + c) = ab + ac. the geometric product is in general non-commutative, i.e. ab ≠ ba. note that equation 1 describes the geometric product specifically between basis elements of the generating vector space. clifford algebras cl2,0(r) and cl0,2(r). the 4-dimensional vector spaces of these clifford algebras have the basis vectors {1, e1, e2, e1e2} where e1, e2 square to +1 for cl2,0(r) and to −1 for cl0,2(r). for cl2,0(r), the geometric product of two multivectors a = a0 + a1e1 + a2e2 + a12e1e2 and b = b0 + b1e1 + b2e2 + b12e1e2 is given by: ab = (a0b0 + a1b1 + a2b2 − a12b12) + (a0b1 + a1b0 − a2b12 + a12b2) e1 + (a0b2 + a2b0 + a1b12 − a12b1) e2 + (a0b12 + a12b0 + a1b2 − a2b1) e1e2 (2), which can be derived by collecting terms that multiply the same basis elements, see appendix a. a vector x = (x1, x2) ∈ r2 with standard euclidean product ⟨., .⟩ can be related to x1e1 + x2e2 ∈ r2 ⊂ g2. clifford multiplication of two vectors x, y ∈ r2 ⊂ g2 yields the geometric product xy = ⟨x, y⟩ + x ∧ y, where ∧ is the exterior or wedge product. the antisymmetric quantity x ∧ y = −y ∧ x is associated with the bivector, which can be interpreted as an oriented plane segment as shown in figure 3. figure 3: antisymmetry of the bivector exterior (wedge) product. 1operations of addition and multiplication are associative. 2in contrast to scalars, pseudoscalars change sign under reflections.
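the cl2,0(r) geometric product can be written out component-wise directly from the defining relations (e1 and e2 square to +1 and anti-commute, so (e1e2)^2 = −1); a minimal sketch:

```python
# component-wise cl2,0(r) geometric product of multivectors
# a = a0 + a1 e1 + a2 e2 + a12 e1e2, derived from the defining relations
# (e1 and e2 square to +1 and anti-commute, hence (e1e2)^2 = -1).
def gp20(a, b):
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (
        a0 * b0 + a1 * b1 + a2 * b2 - a12 * b12,   # scalar part
        a0 * b1 + a1 * b0 - a2 * b12 + a12 * b2,   # e1 part
        a0 * b2 + a2 * b0 + a1 * b12 - a12 * b1,   # e2 part
        a0 * b12 + a12 * b0 + a1 * b2 - a2 * b1,   # e1e2 (bivector) part
    )
```

for two pure vectors this reproduces xy = ⟨x, y⟩ + x ∧ y: the scalar part is the euclidean inner product and the bivector part the wedge. this product is also the per-pixel operation that a cl(2,0) clifford cnn layer applies between input-field and kernel components.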
a unit bivector i2, spanned by the (orthonormal) basis vectors e1 and e2, is determined by the product i2 = e1e2 (4), which if squared yields i2^2 = e1e2e1e2 = −e1e1e2e2 = −1. thus, i2 represents a geometric √−1. from equation 4, it follows that e2 = e1i2 = −i2e1 and e1 = i2e2 = −e2i2. using the pseudoscalar i2, the dual of a scalar is a bivector and the dual of a vector is again a vector. the dual pairs of the base vectors are 1 ↔ e1e2 and e1 ↔ e2. for cl2,0(r), these dual pairs allow us to write an arbitrary multivector a as a = (a0 + a12 i2) + e1 (a1 + a2 i2), which can be regarded as two complex-valued parts: the spinor3 part (a0 + a12 i2), which commutes with the base element 1, i.e. 1i2 = i21, and the vector part e1 (a1 + a2 i2), which anti-commutes with the respective base element e1, i.e. e1i2 = e1e1e2 = −e1e2e1 = −i2e1. for cl0,2(r), the vector part changes to e1 (a1 − a2 i2). this decomposition will be the basis for clifford fourier transforms. the clifford algebra cl0,2(r) is isomorphic to the quaternions h, which are an extension of complex numbers and are commonly written in the literature as a + bˆı + cˆȷ + dˆk. quaternions also form a 4-dimensional algebra spanned by {1, ˆı, ˆȷ, ˆk}, where ˆı, ˆȷ, ˆk all square to −1. the algebra isomorphism to cl0,2(r) is easy to verify since e1, e2, e1e2 all square to −1 and anti-commute. the basis element 1 is called the scalar part and the basis elements ˆı, ˆȷ, ˆk are called the vector part of a quaternion. quaternions have practical uses in applied mathematics, particularly for expressing rotations, which we will use to define the rotational clifford convolution layer in section 3. clifford algebra cl3,0(r). the 8-dimensional vector space g3 of the clifford algebra cl3,0(r) has the basis vectors {1, e1, e2, e3, e1e2, e3e1, e2e3, e1e2e3}, i.e. it consists of one scalar, three vectors {e1, e2, e3}, three bivectors {e1e2, e3e1, e2e3}4, and one trivector e1e2e3. the trivector is the pseudoscalar i3 of the algebra.
the geometric product of two multivectors is defined analogously to the geometric product of cl2,0(r), see appendix a. the dual pairs of cl3,0(r) are: 1 ↔ e1e2e3 = i3, e1 ↔ e2e3, e2 ↔ e3e1, and e3 ↔ e1e2. an intriguing example of the duality of the multivectors of cl3,0(r) emerges when writing the expression of the electromagnetic field f in terms of an electric vector field e and a magnetic vector field b, such that f = e + b i3, where e = ex e1 + ey e2 + ez e3 and b = bx e1 + by e2 + bz e3. in this way the electromagnetic field f decomposes into electric vector and magnetic bivector parts via the pseudoscalar i3 (hestenes, 2003). for example, for the base component bx e1 of b it holds that bx e1 i3 = bx e1e1e2e3 = bx e2e3, which is a bivector and the dual to the base component ex e1 of e. consequently, the multivector representing f consists of three vectors (the electric field components) and three bivectors (the magnetic field components multiplied by i3). this viewpoint gives clifford neural layers a natural advantage over their default counterparts as we will see in section 4. clifford neural layers here, we introduce 2d clifford convolution and 2d clifford fourier transform layers. appendix b contains extensions to 3 dimensions. in appendices b, d, related literature is discussed, most notably complex (bassey et al., 2021) and quaternion neural networks (parcollet et al., 2020). clifford cnn layers. regular convolutional neural network (cnn) layers take as input feature maps f : z2 → rcin and convolve5 them with a set of cout filters {wi}, i = 1, . . . , cout, with wi : z2 → rcin: [f ⋆ wi](x) = Σy ⟨f(y), wi(y − x)⟩ = Σy Σj=1..cin f j(y) wi,j(y − x) , 3spinors are elements of a complex vector space that can be associated with euclidean space. unlike vectors, spinors transform to their negative when rotated 360◦. 4the bivector e1e3 has negative orientation. 5in deep learning, a convolution operation in the forward pass is implemented as cross-correlation.
which can be interpreted as an inner product of input feature maps with the corresponding filters at every point x ∈ z². by applying cout filters, the output feature maps can be interpreted as cout-dimensional feature vectors at every point x ∈ z². we now extend cnn layers such that the element-wise product of scalars f^j(y) w^{i,j}(y − x) is replaced by the geometric product of multivector inputs and multivector filters f^j(y) w^{i,j}(y − x), where the chosen signature of cl is reflected in the geometric product. we replace the feature maps f : z² → r^cin by multivector feature maps f : z² → (g2)^cin and convolve them with a set of cout multivector filters {wi}_{i=1}^{cout} : z² → (g2)^cin:

[f ⋆ wi](x) = ∑_{y∈z²} ∑_{j=1}^{cin} f^j(y) w^{i,j}(y − x) , (7)

where the products inside the sum are now geometric products. note that each geometric product, indexed by i ∈ {1, ..., cout} and j ∈ {1, ..., cin}, now results in a new multivector rather than a scalar. hence, the output of a layer is a grid of cout multivectors. we can e.g. implement a cl(2,0)(r) clifford cnn layer using equation 2, where {b0, b1, b2, b12} → {w^{i,j}_0, w^{i,j}_1, w^{i,j}_2, w^{i,j}_12} correspond to 4 different kernels representing one 2d multivector kernel, i.e. 4 different convolution layers, and {a0, a1, a2, a12} → {f^j_0, f^j_1, f^j_2, f^j_12} correspond to the scalar, vector and bivector parts of the input multivector field. the channels of the different layers represent different stacks of scalars, vectors, and bivectors. analogously, we can implement a cl(3,0)(r) cnn layer using equation 42 in appendix b. a schematic sketch of a clifford convolution layer is shown in figure 4.

figure 4: sketch of clifford convolution. multivector input fields are convolved with multivector kernels via the geometric product, yielding multivector output fields.

rotational clifford cnn layers.
here we introduce an alternative parameterization to the clifford cnn layer introduced in equation 7 by using the isomorphism of the clifford algebra cl0,2(r) to quaternions. we take advantage of the fact that a quaternion rotation can be realized by a matrix multiplication (jia, 2008; kuipers, 1999; schwichtenberg, 2015). using the isomorphism, we can represent the feature maps f^j and filters w^{i,j} as quaternions: f^j = f^j_0 + f^j_1 ı̂ + f^j_2 ȷ̂ + f^j_12 k̂ and w^{i,j} = w^{i,j}_0 + w^{i,j}_1 ı̂ + w^{i,j}_2 ȷ̂ + w^{i,j}_12 k̂⁶. we can now devise an alternative parameterization of the product between the feature map f^j and w^{i,j}. to be more precise, we introduce a composite operation that results in a scalar quantity and a quaternion rotation, where the latter acts on the vector part of the quaternion f^j and only produces nonzero expansion coefficients for the vector part of the quaternion output. a quaternion rotation w^{i,j} f^j (w^{i,j})^{−1} acts on the vector part (ı̂, ȷ̂, k̂) of f^j, and can be algebraically manipulated into a vector-matrix operation r^{i,j} f^j, where r^{i,j} : h → h is built up from the elements of w^{i,j} (kuipers, 1999). in other words, one can transform the vector part (ı̂, ȷ̂, k̂) of f^j ∈ h via a rotation matrix r^{i,j} that is built from the scalar and vector part (1, ı̂, ȷ̂, k̂) of w^{i,j} ∈ h. altogether, a rotational multivector filter {w^i_rot}_{i=1}^{cout} : z² → (g2)^cin acts on the feature map f : z² → (g2)^cin through a rotational transformation r^{i,j}(w^{i,j}_rot,0, w^{i,j}_rot,1, w^{i,j}_rot,2, w^{i,j}_rot,12) acting on the vector and bivector parts of the multivector feature map, and an additional scalar response of the multivector filters:

[f ⋆ w^i_rot](x) = ∑_{y∈z²} ∑_{j=1}^{cin} [f^j(y) w^{i,j}_rot(y − x)]_0 + r^{i,j}(y − x) · (f^j_1(y), f^j_2(y), f^j_12(y)) , (8)

where [f^j(y) w^{i,j}_rot(y − x)]_0 = f^j_0 w^{i,j}_rot,0 − f^j_1 w^{i,j}_rot,1 − f^j_2 w^{i,j}_rot,2 − f^j_12 w^{i,j}_rot,12, i.e., the scalar output of the geometric product of cl0,2(r) as in equation 34. a detailed description of the rotational multivector filters r^{i,j}(y − x) is outlined in appendix b.
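the identity behind this reparameterization, that the conjugation w v w⁻¹ equals a matrix-vector product r v, can be sketched numerically. the function names below are mine; the closed-form matrix is the standard one from kuipers (1999):

```python
import numpy as np

def quat_mul(p, q):
    """hamilton product of quaternions given as (scalar, i, j, k)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def rotation_matrix(w):
    """3x3 matrix r such that r @ v equals the vector part of w (0, v) w^-1;
    w is normalized so arbitrary nonzero coefficients yield a pure rotation."""
    a, b, c, d = w / np.linalg.norm(w)
    return np.array([
        [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
        [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
        [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
    ])
```

in a rotational clifford layer, one such matrix per filter/channel pair acts on the vector and bivector components of the feature map, while the scalar response is computed separately.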
while in principle the clifford cnn layer in equation 7 and the rotational clifford cnn layer in equation 8 are equally flexible, our experiments in section 4 show that rotational clifford cnn layers lead to better performance.

⁶ note that the expansion coefficients for the feature map f^j and filters w^{i,j} in terms of the basis elements of g2 and in terms of the quaternion elements ı̂, ȷ̂ and k̂ are the same.

clifford convolutions satisfy the property of equivariance under translation of the multivector inputs, as shown in theorem 1 in appendix b. analogous to theorem 1, translation equivariance can be derived for rotational clifford cnn layers.

clifford fourier layers. the discrete fourier transform of an n-dimensional complex signal f(x) = f(x1, ..., xn) : rⁿ → c at M1 × ... × Mn grid points is defined as:

f{f}(ξ1, ..., ξn) = ∑_{m1=0}^{M1−1} ... ∑_{mn=0}^{Mn−1} f(m1, ..., mn) · e^{−2πi·(m1ξ1/M1 + ... + mnξn/Mn)} , (9)

where (ξ1, ..., ξn) ∈ z_{M1} × ... × z_{Mn}. in fourier neural operators (fno) (li et al., 2020), discrete fourier transforms on real-valued input fields and respective back-transforms – implemented as fast fourier transforms on real-valued inputs (rffts)⁷ – are interleaved with a weight multiplication by a complex weight matrix of shape cin × cout for each mode, which results in a complex-valued weight tensor of the form w ∈ c^{cin×cout×(ξmax_1 ×...× ξmax_n)}, where fourier modes above cut-off frequencies (ξmax_1, ..., ξmax_n) are set to zero. additionally, a residual connection is usually implemented as a convolution layer with kernel size 1. in figure 5a, a sketch of an fno layer is shown.

for cl(2,0)(r), the clifford fourier transform (ebling & scheuermann, 2005; ebling, 2006; hitzer, 2012) for multivector-valued functions f(x) : r² → g2 and vectors x, ξ ∈ r² is defined as:

f̂(ξ) = f{f}(ξ) = ∫_{r²} f(x) e^{−2πi2⟨x,ξ⟩} dx , ∀ξ ∈ r² , (10)

provided that the integral exists.
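treating the two dual pairs f0 + f12 i2 and f1 + f2 i2 as complex-valued signals, a discrete version of equation 10 reduces to two standard ffts. a numpy sketch, with component layout and names of my own choosing:

```python
import numpy as np

def clifford_fft2(f):
    """discrete cl(2,0) clifford fourier transform of a multivector field
    f: (n, m, 4) with components (scalar, e1, e2, e1e2).  the dual pairs
    f0 + f12*i2 (spinor) and f1 + f2*i2 (vector) are treated as complex
    signals and transformed with two standard 2d ffts."""
    spinor = f[..., 0] + 1j * f[..., 3]   # f0 + f12 i2
    vector = f[..., 1] + 1j * f[..., 2]   # f1 + f2 i2
    return np.fft.fft2(spinor), np.fft.fft2(vector)

def clifford_ifft2(spinor_hat, vector_hat):
    """inverse transform back to the (n, m, 4) multivector field."""
    spinor = np.fft.ifft2(spinor_hat)
    vector = np.fft.ifft2(vector_hat)
    return np.stack([spinor.real, vector.real, vector.imag, spinor.imag], axis=-1)
```

the round trip recovers the multivector field exactly (up to floating point), illustrating that the 2d clifford fourier transform is a linear combination of two classical fourier transforms.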
in contrast to standard fourier transforms, f(x) and f̂(ξ) represent multivector fields in the spatial and the frequency domain, respectively. furthermore, i2 = e1e2 is used in the exponent. inserting the definition of multivector fields, we can rewrite equation 10 as:

f{f}(ξ) = ∫_{r²} f0(x) e^{−2πi2⟨x,ξ⟩} dx + (∫_{r²} f1(x) e^{−2πi2⟨x,ξ⟩} dx) e1 , (11)

i.e. we obtain a clifford fourier transform by applying two standard fourier transforms to the dual pairs f0 = f0(x) + f12(x)i2 and f1 = f1(x) + f2(x)i2, which both can be treated as complex-valued signals f0, f1 : r² → c. consequently, f(x) can be understood as an element of c². the 2d clifford fourier transform is the linear combination of two classical fourier transforms. discrete versions of equation 11 are obtained analogously to equation 9, see appendix b. similar to fno, multivector weight tensors w ∈ (g2)^{cin×cout×(ξmax_1 × ξmax_2)} are applied, where again fourier modes above cut-off frequencies (ξmax_1, ξmax_2) are set to zero. in doing so, we point-wise modify the clifford fourier modes f̂(ξ) = f{f}(ξ) = f̂0(ξ) + f̂1(ξ)e1 + f̂2(ξ)e2 + f̂12(ξ)e12 via the geometric product. the clifford fourier modes follow naturally when combining spinor and vector parts of equation 11. finally, the residual connection is replaced by a clifford convolution with multivector kernel k. a schematic sketch is shown in figure 5b. for cl(3,0)(r), clifford fourier transforms follow a similar elegant construction, where we apply four separate fourier transforms to the dual pairs, i.e. scalar/trivector and vector/bivector components are combined into complex fields and then subjected to a fourier transform.

experiments

we assess clifford neural layers for different architectures in three experimental settings: incompressible navier-stokes equations, shallow water equations for weather modeling, and 3-dimensional maxwell's equations.
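for reference in the experiments below, a single fno-style spectral layer as described above (a mode-truncated complex weight multiplication between an rfft and its inverse, plus a kernel-size-1 residual convolution) can be sketched in numpy. this is a simplified sketch under my own conventions, not the authors' implementation; for brevity it keeps only the lowest non-negative frequencies along the first axis, whereas practical implementations also retain the mirrored negative modes:

```python
import numpy as np

def fno_layer(f, w_modes, w_res, modes=(8, 8)):
    """one fno-style spectral layer on a real 2d field stack.
    f: (c_in, n, m); w_modes: complex (c_in, c_out, modes[0], modes[1]);
    w_res: (c_in, c_out) pointwise (kernel-size-1 conv) residual weights."""
    c_in, n, m = f.shape
    f_hat = np.fft.rfft2(f)                               # (c_in, n, m//2 + 1)
    out_hat = np.zeros((w_modes.shape[1], n, m // 2 + 1), dtype=complex)
    mx, my = modes
    # keep only low frequencies; mix channels with a complex weight per mode
    out_hat[:, :mx, :my] = np.einsum('ixy,ioxy->oxy', f_hat[:, :mx, :my], w_modes)
    spectral = np.fft.irfft2(out_hat, s=(n, m))           # back to real space
    residual = np.einsum('inm,io->onm', f, w_res)         # kernel-size-1 conv
    return spectral + residual
```

a clifford fourier layer replaces the rfft by complex ffts over the dual pairs of a multivector field and replaces the per-mode complex multiplication by the geometric product with a multivector weight tensor.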
we replace carefully designed baseline architectures by their clifford counterparts. baseline resnet architectures comprise 8 residual blocks, each consisting of two convolution layers with 3 × 3 kernels, shortcut connections, group normalization (wu & he, 2018), and gelu activation functions (hendrycks & gimpel, 2016). baseline 2- and 3-dimensional fourier neural operators (fnos) consist of 8 (4) fno blocks, gelu activations and no normalization scheme, using 16 (8) fourier modes for the 2- and 3-dimensional equations, respectively. for clifford networks, we change convolutions and fourier transforms to their respective clifford operation, and substitute normalization techniques and activation functions with clifford counterparts, keeping the number of parameters similar. we evaluate different training set sizes, and report losses for scalar and vector fields. all datasets share the common trait of containing multiple input and output fields.

figure 5: sketch of fourier neural operator (fno) layers (a) and clifford fourier operator (cfno) layers (b). the real-valued fast fourier transform (rfft) over real-valued scalar input fields f(x) is replaced by the complex fast fourier transform (fft) over the complex-valued dual parts v(x) and s(x) of multivector fields f(x). pointwise multiplication in fourier space via the complex weight tensor w is replaced by the geometric product in clifford fourier space via the multivector weight tensor w. additionally, the convolution path is replaced by clifford convolutions with multivector kernels w.

⁷ the fft of a real-valued signal is hermitian-symmetric, so the output contains only the positive frequencies below the nyquist frequency for the last spatial dimension.
more precisely, one scalar and one 2-dimensional vector field in case of the navier-stokes and the shallow water equations, and a 3-dimensional (electric) vector field and its dual (magnetic) bivector field in case of maxwell's equations. example inputs and targets of the neural pde surrogates are shown in figure 6. the number of input timesteps t varies for different experiments. the one-step loss is the mean-squared error at the next timestep summed over fields. the rollout loss is the mean-squared error after applying the neural pde surrogate 5 times, summing over fields and time dimension. more information on the implementation details of the tested architectures, loss functions, and more detailed results can be found in appendix c.

navier-stokes in 2d. the incompressible navier-stokes equations (temam, 2001) conserve the velocity flow fields v : x → r² where x ∈ r² via:

∂v/∂t = −v · ∇v + µ∇²v − ∇p + f ,  ∇ · v = 0 ,

where v · ∇v is the convection, i.e. the rate of change of v along v, µ∇²v the viscosity, i.e. the diffusion or net movement of v, ∇p the internal pressure and f an external force, which in our case is a buoyancy force. an additional incompressibility constraint ∇ · v = 0 yields mass conservation of the navier-stokes equations. in addition to the velocity field, we introduce a scalar field representing a scalar quantity, i.e. smoke, that is being transported via the velocity field. the scalar field is advected by the vector field, i.e. as the vector field changes, the scalar field is transported along with it, whereas the scalar field influences the vector field only via an external force term. we call this weak coupling between vector and scalar fields.

figure 6: example input and target fields for the navier-stokes experiments. input fields comprise a t = 2 timestep history.
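the one-step and rollout objectives described above can be sketched as follows (`surrogate` stands for any callable advancing a stack of fields one timestep; the names are mine):

```python
import numpy as np

def one_step_loss(surrogate, u_t, u_next):
    """mean-squared error at the next timestep, summed over fields.
    u_t, u_next: (fields, n, m) stacks of scalar/vector field components."""
    pred = surrogate(u_t)
    return ((pred - u_next) ** 2).mean(axis=(-2, -1)).sum()

def rollout_loss(surrogate, u_t, u_future, steps=5):
    """mse after applying the surrogate `steps` times autoregressively,
    summed over fields and the time dimension.
    u_future: (steps, fields, n, m) ground-truth trajectory."""
    total, state = 0.0, u_t
    for s in range(steps):
        state = surrogate(state)   # feed predictions back in
        total += ((state - u_future[s]) ** 2).mean(axis=(-2, -1)).sum()
    return total
```

the rollout loss penalizes accumulated autoregressive error, which is why models that merely overfit one-step predictions tend to score worse on it.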
we implement the 2d navier-stokes equations using φflow⁸ (holl et al., 2020), obtaining data on a grid with spatial resolution of 128 × 128 (∆x = 0.25, ∆y = 0.25), and temporal resolution of ∆t = 1.5 s. results for one-step loss and rollout loss on the test set are shown in figure 7a. for resnet-like architectures, we observe that both cresnet and cresnetrot improve upon the resnet baseline. additionally, we observe that rollout losses are also lower for the two clifford based architectures, which we attribute to better and more stable models that do not overfit to one-step predictions so easily. lastly, while in principle cresnet and cresnetrot based architectures are equally flexible, cresnetrot ones in general perform better than cresnet ones. for fno and respective clifford fourier based (cfno) architectures, the loss is in general much lower than for resnet based architectures. cfno architectures improve upon fno architectures for all dataset sizes, and for one-step as well as rollout losses.

figure 7: results for resnet based (left) and fourier based (right) architectures on the 2-dimensional navier-stokes (a) and shallow water (b) experiments. one-step and rollout losses (mse) are shown over the number of training trajectories.

⁸ https://github.com/tum-pbs/phiflow

shallow water equations. this set of coupled equations (vreugdenhil, 1994) can be derived from integrating the incompressible navier-stokes equations, in cases where the horizontal length scale is much larger than the vertical length scale.
as such, the equations model a thin layer of fluid of constant density in hydrostatic balance, bounded from below by the bottom topography and from above by a free surface, via 3 coupled pdes describing the velocity in the x-direction, the velocity in the y-direction, and the scalar pressure field. the shallow water equations can therefore be used as a simplified weather model, as done in this work and exemplified in figure 1. the relation between vector and scalar components is relatively strong (strong coupling due to the 3 coupled pdes). we obtain data for the 2d shallow water equations on a grid with spatial resolution of 192 × 96 (∆x = 1.875°, ∆y = 3.75°), and temporal resolution of ∆t = 6 h. we observe similar results as for the navier-stokes experiments. for low numbers of trajectories, resnet architectures seem to lack expressiveness, where arguably some data smoothing is learned first. thus, resnets need significantly more trajectories compared to (c)fno architectures to obtain reasonable loss values, which seems to go hand in hand with clifford layers gaining an advantage. in general, performance differences between baseline and clifford architectures are even more pronounced, which we attribute to the stronger coupling of the scalar and the vector fields.

maxwell's equations in matter in 3d. in isotropic media, maxwell's equations (griffiths, 2005) propagate solutions of the displacement field d, which is related to the electric field via d = ε0εr e, where ε0 is the permittivity of free space and εr is the permittivity of the medium, and the magnetizing field h, which is related to the magnetic field b via h = b/(µ0µr), where µ0 is the permeability of free space and µr is the permeability of the medium. the electromagnetic field f has the intriguing property that the electric field e and the magnetic field b are dual pairs, thus f = e + bi3, i.e. strong coupling between the electric field and its dual (bivector) magnetic field.
this duality also holds for d and h. concretely, the fields of interest are the vector-valued d-field (dx, dy, dz) and the vector-valued h-field (hx, hy, hz). we obtain data for the 3d maxwell's equations on a grid with spatial resolution of 32 × 32 × 32 (∆x = ∆y = ∆z = 5 · 10⁻⁷ m), and temporal resolution of ∆t = 50 s. we randomly place 18 different light sources outside a cube, which emit light with different amplitudes and different phase shifts, causing the resulting d and h fields to interfere. the wavelength of the emitted light is 10⁻⁵ m. we test fno based architectures and respective clifford counterparts (cfno). due to the vector-bivector character of electric and magnetic field components, maxwell's equations are an ideal playground to stress-test the inductive bias advantages of clifford based architectures. results for one-step loss and rollout loss on the test set are shown in figure 8. cfno architectures improve upon fno architectures, especially for low numbers of trajectories. the results demonstrate the much stronger inductive bias of clifford based 3-dimensional fourier layers, and their general applicability to 3-dimensional problems, which are structurally even more interesting than 2-dimensional ones.

conclusion

we introduced clifford neural layers that handle the various scalar (e.g. charge density), vector (e.g. electric field), bivector (e.g. magnetic field) and higher order fields as proper geometric objects organized as multivectors. this geometric algebra perspective allowed us to naturally generalize convolution and fourier transformations to their clifford counterparts, providing an elegant rule to design new neural network layers. the multivector viewpoint confers an inductive bias advantage, leading to a better representation of the relationship between fields and their individual components, which is prominently demonstrated by the fact that our clifford layers significantly outperformed equivalent standard neural pde surrogates.
figure 8: results for fourier based architectures on maxwell's equations. one-step and rollout losses (mse) are shown over the number of training trajectories.

limitations. one limitation is the current speed of fast fourier transform (fft) operations on machine learning accelerators like gpus. while an active area of research, currently available versions of cufft⁹ kernels wrapped in pytorch (paszke et al., 2019) are not yet as heavily optimized¹⁰, especially for the gradient pass. in contrast to fno layers, which operate on real-valued signals, clifford fourier layers use complex-valued fft operations, where the backward pass is approximately twice as slow. for similar parameter counts, inference times of fno and cfno networks are similar. similar to grassucci et al. (2021), who investigated the speed of geometric convolution layers, we found that clifford convolutions are more parameter efficient since they share parameters among filters, with the downside that the net number of operations is larger, resulting in training times increased by a factor of about 2. finally, from a pde point of view, the presented approaches to obtain pde surrogates are limited since the neural networks have to be retrained for different equation parameters or e.g. different ∆t.

future work. besides modeling of pdes, weather, and fluid dynamics, we see potential applications of clifford layers for e.g. mri or radar data, and for neural implicit representations (xie et al., 2022; rella et al., 2022). extensions towards graph networks and attention based models will be useful to explore. furthermore, custom multivector gpu kernels can overcome many of the speed issues, as the compute density of clifford operations is much higher, which is better for hardware accelerators (hoffmann et al., 2020). the use of a just-in-time compiled language with better array abstractions like julia (bezanson et al., 2017) could significantly simplify the interface.

⁹ https://developer.nvidia.com/cufft
¹⁰ for alternative efficient gpu-accelerated multidimensional fft libraries see e.g. https://github.com/dtolm/vkfft
finally, combining the ideas of multivector modeling together with various physics-informed neural network approaches (raissi et al., 2019; lutter et al., 2018; gupta et al., 2019; cranmer et al., 2020; zubov et al., 2021) is an attractive next step.

reproducibility and ethical statement

reproducibility statement. we have included error bars and ablation studies wherever we found it necessary and appropriate. we have described our architectures in section 4 and provided further implementation details in appendix section c. we have further included pseudocode for the newly proposed layers in appendix section b.6. we open-sourced our pytorch implementation at https://microsoft.github.io/cliffordlayers/ for others to use. we aim to develop this codebase further in the future.

ethical statement. neural pde surrogates will play an important role in modeling many natural phenomena, and thus developing them further might enable us to achieve shortcuts or alternatives for computationally expensive simulations. for example, if used as such, pde surrogates will potentially help to advance different fields of research, especially in the natural sciences. examples related to this paper are fluid dynamics or weather modeling. therefore, pde surrogates might potentially be directly or indirectly related to reducing the carbon footprint. on the downside, relying on simulations always requires rigorous cross-checks and monitoring, especially when we "learn to simulate".

references

azzam alfarraj and guo-wei wei. geometric algebra generation of molecular surfaces. journal of
troy arcomano, istvan szunyogh, jaideep pathak, alexander wikner, brian r hunt, and edward ott. a machine learning-based global atmospheric forecast model. geophysical research letters, 47(9):e2020gl087776, 2020.
jimmy lei ba, jamie ryan kiros, and geoffrey e hinton. layer normalization. arxiv preprint.
yohai bar-sinai, stephan hoyer, jason hickey, and michael p brenner. learning data-driven discretizations for partial differential equations. proceedings of the national academy of sciences, 116(31):15344–15349, 2019.
joshua bassey, lijun qian, and xianfang li. a survey of complex-valued neural networks. arxiv preprint.
jeff bezanson, alan edelman, stefan karpinski, and viral b shah. julia: a fresh approach to numerical computing. siam review, 59(1):65–98, 2017. url https://doi.org/10.1137/141000671.
saakaar bhatnagar, yaser afshar, shaowu pan, karthik duraisamy, and shailendra kaushik. prediction of aerodynamic flow fields using convolutional neural networks. computational mechanics, 64(2):525–545, 2019.
fred brackx, eckhard hitzer, and stephen j sangwine. history of quaternion and clifford-fourier transforms and wavelets. quaternion and clifford fourier transforms and wavelets, 27:xi–xxvii, 2013.
johannes brandstetter, rob hesselink, elise van der pol, erik bekkers, and max welling. geometric and physical quantities improve e(3) equivariant message passing. arxiv preprint arxiv:2110.02905, 2021.
johannes brandstetter, max welling, and daniel e worrall. lie point symmetry data augmentation for neural pde solvers. arxiv preprint arxiv:2202.07643, 2022a.
johannes brandstetter, daniel worrall, and max welling. message passing neural pde solvers.
susanne c brenner, l ridgway scott, and l ridgway scott. the mathematical theory of finite element methods, volume 3. springer, 2008.
michael m bronstein, joan bruna, yann lecun, arthur szlam, and pierre vandergheynst. geometric deep learning: going beyond euclidean data. ieee signal processing magazine, 34(4):18–42, 2017.
michael m bronstein, joan bruna, taco cohen, and petar veličković. geometric deep learning: grids, groups, graphs, geodesics, and gauges. arxiv preprint arxiv:2104.13478, 2021.
sven buchholz.
a theory of neural computation with clifford algebras. 2005.
sven buchholz and gerald sommer. introduction to neural computation in clifford algebra. in geometric computing with clifford algebras, pp. 291–314. springer, 2001.
shuhao cao. choose a transformer: fourier or galerkin. advances in neural information processing systems.
gengxiang chen, yingguang li, qinglu meng, jing zhou, xiaozhong hao, et al. residual fourier neural operator for thermochemical curing of composites. arxiv preprint arxiv:2111.10262, 2021.
taco cohen and max welling. group equivariant convolutional networks. in international conference on machine learning (icml), pp. 2990–2999. pmlr, 2016a.
taco s cohen and max welling. steerable cnns. arxiv preprint arxiv:1612.08498, 2016b.
taco s cohen, mario geiger, and maurice weiler. a general theory of equivariant cnns on homogeneous spaces. advances in neural information processing systems (neurips), 32, 2019.
james w cooley and john w tukey. an algorithm for the machine calculation of complex fourier series.
miles cranmer, sam greydanus, stephan hoyer, peter battaglia, david spergel, and shirley ho. lagrangian neural networks. arxiv preprint arxiv:2003.04630, 2020.
leo dorst, daniel fontijne, and stephen mann. geometric algebra for computer science: an object-oriented approach to geometry. elsevier, 2010.
j. ebling and g. scheuermann. clifford fourier transform on vector fields. ieee transactions on
julia ebling. visualization and analysis of flow fields based on clifford convolution. 2006.
julia ebling and gerik scheuermann. clifford convolution and pattern matching on vector fields. in
todd a ell. quaternion-fourier transforms for analysis of two-dimensional linear time-invariant partial differential systems. in proceedings of 32nd ieee conference on decision and control, pp. 1830–1841. ieee, 1993.
todd a ell and stephen j sangwine. hypercomplex fourier transforms of color images. ieee transactions on image processing, 16(1):22–35, 2006.
todd a ell, nicolas le bihan, and stephen j sangwine. quaternion fourier transforms for signal and image processing. john wiley & sons, 2014.
todd anthony ell. hypercomplex spectral transformations. phd thesis, university of minnesota,
thomas frerix, dmitrii kochkov, jamie smith, daniel cremers, michael brenner, and stephan hoyer. variational data assimilation with a learned inverse observation operator. in international conference on machine learning (icml), pp. 3449–3458. pmlr, 2021.
kunihiko fukushima and sei miyake. neocognitron: a self-organizing neural network model for a mechanism of visual pattern recognition. in competition and cooperation in neural nets, pp. 267–285. springer, 1982.
victor garcia satorras, zeynep akata, and max welling. combining generative and discriminative models for hybrid inference. in h. wallach, h. larochelle, a. beygelzimer, f. d'alché-buc, e. fox, and r. garnett (eds.), advances in neural information processing systems (neurips), pp. 13802–13812. curran associates, inc., 2019.
chase j gaudet and anthony s maida. deep quaternion networks. in international joint conference on neural networks (ijcnn), pp. 1–8. ieee, 2018.
mario geiger and tess smidt. e3nn: euclidean neural networks. arxiv preprint arxiv:2207.09453,
nicholas geneva and nicholas zabaras. modeling the dynamics of pde systems with physics-constrained deep auto-regressive networks. journal of computational physics, 403:109056, 2020.
xavier glorot and yoshua bengio. understanding the difficulty of training deep feedforward neural networks. in international conference on artificial intelligence and statistics (aistats), pp. 249–256. jmlr workshop and conference proceedings, 2010.
eleonora grassucci, aston zhang, and danilo comminiello. lightweight convolutional neural networks by hypercomplex parameterization. arxiv preprint arxiv:2110.04176, 2021.
daniel greenfeld, meirav galun, ronen basri, irad yavneh, and ron kimmel.
learning to optimize multigrid pde solvers. in international conference on machine learning (icml), pp. 2415–2423, 2019.
david j griffiths. introduction to electrodynamics, 2005.
steven guan, ko-tsung hsu, and parag v chitnis. fourier neural operator networks: a fast and general solver for the photoacoustic wave equation. arxiv preprint arxiv:2108.09374, 2021.
john guibas, morteza mardani, zongyi li, andrew tao, anima anandkumar, and bryan catanzaro. adaptive fourier neural operators: efficient token mixers for transformers. arxiv preprint arxiv:2111.13587, 2021.
xiaoxiao guo, wei li, and francesco iorio. convolutional neural networks for steady flow approximation. in proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 481–490, 2016.
jayesh k gupta, kunal menda, zachary manchester, and mykel j kochenderfer. a general framework for structured learning of mechanical systems. arxiv preprint arxiv:1902.08705, 2019.
jiequn han, arnulf jentzen, and weinan e. solving high-dimensional partial differential equations using deep learning. proceedings of the national academy of sciences, 115(34):8505–8510, 2018.
kaiming he, xiangyu zhang, shaoqing ren, and jian sun. delving deep into rectifiers: surpassing human-level performance on imagenet classification. in ieee international conference on computer vision (iccv), pp. 1026–1034, 2015.
kaiming he, xiangyu zhang, shaoqing ren, and jian sun. deep residual learning for image recognition. in ieee conference on computer vision and pattern recognition (cvpr), pp. 770–778, 2016.
yacov hel-or and patrick c teo. canonical decomposition of steerable functions. journal of mathematical imaging and vision, 9(1):83–95, 1998.
dan hendrycks and kevin gimpel. gaussian error linear units (gelus). arxiv preprint
jan hermann, zeno schätzle, and frank noé. deep-neural-network solution of the electronic schrödinger equation. nature chemistry, 12(10):891–897, 2020.
david hestenes.
oersted medal lecture 2002: reforming the mathematical language of physics,
david hestenes. new foundations for classical mechanics, volume 15. springer science & business media.
david hestenes and garret sobczyk. clifford algebra to geometric calculus: a unified language for mathematics and physics, volume 5. springer science & business media, 2012.
eckhard hitzer. the clifford fourier transform in real clifford algebras. 2012.
eckhard hitzer. quaternion and clifford fourier transforms. chapman and hall/crc, 2021.
eckhard hitzer and stephen j sangwine. quaternion and clifford fourier transforms and wavelets.
sepp hochreiter and jürgen schmidhuber. long short-term memory. neural computation, 9(8):
jordan hoffmann, simon schmitt, simon osindero, karen simonyan, and erich elsen. algebranets.
philipp holl, vladlen koltun, kiwon um, and nils thuerey. phiflow: a differentiable pde solving framework for deep learning via physical simulations. in neurips workshop, volume 2, 2020.
jun-ting hsieh, shengjia zhao, stephan eismann, lucia mirabella, and stefano ermon. learning neural pde solvers with convergence guarantees. arxiv preprint arxiv:1906.01200, 2019.
jie hu, li shen, and gang sun. squeeze-and-excitation networks. in ieee conference on computer vision and pattern recognition (cvpr), pp. 7132–7141, 2018.
rakhoon hwang, jae yong lee, jin young shin, and hyung ju hwang. solving pde-constrained control problems using operator learning. in aaai conference on artificial intelligence, volume 36, pp. 4504–4512, 2022.
sergey ioffe and christian szegedy. batch normalization: accelerating deep network training by reducing internal covariate shift. in international conference on machine learning (icml), pp. 448–456. pmlr, 2015.
erik jenner and maurice weiler. steerable partial differential operators for equivariant neural networks.
yan-bin jia. quaternions and rotations. com s, 477(577):15, 2008.
xiaowei jin, shengze cai, hui li, and george em karniadakis.
nsfnets (navier-stokes flow nets): physics-informed neural networks for the incompressible navier-stokes equations. journal of computational physics, 426:109951, 2021.
ryan keisler. forecasting global weather with graph neural networks. arxiv preprint
diederik p kingma and jimmy ba. adam: a method for stochastic optimization. arxiv preprint
georgios kissas, jacob h seidman, leonardo ferreira guilhoto, victor m preciado, george j pappas, and paris perdikaris. learning operators with coupled attention. journal of machine learning research, 23(215):1–63, 2022.
clement kleinstreuer. engineering fluid dynamics: an interdisciplinary systems approach. cambridge university press, 1997.
milan klöwer, tom kimpson, alistair white, and mosè giordano. milankl/speedyweather.jl:
dmitrii kochkov, jamie a smith, ayya alieva, qing wang, michael p brenner, and stephan hoyer. machine learning–accelerated computational fluid dynamics. proceedings of the national academy of sciences, 118(21):e2101784118, 2021.
risi kondor and shubhendu trivedi. on the generalization of equivariance and convolution in neural networks to the action of compact groups. in international conference on machine learning (icml), pp. 2747–2755. pmlr, 2018.
nikola kovachki, samuel lanthaler, and siddhartha mishra. on universal approximation and error bounds for fourier neural operators. journal of machine learning research (jmlr), 22, 2021.
alex krizhevsky, ilya sutskever, and geoffrey e hinton. imagenet classification with deep convolutional neural networks. advances in neural information processing systems (neurips), 25, 2012.
fred kucharski, franco molteni, martin p. king, riccardo farneti, in-sik kang, and laura feudale. on the need of intermediate complexity general circulation models: a "speedy" example. bulletin of the american meteorological society, 94(1):25–30, january 2013. doi: 10.1175/bams-d-11-00238.1. url https://doi.org/10.1175/bams-d-11-00238.1.
jack b kuipers.
Quaternions and Rotation Sequences: A Primer with Applications to Orbits, Aerospace, and Virtual Reality. Princeton University Press, 1999.
Yasuaki Kuroe. Models of Clifford recurrent neural networks and their dynamics. In International Joint Conference on Neural Networks, pp. 1035-1041. IEEE, 2011.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Zijie Li, Kazem Meidani, and Amir Barati Farimani. Transformer for partial differential equations' operator learning. arXiv preprint arXiv:2205.13671, 2022a.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.
Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Markov neural operators for learning chaotic systems. arXiv preprint arXiv:2106.06898, 2021a.
Zongyi Li, Hongkai Zheng, Nikola Kovachki, David Jin, Haoxuan Chen, Burigede Liu, Kamyar Azizzadenesheli, and Anima Anandkumar. Physics-informed neural operator for learning partial differential equations. arXiv preprint arXiv:2111.03794, 2021b.
Zongyi Li, Daniel Zhengyu Huang, Burigede Liu, and Anima Anandkumar. Fourier neural operator with learned deformations for PDEs on general geometries. arXiv preprint arXiv:2207.05209, 2022b.
Marten Lienen and Stephan Günnemann. Learning the dynamics of physical systems from sparse observations with finite element networks. arXiv preprint arXiv:2203.08852, 2022.
Joowon Lim and Demetri Psaltis. MaxwellNet: Physics-driven deep neural network training based on Maxwell's equations. APL Photonics, 7(1):011301, 2022.
Burigede Liu, Nikola Kovachki, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, Andrew M Stuart, and Kaushik Bhattacharya.
A learning-based multiscale method and its application to inelastic impact problems. Journal of the Mechanics and Physics of Solids, 158:104668, 2022.
Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint, 2016.
Winfried Lötzsch, Simon Ohler, and Johannes S Otterbach. Learning the solution operator of boundary value problems using graph neural networks. arXiv preprint arXiv:2206.14092, 2022.
Pertti Lounesto. Clifford algebras and spinors. In Clifford Algebras and Their Applications in Mathematical Physics, pp. 25-37. Springer, 1986.
Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via DeepONet based on the universal approximation theorem of operators. Nature Machine Intelligence, 3(3):218-229, 2021.
Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data. Computer Methods in Applied Mechanics and Engineering, 393:114778, 2022.
Michael Lutter, Christian Ritter, and Jan Peters. Deep Lagrangian networks: Using physics as model prior for deep learning. In International Conference on Learning Representations (ICLR), 2018.
Hao Ma, Yuxuan Zhang, Nils Thuerey, Xiangyu Hu, and Oskar J Haidn. Physics-driven learning of the steady Navier-Stokes equations using deep convolutional neural networks. arXiv preprint arXiv:2106.09301, 2021a.
Wei Ma, Zhaocheng Liu, Zhaxylyk A Kudyshev, Alexandra Boltasseva, Wenshan Cai, and Yongmin Liu. Deep learning for the design of photonic structures. Nature Photonics, 15(2):77-90, 2021b.
Romit Maulik, Vishwas Rao, Jiali Wang, Gianmarco Mengaldo, Emil Constantinescu, Bethany Lusch, Prasanna Balaprakash, Ian Foster, and Rao Kotamarthi. Efficient high-dimensional variational data assimilation with machine-learned reduced-order models. Geoscientific Model Development, 15(8):3433-3445, 2022.
Andreas Mayr, Sebastian Lehner, Arno Mayrhofer, Christoph Kloss, Sepp Hochreiter, and Johannes Brandstetter. Boundary graph neural networks for 3D simulations. arXiv preprint arXiv:2106.11299, 2021.
Ernan McMullin. The origins of the field concept in physics. Physics in Perspective, 4(1):13-39, 2002.
Pavlo Melnyk, Michael Felsberg, and Mårten Wadenbäck. Embed me if you can: A geometric perceptron. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1276-1284, 2021.
Philip Mirowski. More Heat than Light: Economics as Social Physics, Physics as Nature's Economics. Cambridge University Press, 1991.
Franco Molteni. Atmospheric simulations using a GCM with simplified physical parametrizations. I: Model climatology and variability in multi-decadal experiments. Climate Dynamics, 20(2):175-191, 2003.
C Eddie Moxey, Stephen J Sangwine, and Todd A Ell. Hypercomplex correlation techniques for vector images. IEEE Transactions on Signal Processing, 51(7):1941-1953, 2003.
E Ulises Moya-Sánchez, Sebastià Xambó-Descamps, Abraham Sánchez Pérez, Sebastián Salazar-Colores, and Ulises Cortés. A trainable monogenic ConvNet layer robust in front of large contrast changes in image classification. IEEE Access, 9:163735-163746, 2021.
Tu Dinh Nguyen, Dinh Phung, et al. Quaternion graph neural networks. In Asian Conference on Machine Learning, 2021.
Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linarès, Chiheb Trabelsi, Renato De Mori, and Yoshua Bengio. Quaternion recurrent neural networks. arXiv preprint arXiv:1806.04418, 2018a.
Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, and Yoshua Bengio. Quaternion convolutional neural networks for end-to-end automatic speech recognition. arXiv preprint arXiv:1806.07789, 2018b.
Titouan Parcollet, Mohamed Morchid, and Georges Linarès. Quaternion convolutional neural networks for heterogeneous image processing. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8514-8518, 2019. doi: 10.1109/ICASSP.2019.8682495.
Titouan Parcollet, Mohamed Morchid, and Georges Linarès. A survey of quaternion neural networks, 2020.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. PyTorch: An imperative style, high-performance deep learning library. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems (NeurIPS), pp. 8024-8035. 2019.
Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, Pedram Hassanzadeh, Karthik Kashinath, and Animashree Anandkumar. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. arXiv preprint arXiv:2202.11214, 2022.
JK Pearson and DL Bisset. Neural networks in the Clifford domain. In Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), volume 3, pp. 1465-1469. IEEE, 1994.
Justin Pearson. Clifford networks. In Complex-Valued Neural Networks: Theories and Applications.
Raphaël Pestourie, Youssef Mroueh, Chris Rackauckas, Payel Das, and Steven G Johnson. Physics-enhanced deep surrogates for PDEs. arXiv preprint arXiv:2111.05841, 2021.
Tobias Pfaff, Meire Fortunato, Alvaro Sanchez-Gonzalez, and Peter W Battaglia. Learning mesh-based simulation with graph networks. arXiv preprint arXiv:2010.03409, 2020.
David Pfau, James S Spencer, Alexander GDG Matthews, and W Matthew C Foulkes. Ab initio solution of the many-electron Schrödinger equation with deep neural networks. Physical Review Research, 2(3):033429, 2020.
Timothy Praditia, Matthias Karlbauer, Sebastian Otte, Sergey Oladyshkin, Martin V Butz, and Wolfgang Nowak. Finite volume neural network: Modeling subsurface contaminant transport. arXiv preprint arXiv:2104.06010, 2021.
Md Ashiqur Rahman, Manuel A Florez, Anima Anandkumar, Zachary E Ross, and Kamyar Azizzadenesheli. Generative adversarial neural operators. arXiv preprint arXiv:2205.03017, 2022a.
Md Ashiqur Rahman, Zachary E Ross, and Kamyar Azizzadenesheli. U-NO: U-shaped neural operators.
Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378:686-707, 2019.
Maziar Raissi, Alireza Yazdani, and George Em Karniadakis. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science, 367(6481):1026-1030, 2020.
Yongming Rao, Wenliang Zhao, Zheng Zhu, Jiwen Lu, and Jie Zhou. Global filter networks for image classification. Advances in Neural Information Processing Systems (NeurIPS), 34, 2021.
Stephan Rasp and Nils Thuerey. Data-driven medium-range weather prediction with a ResNet pretrained on climate simulations: A new model for WeatherBench. Journal of Advances in Modeling Earth Systems, 13(2):e2020MS002405, 2021.
Edoardo Mello Rella, Ajad Chhatkuli, Ender Konukoglu, and Luc Van Gool. Neural vector fields for surface representation and inference. arXiv preprint arXiv:2204.06552, 2022.
Pierre Renaud. Clifford algebras lecture notes on applications in physics, 2020.
Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-Net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241. Springer, 2015.
Alvaro Sanchez-Gonzalez, Jonathan Godwin, Tobias Pfaff, Rex Ying, Jure Leskovec, and Peter W. Battaglia.
Learning to simulate complex physics with graph networks. arXiv preprint arXiv:2002.09405, 2020.
Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. MobileNetV2: Inverted residuals and linear bottlenecks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4510-4520, 2018.
Stephen J Sangwine and Todd A Ell. Colour image filters based on hypercomplex convolution. IEE Proceedings - Vision, Image and Signal Processing, 147(2):89-93, 2000.
Jakob Schwichtenberg. Physics from Symmetry. Springer, 2015.
Wenlei Shi, Xinquan Huang, Xiaotian Gao, Xinran Wei, Jia Zhang, Jiang Bian, Mao Yang, and Tie-Yan Liu. LordNet: Learning to solve parametric partial differential equations without simulated data. arXiv preprint arXiv:2206.09418, 2022.
Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-Chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Advances in Neural Information Processing Systems, 28, 2015.
Justin Sirignano and Konstantinos Spiliopoulos. DGM: A deep learning algorithm for solving partial differential equations. Journal of Computational Physics, 375:1339-1364, 2018.
Casper Kaae Sønderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. MetNet: A neural weather model for precipitation forecasting. arXiv preprint arXiv:2003.12140, 2020.
Matthew Spellings. Geometric algebra attention networks for small point clouds. arXiv preprint, 2021.
Kimberly Stachenfeld, Drummond B Fielding, Dmitrii Kochkov, Miles Cranmer, Tobias Pfaff, Jonathan Godwin, Can Cui, Shirley Ho, Peter Battaglia, and Alvaro Sanchez-Gonzalez. Learned coarse models for efficient turbulence simulation. arXiv preprint arXiv:2112.15275, 2021.
Andrew M Steane. An introduction to spinors. arXiv preprint arXiv:1312.3824, 2013.
Jaap Suter. Geometric algebra primer. http://www.jaapsuter.com/geometric-algebra.pdf, 2003.
Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning (ICML), pp. 6105-6114. PMLR, 2019.
Roger Temam. Navier-Stokes Equations: Theory and Numerical Analysis, volume 343. American Mathematical Society, 2001.
Nils Thuerey, Philipp Holl, Maximilian Mueller, Patrick Schnell, Felix Trost, and Kiwon Um. Physics-based deep learning. arXiv preprint arXiv:2109.05237, 2021.
Chiheb Trabelsi, Olexa Bilaniuk, Dmitriy Serdyuk, Sandeep Subramanian, João Felipe Santos, Soroush Mehri, Negar Rostamzadeh, Yoshua Bengio, and Christopher Joseph Pal. Deep complex networks. In International Conference on Learning Representations (ICLR), 2017.
Marco AS Trindade, Vinicius NL Rocha, and S Floquet. Clifford algebras, quantum neural networks and generalized quantum Fourier transform. arXiv preprint arXiv:2206.01808, 2022.
Dmitry Ulyanov, Andrea Vedaldi, and Victor Lempitsky. Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6924-6932, 2017.
Kiwon Um, Robert Brand, Yun Raymond Fei, Philipp Holl, and Nils Thuerey. Solver-in-the-loop: Learning from differentiable physics to interact with iterative PDE-solvers. Advances in Neural Information Processing Systems (NeurIPS), 33:6111-6122, 2020.
Charles Van Loan. Computational Frameworks for the Fast Fourier Transform. SIAM, 1992.
Cornelis Boudewijn Vreugdenhil. Numerical Methods for Shallow-Water Flow, volume 13. Springer Science & Business Media, 1994.
Nils Wandel, Michael Weinmann, and Reinhard Klein. Learning incompressible fluid dynamics from scratch - towards fast, differentiable fluid models that generalize. arXiv preprint arXiv:2006.08762, 2020.
Nils Wandel, Michael Weinmann, Michael Neidlin, and Reinhard Klein. Spline-PINN: Approaching PDEs without data using fast, physics-informed Hermite-spline CNNs.
In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 8529-8538, 2022.
Rui Wang, Karthik Kashinath, Mustafa Mustafa, Adrian Albert, and Rose Yu. Towards physics-informed deep learning for turbulent flow prediction. In ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1457-1466, 2020a.
Rui Wang, Robin Walters, and Rose Yu. Incorporating symmetry into deep dynamics models for improved generalization. arXiv preprint arXiv:2002.03061, 2020b.
Sifan Wang, Hanwen Wang, and Paris Perdikaris. Learning the solution operator of parametric partial differential equations with physics-informed DeepONets. Science Advances, 7(40):eabi8605, 2021.
Maurice Weiler and Gabriele Cesa. General E(2)-equivariant steerable CNNs. Advances in Neural Information Processing Systems (NeurIPS), 32, 2019.
Maurice Weiler, Mario Geiger, Max Welling, Wouter Boomsma, and Taco S Cohen. 3D steerable CNNs: Learning rotationally equivariant features in volumetric data. Advances in Neural Information Processing Systems (NeurIPS), 31, 2018.
Gege Wen, Zongyi Li, Kamyar Azizzadenesheli, Anima Anandkumar, and Sally M Benson. U-FNO: An enhanced Fourier neural operator-based deep-learning model for multiphase flow. Advances in Water Resources, 163:104180, 2022.
Jonathan A Weyn, Dale R Durran, and Rich Caruana. Improving data-driven global weather prediction using deep convolutional neural networks on a cubed sphere. Journal of Advances in Modeling Earth Systems, 12(9):e2020MS002109, 2020.
Jonathan A Weyn, Dale R Durran, Rich Caruana, and Nathaniel Cresswell-Clay. Sub-seasonal forecasting with a large ensemble of deep-learning weather prediction models. Journal of Advances in Modeling Earth Systems, 13(7):e2021MS002502, 2021.
Daniel E Worrall, Stephan J Garbin, Daniyar Turmukhambetov, and Gabriel J Brostow. Harmonic networks: Deep translation and rotation equivariance. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5028-5037, 2017.
Tailin Wu, Takashi Maruyama, and Jure Leskovec. Learning to accelerate partial differential equations via latent global evolution. arXiv preprint arXiv:2206.07681, 2022.
Yuxin Wu and Kaiming He. Group normalization. In European Conference on Computer Vision (ECCV), 2018.
Yiheng Xie, Towaki Takikawa, Shunsuke Saito, Or Litany, Shiqin Yan, Numair Khan, Federico Tombari, James Tompkin, Vincent Sitzmann, and Srinath Sridhar. Neural fields in visual computing and beyond. In Computer Graphics Forum, volume 41, pp. 641-676. Wiley Online Library, 2022.
Yan Yang, Angela F Gao, Jorge C Castellanos, Zachary E Ross, Kamyar Azizzadenesheli, and Robert W Clayton. Seismic wave propagation and inversion with neural operators. The Seismic Record, 1(3):126-134, 2021.
Di Zang, Xihao Chen, Juntao Lei, Zengqiang Wang, Junqi Zhang, Jiujun Cheng, and Keshuang Tang. A multi-channel geometric algebra residual network for traffic data prediction. IET Intelligent Transport Systems, 2022.
Xiangyu Zhang, Xinyu Zhou, Mengxiao Lin, and Jian Sun. ShuffleNet: An extremely efficient convolutional neural network for mobile devices. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6848-6856, 2018.
Xuanyu Zhu, Yi Xu, Hongteng Xu, and Changjian Chen. Quaternion convolutional neural networks. In European Conference on Computer Vision (ECCV), 2018.
Yinhao Zhu and Nicholas Zabaras. Bayesian deep convolutional encoder-decoder networks for surrogate modeling and uncertainty quantification. Journal of Computational Physics, 366:415-447, 2018.
Kirill Zubov, Zoe McCarthy, Yingbo Ma, Francesco Calisto, Valerio Pagliarino, Simone Azeglio, Luca Bottero, Emmanuel Luján, Valentin Sulzer, Ashutosh Bharambe, et al. NeuralPDE: Automating physics-informed neural networks (PINNs) with error approximations. arXiv preprint arXiv:2107.09443, 2021.
Appendices

Contents

1 Introduction
2 Background: Clifford algebras
3 Clifford neural layers
4 Experiments
5 Conclusion
A Mathematical background
  A.1 Clifford algebras
  A.2 Examples of low-dimensional Clifford algebras
    A.2.1 Clifford algebra Cl_{0,1}(R)
    A.2.2 Clifford algebra Cl_{2,0}(R)
    A.2.3 Clifford algebra Cl_{0,2}(R)
    A.2.4 Clifford algebra Cl_{3,0}(R)
  A.3 The electromagnetic field in 3 dimensions
B Clifford neural layers
  B.1 Clifford convolution layers
    B.1.1 Translation equivariance of Clifford convolutions
    B.1.2 Rotational Clifford CNN layers
    B.1.3 3D Clifford convolution layers
  B.2 Clifford normalization
  B.3 Clifford initialization
  B.4 Equivariance under rotations and reflections
  B.5 Clifford Fourier layers
    2D Clifford Fourier transform
    2D Clifford convolution theorem
    3D Clifford Fourier transform
    3D Clifford convolution theorem
    Implementation of Clifford Fourier layers
  B.6 Pseudocode
C Experiments
  C.1 Loss function and metrics
  C.2 Models
  C.3 Training and model selection
  C.4 Navier-Stokes in 2D
  C.5 Shallow water equations
  C.6 Maxwell's equations in matter in 3D
D Related work
E Glossary

A Mathematical background

This appendix supports Section 2 of the main paper. We give a more detailed explanation of real Clifford algebras and take a closer look at Cl_{2,0}(R), Cl_{0,2}(R), and Cl_{3,0}(R).
For a detailed introduction to Clifford algebras we recommend Suter (2003); Hestenes (2003; 2012); Dorst et al. (2010); Renaud (2020).

A.1 Clifford algebras

Vector spaces and algebras over a field. A vector space over a field F is a set V together with two binary operations that satisfy the axioms of vector addition and scalar multiplication. The axioms of addition ensure that if two elements of V are added together, we end up with another element of V. The elements of F are called scalars. Examples of a field F are the real numbers R and the complex numbers C. Although it is common practice to refer to the elements of a general vector space V as vectors, to avoid confusion we reserve this term for the more specific case of elements of R^n. As we will see below, general vector spaces can consist of more complicated, higher-order objects than scalars, vectors, or matrices.

An algebra over a field consists of a vector space V over a field F together with an additional bilinear law of composition V × V → V: if a and b are any two elements of V, then their product ab is again an element of V, and the product satisfies the pair of distributive laws a(λ_1 b + λ_2 c) = λ_1 ab + λ_2 ac and (λ_1 a + λ_2 b)c = λ_1 ac + λ_2 bc for λ_1, λ_2 ∈ F and a, b, c ∈ V. Note that general vector spaces do not come with a bilinear operation defined on their elements.

Clifford algebras over R. In this manuscript we focus on Clifford algebras over R; for a more general exposition of Clifford algebras over different fields the reader is referred to Lounesto (1986). A real Clifford algebra is generated by the n-dimensional vector space R^n through a set of relations that hold for the basis elements of R^n. Let us denote the basis elements of R^n by e_1, ..., e_n, and without loss of generality choose these basis elements to be mutually orthonormal.
Taking two nonnegative integers p and q such that p + q = n, a real Clifford algebra Cl_{p,q}(R) with "signature" (p, q) is generated through the following relations, which define how the bilinear product of the algebra operates on the basis elements of R^n:

e_i^2 = +1            for 1 ≤ i ≤ p ,        (14)
e_j^2 = −1            for p < j ≤ p + q ,    (15)
e_i e_j = −e_j e_i    for i ≠ j .            (16)

Through these relations we can generate a basis for the vector space of the Clifford algebra, which we denote by G. Equations 14 and 15 show that the product of a basis vector with itself yields a scalar. According to the aforementioned definition of an algebra over a field, a Clifford algebra with vector space G is equipped with a bilinear product G × G → G that combines two elements of the vector space G and yields another element of the same space G. Therefore, both scalars and vectors must be elements of the vector space G. Equation 16 shows that, besides scalar and vector elements, higher-order elements consisting of a combination of two basis elements, such as e_i e_j and e_j e_i, are also part of the vector space G. Finally, by combining Equations 14, 15, and 16 we can create even higher-order elements such as e_i e_j e_k for i ≠ j ≠ k, or e_1 e_2 ... e_{p+q}, which must all be part of the vector space G.

To determine which basis elements span the vector space G of Cl_{p,q}(R), we note that the elements e_{σ(1)} e_{σ(2)} ... e_{σ(k)} and e_1 e_2 ... e_k are related through a simple multiplicative factor of plus or minus one, depending on the sign of the permutation σ. Therefore, it suffices to consider the unordered combinations of basis elements of R^n: the basis of the vector space G is given by {1, e_1, e_2, ..., e_{p+q}, e_1 e_2, ..., e_{p+q−1} e_{p+q}, ..., e_1 e_2 ... e_{p+q}}.

In summary, we have introduced two different vector spaces: first, the vector space R^n which generates the Clifford algebra, and second the vector space G, which is spanned by the basis elements of the Clifford algebra Cl_{p,q}(R).
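The combinatorial construction above is easy to verify mechanically. The following sketch (our illustration, not code from the paper; the function name is ours) enumerates the 2^{p+q} basis blades of Cl_{p,q}(R) as unordered combinations of generator indices:

```python
from itertools import combinations

def clifford_basis(p, q):
    """Enumerate the 2**(p+q) basis blades of Cl_{p,q}(R).

    Each blade is a tuple of 1-based generator indices; the empty
    tuple () denotes the scalar basis element 1.
    """
    n = p + q
    blades = []
    for grade in range(n + 1):
        blades.extend(combinations(range(1, n + 1), grade))
    return blades

# Cl_{2,0}(R): {1, e1, e2, e1e2}
print(clifford_basis(2, 0))  # [(), (1,), (2,), (1, 2)]
```

For Cl_{3,0}(R) this yields the 8 blades listed in Appendix A.2.4: one scalar, three vectors, three bivectors, and one trivector.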
The convention is to denote the vector space of a real Clifford algebra with a superscript n for the dimension of the generating vector space, yielding G^n for a generating vector space R^n. Note that the dimension of the vector space G^n is 2^n = 2^{p+q}. Exemplary low-dimensional Clifford algebras are: (i) Cl_{0,0}(R), a one-dimensional algebra spanned by {1} and therefore isomorphic to R, the field of real numbers; (ii) Cl_{0,1}(R), a two-dimensional algebra with vector space G^1 spanned by {1, e_1}, where the basis vector e_1 squares to −1, and which is therefore isomorphic to C, the field of complex numbers; (iii) Cl_{0,2}(R), a 4-dimensional algebra with vector space G^2 spanned by {1, e_1, e_2, e_1 e_2}, where e_1 and e_2 square to −1 and anti-commute. Thus, Cl_{0,2}(R) is isomorphic to the quaternions H.

Definition 1 (grade of a Clifford algebra element). The grade of a Clifford algebra basis element is the dimension of the subspace it represents. For example, the basis elements {1, e_1, e_2, e_1 e_2} of the Clifford algebras Cl_{0,2}(R) and Cl_{2,0}(R) have the grades {0, 1, 1, 2}.

Using the concept of grades, we can divide the vector spaces of Clifford algebras into linear subspaces made up of elements of each grade. The grade subspace of smallest dimension is M^0, the subspace of all scalars (elements with 0 basis vectors). Elements of M^1 are called vectors, elements of M^2 are bivectors, and so on. In general, the vector space G^{p+q} of a Clifford algebra Cl_{p,q} can be written as the direct sum of all of these subspaces:

G^{p+q} = M^0 ⊕ M^1 ⊕ ... ⊕ M^{p+q} .

The elements of a Clifford algebra are called multivectors, containing elements of these subspaces, i.e. scalars, vectors, bivectors, trivectors, etc. The basis element with the highest grade is called the pseudoscalar^11, which in R^2 corresponds to the bivector e_1 e_2, and in R^3 to the trivector e_1 e_2 e_3. The pseudoscalar is often denoted by the symbol i_{p+q}. From here on, only multivectors will be denoted with boldface symbols.
Geometric product. Using Equations 14, 15, and 16, we have seen how basis elements of the vector space G^{p+q} of the Clifford algebra are formed from basis elements of the generating vector space. We now look at how elements of G^{p+q} are combined, i.e. how multivectors are operated on bilinearly. The geometric product is the bilinear operation on multivectors in Clifford algebras. For arbitrary multivectors a, b, c ∈ G^{p+q} and a scalar λ, the geometric product has the following properties:

ab ∈ G^{p+q}           (closure) ,
(ab)c = a(bc)          (associativity) ,
λa = aλ                (commutative scalar multiplication) ,
a(b + c) = ab + ac     (distributivity over addition) .

The geometric product is in general non-commutative, i.e. ab ≠ ba. As we describe later, the geometric product is made up of two parts: an inner product, which captures similarity, and an exterior (wedge) product, which captures difference.

Definition 2 (dual of a multivector). The dual a* of a multivector a is defined as

a* = a i_{p+q} ,

where i_{p+q} represents the pseudoscalar of the respective Clifford algebra.

This definition allows us to relate different multivectors to each other, which is a useful property when defining Clifford Fourier transforms. For example, for Clifford algebras over R^2 the dual of a scalar is a bivector, and for the Clifford algebra over R^3 the dual of a scalar is a trivector.

^11 In contrast to scalars, pseudoscalars change sign under reflection.

A.2 Examples of low-dimensional Clifford algebras

A.2.1 Clifford algebra Cl_{0,1}(R)

The Clifford algebra Cl_{0,1}(R) is a two-dimensional algebra with vector space G^1 spanned by {1, e_1}, where the basis vector e_1 squares to −1. Cl_{0,1}(R) is thus algebra-isomorphic to C, the field of complex numbers. This becomes more obvious if we identify the basis element with the highest grade, i.e. e_1, as the pseudoscalar i_1, which plays the role of the imaginary unit of the complex numbers.
The geometric product of two multivectors a = a_0 + a_1 e_1 and b = b_0 + b_1 e_1 is therefore also isomorphic to the product of two complex numbers:

ab = (a_0 b_0 − a_1 b_1) + (a_0 b_1 + a_1 b_0) e_1 .

A.2.2 Clifford algebra Cl_{2,0}(R)

The Clifford algebra Cl_{2,0}(R) is a 4-dimensional algebra with vector space G^2 spanned by the basis elements {1, e_1, e_2, e_1 e_2}, where e_1 and e_2 square to +1. The geometric product of two multivectors a = a_0 + a_1 e_1 + a_2 e_2 + a_{12} e_1 e_2 and b = b_0 + b_1 e_1 + b_2 e_2 + b_{12} e_1 e_2 is obtained by expanding the product term by term. Using the relations e_1 e_1 = 1, e_2 e_2 = 1, and e_i e_j = −e_j e_i for i ≠ j, from which it follows that e_1 e_2 e_1 e_2 = −1, we obtain:

ab = (a_0 b_0 + a_1 b_1 + a_2 b_2 − a_{12} b_{12})
   + (a_0 b_1 + a_1 b_0 − a_2 b_{12} + a_{12} b_2) e_1
   + (a_0 b_2 + a_1 b_{12} + a_2 b_0 − a_{12} b_1) e_2
   + (a_0 b_{12} + a_1 b_2 − a_2 b_1 + a_{12} b_0) e_1 e_2 .    (25)

A vector x ∈ R^2 is identified with x_1 e_1 + x_2 e_2 ∈ G^2. Clifford multiplication of two vectors x, y ∈ R^2 yields the geometric product

xy = ⟨x, y⟩ + x ∧ y ,    (26)

where ⟨x, y⟩ is the inner product and x ∧ y the outer (wedge) product. The asymmetric quantity x ∧ y = −y ∧ x is associated with the often-mentioned bivector, which can be interpreted as an oriented plane segment. Equation 26 can be rewritten to express the (symmetric) inner product and the (anti-symmetric) outer product in terms of the geometric product:

⟨x, y⟩ = (xy + yx)/2 ,    x ∧ y = (xy − yx)/2 .

Of the basis elements {1, e_1, e_2, e_1 e_2} of the vector space G^2 of the Clifford algebra Cl_{2,0}(R), probably the most interesting is e_1 e_2. We therefore take a closer look at the unit bivector i_2 = e_1 e_2, which is the plane spanned by e_1 and e_2 and determined by the geometric product

i_2 = e_1 e_2 = ⟨e_1, e_2⟩ + e_1 ∧ e_2 = e_1 ∧ e_2 ,    (29)

where the inner product ⟨e_1, e_2⟩ is zero due to the orthogonality of the basis vectors. From Equation 29 it follows that the bivector squares to i_2^2 = −1, and thus i_2 represents a true geometric imaginary unit. Furthermore,

e_2 = e_1 i_2 = −i_2 e_1 ,    e_1 = i_2 e_2 = −e_2 i_2 .    (30)

Using Definition 2, the dual of a multivector a ∈ G^2 is defined via the bivector as a i_2. Thus, the dual of a scalar is a bivector and the dual of a vector is again a vector. The dual pairs of the basis elements are 1 ↔ e_1 e_2 and e_1 ↔ e_2.
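The component formulas of equation 25 can be checked numerically. The following minimal sketch (function name is ours, not from the paper) implements the Cl_{2,0}(R) geometric product on coefficient tuples:

```python
def geometric_product_cl20(a, b):
    """Geometric product in Cl_{2,0}(R).

    a and b are coefficient tuples (a0, a1, a2, a12) with respect to
    the basis {1, e1, e2, e1e2}; uses e1^2 = e2^2 = +1, (e1e2)^2 = -1.
    """
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,   # scalar part
            a0*b1 + a1*b0 - a2*b12 + a12*b2,   # e1 part
            a0*b2 + a1*b12 + a2*b0 - a12*b1,   # e2 part
            a0*b12 + a1*b2 - a2*b1 + a12*b0)   # e1e2 (bivector) part

# the unit bivector i2 = e1e2 is a true geometric imaginary: i2^2 = -1
i2 = (0, 0, 0, 1)
print(geometric_product_cl20(i2, i2))  # (-1, 0, 0, 0)
```

The same routine reproduces the relations of equation 30, e.g. multiplying e_1 by i_2 returns e_2.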
These dual pairs allow us to write an arbitrary multivector a as

a = (a_0 + a_{12} e_1 e_2) + (a_1 e_1 + a_2 e_2) ,

where the first bracket is the spinor part and the second the vector part. The multivector can thus be regarded as two complex-valued parts: the spinor part, which commutes with i_2, and the vector part, which anti-commutes with i_2.

A.2.3 Clifford algebra Cl_{0,2}(R)

The Clifford algebra Cl_{0,2}(R) is a 4-dimensional algebra with vector space G^2 spanned by the basis elements {1, e_1, e_2, e_1 e_2}, where e_1 and e_2 square to −1. The Clifford algebra Cl_{0,2}(R) is algebra-isomorphic to the quaternions H, which are commonly written in the literature (Schwichtenberg, 2015) as a + b î + c ĵ + d k̂, where the (imaginary) basis elements î, ĵ, and k̂ fulfill the relations

î^2 = ĵ^2 = −1 ,    î ĵ = k̂ ,    ĵ î = −k̂ ,    k̂^2 = î ĵ î ĵ = −î ĵ ĵ î = î î = −1 .    (32)

Quaternions also form a 4-dimensional algebra spanned by {1, î, ĵ, k̂}, where î, ĵ, and k̂ all square to −1. The basis element 1 is often called the scalar part, and the basis elements î, ĵ, k̂ are called the vector part of a quaternion. The geometric product of two multivectors a = a_0 + a_1 e_1 + a_2 e_2 + a_{12} e_1 e_2 and b = b_0 + b_1 e_1 + b_2 e_2 + b_{12} e_1 e_2 again follows by term-by-term expansion. Using the relations e_1 e_1 = −1, e_2 e_2 = −1, and e_i e_j = −e_j e_i for i ≠ j, from which it follows that e_1 e_2 e_1 e_2 = −1, we obtain:

ab = (a_0 b_0 − a_1 b_1 − a_2 b_2 − a_{12} b_{12})
   + (a_0 b_1 + a_1 b_0 + a_2 b_{12} − a_{12} b_2) e_1
   + (a_0 b_2 − a_1 b_{12} + a_2 b_0 + a_{12} b_1) e_2
   + (a_0 b_{12} + a_1 b_2 − a_2 b_1 + a_{12} b_0) e_1 e_2 .

A.2.4 Clifford algebra Cl_{3,0}(R)

The Clifford algebra Cl_{3,0}(R) is an 8-dimensional algebra with vector space G^3 spanned by the basis elements {1, e_1, e_2, e_3, e_1 e_2, e_1 e_3, e_2 e_3, e_1 e_2 e_3}, i.e. one scalar, three vectors {e_1, e_2, e_3}, three bivectors {e_1 e_2, e_1 e_3, e_2 e_3}, and one trivector e_1 e_2 e_3. The trivector is the pseudoscalar i_3 of the algebra.
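The isomorphism with the quaternions can be verified directly from the component formulas above. The sketch below (names are ours) identifies î = e_1, ĵ = e_2, k̂ = e_1 e_2 and checks the defining relations of equation 32:

```python
def geometric_product_cl02(a, b):
    """Geometric product in Cl_{0,2}(R), where e1^2 = e2^2 = -1.

    a and b are coefficient tuples (a0, a1, a2, a12) with respect to
    the basis {1, e1, e2, e1e2}.
    """
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 - a1*b1 - a2*b2 - a12*b12,   # scalar part
            a0*b1 + a1*b0 + a2*b12 - a12*b2,   # e1 part
            a0*b2 - a1*b12 + a2*b0 + a12*b1,   # e2 part
            a0*b12 + a1*b2 - a2*b1 + a12*b0)   # e1e2 part

# quaternion identification: i-hat = e1, j-hat = e2, k-hat = e1e2
i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
print(geometric_product_cl02(i, j))  # (0, 0, 0, 1)  ->  i j = k
print(geometric_product_cl02(k, k))  # (-1, 0, 0, 0) ->  k^2 = -1
```

Swapping the order of the arguments reproduces the anti-commutation ĵ î = −k̂.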
The geometric product of two multivectors is defined analogously to the geometric product of Cl_{2,0}(R); the associative and bilinear multiplication of multivectors follows from the relations

e_i^2 = 1 ,    e_i e_j = −e_j e_i    for i ≠ j .

Using Definition 2, the dual pairs of Cl_{3,0}(R) are

1 ↔ e_1 e_2 e_3 ,    e_1 ↔ e_2 e_3 ,    e_2 ↔ e_3 e_1 ,    e_3 ↔ e_1 e_2 .

The geometric product for Cl_{3,0}(R) is expanded term by term analogously to Equation 25, where minus signs appear due to the reordering of basis elements, and the expansion simplifies to one component for each of the eight basis blades.

A.3 The electromagnetic field in 3 dimensions

Through the lens of Cl_{3,0}(R), an intriguing example of the duality of multivectors is found when writing the expression of the electromagnetic field f in terms of an electric vector field e and a magnetic vector field b (Hestenes & Sobczyk, 2012; Hestenes, 2003), such that

f = e + b i_3 .    (43)

Both the electric field e and the magnetic field b are described by Maxwell's equations (Griffiths, 2005). The two fields are strongly coupled, e.g. temporal changes of electric fields induce magnetic fields and vice versa. Probably the most illustrative co-occurrence of electric and magnetic fields is the propagation of light. In standard vector algebra, e is a vector while b is a pseudovector, i.e. the two kinds of fields are distinguished by a difference in sign under space inversion. Equation 43 naturally decomposes the electromagnetic field into vector and bivector parts via the pseudoscalar i_3. For example, for the basis component b_x e_1 of b it holds that b_x e_1 i_3 = b_x e_1 e_1 e_2 e_3 = b_x e_2 e_3, which is a bivector and the dual of the basis component e_1 of e. Geometric algebra reveals that a pseudovector is nothing else than a bivector represented by its dual, so the magnetic field b in Equation 43 is fully represented by the complete bivector b i_3, rather than b alone. Consequently, the multivector representing f consists of three vectors (the electric field components) and three bivectors e_1 i_3 = e_2 e_3, e_2 i_3 = e_3 e_1, e_3 i_3 = e_1 e_2 (the magnetic field components multiplied by i_3).
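The duality relations used here (e_1 i_3 = e_2 e_3, etc.) can be reproduced with a small generic routine for multiplying basis blades of Cl_{p,q}(R). This is an illustrative sketch of ours, not the paper's code:

```python
def blade_product(a, b, p):
    """Product of two basis blades of Cl_{p,q}(R), given as sorted tuples
    of 1-based generator indices. Returns (sign, blade). Generators with
    index <= p square to +1, all others square to -1.
    """
    coeffs = list(a) + list(b)
    sign = 1
    changed = True
    while changed:
        changed = False
        for i in range(len(coeffs) - 1):
            if coeffs[i] > coeffs[i + 1]:
                # anticommute adjacent distinct generators: ei ej = -ej ei
                coeffs[i], coeffs[i + 1] = coeffs[i + 1], coeffs[i]
                sign = -sign
                changed = True
            elif coeffs[i] == coeffs[i + 1]:
                # contract a repeated generator: ei ei = +1 or -1
                if coeffs[i] > p:
                    sign = -sign
                del coeffs[i:i + 2]
                changed = True
                break
    return sign, tuple(coeffs)

# dual pairs in Cl_{3,0}(R): multiply by the pseudoscalar i3 = e1e2e3
i3 = (1, 2, 3)
print(blade_product((1,), i3, 3))  # (1, (2, 3))       e1 i3 = e2e3
print(blade_product((2,), i3, 3))  # (-1, (1, 3)), i.e. e2 i3 = -e1e3 = e3e1
print(blade_product((3,), i3, 3))  # (1, (1, 2))       e3 i3 = e1e2
```

With p = 0 the same routine reproduces the Cl_{0,2}(R) quaternion signs, since the signature only enters through the contraction rule.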
b clifford neural layers this appendix supports section 3 of the main paper. clifford convolutions are related to the work on complex networks by trabelsi et al. (2017), and closely related to work on quaternion neural networks (zhu et al., 2018; parcollet et al., 2018a; gaudet & maida, 2018; parcollet et al., 2018b; 2019; 2020; nguyen et al., 2021). probably the most related works are (i) zang et al. (2022), who build geometric algebra convolution networks to process spatial and temporal data, and (ii) spellings (2021), who builds rotation- and permutation-equivariant graph network architectures based on geometric algebra products of node features, where higher-order information is built from available node inputs. b.1 clifford convolution layers we derive the implementation of translation-equivariant clifford convolution layers for multivectors in g2, i.e. multivectors of clifford algebras generated by the 2-dimensional vector space r2. finally, we make the extension to clifford algebras generated by the 3-dimensional vector space r3. regular cnn layers. regular convolutional neural network (cnn) layers take as input feature maps f : z2 → rcin and convolve12 them with a set of cout filters {wi}cout i=1 : z2 → rcin: [f ⋆ wi](x) = Σy∈z2 ⟨f (y), wi(y − x)⟩ = Σy∈z2 Σcin j=1 f j(y) wi,j(y − x) . (44) equation 44 can be interpreted as an inner product of the input feature maps with the corresponding filters at every point y ∈ z2. by applying cout filters, the output feature maps can be interpreted as cout-dimensional feature vectors at every point y ∈ z2. we now want to extend convolution layers such that the elementwise products of scalars f j(y)wi,j(y − x) are replaced by geometric products of multivector inputs and multivector filters. clifford cnn layers.
we replace the feature maps f : z2 → rcin by multivector feature maps f : z2 → (g2)cin and convolve them with a set of cout multivector filters {wi}cout i=1 : z2 → (g2)cin: [f ⋆ wi](x) = Σy∈z2 Σcin j=1 f j(y) wi,j(y − x) , where the products are now geometric products. b.1.1 translation equivariance of clifford convolutions theorem 1 (translation equivariance of clifford convolutions). let f : z2 → (g2)cin be a multivector feature map and let w : z2 → (g2)cin be a multivector kernel; then for cl(2, 0)(r), [[ltf ] ⋆ w] (x) = [lt [f ⋆ w]] (x). 12in deep learning, a convolution operation in the forward pass is implemented as cross-correlation. proof. writing out the geometric product componentwise,
[[ltf ] ⋆ w] (x) = Σy∈z2 Σcin j=1 f (y − t) w(y − x)
= Σy∈z2 Σcin j=1 [f0(y − t)w0(y − x) + f1(y − t)w1(y − x) + f2(y − t)w2(y − x) − f12(y − t)w12(y − x)]
+ [f0(y − t)w1(y − x) + f1(y − t)w0(y − x) − f2(y − t)w12(y − x) + f12(y − t)w2(y − x)] e1
+ [f0(y − t)w2(y − x) + f1(y − t)w12(y − x) + f2(y − t)w0(y − x) − f12(y − t)w1(y − x)] e2
+ [f0(y − t)w12(y − x) + f1(y − t)w2(y − x) − f2(y − t)w1(y − x) + f12(y − t)w0(y − x)] e1e2
(using y → y − t)
= Σy∈z2 Σcin j=1 [f0(y)w0(y − (x − t)) + f1(y)w1(y − (x − t)) + f2(y)w2(y − (x − t)) − f12(y)w12(y − (x − t))]
+ [f0(y)w1(y − (x − t)) + f1(y)w0(y − (x − t)) − f2(y)w12(y − (x − t)) + f12(y)w2(y − (x − t))] e1
+ [f0(y)w2(y − (x − t)) + f1(y)w12(y − (x − t)) + f2(y)w0(y − (x − t)) − f12(y)w1(y − (x − t))] e2
+ [f0(y)w12(y − (x − t)) + f1(y)w2(y − (x − t)) − f2(y)w1(y − (x − t)) + f12(y)w0(y − (x − t))] e1e2
= [lt [f ⋆ w]] (x) .
implementation of cl2,0(r) and cl0,2(r) layers. we can implement a cl(2, 0)(r) clifford cnn layer using equation 25, where {b0, b1, b2, b12} → {wi,j 0 , wi,j 1 , wi,j 2 , wi,j 12 } correspond to 4 different kernels representing one 2d multivector kernel, i.e. 4 different convolution layers, and {a0, a1, a2, a12} → {f j 0 , f j 1 , f j 2 , f j 12} correspond to the scalar, vector, and bivector parts of the input multivector field. the channels of the different layers represent different stacks of scalars, vectors, and bivectors.
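the construction above can be sketched in code; for brevity the following illustrates a 1d clifford "convolution" (cross-correlation, as in deep learning) in cl(2, 0)(r) with a single input and output channel (the 1d simplification and all names are ours, not the paper's implementation):

```python
# a minimal sketch of a 1d clifford "convolution" over cl(2,0)(r)
# multivector feature maps, one input and one output channel; the four
# kernel components play the role of the four convolution layers above.
def gp20(a, b):
    # geometric product in cl(2,0): e1^2 = e2^2 = +1, e1e2 = -e2e1
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a1*b12 + a2*b0 - a12*b1,
            a0*b12 + a1*b2 - a2*b1 + a12*b0)

def clifford_conv1d(f, w):
    """valid cross-correlation: out[x] = sum_y f[x+y] * w[y] (geometric product)."""
    out = []
    for x in range(len(f) - len(w) + 1):
        acc = (0, 0, 0, 0)
        for y in range(len(w)):
            prod = gp20(f[x + y], w[y])
            acc = tuple(p + q for p, q in zip(acc, prod))
        out.append(acc)
    return out

f = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0)]   # scalar, e1, e2 "pixels"
print(clifford_conv1d(f, [(0, 1, 0, 0)]))        # kernel = e1
# → [(0, 1, 0, 0), (1, 0, 0, 0), (0, 0, 0, -1)]
```

note how the single kernel tap e1 maps the scalar pixel to e1, the e1 pixel to a scalar (e1e1 = 1), and the e2 pixel to −e1e2, i.e. channels mix across the multivector components exactly as in the componentwise expansion of the proof above.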
all kernels have the same number of input and output channels (number of input and output multivectors), and thus the channel mixing occurs for the different terms of equations 25 and 42 individually. lastly, usually not all parts of the multivectors are present in the input vector fields. this can easily be accounted for by simply omitting the respective parts of equations 25 and 42. a similar reasoning applies to the output vector fields. for cl(0, 2)(r), the signs within the geometric product change slightly. b.1.2 rotational clifford cnn layers here we introduce an alternative parameterization to the clifford cnn layer introduced in equation 7, by using the isomorphism of the clifford algebra cl0,2(r) to the quaternions.13 we take advantage of the fact that a quaternion rotation can be realized by a matrix multiplication (jia, 2008; kuipers, 1999; schwichtenberg, 2015). using the isomorphism, we can represent the feature maps f j and filters wi,j as quaternions: f j = f j0 + f j1 ˆı + f j2 ˆȷ + f j12 ˆk and wi,j = wi,j0 + wi,j1 ˆı + wi,j2 ˆȷ + wi,j12 ˆk.14 leveraging this quaternion representation, we can devise an alternative parameterization of the product between the feature map f j and the filter wi,j. to be more precise, we introduce a composite operation that results in a scalar quantity and a quaternion rotation, where the latter acts on the vector part of the quaternion f j and only produces nonzero expansion coefficients for the vector part of the quaternion output. a quaternion rotation wi,jf j(wi,j)−1 acts on the vector part (ˆı, ˆȷ, ˆk) of f j, and 13we could not find neural rotational quaternion convolutions in the existing literature; we however used the codebase of https://github.com/orkis-research/pytorch-quaternion-neural-networks as inspiration. 14note that the expansion coefficients for the feature map f j and filters wi,j in terms of the basis elements of g2 and in terms of the quaternion elements ˆı, ˆȷ and ˆk are the same.
can be algebraically manipulated into a vector-matrix operation ri,jf j, where ri,j : h → h is built up from the elements of wi,j (kuipers, 1999). in other words, one can transform the vector part (ˆı, ˆȷ, ˆk) of f j ∈ h via a rotation matrix ri,j that is built from the scalar and vector parts (1, ˆı, ˆȷ, ˆk) of wi,j ∈ h. altogether, a rotational multivector filter {wi rot}cout i=1 : z2 → (g2)cin acts on the feature map f : z2 → (g2)cin through a rotational transformation ri,j(wi,j rot,0, wi,j rot,1, wi,j rot,2, wi,j rot,12) acting on the vector and bivector parts of the multivector feature map, and an additional scalar response of the multivector filters:
[f ⋆ wi rot] (x) = Σy∈z2 Σcin j=1 f j(y) wi,j rot(y − x) = Σy∈z2 Σcin j=1 [f j(y) wi,j rot(y − x)]0 (scalar output) + ri,j(y − x) · (f j1(y), f j2(y), f j12(y))⊤ ,
where [f j(y) wi,j rot(y − x)]0 = f j0 wi,j rot,0 − f j1 wi,j rot,1 − f j2 wi,j rot,2 − f j12 wi,j rot,12, which is the scalar output of equation 34. the rotation matrix ri,j(y − x) in written-out form reads:
ri,j =
( 1 − 2[( ˆwrot,2)^2 + ( ˆwrot,12)^2]   2[ ˆwrot,1 ˆwrot,2 − ˆwrot,0 ˆwrot,12]   2[ ˆwrot,1 ˆwrot,12 + ˆwrot,0 ˆwrot,2] )
( 2[ ˆwrot,1 ˆwrot,2 + ˆwrot,0 ˆwrot,12]   1 − 2[( ˆwrot,1)^2 + ( ˆwrot,12)^2]   2[ ˆwrot,2 ˆwrot,12 − ˆwrot,0 ˆwrot,1] )
( 2[ ˆwrot,1 ˆwrot,12 − ˆwrot,0 ˆwrot,2]   2[ ˆwrot,2 ˆwrot,12 + ˆwrot,0 ˆwrot,1]   1 − 2[( ˆwrot,1)^2 + ( ˆwrot,2)^2] )
where ˆwi,j rot(y − x) = ˆwi,j rot,0(y − x) + ˆwi,j rot,1(y − x)e1 + ˆwi,j rot,2(y − x)e2 + ˆwi,j rot,12(y − x)e12 is the normalized filter with ∥ ˆwi,j rot∥ = 1. the dependency on (y − x), and the superscripts i, j, are omitted inside the rotation matrix ri,j for clarity. 3d clifford convolution layers implementation of cl3,0(r) layers.
analogously to the 2-dimensional case, we can implement a 3d clifford cnn layer using equation 42, where {b0, b1, b2, b3, b12, b13, b23, b123} correspond to 8 different kernels representing one 3d multivector kernel, i.e. 8 different convolution layers, and {a0, a1, a2, a3, a12, a13, a23, a123} correspond to the scalar, vector, bivector, and trivector parts of the input multivector field. convolution layers for different 3-dimensional clifford algebras change the signs in the geometric product. b.2 clifford normalization different normalization schemes have been proposed to stabilize and accelerate the training of deep neural networks (ioffe & szegedy, 2015; ba et al., 2016; wu & he, 2018; ulyanov et al., 2017). their standard formulation applies only to real values. simply translating and scaling multivectors such that their mean is 0 and their variance is 1 is insufficient because it does not ensure equal variance across all components. batch normalization. trabelsi et al. (2017) extended the batch normalization formulation to apply to complex values. we build on the same principles to first propose an appropriate batch normalization scheme for multivectors, similar to the work of gaudet & maida (2018) for quaternions. for 2d multivectors of the form a = a0 + a1e1 + a2e2 + a12e1e2, we can formulate the problem of batch normalization as that of whitening 4d vectors: ã = v−1/2 (a − e[a]) , (50) where the covariance matrix is v = e[(a − e[a])(a − e[a])⊤]. the shift parameter β is a multivector with 4 learnable components and the scaling parameter γ is a 4 × 4 positive-definite matrix. the multivector batch normalization is then defined as: bn(a) = γã + β . when the batch sizes are small, it can be more appropriate to use group normalization or layer normalization. these can be derived with appropriate application of eq. 50 along the appropriate tensor dimensions. as such, batch, layer, and group normalization can be easily extended to 3-dimensional clifford algebras. b.3 clifford initialization parcollet et al.
(2018a); gaudet & maida (2018) introduced initialization schemes for quaternions which expand upon the deep network initialization schemes proposed by glorot & bengio (2010); he et al. (2015). similar to clifford normalization, quaternion initialization schemes can be adapted to clifford layers in a straightforward way. effectively, tighter bounds are required for the uniform distribution from which clifford weights are sampled. however, despite intensive studies we did not observe any performance gains over default pytorch initialization schemes15 for 2-dimensional experiments. similar findings are reported in hoffmann et al. (2020). however, 3-dimensional implementations necessitate much smaller initialization values (a factor of 1/8). b.4 equivariance under rotations and reflections clifford convolutions satisfy the property of equivariance under translation of the multivector inputs, as shown in appendix b.1.1. however, the current definition of clifford convolutions is not equivariant under multivector rotations or reflections. here, we derive a general kernel constraint which allows us to build generalized clifford convolutions which are equivariant w.r.t. rotations or reflections of the multivectors. that is, we would like to prove equivariance of a clifford layer under rotations and reflections (i.e. orthogonal transformations) if the multivector filters {wi}cout i=1 : z2 → (g)cin satisfy the constraint: wi,j(t x) = twi,j(x) , for 0 ≤ j < cin. we first define an orthogonal transformation on a multivector by tf = ±uf u†, with u†u = 1, where u and f are multivectors which are multiplied using the geometric product. the minus sign is picked by reflections but not by rotations, i.e. it depends on the parity of the transformation. this construction is called a “versor” product; it can be found for vectors in e.g. suter (2003), along with its extension to arbitrary multivectors. the above construction makes it immediately clear that t(f g) = (tf )(tg).
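the multiplicativity t(f g) = (tf )(tg) can be verified numerically for a rotor (a rotation versor, plus sign) in cl(2, 0)(r); a minimal sketch, where the component form of the geometric product and all names are our own encoding:

```python
# a sketch verifying the versor ("sandwich") transformation in cl(2,0)(r):
# for a unit rotor u, T f = u f u^dagger satisfies T(fg) = (Tf)(Tg),
# because u^dagger u = 1 lets the inner factors cancel.
import math
import random

def gp20(a, b):
    # geometric product in cl(2,0): e1^2 = e2^2 = +1, e1e2 = -e2e1
    a0, a1, a2, a12 = a
    b0, b1, b2, b12 = b
    return (a0*b0 + a1*b1 + a2*b2 - a12*b12,
            a0*b1 + a1*b0 - a2*b12 + a12*b2,
            a0*b2 + a1*b12 + a2*b0 - a12*b1,
            a0*b12 + a1*b2 - a2*b1 + a12*b0)

def sandwich(u, f, u_rev):
    return gp20(gp20(u, f), u_rev)

th = 0.7
u = (math.cos(th), 0.0, 0.0, math.sin(th))   # unit rotor: cos(th) + sin(th) e12
u_rev = (u[0], 0.0, 0.0, -u[3])              # reverse = inverse, since |u| = 1

random.seed(0)
f = tuple(random.uniform(-1, 1) for _ in range(4))
g = tuple(random.uniform(-1, 1) for _ in range(4))

lhs = sandwich(u, gp20(f, g), u_rev)                        # T(fg)
rhs = gp20(sandwich(u, f, u_rev), sandwich(u, g, u_rev))    # (Tf)(Tg)
assert all(abs(x - y) < 1e-9 for x, y in zip(lhs, rhs))
```

the check works for any multivectors f, g because associativity of the geometric product gives (u f u†)(u g u†) = u f (u†u) g u† = u (f g) u†.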
when we write t x, we mean an orthogonal transformation of a euclidean vector (which can in principle also be defined using versors). to show equivariance, we wish to prove for multivectors f : z2 → (g)cin and a set of cout multivector filters {wi}cout i=1 : z2 → (g)cin that: f ′(t x) = tf (x) , (54) wi(t x) = twi(x) , (55) and that equations 54, 55 yield: [f ⋆ wi]′ (t x) = t [f ⋆ wi] (x) . (56) that is: if the input multivector field transforms as a multivector, and the kernel satisfies the stated equivariance constraint, then the output multivector field also transforms properly as a multivector. note that t might act differently on the various components (scalars, vectors, pseudoscalars, pseudovectors) under rotations and/or reflections. 15the default pytorch initialization of linear and convolution layers is he uniform initialization (he et al., 2015) for 2-dimensional problems. the gain is calculated using leakyrelu activation functions with a negative slope of √5, which effectively results in glorot uniform initialization. now,
[f ⋆ wi]′ (t x) = Σy∈z2 Σcin j=1 f ′j(y) wi,j(y − t x)
= Σy∈z2 Σcin j=1 f ′j(y) wi,j(t (t −1y − x))
= Σy′∈t −1z2 Σcin j=1 f ′j(t y′) wi,j(t (y′ − x)) , y′ = t −1y
= Σy′∈z2 Σcin j=1 f ′j(t y′) wi,j(t (y′ − x))
= Σy′∈z2 Σcin j=1 (tf j(y′))(twi,j(y′ − x))
= Σy′∈z2 Σcin j=1 t(f j(y′) wi,j(y′ − x))
= t Σy′∈z2 Σcin j=1 f j(y′) wi,j(y′ − x)
= t [f ⋆ wi] (x) ,
where in the third line we transform variables y → y′, in the fourth line we use the invariance of the summation “measure” under t , in the fifth line we use the transformation property of f and the equivariance constraint for wi, in the sixth line we use the product property of versor transformations, and in the seventh line we use the linearity of t. b.5 clifford fourier layers we derive the implementation of clifford fourier layers for multivectors in g2 and g3, i.e. multivectors of clifford algebras generated by the 2-dimensional vector space r2 and the 3-dimensional vector space r3.
classical fourier transform. in arbitrary dimension n, for a continuous n-dimensional complex-valued signal f (x) = f (x1, . . . , xn) : rn → c, the fourier transform ˆf (ξ) = f{f }(ξ) is defined as: ˆf (ξ) = f{f }(ξ) = ∫rn f (x) e−2πi⟨x,ξ⟩ dx , ∀ξ ∈ rn , (58) provided that the integral exists, where x and ξ are n-dimensional vectors and ⟨x, ξ⟩ is the contraction of x and ξ. usually, ⟨x, ξ⟩ is the inner product, and ξ is an element of the dual vector space rn⋆. the inversion theorem states the back-transform from the frequency domain into the spatial domain: f (x) = f −1{f{f }}(x) = ∫rn ˆf (ξ) e2πi⟨x,ξ⟩ dξ , ∀x ∈ rn . (59) we can rewrite the fourier transform of equation 58 in coordinates: ˆf (ξ1, . . . , ξn) = f{f }(ξ1, . . . , ξn) = ∫rn f (x1, . . . , xn) e−2πi(x1ξ1+...+xnξn) dx1 . . . dxn . (60) discrete/fast fourier transform. the discrete counterpart of equation 58 transforms an n-dimensional complex signal f (m1, . . . , mn) : rn → c sampled at M1 × . . . × Mn grid points into its complex fourier modes via: ˆf (ξ1, . . . , ξn) = f{f }(ξ1, . . . , ξn) = ΣM1−1 m1=0 . . . ΣMn−1 mn=0 f (m1, . . . , mn) · e−2πi(m1ξ1/M1 + ... + mnξn/Mn) , (61) where (ξ1, . . . , ξn) ∈ zM1 × . . . × zMn . fast fourier transforms (ffts) (cooley & tukey, 1965; van loan, 1992) immensely accelerate the computation of the transformations of equation 61 by factorizing the discrete fourier transform matrix into a product of sparse (mostly zero) factors. 2d clifford fourier transform. analogous to equation 58, for cl(2, 0)(r) the clifford fourier transform (ebling & scheuermann, 2005; hitzer, 2012) and the respective inverse transform for multivector-valued functions f (x) : r2 → g2 and vectors x, ξ ∈ r2 are defined as: ˆf (ξ) = f{f }(ξ) = ∫r2 f (x) e−2πi2⟨x,ξ⟩ dx , ∀ξ ∈ r2 , (62) f (x) = f −1{f{f }}(x) = ∫r2 ˆf (ξ) e2πi2⟨x,ξ⟩ dξ , ∀x ∈ r2 , (63) provided that the integrals exist.
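the transform of equation 62 can be evaluated numerically by treating the dual pairs f0 + f12i2 and f1 + f2i2 as two complex-valued signals and applying a classical dft to each; a minimal pure-python sketch (naive dft via cmath; names and layout are ours):

```python
# a sketch of the 2d clifford fourier transform for cl(2,0)(r), computed
# as two classical (naive, cmath-based) dfts of the complex signals formed
# by the dual pairs f0 + f12*i2 and f1 + f2*i2.
import cmath

def dft2(g):
    """naive 2d dft of an m1 x m2 grid of complex numbers."""
    m1, m2 = len(g), len(g[0])
    return [[sum(g[a][b] * cmath.exp(-2j * cmath.pi * (a * u / m1 + b * v / m2))
                 for a in range(m1) for b in range(m2))
             for v in range(m2)] for u in range(m1)]

def clifford_dft2(f):
    """f[x][y] = (f0, f1, f2, f12); returns the transform in the same layout."""
    spinor = [[c[0] + 1j * c[3] for c in row] for row in f]   # f0 + f12 i2
    vector = [[c[1] + 1j * c[2] for c in row] for row in f]   # f1 + f2  i2
    fs, fv = dft2(spinor), dft2(vector)
    return [[(fs[u][v].real, fv[u][v].real, fv[u][v].imag, fs[u][v].imag)
             for v in range(len(f[0]))] for u in range(len(f))]
```

as a sanity check, a constant scalar field on a 2 × 2 grid has only its zero-frequency mode nonzero, with value equal to the number of grid points.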
the differences to equations 58 and 59 are that f (x) and ˆf (ξ) represent multivector fields in the spatial and the frequency domain, respectively, and that the pseudoscalar i2 = e1e2 is used in the exponent. inserting the definition of multivector fields, we can rewrite equation 62 as: f{f }(ξ) = ∫r2 f (x) e−2πi2⟨x,ξ⟩ dx = ∫r2 f0(x) e−2πi2⟨x,ξ⟩ dx + ∫r2 f1(x) e−2πi2⟨x,ξ⟩ dx e1 + ∫r2 f2(x) e−2πi2⟨x,ξ⟩ dx e2 + ∫r2 f12(x) e−2πi2⟨x,ξ⟩ dx e1e2 . we obtain a clifford fourier transform by applying two standard fourier transforms for the dual pairs f0 = f0(x) + f12(x)i2 and f1 = f1(x) + f2(x)i2, which both can be treated as complex-valued signals f0, f1 : r2 → c. consequently, f (x) can be understood as an element of c2. the 2d clifford fourier transform is the linear combination of two classical fourier transforms. the discretized version of the spinor/vector part ( ˆfs/v) reads analogously to equation 61: ˆfs/v(ξ1, ξ2) = ΣM1−1 m1=0 ΣM2−1 m2=0 fs/v(m1, m2) · e−2πi2(m1ξ1/M1 + m2ξ2/M2)
312.993,
144.1100828,
380.88958656,
155.2918556
] |
De4FYqjFueZ.pdf | 2,023 | 0 | transformers learn shortcuts to automata bingbin liu1 ∗ jordan t. ash2 1carnegie mellon university surbhi goel3† akshay krishnamurthy2 cyril zhang2 2microsoft research nyc 3university of pennsylvania abstract algorithmic reasoning requires capabilities which are most naturally understood through recurrent models of computation, like the turing machine. however, transformer models, while lacking recurrence, are able to perform such reasoning using far fewer layers than the number of reasoning steps. this raises the question: what solutions are these shallow and non-recurrent models finding? we investigate this question in the setting of learning automata, discrete dynamical systems naturally suited to recurrent modeling and expressing algorithmic tasks. our theoretical results completely characterize shortcut solutions, whereby a shallow transformer with only o(t ) layers can exactly replicate the computation of an automaton on an input sequence of length t . by representing automata using the algebraic structure of their underlying transformation semigroups, we obtain o(log t )-depth simulators for all automata and o(1)-depth simulators for all automata whose associated groups are solvable. empirically, we perform synthetic experiments by training transformers to simulate a wide variety of automata, and show that shortcut solutions can be learned via standard training. we further investigate the brittleness of these solutions and propose potential mitigations. introduction modern deep learning pipelines demonstrate an increasing capability to perform combinatorial reasoning: pretrained on large, diverse distributions of natural language, math, and code, they are nascently solving tasks which seem to require a rigid “understanding” of syntax, entailment, and state inference. how do these neural networks represent the primitives of logic and the algorithms they execute internally? 
when considering this question, there is an immediate mismatch between classical sequential models of computation (e.g., turing machines) and the transformer architecture, which has delivered many of the recent breakthroughs in reasoning domains. if we are to think of an algorithm as a set of sequentially-executed computational rules, why would we use a shallow1 non-recurrent network? we study this question through the lens of finite semiautomata, which compute state sequences q1, . . . , qt from inputs σ1, . . . , σt by application of a transition function δ (and initial state q0): qt = δ(qt−1, σt). semiautomata are the underlying structures governing the computations realizable by automata (such as regular expression parsers or finite-state transducers), which are simply semiautomata equipped with mappings from states to output. thus, one natural motivation for studying them comes from the question of whether transformers can subsume the structures found in classical nlp pipelines. another motivation comes from the perspective of reinforcement learning and control, where transformers are beginning to be used as world models: semiautomata specify deterministic discrete-state dynamical systems. we perform a theoretical and empirical investigation of whether (and how) non-recurrent transformers learn semiautomata. we characterize and analyze how shallow transformers find shortcut ∗the majority of this work was completed while b. liu was an intern at microsoft research nyc. †this work was completed while s. goel was at microsoft research nyc. 1compared to the number of symbols it can process. for example, distilbert (sanh et al., 2019) can handle thousands of tokens with 6 sequential layers. figure 1: various examples of semiautomata. from left to right: a mod-2 counter, a 2-state memory unit, grid4, a 2-dimensional gridworld constructible via a direct product grid3 × grid4, and a rubik’s cube, whose transformation semigroup is a very large non-abelian group. 
solutions, which correctly and efficiently simulate the transition dynamics of semiautomata with far fewer sequential computations than required for iteratively inferring each state qt. our contributions. our theoretical results provide structural guarantees for the representability of semiautomata by shallow, non-recurrent transformers. in particular, we show that: • shortcut solutions, with depth logarithmic in the sequence length, always exist (theorem 1). • constant-depth shortcuts exist for solvable semiautomata (theorem 2). there do not exist constant-depth shortcuts for non-solvable semiautomata, unless tc0 = nc1 (theorem 4). • for a natural class of semiautomata corresponding to path integration in a “gridworld” with boundaries, we show that there are even shorter shortcuts (theorem 3), beyond those guaranteed by the general structure theorems above. we accompany these theoretical findings with an extensive set of experiments: • end-to-end learnability of shortcuts via sgd (section 4). the theory shows that shortcut solutions exist; is the non-convexity of the optimization problem an obstruction to learning them in practice? for a variety of semiautomaton simulation problems, we find empirically that there is no such obstruction. shallow non-recurrent transformers are able to learn shortcuts which generalize near-perfectly in-distribution. • more challenging settings (section 5). we compare non-recurrent and recurrent models in the presence of additional considerations: out-of-distribution generalization (including to unseen sequence lengths) and limited supervision. this reveals the brittleness of non-recurrent models, in line with prior “spurious representation” notions of shortcuts in deep learning. toward mitigating these drawbacks and obtaining the best of both worlds, we show that with recency-biased scratchpad training, transformers can be guided to learn the robust recurrent solutions. related work emergent reasoning in neural sequence models. 
neural sequence models, both recurrent (wu et al., 2016; peters et al., 2018; howard & ruder, 2018) and non-recurrent (vaswani et al., 2017; devlin et al., 2018), have ushered in an era of broadly-applicable and (with pretraining) sampleefficient natural language understanding. building on this, large-scale non-recurrent transformer models have demonstrated capabilities in program synthesis, mathematical reasoning, and in-context multi-task adaptation. a nascent frontier is to leverage neural dynamics models, again both recurrent (hafner et al., 2019) and non-recurrent (chen et al., 2021a; janner et al., 2021), for decision making. at the highest level, the present work seeks to idealize and understand the mechanisms behind which deep learning solves tasks requiring combinatorial and algorithmic reasoning. computational models of neural networks. in light of the above, it is empirically evident that neural networks are successfully learning circuits which generalize on some combinatorial tasks. many efforts in the theory and empirical science of deep learning are dedicated towards the rigorous analysis of this phenomenon. various perspectives map self-attention to bounded-complexity circuits (hahn, 2020; elhage et al., 2021; merrill et al., 2021; edelman et al., 2022), declarative programs (weiss et al., 2021), and turing machines (dehghani et al., 2019). the research program of bertology (clark et al., 2019; vig, 2019; tenney et al., 2019) interprets trained models in terms of known linguistic and symbolic primitives. the most relevant theoretical work to ours is (barrington & th´erien, 1988), which acts as a “rosetta stone” between classical circuit complexity and semigroup theory. the core technical ideas for theorems 1 (nc1 prefix sum), 2 (krohn-rhodes), and 4 (barrington) are inspired by the results and discussions therein. 
in the language of circuit complexity, our work establishes that shallow, nonrecurrent transformers can efficiently represent all of the constructions involved in the (simple) nc1 and (significantly more complex) acc0 solutions to sequential multiplication in semigroups. on the other hand, the shorter shortcut from theorem 3 carefully leverages self-attention to improve upon these results; we were unable to find an analogous refinement in the circuit complexity literature. synthetic combinatorial tasks. our problem setting of simulating finite-state semiautomata unifies the settings of several recent investigations of whether (and how) transformers learn bounded-depth dyck languages (yao et al., 2021), parities (anil et al., 2022), adders (nogueira et al., 2021; nanda & lieberum, 2022), regular languages (bhattamishra et al., 2020), and sparse logical predicates (edelman et al., 2022; barak et al., 2022). zhang et al. (2022) empirically analyze the behavior and inner workings of transformers on random-access group operations and note “shortcuts” (which skip over explicit program execution) similar to those we study. we provide an expanded discussion of related work in appendix a.5. preliminaries semiautomata and their algebraic structure a semiautomaton a := (q, σ, δ) consists of a set of states q, an input alphabet σ, and a transition function δ : q × σ → q. in this work, q and σ will always be finite sets. for all positive integers t and a starting state q0 ∈ q, a defines a map from input sequences (σ1, . . . , σt ) ∈ σt to state sequences (q1, . . . , qt ) ∈ qt : qt := δ(qt−1, σt) for t = 1, . . . , t . this is a deterministic markov model, in the sense that at time t, the future states qt+1, . . . , qt only depend on the current state qt and the future inputs σt+1, . . . , σt . we define the task of simulation: given a semiautomaton a, starting state q0, and input sequence (σ1, . . . , σt ), output the state trajectory at,q0 (σ1, . . . , σt ) := (q1, . . . 
, qt ). let f : σt → qt be a function (which in general can depend on a, t, q0). we will say that f simulates at,q0 if f (σ1:t ) = at,q0(σ1:t ) for all input sequences σ1:t . finally, for a positive integer t , we say that a function class f of functions from σt → qt is said to simulate a at length t if, for each q0 ∈ q, there is a function in f which simulates (a, t, q0). every semiautomaton induces a transformation semigroup t (a) of functions ρ : q → q under composition, generated by the per-input-symbol state mappings δ(·, σ) : q → q. when t (a) contains the identity function, it is called a transformation monoid. when all of the functions are invertible, t (a) is a permutation group. see figure 1 for some examples which appear both in our theory and experiments; additional background (including a self-contained tutorial on the relevant concepts in finite group and semigroup theory) is provided in appendix a.2. an elementary but interesting example is a parity counter (figure 1, left): the state is a bit, and the inputs are {“toggle the bit”, “do nothing”}; the transformation semigroup is c2, the cyclic group of order 2. parity has been studied in previous synthetic experiments (zhang et al., 2022; anil et al., 2022). recurrent and non-recurrent neural sequence models a sequence-to-sequence neural network of length t and dimension d is a function fnn : rt ×d × θ → rt ×d, with trainable parameters θ ∈ θ. equipped with an encoding layer e : σ → rd and decoding layer w : rd → q (applied position-wise), the function (w ◦ fnn ◦ e) : σt → qt has the same input and output types as at,q0. this work will investigate when the functions defined by neural networks can simulate semiautomata. a recurrent neural network (rnn) is a sequence-to-sequence neural network defined by iterated composition of a recurrent unit g : rd × rd × θ → rd. for a given initial hidden state h0 ∈ rd, and input sequence u1, . . . 
, ut ∈ rd, it produces an output hidden state sequence ht := g(ht−1; ut; θ). thus, for any fixed θ, an rnn defines a semiautomaton with infinitely many states and inputs: q = σ = rd. thus, as long as g can represent δ, rnns can simulate all semiautomata. in this sense, the computational models of rnns and semiautomata naturally coincide. an l-layer transformer is another sequence-to-sequence network, consisting of alternating self-attention blocks and feedforward mlp blocks: ftf := (id + f (l)mlp) ◦ (id + f (l)attn) ◦ (id + f (l−1)mlp) ◦ · · · ◦ (id + f (1)attn) ◦ (id + p ). briefly, an attention layer performs ℓ1-normalized mixing operations across positions t, while a constant-layer mlp block performs position-wise function approximation (with no mixing between positions); id denotes the identity function (residual connections), and p encodes the position t.2 we use fairly standard positional encodings in both theory and experiments. importantly, the standard transformer is convolutional (in that the weights in fattn and fmlp are shared across positions t), but is non-recurrent: parameters are not shared across blocks. all architectures have a notion of computational depth d (succinctly, depth) when processing inputs of length t , which is the longest path in the computational graph. for rnns, this is θ(t ), while an l-layer transformer (with constant-layer mlps) has depth θ(l). for transformers, since they coincide up to constant factors, we use depth and number of layers interchangeably. we will also track the layers l, embedding dimension d, attention width (the largest number of parallel attention head outputs), and mlp width (the largest number of parallel hidden activations in the mlp blocks).3 theory: shortcuts abound a t -layer transformer can trivially simulate a semiautomaton at length t sequentially: like an rnn, the t-th layer can implement (an embedding of) the state transition qt−1 ↦ qt.
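this sequential baseline is simple to write down explicitly; a minimal sketch for the parity counter of figure 1 (the dictionary encoding of the transition function δ is our own convention):

```python
# a minimal sketch of the o(t)-depth sequential baseline: simulating the
# parity counter of figure 1 (left) state by state.
def simulate(delta, q0, inputs):
    states, q = [], q0
    for s in inputs:                  # each step depends on the previous state
        q = delta[(q, s)]
        states.append(q)
    return states

parity_delta = {(0, "toggle"): 1, (1, "toggle"): 0,
                (0, "noop"): 0, (1, "noop"): 1}
print(simulate(parity_delta, 0, ["toggle", "noop", "toggle", "toggle"]))
# → [1, 1, 0, 1]
```

the loop has depth t in the sense above: computing qt requires qt−1, so the longest path in the computational graph grows linearly with the sequence length.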
yet, transformers succeed in practice with long contexts (≥ 10^3) and fewer layers (as few as 6). a natural theoretical question is that of representability: can transformers efficiently simulate semiautomata with parallel shortcut solutions, whose depths are much smaller than the sequence length t ? definition 1 (shortcut solution). let a be a semiautomaton. suppose that for every t ≥ 1, there is a sequence-to-sequence neural network ft which simulates a at length t . we call this sequence {ft }t ≥1 a shortcut solution to the problem of simulating a if its depth d satisfies d ≤ o(t ). by this definition, shortcuts are quite general and some are less interesting than others. for example, it is always possible to construct a constant-depth neural network which memorizes all |σ|t values of at,q0 , but these networks must be exceptionally wide. we could also “fast-forward” state simulation, letting each of (say) √t layers simulate √t consecutive state transitions, but, without exploiting the structure of the semiautomaton, this would require width ω(2^√t). to rule out these cases and focus on interesting shortcuts for transformers, we want the other size parameters (attention and mlp width) to be small: say, scaling polynomially with t , or even dependent only on |q|, |σ|. to construct such shortcuts, we need ideas beyond explicit iteration of state transitions. semiautomata admit shallow parallel shortcuts we begin by noting that polynomial-width shortcuts always exist. this may be counterintuitive if we restrict ourselves to viewing a network’s intermediate activations as representations of states qt. when we instead view them as encoding state transformations δ(·, σ) : q → q and their compositions, a divide-and-conquer construction is evident (see figure 2a), detailed in appendix c.2: theorem 1 (simulation is parallelizable; informal).
transformers can simulate all semiautomata a = (q, σ, δ) at length t , with depth o(log t ), embedding dimension o(|q|), attention width o(|q|), and mlp width o(|q|2). if we assume that an attention head can only select a constant number of indices, theorem 1 is unimprovable: the receptive field of a sublogarithmic-depth transformer is not large enough. however, it is known in theory and practice that soft-attention heads are capable of attending broadly, representing certain non-sparse dependencies (clark et al., 2019; yao et al., 2021). thus, we can ask a more challenging question: can the dense operations of attention enable even shallower shortcuts? 2we omit layer normalization. this discrepancy is superficial; see the discussion in appendix a.4. 3full statements and proofs also track ∞-weight norms (the largest absolute value of any parameter) and bit precision of each floating-point computation. we defer precise definitions and discussion to appendix a.4. figure 2: intuitions for the theoretical constructions. (a) divide-and-conquer function composition yields logarithmic-depth shortcuts (theorem 1). (b) the two “atoms” of the constant-depth krohn-rhodes decomposition (theorem 2) of a solvable semiautomaton: modular addition and sequentially resettable memory. (c) information flow of the cascade product, which is used to glue these atoms together, and easily implemented with residual connections. (d) an even shorter shortcut solution for gridworld simulation (theorem 3; see appendix c.4). the key to resolving this question comes from krohn-rhodes theory, which gives us tools to reason about the structure of arbitrary semiautomata and their transformation semigroups. a landmark result (krohn & rhodes, 1965), a vast generalization of the uniqueness of prime factorizations for integers, shows that to simulate any semiautomaton, we only need to handle two types of elementary objects: simple groups, and a memory unit (figure 1b). 
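the function-composition view behind theorem 1 can be made concrete with a parallel prefix scan; a minimal sketch (a hillis-steele-style scan in plain python; the actual transformer construction differs in detail, but the depth argument is the same o(log t ) number of rounds):

```python
# a sketch of the divide-and-conquer shortcut: each position carries the
# state map delta(., sigma) as a tuple, and all prefix compositions are
# computed in o(log t) parallel rounds via a scan over (associative,
# non-commutative) function composition.
def compose(g, f):
    """apply f first, then g; maps on {0, ..., n-1} stored as tuples."""
    return tuple(g[f[q]] for q in range(len(f)))

def prefix_compositions(maps):
    out, step = list(maps), 1
    while step < len(out):           # o(log t) rounds, each fully parallel
        out = [out[i] if i < step else compose(out[i], out[i - step])
               for i in range(len(out))]
        step *= 2
    return out

toggle, noop = (1, 0), (0, 1)        # the parity counter's two state maps
pre = prefix_compositions([toggle, noop, toggle, toggle])
print([p[0] for p in pre])           # states from q0 = 0 → [1, 1, 0, 1]
```

note that the intermediate values are transformations q → q rather than states, exactly the change of viewpoint described above; the recovered state sequence agrees with sequential simulation of the same input.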
when the krohn-rhodes decomposition contains no non-abelian groups (we call such a semiautomaton solvable [4]), there exist constant-depth circuits for simulation, which we manifest as neural networks. it turns out that positional weight sharing (a.k.a. “width-1 convolutions”), non-recurrence, and self-attention are particularly well-suited for efficiently representing the krohn-rhodes decomposition of a semiautomaton: uniform-sum attention heads perform abelian group operations, proximity-based selection heads implement memory units, and the rest of the architecture (mlps and residual connections) implements the cascade product (definition 4) which combines these atomic operations. overall, we conclude:

theorem 2 (transformer krohn-rhodes; informal). transformers can simulate all solvable semiautomata a = (q, σ, δ), with depth o(|q|^2 log |q|), embedding dimension 2^o(|q| log |q|), attention width 2^o(|q| log |q|), and mlp width |q|^o(2^|q|) + 2^o(|q| log |q|) · t. [5]

it is quite counterintuitive [6] that as t → ∞, no additional depth is needed for such a large class of problems. we provide background and details (including the definition and implementation of this notion of semigroup product) in appendices a.2 and c.3. in figure 2b and 2c, we illustrate the three key ingredients: efficient implementations of the two atoms (modular counting and memory lookups), and the procedure for gluing them together (building a transformation cascade).

what does each layer do? the construction in theorem 1 recursively composes functions, as opposed to the naive solution of directly emulating states. theorem 2 takes a very different approach: it relies on the holonomy decomposition variant of the krohn-rhodes theorem (eilenberg, 1974).
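the two atoms referenced above can also be sketched in plain python. this is our own illustration of the parallel computations involved, not the transformer construction itself: a mod-n counter is a prefix sum followed by a pointwise reduction (mirroring a uniform-sum attention head plus an mlp), and a sequentially resettable memory is a lookup of the most recent reset symbol (mirroring a proximity-based selection head).

```python
from itertools import accumulate

def mod_counter(increments, n):
    """Abelian atom: prefix sums (computable by a uniform-sum attention head),
    followed by a pointwise reduction mod n (computable by an MLP)."""
    return [s % n for s in accumulate(increments)]

def resettable_memory(tokens, reset_symbols, initial):
    """Memory atom: the state at time t is the most recent reset symbol seen
    so far (a proximity-based selection head can retrieve it in one step)."""
    out = []
    for t in range(len(tokens)):
        last = next((tokens[i] for i in range(t, -1, -1) if tokens[i] in reset_symbols),
                    initial)
        out.append(last)
    return out

assert mod_counter([1, 1, 0, 1], 2) == [1, 0, 0, 1]
assert resettable_memory(["x", "A", "x", "B", "x"], {"A", "B"}, None) \
    == [None, "A", "A", "B", "B"]
```

both atoms depend on the whole input only through a single parallel aggregation, which is why no extra depth is needed as t grows.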
rather than simulating qt or composing functions, the computational paths correspond to a |q|-level tree of nested coarsenings of the semiautomaton’s dynamics: “which subset of states could qt be in right now?” within each level of this tree, the network must implement (generally non-commutative) group operations. this can be done with o(|q| log |q|) layers, by leveraging jordan-hölder decompositions and the universal embedding theorem (krasner & kaloujnine, 1951).

[4] see definition 6. among the solvable groups are the dihedral groups d2n, the permutation groups sn, an for n ≤ 4, the quaternion group q8, all groups of order < 120 except a5, and all groups of odd order.
[5] perhaps surprisingly, the only place where a width of t is used is to implement a mod-n gate. this dependence can be removed entirely if we allow for periodic activation functions such as x ↦ sin(x).
[6] from the back cover of rhodes et al. (2010): the underlying theorem launched a theory which “reveals deep and unexpected connections between algebra (semigroups) and areas of science and engineering”.

can we get even shallower shortcuts? finally, we show that on a natural class of problems, the computational model of self-attention leads to further fine-grained improvements over the guarantees of krohn-rhodes theory. motivated by the application of transformers in modeling environment dynamics, we consider the semiautomaton gridn corresponding to a “gridworld”: n states on a line, with input symbols “move left if possible” and “move right if possible” (see figure 1, middle). we show that self-attention enables an extremely concise solution, with depth independent of both t and |q| = n:

theorem 3 (depth-2 shortcut for gridworld; informal).
for all positive integers n, t, transformers can simulate gridn at length t, with depth 2 [7], embedding dimension o(1), attention width o(n), and mlp width o(t). [8]

the proof builds a concise parallel nearest boundary detector, and can be found in appendix c.4. we note that this particular setting is known to be an extremal case for the holonomy construction in krohn-rhodes theory (maler (2010) discusses this, calling it the elevator automaton). it would be interesting to generalize our improvement and characterize the class of problems for which self-attention affords o(1)-depth instead of poly(|q|)-depth solutions.

aren’t neural networks universal function approximators? sufficiently wide neural networks with sufficiently expressive nonlinearities can fit arbitrary functions (hornik et al., 1989; cybenko, 1989). however, if we constrain complexity measures such as depth and width, one cannot hope to apply universality directly. it is true that one can take the discrete circuit constructions in (barrington & thérien, 1988), “compile” every gate to a constant-depth network, and recover shortcut solutions with o(t) depth and poly(t) width. however, our constructions go far beyond black-box reductions: the roles of self-attention and positional parameter sharing allow for such efficient constructions that no parameter count depends on t (except the mlp width, which is removable with a periodic activation function). furthermore, the constructions are so simple and natural that they are corroborated by the preliminary “reverse engineering” investigation in section 4.

lower bounds. can theorem 2 be improved to handle non-solvable semiautomata? (equivalently: can theorem 1 be improved to constant depth?) it turns out that as a consequence of a classic result in circuit complexity (barrington, 1986), this question is equivalent to the major open question of whether tc0 = nc1 (thus: conjecturally, no). unless these complexity classes collapse, theorems 1 and 2 are optimal.
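returning to the gridworld example: one elementary way to see why gridn admits a description whose size is independent of t (this is our own illustration, not the paper's depth-2 attention construction) is that every composition of "move left/right if possible" maps is a clamped shift x ↦ min(u, max(l, x + s)), so the net effect of any input prefix is just three numbers, and these triples compose associatively.

```python
import random

def clamp_step(direction, n):
    """'move right/left if possible' on states 0..n-1, written as a
    clamped shift x -> min(u, max(l, x + s)); stored as (s, l, u)."""
    return (direction, 0, n - 1)

def compose(f, g):
    """Apply f first, then g. Clamped shifts are closed under composition."""
    sf, lf, uf = f
    sg, lg, ug = g
    return (sf + sg,
            min(ug, max(lg, lf + sg)),
            min(ug, max(lg, uf + sg)))

def apply(f, x):
    s, l, u = f
    return min(u, max(l, x + s))

n, t = 9, 200
moves = [random.choice([-1, +1]) for _ in range(t)]

# naive O(T)-step simulation vs. associative folding of clamp triples
# (the fold could equally be done as a balanced tree, in O(log T) rounds).
q_naive, eff = 4, (0, 0, n - 1)  # identity effect: shift 0, clamps never bind
for m in moves:
    q_naive = min(n - 1, max(0, q_naive + m))
    eff = compose(eff, clamp_step(m, n))
assert apply(eff, 4) == q_naive
```

because the triples form a tiny monoid under `compose`, the whole history collapses into a constant-size summary, in the same spirit as the concise shortcut of theorem 3.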
in summary, simulating non-solvable semiautomata with constant depth is provably hard:

theorem 4 (transformer barrington). let a be a non-solvable semiautomaton. then, for sufficiently large t, no o(log t)-precision transformer with depth independent of t and width polynomial in t can continuously simulate a at length t, unless tc0 = nc1.

this is proven in appendix c.5. the smallest example of a non-solvable semiautomaton is the one on |q| = 5 states, whose transitions generate a5 (all of the even permutations). finally, we note that although our width bounds might be improvable, an exponential-in-|q| number of hypotheses (and hence a network with poly(|q|) parameters) is unavoidable if one wishes to learn an arbitrary |q|-state semiautomaton from data: there are |q|^(|q|·|σ|) of them, which generate |q|^ω(|q|^2) distinct semigroups (kleitman et al., 1976). if we wish to study how machine learning models can efficiently identify large algebraic structures, we will need finer-grained inductive biases to specify which semiautomata to prefer, a direction for future work.

experiments: can sgd find the shortcuts?

our theorems are limited to representability: concise shallow solutions exist, but whether gradient-based local search (i.e., standard training) finds them is another matter entirely. for example, embedded within the problem of learning to simulate the 2-state parity semiautomaton is a well-known non-convex optimization problem (daniely & malach, 2020; edelman et al., 2022; nichani et al., 2022). in general, even detecting whether t(a) contains a cycle is pspace-hard (cho & huynh, 1991). theoretically understanding how the training dynamics of deep learning transcend the worst-case hardness of non-convex optimization is a major frontier of research that we do not attempt to

[7] this requires max-pooling. if we do not use max-pooling, we can instead use an mlp with width 2^o(n) and depth o(1), or width o(n) and depth o(log n).
[8] as with theorem 2, the width can be reduced to o(n) if we employ periodic activation functions.

figure 3: overview of the empirical results in section 4, on in-distribution learnability of shortcuts by standard transformer training. (a) truncated table of results (in-distribution accuracy across tasks and network depths); rows specify semiautomaton simulation problems, and columns specify network depth. (b) attention heatmaps (grid8): attention heads implement a nearest boundary detector (top); training curves (c2 and s5): training is highly unstable (bottom).

address here. instead, we approach the question of optimization through an empirical lens. our primary goal is to understand if gradient-based training can find shortcut solutions at all, rather than whether such training is stable. accordingly, unless otherwise noted, we report the performance of the best model among 20 replicates; the median performance is provided in appendix b. for a selection of 19 semiautomata corresponding to various groups and semigroups, we train shallow transformers to output their state sequences given random inputs. specifically, we apply gpt-2-like models (radford et al., 2019) with 1-16 layers on freshly-sampled sequences of length t = 100 [9]. strikingly, we obtain positive results (> 99% in-distribution accuracy) for all of them, including ones which generate the non-solvable groups a5 and s5 [10]. figure 3a gives a selection of our full results (in appendix b.1). we find that more complex semiautomata (corresponding to non-abelian groups) require deeper networks to learn, in agreement with our theoretical constructions.

which shallow solutions are learned? our theoretical results identify shortcut solutions which follow multiple, mutually incompatible paradigms. in general, we do not attempt a full investigation of mechanistic interpretability of the trained models.
as preliminary evidence, we visualize some of the attention patterns in figure 3b (top) within successfully-trained models, finding attention heads which perform flat summations (with uniform attention) and conditional resets.

optimization quirks. although sufficiently deep networks find the solutions with non-negligible probability, the training dynamics are unstable; figure 3b (bottom) shows some training curves, exhibiting high variance, negative progress, or accuracy that decays with continued training. in the same vein as the “synthetic reasoning tasks” introduced by zhang et al. (2022), we hope that semiautomaton simulation will be useful as a clean, nontrivial testbed (with multiple difficulty knobs) for debugging and improving training algorithms, and perhaps the neural architectures themselves.

further experiments: more challenging settings

for a wide family of algebraic structures, we have proven that the function class of shallow non-recurrent networks subsumes deeper finite-state recurrent models. furthermore, the experiments in section 4 have shown that despite the non-convexity of the optimization problem, standard training works: transformers can learn shortcuts to semiautomaton simulation, end-to-end. while encouraging, the experiments in section 4 are idealized in several ways, and it is natural to ask if transformers perform similarly in more challenging semiautomaton simulation scenarios. towards answering this

[9] using freshly-sampled data ensures that the model cannot achieve good performance by brute-force memorization in a number of training steps we could ever execute computationally (for sufficiently large t such as t = 100), since there are an exponential number of sequences.
[10] explanations on why certain groups are harder to learn are provided in appendix b.1.1.

[flattened figure 4(a) table omitted: accuracies with indirect supervision, across tasks (dyck4,8, grid9, (abab)⋆) and observation maps (stack top, boundary indicator, location, accept); lstm gets 100% on all tasks.]
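the training setup described in section 4 (freshly sampled random inputs, supervised with the full state sequence) is simple to reproduce. the sketch below is our own rendering of that data-generation step, using the 2-state parity semiautomaton c2 as a hypothetical example.

```python
import random

def sample_example(delta, q0, sigma, t=100, rng=random):
    """Draw a fresh random input sequence and label every position with the
    semiautomaton state, as in the section-4 training setup."""
    inputs = [rng.choice(sigma) for _ in range(t)]
    states, q = [], q0
    for s in inputs:
        q = delta[s][q]
        states.append(q)
    return inputs, states

# hypothetical example: the 2-state parity semiautomaton (c2):
# symbol 1 flips the state, symbol 0 keeps it.
delta = {0: [0, 1], 1: [1, 0]}
inputs, states = sample_example(delta, q0=0, sigma=[0, 1], t=100)
assert len(inputs) == len(states) == 100
assert all(states[i] == sum(inputs[: i + 1]) % 2 for i in range(100))
```

because every call draws a fresh sequence, a model cannot succeed by memorizing the training set, matching the rationale in footnote 9.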
figure 4: overview of the empirical results in section 5. (a) learning in the latent-state setting, with various observation maps φ(qt). (b) learning from incomplete state sequences (varying preveal, log spacing): final accuracy vs. position-wise probability of a hidden token. (c) ood generalization (c2 (parity): accuracy at different pr[σ = 1] and t): transformers fail to generalize to different distributions and lengths.

question, in this section, we consider some challenges that may arise in practice and an associated set of experimental results; further details are deferred to appendix b.2.

incomplete and indirect supervision

automata are partially observable semiautomata. consider the case of partial observability. for any semiautomaton a = (q, σ, δ) and a (generally non-invertible) observation function φ : q → ˜q, we can define the problem of predicting ˜qt := φ(qt). if we can only obtain observations ˜qt (i.e., the state is latent), this fully captures the problem of learning a finite-state automaton from data. the results in this paper have shown that this is equivalent to the fully-observable case in terms of representation. however, the learning problem can be much harder; indeed, this may account for bhattamishra et al. (2020)’s negative results on learning regular languages with constant-depth transformers. note that this also captures autoregressive next-token prediction tasks induced by distributions (e.g., generating dyck languages (yao et al., 2021)) where the sequence’s continuations depend on a latent semiautomaton’s state (e.g., the current stack for dyck). despite these potential challenges, we find that transformers are able to find a solution with good in-distribution performance for all partially observable settings we consider; see figure 4(a).

learning from incomplete state sequences.
next, we consider the setting which is identical to that described in section 4, but each state qt is randomly revealed from the training data with some probability 0 ≤ preveal ≤ 1. as with partial observability, this does not affect representation issues, but can make learning/optimization much harder. figure 4b shows the accuracy of s5 for models trained on length 100 sequences for various preveal. it can be seen that transformers may be unable to find good solutions when the labels become sparser, whereas lstm’s performance stays robust across all choices of preveal. out-of-distribution shortcomings of shortcut solutions the theoretical construction of modular counters (lemma 6) suggests a possible failure mode: if attention performs prefix addition and the mlp computes the sum modulo n, the mlp could fail on sums unseen during training. this suggests that if the distribution over σ1:t shifts between training and testing (but the semiautomaton remains the same), a non-recurrent shortcut solution might map inputs into an intermediate latent variable space (like the sum) which fails to generalize. indeed, we observe that with the same models which obtain the positive in-distribution results in section 4, accuracy degrades as distribution shift increases; see figure 4(c) (left), where the performance drops as the probability of seeing input σ = 1 deviates from the training distribution (pr[σ = 1] = 0.5). from the viewpoint of mechanistic interpretation, this is further (but not absolutely conclusive) evidence that with standard training, transformers learn implementations similar to those predicted by the theory. we provide details and further empirical evidence in section b.2.3. more ambitiously, we could try to use these models to extrapolate to longer sequence lengths t than those seen in the training data. promoting this difficult desideratum of length generalization is an intricate problem in its own right; see yao et al. (2021); anil et al. 
(2022) for more experiments similar to ours. figure 4(c) (right) shows the performance on sequences of various lengths, where transformer’s accuracy drops sharply as we move to lengths unseen during training. in contrast, lstm performs perfectly in both out-of-distribution scenarios. details are deferred to section b.2.4. shortcuts as “unintended” solutions. throughout the deep learning literature, the term shortcut is often used in a statistical sense to connote undesired (i.e., misleading, spurious, or overfitting) learned representations (geirhos et al., 2020; robinson et al., 2021). the experiments in this section show why our circuit-depth shortcuts are statistical shortcuts. specifically, we have identified a problem with learning relaxations to sequential state simulation: the models may “hallucinate” statistically suboptimal latent variables. the positive results in sections 3 and 4 suggest that this may only be robustly diagnosable via out-of-distribution evaluation. finally, we empirically show that this flaw is circumventable. using a combination of scratchpad (a.k.a. “chain-of-thought”) (nye et al., 2021; wei et al., 2022) and recency bias (press et al., 2022), we demonstrate that transformers can be guided towards learning recurrent (depth-t ) solutions that generalize out-of-distribution and to longer sequence lengths (figure 4(c), yellow curves). computational-statistical tradeoffs. the experiments in this section highlight a statistical price for learning shortcuts to semiautomaton simulation. on the other hand, the shallowness of these shortcuts is computationally appealing: leveraging parallel computation, they enjoy much lower latency (o(log t ) or o(1), compared to o(t )), in both training and inference. whether the best of both worlds is attainable is an interesting avenue for future work. 
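the "hallucinated latent variable" failure mode discussed above can be illustrated with a toy stand-in for the learned circuit. this is our own sketch, with hypothetical names and an arbitrary fallback behavior; it is not extracted from a trained model: a "prefix-sum head" feeds a lookup table (the "mlp") that is memorized only over the sums encountered during training, so it is perfect in-distribution but breaks when the sum distribution shifts.

```python
import random

def make_shortcut_parity(train_sequences):
    """Toy model of the hypothesized shortcut circuit: a 'prefix-sum head'
    followed by a lookup table (the 'MLP'), memorized only over the sums
    actually encountered during training."""
    table = {}
    for seq in train_sequences:
        total = 0
        for bit in seq:
            total += bit
            table[total] = total % 2   # correct, but only on the training range
    def predict(seq):
        total = sum(seq)               # the intermediate latent variable
        return table.get(total, 0)     # off-table sums: arbitrary fallback
    return predict

random.seed(0)
# training distribution: pr[bit = 1] = 0.5, length 20, so sums stay small.
train = [[random.randint(0, 1) for _ in range(20)] for _ in range(500)]
parity = make_shortcut_parity(train)

assert parity([1, 0] * 10) == 0        # in-distribution: sum 10 is memorized
assert parity([1] * 201) != 201 % 2    # shifted input: sum 201 is off-table
```

a recurrent solution that tracks the state step by step has no such intermediate sum, which is consistent with the lstm's robustness in both out-of-distribution scenarios.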
conclusions and future work

we have conducted a theoretical and empirical analysis of how shallow transformers can learn shortcut solutions to the problem of simulating the transitions of semiautomata (and thus, the algebraic structures which underlie regular expressions, finite-state transducers, and deterministic mdps). using tools from semigroup theory and circuit complexity, we have constructed explicit logarithmic-depth and constant-depth shortcuts for semiautomaton simulation. experimentally, we have shown that gradient-based optimization finds shortcut solutions which generalize near-perfectly in-distribution (section 4), but are brittle out-of-distribution (section 5). we hope that these results shed new light on the power and limitations of applying shallow non-recurrent models, even when the dynamics we wish to represent are deep and recurrent.

beyond transformers? the theory and experiments in this work are specialized to the transformer architecture, to provide a concrete and timely setting. however, we note that the underlying themes (continuous arithmetic circuits; parameter sharing across input positions and/or iterated function compositions; local vs. global computational units) are not specialized to any particular neural architecture [11], nor to the field of deep learning at all. the question of “which architectures are even more natural for learning discrete automata, while being optimizable by gradient-based search?” is extremely open-ended. we believe that the themes of sufficient depth and recurrent vs. non-recurrent function composition are relevant to the study of other (and future) deep learning methods.

future topics. in terms of theory, we have only scratched the surface of the possible interplay between neural architectures and classical ideas from the complexity theories of circuits and automata. one salient direction is to generalize the shorter shortcut constructions in theorem 3.
also, we have made no attempt to treat stochastic environments, which would fully capture probabilistic markov models and mdps. section 5 alludes to a landscape of algorithm design challenges in the presence of distribution shift and limited supervision. the latter (i.e., latent state inference) is known to lead to worst-case computational hardness (papadimitriou & tsitsiklis, 1987), but yields powerful empirical tools when tractable. towards fully understanding and leveraging the circumstances which allow learning algorithms to decode and simulate qt, there is much work to be done.

[11] in fact, the divide-and-conquer construction of theorem 1 is almost recurrent with log(t) depth, and resembles wavenet-like hierarchical pooling (van den oord et al., 2016; larsson et al., 2016), more than transformers.

acknowledgements

we are very grateful to abhishek shetty for helpful discussions about circuit complexity. we also thank ashwini pokle for thoughtful comments and suggestions towards improving clarity and readability.

reproducibility statement

complete proofs of the theoretical results are provided in appendix c, with a self-contained tutorial of relevant group-theoretic concepts in appendix a.2. for the empirical results, all our datasets are derived from synthetic distributions, which are clearly described in appendix b.1 and b.2. the architectures, implementations (with references to popular base repositories), and hyperparameters (including training procedure) are documented in appendix b.3. we intend to release our code as open source prior to publication.

references

cem anil, yuhuai wu, anders andreassen, aitor lewkowycz, vedant misra, vinay ramasesh, ambrose slone, guy gur-ari, ethan dyer, and behnam neyshabur. exploring length generalization in large language models. arxiv:2207.04901, 2022. sanjeev arora and boaz barak. computational complexity: a modern approach. cambridge university press, 2009.
arpit bansal, avi schwarzschild, eitan borgnia, zeyad emam, furong huang, micah goldblum, and tom goldstein. end-to-end algorithm synthesis with recurrent networks: logical extrapolation without overthinking. arxiv:2202.05826, 2022. boaz barak, benjamin l edelman, surbhi goel, sham kakade, eran malach, and cyril zhang. hidden progress in deep learning: sgd learns parities near the computational limit. arxiv:2207.08799, 2022. david a. mix barrington. bounded-width polynomial-size branching programs recognize exactly those languages in nc1. in symposium on the theory of computing, 1986. david a. mix barrington and denis thérien. finite monoids and the fine structure of nc1. journal. satwik bhattamishra, kabir ahuja, and navin goyal. on the ability and limitations of transformers to recognize formal languages. in conference on empirical methods in natural language processing, 2020. michael m bronstein, joan bruna, taco cohen, and petar veličković. geometric deep learning: grids, groups, graphs, geodesics, and gauges. arxiv:2104.13478, 2021. ashok k chandra, steven fortune, and richard lipton. unbounded fan-in circuits and associative functions. in symposium on theory of computing, 1983. lili chen, kevin lu, aravind rajeswaran, kimin lee, aditya grover, michael laskin, pieter abbeel, aravind srinivas, and igor mordatch. decision transformer: reinforcement learning via sequence modeling. in advances in neural information processing systems, 2021a.
mark chen, jerry tworek, heewoo jun, qiming yuan, henrique ponde de oliveira pinto, jared kaplan, harri edwards, yuri burda, nicholas joseph, greg brockman, alex ray, raul puri, gretchen krueger, michael petrov, heidy khlaaf, girish sastry, pamela mishkin, brooke chan, scott gray, nick ryder, mikhail pavlov, alethea power, lukasz kaiser, mohammad bavarian, clemens winter, philipp tillet, felipe petroski such, dave cummings, matthias plappert, fotios chantzis, elizabeth barnes, ariel herbert-voss, william hebgen guss, alex nichol, alex paino, nikolas tezak, jie tang, igor babuschkin, suchir balaji, shantanu jain, william saunders, christopher hesse, andrew n. carr, jan leike, josh achiam, vedant misra, evan morikawa, alec radford, matthew knight, miles brundage, mira murati, katie mayer, peter welinder, bob mcgrew, dario amodei, sam mccandlish, ilya sutskever, and wojciech zaremba. evaluating large language models trained on code. arxiv:2107.03374, 2021b. sang cho and dung t huynh. finite-automaton aperiodicity is pspace-complete. theoretical computer science, 1991. noam chomsky and marcel p schützenberger. the algebraic theory of context-free languages. in studies in logic and the foundations of mathematics. 1959. xiangxiang chu, zhi tian, bo zhang, xinlong wang, xiaolin wei, huaxia xia, and chunhua shen. conditional positional encodings for vision transformers. arxiv preprint arxiv:2102.10882, 2021. kevin clark, urvashi khandelwal, omer levy, and christopher d. manning. what does bert look at? an analysis of bert’s attention. in acl workshop blackboxnlp: analyzing and interpreting neural networks for nlp, 2019. george cybenko. approximation by superpositions of a sigmoidal function. mathematics of control, signals and systems, 1989. amit daniely. depth separation for neural networks. in conference on learning theory, pp. 690–. amit daniely and eran malach. learning parities with neural networks. advances in neural information processing systems, 2020.
mostafa dehghani, stephan gouws, oriol vinyals, jakob uszkoreit, and lukasz kaiser. universal transformers. in international conference on learning representations, 2019. grégoire delétang, anian ruoss, jordi grau-moya, tim genewein, li kevin wenliang, elliot catt, marcus hutter, shane legg, and pedro a ortega. neural networks and the chomsky hierarchy. arxiv preprint arxiv:2207.02098, 2022. jacob devlin, ming-wei chang, kenton lee, and kristina toutanova. bert: pre-training of deep bidirectional transformers for language understanding. arxiv:1810.04805, 2018. iddo drori, sarah zhang, reece shuttleworth, leonard tang, albert lu, elizabeth ke, kevin liu, linda chen, sunny tran, newman cheng, roman wang, nikhil singh, taylor l. patti, jayson lynch, avi shporer, nakul verma, eugene wu, and gilbert strang. a neural network solves, explains, and generates university math problems by program synthesis and few-shot learning at human level. proceedings of the national academy of sciences, 2022. javid ebrahimi, dhruv gelda, and wei zhang. how can self-attention networks recognize dyck-n languages? in findings of the association for computational linguistics: emnlp, 2020. benjamin l edelman, surbhi goel, sham kakade, and cyril zhang. inductive biases and variable creation in self-attention mechanisms. in international conference on machine learning, 2022. attila egri-nagy and chrystopher l nehaniv. computational holonomy decomposition of transformation semigroups. samuel eilenberg. automata, languages, and machines. academic press, 1974. ronen eldan and ohad shamir. the power of depth for feedforward neural networks. in conference. nelson elhage, neel nanda, catherine olsson, tom henighan, nicholas joseph, ben mann, amanda askell, yuntao bai, anna chen, tom conerly, nova dassarma, dawn drain, deep ganguli, zac hatfield-dodds, danny hernandez, andy jones, jackson kernion, liane lovitt, kamal ndousse, dario amodei, tom brown, jack clark, jared kaplan, sam mccandlish, and chris olah.
a mathematical framework for transformer circuits. transformer circuits thread, 2021. url https://transformer-circuits.pub/2021/framework/index.html. merrick furst, james b. saxe, and michael sipser. parity, circuits, and the polynomial-time hierarchy. mathematical systems theory, 1984. robert geirhos, jörn-henrik jacobsen, claudio michaelis, richard zemel, wieland brendel, matthias bethge, and felix a wichmann. shortcut learning in deep neural networks. nature machine intelligence, 2020. surbhi goel, varun kanade, adam klivans, and justin thaler. reliably learning the relu in polynomial time. in conference on learning theory, 2017. alex graves. adaptive computation time for recurrent neural networks. arxiv preprint. alex graves, greg wayne, and ivo danihelka. neural turing machines. arxiv preprint. jiatao gu, james bradbury, caiming xiong, victor o.k. li, and richard socher. non-autoregressive. danijar hafner, timothy lillicrap, jimmy ba, and mohammad norouzi. dream to control: learning behaviors by latent imagination. arxiv:1912.01603, 2019. michael hahn. theoretical limitations of self-attention in neural sequence models. transactions of the association for computational linguistics, 2020. adi haviv, ori ram, ofir press, peter izsak, and omer levy. transformer language models without positional encodings still learn positional information. arxiv:2203.16634, 2022. kaiming he, xiangyu zhang, shaoqing ren, and jian sun. deep residual learning for image recognition. in ieee conference on computer vision and pattern recognition, 2016. christoph hertrich, amitabh basu, marco di summa, and martin skutella. towards lower bounds on the depth of relu neural networks. in advances in neural information processing systems, 2021. w daniel hillis and guy l. steele jr. data parallel algorithms. communications of the acm, 1986. kurt hornik, maxwell stinchcombe, and halbert white. multilayer feedforward networks are universal approximators. neural networks, 1989.
jeremy howard and sebastian ruder. universal language model fine-tuning for text classification. delesley hutchins, imanol schlag, yuhuai wu, ethan dyer, and behnam neyshabur. block-recurrent transformers. michael janner, qiyang li, and sergey levine. offline reinforcement learning as one big sequence modeling problem. in advances in neural information processing systems, 2021. jungo kasai, hao peng, yizhe zhang, dani yogatama, gabriel ilharco, nikolaos pappas, yi mao, weizhu chen, and noah a smith. finetuning pretrained transformers into rnns. arxiv:2103.13076, 2021. guolin ke, di he, and tie-yan liu. rethinking positional encoding in language pre-training. arxiv. daniel j kleitman, bruce r rothschild, and joel h spencer. the number of semigroups of order n. proceedings of the american mathematical society, 1976. lászló kovács and cheryl praeger. finite permutation groups with large abelian quotients. pacific journal of mathematics, 1989. marc krasner and léo kaloujnine. produit complet des groupes de permutations et probleme d’extension de groupes ii. acta scientiarum mathematicarum, 1951. kenneth krohn and john rhodes. algebraic theory of machines, i: prime decomposition theorem for finite semigroups and machines. transactions of the american mathematical society, 1965. guillaume lample and françois charton. deep learning for symbolic mathematics. gustav larsson, michael maire, and gregory shakhnarovich. fractalnet: ultra-deep neural networks without residuals. holden lee, rong ge, tengyu ma, andrej risteski, and sanjeev arora. on the ability of neural nets to express distributions. in conference on learning theory, pp. 1271–1296. pmlr, 2017. yujia li, david choi, junyoung chung, nate kushman, julian schrittwieser, rémi leblond, tom eccles, james keeling, felix gimeno, agustin dal lago, thomas hubert, peter choy, cyprien de masson d’autume, igor babuschkin, xinyun chen, po-sen huang, johannes welbl, sven gowal, alexey cherepanov, james molloy, daniel j.
mankowitz, esme sutherland robson, pushmeet kohli, nando de freitas, koray kavukcuoglu, and oriol vinyals. competition-level code generation with alphacode. arxiv:2203.07814, 2022. ilya loshchilov and frank hutter. decoupled weight decay regularization. arxiv:1711.05101, 2017. oded maler. on the krohn-rhodes cascaded decomposition theorem. in time for verification. 2010. oded maler and amir pnueli. on the cascaded decomposition of automata, its complexity and its application to logic (draft). 1994. carlo mereghetti and beatrice palano. threshold circuits for iterated matrix product and powering. rairo-theoretical informatics and applications, 2000. william merrill, yoav goldberg, roy schwartz, and noah a. smith. on the power of saturated transformers: a view from circuit complexity. arxiv:2106.16213, 2021. vincent micheli, eloi alonso, and françois fleuret. transformers are sample efficient world models. anirbit mukherjee and amitabh basu. lower bounds over boolean inputs for deep neural networks. neel nanda and tom lieberum. a mechanistic interpretability analysis of grokking. alignment forum. url https://www.alignmentforum.org/posts/n6wm6hs7rqmkdhyjb/a-mechanistic-interpretability-analysis-of-grokking. benjamin newman, john hewitt, percy liang, and christopher d. manning. the eos decision and length extrapolation. in blackboxnlp workshop on analyzing and interpreting neural networks for nlp, 2020. eshaan nichani, yu bai, and jason d lee. identifying good directions to escape the ntk regime and efficiently learn low-degree plus sparse polynomials. arxiv:2206.03688, 2022. rodrigo nogueira, zhiying jiang, and jimmy lin. investigating the limitations of transformers with simple arithmetic tasks. maxwell nye, anders johan andreassen, guy gur-ari, henryk michalewski, jacob austin, david bieber, david dohan, aitor lewkowycz, maarten bosma, david luan, charles sutton, and augustus odena. show your work: scratchpads for intermediate computation with language models. arxiv:2112.00114, 2021.
christos h papadimitriou and john n tsitsiklis. the complexity of markov decision processes. mathematics of operations research, 1987. adam paszke, sam gross, francisco massa, adam lerer, james bradbury, gregory chanan, trevor killeen, zeming lin, natalia gimelshein, luca antiga, alban desmaison, andreas köpf, edward yang, zach devito, martin raison, alykhan tejani, sasank chilamkurthy, benoit steiner, lu fang, junjie bai, and soumith chintala. pytorch: an imperative style, high-performance deep learning library. advances in neural information processing systems, 2019. matthew e. peters, mark neumann, mohit iyyer, matt gardner, christopher clark, kenton lee, and luke zettlemoyer. deep contextualized word representations. arxiv:1802.05365, 2018. stanislas polu and ilya sutskever. generative language modeling for automated theorem proving. ofir press, noah smith, and mike lewis. train short, test long: attention with linear biases enables input length extrapolation. in international conference on learning representations, 2022. alec radford, jeffrey wu, rewon child, david luan, dario amodei, and ilya sutskever. language models are unsupervised multitask learners. openai blog, 2019. john h. reif and stephen r. tate. on threshold circuits and polynomial computation. siam journal on computing, 1992. john rhodes, chrystopher l nehaniv, and morris w hirsch. applications of automata theory and algebra: via the mathematical theory of complexity to biology, physics, psychology, philosophy, and games. world scientific, 2010. joshua robinson, li sun, ke yu, kayhan batmanghelich, stefanie jegelka, and suvrit sra. can contrastive learning avoid shortcut solutions? advances in neural information processing systems, 2021. itay safran, ronen eldan, and ohad shamir. depth separations in neural networks: what is actually being separated? in conference on learning theory, pp. 2664–2666. pmlr, 2019. victor sanh, lysandre debut, julien chaumond, and thomas wolf.
distilbert, a distilled version of bert: smaller, faster, cheaper and lighter. arxiv:1910.01108, 2019. tal schuster, ashwin kalyan, alex polozov, and adam kalai. programming puzzles. in advances in neural information processing systems track on datasets and benchmarks, 2021. marcel paul schützenberger. on finite monoids having only trivial subgroups. information and control, 1965. avi schwarzschild, eitan borgnia, arjun gupta, furong huang, uzi vishkin, micah goldblum, and tom goldstein. can you learn an algorithm? generalizing from easy to hard problems with recurrent networks. in advances in neural information processing systems, 2021. hava t siegelmann and eduardo d sontag. on the computational power of neural nets. in conference on learning theory, 1992. matus telgarsky. benefits of depth in neural networks. in conference on learning theory, 2016. ian tenney, dipanjan das, and ellie pavlick. bert rediscovers the classical nlp pipeline. aaron van den oord, sander dieleman, heiga zen, karen simonyan, oriol vinyals, alex graves, nal kalchbrenner, andrew senior, and koray kavukcuoglu. wavenet: a generative model for raw audio. arxiv:1609.03499, 2016. ashish vaswani, noam shazeer, niki parmar, jakob uszkoreit, llion jones, aidan n gomez, łukasz kaiser, and illia polosukhin. attention is all you need. advances in neural information processing systems, 2017. ashish vaswani, prajit ramachandran, aravind srinivas, niki parmar, blake a. hechtman, and jonathon shlens. scaling local self-attention for parameter efficient visual backbones. in ieee conference on computer vision and pattern recognition, 2021. jesse vig. visualizing attention in transformer-based language representation models. jason wei, xuezhi wang, dale schuurmans, maarten bosma, brian ichter, fei xia, ed chi, quoc le, and denny zhou. chain of thought prompting elicits reasoning in large language models. arxiv:2201.11903, 2022. gail weiss, yoav goldberg, and eran yahav. thinking like transformers.
in international conference on machine learning, 2021. thomas wolf, lysandre debut, victor sanh, julien chaumond, clement delangue, anthony moi, pierric cistac, tim rault, rémi louf, morgan funtowicz, joe davison, sam shleifer, patrick von platen, clara ma, yacine jernite, julien plu, canwen xu, teven le scao, sylvain gugger, mariama drame, quentin lhoest, and alexander m. rush. huggingface's transformers: state-of-the-art natural language processing. arxiv:1910.03771, 2019. yonghui wu, mike schuster, zhifeng chen, quoc v le, mohammad norouzi, wolfgang macherey, maxim krikun, yuan cao, qin gao, klaus macherey, jeff klingner, apurva shah, melvin johnson, xiaobing liu, łukasz kaiser, stephan gouws, yoshikiyo kato, taku kudo, hideto kazawa, keith stevens, george kurian, nishant patil, wei wang, cliff young, jason smith, jason riesa, alex rudnick, oriol vinyals, greg corrado, macduff hughes, and jeffrey dean. google's neural machine translation system: bridging the gap between human and machine translation. arxiv:1609.08144, 2016. yisheng xiao, lijun wu, junliang guo, juntao li, min zhang, tao qin, and tie-yan liu. a survey on non-autoregressive generation for neural machine translation and beyond. arxiv:2204.09269, 2022. keyulu xu, mozhi zhang, jingling li, simon s du, ken-ichi kawarabayashi, and stefanie jegelka. how neural networks extrapolate: from feedforward to graph neural networks. arxiv:2009.11848, 2020. shunyu yao, binghui peng, christos h. papadimitriou, and karthik narasimhan. self-attention networks can process bounded hierarchical languages. in association for computational linguistics, 2021. weirui ye, shaohuai liu, thanard kurutach, pieter abbeel, and yang gao. mastering atari games with limited data. advances in neural information processing systems, 2021. | 14 | [
117.963, 346.3420784, 427.83715, 356.5338182 ] |
FLA55mBee6Q.pdf | 2,022 | 1 | coptidice: offline constrained reinforcement learning via stationary distribution correction estimation jongmin lee1∗, cosmin paduraru2, daniel j. mankowitz2, nicolas heess2, doina precup2, kee-eung kim1, arthur guez2 1kaist, 2deepmind abstract we consider the offline constrained reinforcement learning (rl) problem, in which the agent aims to compute a policy that maximizes expected return while satisfying given cost constraints, learning only from a pre-collected dataset. this problem setting is appealing in many real-world scenarios, where direct interaction with the environment is costly or risky, and where the resulting policy should comply with safety constraints. however, it is challenging to compute a policy that guarantees satisfying the cost constraints in the offline rl setting, since the off-policy evaluation inherently has an estimation error. in this paper, we present an offline constrained rl algorithm that optimizes the policy in the space of the stationary distribution. our algorithm, coptidice, directly estimates the stationary distribution corrections of the optimal policy with respect to returns, while constraining the cost upper bound, with the goal of yielding a cost-conservative policy for actual constraint satisfaction. experimental results show that coptidice attains better policies in terms of constraint satisfaction and return-maximization, outperforming baseline algorithms. introduction
126.82956, 352.3286768, 205.9888518, 364.2838768 ] |
YWNAX0caEjI.pdf | 2,022 | 1 | neural structured prediction for inductive node classification meng qu∗1,2, huiyu cai∗1,2, jian tang1,3,4 1mila - québec ai institute 2université de montréal 3hec montréal 4canadian institute for advanced research (cifar) abstract this paper studies node classification in the inductive setting, i.e., aiming to learn a model on labeled training graphs and generalize it to infer node labels on unlabeled test graphs. this problem has been extensively studied with graph neural networks (gnns) by learning effective node representations, as well as traditional structured prediction methods for modeling the structured output of node labels, e.g., conditional random fields (crfs). in this paper, we present a new approach called the structured proxy network (spn), which combines the advantages of both worlds. spn defines flexible potential functions of crfs with gnns. however, learning such a model is nontrivial as it involves optimizing a maximin game with high-cost inference. inspired by the underlying connection between joint and marginal distributions defined by markov networks, we propose to solve an approximate version of the optimization problem as a proxy, which yields a near-optimal solution, making learning more efficient. extensive experiments on two settings show that our approach outperforms many competitive baselines1. introduction graph-structured data are ubiquitous in the real world, covering a variety of applications. this paper studies node classification, a fundamental problem in the machine learning community. most existing efforts focus on the transductive setting (kipf & welling, 2017; veličković et al., 2018), i.e., using a small set of labeled nodes in a graph to classify the rest of nodes. in this paper, we study node classification in the inductive setting (hamilton et al., 2017), which is receiving growing interest.
given some training graphs with all nodes labeled, we aim to classify nodes in unlabeled test graphs. this problem has been recently studied with graph neural networks (gnns) (kipf & welling, 2017; hamilton et al., 2017; gilmer et al., 2017; veličković et al., 2018). gnns infer the marginal label distribution of each node by learning useful node representations based on node features and edges. once a gnn is learned on training graphs, it can be further applied to test graphs to infer node labels. owing to the high capacity of nonlinear neural architectures, gnns achieve impressive results on many datasets. however, one limitation of gnns is that they ignore the joint dependency of node labels, and therefore node labels are predicted separately without modeling structured output. indeed, modeling structured output has been widely explored by the literature of structured prediction (bakir et al., 2007). structured prediction methods predict node labels collectively, so the label prediction of each node can be improved according to the predicted labels of neighboring nodes. one representative approach is the conditional random field (crf) (lafferty et al., 2001). a crf models the joint distribution of node labels with markov networks, and thus training crfs becomes a learning task in graphical models, while predicting node labels corresponds to an inference task. typically, the potential functions in crfs are parameterized as log-linear functions, which suffer from low model capacities. one remedy for this is to define potential functions with gnns (ma et al., 2018; qu et al., 2019). however, most of the effective methods for learning crfs involve a maximin game (wainwright & jordan, 2008; sutton & mccallum, 2012), making learning often hard to converge, especially when gnns are used to parameterize potential functions. *equal contribution. 1codes are available at https://github.com/deepgraphlearning/spn.
besides, as learning crfs requires doing inference on the graphical models, the combined model requires a long run time. in this paper, we address these challenges by proposing spn (structured proxy network), which is high in capacity, efficient in learning, and able to model the joint dependency of node labels. spn is inspired by theoretical works in graphical models (wainwright & jordan, 2008), which reveal close connections between the joint label distribution and the node/edge marginal label distribution in a markov network. based on that, we approximate the original optimization problem with a proxy problem, where the potential functions in crfs are defined by combining a collection of node/edge pseudomarginal distributions, which are parameterized by gnns that satisfy a few simple constraints. this proxy problem can be easily solved by maximizing the data likelihood on each node and edge, which yields a near-optimal joint label distribution on training graphs. once the model is learned, we apply it to test graphs and run loopy belief propagation (murphy et al., 1999) to infer node labels. experiments on two settings against both gnns and crfs prove the effectiveness of our approach. note that although spn is tested on inductive node classification, this method is quite general and can be applied to many other structured prediction tasks as well, such as pos tagging (church, 1988) and named entity recognition (sang & de meulder, 2003). please refer to sec. 4.3 for more details. related work graph neural networks (gnns) perform node classification by learning useful node representations (kipf & welling, 2017; gilmer et al., 2017; veličković et al., 2018).
most earlier efforts focus on designing gnns for transductive node classification (yang et al., 2016; gao & ji, 2019; xhonneux et al., 2020), and many recent works move to the inductive setting (hamilton et al., 2017; gao et al., 2018; chiang et al., 2019; li et al., 2019; chen et al., 2020a; zeng et al., 2020). because of high capacity and efficient training, gnns achieve impressive results on inductive node classification. despite the success, gnns only try to model the marginal distribution of each node label and predict node labels separately without considering joint dependency. in contrast, spn models joint distributions of node labels with crfs, which predicts node labels collectively to improve results. another type of approach for inductive node classification is structured prediction, which focuses on modeling the dependency of node labels, so that the predicted node labels are more consistent. one representative approach is structured svm (tsochantaridis et al., 2005; finley & joachims, 2008; sarawagi & gupta, 2008), but it lacks a probabilistic interpretation to handle the uncertainty of the prediction. another representative probabilistic approach is conditional random field (lafferty et al., 2001; sutton & mccallum, 2006), which models the distribution of output spaces by using a markov network. crfs have been proven effective in many applications, such as pos tagging (lafferty et al., 2001), shallow parsing (sha & pereira, 2003), image labeling (he et al., 2004), and sequence labeling (lample et al., 2016; ma & hovy, 2016; liu et al., 2018). nevertheless, the potential functions in crfs are typically defined as log-linear functions, suffering from low model capacity. there are also some recent works trying to combine gnns and crfs. some works use gnns to solve inference problems in graphical models (dai et al., 2016; satorras et al., 2019; zhang et al., 2020; chen et al., 2020b; satorras & welling, 2020). 
in contrast, our approach uses gnns to parameterize the potential functions in crfs, which is in a similar vein to ma et al. (2018); qu et al. (2019); ma et al. (2019; 2021); wang et al. (2021). among them, ma et al. (2018) and qu et al. (2019) optimize the pseudolikelihood (besag, 1975) for model learning, and wang et al. (2021) optimizes a cross-entropy loss on each single node, which can yield poor approximation of the true joint likelihood (koller & friedman, 2009; sutton & mccallum, 2012). our approach instead solves a proxy problem, which yields a near-optimal solution to the original problem of maximizing likelihood, and thus gets superior results. for ma et al. (2019) and ma et al. (2021), they focus on transductive node classification and continuous labels respectively, which are different from our work. lastly, learning crfs has also been widely studied. some works solve a maximin game as a surrogate for learning (sutton & mccallum, 2012) and some others maximize a lower bound of the likelihood function (sutton & mccallum, 2009). however, these maximin games are often hard to optimize and the lower bounds are often loose. different from them, we follow wainwright et al. (2003) and build an approximate optimization problem as a proxy, which is easier to solve and yields better results. preliminary this paper focuses on inductive node classification (hamilton et al., 2017), a fundamental problem in both graph machine learning and structured prediction. we employ a probabilistic formalization for the problem with some labeled training graphs and unlabeled test graphs. each training graph is given as (y∗v, xv, e), where xv and y∗v are features and labels of a set of nodes v, and e is a set of edges. for each test graph (x˜v, ˜e), only features x˜v and edges ˜e are given. then we aim to solve: • learning. on training graphs, learn a probabilistic model to approximate p(yv|xv, e). • inference.
for each test graph, infer node labels y∗˜v according to the distribution p(y˜v|x˜v, ˜e). the problem has been extensively studied in both graph machine learning and structured prediction fields, and representative methods are gnns and crfs respectively. next, we introduce the details. graph neural networks for inductive node classification, graph neural networks (gnns) learn node representations to predict marginal label distributions of nodes. gnns assume all node labels are independent conditioned on node features and edges, so the joint label distribution is factorized into a set of marginals as below: pθ(yv|xv, e) = Πs∈v pθ(ys|xv, e). (1) each marginal distribution pθ(ys|xv, e) is modeled as a categorical distribution over label candidates, and the label probabilities are computed by applying a linear softmax classifier to the representation of node s. in general, node representations are learned via the message passing mechanism (gilmer et al., 2017), which brings high capacity to gnns. also, owing to the factorization in eq. (1), learning and inference can be easily solved in gnns, where we simply need to compute loss and make prediction on each node separately. however, gnns approximate only the marginal label distributions of nodes on training graphs, which may generalize badly and result in poor approximation of node marginal label distributions on test graphs. also, the labels of different nodes are separately predicted according to their own marginal label distributions, yet the joint dependency of node labels is ignored. conditional random fields for inductive node classification, conditional random fields (crfs) build graphical models for node classification. a popular model is the pair-wise crf, which formalizes the joint label distribution as: pθ(yv|xv, e) = (1/zθ(xv, e)) exp{Σs∈v θs(ys, xv, e) + Σ(s,t)∈e θst(ys, yt, xv, e)}, (2) where zθ(xv, e) is the partition function.
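as a concrete illustration of the pair-wise crf in eq. (2), the following sketch enumerates a tiny crf by brute force; the 3-node graph, the θ scores, and the binary label set are hypothetical toy values, not from the paper:

```python
import itertools
import math

# toy pairwise CRF on a 3-node graph with edges (0,1) and (1,2); labels in {0,1}.
# theta_node[s][y] and theta_edge[(s,t)][(ys,yt)] are made-up scores standing in
# for the theta-functions of eq. (2).
theta_node = [[0.5, -0.5], [0.0, 0.2], [-0.3, 0.3]]
theta_edge = {(0, 1): {(a, b): (0.8 if a == b else -0.8) for a in (0, 1) for b in (0, 1)},
              (1, 2): {(a, b): (0.8 if a == b else -0.8) for a in (0, 1) for b in (0, 1)}}

def unnormalized_log_score(labels):
    """Sum of node and edge scores, i.e. the exponent in eq. (2)."""
    score = sum(theta_node[s][labels[s]] for s in range(3))
    score += sum(theta_edge[(s, t)][(labels[s], labels[t])] for (s, t) in theta_edge)
    return score

# partition function Z: sum over all 2^3 joint label assignments
Z = sum(math.exp(unnormalized_log_score(y)) for y in itertools.product((0, 1), repeat=3))

def joint_prob(labels):
    return math.exp(unnormalized_log_score(labels)) / Z

# the joint is a proper distribution: probabilities sum to one
total = sum(joint_prob(y) for y in itertools.product((0, 1), repeat=3))
print(round(total, 6))  # → 1.0
```

on such a tiny graph the partition function is computable exactly; the point of the machinery in the rest of the section is precisely that this enumeration is intractable on real graphs with loops.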
θs(ys, xv, e) and θst(ys, yt, xv, e) are scalar scores contributed by each node s and each edge (s, t). in practice, these θ-functions can be either defined as simple linear functions or complicated gnns. to make the notation concise, we will omit xv and e in the θ-functions, e.g., simplifying θs(ys, xv, e) as θs(ys). with these θ-functions, crfs are able to model the joint dependency of node labels and therefore achieve structured prediction. however, learning crfs to maximize likelihood pθ(y∗v|xv, e) on training graphs is nontrivial in general, as the partition function zθ(xv, e) is typically intractable in graphs with loops. thus, a major line of research instead optimizes a maximin game equivalent to likelihood maximization (wainwright & jordan, 2008). the maximin game for each training graph (y∗v, xv, e) is formalized as follows: max_θ log pθ(y∗v|xv, e) = max_θ min_q l(θ, q), with l(θ, q) = Σs∈v {θs(y∗s) − eqs(ys)[θs(ys)]} + Σ(s,t)∈e {θst(y∗s, y∗t) − eqst(ys,yt)[θst(ys, yt)]} − h[q(yv)]. (3) here, q(yv) is a variational distribution on node labels, qs(ys) and qst(ys, yt) are its marginal distributions on nodes and edges. h[q(yv)] := −eq(yv)[log q(yv)] is the entropy of q(yv). given the maximin game, q and θ can be alternatively optimized via coordinate descent (sutton & mccallum, 2012). figure 1: framework overview of the spn. our approach formulates a proxy optimization problem for learning, which is much easier to solve. given a graph, a node gnn and an edge gnn are used to predict the pseudomarginal label distributions on each node and each edge respectively. then these pseudomarginals serve as building blocks to construct a near-optimal joint label distribution. in each iteration, we first update the node and edge marginals {qs(ys)}s∈v, {qst(ys, yt)}(s,t)∈e towards those defined by pθ.
this can be done by mcmc, but the time cost is high, so approximate inference is often used, such as loopy belief propagation (murphy et al., 1999). after q is optimized, we further update θ-functions with the node and edge marginals defined by q via gradient descent. the optimal θ-functions are characterized by the following moment-matching conditions: pθ(ys|xv, e) = iy∗s{ys} ∀s ∈ v, pθ(ys, yt|xv, e) = i(y∗s,y∗t){(ys, yt)} ∀(s, t) ∈ e, (4) where ia{b} is an indicator function whose value is 1 if a = b and 0 otherwise. see sec. a and sec. b in appendix for detailed derivation of the maximin game as well as the moment-matching conditions. once the θ-functions are learned, they can be further applied to each test graph (x˜v, ˜e) to predict the joint label distribution as pθ(y˜v|x˜v, ˜e). then the best label assignment y∗˜v can be inferred using approximate inference algorithms, such as loopy belief propagation (murphy et al., 1999). the major challenge of crfs lies in learning. on the one hand, learning relies on inference, meaning that we have to update {qs(ys)}s∈v, {qst(ys, yt)}(s,t)∈e to approximate the node and edge marginals of pθ at each step, which can be expensive. on the other hand, as learning involves a maximin game and the optimal q of the inner minimization problem in eq. (3) is intractable, we can only maximize an upper bound of the likelihood function for θ, making learning unstable. the problem becomes even more severe when θ is parameterized by highly nonlinear neural models, e.g. gnns. model in this section, we introduce our proposed approach structured proxy network (spn). the general idea of spn is to combine gnns and crfs by parameterizing potential functions in crfs with gnns, and therefore spn enjoys high capacity and can model the joint dependency of node labels. however, as elaborated in sec. 3.2, learning such a model on training graphs is challenging due to the maximin game in optimization.
inspired by the connection between the joint and marginal distributions of crfs, we instead construct a new optimization problem, which serves as a proxy for model learning. compared with the original maximin game, the proxy problem is much easier to solve, where we can simply train two gnns to approximate the marginal label distributions on nodes and edges, and further combine these pseudomarginals (defined in prop. 1) into a near-optimal joint label distribution. this joint label distribution can be further refined by optimizing the maximin game, although it is optional and often unnecessary, as this distribution is often close enough to the optimal one. with this proxy problem for model learning, learning becomes more stable and efficient. afterwards, the learned model is used to predict the joint label distribution on test graphs. then we run loopy belief propagation to infer node labels. now, we introduce the details of our approach. learning the learning task aims at training θ to maximize the log-likelihood function log pθ(y∗v|xv, e) for each training graph (y∗v, xv, e), which is highly challenging. therefore, instead of directly optimizing this goal, we solve an approximate version of the problem as a proxy, which is training a node gnn and an edge gnn to maximize the log-likelihood of observed labels on nodes and edges. the proxy problem. the proxy problem is inspired by wainwright & jordan (2008), which points out that the marginal label distributions on nodes and edges defined by a markov network have inherent connections with the joint distribution. this connection is stated in the proposition below. proposition 1 consider a set of nonzero pseudomarginals {τs(ys)}s∈v and {τst(ys, yt)}(s,t)∈e which satisfy Σys τst(ys, yt) = τt(yt) and Σyt τst(ys, yt) = τs(ys) for all (s, t) ∈ e. if we parameterize the θ-functions of pθ in eq.
(2) in the following way: θs(ys) = log τs(ys) ∀s ∈ v, θst(ys, yt) = log [τst(ys, yt) / (τs(ys)τt(yt))] ∀(s, t) ∈ e, (5) then {τs(ys)}s∈v and {τst(ys, yt)}(s,t)∈e are specified by a fixed point of the sum-product loopy belief propagation algorithm when applied to the joint distribution pθ, which implies that: τs(ys) ≈ pθ(ys) ∀s ∈ v, τst(ys, yt) ≈ pθ(ys, yt) ∀(s, t) ∈ e. the proof is provided in sec. c. with the proposition, we observe that if we parameterize the θ-functions by combining a set of pseudomarginals {τs(ys)}s∈v and {τst(ys, yt)}(s,t)∈e in the way defined by eq. (5), then those pseudomarginals can well approximate the true marginals of the joint distribution pθ, i.e., τs(ys) ≈ pθ(ys) and τst(ys, yt) ≈ pθ(ys, yt) for all nodes s and edges (s, t). given this precondition, if we further have τs(ys) ≈ iy∗s{ys} and τst(ys, yt) ≈ i(y∗s,y∗t){(ys, yt)}, then the moment-matching conditions in eq. (4) for the optimal θ-functions are roughly satisfied. this implies the joint distribution pθ(yv|xv, e) derived in this way is a near-optimal one. with the observation, rather than directly using gnns to parameterize the θ-functions, we use a node gnn and an edge gnn to parameterize the pseudomarginals {τs(ys)}s∈v and {τst(ys, yt)}(s,t)∈e. for the pseudomarginal τs(ys) on node s, we apply the node gnn to node features xv and edges e, yielding a representation us for node s. then we apply a softmax classifier to us to compute τs(ys): {us}s∈v = gnnnode(xv, e), τs(ys) = softmax(f(us))[ys], where f maps a node representation to a |y|-dimensional logit and y is the node label set. similarly, we apply the edge gnn to compute a representation vs for each node s, and model τst(ys, yt) as: {vs}s∈v = gnnedge(xv, e), τst(ys, yt) = softmax(g(vs, vt))[ys, yt], where g is a function mapping a pair of representations to a (|y| × |y|)-dimensional logit.
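the parameterization of prop. 1 / eq. (5) can be checked by hand on a tree-structured toy graph, where belief propagation is exact and the construction reproduces the pseudomarginals exactly rather than approximately; the pseudomarginal numbers below are made-up but locally consistent:

```python
import itertools
import math

# locally consistent pseudomarginals on a 2-node graph with a single edge (0, 1),
# labels in {0, 1}: tau_edge rows sum to tau_node[0], columns sum to tau_node[1].
tau_node = [[0.6, 0.4], [0.3, 0.7]]
tau_edge = [[0.2, 0.4], [0.1, 0.3]]

# eq. (5): theta_s(ys) = log tau_s(ys);
#          theta_st(ys, yt) = log[tau_st(ys, yt) / (tau_s(ys) * tau_t(yt))]
def theta_node(s, y):
    return math.log(tau_node[s][y])

def theta_edge(y0, y1):
    return math.log(tau_edge[y0][y1] / (tau_node[0][y0] * tau_node[1][y1]))

# joint distribution of eq. (2) by brute-force enumeration; on a tree this
# parameterization has exactly the given pseudomarginals as its marginals
scores = {y: math.exp(theta_node(0, y[0]) + theta_node(1, y[1]) + theta_edge(*y))
          for y in itertools.product((0, 1), repeat=2)}
Z = sum(scores.values())
joint = {y: v / Z for y, v in scores.items()}

marg0 = [joint[(0, 0)] + joint[(0, 1)], joint[(1, 0)] + joint[(1, 1)]]
print([round(m, 6) for m in marg0])  # → [0.6, 0.4]
```

here the recovered marginal of node 0 matches tau_node[0] exactly (and Z = 1), which is the mechanism the proposition generalizes to loopy graphs via fixed points of loopy bp.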
given the parameterization, we construct the following problem as a proxy for learning θ-functions: min_{τ,θ} Σs∈v d(iy∗s{ys}, τs(ys)) + Σ(s,t)∈e d(i(y∗s,y∗t){(ys, yt)}, τst(ys, yt)), subject to θs(ys) = log τs(ys), θst(ys, yt) = log [τst(ys, yt) / (τs(ys)τt(yt))], and Σys τst(ys, yt) = τt(yt), Σyt τst(ys, yt) = τs(ys), (9) for all nodes and edges, where d can be any divergence measure between two distributions. by solving the above problem, {τs(ys)}s∈v and {τst(ys, yt)}(s,t)∈e will be valid pseudomarginals which can well approximate the true labels, i.e., τs(ys) ≈ iy∗s{ys} and τst(ys, yt) ≈ i(y∗s,y∗t){(ys, yt)}. then according to the constraint in the second line of eq. (9), θ-functions are formed in a way to enable τs(ys) ≈ pθ(ys) and τst(ys, yt) ≈ pθ(ys, yt) as stated in the prop. 1. combining these two sets of formulas results in pθ(ys) ≈ iy∗s{ys} and pθ(ys, yt) ≈ i(y∗s,y∗t){(ys, yt)}. we see that the moment-matching conditions in eq. (4) for the optimal joint label distribution are roughly achieved, implying that the derived joint distribution pθ(yv|xv, e) is a near-optimal solution to the original learning problem. one good property of the proxy problem is that it can be solved easily. the last consistency constraint (i.e. Σys τst(ys, yt) = τt(yt) and Σyt τst(ys, yt) = τs(ys)) can be ignored during optimization, since by optimizing the objective function, the optimal pseudomarginals τ should well approximate the observed node and edge marginals, i.e., τs(ys) ≈ iy∗s{ys} and τst(ys, yt) ≈ i(y∗s,y∗t){(ys, yt)}, and hence τ will almost naturally satisfy the consistency constraint. we also tried some constrained optimization methods to handle the consistency constraint, but they yield no improvement. see sec. d of appendix for more details.
thus, we can simply train the pseudomarginals parameterized by gnns to approximate the true node and edge labels on training graphs, i.e., minimizing d(iy∗s{ys}, τs(ys)) and d(i(y∗s,y∗t){(ys, yt)}, τst(ys, yt)). then we build θ-functions as in eq. (5) to obtain a near-optimal joint distribution. in practice, we choose d to be the kl divergence, yielding an objective for τ as: max_τ Σs∈v log τs(y∗s) + Σ(s,t)∈e log τst(y∗s, y∗t). this objective function is very intuitive, where we simply try to optimize the node gnn and edge gnn to maximize the log-likelihood function of the observed labels on nodes and edges. refinement. by solving the proxy problem, we can obtain a near-optimal joint distribution. in practice, we observe that when we have a large amount of training data, further refining this joint distribution by solving the maximin game in eq. (3) for a few iterations can lead to further improvement. formally, each iteration of refinement has two steps. in the first step, we run sum-product loopy belief propagation (murphy et al., 1999), which yields a collection of node and edge marginals (i.e., {qs(ys)}s∈v and {qst(ys, yt)}(s,t)∈e) as approximation to the marginals defined by pθ. in the second step, we update the θ-functions parameterized by the node and edge gnns to maximize: Σs∈v {θs(y∗s) − eqs(ys)[θs(ys)]} + Σ(s,t)∈e {θst(y∗s, y∗t) − eqst(ys,yt)[θst(ys, yt)]}. intuitively, we treat the true label y∗s and (y∗s, y∗t) of each node and edge as positive examples, and encourage the θ-functions to raise up their scores. meanwhile, those labels sampled from qs(ys) and qst(ys, yt) act as negative examples, and the θ-functions are updated to decrease their scores. inference after learning, we apply the node and edge gnns to each test graph (x˜v, ˜e) to compute the θ-functions, which are integrated into an approximate joint label distribution pθ(y˜v|x˜v, ˜e).
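in the refinement-free case the proxy objective reduces to a plain log-likelihood over nodes and edges; a minimal sketch, with hypothetical softmax outputs standing in for the node-gnn and edge-gnn predictions:

```python
import math

# hypothetical node and edge pseudomarginals (softmax outputs of the two GNNs)
# on a 3-node chain with binary labels; none of these numbers are from the paper.
tau_node = {0: [0.9, 0.1], 1: [0.2, 0.8], 2: [0.7, 0.3]}        # tau_s(ys)
tau_edge = {(0, 1): [[0.1, 0.7], [0.1, 0.1]],                   # tau_st(ys, yt)
            (1, 2): [[0.05, 0.05], [0.8, 0.1]]}
y_true = {0: 0, 1: 1, 2: 0}                                     # observed labels

def proxy_log_likelihood():
    """Log-likelihood of the observed labels under node and edge pseudomarginals."""
    ll = sum(math.log(tau_node[s][y_true[s]]) for s in tau_node)
    ll += sum(math.log(tau_edge[(s, t)][y_true[s]][y_true[t]]) for (s, t) in tau_edge)
    return ll

# maximizing this pushes tau toward the indicator distributions of the true
# labels, which (via eq. (5)) yields a near-optimal joint distribution
print(round(proxy_log_likelihood(), 4))  # → -1.265
```

in an actual training loop this quantity would be the (negated) loss backpropagated through the two gnns; here it is just evaluated once on fixed toy predictions.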
then we use this distribution to infer the best label y∗˜s for each node ˜s ∈ ˜v, where two settings are considered. node-level accuracy. typically, we care about the node-level accuracy, i.e., how likely we can correctly classify a node in test graphs. intuitively, the best label y∗˜s for each test node ˜s ∈ ˜v should be predicted as y∗˜s = arg max_y˜s pθ(y˜s|x˜v, ˜e), where pθ(y˜s|x˜v, ˜e) is the marginal label distribution of node ˜s induced by the joint pθ(y˜v|x˜v, ˜e). in practice, the exact marginal is intractable, so we apply loopy belief propagation (murphy et al., 1999) for approximate inference. for each edge (˜s, ˜t) in test graphs, we introduce a message function m˜t→˜s(y˜s) and iteratively update all messages as: m˜t→˜s(y˜s) ∝ Σy˜t {exp(θ˜t(y˜t) + θ˜s˜t(y˜s, y˜t)) Π˜s′∈n(˜t)\˜s m˜s′→˜t(y˜t)}, (12) where n(˜s) denotes the set of neighboring nodes for node ˜s. once the above process converges or after sufficient iterations, the label of each node ˜s can be inferred in the following way: y∗˜s = arg max_y˜s [exp(θ˜s(y˜s)) Π˜t∈n(˜s) m˜t→˜s(y˜s)]. (13) graph-level accuracy. in some other cases, we might care about the graph-level accuracy, i.e., how likely we can correctly classify all nodes in a given test graph. in this case, the best prediction of node labels is given by y∗˜v = arg max_y˜v p(y˜v|x˜v, ˜e). this problem can be approximately solved by the max-product variant of loopy belief propagation, which simply replaces the sum over y˜t in eq. (12) with max (weiss & freeman, 2001). afterwards, the best node label can still be decoded via eq. (13). discussion in practice, many structured prediction problems can be viewed as special cases of inductive node classification, where the graphs between nodes have some special structures. for example in sequence labeling tasks (e.g., named entity recognition), the graphs between nodes have sequential structures. thus, spn can be applied to these tasks as well.
to obtain better results, one might replace gnns with other neural models which are specifically designed for the studied task to better estimate the pseudomarginals. for example, in sequence labeling tasks, recurrent neural networks can be used. experiment | 6 | [108.299, 697.5936768, 194.1824456, 709.5488768] |
LNpMtk15AS4.pdf | 2,023 | 0 | boosting differentiable causal discovery via adaptive sample reweighting an zhang1,2, fangfu liu3, wenchang ma2, zhibo cai4, xiang wang∗ 5, tat-seng chua1,2 1sea-next joint lab, 2national university of singapore, 3tsinghua university 4renmin university of china, 5university of science and technology of china anzhang@u.nus.edu, liuff19@mails.tsinghua.edu.cn, e0724290@u.nus.edu caizhibo@ruc.edu.cn, xiangwang1223@gmail.com, dcscts@nus.edu.sg abstract under stringent model type and variable distribution assumptions, differentiable score-based causal discovery methods learn a directed acyclic graph (dag) from observational data by evaluating candidate graphs over an average score function. despite great success in low-dimensional linear systems, it has been observed that these approaches overly exploit easier-to-fit samples, thus inevitably learning spurious edges. worse still, the common homogeneity assumption can be easily violated, due to the widespread existence of heterogeneous data in the real world, resulting in performance vulnerability when noise distributions vary. we propose a simple yet effective model-agnostic framework to boost causal discovery performance by dynamically learning the adaptive weights for the reweighted score function, rescore for short, where the weights are tailored quantitatively to the importance degree of each sample. intuitively, we leverage the bilevel optimization scheme to alternately train a standard dag learner and reweight samples — that is, upweight the samples the learner fails to fit and downweight the samples that the learner easily extracts the spurious information from. extensive experiments on both synthetic and real-world datasets are carried out to validate the effectiveness of rescore. we observe consistent and significant boosts in structure learning performance. furthermore, we visualize that rescore concurrently mitigates the influence of spurious edges and generalizes to heterogeneous data.
finally, we perform the theoretical analysis to guarantee the structure identifiability and the weight adaptive properties of rescore in linear systems. our codes are available at https://github.com/anzhang314/rescore. (∗xiang wang is the corresponding author, also with the institute of artificial intelligence, hefei comprehensive national science center.) figure 1: a simple example of a basic chain structure on which notears would learn spurious edges while rescore can help to mitigate the bad influence. introduction learning causal structure from purely observational data (i.e., causal discovery) is a fundamental but daunting task (chickering et al., 2004; shen et al., 2020). it strives to identify causal relationships between variables and encode the conditional independence as a directed acyclic graph (dag). differentiable score-based optimization is a crucial enabler of causal discovery (vowels et al., 2021). specifically, it is formulated as a continuous constrained optimization problem by minimizing the average score function under a smooth acyclicity constraint. to ensure the structure is fully or partially identifiable (see section 2), researchers impose stringent restrictions on the model parametric family (e.g., linear, additive) and common assumptions on variable distributions (e.g., data homogeneity) (peters et al., 2014; ng et al., 2019a). following this scheme, recent follow-on studies (kalainathan et al., 2018; ng et al., 2019b; zhu et al., 2020; khemakhem et al., 2021; yu et al., 2021) extend the formulation to general nonlinear problems by utilizing a variety of deep learning models. however, upon careful inspection, we spot and justify two unsatisfactory behaviors of the current differentiable score-based methods: • differentiable score-based causal discovery is error-prone to learning spurious edges or reverse causal directions between variables, which derails the structure learning accuracy (he et al., 2021; ng et al., 2022).
we substantiate our claim with an illustrative example as shown in figure 1 (see another example in appendix d.3.1). we find that even the fundamental chain structure in a linear system is easily misidentified by the state-of-the-art method, notears (zheng et al., 2018). • despite being appealing on synthetic data, differentiable score-based methods suffer from severe performance degradation when encountering heterogeneous data (huang et al., 2020; 2019). considering figure 1 again, notears is susceptible to learning redundant causations when the distributions of noise variables vary. taking a closer look at this dominant scheme (i.e., optimizing the dag learner via an average score function under strict assumptions), we ascribe these undesirable behaviors to its inherent limitations: • the collected datasets naturally include an overwhelming number of easy samples and a small number of informative samples that might contain crucial causation information (shrivastava et al., 2016). scoring the samples on average deprives the discovery process of the ability to differentiate sample importance, so easy samples dominate the learning of the dag. as a result, prevailing score-based techniques fail to learn true causal relationships but instead yield the easier-to-fit spurious edges. • noise distribution shifts are inevitable and common in real-world training, as the observations are typically collected at different periods, environments, locations, and so forth (arjovsky et al., 2019). as a result, the strong assumption of noise homogeneity for a differentiable dag learner is easily violated in real-world data (peters et al., 2016). a line of works (ghassami et al., 2018; wang et al., 2022) dedicated to heterogeneous data can successfully address this issue.
however, they often require explicit domain annotations (i.e., ideal partition according to heterogeneity underlying the data) for each sample, which are prohibitively expensive and hard to obtain (creager et al., 2021), thus further limiting their applicability. to reshape the optimization scheme and resolve these limitations, we propose to adaptively reweight the samples, which de facto concurrently mitigates the influence of spurious edges and generalizes to heterogeneous data. the core idea is to discover and upweight a set of less-fitted samples that offer additional insight into depicting the causal edges, compared to the samples easily fitted via spurious edges. focusing more on less-fitted samples enables the dag learner to effectively generalize to heterogeneous data, especially in real-world scenarios whose samples typically come from disadvantaged domains. however, due to the difficulty of accessing domain annotations, distinguishing such disadvantaged but informative samples and adaptively assigning their weights are challenging. towards this end, we present a simple yet effective model-agnostic optimization framework, coined rescore, which automatically learns to reweight the samples and optimize the differentiable dag learner, without any knowledge of domain annotations. specifically, we frame the adaptive weights learning and the differentiable dag learning as a bilevel optimization problem, where the outer-level problem is solved subject to the optimal value of the inner-level problem: • in the inner loop, the dag learner is first fixed and evaluated by the reweighted score function to quantify the reliance on easier-to-fit samples, and then the instance-wise weights are adaptively optimized to induce the dag learner to the worst-case. 
• in the outer loop, upon the reweighted observation data where the weights are determined by the inner loop, any differentiable score-based causal discovery method can be applied to optimize the dag learner and refine the causal structure. benefiting from this optimization scheme, our rescore has three desirable properties. first, it is a model-agnostic technique that can empower any differentiable score-based causal discovery method. moreover, we theoretically reveal that the structure identifiability is inherited by rescore from the original causal discovery method in linear systems (cf. theorem 1). second, rescore jointly mitigates the negative effect of spurious edge learning and the performance drop in heterogeneous data via auto-learnable adaptive weights. theoretical analysis in section 3.3 (cf. theorem 2) validates the oracle adaptive properties of the weights. third, rescore boosts the causal discovery performance by a large margin. surprisingly, it performs competitively with or even outperforms cd-nod (huang et al., 2020) and dicd (wang et al., 2022), which require domain annotation, on heterogeneous synthetic data and real-world data (cf. section 4.2). differentiable causal discovery we begin by introducing the task formulation of causal discovery and the identifiability issue. we then present the differentiable score-based scheme to optimize the dag learner. task formulation. causal discovery aims to infer the structural causal model (scm) (pearl, 2000; pearl et al., 2016) from the observational data, which best describes the data generating procedure. formally, let x ∈ rn×d be a matrix of observational data, which consists of n independent and identically distributed (i.i.d.) random vectors x = (x1, . . . , xd) ∈ rd. given x, we aim to learn a scm (px , g), which encodes a causal directed acyclic graph (dag) with a structural equation model (sem) to reveal the data generation from the distribution of variables x.
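the two loops above can be illustrated on a toy problem. the sketch below is my own simplification, not the paper's implementation: a one-dimensional linear model stands in for the dag learner, and the inner step assigns sample weights by a softmax over the current losses, upweighting poorly fitted samples before each outer gradient step:

```python
import math

def reweighted_fit(xs, ys, steps=200, lr=0.05, tau=1.0):
    """Toy bilevel reweighting loop in the spirit of rescore."""
    w = 0.0
    for _ in range(steps):
        # inner loop: upweight hard-to-fit samples (softmax over losses)
        losses = [(w * x - y) ** 2 for x, y in zip(xs, ys)]
        exps = [math.exp(tau * l) for l in losses]
        z = sum(exps)
        weights = [e / z for e in exps]
        # outer loop: one gradient step on the reweighted score
        grad = sum(wi * 2 * (w * x - y) * x
                   for wi, x, y in zip(weights, xs, ys))
        w -= lr * grad
    return w

w_hat = reweighted_fit([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

on clean data the weights flatten out as the fit improves, so the toy learner still recovers the true slope; the point of the alternation is that hard samples steer the early updates.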
specifically, we denote the dag by g = (v (g), e(g)), where v (g) is the variable set and e(g) collects the causal directed edges between variables. we present the joint distribution over x as px , which is markov w.r.t. g. the probability distribution function of px is factored as $p(x) = \prod_{i=1}^{d} p(x_i \mid x_{pa(i)})$, where pa(i) = {j ∈ v (g) : xj → xi ∈ e(g)} is the set of parents of variable xi in g and $p(x_i \mid x_{pa(i)})$ is the conditional probability density function of variable xi given xpa(i). as a result, the sem can be formulated as a collection of d structural equations: xi = fi(xpa(i), ni), where $f_i : \mathbb{R}^{|x_{pa(i)}|} \to \mathbb{R}$ can be any linear or nonlinear function, and n = (n1, . . . , nd) are jointly independent noise variables. identifiability issue. in general, without further assumption on the sem (cf. equation 1), it is not possible to uniquely learn the dag g by only using the observations of px . this is the identifiability issue in causal discovery (lachapelle et al., 2020). nonetheless, with the assumption of the sem, the dag g is said to be identifiable over px , if no other sem can encode the same distribution px with a different dag under the same assumption. to guarantee the identifiability, most prior studies restrict the form of the structural equations to be additive w.r.t. noises, i.e., additive noise models (anm). assuming anm, as long as the structural equations are linear with non-gaussian errors (shimizu et al., 2006; loh & bühlmann, 2014), a linear gaussian model with equal noise variances (peters & bühlmann, 2014), or a nonlinear structural equation model with mild conditions (hoyer et al., 2008; zhang & hyvarinen, 2009; peters et al., 2014), then the dag g is identifiable. solution to causal discovery. prevailing causal discovery approaches roughly fall into two lines: constraint- and score-based methods (spirtes & zhang, 2016; glymour et al., 2019).
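a linear additive-noise instance of the structural equations above can be sampled directly. the sketch below is a minimal illustration (function and variable names are mine), assuming the weighted adjacency W lists the variables in a topological order so parents are generated before children:

```python
import random

def sample_linear_sem(W, n_samples, noise_std=1.0, seed=0):
    """Draw i.i.d. samples from the linear ANM x_i = sum_j W[j][i]*x_j + n_i."""
    rng = random.Random(seed)
    d = len(W)
    data = []
    for _ in range(n_samples):
        x = [0.0] * d
        for i in range(d):  # parents come before children in this ordering
            x[i] = sum(W[j][i] * x[j] for j in range(d)) + rng.gauss(0.0, noise_std)
        data.append(x)
    return data

# chain x1 -> x2 -> x3 with edge weight 2
X = sample_linear_sem([[0, 2, 0], [0, 0, 2], [0, 0, 0]], n_samples=100)
```

the resulting matrix plays the role of the observational data x from which a dag learner must recover W's support.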
specifically, constraint-based methods (spirtes et al., 1995; spirtes & glymour, 1991; colombo et al., 2012) determine up to the markov equivalence class of causal graphs, based on conditional independence tests under certain assumptions. score-based methods (vowels et al., 2021) evaluate the candidate graphs with a predefined score function and search the dag space for the optimal graph. here we focus on the score-based line. score-based causal discovery. with a slight abuse of notation, g refers to a directed graph in the rest of the paper. formally, the score-based scheme casts the task of dag learning as a combinatorial optimization problem: $\min_{g} S(g; x) = L(g; x) + \lambda R_{sparse}(g) \quad \text{s.t.} \quad g \in \mathrm{DAGs}$. here this problem consists of two ingredients: the combinatorial acyclicity constraint $g \in \mathrm{DAGs}$ and the score function $S(g; x)$. the score function composes two terms: (1) the goodness-of-fit measure $L(g; x) = \frac{1}{n} \sum_{i=1}^{n} l(x^i, f(x^i))$, where $l(x^i, f(x^i))$ represents the loss of fitting observation $x^i$; (2) the sparsity regularization $R_{sparse}(g)$ stipulating that the total number of edges in g should be penalized; and λ is a hyperparameter controlling the regularization strength. next, we will elaborate on the previous implementations of these two major ingredients. to implement $S(g; x)$, various approaches have been proposed, such as penalized least-squares loss (zheng et al., 2020; 2018; ng et al., 2019b), evidence lower bound (elbo) (yu et al., 2019), log-likelihood with complexity regularizers (kalainathan et al., 2018; van de geer & bühlmann, 2013; ng et al., 2020), maximum mean discrepancy (mmd) (goudet et al., 2018), bayesian information criterion (bic) (geiger & heckerman, 1994; zhu et al., 2020), bayesian dirichlet equivalence uniform (bdeu) score (heckerman et al., 1995), bayesian gaussian equivalent (bge) score (kuipers et al., 2014), and others (huang et al., 2018; bach & jordan, 2002; sokolova et al., 2014). | 3 | [108, 582.4570784, 504.003511, 658.3918556] |
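the two ingredients of the score function discussed above can be made concrete with a minimal penalized least-squares instance in the style of notears (an illustrative sketch, not the paper's code; names are mine). the acyclicity check uses the truncated trace of the matrix exponential of W∘W, which is zero iff the weighted adjacency is a dag:

```python
def score(W, X, lam=0.1):
    """Penalized least-squares score: L(g;X) + lam * ||W||_1."""
    n, d = len(X), len(W)
    loss = 0.0
    for row in X:
        for i in range(d):
            pred = sum(row[j] * W[j][i] for j in range(d))
            loss += (row[i] - pred) ** 2
    loss /= 2 * n
    sparse = sum(abs(w) for r in W for w in r)
    return loss + lam * sparse

def acyclicity(W):
    """h(W) = sum_{k=1..d} tr((W∘W)^k) / k!; zero iff W is a DAG
    (every cycle has length at most d, so truncating at k = d suffices)."""
    d = len(W)
    A = [[W[i][j] ** 2 for j in range(d)] for i in range(d)]
    P = [[float(i == j) for j in range(d)] for i in range(d)]  # A^0 = I
    h, fact = 0.0, 1.0
    for k in range(1, d + 1):
        P = [[sum(P[i][m] * A[m][j] for m in range(d)) for j in range(d)]
             for i in range(d)]  # now P = A^k
        fact *= k
        h += sum(P[i][i] for i in range(d)) / fact
    return h
```

continuous methods minimize `score` subject to `acyclicity(W) = 0`, typically via an augmented-lagrangian scheme.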
TGFO0DbD_pk.pdf | 2,021 | 0 | genetic soft updates for policy evolution in deep reinforcement learning enrico marchesini∗, davide corsi, alessandro farinelli university of verona, department of computer science abstract the combination of evolutionary algorithms (eas) and deep reinforcement learning (drl) has been recently proposed to merge the benefits of both solutions. existing mixed approaches, however, have been successfully applied only to actor-critic methods and present significant overhead. we address these issues by introducing a novel mixed framework that exploits a periodical genetic evaluation to soft update the weights of a drl agent. the resulting approach is applicable with any drl method and, in a worst-case scenario, it does not exhibit detrimental behaviours. experiments in robotic applications and continuous control benchmarks demonstrate the versatility of our approach that significantly outperforms prior drl, eas, and mixed approaches. finally, we employ formal verification to confirm the policy improvement, mitigating the inefficient exploration and hyper-parameter sensitivity of drl. introduction the key to a wider and successful application of drl techniques in real scenarios is the ability to adapt to the surrounding environment by generalizing from training experiences. these solutions have to cope with the uncertainties of the operational environment, requiring a huge number of trials to achieve good performance. hence, devising robust learning approaches while improving sample efficiency is one of the challenges for wider utilization of drl. despite the promising results (tai et al., 2017; zhang et al., 2017; marchesini et al., 2019), drl also suffers from convergence to local optima, which is mainly caused by the lack of diverse exploration when operating in high-dimensional spaces.
several studies address the exploration problem (e.g., curiosity-driven exploration (pathak et al., 2017), count-based exploration (ostrovski et al., 2017)), but they typically rely on sensitive task-specific hyper-parameters. the sensitivity to such hyper-parameters is another significant issue in drl as it typically results in brittle convergence properties and poor performance in practical tasks (haarnoja et al., 2018). evolutionary algorithms (fogel, 2006) have been recently employed as a promising gradient-free optimization alternative to drl. the redundancy of these population-based approaches has the advantages of enabling diverse exploration and improving robustness, leading to a more stable convergence. in particular, genetic algorithms (ga) (montana & davis, 1989) show competitive results compared to gradient-based drl (such et al., 2017) and are characterized by low computational cost. these gradient-free approaches, however, struggle to solve high-dimensional problems, having poor generalization skills, and are significantly less sample efficient than gradient-based methods. an emergent research direction proposes the combination of gradient-free and gradient-based methods following the physical world, where evolution and learning cooperate to assimilate the best of both solutions (simpson, 1953). the first mixed approach, evolutionary reinforcement learning (erl) (khadka & tumer, 2018), relies on an actor-critic architecture to inject information in an evolutionary population while both the gradient-free and gradient-based training phases proceed in parallel. similarly, proximal distilled erl (pderl) (bodnar, 2020) extends erl with different evolutionary methods. cem-rl (pourchot, 2019) brings this research direction into the family of distributed approaches, combining a portfolio of td3 (fujimoto et al., 2018) learners with the cross-entropy method (yan duan, 2016).
∗contact author: enrico.marchesini@univr.it these mixed approaches, however, also present several limitations, which we address through our work: (i) the parallel training phases of the drl and ea components (khadka & tumer, 2018; bodnar, 2020), or the multitude of learners (pourchot, 2019), result in significant overhead (detailed in section 4). (ii) the actor-critic formalization of previous mixed approaches allows them to be easily evaluated in continuous locomotion benchmarks (brockman et al., 2016; todorov et al., 2012). however, this also hinders their combination with value-based drl (marchesini & farinelli, 2020a). this is important as recent work (matheron et al., 2019) shows the limitation of actor-critic in deterministic tasks, which in contrast can be effectively addressed with value-based drl. in particular, section 4 shows that a value-based implementation of khadka & tumer (2018) does not converge in our discrete robotic task. (iii) the combination strategy does not ensure better performance compared to the drl agent as it does not prevent detrimental behaviours (e.g., drop in performance). this is shown in the poor performance of a value-based implementation of erl and pderl (section 4). figure 1: supe-rl overview. we propose a novel mixed framework, called soft updates for policy evolution (supe-rl), that enables us to combine the characteristics of gas with any drl algorithm, addressing the limitations of previous approaches. supe-rl (figure 1) benefits from the high sampling efficiency of gradient-based drl while incorporating gradient-free ga to generate diverse experiences and find better policies. summarizing, supe-rl based algorithms perform a periodical genetic evaluation applying gas to the agent network. a selection operator uses a fitness metric to evaluate the population, choosing the best performing genome (i.e., the weights of the network) that is used to update the weights of the drl agent.
in contrast to previous work, our genetic evaluation is only performed periodically, drastically reducing the overhead. furthermore, our soft update (section 3) allows a direct integration of gas into any drl algorithm as it is similar to performing a gradient step towards a better policy, avoiding detrimental behaviours. as detailed in section 3.1, this allows using value-based drl, exploiting the variety of optimizations developed for the well-known dqn (van hasselt et al., 2016; schaul et al., 2016; wang et al., 2016; fortunato et al., 2017; bellemare et al., 2017). crucially, our genetic component influences the drl agent policy only if one of its mutated versions performs better in a subset of evaluation episodes. hence, as detailed in section 3, with a sufficient number of episodes we obtain a good estimation of the overall performance of the population. our evaluation focuses on mapless navigation, a well-known problem in robotics and recent drl (zhang et al., 2017; wahid et al., 2019; marchesini & farinelli, 2020b). in particular, we consider two tasks developed with unity (juliani et al., 2018): (i) a discrete action space indoor scenario with obstacles for a mobile robot and (ii) a continuous task for aquatic drones, with dynamic waves and physically realistic water. besides considering standard metrics related to performance (success rate and reward), we also consider safety properties that are particularly important in these domains (e.g., the agent does not collide with obstacles). in more detail, we employ formal verification (corsi et al., 2020) to compute the percentage of input cases that cause violations of these properties. this is important to confirm our claim that supe-rl based approaches correctly bias the exploration process in the direction of more robust policy regions with higher returns.
results show that supe-rl algorithms improve performance (i.e., training time, success rate, average reward), stability, and safety over value-based and policy-gradient drl (rainbow (hessel et al., 2018), ppo (schulman et al., 2017)) and erl. finally, we performed additional comparisons of supe-rl with: (i) pderl (bodnar, 2020) to evidence the poor performance of previous mixed approaches when combined with value-based drl; (ii) cem-rl (pourchot, 2019) in the aquatic scenario, to show the differences with a multi-learner approach; (iii) erl in standard continuous benchmarks (i.e., mujoco locomotion (brockman et al., 2016; todorov et al., 2012)), where results confirm the superior performance of supe-rl. background and related work we formalize robotic navigation as an rl problem, defined over a markov decision process, as described in recent drl literature (tai et al., 2017; zhang et al., 2017; wahid et al., 2019). drl for robotic navigation focuses exclusively on continuous action algorithms such as actor-critic ddpg (lillicrap et al., 2015), td3 (fujimoto et al., 2018) and ppo (schulman et al., 2017). such methods have been adopted following the idea that value-based dqn (mnih et al., 2013) cannot deal with high-dimensional action spaces. however, discrete value-based solutions typically result in shorter training time, being more sample efficient, and show better performance even in continuous settings. in detail, marchesini & farinelli (2020b) shows that discrete drl is a more efficient alternative to continuous drl in robotic navigation. moreover, tavakoli et al. (2018) proposes an adaptation of dueling dqn (wang et al., 2016) with double dqn (van hasselt et al., 2016) that achieves competitive results in locomotion benchmarks (brockman et al., 2016; todorov et al., 2012). more recently, de wiele et al. (2020) designed a dqn-based algorithm to handle enormous discrete and continuous action spaces.
these studies further motivate our contribution in the design of a mixed approach that also works with value-based drl. evolutionary algorithms eas are an alternative black-box optimization approach characterized by three main operators (fogel, 2006): generation, alteration, and selection. in detail, montana & davis (1989) evolves a population of n individuals, each one represented by the network parameter vector θ (the genome). each θi (i ∈ [0, . . . , n − 1]) is evaluated to produce a fitness f(θi), used by the selection operator to choose the best genome. for the ea component of supe-rl, we consider a mutation-based ga that has shown competitive performance over gradient-based drl (such et al., 2017). combining ea and drl following the trend of using ea as an alternative for drl (salimans et al., 2017), an emergent research field focuses on combining gradient-free and gradient-based solutions. in particular, erl (khadka & tumer, 2018) considers an actor-critic ddpg agent (lillicrap et al., 2015) and a concurrent ea training that generates a population of individuals, which are mutated and selected based on their fitness. the drl agent is trained in parallel from the samples generated by both training phases and it is periodically injected in the running population, which is used to collect the training performance. the mutation function of erl ensures that, in a certain number of episodes, the gradient-based policy outperforms its evolutionary siblings, introducing the gradient-based benefits into the population, hence biasing the selection process of the next generation and its performance. in their experiments, the authors highlight an efficient transfer of information between the two families of algorithms, outperforming ddpg in well-known locomotion benchmarks (brockman et al., 2016; todorov et al., 2012).
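a minimal version of the generation/alteration/selection loop described above (a sketch of a mutation-based ga over real-valued genomes; it is not the erl or supe-rl code) looks like this:

```python
import random

def genetic_search(fitness, dim, pop_size=20, generations=50,
                   mut_std=0.1, seed=0):
    """Elitist mutation-based GA; fitness is maximized."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(dim)]           # generation
    for _ in range(generations):
        pop = [best] + [[g + rng.gauss(0.0, mut_std) for g in best]
                        for _ in range(pop_size - 1)]         # alteration
        best = max(pop, key=fitness)                          # selection
    return best

# maximize -||theta||^2: the optimum is at the origin
theta = genetic_search(lambda g: -sum(x * x for x in g), dim=3)
```

keeping the current best in the population (elitism) guarantees the fitness never regresses between generations, which mirrors the worst-case guarantee supe-rl aims for.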
however, both the introduction of all the experiences in the buffer and forcing the drl agent to perform better than the ea population bias the training and can cause detrimental behaviours. inspired by erl, several combinations have been proposed (bodnar, 2020; colas et al., 2018; pourchot, 2019; khadka et al., 2019). while gep-pg (colas et al., 2018) can be considered as a simplified version of a mixed approach, where a curiosity-driven approach is used to fill the buffer of the agent, proximal distilled erl (pderl) (bodnar, 2020) addresses the ea component of erl, introducing novel operators to compensate for the simplicity of the genetic representation (as investigated by lehman et al. (2018), where the authors address destructive behaviors of biologically-inspired variation operators applied to neural networks, which cause catastrophic forgetting). however, as detailed in section 3.1, our genetic evaluation is used to soft update the drl agent only in the case of performance improvement, hence it does not show such catastrophic forgetting. we also mention cerl (khadka et al., 2019) and cem-rl (pourchot, 2019) as they are extensions of erl for distributed training (which we do not consider here) with multiple active learners, which leads to non-negligible overhead (nonetheless, section 4.2 reports an additional experiment in our continuous task with cem-rl, to provide a more heterogeneous overview of the superior performance of supe-rl). these works share a common baseline as they all rely on actor-critic drl and are built on the insights of erl, which is the most closely related to supe-rl. hence, we choose erl for complete performance comparison. section 4.2 also shows a comparison with pderl, to further highlight the poor performance of previous approaches when combined with value-based drl. finally, we use formal verification to support our claims on the beneficial effects of our genetic component on the policy.
we report in appendix a a brief description of the considered methodology. supe-rl the main insight of supe-rl is to soft update a drl agent towards better policy regions, enabling the combination with any drl algorithm. in detail, we combine a mutation-based ga (such et al., 2017) with two drl algorithms (1we evaluated a variety of different learning algorithms for both action domains. among rainbow, ppo, ddpg, and td3, we chose the best-performing ones.): (i) rainbow (hessel et al., 2018) as a value-based algorithm for
given that their evolutionary component is running in parallel with the gradient-based agent, we noticed that the weights in the population tend to 0, hence causing a detrimental behavior. the population and a copy of θa are then independently tested over a set of evaluation episodes which shares the same goals, to find the overall best performing individual θbest based on the fitness (the fitness definition is domain-specific; in the case of navigation, it is computed as the number of targets the agent reaches over the evaluation episodes). in this phase, we can also store a portion of diverse experiences in the buffer r used by drla, to further exploit the population-based component. finally, if the selected genome belongs to one of the children, drla weights are updated towards the mutated version and the training phase continues with the new weights. in contrast, if the best score belongs to drla, the training phase, which was running in parallel, continues. since the evaluation does not require any interaction among the population, we instantiate an independent copy of the environment for each n + 1 population component in a separate thread and test them in parallel, drastically reducing the overhead for the drla. the multi-thread nature of the unity game engine makes this parallel testing phase straightforward and particularly efficient. this approach has both the advantage of search a better-performing policy exploiting ga mutations (similar to noisy exploration (fortunato et al., 2017)) and enrich the replay buffer with new diversified experiences. as detailed in our empirical evaluation and the experiments with formal verification tools (section 3.1), our genetic evaluation leads to safer policies and a significant reduction in training time, with supe-rl resulting approximately two times faster than erl in the same scenario. 
it is important to mention that both supe-rl and previous mixed approaches are especially designed for scenarios in which it is possible to parallelize the learning process. hence, when the training phase is performed on real physical systems, it is not possible in general to evaluate the population in parallel. this could significantly increase the convergence time. crucially, in contrast to previous mixed approaches, supe-rl based algorithms are designed to improve the performance of drla as our combination schema does not bias the choice of the betterperforming children in the long term. in the worst-case scenario, the main agent is always the best genome in the population and does not improve the current policy, hence a supe-rl based training will match the performance of the chosen drl algorithm. value-based and policy-gradient implementations robotic navigation allows evaluating supe-rl in a variety of scenarios (e.g., discrete and continuous action spaces). hence, our two tasks present different characteristics (e.g., static and dynamic environments, sparse and dense rewards). in this section, we present the value-based and policy-gradient implementation of supe-rl, combined with a mutation-based ga (such et al., 2017). 3.1.1 sgrainbow in discrete indoor navigation we consider a turtlebot32 indoor environment with obstacles, using a discrete action space and a dense reward function. 2https://www.turtlebot.com/ environment description target goals randomly spawn in the scenario and are guaranteed to be obstacle-free. the reward rt is structured as two sparse value in case of reaching the target rreach = 1 within error µ = 5cm from the target goal, or crashing rf ail = −1 which terminates an episode (resetting the robot to its starting position). a dense part is used during the travel: ω(dt−1 − dt), where dt−1, dt indicate the euclidean distance between the robot and the goal at two consecutive time steps and ω = 10 is a multiplicative factor. 
network architecture: the input layer contains 19 sparse laser scans, sampled in [−90, 90] degrees in a fixed angle distribution, and the target position (expressed in polar coordinates); a similar setting is used in tai et al. (2017); long et al. (2018). we explored other encodings for the problem (e.g., adding linear velocities as output), but the higher complexity of the problem causes longer training times with negligible improvements. we performed multiple trials on different network sizes and seeds (chen & chang, 1996), and the outcome led us to use two relu hidden layers with 64 neurons each and 5 linear output nodes to encode the angular velocities [−90, −45, 0, 45, 90] deg/s.

methodology for sgrainbow: the genetic evaluation of the value-based agent presents challenges due to the instability of the training algorithm (a known drawback of dqn-based algorithms) and the poor scalability on high-dimensional action spaces. nonetheless, recent work shows the benefits of such solutions (marchesini & farinelli, 2020b; tavakoli et al., 2018; de wiele et al., 2020) in these challenging settings, further motivating the requirement for a mixed approach that also works with value-based drl. here we discuss only the elements of the algorithm that are relevant to sgrainbow, referring the interested reader to hessel et al. (2018) for further details on rainbow. we developed different approaches to update the drla with the genome of the better-performing child. we first switch drla with the child, soft updating only the target network (a technique originally developed for ddqn (van hasselt et al., 2016)) to approach the new weights. we tried different settings for the target network, but a soft update with τ′ = 0.1 showed the best performance. this method, however, leads to an unusual optimizer choice, which is crucial considering the high variance of a value-based algorithm.
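a minimal sketch of this first update variant: the agent network is replaced outright by the best child, while the target network is only soft-updated toward the new weights with τ′ = 0.1. plain lists stand in for parameter tensors, and the function name is ours.

```python
def grainbow_switch(agent, target, child, tau=0.1):
    new_agent = list(child)  # hard switch: the agent becomes the best child
    # soft update: the target moves a fraction tau toward the child's weights
    new_target = [tau * c + (1.0 - tau) * t for c, t in zip(child, target)]
    return new_agent, new_target

agent, target = grainbow_switch([0.0, 0.0], [0.0, 0.0], [1.0, 2.0])
```

the agent jumps to the child immediately, while the target trails behind, which is precisely the source of the optimizer instability discussed next.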
in particular, the widely adopted adam optimizer (kingma & ba, 2014) with its self-adaptive learning rate typically requires minimal hyper-parameter tuning but, after a switch of the drla model with a better genome, a drop in performance occurs. we attribute this to the current learning rate of adam being "balanced" for the old drla weights and requiring time to adjust its value for the new, unexpected, and better-performing model. for this reason, this first variant, which we refer to as grainbow, required tuning an sgd optimizer for this scenario, decaying its initial learning rate from 0.1 to 0.001 based on the current success rate of the model. as shown in section 4, this approach already outperformed both the rainbow algorithm and the ga. the tuning of sgd requires several trials and is one of the main limitations of grainbow. to address this, we considered tessler et al. (2019), where the authors improved ddqn stability by copying the agent model to the target when the former performs better. hence, we flipped our approach by soft updating drla towards the best genome (using τ′ = 0.1) and switching the target network with such a genome. this enabled us to use the adam optimizer, improving the performance of grainbow. we refer the interested reader to appendix c of the supplementary material for a performance comparison between grainbow with sgd and adam. finally, we further improved both training performance and stability by exploiting the soft update technique (lillicrap et al., 2015) for both networks. we refer to the resultant algorithm as soft grainbow (sgrainbow). in particular, we soft update both the drla agent and target models towards the weights of the best-performing child, to smooth the transition of the networks towards the better mutated policy. the update rule for the drla networks is then: θa = τ′θa + (1 − τ′)θbest.
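the sgrainbow update rule above can be sketched as follows, with plain lists standing in for parameter tensors (function names are ours):

```python
def soft_update(theta, theta_best, tau=0.3):
    # theta = tau * theta + (1 - tau) * theta_best, as in the rule above
    return [tau * w + (1.0 - tau) * wb for w, wb in zip(theta, theta_best)]

# both the agent and the target networks are moved toward the best child
agent = soft_update([0.0, 0.0], [1.0, -1.0])
target = soft_update([0.0, 0.0], [1.0, -1.0])
```

with τ′ = 0.3 as in the text, each update retains 30% of the current weights and moves 70% of the way toward the best child, smoothing the transition for both networks.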
we tried different values for τ′ (results in appendix c), obtaining the best performance with τ′ = 0.3. we believe that these periodical slight changes in the drla policy, which simulate a gradient step towards better network weights, are the core mechanism that allows supe-rl to work and improve the performance of value-based drl, while also reducing the variance across different runs. a useful side message is that sgd seems better than adam when drla "jumps" due to target network instabilities, while adam works better if the transition of the target network is more stable. finally, the results in section 4 introduce part of the experiences of the genetic evaluation into the same prioritized buffer of the drla (appendix c contains an ablation experiment on the influence of these diversified experiences with sgrainbow).

3.1.2 gppo in continuous aquatic navigation

we consider an aquatic drone navigation task characterized by dynamic waves, with a continuous action space and a sparse reward function. the sources of the aquatic simulator as a novel drl task and a video with an overview of the environment are available at tinyurl.com/y22xh43c.

environment description: the aquatic drone is a differential drive platform, where a continuous action is mapped to the motor power. the drone receives a sparse reward rt structured as two values: rreach = 1 in case of reaching the target within error µ = 5 cm from the target goal, or rfail = −1 in case of reaching the timeout, which terminates an episode (resetting the robot to its starting position).

network architecture: the input layer contains the target position expressed in polar coordinates with respect to the drone and the pitch of the boat. it is possible to retrieve these values using the gps and the compass of the boat. our initial evaluation of the network size led us to use two tanh hidden layers with 32 neurons each.
finally, two tanh output nodes encode the motor velocities (multiplied by a constant value to obtain our velocity limit).

methodology for gppo: in contrast to the value-based implementation, the stability of on-policy ppo does not present issues related to the adam optimizer. hence, we considered the same soft update strategy adopted with sgrainbow. naturally, we did not use the experiences of the best child, due to the on-policy nature of the algorithm. for further details on the implementation of the ppo algorithm, we refer to schulman et al. (2017).

empirical evaluation

the goal of our empirical evaluation is to investigate whether supe-rl approaches combine the benefits of ga with both value-based and policy-gradient drl while maintaining minimal overhead for the training. data are collected on an i7-9700k, using the implementation of section 3.1. we considered the same set of hyper-parameters (reported in appendix d) for the baselines and the supe-rl based approaches. our erl implementations follow khadka & tumer (2018) for the evolutionary component, while we use rainbow and ppo for the gradient-based part. in order to obtain reproducible and consistent results when comparing different algorithms, the random seed is fixed across a single run of every algorithm (because there may exist a sequence of targets, or a network initialization, that favors a run), while it varies across different runs. as a consequence, a given run of every algorithm executes the same sequence of targets and initializes the networks with the same weights. all the trained models, except for the value-based erl (erl-r) and the ga, are able to navigate while generalizing over starting positions, target positions, and velocity. the turtlebot3 lidar allows navigating in unknown environments with different obstacles, while the boat maintains similar performance in different wave conditions.
for each graph, we report the mean and standard deviation of ten statistically independent runs, considering (i) success rate: how many successful trajectories are performed; and (ii) total reward. results are smoothed over one hundred episodes.

table 1: performance in the evaluation phase.

to collect significant data in the evaluation of navigation performance, we chose a set of targets reachable by every model. table 1 summarizes the results for the discrete (rainbow, sgrainbow) and the continuous (ppo, erl, gppo) tasks, considering only models with acceptable performance (i.e., we did not include ga and erl-r). for clarity, in the aquatic task we collected the dense reward used in the discrete one. hence, rewards are similar (agents navigate towards the target, collecting positive rewards), but time and number of steps (i.e., trajectory length) differ significantly (≈31% and ≈37.5% for sgrainbow over rainbow; ≈18% and ≈8% for gppo over ppo and erl, respectively). in both tasks, supe-rl outperforms the drl algorithms and erl in every considered metric.

value-based evaluation

here we compare the ga, rainbow, erl-r, grainbow, and sgrainbow. as previously discussed, figures 2a, b show that a direct combination of erl with the value-based algorithm cannot cope with the issues of such drl approaches, resulting in very poor performance. given these results, we decided to perform an additional experiment with the improved version of erl, pderl (bodnar, 2020), which introduces novel genetic operators to improve erl robustness.

figure 2: left: indoor navigation (ga, rainbow, erl-r, grainbow, sgrainbow). right: aquatic navigation (ga, ppo, erl, gppo). (a, c) average success rate. (b, d) average total reward.

figure 3: left: cumulative number of supe-rl genetic soft updates in continuous aquatic navigation. right: cumulative number of injections of the drl agent using erl in the same environment.
nonetheless, we conjecture that the detrimental behaviour of previous mixed approaches is related to their drl injection pattern, rather than to the simplicity of the genetic approach (the effectiveness of a simple genetic representation is evident in our results and in such et al. (2017)). in contrast, our periodical genetic evaluation with a soft update strategy simulates a gradient step towards a better policy while, in the worst-case scenario, we do not update the drl agent, avoiding detrimental behaviours. our results (detailed in section 4.2) confirm the superior performance of pderl over erl in both our environments. however, pderl still provides inferior performance compared to supe-rl based approaches and shares the poor performance of erl when combined with the value-based algorithm. in further detail, figure 2a shows that even grainbow with the sgd optimizer, where the best genome fully substitutes the main drl agent, outperforms rainbow (i.e., 80% success rate over 60%). moreover, the soft genetic update further improves the performance while reducing the variance, and sgrainbow reaches 90% success rate in about 2000 epochs, corresponding to 60 minutes of training (in contrast, rainbow reached 60% success rate in a similar training time). furthermore, the standalone ga was not able to cope with the complexity of the task, where the algorithm needs to generalize the navigation while exploiting the laser values to avoid obstacles.

policy-gradient evaluation

here we compare ppo, an adapted version of erl with ppo, and gppo. figures 2c, d show that also in the continuous domain, the supe-rl based method offers better performance considering our evaluation metrics. in detail, gppo reaches over 98% average success rate in about 1300 epochs, corresponding to 110 minutes of training, while ppo, similarly to erl, was able to reach ≈82% average success rate in ≈1700 epochs (160 and 210 minutes of training, respectively).
furthermore, as reported in table 1, gppo uses a lower number of actions compared to ppo and erl, which translates into shorter paths and travel times for the drone towards the same target. in this task, we also compared the efficiency of our genetic evaluation in finding better policies with respect to erl. figure 3 shows the cumulative number of injections (erl) and evaluations (supe-rl) (x-axis), over the successful ones (y-axis), through the training required to reach similar performance. we compare injections and evaluations as they represent the mechanisms by which erl and supe-rl, respectively, improve the policy. results show that our mutation schema more often finds better-performing policies, in contrast to erl, where successful network injections occur more rarely (erl requires 400% more injection trials, i.e., 250 over 25, to match supe-rl performance, i.e., 98% success).

figure 4: performance of supe-rl, ppo and erl in mujoco benchmarks: (a) reacher-v2; (b) halfcheetah-v2; (c) hopper-v2; (d) ant-v2.

figure 5: comparison with cem-rl in aquatic navigation: (a) average success rate; (b) average training time. comparison with pderl in the navigation tasks: (c) value-based implementation of pderl in the discrete task; (d) policy-gradient implementation of pderl in the continuous task.

evaluation in standard benchmarks

we performed additional experiments on the reacher-v2, halfcheetah-v2, hopper-v2, and ant-v2 mujoco tasks (brockman et al., 2016; todorov et al., 2012) with gppo and erl. in particular, we considered the same specifics for data collection detailed in khadka & tumer (2018); hence we report the same performance metrics instead of a success rate. our erl implementation with ppo returned results comparable to the original ones presented in khadka & tumer (2018); pourchot (2019); khadka et al. (2019) (data were collected using the same hardware and gppo parameters, and averaged over 5 runs).
crucially, figures 4a-d show that our supe-rl based algorithm has comparable or better performance across the considered tasks. furthermore, figure 4c highlights the detrimental behaviour of erl.

comparison with cem-rl and pderl

to confirm that previous mixed approaches cannot be directly combined with value-based approaches due to their injection pattern, rather than their genetic representation, we performed an additional experiment with pderl (bodnar, 2020). this approach addresses the poor genetic representation of erl, improving its performance and robustness. figures 5c, d show the data collected in the discrete scenario, where we combined pderl with our value-based baseline, and in the continuous task, where we combined it with the ppo baseline. we used the pderl github implementation for the genetic component, and the same hyper-parameters and random seeds as in our previous experiments. the results confirm the superior performance of pderl over erl in both environments (i.e., improved success rate and reduced variance). however, the two parallel training phases do not provide a robust evaluation of the gradient-free population on the same set of tasks; hence, the resultant best agent does not represent an overall best policy for the task. it follows that pderl shares the detrimental performance of previous mixed approaches when combined with value-based algorithms. finally, as detailed in section 2, the recent field of mixed approaches has been extended with distributed solutions that use a portfolio of active drl learners, such as cerl (khadka et al., 2019) and cem-rl (pourchot, 2019). this intuitively results in significant overhead in the training process. we decided to compare gppo with the td3 version of cem-rl, as the authors claim it outperforms previous mixed approaches (to further confirm the overhead, sec. 5.2.2 of cem-rl (pourchot, 2019) states that tests are run on limited timesteps due to computational demands).
in contrast, supe-rl uses one drl learner, and the population is only used for policy evaluation. we used the github implementation provided by the authors to test cem-td3 in the continuous scenario (aquatic drone), and figures 5a, b show that, as expected, cem-rl required significantly more time (250 minutes over 110). in particular, cem-td3 reaches 98% success rate (the performance of supe-rl at epoch 1200) in approximately 600 episodes. however, it required 125% more wall-clock time with respect to the time required by supe-rl to reach similar performance (data were collected using the same hardware and averaged over 5 runs).

table 2: verification results, reporting violation (%), time (s), and memory (mb) for: (top) the indoor task (rainbow, sgrainbow); (bottom) the aquatic task (ppo, erl, gppo).

robustness of supe-rl using formal verification

an important result in both tasks is the limited variance shown by the supe-rl approaches across runs with different network initialization seeds. appendix e shows a more detailed analysis of our results, where supe-rl based approaches seem not to suffer from detrimental network initialization seeds. to further confirm our claims on the beneficial transfer of information of mixed approaches, we employ a formal verification tool (corsi et al., 2020) to verify the behavior of our trained models with respect to a series of safety properties. the idea behind this evaluation is to confirm that models trained with mixed approaches lead to more robust policies. given the inferior performance of erl, we also expect that supe-rl based models will present fewer configurations that cause undesirable behaviors (i.e., that violate the safety properties). according to the navigation scenarios, we selected the following safety properties for the indoor (i) and aquatic (a) navigation tasks:

θi,0: if turtlebot3 has obstacles too close on the right and on the front, it must turn left.
θi,1: if turtlebot3 has obstacles too close on the left and on the front, it must turn right.
θi,2: if turtlebot3 has obstacles too close on the front, it must turn in any of the other directions.
θa,0: if the aquatic drone has a target on the right, it must turn right.
θa,1: if the aquatic drone has a target on the left, it must turn left.

table 2 shows the violation percentage, computation time, and memory returned by the verification tool when testing our safety properties. in detail, models trained with mixed approaches (i.e., sgrainbow, gppo, and erl) present fewer violations on every considered property. furthermore, there is also a significant improvement in the computation time and memory required by the verifier. this confirms our claims on the policy improvement of mixed approaches: their output values are separated by larger margins, which translates into fewer bound re-computations for the verifier. crucially, the superior safety performance of supe-rl based approaches compared to erl further motivates the introduction of our framework.

discussion

we presented supe-rl, a novel mixed framework that exploits the robustness of population-based ga to improve value-based and policy-gradient drl agents. we evaluated supe-rl in two mapless navigation scenarios (an indoor navigation task for a turtlebot3 platform and an aquatic navigation task) and in locomotion benchmarks. our empirical evaluation shows that supe-rl significantly outperforms the drl baselines (rainbow and ppo), the ga, and the recent erl and pderl, which also show poor performance when combined with value-based drl. crucially, supe-rl is the first framework that combines ga and drl in the field of value-based discrete methods. furthermore, we exploited a formal verification tool to confirm the beneficial effects of mixed approaches on policy improvement. this evaluation confirms the superior performance of supe-rl based approaches, which returned safer models.
this work paves the way for several interesting research directions, which include exploiting complex evolutionary mechanisms and different crossover techniques to further improve our framework, as well as the possibility of extending supe-rl to concurrently optimize the total reward and the desired safety properties.

references

marc g. bellemare, will dabney, and rémi munos. a distributional perspective on reinforcement learning. in icml, 2017.
cristian bodnar, ben day, and pietro liò. proximal distilled evolutionary reinforcement learning. in aaai, 2020.
greg brockman, vicki cheung, ludwig pettersson, jonas schneider, john schulman, jie tang, and wojciech zaremba. openai gym. in corr, 2016.
chyi-tsong chen and wei-der chang. a feedforward neural network with function shape autotuning. 1996.
cédric colas, olivier sigaud, and pierre-yves oudeyer. gep-pg: decoupling exploration and exploitation in deep reinforcement learning algorithms. in icml, 2018.
sample selection with uncertainty of losses for learning with noisy labels

xiaobo xia1, tongliang liu1†, bo han2, mingming gong3, jun yu4, gang niu5, masashi sugiyama5,6

1tml lab, the university of sydney 2hong kong baptist university 3the university of melbourne 4university of science and technology of china 5riken aip 6the university of tokyo

abstract

in learning with noisy labels, the sample selection approach is very popular, which regards small-loss data as correctly labeled during training. however, losses are generated on-the-fly based on the model being trained with noisy labels, and thus large-loss data are likely but not certain to be incorrect. there are actually two possibilities for a large-loss data point: (a) it is mislabeled, and then its loss decreases slower than that of other data, since deep neural networks "learn patterns first"; (b) it belongs to an underrepresented group of data and has not been selected yet. in this paper, we incorporate the uncertainty of losses by adopting interval estimation instead of point estimation of losses, where lower bounds of the confidence intervals of losses, derived from distribution-free concentration inequalities, are used for sample selection instead of the losses themselves. in this way, we also give large-loss but less selected data a try; then, we can better distinguish between the cases (a) and (b) by seeing whether the losses effectively decrease with the uncertainty after the try. as a result, we can better explore underrepresented data that are correctly labeled but seem to be mislabeled at first glance. experiments demonstrate that the proposed method is superior to baselines and robust to a broad range of label noise types.
introduction

learning with noisy labels is one of the most challenging problems in weakly-supervised learning, since noisy labels are ubiquitous in the real world (mirzasoleiman et al., 2020; yu et al., 2019; nishi et al., 2021; arazo et al., 2019; yang et al., 2021a; bai & liu, 2021). for instance, both crowdsourcing and web crawling yield large numbers of noisy labels every day (han et al., 2018). noisy labels can severely impair the performance of deep neural networks with strong memorization capacities (zhang et al., 2017; zhang & sabuncu, 2018; pleiss et al., 2020; lukasik et al., 2020; chen et al., 2022). to reduce the influence of noisy labels, many approaches have recently been proposed (natarajan et al., 2013; liu & tao, 2016; ma et al., 2018; yang et al., 2021b; zheng et al., 2020; xia et al., 2019; 2020; tanaka et al., 2018; malach & shalev-shwartz, 2017; li et al., 2020b; menon et al., 2018; thekumparampil et al., 2018; xu et al., 2019; kim et al., 2019; jiang et al., 2020; harutyunyan et al., 2020). they can generally be divided into two main categories. the first one is to estimate the noise transition matrix (patrini et al., 2017; shu et al., 2020; hendrycks et al., 2018; yang et al., 2021c; wu et al., 2022), which denotes the probabilities that clean labels flip into noisy labels. however, the noise transition matrix is hard to estimate accurately, especially when the number of classes is large (yu et al., 2019). the second approach is sample selection, which is our focus in this paper. this approach is based on selecting possibly clean examples from a mini-batch for training (han et al., 2018; wang et al., 2018; yao et al., 2020a; wang et al., 2019; yu et al., 2019; lee et al., 2019; yao et al., 2022). intuitively, if we can exploit less noisy data for network parameter updates, the network will be more robust. a major question in sample selection is what criteria can be used to select possibly clean examples.
at the present stage, selection based on the small-loss criterion is the most common method, and it has been verified to be effective in many circumstances (han et al., 2018; jiang et al., 2018; yu et al., 2019; wei et al., 2020; yao et al., 2020a).

†corresponding author

figure 1: illustrations of uncertainty of losses. experiments are conducted on the imbalanced noisy mnist dataset. left: uncertainty of small-loss examples. at the beginning of training (epochs 1 and 2), due to the instability of the current prediction, the network gives a larger loss to the clean example and does not select it for updates. if we consider the mean of training losses at different epochs, the clean example can be equipped with a smaller loss and then selected for updates. right: uncertainty of large-loss examples. since the deep network learns easy examples at the beginning of training, it gives a large loss to clean imbalanced data with non-dominant labels, which makes such data unable to be selected and severely influences generalization.

specifically, since deep networks learn patterns first (arpit et al., 2017), they first memorize training data with clean labels and then those with noisy labels, under the assumption that clean labels are in the majority within a noisy class. small-loss examples can thus be regarded as clean examples with high probability. therefore, in each iteration, prior methods (han et al., 2018; wei et al., 2020) select the small-loss examples based on the predictions of the current network for robust training. however, such a selection procedure is debatable, since it arguably does not consider uncertainty in selection. the uncertainty comes from two aspects. first, this procedure has uncertainty about small-loss examples. specifically, the procedure uses limited time intervals and only exploits the losses provided by the current predictions.
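the small-loss criterion described above can be sketched as follows (a simplification in the spirit of co-teaching; names are ours):

```python
def small_loss_select(losses, keep_ratio=0.7):
    """return the indices of the keep_ratio fraction with the smallest
    current losses, treated as likely-clean examples."""
    k = int(len(losses) * keep_ratio)
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:k])

# mini-batch of per-example losses from the current network predictions
selected = small_loss_select([0.1, 2.3, 0.05, 1.9, 0.2], keep_ratio=0.6)
```

note that the selection depends only on the instantaneous losses, which is exactly the source of the two kinds of uncertainty discussed in the text.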
for this reason, the estimation of the noisy class posterior is unstable (yao et al., 2020b), which causes the network predictions to be equally unstable. it is thus risky to only use the losses provided by the current predictions (figure 1, left). once a wrong selection is made, the problem of accumulated errors arises (yu et al., 2019). second, this procedure has uncertainty about large-loss examples. to be specific, deep networks learn easy examples at the beginning of training, but ignore some clean examples with large losses. nevertheless, such examples are often critical for generalization. for instance, when learning with imbalanced data, distinguishing the examples with non-dominant labels is more pivotal during training (menon et al., 2020; wei et al., 2021). deep networks often give large losses to such examples (figure 1, right). therefore, in realistic scenarios, e.g., learning with noisy imbalanced data, prior sample selection methods cannot address this issue well. to relieve the above issues, we study the uncertainty of losses in the sample selection procedure to combat noisy labels. to reduce the uncertainty of small-loss examples, we extend the time intervals and utilize the mean of training losses at different training iterations. in consideration of the bad influence of mislabeled data on training losses, we build two robust mean estimators, from the perspectives of soft truncation and hard truncation w.r.t. the truncation level, respectively. soft truncation makes the mean estimation more robust by holistically changing the behavior of losses. hard truncation makes the mean estimation more robust by locally removing outliers from the losses. to reduce the uncertainty of large-loss examples, we encourage networks to pick, in a conservative way, samples that have not been selected.
furthermore, to address the two issues simultaneously, we derive concentration inequalities (boucheron et al., 2013) for robust mean estimation and further employ statistical confidence bounds (auer, 2002) to take into account the number of times an example was selected during training. the study of the uncertainty of losses in learning with noisy labels can be justified as follows. in statistical learning, it is known that uncertainty is related to the quality of data (vapnik, 2013). philosophically, we need variety decrease for selected data and variety search for unselected data, which share a common objective, i.e., reducing the uncertainty of data to improve generalization (moore, 1990). this is our original intention, since noisy labels bring more uncertainty because of the low quality of noisy data. nevertheless, due to the harm of noisy labels to generalization, we need to strike a good balance between variety decrease and variety search. technically, our method is specially designed for handling noisy labels: it robustly uses network predictions and, meanwhile, conservatively seeks less selected examples, to reduce the uncertainty of losses and then generalize well. before delving into details, we emphasize that our contributions are twofold. first, we reveal that prior sample selection criteria in learning with noisy labels have some potential weaknesses and discuss them in detail. new selection criteria are then proposed with detailed theoretical analyses. second, we experimentally validate the proposed method on both synthetic noisy balanced/imbalanced datasets and real-world noisy datasets, on which it achieves superior robustness compared with the state-of-the-art methods in learning with noisy labels. the rest of the paper is organized as follows. in section 2, we propose our robust learning paradigm step by step. experimental results are discussed in section 3. the conclusion is given in section 4.
method

in this section, we first introduce the problem setting and some background (section 2.1). then we discuss how to exploit training losses at different iterations (section 2.2). finally, we introduce the proposed method, which exploits training losses at different iterations more robustly and encourages networks to pick samples that are less selected but could be correctly labeled (section 2.3).

preliminaries

let x and y be the input and output spaces. consider a k-class classification problem, i.e., y = [k], where [k] = {1, . . . , k}. in learning with noisy labels, the training data are all sampled from a corrupted distribution on x × y. we are given a sample with noisy labels, i.e., s̃ = {(x, ỹ)}, where ỹ is the noisy label. the aim is to learn a robust classifier that can assign clean labels to test data by only exploiting a training sample with noisy labels. let f : x → rk be the classifier with learnable parameters w. at the i-th iteration during training, the parameters of the classifier f are denoted by wi. let ℓ : rk × y → r be a surrogate loss function for k-class classification. we exploit the softmax cross-entropy loss in this paper. given an arbitrary training example (x, ỹ), at the i-th iteration, we can obtain a loss ℓi, i.e., ℓi = ℓ(f(wi; x), ỹ). hence, until the t-th iteration, we can obtain a training loss set lt for the example (x, ỹ), i.e., lt = {ℓ1, . . . , ℓt}. in this paper, we assume that the training losses in lt conform to a markov process, which represents a changing system under the assumption that future states only depend on the current state (the markov property). more specifically, at the i-th iteration, if we exploit an optimization algorithm for parameter updates (e.g., the stochastic gradient descent algorithm (bottou, 2012)) and omit other dependencies (e.g., s̃), we will have p(wi | wi−1, . . .
, w0) = p(wi | wi−1), which means that the future state of the classifier f only depends on the current state. furthermore, given a training example and the parameters of the classifier f, we can determine the loss of the training example as discussed. therefore, the training losses in lt also conform to a markov process.

extended time intervals

as a limited time interval cannot fully address the instability of the estimation of the noisy class posterior (pleiss et al., 2020), we extend the time intervals and exploit the training losses at different training iterations for sample selection. one straightforward idea is to use the mean of training losses at different training iterations. hence, the selection criterion could be

µ̂ = (1/t) Σ_{i=1}^{t} ℓi. (1)

it is intuitive and reasonable to use such a selection criterion for sample selection, since the operation of averaging can mitigate the risks caused by the unstable estimation of the noisy class posterior, leading to better generalization. nevertheless, such a method could arguably achieve suboptimal classification performance for learning with noisy labels. the main reason is that, due to the great harm of mislabeled data, part of the training losses have too large an uncertainty and can be seen as outliers. therefore, it could be biased to use the mean of training losses containing such outliers (diakonikolas et al., 2020), which further influences sample selection. more evaluations of our claims are provided in section 3.

robust mean estimation and conservative search

we extend the time intervals and meanwhile exploit the training losses at different training iterations more robustly. specifically, we build two robust mean estimators from the perspectives of soft truncation and hard truncation (catoni, 2012). note that for specific tasks, it is feasible to decide the type of robust mean estimation with statistical tests based on some assumptions (chakrabarty & samorodnitsky, 2012). we leave this analysis as future work.
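a minimal sketch contrasting the plain mean-of-losses criterion with an outlier-contaminated history, illustrating why averaging alone can be biased (the numbers are illustrative, not from the paper's experiments):

```python
def mean_loss(losses):
    """plain mean of the per-iteration losses of one example."""
    return sum(losses) / len(losses)

clean_history = [0.5, 0.4, 0.45, 0.5]    # stable small losses across epochs
spiked_history = [0.5, 0.4, 8.0, 0.5]    # one loss spike from an unstable epoch
```

a single spike pulls the mean far above the typical loss of the example, so this otherwise-clean example would be ranked as if it were mislabeled.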
Two distribution-free robust mean estimators are introduced as follows.

Soft truncation. We extend a classical M-estimator from Catoni (2012) and exploit the widest possible choice of the influence function. More specifically, for a random variable x, consider a non-decreasing influence function ψ : ℝ → ℝ such that

ψ(x) = log(1 + x + x²/2),  x ≥ 0.   (2)

The choice of ψ is inspired by the Taylor expansion of the exponential function, which makes the estimation more robust by holistically reducing the side effect of extreme values. An illustration of this influence function is provided in Appendix A.1. For our task, given the observed training losses L_t = {ℓ_1, ..., ℓ_t}, we estimate the mean robustly as

μ̃_s = (1/t) Σ_{i=1}^{t} ψ(ℓ_i).   (3)

We term the robust mean estimator (3) the soft estimator.

Hard truncation. We propose a new robust mean estimator based on hard truncation. Specifically, given the observed training losses L_t, we first exploit the k-nearest-neighbor (kNN) algorithm (Liao & Vemuri, 2002) to remove some underlying outliers in L_t. The number of outliers is denoted by t_o (t_o < t), which can be determined adaptively as discussed in Zhao et al. (2019). Note that other algorithms, e.g., principal component analysis (Shyu et al., 2003) or the local outlier factor (Breunig et al., 2000), could also be employed to identify underlying outliers in L_t; we employ kNN mainly because of its relatively low computational cost (Zhao et al., 2019). The truncated set of loss observations is denoted by L_{t−t_o}, which we then use for mean estimation. As the potential outliers are removed with high probability, the robustness of the estimate is enhanced. Denoting the resulting estimate by μ̃_h, we have

μ̃_h = (1/(t − t_o)) Σ_{ℓ_i ∈ L_{t−t_o}} ℓ_i.   (4)

The estimator (4) is termed the hard estimator. We derive concentration inequalities for the soft and hard estimators respectively.
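A minimal sketch of the two estimators in pure Python. Two assumptions are flagged: the ψ branch for x < 0 follows Catoni (2012) (the paper only specifies x ≥ 0, which suffices for non-negative losses), and a simple 1-D kNN-distance score stands in for the kNN outlier-detection procedure of Liao & Vemuri (2002), with t_o assumed given rather than chosen adaptively as in Zhao et al. (2019):

```python
import math

def psi(x):
    # Influence function of Eq. (2); the x < 0 branch follows Catoni (2012)
    # (an assumption: the paper only states the x >= 0 branch).
    if x >= 0:
        return math.log(1 + x + x * x / 2)
    return -math.log(1 - x + x * x / 2)

def soft_estimator(losses):
    # Soft truncation (Eq. (3)): average of psi-transformed losses; the log
    # compresses extreme losses instead of discarding them.
    return sum(psi(l) for l in losses) / len(losses)

def hard_estimator(losses, t_o, k=2):
    # Hard truncation (Eq. (4)): drop t_o suspected outliers, then average.
    # Each loss is scored by the distance to its k-th nearest neighbor (a
    # simple 1-D stand-in for the paper's kNN outlier detection).
    def knn_score(i):
        dists = sorted(abs(losses[i] - losses[j])
                       for j in range(len(losses)) if j != i)
        return dists[min(k, len(dists)) - 1]
    keep = sorted(range(len(losses)), key=knn_score)[:len(losses) - t_o]
    return sum(losses[i] for i in keep) / len(keep)
```

For example, `hard_estimator([0.1, 0.2, 0.15, 10.0], t_o=1)` discards the outlying loss 10.0 before averaging, while the plain mean would be pulled far above the typical loss.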
The search strategy for examples that are rarely selected, and the overall selection criterion, are then provided. Note that we do not need to explicitly quantify the mean of the training losses: we only need to sort the training examples by the proposed selection criterion and then use the selected examples for robust training.

Theorem 1. Let Z_n = {z_1, ..., z_n} be an observation set with mean μ_z and variance σ², and exploit the non-decreasing influence function ψ(z) = log(1 + z + z²/2). For any ε > 0, the deviation of (1/n) Σ_{i=1}^{n} ψ(z_i) from μ_z satisfies a two-sided concentration bound, stated as inequality (5), that holds with probability at least 1 − 2ε. The explicit bound and its proof can be found in Appendix A.1.

Theorem 2. Let Z_n = {z_1, ..., z_n} be a (not necessarily time-homogeneous) Markov chain with mean μ_z, taking values in a Polish state space Λ_1 × ... × Λ_n, and with minimal mixing time τ_min. Denote the set obtained by hard truncation by Z_{n_o}, with n_o < n, and suppose |z_i| is upper-bounded by Z. For any ε_1 > 0 and ε_2 > 0, the deviation of (1/(n − n_o)) Σ_{z_i ∈ Z_n \ Z_{n_o}} z_i from μ_z satisfies a concentration bound, stated as inequality (6), whose terms involve √(2τ_min), the bound Z, n_o, and log(1/ε_1), log(1/ε_2), holding with probability at least 1 − ε_1 − ε_2. The explicit bound and its proof can be found in Appendix A.2.

For our task, let the training loss be upper-bounded by L. The value of L can be determined easily by training networks on noisy datasets and observing the loss distribution (Arazo et al., 2019).

Conservative search and selection criteria. We use the concentration inequalities (5) and (6) to construct the conservative search and the overall sample-selection criterion. Specifically, we exploit their lower bounds and take into account the number of times each example has been selected during training, encouraging the selection of examples that have rarely been selected. Denote the number of times an example has been selected by n_t (n_t ≤ t), and set ε = 1/(2t). For the circumstance with soft truncation, the selection criterion (7) subtracts from the robust mean μ̃_s a confidence-bound term that grows as n_t shrinks. For the situation with hard truncation, rewriting (6) yields the criterion (8), whose confidence-bound term involves √(2τ_min), the loss bound L, t, t_o, and √(log(4t)/n_t). Note that we directly replace t with n_t in these bounds.
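Both criteria share the shape "robust loss mean minus a confidence bonus that grows as n_t shrinks". A schematic sketch of this lower-confidence-bound scoring (the exact constants of Eqs. (7) and (8), which involve σ², τ_min, and the loss bound L, are in the paper's appendix; `sigma2` below is a stand-in hyper-parameter of our own):

```python
import math

def selection_score(robust_mean, n_t, t, sigma2=1e-2):
    # Examples are ranked by (robust loss mean) - (confidence bonus) and the
    # smallest scores are selected. Rarely selected examples (small n_t) get a
    # larger bonus and hence a smaller score, realizing the conservative
    # search for possibly-clean large-loss examples.
    eps = 1.0 / (2.0 * t)                                  # as set in the paper
    bonus = math.sqrt(2.0 * sigma2 * math.log(1.0 / eps) / max(n_t, 1))
    return robust_mean - bonus
```

With a small `sigma2`, the bonus stays modest, so clearly mislabeled (large-loss, often-seen) examples are still avoided; this mirrors the paper's conservative initialization of σ and τ_min.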
If an example is rarely selected during training, n_t will be far smaller than t, which causes the lower bounds to change drastically. Hence, we do not use the mean of all training losses, but the mean of training losses within fixed-length time intervals; more details can be found in Section 3. The selection criteria (7) and (8) each consist of two terms, one of which carries a minus sign. The first term in Eq. (7) (or Eq. (8)) reduces the uncertainty of small-loss examples, via robust mean estimation of the training losses. The second term, i.e., the statistical confidence bound, encourages the network to choose examples that have rarely been selected (those with a small n_t). The two terms constrain each other and are balanced by σ² or τ_min. To avoid introducing strong assumptions on the underlying distribution of the losses (Chakrabarty & Samorodnitsky, 2012), we tune σ and τ_min with a noisy validation set.

As for the mislabeled data, although the model has high uncertainty about them (i.e., a small n_t) and tends to pick them, overfitting to mislabeled data is harmful, and mislabeled data can be rather hard to distinguish from clean data in some cases, as discussed. We therefore search for underlying clean data in a conservative way: in this paper, we initialize σ and τ_min with small values. This reduces the adverse effects of mislabeled data while still selecting clean examples with large losses, which helps generalization. More evaluations are presented in Section 3.

The overall procedure of the proposed method, which Combats Noisy Labels by Concerning Uncertainty (CNLCU), is provided in Algorithm 1. CNLCU works in a mini-batch manner, since deep networks are trained with stochastic gradient descent. Following Han et al. (2018), we exploit two networks, with parameters θ1 and θ2 respectively, that teach each other. Specifically, when a mini-batch S̄ is formed (Step 3), we let the two networks each select a small proportion of the examples in this mini-batch with Eq. (7) or (8) (Steps 4 and 5). The number of selected instances is controlled by the function R(T): the two networks only select an R(T) fraction of the examples in the mini-batch. The value of R(T) should be larger at the beginning of training and smaller as the number of epochs grows, which makes better use of the memorization effect of deep networks (Han et al., 2018) for sample selection. The selected instances are then fed to the peer network for parameter updates (Steps 6 and 7).

Algorithm 1: CNLCU.
1: Input: networks θ1 and θ2, learning rate η, fixed τ, epochs T_k and T_max, iterations t_max;
for T = 1, 2, ..., T_max do
  2: Shuffle the training dataset S̃;
  for t = 1, ..., t_max do
    3: Fetch a mini-batch S̄ from S̃;
    4: Obtain S̄1 = arg min_{S′: |S′| ≥ R(T)|S̄|} ℓ⋆(θ1, S′); // calculated with Eq. (7) or Eq. (8)
    5: Obtain S̄2 = arg min_{S′: |S′| ≥ R(T)|S̄|} ℓ⋆(θ2, S′); // calculated with Eq. (7) or Eq. (8)
    6: Update θ1 = θ1 − η∇ℓ(θ1, S̄2);
    7: Update θ2 = θ2 − η∇ℓ(θ2, S̄1);
  end
  8: Update R(T) = 1 − min{(T/T_k)·τ, τ};
end
9: Output: θ1 and θ2.

Experiments

In this section, we evaluate the robustness of the proposed method to noisy labels with comprehensive experiments on synthetic balanced noisy datasets (Section 3.1), synthetic imbalanced noisy datasets (Section 3.2), and a real-world noisy dataset (Section 3.3).

Experiments on synthetic balanced noisy datasets

Datasets. We verify the effectiveness of our method on manually corrupted versions of the following datasets: MNIST (LeCun et al.), F-MNIST (Xiao et al., 2017), CIFAR-10 (Krizhevsky, 2009), and CIFAR-100 (Krizhevsky, 2009), as these datasets are popularly used for evaluating learning with noisy labels in the literature (Han et al., 2018; Yu et al., 2019; Wu et al., 2021; Lee et al., 2019). All four datasets are class-balanced.
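Steps 4–7 of Algorithm 1 reduce to the following index bookkeeping (a sketch in index form; the actual gradient updates via η∇ℓ and the score computation via Eq. (7) or (8) are abstracted away, and the function name is ours):

```python
def cnlcu_step(scores1, scores2, remember_rate):
    # scores1 / scores2: the uncertainty-aware selection scores that network 1
    # and network 2 assign to the examples of the current mini-batch.
    n = len(scores1)
    n_keep = max(1, int(remember_rate * n))
    # Steps 4-5: each network keeps the R(T) fraction with the smallest scores.
    sel1 = sorted(range(n), key=lambda i: scores1[i])[:n_keep]
    sel2 = sorted(range(n), key=lambda i: scores2[i])[:n_keep]
    # Steps 6-7: cross-update -- network 1 is trained on network 2's selection
    # and vice versa, so the two networks' selection errors do not accumulate
    # in either single network.
    return {"train_net1_on": sel2, "train_net2_on": sel1}
```

Each returned index list would be fed to a gradient step on the corresponding network's loss over those examples.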
The important statistics of the synthetic datasets are summarized in Appendix B.1.

Generating noisy labels. We consider broad types of label noise: (1) symmetric noise (abbreviated Sym.) (Wu et al., 2020; Ma et al., 2018); (2) asymmetric noise (Asym.) (Ma et al., 2020; Xia et al., 2021; Wei et al., 2020); (3) pairflip noise (Pair.) (Han et al., 2018; Yu et al., 2019; Zheng et al., 2020); (4) tridiagonal noise (Trid.) (Zhang et al., 2021); (5) instance noise (Ins.) (Cheng et al., 2020; Xia et al., 2020). The noise rate is set to 20% and 40% to ensure that clean labels remain diagonally dominant (Ma et al., 2020). More details about the above noise types are provided in Appendix B.1. We leave out 10% of the noisy training examples as a validation set.

Baselines. We compare the proposed method (Algorithm 1) with the following methods that focus on sample selection. All methods are implemented with default parameters in PyTorch, and all experiments are conducted on NVIDIA Titan Xp GPUs. (1) S2E (Yao et al., 2020a), which properly controls the sample-selection process so that deep networks can better benefit from the memorization effect. (2) MentorNet (Jiang et al., 2018), which learns a curriculum to filter out noisy data; we use self-paced MentorNet in this paper. (3) Co-teaching (Han et al., 2018), which trains two networks simultaneously and cross-updates the parameters of the peer networks. (4) SIGUA (Han et al., 2020), which exploits stochastic integrated gradient underweighted ascent to handle noisy labels; we use self-teaching SIGUA in this paper. (5) JoCoR (Wei et al., 2020), which reduces the diversity of the two networks to improve robustness. To avoid overly dense tables, results for other sample-selection methods and other types of baselines, such as adding regularization, are presented in Appendix B.2. In the following, we term our methods with soft truncation and hard truncation CNLCU-S and CNLCU-H respectively.
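Symmetric noise, the simplest of the five noise types above, flips each label with the given probability to a uniformly random other class. A minimal sketch of such an injection (a standard construction; the function name is ours):

```python
import random

def add_symmetric_noise(labels, noise_rate, num_classes, seed=0):
    # Symmetric label noise: each label is flipped with probability
    # `noise_rate`, and the replacement is drawn uniformly from the other
    # num_classes - 1 classes. With noise_rate < (K-1)/K the clean label
    # stays the most likely one per class (diagonal dominance).
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy
```

Asymmetric, pairflip, and tridiagonal noise replace the uniform choice with class-dependent transition rows; instance noise additionally conditions on the input features.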
Network structure and optimizer. For MNIST, F-MNIST, and CIFAR-10, we use the 9-layer CNN structure from Han et al. (2018); due to limited space, the experimental details for CIFAR-100 are provided in Appendix B.3. All network structures used here are standard test beds for weakly-supervised learning. For all experiments, the Adam optimizer (Kingma & Ba, 2014) (momentum = 0.9) is used with an initial learning rate of 0.001, the batch size is set to 128, and we run 200 epochs. We linearly decay the learning rate to zero from epoch 80 to epoch 200, as in Han et al. (2018). We take two networks with the same architecture but different initializations as the two classifiers, as in Han et al. (2018), Yu et al. (2019), and Wei et al. (2020), since even with the same network and optimization method, different initializations can lead to different local optima (Han et al., 2018). Details of the network structures can be found in Appendix C. The hyper-parameters σ² and τ_min are determined in the range {10⁻¹, 10⁻², 10⁻³, 10⁻⁴} with a noisy validation set. Note that the use of these hyper-parameters aims to reduce the dependency on strong assumptions and thus make our methods perform well in practice; more details are provided in Appendix D. Here, we assume the noise level τ is known and set R(T) = 1 − min{(T/T_k)·τ, τ} with T_k = 10. If τ is not known in advance, it can be inferred using validation sets (Liu & Tao, 2016; Yu et al., 2018). As the performance measurement, we use test accuracy, i.e., test accuracy = (# of correct predictions) / (# of test examples). All experiments are repeated five times; we report the mean and standard deviation of the results.

Experimental results. The test-accuracy results are provided in Tables 1, 2, and 3. Specifically, for MNIST, our proposed methods, i.e., CNLCU-S and CNLCU-H, produce the best results in the vast majority of cases.
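The keep-ratio schedule R(T) = 1 − min{(T/T_k)·τ, τ} used above can be sketched directly:

```python
def remember_rate(t, tau, t_k=10):
    # R(T) = 1 - min{(T / T_k) * tau, tau}: keep all examples at the start and
    # linearly decay to keeping a 1 - tau fraction over the first T_k epochs,
    # then stay there. This exploits the memorization effect: deep networks
    # fit clean patterns before noisy ones, so aggressive filtering is only
    # safe after the warm-up.
    return 1.0 - min(t / t_k * tau, tau)
```

For example, with τ = 0.2 and T_k = 10, R decreases from 1.0 at epoch 0 to 0.8 at epoch 10 and remains 0.8 afterwards.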
In some cases, such as asymmetric noise, the baseline S2E outperforms ours, as it benefits from accurately estimating the number of selected small-loss examples.

Table 1: Test accuracy (%) on MNIST over the last ten epochs (rows: Sym., Asym., Pair., Trid., Ins. noise; columns: MentorNet, Co-teaching, SIGUA, JoCoR, CNLCU-S, CNLCU-H). The best two results are in bold.

Table 2: Test accuracy (%) on F-MNIST over the last ten epochs (same layout as Table 1). The best two results are in bold.

For F-MNIST, the training data become more complicated. S2E cannot achieve accurate estimation in this situation and thus does not perform as strongly as it did on MNIST; our methods lead the baselines by varying margins. For CIFAR-10, our methods once again outperform all baseline methods. Although some baselines, e.g., Co-teaching, work well in certain cases, the experimental results show that they cannot handle the variety of noise types; in contrast, the proposed methods achieve superior robustness against broad noise types. These results suggest that our methods can be better applied in real scenarios, where the noise is diverse.

Ablation study. We first conduct an ablation study on the sensitivity to the length of the time intervals. To avoid overly dense figures, we use MNIST and F-MNIST with the noise settings above as representative examples. For CNLCU-S, the length of the time interval is chosen in the range 3 to 8; for CNLCU-H, in the range 10 to 15. The reason for the different ranges lies in their different mechanisms: CNLCU-S holistically changes the behavior of the losses but does not remove any loss from the loss set, so it does not need a long time interval, whereas CNLCU-H needs to remove some outliers from the loss set, as discussed.
Its interval therefore needs to be longer to guarantee enough remaining observations for robust mean estimation. The experimental results, provided in Appendix B.4, show that the proposed CNLCU-S and CNLCU-H are robust to the choice of the time-interval length. This robustness to hyper-parameters means our methods can be applied in practice without much effort spent on tuning. Furthermore, since our methods address uncertainty from two sides, i.e., the uncertainty of both small-loss and large-loss examples, we conduct experiments analyzing each part of our methods. Also, as mentioned, we compare robust mean estimation with non-robust mean estimation for learning with noisy labels; more details are provided in Appendix B.4.

Table 3: Test accuracy (%) on CIFAR-10 over the last ten epochs (rows: Sym., Asym., Pair., Trid., Ins. noise; columns: MentorNet, Co-teaching, SIGUA, JoCoR, CNLCU-S, CNLCU-H). The best two results are in bold.

Experiments on synthetic imbalanced noisy datasets

Experimental setup. We exploit MNIST and F-MNIST. For these two datasets, we reduce the number of training examples with labels "0" through "4" to 1% of their original counts; we term the resulting synthetic imbalanced noisy datasets Im-MNIST and Im-F-MNIST respectively. This setting aims to simulate the extremely imbalanced circumstances that are common in practice. Moreover, we exploit asymmetric noise, since this type of noise produces more imbalanced cases (Patrini et al., 2017; Ma et al., 2020). Other settings, such as the network structure and optimizer, are the same as in the experiments on synthetic balanced noisy datasets. As performance measurements, we use test accuracy and, in addition, the selected ratio of training examples from the imbalanced classes, i.e., selected ratio = (# of selected imbalanced labels) / (# of all selected labels).
Intuitively, a higher selected ratio means the proposed method makes better use of training examples from the imbalanced classes, leading to better generalization (Kang et al., 2020).

Experimental results. The test accuracy achieved on Im-MNIST and Im-F-MNIST is presented in Figure 2. Recalling the experimental results in Tables 1 and 2, we can see that the imbalance issue is catastrophic for the sample-selection approach to learning with noisy labels. On Im-MNIST, all baselines overfit severely in the early stages of training, and their test-accuracy curves drop dramatically. In contrast, the proposed CNLCU-S and CNLCU-H can give large-loss but rarely selected data a try, as such data may well be clean examples carrying the imbalanced labels; our methods therefore always outperform the baselines clearly. In the Asym. 10% case, our methods achieve a lead of nearly 30% over the baselines. On Im-F-MNIST, our methods also perform well and always lead all baselines by about 5%. Note that, due to the huge challenge of this task, some baselines, e.g., S2E, have large error bars. In addition, the baseline SIGUA performs badly, because SIGUA applies stochastic integrated gradient underweighted ascent to large-loss examples, which makes the examples from the imbalanced classes even harder to select than in other sample-selection methods. Due to limited space, the selected ratios achieved on Im-MNIST and Im-F-MNIST are presented in Appendix B.5; they explain well why our methods perform better than the multiple baselines.

Experiments on real-world noisy datasets

Experimental setup. To verify the efficacy of our methods in real-world scenarios, we conduct experiments on the noisy dataset Clothing1M (Xiao et al., 2015). Specifically, for experiments on Clothing1M, we use the 1M images with noisy labels for training and the 10k clean images for testing.
Note that we do not use the 50k clean training images in any experiment. For preprocessing, we resize each image to 256×256, crop the center 224×224 as the input, and perform normalization. The experiments on Clothing1M are performed once due to the huge computational cost. We leave out 10% of the noisy training data as a validation set for model selection, and do not exploit the resampling trick during training (Li et al., 2020a). Here, "Best" denotes the test accuracy at the epoch where the validation accuracy was optimal, and "Last" denotes the test accuracy at the final epoch.

Figure 2: Test accuracy vs. number of epochs on Im-MNIST and Im-F-MNIST. The shaded area in each plot indicates the standard deviation.

For the experiments on Clothing1M, we use ResNet-18 and ResNet-50, both pretrained on ImageNet. We also use the Adam optimizer and set the batch size to 64. During the training stage, we run 15 epochs in total, with learning rates of 8×10⁻⁴, 5×10⁻⁴, and 5×10⁻⁵ for 5 epochs each.

Experimental results. The results on Clothing1M are provided in Table 4. Specifically, the proposed methods achieve better "Best" results than the state-of-the-art methods: with ResNet-18 we achieve improvements of +1.28% and +0.99%, and with ResNet-50 improvements of +2.51% and +2.16%. Likewise, the proposed methods outperform all baselines on "Last": we achieve improvements of +1.01% and +0.54% with ResNet-18, and of +2.47% and +2.05% with ResNet-50. All these results verify the effectiveness of the proposed methods.

Table 4: Test accuracy (%) on Clothing1M (columns: Best (R-18), Last (R-18), Best (R-50), Last (R-50)). "R-18" (resp. "R-50") means that we exploit ResNet-18 (resp. ResNet-50). The best two results are in bold.

Combining with semi-supervised learning.
For combating noisy labels in real-world noisy datasets, state-of-the-art methods such as DivideMix (Li et al., 2020a) typically employ semi-supervised learning techniques. As our methods mainly focus on sample selection, to make the comparison fair we combine our methods with semi-supervised learning: the sample-selection procedure in DivideMix is replaced by our methods, with all other settings kept the same. Following prior work (Ma et al., 2020), the experiments are conducted on three real-world noisy datasets, i.e., Food-101 (Bossard et al., 2014), WebVision (mini) (Li et al., 2017), and Clothing1M (Xiao et al., 2015). The results are provided in Table 5. As can be seen, the proposed methods are superior and can be used to improve the cutting-edge performance.

Table 5: Test accuracy (%) on the three real-world datasets (rows: DivideMix, DivideMix-S, DivideMix-H; columns: Food-101, WebVision (mini), Clothing1M). DivideMix-S (resp. DivideMix-H) means that our CNLCU-S (resp. CNLCU-H) is combined with the advanced techniques in DivideMix. The best two results are in bold.

Conclusion

In this paper, we focus on improving prior sample-selection approaches to learning with noisy labels by accounting for the uncertainty of losses during training. We robustly use the training losses at different iterations to reduce the uncertainty of small-loss examples, and adopt confidence-interval estimation to reduce the uncertainty of large-loss examples. Experiments conducted on benchmark datasets demonstrate the effectiveness of our method. We believe this paper opens up new possibilities for handling noisy labels via sample selection, especially for improving the robustness of models on imbalanced noisy datasets.

Ethics statement
y0VvIg25yk.pdf | 2022 | 0

On the Learning and Learnability of Quasimetrics

Tongzhou Wang, MIT CSAIL; Phillip Isola, MIT CSAIL

Abstract

Our world is full of asymmetries. Gravity and wind can make reaching a place easier than coming back. Social artifacts such as genealogy charts and citation graphs are inherently directed. In reinforcement learning and control, optimal goal-reaching strategies are rarely reversible (symmetrical). Distance functions supported on these asymmetrical structures are called quasimetrics. Despite their common appearance, little research has been done on the learning of quasimetrics. Our theoretical analysis reveals that a common class of learning algorithms, including unconstrained multilayer perceptrons (MLPs), provably fails to learn a quasimetric consistent with training data. In contrast, our proposed Poisson Quasimetric Embedding (PQE) is the first quasimetric learning formulation that both is learnable with gradient-based optimization and enjoys strong performance guarantees. Experiments on random graphs, social graphs, and offline Q-learning demonstrate its effectiveness over many common baselines. Project page: ssnl.github.io/quasimetric. Code: github.com/ssnl/poisson_quasimetric_embedding.

Introduction

Learned symmetrical metrics have proven useful for innumerable tasks, including dimensionality reduction (Tenenbaum et al., 2000), clustering (Xing et al., 2002), classification (Weinberger et al., 2006; Hoffer & Ailon, 2015), and information retrieval (Wang et al., 2014). However, the real world is largely asymmetrical, and symmetrical metrics can only capture a small fraction of it. Generalizing metrics, quasimetrics (Defn. 2.1) allow for asymmetrical distances and can be found in a wide range of domains (see Fig. 1). Ubiquitous physical forces, such as gravity and wind, as well as human-defined rules, such as one-way roads, make the traveling time between places a quasimetric.
Furthermore, many of our social artifacts are directed graphs: genealogy charts, the follow relation on Twitter (Leskovec & Krevl, 2014), citation graphs (Price, 2011), hyperlinks over the Internet, etc. Shortest paths on these graphs naturally induce quasimetric spaces. In fact, we can generalize to Markov decision processes (MDPs) and observe that optimal goal-reaching plan costs (i.e., universal value/Q-functions (Schaul et al., 2015; Sutton et al., 2011)) always form a quasimetric (Bertsekas & Tsitsiklis, 1991; Tian et al., 2020). Moving on to more abstract structures, quasimetrics can also be found as expected hitting times in Markov chains, and as conditional Shannon entropy H(· | ·) in information theory. (See the appendix for proofs and discussions of these quasimetrics.)

In this work, we study the task of quasimetric learning. Given a sampled training set of pairs and their quasimetric distances, we ask: how well can we learn a quasimetric that fits the training data? We define quasimetric learning in analogy to metric learning: whereas metric learning is the problem of learning a metric function, quasimetric learning is the problem of learning a quasimetric function. This may involve searching over a hypothesis space constrained to include only quasimetric functions (which is what our method does), or it could involve searching for approximately quasimetric functions (we compare to and analyze such approaches). Successful formulations have many potential applications, such as structural priors in reinforcement learning (Schaul et al., 2015; Tian et al., 2020), graph learning (Rizi et al., 2018), and causal relation learning (Balashankar & Subramanian, 2021). Towards this goal, our contributions are:

• We study the quasimetric learning task with two goals: (1) fitting training data well and (2) respecting quasimetric constraints (Sec. 3);

Figure 1: Examples of quasimetric spaces. The car drawing is borrowed from Sutton & Barto (2018).
• We prove that a large family of algorithms, including unconstrained networks trained in the neural tangent kernel (NTK) regime (Jacot et al., 2018), fail at this task, while a learned embedding into a latent quasimetric space can potentially succeed (Sec. 4);

• We propose Poisson Quasimetric Embeddings (PQEs), the first quasimetric embedding formulation learnable with gradient-based optimization that also enjoys strong theoretical guarantees on approximating arbitrary quasimetrics (Sec. 5);

• Our experiments complement the theory and demonstrate the benefits of PQEs on random graphs, social graphs, and offline Q-learning (Sec. 6).

Preliminaries on quasimetrics and Poisson processes

A quasimetric space is a generalization of a metric space in which all requirements of a metric are satisfied, except that the distances can be asymmetrical.

Definition 2.1 (Quasimetric space). A quasimetric space is a pair (X, d), where X is a set of points and d : X × X → [0, ∞] is the quasimetric, satisfying the following conditions:

∀x, y ∈ X:   x = y ⟺ d(x, y) = 0;   (identity of indiscernibles)
∀x, y, z ∈ X:   d(x, y) + d(y, z) ≥ d(x, z).   (triangle inequality)

Being asymmetric, quasimetrics are often thought of as (shortest-path) distances of some (possibly infinite) weighted directed graph. A natural way to quantify the complexity of a quasimetric is to consider that of its underlying graph; quasimetric treewidth is an instantiation of this idea.

Definition 2.2 (Treewidth of quasimetric spaces (Mémoli et al., 2018)). Consider a quasimetric space M as the shortest-path distances on a positively-weighted directed graph. The treewidth of M is the minimum over all such graphs' treewidths.

Poisson processes are commonly used to model events (or points) randomly occurring across a set A (Kingman, 2005), e.g., raindrops hitting a windshield, or photons captured by a camera.
The number of such events within a subset of A is modeled as a Poisson distribution, whose mean is given by a measure μ of A that determines how "frequently the events happen at each location".

Definition 2.3 (Poisson process). For a nonatomic measure μ on a set A, a Poisson process on A with mean measure μ is a random countable subset P ⊂ A (i.e., the random events/points) such that
• for any disjoint measurable subsets A1, ..., An of A, the random variables N(A1), ..., N(An) are independent, where N(B) := #{P ∩ B} is the number of points of P in B, and
• N(B) has the Poisson distribution with mean μ(B), denoted Pois(μ(B)).

Fact 2.4 (Differentiability of P[N(A1) ≤ N(A2)]). For two measurable subsets A1, A2,

P[N(A1) ≤ N(A2)] = P[Pois(μ(A1 \ A2)) ≤ Pois(μ(A2 \ A1))],

where the two Poisson variables are independent. Furthermore, for independent X ∼ Pois(μ1), Y ∼ Pois(μ2), the probability P[X ≤ Y] is differentiable w.r.t. μ1 and μ2. In the special case where μ1 or μ2 is zero, we can simply compute (since Pois(0) is always 0)

P[X ≤ Y] = e^{−(μ1 − μ2)+},   (2)

where x+ := max(0, x). For general μ1, μ2, this probability and its gradients can be obtained via a connection to the noncentral χ² distribution (Johnson, 1959). We derive the formulas in the appendix. Therefore, if A1 and A2 are parametrized by some θ such that μ(A1 \ A2) and μ(A2 \ A1) are differentiable w.r.t. θ, so is P[N(A1) ≤ N(A2)].

Figure 2: Quasimetric learning on a 3-element space. Leftmost: the training set contains all pairs except (a, c); arrow labels show quasimetric distances (rather than edge weights). The triangle inequality implies d̂(a, c) ≤ d(a, b) + d(b, c) = 31 and d̂(a, c) ≥ d(a, b) − d(c, b) = 28, so a quasimetric d̂ should predict d̂(a, c) ∈ [28, 31]. Right three: different formulations are trained to fit training-pair distances and then predict on the test pair; the plots show the distribution of the prediction over 100 runs.
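The probability in Fact 2.4 can be checked numerically by direct summation over the law of Y (a pure-Python sketch; the paper's actual computation goes through the noncentral χ² connection, and the function names are ours):

```python
import math

def pois_pmf(k, mu):
    # P(Pois(mu) = k); note 0**0 == 1 in Python, so pois_pmf(0, 0) == 1,
    # i.e., Pois(0) is the point mass at 0.
    return math.exp(-mu) * mu**k / math.factorial(k)

def prob_leq(mu1, mu2, tol=1e-12):
    # P[X <= Y] for independent X ~ Pois(mu1), Y ~ Pois(mu2):
    # sum over y of P(Y = y) * P(X <= y), truncated once the terms are
    # negligible past the bulk of Y's distribution.
    total, y = 0.0, 0
    while True:
        p_y = pois_pmf(y, mu2)
        cdf_x = sum(pois_pmf(x, mu1) for x in range(y + 1))  # P(X <= y)
        total += p_y * cdf_x
        if y > mu2 and p_y < tol:
            return total
        y += 1
```

For instance, with μ2 = 0 this recovers the special case of Eq. (2): P[X ≤ Y] = P[X = 0] = e^{−μ1}; with μ1 = 0 it returns 1.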
Quasimetric learning

Consider a quasimetric space (X, d). The quasimetric learning task aims to infer a quasimetric from observing a training set {(x_i, y_i, d(x_i, y_i))}_i ⊂ X × X × [0, ∞]. Naturally, our goals for a learned predictor d̂ : X × X → ℝ are: respecting the quasimetric constraints, and fitting the training distances. Crucially, we are not simply aiming for the usual sense of generalization, i.e., low population error. Knowing that the true distances have a quasimetric structure, we can better evaluate predictors and desire ones that fit the training data and are (approximately) quasimetrics. These objectives also indirectly capture generalization, because a predictor failing either requirement must have large error on some pairs whose true distances follow the quasimetric constraints. We formalize this relation in Thm. 4.3.

Learning algorithms and hypothesis spaces

Ideally, the learning should scale well with data, potentially generalize to unseen samples, and support integration with other deep learning systems (e.g., via differentiation).

Relaxed hypothesis spaces. One can simply learn a generic function approximator that maps the (concatenated) input pair to a scalar predicting the pair's distance, or a transformed version of it (e.g., the log distance). This approach has been adopted for learning graph distances (Rizi et al., 2018) and plan costs in MDPs (Tian et al., 2020). When the function approximator is a deep neural network, we refer to such methods as unconstrained networks. While they are known to fit training data well (Jacot et al., 2018), in this paper we also investigate whether they learn to be (approximately) quasimetrics.

Restricted hypothesis spaces. Alternatively, we can encode each input into a latent space Z, where a latent quasimetric d_z gives the distance prediction. This guarantees learning a quasimetric over the data space X.
Often d_z is restricted to a subset unable to approximate all quasimetrics, i.e., an overly restricted hypothesis space, as with metric embeddings and the recently proposed DeepNorm and WideNorm (Pitis et al., 2020). While our proposed Poisson Quasimetric Embedding (PQE) (specified in Sec. 5) is also a latent quasimetric, it can approximate arbitrary quasimetrics (and is differentiable). PQE thus searches in a space that approximates all quasimetrics, and only quasimetrics.

A toy example

To build up intuition on how various algorithms perform according to our two goals, we consider the toy quasimetric space with only 3 elements in Fig. 2. The space has a total of 9 pairs, 8 of which form the training set. Due to the quasimetric requirements (especially the triangle inequality), knowing the distances of these 8 pairs restricts the valid values for the held-out pair to a particular range ([28, 31] in this case). If a model approximates the 8 training pairs well and respects the quasimetric constraints well, its prediction on that held-out pair should fall into this range. We train three models w.r.t. the mean squared error (MSE) over the training set using gradient descent:
• an unconstrained deep network that predicts the distance,
• a metric embedding into a latent Euclidean space with a deep encoder,
• a quasimetric embedding into a latent PQE space with a deep encoder (our method from Sec. 5).

The three approaches exhibit interesting qualitative differences. The Euclidean embedding, unable to model the asymmetries in the training data, fails to attain a low training error. While both other methods approximate the training distances well, unconstrained networks greatly violate the quasimetric constraints; only PQEs respect the constraints and consistently predict within the valid range. Here, the structural prior of embedding into a quasimetric latent space appears important for successful learning; without any such prior, unconstrained networks fail badly.
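The valid range for the held-out pair follows mechanically from the triangle inequality. A sketch computing it from observed distances; note the 8 concrete training distances of Fig. 2 are not recoverable from this text, so the values below are hypothetical, chosen only to reproduce the bounds d(a,b) + d(b,c) = 31 and d(a,b) − d(c,b) = 28 stated in the caption:

```python
def interval_for_missing_pair(d, a, c, points):
    # Triangle-inequality bounds on an unobserved quasimetric distance d(a, c):
    #   upper: d(a, c) <= d(a, k) + d(k, c) for every intermediate k
    #   lower: d(a, c) >= d(a, k) - d(c, k)  (from d(a,c) + d(c,k) >= d(a,k))
    #          d(a, c) >= d(k, c) - d(k, a)  (from d(k,a) + d(a,c) >= d(k,c))
    mids = [k for k in points if k not in (a, c)]
    upper = min(d[a, k] + d[k, c] for k in mids)
    lower = max(max(d[a, k] - d[c, k], d[k, c] - d[k, a]) for k in mids)
    return max(lower, 0), upper

# Hypothetical distances (NOT the paper's actual values), consistent with the
# quasimetric axioms on the 8 observed pairs and with the Fig. 2 bounds:
d = {("a", "b"): 29, ("b", "a"): 29, ("b", "c"): 2,
     ("c", "b"): 1, ("c", "a"): 30}
```

Running `interval_for_missing_pair(d, "a", "c", ["a", "b", "c"])` then recovers the interval [28, 31] that a consistent quasimetric predictor must respect.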
in the next section, we present a rigorous theoretical study of the quasimetric learning task, which confirms this intuition. theoretical analysis of various learning algorithms in this section, we define concrete metrics for the two quasimetric learning objectives stated above, and present positive and negative theoretical findings for various learning algorithms. overview. our analysis focuses on data-agnostic bounds, which are often of great interest in machine learning (e.g., vc-dimension (vapnik & chervonenkis, 2015)). we prove a strong negative result for a general family of learning algorithms (including unconstrained mlps trained in the ntk regime, k-nearest neighbor, and min-norm linear regression): they can fail arbitrarily badly at fitting training data or respecting quasimetric constraints (thm. 4.6). our informative construction reveals the core reason for their failure. quasimetric embeddings, however, enjoy nice properties as long as they can approximate arbitrary quasimetrics, which motivates searching for “universal quasimetrics”. the next section presents pqes as such universal approximators and states their theoretical guarantees. assumptions. we consider quasimetric spaces (x, d) with x ⊂ r^d, finite size n = |x| < ∞, and finite distances (i.e., d has range [0, ∞)). this restriction allows us to discuss deep networks, which cannot handle infinities well. this mild assumption can be satisfied by simply capping maximum distances in quasimetrics. for training, m < n² pairs are sampled uniformly without replacement as the training set s ⊂ x × x. in the appendix, we provide all full proofs, further discussions of our assumptions and presented results, as well as additional results concerning specific learning algorithms and settings. distortion and violation metrics for quasimetric learning we use distortion as a measure of how well the distance is preserved, as is standard in embedding analyses (e.g., bourgain (1985)).
in this work, we especially consider distortion over a subset of pairs, to quantify how well a predictor ˆd approximates distances over the training subset s. definition 4.1 (distortion). the distortion of ˆd over a subset of pairs s ⊂ x × x is diss(ˆd) ≜ (max_{(x,y)∈s, x≠y} ˆd(x,y)/d(x,y)) · (max_{(x,y)∈s, x≠y} d(x,y)/ˆd(x,y)), and its overall distortion is dis(ˆd) ≜ dis_{x×x}(ˆd). for measuring consistency w.r.t. quasimetric constraints, we define the (quasimetric) violation metric. violation focuses on the triangle inequality, which can often be more complex (e.g., in fig. 2) than the relatively simple non-negativity and identity of indiscernibles. definition 4.2 (quasimetric violation). the quasimetric violation (violation for short) of ˆd is vio(ˆd) ≜ max_{a1,a2,a3∈x} ˆd(a1, a3) / (ˆd(a1, a2) + ˆd(a2, a3)), where we define 0/0 = 1 for notational simplicity. both distortion and violation are nicely agnostic to scaling. furthermore, assuming non-negativity and identity of indiscernibles, vio(ˆd) ≥ 1 always, with equality iff ˆd is a quasimetric. distortion and violation also capture generalization: because the true distance d has optimal training distortion (on s) and violation, a predictor ˆd that does badly on either must also be far from the truth. theorem 4.3 (distortion and violation lower-bound generalization error). for non-negative ˆd, dis(ˆd) ≥ max(diss(ˆd), vio(ˆd)), where dis(ˆd) captures generalization over the entire x space. learning algorithms equivariant to orthogonal transforms for a quasimetric space (x, d), x ⊂ r^d, we consider applying general learning algorithms by concatenating pairs to form inputs ∈ r^{2d} (e.g., unconstrained networks). while straightforward, this approach means that the algorithms are generally unable to relate the same element appearing as the 1st or 2nd input.
as we will show, this is sufficient for a wide family of learning algorithms to fail badly: those equivariant to orthogonal transforms, which we refer to as oreq algorithms (defn. 4.4). for an oreq algorithm, training on orthogonally transformed data does not affect its prediction, as long as the test data is identically transformed. many standard learning algorithms are oreq (lemma 4.5). figure 3: two training sets pose incompatible constraints for the test pair distance d(y, z). with one-hot features, an orthogonal transform can exchange (∗, y) ↔ (∗, y′) and (∗, w) ↔ (∗, w′), leaving the test pair (y, z) unchanged, but transforming the training pairs from one scenario to the other. given either training set, an oreq algorithm must attain the same training distortion and predict identically on (y, z). for appropriate c, this implies large distortion or violation in one of these cases. definition 4.4 (equivariant learning algorithms). given a training set d = {(zi, yi)}i ⊂ z × y, where zi are inputs and yi are targets, a learning algorithm alg produces a function alg(d) : z → y such that alg(d)(z′) is the function’s prediction on sample z′. consider t, a set of transformations z → z. alg is equivariant to t iff for every transform t ∈ t and training set d, alg(d) = alg(td) ◦ t, where td = {(tz, y) : (z, y) ∈ d} is the training set with transformed inputs. lemma 4.5 (examples of oreq algorithms). k-nearest-neighbor with euclidean distance, mlps trained with squared loss in the ntk regime, and min-norm least-squares linear regression are oreq. failure case.
these algorithms treat the concatenated inputs as generic vectors. if a transform fundamentally changes the quasimetric structure but is not fully reflected in the learned function (e.g., due to equivariance), learning must fail. the two training sets in fig. 3 are sampled from two different quasimetrics over the same 6 elements. an orthogonal transform links the two training sets without affecting the test pair, which is constrained differently by the two quasimetrics. an oreq algorithm, which necessarily predicts the test pair identically after seeing either training set, must thus fail on one of them. in the appendix, we empirically verify that unconstrained mlps indeed fail on this construction. extending to larger quasimetric spaces, we consider graphs containing many copies of both patterns in fig. 3. with high probability, our sampled training set fails in the same way: the learning algorithm cannot distinguish it from another training set with different quasimetric constraints. theorem 4.6 (failure of oreq algorithms). let (fn)n be an arbitrary sequence of large values. there is an infinite sequence of quasimetric spaces ((xn, dn))n with |xn| = n, xn ⊂ r^n such that, over the random training set s of size m, any oreq algorithm must output a predictor ˆd that satisfies • ˆd fails non-negativity, or • max(diss(ˆd), vio(ˆd)) ≥ fn (i.e., ˆd approximates training s badly or is far from a quasimetric), with probability 1/2 − o(1), as long as s does not contain almost all pairs (1 − m/n² = ω(n^{−1/3})) and does not include only few pairs (m/n² = ω(n^{−1/2})). furthermore, standard ntk results show that unconstrained mlps trained in the ntk regime converge to a function with zero training loss. by the above theorem, the limiting function is not a quasimetric with nontrivial probability. in the appendix, we formally state this result. despite their empirical usage, these results suggest that unconstrained networks are likely not suited for quasimetric learning.
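lemma 4.5's claim for k-nearest-neighbor is easy to check numerically: a permutation of input coordinates is an orthogonal transform, and 1-nn predictions are unchanged when training and test inputs are transformed together. a minimal sketch (random data and names are hypothetical, not from the paper's code):

```python
import math
import random

def knn1_predict(train, z):
    # 1-nearest-neighbor regression with euclidean distance
    return min(train, key=lambda zy: math.dist(zy[0], z))[1]

random.seed(0)
dim = 6
train = [(tuple(random.random() for _ in range(dim)), random.random()) for _ in range(20)]
test = [tuple(random.random() for _ in range(dim)) for _ in range(5)]

# a coordinate permutation is an orthogonal transform of r^dim
perm = list(range(dim))
random.shuffle(perm)
t = lambda z: tuple(z[i] for i in perm)
train_t = [(t(z), y) for z, y in train]

# equivariance (defn. 4.4): alg(D) == alg(tD) o t
for z in test:
    assert knn1_predict(train, z) == knn1_predict(train_t, t(z))
```

euclidean distances are invariant under orthogonal transforms, so the nearest neighbor (and hence the prediction) is identical in both views, which is exactly the equivariance property the negative result exploits.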
quasimetric embeddings a quasimetric embedding consists of a mapping f from the data space x to a latent quasimetric space (z, dz), and predicts ˆd(x, y) ≜ dz(f(x), f(y)). therefore, quasimetric embeddings always respect all quasimetric constraints and attain the optimal violation of 1, regardless of training data. however, unlike deep networks, their distortion (approximation) properties depend on the specific latent quasimetric. if the latent quasimetric can approximate any quasimetric (with flexible learned encoders such as deep networks), we have nice guarantees for both distortion and violation. in the section below, we present poisson quasimetric embedding (pqe) as such a latent quasimetric, along with its theoretical distortion and violation guarantees. poisson quasimetric embeddings (pqes) motivated by the above theoretical findings, we aim to find a latent quasimetric space (r^d, dz) with a deep network encoder f : x → r^d, and a quasimetric dz that is both universal and differentiable: • for any data quasimetric (x, d), there exists an encoder f such that dz(f(x), f(y)) ≈ d(x, y); • dz is differentiable (for optimizing f and possible integration with other gradient-based systems). notation 5.1. we use x, y for elements of the data space x, u, v for elements of the latent space r^d, upper-case letters for random variables, and (·)z for indicating functions in the latent space (e.g., dz). an existing line of machine learning research learns quasipartitions, or partial orders, via order embeddings (vendrov et al., 2015). quasipartitions are in fact special cases of quasimetrics whose distances are restricted to be binary, denoted as π. an order embedding is a representation of a quasipartition, where π^oe(x, y) = 0 (i.e., x is related to y) iff f(x) ≤ f(y) coordinate-wise: π^oe_z(f(x), f(y)) ≜ π^oe(x, y) ≜ 1 − ∏_j 1[f(x)_j − f(y)_j ≤ 0]. order embedding is universal and can model any quasipartition (see appendix and hiraguchi (1951)).
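the order-embedding quasipartition can be checked directly: the binary distance π(x, y) = 1 − 1[f(x) ≤ f(y) coordinate-wise] satisfies the triangle inequality, since f(x) ≤ f(y) ≤ f(z) implies f(x) ≤ f(z). a small numeric check (hypothetical integer embeddings):

```python
import random

def pi_oe(fx, fy):
    # order-embedding quasipartition: distance 0 iff fx <= fy coordinate-wise, else 1
    return 0 if all(a <= b for a, b in zip(fx, fy)) else 1

random.seed(0)
pts = [tuple(random.randint(0, 3) for _ in range(4)) for _ in range(15)]
for u in pts:
    for v in pts:
        for w in pts:
            # triangle inequality: the only way to violate it would be
            # pi(u,v) = pi(v,w) = 0 with pi(u,w) = 1, which transitivity forbids
            assert pi_oe(u, w) <= pi_oe(u, v) + pi_oe(v, w)
```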
can we extend this discrete idea to general continuous quasimetrics? quite naïvely, one may attempt a straightforward soft modification of the order embedding: π^softoe_z(u, v) ≜ 1 − ∏_j exp(−(u_j − v_j)_+) = 1 − exp(−∑_j (u_j − v_j)_+), which equals 0 if u ≤ v coordinate-wise, and increases toward 1 as some coordinates violate this condition more. however, it is unclear whether this gives a quasimetric. a more principled way is to parametrize a (scaled) distribution of latent quasipartitions Πz, whose expectation naturally gives a continuous-valued quasimetric: dz(u, v; Πz, α) ≜ α · E_{πz∼Πz}[πz(u, v)]. poisson quasimetric embedding (pqe) gives a general recipe for constructing such Πz distributions so that dz is universal and differentiable. within this framework, we will see that π^softoe_z is actually a quasimetric based on such a distribution and is (almost) sufficient for our needs. distributions of latent quasipartitions a random latent quasipartition πz : r^d × r^d → {0, 1} is a difficult object to model, due to complicated quasipartition constraints. fortunately, the order embedding representation (eq. (3)) is without such constraints. if, instead of fixed latents u, v, we have random latents r(u), r(v), we can compute: E_{πz∼Πz}[πz(u, v)] = E_{r(u), r(v)}[π^oe_z(r(u), r(v))] = 1 − P[r(u) ≤ r(v) coordinate-wise]. in this view, we represent a random πz via a joint distribution of random vectors¹ {r(u)}_{u∈r^d}, i.e., a stochastic process. to easily compute the probability of this coordinate-wise event, we assume that each dimension of the random vectors comes from an independent process, and obtain E_{πz∼Πz}[πz(u, v)] = 1 − ∏_j P[r_j(u) ≤ r_j(v)]. the choice of stochastic process is flexible. using poisson processes (with lebesgue mean measure; defn.
2.3) that count random points on half-lines² (−∞, a], we can take r_j(u) = N_j((−∞, u_j]), the (random) count of events in (−∞, u_j] from the j-th poisson process: E_{πz∼Πz}[πz(u, v)] = 1 − ∏_j P[N_j((−∞, u_j]) ≤ N_j((−∞, v_j])] = 1 − ∏_j exp(−(u_j − v_j)_+) = π^softoe_z(u, v), where we used fact 2.4 and the observation that one half-line is either a subset or a superset of another. indeed, π^softoe_z is an expected quasipartition (and thus a quasimetric), and is differentiable. ¹in general, these random vectors r(u) do not have to be of the same dimension as u ∈ r^d, although the dimensions do match in the pqe variants we experiment with. ²half-lines have lebesgue measure ∞. more rigorously, consider using a small value as the lower bound of these intervals, which leads to the same result. considering a mixture of such distributions for expressiveness, the full latent quasimetric formula is d^{pqe-lh}_z(u, v) ≜ ∑_i α_i (1 − exp(−∑_j (u_{i,j} − v_{i,j})_+)), where we slightly abuse notation and consider the latents u and v as (reshaped to) 2-dimensional. we will see that this is a special pqe case with lebesgue measure and half-lines, and is thus denoted pqe-lh. general pqe formulation we can easily generalize the above idea to independent poisson processes with general mean measures μ_j and (sub)set parametrizations u → A_j(u), and obtain an expected quasipartition as: E_{πz∼Π^{pqe}_z}[πz(u, v)] = 1 − ∏_j P[N_j(A_j(u)) ≤ N_j(A_j(v))] = 1 − ∏_j P[Pois(μ_j(A_j(u) \ A_j(v))) ≤ Pois(μ_j(A_j(v) \ A_j(u)))], where μ_j(A_j(u) \ A_j(v)) is the poisson rate of points landing only in A_j(u). this is differentiable as long as the measures and set parametrizations are (after set differences). similarly, considering a mixture gives us an expressive latent quasimetric.
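the pqe-lh formula above is simple enough to sanity-check numerically. the sketch below (with hypothetical shapes h = 3 components and k = 4 dimensions each) verifies d(u, u) = 0 and the triangle inequality on random latents:

```python
import math
import random

def pqe_lh(u, v, alpha):
    # d(u, v) = sum_i alpha_i * (1 - exp(-sum_j max(u[i][j] - v[i][j], 0)))
    return sum(
        a * (1.0 - math.exp(-sum(max(ui - vi, 0.0) for ui, vi in zip(urow, vrow))))
        for a, urow, vrow in zip(alpha, u, v)
    )

random.seed(0)
h, k = 3, 4  # hypothetical shapes: h mixture components, k dims each
alpha = [random.random() for _ in range(h)]
pts = [[[random.gauss(0, 1) for _ in range(k)] for _ in range(h)] for _ in range(10)]

for u in pts:
    assert pqe_lh(u, u, alpha) == 0.0  # zero self-distance
    for v in pts:
        for w in pts:
            # triangle inequality, up to float tolerance
            assert pqe_lh(u, w, alpha) <= pqe_lh(u, v, alpha) + pqe_lh(v, w, alpha) + 1e-9
```

the triangle inequality holds per component: with a = ∑_j (u_j − v_j)_+, b = ∑_j (v_j − w_j)_+, c = ∑_j (u_j − w_j)_+, we have c ≤ a + b, and (1 − e^{−a}) + (1 − e^{−b}) − (1 − e^{−a−b}) = (1 − e^{−a})(1 − e^{−b}) ≥ 0.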
a general pqe latent quasimetric is defined with {(μ_{i,j}, A_{i,j})}_{i,j} and weights α_i ≥ 0 as: d^{pqe}_z(u, v) ≜ ∑_i α_i · E_{πz∼Π^{pqe}_z(μ_i, A_i)}[πz(u, v)] = ∑_i α_i (1 − ∏_j P[Pois(μ_{i,j}(A_{i,j}(u) \ A_{i,j}(v))) ≤ Pois(μ_{i,j}(A_{i,j}(v) \ A_{i,j}(u)))]), whose optimizable parameters include {α_i}_i and possibly ones from {(μ_{i,j}, A_{i,j})}_{i,j} (and the encoder f). this general recipe can be instantiated in many ways. setting A_{i,j}(u) → (−∞, u_{i,j}] with lebesgue μ_{i,j} recovers pqe-lh. in the appendix, we consider a form with gaussian-based measures and gaussian shapes, denoted pqe-gg. unlike pqe-lh, pqe-gg always gives nonzero gradients. the appendix also includes several implementation techniques that empirically improve stability, including learning the α_i’s with deep linear networks, a formulation that outputs discounted distance, etc. continuous-valued stochastic processes but why poisson processes over more common choices such as gaussian processes? it turns out that common continuous-valued processes fail to give a differentiable formula. consider a non-degenerate process {r(u)}_u, where (r(u), r(v)) has bounded density if u ≠ v. then P[r(u) = r(u + δ)] = 0, so P[r(u) ≤ r(u + δ)] and P[r(u + δ) ≤ r(u)] sum to 1, and one of the two must stay far from 1, while P[r(u) ≤ r(u)] = 1; perturbing u → u + δ thus breaks differentiability (this argument is formalized in the appendix). discrete-valued processes, however, can leave most probability mass on r(u) = r(u + δ) and thus remain differentiable. theoretical guarantees our pqes bear similarity to the algorithmic quasimetric embedding construction of mémoli et al. (2018). extending their analysis to pqes, we obtain the following distortion and violation guarantees. theorem 5.2 (distortion and violation of pqes). under the assumptions of sec.
4, any quasimetric space with size n and treewidth t admits a pqe-lh and a pqe-gg with distortion o(t log² n) and violation 1, with an expressive encoder (e.g., a relu network with ≥ 3 layers and polynomial width). in fact, these guarantees apply to any pqe formulation that satisfies a mild condition. informally, any pqe with h × k poisson processes (i.e., h mixtures) enjoys the above guarantees if it can approximate the discrete counterpart: mixtures of h order embeddings, each specified with k dimensions. in the appendix, we make this condition precise and provide a full proof of the above theorem. figure 4: comparison of pqe and baselines on quasimetric learning in random directed graphs. (a) a dense graph. (b) a sparse graph. (c) a sparse graph with block structure. experiments our experiments are designed to (1) confirm our theoretical findings and (2) compare pqes against a wider range of baselines, across different types of tasks. in all experiments, we optimize γ-discounted distances (with γ ∈ {0.9, 0.95}), and compare the following five families of methods: • pqes (2 formulations): pqe-lh and pqe-gg with the techniques mentioned in sec. 5.2. • unconstrained networks (20 formulations): predict raw distance (directly, with exp transform, and with (·)² transform) or γ-discounted distance (directly, and with a sigmoid transform). each variant is run with a possible triangle inequality regularizer E_{x,y,z}[max(0, γ^{ˆd(x,y)+ˆd(y,z)} − γ^{ˆd(x,z)})²], for each of 4 weights ∈ {0, 0.3, 1, 3}. • asymmetrical dot products (20 formulations): on input pair (x, y), encode each into a feature vector with a different network, and take the dot product. identically to unconstrained networks, the output is used in the same 5 ways, with the same 4 triangle inequality regularizer options. • metric encoders (4 formulations): embed into euclidean space, ℓ1 space, a hypersphere with (scaled) spherical distance, or a mixture of all three.
• deepnorm (2 formulations) and widenorm (3 formulations): quasimetric embedding methods that often require significantly more parameters than pqes (often on the order of 10⁶ to 10⁷ more effective parameters; see the appendix for detailed comparisons) but can only approximate a subset of all possible quasimetrics (pitis et al., 2020). we show average results from 5 runs. the appendix provides experimental details, full results (including standard deviations), additional experiments, and ablation studies. random directed graphs. we start with randomly generated directed graphs of 300 nodes, with 64-dimensional node features given by randomly initialized neural networks. after training with mse on discounted distances, we test the models’ prediction error on the unseen pairs (i.e., generalization), also measured by mse on discounted distances. on three graphs with distinct structures, pqes significantly outperform baselines across almost all training set sizes (see fig. 4). notably, while deepnorm and widenorm do well on the dense graph quasimetric, they struggle on the other two, attaining both high test mse (fig. 4) and high train mse (not shown). this is consistent with the fact that they can only approximate a subset of all quasimetrics, while pqes can approximate all quasimetrics. large-scale social graph. we choose the berkeley-stanford web graph (leskovec & krevl, 2014) as the real-world social graph for evaluation. this graph consists of 685,230 pages as nodes and 7,600,595 hyperlinks as directed edges. we use 128-dimensional node2vec features (grover & leskovec, 2016) and the landmark method (rizi et al., 2018) to construct a training set of 2,500,000 pairs and a test set of 150,000 pairs. pqes generally perform better than other methods, accurately predicting finite distances while predicting high values for infinite distances (see table 1).
deepnorms and widenorms learn finite distances less accurately here, and also do much worse than pqes at learning the (quasi)metric of an undirected social graph (shown in the appendix). offline q-learning. optimal goal-reaching plan costs in mdps are quasimetrics (bertsekas & tsitsiklis, 1991; tian et al., 2020) (see also the appendix). in practice, optimizing deep q-functions often suffers from stability and sample efficiency issues (henderson et al., 2018; fujimoto et al., 2018). as a proof of concept, we use pqes as goal-conditioned q-functions in offline q-learning, on a grid-world environment with one-way doors built upon gym-minigrid (chevalier-boisvert et al., 2018) (see fig. 1 right), following the algorithm and data sampling procedure described in tian et al. (2020). adding strong quasimetric structure greatly improves sample efficiency and greedy planning success rates over popular existing approaches such as the unconstrained networks used in tian et al. (2020) and the asymmetrical dot products used in schaul et al. (2015) (see fig. 5). as an interesting observation, some metric embedding formulations work comparably well. table 1: quasimetric learning on the large-scale web graph. “best” is selected by test mse w.r.t. γ-discounted distances. [table values not recovered; columns: triangle inequality regularizer; mse w.r.t. γ-discounted distances (×10⁻³, lower is better); ℓ1 error when true d < ∞ (lower is better); prediction ˆd when true d = ∞ (higher is better). rows: pqe-lh, pqe-gg, best unconstrained net., best asym. dot product, best metric embedding, best deepnorm, best widenorm.] figure 5: offline q-learning results. related work metric learning. metric learning aims to approximate a target metric/similarity function, often via a learned embedding into a metric space. this idea has successful applications in dimensionality reduction (tenenbaum et al., 2000), information retrieval (wang et al., 2014), clustering (xing et al., 2002), classification (weinberger et al., 2006; hoffer & ailon, 2015), etc.
while asymmetrical formulations have been explored, they either ignore quasimetric constraints (oord et al., 2018; logeswaran & lee, 2018; schaul et al., 2015) or are not general enough to approximate arbitrary quasimetrics (balashankar & subramanian, 2021), which is the focus of the present paper. isometric embeddings. isometric (distance-preserving) embedding is a highly influential and well-studied topic in mathematics and statistics. fundamental results, such as bourgain’s random embedding theorem (bourgain, 1985), laid important groundwork for understanding and constructing (approximately) isometric embeddings. while most such research concerns metric spaces, mémoli et al. (2018) study an algorithmic construction of a quasimetric embedding via basic blocks called quasipartitions. their approach requires knowledge of quasimetric distances between all pairs and is thus not suitable for learning. our formulation takes inspiration from the form of their embedding, but is fully learnable with gradient-based optimization over a training subset. quasimetrics and partial orders. partial orders (quasipartitions) are special cases of quasimetrics (see sec. 5). a line of machine learning research studies embedding partial order structures into latent spaces for tasks such as relation discovery and information retrieval (vendrov et al., 2015; suzuki et al., 2019; hata et al., 2020; ganea et al., 2018). unfortunately, unlike pqes, such formulations do not straightforwardly generalize to arbitrary quasimetrics, which are more than binary relations. similar to pqes, deepnorm and widenorm are quasimetric embedding approaches learnable with gradient-based optimization (pitis et al., 2020). theoretically, they universally approximate a subset of quasimetrics (those induced by asymmetrical norms). despite often using many more parameters, they are restricted to this subset and unable to approximate general quasimetrics as pqes do (fig. 4).
implications in this work, we study quasimetric learning via both theoretical analysis and empirical evaluation. theoretically, we show strong negative results for a common family of learning algorithms, and positive guarantees for our proposed poisson quasimetric embedding (pqe). our results introduce the novel concept of equivariant learning algorithms, which may potentially be used for other learnability analyses with algorithms such as deep neural networks. additionally, a thorough average-case or data-dependent analysis would nicely complement our results, and may shed light on conditions under which algorithms like deep networks can learn decent approximations to quasimetrics in practice. pqes are the first quasimetric embedding formulation that can be learned via gradient-based optimization. empirically, pqes show promising performance on various tasks. furthermore, pqes are fully differentiable, and (implicitly) enforce a quasimetric structure in any latent space. they are particularly suited for integration into large deep learning systems, as we explore in the q-learning experiments. this can potentially open the door to many practical applications such as better embeddings for planning in mdps, efficient shortest path finding via learned quasimetric heuristics, representation learning with quasimetric similarities, causal relation learning, etc.
references
noga alon and joel h spencer. the probabilistic method. john wiley & sons, 2004.
brandon amos, lei xu, and j zico kolter. input convex neural networks. in international conference
sanjeev arora, simon s du, wei hu, zhiyuan li, ruslan salakhutdinov, and ruosong wang. on exact computation with an infinitely wide neural net. arxiv preprint arxiv:1904.11955, 2019.
ananth balashankar and lakshminarayanan subramanian. learning faithful representations of causal graphs.
in proceedings of the 59th annual meeting of the association for computational linguistics and the 11th international joint conference on natural language processing (volume 1: long papers), pp. 839–850, 2021.
umberto bertele and francesco brioschi. on non-serial dynamic programming. j. comb. theory,
dimitri p bertsekas and john n tsitsiklis. an analysis of stochastic shortest path problems. mathematics of operations research.
kenneth p bogart. maximal dimensional partially ordered sets i. hiraguchi’s theorem. discrete mathematics.
béla bollobás. random graphs. number 73. cambridge university press, 2001.
jean bourgain. on lipschitz embedding of finite metric spaces in hilbert space. israel journal of mathematics.
barry brown, james lovato, and kathy russell. cdflib: library of fortran routines for cumulative distribution functions, inverses, and other parameters, 1994.
yury brychkov. on some properties of the marcum q function. integral transforms and special functions.
john burkardt. c++ source code for cdflib. https://people.sc.fsu.edu/~jburkardt/cpp_src/cdflib/cdflib.html, 2021.
moses charikar, konstantin makarychev, and yury makarychev. directed metrics and directed graph partitioning problems. in soda, volume 6, pp. 51–60. citeseer, 2006.
maxime chevalier-boisvert, lucas willems, and suman pal. minimalistic gridworld environment for openai gym. https://github.com/maximecb/gym-minigrid, 2018.
pieter-tjerk de boer, dirk p kroese, shie mannor, and reuven y rubinstein. a tutorial on the cross-entropy method. annals of operations research, 134(1):19–67, 2005.
paul erdős and alfréd rényi. on random graphs. i. publicationes mathematicae debrecen, 6:
stefan felsner, ching man li, and william t. trotter. adjacency posets of planar graphs. discrete mathematics.
scott fujimoto, herke hoof, and david meger. addressing function approximation error in actor-critic methods. in international conference on machine learning, pp. 1587–1596. pmlr, 2018.
octavian ganea, gary bécigneul, and thomas hofmann.
hyperbolic entailment cones for learning hierarchical embeddings. in international conference on machine learning, pp. 1646–1655. pmlr, 2018.
amnon geifman, abhay yadav, yoni kasten, meirav galun, david jacobs, and ronen basri. on the similarity between the laplace and neural tangent kernels. arxiv preprint arxiv:2007.01580, 2020.
aditya grover and jure leskovec. node2vec: scalable feature learning for networks. in proceedings of the 22nd acm sigkdd international conference on knowledge discovery and data mining, pp. 855–864, 2016.
peter grunwald and paul vitányi. shannon information and kolmogorov complexity. arxiv preprint.
matthieu guillot and gautier stauffer. the stochastic shortest path problem: a polyhedral combinatorics perspective. european journal of operational research, 285(1):148–158, 2020.
nozomi hata, shizuo kaji, akihiro yoshida, and katsuki fujisawa. nested subspace arrangement for representation of relational data. in international conference on machine learning, pp. 4127–4137. pmlr, 2020.
peter henderson, riashat islam, philip bachman, joelle pineau, doina precup, and david meger. deep reinforcement learning that matters. in proceedings of the aaai conference on artificial intelligence, volume 32, 2018.
toshio hiraguchi. on the dimension of partially ordered sets. the science reports of the kanazawa university.
elad hoffer and nir ailon. deep metric learning using triplet network. in international workshop on similarity-based pattern recognition, pp. 84–92. springer, 2015.
minyoung huh, hossein mobahi, richard zhang, brian cheung, pulkit agrawal, and phillip isola. the low-rank simplicity bias in deep networks. arxiv preprint arxiv:2103.10427, 2021.
piotr indyk. algorithmic applications of low-distortion geometric embeddings. in proceedings 42nd ieee symposium on foundations of computer science, pp. 10–33. ieee, 2001.
sergey ioffe and christian szegedy. batch normalization: accelerating deep network training by reducing internal covariate shift.
in international conference on machine learning, pp. 448–456, 2015.
arthur jacot, franck gabriel, and clément hongler. neural tangent kernel: convergence and generalization in neural networks. arxiv preprint arxiv:1806.07572, 2018.
n. l. johnson. on an extension of the connexion between poisson and χ2 distributions. biometrika,
william b johnson and joram lindenstrauss. extensions of lipschitz mappings into a hilbert space.
diederik p kingma and jimmy ba. adam: a method for stochastic optimization. arxiv preprint.
john frank charles kingman. poisson processes. encyclopedia of biostatistics, 6, 2005.
andrei n kolmogorov. on tables of random numbers. sankhyā: the indian journal of statistics,
jure leskovec and andrej krevl. snap datasets: stanford large network dataset collection. http://snap.stanford.edu/data, june 2014.
ming li, paul vitányi, et al. an introduction to kolmogorov complexity and its applications, volume 3.
lajanugen logeswaran and honglak lee. an efficient framework for learning sentence representations. in international conference on learning representations, 2018.
ilya loshchilov and frank hutter. sgdr: stochastic gradient descent with warm restarts. arxiv preprint.
j. i. marcum. table of q functions. rand corporation, santa monica, ca, 1950.
david a mcallester. some pac-bayesian theorems. machine learning, 37(3):355–363, 1999.
facundo mémoli, anastasios sidiropoulos, and vijay sridhar. quasimetric embeddings and their applications.
alan mislove, massimiliano marcon, krishna p. gummadi, peter druschel, and bobby bhattacharjee. measurement and analysis of online social networks. in proceedings of the 5th acm/usenix internet measurement conference (imc’07), san diego, ca, october 2007.
aaron van den oord, yazhe li, and oriol vinyals. representation learning with contrastive predictive coding.
giacomo ortali and ioannis g tollis. multidimensional dominance drawings. arxiv preprint.
arxiv preprint adam paszke, sam gross, francisco massa, adam lerer, james bradbury, gregory chanan, trevor killeen, zeming lin, natalia gimelshein, luca antiga, alban desmaison, andreas kopf, edward yang, zachary devito, martin raison, alykhan tejani, sasank chilamkurthy, benoit steiner, lu fang, junjie bai, and soumith chintala. pytorch: an imperative style, high-performance deep learning library. in h. wallach, h. larochelle, a. beygelzimer, f. d'alché-buc, e. fox, and r. garnett (eds.), advances in neural information processing systems 32, pp. 8026–8037. 2019. jeffrey pennington, samuel schoenholz, and surya ganguli. the emergence of spectral universality in deep networks. in international conference on artificial intelligence and statistics, pp. 1924–1932. pmlr, 2018. silviu pitis, harris chan, kiarash jamali, and jimmy ba. an inductive bias for distances: neural nets that respect the triangle inequality. arxiv preprint arxiv:2002.05825, 2020. dj de s price. networks of scientific papers. princeton university press, 2011. martin l. puterman. markov decision processes: discrete stochastic dynamic programming. john fatemeh salehi rizi, joerg schloetterer, and michael granitzer. shortest path distance approximation using deep learning techniques. in 2018 ieee/acm international conference on advances in social networks analysis and mining (asonam), pp. 1007–1014. ieee, 2018. neil robertson and paul d seymour. graph minors. iii. planar tree-width. journal of combinatorial andrew m saxe, james l mcclelland, and surya ganguli. exact solutions to the nonlinear dynamics of learning in deep linear neural networks. arxiv preprint arxiv:1312.6120, 2013. tom schaul, daniel horgan, karol gregor, and david silver. universal value function approximators. in international conference on machine learning, pp. 1312–1320. pmlr, 2015. j. g. skellam. the frequency distribution of the difference between two poisson variates belonging to different populations. 
journal of the royal statistical society. series a (general), 109(pt 3): 296–296, 1946. richard s sutton and andrew g barto. reinforcement learning: an introduction. mit press, 2018. richard s sutton, joseph modayil, michael delp, thomas degris, patrick m pilarski, adam white, and doina precup. horde: a scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. in the 10th international conference on autonomous agents and multiagent systems-volume 2, pp. 761–768, 2011. ryota suzuki, ryusuke takahama, and shun onoda. hyperbolic disk embeddings for directed acyclic graphs. in international conference on machine learning, pp. 6066–6075. pmlr, 2019. joshua b tenenbaum, vin de silva, and john c langford. a global geometric framework for stephen tian, suraj nair, frederik ebert, sudeep dasari, benjamin eysenbach, chelsea finn, and sergey levine. model-based visual planning with self-supervised functional distances. arxiv preprint arxiv:2012.15373, 2020. william t trotter. partially ordered sets. handbook of combinatorics, 1:433–480, 1995. leslie g valiant. a theory of the learnable. communications of the acm, 27(11):1134–1142, 1984. vladimir n vapnik and a ya chervonenkis. on the uniform convergence of relative frequencies of events to their probabilities. in measures of complexity, pp. 11–30. springer, 2015. ivan vendrov, ryan kiros, sanja fidler, and raquel urtasun. order-embeddings of images and pauli virtanen, ralf gommers, travis e. oliphant, matt haberland, tyler reddy, david cournapeau, evgeni burovski, pearu peterson, warren weckesser, jonathan bright, stéfan j. van der walt, matthew brett, joshua wilson, k. jarrod millman, nikolay mayorov, andrew r. j. nelson, eric jones, robert kern, eric larson, c j carey, ˙ilhan polat, yu feng, eric w. moore, jake vanderplas, denis laxalde, josef perktold, robert cimrman, ian henriksen, e. a. quintero, charles r. harris, anne m. archibald, antônio h. 
Ribeiro, Fabian Pedregosa, Paul van Mulbregt, and SciPy 1.0 Contributors. SciPy 1.0: Fundamental algorithms for scientific computing in Python. Nature Methods, 17:261–272, 2020. doi: 10.1038/s41592-019-0686-2.

Jiang Wang, Yang Song, Thomas Leung, Chuck Rosenberg, Jingbin Wang, James Philbin, Bo Chen, and Ying Wu. Learning fine-grained image similarity with deep ranking. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1386–1393, 2014.

Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pp. 9929–9939. PMLR, 2020.

Kilian Q. Weinberger, John Blitzer, and Lawrence K. Saul. Distance metric learning for large margin nearest neighbor classification. In Advances in Neural Information Processing Systems, pp. 1473–1480, 2006.

Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning with application to clustering with side-information. In NIPS, volume 15, pp. 12. Citeseer, 2002.

Appendix

Contents
A Discussions for Sec. 2: Preliminaries on Quasimetrics and Poisson Processes
 A.1 Quasimetric Spaces
 A.2 Poisson Processes
B Proofs, Discussions and Additional Results for Sec. 4: Theoretical Analysis of Various Learning Algorithms
 B.1 Thm. 4.3: Distortion and Violation Lower-Bound Generalization Error
 B.2 Lemma 4.5: Examples of OrEq Algorithms
 B.3 Thm. 4.6: Failure of OrEq Algorithms
C Proofs and Discussions for Sec. 5: Poisson Quasimetric Embeddings (PQEs)
 C.1 Non-Differentiability of Continuous-Valued Stochastic Processes
 C.2 PQE-GG: Gaussian-Based Measure and Gaussian Shapes
 C.3 Theoretical Guarantees for PQEs
 C.4 Implementation of Poisson Quasimetric Embeddings (PQEs)
D Experiment Settings and Additional Results
 D.1 Experiments from Sec. 3.2: A Toy Example
 D.2 Experiments from Sec. 6: Experiments

A Discussions for Sec. 2: Preliminaries on Quasimetrics and Poisson Processes

A.1 Quasimetric Spaces

Definition 2.1 (Quasimetric space). A quasimetric space is a pair (X, d), where X is a set of points and d: X × X → [0, ∞] is the quasimetric, satisfying the following conditions:

∀x, y ∈ X, x = y ⟺ d(x, y) = 0,  (identity of indiscernibles)
∀x, y, z ∈ X, d(x, y) + d(y, z) ≥ d(x, z).  (triangle inequality)

Definition A.1 (Quasipseudometric space). As a further generalization, we say (X, d) is a quasipseudometric space if the identity-of-indiscernibles requirement is only satisfied in one direction:

∀x, y ∈ X, x = y ⟹ d(x, y) = 0,  (identity of indiscernibles)
∀x, y, z ∈ X, d(x, y) + d(y, z) ≥ d(x, z).  (triangle inequality)

A.1.1 Examples of Quasimetric Spaces

Proposition A.2 (Expected hitting time of a Markov chain). Let the random variables (X_t)_t be a Markov chain with support X. Then (X, d_hitting) is a quasimetric space, where

d_hitting(s, t) ≜ E[time to hit t | start from s],

where we define the hitting time of s starting from s to be 0.

Proof of Proposition A.2. Obviously d_hitting is non-negative. We then verify the following quasimetric space properties:
• Identity of indiscernibles. By definition, we have, ∀x, y ∈ X with x ≠ y,
 d_hitting(x, x) = 0, d_hitting(x, y) ≥ 1.
• Triangle inequality. For any x, y, z ∈ X, we have
 d_hitting(x, y) + d_hitting(y, z) = E[time to hit y and then hit z | start from x] ≥ E[time to hit z | start from x] = d_hitting(x, z).
Hence, (X, d_hitting) is a quasimetric space. ∎

Proposition A.3 (Conditional Shannon entropy). Let X be the set of random variables (of some probability space). Then (X, d_H) is a quasipseudometric space, where

d_H(X, Y) ≜ H(Y | X).
If, for all distinct (X, Y) ∈ X × X, X cannot be written as an (almost surely) deterministic function of Y, then (X, d_H) is a quasimetric space.

Proof of Proposition A.3. Obviously d_H is non-negative. We then verify the following quasipseudometric space properties:
• Identity of indiscernibles. By definition, we have, ∀X, Y ∈ X,
 d_H(X, X) = H(X | X) = 0, d_H(X, Y) = H(Y | X) ≥ 0,
 where the inequality is an equality iff Y is an (almost surely) deterministic function of X.
• Triangle inequality. For any X, Y, Z ∈ X, we have
 d_H(X, Y) + d_H(Y, Z) = H(Y | X) + H(Z | Y) ≥ H(Y | X) + H(Z | X, Y) = H(Y, Z | X) ≥ H(Z | X) = d_H(X, Z).
Hence, (X, d_H) is a quasipseudometric space, and a quasimetric space when the last condition is satisfied. ∎

Conditional Kolmogorov complexity. From algorithmic information theory, the conditional Kolmogorov complexity K(y | x) similarly measures "the bits needed to create y given x as input" (Kolmogorov, 1963). It is also almost a quasimetric, but the exact definition affects some constant/log terms that may make the quasimetric constraints non-exact. For instance, when defined with the prefix-free version, conditional Kolmogorov complexity is always strictly positive, even K(x | x) > 0 (Li et al., 2008). One may remedy this with a definition using a universal Turing machine (UTM) that simply outputs the input on the empty program, but to make the triangle inequality work, one then needs to reason about how the input and output parts interact on the tape(s) of the UTM. Nonetheless, regardless of the definition details, conditional Kolmogorov complexity does satisfy a triangle inequality up to log terms (Grünwald & Vitányi, 2004). So, intuitively, it behaves roughly like a quasimetric defined on the space of binary strings.
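Proposition A.3 is easy to sanity-check numerically. The sketch below (plain Python; the helper names are our own, not from any library) draws random joint distributions over three binary random variables and verifies the triangle inequality H(Y | X) + H(Z | Y) ≥ H(Z | X) of the quasipseudometric d_H:

```python
import itertools
import math
import random

def conditional_entropy(joint, target, given):
    # joint: dict mapping outcomes (x, y, z) -> probability; axes indexed 0, 1, 2.
    # Returns H(target | given) = H(target, given) - H(given), in bits.
    def marginal_entropy(axes):
        probs = {}
        for outcome, p in joint.items():
            key = tuple(outcome[a] for a in axes)
            probs[key] = probs.get(key, 0.0) + p
        return -sum(p * math.log2(p) for p in probs.values() if p > 0)
    return marginal_entropy((target, given)) - marginal_entropy((given,))

rng = random.Random(0)
X, Y, Z = 0, 1, 2
for _ in range(100):
    # A random joint pmf over {0, 1}^3.
    weights = [rng.random() for _ in range(8)]
    total = sum(weights)
    joint = {xyz: w / total
             for xyz, w in zip(itertools.product((0, 1), repeat=3), weights)}
    d_xy = conditional_entropy(joint, Y, X)  # d_H(X, Y) = H(Y | X)
    d_yz = conditional_entropy(joint, Z, Y)  # d_H(Y, Z) = H(Z | Y)
    d_xz = conditional_entropy(joint, Z, X)  # d_H(X, Z) = H(Z | X)
    # Triangle inequality of d_H (up to floating-point error).
    assert d_xy + d_yz >= d_xz - 1e-9
```

Note that d_H is genuinely asymmetric in general: when Y is a deterministic function of X but not vice versa, d_H(X, Y) = 0 while d_H(Y, X) > 0.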
Optimal goal-reaching plan costs in Markov decision processes (MDPs). We define MDPs in the standard manner: M = (S, A, R, P, γ) (Puterman, 1994), where S is the state space, A is the action space, R: S × A → R is the reward function, P: S × A → Δ(S) is the transition function (where Δ(S) is the set of all distributions over S), and γ ∈ (0, 1) is the discount factor. We define Π as the collection of all stationary policies π: S → Δ(A) on M. A particular policy π ∈ Π induces random trajectories:
• The trajectory starting from state s ∈ S is the random variable ξ_π(s) = (s₁, a₁, s₂, a₂, …) distributed as
 s₁ = s, a_i ∼ π(s_i), s_{i+1} ∼ P(s_i, a_i);
• The trajectory starting from state-action pair (s, a) ∈ S × A is the random variable ξ_π((s, a)) = (s₁, a₁, s₂, a₂, …) distributed as
 s₁ = s, a₁ = a, a_i ∼ π(s_i) for i > 1, s_{i+1} ∼ P(s_i, a_i).

Proposition A.4 (Optimal goal-reaching plan costs in MDPs). Consider an MDP M = (S, A, R, P, γ). WLOG, assume that R: S × A → (−∞, 0] has only non-positive rewards (i.e., negated costs). Let X = S ∪ (S × A). Then (X, d_sum) and (X, d_γ) are quasipseudometric spaces, where

d_sum(x, y) ≜ min_{π∈Π} E[total cost from x to y under π]
 = min_{π∈Π} E_{(s₁,a₁,r₁,…)=ξ_π(x)} [ −Σ_t r_t · 1{s′ ∉ {s_i}_{i∈[t]}} ]  if the goal is a state y = s′ ∈ S (the indicator keeps only rewards collected while s′ has not been reached yet),
 = min_{π∈Π} E_{(s₁,a₁,r₁,…)=ξ_π(x)} [ −Σ_t r_t · 1{s′ not reached and a′ not yet performed at s′} ]  if the goal is a state-action pair y = (s′, a′) ∈ S × A,

and

d_γ(x, y) ≜ log_γ max_{π∈Π} E[γ^{total cost from x to y under π}]

is defined similarly. If the reward function is always negative, (X, d_sum) and (X, d_γ) are quasimetric spaces.

Proof of Proposition A.4. Obviously both d_sum and d_γ are non-negative and satisfy identity of indiscernibles (in the quasipseudometric sense).
For the triangle inequality, note that for each y we can instead consider an alternative MDP:
• If y = s′ ∈ S, modify the original MDP to make s′ a sink state, where performing any action yields 0 reward (i.e., 0 cost);
• If y = (s′, a′) ∈ S × A, modify the original MDP so that performing action a′ in state s′ surely transitions to a new sink state, where performing any action yields 0 reward (i.e., 0 cost).

Obviously, both modifications are Markovian. Furthermore, they are stochastic shortest-path problems with no negative costs (Guillot & Stauffer, 2020), implying that there are Markovian (i.e., stationary) optimal policies (w.r.t. either minimizing the expected total cost or maximizing the expected γ^{total cost}). Thus optimizing over the set of stationary policies Π gives the optimal quantity over all possible policies, including concatenations of two stationary policies. Thus the triangle inequality is satisfied by both. Hence, (X, d_sum) and (X, d_γ) are quasipseudometric spaces.

Finally, if the reward function is always negative, x ≠ y ⟹ d_sum(x, y) > 0 and d_γ(x, y) > 0, so (X, d_sum) and (X, d_γ) are quasimetric spaces. ∎

Remark A.5. We make several remarks:
• Any MDP with a bounded reward function can be modified to have only non-positive rewards by subtracting the maximum reward (or a larger value).
• We have d_sum(s, (s, a)) = d_γ(s, (s, a)) = −R(s, a).
• When the dynamics are deterministic, d_sum ≡ d_γ, ∀γ ∈ (0, 1).
• Unless y is reachable from x with probability 1 under some policy, d_sum(x, y) = ∞.
• Unless y is unreachable from x with probability 1 under all policies, d_γ(x, y) < ∞. Therefore, it is often favorable to consider the d_γ variants.
• In certain MDP formulations, the reward is stochastic and/or dependent on the reached next state. The above definitions readily extend to those cases.
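For deterministic dynamics, d_sum reduces to a shortest-path computation, which makes the quasimetric properties above easy to verify concretely. The following sketch (plain Python; the toy MDP is our own illustration, not from the paper's experiments) computes d_sum between states of a small deterministic MDP with negative rewards via Dijkstra's algorithm, treating the goal as absorbing, and checks asymmetry and the triangle inequality:

```python
import heapq

# Deterministic toy MDP: transitions[s][a] = next state; rewards are negative costs.
# The one-way shortcut s0 -a1-> s2 makes the induced d_sum asymmetric.
transitions = {
    "s0": {"a0": "s1", "a1": "s2"},
    "s1": {"a0": "s2"},
    "s2": {"a0": "s0"},
}

def reward(s, a):
    return -1.0  # every step costs 1, so d_sum counts steps here

def d_sum(start, goal):
    # min over policies of the total cost from `start` until first reaching `goal`
    # = shortest path with edge costs -reward(s, a) (Prop. A.4, deterministic case).
    if start == goal:
        return 0.0
    dist, frontier = {start: 0.0}, [(0.0, start)]
    while frontier:
        d, s = heapq.heappop(frontier)
        if s == goal:
            return d
        if d > dist.get(s, float("inf")):
            continue  # stale queue entry
        for a, s2 in transitions[s].items():
            nd = d - reward(s, a)
            if nd < dist.get(s2, float("inf")):
                dist[s2] = nd
                heapq.heappush(frontier, (nd, s2))
    return float("inf")  # goal unreachable under every policy

states = list(transitions)
# A quasimetric, not a metric: d_sum is asymmetric because of the one-way shortcut.
assert d_sum("s1", "s2") == 1.0 and d_sum("s2", "s1") == 2.0
# The triangle inequality holds for all triples.
for x in states:
    for y in states:
        for z in states:
            assert d_sum(x, y) + d_sum(y, z) >= d_sum(x, z)
```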
• γ^{d_γ((s,a), y)} is very similar to a Q-function, except that a Q-function applies the discount based on time, whereas γ^{d_γ((s,a), y)} applies the discount based on costs. We note that a Q-learning-like recurrence can also be found for γ^{d_γ((s,a), y)}. If the cost is constant, in the sense that R(s, a) = c for some fixed c < 0, ∀(s, a) ∈ S × A, then time and cost are equivalent up to a scale. Therefore, γ^{d_γ((s,a), y)} coincides with the optimal Q-functions of the MDPs described in the proof, and γ^{d_γ(s, y)} coincides with the optimal value functions of the respective MDPs.

A.1.2 Quasimetric Treewidth and Graph Treewidth

Definition 2.2 (Treewidth of quasimetric spaces (Mémoli et al., 2018)). Consider representations of a quasimetric space M as shortest-path distances on a positively-weighted directed graph. The treewidth of M is the minimum over all such graphs' treewidths. (Recall that the treewidth of a graph, after replacing directed edges with undirected ones, is a measure of its complexity.)

Graph treewidth is a standard complexity measure of how "similar" a graph is to a tree (Robertson & Seymour, 1984). Informally speaking, if a graph has low treewidth, we can represent it as a tree, preserving all connected paths between vertices, except that each tree node stores a small number of vertices (from the original graph) rather than just one. Graph treewidth is widely used in theoretical computer science and graph theory, since many NP-hard problems are solvable in polynomial time on graphs with bounded treewidth (Bertelè & Brioschi, 1973).

A.2 Poisson Processes

Definition 2.3 (Poisson process). For a nonatomic measure µ on a set A, a Poisson process on A with mean measure µ is a random countable subset P ⊂ A (i.e., the random events/points) such that
• for any disjoint measurable subsets A₁, …, A_n of A, the random variables N(A₁), …, N(A_n) are independent, where N(B) ≜ #(P ∩ B) is the number of points of P in B, and
• N(B) has the Poisson distribution with mean µ(B), denoted Pois(µ(B)).

Poisson processes are usually used to model events that happen randomly "with no clear pattern", e.g., the visible stars in a patch of the sky, or the arrival times of internet packets at a data center. Such events may happen randomly all over the sky or over time. To an extent, their characteristic feature is a property of statistical independence (Kingman, 2005). To understand this, imagine raindrops hitting the windshield of a car. Given that we already know the rain is heavy, knowing the exact pattern of raindrops on the left side of the windshield tells you little about the pattern on the right side. We may thus assume that, as long as we look at disjoint regions of the windshield, the numbers of raindrops in those regions are independent. This is the fundamental motivation of Poisson processes. In a sense, from this characterization, Poisson processes are inevitable (see Sec. 1.4 of (Kingman, 2005)).

A.2.1 Poisson Race Probability P[Pois(µ₁) ≤ Pois(µ₂)] and Its Gradient Formulas

In Fact 2.4 we made several remarks on the Poisson race probability, i.e., for independent X ∼ Pois(µ₁), Y ∼ Pois(µ₂), the quantity P[X ≤ Y]. In this section, we describe in detail how we arrived at those conclusions, and provide the exact gradient formulas for differentiating P[X ≤ Y] w.r.t. µ₁ and µ₂.

From the Skellam distribution CDF to the noncentral χ² distribution CDF. The distribution of the difference of two independent Poisson random variables is called the Skellam distribution (Skellam, 1946), with its parameters being the rates of the two Poissons. That is, X − Y ∼ Skellam(µ₁, µ₂). Therefore, P[X ≤ Y] is essentially the cumulative distribution function (CDF) of this Skellam distribution evaluated at 0. In Eq.
(4) of (Johnson, 1959), a connection is made between the CDF of the Skellam(µ₁, µ₂) distribution and the CDF of a noncentral χ² distribution (a non-centered generalization of the χ² distribution, with k > 0 degree(s) of freedom and noncentrality parameter λ ≥ 0): for integer n > 0,

P[Skellam(µ₁, µ₂) ≥ n] = P[noncentralχ²(2n, 2µ₂) < 2µ₁],   (41)

where 2n is the degrees of freedom and 2µ₂ is the noncentrality parameter. This can be evaluated using statistical computing packages such as SciPy (Virtanen et al., 2020) and cdflib (Burkardt, 2021; Brown et al., 1994).

Marcum-Q-function and gradient formulas. To differentiate through Eq. (41), we represent the noncentral-χ² CDF via the Marcum-Q-function (Marcum, 1950). One definition of the Marcum-Q-function Q_M: R × R → R used in statistics is

Q_M(a, b) ≜ ∫_b^∞ x (x/a)^{M−1} exp(−(x² + a²)/2) I_{M−1}(ax) dx,

where I_{M−1} is the modified Bessel function of order M − 1. (When M is non-integer, we refer readers to (Brychkov, 2012; Marcum, 1950) for definitions, which are not relevant to the discussion below.) For the CDF of the noncentral χ², we have

P[noncentralχ²(k, λ) < x] = 1 − Q_{k/2}(√λ, √x).

Combining with Eq. (41), and using the symmetry Skellam(µ₁, µ₂) =_d −Skellam(µ₂, µ₁), we have, for integer n,

P[X ≤ Y + n] = P[Skellam(µ₁, µ₂) ≤ n] = 1 − Q_{−n}(√(2µ₁), √(2µ₂))  if n < 0,
P[X ≤ Y + n] = P[Skellam(µ₁, µ₂) ≤ n] = Q_{n+1}(√(2µ₂), √(2µ₁))  if n ≥ 0.

Prior work (Brychkov, 2012) provides several derivative formulas for the Marcum-Q-function. Writing I_v^{(e)}(x) ≜ e^{−|x|} I_v(x) for the exponentially-scaled version of I_v, which computing libraries often provide due to its superior numerical precision (e.g., SciPy (Virtanen et al., 2020)), the resulting gradients of P[X ≤ Y + n] w.r.t. µ₁ and µ₂ are products of e^{−(√µ₁−√µ₂)²}, a power of µ₁/µ₂, and an exponentially-scaled Bessel term evaluated at 2√(µ₁µ₂); for example, for n ≥ 0 (Eq. (2) of (Brychkov, 2012)),

∂/∂µ₂ P[X ≤ Y + n] = e^{−(√µ₁−√µ₂)²} (µ₁/µ₂)^{(n+1)/2} I_{n+1}^{(e)}(2√(µ₁µ₂)).

Setting n = 0 gives the proper forward and backward formulas for P[X ≤ Y].
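Eq. (41) is straightforward to check numerically. The sketch below (plain Python, stdlib only; the function names are our own) evaluates P[Pois(µ₁) ≤ Pois(µ₂)] both by direct summation over the Poisson pmf and through the noncentral-χ² route of Eq. (41), using the classical Poisson-mixture representation of the noncentral χ² (a series of central χ² CDFs, which have closed forms for even degrees of freedom):

```python
import math

def chi2_cdf_even_df(x, df):
    # CDF of the central chi^2 with even df = 2m:
    # P[chi2_{2m} < x] = 1 - exp(-x/2) * sum_{i<m} (x/2)^i / i!.
    m = df // 2
    return 1.0 - math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                        for i in range(m))

def noncentral_chi2_cdf(x, df, nc, n_terms=80):
    # Poisson mixture of central chi^2 CDFs with df + 2j degrees of freedom.
    lam = nc / 2
    return sum(math.exp(-lam) * lam ** j / math.factorial(j)
               * chi2_cdf_even_df(x, df + 2 * j)
               for j in range(n_terms))

def poisson_race_cdf(mu1, mu2):
    # P[X <= Y] for independent X ~ Pois(mu1), Y ~ Pois(mu2), via Eq. (41):
    # P[X <= Y] = 1 - P[X - Y >= 1] = 1 - P[noncentralchi2(2, 2*mu2) < 2*mu1].
    return 1.0 - noncentral_chi2_cdf(2 * mu1, 2, 2 * mu2)

def poisson_race_direct(mu1, mu2, n_terms=80):
    # P[X <= Y] = sum_k P[X = k] * P[Y >= k], truncated once the tail is negligible.
    def pmf(k, mu):
        return math.exp(-mu) * mu ** k / math.factorial(k)
    return sum(pmf(k, mu1) * (1.0 - sum(pmf(i, mu2) for i in range(k)))
               for k in range(n_terms))

# The two routes agree to high precision.
assert abs(poisson_race_cdf(2.3, 3.1) - poisson_race_direct(2.3, 3.1)) < 1e-9
```

In practice one would use `scipy.stats.ncx2.cdf` instead of the hand-rolled series; the pure-Python version above only serves to make the identity transparent.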
B Proofs, Discussions and Additional Results for Sec. 4: Theoretical Analysis of Various Learning Algorithms

Assumptions. Recall that we assumed a quasimetric space (which is stronger than a quasipseudometric space; Defn. A.1) with finite distances. These are rather mild assumptions, since any quasipseudometric with infinities can always be modified to obey them by (1) adding a small metric (e.g., d_ε(x, y) ≜ ε·1_{x≠y} with small ε > 0) and (2) capping the infinite distances at a large value higher than any finite distance.

Worst-case analysis. In this work we focus on the worst-case scenario, as is common in standard (quasi)metric embedding analyses (Bourgain, 1985; Johnson & Lindenstrauss, 1984; Indyk, 2001; Mémoli et al., 2018). Such results are important because embeddings are often used as heuristics in downstream tasks (e.g., planning) which are sensitive to any error. While our negative result readily extends to the average-case scenario (since the error, distortion or violation, is arbitrary), we leave a thorough average-case analysis as future work.

Data-independent bounds. We analyze possible data-independent bounds for various algorithms. In this sense, the positive result for PQEs (Thm. C.4) is quite strong, showing good guarantees regardless of the data quasimetric. The negative result (Thm. 4.6) is also revealing, indicating that a family of algorithms should probably not be used unless we know something more about the data. Data-independent bounds are often of great interest in machine learning (e.g., the concepts of VC-dimension (Vapnik & Chervonenkis, 2015) and PAC learning (Valiant, 1984)). An important piece of future work is to explore data-dependent results, possibly via defining a quasimetric complexity measure that is both friendly to machine-learning analysis and connects well with combinatorial measures such as quasimetric treewidth.

Violation and distortion metrics. The optimal violation value is 1.
Specifically, vio(d̂) = 1 iff d̂ is a quasimetric on X (assuming non-negativity). Distortion (over the training set) and violation together quantify how well d̂ learns a quasimetric consistent with the training data. A predictor can fit the training data well (low distortion) yet ignore basic quasimetric constraints on heldout data (high violation). Conversely, a predictor can perfectly obey the quasimetric constraints (low violation) yet fit the training data poorly (high distortion). Indeed (assuming non-negativity and identity of indiscernibles), perfect distortion (value 1) and perfect violation (value 1) together imply that d̂ is a quasimetric consistent with the training data.

Relation to classical in-distribution generalization studies. Classical generalization theory studies the prediction error over the underlying data distribution, and often involves the complexity of the hypothesis class and/or the training data (Vapnik & Chervonenkis, 2015; McAllester, 1999). Our focus on the violation of quasimetric constraints is, in fact, not an orthogonal problem, but potentially a core part of in-distribution generalization in this setting. Here, the underlying distribution is supported on all pairs in X × X. Indeed, if a learning algorithm has large distortion, it must attain a large prediction error on S ⊂ X × X; if it has large violation, it must violate the quasimetric constraints and necessarily admits a bad prediction error on some pairs (whose true distances obey the quasimetric constraints). Thm. 4.3 (proved below) formalizes this idea, where we characterize generalization by the distortion over all possible pairs in X × X.

B.1 Thm. 4.3: Distortion and Violation Lower-Bound Generalization Error

Theorem 4.3 (Distortion and violation lower-bound generalization error). For non-negative d̂, dis(d̂) ≥ max(dis_S(d̂), √vio(d̂)), where dis(d̂) captures generalization over the entire X space.

B.1.1 Proof

Proof of Thm. 4.3. It is obvious that

dis(d̂) ≥ dis_S(d̂).   (55)
Therefore, it remains to show that dis(d̂) ≥ √vio(d̂). WLOG, say vio(d̂) > 1; otherwise the statement is trivially true. By the definition of violation (see Defn. 4.2), we have, for some x, y, z ∈ X with d̂(x, z) > 0,

d̂(x, z) / (d̂(x, y) + d̂(y, z)) = vio(d̂).

If d̂(x, y) + d̂(y, z) = 0, then we must have one of the following two cases:
• If d(x, y) > 0 or d(y, z) > 0, the statement is true because dis(d̂) = ∞.
• If d(x, y) = d(y, z) = 0, then d(x, z) = 0 by the triangle inequality, and the statement is true since dis(d̂) ≥ d̂(x, z)/d(x, z) = ∞.

It thus suffices to consider the case d̂(x, y) + d̂(y, z) > 0. We can derive

d̂(x, z) = vio(d̂) · (d̂(x, y) + d̂(y, z)) ≥ (vio(d̂)/dis(d̂)) · (d(x, y) + d(y, z)) ≥ (vio(d̂)/dis(d̂)) · d(x, z).   (59)

If d(x, z) = 0, then dis(d̂) = ∞ and the statement is trivially true. If d(x, z) > 0, Eq. (59) implies

dis(d̂) ≥ d̂(x, z)/d(x, z) ≥ vio(d̂)/dis(d̂)  ⟹  dis(d̂) ≥ √vio(d̂).   (60)

Combining Eqs. (55) and (60) gives the desired statement. ∎

B.2 Lemma 4.5: Examples of OrEq Algorithms

Lemma 4.5 (Examples of OrEq algorithms). k-nearest-neighbor with Euclidean distance, MLPs trained with squared loss in the NTK regime, and min-norm least-squares linear regression are all OrEq.

Recall the definition of equivariant learning algorithms.

Definition 4.4 (Equivariant learning algorithms). Given a training set D = {(z_i, y_i)}_i ⊂ Z × Y, where the z_i are inputs and the y_i are targets, a learning algorithm Alg produces a function Alg(D): Z → Y such that Alg(D)(z′) is the function's prediction on sample z′. Consider a set T of transformations Z → Z. Alg is equivariant to T iff, for all transforms t ∈ T and training sets D, Alg(D) = Alg(tD) ∘ t, where tD = {(tz, y) : (z, y) ∈ D} is the training set with transformed inputs.

B.2.1 Proof

Proof of Lemma 4.5. We consider the three algorithms individually:

• k-nearest neighbor with Euclidean distance.
It is evident that if a learning algorithm depends only on pairwise dot products (or distances), it is equivariant to orthogonal transforms, which preserve dot products (and distances). k-nearest-neighbor with Euclidean distance depends only on pairwise distances, which can be written in terms of dot products:

‖x − y‖² = xᵀx + yᵀy − 2xᵀy.

Therefore, it is equivariant to orthogonal transforms.

• Min-norm least-squares linear regression.

Recall that the min-norm solution of the least-squares problem Ax = b is given by the Moore–Penrose pseudo-inverse: x = A⁺b. For any matrix A ∈ R^{m×n} with SVD A = UΣV*, and T ∈ O(n) (where O(n) is the orthogonal group in dimension n), we have

(ATᵀ)⁺ = (UΣV*Tᵀ)⁺ = TVΣ⁺U* = TA⁺,   (62)

where we used T* = Tᵀ for T ∈ O(n). The solution for the transformed data ATᵀ and b is thus (ATᵀ)⁺b = TA⁺b. Hence, for any new data point x̃ ∈ Rⁿ and its transformed version Tx̃ ∈ Rⁿ,

(Tx̃)ᵀ(ATᵀ)⁺b  [transformed-problem prediction]  = x̃ᵀTᵀTA⁺b = x̃ᵀA⁺b  [original-problem prediction].

Hence, min-norm least-squares linear regression is equivariant to orthogonal transforms.

• MLP trained with squared loss in the NTK regime.

We first recall the NTK recursive formula from (Jacot et al., 2018). Denote the NTK for an MLP with L layers by the scalar kernel Θ^(L): R^d × R^d → R. Let β > 0 be the (fixed) parameter for the bias strength in the network model, and σ the activation function. Given x, z ∈ R^d, the kernel can be defined recursively as follows. For h ∈ [L],

Θ^(h)(x, z) ≜ Θ^(h−1)(x, z) Σ̇^(h)(x, z) + Σ^(h)(x, z),

where, for some constant c,

Σ^(1)(x, z) = xᵀz + β²,
Σ^(h)(x, z) = c · E_{(u,v)∼N(0, Λ^(h−1))}[σ(u)σ(v)] + β²,
Σ̇^(h)(x, z) = c · E_{(u,v)∼N(0, Λ^(h−1))}[σ̇(u)σ̇(v)],

and Λ^(h−1) is the 2 × 2 covariance matrix built from Σ^(h−1) evaluated at (x, x), (x, z), and (z, z). It is evident from the recursive formula that Θ^(h)(x, z) depends only on xᵀx, zᵀz, and xᵀz. Therefore, the NTK is invariant to orthogonal transforms.
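The min-norm least-squares case above can be checked numerically in a few lines. The sketch below (NumPy; shapes and variable names are our own illustration) solves an underdetermined least-squares problem before and after an orthogonal transform of the inputs and confirms that the predictions coincide, as in Eq. (62):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 8                        # underdetermined: fewer equations than unknowns
A = rng.standard_normal((m, n))    # training inputs (one row per sample)
b = rng.standard_normal(m)         # training targets
T, _ = np.linalg.qr(rng.standard_normal((n, n)))  # a random orthogonal transform

x_orig = np.linalg.pinv(A) @ b         # min-norm solution on original inputs
x_trans = np.linalg.pinv(A @ T.T) @ b  # min-norm solution on transformed inputs

# (A T^T)^+ = T A^+, so the transformed solution is T times the original one ...
assert np.allclose(x_trans, T @ x_orig)

# ... and predictions agree: (T z)^T (A T^T)^+ b = z^T A^+ b for any test point z.
z = rng.standard_normal(n)
assert np.allclose((T @ z) @ x_trans, z @ x_orig)
```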
Furthermore, training an MLP in the NTK regime is the same as kernel regression with the NTK (Jacot et al., 2018), which has a unique solution depending only on the kernel matrix on the training set, denoted K_train ∈ R^{n×n}, where n is the training set size. Specifically, for training data {(x_i, y_i)}_{i∈[n]}, the solution f*_NTK: R^d → R can be written as

f*_NTK(x) = (Θ^(L)(x, x₁), Θ^(L)(x, x₂), …, Θ^(L)(x, x_n)) K_train^{−1} y,

where y = (y₁, …, y_n) is the vector of training labels. Consider any orthogonal transform T ∈ O(d), and the NTK regression trained on the transformed data {(Tx_i, y_i)}_{i∈[n]}; denote its solution by f*_{NTK,T}: R^d → R. As we have shown, K_train is invariant to such transforms and remains the same. Therefore,

f*_{NTK,T}(Tx) = (Θ^(L)(Tx, Tx₁), Θ^(L)(Tx, Tx₂), …, Θ^(L)(Tx, Tx_n)) K_train^{−1} y
 = (Θ^(L)(x, x₁), Θ^(L)(x, x₂), …, Θ^(L)(x, x_n)) K_train^{−1} y
 = f*_NTK(x).

Hence, MLPs trained with squared loss in the NTK regime are equivariant to orthogonal transforms.

Furthermore, we note that there are many variants of the MLP NTK formulas, depending on details such as the particular initialization scheme and bias settings. However, they usually only lead to slight changes that do not affect our results. For example, while the recursive NTK formula above is derived assuming that the bias terms are initialized from a normal distribution (Jacot et al., 2018), the formula for initializing biases as zeros (Geifman et al., 2020) still depends only on dot products, and thus our results still hold.

These cases conclude the proof. ∎

[Figure 6 diagram: two patterns over nodes x, y, y′, z, w, w′. Left pattern: training pairs d(x, z) = c, d(w, z) = 1, d(x, y) = 1, d(y, w′) = 1; test pair d̂(y, z) = ?. Right pattern: training pairs d(x, z) = c, d(w, z) = 1, d(x, y′) = 1, d(y, w) = 1; test pair d̂(y, z) = ?. The diagram annotates vio(d̂) ≥ d̂(x, z)/(d̂(x, y) + d̂(y, z)) ≥ c/(dis_S(d̂)(dis_S(d̂) + d̂(y, z))) for the left pattern, and vio(d̂) ≥ d̂(y, z)/(d̂(y, w) + d̂(w, z)) ≥ d̂(y, z)/(2 · dis_S(d̂)) for the right pattern.]

Figure 6: Two training sets pose incompatible constraints for the test pair distance d(y, z). With one-hot features, an orthogonal transform can exchange (∗, y) ↔ (∗, y′) and (∗, w) ↔ (∗, w′), leaving the test pair (y, z) unchanged but transforming the training pairs from one scenario to the other. Given either training set, an OrEq algorithm must attain the same training distortion and predict identically on (y, z). For appropriate c, this implies large distortion or violation in one of these cases.

B.3 Thm. 4.6: Failure of OrEq Algorithms

We start with a more precise statement of Thm. 4.6 that takes into consideration a divergent m/n²:

Theorem 4.6 (Failure of OrEq algorithms). Let (f_n)_n be an arbitrary sequence of large values. There is an infinite sequence of quasimetric spaces ((X_n, d_n))_n with |X_n| = n, X_n ⊂ Rⁿ such that, over the random training set S of size m, any OrEq algorithm must output a predictor d̂ that satisfies
• d̂ fails non-negativity, or
• max(dis_S(d̂), vio(d̂)) ≥ f_n (i.e., d̂ approximates the training set S badly or is far from a quasimetric),
with probability 1/2 − o(1), as long as S does not contain almost all pairs (1 − m/n² = ω(n^{−1/3})) and does not include only few pairs (m/n² = ω(n^{−1/2})). Recall that the little-omega notation means f = ω(g) ⟺ g = o(f).

B.3.1 Proof

Proof strategy. In our proof below, we will extend the construction discussed in Sec. 4.2 (reproduced here as Fig. 6) to large quasimetric spaces. To do so, we:
1. Construct large quasimetric spaces containing many copies of the (potentially failing) structure in Fig. 6, where we can consider training sets with certain properties such that
• we can pair up such training sets,
• an algorithm equivariant to orthogonal transforms must fail on one of them, and
• for each pair, the two training sets have equal probability of being sampled;
then, it remains to show that with probability 1 − o(1) we end up with a training set with such properties.
2.
Consider sampling the training set by collecting each pair independently with a certain probability p, and carefully analyze the conditions under which we sample a training set with the special properties with high probability 1 − o(1).
3. Extend to fixed-size training sets and show that, under similar conditions, we sample a training set with the special properties with high probability 1 − o(1).

In the discussion below and in the proof, we will freely speak of infinite distances between two elements of X, but really mean a very large value (possibly finite). This makes the argument clearer and less verbose; we are therefore not restricting the applicable settings of Thm. 4.6 to quasimetrics with (or without) infinite distances.

In Sec. 4.2, we showed how orthogonal-transform-equivariant algorithms cannot predict d̂(y, z) differently for the two particular quasimetric spaces and their training sets shown in Fig. 6. But are these the only bad training sets? Before the proof, let us consider what kinds of training sets are bad for these two quasimetric spaces. Consider the quasimetrics d_left and d_right over X ≜ {x, y, y′, z, w, w′}, with distances as shown in the left and right parts of Fig. 6, where we assume that the unlabeled pairs have infinite distances, except that in the left pattern d(x, w′) ≤ 2, and that in both patterns d(y, z) has some appropriate value consistent with the respective triangle inequalities. Specifically, we ask:
• For what training sets S_left ⊂ X × X can we interchange y ↔ y′ and w ↔ w′ in the 2nd input to obtain a valid training set for d_right, regardless of c?
• For what training sets S_right ⊂ X × X can we interchange y ↔ y′ and w ↔ w′ in the 2nd input to obtain a valid training set for d_left, regardless of c?
Note that if S_left (or S_right) satisfies its condition, the predictor d̂ from an algorithm equivariant to orthogonal transforms must (1) predict d̂(y, z) identically and (2) attain the same training-set distortion on the original and the transformed training set. As we will see in the proof of Thm. 4.6, this implies large distortion or violation for appropriate c. Intuitively, all we need is that the transformed data do not break quasimetric constraints. However, the conditions are actually nontrivial, as we want c to be arbitrary:
• We cannot have (x, w) ∈ S_right, because it would be transformed into (x, w′), which has d_left(x, w′) ≤ 2. Then d_right(x, w) ≤ 2 would restrict the possible values of c via the triangle inequality with d_right(w, z) = 1. For similar reasons, we cannot have (x, w′) ∈ S_left. In fact, we cannot have a path of finite total distance from x to w (or w′) in S_right (or S_left).
• We cannot have (y′, y′) ∈ S_(·) (which has distance 0), as it would get transformed into (y′, y) with distance 0, which (in the right pattern) would restrict the possible values of c via the triangle inequality, and would break our assumption that d_(·) is a quasimetric rather than merely a quasipseudometric. For similar reasons, (w′, w′), and cycles containing y′ or w′ with finite total distance, should be avoided. We note that having (y, y) or (w, w) would also break the non-quasipseudometric assumption, and thus should be avoided as well (although cycles are okay here, since they do not restrict the values of c). In fact, with metrics more friendly to zero distances (than distortion and violation, which are based on distance ratios), it might be possible to allow them and obtain better bounds in the second-moment argument in the proof of Thm. 4.6 below.
With these understandings of the patterns shown in Fig. 6, we are ready to discuss the constructed quasimetric space and training sets.

Proof of Thm. 4.6. Our proof follows the outline listed above.

1. Construct large quasimetric spaces containing many copies of the (potentially failing) structure in Fig. 6.

For any n > 0, consider the following quasimetric space (X_n, d_n) of size n, with one-hot features. WLOG, assume n = 12k is a multiple of 12; if it is not, set at most 11 elements to have infinite distance to every other node, which does not affect the asymptotics. Let the n = 12k elements of the space be

    X_n = { x^left_1, …, x^left_k, x^right_{k+1}, …, x^right_{2k},
            y^left_1, …, y^left_k, y^right_{k+1}, …, y^right_{2k},
            y′^left_1, …, y′^left_k, y′^right_{k+1}, …, y′^right_{2k},
            w^left_1, …, w^left_k, w^right_{k+1}, …, w^right_{2k},
            w′^left_1, …, w′^left_k, w′^right_{k+1}, …, w′^right_{2k},
            z_1, …, z_k, z_{k+1}, …, z_{2k} },

with quasimetric distances, for all i, j (in the appropriate index ranges),

    d_n(x^left_i, z_j) = d_n(x^right_i, z_j) = c,
    d_n(w^left_i, z_j) = d_n(w^right_i, z_j) = 1,
    d_n(x^left_i, y^left_i) = d_n(x^right_i, y′^right_i) = 1,
    d_n(y^left_i, w′^left_i) = d_n(y^right_i, w^right_i) = 1,
    d_n(x^left_i, w′^left_i) = 2,   (81)

where matching subscripts indicate which nodes are bundled together, unlisted distances are infinite (except that d_n(u, u) = 0 for all u ∈ X_n), and the heldout distances d_n(y^left_i, z_j) and d_n(y^right_i, z_j) take appropriate values consistent with the respective triangle inequalities.

Essentially, we equally divide the 12k nodes into 6 "types", {x, y, w, z, w′, y′}, corresponding to the 6 nodes of Fig. 6, where each type has half of its nodes corresponding to the left pattern (of Fig. 6) and the other half corresponding to the right pattern, except for the z type. Furthermore,

• Among the left-pattern nodes, sets with the same subscript are bundled together, in the sense that x^left_i only has finite distance to y^left_i and w′^left_i (rather than to other y^left or w′^left nodes). However, since the distances to/from the y′^left and w^left nodes are infinite anyway, we can pair

    (x^left_i, y^left_i, w′^left_i, y′^left_j, w^left_l, z_h)

for any i, j, l, h to obtain a left pattern.

• Among the right-pattern nodes, sets with the same subscript are bundled together, in the sense that x^right_i only has finite distance to y′^right_i, and y^right_j only has finite distance to w^right_j (rather than to other nodes of the same types). However, since the remaining distances are infinite anyway, we can pair

    (x^right_i, y′^right_i, y^right_j, w^right_j, w′^right_l, z_h)

for any i, j, l, h to obtain a right pattern.

We can see that (X_n, d_n) indeed satisfies all quasimetric space requirements (Defn. 2.1), including the triangle inequalities (e.g., by enumerating, for each (a, b) with finite distance d_n(a, b) < ∞, the finite-length paths from a to b). Now consider the sampled training set S.
• We say S is bad on a left pattern specified by (i, j, l, h) = (i_left, j_left, l_left, h_left) if

    S ⊇ { (x^left_i, z_h), (x^left_i, y^left_i), (y^left_i, w′^left_i), (w^left_l, z_h) },   (84)

    ∅ = S ∩ { (y^left_i, z_h), (x^left_i, w′^left_i), (y^left_i, y^left_i), (y^left_i, y′^left_j), (y′^left_j, y^left_i), (y′^left_j, y′^left_j), (w^left_l, w^left_l), (w^left_l, w′^left_i), (w′^left_i, w^left_l), (w′^left_i, w′^left_i) }.   (85)

• We say S is bad on a right pattern specified by (i, j, l, h) = (i_right, j_right, l_right, h_right) if

    S ⊇ { (x^right_i, z_h), (x^right_i, y′^right_i), (y^right_j, w^right_j), (w^right_j, z_h) },   (86)

    ∅ = S ∩ { (y^right_j, z_h), (x^right_i, w^right_j), (y^right_j, y^right_j), (y^right_j, y′^right_i), (y′^right_i, y^right_j), (y′^right_i, y′^right_i), (w^right_j, w^right_j), (w^right_j, w′^right_l), (w′^right_l, w^right_j), (w′^right_l, w′^right_l) }.   (87)

That is, S contains the four finite-distance training pairs of the pattern and none of the ten pattern-breaking pairs identified in the discussion above (the heldout pair, a finite path from x to w or w′, and the identity pairs and swaps among the y- and w-type nodes).

Most importantly,

• If S is bad on a left pattern specified by (i_left, j_left, l_left, h_left), consider the orthogonal transform that interchanges y^left_i ↔ y′^left_j and w^left_l ↔ w′^left_i on the 2nd input. In S, the possible transformed pairs are (each pair keeping its training label)

    d(x^left_i, y^left_i) = 1 ⟶ d(x^left_i, y′^left_j)   (known in S),
    d(y^left_i, w′^left_i) = 1 ⟶ d(y^left_i, w^left_l)   (known in S),
    d(u, y^left_i) ⟶ d(u, y′^left_j)   (possible in S for some u ≠ x^left_i),
    d(u, y′^left_j) ⟶ d(u, y^left_i)   (possible in S for some u),
    d(u, w′^left_i) ⟶ d(u, w^left_l)   (possible in S for some u ∉ {x^left_i, y^left_i}),
    d(u, w^left_l) = ∞ ⟶ d(u, w′^left_i)   (possible in S for some u).

The crucial observation is that the transformed training set looks exactly like one sampled from a quasimetric space where

    – the quasimetric space has one less set of left-pattern elements,
    – the quasimetric space has one more set of right-pattern elements, and
    – the transformed training set is bad on that extra right pattern (given by the extra set of right-pattern elements),

which can be easily verified by comparing the transformed training set with the requirements in Eqs. (86) and (87).

• Similarly, if S is bad on a right pattern specified by (i_right, j_right, l_right, h_right), consider the orthogonal transform that interchanges y^right_j ↔ y′^right_i and w^right_j ↔ w′^right_l on the 2nd input. In S, the possible transformed pairs are

    d(x^right_i, y′^right_i) = 1 ⟶ d(x^right_i, y^right_j)   (known in S),
    d(y^right_j, w^right_j) = 1 ⟶ d(y^right_j, w′^right_l)   (known in S),
    d(u, y^right_j) ⟶ d(u, y′^right_i)   (possible in S for some u),
    d(u, y′^right_i) ⟶ d(u, y^right_j)   (possible in S for some u ≠ x^right_i),
    d(u, w′^right_l) ⟶ d(u, w^right_j)   (possible in S for some u),
    d(u, w^right_j) ⟶ d(u, w′^right_l)   (possible in S for some u ∉ {x^right_i, y^right_j}).

Again, the crucial observation is that the transformed training set looks exactly like one sampled from a quasimetric space where

    – the quasimetric space has one less set of right-pattern elements,
    – the quasimetric space has one more set of left-pattern elements, and
    – the transformed training set is bad on that extra left pattern (given by the extra set of left-pattern elements),

which can be easily verified by comparing the transformed training set with the requirements in Eqs. (84) and (85).

Therefore, when S is bad on both a left pattern and a right pattern (necessarily on disjoint sets of pairs), we consider the orthogonal transform composed of:

(a) both transforms specified above (which only transform 2nd inputs), so that after this step we obtain another possible training set of the same size, from a quasimetric space that differs only up to some permutation of X_n;

(b) a permutation of X_n (on both inputs) so that the bad left-pattern nodes and the bad right-pattern nodes exchange features.

This composed transform gives another possible training set of the same size from the same quasimetric space, which is also bad on a left pattern and a right pattern. Moreover, with a particular way of selecting bad patterns (e.g., by the order of the subscripts), this process is reversible. Therefore, we have defined a way to pair up all such bad training sets.
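Before the violation bound derived next, here is a quick numerical sanity check of the distortion–violation trade-off it relies on, as reconstructed here: the crossing point δ of the two competing terms solves a quadratic, and the stated choice of c makes the resulting bound at least f_n. The value f_n = 3 and the tested distortion values are arbitrary illustrative choices.

```python
import math

def delta_for(D, c):
    """Positive root of delta**2 + D*delta - 2*c = 0: the point where
    c / (D * (D + delta)) and delta / (2 * D) are equal."""
    return (-D + math.sqrt(D * D + 8.0 * c)) / 2.0

def vio_lower_bound(D, c):
    """Common value of the two terms at the crossing point delta_for(D, c)."""
    return delta_for(D, c) / (2.0 * D)

f_n = 3.0
c = f_n ** 2 * (4.0 * f_n + 1.0) ** 2  # the choice of c used in the proof

for D in (0.1, 0.5, 1.0, 2.0, f_n):
    d = delta_for(D, c)
    # The two terms of the max indeed cross at delta_for(D, c) ...
    assert math.isclose(c / (D * (D + d)), d / (2.0 * D))
    # ... and any predictor with distortion <= f_n has violation >= f_n.
    assert vio_lower_bound(D, c) >= f_n
print("bound checks passed")
```

The bound is decreasing in the distortion D, so checking it at D = f_n (and a few smaller values) covers the whole regime diss(d̂) ≤ f_n.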
Consider the predictors d̂_before and d̂_after trained on these two training sets (before and after the transform) with a learning algorithm equivariant to orthogonal transforms. Assuming that they satisfy non-negativity and the identity of indiscernibles, we have:

• The predictors have the same distortion over their respective training sets. We therefore denote this distortion as diss(d̂) without specifying the predictor d̂ or the training set S.

• The predictors must predict the same on the heldout pairs, in the sense that d̂_before(y^left_i, z_{h_left}) = d̂_after(y^right_j, z_{h_right}) and d̂_before(y^right_j, z_{h_right}) = d̂_after(y^left_i, z_{h_left}). Focusing on the first, we denote d̂(y, z) ≜ d̂_before(y^left_i, z_{h_left}) = d̂_after(y^right_j, z_{h_right}) without specifying the predictor d̂ or the specific y and z.

However, the quasimetric constraints on the heldout pairs (y^left_i, z_{h_left}) and (y^right_j, z_{h_right}) are completely different (see the left vs. right part of Fig. 6). Therefore, as shown in Fig. 6, assuming non-negativity, one of the two predictors must have total violation at least

    vio(d̂) ≥ max( c / (diss(d̂)(diss(d̂) + d̂(y, z))), d̂(y, z) / (2 · diss(d̂)) ).   (91)

Fixing a large enough c, the two terms in the max of Eq. (91) are equal for some d̂(y, z), and are respectively decreasing and increasing in d̂(y, z). In that case, we have vio(d̂) ≥ δ / (2 · diss(d̂)) for the δ > 0 such that

    c / (diss(d̂)(diss(d̂) + δ)) = δ / (2 · diss(d̂)).

Solving the above quadratic equation gives δ = (−diss(d̂) + √(diss(d̂)² + 8c)) / 2, leading to

    vio(d̂) ≥ (−diss(d̂) + √(diss(d̂)² + 8c)) / (4 · diss(d̂)).

Therefore, choosing c ≥ f_n²(4f_n + 1)² gives

    diss(d̂) ≤ f_n ⟹ vio(d̂) ≥ (−f_n + √(f_n² + 8c)) / (4f_n) ≥ f_n.

Hence, for training sets that are bad on both a left pattern and a right pattern, we have shown a way to pair them up such that

• each pair of training sets has the same size, and

• the algorithm fails on one member of each pair by producing a distance predictor that
    – has either distortion over the training set ≥ f_n, or violation ≥ f_n, and
    – has test MSE ≥ f_n.

Remark B.1.
Note that all training sets of size m have equal probability of being sampled. Therefore, to prove the theorem, it suffices to show that, with probability 1 − o(1), we can sample a training set of size m that is bad on both a left pattern and a right pattern.

2. Consider sampling the training set by collecting each pair individually with a certain probability p, and carefully analyze the conditions for sampling a training set with the special properties with high probability 1 − o(1).

In probabilistic methods, it is often much easier to work with independent random variables. Therefore, instead of uniformly sampling a training set S of fixed size m, we consider including each pair in S independently with probability p. We will first show the result for this sampling procedure via a second-moment argument, and later extend it to the case of a fixed-size training set. First, let us define some notation that ignores constants:

    f ∼ g ⟺ f = (1 + o(1)) g,
    f ≪ g ⟺ f = o(g).

We start by stating a standard result from the second moment method (Alon & Spencer, 2004).

Corollary B.2 (Corollary 4.3.5 of Alon & Spencer (2004)). Consider the random variable X = X_1 + X_2 + · · · + X_n, where X_i is the indicator random variable for event A_i. Write i ∼ j if i ≠ j and the pair of events (A_i, A_j) is not independent. Suppose the following quantity does not depend on i:

    ∆* = Σ_{j∼i} P[A_j | A_i].   (104)

If E[X] → ∞ and ∆* ≪ E[X], then X ∼ E[X] with probability 1 − o(1).

We will apply this corollary to obtain conditions on p such that, with probability 1 − o(1), S is bad on some left pattern, and conditions such that, with probability 1 − o(1), S is bad on some right pattern. A union bound then gives the desired result.

• S is bad on some left pattern.
Recall that a left pattern is specified by i_left, j_left, l_left, h_left, all ∈ [k]:

    (x^left_i, y^left_i, w′^left_i, y′^left_j, w^left_l, z_h).

Therefore, we consider the k⁴ = (n/12)⁴ events of the form

    A_{i_left, j_left, l_left, h_left} ≜ {S is bad on the left pattern at (i_left, j_left, l_left, h_left)}.

Obviously, these events are symmetrical, and the ∆* in Eq. (104) does not depend on i. Note that each event asks S to include 4 pairs and exclude 10 pairs. By the quasimetric space construction and the requirement for S to be bad on a left pattern in Eqs. (84) and (85), we can see that (i_left, j_left, l_left, h_left) ∼ (i′_left, j′_left, l′_left, h′_left) only if i_left = i′_left or j_left = j′_left or l_left = l′_left or h_left = h′_left. Therefore, ∆* decomposes into the 14 cases in which the two patterns share exactly one, two, or three of the four indices (Eq. (107)): sharing j_left, i_left, l_left, or h_left alone; sharing one of the six pairs of indices; or sharing one of the four triples of indices. Therefore, to apply Corollary B.2, we need E[X] → ∞ and ∆* ≪ E[X], which gives

    p ≫ n^{−1/2},    1 − p ≫ n^{−1/3}   (114)

as a sufficient condition for S to be bad on some left pattern with probability 1 − o(1).

• S is bad on some right pattern.

Recall that a right pattern is specified by i_right, j_right, l_right, h_right, each with k possible values:

    (x^right_i, y′^right_i, y^right_j, w^right_j, w′^right_l, z_h).

Similarly, we consider the k⁴ = (n/12)⁴ events of the form

    A_{i_right, j_right, l_right, h_right} ≜ {S is bad on the right pattern at (i_right, j_right, l_right, h_right)}.

Again, these events are symmetrical, and the ∆* in Eq. (104) does not depend on i.
Similarly, each event includes 4 pairs and excludes 10 pairs, and ∆* decomposes into the 14 cases in which two right patterns share exactly one, two, or three of the indices i_right, j_right, h_right, l_right (Eq. (117)). Therefore, to apply Corollary B.2, we need E[X] → ∞ and ∆* ≪ E[X] (Eq. (118)), which gives

    p ≫ n^{−3/4},    1 − p ≫ n^{−1/3}   (124)

as a sufficient condition for S to be bad on some right pattern with probability 1 − o(1).

So, by a union bound, as long as

    p ≫ n^{−1/2},   (125)
    1 − p ≫ n^{−1/3},   (126)

S is bad on some left pattern and some right pattern with probability 1 − o(1).

3. Extend to fixed-size training sets and show that, under similar conditions, we sample a training set with the special properties with high probability 1 − o(1).

To extend to fixed-size training sets, we consider the following alteration procedure:

(a) Sample the training set S by independently including each pair with probability p ≜ (m + δ)/n², for some δ > 0.
(b) Show that, with high probability 1 − o(1), we end up with a number of pairs in S within [m, m + 2δ].
(c) Make sure that p satisfies Eq. (125) and Eq. (126), so that S is bad on some left pattern and some right pattern with high probability 1 − o(1).
(d) Randomly discard the additional pairs, and show that with high probability 1 − o(1) this does not affect that S is bad on some left pattern and some right pattern.

We now consider each step in detail:

(a) Sample the training set S by independently including each pair with probability p ≜ (m + δ)/n², for some δ > 0. For this p, the number of pairs in the training set is distributed as Binomial(n², p), with mean m + δ.

(b) Show that, with high probability 1 − o(1), we end up with a number of pairs within [m, m + 2δ]. Standard binomial concentration tells us that

    P[ Binomial(n², p) ∈ [m, m + 2δ] ] = 1 − o(1),

which can be satisfied if δ grows faster than the standard deviation, i.e., δ ≫ √(m + δ) (Eq. (129)).

(c) Make sure that p satisfies Eq. (125) and Eq. (126), so that S is bad on some left pattern and some right pattern with high probability 1 − o(1). Therefore, we want

    (m + δ)/n² ≫ n^{−1/2},    1 − (m + δ)/n² ≫ n^{−1/3}.

(d) Randomly discard the additional pairs, and show that with high probability 1 − o(1) this does not affect that S is bad on some left pattern and some right pattern. Consider any specific bad left pattern and bad right pattern in S. It is sufficient that we do not break these two patterns while discarding. Since we only discard pairs, it suffices to consider only the pairs we want to preserve, which are 8 pairs in total across the two patterns. Each such pair is discarded with probability ≤ 2δ/m, since we remove at most 2δ pairs. By a union bound,

    P[all 8 pairs are preserved] ≥ 1 − 16δ/m,

hence it suffices to make sure that δ ≪ m.

Collecting all requirements, it can be easily verified that choosing δ ≜ n^{1.1} satisfies all conditions. Hence, for a uniformly randomly sampled training set S of size m, S is bad on some left pattern and some right pattern with high probability 1 − o(1), as long as m satisfies the growth conditions above. This is exactly the condition we need to prove the theorem (see Remark B.1). This concludes the proof.

B.3.2 Discussions

Training set size dependency. Intuitively, when the training set has almost all pairs, violation can be lowered by simply fitting the training set well; when it is small and sparse, the learning algorithm may have an easier job finding some consistent quasimetric. Thm. 4.6 shows that, outside these two regimes, algorithms equivariant to orthogonal transforms can fail. Note that for the latter case, Thm. 4.6 requires the training fraction to decrease slower than n^{−1/2}, which rules out training set sizes that are linear in n. We leave improving this result to future work. Nonetheless, Thm. 4.6 still covers common scenarios such as a fixed fraction of all pairs, and highlights that a training-data-agnostic result (such as the ones for PQEs) is not possible for these algorithms.

Proof techniques.
In embedding theory, it is quite standard to analyze quasimetrics as directed graphs, due to their lack of nice metric structure. In the proof of Thm. 4.6, we used abundant techniques from the probabilistic method, which are commonly used for analyzing graph properties in the asymptotic case, including Corollary B.2 from the second moment method, and the alteration technique to extend to fixed-size training sets. While such techniques may be new in learning theory, they are standard for characterizing asymptotic probabilities on graphs, as which quasimetrics are often analyzed (Charikar et al., 2006; Mémoli et al., 2018).

To provide more intuition on why these techniques are useful here, we note that constructing a training set of pairs is essentially like constructing an Erdős–Rényi random graph whose potential edges are the n² ordered pairs. Erdős–Rényi (undirected) random graphs come in two kinds:

• uniformly sampling a fixed number of m edges;
• adding an edge between each pair of vertices with probability p, decided independently.

The latter, due to its independent decisions, is often much easier to analyze and preferred by many. The alteration technique (which we used in the proof) is also a standard way to transfer a result on a random graph of the latter type to a random graph of the former type (Bollobás, 2001). Readers can refer to Alon & Spencer (2004), Bollobás (2001), and Erdős & Rényi (1959) for more in-depth treatment of these topics.
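The alteration technique above can be sketched in a few lines of code: sample each of the n² ordered pairs independently (the second Erdős–Rényi model), check that the count concentrates in [m, m + 2δ], then discard uniformly at random down to exactly m pairs (the first model). The parameters n, m, and δ below are illustrative, chosen so that √(m + δ) ≪ δ ≪ m as in the proof.

```python
import random

random.seed(7)

n = 300                        # number of vertices; n**2 ordered pairs
pairs = [(a, b) for a in range(n) for b in range(n)]
N = len(pairs)                 # N = n**2 potential directed edges

m = 20_000                     # target fixed training-set size
delta = 1_000                  # slack: sqrt(m + delta) << delta << m
p = (m + delta) / N

# Second model: include each ordered pair independently with probability p.
s = [e for e in pairs if random.random() < p]

# Binomial concentration: the count lands in [m, m + 2*delta] w.h.p.
print(m <= len(s) <= m + 2 * delta)

# Alteration: discard uniformly at random down to exactly m pairs,
# yielding a uniform fixed-size sample (the first model).
random.shuffle(s)
s = set(s[:m])
print(len(s) == m)
```

Since any fixed pattern of 8 pairs is hit by the discard step with probability at most 16δ/m ≈ 0.8 here (and o(1) under the proof's asymptotics), a bad left/right pattern present before trimming typically survives it.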