Provable RL with Exogenous Distractors via Multistep Inverse Dynamics

Yonathan Efroni¹, Dipendra Misra¹, Akshay Krishnamurthy¹, Alekh Agarwal²†, John Langford¹
¹Microsoft Research, New York, NY  ²Google

Abstract

Many real-world applications of reinforcement learning (RL) require the agent to deal with high-dimensional observations such as those generated from a megapixel camera. Prior work has addressed such problems with representation learning, through which the agent can provably extract endogenous, latent state information from raw observations and subsequently plan efficiently. However, such approaches can fail in the presence of temporally correlated noise in the observations, a phenomenon that is common in practice. We initiate the formal study of latent state discovery in the presence of such exogenous noise sources by proposing a new model, the Exogenous Block MDP (Ex-BMDP), for rich-observation RL. We start by establishing several negative results, highlighting failure cases of prior representation-learning-based approaches. Then, we introduce the Predictive Path Elimination (PPE) algorithm, which learns a generalization of inverse dynamics and is provably sample- and computationally efficient in Ex-BMDPs when the endogenous state dynamics are near deterministic. The sample complexity of PPE depends polynomially on the size of the latent endogenous state space, while not directly depending on the size of the observation space or the exogenous state space. We provide experiments on challenging exploration problems which show that our approach works empirically.

Introduction

In many real-world applications such as robotics, there can be a large disparity between the size of the agent's observation space (for example, the image generated by the agent's camera) and a much smaller latent state space (for example, the agent's location and orientation) governing the rewards and dynamics.
This size disparity offers an opportunity: how can we construct reinforcement learning (RL) algorithms which learn an optimal policy using a number of samples that scales with the size of the latent state space rather than the size of the observation space? Several families of approaches have been proposed based on solving various ancillary prediction problems, including autoencoding (Tang et al., 2017; Hafner et al., 2019), inverse modeling (Pathak et al., 2017; Burda et al., 2018), and contrastive learning (Laskin et al., 2020). These works have generated significant empirical successes, but are there provable (and hence more reliable) foundations for their success? More generally, what are the right principles for learning with latent state spaces?

In real-world applications, a key issue is robustness to noise in the observation space. When noise comes from the observation process itself, such as measurement error, several approaches have recently been developed to either explicitly identify (Du et al., 2019; Misra et al., 2020; Agarwal et al., 2020a) or implicitly leverage (Jiang et al., 2017) the presence of latent state structure for provably sample-efficient RL. However, in many real-world scenarios, the observations consist of many elements (e.g., weather, lighting conditions) with temporally correlated dynamics (see, e.g., Figure 1 and the example below) that are entirely independent of the agent's actions and rewards. The temporal dynamics of these elements preclude us from treating them as uncorrelated noise, and as such, most previous approaches resort to modeling their dynamics. This is clearly wasteful, as these elements have no bearing on the RL problem being solved.

†Work was done while the author was at Microsoft Research. {yefroni, dimisra, akshaykr, jcl}@microsoft.com, alekhagarwal@google.com

Figure 1: Left: an agent walking next to a pond in a park observes the world as an image.
The world consists of a latent endogenous state, containing variables such as the agent's position, and a much larger latent exogenous state containing variables such as the motion of ducks, ripples in the water, etc. Center: graphical model of the Ex-BMDP. Right: PPE learns a generalized form of inverse dynamics that recovers the endogenous state.

As an example, consider the setting in Figure 1. An agent is walking in a park on a lonely sidewalk next to a pond. The agent's observation space is the image generated by its camera, the latent endogenous state is its position on the sidewalk, and the exogenous noise is generated by the motion of ducks, the swaying of trees, and changes in lighting conditions, all typically unaffected by the agent's actions. While there is a line of recent empirical work that aims to remove causally irrelevant aspects of the observation (Gelada et al., 2019; Zhang et al., 2020), theoretical treatment is quite limited (Dietterich et al., 2018) and no prior work addresses sample-efficient learning with provable guarantees. Given this, the key question is: how can we learn using an amount of data that scales with just the size of the endogenous latent state, while ignoring the temporally correlated exogenous observation noise?

We initiate a formal treatment of RL settings where the learner's observations are jointly generated by a latent endogenous state and an uncontrolled exogenous state, which is unaffected by the agent's actions and does not affect the agent's task. We study a subset of such problems called Exogenous Block MDPs (Ex-BMDPs), where the endogenous state is discrete and decodable from the observations. We first highlight the challenges in solving Ex-BMDPs by illustrating the failures of many prior representation learning approaches (Pathak et al., 2017; Misra et al., 2020; Jiang et al., 2017; Agarwal et al., 2020a; Zhang et al., 2020).
These failures happen either because the method creates too many latent states (such as one for each combination of ducks and passers-by in the example above, leading to sample-inefficient exploration), or because of a lack of exhaustive exploration. We identify one recent approach, developed by Du et al. (2019), with favorable properties for Ex-BMDPs with near-deterministic latent state dynamics. In Section 4 and Section 5, we develop a variation of their algorithm and analyze its performance. The algorithm, called Path Prediction and Elimination (PPE), learns a form of multi-step inverse dynamics by predicting the identity of the path that generates an observation. For near-deterministic Ex-BMDPs, we prove that PPE successfully explores the environment using O((SA)² H log(|F|/δ)) samples, where S is the size of the latent endogenous state space, A is the number of actions, H is the horizon, and F is the function class employed to solve a maximum likelihood problem. Several prior works (Gregor et al., 2016; Paster et al., 2020) have also considered a multi-step inverse dynamics approach to learn a near-optimal policy. However, these works do not consider the Ex-BMDP model, and it is unknown whether these algorithms have provable guarantees like PPE. Theoretical analysis of their performance in the presence of exogenous noise is an interesting direction for future work.

Empirically, in Section 6, we demonstrate the performance of PPE and various prior baselines on a challenging exploration problem with exogenous noise. We show that baselines fail both to decode the endogenous state and to learn a good policy. We further show that PPE is able to recover the latent endogenous model in a visually complex navigation problem, in accordance with the theory.

Exogenous Block MDP Setting

We introduce the novel Exogenous Block Markov Decision Process (Ex-BMDP) setting to model systems with exogenous noise. We describe our notation before formalizing the Ex-BMDP model.

Notation.
For a given set U, we use Δ(U) to denote the set of all probability distributions over U. For a given natural number N, we use [N] to denote the set {1, ..., N}. Lastly, for a probability distribution p, we define its support as supp(p) = {u : p(u) > 0}.

We start by describing the Block Markov Decision Process (BMDP) of Du et al. (2019). This process consists of a finite set of observations X, a set of latent states Z with cardinality Z, a finite set of actions A with cardinality A, a transition function T : Z × A → Δ(Z), a reward function R : X × A → [0, 1], a horizon H ∈ N, an emission function q : Z → Δ(X), and a start state distribution μ ∈ Δ(Z). The agent interacts with the environment by repeatedly generating H-step trajectories (z1, x1, a1, r1, ..., zH, xH, aH, rH), where z1 ∼ μ, and for every h ∈ [H] we have xh ∼ q(· | zh), rh = R(xh, ah), and, if h < H, zh+1 ∼ T(· | zh, ah). The agent does not observe the states (z1, ..., zH), instead receiving only the observations (x1, ..., xH) and rewards (r1, ..., rH). We assume that the emission distributions of any two latent states are disjoint, usually referred to as the block assumption: supp(q(· | z1)) ∩ supp(q(· | z2)) = ∅ when z1 ≠ z2. The agent chooses actions using a policy π : X → Δ(A). We also define the set of non-stationary policies Π_NS as the set of H-length tuples (π1, ..., πH), with π ∈ Π_NS denoting that the action at time step h is taken as ah ∼ πh(· | xh). The value V(π) of a policy π is the expected episodic sum of rewards, V(π) := E_π[Σ_{h=1}^H r(xh, ah)]. The optimal policy is given by π* = arg max_{π ∈ Π_NS} V(π). We denote by P_h(x | π) the probability distribution over observations x at time step h when following a policy π. Lastly, we refer to an open-loop policy as an element {a1, ..., ah} of A^h, the set of all h-length action sequences. An open-loop policy follows a pre-determined sequence of actions for h time steps, unaffected by state information.

Given the aforementioned definitions, we define an Ex-BMDP as follows:

Definition 1 (Exogenous Block Markov Decision Process). An Ex-BMDP is a BMDP such that the latent state can be decoupled into two parts z = (s, ξ), where s ∈ S is the endogenous state and ξ ∈ Ξ is the exogenous state. For z ∈ Z, the initial distribution and transition functions are decoupled, that is: μ(z) = μ(s) μ_ξ(ξ), and T(z' | z, a) = T(s' | s, a) T_ξ(ξ' | ξ).

The observation space of an Ex-BMDP can be arbitrarily large, modeling, for instance, a high-dimensional real vector representing an image, sound, or haptic data. The endogenous state s captures the information that can be manipulated by the agent. Figure 1, center, visualizes the factorization of the transition dynamics. We assume that the set of all endogenous states S is finite with cardinality S. The exogenous state ξ captures all the other information, which the agent cannot control and which does not affect the information it can manipulate. We make no assumptions on the exogenous dynamics nor on the cardinality |Ξ|, which may be arbitrarily large. We note that the block assumption of the Ex-BMDP implies the existence of two inverse mappings: φ* : X → S, mapping an observation to its endogenous state, and φ*_ξ : X → Ξ, mapping it to its exogenous state.

Justification of assumptions. The block assumption has been made in prior work (e.g., Du et al. (2019); Zhang et al. (2020)) to model many real-world settings where the observation is rich, i.e., it contains enough information to decode the latent state. The decoupled dynamics assumption made in the Ex-BMDP setting is a natural way to characterize exogenous noise: the type of noise that is not affected by our actions and does not affect the endogenous state, but may have non-trivial dynamics.
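The decoupled structure of Definition 1 can be made concrete with a toy simulator. The sketch below assumes a deterministic endogenous chain, a small exogenous Markov chain, and a simple concatenated emission satisfying the block assumption; all of these concrete choices (class and attribute names included) are illustrative, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

class ExBMDP:
    """Toy Ex-BMDP: the latent state z = (s, xi) factors into an endogenous
    part s (moved by actions) and an exogenous part xi (ignores actions)."""

    def __init__(self, n_s=3, n_xi=5, n_a=2):
        self.n_s, self.n_xi, self.n_a = n_s, n_xi, n_a
        # Deterministic endogenous dynamics T(s' | s, a).
        self.T_s = rng.integers(0, n_s, size=(n_s, n_a))
        # Exogenous Markov chain T_xi(xi' | xi), independent of actions.
        P = rng.random((n_xi, n_xi))
        self.T_xi = P / P.sum(axis=1, keepdims=True)

    def reset(self):
        self.s, self.xi = 0, int(rng.integers(self.n_xi))
        return self._emit()

    def step(self, a):
        self.s = int(self.T_s[self.s, a])                        # action moves s only
        self.xi = int(rng.choice(self.n_xi, p=self.T_xi[self.xi]))  # xi drifts on its own
        return self._emit()

    def _emit(self):
        # Block assumption: s is exactly decodable from the observation,
        # while the exogenous part is carried in noisy extra coordinates.
        x = np.zeros(self.n_s + self.n_xi)
        x[self.s] = 1.0
        x[self.n_s + self.xi] = 1.0 + 0.1 * rng.standard_normal()
        return x
```

Note how `step` never lets the action `a` touch `xi`, which is exactly the factorization T(z' | z, a) = T(s' | s, a) T_ξ(ξ' | ξ).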
This decoupling captures the movement of the ducks in the visual field of the agent in Figure 1, and many additional exogenous processes (e.g., the movement of clouds in a navigation task).

Goal. Our formal objective is reward-free learning. We wish to find a set of policies, called a policy cover, that can be used to explore the entire state space. Given a policy cover, for any reward function we can find a near-optimal policy by applying dynamic programming (e.g., Bagnell et al. (2004)), policy optimization (e.g., Kakade and Langford (2002); Agarwal et al. (2020b); Shani et al. (2020)), or value-based methods (e.g., Antos et al. (2008)).

Definition 2 (α-policy cover). Let Ψh be a finite set of non-stationary policies. We say Ψh is an α-policy cover for the h-th time step if, for all z ∈ Z, it holds that max_{π ∈ Ψh} P_h(z | π) ≥ max_{π ∈ Π_NS} P_h(z | π) − α. If α = 0, we call Ψh a policy cover.

For standard BMDPs, the policy cover is simply a set of policies that reaches each latent state of the BMDP (Du et al., 2019; Misra et al., 2020; Agarwal et al., 2020a). Thus, for a BMDP, the cardinality of the policy cover scales with |Z|. The structure of Ex-BMDPs allows us to reduce the size of the policy cover significantly, from |Z| to |S| ≪ |Z|, when the size of the exogenous state space is large. Specifically, we show that the set of policies that reach each endogenous state, and that do not depend on the exogenous part of the state, is also a policy cover (see Appendix B, Proposition 4).

Failures of Prior Approaches

We now describe the limitations of prior RL approaches in the presence of exogenous noise. We provide an intuitive analysis here and defer formal statements and proofs to Appendix A.

Limitation of noise-contrastive learning. Noise-contrastive learning has been used in RL to learn a state abstraction by exploiting temporal information. Specifically, the HOMER algorithm (Misra et al., 2020) trains a model to distinguish between real and imposter transitions.
This is done by collecting a dataset of quads (x, a, x', y), where y = 1 means the transition (x, a, x') was observed and y = 0 means that it was not. HOMER then trains a model p_θ(y | x, a, φ_θ(x')) with parameters θ on this dataset, predicting whether a given transition was observed or not. This provides a state abstraction φ_θ : X → [N] for exploring the environment, and HOMER can provably solve Block MDPs. Unfortunately, in the presence of exogenous noise, HOMER distinguishes between two transitions that connect the same latent endogenous states but different exogenous states. In our walk-in-the-park example, even if the agent moves between the same two points in two transitions, the model may be able to tell the transitions apart by looking at the positions of the ducks, which may behave differently in the two transitions. This results in HOMER creating O(|Z|) abstract states. We call this the under-abstraction problem.

Limitation of inverse dynamics. Another common approach in empirical works is based on modeling the inverse dynamics of the system, such as the ICM module of Pathak et al. (2017). In such approaches, we learn a representation by using consecutive observations to predict the action that was taken between them. Such a representation can ignore all information that is not relevant for action prediction, which includes all exogenous/uncontrollable information. However, it can also ignore controllable information, which may result in a failure to sufficiently explore the environment. In this sense, inverse dynamics approaches suffer from an over-abstraction problem, where observations from different endogenous states can be mapped to the same abstract state. The over-abstraction problem was described by Misra et al. (2020) for the case where the starting state is random. In Appendix A.3, we show that inverse dynamics may over-abstract even when the initial starting state is deterministic.
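The contrastive data collection behind the HOMER-style objective can be sketched as follows. This is a minimal illustrative helper, not the paper's or HOMER's actual code; the function name and quad layout are our own.

```python
import random

random.seed(0)

def make_contrastive_batch(transitions, n_imposters=1):
    """Build (x, a, x', y) quads for a HOMER-style contrastive objective:
    y = 1 for observed transitions, y = 0 for imposters whose x' is taken
    from a different, randomly chosen transition."""
    batch = []
    for (x, a, x_next) in transitions:
        batch.append((x, a, x_next, 1))           # real transition
        for _ in range(n_imposters):
            _, _, fake = random.choice(transitions)
            batch.append((x, a, fake, 0))         # imposter transition
    return batch
```

The under-abstraction failure is visible here: because x' carries exogenous detail (e.g., duck positions), a powerful classifier can separate two y = 1 quads that represent the very same endogenous move, so the learned abstraction splits one endogenous state into many.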
Limitation of bisimulation. Zhang et al. (2020) proposed learning a bisimulation metric to obtain a representation that is invariant to exogenous noise. Unfortunately, it is known that a bisimulation metric cannot be learned in a sample-efficient manner (Modi et al. (2020), Proposition B.1). Intuitively, when the reward is the same everywhere, bisimulation merges all states into a single abstract state. This creates an over-abstraction problem in sparse-reward settings, since the agent may falsely merge all states into a single abstract state until it receives a non-trivial reward.

Bellman rank might depend on |Ξ|. The Bellman rank was introduced by Jiang et al. (2017) as a complexity measure for the learnability of an RL problem with function approximation. To date, most learnable RL problems have a small Bellman rank. However, we show in Appendix A that the Bellman rank of an Ex-BMDP can scale as O(|Ξ|). This shows that the Ex-BMDP is a highly non-trivial setting, as we do not even have sample-efficient algorithms for it, let alone computationally efficient ones.

In Appendix A we also describe the failures of FLAMBE (Agarwal et al., 2020a) and autoencoding-based approaches (Tang et al., 2017).

Reinforcement Learning for Ex-BMDPs

In this section, we present the Predictive Path Elimination (PPE) algorithm, which we later show can provably solve any Ex-BMDP with nearly deterministic dynamics and start state distribution of the endogenous state, while making no assumptions on the dynamics or start state distribution of the exogenous state (Algorithm 1).

Algorithm 1 PPE(δ, η): Predictive Path Elimination. Input: failure probability δ, stochasticity level η, function class F.
1: Set Ψ1 = {∅}, where ∅ denotes an empty path.
2: for h = 2, ..., H do
3:   Set N = 16 (|Ψ_{h−1} ∘ A|)² log(|F|/δ) (up to η-dependent constants).
4:   Collect a dataset D of N i.i.d. tuples (x, υ), where υ ∼ Unf(Ψ_{h−1} ∘ A) and x ∼ P_h(x | υ).
5:   Solve the multi-class classification problem: f̂_h = arg max_{f ∈ F} (1/N) Σ_{(x,υ) ∈ D} ln f(idx(υ) | x).
6:   for i < j do
7:     Calculate the path prediction gap: Δ̂(i, j) = (1/N) Σ_{(x,υ) ∈ D} |f̂_h(i | x) − f̂_h(j | x)|.
8:     If Δ̂(i, j) is below a threshold determined by η, eliminate the path υ with idx(υ) = j. // υi and υj visit the same state
9:   Ψ_h is defined as the set of all paths in Ψ_{h−1} ∘ A that have not been eliminated in line 8.

Before describing PPE, we highlight that it can be thought of as a computationally efficient and simpler alternative to Algorithm 4 of Du et al. (2019), who studied the rich-observation setting without exogenous noise.¹

PPE iterates over the time steps h ∈ {2, ..., H}. In the h-th iteration, it learns a policy cover Ψ_h for time step h containing open-loop policies. This is done by first augmenting the policy cover for the previous time step by one step. Formally, we define Υ_h = Ψ_{h−1} ∘ A = {π ∘ a : π ∈ Ψ_{h−1}, a ∈ A}, where π ∘ a is an open-loop policy that follows π until time step h − 1 and then takes action a. Since we assume the transition dynamics to be near-deterministic, we know that there exists a policy cover for time step h that is a subset of Υ_h and whose size is equal to the number of reachable states at time step h. Further, as the transitions are near-deterministic, we refer to an open-loop policy as a path, since we can view the policy as tracing a path in the latent transition model. PPE works by eliminating paths in Υ_h so that we are left with just a single path for each reachable state. This is done by collecting a dataset D of tuples (x, υ), where υ is sampled uniformly from Υ_h and x ∼ P_h(x | υ) (line 4). We train a classifier f̂_h by predicting the index idx(υ) of the path υ from the observation x (line 5). Indices of paths in Υ_h are computed with respect to Υ_h and remain fixed throughout training. Intuitively, if f̂_h(i | x) is sufficiently large, then we can hope that the path υi visits the state φ*(x).
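The elimination loop of Algorithm 1 (lines 6-8) can be sketched in a few lines. This is an illustrative skeleton under simplifying assumptions: `f_hat(i, x)` stands for the trained classifier's predicted probability of path index i given observation x, and the concrete `threshold` argument stands in for the paper's η-dependent test.

```python
import numpy as np

def ppe_eliminate(f_hat, data, paths, threshold):
    """One elimination round of a PPE-style procedure.

    f_hat(i, x): estimated P(path index i | observation x).
    data:        list of (x, idx) samples drawn by uniformly chosen paths.
    Returns the surviving paths (the next policy cover Psi_h)."""
    keep = list(range(len(paths)))
    for j in range(len(paths)):
        for i in range(j):
            if i not in keep or j not in keep:
                continue
            # Empirical path-prediction gap Delta(i, j) (Algorithm 1, line 7).
            gap = np.mean([abs(f_hat(i, x) - f_hat(j, x)) for x, _ in data])
            if gap <= threshold:
                # Paths i and j appear to reach the same endogenous state:
                # eliminate the higher-indexed one (line 8).
                keep.remove(j)
                break
    return [paths[k] for k in keep]
```

The quadratic pairwise loop mirrors the `for i < j` structure of Algorithm 1; eliminated indices are simply skipped in later comparisons.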
Further, we can view this prediction problem as learning a multistep inverse dynamics model, since the open-loop policy contains information about all previous actions and not just the last action. For every pair of paths in Υ_h, we first compute a path prediction gap Δ̂ (line 7). If the gap is too small, we show that this implies the two paths reach the same endogenous state, hence we can eliminate one redundant path from the pair (line 8). Finally, Ψ_h is defined as the set of all paths in Υ_h which were not eliminated. PPE reduces RL to solving H standard classification problems. Further, the algorithm is very simple and in practice requires just a single hyperparameter (N). We believe these properties make it well-suited for many problems.

Recovering an endogenous state decoder. We can recover an endogenous state decoder φ̂_h for each time step h ∈ {2, ..., H} directly from f̂_h, as shown below:

φ̂_h(x) = min{ i ∈ [|Υ_h|] : f̂_h(i | x) = max_j f̂_h(j | x) }.

Intuitively, this assigns the observation to the path with the smallest index among those with the highest chance of visiting x, and therefore φ*(x). We implicitly use the decoder for exploring, since we rely on f̂_h for making planning decisions. We evaluate the accuracy of this decoder in Section 6.

Recovering the latent transition dynamics. PPE can also be used to recover the latent endogenous transition dynamics. The direct way is to use the learned decoder φ̂_h along with episodes collected by PPE during the course of training and perform count-based estimation. However, for most problems, recovering an approximate deterministic transition dynamics suffices, which can be directly read

¹Algorithm 4 has time complexity O(S⁴A⁴H), compared to O(S³A³H) for PPE. Furthermore, Algorithm 4 requires an upper bound on S, whereas PPE is adaptive to it. Lastly, Du et al. (2019) assumed a deterministic setting, while we provide a generalization to near-determinism.
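The decoder formula above is a min-argmax over the classifier's outputs; a direct sketch, assuming the classifier's probabilities for one observation are given as a vector:

```python
import numpy as np

def decode_state(f_hat_row):
    """Endogenous-state decoder recovered from the path classifier:
    phi_hat(x) = smallest path index i attaining max_j f_hat(j | x).
    f_hat_row holds the predicted probabilities f_hat(. | x) for one x."""
    probs = np.asarray(f_hat_row, dtype=float)
    return int(np.flatnonzero(probs == probs.max())[0])
```

Taking the smallest maximizing index makes the decoder deterministic even when several redundant paths tie, which is exactly why ties are harmless: tied paths reach the same endogenous state.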
from the path elimination data. We accomplish this by recovering a partition of the paths in Ψ_{h−1} × A, where two paths in the same partition set are said to be merged with each other. In the beginning, each path is merged only with itself. When we eliminate a path υj on comparison with υi in line 8, all paths currently merged with υj get merged with υi. We then define an abstract state space Ŝ_h for time step h that contains an abstract state j for each path υj ∈ Ψ_h. Further, we recover a latent deterministic transition dynamics T̂_{h−1} : Ŝ_{h−1} × A → Ŝ_h for time step h − 1, where we set T̂_{h−1}(i, a) = j if the path υ'_i ∘ a, with υ'_i ∈ Ψ_{h−1}, gets merged with the path υj ∈ Ψ_h.

Learning a near-optimal policy given a policy cover. PPE runs in a reward-free setting. However, the recovered policy cover and dynamics can be directly used to optimize any given reward function with existing methods. If the reward function depends on the exogenous state, we can use the PSDP algorithm (Bagnell et al., 2004) to learn a near-optimal policy. PSDP is a model-free dynamic programming method that only requires a policy cover as input (see Appendix D.1 for details). If the reward function depends only on the endogenous state, we can use a computationally cheaper value iteration (VI) that uses the recovered transition dynamics. VI is a model-based algorithm that estimates the reward for each state and action, and performs dynamic programming on the model (see Appendix D.2 for details). In either case, the sample complexity of learning a near-optimal policy given the output of PPE scales with the size of the endogenous, and not the exogenous, state space.

Theoretical Analysis and Discussion

We provide the main sample complexity guarantee for PPE, as well as additional intuition for why it works. We analyze the algorithm in near-deterministic MDPs, defined as follows: two transition functions T1 and T2 are η-close if, for all h ∈ [H], s ∈ S_h, and a ∈ A, it holds that ||T1(· | s, a) − T2(· | s, a)||_1 ≤ η.
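The path-merging bookkeeping described above can be sketched as a tiny union-style structure. The class name and representative-based encoding are our own illustrative choices; the point is that each elimination event in line 8 becomes one merge, and the group representatives then serve as abstract state ids for reading off T̂.

```python
class PathMerger:
    """Track which paths have been merged via elimination events."""

    def __init__(self, n_paths):
        # In the beginning, each path is merged only with itself.
        self.rep = list(range(n_paths))

    def merge(self, i, j):
        """Path j was eliminated on comparison with path i: everything
        currently merged with j becomes merged with i."""
        old, new = self.rep[j], self.rep[i]
        self.rep = [new if r == old else r for r in self.rep]

    def abstract_state(self, j):
        """Abstract state id of path j (its group representative)."""
        return self.rep[j]
```

Given such a merger per time step, the deterministic dynamics are read off as T̂_{h−1}(i, a) = abstract state of the path υ_i ∘ a at step h.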
Analogously, two starting distributions μ1 and μ2 are η-close if ||μ1 − μ2||_1 ≤ η. We emphasize that near-deterministic dynamics are common in real-world applications like robotics.

Assumption 1 (Near-deterministic endogenous dynamics). We assume the endogenous dynamics is η-close to a deterministic model (μ_{d,η}, T_{d,η}).

We make a realizability assumption for the regression problem solved by PPE (line 5). We assume that F is expressive enough to represent the Bayes optimal classifier of each of the regression problems created by PPE.

Assumption 2 (Realizability). For any h ∈ [H] and any set of paths Υ ⊆ A^h with |Υ| ≤ SA, where A^h denotes the set of all paths of length h, there exists f*_{Υ,h} ∈ F such that

f*_{Υ,h}(idx(υ) | x) = P_h(φ*(x) | υ) / Σ_{υ' ∈ Υ} P_h(φ*(x) | υ'), for all υ ∈ Υ and x with Σ_{υ' ∈ Υ} P_h(φ*(x) | υ') > 0.

Realizability assumptions are common in theoretical analyses (e.g., Misra et al. (2020); Agarwal et al. (2020a)). In practice, we use expressive neural networks to solve the regression problem, so we expect the realizability assumption to hold. Note that there are at most A^{S(H+1)} Bayes classifiers for the different prediction problems. However, this is acceptable since our guarantees scale with ln |F|, and the function class can therefore be exponentially large to accommodate all of them.

We now state the formal sample complexity guarantee for PPE.

Theorem 1 (Sample complexity). Fix δ ∈ (0, 1). Then, with probability greater than 1 − δ, PPE returns a policy cover {Ψ_h}_{h=1}^H such that, for any h ∈ [H], Ψ_h is an ηh-policy cover for time step h and |Ψ_h| ≤ S, which gives the total number of episodes used by PPE as O(S²A²H ln(S A H |F| / δ)).

We defer the proof to Appendix C. Our sample complexity guarantee does not depend directly on the size of the observation space or the exogenous state space.
Further, since our analysis only uses standard uniform convergence arguments, it extends straightforwardly to infinitely large function classes by replacing ln |F| with other suitable complexity measures, such as Rademacher complexity.

Why does PPE work? We provide an asymptotic analysis to explain why PPE works. Consider a deterministic setting and the h-th iteration of PPE. Assume by induction that Ψ_{h−1} is an exact policy cover for time step h − 1. Therefore, Υ_h = Ψ_{h−1} ∘ A is also a policy cover for time step h. However, it may contain redundancies: it may contain several paths that reach the same endogenous state. We now show how a generalized inverse dynamics objective can eliminate such redundant paths.

Figure 2: Results on combination lock. (a) Combination lock (H = 2): the latent transition dynamics of combination lock (observations are not shown for brevity). (b) Regret plot: the minimal number of episodes needed to achieve a mean regret of at most V(π*)/2. (c) Decoding accuracy: state decoding accuracy (in percent) of decoders learned by different methods. Solid lines indicate no exogenous dimension, while dashed lines indicate an exogenous dimension of 100.

Let P_h(ξ) denote the distribution over exogenous states at time step h, which is independent of the agent's policy. The Bayes optimal classifier (f*_h := f_{Υ_h,h}) of the prediction problem can be derived as:

f*_h(idx(υ) | x) := P_h(υ | x) = P_h(x | υ) P(υ) / Σ_{υ'} P_h(x | υ') P(υ') =(a) P_h(x | υ) / Σ_{υ'} P_h(x | υ') =(b) P_h(φ*(x) | υ) / Σ_{υ'} P_h(φ*(x) | υ'),

where (a) holds since all paths in Υ_h are chosen uniformly, and (b) critically uses the fact that for any open-loop policy υ we have the factorization property P_h(x | υ) = q(x | φ*(x), φ*_ξ(x)) P_h(φ*(x) | υ) P_h(φ*_ξ(x)).

Let υ1, υ2 ∈ Υ_h be two paths with indices i and j, respectively. We define their exact path prediction gap as Δ(i, j) := E_{x_h}[|f*_h(i | x_h) − f*_h(j | x_h)|].
Assume that υ1 visits an endogenous state s at time step h and denote by ω(s) the number of paths in Υ_h that reach s. Then f*_h(i | x_h) = 1/ω(s) if φ*(x_h) = s, and 0 otherwise. If υ2 also visits s at time step h, then f*_h(i | x_h) = f*_h(j | x_h) for all x_h. This implies Δ(i, j) = 0, and PPE will filter out the path with the higher index, since it detects that both paths reach the same endogenous state. Conversely, let υ2 visit a different state at time step h. If x is an observation that maps to s, then f*_h(i | x) = 1/ω(s) and f*_h(j | x) = 0. This gives |f*_h(i | x) − f*_h(j | x)| ≥ 1/ω(s) and, consequently, Δ(i, j) > 0. Thus, PPE will not eliminate these paths upon comparison. Our complete analysis in the appendix generalizes the above reasoning to the finite-sample setting, where we can only approximate f*_h and Δ, as well as to Ex-BMDPs with near-deterministic dynamics. As is evident, the analysis critically relies on the factorization property, which holds for open-loop policies but not for arbitrary ones. This is the reason we build a policy cover with open-loop policies.

Experiments

We evaluate PPE on two domains: a challenging exploration problem called combination lock, to test whether PPE can learn an optimal policy and an accurate state decoder, and a visual grid-world with complex visual representations, to test whether PPE is able to recover the latent dynamics.

Combination lock experiments. The combination lock problem is defined for a given horizon H by an endogenous state space {s_{h,a}, s_{h,b}, s_{h,c}}_{h=2}^H, an action space with 10 actions, an exogenous state space Ξ = {0, 1}^d, and a deterministic endogenous start state s_{1,a}. For any state s_{h,g}, we call g its type, which can be a, b, or c. States of types a and b are considered good states, and those of type c are considered bad states.
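The 1/ω(s) argument above is easy to verify numerically. The sketch below assumes deterministic dynamics, encodes each path only by the endogenous state it reaches, and replaces the expectation over x_h by an average over states; all function names are illustrative.

```python
import numpy as np

def bayes_path_probs(reach):
    """Bayes-optimal path classifier under deterministic dynamics:
    reach[i] = endogenous state visited by path i at step h.
    f*(i | x with state s) = 1/omega(s) if reach[i] == s, else 0,
    where omega(s) = number of paths reaching s."""
    reach = np.asarray(reach)

    def f_star(i, s):
        omega = int(np.sum(reach == s))
        return 1.0 / omega if reach[i] == s else 0.0

    return f_star

def gap(f_star, i, j, states):
    """Exact path-prediction gap Delta(i, j) = E_x |f*(i|x) - f*(j|x)|,
    with the expectation replaced by a uniform average over states."""
    return float(np.mean([abs(f_star(i, s) - f_star(j, s)) for s in states]))
```

With `reach = [0, 0, 1]`, paths 0 and 1 share a state, so their gap is exactly zero (eliminate one); paths 0 and 2 do not, so their gap is strictly positive (keep both), matching the asymptotic argument.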
Each instance of this problem is defined by two good action sequences (a_h) and (a'_h) with a_h ≠ a'_h, which are chosen uniformly at random and kept fixed throughout. At h = 1, the agent is in s_{1,a}; action a_1 leads to s_{2,a}, action a'_1 leads to s_{2,b}, and all other actions lead to s_{2,c}. For h ≥ 2, taking action a_h in s_{h,a} leads to s_{h+1,a}, and taking action a'_h in s_{h,b} leads to s_{h+1,b}. In all other cases, taking an action in a state s_{h,g} transitions to the next bad state s_{h+1,c}. We visualize the latent endogenous dynamics in Figure 2a. The exogenous state evolves as follows. We set ξ1 ∈ {0, 1}^d, with each coordinate sampled independently. At time step h, ξ_h is generated from ξ_{h−1}
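The endogenous combination-lock dynamics described above can be sketched as a small environment. This is an illustrative reimplementation of the latent chain only (no observations, no exogenous process); the class and attribute names are our own, and we pick a'_h = a_h + 1 mod A merely as one concrete way to guarantee a_h ≠ a'_h.

```python
import random

random.seed(0)

class CombinationLock:
    """Latent combination-lock chain: states s_{h,a}, s_{h,b}, s_{h,c};
    one good action per good state advances along its chain, every other
    action drops into the absorbing bad chain."""

    def __init__(self, horizon, n_actions=10):
        self.H, self.A = horizon, n_actions
        # Hidden good action sequences for chains a and b (a_h != a'_h).
        self.good_a = [random.randrange(n_actions) for _ in range(horizon)]
        self.good_b = [(g + 1) % n_actions for g in self.good_a]

    def reset(self):
        self.h, self.state = 1, "a"   # deterministic start s_{1,a}
        return self.state

    def step(self, action):
        if self.state == "a" and self.h == 1:
            # From s_{1,a}: a_1 -> chain a, a'_1 -> chain b, else bad.
            self.state = ("a" if action == self.good_a[0]
                          else "b" if action == self.good_b[0] else "c")
        elif self.state == "a":
            self.state = "a" if action == self.good_a[self.h - 1] else "c"
        elif self.state == "b":
            self.state = "b" if action == self.good_b[self.h - 1] else "c"
        else:
            self.state = "c"          # bad states are absorbing
        self.h += 1
        return self.state
```

A single wrong action anywhere sends the agent to the absorbing bad chain, which is what makes exploration in this domain hard.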
Gradient Step Denoiser for Convergent Plug-and-Play

Samuel Hurault*, Arthur Leclaire & Nicolas Papadakis
Univ. Bordeaux, Bordeaux INP, CNRS, IMB, UMR 5251, F-33400 Talence, France

Abstract

Plug-and-Play (PnP) methods constitute a class of iterative algorithms for imaging problems where regularization is performed by an off-the-shelf denoiser. Although PnP methods can lead to tremendous visual performance for various image problems, the few existing convergence guarantees are based on unrealistic (or suboptimal) hypotheses on the denoiser, or are limited to strongly convex data-fidelity terms. We propose a new type of PnP method, based on half-quadratic splitting, for which the denoiser is realized as a gradient descent step on a functional parameterized by a deep neural network. Exploiting convergence results for proximal gradient descent algorithms in the nonconvex setting, we show that the proposed PnP algorithm is a convergent iterative scheme that targets stationary points of an explicit global functional. Besides, experiments show that it is possible to learn such a deep denoiser without compromising performance in comparison to other state-of-the-art deep denoisers used in PnP schemes. We apply our proximal gradient algorithm to various ill-posed inverse problems, e.g., deblurring, super-resolution, and inpainting. For all these applications, numerical results empirically confirm the convergence results. Experiments also show that this new algorithm reaches state-of-the-art performance, both quantitatively and qualitatively.

Introduction

Image restoration (IR) problems can be formulated as inverse problems of the form

x* ∈ arg min_x f(x) + λ g(x),    (1)

where f is a term measuring fidelity to a degraded observation y, and g is a regularization term weighted by a parameter λ ≥ 0. Generally, the degradation of a clean image x̂ can be modeled by a linear operation y = Ax̂ + ξ, where A is a degradation matrix and ξ a white Gaussian noise.
In this context, the maximum a posteriori (MAP) derivation relates the data-fidelity term to the likelihood, f(x) = −log p(y|x) = (1/2σ²) ||Ax − y||², while the regularization term is related to the chosen prior. Regularization is crucial, since it tackles the ill-posedness of the IR task by bringing a priori knowledge on the solution. A lot of research has been dedicated to designing accurate priors g. Among the most classical priors, one can single out total variation (Rudin et al., 1992), wavelet sparsity (Mallat, 2009), or patch-based Gaussian mixtures (Zoran & Weiss, 2011). Designing a relevant prior g is a difficult task, and recent approaches instead apply deep learning techniques to directly learn a prior from a database of clean images (Lunz et al., 2018; Prost et al., 2021; González et al., 2021).

Generally, the problem (1) does not have a closed-form solution, and an optimization algorithm is required. First-order proximal splitting algorithms (Combettes & Pesquet, 2011) operate individually on f and g via the proximity operator prox_f(x) = arg min_z (1/2)||x − z||² + f(z). Among them, half-quadratic splitting (HQS) (Geman & Yang, 1995) alternately applies the proximal operators of f and g. Proximal methods are particularly useful when either f or g is nonsmooth. Plug-and-Play (PnP) methods (Venkatakrishnan et al., 2013) build on proximal splitting algorithms by replacing the proximity operator of g with a generic denoiser, e.g., a pretrained deep network.

*Corresponding author: samuel.hurault@math.u-bordeaux.fr

These methods achieve state-of-the-art results (Buzzard et al., 2018; Ahmad et al., 2020; Yuan et al., 2020; Zhang et al., 2021) in various IR problems. However, since a generic denoiser cannot generally be expressed as a proximal mapping (Moreau, 1965), convergence results, which stem from the properties of the proximal operator, are difficult to obtain. Moreover, the regularizer g is only made implicit via the denoising operation.
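The PnP-HQS alternation described above (proximal step on the data term, denoiser in place of prox_g) can be sketched in a few lines. This is a toy dense-matrix version under our own simplifications: the `prox_l2` solve is done with explicit normal equations (real implementations use FFTs or conjugate gradients), and `denoiser` is any callable standing in for the plugged-in network.

```python
import numpy as np

def prox_l2(x, y, A, sigma2, tau):
    """Proximal step for the data term f(x) = ||Ax - y||^2 / (2 sigma^2):
    prox_{tau f}(x) = argmin_z (1/2)||x - z||^2 + tau f(z),
    solved in closed form via (I + (tau/sigma^2) A^T A) z = x + (tau/sigma^2) A^T y."""
    n = A.shape[1]
    lhs = np.eye(n) + (tau / sigma2) * A.T @ A
    rhs = x + (tau / sigma2) * A.T @ y
    return np.linalg.solve(lhs, rhs)

def pnp_hqs(x0, y, A, denoiser, sigma2=1.0, tau=1.0, iters=20):
    """Plug-and-play HQS: alternate the data-fidelity proximal step with an
    off-the-shelf denoiser standing in for prox_{tau g}."""
    x = x0
    for _ in range(iters):
        z = prox_l2(x, y, A, sigma2, tau)   # data-consistency half step
        x = denoiser(z)                     # regularization half step
    return x
```

With `A = I` and an identity denoiser, each iteration halves the distance to `y`, so the scheme converges to the observation, which is a useful sanity check of the alternation.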
therefore, pnp algorithms do not seek the minimization of an explicit objective functional, which strongly limits their interpretation and numerical control. in order to keep the tractability of a minimization problem, romano et al. (2017) proposed, with regularization by denoising (red), an explicit prior g that exploits a given generic denoiser d in the form g(x) = (1/2)⟨x, x − d(x)⟩. with strong assumptions on the denoiser (in particular a symmetric jacobian assumption), they show that it verifies ∇_x g(x) = x − d(x). (3) such a denoiser is then plugged in gradient-based minimization schemes. despite having shown very good results on various image restoration tasks, as later pointed out by reehorst & schniter (2018) or saremi (2019), existing deep denoisers lack jacobian symmetry. hence, red does not minimize an explicit functional and is not guaranteed to converge. contributions. in this work, we develop a pnp scheme with novel theoretical convergence guarantees and state-of-the-art ir performance. departing from the pnp-hqs framework, we plug a denoiser that inherently satisfies equation (3) without sacrificing the denoising performance. the resulting fixed-point algorithm is guaranteed to converge to a stationary point of an explicit functional. this convergence guarantee does not require strong convexity of the data-fidelity term, thus encompassing ill-posed ir tasks like deblurring, super-resolution or inpainting. related works pnp methods have been successfully applied in the literature with various splitting schemes: hqs (zhang et al., 2017b; 2021), admm (romano et al., 2017; ryu et al., 2019), proximal gradient descent (pgd) (terris et al., 2020). first used with classical non-deep denoisers such as bm3d (chan et al., 2016) and pseudo-linear denoisers (nair et al., 2021; gavaskar et al., 2021), more recent pnp approaches (meinhardt et al., 2017; ryu et al., 2019) rely on efficient off-the-shelf deep denoisers such as dncnn (zhang et al., 2017a).
state-of-the-art ir results are currently obtained with denoisers that are specifically designed to be integrated in pnp schemes, like ircnn (zhang et al., 2017b) or drunet (zhang et al., 2021). though providing excellent restorations, such schemes are not guaranteed to converge for all kinds of denoisers or ir tasks. designing convergence proofs for pnp algorithms is an active research topic. sreehari et al. (2016) used the proximal theorem of moreau (moreau, 1965) to give sufficient conditions for the denoiser to be an explicit proximal map, which are applied to a pseudo-linear denoiser. the convergence with pseudo-linear denoisers has been extensively studied (gavaskar & chaudhury, 2020; nair et al., 2021; chan, 2019). however, state-of-the-art pnp results are obtained with deep denoisers. various assumptions have been made to ensure the convergence of the related pnp schemes. with a "bounded denoiser" assumption, chan et al. (2016); gavaskar & chaudhury (2019) showed convergence of pnp-admm with stepsizes decreasing to 0. red (romano et al., 2017) and red-pro (cohen et al., 2021) respectively consider the classes of denoisers with symmetric jacobian or demicontractive mappings, but these conditions are either too restrictive or hard to verify in practice. in appendix a.3, more details are given on red-based methods. many works focus on lipschitz properties of pnp operators. depending on the splitting algorithm in use, convergence can be obtained by assuming the denoiser is averaged (sun et al., 2019b), firmly nonexpansive (sun et al., 2021; terris et al., 2020) or simply nonexpansive (reehorst & schniter, 2018; liu et al., 2021). these settings are unrealistic as deep denoisers do not generally satisfy such properties. ryu et al. (2019); terris et al. (2020) propose different ways to train deep denoisers with constrained lipschitz constants, in order to fit the technical properties required for convergence.
but imposing hard lipschitz constraints on the network alters its denoising performance (bohra et al., 2021; hertrich et al., 2021). yet, ryu et al. (2019) manages to get a convergent pnp scheme without assuming the nonexpansiveness of d. this comes at the cost of imposing strong convexity on the data-fidelity term f, which excludes many ir tasks like deblurring, super-resolution or inpainting. hence, given the ill-posedness of ir problems, looking for a unique solution via contractive operators is a restrictive assumption. in this work, we do not impose contractiveness, but still obtain convergence results with realistic hypotheses. one can relate the ideal deep denoiser to the "true" natural image prior p via tweedie's identity. in (efron, 2011), it is indeed shown that the minimum mean square error (mmse) denoiser d∗_σ (at noise level σ) verifies d∗_σ(x) = x + σ²∇_x log p_σ(x) where p_σ is the convolution of p with the density of n(0, σ² id). in a recent line of research (bigdeli et al., 2017; xu et al., 2020; laumont et al., 2021; kadkhodaie & simoncelli, 2020), this relation is used to plug a denoiser in gradient-based dynamics. in practice, the mmse denoiser cannot be computed explicitly and tweedie's identity does not hold for deep approximations of the mmse. in order to be as exhaustive as possible, we detail the addressed limitations of existing pnp methods in appendix a.1. the gradient step plug-and-play the proposed method is based on the pnp version of half-quadratic splitting (pnp-hqs) that amounts to replacing the proximity operator of the prior g with an off-the-shelf denoiser d_σ. in order to define a convergent pnp scheme, we first set up in section 3.1 a gradient step (gs) denoiser. we then introduce the gradient step pnp (gs-pnp) algorithm in section 3.2. gradient step denoiser we propose to plug a denoising operator d_σ that takes the form of a gradient descent step d_σ = id − ∇g_σ, (4) with g_σ : rⁿ → r. contrary to romano et al.
(2017), our denoiser exactly represents a conservative vector field. the choice of the parameterization of g_σ is fundamental for the denoising performance. as already noticed in salimans & ho (2021), we experimentally found that directly modeling g_σ as a neural network (e.g. a standard network used for classification) leads to poor denoising performance. in order to keep the strength of state-of-the-art unconstrained denoisers, we rather use g_σ(x) = (1/2)||x − n_σ(x)||², (5) which leads to d_σ(x) = x − ∇g_σ(x) = n_σ(x) + j_{n_σ}(x)ᵀ (x − n_σ(x)), (6) where n_σ : rⁿ → rⁿ is parameterized by a neural network and j_{n_σ}(x) is the jacobian of n_σ at point x. as discussed in appendix a.2, the formulation (5) for g_σ has been proposed in (romano et al., 2017, section 5.2) and (bigdeli & zwicker, 2017) for a distinct but related purpose, and not exploited for convergence analysis. thanks to our definition (6) for d_σ, we can parameterize n_σ with any differentiable neural network architecture rⁿ → rⁿ that has proven efficient for image denoising. although the representation power of the denoiser is limited by the particular form (6), we show (see section 5.1) that such a parameterization still yields state-of-the-art denoising results. we train the denoiser d_σ for gaussian noise by minimizing the mse loss function l(d_σ) = e_{x∼p, ξ_σ∼n(0,σ²i)}[||d_σ(x + ξ_σ) − x||²], or l(g_σ) = e_{x∼p, ξ_σ∼n(0,σ²i)}[||∇g_σ(x + ξ_σ) − ξ_σ||²] when written in terms of g_σ using equation (4). remark 1. by definition, the optimal solution g∗_σ ∈ arg min l is related to the mmse denoiser d∗_σ, that is, the best non-linear predictor of x given x + ξ_σ. therefore, it satisfies tweedie's formula d∗_σ = id + σ²∇ log p_σ (efron, 2011), i.e. ∇g∗_σ = −σ²∇ log p_σ, and hence g∗_σ = −σ² log p_σ + c for some c ∈ r. approximating the mmse denoiser with a denoiser parameterized as (4) is thus related to approximating the logarithm of the smoothed image prior log p_σ with −(1/σ²)g_σ. this relation was used for image generation with "denoising score matching" by saremi & hyvarinen (2019); bigdeli et al. (2020).
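the identity in equation (6) can be checked numerically for a toy linear "network" n_σ(x) = ax (a hypothetical stand-in for a real denoising network): both forms of d_σ agree, and the analytic gradient of g_σ matches finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = 0.5 * rng.normal(size=(n, n))  # toy linear "denoiser" N_sigma(x) = A x
x = rng.normal(size=n)

def g(x):
    """g_sigma(x) = 0.5 ||x - N(x)||^2 for the toy linear N."""
    r = x - A @ x
    return 0.5 * r @ r

def grad_g(x):
    """Analytic gradient: (I - A)^T (x - A x)."""
    return (np.eye(n) - A).T @ (x - A @ x)

# Gradient-step denoiser, two equivalent forms of equation (6):
d1 = x - grad_g(x)              # D = Id - grad g
d2 = A @ x + A.T @ (x - A @ x)  # N(x) + J_N(x)^T (x - N(x)), with J_N = A

# finite-difference check of grad_g
eps = 1e-6
fd = np.array([(g(x + eps * np.eye(n)[i]) - g(x - eps * np.eye(n)[i])) / (2 * eps)
               for i in range(n)])
```

in the paper's setting, n_σ is a deep network and ∇g_σ is obtained by automatic differentiation rather than the closed-form jacobian used here.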
a plug-and-play method for explicit minimization the standard pnp-hqs operator is t_pnp-hqs = d_σ ∘ prox_{τf}, i.e. (id − ∇g_σ) ∘ prox_{τf} when using the gs denoiser as d_σ. for the convergence analysis, we wish to fit the proximal gradient descent (pgd) algorithm. we thus propose to switch the proximal and gradient steps and to relax the denoising step with a parameter λ ≥ 0. our pnp algorithm with gs denoiser (gs-pnp) then writes x_{k+1} = t^{τ,λ}_gs-pnp(x_k) with t^{τ,λ}_gs-pnp = prox_{τf} ∘ (τλ d_σ + (1 − τλ) id) = prox_{τf} ∘ (id − τλ∇g_σ). under suitable conditions on f and g_σ (see lemma 1 in appendix c), fixed points of the pgd operator t^{τ,λ}_gs-pnp correspond to critical points of a classical objective function in ir problems, F(x) = f(x) + λg_σ(x). therefore, using the gs denoiser from equation (4) is equivalent to including an explicit regularization and thus leads to a tractable global optimization problem solved by the pnp algorithm. our complete pnp scheme is presented in algorithm 1. it includes a backtracking procedure on the stepsize τ that will be detailed in section 4.2. also, after convergence, we found it useful to apply an extra gradient step id − τλ∇g_σ in order to discard the residual noise brought by the last proximal step prox_{τf}. convergence analysis
page_no: 3 | bbox: [108.299, 582.2096768, 259.2274027, 594.1648768]
file: NX1He-aFO_F.pdf | year: 2021 | label: 1
learning value functions in deep policy gradients using residual variance yannis flet-berliac∗ inria, scool team univ. lille, cristal, cnrs yannis.flet-berliac@inria.fr odalric-ambrym maillard inria, scool team reda ouhamma∗ inria, scool team univ. lille, cristal, cnrs reda.ouhamma@inria.fr philippe preux inria, scool team univ. lille, cristal, cnrs abstract policy gradient algorithms have proven to be successful in diverse decision making and control tasks. however, these methods suffer from high sample complexity and instability issues. in this paper, we address these challenges by providing a different approach for training the critic in the actor-critic framework. our work builds on recent studies indicating that traditional actor-critic algorithms do not succeed in fitting the true value function, calling for the need to identify a better objective for the critic. in our method, the critic uses a new state-value (resp. state-action-value) function approximation that learns the value of the states (resp. state-action pairs) relative to their mean value rather than the absolute value as in conventional actor-critic. we prove the theoretical consistency of the new gradient estimator and observe dramatic empirical improvement across a variety of continuous control tasks and algorithms. furthermore, we validate our method in tasks with sparse rewards, where we provide experimental evidence and theoretical insights. introduction model-free deep reinforcement learning (rl) has been successfully used in a wide range of problem domains, ranging from teaching computers to control robots to playing sophisticated strategy games (silver et al., 2014; schulman et al., 2016; lillicrap et al., 2016; mnih et al., 2016). state-of-the-art policy gradient algorithms currently combine ingenious learning schemes with neural networks as function approximators in the so-called actor-critic framework (sutton et al., 2000; schulman et al., 2017; haarnoja et al., 2018).
while such methods demonstrate great performance in continuous control tasks, several discrepancies persist between what motivates the conceptual framework of these algorithms and what is implemented in practice to obtain maximum gains. for instance, research aimed at improving the learning of value functions often restricts the class of function approximators through different assumptions, and then proposes a critic formulation that allows for a more stable policy gradient. however, new studies (tucker et al., 2018; ilyas et al., 2020) indicate that state-of-the-art policy gradient methods (schulman et al., 2015; 2017) fail to fit the true value function and that recently proposed state-action-dependent baselines (gu et al., 2016; liu et al., 2018; wu et al., 2018) do not reduce gradient variance more than state-dependent ones. these findings leave the reader skeptical about actor-critic algorithms, suggesting that recent research tends to improve performance by introducing a bias rather than stabilizing the learning. consequently, attempting to find a better baseline is questionable, as critics would typically fail to fit it (ilyas et al., 2020). in tucker et al. (2018), the authors argue that "much larger gains could be achieved by instead improving the accuracy of the value function". following this line of thought, we are interested in ways to better approximate the value function. one approach addressing this issue is to put more focus on relative state-action values, an idea introduced in the literature on advantage reinforcement learning (harmon & baird iii) followed by works on dueling (wang et al., 2016) neural networks. more recent work (lin & zhou, 2020) also suggests that considering the relative action values, or more precisely the ranking of actions in a state, leads to better policies. the main argument behind this intuition is that it suffices to identify the optimal actions to solve a task. (∗ equal contribution.)
we extend this principle of relative action value with respect to the mean value to cover both state and state-action-value functions with a new objective for the critic: minimizing the variance of residual errors. in essence, this modified loss function puts more focus on the values of states (resp. state-actions) relative to their mean value rather than their absolute values, with the intuition that solving a task corresponds to identifying the optimal action(s) rather than estimating the exact value of each state. in summary, this paper: • introduces actor with variance estimated critic (avec), an actor-critic method providing a new training objective for the critic based on the residual variance. • provides evidence for the improvement of the value function approximation as well as theoretical consistency of the modified gradient estimator. • demonstrates experimentally that avec, when coupled with state-of-the-art policy gradient algorithms, yields a significant performance boost on a set of challenging tasks, including environments with sparse rewards. • provides empirical evidence supporting a better fit of the true value function and a substantial stabilization of the gradient. related work
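the residual-variance idea just described can be sketched in a few lines of numpy (an illustration of the objective, not the authors' implementation): the avec-style critic loss is invariant to a constant offset in the value predictions, whereas the usual mse is not, so the critic is only pushed to get relative values right.

```python
import numpy as np

def mse_loss(v_pred, v_target):
    """Conventional critic objective: mean squared error."""
    return np.mean((v_target - v_pred) ** 2)

def avec_loss(v_pred, v_target):
    """Variance of the residual errors: invariant to a constant shift of
    v_pred, so the critic focuses on values relative to their mean."""
    r = v_target - v_pred
    return np.mean((r - r.mean()) ** 2)

rng = np.random.default_rng(0)
v_target = rng.normal(size=100)
v_pred = v_target + 3.0  # correct relative values, constant bias of 3

# MSE penalizes the constant offset; the residual variance does not.
```

here `v_pred` has perfect relative structure, so its residual variance is zero while its mse equals the squared bias.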
page_no: 1 | bbox: [108.299, 442.1786768, 211.1957635, 454.1338768]
file: apv504XsysP.pdf | year: 2022 | label: 2
ab-initio potential energy surfaces by pairing gnns with neural wave functions nicholas gao & stephan günnemann department of informatics & munich data science institute technical university of munich, germany {gaoni,guennemann}@in.tum.de abstract solving the schrödinger equation is key to many quantum mechanical properties. however, an analytical solution is only tractable for single-electron systems. recently, neural networks succeeded at modeling wave functions of many-electron systems. together with the variational monte-carlo (vmc) framework, this led to solutions on par with the best known classical methods. still, these neural methods require tremendous amounts of computational resources as one has to train a separate model for each molecular geometry. in this work, we combine a graph neural network (gnn) with a neural wave function to simultaneously solve the schrödinger equation for multiple geometries via vmc. this enables us to model continuous subsets of the potential energy surface with a single training pass. compared to existing state-of-the-art networks, our potential energy surface network (pesnet) speeds up training for multiple geometries by up to 40 times while matching or surpassing their accuracy. this may open the path to accurate and orders of magnitude cheaper quantum mechanical calculations. introduction in recent years, machine learning gained importance in computational quantum physics and chemistry to accelerate material discovery by approximating quantum mechanical (qm) calculations (huang & von lilienfeld, 2021). in particular, a lot of work has gone into building surrogate models to reproduce qm properties, e.g., energies. these models learn from datasets created using classical techniques such as density functional theory (dft) (ramakrishnan et al., 2014; klicpera et al., 2019) or coupled clusters (ccsd) (chmiela et al., 2018).
while this approach has shown great success in recovering the baseline calculations, it suffers from several disadvantages. firstly, due to the tremendous success of graph neural networks (gnns) in this area, the regression target quality became the limiting factor for accuracy (klicpera et al., 2019; qiao et al., 2021; batzner et al., 2021), i.e., the network's prediction is closer to the data label than the data label is to the actual qm property. secondly, these surrogate models are subject to the usual difficulties of neural networks such as overconfidence outside the training domain (pappu & paige, 2020; guo et al., 2017). [figure 1: schematic of pesnet. for each molecular structure (top row), the metagnn takes the nuclei graph and parametrizes the wfmodel via ω and ωm; given these, the wfmodel evaluates the electronic wave function ψ(⃗r).] in orthogonal research, neural networks have been used as wave function ansätze to solve the stationary schrödinger equation (kessler et al., 2021; han et al., 2019). these methods use the variational monte carlo (vmc) (mcmillan, 1965) framework to iteratively optimize a neural wave function to obtain the ground-state electronic wave function of a given system. chemists refer to such methods as ab-initio, whereas the machine learning community may refer to this as a form of self-generative learning as no dataset is required. the data (electron positions) are sampled from the wave function itself, and the loss is derived from the schrödinger equation (ceperley et al., 1977). this approach has shown great success as multiple authors report results outperforming the traditional 'gold-standard' ccsd on various systems (pfau et al., 2020; hermann et al., 2020). however, these techniques require expensive training for each geometry, resulting in high computational requirements and, thus, limiting their application to small sets of configurations.
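the vmc loop described above (sample from |ψ|², then average a local energy derived from the schrödinger equation) can be sketched for the simplest possible system, the 1d harmonic oscillator with a gaussian trial wave function. this is a pedagogical toy, far from a neural many-electron ansatz, but the sampling-and-averaging structure is the same.

```python
import numpy as np

def local_energy(x, alpha):
    """Local energy for the 1D harmonic oscillator (hbar = m = omega = 1)
    with Gaussian trial wave function psi(x) = exp(-alpha x^2):
    E_loc = -0.5 psi''/psi + 0.5 x^2 = alpha + (0.5 - 2 alpha^2) x^2."""
    return alpha + (0.5 - 2.0 * alpha**2) * x**2

def vmc_energy(alpha, n_steps=20000, step=1.0, seed=0):
    """Metropolis sampling from |psi|^2, then average the local energy."""
    rng = np.random.default_rng(seed)
    x = 0.0
    energies = []
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2
        if rng.uniform() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)
```

at the exact variational parameter alpha = 0.5 the local energy is constant and the estimate equals the ground-state energy 0.5 with zero variance; any other alpha gives a higher energy, which is the variational principle the training loop exploits.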
in this work, we accelerate vmc with neural wave functions by proposing an architecture that solves the schrödinger equation for multiple systems simultaneously. the core idea is to predict a set of parameters such that a given wave function, e.g., ferminet (pfau et al., 2020), solves the schrödinger equation for a specific geometry. previously, these parameters were obtained by optimizing a separate wave function for each geometry. we improve this procedure by generating the parameters with a gnn, as illustrated in figure 1. this enables us to capture continuous subsets of the potential energy surface in one training pass, removing the need for costly retraining. additionally, we take inspiration from supervised surrogate networks and enforce the invariances of the energy to physical symmetries such as translation, rotation, and reflection (schütt et al., 2018). while these symmetries hold for observable metrics such as energies, the wave function itself may not have these symmetries. we solve this issue by defining a coordinate system that is equivariant to the symmetries of the energy. in our experiments, our potential energy surface network (pesnet) consistently matches or surpasses the results of the previous best neural wave functions while training in less than 1/40 of the time for high-resolution potential energy surface scans. related work molecular property prediction has seen a surge in publications in recent years with the goal of predicting qm properties such as the energy of a system. classically, features were constructed by hand and fed into a machine learning model to predict target properties (christensen et al., 2020; behler, 2011; bartók et al., 2013). lately, gnns have proven to be more accurate and took over the field (yang et al., 2019; klicpera et al., 2019; schütt et al., 2018). as gnns approach the accuracy limit, recent work focuses on improving generalization by integrating calculations from computational chemistry.
for instance, qdf (tsubaki & mizoguchi, 2020) and eann (zhang et al., 2019) approximate the electron density while orbnet (qiao et al., 2020) and unite (qiao et al., 2021) include features taken from qm calculations. another promising direction is ∆-ml models, which only predict the delta between a high-accuracy qm calculation and a faster low-accuracy one (wengert et al., 2021). despite their success, surrogate models lack reliability. even if uncertainty estimates are available (lamb & paige, 2020; hirschfeld et al., 2020), generalization outside of the training regime is unpredictable (guo et al., 2017). while such supervised models are architecturally related, they pursue a fundamentally different objective than pesnet. where surrogate models approximate qm calculations from data, this work focuses on performing the exact qm calculations from first principles. neural wave function ansätze in combination with the vmc framework have recently been proposed as an alternative (carleo & troyer, 2017) to classical self-consistent field (scf) methods such as hartree-fock, dft, or ccsd to solve the schrödinger equation (szabo & ostlund, 2012). however, early works were limited to small systems and low accuracy (kessler et al., 2021; han et al., 2019; choo et al., 2020). recently, ferminet (pfau et al., 2020) and paulinet (hermann et al., 2020) presented more scalable approaches and accuracy on par with the best traditional qm computations. to further improve accuracy, wilson et al. (2021) coupled ferminet with diffusion monte-carlo (dmc). but all these methods need to be trained for each configuration individually. to address this issue, weight-sharing has been proposed to reduce the time per training, but this was initially limited to non-fermionic systems (yang et al., 2020). in a concurrent work, scherbela et al. (2021) extend this idea to electronic wave functions.
however, their deeperwin model still requires separate models for each geometry, does not account for symmetries and achieves lower accuracy, as we show in section 4. method to build a model that solves the schrödinger equation for many geometries simultaneously and accounts for the symmetries of the energy, we use three key ingredients. [figure 2: pesnet's architecture is split into two main components, the metagnn and the wfmodel. circles indicate parameter-free and rectangles parametrized functions; [ ◦ ] denotes vector concatenation; a↑ and a↓ denote the index sets of the spin-up and spin-down electrons, respectively. to avoid clutter, we left out residual connections.] firstly, to solve the schrödinger equation, we leverage the vmc framework, i.e., we iteratively update our wave function model (wfmodel) until it converges to the ground-state electronic wave function. the wfmodel ψ_θ(⃗r) : r^(n×3) → r is a function parametrized by θ that maps electron configurations to amplitudes. it must obey the fermi-dirac statistics, i.e., the sign of the output must flip under the exchange of two electrons of the same spin. as we cover in section 3.4, the wfmodel is essential for sampling electron configurations and computing energies. secondly, we extend this to multiple geometries by introducing a gnn that reparametrizes the wfmodel. in reference to meta-learning, we call this the metagnn. it takes the nuclei coordinates ⃗rm and charges zm and outputs subsets ω, ωm of the wfmodel's parameters θ. thanks to message passing, the metagnn can capture the full 3d geometry of the nuclei graph. lastly, as we prove in appendix a, to predict energies invariant to rotations and reflections the wave function needs to be equivariant. we accomplish this by constructing an equivariant coordinate system e = [⃗e1, ⃗e2, ⃗e3] based on principal component analysis (pca). together, these components form pesnet, whose architecture is shown in figure 2.
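the pca-based equivariant frame can be sketched as follows (a simplified construction under the assumption of non-degenerate principal axes; the paper's exact sign conventions may differ): when the molecule is rotated or reflected, the frame co-rotates, so the nuclei's local coordinates in the frame are unchanged.

```python
import numpy as np

def pca_frame(coords):
    """Orthonormal frame from the principal axes of a point cloud.
    Signs are fixed by projecting the first point onto each axis, which
    makes the frame (generically) equivariant to orthogonal transforms."""
    centered = coords - coords.mean(axis=0)
    # eigenvectors of the covariance, sorted by decreasing variance
    vals, vecs = np.linalg.eigh(centered.T @ centered)
    vecs = vecs[:, ::-1]
    # resolve the per-axis sign ambiguity of the eigenvectors
    signs = np.sign(centered[0] @ vecs)
    return vecs * signs

rng = np.random.default_rng(0)
nuclei = rng.normal(size=(5, 3))
E = pca_frame(nuclei)

# random orthogonal transform Q (rotation or reflection) via QR
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
E_rot = pca_frame(nuclei @ Q)
```

because the frame satisfies E_rot = Qᵀ E, expressing centered coordinates in it yields an invariant input representation, which is what lets an invariant energy coexist with a non-invariant wave function.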
since sampling and energy computations only need the wfmodel, a single forward pass of the metagnn is sufficient for each geometry during evaluation. furthermore, its end-to-end differentiability facilitates optimization, see section 3.4, and we may benefit from better generalization thanks to our equivariant wave function (elesedy & zaidi, 2021; kondor & trivedi, 2018). notation. we use bold lower-case letters h for vectors, bold upper-case letters w for matrices, and arrows to indicate vectors in 3d; ⃗r_i denotes electron coordinates, and ⃗rm, zm nuclei coordinates and charge, respectively. [ ◦ ] and [ ◦ ]ⁿ_{i=1} denote vector concatenations. wave function model
page_no: 2 | bbox: [108.249, 172.1100784, 240.1607551, 182.0726784]
file: ZKy2X3dgPA.pdf | year: 2022 | label: 1
it takes two to tango: mixup for deep metric learning shashanka venkataramanan1∗ bill psomas3∗ konstantinos karantzalos3 ewa kijak1 yannis avrithis2 laurent amsaleg1 1inria, univ rennes, cnrs, irisa 2athena rc 3national technical university of athens abstract metric learning involves learning a discriminative representation such that embeddings of similar classes are encouraged to be close, while embeddings of dissimilar classes are pushed far apart. state-of-the-art methods focus mostly on sophisticated loss functions or mining strategies. on the one hand, metric learning losses consider two or more examples at a time. on the other hand, modern data augmentation methods for classification consider two or more examples at a time. the combination of the two ideas is under-studied. in this work, we aim to bridge this gap and improve representations using mixup, which is a powerful data augmentation approach interpolating two or more examples and corresponding target labels at a time. this task is challenging because unlike classification, the loss functions used in metric learning are not additive over examples, so the idea of interpolating target labels is not straightforward. to the best of our knowledge, we are the first to investigate mixing both examples and target labels for deep metric learning. we develop a generalized formulation that encompasses existing metric learning loss functions and modify it to accommodate for mixup, introducing metric mix, or metrix. we also introduce a new metric—utilization—to demonstrate that by mixing examples during training, we are exploring areas of the embedding space beyond the training classes, thereby improving representations. to validate the effect of improved representations, we show that mixing inputs, intermediate representations or embeddings along with target labels significantly outperforms state-of-the-art metric learning methods on four benchmark deep metric learning datasets. 
introduction classification is one of the most studied tasks in machine learning and deep learning. it is a common source of pre-trained models for transfer learning to other tasks (donahue et al., 2014; kolesnikov et al., 2020). it has been studied under different supervision settings (caron et al., 2018; sohn et al., 2020), knowledge transfer (hinton et al., 2015) and data augmentation (cubuk et al., 2018), including the recent research on mixup (zhang et al., 2018; verma et al., 2019), where embeddings and labels are interpolated. deep metric learning is about learning from pairwise interactions such that inference relies on instance embeddings, e.g. for nearest neighbor classification (oh song et al., 2016), instance-level retrieval (gordo et al., 2016), few-shot learning (vinyals et al., 2016), face recognition (schroff et al., 2015) and semantic textual similarity (reimers & gurevych, 2019). [figure 1: metrix (= metric mix) allows an anchor to interact with positive (same class), negative (different class) and interpolated examples, which also have interpolated labels.] (∗equal contribution.) following (xing et al., 2003), it is most often fully supervised by one class label per example, like classification. the two most studied problems are loss functions (musgrave et al., 2020) and hard example mining (wu et al., 2017; robinson et al., 2021). tuple-based losses with example weighting (wang et al., 2019) can play the role of both. unlike classification, classes (and distributions) at training and inference are different in metric learning. thus, one might expect interpolation-based data augmentation like mixup to be even more important in metric learning than in classification. yet, recent attempts are mostly limited to special cases of embedding interpolation and have trouble with label interpolation (ko & gu, 2020).
this raises the question: what is a proper way to define and interpolate labels for metric learning? in this work, we observe that metric learning is not different from classification, where examples are replaced by pairs of examples and class labels by “positive” or “negative”, according to whether class labels of individual examples are the same or not. the positive or negative label of an example, or a pair, is determined in relation to a given example which is called an anchor. then, as shown in figure 1, a straightforward way is to use a binary (two class) label per pair and interpolate it linearly as in standard mixup. we call our method metric mix, or metrix for short. to show that mixing examples improves representation learning, we quantitatively measure the properties of the test distributions using alignment and uniformity (wang & isola, 2020). alignment measures the clustering quality and uniformity measures its distribution over the embedding space; a well clustered and uniformly spread distribution indicates higher representation quality. we also introduce a new metric, utilization, to measure the extent to which a test example, seen as a query, lies near any of the training examples, clean or mixed. by quantitatively measuring these three metrics, we show that interpolation-based data augmentation like mixup is very important in metric learning, given the difference between distributions at training and inference. in summary, we make the following contributions: 1. we define a generic way of representing and interpolating labels, which allows straightforward extension of any kind of mixup to deep metric learning for a large class of loss functions. we develop our method on a generic formulation that encapsulates these functions (section 3). 2. we define the “positivity” of a mixed example and we study precisely how it increases as a function of the interpolation factor, both in theory and empirically (subsection 3.6). 3. 
we systematically evaluate mixup for deep metric learning under different settings, including mixup at different representation levels (input/manifold), mixup of different pairs of examples (anchors/positives/negatives), loss functions and hard example mining (subsection 4.2). 4. we introduce a new evaluation metric, utilization, validating that a representation more appropriate for test classes is implicitly learned during exploration of the embedding space in the presence of mixup (subsection 4.3). 5. we improve the state of the art on four common metric learning benchmarks (subsection 4.2). related work metric learning metric learning aims to learn a metric such that positive pairs of examples are nearby and negative ones are far away. in deep metric learning, we learn an explicit non-linear mapping from raw input to a low-dimensional embedding space (oh song et al., 2016), where the euclidean distance has the desired properties. although learning can be unsupervised (hadsell et al., 2006), deep metric learning has mostly followed the supervised approach, where positive and negative pairs are defined as having the same or different class label, respectively (xing et al., 2003). loss functions can be distinguished into pair-based and proxy-based (musgrave et al., 2020). pair-based losses use pairs of examples (wu et al., 2017; hadsell et al., 2006), which can be defined over triplets (wang et al., 2014; schroff et al., 2015; weinberger & saul, 2009; hermans et al., 2017), quadruples (chen et al., 2017) or tuples (sohn, 2016; oh song et al., 2016; wang et al., 2019). proxy-based losses use one or more proxies per class, which are learnable parameters in the embedding space (movshovitz-attias et al., 2017; qian et al., 2019; kim et al., 2020c; teh et al., 2020; zhu et al., 2020b). pair-based losses capture data-to-data relations, but they are sensitive to noisy labels and outliers.
they often involve terms where given constraints are satisfied, which produce zero gradients and do not contribute to training. this necessitates mining of hard examples that violate the constraints, like semi-hard (schroff et al., 2015) and distance-weighted (wu et al., 2017) mining. by contrast, proxy-based losses use data-to-proxy relations, assuming proxies can capture the global structure of the embedding space. they involve fewer computations, which are more likely to produce nonzero gradients, hence they have less or no dependence on mining and converge faster.

mixup. input mixup (zhang et al., 2018) linearly interpolates between two or more examples in the input space for data augmentation. numerous variants take advantage of the structure of the input space to interpolate non-linearly, e.g. for images (yun et al., 2019; kim et al., 2020a; 2021; hendrycks et al., 2020; devries & taylor, 2017; qin et al., 2020; uddin et al., 2021). manifold mixup (verma et al., 2019) interpolates intermediate representations instead, where the structure is learned. this can be applied to or assisted by decoding back to the input space (berthelot et al., 2018; liu et al., 2018; beckham et al., 2019; zhu et al., 2020a; venkataramanan et al., 2021). in both cases, corresponding labels are linearly interpolated too. most studies are limited to the cross-entropy loss for classification; pairwise loss functions have been under-studied, as discussed below.

interpolation for pairwise loss functions. as discussed in subsection 3.3, interpolating target labels is not straightforward in pairwise loss functions. in deep metric learning, embedding expansion (ko & gu, 2020), hdml (zheng et al., 2019) and symmetrical synthesis (gu & ko, 2020) interpolate pairs of embeddings in a deterministic way within the same class, applying to pair-based losses, while proxy synthesis (gu et al., 2021) interpolates between classes, applying to proxy-based losses.
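the label-interpolation idea above (binary positive/negative pair labels mixed linearly, as in metrix) can be sketched as follows; `mixup_pair` and the beta-sampling convention are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mixup_pair(x_pos, x_neg, lam):
    """Input mixup of a positive and a negative example (relative to some
    anchor), with linear interpolation of the binary pair label: relative
    to the anchor, x_pos carries label 1 ("positive") and x_neg carries
    label 0 ("negative"), so the mixed example carries label lam."""
    x_mixed = lam * x_pos + (1.0 - lam) * x_neg   # mixed input
    y_mixed = lam * 1.0 + (1.0 - lam) * 0.0       # mixed two-class pair label
    return x_mixed, y_mixed

# toy usage: 4-dimensional inputs; in practice lam is typically drawn from a
# beta distribution, as in standard mixup
rng = np.random.default_rng(0)
x_pos, x_neg = rng.normal(size=(2, 4))
x_mixed, y_mixed = mixup_pair(x_pos, x_neg, lam=0.7)
```

because the mixed pair label equals lam, the loss weights a mixed example as "lam-positive" per anchor, which is exactly the relative positive/negative weighting that clean-label interpolation methods cannot express.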
none performs label interpolation, which means that (gu et al., 2021) risks synthesizing false negatives when the interpolation factor λ is close to 0 or 1. in contrastive representation learning, mochi (kalantidis et al., 2020) interpolates the anchor with negative embeddings but not labels and chooses λ ∈ [0, 0.5] to avoid false negatives. this resembles the thresholding of λ at 0.5 in opttransmix (zhu et al., 2020a). finally, i-mix (lee et al., 2021) and mixco (kim et al., 2020b) interpolate pairs of anchor embeddings as well as their (virtual) class labels linearly. there is only one positive, while all negatives are clean, so they cannot take advantage of interpolation for relative weighting of positives/negatives per anchor (wang et al., 2019).

by contrast, metrix is developed for deep metric learning and applies to a large class of both pair-based and proxy-based losses. it can interpolate inputs, intermediate features or embeddings of anchors, (multiple) positives or negatives and the corresponding two-class (positive/negative) labels per anchor, such that the relative weighting of positives/negatives depends on the interpolation.

mixup for metric learning
recon: reducing conflicting gradients from the root for multi-task learning

guangyuan shi, qimai li, wenlong zhang, jiaxin chen, xiao-ming wu
department of computing, the hong kong polytechnic university, hong kong s.a.r., china
{guang-yuan.shi, qee-mai.li, wenlong.zhang}@connect.polyu.hk, jiax.chen@connect.polyu.hk, xiao-ming.wu@polyu.edu.hk

abstract

a fundamental challenge for multi-task learning is that different tasks may conflict with each other when they are solved jointly, and a cause of this phenomenon is conflicting gradients during optimization. recent works attempt to mitigate the influence of conflicting gradients by directly altering the gradients based on some criteria. however, our empirical study shows that "gradient surgery" cannot effectively reduce the occurrence of conflicting gradients. in this paper, we take a different approach and reduce conflicting gradients from the root. in essence, we investigate the task gradients w.r.t. each shared network layer, select the layers with high conflict scores, and turn them into task-specific layers. our experiments show that such a simple approach can greatly reduce the occurrence of conflicting gradients in the remaining shared layers and achieve better performance, with only a slight increase in model parameters in many cases. our approach can be easily applied to improve various state-of-the-art methods, including gradient manipulation methods and branched architecture search methods. given a network architecture (e.g., resnet18), it only needs to search for the conflict layers once, and the network can be modified to be used with different methods on the same or even different datasets to gain performance improvement. the source code is available at https://github.com/moukamisama/recon.
introduction

multi-task learning (mtl) is a learning paradigm in which multiple different but correlated tasks are jointly trained with a shared model (caruana, 1997), in the hope of achieving better performance with an overall smaller model size than learning each task independently. by discovering shared structures across tasks and leveraging domain-specific training signals of related tasks, mtl can achieve efficiency and effectiveness. indeed, mtl has been successfully applied in many domains including natural language processing (hashimoto et al., 2017), reinforcement learning (parisotto et al., 2016; d'eramo et al., 2020) and computer vision (vandenhende et al., 2021).

a major challenge for multi-task learning is negative transfer (ruder, 2017), which refers to the performance drop on a task caused by the learning of other tasks, resulting in worse overall performance than learning them separately. this is caused by task conflicts, i.e., tasks compete with each other and unrelated information of individual tasks may impede the learning of common structures. from the optimization point of view, a cause of negative transfer is conflicting gradients (yu et al., 2020), which refers to two task gradients pointing away from each other, such that the update for one task has a negative effect on the other. conflicting gradients make it difficult to optimize the multi-task objective, since task gradients with larger magnitude may dominate the update vector, making the optimizer prioritize some tasks over others and struggle to converge to a desirable solution.

prior works address task/gradient conflicts mainly by balancing the tasks via task reweighting or gradient manipulation. task reweighting methods adaptively re-weight the loss functions by homoscedastic uncertainty (kendall et al., 2018), by balancing the pace at which tasks are learned (chen et al., 2018; liu et al., 2019), or by learning a loss weight parameter (liu et al., 2021b).
gradient manipulation methods reduce the influence of conflicting gradients by directly altering the gradients based on different criteria (sener & koltun, 2018; yu et al., 2020; chen et al., 2020; liu et al., 2021a) or by rotating the shared features (javaloy & valera, 2022). while these methods have demonstrated effectiveness in different scenarios, in our empirical study we find that they cannot reduce the occurrence of conflicting gradients (see sec. 3.3 for more discussion).

we propose a different approach to reduce conflicting gradients for mtl. specifically, we investigate layer-wise conflicting gradients, i.e., the task gradients w.r.t. each shared network layer. we first train the network with a regular mtl algorithm (e.g., joint training) for a number of iterations, compute the conflict scores for all shared layers, and select those with the highest conflict scores (indicating severe conflicts). we then make the selected shared layers task-specific and train the modified network from scratch. as demonstrated by comprehensive experiments and analysis, our simple approach recon has the following key advantages: (1) recon can greatly reduce conflicting gradients with only a slight increase in model parameters (less than 1% in some cases) and lead to significantly better performance. (2) recon can be easily applied to improve various gradient manipulation methods and branched architecture search methods. given a network architecture, it only needs to search for the conflict layers once, and the network can be modified to be used with different methods and even on different datasets to gain performance improvement. (3) recon can achieve better performance than branched architecture search methods with a much smaller model.

related works

in this section, we briefly review related works in multi-task learning in four categories: tasks clustering, architecture design, architecture search, and task balancing.
tasks clustering methods mainly focus on identifying which tasks should be learned together (thrun & o'sullivan, 1996; zamir et al., 2018; standley et al., 2020; shen et al., 2021; fifty et al., 2021). architecture design methods include hard parameter sharing methods (kokkinos, 2017; long et al., 2017; bragman et al., 2019), which learn a shared feature extractor and task-specific decoders, and soft parameter sharing methods (misra et al., 2016; ruder et al., 2019; gao et al., 2019; 2020; liu et al., 2019), where some parameters of each task are assigned to cross-task talk via a sharing mechanism. compared with soft parameter sharing methods, our approach recon has much better scalability when dealing with a large number of tasks. instead of designing a fixed network structure, some methods (rosenbaum et al., 2018; meyerson & miikkulainen, 2018; yang et al., 2020) propose to dynamically self-organize the network for different tasks. among them, branched architecture search methods (guo et al., 2020; bruggemann et al., 2020) are most related to our work. they propose automated architecture search algorithms to build a tree-structured network by learning where to branch. in contrast, our method recon decides which layers should be shared across tasks by considering the severity of layer-wise conflicting gradients, resulting in a more compact architecture with lower time cost and better performance. another line of research is task balancing methods. to address task/gradient conflicts, some methods attempt to re-weight the multi-task loss function using homoscedastic uncertainty (kendall et al., 2018), task prioritization (guo et al., 2018), or similar learning pace (liu et al., 2019; 2021b). gradnorm (chen et al., 2018) learns task weights by dynamically tuning gradient magnitudes. mgda (sener & koltun, 2018) finds the weights by minimizing the norm of the weighted sum of task gradients.
to reduce the influence of conflicting gradients, pcgrad (yu et al., 2020) projects each gradient onto the normal plane of another gradient and uses the average of the projected gradients for the update. graddrop (chen et al., 2020) randomly drops some elements of gradients based on element-wise conflict. cagrad (liu et al., 2021a) ensures convergence to a minimum of the average loss across tasks by gradient manipulation. rotograd (javaloy & valera, 2022) re-weights task gradients and rotates the shared feature space. instead of manipulating gradients, our method recon leverages gradient information to modify the network structure and thereby mitigate task conflicts from the root.

pilot study: task conflicts in multi-task learning

multi-task learning: problem definition. multi-task learning (mtl) aims to learn a set of correlated tasks {T_i}_{i=1}^T simultaneously. for each task T_i, the empirical loss function is L_i(θ_sh, θ_i), where θ_sh are parameters shared among all tasks and θ_i are task-specific parameters. the goal is to find optimal parameters θ = {θ_sh, θ_1, θ_2, ..., θ_T} that achieve high performance across all tasks. formally, mtl minimizes the multi-task objective

θ* = arg min_θ Σ_i w_i L_i(θ_sh, θ_i),

where the w_i are pre-defined or dynamically computed weights for the different tasks. a popular choice is the average loss (i.e., equal weights). however, optimizing the multi-task objective is difficult, and a known cause is conflicting gradients.

(figure 1: the distributions of gradient conflicts (in terms of cos ϕ_ij) of the joint-training baseline and state-of-the-art gradient manipulation methods on the multi-fashion+mnist benchmark.)

conflicting gradients
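the quantities discussed here — the conflict criterion cos ϕ_ij < 0, pcgrad's projection, and a recon-style layer-wise conflict score — can be sketched as follows; the pairwise-count conflict score below is an assumption for illustration, not the exact score defined in the paper:

```python
import numpy as np

def cos_phi(g_i, g_j):
    # cosine similarity between two task gradients; negative means conflicting
    return float(g_i @ g_j) / (np.linalg.norm(g_i) * np.linalg.norm(g_j))

def pcgrad_project(g_i, g_j):
    """PCGrad-style surgery (yu et al., 2020): if g_i conflicts with g_j,
    remove from g_i its component along g_j; otherwise leave it unchanged."""
    dot = float(g_i @ g_j)
    if dot < 0.0:
        return g_i - (dot / float(g_j @ g_j)) * g_j
    return g_i

def conflict_score(layer_grads):
    """Fraction of task pairs whose gradients w.r.t. one shared layer
    conflict; `layer_grads` holds flattened per-task gradient vectors."""
    t = len(layer_grads)
    pairs = [(i, j) for i in range(t) for j in range(i + 1, t)]
    return sum(cos_phi(layer_grads[i], layer_grads[j]) < 0 for i, j in pairs) / len(pairs)

def select_conflict_layers(grads_per_layer, k):
    # recon: turn the k most conflicting shared layers into task-specific ones
    scores = {name: conflict_score(g) for name, g in grads_per_layer.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

note the contrast: pcgrad alters the gradients at every step, whereas the recon-style score is computed once to decide which layers stop being shared.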
learning hyperbolic representations of topological features

panagiotis kyriakis, university of southern california, los angeles, usa, pkyriaki@usc.edu
iordanis fostiropoulos, university of southern california, los angeles, usa, fostirop@usc.edu
paul bogdan, university of southern california, los angeles, usa, pbogdan@usc.edu

abstract

learning task-specific representations of persistence diagrams is an important problem in topological data analysis and machine learning. however, current methods are restricted in terms of their expressivity as they are focused on euclidean representations. persistence diagrams often contain features of infinite persistence (i.e., essential features), and euclidean spaces shrink their importance relative to non-essential features because they cannot assign infinite distance to finite points. to deal with this issue, we propose a method to learn representations of persistence diagrams on hyperbolic spaces, more specifically on the poincare ball. by representing features of infinite persistence infinitesimally close to the boundary of the ball, their distance to non-essential features approaches infinity, thereby preserving their relative importance. this is achieved without utilizing extremely high values for the learnable parameters, so the representation can be fed into downstream optimization methods and trained efficiently in an end-to-end fashion. we present experimental results on graph and image classification tasks and show that the performance of our method is on par with or exceeds that of other state-of-the-art methods.

introduction

persistent homology is a topological data analysis tool which tracks how topological features (e.g. connected components, cycles, cavities) appear and disappear as we analyze the data at different scales or in nested sequences of subspaces (1; 2). a nested sequence of subspaces is known as a filtration. as an informal example of a filtration, consider an image of variable brightness.
as the brightness is increased, certain features (edges, texture) may become less or more prevalent. the birth of a topological feature refers to the "time" (i.e., the brightness value) when it appears in the filtration and the death refers to the "time" when it disappears. the lifespan of the feature is called persistence. persistent homology summarizes these topological characteristics in a form of multiset called persistence diagram, which is a highly robust and versatile descriptor of the data. persistence diagrams enjoy the stability property, which ensures that the diagrams of two similar objects are similar (3). additionally, under some assumptions, one can approximately reconstruct the input space from a diagram (which is known as solving the inverse problem) (4). however, despite their strengths, the space of persistence diagrams lacks structure as basic operations, such as addition and scalar multiplication, are not well defined. the only imposed structure is induced by the bottleneck and wasserstein metrics, which are notoriously hard to compute, thereby preventing us from leveraging them for machine learning tasks. related work. to address these issues, several vectorization methods have been proposed. some of the earliest approaches are based on kernels, i.e., generalized products that turn persistence diagrams into elements of a hilbert space. kusano et al. (5) propose a persistence weighted gaussian kernel which allows them to explicitly control the effect of persistence. alternatively, carrière et al. (6) leverage the sliced wasserstein distance to define a kernel that mimics the distance between diagrams. the approaches by bubenik (7) based on persistent landscapes, by reininghaus et al. (8) based on scale space theory and by le et al. (9) based on the fisher information metric are along the same line of work. 
the major drawback in utilizing kernel methods is that they suffer from scalability issues as the training scales poorly with the number of samples. in another line of work, researchers have constructed finite-dimensional embeddings, i.e., transformations turning persistence diagrams into vectors in a euclidean space. adams et al. (10) map the diagrams to persistence images and discretize them to obtain the embedding vector. carrière et al. (11) develop a stable vectorization method by computing pairwise distances between points in the persistence diagram. an approach based on interpreting the points in the diagram as roots of a complex polynomial is presented by di fabio (12). adcock et al. (13) identify an algebra of polynomials on the diagram space that can be used as coordinates and the approach is extended by kališnik in (14) to tropical functions which guarantee stability. the common drawback of these embeddings is that the representation is pre-defined, i.e., there exist no learnable parameters, therefore, it is agnostic to the specific learning task. this is clearly sub-optimal as the eminent success of deep learning has demonstrated that it is preferable to learn the representation. the more recent approaches aim at learning the representation of the persistence diagram in an end-to-end fashion. hofer et al. (15) present the first input layer based on a parameterized family of gaussian-like functionals, with the mean and variance learned during training. they extend their method in (16) allowing for a broader class of parameterized function families to be considered. it is quite common to have topological features of infinite persistence (1), i.e., features that never die. such features are called essential and in practice are usually assigned a death time equal to the maximum filtration value. this may restrict their expressivity because it shrinks their importance relative to non-essential features. 
while we may be able to increase the scale sufficiently high and end up having only one trivial essential feature (i.e., the 0-th order persistent homology group that becomes a single connected component at a scale that is sufficiently large), the resulting persistence diagrams may not be the ones that best summarize the data in terms of performance on the underlying learning task. this is evident in the work by hofer et al. (15), where the authors showed that essential features offer discriminative power. the work by carrière et al. (17), which introduces a network input layer that encompasses several vectorization methods, emphasizes the importance of essential features and is the first one to introduce a deep learning method incorporating extended persistence as a way to deal with them. in this paper, we approach the issue of essential features from the geometric viewpoint. we are motivated by the recent success of hyperbolic geometry and the interest in extending machine learning models to hyperbolic spaces or general manifolds. we refer the reader to the review paper by bronstein et al. (18) for an overview of geometric deep learning. here, we review the most relevant and pivotal contributions in the field. nickel et al. (19; 20) propose poincaré and lorentz embeddings for learning hierarchical representations of symbolic data and show that the representational capacity and generalization ability outperform euclidean embeddings. sala et al. (21) propose low-dimensional hyperbolic embeddings of hierarchical data and show competitive performance on wordnet. ganea et al. (22) generalize neural networks to the hyperbolic space and show that hyperbolic sentence embeddings outperform their euclidean counterparts on a range of tasks. gulcehre et al. (23) introduce hyperbolic attention networks, which show improvements in terms of generalization on machine translation and graph learning while keeping a compact representation.
in the context of graph representation learning, hyperbolic graph neural networks (24) and hyperbolic graph convolutional neural networks (25) have been developed and shown to lead to improvements on various benchmarks. however, despite this success of geometric deep learning, little work has been done in applying these methods to topological features, such as persistence diagrams. the main contribution of this paper is to bridge the gap between topological data analysis and hyperbolic representation learning. we introduce a method to represent persistence diagrams on a hyperbolic space, more specifically on the poincare ball. we define a learnable parameterization of the poincare ball and leverage the vectorial structure of the tangent space to combine (in a manifoldpreserving manner) the representations of individual points of the persistence diagram. our method learns better task-specific representations than the state of the art because it does not shrink the relative importance of essential features. in fact, by allowing the representations of essential features to get infinitesimally close to the boundary of the poincare ball, their distance to the representations of non-essential features approaches infinity, therefore preserving their relative importance. to the best of our knowledge, this is the first approach for learning representations of persistence diagrams in non-euclidean spaces. background in this section, we provide a brief overview of persistent homology leading up to the definition of persistence diagrams. we refer the interested reader to the papers by edelsbrunner et al. (1; 2) for a detailed overview of persistent homology. an overview of homology can be found in the appendix. persistent homology. let k be a simplicial complex. a filtration of k is a nested sequence of subcomplexes that starts with the empty complex and ends with k, ∅ = k0 ⊆ k1 ⊆ . . . ⊆ kd = k. 
a typical way to construct a filtration is to consider sublevel sets of a real-valued function f : k → r. let a_1 < · · · < a_d be the sorted sequence of values of f(k). then we obtain a filtration by setting k_i = f^(−1)((−∞, a_i]) for 1 ≤ i ≤ d. we can apply simplicial homology to each of the subcomplexes of the filtration. when 0 ≤ i ≤ j ≤ d, the inclusion k_i ⊆ k_j induces a homomorphism f_n^(i,j) : h_n(k_i) → h_n(k_j) on the simplicial homology groups for each homology dimension n. we call the image of f_n^(i,j) an n-th persistent homology group; it consists of homology classes born before i that are still alive at j. a homology class α is born at k_i if it is not in the image of the map induced by the inclusion k_(i−1) ⊆ k_i. furthermore, if α is born at k_i, it dies entering k_j if the image of the map induced by k_(i−1) ⊆ k_(j−1) does not contain the image of α but the image of the map induced by k_(i−1) ⊆ k_j does. the persistence of the homology class α is j − i. since classes may be born at the same i and die at the same j, we can use inclusion-exclusion to determine the multiplicity of each pair (i, j),

μ_n^(i,j) = β_n^(i,j−1) − β_n^(i,j) − β_n^(i−1,j−1) + β_n^(i−1,j),

where the n-th persistent betti numbers β_n^(i,j) are the ranks of the images of the n-th persistent homology groups, i.e., β_n^(i,j) = rank(im(f_n^(i,j))), and capture the number of n-dimensional topological features that persist from i to j. by setting μ_n^(i,∞) = β_n^(i,d) − β_n^(i−1,d), we can account for features that still persist at the end of the filtration (j = d), which are known as essential features.

persistence diagrams. persistence diagrams are multisets supported by the upper-diagonal part of the real plane and capture the birth/death of topological features (i.e., homology classes) across the filtration.

definition 2.1 (persistence diagram). let ∆ = {x ∈ r_∆ : mult(x) = ∞} be the multiset of the diagonal r_∆ = {(x_1, x_2) ∈ r² : x_1 = x_2}, where mult(·) denotes the multiplicity function, and let r²_* = {(x_1, x_2) ∈ r × (r ∪ {∞}) : x_2 > x_1}.
also, let n be a homology dimension and consider the sublevel set filtration induced by a function f : k → r over the complex k. then a persistence diagram d_n(f) is a multiset of the form d_n(f) = {x : x ∈ r²_*} ∪ ∆, constructed by inserting each point (a_i, a_j) for i < j with multiplicity μ_n^(i,j) (or μ_n^(i,∞) if it is an essential feature). we denote the space of all persistence diagrams by d.

definition 2.2 (wasserstein distance and stability). let d_n(f), e_n(g) be two persistence diagrams generated by the filtrations induced by the functions f, g : k → r, respectively. we define the wasserstein distance

w_p^q(d_n(f), e_n(g)) = inf_η ( Σ_(x ∈ d_n(f)) ‖x − η(x)‖_q^p )^(1/p),

where p, q ∈ n and the infimum is taken over all bijections η : d_n(f) → e_n(g). the special case p = ∞ is known as the bottleneck distance. the persistence diagrams are stable with respect to the wasserstein distance if and only if w_p^q(d_n(f), e_n(g)) ≤ ‖f − g‖_∞. note that a bijection η between persistence diagrams is guaranteed to exist because their cardinalities are equal, considering that, as per def. 2.1, the points on the diagonal are added with infinite multiplicity. the strength of persistent homology stems from the above stability definition, which essentially states that the map taking a sublevel function to its persistence diagram is lipschitz continuous. this implies that if two objects are similar, then their persistence diagrams are close.

(figure 1: illustration of our method: initially, the points are transferred via the auxiliary transformation ρ and the parameterization φ to the poincare ball b, where learnable parameters θ are added. then, the logarithmic map is used for transforming the points to the tangent space t_(x_0)b. finally, the resulting vectors are added and transformed back to the manifold via the exponential map.)
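for intuition (a brute-force sketch, not the authors' implementation), the p = 1 wasserstein distance of def. 2.2 between two small, equal-cardinality diagrams can be computed by minimizing over bijections; the diagonal augmentation of def. 2.1 is omitted for brevity:

```python
from itertools import permutations

def wasserstein_p1(diag_a, diag_b, q=2):
    """W_1^q between two equal-size persistence diagrams (lists of
    (birth, death) points), brute-forcing the infimum over bijections.
    Real diagrams are first augmented with diagonal projections so that a
    bijection always exists; that step is omitted here."""
    assert len(diag_a) == len(diag_b)

    def norm_q(u, v):
        return (abs(u[0] - v[0]) ** q + abs(u[1] - v[1]) ** q) ** (1.0 / q)

    return min(
        sum(norm_q(a, b) for a, b in zip(diag_a, perm))
        for perm in permutations(diag_b)
    )

d = [(0.0, 1.0), (0.5, 2.0)]
e = [(0.0, 1.5), (0.5, 2.0)]
# the optimal bijection matches the identical points to each other
```

the brute force is exponential in the diagram size; practical implementations solve the underlying assignment problem instead.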
note that the persistence diagram is mapped to a single point on the poincare ball (i.e., Φ(d, θ) ∈ b).

persistent poincare representations

in this section, we introduce our method (fig. 1) for learning representations of persistence diagrams on the poincare ball. we refer the reader to the appendix for some fundamental concepts of differential geometry.

the poincare ball is an m-dimensional manifold (b, g_x^b), where b = {x ∈ r^m : ‖x‖ < 1} is the open unit ball. the space in which the ball is embedded is called the ambient space and is assumed to be equal to r^m. the poincare ball is conformal (i.e., angle-preserving) to the euclidean space, but it does not preserve distances. the metric tensor and distance function are as follows:

g_x^b = λ_x² g^e, with λ_x = 2 / (1 − ‖x‖²),

d_b(x, y) = arccosh( 1 + 2‖x − y‖² / ((1 − ‖x‖²)(1 − ‖y‖²)) ),

where g^e = i_m is the euclidean metric tensor. eq. 6 highlights the benefit of using the poincare ball for representing persistence diagrams: contrary to euclidean spaces, distances in the poincare ball can approach infinity for finite points. this space is ideal for representing essential features appearing in persistence diagrams without squashing their importance relative to non-essential features. informally, this is achieved by allowing the representations of the former to get infinitesimally close to the boundary, so that their distances to the latter approach infinity. fig. 2 provides an illustration.

we gradually construct our representation through a composition of three individual transformations. the first step is to transfer the points to the ambient space (i.e., r^m) of the poincare ball. let d be a persistence diagram. we introduce the following auxiliary transformation ρ : r²_* → r^m. this auxiliary transformation is essentially a high-dimensional embedding and may contain learnable parameters.
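a minimal numeric sketch of the poincare distance (assuming the standard arccosh closed form) illustrates how distances blow up near the boundary, which is what preserves the relative importance of essential features:

```python
import math

def poincare_distance(x, y):
    """Geodesic distance on the Poincare ball:
    d_B(x, y) = arccosh(1 + 2*||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))."""
    sq = lambda v: sum(c * c for c in v)
    diff = sq([a - b for a, b in zip(x, y)])
    return math.acosh(1.0 + 2.0 * diff / ((1.0 - sq(x)) * (1.0 - sq(y))))

origin = [0.0, 0.0]
near_boundary = [0.999, 0.0]   # e.g. the image of an essential feature
interior = [0.3, 0.0]          # e.g. a non-essential feature
# poincare_distance(near_boundary, interior) is far larger than
# poincare_distance(interior, origin), even though the euclidean gaps are finite
```

pushing `near_boundary` closer to norm 1 makes its distance to any interior point grow without bound, with all coordinates staying finite.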
nonetheless, our main focus is to learn a hyperbolic representation and, therefore, we assume that ρ is not learnable. later in this section, we analyze conditions on ρ that guarantee the stability and expressiveness of the hyperbolic representation.

the second step is to transform the embedded points from the ambient space to the poincare ball. when referring to points on a manifold, it is important to define a coordinate system. a homeomorphism ψ : b → r^m is called a coordinate chart and gives the local coordinates on the manifold. the inverse map φ : r^m → b is called a parameterization of b and gives the ambient coordinates. the main idea is to inject learnable parameters into this parameterization. the injected parameters could be any form of differentiable functional that preserves the homeomorphic property. differentiability is needed so that our representation can be fed to downstream optimization methods. (the sublevel set function f and the homology dimension n are omitted from the notation.) in our construction, we utilize a variant of the generalized spherical coordinates. let θ ∈ r^m be a vector of m parameters. we define the learnable parameterization φ : r^m × r^m → b as follows:

y_1 = (2/π) arctan(θ_1 r_1), and y_i = θ_i + arccos(x_(i−1) / r_(i−1)), for i = 2, 3, ..., m, where r_i² = Σ_(j=i)^m x_j² + ε.

the small positive constant ε is added to ensure that the denominator in eq. 8 is not zero. intuitively, eq. 8 corresponds to scaling the radius of the point by a factor θ_1 and rotating it by θ_i radians across the angular axes. the scaling and rotation parameters are learned during training. note that the form of y_1 ensures that the representation belongs to the unit ball for all values of θ_1. the coordinate chart is not explicitly used in our representation; it is provided in the appendix for the sake of completeness.

the third step is to combine the representations of each individual point of the persistence diagram into a single point in the hyperbolic space.
typically, in euclidean spaces, this is done by concatenating or adding the corresponding representations. however, in non-euclidean spaces such operations are not manifold-preserving. therefore, we transform the points from the manifold to the tangent space, combine the vectors via standard vectorial addition, and transform the resulting vector back to the manifold. this approach is based on the exponential and logarithmic maps exp_x : t_x b → b and log_x : b → t_x b. the exponential map allows us to transform a vector from the tangent space to the manifold, and its inverse (i.e., the logarithmic map) from the manifold to the tangent space. for a general manifold, it is hard to find these maps as we need to solve for the minimal geodesic curve (see appendix for more details). fortunately, for the poincare ball they have analytical expressions, given as follows:

exp_x(v) = x ⊕ ( tanh(λ_x ‖v‖ / 2) · v / ‖v‖ ), log_x(y) = (2 / λ_x) · artanh(‖−x ⊕ y‖) · (−x ⊕ y) / ‖−x ⊕ y‖,

where ⊕ denotes the möbius addition, a manifold-preserving operator (i.e., for any x, y ∈ b, x ⊕ y ∈ b), whose analytical expression is given in the appendix. the transformations given by these maps are norm-preserving; for example, the geodesic distance from x to the transformed point exp_x(v) coincides with the metric norm ‖v‖_g induced by the metric tensor g_x^b. this is an important property, as we need the distance between points (and therefore the relative importance of topological features) to be preserved when transforming to and from the tangent space. we now combine the aforementioned transformations and define the poincare hyperbolic representation, followed by its stability theorem.

definition 3.1 (poincare representation). let d ∈ d be the persistence diagram to be represented in an m-dimensional poincare ball (b, g_x^b) embedded in r^m, and let x_0 ∈ b be a given point. the representation of d on the manifold b is defined as

Φ : d × r^m → b, Φ(d, θ) = exp_(x_0)( Σ_(x ∈ d) log_(x_0)( φ(ρ(x), θ) ) ),

where the exponential and logarithmic maps are given by eq. 10, and the learnable parameterization and the auxiliary transformation by eq. 8 and eq. 7, respectively.

theorem 1 (stability of hyperbolic representation). let d, e be two persistence diagrams and consider an auxiliary transformation ρ : r²_* → r^m that is
• lipschitz continuous w.r.t. the induced metric norm ‖·‖_g,
• ρ(x) = 0 for all x ∈ r_∆.
additionally, assume that x_0 = 0. then the hyperbolic representation given by eq. 11 is stable w.r.t. the wasserstein distance when p = 1, i.e., there exists a constant k > 0 such that

d_b(Φ(d, θ), Φ(e, θ)) ≤ k · w_1^g(d, e),

where d_b is the geodesic distance and w_1^g is the wasserstein metric with the q-norm replaced by the induced norm ‖·‖_g (i.e., the norm induced by the metric tensor g, see appendix a.2).

(figure 2: left: example graph from the imdb-binary dataset. middle: persistence diagrams extracted using the vietoris-rips filtration. the dashed line denotes features of infinite persistence, which are represented by points of maximal death value equal to 90 (i.e., by points of finite persistence). right: equivalent representation on the 2-dimensional poincare ball. features of infinite persistence are mapped infinitesimally close to the boundary; therefore, their distance to finite-persistence features approaches infinity (d ∼ ε⁻²).)

the proof of theorem 1 (given in the appendix) results from a general stability theorem (3) and is on par with similar results for other vectorizations (10) or representations (15) of persistence diagrams. one subtle difference is that theorem 1 uses the induced norm rather than the q-norm appearing in the wasserstein distance.
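the maps of eq. 10 and the aggregation of def. 3.1 can be sketched as follows (a hedged illustration at curvature 1; the möbius addition uses its standard closed form, which the paper defers to the appendix):

```python
import numpy as np

def mobius_add(x, y):
    # Möbius addition on the Poincare ball (standard closed form)
    xy, xx, yy = float(x @ y), float(x @ x), float(y @ y)
    return ((1 + 2 * xy + yy) * x + (1 - xx) * y) / (1 + 2 * xy + xx * yy)

def lam(x):
    return 2.0 / (1.0 - float(x @ x))  # conformal factor λ_x

def exp_map(x, v):
    n = np.linalg.norm(v)
    return x if n == 0 else mobius_add(x, np.tanh(lam(x) * n / 2) * v / n)

def log_map(x, y):
    w = mobius_add(-x, y)
    n = np.linalg.norm(w)
    return np.zeros_like(x) if n == 0 else (2.0 / lam(x)) * np.arctanh(n) * w / n

def aggregate(points, x0):
    """Combine points on the ball into a single point (def. 3.1, sketched):
    map each point to the tangent space at x0, add as ordinary vectors,
    then map the sum back to the manifold."""
    return exp_map(x0, sum(log_map(x0, p) for p in points))
```

because exp and log are inverses, a round trip through the tangent space returns the original point, and the aggregate of ball points stays inside the ball.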
however, since the induced norm implicitly depends on the chosen point x0, which, per the requirements of theorem 1, is assumed to be the origin, there is no substantial difference. the fact that we require the auxiliary transformation ρ to be zero on the diagonal is important to theoretically guarantee stability. intuitively, this can be understood by recalling (def. 2.1) that all (infinitely many) points on the diagonal are included in the persistence diagram. by mapping the diagonal to zero and taking x0 = 0, we ensure that the summation in eq. 11 collapses to zero when summing over the diagonal. finally, we note that the assumptions of theorem 1 are not restrictive. in fact, we can easily find lipschitz continuous transformations that are zero on the diagonal r_∆, such as the exponential and rational transformations proposed by hofer et al. (15). additionally, the assumptions of theorem 1 do not prohibit us from choosing an "under-powered" or degenerate ρ. for example, ρ = 0 satisfies the assumptions and therefore leads to a stable representation; such a representation is obviously not useful for learning tasks, however. an implicit requirement, which guarantees not only the stability but also the expressiveness of the resulting representation, is that ρ does not incur any information loss. this requirement is satisfied by picking a ρ that is injective which, given that ρ is a higher-dimensional embedding, is an easy condition to satisfy. in practice, we use a variant of the exponential transformation by hofer et al. (15); the exact expression is given in the appendix.

experiments

we present experiments on diverse datasets, focusing on persistence diagrams extracted from graphs and grey-scale images. the learning task is classification. our representation acts as the input to a neural network, and the parameters are learned end-to-end via standard gradient methods. the architecture as well as other training details are discussed in the appendix.
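the maps of eq. 10 and the representation of def. 3.1 can be sketched in a few lines of numpy. this is a minimal illustration with curvature fixed to 1; `rho` and `phi` below are hypothetical stand-ins for the auxiliary transformation (eq. 7) and the learnable parameterization (eq. 8), not the ones used in the experiments.

```python
import numpy as np

def mobius_add(x, y):
    # möbius addition x ⊕ y on the unit poincare ball (manifold-preserving)
    xy, x2, y2 = np.dot(x, y), np.dot(x, x), np.dot(y, y)
    return ((1 + 2 * xy + y2) * x + (1 - x2) * y) / (1 + 2 * xy + x2 * y2)

def exp_map(x, v, eps=1e-12):
    # exp_x(v): tangent vector at x -> point on the ball (eq. 10)
    lam = 2.0 / (1.0 - np.dot(x, x))          # conformal factor lambda_x
    n = np.linalg.norm(v)
    if n < eps:
        return x.copy()
    return mobius_add(x, np.tanh(lam * n / 2.0) * v / n)

def log_map(x, y, eps=1e-12):
    # log_x(y): point on the ball -> tangent vector at x (inverse of exp_x)
    lam = 2.0 / (1.0 - np.dot(x, x))
    u = mobius_add(-x, y)
    n = np.linalg.norm(u)
    if n < eps:
        return np.zeros_like(x)
    return (2.0 / lam) * np.arctanh(n) * u / n

def poincare_repr(diagram, rho, phi, x0):
    # def. 3.1: pull each embedded point to the tangent space at x0,
    # sum the tangent vectors, and push the result back to the ball
    tangent = sum(log_map(x0, phi(rho(x))) for x in diagram)
    return exp_map(x0, tangent)

# example usage with hypothetical rho (birth, persistence, 0) and phi
rho = lambda p: np.array([p[0], p[1] - p[0], 0.0])
phi = lambda z: 0.3 * np.tanh(z)   # toy "learnable" map into the ball
x0 = np.zeros(3)                   # x0 = 0, as assumed by theorem 1
r1 = poincare_repr([(0.1, 0.5), (0.2, 0.9)], rho, phi, x0)
```

since the summation happens in the tangent space, the result stays inside the ball and is invariant to the ordering of the points in the diagram.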
the code to reproduce our experiments is publicly available at https://github.com/pkyriakis/permanifold/.

ablation study: to highlight to what extent our results are driven by the hyperbolic embedding, we perform an ablation study. in more detail, we consider three variants of our method:
1. persistent poincare (p-poinc): the original method as presented in sec. 3.
2. persistent hybrid (p-hybrid): same as p-poinc with the poincare ball replaced by the euclidean space. this implies that the exponential and logarithmic maps (eq. 10) reduce to the identity maps, i.e., exp_x(v) = x + v and log_x(y) = y − x. the learnable parameterization is as in eq. 8.
3. persistent euclidean (p-eucl): same as p-hybrid with eq. 8 replaced by simple addition of the learnable parameters, i.e., y = x + θ.

baseline - essential features separation: to highlight the benefit of a unified poincare representation, we design a natural baseline that treats essential and non-essential features separately. in more detail, for each point (b, d) ∈ d, we calculate its persistence d − b and then compute the histogram of the resulting persistence values. for essential features, we compute the histogram of their birth times. then, we concatenate those histograms and feed them as input to the neural network (architecture described in the appendix). we consider the case where the essential features are included (baseline w/ essential) and the case where they are discarded (baseline w/o essential).

manifold dimension and projection bases: since our method essentially represents each persistence diagram on an m-dimensional poincare ball, it may introduce substantial information compression when the number of points in the diagrams is not of the same order as m. a trivial approach to counteract this issue is to use a high value for m.
however, we experimentally observed that a high manifold dimension does not give the optimal classification performance and adds computational overhead in the construction of the computation graph. empirically, the best approach is to keep m at moderate values (in the range m = 3 to m = 12), replicate the representation k times, and concatenate the outputs. each replica is called a projection base, and for their number we explored values dependent on the number of points in the persistence diagram. persistence diagrams obtained from images tend to have substantially fewer points than diagrams obtained from graphs. therefore, for images we explored moderate values for k, i.e., 5–10, whereas for graphs we increased k to the range 200–500. essentially, we treat both m and k as hyper-parameters, explore their space following the aforementioned empirical rules, and pick the optimal values via the validation dataset. as we increase m, it is usually prudent to decrease k to maintain similar model capacity.

graph classification
under review as a conference paper at iclr 2021

the importance of pessimism in fixed-dataset policy optimization

anonymous authors (paper under double-blind review)

abstract: we study worst-case guarantees on the expected return of fixed-dataset policy optimization algorithms. our core contribution is a unified conceptual and mathematical framework for the study of algorithms in this regime. this analysis reveals that for naïve approaches, the possibility of erroneous value overestimation leads to a difficult-to-satisfy requirement: in order to guarantee that we select a policy which is near-optimal, we may need the dataset to be informative of the value of every policy. to avoid this, algorithms can follow the pessimism principle, which states that we should choose the policy which acts optimally in the worst possible world. we show why pessimistic algorithms can achieve good performance even when the dataset is not informative of every policy, and derive families of algorithms which follow this principle. these theoretical findings are validated by experiments on a tabular gridworld, and deep learning experiments on four minatar environments.

introduction

we consider fixed-dataset policy optimization (fdpo), in which a dataset of transitions from an environment is used to find a policy with high return.[1] we compare fdpo algorithms by their worst-case performance, expressed as high-probability guarantees on the suboptimality of the learned policy. it is perhaps obvious that in order to maximize worst-case performance, a good fdpo algorithm should select a policy with high worst-case value. we call this the pessimism principle of exploitation, as it is analogous to the widely-known optimism principle (lattimore & szepesvári, 2020) of exploration.[2] our main contribution is a theoretical justification of the pessimism principle in fdpo, based on a bound that characterizes the suboptimality incurred by an fdpo algorithm.
we further demonstrate how this bound may be used to derive principled algorithms. note that the core novelty of our work is not the idea of pessimism, which is an intuitive concept that appears in a variety of contexts; rather, our contribution is a set of theoretical results rigorously explaining how pessimism is important in the specific setting of fdpo. an example conveying the intuition behind our results can be found in appendix g.1.

we first analyze a family of non-pessimistic naïve fdpo algorithms, which estimate the environment from the dataset via maximum likelihood and then apply standard dynamic programming techniques. we prove a bound which shows that the worst-case suboptimality of these algorithms is guaranteed to be small when the dataset contains enough data that we are certain about the value of every possible policy. this is caused by the outsized impact of value overestimation errors on suboptimality, sometimes called the optimizer's curse (smith & winkler, 2006). it is a fundamental consequence of ignoring the disconnect between the true environment and the picture painted by our limited observations. importantly, it is not reliant on errors introduced by function approximation.

[1] we use the term fixed-dataset policy optimization to emphasize the computational procedure; this setting has also been referred to as batch rl (ernst et al., 2005; lange et al., 2012) and more recently, offline rl (levine et al., 2020). we emphasize that this is a well-studied setting, and we are simply choosing to refer to it by a more descriptive name.
[2] the optimism principle states that we should select a policy with high best-case value.

we contrast these findings with an analysis of pessimistic fdpo algorithms, which select a policy that maximizes some notion of worst-case expected return.
we show that these algorithms do not require datasets which inform us about the value of every policy to achieve small suboptimality, due to the critical role that pessimism plays in preventing overestimation. our analysis naturally leads to two families of principled pessimistic fdpo algorithms. we prove their improved suboptimality guarantees, and confirm our claims with experiments on a gridworld. finally, we extend one of our pessimistic algorithms to the deep learning setting. recently, several deep-learning-based algorithms for fixed-dataset policy optimization have been proposed (agarwal et al., 2019; fujimoto et al., 2019; kumar et al., 2019; laroche et al., 2019; jaques et al., 2019; kidambi et al., 2020; yu et al., 2020; wu et al., 2019; wang et al., 2020; kumar et al., 2020; liu et al., 2020). our work is complementary to these results, as our contributions are conceptual, rather than algorithmic. our primary goal is to theoretically unify existing approaches and motivate the design of pessimistic algorithms more broadly. using experiments in the minatar game suite (young & tian, 2019), we provide empirical validation for the predictions of our analysis. the problem of fixed-dataset policy optimization is closely related to the problem of reinforcement learning, and as such, there is a large body of work which contains ideas related to those discussed in this paper. we discuss these works in detail in appendix e.

background

we anticipate most readers will be familiar with the concepts and notation, which are fairly standard in the reinforcement learning literature. in the interest of space, we relegate a full presentation to appendix a. here, we briefly give an informal overview of the background necessary to understand the main results. we represent the environment as a markov decision process (mdp), denoted m := ⟨s, a, r, p, γ, ρ⟩.
we assume without loss of generality that r(⟨s, a⟩) ∈ [0, 1], and denote its expectation as r̄(⟨s, a⟩). ρ represents the start-state distribution. policies π can act in the environment, represented by the action matrix a^π, which maps each state to the probability of each state-action when following π. value functions v assign some real value to each state. we use v^π_m to denote the value function which assigns the sum of discounted rewards in the environment when following policy π. a dataset d contains transitions sampled from the environment. from a dataset, we can compute the empirical reward and transition functions, r_d and p_d, and the empirical policy, π̂_d. an important concept for our analysis is the value uncertainty function, denoted µ^π_{d,δ}, which returns a high-probability upper bound on the error of a value function derived from dataset d. certain value uncertainty functions are decomposable by states or state-actions, meaning they can be written as the weighted sum of more local uncertainties. see appendix b for more detail.

our goal is to analyze the suboptimality of a specific class of fdpo algorithms, called value-based fdpo algorithms, which have a straightforward structure: they use a fixed-dataset policy evaluation (fdpe) algorithm to assign a value to each policy, and then select the policy with the maximum value. furthermore, we consider fdpe algorithms whose solutions satisfy a fixed-point equation. thus, a fixed-point equation defines an fdpe objective, which in turn defines a value-based fdpo objective; we call the set of all algorithms that implement these objectives the family of algorithms defined by the fixed-point equation.

over/under decomposition of suboptimality

our first theoretical contribution is a simple but informative bound on the suboptimality of any value-based fdpo algorithm. next, in section 4, we make this concrete by defining the family of naïve algorithms and invoking this bound.
this bound is insightful because it distinguishes the impact of errors of value overestimation from errors of value underestimation, defined as:

definition 1. consider any fixed-dataset policy evaluation algorithm e on any dataset d and any policy π. denote v^π_d := e(d, π). we define the underestimation error as E_ρ[v^π_m − v^π_d] and the overestimation error as E_ρ[v^π_d − v^π_m].

the following lemma shows how these quantities can be used to bound suboptimality.

lemma 1 (value-based fdpo suboptimality bound). consider any value-based fixed-dataset policy optimization algorithm o^vb, with fixed-dataset policy evaluation subroutine e. for any policy π and dataset d, denote v^π_d := e(d, π). the suboptimality of o^vb is bounded by

subopt(o^vb(d)) ≤ inf_π ( E_ρ[v^{π*_m}_m − v^π_m] + E_ρ[v^π_m − v^π_d] ) + sup_π E_ρ[v^π_d − v^π_m].

proof. see appendix c.1.

this bound is tight; see appendix c.2. the bound highlights the potentially outsized impact of overestimation on the suboptimality of an fdpo algorithm. to see this, we consider each of its terms in isolation:

subopt(o^vb(d)) ≤ inf_π ( E_ρ[v^{π*_m}_m − v^π_m] (a1) + E_ρ[v^π_m − v^π_d] (a2) ) (a) + sup_π E_ρ[v^π_d − v^π_m] (b)

the term labeled (a) reflects the degree to which the dataset informs us of a near-optimal policy. for any policy π, (a1) captures the suboptimality of that policy, and (a2) captures its underestimation error. since (a) takes an infimum, this term will be small whenever there is at least one reasonable policy whose value is not very underestimated. on the other hand, the term labeled (b) corresponds to the largest overestimation error on any policy. because it consists of a supremum over all policies, it will be small only when no policies are overestimated at all. even a single overestimation can lead to significant suboptimality. we see from these two terms that errors of overestimation and underestimation have differing impacts on suboptimality, suggesting that algorithms should be designed with this asymmetry in mind.
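to see numerically why the supremum term can dominate, here is a toy monte-carlo sketch (illustrative numbers only, not an experiment from the paper): every policy has the same true value and every value estimate is unbiased, yet the estimate of the argmax policy is systematically biased upward, so selecting on noisy estimates manufactures exactly the overestimation error that term (b) penalizes.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_policies = 200, 1000
true_value, noise_scale = 1.0, 0.5

# unbiased but noisy value estimates for many policies of identical true value
estimates = true_value + noise_scale * rng.standard_normal((n_trials, n_policies))

# evaluating one fixed policy: its estimate is unbiased on average
single_policy_bias = estimates[:, 0].mean() - true_value

# selecting the argmax policy: its estimate overstates its (unchanged) true
# value, i.e. the selected policy is almost always an overestimated one
selected_bias = estimates.max(axis=1).mean() - true_value
```

the selected policy's true value is still 1.0, so the entire positive `selected_bias` (roughly `noise_scale` times the expected maximum of 1000 standard normals, about 1.6 here) is realized as suboptimality relative to what the estimate promised: the optimizer's curse.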
we will see in section 5 how this may be done. but first, let us further understand why this is necessary by studying in more depth a family of algorithms which treats its errors of overestimation and underestimation equivalently.

naïve algorithms

the goal of this section is to paint a high-level picture of the worst-case suboptimality guarantees of a specific family of non-pessimistic approaches, which we call naïve fdpo algorithms. informally, the naïve approach is to take the limited dataset of observations at face value, treating it as though it paints a fully accurate picture of the environment. naïve algorithms construct a maximum-likelihood mdp from the dataset, then use standard dynamic programming approaches on this empirical mdp.

definition 2. a naïve algorithm is any algorithm in the family defined by the fixed-point function f_naïve(v^π) := a^π(r_d + γ p_d v^π).

various fdpe and fdpo algorithms from this family could be described; in this work, we do not study these implementations in detail, although we do give pseudocode for some implementations in appendix d.1. one example of a naïve fdpo algorithm which can be found in the literature is certainty equivalence (jiang, 2019a). the core ideas behind naïve algorithms can also be found in the function approximation literature, for example in fqi (ernst et al., 2005; jiang, 2019b). additionally, when available data is held fixed, nearly all existing deep reinforcement learning algorithms are transformed into naïve value-based fdpo algorithms. for example, dqn (mnih et al., 2015) with a fixed replay buffer is a naïve value-based fdpo algorithm.

theorem 1 (naïve fdpo suboptimality bound). consider any naïve value-based fixed-dataset policy optimization algorithm o^vb_naïve. let µ be any value uncertainty function. the suboptimality of o^vb_naïve is bounded with probability at least 1 − δ by

subopt(o^vb_naïve(d)) ≤ inf_π ( E_ρ[v^{π*_m}_m − v^π_m] + E_ρ[µ^π_{d,δ}] ) + sup_π E_ρ[µ^π_{d,δ}].

proof. this result follows directly from lemma 1 and lemma 3.

the infimum term is small whenever there is some reasonably good policy with low value uncertainty. in practice, this condition can typically be satisfied, for example by including expert demonstrations in the dataset. on the other hand, the supremum term will only be small if we have low value uncertainty for all policies, a much more challenging requirement. this explains the behavior of pathological examples, e.g. in appendix g.1, where performance is poor despite access to virtually unlimited amounts of data from a near-optimal policy. such a dataset ensures that the first term will be small by reducing the value uncertainty of the near-optimal data collection policy, but does little to reduce the value uncertainty of any other policy, leading the second term to be large.

however, although pathological examples exist, it is clear that this bound will not be tight on all environments. it is reasonable to ask: is it likely that this bound will be tight on real-world examples? we argue that it likely will be. we identify two properties that most real-world tasks share: (1) the set of policies is pyramidal: there are an enormous number of bad policies, many mediocre policies, a few good policies, etc. (2) due to the size of the state space and cost of data collection, most policies have high value uncertainty. given that these assumptions hold, naïve algorithms will perform as poorly on most real-world environments as they do on pathological examples.
consider: there are many more policies than there is data, so there will be many policies with high value uncertainty; naïve algorithms will likely overestimate several of these policies, and erroneously select one; since good policies are rare, the selected policy will likely be bad. it follows that running naïve algorithms on real-world problems will typically yield suboptimality close to our worst-case bound. and, indeed, on deep rl benchmarks, which are selected due to their similarity to real-world settings, overestimation has been widely observed, typically correlated with poor performance (bellemare et al., 2016; van hasselt et al., 2016; fujimoto et al., 2019).

the pessimism principle

"behave as though the world was plausibly worse than you observed it to be." the pessimism principle tells us how to exploit our current knowledge to find the stationary policy with the best worst-case guarantee on expected return. we consider two specific families of pessimistic algorithms, the uncertainty-aware pessimistic algorithms and the proximal pessimistic algorithms, and bound the worst-case suboptimality of each. these algorithms each include a hyperparameter, α, controlling the amount of pessimism, interpolating from fully-naïve to fully-pessimistic. (for a discussion of the implications of the latter extreme, see appendix g.2.) then, we will compare the two families, and see how the proximal family is simply a trivial special case of the more general uncertainty-aware family of methods.

uncertainty-aware pessimistic algorithms

our first family of pessimistic algorithms is the uncertainty-aware (ua) pessimistic algorithms. as the name suggests, this family of algorithms estimates the state-wise bellman uncertainty and penalizes policies accordingly, leading to a pessimistic value estimate and a preference for policies with low value uncertainty.

definition 3.
an uncertainty-aware pessimistic algorithm, with bellman uncertainty function u^π_{d,δ} and pessimism hyperparameter α ∈ [0, 1], is any algorithm in the family defined by the fixed-point function

f_ua(v^π) = a^π(r_d + γ p_d v^π) − α u^π_{d,δ}.

this fixed-point function is simply the naïve fixed-point function penalized by the bellman uncertainty. this can be interpreted as being pessimistic about the outcome of every action. note that it remains to specify a technique to compute the bellman uncertainty function, e.g. appendix b.1, in order to get a concrete algorithm. it is straightforward to construct algorithms from this family by modifying naïve algorithms to subtract the penalty term. similar algorithms have been explored in the safe rl literature (ghavamzadeh et al., 2016; laroche et al., 2019) and the robust mdp literature (givan et al., 1997), where algorithms with high-probability performance guarantees are useful in the context of ensuring safety.

theorem 2 (uncertainty-aware pessimistic fdpo suboptimality bound). consider any uncertainty-aware pessimistic value-based fixed-dataset policy optimization algorithm o^vb_ua. let u^π_{d,δ} be any bellman uncertainty function, µ^π_{d,δ} be a corresponding value uncertainty function, and α ∈ [0, 1] be any pessimism hyperparameter. the suboptimality of o^vb_ua is bounded with probability at least 1 − δ by

subopt(o^vb_ua(d)) ≤ inf_π ( E_ρ[v^{π*_m}_m − v^π_m] + (1 + α) E_ρ[µ^π_{d,δ}] ) + (1 − α) sup_π E_ρ[µ^π_{d,δ}].

proof. see appendix c.7.

this bound should be contrasted with our result from theorem 1. with α = 0, the family of pessimistic algorithms reduces to the family of naïve algorithms, so the bound is correspondingly identical. we can add pessimism by increasing α, and this corresponds to a decrease in the magnitude of the supremum term. when α = 1, there is no supremum term at all. in general, the optimal value of α lies between the two extremes.
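in the tabular case, the fixed point of f_ua can be computed by simple iteration. the following is a minimal sketch (the two-state dynamics and constant uncertainty penalty are made-up illustrative numbers, not the paper's implementation) showing that α = 0 recovers the naïve fixed point while α > 0 yields a uniformly lower, pessimistic value estimate:

```python
import numpy as np

def fixed_point_eval(A_pi, r, P, u, alpha, gamma=0.9, iters=1000):
    # iterate v <- A_pi (r + gamma P v) - alpha * u, the fixed point of f_ua;
    # with alpha = 0 this is the naive fixed-point function of definition 2.
    # A_pi: (S, SA) policy matrix, r: (SA,) rewards, P: (SA, S) transitions,
    # u: (S,) state-wise bellman uncertainty penalty
    v = np.zeros(P.shape[1])
    for _ in range(iters):
        v = A_pi @ (r + gamma * (P @ v)) - alpha * u
    return v

# toy 2-state mdp with one action per state (hypothetical numbers)
A_pi = np.eye(2)                        # deterministic policy: state i -> action i
r = np.array([1.0, 0.5])
P = np.array([[0.5, 0.5], [0.5, 0.5]])  # both actions mix the two states evenly
u = np.array([0.2, 0.2])                # constant uncertainty penalty

v_naive = fixed_point_eval(A_pi, r, P, u, alpha=0.0)
v_pess = fixed_point_eval(A_pi, r, P, u, alpha=0.5)
```

the gap v_naive − v_pess equals α(i − γ a^π p_d)^{-1} u, which is exactly the shape in which the penalty enters the suboptimality bounds: pessimism lowers every policy's estimate, but lowers high-uncertainty policies the most.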
to further understand the power of this approach, it is illustrative to compare it to imitation learning. consider the case where the dataset contains a small number of expert trajectories but also a large number of interactions from a random policy, i.e. when learning from suboptimal demonstrations (brown et al., 2019). if the dataset contained only a small amount of expert data, then both a ua pessimistic fdpo algorithm and an imitation learning algorithm would return a high-value policy. however, the injection of sufficiently many random interactions would degrade the performance of imitation learning algorithms, whereas ua pessimistic algorithms would continue to behave similarly to the expert data.

proximal pessimistic algorithms

the next family of algorithms we study are the proximal pessimistic algorithms, which implement pessimism by penalizing policies that deviate from the empirical policy. the name proximal was chosen to reflect the idea that these algorithms prefer policies which stay "nearby" to the empirical policy. many fdpo algorithms in the literature, and in particular several recently-proposed deep learning algorithms (fujimoto et al., 2019; kumar et al., 2019; laroche et al., 2019; jaques et al., 2019; wu et al., 2019; liu et al., 2020), resemble members of the family of proximal pessimistic algorithms; see appendix e. also, another variant of the proximal pessimistic family, which uses state density instead of state-conditional action density, can be found in appendix c.9.

definition 4. a proximal pessimistic algorithm with pessimism hyperparameter α ∈ [0, 1] is any algorithm in the family defined by the fixed-point function

f_proximal(v^π) = a^π(r_d + γ p_d v^π) − α (1 / (1 − γ)) tv_s(π, π̂_d).

theorem 3 (proximal pessimistic fdpo suboptimality bound). consider any proximal pessimistic value-based fixed-dataset policy optimization algorithm o^vb_proximal. let µ be any state-action-wise decomposable value uncertainty function, and α ∈ [0, 1] be a pessimism hyperparameter. for any dataset d, the suboptimality of o^vb_proximal is bounded with probability at least 1 − δ by

subopt(o^vb_proximal(d)) ≤ inf_π ( E_ρ[v^{π*_m}_m − v^π_m] + E_ρ[ µ^π_{d,δ} + α (i − γ a^π p_d)^{-1} (1 / (1 − γ)) tv_s(π, π̂_d) ] ) + sup_π E_ρ[ µ^π_{d,δ} − α (i − γ a^π p_d)^{-1} (1 / (1 − γ)) tv_s(π, π̂_d) ].

proof. see appendix c.8.

once again, we see that as α grows, the large supremum term shrinks; similarly, by lemma 5, when we have α = 1, the supremum term is guaranteed to be non-positive.[3] the primary limitation of the proximal approach is the looseness of the value lower-bound. intuitively, this algorithm can be understood as performing imitation learning, but permitting minor deviations. constraining the policy to be near in distribution to the empirical policy can fail to take advantage of highly-visited states which are reached via many trajectories. in fact, in contrast to both the naïve approach and the ua pessimistic approach, in the limit of infinite data this approach is not guaranteed to converge to the optimal policy. also, note that when α ≥ 1 − γ, this algorithm is identical to imitation learning.

[3] initially, it will contain µ^π'_{d,δ}, but this can be removed since it is not dependent on π.

the relationship between uncertainty-aware and proximal algorithms

though these two families may appear on the surface to be quite different, they are in fact closely related. a key insight of our theoretical work is that it reveals the important connection between these two approaches. concretely: proximal algorithms are uncertainty-aware algorithms which use a trivial value uncertainty function. to see this, we show how to convert an uncertainty-aware penalty into a proximal penalty. let µ be any state-action-wise decomposable value uncertainty function. for any dataset d, we have

µ^π_{d,δ} = µ^{π̂_d}_{d,δ} + (i − γ a^π p_d)^{-1} ( (a^π − a^{π̂_d})(u_{d,δ} + γ p_d µ^{π̂_d}_{d,δ}) )   (lemma 4)
         ≤ µ^{π̂_d}_{d,δ} + (i − γ a^π p_d)^{-1} ( (1 / (1 − γ)) tv_s(π, π̂_d) )   (lemma 5)

we began with the uncertainty penalty.
in the first step, we rewrote the uncertainty for π into the sum of two terms: the uncertainty for π̂_d, and the difference in uncertainty between π and π̂_d on various actions. in the second step, we chose our state-action-wise bellman uncertainty to be 1/(1 − γ), which is a trivial upper bound; we also upper-bounded the signed policy difference with the total variation. this results in the proximal penalty.[4] thus, we see that proximal penalties are equivalent to uncertainty-aware penalties which use a specific, trivial uncertainty function. this result suggests that uncertainty-aware algorithms are strictly better than their proximal counterparts. there is no looseness in this result: for any proximal penalty, we will always be able to find a tighter uncertainty-aware penalty by replacing the trivial uncertainty function with something tighter. however, currently, proximal algorithms are quite useful in the context of deep learning. this is because the only uncertainty function that can currently be implemented for neural networks is the trivial uncertainty function. until we discover how to compute uncertainties for neural networks, proximal pessimistic algorithms will remain the only theoretically-motivated family of algorithms.

experiments

we implement algorithms from each family to empirically investigate whether their performance follows the predictions of our bounds. below, we summarize the key predictions of our theory.

• imitation. this algorithm simply learns to copy the empirical policy. it performs well if and only if the data collection policy performs well.
• naïve. this algorithm performs well only when almost no policies have high value uncertainty. this means that when the data is collected from any mostly-deterministic policy, performance of this algorithm will be poor, since many states will be missing data. stochastic data collection improves performance. as the size of the dataset grows, this algorithm approaches optimality.
• uncertainty-aware.
this algorithm performs well when there is data on states visited by near-optimal policies. this is the case when a small amount of data has been collected from a near-optimal policy, or a large amount of data has been collected from a worse policy. as the size of the dataset grows, this algorithm approaches optimality. this approach outperforms all other approaches.
• proximal. this algorithm roughly mirrors the performance of the imitation approach, but improves upon it. as the size of the dataset grows, this algorithm does not approach optimality, as the penalty persists even when the environment's dynamics are perfectly captured by the dataset.

[4] when constructing the penalty, we can ignore the first term, which does not contain π, and so is irrelevant to optimization.

our experimental results qualitatively align with our predictions in both the tabular and deep learning settings, giving evidence that the picture painted by our theoretical analysis truly describes the fdpo setting. see appendix d for pseudocode of all algorithms; see appendix f for details on the experimental setup; see appendix g.3 for additional experimental considerations for deep learning experiments that will be of interest to practitioners. for an open-source implementation, including full details suitable for replication, please refer to the code in the accompanying github repository: github.com/anonymized.

figure 1: tabular gridworld experiments. (a) performance of fdpo algorithms on a dataset of 2000 transitions, as the data collection policy is interpolated from random to optimal. (b) performance of fdpo algorithms as dataset size increases. data is collected with an optimal ε-greedy policy, with ε = 50%.

tabular.
the first tabular experiment, whose results are shown in figure 1(a), compares the performance of the algorithms as the policy used to collect the dataset is interpolated from the uniform random policy to an optimal policy using ε-greedy. the second experiment, whose results are shown in figure 1(b), compares the performance of the algorithms as we increase the size of the dataset from 1 sample to 200000 samples. in both experiments, we notice a qualitative difference between the trends of the various algorithms, which aligns with the predictions of our theory.

neural network. the results of these experiments can be seen in figure 2.

figure 2: performance of deep fdpo algorithms on a dataset of 500000 transitions, as the data collection policy is interpolated from near-optimal to random. note that here, the only pessimistic algorithm evaluated is proximal.

similarly to the tabular experiments, we see that the naïve approach performs well when data is fully exploratory, and poorly when data is collected via an optimal policy; the pure imitation approach performs better when the data collection policy is closer to optimal. the pessimistic approach achieves the best of both worlds: it correctly imitates a near-optimal policy, but also learns to improve upon it somewhat when the data is more exploratory. one notable failure case is in freeway, where the performance of the pessimistic approach barely improves upon the imitation policy, despite the naïve approach performing near-optimally for intermediate values of ε.

discussion and conclusion

in this work, we provided a conceptual and mathematical framework for thinking about fixed-dataset policy optimization. starting from intuitive building blocks of uncertainty and the over-under decomposition, we showed the core issue with naïve approaches, and introduced the pessimism principle as the defining characteristic of solutions. we described two families of pessimistic algorithms, uncertainty-aware and proximal. we see theoretically that both of these approaches have advantages over the naïve approach, and observed these advantages empirically. comparing these two families of pessimistic algorithms, we see both theoretically and empirically that uncertainty-aware algorithms are strictly better than proximal algorithms, and that proximal algorithms may not yield the optimal policy, even with infinite data.

future directions. our results indicate that research in fdpo should not focus on proximal algorithms. the development of neural uncertainty estimation techniques will enable principled uncertainty-aware deep learning algorithms. as is evidenced by our tabular results, we expect these approaches to yield dramatic performance improvements, rendering algorithms derived from the proximal family (kumar et al., 2019; fujimoto et al., 2019; laroche et al., 2019; kumar et al., 2020) obsolete.

on ad-hoc solutions. it is undoubtedly disappointing to see that proximal algorithms, which are far easier to implement, are fundamentally limited in this way. it is tempting to propose various ad-hoc solutions to mitigate the flaws of proximal pessimistic algorithms in practice. however, in order to ensure that the resulting algorithm is principled, one must be careful. for example, one might consider tuning α; however, doing the tuning requires evaluating each policy in the environment, which involves gaining information by interacting with the environment, which is not permitted by the problem setting. or, one might consider e.g. an adaptive pessimism hyperparameter which decays with the size of the dataset; however, in order for such a penalty to be principled, it must be based on an uncertainty function, at which point we may as well just use an uncertainty-aware algorithm.

stochastic policies.
one surprising property of pessimistic algorithms is that the optimal policy is often stochastic. this is because the penalty term included in their fixed-point objective is often minimized by stochastic policies. for the penalty of proximal pessimistic algorithms, it is easy to see that this will be the case for any non-deterministic empirical policy; for ua pessimistic algorithms, it is dependent on the choice of bellman uncertainty function, but often still holds (see appendix b.2 for the derivation of a bellman uncertainty function with this property). this observation lends mathematical rigor to the intuition that agents should ‘hedge their bets’ in the face of epistemic uncertainty. this property also means that the simple approach of selecting the argmax action is no longer adequate for policy improvement. in appendix d.2.2 we discuss a policy improvement procedure that takes into account the proximal penalty to find the stochastic optimal policy. implications for rl. finally, due to the close connection between the fdpo and rl settings, this work has implications for deep reinforcement learning. many popular deep rl algorithms utilize a replay buffer to break the correlation between samples in each minibatch (mnih et al., 2015). however, since these algorithms typically alternate between collecting data and training the network, the replay buffer can also be viewed as a ‘temporarily fixed’ dataset during the training phase. these algorithms are often very sensitive to hyperparameters; in particular, they perform poorly when the number of learning steps per interaction is large (fedus et al., 2020). this effect can be explained by our analysis: additional steps of learning cause the policy to approach its naïve fdpo fixed-point, which has poor worst-case suboptimality. a pessimistic algorithm with a better fixed-point could therefore allow us to train more per interaction, improving sample efficiency.
a potential direction of future work is therefore to incorporate pessimism into deep rl.
references
rishabh agarwal, dale schuurmans, and mohammad norouzi. striving for simplicity in off-policy deep reinforcement learning. arxiv preprint arxiv:1907.04543, 2019.
andrás antos, csaba szepesvári, and rémi munos. value-iteration based fitted policy iteration: learning with a single trajectory. in 2007 ieee international symposium on approximate dynamic programming and reinforcement learning, pp. 330–337. ieee, 2007.
marc g bellemare, georg ostrovski, arthur guez, philip s thomas, and rémi munos. increasing the action gap: new operators for reinforcement learning. in thirtieth aaai conference on artificial intelligence, 2016.
daniel s. brown, wonjoon goo, prabhat nagarajan, and scott niekum. extrapolating beyond suboptimal demonstrations via inverse reinforcement learning from observations. in proceedings of the international conference on machine learning, 2019.
michael k cohen and marcus hutter. pessimism about unknown unknowns inspires conservatism. in conference on learning theory, pp. 1344–1373. pmlr, 2020.
damien ernst, pierre geurts, and louis wehenkel. tree-based batch mode reinforcement learning. journal of machine learning research, 6:503–556, 2005.
william fedus, prajit ramachandran, rishabh agarwal, yoshua bengio, hugo larochelle, mark rowland, and will dabney. revisiting fundamentals of experience replay. arxiv preprint arxiv:2007.06700, 2020.
scott fujimoto, david meger, and doina precup. off-policy deep reinforcement learning without exploration. in international conference on machine learning, pp. 2052–2062, 2019.
mohammad ghavamzadeh, marek petrik, and yinlam chow. safe policy improvement by minimizing robust baseline regret. in advances in neural information processing systems, pp. 2298–2306, 2016.
robert givan, sonia leach, and thomas dean. bounded parameter markov decision processes.
in european conference on planning, pp. 234–246. springer, 1997.
vineet goyal and julien grand-clement. robust markov decision process: beyond rectangularity.
wei hu, lechao xiao, and jeffrey pennington. provable benefit of orthogonal initialization in optimizing deep linear networks. arxiv preprint arxiv:2001.05992, 2020.
ahmed hussein, mohamed medhat gaber, eyad elyan, and chrisina jayne. imitation learning: a survey of learning methods. acm computing surveys (csur), 50(2):1–35, 2017.
garud n iyengar. robust dynamic programming. mathematics of operations research, 30(2):
natasha jaques, asma ghandeharioun, judy hanwen shen, craig ferguson, agata lapedriza, noah jones, shixiang gu, and rosalind picard. way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arxiv preprint arxiv:1907.00456, 2019.
nan jiang. note on certainty equivalence, 2019a.
nan jiang. note on fqi, 2019b.
nan jiang and jiawei huang. minimax confidence interval for off-policy evaluation and policy
sham kakade and john langford. approximately optimal approximate reinforcement learning. in
rahul kidambi, aravind rajeswaran, praneeth netrapalli, and thorsten joachims. morel: model-based offline reinforcement learning. arxiv preprint arxiv:2005.05951, 2020.
aviral kumar, justin fu, matthew soh, george tucker, and sergey levine. stabilizing off-policy q-learning via bootstrapping error reduction. in advances in neural information processing systems, pp. 11784–11794, 2019.
aviral kumar, aurick zhou, george tucker, and sergey levine. conservative q-learning for offline reinforcement learning. arxiv preprint arxiv:2006.04779, 2020.
sascha lange, thomas gabel, and martin riedmiller. batch reinforcement learning. reinforcement
romain laroche, paul trichelair, and remi tachet des combes. safe policy improvement with baseline bootstrapping. in international conference on machine learning, pp. 3652–3661, 2019.
tor lattimore and csaba szepesvári. bandit algorithms. cambridge university press, 2020.
sergey levine, aviral kumar, george tucker, and justin fu. offline reinforcement learning: tuto
9
[ 108, 549.4730784, 504.0033874, 559.4356784 ]
_4GFbtOuWq-.pdf
2,022
1
capacity of group-invariant linear readouts from equivariant representations: how many objects can be linearly classified under all possible views? matthew farrell∗‡, blake bordelon∗‡, shubhendu trivedi†, & cengiz pehlevan‡ ‡ harvard university {msfarrell,blake bordelon,cpehlevan}@seas.harvard.edu † massachusetts institute of technology shubhendu@csail.mit.edu abstract equivariance has emerged as a desirable property of representations of objects subject to identity-preserving transformations that constitute a group, such as translations and rotations. however, the expressivity of a representation constrained by group equivariance is still not fully understood. we address this gap by providing a generalization of cover’s function counting theorem that quantifies the number of linearly separable and group-invariant binary dichotomies that can be assigned to equivariant representations of objects. we find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action. we show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling. while other operations do not change the fraction of separable dichotomies, local pooling decreases the fraction, despite being a highly nonlinear operation. finally, we test our theory on intermediate representations of randomly initialized and fully trained convolutional neural networks and find perfect agreement. introduction the ability to robustly categorize objects under conditions and transformations that preserve the object categories is essential to animal intelligence, and to pursuits of practical importance such as improving computer vision systems. however, for general-purpose understanding and geometric reasoning, invariant representations of these objects in sensory processing circuits are not enough. perceptual representations must also accurately encode their transformation properties. 
one such property is that of exhibiting equivariance to transformations of the object. when such transformations are restricted to be an algebraic group, the resulting equivariant representations have found significant success in machine learning starting with classical convolutional neural networks (cnns) (denker et al., 1989; lecun et al., 1989) and recently being generalized by the influential work of cohen & welling (2016). such representations have elicited burgeoning interest as they capture many transformations of practical interest such as translations, permutations, rotations, and reflections. furthermore, equivariance to these transformations can be easily “hard-coded” into neural networks. indeed, a new breed of cnn architectures that explicitly account for such transformations are seeing diverse and rapidly growing applications (townshend et al., 2021; baek et al., 2021; satorras et al., 2021; anderson et al., 2019; bogatskiy et al., 2020; klicpera et al., 2020; winkels & cohen, 2019; gordon et al., 2020; sosnovik et al., 2021; eismann et al., 2020). in addition, equivariant cnns have been shown to capture response properties of neurons in the primary visual cortex beyond classical gábor filter models (ecker et al., 2018). ∗these authors contributed equally. while it is clear that equivariance imposes a strong constraint on the geometry of representations and thus of perceptual manifolds (seung & lee, 2000; dicarlo & cox, 2007) that are carved out by these representations as the objects transform, the implications of such constraints on their expressivity are not well understood. in this work we take a step towards addressing this gap. our starting point is the classical notion of the perceptron capacity (sometimes also known as the fractional memory/storage capacity) – a quantity fundamental to the task of object categorization and closely related to vc dimension (vapnik & chervonenkis, 1968).
defined as the maximum number of points for which all (or a 1-δ fraction of) possible binary label assignments (i.e. dichotomies) afford a hyperplane that separates points with one label from the points with the other, it can be seen to offer a quantification of the expressivity of a representation. classical work on perceptron capacity focused on points in general position (wendel, 1962; cover, 1965; schläfli, 1950; gardner, 1987; 1988). however, understanding the perceptron capacity when the inputs are not merely points, but are endowed with richer structure, has only recently started to attract attention. for instance, work by chung et al. (2018); pastore et al. (2020); rotondo et al. (2020); cohen et al. (2020) considered general perceptual manifolds and examined the role of their geometry to obtain extensions to the perceptron capacity results. however, such work crucially relied on the assumption that each manifold is oriented randomly, a condition which is strongly violated by equivariant representations. with these motivations, our particular contributions in this paper are the following:
• we extend cover’s function counting theorem and vc dimension to equivariant representations, finding that both scale with the dimension of the subspace fixed by the group action.
• we demonstrate the applicability of our result to g-convolutional network layers, including pooling layers, through theory and verify through simulation.
1.1 related works
work most related to ours falls along two major axes. the first follows the classical perceptron capacity result on the linear separability of points (schläfli, 1950; wendel, 1962; cover, 1965; gardner, 1987; 1988). this result initiated a long history of investigation in theoretical neuroscience (e.g. brunel et al., 2004; chapeton et al., 2012; rigotti et al., 2013; brunel, 2016; rubin et al., 2017; pehlevan & sengupta, 2017), where it is used to understand the memory capacity of neuronal architectures.
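the classical counting result referenced above can be evaluated directly. the following sketch (function name is my own) computes cover's formula for the fraction of linearly separable dichotomies of p points in general position in n dimensions:

```python
from math import comb

def cover_fraction(p, n):
    """Cover (1965): the number of linearly separable dichotomies of p
    points in general position in R^n is C(p, n) = 2 * sum_{k<n} binom(p-1, k).
    Returns that count as a fraction of all 2^p dichotomies."""
    count = 2 * sum(comb(p - 1, k) for k in range(n))
    return count / 2 ** p
```

this reproduces the familiar capacity transition: every dichotomy is separable for p ≤ n, exactly half are separable at p = 2n, and almost none for p ≫ n. the paper's generalization replaces n with the dimension of the subspace fixed by the group action.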
similarly, in machine learning, the perceptron capacity and its variants, including notions for multilayer perceptrons, have been fundamental to a fruitful line of study in the context of finite sample expressivity and generalization (baum, 1988; kowalczyk, 1997; sontag, 1997; huang, 2003; yun et al., 2019; vershynin, 2020). work closest in spirit to ours comes from theoretical neuroscience and statistical physics (chung et al., 2018; pastore et al., 2020; rotondo et al., 2020; cohen et al., 2020), which considered general perceptual manifolds, albeit oriented randomly, and examined the role of their geometry to obtain extensions to the perceptron capacity result. the second line of relevant literature is that on group-equivariant convolutional neural networks (gcnns). the main inspiration for such networks comes from the spectacular success of classical cnns (lecun et al., 1989) which directly built translational symmetry into the network architecture. in particular, the internal representations of a cnn are approximately¹ translation equivariant: if the input image is translated by an amount t, the feature map of each internal layer is translated by the same amount. furthermore, an invariant read-out on top ensures that a cnn is translation invariant. cohen & welling (2016) observed that a viable approach to generalize cnns to other data types could involve considering equivariance to more general transformation groups.
this idea has been used to construct networks equivariant to a wide variety of transformations such as planar rotations (worrall et al., 2017; weiler et al., 2018b; bekkers et al., 2018; veeling et al., 2018; smets et al., 2020), 3d rotations (cohen et al., 2018; esteves et al., 2018; worrall & brostow, 2018; weiler et al., 2018a; kondor et al., 2018a; perraudin et al., 2019), permutations (zaheer et al., 2017; hartford et al., 2018; kondor et al., 2018b; maron et al., 2019a; 2020), general euclidean isometries (weiler et al., 2018a; weiler & cesa, 2019; finzi et al., 2020), scaling (marcos et al., 2018; worrall & welling, 2019; sosnovik et al., 2020) and more exotic symmetries (bogatskiy et al., 2020; shutty & wierzynski, 2020; finzi et al., 2021) etc.
¹ some operations such as max pooling and boundary effects of the convolutions technically break strict equivariance, as well as the final densely connected layers.
a quite general theory of equivariant/invariant networks has also emerged. kondor & trivedi (2018) gave a complete description of gcnns for scalar fields on homogeneous spaces of compact groups. this was generalized further to cover the steerable case in (cohen et al., 2019b) and to general gauge fields in (cohen et al., 2019a; weiler et al., 2021). this theory also includes universal approximation results (yarotsky, 2018; keriven & peyré, 2019; sannai et al., 2019b; maron et al., 2019b; segol & lipman, 2020; ravanbakhsh, 2020). nevertheless, while benefits of equivariance/invariance in terms of improved sample complexity and ease of training are quoted frequently, a firm theoretical understanding is still largely missing. some results however do exist, going back to (shawe-taylor, 1991). abu-mostafa (1993) made the argument that restricting a classifier to be invariant can not increase its vc dimension. sokolic et al. (2017) extend this idea to derive generalization bounds for invariant classifiers, while sannai et al.
(2019a) do so specifically working with the permutation group. elesedy & zaidi (2021) show a strict generalization benefit for equivariant linear models, showing that the generalization gap between a least squares model and its equivariant version depends on the dimension of the space of anti-symmetric linear maps. some benefits of related ideas such as data augmentation and invariant averaging are formally shown in (lyle et al., 2020; chen et al., 2020). here we focus on the limits to expressivity enforced by equivariance.
2 problem formulation
suppose x abstractly represents an object and let r(x) ∈ rn be some feature map of x to an n-dimensional space (such as an intermediate layer of a deep neural network). we consider transformations of this object, such that they form a group in the algebraic sense of the word. we denote the abstract transformation of x by element g ∈ g as gx. groups g may be represented by invertible matrices, which act on a vector space v (which themselves form the group gl(v) of invertible linear transformations on v). we are interested in feature maps r which satisfy the following group equivariance condition: r(gx) = π(g)r(x), where π : g → gl(rn) is a linear representation of g which acts on feature map r(x). note that many representations of g are possible, including the trivial representation: π(g) = i for all g. we are interested in perceptual object manifolds generated by the actions of g. each of the p manifolds can be written as a set of points {π(g)rµ : g ∈ g} where µ ∈ [p] ≡ {1, 2, . . . , p}; that is, these manifolds are orbits of the point rµ ≡ r(xµ) under the action of π. we will refer to such manifolds as π-manifolds.² each of these π-manifolds represents a single object under the transformation encoded by π; hence, each of the points in a π-manifold is assigned the same class label. a perceptron endowed with a set of linear readout weights w will attempt to determine the correct class of every point in every manifold.
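the equivariance condition r(gx) = π(g)r(x) stated above can be checked concretely. the sketch below (my own illustration, not from the paper) takes g to be the cyclic group acting on vectors by circular shift; circular cross-correlation with an arbitrary filter is then an equivariant feature map, with π(g) the same shift acting on feature space:

```python
import numpy as np

def shift(x, g):
    """Cyclic group action: circularly shift a vector by g positions."""
    return np.roll(x, g)

def r(x, w):
    """Circular cross-correlation with filter w -- a shift-equivariant
    feature map, analogous to a single convolutional layer."""
    n = len(x)
    return np.array([np.dot(np.roll(x, -i), w) for i in range(n)])

rng = np.random.default_rng(0)
x, w = rng.normal(size=8), rng.normal(size=8)
for g in range(8):
    # equivariance: r(g.x) = pi(g) r(x), with pi(g) the shift itself
    assert np.allclose(r(shift(x, g), w), shift(r(x, w), g))
```

the orbit {π(g)r(x) : g ∈ g} of a single input under all shifts is exactly a π-manifold in the sense defined above.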
the condition for realizing (i.e. linearly separating) the dichotomy {yµ}µ can be written as yµ w⊤π(g)rµ > 0 for all g ∈ g and µ ∈ [p], where yµ = +1 if the µth manifold belongs to the first class and yµ = −1 if the µth manifold belongs to the second class. the perceptron capacity is the fraction of dichotomies that can be linearly separated; that is, separated by a hyperplane. for concreteness, one might imagine that each of the rµ is the neural representation for an image of a dog (if yµ = +1) or of a cat (if yµ = −1). the action π(g) could, for instance, correspond to the image shifting to the left or right, where the size of the shift is given by g. different representations of even the same group can have different coding properties, an important point for investigating biological circuits and one that we leverage to construct a new gcnn architecture in section 5.
perceptron capacity of group-generated manifolds
2
[ 108.299, 156.2966768, 440.3236016, 168.2518768 ]
zEn1BhaNYsC.pdf
2,023
1
minimax optimal kernel operator learning via multilevel training jikai jin school of mathematical sciences peking university beijing, china jkjin@pku.edu.cn yiping lu institute for computational & mathematical engineering stanford university stanford, ca, us yplu@stanford.edu jose blanchet management science and engineering stanford university stanford, ca, us jose.blanchet@stanford.edu lexing ying department of mathematics stanford university stanford, ca, us lexing@stanford.edu abstract learning mappings between infinite-dimensional function spaces has achieved empirical success in many disciplines of machine learning, including generative modeling, functional data analysis, causal inference, and multi-agent reinforcement learning. in this paper, we study the statistical limit of learning a hilbert-schmidt operator between two infinite-dimensional sobolev reproducing kernel hilbert spaces (rkhss). we establish the information-theoretic lower bound in terms of the sobolev hilbert-schmidt norm and show that a regularization that learns the spectral components below the bias contour and ignores the ones above the variance contour can achieve the optimal learning rate. at the same time, the spectral components between the bias and variance contours give us flexibility in designing computationally feasible machine learning algorithms. based on this observation, we develop a multilevel kernel operator learning algorithm that is optimal when learning linear operators between infinite-dimensional function spaces. introduction
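the "keep spectral components below a cutoff, ignore the rest" idea in the abstract can be illustrated with a finite-dimensional stand-in (entirely my own toy construction, not the paper's multilevel algorithm): estimate a linear operator by least squares, then discard the singular components above a truncation level:

```python
import numpy as np

rng = np.random.default_rng(0)

def truncated_operator_estimate(X, Y, k):
    """Least-squares estimate of the operator T mapping X -> Y, followed
    by a rank-k spectral truncation (keep the k leading singular components)."""
    T_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U, s, Vt = np.linalg.svd(T_hat, full_matrices=False)
    s[k:] = 0.0  # ignore components above the cutoff
    return U @ np.diag(s) @ Vt

# toy problem: Y = X @ T + noise, with T of low effective rank
d = 20
T = rng.normal(size=(d, 3)) @ rng.normal(size=(3, d))  # rank-3 operator
X = rng.normal(size=(500, d))
Y = X @ T + 0.5 * rng.normal(size=(500, d))
T_k = truncated_operator_estimate(X, Y, k=3)
```

discarding the trailing components removes directions that are dominated by noise, which is the finite-dimensional analogue of ignoring the spectral components above the variance contour.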
0
[ 126.82956, 288.1406768, 205.9888518, 300.0958768 ]
a4COps0uokg.pdf
2,023
1
user-interactive offline reinforcement learning phillip swazinna siemens & tu munich munich, germany swazinna@in.tum.de steffen udluft siemens technology munich, germany steffen.udluft@siemens.com thomas runkler siemens & tu munich munich, germany thomas.runkler@siemens.com abstract offline reinforcement learning algorithms still lack trust in practice due to the risk that the learned policy performs worse than the original policy that generated the dataset or behaves in an unexpected way that is unfamiliar to the user. at the same time, offline rl algorithms are not able to tune their most important hyperparameter - the proximity of the learned policy to the original policy. we propose an algorithm that allows the user to tune this hyperparameter at runtime, thereby addressing both of the above mentioned issues simultaneously. this allows users to start with the original behavior and grant successively greater deviation, as well as stopping at any time when the policy deteriorates or the behavior is too far from the familiar one. introduction recently, offline reinforcement learning (rl) methods have shown that it is possible to learn effective policies from a static pre-collected dataset instead of directly interacting with the environment (laroche et al., 2019; fujimoto et al., 2019; yu et al., 2020; swazinna et al., 2021b). since direct interaction is in practice usually very costly, these techniques have alleviated a large obstacle on the path of applying reinforcement learning techniques in real world problems. a major issue that these algorithms still face is tuning their most important hyperparameter: the proximity to the original policy. virtually all algorithms tackling the offline setting have such a hyperparameter, and it is obviously hard to tune, since no interaction with the real environment is permitted until final deployment. 
practitioners thus risk being overly conservative (resulting in no improvement) or overly progressive (risking worse performing policies) in their choice. additionally, one of the arguably largest obstacles on the path to deployment of rl trained policies in most industrial control problems is that (offline) rl algorithms ignore the presence of domain experts, who can be seen as users of the final product - the policy. instead, most algorithms today can be seen as trying to make human practitioners obsolete. we argue that it is important to provide these users with a utility - something that makes them want to use rl solutions. other research fields, such as machine learning for medical diagnoses, have already established the idea that domain experts are crucially important to solve the task and complement human users in various ways babbar et al. (2022); cai et al. (2019); de-arteaga et al. (2021); fard & pineau (2011); tang et al. (2020). we see our work in line with these and other researchers (shneiderman, 2020; schmidt et al., 2021), who suggest that the next generation of ai systems needs to adopt a user-centered approach and develop systems that behave more like an intelligent tool, combining both high levels of human control and high levels of automation. we seek to develop an offline rl method that does just that. furthermore, we see giving control to the user as a requirement that may in the future be much more enforced when regulations regarding ai systems become more strict: the eu’s high level expert group on ai has already recognized “human autonomy and oversight” as a key requirement for trustworthy ai in their ethics guidelines for trustworthy ai (smuha, 2019). in the future, solutions found with rl might thus be required by law to exhibit features that enable more human control. in this paper, we thus propose a simple method to provide users with more control over how an offline rl policy will behave after deployment. 
the algorithm that we develop trains a conditional policy that can, after training, adapt the trade-off between proximity to the data generating policy on the one hand and estimated performance on the other. close proximity to a known solution naturally facilitates trust, enabling conservative users to choose behavior they are more inclined to confidently deploy. that way, users may benefit from the automation provided by offline rl (users don’t need to handcraft controllers, possibly even interactively choose actions) yet still remain in control as they can e.g. make the policy move to a more conservative or more liberal trade-off. we show how such an algorithm can be designed, as well as compare its performance with a variety of offline rl baselines and show that a user can achieve state of the art performance with it. furthermore, we show that our method has advantages over simpler approaches like training many policies with diverse hyperparameters. finally, since we train a policy conditional on one of the most important hyperparameters in offline rl, we show how a user could potentially use it to tune this hyperparameter. in many cases of our evaluations, this works almost regret-free, since we observe that the performance as a function of the hyperparameter is mostly a smooth function. related work offline rl recently, a plethora of methods has been published that learn policies from static datasets. early works, such as fqi and nfq (ernst et al., 2005; riedmiller, 2005), were termed batch instead of offline since they didn’t explicitly address the issue that the data collection cannot be influenced. instead, similarly to other batch methods (depeweg et al., 2016; hein et al., 2018; kaiser et al., 2020), they assumed a uniform random data collection that made generalization to the real environment simpler.
among the first to explicitly address the limitations in the offline setting under unknown data collection were spibb(-dqn) (laroche et al., 2019) in the discrete and bcq (fujimoto et al., 2019) in the continuous actions case. many works with different focuses followed: some treat discrete mdps and come with provable bounds on the performance at least with a certain probability thomas et al. (2015); nadjahi et al. (2019), however many more focused on the continuous setting: emaq, bear, brac, abm, various dice based methods, rem, pebl, psec-td-0, cql, iql, bail, crr, coil, o-raac, opal, td3+bc, and rvs (ghasemipour et al., 2021; kumar et al., 2019; wu et al., 2019; siegel et al., 2020; nachum et al., 2019; zhang et al., 2020; agarwal et al., 2020; smit et al., 2021; pavse et al., 2020; kumar et al., 2020; kostrikov et al., 2021; chen et al., 2019; wang et al., 2020; liu et al., 2021; urpí et al., 2021; ajay et al., 2020; brandfonbrener et al., 2021; emmons et al., 2021) are just a few of the proposed model-free methods over the last few years. additionally, many model-based as well as hybrid approaches have been proposed, such as mopo, morel, moose, combo, rambo, and wsbc (yu et al., 2020; kidambi et al., 2020; swazinna et al., 2021b; yu et al., 2021; rigter et al., 2022; swazinna et al., 2021a). even approaches that train policies purely supervised, by conditioning on performance, have been proposed (peng et al., 2019; emmons et al., 2021; chen et al., 2021). model based algorithms more often use model uncertainty, while model-free methods use a more direct behavior regularization approach. offline policy evaluation or offline hyperparameter selection is concerned with evaluating (or at least ranking) policies that have been found by an offline rl algorithm, in order to either pick the best performing one or to tune hyperparameters. 
often, dynamics models are used to evaluate policies found in model-free algorithms, however also model-free evaluation methods exist (hans et al., 2011; paine et al., 2020; konyushova et al., 2021; zhang et al., 2021b; fu et al., 2021). unfortunately, but also intuitively, this problem is rather hard: if any method could assess policy performance more accurately than the mechanism used inside the offline algorithm for training, it should be used for training instead of the previously employed method. also, the general dilemma of not knowing in which parts of the state-action space we know enough to optimize behavior seems to always remain. works such as zhang et al. (2021a); lu et al. (2021) become applicable if limited online evaluations are allowed, making hyperparameter tuning much more viable. offline rl with online adaptation other works propose an online learning phase that follows after offline learning has concluded. in the most basic form, kurenkov & kolesnikov (2021) introduce an online evaluation budget that lets them find the best set of hyperparameters for an offline rl algorithm given limited online evaluation resources. in an effort to minimize such a budget, yang et al. (2021) train a set of policies spanning a diverse set of uncertainty-performance trade-offs. ma et al. (2021) propose a conservative adaptive penalty, that penalizes unknown behavior more during the beginning and less during the end of training, leading to safer policies during training. in pong et al. (2021); nair et al. (2020); zhao et al. (2021) methods for effective online learning phases that follow the offline learning phase are proposed. in contrast to these methods, we are not aiming for a fully automated solution. instead, we want to provide the user with a valuable tool after training, so we do not propose an actual online phase, also since practitioners may find any performance deterioration unacceptable.
to the best of our knowledge, no prior offline rl method produces policies that remain adaptable after deployment without any further training. lion: learning in interactive offline environments in this work, we address two dilemmas of the offline rl setting: first and foremost, we would like to provide the user with a high level control option in order to influence the behavior of the policy, since we argue that the user is crucially important for solving the task and not to be made obsolete by an algorithm. further, we address the issue that in offline rl, the correct hyperparameter controlling the trade-off between conservatism and performance is unknown and can hardly be tuned. by training a policy conditioned on the proximity hyperparameter, we aim to enable the user to find a good trade-off hyperparameter. code will be made available at https://github.com/pswazinna/lion. as mentioned, behavior cloning will most likely yield the most trustworthy solution due to its familiarity, however the solution is of very limited use since it does not outperform the previous one. offline rl on the other hand is problematic since we cannot simply evaluate policy candidates on the real system and offline policy evaluation is still an open problem (hans et al., 2011; paine et al., 2020; konyushova et al., 2021; zhang et al., 2021b; fu et al., 2021). in the following, we thus propose a solution that moves the hyperparameter choice from training to deployment time, enabling the user to interactively find the desired trade-off between bc and offline optimization. a user may then slowly move from conservative towards better solutions. training during training time, we optimize three components: a model of the original policy βϕ(s), an ensemble of transition dynamics models {f^i_ψi(s, a) | i ∈ 0, . . . , n − 1}, as well as the user adaptive policy πθ(s, λ). the dynamics models {f^i_ψi} as well as the original policy β are trained in isolation before the actual policy training starts.
both π and β are always simple feedforward neural networks which map states directly to actions in a deterministic fashion (practitioners likely favor deterministic policies over stochastic ones due to trust issues). β is trained to simply imitate the behavior present in the dataset by minimizing the mean squared distance to the observed actions:
l(ϕ) = E_(st,at)∼d [at − βϕ(st)]²
depending on the environment, the transition models are either also feedforward networks or simple recurrent networks with a single recurrent layer. the recurrent networks build their hidden state over g steps and are then trained to predict a window of size f into the future (similarly to (hein et al., 2017b)), while the feedforward dynamics simply predict single step transitions. both use mean squared error as loss:
l(ψi) = E_(st,at,st+1)∼d [st+1 − f^i_ψi(st, at)]²
l(ψi) = E_t∼d [st+g+f+1 − f^i_ψi(st, at, . . . , st+g, at+g, . . . , ŝt+g+f, at+g+f)]²
where the ŝ terms are the model predictions that are fed back to be used as input again. for simplicity, in this notation we assume the reward to be part of the state. also we do not explicitly show the recurrence and carrying over of the hidden states. after having trained the two components βϕ(s) and {f^i_ψi(s, a)}, we can then move on to policy training. similarly to moose and wsbc, we optimize the policy πθ by sampling start states from d and performing virtual rollouts throughout the dynamics ensemble using the current policy candidate. in every step, the ensemble predicts the reward as the minimum among its members and the next state that goes with it. at the same time we collect the mean squared differences between the actions that πθ took in the rollout and the one that βϕ would have taken. the loss is then computed as a weighted sum of the two components.
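the two supervised losses above can be sketched in a few lines. the toy linear models and synthetic dataset below are assumptions for illustration only, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical dataset D of transitions (s_t, a_t, s_{t+1})
S = rng.normal(size=(256, 4))                      # states
A = np.tanh(S @ rng.normal(size=(4, 2)))           # actions from some behavior policy
S_next = S + 0.1 * (A @ rng.normal(size=(2, 4)))   # next states (linear in [s, a])

def bc_loss(W_beta):
    """Eq. 1: mean squared distance between dataset actions and beta(s) = s @ W_beta."""
    return np.mean((A - S @ W_beta) ** 2)

def dynamics_loss(W_f):
    """Eq. 2, feedforward case: one-step prediction error of f(s, a) = [s, a] @ W_f."""
    pred = np.concatenate([S, A], axis=1) @ W_f
    return np.mean((S_next - pred) ** 2)

# fit both models by least squares; the toy dynamics are exactly linear in
# [s, a], so that fit is near-perfect, while the behavior policy is nonlinear
# and a linear clone leaves residual error
W_beta, *_ = np.linalg.lstsq(S, A, rcond=None)
W_f, *_ = np.linalg.lstsq(np.concatenate([S, A], axis=1), S_next, rcond=None)
```

in the paper both components are neural networks trained by gradient descent; the least-squares fit here simply stands in for minimizing the same mean-squared-error objectives.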
crucially, we sample the weighting factor λ randomly and pass it to the policy as an additional input: the policy thus needs to learn all behaviors ranging from pure behavior cloning to entirely free optimization:

l(θ) = −E Σ_t γ^t [λ e(st, at) − (1 − λ) p(at)],  at = πθ(st, λ)  (3)

where we sample λ between 0 and 1, e(st, at) = min{r(f^i_ψi(st, at)) | i ∈ 0, ..., n − 1} denotes the output of the ensemble prediction for the reward (we omit explicit notation of recurrence for simplicity), and p(at) = [βϕ(st) − at]² denotes the penalty based on the mean squared distance between the original policy's action and the action proposed by πθ. see fig. 1 for a visualization of our proposed training procedure.

figure 1: schematic of lion policy training. during policy training (eq. 3), only πθ (in green) is adapted, while the original policy model βϕ (orange) and the dynamics ensemble {f^i_ψi} (blue) are already trained and remain unchanged. from left to right, we first sample a start state (black) from the dataset and a λ value from its distribution. then we let the original policy (orange) as well as the currently trained policy (green) predict actions; note that the newly trained policy is conditioned on λ. both actions are then compared to calculate the penalty for that timestep (red). the action from the currently trained policy is also fed into the trained transition model (blue) together with the current state (black/blue), to get the reward for that timestep (yellow) as well as the next state (blue). this procedure is repeated until the horizon of the episode is reached. the rewards and penalties are then summed up and weighted by λ to be used as a loss function for policy training.

we motivate our purely model-based approach (no value function involved) by the fact that we have fewer moving parts: our ensemble can be kept fixed once it is trained, while a value function would have to be learned jointly with πθ, which is in our case more complex than usual. see experimental results in fig.
10 for a brief attempt at making our approach work in the model-free domain. in addition to eq. 3, we need to penalize divergence not only from the learned model of the original policy during virtual rollouts, but also from the actual actions in the dataset at λ = 0. if this is not done, the trained policy π sticks to the (also trained) original policy β during the rollouts, but those rollouts visit states that did not appear in the original dataset, enabling π to actually diverge from the true trajectory distribution. we thus penalize both rollout and data divergence at λ = 0:

l(θ) = −E Σ_t γ^t [λ e(st, at) − (1 − λ) p(at)] + η E_{(s, a)∼d} [a − πθ(s, λ = 0)]²  (4)

where η controls the penalty weight for not following the dataset actions at λ = 0; see appendix a for more details. furthermore, we normalize states to have zero mean and unit variance during every forward pass through a dynamics model or the policy, using the mean and standard deviation observed in the dataset. we also normalize the rewards provided by the ensemble, rt = e(st, at), so that they live in the same magnitude as the action penalties (we assume actions to be in [−1, 1]^d, so that the penalty can be in [0, 4]^d, where d is the action dimensionality). intuitively, one might choose to sample λ uniformly between zero and one; instead, we choose a beta distribution with parameters (0.1, 0.1), which could be called bathtub-shaped. similarly to (seo et al., 2021), we find that it is important to put emphasis on the edge cases, so that the extreme behaviors are properly learned, rather than putting equal probability mass on every value in the [0, 1] range. the interpolation between the edges seems to be easier and thus requires fewer samples. fig. 11 shows policy results for different lambda distributions during training.
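the bathtub shape of beta(0.1, 0.1) can be checked directly; the small numpy snippet below (ours, not from the paper) verifies that most of the probability mass sits near the two extremes rather than in the middle of [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(42)
lam = rng.beta(0.1, 0.1, size=10_000)        # "bathtub"-shaped lambda samples

edge = np.mean((lam < 0.1) | (lam > 0.9))    # mass near pure bc / pure optimization
middle = np.mean((lam > 0.4) & (lam < 0.6))  # mass around the center

assert edge > 0.5          # most samples are extreme trade-offs
assert edge > middle       # far more emphasis on the edges than the center
```

this matches the stated motivation: the extreme behaviors (λ = 0 and λ = 1) are the hardest to learn, while interpolating between them needs fewer samples.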
deployment: at inference time, the trained policy can at any point be influenced by the user that would otherwise be in control of the system, by choosing the λ that is passed to the policy together with the current system state to obtain an action:

at = πθ(st, λ),  λ ∈ user(st).  (5)

he or she may choose to be conservative or adventurous, observe the feedback, and adjust the proximity parameter of the policy accordingly. at this point, any disliked behavior can immediately be corrected without any time loss due to re-training and deploying a new policy, even if the user's specific preferences were not known at training time. we propose to initially start with λ = 0 during deployment, in order to check whether the policy is actually able to reproduce the original policy and to gain the user's trust in the found solution. then, depending on how critical failures are and how much time is at hand, λ may be increased in small steps for as long as the user is still comfortable with the observed behavior. figure 3 shows an example of how the policy behavior changes over the course of λ. once the performance stops increasing or the user is otherwise not satisfied, we can immediately return to the last satisfying λ value.

algorithm 1 lion (training)
1: require dataset d = {τi}, randomly initialized parameters θ, ϕ, ψ, lambda distribution parameters beta(a, b), horizon h, number of policy updates u
2: // dynamics and original policy models can be trained supervised and independently of other components
3: train original policy model βϕ using d and equation 1
4: train dynamics models f^i_ψi with d and equation 2
5: for j in 1..u do
6:   sample start states s0 ∼ d
7:   sample lambda values λ ∼ beta(a, b)
8:   initialize policy loss l(θ) = 0
9:   for t in 0..h do
10:     calculate policy actions at = πθ(st, λ)
11:     calculate behavioral actions bt = βϕ(st)
12:     calculate penalty term p(at) = [bt − at]²
13:     rt, st+1 = f^i_ψi(st, at) s.t. i = arg min_i {r(f^i_ψi(st, at))}
14:     l(θ) += −γ^t [λ rt − (1 − λ) p(at)]
15:   update πθ using gradient ∇θ l(θ) and adam

experiments: at first, we intuitively showcase lion in a simple 2d-world in order to get an understanding of how the policy changes its behavior based on λ. afterwards, we move to a more serious test, evaluating our algorithm on the 16 industrial benchmark (ib) datasets (hein et al., 2017a; swazinna et al., 2021b). we aim to answer the following questions:
• do lion policies behave as expected, i.e. do they reproduce the original policy at λ = 0 and deviate more and more from it with increased freedom to optimize for return?
• do lion policies, at least in parts of the spanned λ space, perform better than or similarly well to state-of-the-art offline rl algorithms?
• is it easy for practitioners to find the λ values that maximize return? that is, are the performance curves smooth, or do they have multiple local minima and maxima?
• is it possible for users to exploit the λ regularization at runtime to restrict the policy to only exhibit behavior he or she is comfortable with?

2d-world: we evaluate the lion approach on a simplistic 2d benchmark. the states are x and y coordinates in the environment, and rewards are given based on the position of the agent, following a gaussian distribution around a fixed point in the state space, i.e. r(st) ∝ e^{−0.5((st−µ)/σ)²}. in this example we set µ = (3, 6)ᵀ and σ = (1.5, 1.5)ᵀ. a visualization of the reward distribution can be seen in fig. 2 (b). we collect data from the environment using a simple policy that moves either to position (2.5, 2.5)ᵀ or to (7.5, 7.5)ᵀ, depending on which is closer to the randomly drawn start state (shown in fig. 2 (a)), adding ε = 10% random actions as exploration.
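the virtual-rollout loop at the heart of policy training (sample a start state, act with πθ(·, λ), take the pessimistic minimum-reward ensemble member, accumulate the λ-weighted loss of eq. 3) can be sketched as follows. models, shapes, and names here are toy stand-ins, not the paper's code:

```python
import numpy as np

def virtual_rollout_loss(pi, beta, ensemble, s0, lam, horizon=5, gamma=0.99):
    """sketch of the inner rollout loop: each ensemble member maps
    (state, action) -> (reward, next_state); we keep the pessimistic
    (minimum-reward) member's prediction in every step."""
    s, loss = s0, 0.0
    for t in range(horizon):
        a = pi(s, lam)
        p = np.sum((beta(s) - a) ** 2)             # proximity penalty p(a_t)
        preds = [f(s, a) for f in ensemble]
        i = int(np.argmin([r for r, _ in preds]))  # pessimistic member
        r, s = preds[i]
        loss += -gamma ** t * (lam * r - (1 - lam) * p)
    return loss

# toy setup: 1-d state, two ensemble members with different optimism
ensemble = [lambda s, a: (-np.sum(s ** 2), s + a),
            lambda s, a: (-0.5 * np.sum(s ** 2), s + a)]
pi = lambda s, lam: -0.1 * s            # candidate policy: move toward origin
beta = lambda s: np.zeros_like(s)       # "original" policy: do nothing
assert virtual_rollout_loss(pi, beta, ensemble, np.array([1.0]), lam=1.0) > 0.0
```

in the real algorithm this loss would be backpropagated through the (differentiable) ensemble into θ and minimized with adam.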
then we follow the outlined training procedure, training a transition model, an original policy model, and finally a new policy that can at runtime change its behavior based on the desired proximity to the original policy. fig. 3 shows policy maps for λ ∈ {0.0, 0.6, 0.65, 0.7, 0.85, 1.0}, moving from simply imitating the original policy, over different mixtures, to pure return optimization. since the task is easy and accurately modeled by the dynamics ensemble, one may give absolute freedom to the policy and optimize for return only. as can be seen, the policy moves quickly to the center of the reward distribution for λ = 1.

figure 2: (a) original policy for data collection - color represents action direction. (b) reward distribution in the 2d environment - color represents reward value.

figure 3: policy maps for increasing values of λ in the 2d environment - colors represent action direction. initially, the policy simply imitates the original policy (see fig. 2 (a)). with increased freedom, the policy moves less to the upper-right and more to the bottom-left goal state of the original policy, since that one is closer to the high rewards. then, the policy slowly moves its goal upwards on the y-axis until it is approximately at the center of the reward distribution.

since enough data was available (1,000 interactions) and the environment is so simple, the models capture the true dynamics well and the optimal solution is found at λ = 1. this is however not necessarily the case if not enough, or not the right, data was collected (e.g. due to a suboptimal original policy - see fig. 4).

industrial benchmark datasets: we evaluate lion on the industrial benchmark datasets initially proposed in (swazinna et al., 2021b). the 16 datasets are created with three different baseline original policies (optimized, mediocre, bad) mixed with varying degrees of exploration. the optimized baseline is an rl-trained policy and simulates an expert practitioner.
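the gaussian-shaped 2d-world reward described above can be sketched directly (we drop the normalizing constant for readability; function and variable names are ours):

```python
import numpy as np

MU = np.array([3.0, 6.0])       # center of the reward distribution
SIGMA = np.array([1.5, 1.5])    # per-dimension spread

def reward(s):
    """gaussian-shaped reward around MU, as in the 2d-world (unnormalized)."""
    return float(np.exp(-0.5 * np.sum(((s - MU) / SIGMA) ** 2)))

assert reward(MU) == 1.0                                    # peak at the center
assert reward(np.array([2.5, 2.5])) < reward(MU)            # data-collection goal is off-peak
```

this also illustrates why pure return optimization (λ = 1) steers the agent toward (3, 6)ᵀ rather than the goal points of the data-collecting policy.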
the mediocre baseline moves the system back and forth around a fixed point that is rather well behaved, while the bad baseline steers to a point on the edge of the state space in which rewards are deliberately bad. each baseline is combined with ε-greedy exploration for ε ∈ {0.0, 0.2, 0.4, 0.6, 0.8, 1.0} to collect a dataset (making the ε = 0.0 datasets extreme cases of the narrow distribution problem). together, they constitute a diverse set of offline rl settings. the datasets contain 100,000 interactions collected by the respective baseline policy combined with the ε-greedy exploration. the ib is a high-dimensional and partially observable environment: if access to the full markov state were provided, it would contain 20 state variables. since only six of those are observable, and the relationships to the other variables and their subdynamics are complex and feature heavily delayed components, prior work (hein et al., 2017b) has stated that up to 30 past time steps are needed to form a state that can hope to recover the true dynamics, so the state can be considered 180-dimensional. in our case we thus set the number of history steps to g = 30. the action space is 3-dimensional. the benchmark is not supposed to mimic a single industrial application, but rather to exhibit common issues observable in many different applications (partial observability, delayed rewards, multimodal and heteroskedastic noise, ...). the reward is a weighted combination of the observable variables fatigue and consumption, which are conflicting (they usually move in opposite directions and need to be traded off) and are influenced by various unobservable variables. as in prior work (hein et al., 2018; depeweg et al., 2016; swazinna et al., 2021b), we optimize for a horizon of 100. the datasets are available at https://github.com/siemens/industrialbenchmark/tree/offline_datasets/datasets under the apache license 2.0.
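the 180-dimensional state mentioned above corresponds to stacking g = 30 steps of the six observables into one vector; a minimal sketch (the layout and padding are hypothetical, the actual benchmark wrapper may differ):

```python
import numpy as np

def history_state(obs, t, g=30):
    """stack the last g observation vectors into one approximately markov state.
    pads the start of an episode by repeating the first observation."""
    window = obs[max(0, t - g + 1): t + 1]
    if len(window) < g:
        pad = np.repeat(window[:1], g - len(window), axis=0)
        window = np.concatenate([pad, window], axis=0)
    return window.reshape(-1)

obs = np.random.default_rng(0).normal(size=(100, 6))  # 6 observables per step
assert history_state(obs, t=50, g=30).shape == (180,) # 30 steps x 6 observables
```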
figure 4: evaluation performance (top portion of each graph) and distance to the original policy (lower portion of each graph) of the lion approach over the chosen λ hyperparameter. various state-of-the-art baselines are added as dashed lines with their standard set of hyperparameters (results from (swazinna et al., 2022)). even though the baselines all exhibit some hyperparameter that controls the distance to the original policy, all are implemented differently and we can neither map them to a corresponding lambda value of our algorithm nor change their behavior at runtime, which is why we display them as dashed lines over the entire λ-spectrum. see fig. 12 for the 100% exploration datasets.

baselines: we compare the performance of lion with various state-of-the-art offline rl baselines:
• bear, brac, bcq, cql and td3+bc (kumar et al., 2019; wu et al., 2019; fujimoto et al., 2019; kumar et al., 2020; fujimoto & gu, 2021) are model-free algorithms. they mostly regularize the policy by minimizing a divergence to the original policy; bcq samples only likely actions, and cql searches for a q-function that lower bounds the true one.
• moose and wsbc (swazinna et al., 2021a;b) are purely model-based algorithms that optimize the policy via virtual trajectories through the learned transition model. moose penalizes the reconstruction loss of actions under the original policy (learned by an autoencoder), while wsbc constrains the policy directly in weight space. from the policy-training perspective, moose is the closest to our lion approach.
• mopo and morel (yu et al., 2020; kidambi et al., 2020) are hybrid methods that learn a transition model as well as a value function. both use the models to collect additional data and regularize the policy by means of model uncertainty: mopo penalizes uncertainty directly, while morel simply stops episodes in which future states become too unreliable. morel uses model disagreement to quantify uncertainty, while mopo uses the gaussian outputs of its models.
evaluation: in order to test whether the trained lion policies are able to provide state-of-the-art performance anywhere in the λ range, we evaluate them for λ from 0 to 1 in many small steps. figs. 4 and 12 show results for the 16 ib datasets. we find that the performance curves do not exhibit many local optima. rather, there is usually a single maximum, before which the performance is rising and after which the performance is strictly dropping. this is a very desirable characteristic for the user-interactive setting, as it enables users to easily find the best-performing λ value for the policy. in 13 out of 16 datasets, users can thus match or outperform the current state-of-the-art method on that dataset, and achieve close to on-par performance on the remaining three. the distance-to-original-policy curves are even monotonically increasing from start to finish, making it possible for the practitioner to find the best solution he or she is still comfortable with in terms of distance to the familiar behavior.

discrete baseline: a simpler approach might be to train an existing offline rl algorithm for many trade-offs in advance, to provide at least discrete options. two downsides are obvious: (a) we would not be able to handle the continuous case, i.e. when a user wants a trade-off that lies between two discrete policies, and (b) the computational cost increases linearly with the number of policies trained. we show that a potentially even bigger issue exists in figure 5: when we train a discrete collection of policies with different hyperparameters, completely independently of each other, they often exhibit wildly different behaviors even when the change in hyperparameter was small. lion instead expresses the collection as a single policy network, training all trade-offs jointly and thus forcing them to smoothly interpolate among each other.
this helps to make the performance a smooth function of the hyperparameter (although this need not always be the case) and results in a performance landscape that is much easier to navigate for a user searching for a good trade-off.

figure 5: prior offline rl algorithms like mopo do not behave consistently when trained across a range of penalizing hyperparameters.

return conditioning baseline: another interesting line of work trains policies conditioned on the return to go, such as rvs (emmons et al., 2021) (reinforcement learning via supervised learning) or dt (chen et al., 2021) (decision transformer). a key advantage of these methods is their simplicity: they require neither a transition model nor a value function, just a policy suffices, and the learning can be performed in an entirely supervised fashion. the resulting policies could be interpreted in a similar way as lion policies: conditioning on returns close to the original performance would result in the original behavior, while choosing to condition on higher returns may lead to improved performance if the extrapolation works well. in fig. 6 we report results of the rvs algorithm on the same datasets as the discrete baseline. the returns in the datasets do not exhibit a lot of variance, so it is unsurprising that the approach did not succeed in learning many different behaviors.

figure 6: return-conditioned policies did not learn many different behaviors on the ib datasets.

finding a suitable λ: we would like to emphasize that we neither want to optimize all offline hyperparameters with our solution, nor are we interested in a fully automated solution. users may thus adopt arbitrary strategies for finding their personal trade-off of preference. we will however provide a conservative example strategy: the operator starts with the most conservative value available and then moves in small but constant steps towards more freedom.
whenever the performance drops below the previous best or the baseline performance, he immediately stops and uses the last λ before that. table 1 summarizes how this strategy would perform.

table 1: final hyperparameter values and achieved returns of the simple step-wise strategy for rvs, lion, and mopo on the bad-0.4, mediocre-0.0, and optimized-0.6 datasets. if a user adopts the simple strategy of moving in small steps (0.05 for lion, 0.1 for mopo since its range is larger, 10.0 / 1.0 for rvs) from conservative towards better solutions, immediately stopping when a performance drop is observed, lion finds much better solutions due to the consistent interpolation between trade-offs. note that in mopo we start with a large λ = 2.5 (1.0 is the default) since there it controls the penalty, while we start with λ = 0 in lion, where it controls the return.

discussion & conclusion: in this work we presented a novel offline rl approach that, to the best of our knowledge, is the first to let the user adapt the policy behavior after training is finished. we let the user tune the behavior by allowing him to choose the desired proximity to the original policy, in an attempt to solve two issues: (1) the problem that practitioners cannot tune the trade-off hyperparameter in offline rl, and (2) the general issue that users have no high-level control option when using rl policies (they might even have individual preferences with regard to the behavior of a policy that go beyond pure performance). we find that lion effectively provides a high-level control option to the user, while still profiting from a high level of automation. it furthermore takes away much of the risk that users normally assume in offline rl, since deployments can always start with a bc policy at λ = 0 before moving to better options. while behavior cloning does not have to work in general, we did not experience any issues with it in our experiments, and it should be easier than performing rl since it can be done entirely in a supervised fashion.
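the conservative strategy described above can be sketched as a simple loop (the `evaluate` callback and all names are hypothetical; in practice each evaluation would be a real rollout on the system):

```python
import numpy as np

def find_lambda(evaluate, step=0.05, baseline=-np.inf):
    """increase lambda in small steps starting from 0 and stop as soon as
    performance drops below the best seen so far (or the baseline)."""
    best_lam, best_ret = 0.0, evaluate(0.0)
    lam = step
    while lam <= 1.0 + 1e-9:
        ret = evaluate(lam)
        if ret < max(best_ret, baseline):
            break                         # return to the last satisfying lambda
        best_lam, best_ret = lam, ret
        lam += step
    return best_lam, best_ret

# toy performance curve with a single maximum around lambda = 0.6,
# mimicking the unimodal curves observed on the ib datasets
curve = lambda lam: -((lam - 0.6) ** 2)
lam_star, ret_star = find_lambda(curve, step=0.05)
assert abs(lam_star - 0.6) < 0.06
```

because the lion performance curves are usually unimodal, this greedy stopping rule lands close to the global maximum; on curves with several local maxima it would only find the first one.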
given that bc works, deployments can thus start with minimal risk. with prior offline algorithms, users ran the risk that the algorithm did not produce a satisfying policy on the particular dataset they chose. e.g., wsbc produces state-of-the-art results on many of the ib datasets, however on mediocre-0.6 it produces a catastrophic −243 (the original performance is −75). similarly, cql is the prior best method on optimized-0.8, yet the same method achieves a performance of −292 on bad-0.2 (moose, mopo, and wsbc all score between −110 and −130). due to the smoothness of the interpolation of behaviors in lion, practitioners should be able to use it to find better trade-offs with lower risk than with prior methods. adaptable policies are thus likely a step towards more deployments in industrial applications.

future work: as outlined at the end of section c of the appendix, we were unable to incorporate value functions into our approach. this can be seen as a limiting factor, since there exist environments with sparse or very delayed rewards, or which for other reasons exhibit long planning horizons. the industrial benchmark features delayed rewards and evaluation trajectories are 100 steps long, but other environments can be more extreme in their characteristics. at some point, even the best dynamics models suffer from compounding errors and cannot accurately predict the far-away future. we do not believe that it is in principle impossible to combine the lion approach with value functions; however, future work will likely need to find methods to stabilize the learning process. other potential limitations of our approach include difficulties with the behavior cloning, e.g. when the original policy is stochastic or was not defined in the same state space as we use (e.g.
different human operators controlled the system at different times in the dataset), as well as difficulties when interpolating between vastly different behaviors on the pareto front spanned by proximity and performance. we mention these potential limitations only for the sake of completeness, since we were unable to observe them in our practical experiments. ethics statement
a non-parametric regression viewpoint: generalization of overparametrized deep relu networks under noisy observations namjoon suh, hyunouk ko, xiaoming huo h. milton stewart school of industrial and systems engineering georgia institute of technology atlanta, ga, usa {namjsuh,hko39,huo}@gatech.edu abstract we study the generalization properties of the overparameterized deep neural network (dnn) with rectified linear unit (relu) activations. under the nonparametric regression framework, it is assumed that the ground-truth function is from a reproducing kernel hilbert space (rkhs) induced by a neural tangent kernel (ntk) of a relu dnn, and that a dataset is given with noises. without a delicate adoption of early stopping, we prove that the overparametrized dnn trained by vanilla gradient descent does not recover the ground-truth function. it turns out that the estimated dnn's l2 prediction error is bounded away from 0. as a complement of the above result, we show that ℓ2-regularized gradient descent enables the overparametrized dnn to achieve the minimax optimal convergence rate of the l2 prediction error, without early stopping. notably, the rate we obtain is faster than the o(n^{−1/2}) known in the literature. introduction over the past few years, the neural tangent kernel (ntk) [arora et al., 2019b; jacot et al., 2018; lee et al., 2018; chizat & bach, 2018] has been one of the most seminal discoveries in the theory of neural networks. the underpinning idea of the ntk-type theory comes from the observation that in a wide-enough neural net, model parameters updated by gradient descent (gd) stay close to their initializations during training, so that the dynamics of the network can be approximated by the first-order taylor expansion with respect to its parameters at initialization.
the linearization of learning dynamics on neural networks has been helpful in showing the linear convergence of the training error on both overparametrized shallow [li & liang, 2018; du et al., 2018] and deep neural networks [allen-zhu et al., 2018; zou et al., 2018; 2020], as well as in the characterization of the generalization error of both models [arora et al., 2019a; cao & gu, 2019]. these findings clearly lead to the equivalence between the learning dynamics of neural networks and kernel methods in reproducing kernel hilbert spaces (rkhs) associated with the ntk.¹ specifically, arora et al. [2019a] provided an o(n^{−1/2}) generalization bound for shallow neural networks, where n denotes the training sample size. in the context of nonparametric regression, recently, two papers, nitanda & suzuki [2020] and hu et al. [2021], showed that neural networks can obtain convergence rates faster than o(n^{−1/2}) by specifying the complexities of the target function and hypothesis space. specifically, nitanda & suzuki [2020] showed that a shallow neural network with smoothly approximated relu (swish, see ramachandran et al. [2017]) activation trained via ℓ2-regularized averaged stochastic gradient descent (sgd) can recover target functions from the rkhs induced by the ntk with swish activation. similarly, hu et al. [2021] showed that a shallow neural network with relu activation trained via ℓ2-regularized gd can generalize well when the target function (i.e., f⋆_ρ) is from hntk. ¹henceforth, we denote hntk and hntk_l as the rkhss induced by the ntks of shallow (l = 1) and deep (l ≥ 2) neural networks with relu activations, respectively. notably, the rate that nitanda & suzuki [2020] and hu et al. [2021] obtained is minimax optimal, meaning that no estimator performs substantially better than the ℓ2-regularized gd or averaged sgd algorithms for recovering functions from the respective function spaces.
nevertheless, these results are restricted to shallow neural networks, and cannot explain the generalization abilities of deep neural networks (dnns). similarly to arora et al. [2019a], cao & gu [2019] obtained an o(n^{−1/2}) generalization bound, showing that sgd generalizes well when f⋆_ρ ∈ hntk_l has a bounded rkhs norm. however, the rate they obtained is slower than the minimax rate we can actually achieve. furthermore, their results become vacuous under the presence of additive noises on the dataset. motivated by these observations, the fundamental question in this study is as follows: when the noisy dataset is generated from a function from hntk_l, does the overparametrized dnn obtained via (ℓ2-regularized) gd provably generalize well to unseen data? we consider a neural network that has l ≥ 2 hidden layers with width m ≫ n (i.e., an overparametrized deep neural network). we focus on the least-squares loss and assume that the activation function is relu. a positivity assumption on the ntk of the relu dnn is imposed, meaning that λ∞ > 0, where λ∞ denotes the minimum eigenvalue of the ntk. we give a more formal mathematical definition of the relu dnn in the following subsection 2.2. under these settings, we provide an affirmative answer to the above question by investigating the behavior of the l2 prediction error of the obtained neural network with respect to gd iterations. contributions our derivations of the algorithm-dependent prediction risk bound require an analysis of the training dynamics of the estimated neural network through the (regularized) gd algorithm. we include these results as contributions of our paper, which can be of independent interest as well. • in the unregularized case, under the assumption λ∞ > 0, we show that the training loss converges to 0 at a linear rate. as will be detailed in subsection 3.3, this is a different result from the seminal work of allen-zhu et al.
[2018], where they also prove linear convergence of the training loss of a relu dnn, but under a different data distribution assumption. • we show that the dnn updated via vanilla gd does not recover the ground-truth function f⋆_ρ ∈ hntk_l under noisy observations if the dnn is trained for either too short or too long: that is, the prediction error is bounded away from 0 by some constant as n goes to infinity. • in the regularized case, we prove that the mean-squared error (mse) of the dnn is upper bounded by some positive constant. additionally, we prove that the dynamics of the estimated neural network get close to the solution of kernel ridge regression associated with the ntk of the relu dnn. • we show that ℓ2-regularization can be helpful in achieving the minimax optimal rate of the prediction risk for recovering f⋆_ρ ∈ hntk_l under noisy data. specifically, it is shown that after some iterations of ℓ2-regularized gd, the minimax optimal rate (which is o(n^{−d/(2d−1)}), where d is the feature dimension) can be achieved. note that our paper is an extension of hu et al. [2021] to the dnn model, showing that the ℓ2-regularized dnn can achieve a minimax optimal rate of the prediction error for recovering f⋆_ρ ∈ hntk_l. however, we would like to emphasize that our work is not a trivial application of their work, from at least two technical aspects. these aspects are detailed in the following subsection. technical comparisons with hu et al. [2021]