diff --git "a/title_31K_G/test_title_long_2405.02235v1.json" "b/title_31K_G/test_title_long_2405.02235v1.json" new file mode 100644--- /dev/null +++ "b/title_31K_G/test_title_long_2405.02235v1.json" @@ -0,0 +1,609 @@ +{ + "url": "http://arxiv.org/abs/2405.02235v1", + "title": "Learning Optimal Deterministic Policies with Stochastic Policy Gradients", + "abstract": "Policy gradient (PG) methods are successful approaches to deal with\ncontinuous reinforcement learning (RL) problems. They learn stochastic\nparametric (hyper)policies by either exploring in the space of actions or in\nthe space of parameters. Stochastic controllers, however, are often undesirable\nfrom a practical perspective because of their lack of robustness, safety, and\ntraceability. In common practice, stochastic (hyper)policies are learned only\nto deploy their deterministic version. In this paper, we make a step towards\nthe theoretical understanding of this practice. After introducing a novel\nframework for modeling this scenario, we study the global convergence to the\nbest deterministic policy, under (weak) gradient domination assumptions. Then,\nwe illustrate how to tune the exploration level used for learning to optimize\nthe trade-off between the sample complexity and the performance of the deployed\ndeterministic policy. Finally, we quantitatively compare action-based and\nparameter-based exploration, giving a formal guise to intuitive results.", + "authors": "Alessandro Montenegro, Marco Mussi, Alberto Maria Metelli, Matteo Papini", + "published": "2024-05-03", + "updated": "2024-05-03", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Model AND Based AND Reinforcement AND Learning", + "gt": "Learning Optimal Deterministic Policies with Stochastic Policy Gradients", + "main_content": "Introduction Within reinforcement learning (RL, Sutton & Barto, 2018) approaches, policy gradients (PGs, Deisenroth et al., 2013) algorithms have proved very effective in dealing with realworld control problems. Their advantages include the applicability to continuous state and action spaces (Peters & Schaal, 2006), resilience to sensor and actuator noise (Gravell et al., 2020), robustness to partial observability (Azizzadenesheli et al., 2018), and the possibility of incorporating prior knowledge in the policy design phase (Ghavamzadeh & Engel, 2006), improving explainability (Likmeta et al., 2020). PG algorithms search directly in a space of parametric policies for the one that maximizes a performance 1Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133, Milan, Italy. Correspondence to: Alessandro Montenegro . Proceedings of the 41 st International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s). function. Nonetheless, as always in RL, the exploration problem has to be addressed, and practical methods involve injecting noise in the actions or in the parameters. This limits the application of PG methods in many real-world scenarios, such as autonomous driving, industrial plants, and robotic controllers. This is because stochastic policies typically do not meet the reliability, safety, and traceability standards of this kind of applications. The problem of learning deterministic policies has been explicitly addressed in the PG literature by Silver et al. (2014) with their deterministic policy gradient, which spawned very successful deep RL algorithms (Lillicrap et al., 2016; Fujimoto et al., 2018). 
This approach, however, is affected by several drawbacks, mostly due to its inherent off-policy nature. First, this makes DPG hard to analyze from a theoretical perspective: local convergence guarantees have been established only recently, and only under assumptions that are very demanding for deterministic policies (Xiong et al., 2022). Furthermore, its practical versions are known to be very susceptible to hyperparameter tuning. We study here a simpler and fairly common approach: that of learning stochastic policies with PG algorithms, then deploying the corresponding deterministic version, “switching off” the noise.¹ Intuitively, the amount of exploration (e.g., the variance of a Gaussian policy) should be selected wisely. Indeed, the smaller the exploration level, the closer the optimized objective is to that of a deterministic policy. At the same time, with a small exploration, learning can severely slow down and get stuck on bad local optima. Policy gradient methods can be partitioned based on the space in which the exploration is carried out, distinguishing between action-based (AB) and parameter-based (PB, Sehnke et al., 2010) exploration. The first, of which REINFORCE (Williams, 1992) and GPOMDP (Baxter & Bartlett, 2001; Sutton et al., 1999) are the progenitor algorithms, performs exploration in the action space, with a stochastic (e.g., Gaussian) policy. On the other hand, PB exploration, introduced by Parameter-Exploring Policy Gradients (PGPE, Sehnke et al., 2010), implements the exploration at the level of policy parameters by means of a stochastic hyperpolicy. The latter performs perturbations of the parameters of a (typically deterministic) action policy. Of course, this dualism only considers the simplest form of noise-based, undirected exploration. Efficient exploration in large-scale MDPs is a very active area of research, with a large gap between theory and practice (Ghavamzadeh et al., 2020) placing the matter well beyond the scope of this paper. Also, we consider noise magnitudes that are fixed during the learning process, as the common practice of learning the exploration parameters themselves breaks all known sample complexity guarantees of vanilla PG (cf. Appendix C). To this day, a large effort has been put into providing convergence guarantees and sample complexity analyses for AB exploration algorithms (e.g., Papini et al., 2018; Yuan et al., 2022; Fatkhullin et al., 2023a), while the theoretical analysis of PB exploration has been taking a back seat since Zhao et al. (2011). We are not aware of any global convergence results for parameter-based PGs. Furthermore, even for AB exploration, current studies focus on the convergence to the best stochastic policy.

¹ This can be observed in several libraries (e.g., Raffin et al., 2021b) and benchmarks (e.g., Duan et al., 2016).

Original Contributions. In this paper, we make a step towards the theoretical understanding of the practice of deploying a deterministic policy learned with PG methods:
• We introduce a framework for modeling the practice of deploying a deterministic policy, by formalizing the notion of white noise-based exploration, allowing for a unified treatment of both AB and PB exploration.
• We study the convergence to the best deterministic policy for both AB and PB exploration.
For this reason, we focus on the global convergence, rather than on the first-order stationary point (FOSP) convergence, and we leverage on commonly used (weak) gradient domination assumptions. \u2022 We quantitatively show how the exploration level (i.e., noise) generates a trade-off between the sample complexity and the performance of the deployed deterministic policy. Then, we illustrate how it can be tuned to optimize such a trade-off, delivering sample complexity guarantees. In light of these results, we compare the advantages and disadvantages of AB and PB exploration in terms of samplecomplexity and requested assumptions, giving a formal guise to intuitive results. We also elaborate on how the assumptions used in the convergence analysis can be reconnected to basic characteristics of the MDP and the policy classes. We conclude with a numerical validation to empirically illustrate the discussed trade-offs. The proofs of the results presented in the main paper are reported in Appendix D. The related works are discussed in Appendix B. 2. Preliminaries Notation. For a measurable set X, we denote with \u2206pXq the set of probability measures over X. For P P\u2206pXq, we denote with p its density function. With little abuse of notation, we will interchangeably use x\u201eP or x\u201ep to denote that random variable x is sampled from the P. For nPN, we denote by JnK:\u201ct1, ..., nu. Lipschitz Continuous and Smooth Functions. A function f :X \u010eRd \u00d1R is L-Lipschitz continuous (L-LC) if |fpxq\u00b4fpx1q|\u010fL}x\u00b4x1}2 for every x,x1 PX. f is L2Lipschitz smooth (L2-LS) if it is continuously differentiable and its gradient \u2207xf is L2-LC, i.e., }\u2207xfpxq\u00b4 \u2207xfpx1q}2 \u010fL2}x\u00b4x1}2 for every x,x1 PX. Markov Decision Processes. A Markov Decision Process (MDP, Puterman, 1990) is represented by M:\u201c pS,A,p,r,\u03c10,\u03b3q, where S \u010eRdS and A\u010eRdA are the measurable state and action spaces, p:S \u02c6A\u00dd \u00d1\u2206pSq is the transition model, where pps1|s,aq specifies the probability density of landing in state s1 PS by playing action aPA in state sPS, r:S \u02c6A\u00dd \u00d1r\u00b4Rmax,Rmaxs is the reward function, where rps,aq specifies the reward the agent gets by playing action a in state s, \u03c10 P\u2206pSq is the initial-state distribution, and \u03b3 Pr0,1s is the discount factor. A trajectory \u03c4 \u201cps\u03c4,0,a\u03c4,0,...,s\u03c4,T \u00b41,a\u03c4,T \u00b41q of length T PNYt`8u is a sequence of T state-action pairs. The discounted return of a trajectory \u03c4 is Rp\u03c4q:\u201c\u0159T \u00b41 t\u201c0 \u03b3trps\u03c4,t,a\u03c4,tq. Deterministic Parametric Policies. We consider a parametric deterministic policy \u00b5\u03b8 :S \u00d1A, where \u03b8P\u0398\u010eRd\u0398 is the parameter vector belonging to the parameter space \u0398. The performance of \u00b5\u03b8 is assessed via the expected return JD :\u0398\u00d1R, defined as: JDp\u03b8q:\u201c E \u03c4\u201epDp\u00a8|\u03b8qrRp\u03c4qs, (1) where pDp\u03c4;\u03b8q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 pps\u03c4,t`1|s\u03c4,t,\u00b5\u03b8ps\u03c4,tqq is the density of trajectory \u03c4 induced by policy \u00b5\u03b8.2 The agent\u2019s goal consists of finding an optimal parameter \u03b8\u02da D P argmax\u03b8P\u0398 JDp\u03b8q and we denote J\u02da D :\u201cJDp\u03b8\u02da Dq. Action-Based (AB) Exploration. 
In AB exploration, we consider a parametric stochastic policy \u03c0\u03c1 :S \u00d1\u2206pAq, where \u03c1PP is the parameter vector belonging to the parameter space P \u010eRdP. The policy is used to sample actions at \u201e\u03c0\u03c1p\u00a8|stq to be played in state st for every step t of interaction. The performance of \u03c0\u03c1 is assessed via the expected return JA :P \u00d1R, defined as: JAp\u03c1q:\u201c E \u03c4\u201epAp\u00a8|\u03c1qrRp\u03c4qs, where (2) pAp\u03c4;\u03c1q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 \u03c0\u03c1pa\u03c4,t|s\u03c4,tqpps\u03c4,t`1|s\u03c4,t,a\u03c4,tq is the density of trajectory \u03c4 induced by policy \u03c0\u03c1.2 In AB exploration, we aim at learning \u03c1\u02da A Pargmax\u03c1PP JAp\u03c1q and we denote JA\u02da :\u201cJAp\u03c1\u02da Aq. If JAp\u03c1q is differentiable w.r.t. \u03c1, PG methods (Peters & Schaal, 2008) update the 2For both JD (resp. JA, JP) and pD (resp. pA, pP), we use the D (resp. A, P) subscript to denote that the dependence on \u03b8 (resp. \u03c1) is through a Deterministic policy (resp. Action-based exploration policy, Parameter-based exploration hyperpolicy). 2 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients parameter \u03c1 via gradient ascent: \u03c1t`1 \u00d0 \u00dd\u03c1t `\u03b6t p \u2207\u03c1JAp\u03c1tq, where \u03b6t \u01050 is the step size and p \u2207\u03c1JAp\u03c1q is an estimator of \u2207\u03c1JAp\u03c1q. In particular, the GPOMDP estimator is:3 p \u2207\u03c1JAp\u03c1q:\u201c 1 N N \u00ff i\u201c1 T \u00b41 \u00ff t\u201c0 \u02dc t \u00ff k\u201c0 \u2207\u03c1log\u03c0\u03c1pa\u03c4i,k|s\u03c4i,kq \u00b8 \u03b3trps\u03c4i,t,a\u03c4i,tq, where N is the number of independent trajectories t\u03c4iuN i\u201c1 collected with policy \u03c0\u03c1 (\u03c4i \u201epAp\u00a8;\u03c1q), called batch size. Parameter-Based (PB) Exploration. In PB exploration, we use a parametric stochastic hyperpolicy \u03bd\u03c1 \u010e\u2206p\u0398q, where \u03c1PRdP is the parameter vector. The hyperpolicy is used to sample parameters \u03b8\u201e\u03bd\u03c1 to be plugged in the deterministic policy \u00b5\u03b8 at the beginning of every trajectory. The performance index of \u03bd\u03c1 is JP :Rd\u03c1 \u00dd \u00d1R, that is the expectation over \u03b8 of JDp\u03b8q defined as:2 JPp\u03c1q:\u201c E \u03b8\u201e\u03bd\u03c1 rJDp\u03b8qs. PB exploration aims at learning \u03c1\u02da P Pargmax\u03c1PP JPp\u03c1q and we denote JP\u02da :\u201cJPp\u03c1\u02da Pq. If JDp\u03c1q is differentiable w.r.t. \u03c1, PGPE (Sehnke et al., 2010) updates the hyperparameter \u03c1 via gradient accent: \u03c1t`1 \u00d0 \u00dd\u03c1t `\u03b6t p \u2207\u03c1JPp\u03c1tq. In particular, PGPE uses an estimator of \u2207\u03c1JPp\u03c1q defined as: p \u2207\u03c1JPp\u03c1q\u201c 1 N N \u00ff i\u201c1 \u2207\u03c1 log\u03bd\u03c1p\u03b8iqRp\u03c4iq, where N is the number of independent parameterstrajectories pairs tp\u03b8i,\u03c4iquN i\u201c1, collected with hyperpolicy \u03bd\u03c1 (\u03b8i \u201e\u03bd\u03c1 and \u03c4i \u201epDp\u00a8;\u03b8iq), called batch size. 3. White-Noise Exploration We formalize a class of stochastic (hyper)policies widely employed in the practice of AB and PB exploration, namely white noise-based (hyper)policies. These policies \u03c0\u03b8p\u00a8|sq (resp. hyperpolicies \u03bd\u03b8) are obtained by adding a white noise \u03f5 to the deterministic action a\u201c\u00b5\u03b8psq (resp. to the parameter \u03b8) independent of the state s (resp. parameter \u03b8). 
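Before the formal definitions, the two white-noise exploration schemes and the final deployment of the deterministic policy can be sketched in a few lines. This is only an illustrative sketch, assuming isotropic Gaussian noise and a policy linear in the state (as in the experiments of Section 8); the dimensions and function names are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

d_S, d_A = 4, 2                      # example state/action dimensions (hypothetical)
d_Theta = d_S * d_A                  # parameter dimension of the linear policy

def mu(theta, s):
    # Deterministic policy mu_theta(s): here, linear in the state.
    return theta.reshape(d_A, d_S) @ s

def ab_action(theta, s, sigma_A, rng=np.random.default_rng()):
    # Action-based white-noise exploration (cf. Definition 3.2):
    # fresh noise is added to the action at every step.
    return mu(theta, s) + rng.normal(0.0, sigma_A, size=d_A)

def pb_parameter(theta, sigma_P, rng=np.random.default_rng()):
    # Parameter-based white-noise exploration (cf. Definition 3.3):
    # the parameter is perturbed once per trajectory, then mu_{theta + eps} is played.
    return theta + rng.normal(0.0, sigma_P, size=d_Theta)

def deployed_action(theta, s):
    # Deployment: "switch off" the noise and play the deterministic policy.
    return mu(theta, s)
```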
Definition 3.1 (White Noise). Let dPN and \u03c3\u01050. A probability distribution \u03a6d P\u2206pRdq is a white-noise if: E \u03f5\u201e\u03a6dr\u03f5s\u201c0d, E \u03f5\u201e\u03a6dr}\u03f5}2 2s\u010fd\u03c32. (3) This definition complies with the zero-mean Gaussian distribution \u03f5\u201eNp0d,\u03a3q, where E\u03f5\u201eN p0d,\u03a3qr}\u03f5}2 2s\u201ctrp\u03a3q\u010f d\u03bbmaxp\u03a3q. In particular, for an isotropic Gaussian \u03a3\u201c \u03c32Id, we have that trp\u03a3q\u201cd\u03c32. We now formalize the notion of white noise-based (hyper)policy. Definition 3.2 (White noise-based policies). Let \u03b8P\u0398 and \u00b5\u03b8 :S \u00d1A be a parametric deterministic policy and let \u03a6dA be a white noise (Definition 3.1). A white noise-based pol3We limit our analysis to the GPOMDP estimator (Baxter & Bartlett, 2001), neglecting the REINFORCE (Williams, 1992) since it is known that the latter suffers from larger variance. icy \u03c0\u03b8 :S \u00d1\u2206pAq is such that, for every state sPS, action a\u201e\u03c0\u03b8p\u00a8|sq satisfies a\u201c\u00b5\u03b8psq`\u03f5 where \u03f5\u201e\u03a6dA independently at every step. This definition considers stochastic policies \u03c0\u03b8p\u00a8|sq that are obtained by adding noise \u03f5 fulfilling Definition 3.1, sampled independently at every step, to the action \u00b5\u03b8psq prescribed by the deterministic policy (i.e., AB exploration), resulting in playing action \u00b5\u03b8psq`\u03f5. An analogous definition can be formulated for hyperpolicies. Definition 3.3 (White noise-based hyperpolicies). Let \u03b8P\u0398 and \u00b5\u03b8 :S \u00d1A be a parametric deterministic policy and let \u03a6d\u0398 be a white-noise (Definition 3.1). A white noisebased hyperpolicy \u03bd\u03b8 P\u2206p\u0398q is such that, for every parameter \u03b8P\u0398, parameter \u03b81 \u201e\u03bd\u03b8 satisfies \u03b81 \u201c\u03b8`\u03f5 where \u03f5\u201e\u03a6d\u0398 independently in every trajectory. This definition considers stochastic hyperpolicies \u03bd\u03b8 obtained by adding noise \u03f5 fulfilling Definition 3.1, sampled independently at the beginning of each trajectory, to the parameter \u03b8 defining the deterministic policy \u00b5\u03b8, resulting in playing deterministic policy \u00b5\u03b8`\u03f5 (i.e., PB exploration). Definitions 3.2 and 3.3 allow to represent a class of widelyused (hyper)policies, like Gaussian hyperpolicies and Gaussian policies with state-independent variance. Furthermore, once the parameter \u03b8 is learned with either AB and PB exploration, deploying the corresponding deterministic policy (i.e., \u201cswitching off\u201d the noise) is straightforward.4 4. Fundamental Assumptions In this section, we present the fundamental assumptions on the MDP (p and r), deterministic policy \u00b5\u03b8, and white noise \u03a6. For the sake of generality, we will consider abstract assumptions in the next sections and, then, show their relation to the fundamental ones (see Appendix A for details). Assumptions on the MDP. We start with the assumptions on the regularity of the MDP, i.e., on transition model p and reward function r, w.r.t. variations of the played action a. Assumption 4.1 (Lipschitz MDP (logp, r) w.r.t. actions). The log transition model logpps1|s,\u00a8q and the reward function rps,\u00a8q are Lp-LC and Lr-LC, respectively, w.r.t. the action for every s,s1 PS, i.e., for every a,aPA: |logpps1|s,aq\u00b4logpps1|s,aq|\u010fLp}a\u00b4a}2, (4) |rps,aq\u00b4rps,aq|\u010fLr}a\u00b4a}2. 
(5) Assumption 4.2 (Smooth MDP (logp, r) w.r.t. actions). The log transition model logpps1|s,\u00a8q and the reward function rps,\u00a8q are L2,p-LS and L2,r-LS, respectively, w.r.t. the 4For white noise-based (hyper)policies there exists a one-toone mapping between the parameter space of (hyper)policies and that of deterministic policies (P \u201c\u0398). For simplicity, we assume \u0398\u201cRd\u0398 and A\u201cRdA (see Appendix C). 3 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients action for every s,s1 PS, i.e., for every a,aPA: }\u2207a logpps1|s,aq\u00b4\u2207a logpps1|s,aq}2 \u010fL2,p}a\u00b4a}2, }\u2207arps,aq\u00b4\u2207arps,aq}2 \u010fL2,r}a\u00b4a}2. Intuitively, these assumptions ensure that when we perform AB and/or PB exploration altering the played action w.r.t. a deterministic policy, the effect on the environment dynamics and on reward (and on their gradients) is controllable. Assumptions on the deterministic policy. We now move to the assumptions on the regularity of the deterministic policy \u00b5\u03b8 w.r.t. the parameter \u03b8. Assumption 4.3 (Lipschitz deterministic policy \u00b5\u03b8 w.r.t. parameters \u03b8). The deterministic policy \u00b5\u03b8psq is L\u00b5-LC w.r.t. parameter for every sPS, i.e., for every \u03b8,\u03b8P\u0398: }\u00b5\u03b8psq\u00b4\u00b5\u03b8psq}2 \u010fL\u00b5}\u03b8\u00b4\u03b8}2. (6) Assumption 4.4 (Smooth deterministic policy \u00b5\u03b8 w.r.t. parameters \u03b8). The deterministic policy \u00b5\u03b8psq is L2,\u00b5-LS w.r.t. parameter for every sPS, i.e., for every \u03b8,\u03b8P\u0398: }\u2207\u03b8\u00b5\u03b8psq\u00b4\u2207\u03b8\u00b5\u03b8psq}2 \u010fL2,\u00b5}\u03b8\u00b4\u03b8}2. (7) Similarly, these assumptions ensure that if we deploy an altered parameter \u03b8, like in PB exploration, the effect on the played action (and on its gradient) is bounded. Assumptions 4.1 and 4.3 are standard in the DPG literature (Silver et al., 2014). Assumption 4.2, instead, can be interpreted as the counterpart of the Q-function smoothness used in the DPG analysis (Kumar et al., 2020; Xiong et al., 2022), while Assumption 4.4 has been used to study the convergence of DPG (Xiong et al., 2022). Similar conditions to our Assumption 4.1 were adopted by Pirotta et al. (2015), but measuring the continuity of p in the Kantorovich metric, a weaker requirement that, unfortunately, does not come with a corresponding smoothness condition. Assumptions on the (hyper)policies. We introduce the assumptions on the score functions of the white noise \u03a6. Assumption 4.5 (Bounded Scores of \u03a6). Let \u03a6P\u2206pRdq be a white noise with variance bound \u03c3\u01050 (Definition 3.1) and density \u03d5. \u03d5 is differentiable in its argument and there exists a universal constant c\u01050 s.t.: (i) E\u03f5\u201e\u03a6r}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u010fcd\u03c3\u00b42; (ii) E\u03f5\u201e\u03a6r}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u010fc\u03c3\u00b42. Intuitively, this assumption is equivalent to the more common ones requiring the boundedness of the expected norms of the score function (and its gradient) (Papini et al., 2022; Yuan et al., 2022, cf. Appendix E). Note that a zero-mean Gaussian \u03a6\u201cNp0d,\u03a3q fulfills Assumption 4.5. Indeed, one has \u2207\u03f5 log\u03d5p\u03f5q\u201c\u03a3\u00b41\u03f5 and \u22072 \u03f5 log\u03d5p\u03f5q\u201c \u03a3\u00b41. 
Thus, Er}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u201ctrp\u03a3\u00b41q\u010fd\u03bbminp\u03a3q\u00b41 and Er}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u201c\u03bbminp\u03a3q\u00b41. In particular, for an isotropic Gaussian \u03a3\u201c\u03c32I, we have \u03bbminp\u03a3q\u201c\u03c32, fulfilling Assumption 4.5 with c\u201c1. 5. Deploying Deterministic Policies In this section, we study the performance JD of the deterministic policy \u00b5\u03b8, when the parameter \u03b8 is learned via AB or PB white noise-based exploration (Section 3). We will refer to this scenario as deploying the parameters, which reflects the common practice of \u201cswitching off the noise\u201d once the learning process is over. PB Exploration. Let us start with PB exploration by observing that for white noise-based hyperpolicies (Definition 3.3), we can express the expected return JP as a function of JD and of the noise \u03f5 for every \u03b8P\u0398: JPp\u03b8q\u201c E \u03f5\u201e\u03a6d\u0398 rJDp\u03b8`\u03f5qs. (8) This illustrates that PB exploration can be obtained by perturbing the parameter \u03b8 of a deterministic policy \u00b5\u03b8 via the noise \u03f5\u201e\u03a6d\u0398. To achieve guarantees on the deterministic performance JD of a parameter \u03b8 learned with PB exploration, we enforce the following regularity condition. Assumption 5.1 (Lipschitz JD w.r.t. \u03b8). JD is LJ-LC in the parameter \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: |JDp\u03b8q\u00b4JDp\u03b81q|\u010fLJ}\u03b8\u00b4\u03b81}2. (9) When the MDP and the deterministic policy are LC as in Assumptions 4.1 and 4.3, LJ is Opp1\u00b4\u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). This way, we guarantee that perturbation \u03f5 on the parameter \u03b8 determines a variation on function JD depending on the magnitude of \u03f5, which allows obtaining the following result. Theorem 5.1 (Deterministic deployment of parameters learned with PB white-noise exploration). If the hyperpolicy complies with Definition 3.3, under Assumption 5.1: (i) (Uniform bound) for every \u03b8P\u0398, it holds that |JDp\u03b8q\u00b4JPp\u03b8q|\u010fLJ ?d\u0398\u03c3P; (ii) (JD upper bound) Let \u03b8\u02da P Pargmax\u03b8P\u0398 JPp\u03b8q, it holds that: J\u02da D \u00b4JDp\u03b8\u02da Pq\u010f2LJ ?d\u0398\u03c3P; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Pq\u011b0.28LJ ?d\u0398\u03c3P. Some observations are in order. (i) shows that the performance of the hyperpolicy JPp\u03b8q is representative of the deterministic performance JDp\u03b8q up to an additive term depending on LJ ?d\u0398\u03c3P. As expected, this term grows with the Lipschitz constant LJ of the function JD, with the standard deviation \u03c3P of the additive noise, and with the dimensionality of the parameter space d\u0398. In particular, this implies that lim\u03c3P\u00d10` JPp\u03b8q\u201cJDp\u03b8q. (ii) is a consequence of (i) and provides an upper bound between the optimal performance obtained if we were able to directly optimize the deterministic policy max\u03b8P\u0398 JDp\u03b8q and the performance of the parameter \u03b8\u02da P learned by optimizing JPp\u03b8q, i.e., via 4 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients PB exploration, when deployed on the deterministic policy. 
Finally, (iii) provides a lower bound to the same quantity on a specific instance of MDP and hyperpolicy, proving that the dependence on LJ ?d\u0398\u03c3P is tight up to constant terms. AB Exploration. Let us move to the AB exploration case where understanding the effect of the noise is more complex since it is applied to every action independently at every step. To this end, we introduce the notion of non-stationary deterministic policy \u00b5\u201cp\u00b5tqT \u00b41 t\u201c0 , where at time step t the deterministic policy \u00b5t :S \u00d1A is played, and its expected return (with abuse of notation) is JDp\u00b5q\u201cE\u03c4\u201epDp\u00a8|\u00b5qrRp\u03c4qs where pDp\u00a8|\u00b5q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 pps\u03c4,t`1|s\u03c4,t,\u00b5tps\u03c4,tqq. Let \u03f5\u201c p\u03f5tqT \u00b41 t\u201c0 \u201e\u03a6T dA be a sequence of noises sampled independently, we denote with \u00b5\u03b8 `\u03f5\u201cp\u00b5\u03b8 `\u03f5tqT \u00b41 t\u201c0 the nonstationary policy that, at time t, perturbs the action as \u00b5\u03b8pstq`\u03f5t. Since the noise is independent on the state, we express JA as a function of JD for every \u03b8P\u0398 as follows: JAp\u03b8q\u201c E \u03f5\u201e\u03a6T dA \u201d JDp\u00b5\u03b8 `\u03f5q \u0131 . (10) Thus, to ensure that the parameter learned by AB exploration achieves performance guarantees when evaluated as a deterministic policy, we need to enforce some regularity condition on JD as a function of \u00b5. Assumption 5.2 (Lipschitz JD w.r.t. \u00b5). JD of the nonstationary deterministic policy \u00b5 is pLtqT \u00b41 t\u201c0 -LC in the nonstationary policy, i.e., for every \u00b5,\u00b51: |JDp\u00b5q\u00b4JDp\u00b51q|\u010f T \u00b41 \u00ff t\u201c0 Lt sup sPS \u203a \u203a\u00b5tpsq\u00b4\u00b51 tpsq \u203a \u203a 2 . (11) Furthermore, we denote L:\u201c\u0159T \u00b41 t\u201c0 Lt. When the MDP is LC as in Assumptions 4.1, L is Opp1\u00b4 \u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). The assumption enforces that changing the deterministic policy at step t from \u00b5t to \u00b51 t, the variation of JD is controlled by the action distance (in the worst state s) multiplied by a time-dependent Lipschitz constant. This form of condition allows us to show the following result. Theorem 5.2 (Deterministic deployment of parameters learned with AB white-noise exploration). If the policy complies with Definition 3.2 and under Assumption 5.2: (i) (Uniform bound) for every \u03b8P\u0398, it holds that: |JDp\u03b8q\u00b4JAp\u03b8q|\u010fL?dA\u03c3A; (ii) (JD upper bound) Letting \u03b8\u02da A Pargmax\u03b8P\u0398 JAp\u03b8q, it holds that J\u02da D \u00b4JDp\u03b8\u02da Aq\u010f2L?dA\u03c3A; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Aq\u011b0.28L?dA\u03c3A. Similarly to Theorem 5.1, (i) and (ii) provide an upper bound on the difference between the policy performance JAp\u03b8q and the corresponding deterministic policy JDp\u03b8q and on the performance of \u03b8\u02da A when deployed on a deterministic policy. Clearly, also in the AB exploration, we have that lim\u03c3A\u00d10` JAp\u03b8q\u201cJDp\u03b8q. As in the PB case, (iii) shows that the upper bound (ii) is tight up to constant terms. Finally, let us note that our bounds for PB exploration depend on the dimension of the parameter space d\u0398 that is replaced by that of the action space dA in AB exploration. 
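Before moving to the convergence analysis, the two gradient estimators recalled in Section 2 can be made concrete. The sketch below assumes the white-noise Gaussian policy and hyperpolicy of Section 3 with a policy linear in the state; it is a schematic Monte-Carlo illustration with a hypothetical trajectory format, not the authors' code.

```python
import numpy as np

def gaussian_policy_score(Theta, s, a, sigma_A):
    # Score of pi(a|s) = N(a; Theta @ s, sigma_A^2 I) w.r.t. Theta:
    # the outer product ((a - Theta @ s) / sigma_A^2) s^T.
    return np.outer((a - Theta @ s) / sigma_A**2, s)

def gpomdp_gradient(Theta, sigma_A, trajectories, gamma):
    # GPOMDP: each discounted reward gamma^t * r_t is weighted by the
    # sum of the score functions of the actions taken up to time t.
    grads = []
    for states, actions, rewards in trajectories:      # one entry per trajectory
        score_sum = np.zeros_like(Theta)
        g = np.zeros_like(Theta)
        for t, (s, a, r) in enumerate(zip(states, actions, rewards)):
            score_sum += gaussian_policy_score(Theta, s, a, sigma_A)
            g += (gamma**t) * r * score_sum
        grads.append(g)
    return np.mean(grads, axis=0)                      # average over the batch

def pgpe_gradient(rho, sigma_P, thetas, returns):
    # PGPE: the discounted return R(tau_i) is weighted by the score of the
    # Gaussian hyperpolicy nu_rho = N(rho, sigma_P^2 I), i.e. (theta_i - rho) / sigma_P^2.
    grads = [((theta - rho) / sigma_P**2) * R for theta, R in zip(thetas, returns)]
    return np.mean(grads, axis=0)
```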
6. Global Convergence Analysis In this section, we present our main results about the convergence of AB and PB white noise-based exploration to global optimal parameter \u03b8\u02da D of the performance of the deterministic policy JD. Let K PN be the number of iterations and N the batch size; given an accuracy threshold \u03f5\u01050, our goal is to bound the sample complexity NK to fulfill the following last-iterate global convergence condition: J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5, (12) where \u03b8K is the (hyper)parameter at the end of learning. 6.1. General Global Convergence Analysis In this section, we provide a global convergence analysis for a generic stochastic first-order algorithm optimizing the differentiable objective function J: on the parameters space \u0398\u010eRd, that can be instanced for both AB (setting J: \u201cJA) and PB (setting J: \u201cJP) exploration, when optimizing the corresponding objective. At every iteration kPJKK, the algorithm performs the gradient ascent update: \u03b8k`1 \u00d0 \u00dd\u03b8k `\u03b6k p \u2207\u03b8J:p\u03b8kq, (13) where \u03b6k \u01050 is the step size and p \u2207\u03b8J:p\u03b8kq is an unbiased estimate of \u2207\u03b8J:p\u03b8kq and denote J\u02da : \u201cmax\u03b8P\u0398 J:p\u03b8q. We enforce the following standard assumptions. Assumption 6.1 (Weak gradient domination for J:). There exist \u03b1\u01050 and \u03b2 \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8J:p\u03b8q}2 `\u03b2. Assumption 6.1 is the gold standard for the global convergence of stochastic optimization (Yuan et al., 2022; Masiha et al., 2022; Fatkhullin et al., 2023a). Note that, when \u03b2 \u201c0, we recover the (strong) gradient domination (GD) property: J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8Jp:\u03b8q}2 for all \u03b8P\u0398. GD is stricter than WGD, and requires that J: has no local optima. Instead, WGD admits local maxima as long as their performance is \u03b2-close to the globally optimal one.5 Assumption 6.2 (Smooth J: w.r.t. parameters \u03b8). J: is 5In this section, we will assume that J: (i.e., either JA or JA) is already endowed with the WGD property. In Section 7, we illustrate how it can be obtained in several common scenarios. 5 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients L2,:-LS w.r.t. parameters \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: }\u2207\u03b8J:p\u03b81q\u00b4\u2207\u03b8J:p\u03b8q}2 \u010fL2,:}\u03b81 \u00b4\u03b8}2. (14) Assumption 6.2 is ubiquitous in the convergence analysis of policy gradient algorithms (Papini et al., 2018; Agarwal et al., 2021; Yuan et al., 2022; Bhandari & Russo, 2024), which is usually studied as an instance of (nonconvex) smooth stochastic optimization. The smoothness of J: PtJA,JPu can be: (i) inherited from the deterministic objective JD (originating, in turn, from the regularity of the MDP) and of the deterministic policy \u00b5\u03b8 (Assumptions 4.14.4); or (ii) enforced through the properties on the white noise \u03a6 (Assumption 4.5). The first result was observed in a similar form by Pirotta et al. (2015, Theorem 3), while a generalization of the second was established by Papini et al. (2022) and refined by Yuan et al. (2022). Assumption 6.3 (Bounded estimator variance p \u2207\u03b8J:p\u03b8q). 
The estimator p \u2207\u03b8J:p\u03b8q computed with batch size N has a bounded variance, i.e., there exists V: \u011b0 such that, for every \u03b8P\u0398, we have: Varrp \u2207\u03b8J:p\u03b8qs\u010fV:{N. Assumption 6.3 guarantees that the gradient estimator is characterized by a bounded variance V: which scales with the batch size N. Under Assumptions 4.5 (and 4.4 for GPOMDP), the term V: can be further characterized (see Table 2 in Appendix A). We are now ready to state the global convergence result. Theorem 6.1. Consider an algorithm running the update rule of Equation (13). Under Assumptions 6.1, 6.2, and 6.3, with a suitable constant step size, to guarantee J\u02da : \u00b4ErJ:p\u03b8Kqs\u010f\u03f5`\u03b2 the sample complexity is at most: NK \u201c 16\u03b14L2,:V: \u03f53 log maxt0,J\u02da : \u00b4J:p\u03b80q\u00b4\u03b2u \u03f5 . (15) This result establishes a convergence of order r Op\u03f5\u00b43q to the global optimum J\u02da : of the general objective J:. Recalling that J: PtJA,JPu, Theorem 6.1 provides: (i) the first global convergence guarantee for PGPE for PB exploration (setting J: \u201cJP) and (ii) a global convergence guarantee for PG (e.g., GPOMDP) for AB exploration of the same order (up to logarithmic terms in \u03f5\u00b41) of the state-of-the-art one of Yuan et al. (2022) (setting J: \u201cJA). Note that our guarantee is obtained for a constant step size and holds for the last parameter \u03b8K, delivering a last-iterate result, rather than a best-iterate one as in (Yuan et al., 2022, Corollary 3.7). Clearly, this result is not yet our ultimate goal since, we need to assess how far the performance of the learned parameter \u03b8K is from that of the optimal deterministic objective J\u02da D. 6.2. Global Convergence of PGPE and GPOMDP In this section, we provide results on the global convergence of PGPE and GPOMDP with white-noise exploration. The sample complexity bounds are summarized in Table 1 and presented extensively in Appendix D. They all follow from our general Theorem 6.1 and our results on the deployment of deterministic policies from Section 5. PGPE. We start by commenting on the sample complexity of PGPE for a constant, generic hyperpolicy variance \u03c3P , shown in the first column. First, the guarantee on J\u02da D \u00b4ErJDp\u03b8Kqs contains the additional variancedependent term 3LP ?d\u0398\u03c3P originating from the deterministic deployment. Second, the sample complexity scales with r Op\u03f5\u00b43q. Third, by enforcing the smoothness of the MDP and of the deterministic policy (Assumptions 4.2 and 4.4), we improve the dependence on d\u0398 and on \u03c3P at the price of an additional p1\u00b4\u03b3q\u00b41 factor. A choice of \u03c3P which adapts to \u03f5 allows us to achieve the global convergence on the deterministic objective JD, up to \u03f5`\u03b2 only. Moving to the second column, we observe that the convergence rate becomes r Op\u03f5\u00b47q, which reduces to r Op\u03f5\u00b45q with the additional smoothness assumptions, which also improve the dependence on both p1\u00b4\u03b3q\u00b41 and d\u0398. The slower rate \u03f5\u00b45 or \u03f5\u00b47, compared to the \u03f5\u00b43 of the fixedvariance case, is easily explained by the more challenging requirement of converging to the optimal deterministic policy rather than the optimal stochastic hyperpolicy, as for standard PGPE. 
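As a minimal sketch, and assuming only a generic unbiased gradient estimator, the update rule of Equation (13) analyzed in Theorem 6.1 is plain stochastic gradient ascent with a constant step size, and the guarantee is stated for the last iterate; `estimate_gradient` stands for either of the estimators sketched above and is a hypothetical placeholder.

```python
def run_vanilla_pg(theta0, estimate_gradient, zeta, K, N):
    # Generic update rule (13): theta_{k+1} = theta_k + zeta * grad_hat(theta_k),
    # with constant step size zeta, batch size N, and K iterations.
    theta = theta0
    for _ in range(K):
        grad_hat = estimate_gradient(theta, N)   # unbiased estimate of grad J(theta)
        theta = theta + zeta * grad_hat
    return theta                                 # last iterate theta_K is the one deployed
```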
Note that we have set the standard deviation equal to \u03c3P \u201c \u03f5 6LP ?d\u0398 \u201cOp\u03f5p1\u00b4\u03b3q2d\u00b41{2 \u0398 q that, as expected, decreases with the desired accuracy \u03f5.6 GPOMDP. We now consider the global convergence of GPOMDP, starting again with a generic policy variance \u03c3A (third column). The result is similar to that of PGPE with three notable exceptions. First, an additional p1\u00b4\u03b3q\u00b41 factor appears in the sample complexity due the variance bound of GPOMDP (Papini et al., 2022). This suggests that GPOMDP struggles more than PGPE in long-horizon environments, as already observed by Zhao et al. (2011). Second, the dependence on the dimensionality of the parameter space d\u0398 is replaced with the dimensionality of the action space dA. This is expected and derives from the nature of exploration that is performed in the parameter space for PGPE and in the action space for GPOMPD. Finally, the smoothness of the deterministic policy (Asm. 4.4) is always needed. Adding also the smoothness of the MDP (Asm. 4.2), we can trade a dA factor for a p1\u00b4\u03b3q\u00b41 one. Again, a careful \u03f5-dependent choice of \u03c3A allows us to achieve global convergence on the deterministic objective JD. In the last column, we can notice that the convergence rates display the same dependence on \u03f5 as in PGPE. How6These results should be interpreted as a demonstration that global convergence to deterministic policies is possible rather than a practical recipe to set the value of \u03c3P. We do hope that our theory can guide the design of practical solutions in future works. 6 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Table 1. Sample complexity NK \u201c r Op\u00a8q of GPOMDP and PGPE to converge to a deterministic optimal policy, retaining only dependencies on \u03f5, p1\u00b4\u03b3q\u00b41, \u03c3A, \u03c3P, d\u0398, dA, and \u03b1. Task-dependent constants LP and LA are Opp1\u00b4\u03b3q\u00b42q\u2014see Table 2 in Appendix A. ever, the dependence on the effective horizon p1\u00b4\u03b3q\u00b41 is worse. In this case, the additional smoothness assumption improves the dependency on dA and p1\u00b4\u03b3q\u00b41. 7. About the Weak Gradient Domination So far, we have assumed WGD for the AB JA and PB JP (Assumption 6.1). In this section, we discuss several scenarios in which such an assumption holds. 7.1. Inherited Weak Gradient Domination We start by discussing the case in which the deterministic policy objective JD already enjoys the (W)GD property. Assumption 7.1 (Weak gradient domination for JD). There exist \u03b1D \u01050 and \u03b2D \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da D \u00b4JDp\u03b8q\u010f\u03b1D}\u2207\u03b8JDp\u03b8q}2 `\u03b2D. Although the notion of WGD has been mostly applied to stochastic policies in the literature (Liu et al., 2020; Yuan et al., 2022), there is no reason why it should not be plausible for deterministic policies. Bhandari & Russo (2024) provide sufficient conditions for the performance function not to have any local optima, which is a stronger condition, without discriminating between deterministic and stochastic policies (cf. their Remark 1). Moreover, one of their examples is linear-quadratic regulators with deterministic linear policies. We show that, under Lipschiztianity and smoothness of the MDP and deterministic policy (Assumptions 4.1-4.4), this is sufficient to enforce the WGD property for both the PB JP and the AB JA objectives. 
Let us start with JP. Theorem 7.1 (Inherited weak gradient domination for JP). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JP \u02da \u00b4JPp\u03b8q\u010f\u03b1D}\u2207\u03b8JPp\u03b8q}2 `\u03b2D `p\u03b1DL2 `LP q\u03c3P a d\u0398, where L2 \u201cOpp1\u00b4\u03b3q\u00b43q (full expression in Lemma E.2). The result shows that the WGD property of JD entails that of JP with the same \u03b1D coefficient, but a different \u03b2 \u201c \u03b2Dp\u03b1DL2 `LP q\u03c3P ?d\u0398 that accounts for the gap between the two objectives encoded in \u03c3P. Note that even if JD enjoys a (strong) GD (i.e., \u03b2D \u201c0), in general, JP inherits a WGD property. In the setting of Theorem 7.1, convergence in the sense of J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5`\u03b2D can be achieved with r Op\u03b16 D\u03f5\u00b45d2 \u0398p1\u00b4\u03b3q\u00b411q samples by carefully setting the hyperpolicy variance (see Theorem D.12 for details). An analogous result can be obtained for AB exploration. Theorem 7.2 (Inherited weak gradient domination on JA). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JA \u02da \u00b4JAp\u03b8q\u010f\u03b1D}\u2207\u03b8JAp\u03b8q}2 `\u03b2D `p\u03b1D\u03c8`LAq\u03c3A a dA, where \u03c8\u201cOpp1\u00b4\u03b3q\u00b44q (full expression in the proof). The sample complexity, in this case, is r Op\u03b16 D\u03f5\u00b45d2 Ap1\u00b4 \u03b3q\u00b414q (see Theorem D.13 for details). 7.2. Policy-induced Weak Gradient Domination When the the objective function does not enjoy weak gradient domination in the space of deterministic policies, we can still have WGD with respect to stochastic policies if they satisfy a condition known as Fisher-non-degeneracy (Liu et al., 2020; Ding et al., 2022). As far as we know, WGD by Fishernon-degeneracy is a peculiar property of AB exploration that has no equivalent in PB exploration. White-noise policies satisfying Assumption 4.5 are Fisher-non-degenerate under the following standard assumption (Liu et al., 2020): Assumption 7.2 (Explorability). There exists \u03bbE \u01050 s.t. E\u03c0\u03b8r\u2207\u03b8\u00b5\u03b8psq\u2207\u03b8\u00b5\u03b8psqJs\u013e\u03bbEI for all \u03b8P\u0398, where the expectation over states is induced by the stochastic policy. We can use this fact to prove WGD for white-noise policies: Theorem 7.3 (Policy-induced weak gradient domination). Under Assumptions 4.5, 7.2 and D.1, we have: JA \u02da \u00b4JAp\u03b8q\u010fC ?dA\u03c3A \u03bbE }\u2207\u03b8JAp\u03b8q}2 ` ?\u03f5bias 1\u00b4\u03b3 , for some numerical constant C \u01050, that is, Assumption 6.1 (:=A) is satisfied with \u03b1\u201cC ?dA\u03c3A \u03bbE and \u03b2 \u201c ?\u03f5bias 1\u00b4\u03b3 . 7 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Here \u03f5bias is the compatible-critic error, which can be very small for rich policy classes (Ding et al., 2022). We can leverage this to prove the global convergence of GPOMDP as in Section 7.1, this time to JD \u00b4ErJDp\u03b8qs\u010f\u03f5` ?\u03f5bias 1\u00b4\u03b3 . Tuning \u03c3A, we can achieve a sample complexity of r Op\u03f5\u00b41\u03bb\u00b44 E d4 Ap1\u00b4\u03b3q\u00b410q (see Theorem D.16 for details) This seems to violate the \u2126p\u03f5\u00b42q lower bound by Azar et al. (2013). However, the factor \u03bbE can depend on \u03c3A \u201cOp\u03f5q in highly non-trivial ways, and, thus, can hide additional factors of \u03f5. 
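To give Assumption 7.2 a concrete reading: for a policy linear in the state, $\mathbb{E}_{\pi_\theta}[\nabla_\theta \mu_\theta(s)\nabla_\theta \mu_\theta(s)^\top]$ equals $I_{d_A}\otimes\mathbb{E}[s s^\top]$ (up to the ordering of the vectorized parameters), so the explorability constant $\lambda_E$ is the smallest eigenvalue of the second-moment matrix of the states visited under the stochastic policy. The sketch below estimates it from sampled states; the state-sampling routine is assumed to be provided and is not shown, and this is an illustration rather than part of the paper's analysis.

```python
import numpy as np

def estimate_lambda_E(states):
    # 'states': array of shape (n, d_S) with states visited by the stochastic policy.
    # For a linear policy mu_Theta(s) = Theta @ s, Assumption 7.2 reduces to
    # lambda_min( E[s s^T] ) > 0, estimated here by its empirical counterpart.
    S = np.asarray(states, dtype=float)
    second_moment = S.T @ S / len(S)
    return float(np.linalg.eigvalsh(second_moment).min())
```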
For this reason, the results granted by the Fisher-non-degeneracy of white-noise policies are not compared with the ones granted by inherited WGD from Section 7.1. Intuitively, $\lambda_E$ encodes some difficulties of exploration that are absent in “nice” MDPs satisfying Assumption 7.1. See Appendix D.4 for further discussion and omitted proofs.

8. Numerical Validation
In this section, we empirically validate some of the theoretical results presented in the paper. We conduct a study on the gap in performance between the deterministic objective $J_D$ and the objectives of GPOMDP and PGPE ($J_A$ and $J_P$, respectively) by varying the value of their exploration parameters ($\sigma_A$ and $\sigma_P$, respectively). Details on the employed versions of PGPE and GPOMDP can be found in Appendix G. Additional experimental results can be found in Appendix H. We run PGPE and GPOMDP for $K = 2000$ iterations with batch size $N = 100$ on three environments from the MuJoCo (Todorov et al., 2012) suite: Swimmer-v4 ($T = 200$), Hopper-v4 ($T = 100$), and HalfCheetah-v4 ($T = 100$). For all the environments, the deterministic policy is linear in the state and the noise is Gaussian. We consider $\sigma^2 \in \{0.01, 0.1, 1, 10, 100\}$. More details in Appendix H.1. From Figure 1, we note that as the exploration parameter grows, the distance of $J_P(\theta_K)$ and $J_A(\theta_K)$ from $J_D(\theta_K)$ increases, coherently with Theorems 5.1 and 5.2. Among the tested values for $\sigma_P$ and $\sigma_A$, some lead to the highest values of $J_D(\theta_K)$. Empirically, we note that PGPE delivers the best deterministic policy with $\sigma_P^2 = 10$ for Swimmer and with $\sigma_P^2 = 1$ for the other environments. GPOMDP performs best with $\sigma_A^2 = 1$ for Swimmer and with $\sigma_A^2 = 10$ in the other cases. These outcomes agree with the theoretical results in showing that there exists an optimal value for the exploration level. We can also appreciate the trade-off between GPOMDP and PGPE w.r.t. the parameter dimensionality $d_\Theta$ and the horizon $T$ by comparing the best values of $J_D$ found by the two algorithms in each environment. GPOMDP is better than PGPE in Hopper and HalfCheetah, which can be explained by the fact that such environments are characterized by higher values of $d_\Theta$. Instead, in Swimmer, PGPE performs better than GPOMDP, which can be explained by the higher value of $T$ and the lower value of $d_\Theta$.

Figure 1. Variance study on MuJoCo (5 runs, mean ± 95% C.I.): final performance $J_P(\theta_K)$ vs. $J_D(\theta_K)$ as a function of $\sigma_P^2$ (PGPE) and $J_A(\theta_K)$ vs. $J_D(\theta_K)$ as a function of $\sigma_A^2$ (GPOMDP) on HalfCheetah, Hopper, and Swimmer.

9. Conclusions
We have perfected recent theoretical results on the global convergence of policy gradient algorithms to address the practical problem of finding a good deterministic parametric policy. We have studied the effects of noise on the learning process and identified a theoretical value of the variance of the (hyper)policy that allows to find a good deterministic policy using a polynomial number of samples.
We have compared the two common forms of noisy exploration, action-based and parameter-based, both from a theoretical and an empirical perspective. Our work paves the way for several exciting research directions. First, our theoretical selection of the policy variance is not practical, but our theoretical findings should guide the design of sound and efficient adaptive-variance schedules. We have shown how white-noise exploration preserves weak gradient domination; the natural next question is whether a sufficient amount of noise can smooth or even eliminate the local optima of the objective function. Finally, we have focused on “vanilla” policy gradient methods, but our ideas could be applied to more advanced algorithms, such as the ones recently proposed by Fatkhullin et al. (2023a), to find optimal deterministic policies with $\widetilde{\mathcal{O}}(\epsilon^{-2})$ samples.

Impact Statement
This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.",
  "additional_graph_info": {
    "graph": [
      ["Alessandro Montenegro", "Marco Mussi"],
      ["Alessandro Montenegro", "Alberto Maria Metelli"],
      ["Alessandro Montenegro", "Matteo Papini"],
      ["Marco Mussi", "Alberto Maria Metelli"],
      ["Alberto Maria Metelli", "Mirco Mutti"],
      ["Alberto Maria Metelli", "Matteo Papini"],
      ["Alberto Maria Metelli", "Filippo Lazzati"],
      ["Matteo Papini", "Matteo Pirotta"],
      ["Matteo Papini", "Andrea Tirinzoni"],
      ["Matteo Papini", "Alessandro Lazaric"]
    ]
  }
}
(7) Similarly, these assumptions ensure that if we deploy an altered parameter \u03b8, like in PB exploration, the effect on the played action (and on its gradient) is bounded. Assumptions 4.1 and 4.3 are standard in the DPG literature (Silver et al., 2014). Assumption 4.2, instead, can be interpreted as the counterpart of the Q-function smoothness used in the DPG analysis (Kumar et al., 2020; Xiong et al., 2022), while Assumption 4.4 has been used to study the convergence of DPG (Xiong et al., 2022). Similar conditions to our Assumption 4.1 were adopted by Pirotta et al. (2015), but measuring the continuity of p in the Kantorovich metric, a weaker requirement that, unfortunately, does not come with a corresponding smoothness condition. Assumptions on the (hyper)policies. We introduce the assumptions on the score functions of the white noise \u03a6. Assumption 4.5 (Bounded Scores of \u03a6). Let \u03a6P\u2206pRdq be a white noise with variance bound \u03c3\u01050 (Definition 3.1) and density \u03d5. \u03d5 is differentiable in its argument and there exists a universal constant c\u01050 s.t.: (i) E\u03f5\u201e\u03a6r}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u010fcd\u03c3\u00b42; (ii) E\u03f5\u201e\u03a6r}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u010fc\u03c3\u00b42. Intuitively, this assumption is equivalent to the more common ones requiring the boundedness of the expected norms of the score function (and its gradient) (Papini et al., 2022; Yuan et al., 2022, cf. Appendix E). Note that a zero-mean Gaussian \u03a6\u201cNp0d,\u03a3q fulfills Assumption 4.5. Indeed, one has \u2207\u03f5 log\u03d5p\u03f5q\u201c\u03a3\u00b41\u03f5 and \u22072 \u03f5 log\u03d5p\u03f5q\u201c \u03a3\u00b41. Thus, Er}\u2207\u03f5 log\u03d5p\u03f5q}2 2s\u201ctrp\u03a3\u00b41q\u010fd\u03bbminp\u03a3q\u00b41 and Er}\u22072 \u03f5 log\u03d5p\u03f5q}2s\u201c\u03bbminp\u03a3q\u00b41. In particular, for an isotropic Gaussian \u03a3\u201c\u03c32I, we have \u03bbminp\u03a3q\u201c\u03c32, fulfilling Assumption 4.5 with c\u201c1. 5. Deploying Deterministic Policies In this section, we study the performance JD of the deterministic policy \u00b5\u03b8, when the parameter \u03b8 is learned via AB or PB white noise-based exploration (Section 3). We will refer to this scenario as deploying the parameters, which reflects the common practice of \u201cswitching off the noise\u201d once the learning process is over. PB Exploration. Let us start with PB exploration by observing that for white noise-based hyperpolicies (Definition 3.3), we can express the expected return JP as a function of JD and of the noise \u03f5 for every \u03b8P\u0398: JPp\u03b8q\u201c E \u03f5\u201e\u03a6d\u0398 rJDp\u03b8`\u03f5qs. (8) This illustrates that PB exploration can be obtained by perturbing the parameter \u03b8 of a deterministic policy \u00b5\u03b8 via the noise \u03f5\u201e\u03a6d\u0398. To achieve guarantees on the deterministic performance JD of a parameter \u03b8 learned with PB exploration, we enforce the following regularity condition. Assumption 5.1 (Lipschitz JD w.r.t. \u03b8). JD is LJ-LC in the parameter \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: |JDp\u03b8q\u00b4JDp\u03b81q|\u010fLJ}\u03b8\u00b4\u03b81}2. (9) When the MDP and the deterministic policy are LC as in Assumptions 4.1 and 4.3, LJ is Opp1\u00b4\u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). 
This way, we guarantee that perturbation \u03f5 on the parameter \u03b8 determines a variation on function JD depending on the magnitude of \u03f5, which allows obtaining the following result. Theorem 5.1 (Deterministic deployment of parameters learned with PB white-noise exploration). If the hyperpolicy complies with Definition 3.3, under Assumption 5.1: (i) (Uniform bound) for every \u03b8P\u0398, it holds that |JDp\u03b8q\u00b4JPp\u03b8q|\u010fLJ ?d\u0398\u03c3P; (ii) (JD upper bound) Let \u03b8\u02da P Pargmax\u03b8P\u0398 JPp\u03b8q, it holds that: J\u02da D \u00b4JDp\u03b8\u02da Pq\u010f2LJ ?d\u0398\u03c3P; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Pq\u011b0.28LJ ?d\u0398\u03c3P. Some observations are in order. (i) shows that the performance of the hyperpolicy JPp\u03b8q is representative of the deterministic performance JDp\u03b8q up to an additive term depending on LJ ?d\u0398\u03c3P. As expected, this term grows with the Lipschitz constant LJ of the function JD, with the standard deviation \u03c3P of the additive noise, and with the dimensionality of the parameter space d\u0398. In particular, this implies that lim\u03c3P\u00d10` JPp\u03b8q\u201cJDp\u03b8q. (ii) is a consequence of (i) and provides an upper bound between the optimal performance obtained if we were able to directly optimize the deterministic policy max\u03b8P\u0398 JDp\u03b8q and the performance of the parameter \u03b8\u02da P learned by optimizing JPp\u03b8q, i.e., via 4 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients PB exploration, when deployed on the deterministic policy. Finally, (iii) provides a lower bound to the same quantity on a specific instance of MDP and hyperpolicy, proving that the dependence on LJ ?d\u0398\u03c3P is tight up to constant terms. AB Exploration. Let us move to the AB exploration case where understanding the effect of the noise is more complex since it is applied to every action independently at every step. To this end, we introduce the notion of non-stationary deterministic policy \u00b5\u201cp\u00b5tqT \u00b41 t\u201c0 , where at time step t the deterministic policy \u00b5t :S \u00d1A is played, and its expected return (with abuse of notation) is JDp\u00b5q\u201cE\u03c4\u201epDp\u00a8|\u00b5qrRp\u03c4qs where pDp\u00a8|\u00b5q:\u201c\u03c10ps\u03c4,0q\u015bT \u00b41 t\u201c0 pps\u03c4,t`1|s\u03c4,t,\u00b5tps\u03c4,tqq. Let \u03f5\u201c p\u03f5tqT \u00b41 t\u201c0 \u201e\u03a6T dA be a sequence of noises sampled independently, we denote with \u00b5\u03b8 `\u03f5\u201cp\u00b5\u03b8 `\u03f5tqT \u00b41 t\u201c0 the nonstationary policy that, at time t, perturbs the action as \u00b5\u03b8pstq`\u03f5t. Since the noise is independent on the state, we express JA as a function of JD for every \u03b8P\u0398 as follows: JAp\u03b8q\u201c E \u03f5\u201e\u03a6T dA \u201d JDp\u00b5\u03b8 `\u03f5q \u0131 . (10) Thus, to ensure that the parameter learned by AB exploration achieves performance guarantees when evaluated as a deterministic policy, we need to enforce some regularity condition on JD as a function of \u00b5. Assumption 5.2 (Lipschitz JD w.r.t. \u00b5). 
JD of the nonstationary deterministic policy \u00b5 is pLtqT \u00b41 t\u201c0 -LC in the nonstationary policy, i.e., for every \u00b5,\u00b51: |JDp\u00b5q\u00b4JDp\u00b51q|\u010f T \u00b41 \u00ff t\u201c0 Lt sup sPS \u203a \u203a\u00b5tpsq\u00b4\u00b51 tpsq \u203a \u203a 2 . (11) Furthermore, we denote L:\u201c\u0159T \u00b41 t\u201c0 Lt. When the MDP is LC as in Assumptions 4.1, L is Opp1\u00b4 \u03b3q\u00b42q (see Table 2 in Appendix A for the full expression). The assumption enforces that changing the deterministic policy at step t from \u00b5t to \u00b51 t, the variation of JD is controlled by the action distance (in the worst state s) multiplied by a time-dependent Lipschitz constant. This form of condition allows us to show the following result. Theorem 5.2 (Deterministic deployment of parameters learned with AB white-noise exploration). If the policy complies with Definition 3.2 and under Assumption 5.2: (i) (Uniform bound) for every \u03b8P\u0398, it holds that: |JDp\u03b8q\u00b4JAp\u03b8q|\u010fL?dA\u03c3A; (ii) (JD upper bound) Letting \u03b8\u02da A Pargmax\u03b8P\u0398 JAp\u03b8q, it holds that J\u02da D \u00b4JDp\u03b8\u02da Aq\u010f2L?dA\u03c3A; (iii) (JD lower bound) There exists an MDP, a deterministic policy class \u00b5\u03b8 fulfilling Assumption 5.1, and a noise complying with Definition 3.1, such that J\u02da D \u00b4JDp\u03b8\u02da Aq\u011b0.28L?dA\u03c3A. Similarly to Theorem 5.1, (i) and (ii) provide an upper bound on the difference between the policy performance JAp\u03b8q and the corresponding deterministic policy JDp\u03b8q and on the performance of \u03b8\u02da A when deployed on a deterministic policy. Clearly, also in the AB exploration, we have that lim\u03c3A\u00d10` JAp\u03b8q\u201cJDp\u03b8q. As in the PB case, (iii) shows that the upper bound (ii) is tight up to constant terms. Finally, let us note that our bounds for PB exploration depend on the dimension of the parameter space d\u0398 that is replaced by that of the action space dA in AB exploration. 6. Global Convergence Analysis In this section, we present our main results about the convergence of AB and PB white noise-based exploration to global optimal parameter \u03b8\u02da D of the performance of the deterministic policy JD. Let K PN be the number of iterations and N the batch size; given an accuracy threshold \u03f5\u01050, our goal is to bound the sample complexity NK to fulfill the following last-iterate global convergence condition: J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5, (12) where \u03b8K is the (hyper)parameter at the end of learning. 6.1. General Global Convergence Analysis In this section, we provide a global convergence analysis for a generic stochastic first-order algorithm optimizing the differentiable objective function J: on the parameters space \u0398\u010eRd, that can be instanced for both AB (setting J: \u201cJA) and PB (setting J: \u201cJP) exploration, when optimizing the corresponding objective. At every iteration kPJKK, the algorithm performs the gradient ascent update: \u03b8k`1 \u00d0 \u00dd\u03b8k `\u03b6k p \u2207\u03b8J:p\u03b8kq, (13) where \u03b6k \u01050 is the step size and p \u2207\u03b8J:p\u03b8kq is an unbiased estimate of \u2207\u03b8J:p\u03b8kq and denote J\u02da : \u201cmax\u03b8P\u0398 J:p\u03b8q. We enforce the following standard assumptions. Assumption 6.1 (Weak gradient domination for J:). There exist \u03b1\u01050 and \u03b2 \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8J:p\u03b8q}2 `\u03b2. 
Assumption 6.1 is the gold standard for the global convergence of stochastic optimization (Yuan et al., 2022; Masiha et al., 2022; Fatkhullin et al., 2023a). Note that, when \u03b2 \u201c0, we recover the (strong) gradient domination (GD) property: J\u02da : \u00b4J:p\u03b8q\u010f\u03b1}\u2207\u03b8Jp:\u03b8q}2 for all \u03b8P\u0398. GD is stricter than WGD, and requires that J: has no local optima. Instead, WGD admits local maxima as long as their performance is \u03b2-close to the globally optimal one.5 Assumption 6.2 (Smooth J: w.r.t. parameters \u03b8). J: is 5In this section, we will assume that J: (i.e., either JA or JA) is already endowed with the WGD property. In Section 7, we illustrate how it can be obtained in several common scenarios. 5 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients L2,:-LS w.r.t. parameters \u03b8, i.e., for every \u03b8,\u03b81 P\u0398: }\u2207\u03b8J:p\u03b81q\u00b4\u2207\u03b8J:p\u03b8q}2 \u010fL2,:}\u03b81 \u00b4\u03b8}2. (14) Assumption 6.2 is ubiquitous in the convergence analysis of policy gradient algorithms (Papini et al., 2018; Agarwal et al., 2021; Yuan et al., 2022; Bhandari & Russo, 2024), which is usually studied as an instance of (nonconvex) smooth stochastic optimization. The smoothness of J: PtJA,JPu can be: (i) inherited from the deterministic objective JD (originating, in turn, from the regularity of the MDP) and of the deterministic policy \u00b5\u03b8 (Assumptions 4.14.4); or (ii) enforced through the properties on the white noise \u03a6 (Assumption 4.5). The first result was observed in a similar form by Pirotta et al. (2015, Theorem 3), while a generalization of the second was established by Papini et al. (2022) and refined by Yuan et al. (2022). Assumption 6.3 (Bounded estimator variance p \u2207\u03b8J:p\u03b8q). The estimator p \u2207\u03b8J:p\u03b8q computed with batch size N has a bounded variance, i.e., there exists V: \u011b0 such that, for every \u03b8P\u0398, we have: Varrp \u2207\u03b8J:p\u03b8qs\u010fV:{N. Assumption 6.3 guarantees that the gradient estimator is characterized by a bounded variance V: which scales with the batch size N. Under Assumptions 4.5 (and 4.4 for GPOMDP), the term V: can be further characterized (see Table 2 in Appendix A). We are now ready to state the global convergence result. Theorem 6.1. Consider an algorithm running the update rule of Equation (13). Under Assumptions 6.1, 6.2, and 6.3, with a suitable constant step size, to guarantee J\u02da : \u00b4ErJ:p\u03b8Kqs\u010f\u03f5`\u03b2 the sample complexity is at most: NK \u201c 16\u03b14L2,:V: \u03f53 log maxt0,J\u02da : \u00b4J:p\u03b80q\u00b4\u03b2u \u03f5 . (15) This result establishes a convergence of order r Op\u03f5\u00b43q to the global optimum J\u02da : of the general objective J:. Recalling that J: PtJA,JPu, Theorem 6.1 provides: (i) the first global convergence guarantee for PGPE for PB exploration (setting J: \u201cJP) and (ii) a global convergence guarantee for PG (e.g., GPOMDP) for AB exploration of the same order (up to logarithmic terms in \u03f5\u00b41) of the state-of-the-art one of Yuan et al. (2022) (setting J: \u201cJA). Note that our guarantee is obtained for a constant step size and holds for the last parameter \u03b8K, delivering a last-iterate result, rather than a best-iterate one as in (Yuan et al., 2022, Corollary 3.7). 
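To make the update rule of Equation (13) concrete, the following is a minimal Python sketch of the generic stochastic gradient-ascent scheme analyzed in Theorem 6.1. The function grad_estimator is a placeholder for any unbiased estimator of the gradient of the chosen objective (e.g., a GPOMDP-style estimate for AB exploration or a PGPE-style estimate for PB exploration, averaged over a batch of N trajectories); the constant step size and the toy instantiation are our illustrative assumptions, not prescriptions of the paper.

```python
import numpy as np

def stochastic_gradient_ascent(grad_estimator, theta0, zeta, K, rng=None):
    """Generic scheme of Equation (13): theta_{k+1} <- theta_k + zeta * grad_hat J(theta_k).

    grad_estimator(theta, rng) must return an unbiased estimate of the gradient of the
    optimized objective (J_A for action-based, J_P for parameter-based exploration),
    typically averaged over a batch of N independent trajectories.
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    for _ in range(K):
        g_hat = grad_estimator(theta, rng)  # unbiased stochastic gradient
        theta = theta + zeta * g_hat        # constant step size, as in Theorem 6.1
    return theta                            # last iterate theta_K (last-iterate guarantee)

# Toy check on a smooth concave surrogate J(theta) = -||theta||^2 with noisy gradients:
theta_K = stochastic_gradient_ascent(
    lambda th, rng: -2.0 * th + 0.1 * rng.standard_normal(th.shape),
    theta0=np.ones(3), zeta=0.05, K=500)
```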
Clearly, this result is not yet our ultimate goal since, we need to assess how far the performance of the learned parameter \u03b8K is from that of the optimal deterministic objective J\u02da D. 6.2. Global Convergence of PGPE and GPOMDP In this section, we provide results on the global convergence of PGPE and GPOMDP with white-noise exploration. The sample complexity bounds are summarized in Table 1 and presented extensively in Appendix D. They all follow from our general Theorem 6.1 and our results on the deployment of deterministic policies from Section 5. PGPE. We start by commenting on the sample complexity of PGPE for a constant, generic hyperpolicy variance \u03c3P , shown in the first column. First, the guarantee on J\u02da D \u00b4ErJDp\u03b8Kqs contains the additional variancedependent term 3LP ?d\u0398\u03c3P originating from the deterministic deployment. Second, the sample complexity scales with r Op\u03f5\u00b43q. Third, by enforcing the smoothness of the MDP and of the deterministic policy (Assumptions 4.2 and 4.4), we improve the dependence on d\u0398 and on \u03c3P at the price of an additional p1\u00b4\u03b3q\u00b41 factor. A choice of \u03c3P which adapts to \u03f5 allows us to achieve the global convergence on the deterministic objective JD, up to \u03f5`\u03b2 only. Moving to the second column, we observe that the convergence rate becomes r Op\u03f5\u00b47q, which reduces to r Op\u03f5\u00b45q with the additional smoothness assumptions, which also improve the dependence on both p1\u00b4\u03b3q\u00b41 and d\u0398. The slower rate \u03f5\u00b45 or \u03f5\u00b47, compared to the \u03f5\u00b43 of the fixedvariance case, is easily explained by the more challenging requirement of converging to the optimal deterministic policy rather than the optimal stochastic hyperpolicy, as for standard PGPE. Note that we have set the standard deviation equal to \u03c3P \u201c \u03f5 6LP ?d\u0398 \u201cOp\u03f5p1\u00b4\u03b3q2d\u00b41{2 \u0398 q that, as expected, decreases with the desired accuracy \u03f5.6 GPOMDP. We now consider the global convergence of GPOMDP, starting again with a generic policy variance \u03c3A (third column). The result is similar to that of PGPE with three notable exceptions. First, an additional p1\u00b4\u03b3q\u00b41 factor appears in the sample complexity due the variance bound of GPOMDP (Papini et al., 2022). This suggests that GPOMDP struggles more than PGPE in long-horizon environments, as already observed by Zhao et al. (2011). Second, the dependence on the dimensionality of the parameter space d\u0398 is replaced with the dimensionality of the action space dA. This is expected and derives from the nature of exploration that is performed in the parameter space for PGPE and in the action space for GPOMPD. Finally, the smoothness of the deterministic policy (Asm. 4.4) is always needed. Adding also the smoothness of the MDP (Asm. 4.2), we can trade a dA factor for a p1\u00b4\u03b3q\u00b41 one. Again, a careful \u03f5-dependent choice of \u03c3A allows us to achieve global convergence on the deterministic objective JD. In the last column, we can notice that the convergence rates display the same dependence on \u03f5 as in PGPE. How6These results should be interpreted as a demonstration that global convergence to deterministic policies is possible rather than a practical recipe to set the value of \u03c3P. We do hope that our theory can guide the design of practical solutions in future works. 
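Purely as an illustration of the scaling above (and not as a practical recipe, per the preceding remark), the epsilon-dependent exploration level can be computed as a one-line helper; the Lipschitz constant L_P and the dimension d_Theta are inputs that would have to be known or upper-bounded in advance.

```python
import math

def sigma_p(eps, L_P, d_theta):
    """Theoretical hyperpolicy standard deviation sigma_P = eps / (6 * L_P * sqrt(d_Theta)).
    An analogous eps-proportional choice is used for sigma_A in the action-based case
    (its exact constant is not reproduced here)."""
    return eps / (6.0 * L_P * math.sqrt(d_theta))
```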
6 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Table 1. Sample complexity NK \u201c r Op\u00a8q of GPOMDP and PGPE to converge to a deterministic optimal policy, retaining only dependencies on \u03f5, p1\u00b4\u03b3q\u00b41, \u03c3A, \u03c3P, d\u0398, dA, and \u03b1. Task-dependent constants LP and LA are Opp1\u00b4\u03b3q\u00b42q\u2014see Table 2 in Appendix A. ever, the dependence on the effective horizon p1\u00b4\u03b3q\u00b41 is worse. In this case, the additional smoothness assumption improves the dependency on dA and p1\u00b4\u03b3q\u00b41. 7. About the Weak Gradient Domination So far, we have assumed WGD for the AB JA and PB JP (Assumption 6.1). In this section, we discuss several scenarios in which such an assumption holds. 7.1. Inherited Weak Gradient Domination We start by discussing the case in which the deterministic policy objective JD already enjoys the (W)GD property. Assumption 7.1 (Weak gradient domination for JD). There exist \u03b1D \u01050 and \u03b2D \u011b0 such that for every \u03b8P\u0398 it holds that J\u02da D \u00b4JDp\u03b8q\u010f\u03b1D}\u2207\u03b8JDp\u03b8q}2 `\u03b2D. Although the notion of WGD has been mostly applied to stochastic policies in the literature (Liu et al., 2020; Yuan et al., 2022), there is no reason why it should not be plausible for deterministic policies. Bhandari & Russo (2024) provide sufficient conditions for the performance function not to have any local optima, which is a stronger condition, without discriminating between deterministic and stochastic policies (cf. their Remark 1). Moreover, one of their examples is linear-quadratic regulators with deterministic linear policies. We show that, under Lipschiztianity and smoothness of the MDP and deterministic policy (Assumptions 4.1-4.4), this is sufficient to enforce the WGD property for both the PB JP and the AB JA objectives. Let us start with JP. Theorem 7.1 (Inherited weak gradient domination for JP). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JP \u02da \u00b4JPp\u03b8q\u010f\u03b1D}\u2207\u03b8JPp\u03b8q}2 `\u03b2D `p\u03b1DL2 `LP q\u03c3P a d\u0398, where L2 \u201cOpp1\u00b4\u03b3q\u00b43q (full expression in Lemma E.2). The result shows that the WGD property of JD entails that of JP with the same \u03b1D coefficient, but a different \u03b2 \u201c \u03b2Dp\u03b1DL2 `LP q\u03c3P ?d\u0398 that accounts for the gap between the two objectives encoded in \u03c3P. Note that even if JD enjoys a (strong) GD (i.e., \u03b2D \u201c0), in general, JP inherits a WGD property. In the setting of Theorem 7.1, convergence in the sense of J\u02da D \u00b4ErJDp\u03b8Kqs\u010f\u03f5`\u03b2D can be achieved with r Op\u03b16 D\u03f5\u00b45d2 \u0398p1\u00b4\u03b3q\u00b411q samples by carefully setting the hyperpolicy variance (see Theorem D.12 for details). An analogous result can be obtained for AB exploration. Theorem 7.2 (Inherited weak gradient domination on JA). Under Assumptions 7.1, 4.1, 4.3, 4.2, 4.4, for every \u03b8P\u0398: JA \u02da \u00b4JAp\u03b8q\u010f\u03b1D}\u2207\u03b8JAp\u03b8q}2 `\u03b2D `p\u03b1D\u03c8`LAq\u03c3A a dA, where \u03c8\u201cOpp1\u00b4\u03b3q\u00b44q (full expression in the proof). The sample complexity, in this case, is r Op\u03b16 D\u03f5\u00b45d2 Ap1\u00b4 \u03b3q\u00b414q (see Theorem D.13 for details). 7.2. 
Policy-induced Weak Gradient Domination When the the objective function does not enjoy weak gradient domination in the space of deterministic policies, we can still have WGD with respect to stochastic policies if they satisfy a condition known as Fisher-non-degeneracy (Liu et al., 2020; Ding et al., 2022). As far as we know, WGD by Fishernon-degeneracy is a peculiar property of AB exploration that has no equivalent in PB exploration. White-noise policies satisfying Assumption 4.5 are Fisher-non-degenerate under the following standard assumption (Liu et al., 2020): Assumption 7.2 (Explorability). There exists \u03bbE \u01050 s.t. E\u03c0\u03b8r\u2207\u03b8\u00b5\u03b8psq\u2207\u03b8\u00b5\u03b8psqJs\u013e\u03bbEI for all \u03b8P\u0398, where the expectation over states is induced by the stochastic policy. We can use this fact to prove WGD for white-noise policies: Theorem 7.3 (Policy-induced weak gradient domination). Under Assumptions 4.5, 7.2 and D.1, we have: JA \u02da \u00b4JAp\u03b8q\u010fC ?dA\u03c3A \u03bbE }\u2207\u03b8JAp\u03b8q}2 ` ?\u03f5bias 1\u00b4\u03b3 , for some numerical constant C \u01050, that is, Assumption 6.1 (:=A) is satisfied with \u03b1\u201cC ?dA\u03c3A \u03bbE and \u03b2 \u201c ?\u03f5bias 1\u00b4\u03b3 . 7 \fLearning Optimal Deterministic Policies with Stochastic Policy Gradients Here \u03f5bias is the compatible-critic error, which can be very small for rich policy classes (Ding et al., 2022). We can leverage this to prove the global convergence of GPOMDP as in Section 7.1, this time to JD \u00b4ErJDp\u03b8qs\u010f\u03f5` ?\u03f5bias 1\u00b4\u03b3 . Tuning \u03c3A, we can achieve a sample complexity of r Op\u03f5\u00b41\u03bb\u00b44 E d4 Ap1\u00b4\u03b3q\u00b410q (see Theorem D.16 for details) This seems to violate the \u2126p\u03f5\u00b42q lower bound by Azar et al. (2013). However, the factor \u03bbE can depend on \u03c3A \u201cOp\u03f5q in highly non-trivial ways, and, thus, can hide additional factors of \u03f5. For this reason, the results granted by the Fishernon-degeneracy of white-noise policies are not compared with the ones granted by inherited WGD from Section 7.1. Intuitively, \u03bbE encodes some difficulties of exploration that are absent in \u201cnice\u201d MDPs satisfying Assumption 7.1. See Appendix D.4 for further discussion and omitted proofs. 8. Numerical Validation In this section, we empirically validate some of the theoretical results presented in the paper. We conduct a study on the gap in performance between the deterministic objective JD and the ones of GPOMDP and PGPE (respectively JA and JP) by varying the value of their exploration parameters (\u03c3A and \u03c3P, respectively). Details on the employed versions of PGPE and GPOMDP can be found in Appendix G. Additional experimental results can be found in Appendix H. We run PGPE and GPOMDP for K \u201c2000 iterations with batch size N \u201c100 on three environments from the MuJoCo (Todorov et al., 2012) suite: Swimmer-v4 (T \u201c200), Hopper-v4 (T \u201c100), and HalfCheetah-v4 (T \u201c100). For all the environments the deterministic policy is linear in the state and the noise is Gaussian. We consider \u03c32 : P t0.01,0.1,1,10,100u. More details in Appendix H.1. From Figure 1, we note that as the exploration parameter grows, the distance of JPp\u03b8Kq and JAp\u03b8Kq from JDp\u03b8Kq increases, coherently with Theorems 5.1 and 5.2. Among the tested values for \u03c3P and \u03c3A, some lead to the highest values of JDp\u03b8Kq. 
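At a schematic level, the setup just described (linear-in-state deterministic policy, Gaussian white noise, variance grid) can be sketched as below. We assume a Gymnasium-style MuJoCo environment; the PGPE/GPOMDP parameter updates are omitted, the discount factor and the action clipping are our additions for the simulator, and the sketch only shows how trajectories are generated under AB exploration, PB exploration, and at deployment with the noise switched off.

```python
import numpy as np
import gymnasium as gym  # assumes a Gymnasium-style MuJoCo installation

GAMMA = 0.99  # illustrative discount factor (not specified in this section)

def rollout(env, theta_mat, sigma, mode, rng, horizon=200):
    """One trajectory with the linear deterministic policy mu_theta(s) = theta_mat @ s.
    mode: 'AB' adds Gaussian noise to every action, 'PB' perturbs the parameters once
    at the start of the trajectory, 'det' switches the noise off (deployment)."""
    if mode == "PB":
        theta_mat = theta_mat + sigma * rng.standard_normal(theta_mat.shape)
    s, _ = env.reset()
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        a = theta_mat @ s
        if mode == "AB":
            a = a + sigma * rng.standard_normal(a.shape)
        a = np.clip(a, env.action_space.low, env.action_space.high)
        s, r, terminated, truncated, _ = env.step(a)
        ret += discount * r
        discount *= GAMMA
        if terminated or truncated:
            break
    return ret

rng = np.random.default_rng(0)
env = gym.make("Swimmer-v4")
d_s, d_a = env.observation_space.shape[0], env.action_space.shape[0]
theta = np.zeros((d_a, d_s))
for sigma2 in [0.01, 0.1, 1.0, 10.0, 100.0]:   # the variance grid used in the experiments
    sigma = np.sqrt(sigma2)
    # ... PGPE (mode='PB') or GPOMDP (mode='AB') updates of theta would go here ...
    J_D = rollout(env, theta, 0.0, "det", rng)  # deployed deterministic performance
```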
Empirically, we note that PGPE delivers the best deterministic policy with $\sigma_P^2 = 10$ for Swimmer and with $\sigma_P^2 = 1$ for the other environments. GPOMDP performs best with $\sigma_A^2 = 1$ for Swimmer, and with $\sigma_A^2 = 10$ in the other cases. These outcomes agree with the theoretical results in showing that there exists an optimal value of the exploration level. We can also appreciate the trade-off between GPOMDP and PGPE w.r.t. the parameter dimensionality $d_\Theta$ and the horizon $T$, by comparing the best values of $J_D$ found by the two algorithms in each environment. GPOMDP is better than PGPE in Hopper and HalfCheetah. This can be explained by the fact that such environments are characterized by higher values of $d_\Theta$. Instead, in Swimmer, PGPE performs better than GPOMDP. This can be explained by the higher value of $T$ and the lower value of $d_\Theta$. [Figure 1, six panels: (a) PGPE on HalfCheetah, (b) GPOMDP on HalfCheetah, (c) PGPE on Hopper, (d) GPOMDP on Hopper, (e) PGPE on Swimmer, (f) GPOMDP on Swimmer; each panel plots $J_D(\theta_K)$ together with $J_P(\theta_K)$ (for PGPE) or $J_A(\theta_K)$ (for GPOMDP) against $\sigma_P^2$ or $\sigma_A^2$ on a logarithmic axis spanning $10^{-2}$ to $10^{2}$.] Figure 1. Variance study on MuJoCo (5 runs, mean $\pm$ 95% C.I.). 9. Conclusions We have perfected recent theoretical results on the global convergence of policy gradient algorithms to address the practical problem of finding a good deterministic parametric policy. We have studied the effects of noise on the learning process and identified a theoretical value of the variance of the (hyper)policy that allows finding a good deterministic policy using a polynomial number of samples. We have compared the two common forms of noisy exploration, action-based and parameter-based, both from a theoretical and an empirical perspective. Our work paves the way for several exciting research directions. First, our theoretical selection of the policy variance is not practical, but our theoretical findings should guide the design of sound and efficient adaptive-variance schedules. We have shown how white-noise exploration preserves weak gradient domination; the natural next question is whether a sufficient amount of noise can smooth or even eliminate the local optima of the objective function. Finally, we have focused on "vanilla" policy gradient methods, but our ideas could be applied to more advanced algorithms, such as the ones recently proposed by Fatkhullin et al. (2023a), to find optimal deterministic policies with $\widetilde{\mathcal{O}}(\epsilon^{-2})$ samples. Impact Statement This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here." + } + ], + "Marco Mussi": [ + { + "url": "http://arxiv.org/abs/2302.07510v2", + "title": "Best Arm Identification for Stochastic Rising Bandits", + "abstract": "Stochastic Rising Bandits (SRBs) model sequential decision-making problems in\nwhich the expected rewards of the available options increase every time they\nare selected.
This setting captures a wide range of scenarios in which the\navailable options are learning entities whose performance improves (in\nexpectation) over time. While previous works addressed the regret minimization\nproblem, this paper, focuses on the fixed-budget Best Arm Identification (BAI)\nproblem for SRBs. In this scenario, given a fixed budget of rounds, we are\nasked to provide a recommendation about the best option at the end of the\nidentification process. We propose two algorithms to tackle the above-mentioned\nsetting, namely R-UCBE, which resorts to a UCB-like approach, and R-SR, which\nemploys a successive reject procedure. Then, we prove that, with a sufficiently\nlarge budget, they provide guarantees on the probability of properly\nidentifying the optimal option at the end of the learning process. Furthermore,\nwe derive a lower bound on the error probability, matched by our R-SR (up to\nlogarithmic factors), and illustrate how the need for a sufficiently large\nbudget is unavoidable in the SRB setting. Finally, we numerically validate the\nproposed algorithms in both synthetic and real-world environments and compare\nthem with the currently available BAI strategies.", + "authors": "Marco Mussi, Alessandro Montenegro, Francesco Trov\u00f3, Marcello Restelli, Alberto Maria Metelli", + "published": "2023-02-15", + "updated": "2023-06-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Multi-Armed Bandits (MAB, Lattimore and Szepesv\u00e1ri, 2020) are a well-known framework that effectively solves learning problems requiring sequential decisions. Given a time horizon, the learner chooses, at each round, a single option (a.k.a. arm) and observes the corresponding noisy reward, which is a realization of an unknown distribution. The MAB problem is commonly studied in two flavours: regret minimization (Auer et al., 2002) and best arm identification (Bubeck et al., 2009). In regret minimization, the goal is to control the cumulative loss w.r.t. the optimal arm over a time horizon. Conversely, in best arm identification, the goal is to provide a recommendation about the best arm at the end of the time horizon. Specifically, we are interested in the fixed-budget scenario, where we seek to minimize the error probability of recommending the wrong arm at the end of the time budget, no matter the loss incurred during learning. This work focuses on the Stochastic Rising Bandits (SRB), a specific instance of the rested bandit (Tekin and Liu, 2012) setting in which the expected reward of an arm increases according to the Preprint. Under review. arXiv:2302.07510v2 [cs.LG] 1 Jun 2023 \fnumber of times it has been pulled. Online learning in such a scenario has been recently addressed from a regret minimization perspective by Metelli et al. (2022), in which the authors provide noregret algorithms for the SRB setting in both the rested and restless cases. The SRB setting models several real-world scenarios where arms improve their performance over time. A classic example is the so-called Combined Algorithm Selection and Hyperparameter optimization (CASH, Thornton et al., 2013; Kotthoff et al., 2017; Erickson et al., 2020; Li et al., 2020; Z\u00f6ller and Huber, 2021), a problem of paramount importance in Automated Machine Learning (AutoML, Feurer et al., 2015; Yao et al., 2018; Hutter et al., 2019; Mussi et al., 2023). 
In CASH, the goal is to identify the best learning algorithm together with the best hyperparameter configuration for a given ML task (e.g., classification or regression). In this problem, every arm represents a hyperparameter tuner acting on a specific learning algorithm. A pull corresponds to a unit of time/computation in which we improve (on average) the hyperparameter configuration (via the tuner) for the corresponding learning algorithm.1 CASH was handled in a bandit Best Arm Identification (BAI) fashion in Li et al. (2020) and Cella et al. (2021). The former handles the problem by considering rising rested bandits with deterministic rewards, failing to represent the intrinsic uncertain nature of such processes. Instead, the latter, while allowing stochastic rewards, assumes that the expected rewards evolve according to a known parametric functional class, whose parameters have to be learned. Original Contributions In this paper, we address the design of algorithms to solve the BAI task in the rested SRB setting when a fixed budget is provided.2 More specifically, we are interested in algorithms guaranteeing a sufficiently large probability of recommending the arm with the largest expected reward at the end of the time budget (as if only this arm were pulled from the beginning). The main contributions of the paper are summarized as follows:3 \u2022 We propose two algorithms to solve the BAI problem in the SRB setting: R-UCBE (an optimistic approach, Section 4) and R-SR (a phases-based rejection algorithm, Section 5). First, we introduce specifically designed estimators required by the algorithms (Section 3). Then, we provide guarantees on the error probability of the misidentification of the best arm. \u2022 We derive the first error probability lower bound for the SRB setting, matched by our R-SR algorithm up to logarithmic factors, which highlights the complexity of the problem and the need for a sufficiently large time budget (Section 6). \u2022 Finally, we conduct numerical simulations on synthetically generated data and a real-world online best model selection problem. We compare the proposed algorithms with the ones available in the bandit literature to tackle the SRB problem (Section 8). 2 Problem Formulation In this section, we revise the Stochastic Rising Bandits (SRB) setting (Heidari et al., 2016; Metelli et al., 2022). Then, we formulate our best arm identification problem, introduce the definition of error probability, and provide a preliminary characterization of the problem. Setting We consider a rested Multi-Armed Bandit problem \u03bd \u201c p\u03bdiqiPJKK with a finite number of arms K.4 Let T P N be the time budget of the learning process. At every round t P JTK, the agent selects an arm It P JKK, plays it, and observes a reward xt \u201e \u03bdItpNIt,tq, where \u03bdItpNIt,tq is the reward distribution of the chosen arm It at round t and depends on the number of pulls performed so far Ni,t :\u201c \u0159t \u03c4\u201c1 1tI\u03c4 \u201c iu (i.e., rested). The rewards are stochastic, formally xt :\u201c \u00b5ItpNIt,tq ` \u03b7t, where \u00b5Itp\u00a8q is the expected reward of arm It and \u03b7t is a zero-mean \u03c32subgaussian noise, conditioned to the past.5 As customary in the bandit literature, we assume that the rewards are bounded in expectation, formally \u00b5ipnq P r0, 1s, @i P JKK, n P JTK. 
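A minimal sketch of the interaction protocol just described: each arm keeps its own pull counter N_i, and the observed reward is its expected value at the current number of pulls plus Gaussian (hence sigma-subgaussian) noise. The specific reward curves below are illustrative placeholders of ours, not part of the setting's definition.

```python
import numpy as np

class RestedRisingBandit:
    """Rested MAB: the expected reward of arm i depends only on N_i, its own pull count."""

    def __init__(self, mu_funcs, sigma, seed=0):
        self.mu_funcs = mu_funcs            # list of functions n -> mu_i(n), one per arm
        self.sigma = sigma                  # std. dev. of the Gaussian (sub-Gaussian) noise
        self.pulls = [0] * len(mu_funcs)    # counters N_{i,t}
        self.rng = np.random.default_rng(seed)

    def pull(self, i):
        self.pulls[i] += 1
        mean = self.mu_funcs[i](self.pulls[i])
        return mean + self.sigma * self.rng.standard_normal()

# Illustrative non-decreasing, concave curves bounded in [0, 1]: mu_i(n) = b_i * (1 - rho_i^n)
mu_funcs = [lambda n, b=b, r=r: b * (1.0 - r ** n)
            for b, r in [(0.9, 0.99), (0.8, 0.9), (0.7, 0.8)]]
bandit = RestedRisingBandit(mu_funcs, sigma=0.1)
x = bandit.pull(0)   # one noisy reward from the first arm
```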
As in (Metelli et al., 2022), we focus on a particular family of rested bandits in which the expected rewards are monotonically non-decreasing and concave in expectation. Assumption 2.1 (Non-decreasing and concave expected rewards). Let \u03bd be a rested MAB, defining \u03b3ipnq :\u201c \u00b5ipn ` 1q \u00b4 \u00b5ipnq, for every n P N and every arm i P JKK the rewards are non-decreasing 1Additional motivating examples are discussed in Appendix A. 2We focus on the rested setting only and, thus, from now on, we will omit \u201crested\u201d in the setting name. 3The proofs of all the statements in this work are provided in Appendix C. 4Let y, z P N, we denote with JzK :\u201c t1, . . . , zu, and with Jy, zK :\u201c ty, . . . , zu. 5A zero-mean random variable x is \u03c32-subgaussian if it holds Exre\u03bexs \u010f e \u03c32\u03be2 2 for every \u03be P R. 2 \fand concave, formally: Non-decreasing: \u03b3ipnq \u011b 0, Concave: \u03b3ipn ` 1q \u010f \u03b3ipnq. Intuitively, the \u03b3ipnq represents the increment of the real process \u00b5ip\u00a8q evaluated at the nth pull. Notice that concavity emerges in several settings, such as the best model selection and economics, representing the decreasing marginal returns (Lehmann et al., 2001; Heidari et al., 2016). Learning Problem The goal of BAI in the SRB setting is to select the arm providing the largest expected reward with a large enough probability given a fixed budget T P N. Unlike the stationary BAI problem (Audibert et al., 2010), in which the optimal arm is not changing, in this setting, we need to decide when to evaluate the optimality of an arm. We define optimality by considering the largest expected reward at time T. Formally, given a time budget T, the optimal arm i\u02dapTq P JKK, which we assume unique, satisfies: i\u02dapTq :\u201c arg max iPJKK \u00b5ipTq, where we highlighted the dependence on T as, with different values of the budget, i\u02dapTq may change. Let i P JKKzti\u02dapTqu be a suboptimal arm, we define the suboptimality gap as \u2206ipTq :\u201c \u00b5i\u02dapT qpTq\u00b4\u00b5ipTq. We employ the notation piq P JKK to denote the ith best arm at time T (arbitrarily breaking ties), i.e., we have \u2206p2qpTq \u010f \u00a8 \u00a8 \u00a8 \u010f \u2206pKqpTq. Given an algorithm A that recommends \u02c6 I\u02dapTq P JKK at the end of the learning process, we measure its performance with the error probability, i.e., the probability of recommending a suboptimal arm at the end of the time budget T: eT pAq :\u201c PAp\u02c6 I\u02dapTq \u2030 i\u02dapTqq. Problem Characterization We now provide a characterization of a specific class of polynomial functions to upper bound the increments \u03b3ipnq. Assumption 2.2 (Bounded \u03b3ipnq). Let \u03bd be a rested MAB, there exist c \u0105 0 and \u03b2 \u0105 1 such that for every arm i P JKK and number of pulls n P J0, TK it holds that \u03b3ipnq \u010f cn\u00b4\u03b2. We anticipate that, even if our algorithms will not require such an assumption, it will be used for deriving the lower bound and for providing more human-readable error probability guarantees. Furthermore, we observe that our Assumption 2.2 is fulfilled by a strict superset of the functions employed in Cella et al. (2021). 3 Estimators In this section, we introduce the estimators of the arm expected reward employed by the proposed algorithms.6 A visual representation of such estimators is provided in Figure 1. 
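Before constructing the estimators, it may help to keep in mind a concrete family of expected-reward curves satisfying both assumptions above (this particular choice is ours, for illustration only):
$$\mu_i(n) \;=\; m_i\Big(1-(n+1)^{-(\beta-1)}\Big), \qquad m_i\in(0,1],\; \beta>1,$$
whose increments
$$\gamma_i(n) \;=\; m_i\Big((n+1)^{-(\beta-1)}-(n+2)^{-(\beta-1)}\Big)$$
are non-negative and non-increasing (Assumption 2.1) and, for every $n\ge 1$, satisfy $\gamma_i(n)\le m_i(\beta-1)\,n^{-\beta}$, i.e., Assumption 2.2 with $c=m_i(\beta-1)$. Exponentially saturating curves such as $\mu_i(n)=b_i(1-\rho_i^{\,n})$ also comply, since $\rho_i^{\,n}$ decays faster than any polynomial bound $c\,n^{-\beta}$ for a suitable constant $c$.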
Let \u03b5 P p0, 1{2q be the fraction of samples collected up to the current time t we use to build estimators of the expected reward. We employ an adaptive arm-dependent window size hpNi,t\u00b41q :\u201c t\u03b5Ni,t\u00b41u to include the most recent samples collected only, avoiding the use of samples that are no longer representative. We define the set of the last hpNi,t\u00b41q rounds in which the ith arm was pulled as: Ti,t :\u201c t\u03c4 P JTK : I\u03c4 \u201c i ^ Ni,\u03c4 \u201c Ni,t\u00b41 \u00b4 l, l P J0, hpNi,t\u00b41q \u00b4 1Ku . Furthermore, the set of the pairs of rounds \u03c4 and \u03c4 1 belonging to the sets of the last and second-last hpNi,t\u00b41q-wide windows of the ith arm is defined as: Si,t :\u201c \u2423 p\u03c4, \u03c4 1q P JTK \u02c6 JTK : I\u03c4 \u201c I\u03c4 1 \u201c i ^ Ni,\u03c4 \u201c Ni,t\u00b41 \u00b4 l, Ni,\u03c4 1 \u201c Ni,\u03c4 \u00b4 hpNi,t\u00b41q, l P J0, hpNi,t\u00b41q \u00b4 1K ( . In the following, we design a pessimistic estimator and an optimistic estimator of the expected reward of each arm at the end of the budget time T, i.e., \u00b5ipTq.7 6The estimators are adaptations of those presented by Metelli et al. (2022) to handle a fixed time budget T. 7Na\u00efvely computing the estimators from their definition requires OphpNi,t\u00b41qq number of operations. An efficient way to incrementally update them, using Op1q operations, is provided in Appendix B. 3 \fPessimistic Estimator The pessimistic estimator \u02c6 \u00b5ipNi,t\u00b41q is a negatively biased estimate of \u00b5ipTq obtained assuming that the function \u00b5ip\u00a8q remains constant up to time T. This corresponds to the minimum admissible value under Assumption 2.1 (due to the Non-decreasing constraint). This estimator is an average of the last hpNi,t\u00b41q observed rewards collected from the ith arm, formally: \u02c6 \u00b5ipNi,t\u00b41q :\u201c 1 hpNi,t\u00b41q \u00ff \u03c4PTi,t x\u03c4. (1) The estimator enjoys the following concentration property. Lemma 3.1 (Concentration of \u02c6 \u00b5i). Under Assumption 2.1, for every a \u0105 0, simultaneously for every arm i P JKK and number of pulls n P J0, TK, with probability at least 1 \u00b4 2TKe\u00b4a{2 it holds that: \u02c6 \u03b2ipnq \u00b4 \u02c6 \u03b6ipnq \u010f \u02c6 \u00b5ipnq \u00b4 \u00b5ipnq \u010f \u02c6 \u03b2ipnq, where \u02c6 \u03b2ipnq :\u201c \u03c3 b a hpnq and \u02c6 \u03b6ipnq :\u201c 1 2p2T \u00b4 n ` hpnq \u00b4 1q \u03b3ipn \u00b4 hpnq ` 1q. t T \u03c4 \u03c4 1 \u02c7 \u00b5T i pNi,t\u00b41q \u02c6 \u00b5ipNi,t\u00b41q Ni,t\u00b41 \u02c7 \u03b2T i pNi,t\u00b41q BT i pNi,t\u00b41q hpNi,t\u00b41q Figure 1: Graphical representation of the pessimistic \u02c6 \u00b5ipNi,t\u00b41q and the optimistic \u02c7 \u00b5T i pNi,t\u00b41q estimators. As supported by intuition, we observe that the estimator is affected by a negative bias that is represented by \u02c6 \u03b6ipnq that vanishes as n \u00d1 8 under Assumption 2.1 with a rate that depends on the increment functions \u03b3ip\u00a8q. Considering also the term \u02c6 \u03b2ipnq and recalling that hpnq \u201c Opnq, under Assumption 2.2, the overall concentration rate is Opn\u00b41{2 ` cTn\u00b4\u03b2q. Optimistic Estimator The optimistic estimator \u02c7 \u00b5T i pNi,t\u00b41q is a positively biased estimation of \u00b5ipTq obtained assuming that function \u00b5ip\u00a8q linearly increases up to time T. This corresponds to the maximum value admissible under Assumption 2.1 (due to the Concavity constraint). 
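Before detailing the optimistic construction, here is a minimal sketch of the pessimistic estimate of Equation (1) together with the width of its deviation term from Lemma 3.1; rewards_i is assumed to contain the rewards collected from arm i, in collection order.

```python
import math

def pessimistic_estimate(rewards_i, eps):
    """mu_hat_i(N): average of the last h(N) = floor(eps * N) rewards collected from arm i (Eq. 1)."""
    n = len(rewards_i)                 # N_{i,t-1}
    h = int(math.floor(eps * n))       # adaptive window h(N_{i,t-1})
    if h == 0:
        return float("nan")            # not enough samples for a window yet
    return sum(rewards_i[-h:]) / h

def pessimistic_radius(n, eps, sigma, a):
    """beta_hat_i(n) = sigma * sqrt(a / h(n)), the deviation term appearing in Lemma 3.1."""
    h = int(math.floor(eps * n))
    return float("inf") if h == 0 else sigma * math.sqrt(a / h)
```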
The estimator is constructed by adding to the pessimistic estimator \u02c6 \u00b5ipNi,t\u00b41q an estimate of the increment occurring in the next step up to T. The latter uses the last 2hpNi,t\u00b41q samples to obtain an upper bound of such growth thanks to the concavity assumption, formally: \u02c7 \u00b5T i pNi,t\u00b41q :\u201c \u02c6 \u00b5ipNi,t\u00b41q ` \u00ff pj,kqPSi,t pT \u00b4 jq xj \u00b4 xk hpNi,t\u00b41q2 . (2) The estimator displays the following concentration guarantee. Lemma 3.2 (Concentration of \u02c7 \u00b5T i ). Under Assumption 2.1, for every a \u0105 0, simultaneously for every arm i P JKK and number of pulls n P J0, TK, with probability at least 1 \u00b4 2TKe\u00b4a{10 it holds that: \u02c7 \u03b2T i pnq \u010f \u02c7 \u00b5T i pnq \u00b4 \u00b5ipnq \u010f \u02c7 \u03b2T i pnq ` \u02c7 \u03b6T i pnq, where \u02c7 \u03b2T i pnq :\u201c \u03c3\u00a8pT \u00b4n`hpnq\u00b41q b a hpnq3 and \u02c7 \u03b6T i pnq :\u201c 1 2p2T \u00b4n`hpnq\u00b41q \u03b3ipn\u00b42hpnq`1q. Differently from the pessimistic estimation, the optimistic one displays a positive vanishing bias \u02c7 \u03b6T i pnq. Under Assumption 2.2, we observe that the overall concentration rate is OpTn\u00b43{2`cTn\u00b4\u03b2q. 4 Optimistic Algorithm: Rising Upper Confidence Bound Exploration In this section, we introduce and analyze Rising Upper Confidence Bound Exploration (R-UCBE) an optimistic error probability minimization algorithm for the SRB setting with a fixed budget. The algorithm explores by means of a UCB-like approach and, for this reason, makes use of the optimistic estimator \u02c7 \u00b5T i plus a bound to account for the uncertainty of the estimation.8 8In R-UCBE, the choice of considering the optimistic estimator is natural and obliged since the pessimistic estimator is affected by negative bias and cannot be used to deliver optimistic estimates. 4 \fAlgorithm The algorithm, whose pseudo-code is reported in Algorithm 1, requires as input an exploration parameter a \u011b 0, the window size \u03b5 P p0, 1{2q, the time budget T, and the number of arms K. At first, it initializes to zero the counters Ni,0, and sets to `8 the upper bounds BT i pNi,0q of all the arms (Line 2). Subsequently, at each time t P JTK, the algorithm selects the arm It with the largest upper confidence bound (Line 4): It P arg max iPJKK BT i pNi,t\u00b41q :\u201c \u02c7 \u00b5T i pNi,t\u00b41q ` \u02c7 \u03b2T i pNi,t\u00b41q, (3) with: \u02c7 \u03b2T i pNi,t\u00b41q :\u201c \u03c3 \u00a8 pT \u00b4 Ni,t\u00b41 ` hpNi,t\u00b41q \u00b4 1q c a hpNi,t\u00b41q3 , (4) where \u02c7 \u03b2T i pNi,t\u00b41q represents the exploration bonus (a graphical representation is reported in Figure 1). Once the arm is chosen, the algorithm plays it and observes the feedback xt (Line 5). Then, the optimistic estimate \u02c7 \u00b5T ItpNIt,tq and the exploration bonus \u02c7 \u03b2T ItpNIt,tq of the selected arm It are updated (Lines 8-9). The procedure is repeated until the algorithm reaches the time budget T. The final recommendation of the best arm is performed using the last computed values of the bounds BT i pNi,T q, returning the arm \u02c6 I\u02dapTq corresponding to the largest upper confidence bound (Line 12). Bound on the Error Probability of R-UCBE We now provide bounds on the error probability for R-UCBE. We start with a general analysis that makes no assumption on the increments \u03b3ip\u00a8q and, then, we provide a more explicit result under Assumption 2.2. The general result is formalized as follows. Theorem 4.1. 
Under Assumption 2.1, let a\u02da be the largest positive value of a satisfying: T \u00b4 \u00ff i\u2030i\u02dapT q yipaq \u011b 1, (5) where for every i P JKK, yipaq is the largest integer for which it holds: T\u03b3iptp1 \u00b4 2\u03b5qyuq looooooooomooooooooon pAq ` 2T\u03c3 c a t\u03b5yu3 loooooomoooooon pBq \u011b \u2206ipTq. (6) If a\u02da exists, then for every a P r0, a\u02das the error probability of R-UCBE is bounded by: eT pR-UCBEq \u010f 2TK exp \u00b4 \u00b4 a 10 \u00af . (7) Some comments are in order. First, a\u02da is defined implicitly, depending on the constants \u03c3, T, the increments \u03b3ip\u00a8q, and the suboptimality gaps \u2206ipTq. In principle, there might exist no a\u02da \u0105 0 fulfilling condition in Equation (5) (this can happen, for instance, when the budget T is not large enough), and, in such a case, we are unable to provide theoretical guarantees on the error probability of R-UCBE. Second, the result presented in Theorem 4.1 holds for generic increasing and concave expected reward functions. This result shows that, as expected, the error probability decreases when the exploration parameter a increases. However, this behavior stops when we reach the threshold a\u02da. Intuitively, the value of a\u02da sets the maximum amount of exploration we should use for learning. Under Assumption 2.2, i.e., using the knowledge on the increment \u03b3ip\u00a8q upper bound, we derive a result providing conditions on the time budget T under which a\u02da exists and an explicit value for a\u02da. Corollary 4.2. Under Assumptions 2.1 and 2.2, if the time budget T satisfies: T \u011b $ \u2019 & \u2019 % \u00b4 c 1 \u03b2 p1 \u00b4 2\u03b5q\u00b41 pH1,1{\u03b2pTqq ` pK \u00b4 1q \u00af \u03b2 \u03b2\u00b41 if \u03b2 P p1, 3{2q \u00b4 c 2 3 p1 \u00b4 2\u03b5q\u00b4 2 3 \u03b2 pH1,2{3pTqq ` pK \u00b4 1q \u00af3 if \u03b2 P r3{2, `8q , (8) there exists a\u02da \u0105 0 defined as: a\u02da \u201c $ \u2019 \u2019 & \u2019 \u2019 % \u03f53 4\u03c32 \u02c6\u00b4 T 1\u00b41{\u03b2\u00b4pK\u00b41q H1,1{\u03b2pT q \u00af\u03b2 \u00b4 cp1 \u00b4 2\u03b5q\u00b4\u03b2 \u02d92 if \u03b2 P p1, 3{2q \u03f53 4\u03c32 \u02c6\u00b4 T 1{3\u00b4pK\u00b41q H1,2{3pT q \u00af3{2 \u00b4 cp1 \u00b4 2\u03b5q\u00b4\u03b2 \u02d92 if \u03b2 P r3{2, `8q , 5 \fAlgorithm 1: R-UCBE. Input :Time budget T, Number of arms K, Window size \u03b5, Exploration parameter a 1 Initialize Ni,0 \u201c 0, 2 BT i p0q \u201c `8, @i P JKK 3 for t P JTK do 4 Compute It P arg maxiPJKK BT i pNi,t\u00b41q 5 Pull arm It and observe xt 6 NIt,t \u00d0 NIt,t\u00b41 ` 1 7 Ni,t \u00d0 Ni,t\u00b41, @i \u2030 It 8 Update \u02c7 \u00b5T ItpNIt,tq 9 Update \u02c7 \u03b2T ItpNIt,tq 10 Compute BT ItpNIt,tq \u201c \u02c7 \u00b5T ItpNIt,tq` \u02c7 \u03b2T ItpNIt,tq 11 end 12 Recommend p I\u02dapTq P arg maxiPJKK BT i pNi,T q Algorithm 2: R-SR. Input :Time budget T, Number of arms K, Window size \u03b5 1 Initialize t \u00d0 1, N0 \u201c 0, X0 \u201c JKK 2 for j P JK \u00b4 1K do 3 for i P Xj\u00b41 do 4 for l P JNj\u00b41 ` 1, NjK do 5 Pull arm i and observe xt 6 t \u00d0 t ` 1 7 end 8 Update \u02c6 \u00b5ipNjq 9 end 10 Define Ij P arg miniPXj\u00b41 \u02c6 \u00b5ipNjq 11 Update Xj \u201c Xj\u00b41 ztIju 12 end 13 Recommend p I\u02dapTq P XK\u00b41 (unique) where H1,\u03b7pTq :\u201c \u0159 i\u2030i\u02dapT q 1 \u2206\u03b7 i pT q for \u03b7 \u0105 0. Then, for every a P r0, a\u02das, the error probability of R-UCBE is bounded by: eT pR-UCBEq \u010f 2TK exp \u00b4 \u00b4 a 10 \u00af . 
First of all, we notice that the error probability eT pR-UCBEq presented in Theorem 4.2 holds under the condition that the time budget T fulfills Equation (8). We defer a more detailed discussion on this condition to Remark 5.1, where we show that the existence of a finite value of T fulfilling Equation (8) is ensured under mild conditions. Let us remark that term H1,\u03b7pTq characterizes the complexity of the SRB setting, corresponding to term H1 of Audibert et al. (2010) for the classical BAI problem when \u03b7 \u201c 2. As expected, in the small-\u03b2 regime (i.e., \u03b2 P p1, 3{2s), looking at the dependence of H1,1{\u03b2pTq on \u03b2, we realize that the complexity of a problem decreases as the parameter \u03b2 increases. Indeed, the larger \u03b2, the faster the expected reward reaches a stationary behavior. Nevertheless, even in the large-\u03b2 regime (i.e., \u03b2 \u0105 3{2), the complexity of the problem is governed by H1,2{3pTq, leading to an error probability larger than the corresponding one for BAI in standard bandits (Audibert et al., 2010). This can be explained by the fact that R-UCBE uses the optimistic estimator that, as shown in Section 3, enjoys a slower concentration rate compared to the standard sample mean, even for stationary bandits. This two-regime behavior has an interesting interpretation when comparing Corollary 4.2 with Theorem 4.1. Indeed, \u03b2 \u201c 3{2 is the break-even threshold in which the two terms of the l.h.s. of Equation (6) have the same convergence rate. Specifically, the term pAq takes into account the expected rewards growth (i.e., the bias in the estimators), while pBq considers the uncertainty in the estimations of the R-UCBE algorithm (i.e., the variance). Intuitively, when the expected reward function displays a slow growth (i.e., \u03b3ipnq \u010f cn\u00b4\u03b2 with \u03b2 \u0103 3{2), the bias term pAq dominates the variance term pBq and the value of a\u02da changes accordingly. Conversely, when the variance term pBq is the dominant one (i.e., \u03b3ipnq \u010f cn\u00b4\u03b2 with \u03b2 \u0105 3{2), the threshold a\u02da is governed by the estimation uncertainty, being the bias negligible. As common in optimistic algorithms for BAI (Audibert et al., 2010), setting a theoretically sound value of exploration parameter a (i.e., computing a\u02da), requires additional knowledge of the setting, namely the complexity index H1,\u03b7pTq.9 In the next section, we propose an algorithm that relaxes this requirement. 5 Phase-Based Algorithm: Rising Successive Rejects In this section, we introduce the Rising Successive Rejects (R-SR), a phase-based solution inspired by the one proposed by Audibert et al. (2010), which overcomes the drawback of R-UCBE of requiring knowledge of H1,\u03b7pTq. 9We defer the empirical study of the sensitivity of a to Section 8. 6 \fAlgorithm R-SR, whose pseudo-code is reported in Algorithm 2, takes as input the time budget T and the number of arms K. At first, it initializes the set of the active arms X0 with all the available arms (Line 1). This set will contain the arms that are still eligible candidates to be recommended. The entire process proceeds through K \u00b4 1 phases. More specifically, during the jth phase, the arms still remaining in the active arms set Xj\u00b41 are played (Line 5) for Nj \u00b4 Nj\u00b41 times each, where: Nj :\u201c R 1 logpKq T \u00b4 K K ` 1 \u00b4 j V , (9) and logpKq :\u201c 1 2 ` \u0159K i\u201c2 1 i . 
At the end of each phase, the arm with the smallest value of the pessimistic estimator \u02c6 \u00b5ipNjq is discarded from the set of active arms (Line 11). At the end of the pK \u00b4 1qth phase, the algorithm recommends the (unique) arm left in XK\u00b41 (Line 13). It is worth noting that R-SR makes use of the pessimistic estimator \u02c6 \u00b5ipnq. Even if both estimators defined in Section 3 are viable for R-SR, the choice of using the pessimistic estimator is justified by its better concentration rate Opn\u00b41{2q compared to that of the optimistic estimator OpTn\u00b43{2q, being n \u010f T (see Section 3). Note that the phase lengths are the ones adopted by Audibert et al. (2010). This choice allows us to provide theoretical results without requiring domain knowledge (still under a large enough budget). An optimized version of Nj may be derived assuming full knowledge of the gaps \u2206ipTq, but, unfortunately, such a hypothetical approach would have similar drawbacks as R-UCBE. Bound on the Error Probability of R-SR The following theorem provides the guarantee on the error probability for the R-SR algorithm. Theorem 5.1. Under Assumptions 2.1 and 2.2, if the time budget T satisfies: T \u011b 2 \u03b2`1 \u03b2\u00b41 c 1 \u03b2\u00b41 logpKq \u03b2 \u03b2\u00b41 max iPJ2,KK ! i \u03b2 \u03b2\u00b41 \u2206piqpTq\u00b4 1 \u03b2\u00b41 ) , (10) then, the error probability of R-SR is bounded by: eT pR-SRq \u010f KpK \u00b4 1q 2 exp \u02c6 \u00b4 \u03b5 8\u03c32 \u00a8 T \u00b4 K logpKqH2pTq \u02d9 , where H2pTq :\u201c maxiPJKK \u2423 i\u2206piqpTq\u00b42( and logpKq \u201c 1 2 ` \u0159K i\u201c2 1 i . Similar to the R-UCBE, the complexity of the problem is characterized by term H2pTq that, for the standard MAB setting, reduces to the H2 term of Audibert et al. (2010). Furthermore, when the condition of Equation (10) on the time budget T is satisfied, the error probability coincides with that of the SR algorithm for standard MABs (apart for constant terms). The following remark elaborates on the conditions of Equations (8) and (10) about the minimum requested time budget. Remark 5.1 (About the minimum time budget T). To satisfy the eT bounds presented in Corollary 4.2 and Theorem 5.1, R-UCBE and R-SR require the conditions provided by Equations (8) and (10) about the time budget T, respectively. First, let us notice that if the suboptimal arms converge to an expected reward different from that of the optimal arm as T \u00d1 `8, it is always possible to find a finite value of T \u0103 `8 such that these conditions are fulfilled. Formally, assume that there exists T0 \u0103 `8 and that for every T \u011b T0 we have that for all suboptimal arms i \u2030 i\u02dapTq it holds that \u2206ipTq \u011b \u22068 \u0105 0. In such a case, the l.h.s. of Equations (8) and (10) are upper bounded by a function of \u22068 and are independent on T. Instead, if a suboptimal arm converges to the same expected reward as the optimal arm when T \u00d1 `8, the identification problem is more challenging and, depending on the speed at which the two arms converge as a function of T, might slow down the learning process arbitrarily. This should not surprise as the BAI problem becomes non-learnable even in standard (stationary) MABs when multiple optimal arms are present (Heide et al., 2021). 6 Lower Bound In this section, we investigate the complexity of the BAI problem for SRBs with a fixed budget. 
Minimum time budget T We show that, under Assumptions 2.1 and 2.2, any algorithm requires a minimum time budget T to be guaranteed to identify the optimal arm, even in a deterministic setting. 7 \fError Probability eT p\u00a8q Time Budget T SRB 1 4 exp \u02dc \u00b4 8T \u03c32 \u0159 i\u2030i\u02dapT q 1 \u22062 i pT q \u00b8 \u00ff i\u2030i\u02dapT q 1 \u2206ipTq 1 \u03b2\u00b41 R-UCBE 2 T K exp \u00b4 \u00b4 a 10 \u00af $ \u2019 \u2019 \u2019 \u2019 & \u2019 \u2019 \u2019 \u2019 % \u02c6 c 1 \u03b2 p1 \u00b4 2\u03b5q\u00b41 \u02c6 \u00ff i\u2030i\u02dapT q 1 \u22061{\u03b2 i pTq \u02d9 ` pK \u00b4 1q \u02d9 \u03b2 \u03b2\u00b41 if \u03b2 P p1, 3{2q \u02c6 c 2 3 p1 \u00b4 2\u03b5q\u00b4 2 3 \u03b2 \u02c6 \u00ff i\u2030i\u02dapT q 1 \u22062{3 i pTq \u02d9 ` pK \u00b4 1q \u02d93 if \u03b2 P r3{2, `8q R-SR KpK \u00b4 1q 2 exp \u00a8 \u02da \u02dd\u00b4 \u03b5 8\u03c32 T \u00b4 K logpKq max iPJKK ! i\u2206\u00b42 piq pTq ) \u02db \u2039 \u201a 2 1`\u03b2 \u03b2\u00b41 c 1 \u03b2\u00b41 logpKq \u03b2 \u03b2\u00b41 max iPJ2,KK ! i \u03b2 \u03b2\u00b41 \u2206piqpTq\u00b4 1 \u03b2\u00b41 ) Table 1: Bounds on the time budget and error probability: lower for the setting and upper for the algorithms. Theorem 6.1. For every algorithm A, there exists a deterministic SRB satisfying Assumptions 2.1 and 2.2 such that the optimal arm i\u02dapTq cannot be identified for some time budgets T unless: T \u011b H1,1{p\u03b2\u00b41qpTq \u201c \u00ff i\u2030i\u02dapT q 1 \u2206ipTq 1 \u03b2\u00b41 . (11) Theorem 6.1 formalizes the intuition that any of the suboptimal arms must be pulled a sufficient number of times to ensure that, if pulled further, it cannot become the optimal arm. It is worth comparing this bound on the time budget with the corresponding conditions on the minimum time budget requested by Equations (8) and (10) for R-UCBE and R-SR, respectively. Regarding R-UCBE, we notice that the minimum admissible time budget in the small-\u03b2 regime is of order H1,1{\u03b2pTq\u03b2{p\u03b2\u00b41q which is larger than term H1,1{p\u03b2\u00b41qpTq of Equation (11).10 Similarly, in the large-\u03b2 regime (i.e., \u03b2 \u0105 3{2), the R-UCBE requirement is of order H1,2{3pTq3 \u011b H1,2pTq which is larger than the term of Theorem 6.1 since 1{p\u03b2 \u00b4 1q \u0103 2. Concerning R-SR, it is easy to show that H1,1{p\u03b2\u00b41qpTq \u00ab maxiPJ2,KK i\u2206piqpTq\u00b41{p\u03b2\u00b41q, apart from logarithmic terms, by means of the argument provided by (Audibert et al., 2010, Section 6.1). Thus, up to logarithmic terms, Equation (10) provides a tight condition on the minimum budget. Error Probability Lower Bound We now present a lower bound on the error probability. Theorem 6.2. For every algorithm A run with a time budget T fulfilling Equation (11), there exists a SRB satisfying Assumptions 2.1 and 2.2 such that the error probability is lower bounded by: eT pAq \u011b 1 4 exp \u02c6 \u00b4 8T \u03c32H1,2pTq \u02d9 , where H1,2pTq :\u201c \u00ff i\u2030i\u02dapT q 1 \u22062 i pTq. Some comments are in order. First, we stated the lower bound for the case in which the minimum time budget satisfies the inequality of Theorem 6.1, which is a necessary condition for identifying the optimal arm. Second, the lower bound on the error probability matches, up to logarithmic factors, that of our R-SR, suggesting the superiority of this algorithm compared to R-UCBE. 
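To make the complexity indices appearing in these bounds concrete, the following minimal sketch computes H_{1,η}(T) from the suboptimality gaps, evaluates the minimum-budget condition of Theorem 6.1, and evaluates the error-probability lower bound of Theorem 6.2. The numerical gaps, β, σ, and T below are illustrative values, not taken from the paper.

import numpy as np

def h1(gaps, eta):
    # H_{1,eta}(T) = sum over the suboptimal arms of Delta_i(T)^(-eta)
    gaps = np.asarray([g for g in gaps if g > 0], dtype=float)
    return float(np.sum(gaps ** (-eta)))

gaps = [0.1, 0.2, 0.4, 0.5]        # Delta_i(T) of the suboptimal arms (illustrative)
beta, sigma, T = 2.0, 2.0, 200     # illustrative constants of Assumption 2.2
min_budget = h1(gaps, 1.0 / (beta - 1.0))                          # Theorem 6.1 requirement
error_lb = 0.25 * np.exp(-8.0 * T / (sigma ** 2 * h1(gaps, 2.0)))  # Theorem 6.2 lower bound
print(min_budget, error_lb)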
Finally, provided that the identifiability condition of Equation (11), such a result corresponds to that of the standard (stationary) MABs (Audibert et al., 2010; Kaufmann et al., 2016). A summary of all the bounds provided in the paper is presented in Table 1. 7 Related Works In this section, we summarize the relevant literature related both to the works focusing on the best arm identification problem and rested bandits. The SRB setting was proposed by Heidari et al. (2016) for the first time. Their work and subsequently the one by Metelli et al. (2022) analyzed the problem from a regret minimization point of view. Best Arm Identification in Stochastic Rising Bandits As highlighted in Section 1, the works mostly related to ours are the ones by Li et al. (2020) and Cella et al. (2021). They both focus on the 10See Lemma C.11. 8 \fBAI problem in the rested setting, given a fixed-budget. More specifically, Li et al. (2020) consider rising rested bandits in which the reward function of each arm increases as it is pulled. However, they limit to deterministic arms and, thus, fail to deal with the intrinsic stochasticity of the real-world processes they want to model. Instead, Cella et al. (2021) deal with the problem of identifying the arm with the smallest loss in a setting where the losses incurred by selecting an arm decrease over time. It is easy to show that such a setting can be transformed straightforwardly in the SRB one. However, the authors develop two algorithms whose theoretical guarantees hold under the assumption that the expected loss follows a specific known parametric functional form, whose parameters are to be estimated. This constitutes a major limitation to the presented work since checking such an assumption is not feasible in real-world settings. Best Arm Identification The pure exploration and BAI problems have been first introduced by Bubeck et al. (2009), while algorithms able to learn in such a setting have been provided by Audibert et al. (2010). The work by Gabillon et al. (2012) proposes a unified approach to deal with stochastic best arm identification problems by having either a fixed budget or fixed confidence. However, the stochastic algorithms developed in this line of research only provide theoretical guarantees in settings where the expected reward is stationary over the pulls. Abbasi-Yadkori et al. (2018) propose a method able to handle both the stochastic and adversarial cases, but they do not make explicit use of the properties (e.g., increasing nature) of the expected reward. Finally, (Garivier and Kaufmann, 2016; Kaufmann et al., 2016; Carpentier and Locatelli, 2016) analyze the problem of BAI from the lower bound perspective. Rested Bandits Bandit settings in which the evolution of an arm reward depends on the number of times the arm has been pulled, such as the one analyzed in our paper, are generally referred to as rested. A first general formulation of the rested bandit setting appeared in the work by Tekin and Liu (2012) and was further discussed by Mintz et al. (2020) and Seznec et al. (2020). In these works, the evolution of the expected reward of each arm is regulated by a Markovian process that is assumed to visit the same state multiple times. This is not the case for the rising bandits, where the arm expected rewards continuously increase over the time budget. 
Finally, a specific instance of the rested bandits is constituted by the rotting bandits (Levine et al., 2017; Seznec et al., 2019, 2020), in which the expected payoff of a given arm decreases with the number of pulls. However, as pointed out by Metelli et al. (2022), techniques developed for this setting cannot be directly translated into ours, due to the inherently different nature of the problem. 8 Numerical Validation In this section, we provide a numerical validation of R-UCBE and R-SR. We compare them with state-of-the-art bandit baselines designed for stationary and non-stationary BAI in a synthetic setting, and we evaluate the sensitivity of R-UCBE to its exploration parameter a. Additional details about the experiments presented in this section are available in Appendix E. Additional experimental results, on both synthetic settings and a real-world experiment, are available in Appendix F. The code to run the experiments is available in the supplementary material; it will be published in a public repository conditionally on the acceptance of the paper. Baselines We compare our algorithms against a wide range of solutions for BAI: • RR: uniformly pulls all the arms in a round-robin fashion until the budget ends and, at the end, makes a recommendation based on the empirical mean of their rewards over the collected samples; • RR-SW: uses the same exploration strategy as RR to pull arms, but makes a recommendation based on the empirical mean over the last εT/K samples collected from an arm (the formal description of this baseline, as well as its theoretical analysis, is provided in Appendix D); • UCB-E and SR (Audibert et al., 2010): algorithms for the stationary BAI problem; • Prob-1 (Abbasi-Yadkori et al., 2018): an algorithm dealing with the adversarial BAI setting; • ETC and Rest-Sure (Cella et al., 2021): algorithms developed for the decreasing-loss BAI setting, which is equivalent to ours given a linear transformation of the reward. The hyperparameters required by the above methods have been set as prescribed in the original papers. For both our algorithms and RR-SW, we set ε = 0.25. Setting To assess the quality of the recommendation Î*(T) provided by our algorithms, we consider a synthetic SRB setting with K = 5 and σ = 0.01. Figure 2: Expected values μ_i(n) for the arms of the synthetic setting. Figure 3: Empirical error rate for the synthetically generated setting (100 runs, mean ± 95% c.i.). Figure 4: Empirical error rate of R-UCBE at different values of a (1000 runs, mean ± 95% c.i.). Figure 2 shows the evolution of the expected values of the arms w.r.t. the number of pulls. In this setting, the optimal arm changes depending on whether T ∈ [1, 185] or T ∈ (185, +∞). Thus, when the time budget is close to that value, the problem is more challenging since the expected rewards of the optimal and second-best arms are close to each other.

For this reason, the BAI algorithms are less likely to provide a correct recommendation than for time budgets for which the two expected rewards are well separated. We compare the analyzed algorithms A in terms of empirical error eT pAq (the smaller, the better), i.e., the empirical counterpart of eT pAq averaged over 100 runs, considering time budgets T P r100, 3200s. Results The empirical error probability provided by the analyzed algorithms in the synthetically generated setting is presented in Figure 3. We report with a dashed vertical blue line at T \u201c 185, i.e., the budgets after which the optimal arm no longer changes. Before such a budget, all the algorithms provide large errors (i.e., \u00af eT pAq \u0105 0.2). However, R-UCBE outperforms the others by a large margin, suggesting that an optimistic estimator might be advantageous when the time budget is small. Shortly after T \u201c 185, R-UCBE starts providing the correct suggestion consistently. R-SR begins to identify the optimal arm (i.e., with \u00af eT pR-SRq \u0103 0.05) for time budgets T \u0105 1000. Nonetheless, both algorithms perform significantly better than the baseline algorithms used for comparison. Sensitivity Analysis for the Exploration Parameter of R-UCBE We perform a sensitivity analysis on the exploration parameter a of R-UCBE. Such a parameter should be set to a value less or equal to a\u02da, and the computation of the latter is challenging. We tested the sensitivity of R-UCBE to this hyperparameter by looking at the error probability for a P ta\u02da{50, a\u02da{10, a\u02da, 10a\u02da, 50a\u02dau. Figure 4 shows the empirical errors of R-UCBE with different parameters a, where the blue dashed vertical line denotes the last time the optimal arm changes over the time budget. It is worth noting how, even in this case, we have two significantly different behaviors before and after such a time. Indeed, if T \u010f 185, we have that a misspecification with larger values than a\u02da does not significantly impact the performance of R-UCBE, while smaller values slightly decrease the performance. Conversely, for T \u0105 185 learning with different values of a seems not to impact the algorithm performance significantly. This corroborates the previous results about the competitive performance of R-UCBE. 9 Discussion and Conclusions This paper introduces the BAI problem with a fixed budget for the Stochastic Rising Bandits setting. Notably, such setting models many real-world scenarios in which the reward of the available options increases over time, and the interest is on the recommendation of the one having the largest expected rewards after the time budget has elapsed. In this setting, we presented two algorithms, namely R-UCBE and R-SR providing theoretical guarantees on the error probability. R-UCBE is an optimistic algorithm requiring an exploration parameter whose optimal value requires prior information on the setting. Conversely, R-SR is a phase-based solution that only requires the time budget to run. We established lower bounds for the error probability an algorithm suffers in such a setting, which is matched by our R-SR, up to logarithmic factors. Furthermore, we showed how a requirement on the minimum time budget is unavoidable to ensure the identifiability of the optimal arm. Finally, we validate the performance of the two algorithms in both synthetically generated and real-world settings. 
A possible future line of research is to derive an algorithm balancing the tradeoff between theoretical guarantees on the eT and the chance of providing such guarantees with lower time budgets. 10" + }, + { + "url": "http://arxiv.org/abs/2211.09612v1", + "title": "Dynamic Pricing with Volume Discounts in Online Settings", + "abstract": "According to the main international reports, more pervasive industrial and\nbusiness-process automation, thanks to machine learning and advanced analytic\ntools, will unlock more than 14 trillion USD worldwide annually by 2030. In the\nspecific case of pricing problems-which constitute the class of problems we\ninvestigate in this paper-, the estimated unlocked value will be about 0.5\ntrillion USD per year. In particular, this paper focuses on pricing in\ne-commerce when the objective function is profit maximization and only\ntransaction data are available. This setting is one of the most common in\nreal-world applications. Our work aims to find a pricing strategy that allows\ndefining optimal prices at different volume thresholds to serve different\nclasses of users. Furthermore, we face the major challenge, common in\nreal-world settings, of dealing with limited data available. We design a\ntwo-phase online learning algorithm, namely PVD-B, capable of exploiting the\ndata incrementally in an online fashion. The algorithm first estimates the\ndemand curve and retrieves the optimal average price, and subsequently it\noffers discounts to differentiate the prices for each volume threshold. We ran\na real-world 4-month-long A/B testing experiment in collaboration with an\nItalian e-commerce company, in which our algorithm PVD-B-corresponding to A\nconfiguration-has been compared with human pricing specialists-corresponding to\nB configuration. At the end of the experiment, our algorithm produced a total\nturnover of about 300 KEuros, outperforming the B configuration performance by\nabout 55%. The Italian company we collaborated with decided to adopt our\nalgorithm for more than 1,200 products since January 2022.", + "authors": "Marco Mussi, Gianmarco Genalti, Alessandro Nuara, Francesco Trov\u00f2, Marcello Restelli, Nicola Gatti", + "published": "2022-11-17", + "updated": "2022-11-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Most international economic forecasts agree that nearly 50% of the annual value unlocked by the adoption of Arti\ufb01cial Intelligence (AI) from 2030 on will be in marketing&sales (Chui et al. 2018). Examples of activities in which AI tools can play a central role for marketing&sales include attracting and acquiring new customers, suggesting and recommending products, and optimizing customers\u2019 retention and loyalty. In particular, AI can effectively automate these processes so as to increase their ef\ufb01ciency dramatically. This paper focuses on pricing for e-commerce when, as it is usual, the objective is pro\ufb01t maximization and only *These authors contributed equally. Copyright \u00a9 2023, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. transaction data are available. In particular, we focus on settings in which an e-commerce website sells goods other than luxury, Veblen, and Giffen. Thus, we can assume, without loss of generality, that the demand curve is monotonically decreasing in price. Furthermore, we assume that the ecommerce website works with different classes of customers both in B2B and B2C scenarios. 
However, at the stage the price of the product is chosen and displayed to a user, the seller does not know whether the user comes from the former or latter scenario. Usually, volume discount is used to deal with multiple classes of users when it is not possible to distinguish the classes at the price formation stage. In particular, the rationale is to propose different prices for different volume thresholds thanks to the introduction of discounts. This approach allows showing the same thresholds and prices to all incoming users and, at the same time, it introduces price discrimination to provide a different pricing strategy for different classes of users. To the best of our knowledge, even if the problem of learning the price that maximizes the seller\u2019s revenue has been extensively studied in the economic (Klenow and Malin 2010), game theory (Kopalle and Shumsky 2010) and learning (Den Boer 2015) \ufb01elds, no dynamic pricing algorithm in the literature deals with volume discounts in a data-driven way. Original Contribution In this work, we design an onlinelearning pricing algorithm, namely the Pricing with Volume Discounts Bandit (PVD-B) algorithm. We face the problem of assigning different prices to different volume thresholds using transaction data (coming from historical purchases and during the operational life of the e-commerce website). Given the complex dynamics of the problem, we decompose the algorithm into two phases: an optimal average price estimation and, based on the above estimation, a price adaptation method to provide different prices for the given volume discount thresholds. The adoption of tools from online learning guarantees convergence to optimal prices. In collaboration with an Italian e-commerce website, we ran a real-world 4-month-long A/B testing experiment over a set of \u2248300 products, in which our algorithm PVD-B\u2014 corresponding to A con\ufb01guration\u2014has been compared with human pricing specialists\u2014corresponding to B con\ufb01guration. At the beginning of the test, the available data concerned the purchases occurred in the previous 2 years. At arXiv:2211.09612v1 [cs.LG] 17 Nov 2022 \fthe end of the experiment, the total turnover of A con\ufb01guration was more than 300 KEuro and our algorithm PVD-B performed better than the B con\ufb01guration in terms of the objective function (i.e., total pro\ufb01t) for about 55%. The company we collaborated with decided to adopt our algorithm for more than 1,200 products since January 2022. Related Works A comprehensive analysis of the dynamic pricing literature is provided in Narahari et al. (2005); Bertsimas and Perakis (2006); Den Boer (2015). In particular, Multi-Armed Bandits (MAB) techniques have been extensively employed for dynamic pricing when the available information concerned the interactions between the e-commerce website and customers. Rothschild (1974) presents one of the seminal works on the adoption of MAB algorithms for dynamic pricing. This algorithm has been subsequently extended in several directions to capture the characteristics of different pricing settings. Kleinberg and Leighton (2003) study the problem of dealing with continuous-demand functions and proposes a discretization of the price values to provide theoretical guarantees on the regret of the algorithm. This approach suffers from the drawback that the reward is assumed to have a unique maximum in the price. Such an assumption is hard to be veri\ufb01ed in practice. Instead, Trov` o et al. 
(2015, 2018) relaxed this assumption, assuming that the demand function is monotonically decreasing and exploiting this assumption in the learning algorithm to provide uncertainty bounds tighter than those of classical frequentist MAB algorithms. However, the model formulation explicitly imposes neither monotonicity nor weak monotonicity on the estimated demand functions, so decisions that violate business logic can be allowed during the learning process. The authors show how the monotonicity assumption does not improve the asymptotic bound of regret provided by the MAB theory. On the other hand, exploiting monotonicity allows for an empirical improvement in performance (Mussi et al. 2022). The same argument also holds for the work proposed by Misra, Schwartz, and Abernethy (2019), where the monotonicity property of the demand function is used to ensure faster convergence. However, monotonicity is not forced as a model-speci\ufb01c feature. Besbes and Zeevi (2015) show that linear models are a suitable and ef\ufb01cient tool for modeling a demand function. In their work, downward monotonicity is forced on a model-wise level, but it is only analyzed in a stationary environment. Other works that adopt a parametric formulation of the demand function are by Besbes and Zeevi (2009); Broder and Rusmevichientong (2012). These works assume stationary customer behavior. Bauer and Jannach (2018); Cope (2007) are two of the main works on Bayesian inference applied to dynamic pricing. They both fail to impose monotonic constraints on the model. Interestingly, Bauer and Jannach (2018) take into account non-stationary features (e.g., competitors\u2019 prices). Araman and Caldentey (2009) use a Bayesian approach to dynamic pricing using a prior belief on the parameters to capture market-related information and force the model to be monotonic. Wang, Chen, and Simchi-Levi (2021) investigates non-parametric 0 100 200 300 400 500 0 20 40 60 80 100 Basket Value Units per Basket Business Customers Private Customers Figure 1: Units per basket and basket values for different classes of users. models for demand function estimation. In this case, the authors assume that the demand function is smooth. Finally, Nambiar, Simchi-Levi, and Wang (2019) propose a model to deal with both the non-stationarity data and the model misspeci\ufb01cation. However, the required contextual knowledge at a product-wise level is not usually available in practice. To the best of our knowledge, the literature lacks a datadriven methodology for \ufb01nding an optimal volume discount pricing schedule to maximize retailers\u2019 pro\ufb01ts and revenues in a B2C environment. The works from Hilmola (2021); Rubin and Benton (2003) focus on the Economic Order Quantity (EOQ) model that requires demand size over an annual budget and stock size. Sadrian and Yoon (1992) relax the EOQ hypothesis and provide a rational and straightforward pricing strategy that forces a lower bound on the company\u2019s expected pro\ufb01t by calculating volume thresholds and corresponding discounts afterward. The authors show the importance of volume discounts when increasing higher-priced products sales. Problem Formulation We study the scenario in which an e-commerce website sells non-perishable products with unlimited availability. The assumption of independence among the products allows us to focus singularly on every product. The extension to the case with a set of products is straightforward. 
Commonly, the behavior of the users purchasing items from the e-commerce website is fragmented into multiple classes. For instance, Figure 1 provides the shopping baskets cardinality (in terms of units and economic value) for the e-commerce under analysis in this work, distinct classes of users: privates and businesses. The \ufb01gure highlights how the privates purchase smaller amounts of products while the business is transacting with larger amounts. This fact suggests that an optimal seller strategy may include different pricing for the two. However, the user classes are not disclosed until payment is made, and, therefore, a pricing scheme that explicitly uses such a feature is not a viable option. In this work, we circumvent the issue of lack of customer information by using a discount threshold scheme that differentiates the per-unit price of the items by using the number of items purchased in a single transaction as a proxy \f5 10 15 20 25 0 20 40 60 80 100 Number of purchased units Shopping Basket Value Figure 2: Effect on the basket value given a volume discounts scheme. to distinguish the two classes.12 In addition, further bene\ufb01ts of such a pricing scheme have been shown in the economic literature, e.g., in Monahan (1984). Indeed, this model has been shown to anticipate buyer behavior and increase average order size, allowing the retailer to access the supplier\u2019s rebate on large restocks, reduce processing costs, and anticipate cash \ufb02ows through the \ufb01scal year. Formally, for a given time t, let us de\ufb01ne a vector of volume thresholds \u03c9t := [\u03c91t, . . . , \u03c9\u03b7t] \u2208N\u03b7, with \u03c9it > \u03c9ht, for each i > h and \u03c91t = 1. The corresponding price vector will be pt := [p1t, . . . , p\u03b7t] \u2208P\u03b7, with p1t > . . . > p\u03b7t > 0. P is the set of feasible prices, and \u03b7 \u2208N is the number of thresholds. The i-th element pit of pt denotes the price proposed for each product when a user wishes to purchase a number of them in {\u03c9i, . . . , \u03c9i+1 \u22121}.3 Figure 2 exempli\ufb01es the mechanism of the thresholded volume discounts when \u03b7 = 3. The seller\u2019s objective is to maximize the perround pro\ufb01t, de\ufb01ned as: Rt(pt, \u03c9t) := \u03b7 X i=1 (pit \u2212c) \u00b7 vi(pt, \u03c9t, t), (1) where c \u2208R+ is the unit cost of an item (assumed constant) and vi(pt, \u03c9t, t) \u2208N is the number of items sold at time t and at price pit, when the purchase consists of a number of item in {\u03c9i, . . . , \u03c9i+1 \u22121} and the overall seller strategy consists of prices pt and thresholds \u03c9t.4 However, in a real-world scenario, the functions {vi(\u00b7, \u00b7, \u00b7)}i=1:\u03b7 are unknown to the seller and, therefore, need to be estimated using the transactions collected over 1Notice that for the sake of presentation, the example presented two classes, but multiple (> 2) behaviors may exist, requiring multiple prices and thresholds. 2We remark that all the different prices (and related quantity thresholds) are displayed to any customer visiting the product web page. 3For instance, if \u03c9t = [1, 3, 5] and pt = [6, 5, 4], a customer purchasing 2 units will pay 2 \u00b7 6 = 12, and a customer purchasing 4 units will pay them 4 \u00b7 5 = 20. 4Let us remark that this formulation can be extended in a straightforward way if the seller\u2019s goal also concerns the turnover, i.e., by de\ufb01ning Rt(pt, \u03c9t) as a convex combination of turnover and per-round pro\ufb01t. time. 
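As a concrete illustration of the objective in Equation (1), the following minimal sketch computes the per-round profit of a thresholded price schedule from observed basket sizes. The thresholds and prices follow the example given in the footnote (ω = [1, 3, 5], p = [6, 5, 4]), while the unit cost and the basket sizes are illustrative assumptions.

import bisect

def per_round_profit(prices, thresholds, cost, baskets):
    # Per-round profit R_t of Eq. (1): a basket of z units pays the unit price
    # of the tier {omega_i, ..., omega_{i+1} - 1} that contains z.
    profit = 0.0
    for z in baskets:                                  # z = units of the product in one basket
        i = bisect.bisect_right(thresholds, z) - 1     # tier index such that omega_i <= z
        profit += (prices[i] - cost) * z
    return profit

# Thresholds and prices as in the footnote example; cost and baskets are illustrative.
print(per_round_profit(prices=[6, 5, 4], thresholds=[1, 3, 5], cost=2,
                       baskets=[2, 4, 7]))             # (6-2)*2 + (5-2)*4 + (4-2)*7 = 34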
Notice that the volumes vi(pt, \u03c9t, t) for the i-th volume interval also depend on the choices of the other prices and thresholds, as users might be more prone to purchase more items if there is a signi\ufb01cant difference in price than buying fewer products in a single round. In this way, the problem can naturally be cast as a Multi-Armed Bandit (MAB) problem (see, e.g., Lattimore and Szepesv\u00b4 ari (2020) for a comprehensive review of MAB methods) where the goal is to properly balance the acquisition of information on the functions vi(\u00b7, \u00b7, \u00b7), while maximizing the cumulative reward, a.k.a. exploration/exploitation dilemma. Formally, in a MAB problem, we are given a set of available options (a.k.a. arms), and we choose an arm at each time t. In our case, the arms are all the possible prices pt and thresholds \u03c9t, and the goal is to maximize the reward (in our setting pro\ufb01t) over a time horizon of T round. A policy U is an algorithm that returns at each time t a pair (pt, \u03c9t) based on the information, i.e., volumes \u02dc vit and corresponding prices pit, we collected in the previous t \u22121 rounds. A policy is evaluated in terms of average total reward, i.e., our goal is to design policies that maximize: RT (U) := T X t=1 \u03b7 X i=1 (pit \u2212c) \u00b7 vi(pt, \u03c9t, t). (2) It is common in the MAB literature to use regret instead of reward as a performance metric. However, the minimization of the former corresponds to the maximization of the latter; therefore, our goal is the standard in MAB settings. Here, the total reward has been selected as a performance metric since it does not require knowledge of the optimum price strategy, which is unknown in the real world. Algorithm The problem presented before is computationally heavy (i.e., exponential in the number of thresholds \u03b7) and cannot be addressed effectively in the presence of scarce data. Indeed, estimating the volume functions vi(pt, \u03c9t, t), each of which has 2\u03b7 + 1 input parameters, would take a long time due to the requirement of collecting a large amount of transaction data. In what follows, we approximate the original problem in two different directions to allow learning the volume functions in a short time. We assume that the i-th volume function depends only on the price pit selected for the i-th interval and on time t, or, formally, vi(pt, \u03c9t, t) = vi(pit, t). Let us de\ufb01ne the function of the total volumes provided by a pricing strategy (pt, \u03c9t) as: \u00af v(\u00af pt, t) := \u03b7 X i=1 vi(pit, t), (3) where \u00af pt is a weighted average value of the prices vector pt, formally: \u00af pt = \u03b7 X i=1 \u03b1it \u00b7 pit, (4) where \u03b1i \u2208[0, 1], \u2200i \u2208{1, . . . , \u03b7} must be estimated guaranteeing that on average a threshold pricing strategy \fTransaction Data \u03b3 \u03b2z pit v p\u2217 t p1t p2t p3t \u03c9k v p p\u2217 t Data-Driven Data-Driven Thresholds Selection Basket Fractions Estimation Buyback Probability Estimation p\u2217 t \u03b2k Thresholds Fractions Volume Discounts Learning Optimal Price Estimation Estimation Figure 3: General overview of the PVD-B algorithm. {\u03c9it, pit}\u03b7 i=1 yields a reward greater than or equal to the theoretical one formulated as follows: \u00af Rt(\u00af pt) := ( \u00af pt \u2212c) \u00b7 \u00af v(\u00af pt, t). 
(5) Thanks to the previous de\ufb01nitions, we can reformulate the optimization problem into two consecutive steps: \u2022 Finding a single optimal price p\u2217 t that maximizes the revenue provided by the total volume function de\ufb01ned as R\u2217 t (p\u2217 t ) := (p\u2217 t \u2212c) \u00b7 v(p\u2217 t , t); \u2022 Given p\u2217 t , \ufb01nd a pricing strategy (p\u2217 t , \u03c9\u2217 t ) whose weighted average (see Equation 4) to the optimal price p\u2217 t . Notice that the second step allows the algorithm to use all the data available for estimating the function v(\u00b7, \u00b7), instead of partitioning them into \u03b7 disjoint sets and independently estimating the vi(\u00b7, \u00b7) functions. This allows us to speed up the learning process. In what follows, we detail the two phases of the PVD-Balgorithm, which corresponds to the solution of the above problem, i.e., the Optimal Price Estimation and Volume Discounts Learning phases. The overall procedure is depicted in Figure 3. More speci\ufb01cally, the former phase aims to estimate the optimal price p\u2217 t for the total volumes relying on the transaction data. Instead, the latter phase combines the previous estimate of the optimal price p\u2217 t with the parameters extracted from the transaction data to compute the optimal thresholds \u03c9\u2217 t and the pricing strategy p\u2217 t . Optimal Price Estimation As a \ufb01rst step, we estimate the optimal average price p\u2217 t . The algorithm takes as input the records of past orders, i.e., tuples (\u02dc pit, \u02dc vit, t) with the price, volume and time of each user purchase occurred in the past time instants. For each time t, it computes the tuple (\u00af pt, \u00af vt, t), where the average price \u00af pt and the total volume \u00af vt is computed as described in Equations (4) and (3), respectively, relying on the above-mentioned collected data. These data are used to compute an estimate \u02c6 v(\u00b7, \u00b7) of the total volume function v(\u00b7, \u00b7). The PVD-B algorithm resorts to a Bayesian Linear Regression (BLR, Tipping 2001) model to approximate the function v(\u00b7, \u00b7). Formally, the estimates of the total volume for price p at time \u03c4 is equal to: \u02c6 v(p, \u03c4) = U X u=1 \u03b8u\u03c6u(p) + D X d=1 \u03b8d\u03c6d(\u03c4), (6) where \u03c61(p), . . . , \u03c6U(p) are the basis functions constructed over the price p having as prior a Lognormal distribution and \u03c61(\u03c4), . . . \u03c6D(\u03c4) are the basis functions constructed over the time \u03c4 having as prior a Gaussian distribution. Two remarks are necessary. First, the basis \u03c6d(\u00b7) has been introduced to consider the seasonality that affects the selling process of the e-commerce website. Second, the choice of Lognormal prior for the price basis \u03c6u(\u00b7) forces such distributions to be non-negative, which in turn induces the monotonicity of the approximating function w.r.t. the price p (see Wilson et al. (2020) for more details). As discussed before, this property is also commonly re\ufb02ected by real demand functions and allows for fast learning of such curves. The output of the BLR regression model provides a distribution \u02c6 v(p, \u03c4) for each p, allowing the use of MAB algorithms and, in particular, the use of a Thompson Sampling (TS)-like (Kaufmann, Korda, and Munos 2012; Agrawal and Goyal 2012) approach, as a strategy to \ufb01nd a value for the optimal price p\u2217 \u03c4 balancing exploration and exploitation. 
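The TS-like step used to pick the optimal average price can be sketched as follows. Here sample_volume_curve stands for one draw from the posterior of the BLR model, whose actual basis functions are described in the experimental section; the demand shape, the price grid, and the cost below are illustrative assumptions, not the paper's model.

import numpy as np

def ts_price(price_grid, cost, sample_volume_curve):
    # TS-like optimal-price step: draw one demand curve from the BLR posterior and
    # maximise the estimated profit over a discrete price grid.
    volumes = np.array([sample_volume_curve(p) for p in price_grid])
    profits = (price_grid - cost) * volumes
    return float(price_grid[int(np.argmax(profits))])

# Illustrative posterior draw: a non-negative weight (as induced by a Lognormal
# prior) yields a demand curve that is monotonically decreasing in the price.
rng = np.random.default_rng(0)
theta = rng.lognormal(mean=0.0, sigma=0.2)
draw = lambda p: 100.0 * np.exp(-0.05 * theta * p)     # assumed demand shape, not the paper's
grid = np.linspace(5.0, 50.0, 200)
print(ts_price(grid, cost=4.0, sample_volume_curve=draw))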
The corresponding procedure is summarized in Figure 4. More speci\ufb01cally, from the distribution \u02c6 v(\u00b7, \u03c4) (Left, represented as expected values and uncertainty bounds), we sample a function \u02c6 vT S(\u00b7, \u03c4) (Center, represented in green), and, \ufb01nally, we perform the optimization (Right) of the pro\ufb01t as follows: p\u2217 \u03c4 \u2208arg max p\u2208P (p \u2212c) \u00b7 \u02c6 vT S(p, \u03c4). (7) Volume Discounts Learning Let \u03b7 be the number of volume thresholds to show to customers, along with the corresponding prices.5 Let \u03b2z, with z \u2208N, be the proportion of 5Here, we assume the e-commerce experts provide this value. If not provided, clustering techniques over historical data can be used \fv p v p v p p\u2217 \u03c4 Objective function MAB samples (p \u2208P) Thompson Sampling Realization Prediction Uncertainly Figure 4: Procedure to retrieve optimal price p\u2217 \u03c4 at time \u03c4 using a TS-like approach. baskets containing the product with a volume of z. The average volume for the product in each basket is \u00af V = P\u221e i=1 \u03b2i\u00b7i. Given the threshold \u03c9k, the total proportion of baskets inside that range is given by: \u00af \u03b2k = \u03c9k+1\u22121 X i=\u03c9k \u03b2i. (8) The average volume of products for the baskets in a given threshold \u03c9k is consequently de\ufb01ned as: \u00af Vk = P\u03c9k+1\u22121 i=\u03c9k \u03b2i \u00b7 i P\u03c9k+1\u22121 i=\u03c9k \u03b2i . (9) Suppose a customer needs N units of the given product. This need can be ful\ufb01lled by dividing his/her order across any number of time steps, i.e., performing a purchase of units (or a volume) in a range {\u03c9k, . . . , \u03c9k+1 \u22121} for a speci\ufb01c k and repeating such a purchase until the required amount of units is reached. After the customer bought the product, he/she has a probability \u03b3 of returning to the same retailer buying another batch of the same size. This kind of modeling of the user\u2019s behavior is re\ufb02ecting accurately those customers buying goods with a short lifespan and, in general, the ones for which a customer is led to schedule periodic purchases (i.e., toilet paper, consumable of\ufb01ce supplies). With probability 1 \u2212\u03b3, the customer will not return the next time. We consider \u03b3 a property of the system, so we assume that the price does not affect the buyback probability. In what follows, we use historical transaction data to estimate the value of \u03b3 for a given product. Let \u00af m denote the desired margin when a single unit is purchased. It follows that the expected margin \u00af \u00b5 coming from a customer with a need of N units and who performs only single-unit orders is: \u00af \u00b5 = N X \u03c4=1 \u03b3\u03c4\u22121 \u00af m = 1 \u2212\u03b3N 1 \u2212\u03b3 \u00af m, (10) where the last equality comes from the truncated geometric series identity. A customer with the same need, but whose orders contain a number of units in {\u03c9k, . . . , \u03c9k+1 \u22121}, which are associated with a margin \u00af mk, will generate the to determine the optimal number of groups in the user distribution, e.g., Pelleg, Moore et al. (2000). following expected margin: \u00af \u00b5k = l N \u00af Vk m X \u03c4=1 \u03b3\u03c4\u22121 \u00af mk \u00af Vk = 1 \u2212\u03b3 l N \u00af Vk m 1 \u2212\u03b3 (1 \u2212\u03b4k) \u00af m \u00af Vk, (11) where \u03b4k is the discount applied to the single-unit margin \u00af m, namely: \u00af mk = \u00af m(1 \u2212\u03b4k), k \u2208{1, . . . 
, \u03b7}, (12) where \u03b41 = 0. By imposing \u00af \u00b5k \u2265\u00af \u00b51, we get: \u03b4k \u22641 \u2212 1 \u2212\u03b3N \u00af Vk \u0012 1 \u2212\u03b3 l N \u00af Vk m\u0013. (13) Given the desired margin m\u2217 t = p\u2217 t \u2212c derived in the previous section, the expected pro\ufb01t without any discount can be computed as m\u2217 t \u00af V . Suppose we are applying a volume discount policy: we expect it will not decrease the total expected margin given without it. Unit-volume margin \u00af m can be computed by imposing that the expected margin without any discount policy coincides with the one including them: \u03b7 X k=1 \u00af Vk \u00af mk = m\u2217 t \u00af V . (14) Substituting Eq. 12 into Eq. 14, we get: \u00af m = m\u2217 t \u00af V P\u03b7 k=1(1 \u2212\u03b4k) \u00af Vk . (15) Finally, the margins \u00af m1, . . . , \u00af m\u03b7 for the different volume thresholds are determined as \u00af mk = \u00af m(1 \u2212\u03b4k), for k \u2208 {1, . . . , \u03b7}, where \u03b41 = 0. The complete algorithm, including both optimal average price estimation and volume discounts, is summarized in Figure 3. Data-Driven Buyback Probability Estimation Even if estimating \u03b3 in an online fashion would be a natural approach, it is prohibitive due to our environment\u2019s strong seasonality and non-stationary nature. Indeed, studying customers\u2019 churn usually requires a large amount of contextual data and is a challenging task for many \ufb01elds (Kamalraj and Malathi 2013). Instead, we propose a methodology purely based on the available transaction data (where the customers are uniquely identi\ufb01ed) to estimate \u03b3 in an of\ufb02ine fashion. We de\ufb01ne two time-intervals: a \u201cmeasure\u201d period TM \u2208\u00af T \fMonday TuesdayWednesday Thursday Friday Saturday Sunday 0 5 10 15 20 25 Day of the week % of Weekly Volumes Figure 5: Seasonality over a single week (mean \u00b1 std). and a \u201ccontrol\u201d one TC \u2208\u00af T , where TM \u2229TC = \u2205and \u00af T is the set of time periods i.e., contiguous sequences of times. Intuitively, we observe which customers buy during the \u201cmeasure\u201d period and compute what percentage of them come back in the subsequent period, the \u201ccontrol\u201d one. Formally, given a set of customers G := {g1, . . . , gL}, we de\ufb01ne a function H : G \u00d7 \u00af T \u2192N that associates a customer to the number of purchases made in a speci\ufb01c period. We also introduce h : \u00af T \u2192P(G), which maps a period of time into the subset of unique customers who made at least one purchase in that period.6 Thus, we are able to compute the total number of returns R and non-returning customers A, formally R = P g\u2208G [H(g, TM)]\u2212|h(TM)|+|h(TM)\u2229h(TC)|, and A = |h(TM)|\u2212|h(TM)\u2229h(TC)|. Notice that R is composed by those customers that have already occurred during the \u201cmeasure\u201d period (since a customer that purchased n times during TM already returned n\u22121 times) and those happened during the \u201ccontrol\u201d period (customers seen in both periods). Instead, A is the number of customers who purchased at least one time in the \u201cmeasure\u201d period and did not show up during the \u201ccontrol\u201d one. Notice that the functions H and h can be easily calculated starting from transaction data once the two periods have been de\ufb01ned. 
In our test, we decided to use the 6 months before the experimental campaign as a \u201ccontrol\u201d period and the previous 6 months as a \u201cmeasure\u201d one. Finally, thanks to the two quantities de\ufb01ned above, the value of \u03b3 can be approximated as \u03b3 = R R+A. Data-Driven Threshold Selection Threshold values {\u03c9k}\u03b7 k=1 can be selected in several ways. Our solution is to de\ufb01ne a split criterion that divides the products within the shopping baskets into \u03b7 sets of equal cardinality. Formally, we de\ufb01ne q : N \u2192N as the function that maps a number of units into the number of shopping baskets that contain that many units. Notice that if B is the total number of shopping baskets over the period examined, then q(z) = B \u00b7 \u03b2z, where the values of \u03b2z and B can be estimated from the transaction data, and, consequently, q(\u00b7) can be computed entirely from data. Intuitively, we can build a data set where each z \u2208N is repeated q(z) \u00b7 z times, de\ufb01ned as: Q := {1, . . . , 1 | {z } q(1) times , . . . , d, . . . , d | {z } d \u00b7 q(d) times , . . .}. (16) 6With P(A) we denote the power set of A. 0 5 10 15 20 25 30 35 40 45 50 0 1 2 3 4 Week % of Yearly Volumes Test Period Figure 6: Seasonality over the weeks of a year (mean \u00b1 std). To get the k-th threshold, we extract the l |Q| \u00b7 k \u03b7 m -th element from the sorted data set Q de\ufb01ned as in Eq. 16. Experimental Evaluation We performed a real-world experiment in collaboration with an Italian e-commerce company in which our algorithms priced a set of products adopting a long-tail economic model (Anderson 2006). The e-commerce website collects data on each purchase (date and time), as a row of a transaction data set, including features such as the identi\ufb01er of the purchased product, the number of units sold, the price, the cost, and the class of the customer (business or private) inferred from the \ufb01scal status observed after the purchase. The experimental campaign focused on products usually bought with high volumes to evaluate our algorithms better. We resorted to an online A/B. The experimental campaign was conducted in one of the main categories of the e-commerce website, with a test set (A) composed of Nt = 295 products and a control set (B) composed of Nc = 33 products of the same category and with the same characteristics.7 The test included products with a yearly turnover of 300 KEuros and a total pro\ufb01t of 83 KEuros. The algorithm produces new prices every 7 days since a signi\ufb01cant intra-week seasonality has been observed (see Figure 5). Moreover, the products sold by the e-commerce website are subject to a signi\ufb01cant seasonality over different periods of the years, as shown in Figure 6. Due to the particular kind of products sold we dealt with and the nature of the target customer segment, volume discounts are crucial to the business since they affect customers\u2019 loyalty and the logistic organization of the company. The e-commerce website\u2019s specialists de\ufb01ned the number \u03b7 = 3 of volume thresholds that should be displayed for every product. The test was conducted for 17 weeks, from 16 June 2021 to 17 October 2021, during which no communication and marketing actions were performed in attempt not to in\ufb02uence the customers\u2019 behavior. The business goal was to maximize test set\u2019s (A) average pro\ufb01t R(A) T , as de\ufb01ned in Eq. (2), where T = 17. 
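A minimal sketch of the Volume Discounts Learning phase described above is reported below: it derives the thresholds from the quantiles of the unit-weighted basket-size multiset Q (one reasonable reading of Eq. 16), takes the maximal discounts allowed by Eq. (13), and computes the per-tier margins via Eqs. (12) and (15). The basket sizes, γ, the desired margin m*, and the reference need N are illustrative inputs, not the paper's data; in practice γ is estimated from transactions as R / (R + A), and each tier is assumed to contain at least one observed basket.

import math
import numpy as np

def volume_discount_schedule(basket_sizes, eta, gamma, m_star, N):
    # Thresholds: the k-th threshold is the ceil(|Q| * k / eta)-th element of the
    # sorted multiset Q, where each basket of z units contributes z copies of z.
    Q = np.sort(np.repeat(basket_sizes, basket_sizes))
    omegas = [1] + [int(Q[min(len(Q), math.ceil(len(Q) * k / eta)) - 1]) for k in range(1, eta)]
    edges = omegas + [max(basket_sizes) + 1]
    # Average basket volume V_bar and per-tier averages V_bar_k.
    V_bar = float(np.mean(basket_sizes))
    V_k = [float(np.mean([z for z in basket_sizes if edges[k] <= z < edges[k + 1]]))
           for k in range(eta)]
    # Maximal discounts from Eq. (13), with delta_1 = 0 by definition.
    deltas = [0.0] + [1.0 - (1.0 - gamma ** N) /
                      (V_k[k] * (1.0 - gamma ** math.ceil(N / V_k[k])))
                      for k in range(1, eta)]
    # Single-unit margin m_bar from Eq. (15), then per-tier margins m_bar_k = m_bar * (1 - delta_k).
    m_bar = m_star * V_bar / sum((1.0 - d) * v for d, v in zip(deltas, V_k))
    return omegas, deltas, [m_bar * (1.0 - d) for d in deltas]

# Illustrative call: basket_sizes lists the units of the product in each observed basket.
print(volume_discount_schedule(basket_sizes=[1, 1, 2, 2, 3, 4, 6, 8, 12],
                               eta=3, gamma=0.9, m_star=3.0, N=12))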
The resulting score R_T^{(A)} is to be compared with the one achieved by the B set, R_T^{(B)}, over the same period. (The test and the control sets were defined by e-commerce specialists according to both technical and market issues.) Figure 7: Distribution of the two-sided permutation test statistics before the test, R = 10000 random permutations. Figure 8: Distribution of the two-sided permutation test statistics after the test, R = 10000 random permutations. To evaluate the performance of our algorithm, we performed a statistical test applied to the product margins to check whether the two groups are comparable. More specifically, for each product, we computed the average weekly net margins (R_t as in Eq. 1) during the first six months of 2021 (i.e., t is in the first 26 weeks of 2021), and we designed a test to check whether the median and mean of the net margin across the products of the A set are larger than the ones of the B set. We performed one-sided permutation tests with the null hypothesis being "The A set does not have a higher median/mean of net margin w.r.t. the B set". Figure 7 shows the distributions of the tests' statistics together with the observed ones: the resulting p-values concerning medians and means are 0.54 and 0.45, respectively, and, therefore, there is not enough statistical evidence to say the two sets are different. This shows that set A does not have a larger median/mean w.r.t. set B on the chosen performance metric before the beginning of the test. To run the A/B test described above, we choose the BLR basis functions following different criteria. To model the price elasticity over the customers' base, we choose reverted hyperbolic tangent functions. To grasp the irregular nature of the e-commerce's seasonality, we choose Radial Basis Functions (RBF). Finally, the trend is modeled by choosing polynomial basis functions. Both RBFs and reverted hyperbolic tangents are evaluated with different shifts and scales, while polynomial features are evaluated with different degrees. The algorithm ran in a Docker container with a Python 3.8 environment on Linux. Every week, the algorithm made an SQL query to retrieve the data about the products and then returned the prices. The hardware was a quad-core Intel Core i7 8th Gen with 8 GB of DDR4 RAM. The time required for a run of the algorithm over all the products is about 25 minutes. Given that the algorithm is applied to each product independently of the others, the running time scales linearly w.r.t. the number of products. Figure 9: Distribution of the objective function improvements in the Test and Control sets. Results The goods priced by PVD-B during the testing period provided an improvement (on average) in terms of the performance metric R_T^{(A)} of +55% w.r.t. the one R_T^{(B)} of the control set of goods, or, formally, R_T^{(A)} / R_T^{(B)} = 1.55.
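The one-sided permutation tests used to compare the two sets of products can be sketched as follows; the margins below are synthetic placeholders, not the company's data, and the +1 smoothing of the p-value is a common choice rather than a detail stated in the paper.

import numpy as np

def permutation_pvalue(a, b, stat=np.mean, n_perm=10000, seed=0):
    # One-sided permutation test of H0: "set A does not have a higher stat than set B".
    rng = np.random.default_rng(seed)
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    observed = stat(a) - stat(b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        count += (stat(perm[:len(a)]) - stat(perm[len(a):])) >= observed
    return (count + 1) / (n_perm + 1)

# Average weekly net margins of the A (test) and B (control) products (synthetic placeholders).
margins_a = np.random.default_rng(1).normal(10.0, 3.0, size=295)
margins_b = np.random.default_rng(2).normal(9.0, 3.0, size=33)
print(permutation_pvalue(margins_a, margins_b, stat=np.median),
      permutation_pvalue(margins_a, margins_b, stat=np.mean))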
After 17 weeks, we performed the same statistical test on the weekly performance metric obtained between the two sets of products during the test period. Figure 8 shows the distribution of the test\u2019s statistics along with the observed ones. The two tests, performed with the same seed and number of random permutations of the previous, yielded this time p-values on the medians and the means of respectively of 0.01 and 0.02, allowing us to reject the null hypothesis and conclude that the test set of products has both a larger median and mean of the average weekly performance metric w.r.t. the control set of products, with at least a con\ufb01dence of 98%. Regarding the performances on a product-wise level, we report in Figure 9 the sorted percentages of improvement on the performance metric w.r.t. to the period of 2021 preceding the test for every single product. In the test set, 138 products over 295 (\u224847%) improved their average performance w.r.t. the corresponding period of 2021, while in control set only 8 products over 33 (\u224825%) were able to improve. This corroborates the idea that the proposed method is able to improve the performance of the e-commerce website by in\ufb02uencing the purchase process of a large number of products. Effect of Volume Discounts A \ufb01nal analysis consists in evaluating how the volume discounts algorithm can modify the probability distribution of the units count of the same product in a basket. More precisely, we need to check whether the algorithm affects the volumes \u00af \u03b2k to increase the pro\ufb01t. In our speci\ufb01c setting, this corresponds to checking if the value \u00af \u03b21 decreases in favor of \u00af \u03b22 and/or \u00af \u03b23. For this analysis, the parameters of the volume-discount algorithm have been estimated using the period from 16 June 2019 to 16 June 2021, and the estimation of \u03b3 was performed on the data split in an estimation period TM from 17 June 2019 to 16 June 2020 and a control one TC from 17 June 2020 to 16 June 2021. We analyze the context from both customers\u2019 and products\u2019 perspectives. Figure 10 shows the distribution of the number of orders performed by the customers. The histogram shows almost half of the customers perform a single \f0 2 4 6 8 10 12 14 16 18 20 0 0.2 0.4 Number of purchases Fraction of the customers Figure 10: Number of purchases done by the customers. 0 0.2 0.4 0.6 0.8 1 0 20 40 Estimated \u03b3 Number of products Figure 11: Distribution of the parameter \u03b3. order and leave the shop without coming back. Figure 11 analyze the same phenomenon from the product perspective. Given a product, we can interpret the \u03b3 parameter as the probability that a customer buying that product will, sooner or later, buy such product again. Higher is the \u03b3, the more conservative will be the discount strategy we adopt. Results The estimated per-product discounts between the three thresholds are presented in Figure 12. The PVD-B algorithm applies an average discount of \u224810% for the second volume interval and \u224820% for the third one. This implies a shift in the number of purchases among the intervals. In Table 1, the variations of the three \u00af \u03b2k are reported: during the test, we achieved an increase of the values \u00af \u03b22 and \u00af \u03b23 while observing a reduction in \u00af \u03b21. Finally, in Table 2 we report the average variations in terms of average units per basket of the 4 above-mentioned products during the test period. 
The effect of applying the PVD-B algorithm and, therefore, introducing volume discounts modi\ufb01es the basket\u2019s average size by increasing the units purchased by \u224833%. Considerations After the A/B Test After the end of the A/B test, the e-commerce specialists were satis\ufb01ed with the achieved results, including the performance of the volume-discount algorithm. Thus, the company extended the adoption of our algorithm to all the products presenting a suf\ufb01cient amount of volumes in the catalog of the e-commerce website (\u22481200 products). Currently, our algorithm prices about 1, 200 products generating a cumulative annual revenue of about 1.5 MEuro, which corresponds to about 50% of the total e-commerce website turnover. Furthermore, the algorithm now runs in the cloud \u22120.2 \u22120.1 0 0 10 20 30 Variation Number of products (a) From p1t to p2t. \u22120.4 \u22120.3 \u22120.2 \u22120.1 0 0 10 20 30 40 Variation (b) From p1t to p3t. Figure 12: Average (on time) discounts between volumes\u2019 thresholds in test products. Table 1: Variations of \u00af \u03b2k after the test period. Product \u2206\u00af \u03b21 \u2206\u00af \u03b22 \u2206\u00af \u03b23 1 -32% +10% +22% 2 -26% +25% +1% 3 -15% +4% +11% 4 -5% +1% +4% Mean -19.5% +10% +9.5% Table 2: Variation of units per basket after the test period. Product \u2206units 1 +63% 2 +43% 3 +11% 4 +14% Mean +33% in a SaaS fashion. An automatized routine runs a query on the dataset of the e-commerce website, extracting the transaction data needed and then running the algorithm. The results are provided to the business unit in a csv \ufb01le. Conclusion and Future Works In this paper, we present PVD-B, an algorithm capable of de\ufb01ning the price and volume discounts in an online setting. Our approach exploits the transaction data of the ecommerce website to optimize the pricing strategy in an online fashion. We test our approach in a real-world 4-months experiment by optimizing the price of 295 products of an e-commerce website. The results show that our approach increases the e-commerce website pro\ufb01ts by outperforming the previous management and gaining an increase of 55%. In future works, we plan to insert in the model time correlations between the purchases and the effect of loyalty in increasing revenue. Furthermore, in this work, we price products independently, while cross-selling approaches could further increase pro\ufb01ts for some classes of products. The design of algorithms taking into account also these dependencies constitutes an interesting new line of work." + }, + { + "url": "http://arxiv.org/abs/2211.08997v2", + "title": "Dynamical Linear Bandits", + "abstract": "In many real-world sequential decision-making problems, an action does not\nimmediately reflect on the feedback and spreads its effects over a long time\nframe. For instance, in online advertising, investing in a platform produces an\ninstantaneous increase of awareness, but the actual reward, i.e., a conversion,\nmight occur far in the future. Furthermore, whether a conversion takes place\ndepends on: how fast the awareness grows, its vanishing effects, and the\nsynergy or interference with other advertising platforms. Previous work has\ninvestigated the Multi-Armed Bandit framework with the possibility of delayed\nand aggregated feedback, without a particular structure on how an action\npropagates in the future, disregarding possible dynamical effects. 
In this\npaper, we introduce a novel setting, the Dynamical Linear Bandits (DLB), an\nextension of the linear bandits characterized by a hidden state. When an action\nis performed, the learner observes a noisy reward whose mean is a linear\nfunction of the hidden state and of the action. Then, the hidden state evolves\naccording to linear dynamics, affected by the performed action too. We start by\nintroducing the setting, discussing the notion of optimal policy, and deriving\nan expected regret lower bound. Then, we provide an optimistic regret\nminimization algorithm, Dynamical Linear Upper Confidence Bound (DynLin-UCB),\nthat suffers an expected regret of order $\\widetilde{\\mathcal{O}} \\Big( \\frac{d\n\\sqrt{T}}{(1-\\overline{\\rho})^{3/2}} \\Big)$, where $\\overline{\\rho}$ is a\nmeasure of the stability of the system, and $d$ is the dimension of the action\nvector. Finally, we conduct a numerical validation on a synthetic environment\nand on real-world data to show the effectiveness of DynLin-UCB in comparison\nwith several baselines.", + "authors": "Marco Mussi, Alberto Maria Metelli, Marcello Restelli", + "published": "2022-11-16", + "updated": "2023-05-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction In a large variety of sequential decision-making problems, a learner must choose an action that, when executed, determines an evolution of the underlying system state that is hidden to the learner. In these partially observable problems, the learner observes a reward (i.e., feedback) representing the combined effect of multiple actions played in the past. For instance, in online advertising campaigns, the process that leads to a conversion, i.e., marketing funnel (Court et al., 2009), is characterized by complex dynamics and comprises several phases. When heterogeneous campaigns/platforms are involved, a profitable budget investment policy has to account for the interplay between campaigns/platforms. In this scenario, a conversion (e.g., a user\u2019s purchase of a promoted product) should be attributed not only to the latest ad the user was exposed to, but also to previous ones (Berman, 2018). The joint consideration of each funnel phase is a fundamental step towards an optimal investment solution while considering the advertising campaigns/platforms independently leads to sub-optimal solutions. Consider, for instance, a simplified version of the funnel with two types of campaigns: awareness (i.e., impression) ads and conversion ads. The first kind of ad aims at improving brand awareness, while the latter aims at creating the actual conversion. If we evaluate the performances in terms of conversions only, we will discover that impression ads are not instantaneously effective in creating conversions, so we will be tempted to reduce the budget invested in such a campaign. However, this approach is sub-optimal because impression ads increase the chance to convert when a conversion ad is shown after the impression (e.g., Hoban & Bucklin, 2015). In addition, the effect of some ads, especially impression ads delivered via television, may be delayed. It has been demonstrated (Chapelle, 2014) that users remember advertising over time in a vanishing way, leading to consequences that non-dynamical models cannot capture. This kind of interplay comprises more general scenarios than the simple reward delay, including the case where the interaction is governed by a dynamics hidden to the observer. 
While this scenario can be indubitably modeled as a Partially Observable Markov Decision Process (POMDP, \u02da Astr\u00a8 om, 1965), the complexity of the framework and its general1 arXiv:2211.08997v2 [cs.LG] 30 May 2023 \fDynamical Linear Bandits ity are often not required to capture the main features of the problem. Indeed, for specific classes of problems, the Multi-Armed Bandit (MAB, Lattimore & Szepesv\u00b4 ari, 2020) literature has explored the possibility of experiencing delayed reward either assuming that the actual reward will be observed, individually, in the future (e.g., Joulani et al., 2013) or with the more realistic assumption that an aggregated feedback is available (e.g., Pike-Burke et al., 2018), with also specific applications to online advertising (Vernade et al., 2017). Although effective in dealing with delay effects and the possibility of a reward spread in the future (CesaBianchi et al., 2018), they do not account for the additional, more complex, dynamical effects, which can be regarded as the evolution of a hidden state. In this work, we take a different perspective. We propose to model the non-observable dynamical effects underlying the phenomena as a Linear Time-Invariant (LTI) system (Hespanha, 2018). In particular, the system is characterized by a hidden internal state xt (e.g., awareness) which evolves via linear dynamics fed by the action ut (e.g., amount invested) and affected by noise. At each round, the learner experiences a reward yt (e.g., conversions), which is a noisy observation that linearly combines the state xt and the action ut. Our goal consists in learning an optimal policy so as to maximize the expected cumulative reward. We call this setting Dynamical Linear Bandits (DLBs) that, as we shall see, reduces to linear bandits (Abbasi-Yadkori et al., 2011) when no dynamics are involved. Because of the dynamics, the effect of each action persists over time indefinitely but, under stability conditions, it vanishes asymptotically. This allows representing interference and synergy between platforms, thanks to the dynamic nature of the system. Contributions In Section 2, we introduce the Dynamical Linear Bandit (DLB) setting to represent sequential decision-making problems characterized by a hidden state that evolves linearly according to an unknown dynamics. We show that, under stability conditions, the optimal policy corresponds to playing the constant action that leads the system to the most profitable steady state. Then, we derive an expected regret lower bound of order \u2126 \u00b4 d ? T p1\u00b4\u03c1q1{2 \u00af , being d the dimensionality of the action space and \u03c1 \u0103 1 the spectral radius of the dynamical matrix of the system evolution law.1 In Section 3, we propose a novel optimistic regret minimization algorithm, Dynamical Linear Upper Confidence Bound (DynLin-UCB), for the DLB setting. DynLin-UCB takes inspiration from Lin-UCB but subdivides the optimization horizon T into increasing-length epochs. In each epoch, an action is selected optimistically and kept constant (i.e., persisted) so that the system approximately reaches the steady state. We provide a regret analysis for DynLin-UCB showing that, under certain assumptions, 1The smaller \u03c1, the faster the system reaches its steady state. it enjoys r O \u00b4 d ? T p1\u00b4\u03c1q3{2 \u00af expected regret. In Section 5, we provide a numerical validation, with both synthetic and realworld data, compared with bandit baselines. 
The proofs of all the results are reported in Appendix B. Notation Let a, b P N with a \u010f b, we introduce the symbols: Ja, bK :\u201c ta, . . . , bu, JbK :\u201c J1, bK, and Ja, 8M \u201c ta, a ` 1, . . . u . Let x, y P Rn, we denote with xx, yy \u201c xTy \u201c \u0159n j\u201c1 xiyi the inner product. For a positive semidefinite matrix A P Rn\u02c6n, we denote with }x}2 A \u201c xTAx the weighted 2-norm. The spectral radius \u03c1pAq is the largest absolute value of the eigenvalues of A, the spectral norm }A}2 is the square root of the maximum eigenvalue of ATA. We introduce the maximum spectral norm to spectral radius ratio of the powers of A defined as \u03a6pAq \u201c sup\u03c4\u011b0 }A\u03c4}2{\u03c1pAq\u03c4 (Oymak & Ozay, 2019). We denote with In the identity matrix of order n and with 0n the vector of all zeros of dimension n. A random vector x P Rn is \u03c32-subgaussian, in the sense of Hsu et al. (2012), if for every vector \u03b6 P Rn it holds that E rexp px\u03b6, xyqs \u010f expp}\u03b6}2 2\u03c32{2q. 2. Setting In this section, we introduce the Dynamical Linear Bandits (DLBs), the learner-environment interaction, assumptions, and regret (Section 2.1). Then, we derive a closed-form expression for the optimal policy for DLBs (Section 2.2). Finally, we derive a lower bound to the regret, highlighting the intrinsic complexities of the DLB setting (Section 2.3). 2.1. Problem Formulation In a Dynamical Linear Bandit (DLB), the environment is characterized by a hidden state, i.e., a n-dimensional real vector, initialized to x1 P X, where X \u010e Rn is the state space. At each round t P N, the environment is in the hidden state xt P X, the learner chooses an action, i.e., a d-dimensional real vector ut P U, where U \u010e Rd is the action space. Then, the learner receives a noisy reward yt \u201c x\u03c9, xty ` x\u03b8, uty ` \u03b7t P Y, where Y \u010e R is the reward space, \u03c9 P Rn, \u03b8 P Rd are unknown parameters, and \u03b7t is a zero-mean \u03c32\u2013subgaussian random noise, conditioned to the past. Then, the environment evolves to the new state according to the unknown linear dynamics xt`1 \u201c Axt ` But ` \u03f5t, where A P Rn\u02c6n is the dynamic matrix, B P Rn\u02c6d is the action-state matrix, and \u03f5t is a zero-mean \u03c32\u2013subgaussian random noise, conditioned to the past, independent of \u03b7t.2 Remark 2.1. The setting proposed above is a particular case of a POMDP (\u02da Astr\u00a8 om, 1965), in which the state xt is non-observable, while the learner accesses the noisy 2n is the order of the LTI system (Kalman, 1963). We make no assumption on the value of n and on its knowledge. 2 \fDynamical Linear Bandits observation yt that corresponds to the noisy reward too. Furthermore, the setting can be viewed as a MISO (Multiple Input Single Output) discrete-time LTI system (Kalman, 1963). Finally, the DLB reduces to (non-contextual) linear bandit (Abbasi-Yadkori et al., 2011) when the hidden state does not affect the reward, i.e., when \u03c9 \u201c 0. 
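For concreteness, the interaction protocol above can be summarized by a minimal simulation sketch; the class name, the Gaussian noise, and the shared noise scale are illustrative assumptions rather than part of the formal model, and only the reward y_t is revealed to the learner while the state x_t stays hidden.

```python
import numpy as np

class DLBEnv:
    """Dynamical Linear Bandit sketch: hidden state x_t, observed noisy reward y_t."""

    def __init__(self, A, B, theta, omega, sigma=0.01, seed=0):
        self.A, self.B = A, B
        self.theta, self.omega = theta, omega
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)
        self.x = np.zeros(A.shape[0])   # hidden state x_1 (never shown to the learner)

    def step(self, u):
        # reward: y_t = <omega, x_t> + <theta, u_t> + eta_t
        y = self.omega @ self.x + self.theta @ u + self.sigma * self.rng.standard_normal()
        # state evolution: x_{t+1} = A x_t + B u_t + eps_t
        self.x = (self.A @ self.x + self.B @ u
                  + self.sigma * self.rng.standard_normal(self.x.shape))
        return y   # only the (noisy) reward is returned
```

The key point the sketch makes explicit is that the action u_t keeps influencing future rewards through x_t even though the learner only ever observes y_t.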
Markov Parameters We revise a useful representation, that for every H P JtK allows expressing yt in terms of the sequence of the most recent H ` 1 actions pusqsPJt\u00b4H,tK, reward noise \u03b7t, H state noises p\u03f5sqsPJt\u00b4H,t\u00b41K, and starting state xt\u00b4H (Ho & Kalman, 1966; Oymak & Ozay, 2019; Tsiamis & Pappas, 2019; Sarkar et al., 2021): yt\u201c H \u00ff s\u201c0 xhtsu,ut\u00b4sy loooooooomoooooooon action effect `\u03c9TAHxt\u00b4H loooooomoooooon starting state `\u03b7t` H \u00ff s\u201c1 \u03c9TAs\u00b41\u03f5t\u00b4s looooooooooomooooooooooon noise , (1) where the sequence of vectors htsu P Rd for every s P N are called Markov parameters and are defined as: ht0u \u201c \u03b8 and htsu \u201c BTpAs\u00b41qT\u03c9 if s \u011b 1. Furthermore, we introduce the cumulative Markov parameters, defined for every s, s1 P N with s \u010f s1 as hJs,s1K \u201c \u0159s1 l\u201cs htlu and the corresponding limit as s1 \u00d1 `8, i.e., hJs,`8M \u201c \u0159`8 l\u201cs htlu. Finally, we use the abbreviation h \u201c hJ0,`8M \u201c \u03b8 ` BTpIn \u00b4 Aq\u00b4T\u03c9. We will make use of the following standard assumption related to the stability of the dynamic matrix A, widely employed in discrete\u2013time LTI literature (Oymak & Ozay, 2019; Lale et al., 2020a;b). Assumption 2.1 (Stability). The spectral radius of A is strictly smaller than 1, i.e., \u03c1pAq \u0103 1, and the maximum spectral norm to spectral radius ratio of the powers of A is bounded, i.e., \u03a6pAq \u0103 `8.3 Policies and Performance The learner\u2019s behavior is modeled via a deterministic policy \u03c0 \u201c p\u03c0tqtPN defined, for every round t P N, as \u03c0t : Ht\u00b41 \u00d1 U, mapping the history of observations Ht\u00b41 \u201c pu1, y1, . . . , ut\u00b41, yt\u00b41q P Ht\u00b41 to an action ut \u201c \u03c0tpHt\u00b41q P U, where Ht\u00b41 \u201c pU \u02c6 Yqt\u00b41 is the set of histories of length t \u00b4 1. The performance of a policy \u03c0 is evaluated in terms of the (infinite-horizon) expected average reward: Jp\u03c0q :\u201c lim inf H\u00d1`8 E \u00ab 1 H H \u00ff t\u201c1 yt ff , (2) where $ \u2019 & \u2019 % xt`1 \u201c Axt ` But ` \u03f5t yt \u201c x\u03c9, xty ` x\u03b8, uty ` \u03b7t ut \u201c \u03c0tpHt\u00b41q , @t P N, where the expectation is taken w.r.t. the randomness of the state noise \u03f5t and reward noise \u03b7t. If a policy \u03c0 is constant, i.e., \u03c0tpHt\u00b41q \u201c u for every t P N, we abbreviate Jpuq \u201c 3The latter is a mild assumption: if A is diagonalizable as A \u201c Q\u039bQ\u00b41, then \u03a6pAq \u010f }Q}2}Q\u00b41}2 and it is finite. In particular, if A is symmetric then \u03a6pAq \u201c 1. Jp\u03c0q. A policy \u03c0\u02da is an optimal policy if it maximizes the expected average reward, i.e., \u03c0\u02da P arg max\u03c0 Jp\u03c0q, and its performance is denoted by J\u02da :\u201c Jp\u03c0\u02daq. We further introduce the following assumption that requires the boundedness of the norms of the relevant quantities. Assumption 2.2 (Boundedness). There exist \u0398, \u2126, B, U \u0103 `8 s.t.: }\u03b8}2 \u010f \u0398, }\u03c9}2 \u010f \u2126, }B}2 \u010f B, supuPU }u}2 \u010f U, and supxPX }x}2 \u010f X, supuPU |Jpuq| \u010f 1.4 Regret The regret suffered by playing a policy \u03c0, competing against the optimal infinite-horizon policy \u03c0\u02da over a learning horizon T P N is given by: Rp\u03c0, Tq :\u201c TJ\u02da \u00b4 T \u00ff t\u201c1 yt, (3) where yt is the sequence of rewards collected by playing \u03c0 as in Equation (2). 
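As an informal illustration of the role of Assumption 2.1, the truncated sum of Markov parameters h_{[0,H]} introduced above approaches the cumulative parameter h at a geometric rate governed by ρ(A), which is what will later make logarithmically long epochs sufficient; the matrices below are illustrative placeholders, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 3, 2
A = 0.5 * np.eye(n)                 # illustrative dynamic matrix, rho(A) = 0.5 < 1
B = rng.normal(size=(n, d))
theta, omega = rng.normal(size=d), rng.normal(size=n)

# Cumulative Markov parameter: h = theta + B^T (I - A)^{-T} omega
h = theta + B.T @ np.linalg.solve((np.eye(n) - A).T, omega)

# Truncated sum h_{[0,H]} = theta + sum_{s=1}^{H} B^T (A^{s-1})^T omega
def h_trunc(H):
    out, A_pow = theta.copy(), np.eye(n)   # A_pow holds A^{s-1}
    for _ in range(H):
        out = out + B.T @ A_pow.T @ omega
        A_pow = A_pow @ A
    return out

for H in (1, 2, 5, 10, 20):
    print(H, np.linalg.norm(h_trunc(H) - h))   # error shrinks roughly as rho(A)^H
```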
The goal of the learner consists in minimizing the expected regret E Rp\u03c0, Tq, where the expectation is taken w.r.t. the randomness of the reward. 2.2. Optimal Policy In this section, we derive a closed-form expression for the optimal policy \u03c0\u02da for the infinite\u2013horizon objective function, as introduced in Equation (2). Theorem 2.1 (Optimal Policy). Under Assumptions 2.1 and 2.2, an optimal policy \u03c0\u02da maximizing the (infinitehorizon) expected average reward Jp\u03c0q (Equation 2), for every round t P N and history Ht\u00b41 P Ht\u00b41 is given by: \u03c0\u02da t pHt\u00b41q\u201cu\u02da where u\u02daPargmax uPU Jpuq\u201cxh,uy. (4) Some remarks are in order. The optimal policy plays the constant action u\u02da P U which brings the system in the \u201cmost profitable\u201d steady-state.5 Indeed, the expression xh, uy can be rewritten expanding the cumulative Markov parameter as p\u03b8T ` \u03c9TpIn \u00b4 Aq\u00b41Bqu\u02da and x\u02da \u201c pIn \u00b4 Aq\u00b41Bu\u02da is the expression of the steady state x\u02da \u201c Ax\u02da ` Bu\u02da, when applying action u\u02da. It is worth noting the role of Assumption 2.1 which guarantees the existence of the inverse pIn \u00b4 Aq\u00b41. In this sense, our problem shares the constant nature of the optimal policy with the linear bandit setting (Abbasi-Yadkori et al., 2011), although ours is characterized by an evolving state, which introduces a new tradeoff in the action selection. From the LTI system perspective, this implies that we can restrict to open-loop stationary policies. The reason why DLBs do not benefit from closed-loop policies, differently from other classical problems, such as 4The assumption of the bounded state norm }x}2 \u010f X holds whenever the state noise \u03f5 is bounded. As shown by Agarwal et al. (2019), this assumption can be relaxed, for unbounded subgaussian noise, by conditioning to the event that none of the noise vectors are ever large at the cost of an additional log T factor in the regret. 5In Appendix C, we show that the optimal policy is non\u2013 stationary for the finite\u2013horizon case. 3 \fDynamical Linear Bandits the LQG (Abbasi-Yadkori & Szepesv\u00b4 ari, 2011), lies in the linearity of the reward yt and in the additive noise \u03b7t and \u03f5t, making their presence irrelevant (in expectation) for control purposes. Nonetheless, as we shall see, our problem poses additional challenges compared to linear bandits since, in order to assess the quality of an action u P U, instantaneous rewards are not reliable, and we need to let the system evolve to the steady state and, only then, observe the reward. 2.3. Regret Lower Bound In this section, we provide a lower bound to the expected regret that any learning algorithm suffers when addressing the learning problem in a DLB. Theorem 2.2 (Lower Bound). For any policy \u03c0 (even stochastic), there exists a DLB fulfilling Assumptions 2.1 and 2.2, such that for sufficiently large T \u011b O \u00b4 d2 1\u00b4\u03c1pAq \u00af , policy \u03c0 suffers an expected regret lower bounded by: ERp\u03c0, Tq \u011b \u2126 \u02dc d ? T p1 \u00b4 \u03c1pAqq 1 2 \u00b8 . The lower bound highlights the main challenges of the DLB learning problem. First of all, we observe a dependence on 1{p1 \u00b4 \u03c1pAqq, being \u03c1pAq the spectral radius of the matrix A. This is in line with the intuition that, as \u03c1pAq approaches 1, the problem becomes more challenging. 
Furthermore, we note that when \u03c1pAq \u201c 0, i.e., the problem has no dynamical effects, the lower bound matches the one of linear bandits (Lattimore & Szepesv\u00b4 ari, 2020). It is worth noting that, for technical reasons, the result of Theorem 2.2 is derived under the assumption that, at every round t P JTK, the agent observes both the state xt and the reward yt (see Appendix B). Clearly, this represents a simpler setting w.r.t. DLBs (in which xt is hidden) and, consequently, Theorem 2.2 is a viable lower bound for DLBs too. 3. Algorithm In this section, we present an optimistic regret minimization algorithm for the DLB setting. Dynamical Linear Upper Confidence Bound (DynLin-UCB), whose pseudocode is reported in Algorithm 1, requires the knowledge of an upperbound \u03c1 \u0103 1 on the spectral radius of the dynamic matrix A (i.e., \u03c1pAq \u010f \u03c1) and on the maximum spectral norm to spectral radius ratio \u03a6 \u0103 `8 (i.e., \u03a6pAq \u010f \u03a6), as well as the bounds on the relevant quantities of Assumption 2.2.6 6As an alternative, one can consider a more demanding requirement of the knowledge of a bound on the spectral norm }A}2 of A. Similar assumptions regarding the knowledge of analogous quantities are considered in the literature, e.g., decay of Markov operator norms (Simchowitz et al., 2020) and strong stability (Plevrakis & Hazan, 2020), spectral norm bound (Lale et al., 2020a). As a side note, the knowledge of \u03c1 \u011b \u03c1pAq (or an equivalent quantity) is proved to be unavoidable by Theorem 2.2. Indeed, if no restriction DynLin-UCB is based on the following simple observation. To assess the quality of action u P U, we need to persist in applying it so that the system approximately reaches the corresponding steady state and, then, observe the reward yt, representing a reliable estimate of Jpuq \u201c xh, uy. We shall show that, under Assumption 2.1, the number of rounds needed to approximately reach such a steady state is logarithmic in the learning horizon T and depends on the upper bound of the spectral norm \u03c1. After initializing the Gram matrix V0 \u201c \u03bbId and the vectors b0 and p h0 both to 0d (line 1), DynLin-UCB subdivides the learning horizon T in M \u010f T epochs. Each epoch m P JMK is composed of Hm `1 rounds, where Hm \u201c tlog m{ logp1{\u03c1qu is logarithmic in the epoch index m. At the beginning of each epoch, m P JMK, DynLin-UCB computes the upper confidence bound (UCB) index (line 4) defined for every u P U as: UCBtpuq :\u201c xp ht\u00b41, uy ` \u03b2t\u00b41 }u}V\u00b41 t\u00b41 , (5) where p ht\u00b41 \u201c V\u00b41 t\u00b41bt\u00b41 is the Ridge regression estimator of the cumulative Markov parameter h, as in Equation (4) and \u03b2t\u00b41 \u011b 0 is an exploration coefficient to be defined later. Similar to Lin-UCB (Abbasi-Yadkori et al., 2011), the index UCBtpuq is designed to be optimistic, i.e., Jpuq \u010f UCBtpuq in high-probability for all u P U. Then, the optimistic action ut P arg maxuPU UCBtpuq is executed (line 6) and persisted for the next Hm rounds (lines 811). The length of the epoch Hm is selected such that, under Assumption 2.1, the system has approximately reached the steady state after Hm ` 1 rounds. In this way, at the end of epoch m, the reward yt is an almost-unbiased sample of the steady-state performance Jputq. This sample is employed to update the Gram matrix estimate Vt and the vector bt (line 13), while the samples collected in the previous Hm rounds are discarded (line 9). 
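A compact sketch of the resulting procedure is reported below, assuming a finite set of candidate actions, an environment exposing only the noisy reward (as in the simulation sketch above), and a constant exploration coefficient in place of the schedule β_t defined later; it is meant to convey the epoch structure, not to reproduce Algorithm 1 verbatim.

```python
import numpy as np

def dynlin_ucb(env, actions, T, rho_bar, lam=1.0, beta=2.0):
    """Sketch of DynLin-UCB over a finite candidate action set `actions` (K x d)."""
    d = actions.shape[1]
    V, b = lam * np.eye(d), np.zeros(d)
    t, m = 0, 0
    while t < T:
        m += 1
        # Epoch length H_m = floor(log m / log(1/rho_bar)); zero if no dynamics assumed.
        H_m = 0 if rho_bar <= 0 else int(np.log(m) // np.log(1.0 / rho_bar))
        # Optimistic action for this epoch: argmax <h_hat, u> + beta * ||u||_{V^{-1}}
        h_hat = np.linalg.solve(V, b)
        V_inv = np.linalg.inv(V)
        widths = np.sqrt(np.einsum('ij,jk,ik->i', actions, V_inv, actions))
        u = actions[np.argmax(actions @ h_hat + beta * widths)]
        # Persist u for H_m + 1 rounds; only the last (near steady-state) reward is kept.
        for _ in range(H_m + 1):
            if t >= T:
                break
            y = env.step(u)
            t += 1
        else:
            V += np.outer(u, u)
            b += u * y
    return np.linalg.solve(V, b)   # final estimate of the cumulative Markov parameter
```

The only difference with respect to a standard Lin-UCB loop is that the Gram matrix and the vector b are updated once per epoch, with the sample collected after the system has had H_m rounds to approach its steady state.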
It is worth noting that by setting Hm \u201c 0 for all m P JMK, DynLin-UCB reduces to Lin-UCB. The following sections provide the concentration of the estimator p ht\u00b41 of h (Section 3.1) and the regret analysis of DynLin-UCB (Section 3.2). 3.1. Self-Normalized Concentration Inequality for the Cumulative Markov Parameter In this section, we provide a self-normalized concentration result for the estimate p ht of the cumulative Markov parameter h. For every epoch m P JMK, we denote with tm the last round of epoch m: t0 \u201c 0 and tm \u201c tm\u00b41 ` 1 ` Hm. At the end of each epoch m, we solve the Ridge regression problem, defined for every round t P JTK as: p ht\u201cargmin r hPRd \u00ff lPJMK:tl\u010ftm pytl \u00b4xr h,utlyq2`\u03bb \u203a \u203ar h \u203a \u203a2 2\u201cV\u00b41 t bt. on \u03c1pAq is enforced (i.e., just \u03c1pAq \u0103 1), one can always consider the DLB in which \u03c1pAq \u201c 1 \u00b4 1{T \u0103 1 making the regret lower bound degenerate to linear. 4 \fDynamical Linear Bandits Algorithm 1: DynLin-UCB. Input :Regularization parameter \u03bb \u0105 0, exploration coefficients p\u03b2t\u00b41qtPJT K, spectral radius upper bound 0 \u010f \u03c1 \u0103 1 1 Initialize t \u00d0 1, V0 \u201c \u03bbId, b0 \u201c 0d, p h0 \u201c 0d, 2 Define M \u201cmintM 1 PN :\u0159M1 m\u201c1 1`t log m logp1{\u03c1qu\u0105Tu\u00b41 3 for m P JMK do 4 Compute ut P arg maxuPU UCBtpuq 5 where UCBtpuq:\u201cxp ht\u00b41,uy`\u03b2t\u00b41 }u}V\u00b41 t\u00b41 6 Play arm ut and observe yt 7 Define Hm \u201c t log m logp1{\u03c1qu 8 for j P JHmK do 9 Update Vt \u201c Vt\u00b41, bt \u201c bt\u00b41 10 t \u00d0 t ` 1 11 Play arm ut \u201c ut\u00b41 and observe yt 12 end 13 Update Vt \u201c Vt\u00b41 ` utuT t, bt \u201c bt\u00b41 ` utyt 14 Compute p ht \u201c V\u00b41 t bt 15 t \u00d0 t ` 1 16 end We now present the following self-normalized maximal concentration inequality and, then, we compare it with the existing results in the literature. Theorem 3.1 (Self-Normalized Concentration). Let pp htqtPN be the sequence of solutions of the Ridge regression problems of Algorithm 1. Then, under Assumption 2.1 and 2.2, for every \u03bb \u011b 0 and \u03b4 P p0, 1q, with probability at least 1 \u00b4 \u03b4, simultaneously for all rounds t P N, it holds that: \u203a \u203a \u203ap ht \u00b4 h \u203a \u203a \u203a Vt \u010f c1 ? \u03bb logpept ` 1qq ` c2 ? \u03bb ` d 2r \u03c32 \u02c6 log \u02c61 \u03b4 \u02d9 ` 1 2 log \u02c6det pVtq \u03bbd \u02d9\u02d9 , where c1 \u201c U\u2126\u03a6pAq \u00b4 UB 1\u00b4\u03c1pAq ` X \u00af , c2 \u201c \u0398 ` \u2126B\u03a6pAq 1\u00b4\u03c1pAq , and r \u03c32 \u201c \u03c32 \u00b4 1 ` \u21262\u03a6pAq2 1\u00b4\u03c1pAq2 \u00af . First, we note that when \u2126\u201c 0 (\u03c9 \u201c 0n), i.e., the state does not affect the reward, the bound perfectly reduces to the selfnormalized concentration used in linear bandits (AbbasiYadkori et al., 2011, Theorem 1). In particular, we recognize the second term due to the regularization parameter \u03bb \u0105 0 and the third one, which involves the subgaussianity parameter r \u03c32, related to the joint contribution of the state and reward noises. Furthermore, the first term is an additional bias that derives from the epochs of length Hm ` 1. The choice of the value Hm represents one of the main technical novelties that, on the one hand, leads to a bias that conveniently grows logarithmically with t and, on the other hand, can be computed without the knowledge of T. 
It is worth looking at our result from the perspective of learning the LTI system parameters. We can compare our Theorem 3.1 with the concentration presented in (Lale et al., 2020a, Appendix C), which represents, to the best of our knowledge, the only result for the closed-loop identification of LTI systems with non-observable states. First, note that, although we focus on a MISO system (yt is a scalar, being our reward), extending our estimator to multiple-outputs (MIMO) is straightforward. Second, the approach of (Lale et al., 2020a) employs the predictive form of the LTI system to cope with the correlation introduced by closed-loop control. This choice allows for convenient analysis of the estimated Markov parameters of the predictive form. However, recovering the parameters of the original system requires an application of the Ho-Kalman method (Ho & Kalman, 1966) which, unfortunately, does not preserve the concentration properties in general, but only for persistently exciting actions. Our method, instead, forces to play an open-loop policy within a single epoch (each with logarithmic duration), while the overall behavior is closed-loop, as the next action depends on the previous-epoch estimates. In this way, we are able to provide a concentration guarantee on the parameters of the original system without assuming additional properties on the action signal. 3.2. Regret Analysis In this section, we provide the analysis of the regret of DynLin-UCB, when we select the exploration coefficient \u03b2t based on the knowledge of the upper bounds \u03c1 \u0103 1, \u03a6 \u0103 `8, and those specified in Assumption 2.2, defined for every round t P JTK as: \u03b2t :\u201c c1 ? \u03bb logpept ` 1qq ` c2 ? \u03bb ` d 2\u03c32 \u02c6 log \u02c61 \u03b4 \u02d9 ` d 2 log \u02c6 1 ` tU 2 d\u03bb \u02d9\u02d9 , where c1 \u201c U\u2126\u03a6 \u00b4 UB 1\u00b4\u03c1 ` X \u00af , c2 \u201c \u0398 ` \u2126B\u03a6 1\u00b4\u03c1 , and \u03c32 \u201c \u03c32 \u00b4 1 ` \u21262\u03a6 2 1\u00b4\u03c12 \u00af . The following result provides the bound on the expected regret of DynLin-UCB. Theorem 3.2 (Upper Bound). Under Assumptions 2.1 and 2.2, selecting \u03b2t as in Equation (6) and \u03b4 \u201c 1{T, DynLin-UCB suffers an expected regret bounded as (highlighting the dependencies on T, \u03c1, d, and \u03c3 only): E Rp\u03c0DynLin-UCB, Tq \u010f O \u02dc d\u03c3 ? Tplog Tq 3 2 1 \u00b4 \u03c1 ` ? dTplog Tq2 p1 \u00b4 \u03c1q 3 2 ` 1 p1 \u00b4 \u03c1pAqq2 \u00b8 . Proof Sketch. The analysis of DynLin-UCB poses additional challenges compared to that of Lin-UCB (AbbasiYadkori et al., 2011) because of the dynamic effects of the hidden state. The idea behind the proof is to first derive 5 \fDynamical Linear Bandits a bound on a different notion of regret, i.e., the offline regret: Roffp\u03c0,Tq\u201cTJ\u02da \u00b4\u0159T t\u201c1 Jputq, that compares J\u02da with the steady-state performance Jputq of the action ut \u201c\u03c0tpHt\u00b41q (Theorem B.2). This analysis of Roffp\u03c0,Tq can be comfortably carried out, by adopting a proof strategy similar to that of Lin-UCB. However, when applying action ut, the DLB does not immediately reach the performance Jputq as the expected reward Eryts experiences a transitional phase before converging to the steady state. Under stability (Assumption 2.1), it is possible to show that the expected offline regret and the expected regret differ by a constant: |ERp\u03c0,Tq\u00b4ERoffp\u03c0,Tq|\u010fOp1{p1\u00b4\u03c1pAqq2q (Lemma B.1). Some observations are in order. 
We first note a dependence on the term 1{p1 \u00b4 \u03c1q, which, in turn, depends on the upper bound \u03c1 of the spectral gap \u03c1pAq. If the system does not display a dynamics, i.e., we can set \u03c1 \u201c 0, we obtain a regret bound that, apart from logarithmic terms, coincides with that of Lin-UCB, i.e., r Opd\u03c3 ? Tq. Instead, for slowconverging systems, i.e., \u03c1 \u00ab 1, the regret bound enlarges, as expected. Clearly, a value of \u03c1 too large compared to the optimization horizon T (e.g., \u03c1 \u201c 1 \u00b4 1{T 1{3) makes the regret bound degenerate to linear. This is a case in which the underlying system is so slow that the whole horizon T is insufficient to approximately reach the steady state. Third, the regret bound is the sum of three components: the first one depends on the subgaussian proxy \u03c3 and is due to the noisy estimation of the relevant quantities; the second one is a bias due to the epoch-based structure of DynLin-UCB; finally, the third one is constant (does not depend on T) accounts for the time needed to reach the steady state. Remark 3.1 (Regret upper bound (Theorem 3.2) and lower bound (Theorem 2.2) Comparison). Apart from logarithmic terms, we notice a tight dependence on d and on T. Instead, concerning the spectral properties of A, in the upper bound, we experience a dependence on 1{p1\u00b4\u03c1q raised to a higher power (either 1 for the term multiplied by d and 3{2 for the term multiplied by ? d) w.r.t. the exponent appearing in the lower bound (i.e., 1{2). It is currently an open question whether the lower bound is not tight (which is obtained for a simpler setting in which the state is observable xt) or whether more efficient algorithms for DLBs can be designed. Furthermore, Theorem 3.2 highlights the impact of the upper bound \u03c1 compared with the true \u03c1pAq. 4. Related Works In this section, we survey and compare the literature with a particular focus on bandits with delayed, aggregated, and composite feedback (Joulani et al., 2013) and online control for Linear Time-Invariant (LTI) systems (Hespanha, 2018). Additional related works are reported in Appendix A. Bandits with Delayed/Aggregated/Composite Feedback The Multi-Armed Bandit setting has been widely employed as a principled approach to address sequential decision-making problems (Lattimore & Szepesv\u00b4 ari, 2020). The possibility of experiencing delayed rewards has been introduced by Joulani et al. (2013) and widely exploited in advertising applications (Chapelle, 2014; Vernade et al., 2017). A large number of approaches have extended this setting either considering stochastic delays (Vernade et al., 2020), unknown delays (Li et al., 2019; Lancewicki et al., 2021), arm-dependent delays (Manegueu et al., 2020), nonstochastic delays (Ito et al., 2020; Thune et al., 2019; Jin et al., 2022). Some methods relaxed the assumption that the individual reward is revealed after the delay expires, admitting the possibility of receiving anonymous feedback, which can be aggregated (Pike-Burke et al., 2018; Zhang et al., 2021) or composite (Cesa-Bianchi et al., 2018; Garg & Akash, 2019; Wang et al., 2021). Most of these approaches are able to achieve r Op ? Tq regret, plus additional terms depending on the extent of the delay. 
In our DLBs, the reward is generated over time as a combined effect of past and present actions through a hidden state, while these approaches generate the reward instantaneously and reveal it (individually or in aggregate) to the learner in the future and no underlying state dynamics is present. Online Control of Linear Time-Invariant Systems The particular structure imposed by linear dynamics makes our approach comparable to LTI online control for partially observable systems (e.g., Lale et al., 2020b; Simchowitz et al., 2020; Plevrakis & Hazan, 2020). While the dynamical model is similar, in online control of LTI systems, the perspective is quite different. Most of the works either consider the Linear Quadratic Regulator (Mania et al., 2019; Lale et al., 2020b) or (strongly) convex objective functions (Mania et al., 2019; Simchowitz et al., 2020; Lale et al., 2020a), achieving, in most of the cases r Op ? Tq regret for strongly convex functions and r OpT 2{3q for convex functions. Recently, r Op ? Tq regret rate has been obtained for convex function too, by means of geometric exploration methods (Plevrakis & Hazan, 2020). Compared to DynLin-UCB, the algorithm of Plevrakis & Hazan (2020) considers general convex costs but assumes the observability of the state and limits to the class of disturbance response controllers (Li & Bosch, 1993) that do not include the constant policy. Moreover, the regret bound of Plevrakis & Hazan (2020) differs from Theorem 3.2, as it shows a cubic dependence on the system order7 and an implicit nontrivial dependence on the dynamic matrix A. Instead, our Theorem 3.2 is remarkably independent of the system order n. Furthermore, Lale et al. (2020a) reach OplogpTqq regret in the case of strongly convex cost functions compet7This holds for known cost functions. Instead, for unknown costs, the exponent becomes 24 (Plevrakis & Hazan, 2020). 6 \fDynamical Linear Bandits ing against the best persistently exciting controller (i.e., a controller implicitly maintaining a non-null exploration). Some approaches are designed to deal with adversarial noise (Simchowitz et al., 2020). All of these solutions, however, look for the best closed-loop controller within a specific class, e.g., disturbance response control (Li & Bosch, 1993). These controllers, however, do not allow us to easily incorporate constraints on the action space, which could be of crucial importance in practice, e.g., in advertising domains. DynLin-UCB works with an arbitrary action space and, thanks to the linearity of the reward, does not require complex closed-loop controllers. 5. Numerical Simulations In this section, we provide numerical validations of DynLin-UCB in both a synthetic scenario and a domain obtained from real-world data. The goal of these simulations is to highlight the behavior of DynLin-UCB in comparison with bandit baselines, describing advantages and disadvantages. The first experiment is a synthetic setting in which we can evaluate the performances of all the solutions and the sensitivity of DynLin-UCB w.r.t. the \u03c1 parameter (Section 5.1). Then, we show a comparison in a DLB scenario retrieved from real-world data (Section 5.2). The code of the experiments can be found at https://github.com/marcomussi/DLB. Details and additional experiments can be found in Appendix E. Baselines We consider as main baseline Lin-UCB (AbbasiYadkori et al., 2011), designed for linear bandits. 
We include Exp3 (Auer et al., 1995) usually employed in (nonadaptive) adversarial settings, and its extension to k-length memory (adaptive) adversaries Exp3-k by Dekel et al. (2012).8 Additionally, we perform a comparison with algorithms for regret minimization in non-stationary environments: D-Lin-UCB (Russac et al., 2019), an extension of Lin-UCB for non-stationary settings, and AR2 (Chen et al., 2021), a bandit algorithm for processes presenting temporal structure. Lastly, in the case of real-world data, we compare our solution with a human-expert policy (Expert). This policy is directly generalized from the original dataset by learning via regression the average budget allocation over all platforms from the available data. For the baselines which do not support vectorial actions, we perform a discretization of the action space U that surely contains optimal action. Concerning the hyperparameters of the baselines, whenever possible, they are selected as in the respective original papers. The experiments are presented with a regularization parameter \u03bb P t1, log T} for the algorithms which require it (i.e., DynLin-UCB, Lin-UCB, and 8k is proportional to tlog M{ logp1{\u03c1qu. In Appendix A.3 we elaborate on the use of adversarial bandit algorithms for DLBs. D-Lin-UCB).9 Further information about the hyperparameters of the baselines and the adopted optimistic exploration bounds are presented in Appendix E.1. 5.1. Synthetic Data Setting We consider a DLB defined by the following matrices A \u201c diagpp0.2, 0, 0.1qq, B \u201c diagpp0.25, 0, 0.1qq, \u03b8 \u201c p0, 0.5, 0.1qT, \u03c9 \u201c p1, 0, 0.1qT and a Gaussian noise with \u03c3 \u201c 0.01 (diagonal covariance matrix for the state noise).10 This way, the spectral gap of the dynamical matrix is \u03c1pAq \u201c 0.2 and \u03a6pAq \u201c 1. Moreover, the cumulative Markov parameter is given by h \u201c p0.56, 0.5, 0.11qT. We consider the action space U \u201c tpu1, u2, u3qT P r0, 1s3 with u1 ` u2 ` u3 \u010f 1.5u that simulates a total budget of 1.5 to be allocated to the three platforms. Thus, a \u201cmyopic\u201d agent would simply look at how the action immediately propagates to the reward through \u03b8, and will invest the budget in the second component of the action, which is weighted by 0.5. Instead, a \u201cfar-sighted\u201d agent, aware of the system evolution, will look at the cumulative Markov parameter h, realizing that the most convenient action is investing in the first component, weighted by 0.56. Therefore, the optimal action is u\u02da \u201c p1, 0.5, 0qT leading to J\u02da \u201c 0.81. Comparison with the bandit baselines Figure 1 shows the performance in terms of cumulative regret of DynLin-UCB, Lin-UCB, D-Lin-UCB, AR2, Exp3, and Exp3-k. The experiments are conducted over a time horizon of 1 million rounds. For DynLin-UCB, we employed, for the sake of this experiment, the true value of the spectral gap, i.e., \u03c1 \u201c \u03c1pAq \u201c 0.2. First of all, we observe that both Exp3 and Exp3-k suffers a significantly large cumulative regret. Similar behavior is displayed by AR2. Moreover, all the versions of Lin-UCB and D-Lin-UCB suffer linear regret. The best performance of D-Lin-UCB is obtained when the discount factor \u03b3 is close to 1 (the weights take the form wt \u201c \u03b3\u00b4t), and the behavior is comparable with the one of Lin-UCB. 
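To make the "myopic versus far-sighted" argument of the synthetic setting concrete, the following small check takes the reported cumulative Markov parameter h = (0.56, 0.5, 0.11) as given and fills the budget of 1.5 greedily in decreasing order of its entries (which is optimal for a linear objective under box and a single budget constraint), recovering u* = (1, 0.5, 0) with J* = 0.81, while the allocation (0.5, 1, 0) that favours the instantaneously rewarding second component yields 0.78; this is a sketch for intuition, not the experimental code.

```python
import numpy as np

h = np.array([0.56, 0.50, 0.11])   # cumulative Markov parameter reported above
budget, upper = 1.5, 1.0           # total budget and per-component cap

# Greedy allocation in decreasing order of h.
u_star = np.zeros(3)
remaining = budget
for i in np.argsort(-h):
    u_star[i] = min(upper, remaining)
    remaining -= u_star[i]

print(u_star, h @ u_star)              # [1.  0.5 0. ]  0.81  (far-sighted optimum)
print(h @ np.array([0.5, 1.0, 0.0]))   # 0.78  (myopic-style allocation)
```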
Even for a quite fast system (\u03c1pAq \u201c 0.2), ignoring the system dynamics, and the presence of the hidden state, has made both Lin-UCB and D-Lin-UCB commit (in their best version, with \u03bb \u201c log T) to the sub-optimal (myopic) action u\u02dd \u201c p0.5, 1, 0qT with performance J\u02dd \u201c 0.78 \u0103 J\u02da, with also a relevant variance. On the other hand, DynLin-UCB is able to maintain 9For DynLin-UCB, log T is a nearly optimal choice for \u03bb as it can be seen by looking at the first two addenda of the exploration factor in Equation (6). 10It is worth noting that the decision of using diagonal matrices is just for explanation purposes and w.l.o.g. (at least in the class of diagonalizable dynamic matrices). Indeed, we are just interested in the cumulative Markov parameter h and we could have obtained the same results with an equivalent (non-diagonal) representation, by applying an inevitable transformation T as A1 \u201c TAT\u00b41, \u03c91 \u201c T\u00b4T\u03c9, and B1 \u201c TB. 7 \fDynamical Linear Bandits 0 0.2 0.4 0.6 0.8 1 \u00a8106 0 1 2 3 \u00a8104 Rounds Cumulative Regret DynLin-UCB (logT ) DynLin-UCB (1) Lin-UCB (logT ) Lin-UCB (1) D-Lin-UCB (logT ) D-Lin-UCB (1) AR2 Exp3 Exp3-k Figure 1. Cumulative regret as a function of the rounds comparing DynLin-UCB and the other bandit baselines (50 runs, mean \u02d8 std). 0 0.5 1 1.5 2 2.5 3 \u00a8105 0 0.2 0.4 0.6 0.8 1 \u00a8104 Rounds Cumulative Regret \u03c1 \u201c \u03c1pAq Lin-UCB \u03c1 \u201c 0.4 \u03c1 \u201c 0.1 \u03c1 \u201c 0.05 \u03c1 \u201c 0 Figure 2. Cumulative regret as a function of the rounds comparing Lin-UCB, and DynLin-UCB with \u03bb \u201c log T, varying the upper bound on the spectral radius \u03c1 (50 runs, mean \u02d8 std). 0 0.2 0.4 0.6 0.8 1 \u00a8106 0 0.5 1 1.5 2 2.5 \u00a8105 Rounds Cumulative Regret DynLin-UCB (logT ) DynLin-UCB (1) Lin-UCB (logT ) Lin-UCB (1) D-Lin-UCB (logT ) D-Lin-UCB (1) AR2 Exp3 Exp3-k Expert Figure 3. Cumulative regret for DynLin-UCB, the other bandit baselines and the Expert in the system generalized from real-world data (50 runs, mean \u02d8 std). a smaller and stable (variance is negligible) sublinear regret in both its versions, with a notable advantage when using \u03bb \u201c log T. Sensitivity to the Choice of \u03c1 The upper bound \u03c1 of the spectral radius \u03c1pAq \u201c 0.2 represents a crucial parameter of DynLin-UCB. While an overestimation \u03c1 \" \u03c1pAq does not compromise the regret rate but tends to slow down the convergence process, a severe underestimation \u03c1 ! \u03c1pAq might prevent learning at all. In Figure 2, we test DynLin-UCB against a misspecification of \u03c1, when \u03bb \u201c log T. We can see that by considering \u03c1 \u201c 2\u03c1pAq, DynLin-UCB experiences a larger regret but still sublinear and smaller w.r.t. Lin-UCB with \u03bb \u201c log T. Even by reducing \u03c1 P t0.1, 0.05u, DynLin-UCB is able to keep the regret sublinear, showing remarkable robustness to misspecification. Clearly, setting \u03c1 \u201c 0 makes the regret almost degenerate to linear. 5.2. Real-world Data We present an experimental evaluation based on realworld data coming from three web advertising platforms (Facebook, Google, and Bing), related to several campaigns for an invested budget of 5 Million EUR over 2 years. Starting from such data, we learn the best DLB model by means of a specifically designed variant of the Ho-Kalman algorithm (Ho & Kalman, 1966).11 We used the learned model to build up a simulator. 
The resulting system has \u03c1pAq \u201c 0.67. We evaluate DynLin-UCB against the baselines for T \u201c 106 steps over 50 runs. Results Figure 3 shows the results in terms of cumulative regret. It is worth noting that no algorithm, except for DynLin-UCB, is able to converge to the optimal choice. Indeed, they immediately commit to a sub-optimal solution. 11See Appendix D. DynLin-UCB, instead, shows a convergence trend towards the optimal policy over time for both \u03bb \u201c 1 and \u03bb \u201c log T, even if the best-performing version is the one which employs \u03bb \u201c log T. The Expert, which has a preference towards maximizing the instantaneous effect of the actions only and does not take into account correlations between platforms, displays a sub-optimal performance. 6. Discussion and Conclusions In this paper, we have introduced the Dynamical Linear Bandits (DLBs), a novel model to represent sequential decisionmaking problems in which the system is characterized by a non-observable hidden state that evolves according to linear dynamics and by an observable noisy reward that linearly combines the hidden state and the action played. This model accounts for scenarios that cannot be easily represented by existing bandit models that consider delayed and aggregated feedback. We have derived a regret lower bound that highlights the main complexities of the DLB problem. Then, we have proposed a novel optimistic regret minimization approach, DynLin-UCB, that, under stability assumption, is able to achieve sub-linear regret. The numerical simulation in both synthetic and real-world domains succeeded in showing that, in a setting where the baselines mostly suffer linear regret, our algorithm consistently enjoys sublinear regret. Furthermore, DynLin-UCB proved to be robust to misspecification of its most relevant hyper-parameter \u03c1. To the best of our knowledge, this is the first work addressing this family of problems, characterized by hidden linear dynamics, with a simple, yet effective, bandit-like approach. Short-term future directions include efforts in closing the gap between the regret lower and upper bounds. Long-term future directions should focus on extending the present approach to non-linear system dynamics and embedding in the algorithm additional budget constraints enforced over the optimization horizon. 8 \fDynamical Linear Bandits Acknowledgements This paper is supported by PNRR-PE-AI FAIR project funded by the NextGeneration EU program." + }, + { + "url": "http://arxiv.org/abs/2205.10416v1", + "title": "ARLO: A Framework for Automated Reinforcement Learning", + "abstract": "Automated Reinforcement Learning (AutoRL) is a relatively new area of\nresearch that is gaining increasing attention. The objective of AutoRL consists\nin easing the employment of Reinforcement Learning (RL) techniques for the\nbroader public by alleviating some of its main challenges, including data\ncollection, algorithm selection, and hyper-parameter tuning. In this work, we\npropose a general and flexible framework, namely ARLO: Automated Reinforcement\nLearning Optimizer, to construct automated pipelines for AutoRL. Based on this,\nwe propose a pipeline for offline and one for online RL, discussing the\ncomponents, interaction, and highlighting the difference between the two\nsettings. Furthermore, we provide a Python implementation of such pipelines,\nreleased as an open-source library. 
Our implementation has been tested on an\nillustrative LQG domain and on classic MuJoCo environments, showing the ability\nto reach competitive performances requiring limited human intervention. We also\nshowcase the full pipeline on a realistic dam environment, automatically\nperforming the feature selection and the model generation tasks.", + "authors": "Marco Mussi, Davide Lombarda, Alberto Maria Metelli, Francesco Trov\u00f2, Marcello Restelli", + "published": "2022-05-20", + "updated": "2022-05-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Reinforcement Learning (RL, Sutton and Barto, 2018) has recently achieved successful results in solving several complex control problems, including autonomous driving (Wang et al., 2018), robot manipulators (Nguyen and La, 2019), and \ufb01nance (Zhang et al., 2020). These outstanding achievements are rooted in the employment of powerful training algorithms combined with complex model representations, such as deep neural networks (Arulkumaran et al., 2017). Unfortunately, empirical experience suggests that this class of approaches heavily depends on \ufb01ne-tuning, where an inaccurate choice of the hyper-parameters makes the difference between learning the optimal policy and not learning at all (Bu\u00b8 soniu et al., 2018). This represents an indubitable limitation, making this powerful tool not immediately usable by non-expert users. While this scenario is common even in general Machine Learning (ML, Bishop and Nasrabadi, 2006), the inherent complexity of RL, due to the sequential nature of the problem, exacerbates this issue even more. The research effort towards the democratization of ML has reached a mature level of development for supervised learning. Indeed, several Automated Machine Learning (AutoML) frameworks and corresponding libraries have been developed and tested, such as the ones proposed by Feurer et al. (2015, 2020); LeDell and Poirier (2020); Olson et al. (2016). AutoML is intended to automate the whole ML pipeline, starting from the preliminary operations on the data, ending with the trained and evaluated \ufb01nal model. This way, the complete ML process can be regarded, by the nonexpert user, as a black-box, abstracting from the unnecessary details and favoring the adoption of ML as a production tool. For a detailed review of the currently available AutoML frameworks, we refer the reader to the recent survey by He et al. (2021). Conversely, RL is currently far from being a tool usable by a non-expert user since a complete and reliable Automated Reinforcement Learning (AutoRL) pipeline is currently missing. This automation gap between \fARLO: A Framework for Automated Reinforcement Learning RL and supervised learning is even more severe from a theoretical perspective since, to the best of our knowledge, a general and \ufb02exible notion of AutoRL pipeline has not been formalized yet. Recently, a surge of scienti\ufb01c works in the RL \ufb01eld (Parker-Holder et al., 2022; Afshar et al., 2022) attempted to tackle either speci\ufb01c stages of the RL pipeline individually (e.g., feature construction, policy generation), or focus on speci\ufb01c application scenarios. While providing a vast analysis of the available approaches for each single stage, they review the state-of-the-art to solve single tasks individually and do not propose a full pipeline and do not study the peculiarities characterizing the interaction between such stages. 
On the other hand, a na\u00efve adaptation of the existing automated pipelines designed for AutoML to the RL setting is not a viable approach since they fail to capture the unique characteristics of RL related to the presence of an interacting environment and the sequential nature of the learning problem. Contributions In this paper, we make a step towards the formalization of an AutoRL framework. The contributions of this work can be synthesized as follows. \u2022 We propose a general and \ufb02exible formalization of a pipeline for AutoRL. Grounding on such a de\ufb01nition, we instantiate it for two different scenarios: of\ufb02ine and online RL.1 The former assumes that the RL process is carried out based on a \ufb01xed batch of data. The latter takes into account the availability of an interactive environment. \u2022 We describe the individual stages of the two pipelines and their respective characteristics, highlighting the interactions between them and focusing on their inputs and outputs. Furthermore, we discuss the corresponding units, i.e., possible implementations of stages, and introduce a general approach to select the best-tuned unit in a \ufb01nite set. \u2022 We provide an implementation of the framework in an open-source Python library, called ARLO.2 The library contains the implementation of all the stages, the two RL pipelines, and the needed tools to run, optimize, and evaluate the pipelines. \u2022 Finally, we test the implementation on an illustrative LQG and a MuJoCo environment, showing the ability to reach optimal performances without requiring any manual adjustment by humans. At last, we provide an experiment on a realistic dam environment with a pipeline performing the data generation, feature selection, policy generation, and policy evaluation stages. Given the wide variety of RL problems and solutions, we restrict our formalization to the case of stationary and fully observable environments. We leave the extension to more complex settings (e.g., multi-objective, multi-agent, lifelong) as a future work. Limitations and Broader Impact Statement The goal of AutoRL is to bring RL closer to the non-expert user. This represents a source of opportunities and risks. On the one hand, making RL usable to a wide audience contributes to the democratization of the \ufb01eld, overcoming the need for speci\ufb01c education and opening it to the large public. On the other hand, such an abstract approach tends to compromise the transparency of the learning process and traceability of the resulting model. Shadowing the underlying principles, AutoRL might pose the risk of misuse of RL approaches, leading to results not in line with expectations. Furthermore, AutoRL, even more than RL, requires huge amounts of data and computation that might represent a limit of the framework. Outline The paper is structured as follows. In Section 2, we present the fundamental notions of Markov Decision Processes and the basics of RL. In Section 3, we introduce a general notion of pipeline, stage, and unit. In Section 4, we present the online and of\ufb02ine pipelines for RL. In Section 5, we describe the details of the components included in the two pipelines. In Section 6, we report the results of the tests performed on standard benchmarks and on a realistic environment. In Section 7, we highlight the conclusions of our works and we propose future research lines. 
2 Preliminaries A Markov Decision Process (MDP, Puterman, 2014) is de\ufb01ned as a tuple M = (S, A, P, R, \u03b3, \u00b50), where S is the set of states, A is the set of actions, P(s\u2032|s, a) is the state transition model, specifying the probability to land in state s\u2032 starting from state s and performing action a, R(s, a) is the reward function, de\ufb01ning the expected reward when the agent is in state s and performs action a, \u03b3 \u2208[0, 1] is the discount factor, and \u00b50(s) is the initial state distribution. The agent\u2019s behavior is de\ufb01ned in terms of a policy \u03c0(a|s) de\ufb01ning the probability of performing action a in state s. 1The reader might be tempted to address the of\ufb02ine RL setting with AutoML, given the \ufb01xed available dataset and, thus, the similarity with supervised learning. We stress that this choice is inappropriate as the peculiarities of RL are still crucial, especially the sequential properties of the problem. 2The library is available at https://github.com/arlo-lib/ARLO. 2 \fARLO: A Framework for Automated Reinforcement Learning Interaction Protocol The initial state is sampled from the initial-state distribution s0 \u223c\u00b50, the agent selects an action based on its policy a0 \u223c\u03c0(\u00b7|s), the environment provides the agent with the reward R(s0, a0), and the state evolves according to the transition model s1 \u223cP(\u00b7|s0, a0). The process is repeated for T steps, where T \u2208N\u222a{+\u221e} is the (possibly in\ufb01nite) horizon. Objective The goal of RL consists in learning an optimal policy \u03c0(a|s), i.e., a policy maximizing the expected discounted sum of the rewards, a.k.a. the expected return (Sutton and Barto, 2018): J(\u03c0) := E\u03c0 \u0014 T \u22121 X t=0 \u03b3tR(st, at) \u0015 , (1) where the expectation E\u03c0[\u00b7] is computed w.r.t. the randomness of environment and policy. Environments and Datasets We introduce the notion of environment and dataset. Formally, an environment E is a device to interact with the underlying MDP, that, given a state st and an action at, it provides the next state s\u2032 t \u223cP(\u00b7|st, at) and the reward rt = R(st, at). An environment is a generative model if it allows to freely choose the state st at each step, or a forward model if, instead, we can perform steps in the MDP (st+1 = s\u2032 t) or start again sampling st from the initial-state distribution \u00b50. A dataset D := {\u03c4i}n i=1 is a set of trajectories \u03c4i, where each trajectory is a sequence \u03c4i = (s0 i , a0 i , r1 i , . . . , sTi\u22121 i , aTi\u22121 i , rTi i , sTi i ) and Ti is the length of the trajectory. Online vs. Of\ufb02ine RL We distinguish between two main groups of RL algorithms: online and of\ufb02ine RL. The online RL algorithms (Sutton and Barto, 2018) aim at learning a policy \u03c0 by directly interacting with an environment E. Typically they employ the last available policy to collect data and leverage the experience to improve it. Conversely, the of\ufb02ine RL paradigm (Levine et al., 2020) consists in carrying out the policy learning on a dataset D previously collected.3 The ability to learn a (near-)optimal policy heavily depends on the exploration properties of the dataset D. Regarding of\ufb02ine RL, several works covered its peculiarities. In Levine et al. (2020) the authors survey the \ufb01eld of of\ufb02ine RL, presenting open problems, unique challenges and limitations. In Paine et al. 
(2020) the authors focused on the evaluation problem present in of\ufb02ine RL, namely on the evaluation of a learnt policy without resorting to an environment. Furthermore, this work showcases how of\ufb02ine RL algorithms are not robust with respect to hyperparameters tuning. 3 Framework In this section, we present the abstract formalization of the proposed AutoRL pipeline, detailing the notions of pipeline, stage, and unit. Stages and Pipelines A stage \u03c8 represents a single component of the pipeline with a speci\ufb01c purpose. For instance, the portion of the pipeline in charge of performing feature engineering is regarded as a stage. A stage \u03c8 interacts with the other stages of the pipeline by means of an interface, de\ufb01ning its inputs and outputs. We denote a stage\u2019s inputs with In(\u03c8) and its outputs with Out(\u03c8). A pipeline is a sequence of m \u2208N stages \u03a8 = (\u03c81, . . . , \u03c8m). The possibility of staking speci\ufb01c stages in sequence depends, in general, on problem-dependent constraints. Units A unit constitutes the actual implementation of the stages corresponding to algorithms that are in charge of generating the output required by the corresponding stage.4 We de\ufb01ne three relevant types of units: \ufb01xed, tunable, and automatic. Fixed Unit A \ufb01xed unit (Figure 1a) corresponds to an algorithm \u03c8 = A(h), where A(h) denotes algorithm A that generates the stage output, instanced with hyper-parameters h \u2208H selected from an hyper-parameter set H. Tunable Unit A tunable unit (Figure 1b) is described by a tuple \u03c8 = (A, H, T, \u2113) where A(\u00b7) is an algorithm, H is a hyper-parameters set, T is a tuner (e.g., genetic algorithm, particle swarm, Bayesian optimizer), and \u2113(A, h) \u2208R is a tuning performance index mapping an algorithm A(\u00b7) and hyper-parameters h \u2208H pair to a real number. The tuning optimization problem can be formulated as \ufb01nding the hyper-parameters h\u2217\u2208H maximizing the performance index \u2113. Formally: h\u2217\u2208arg max h\u2208H \u2113(A, h). This optimization is addressed by the tuner T. When the stage corresponding to the tunable unit is executed, it reduces to the \ufb01xed unit A(h\u2217), and, subsequently, it generates the block outputs. 3Even in this case, we may have an environment E to test the performance of the learned policy. Commonly, it is a less costly version, e.g., in terms of computational or real costs, of the environment where the \ufb01nal policy will be applied. 4From a software engineering perspective, a stage is an abstract class, while a unit a concrete class. 3 \fARLO: A Framework for Automated Reinforcement Learning A (h\u2217) In(\u03c8i) Out(\u03c8i) (a) Fixed Unit. A(h), In(\u03c8i) Out(\u03c8i) h \u2208H (b) Tunable Unit. A1(h), Ak(h), Out(\u03c8i) In(\u03c8i) ... h \u2208Hk h \u2208H1 (c) Automatic Unit. Figure 1: The three types of units. Automatic Unit An automatic unit (Figure 1c) is a set of tunable units paired with a performance index, i.e., \u03c8 = ({\u03c8j}k j=1, \u2113), where \u03c8j = (Aj, Hj, Tj, \u2113j), for j \u2208{1, . . ., k}, and \u2113(A, h) \u2208R is a performance index for algorithm A with hyper-parameters h. The goal of an automatic unit consists in selecting the best tuned algorithm among the available ones, by ranking them based on the additional performance index \u2113. 
We de\ufb01ne the automatic optimization problem as follows: j\u2217\u2208arg max j\u2208{1,...,k} \u2113(Aj, h\u2217 j), where h\u2217 j \u2208arg max h\u2208Hj \u2113j(Aj, h), j \u2208{1, . . ., k}. When the stage corresponding to the automatic unit is executed, it reduces the automatic unit a \ufb01xed one Aj\u2217(h\u2217 j\u2217), and, subsequently, it generates the corresponding output. The problem of jointly \ufb01nding the best algorithm and its related hyper-parameter con\ufb01guration is also referred in the AutoML community as CASH (Combined Algorithm Selection and Hyper-parameter Optimization Problem, Thornton et al., 2013). Intuitively, a \ufb01xed unit is a human hand-crafted unit in which an algorithm is selected and the related hyper-parameters are speci\ufb01ed. No automatic operations nor evaluation are performed here. In a tunable unit, the algorithm is speci\ufb01ed but the task to \ufb01nd the best hyper-parameter con\ufb01guration is demanded to the pipeline. In an automatic unit, both the choice of the best algorithm and the best hyper-parameter con\ufb01guration is demanded to the pipeline. 4 AutoRL Pipelines In this section, we present the main methodological contribution of the paper, discussing the two AutoRL pipelines: online and of\ufb02ine. We focus on how to build these pipelines describing the stages\u2019 interactions. The detailed description of each individual stage is reported in Section 5. A graphical representation of the pipelines is provided in Figure 2. Online Pipeline The Online AutoRL Pipeline (Figure 2a) takes as input an environment E that is fed to the Feature Engineering stage, which modi\ufb01es its state/action representations and the reward to facilitate the learning performed in the next stages. It outputs a transformed environment E\u2032, based on the features created in this stage. Subsequently, the environment E\u2032 is used to learn an estimate \u02c6 \u03c0\u2217of the optimal policy through the Policy Generation. Finally, the Policy Evaluation phase provides an estimate of the performance \u03b7(\u02c6 \u03c0\u2217), based on a performance index \u03b7. Of\ufb02ine Pipeline In the Of\ufb02ine AutoRL Pipeline (Figure 2b), differently from the online one, two additional preliminary stages are included: Data Generation and Data Preparation. If an environment E is provided as input, the Data Generation stage creates a dataset D. This stage is omitted if a dataset D is already available, e.g., in the case the dataset D comes from a real process. In such a case, the environment E is employed for the evaluation of the policy performance only. The Data Preparation stage modi\ufb01es the dataset D, by applying corrections over the individual instances (i.e., the rows of the dataset) obtaining D\u2032. Then, the environment E and dataset D\u2032 pass through the Feature Engineering stage, which, similarly to its online counterpart, generates a dataset D\u2032\u2032 and an environment E\u2032 with transformed states, actions, and reward. After that, the dataset D\u2032\u2032 is used for learning an estimate of the optimal policy \u02c6 \u03c0\u2217through the Policy Generation stage. Differently from the online one, this stage uses the dataset D\u2032\u2032, while the environment E\u2032 is employed for estimating \u03b7(\u02c6 \u03c0\u2217) in the Policy Evaluation stage. 
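Tying the two unit types together, the automatic-unit selection formalized above (the CASH problem) can be sketched as an outer loop over candidate algorithms that reuses the `tune` helper from the previous sketch for the inner problem; again, the names are hypothetical and not the ARLO API.

```python
def automatic_unit(candidates, inner_indexes, outer_index):
    """Illustrative automatic unit: tune each candidate algorithm A_j on its own
    search space H_j with its index l_j, then select j* by the outer index l."""
    tuned = []
    for (algorithm_cls, search_space), l_j in zip(candidates, inner_indexes):
        h_star, _ = tune(algorithm_cls, search_space, l_j)   # inner problem: h*_j
        tuned.append((algorithm_cls, h_star))
    # Outer problem: pick j* maximizing l(A_j, h*_j).
    best_cls, best_h = max(tuned, key=lambda t: outer_index(t[0], t[1]))
    return best_cls, best_h   # the unit reduces to the fixed unit A_{j*}(h*_{j*})
```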
4 \fARLO: A Framework for Automated Reinforcement Learning E Engineering Feature E\u2032 Policy Policy Evaluation Generation \u02c6 \u03c0\u2217 \u03b7 (\u02c6 \u03c0\u2217) (a) Online Pipeline. E Generation Data D Preparation Data E D\u2032 Engineering Feature E\u2032 D\u2032\u2032 Policy Policy \u02c6 \u03c0\u2217 Evaluation (D,E) Generation \u03b7 (\u02c6 \u03c0\u2217) (b) Of\ufb02ine Pipeline. Figure 2: The Online (a) and Of\ufb02ine (b) AutoRL Pipelines. 5 Stages and Units We now provide examples of units for each of the stages, highlighting the differences between the online and of\ufb02ine pipelines. For each stage, we de\ufb01ne its goal, performance index for tunable or automatic units, and implementation selected from the state-of-the-art methodologies. 5.1 Data Generation The Data Generation stage takes as input an environment E and returns the unaltered environment E and a dataset D generated by interacting with the environment. The goal of this stage is to create a dataset that is retrieved by exploring the state space as much as possible. Based on the type of environment, i.e., generative or forward model, the resulting dataset is made of transitions or trajectories. In principle, this stage should output a dataset as \u201cinformative\u201d as possible, i.e., that represents exhaustively the corresponding environment. As performance index for evaluating the quality of a Data Generation unit, we adopt the entropy of the state-action visitation distribution d\u03c0(s, a) generated by the policy \u03c0(a|s), that is proportional to: \u2212 Z s\u2208S Z a\u2208A d\u03c0(s, a) log d\u03c0(s, a)da ds.5 A straightforward implementation of Data Generation consists in collecting data with the random uniform policy. However, this approach is not guaranteed to explore the state space effectively (Mutti et al., 2021; Endrawis et al., 2021). In the pipeline, we consider the state-of-the-art solutions proposed by Pathak et al. (2019), and Mutti et al. (2021). The former employs Proximal Policy Optimization (PPO, Schulman et al., 2017) using the estimated variance of the MDP dynamics as reward, as a proxy for the entropy. Instead, the latter provides a novel policy search algorithm maximizing a K-nearest neighbours-based estimate of the state distribution entropy. 5.2 Data Preparation This phase uses a dataset D, coming either from a real-world environment or generated in the Data Generation stage, and returns a dataset D\u2032 with the same state-action features and reward, but with a possibly different number of entries. The goal of this phase is to optimize an existing dataset in order to be processed better from in subsequent stages. Data Preparation includes data augmentation, data imputation, and data scaling, and can embed further domain-speci\ufb01c sub-stages (e.g., for images, audio data), and/or consistency checks (e.g., \ufb01lling missing values). No single automatic unit is deemed adoptable due to the dif\ufb01culty of de\ufb01ning a general enough performance index for this stage. However, domain-speci\ufb01c performance indexes are available, e.g., for the data imputation sub-stages, we may rely on the indexes de\ufb01ned by Jadhav et al. (2019). Possible implementations of this stage include the techniques for classical ML preprocessing, such as imputation from a dataset of trajectories via KNN imputation or Bayesian Multiple Imputation (Lizotte et al., 2008). 
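As a small illustration of the imputation option just mentioned (not code from the library), missing entries in a flattened transition dataset could be filled with scikit-learn's `KNNImputer`; the column layout of `D` below is an assumption made only for the example.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Assumed layout: one row per transition, columns = state features, action, reward,
# with NaN marking missing sensor readings.
D = np.array([
    [0.10, 0.50, 1.0, -1.2],
    [0.20, np.nan, 0.8, -0.9],
    [np.nan, 0.40, 1.1, -1.0],
])

# Each missing value is imputed from its K nearest complete neighbours.
imputer = KNNImputer(n_neighbors=2, weights="distance")
D_prime = imputer.fit_transform(D)   # corrected dataset D' with the same shape as D
```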
Moreover, for pixel-based observations (e.g., the Gym Atari environments) data augmentation techniques, e.g., cropping, re\ufb02ection, scaling, were employed in Ye et al. (2020). Other approaches viable for feature-based representations are presented in Laskin et al. (2020), where experiments on the OpenAI Procgen Benchmark and on the MuJoCo environments. 5In this stage, we rely on the Particle Based Entropy estimation developed by Singh et al. (2003). 5 \fARLO: A Framework for Automated Reinforcement Learning E E\u2032 Feature Generation Reward Shaping Feature Selection D F\u2032 Data Generation Environment Engineering E (a) Feature Engineering stage in Online Pipeline. E E\u2032 Feature Generation Reward Shaping Feature Selection D\u2032 F\u2032 Environment Engineering E D\u2032\u2032 (b) Feature Engineering stage in Of\ufb02ine Pipeline. Figure 3: The of\ufb02ine and online Feature Engineering stages. 5.3 Feature Engineering The Feature Engineering stage displays signi\ufb01cant differences between online and of\ufb02ine pipelines (Figure 3). Of\ufb02ine pipelines (Figure 3b) take as input an environment E and a dataset D\u2032 and return a feature-adjusted environment E\u2032 and dataset D\u2032\u2032. Conversely, online pipelines (Figure 3a) take as input an environment E and return a featureadjusted environment E\u2032. In both cases, this stage requires an internal dataset for feature engineering that, for the online case, has to be generated. The core task of this stage is to select and generate a set of features that properly model the state-action space of the problem and perform reward shaping actions to facilitate the following learning phase. Feature Engineering stage includes one or more of the following sub-stages: \u2022 Feature Generation, in charge of creating new features. This sub-stage makes use of techniques such as radial basis functions, tile coding, or coarse coding (Sutton and Barto, 2018). \u2022 Feature Selection, aimed at selecting a meaningful subset of features, either to reduce the computation requirements or to regularize the following policy learning phase. Viable options are Mutual Informationbased selection (Beraha et al., 2019), correlation-based \ufb01ltering methods, and tree-based variable selection (Castelletti et al., 2011). \u2022 Reward Shaping, performing speci\ufb01c transformations on the reward function, possibly preserving the optimal policy, to speed up the convergence of an RL algorithm (Ng et al., 1999). For instance, in presence of sparse reward functions, reward shaping can be regarded as a form of curriculum learning (Portelas et al., 2020). These sub-stages return a transformation that is applied to the environment through the Environment Engineering stage. In the of\ufb02ine case, the same transformation is applied to the dataset, while for the online case the internal dataset is disregarded. We consider as a performance index for the complete feature engineering stage the mutual information between the current state-action pair (s, a) features and the next-state reward (s\u2032, r) features (Kraskov et al., 2004; Gao et al., 2017) regularized, e.g., by the number of selected features.6 5.4 Policy Generation The Policy Generation stage is in charge of the training phase of the RL learning algorithm. More speci\ufb01cally, it takes as input an environment E\u2032 or a dataset D\u2032\u2032, in the online and of\ufb02ine RL pipelines, respectively, to output an estimate \u02c6 \u03c0\u2217of the optimal policy. 
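Going back to the mutual-information index of the Feature Engineering stage, a simplified sketch of the feature-selection sub-stage (a plain filter built on scikit-learn's mutual-information estimator, not the exact procedure of Beraha et al., 2019) could rank the current (s, a) features against the (s', r) targets as follows.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_features(X, targets, max_features=3):
    """Score each (s, a) feature by its estimated mutual information with the
    (s', r) targets, then keep the top-scoring ones (regularizing by their number)."""
    scores = np.zeros(X.shape[1])
    for j in range(targets.shape[1]):
        scores += mutual_info_regression(X, targets[:, j], random_state=0)
    return np.argsort(scores)[::-1][:max_features]   # indices of the selected features

# X: matrix of (s, a) features and targets: matrix of (s', r) features, both
# extracted from the (internal) feature-engineering dataset.
```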
Among the most common choices of performance indexes for this stage, we mention the expected return, i.e., the expected discounted sum of the rewards, the average reward, i.e., the long-term expected average reward, and the total reward i.e., expected cumulative sum of the rewards (in the case the environment is episodic, Puterman, 2014). 6For instance, one may use the ratio between the mutual information and the number of selected features. 6 \fARLO: A Framework for Automated Reinforcement Learning For speci\ufb01c applications, e.g., risk-averse setting, one may adopt the mean-variance, mean-volatility, and CVaR (Pratt, 1978; Bisi et al., 2021). Many works deal with hyper-parameter optimization for RL algorithms. In Franke et al. (2021) a framework based on Population Based Training (PBT, Jaderberg et al., 2017) is proposed to tune off-policy RL algorithms. In Parker-Holder et al. (2021) a new time-varying bandit algorithm was presented for tuning RL algorithms. Hyperparameter tuning is a widely researched topic and the techniques developed by ML algorithms can be used for RL algorithms as well. Nevertheless, the sample inef\ufb01ciency of tuning techniques is a common problem, not unique to RL. Another issue is the sensitivity to hyper-parameters con\ufb01gurations, which increases the dif\ufb01culty of benchmarking tuning algorithms due to the dif\ufb01culty of obtaining reproducibleresults. Further methods were proposed by Zhang et al. (2021); Lee et al. (2021); Team et al. (2021); Saphal et al. (2021); Falkner et al. (2018). The speci\ufb01c implementation of the Policy Generation stage depends on the selected RL algorithm. For of\ufb02ine pipelines, we mention, among the others, Least Squares Policy Iteration (LSPI, Lagoudakis and Parr, 2003), Fitted Q-Iteration (FQI, Ernst et al., 2005). For online pipelines, a large surge of RL algorithms have been developed in the recent years. We mention, among the most popular ones, Deep Q-Networks (DQN, Schaul et al., 2016), Deep Deterministic Policy Gradient (DDPG, Lillicrap et al., 2016), Trust Region Policy Optimization (TRPO, Schulman et al., 2015), Soft Actor Critic (SAC, Haarnoja et al., 2018), and Proximal Policy Optimization (PPO, Schulman et al., 2017). 5.5 Policy Evaluation The Policy Evaluation stage takes as input the policy \u02c6 \u03c0\u2217produced by the Policy Generation phase and an environment E\u2032, and produces as output an estimation of a performance index \u03b7(\u02c6 \u03c0\u2217). Regarding the performance index used in this stage, the options are the same we mentioned for Policy Generation. Notice that the performance index chosen in this stage may differ from the one of the Policy Generation one. For instance, it is a common practice to train RL algorithms using a discounted objective and evaluate the resulting policies using an undiscounted one (Duan et al., 2016). Notice that, due to the nature of the task, only \ufb01xed units are used in this stage. 6 Experimental Results In this section, we use the Python implementation of ARLO on 3 RL problems. In addition to the presented stages, the library also allows to create newly de\ufb01ned stages, if needed, and a set of analysis tools. The implementation of the framework is available at https://github.com/arlo-lib/ARLO. The implemented methods are reported in Appendix A. 
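For reference, the expected return, total reward, and average reward mentioned above for Policy Generation (and reused in Policy Evaluation) can be estimated from evaluation episodes with a few lines; this is a generic sketch, not the library implementation.

```python
def discounted_return(rewards, gamma):
    """Expected-return index of one episode: sum_t gamma^t * r_t."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

def total_reward(rewards):
    """Total-reward index (undiscounted cumulative sum), for episodic tasks."""
    return sum(rewards)

def average_reward(rewards):
    """Average-reward index (mean reward per step)."""
    return sum(rewards) / len(rewards)

# Each index is then averaged over many evaluation episodes, e.g.,
# eta_hat = sum(discounted_return(ep, 0.9) for ep in episodes) / len(episodes)
```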
The Policy Generation stages have been integrated with the MushroomRL (D\u2019Eramo et al., 2021) library.7 The optimization of the tunable units has been performed using a genetic algorithm as described in B. A comprehensive description of all the features available in the ARLO library, as well as details on how to integrate already developed methods for RL, are provided at https://arlo-lib.github.io/arlo-lib. In Sections 6.1 and 6.2, we present the results our online pipelines whose Policy Generation stages contain tunable units to select the best hyper-parameters over two simulated problems. In Section 6.3, we consider an of\ufb02ine pipeline including tunable Feature Engineering and Policy Generation stages on a realistic dam control problem. The experimental details are reported in the supplementary material. For the different experiments we considered different seeds. These did not only in\ufb02uence the RL algorithm, but were used for all the components making up the considered pipeline. 6.1 Linear Quadratic Gaussian Regulator In this experiment, we address a Linear-Quadratic Gaussian Regulator (LQG, Dorato et al., 1994) by the state dynamics st+1 = Ast + Bat + \u03c3, where st is the state at time t, at is the action at time t, A is the state dynamic matrix, B is the action dynamic matrix, and \u03c3 is a Gaussian white noise. The reward function is rt+1 = \u2212sT t Qst \u2212aT t Rat, where Q and R are the state and action cost weight matrices respectively8. The discount factor is equal to \u03b3 = 0.9 and the time horizon is T = 15. We employ the Soft-Actor Critic (SAC, Haarnoja et al., 2018) algorithm. To tune its hyper-parameters, we create an online RL pipeline, using the expected return (Eq. (1)) as performance index, and a genetic algorithm (like in 7The ARLO library includes an easy procedure to integrate algorithms coming from other RL libraries. 8The details about the hyper-parameters con\ufb01guration space, the tuning procedure, and the compute requirements for the LQG experiment can be found in the Appendix B.1. 7 \fARLO: A Framework for Automated Reinforcement Learning Sehgal et al., 2019) as tuning algorithm. The results are obtained after 50 generations of the genetic algorithm, each using a population of 20 agents. Table 1: Results achieved tuning SAC hyper-parameters on an LQG environment. Method Default Tuned Van Dooren (1981) \u22127.2 (4.9) 1st Seed \u221259.0 (24.0) \u22128.6 (4.7) 2nd Seed \u221267.4 (16.1) \u22128.2 (5.1) 3rd Seed \u221252.4 (12.5) \u22128.7 (4.7) Table 2: Results achieved tuning DDPG hyper-parameters on HalfCheetah-v3 environment. Method Default Tuned Islam et al. (2017) 3725.3 (512.8) 1st Seed 1157.7 (45.6) 3407.2 (952.1) 2nd Seed 850.8 (78.9) 4624.6 (110.9) 3rd Seed 956.2 (34.2) 3076.9 (77.9) Results We compare the results provided by the ARLO framework with the optimal solution (Van Dooren, 1981). In Table 1, we report the estimated expected return, averaged over 100 episodes (with the standard deviation in brackets), for the default con\ufb01guration and the corresponding tuned policy over three different seeds. Even if the performance of the tuned algorithms does not match the one of the optimal solution, the default hyper-parameter con\ufb01guration of SAC is notably under-performing (\u22485 times worse) compared to the tuned con\ufb01guration (\u22481.2 times worse). 
This result suggests that the proposed framework can generate solutions compatible with the optimal one without exploiting speci\ufb01c domain knowledge about the problem. 6.2 HalfCheetah In the second experiment, we apply the online RL pipeline to the MuJoCo HalfCheetah-v3 environment from OpenAI Gym (Brockman et al., 2016).9 As learning algorithm for the Policy Generation we employ the Deep Deterministic Policy Gradient (DDPG, Lillicrap et al., 2016), whose hyper-parameter tuning is known to be a challenging task (Islam et al., 2017)10. The hyper-parameters of DDPG have been tuned using the undiscounted cumulative reward as a performance index and, as a tuner, a genetic algorithm. We employ a discount factor \u03b3 = 1 and the time horizon to T = 1000. Results In Table 2, we report the estimated total reward, averaged over 100 episodes, for the default and tuned con\ufb01gurations over three different seeds (the standard deviation is provided in brackets). The provided performances are in line with the literature ones (Islam et al., 2017) and show that the proposed pipeline provides an automatic way of achieving competitive performance. In Figure 4, we report the different hyper-parameters selected during the learning phase by individuals (agents) used in the genetic algorithm optimization procedure throughout the tuning procedure. These results show how some of the parameters have a strong in\ufb02uence on the reward obtained by the agents, i.e., the actor and critic learning rate and the steps per \ufb01t (Figures 4a, 4b, and 4c, respectively), which implies that the value of the parameter concentrates around the optimal value after a few generations of the genetic algorithm. Conversely, those which do not in\ufb02uence the outcome of the optimization procedure, i.e., the steps (Figure 4d), continue to explore the available range until the end of the generations. 6.3 Dam To showcase the capabilities of our framework, we propose an experiment with a more complex of\ufb02ine RL pipeline that includes Data Generation, Feature Engineering, Policy Generation, and Policy Evaluation stages11. The selected environment consists of the control of a water reservoir (dam) that models the dynamics of a real alpine 9https://www.gymlibrary.ml/pages/environments/mujoco/half_cheetah. 10The details about the hyper-parameters con\ufb01guration space, the tuning procedure, and the compute requirements for the HalfCheetah experiment can be found in the Appendix B.2. 11The details about the hyper-parameters con\ufb01guration space, the tuning procedure, and the compute requirements for the Dam experiment can be found in the Appendix B.3. 8 \fARLO: A Framework for Automated Reinforcement Learning 0 20 40 10\u22124 10\u22123 10\u22122 Generation Actor learning rate (a) Actor learning rate. 0 20 40 10\u22123 10\u22122 Generation Critic learning rate (b) Critic learning rate. 0 20 40 100 101 102 103 104 Generation Steps per \ufb01t (c) Steps per \ufb01t. 0 20 40 0.5 1 1.5 \u00b7104 Generation Number of steps (d) Number of steps. Figure 4: Values of the hyper-parameters generated by the genetic optimization procedure over the 50 generations. The orange line corresponds to the best found value. Table 3: Results achieved tuning the hyper-parameters of a Feature Engineering stage. Method Discounted Reward Baseline \u22121649.85 (112.88) Tuned Con\ufb01guration \u22121224.67 (124.41) lake (Castelletti et al., 2011). 
The agent observes the current level of the lake and the sequence of the most recent 30 daily in\ufb02ows. The actuation consists of the amount of daily water release. The goal of the agent is to trade-off between avoiding \ufb02oods and ful\ufb01lling the downstream water demand. The dataset is generated using a random uniform policy. The Feature Engineering stage performs forward feature selection via mutual information (as presented in Beraha et al., 2019) to identify a subset of the available in\ufb02ows features. The Policy Generation stage uses the Fitted Q-Iteration (FQI, Ernst et al., 2005) algorithm. The hyper-parameters of FQI are \ufb01xed to a hand-tuned con\ufb01guration as the one presented by Tirinzoni et al. (2018). The objective of this experiment is to show in a realistic environment that tuning the hyper-parameters of a Feature Engineering stage is bene\ufb01cial for the \ufb01nal performance. Results In Table 3, we report the estimated expected return over 10 episodes for the baseline con\ufb01guration (standard deviation in brackets), in which all the features have been considered, and for the tuned con\ufb01guration, in which only a subset of the features was selected automatically by the pipeline. We observe that the result achieved by the tuned agent signi\ufb01cantly outperforms the baseline one, meaning that the feature selection techniques select only the most informative feature for the problem, with bene\ufb01cial effects on the successive learning phase. 7 Conclusions and Limitations Conclusions This paper introduced the ARLO framework for automating reinforcement learning by proposing two pipelines, one for the online setting and one for the of\ufb02ine setting. Moreover, we showcased the capabilities of such a framework by creating a Python library, and we tested its performance in both simulated and realistic settings. While the proposed framework in its current formulation is \ufb02exible and allows adding customized stages, the complete democratization of RL is far from being achieved. First, the procedures to optimize the different stages revealed to be computationally demanding. Thus, adding tools to predict and control the amount of computational time required by a pipeline is of paramount importance to obtaining a \ufb02exible tool. Another interesting development, going in the opposite direction of what we have just mentioned, consists in including a \u201cwhole pipeline optimization\u201d procedure, which jointly optimize the entire learning process. This direction requires a preliminary development of less computationally demanding algorithms for each stage of the pipeline. Finally, we focused our attention on fully-observable, stationary, single agent, single-objective settings. Developing a more general pipeline to relax some or all the above assumptions would ease the application of RL algorithms in a more wide spectrum of real-world problems. Limitations The goal of AutoRL is to bring RL closer to the non-expert user. This represents a source of opportunities and risks. On the one hand, making RL usable to a wide audience contributes to the democratization of the \ufb01eld, overcoming the need for speci\ufb01c education and opening it to the large public. On the other hand, such an abstract approach tends to compromise the transparency of the learning process and traceability of the resulting model. Shadowing the underlying principles, AutoRL might pose the risk of misuse of RL approaches, leading to results not in line with expectations. 
Furthermore, AutoRL, even more than RL, requires huge amounts of data and computation that might represent a limit of the framework. 9 \fARLO: A Framework for Automated Reinforcement Learning" + } + ], + "Alberto Maria Metelli": [ + { + "url": "http://arxiv.org/abs/2402.13821v1", + "title": "Performance Improvement Bounds for Lipschitz Configurable Markov Decision Processes", + "abstract": "Configurable Markov Decision Processes (Conf-MDPs) have recently been\nintroduced as an extension of the traditional Markov Decision Processes (MDPs)\nto model the real-world scenarios in which there is the possibility to\nintervene in the environment in order to configure some of its parameters. In\nthis paper, we focus on a particular subclass of Conf-MDP that satisfies\nregularity conditions, namely Lipschitz continuity. We start by providing a\nbound on the Wasserstein distance between $\\gamma$-discounted stationary\ndistributions induced by changing policy and configuration. This result\ngeneralizes the already existing bounds both for Conf-MDPs and traditional\nMDPs. Then, we derive a novel performance improvement lower bound.", + "authors": "Alberto Maria Metelli", + "published": "2024-02-21", + "updated": "2024-02-21", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction The framework of the Con\ufb01gurable Markov Decision Processes (Conf-MDPs, Metelli et al., 2018, 2019, 2022) has been introduced in recent years to model a wide range of real-world scenarios in which an agent has the opportunity to alter some environmental parameters in order to improve its learning experience. Conf-MDPs can be thought to as an extension of the traditional Markov Decision Processes (MDP, Puterman, 1994) to account for scenarios that emerge quite often in the Reinforcement Learning (RL, Sutton and Barto, 2018) problems, in which the environment rarely represents an immutable entity and can, indeed, be subject to partial control. In the Conf-MDP framework, the activity of altering the environmental parameters is named environment con\ufb01guration and serves different purposes. In the simplest scenario, the con\ufb01guration is carried out by the agent itself that acts as a con\ufb01gurator. This might suggest, at a \ufb01rst sight, that environment con\ufb01guration can be modeled within the agent actuation. While in principle this approach is possible, it tends to disregard the domain peculiarities of environment con\ufb01guration that usually is performed at a slower frequency compared to policy learning and might generate notable costs. The more realistic setting is the one in which an additional entity, the con\ufb01gurator is present and in charge of the environment con\ufb01guration process. The con\ufb01gurator acts on the environment transition model (that we will call con\ufb01guration in this setting) and might have an objective that is different from that of the agent. Thus, we can distinguish between two scenarios Metelli (2021, 2022): the cooperative and the non-cooperative settings. In the former, the agent and the con\ufb01gurator share the same interests, i.e., the have the same reward function. In such a case, the learning problem is simple as there is no con\ufb02ict between the two involved entities and several algorithms have been proposed in the literature Metelli et al. (2018, 2019). In the non-cooperative setting, instead, agent and con\ufb01gurator have possibly diverging interests, i.e., they reward function differ. 
Here the de\ufb01nition of a solution concept is more challenging and requires considering game-theoretic equilibria Metelli (2021). Recently a regret minimization approach has been proposed to address the learning problem in general-sum non-cooperative Conf-MDPs Ramponi et al. (2021). \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT In this paper, we focus on cooperative Conf-MDP1 and we derive several theoretical results on the performance improvement that can be obtained as an effect of altering either the policy of the environment con\ufb01guration. We start from the theoretical results presented in Metelli et al. (2018), and further developed in Metelli (2021), and we generalize them to a wide class of Conf-MDPs. Speci\ufb01cally, we focus on two kind of results: (i) bounds on the distance between the \u03b3-discounted stationary distributions and (ii) performance improvement lower bounds. The former quanti\ufb01es the distance in the state visitation distributions induced when changing simultaneously the agent policy and the environment con\ufb01guration; whereas the latter provides a computable approximated expression of the minimum performance attained with the selection of a new policy and a new con\ufb01guration. As demonstrated by previous works, lower bounds on the performance improvement represent valuable tools to build a wide variety of effective learning approaches, including safe Pirotta et al. (2013); Metelli et al. (2021) and trust-region Schulman et al. (2015); Achiam et al. (2017) approaches. More in detail, we focus on a speci\ufb01c class of regular Conf-MDPs that we name Lipschitz (LC) Conf-MDPs, in analogy with the traditional Lipschitz MDPs Rachelson and Lagoudakis (2010); Pirotta et al. (2015). We introduce them in Section 3. From an intuitive perspective, and with little inaccuracy, we can think of the LC condition as the existence of the \ufb01rst derivative of the relevant quantities. Then, in Section 4, we provide the bounds on the \u03b3-discounted stationary distributions. We start with a bound that merges together the effects of the policy and the con\ufb01guration and then we move to a looser result that highlights the individual contributions of the policy and of the con\ufb01guration. We compare the derived bound with the ones already present in the literature (even for MDPs only) and we show that, several of them, can be derived by our result, under speci\ufb01c conditions. In Section 5, we revise the notion of advantage function for Conf-MDPs, which quanti\ufb01es the one-step gain in performance obtained by either modifying the policy, the con\ufb01guration, or both. We report the policy, con\ufb01guration, and joint advantages and we study their relationships. Finally, in Section 6, we provide the performance improvement lower bounds. Speci\ufb01cally, we \ufb01rst derive a general tight bound that is hardly computable as it involves quantities that are usually unknown in practice. Then, we move to the derivation of a looser bound that has the advantage of being computable. Although mainly theoretical, the contributions provided in this paper are of independent interest, even outside the Conf-MDP \ufb01eld,2 and can be effectively employed as a starting point for the design of safe and trust region methods. Part of the results presented in this paper has previously appeared in Metelli et al. (2018); Metelli (2021). 
2 Preliminaries In this section, we introduce the necessary background about probability, Lipschitz continuity, Wasserstien distance (Section 2.1) and Conf-MDPs (Section 2.2), that will be employed in the following sections. 2.1 Mathematical Background Probability Let X be a set and F be a \u03c3-algebra over X. We denote with \u2206X the set of probability measures over the measurable space pX, Fq. Let Y be a set, we denote with \u2206X Y the set of functions with signature Y \u00d1 \u2206X and with YX the set of functions with signature X \u00d1 Y. Let x P X, the Dirac delta measure is denoted by \u03b4x. Lipschitz Continuity Let pX, dX q and pY, dYq be two metric spaces, where dX : X \u00d1 r0, `8q and dY : Y \u00d1 r0, `8q are the corresponding distance functions. A function f P YX is Lipschitz continuous with Lipschitz constant Lf (Lf-LC) if it holds that: dY pfpxq, fpxqq \u010f LfdX px, xq , @x, x P X. (1) We de\ufb01ne the Lipschitz semi-norm of function f as the smallest Lf \u0105 0 for which Equation (1) holds: }f}L \u201c sup x,xPX , x\u2030x dY pfpxq, fpxqq dX px, xq . Being a semi-norm, } \u00a8 }L is non-negative and ful\ufb01lls the triangular inequality. If Y \u201c R, i.e., f is a real-valued function, we typically employ as distance function dY the Euclidean distance, i.e., dYpy, yq \u201c |y \u00b4 y|. If Y \u201c \u2206Z, i.e., f has values in probability distributions, we employ as distance function the Wasserstein distance (Villani, 2009). 1For the sake of brevity, we will simply use the abbreviation \u201cConf-MDP\u201d to denote a cooperative Conf-MDP. 2Every result we present can be employed for traditional MPDs as well, by just assuming not to change the environment con\ufb01guration. 2 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT Wasserstein Distance Let p, q P \u2206X be two probability measures over the metric space pX, dX q. The L1Wasserstein (or Kantorovich) distance between p and q is de\ufb01ned as Villani (2009): Wpp, qq \u201c sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u017c X pppdxq \u00b4 qpdxqq fpxq \u02c7 \u02c7 \u02c7 \u02c7 . If the metric dX is chosen to be the discrete metric, i.e., dX px, x1q \u201c 1tx \u2030 x1u for all x, x1 P X, then the L1Wasserstein reduces to the Total Variation (TV) divergence. 2.2 Con\ufb01gurable Markov Decision Processes A (cooperative) Con\ufb01gurable Markov Decision Process (Conf-MDP, Metelli et al., 2018) is a tuple C \u201c pS, A, r, \u03b3q, where S is the state space, A is the action space, r P RS\u02c6A\u02c6S is the reward function mapping every state-action-nextstate triple ps, a, s1q P S \u02c6 A \u02c6 S to the reward rps, a, s1q P R, and \u03b3 P r0, 1q is the discount factor. The behavior of the con\ufb01gurator is modeled by a con\ufb01guration (or transition model) p P \u2206S S\u02c6A that maps every state-action pair ps, aq P S \u02c6 A to the probability measure pp\u00a8|s, aq P \u2206S of the next state. The behavior of the agent is modeled by a policy \u03c0 P \u2206A S that maps every state s P S to the probability measure \u03c0p\u00a8|sq P \u2206A of the action to be played. We denote with p\u03c0 P \u2206S S the state-state transition kernel, de\ufb01ned as: p\u03c0pds1|sq \u201c \u017c A \u03c0pda|sqppds1|s, aq, @s, s1 P S. Value Functions Similarly to traditional MDPs, the value functions Sutton and Barto (2018) can be de\ufb01ne for ConfMDPs as done in Metelli (2021). 
Let p\u03c0, pq P \u2206A S \u02c6 \u2206S S\u02c6A be a policy-con\ufb01guration pair, we de\ufb01ne the following value functions, for every ps, a, s1q P S \u02c6 A \u02c6 S: v\u03c0,ppsq \u201c \u017c A \u03c0pda|sq \u017c S ppds1|s, aq ` rps, a, s1q ` \u03b3v\u03c0,pps1q \u02d8 , q\u03c0,pps, aq \u201c \u017c S ppds1|s, aq ` rps, a, s1q ` \u03b3v\u03c0,pps1q \u02d8 , u\u03c0,pps, a, s1q \u201c rps, a, s1q ` \u03b3v\u03c0,pps1q. These value functions represent the expected discounted cumulative reward (i.e., the expected return) experienced by the agent-con\ufb01gurator when interacting with the environment with a policy \u03c0 and a con\ufb01guration p. While v\u03c0,ppsq (i.e., state value function or V-function) and q\u03c0,pps, aq (i.e., state-action value function or Q-function) are de\ufb01ned analogously to the case of MDPs, u\u03c0,pps, a, s1q (i.e., state-action-next-state value function or U-function) is a peculiar value function for Conf-MDPs Metelli et al. (2018) that turns out to be relevant for learning optimal con\ufb01gurations. We refer the reader to (Metelli, 2021, Chapter 4) for a detailed review of the value function and the Bellman operators and equations for Conf-MDPs. \u03b3-discounted Stationary Distributions A policy-con\ufb01guration pair p\u03c0, pq P \u2206A S \u02c6 \u2206S S\u02c6A induces a visitation distribution of the states. The \u03b3-discounted stationary distribution accounts the expected discounted number of visits3 induced by a policy-con\ufb01guration pair p\u03c0, pq P \u2206A S \u02c6 \u2206S S\u02c6A, starting from an initial state sampled from \u03c1 P \u2206S, and it is de\ufb01ned in several equivalent forms Sutton et al. (1999): \u00b5\u03b3,\u03c1 \u03c0,p \u201c p1 \u00b4 \u03b3q\u03c1 ` \u03b3p\u03c0\u00b5\u03b3,\u03c1 \u03c0,p \u201c p1 \u00b4 \u03b3q `8 \u00ff t\u201c0 \u03b3t\u03c1pt \u03c0 \u201c p1 \u00b4 \u03b3q\u03c1 pIdS \u00b4 \u03b3p\u03c0q\u00b41 . (2) For the sake of brevity, whenever clear from the context, we omit the superscripts \u03b3, \u03c1, simply employing \u00b5\u03c0,p. Given an initial state distribution \u03c1 P \u2206S, we de\ufb01ne the expected return of a policy-con\ufb01guration pair p\u03c0, pq P \u2206A S \u02c6 \u2206S S\u02c6A in two interchangeable ways: J\u03c0,p \u201c \u017c S \u03c1pdsqv\u03c0,ppsq \u201c 1 1 \u00b4 \u03b3 \u017c S\u02c6A\u02c6S \u00b5\u03c0,ppdsq\u03c0pda|sqppds1|s, aqrps, a, s1q. 3\u201cDiscounted\u201d means that if a visit to a given state is performed at step t of interaction it counts \u03b3t. 3 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT 3 Lipschitz Con\ufb01gurable Markov Decision Processes In this section, we introduce the notion of regular Conf-MDPs that we will employ in the derivation of the theoretical results. We rephrase the traditional notion of Lipschitz (LC) MDP Rachelson and Lagoudakis (2010); Pirotta et al. (2015) to the case of Conf-MDPs. Assumption 3.1 (Lipschitz Con\ufb01gurable Markov Decision Processes (LC)). Let C \u201c pS, A, r, \u03b3q be a Conf-MDP, where pS, dSq and pA, dAq are metric spaces endowed with the corresponding distance functions dS and dA, respectively. C is Lr-LC if it holds that: \u02c7 \u02c7rps, a, s1q \u00b4 rps, a, s1q \u02c7 \u02c7 \u010f LrdS\u02c6A\u02c6S ` ps, a, s1q, ps, a, s1q \u02d8 , @ps, a, s1q, ps, a, s1q P S \u02c6 A, where dS\u02c6A\u02c6S pps, a, s1q, ps, a, s1qq :\u201c dSps, sq ` dApa, aq ` dSps1, s1q. Compared with Rachelson and Lagoudakis (2010); Pirotta et al. 
(2015), we enforce a condition on the reward function only, since the transition model (i.e., con\ufb01guration) does not belong to the de\ufb01nition of the Conf-MDP. Moreover, with negligible loss of generality, as done in Rachelson and Lagoudakis (2010); Pirotta et al. (2015), we consider a disjoint metric for the state and action spaces. We now proceed at introducing the notion of Lipschitz con\ufb01guration and policy. Assumption 3.2 (Lipschitz Con\ufb01guration and Policy (LC)). Let C \u201c pS, A, r, \u03b3q be a Conf-MDP, where pS, dSq and pA, dAq are metric spaces endowed with the corresponding distance functions dS and dA, respectively. Let p P \u2206S S\u02c6A be a con\ufb01guration. p is Lp-LC if it holds that: W ppp\u00a8|s, aq, pp\u00a8|s, aqq \u010f LpdS\u02c6A pps, aq, ps, aqq , @ps, aq, ps, aq P S \u02c6 A, where dS\u02c6A pps, aq, ps, aqq :\u201c dSps, sq ` dApa, aq. Let \u03c0 P \u2206A S be a policy. \u03c0 is L\u03c0-LC if it holds that: W p\u03c0p\u00a8|sq, \u03c0p\u00a8|sqq \u010f L\u03c0dSps, sq, @s, s P S. The choice of employing the Wasserstein distance over other distributional divergences, such as the TV divergence Munos and Szepesv\u00e1ri (2008), is justi\ufb01ed by the fact that the former allows quantifying the distance between deterministic distributions. Instead, the TV divergence takes its maximum value whenever the involved distributions have disjoint support. The Wasserstein distance is a standard choice in the RL literature Pirotta et al. (2015); Metelli et al. (2020). Lipschitz semi-norms of the Value functions Once we enforce the LC conditions on the Conf-MDP and on the policy and con\ufb01gurations (Assumptions 3.1 and 3.2), it is well-known that the state value function v\u03c0,p and stateaction value function q\u03c0,p are LC, under the assumption that \u03b3Lpp1 ` L\u03c0q \u0103 1 Rachelson and Lagoudakis (2010). In particular, their Lipschitz semi-norms (i.e., Lipschitz constants) are bounded as: }q\u03c0,p}L \u010f Lr 1 \u00b4 \u03b3Lpp1 ` L\u03c0q, }v\u03c0,p}L \u010f }q\u03c0,p}L p1 ` L\u03c0q \u010f Lrp1 ` L\u03c0q 1 \u00b4 \u03b3Lpp1 ` L\u03c0q. The following result provides a bound on the Lipschitz semi-norm of the state-action-next-state value function u\u03c0,p, that has been speci\ufb01cally introduced for the Conf-MDPs. Lemma 3.1. Let C be an Lr-LC Conf-MDP, p P \u2206S S\u02c6A be an Lp-LC con\ufb01guration, and \u03c0 P \u2206A S be an L\u03c0-LC policy. Then, the state-action-next-state value function u\u03c0,p is LC, under the assumption that \u03b3Lpp1`L\u03c0q \u0103 1, with Lipschitz semi-norm: }u\u03c0,p}L \u010f Lr ` \u03b3 }v\u03c0,p}L \u010f Lrp2 ` L\u03c0 \u00b4 \u03b3Lpp1 ` L\u03c0qq 1 \u00b4 \u03b3Lpp1 ` L\u03c0q . Proof. Let ps, a, s1q, ps, a, s1q P S \u02c6 A \u02c6 S, we have: \u02c7 \u02c7u\u03c0,pps, a, s1q \u00b4 u\u03c0,pps, a, s1q \u02c7 \u02c7 \u201c \u02c7 \u02c7rps, a, s1q ` \u03b3v\u03c0,pps1q \u00b4 rps, a, s1q \u00b4 \u03b3v\u03c0,pps1q \u02c7 \u02c7 \u010f \u02c7 \u02c7rps, a, s1q \u00b4 rps, a, s1q \u02c7 \u02c7 ` \u03b3 \u02c7 \u02c7v\u03c0,pps1q \u00b4 v\u03c0,pps1q \u02c7 \u02c7 \u010f LrdS\u02c6A\u02c6S ` ps, a, s1q, ps, a, s1q \u02d8 ` \u03b3 }v\u03c0,p}L dSps1, s1q \u010f ` Lr ` \u03b3 }v\u03c0,p}L \u02d8 dS\u02c6A\u02c6Spps, a, s1q, ps, a, s1q, having observed that dS\u02c6A\u02c6S pps, a, s1q, ps, a, s1qq \u011b dSps1, s1q. 
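To make Assumption 3.2 more tangible, consider a scalar linear-Gaussian policy pi(.|s) = N(theta * s, sigma^2): the conditionals at two states are translates of each other, so W(pi(.|s), pi(.|s_bar)) = |theta| |s - s_bar| and the policy is L_pi-LC with L_pi = |theta|. The following small check (an illustration, not part of the paper) compares the empirical 1-D Wasserstein distance computed by SciPy with this value.

```python
import numpy as np
from scipy.stats import wasserstein_distance

theta, sigma = 0.7, 0.3            # linear-Gaussian policy: a ~ N(theta * s, sigma^2)
s1, s2 = 0.2, 1.0                  # two states to compare
rng = np.random.default_rng(0)

a1 = rng.normal(theta * s1, sigma, size=100_000)   # samples from pi(.|s1)
a2 = rng.normal(theta * s2, sigma, size=100_000)   # samples from pi(.|s2)

w_hat = wasserstein_distance(a1, a2)               # empirical 1-D Wasserstein distance
bound = abs(theta) * abs(s1 - s2)                  # L_pi * d_S(s1, s2) with L_pi = |theta|
print(f"W1 estimate: {w_hat:.3f}, Lipschitz value: {bound:.3f}")   # both close to 0.56
```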
4 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT 4 Bound on the \u03b3-discounted Stationary Distribution In this section, we provide a bound for the Wasserstein distance of \u03b3-discounted stationary distributions under different policy-con\ufb01guration pairs. We start with a result that bounds the Wasserstein distance of the \u03b3-discounted stationary distributions in terms of the Wasserstein distance between the corresponding state-state transition kernels (Section 4.1) and, then, we provide a looser bound that decouples the contribution of the policy from that of the transition model (Section 4.2). Both results are obtained for LC Conf-MDPs, policies, and con\ufb01gurations. Finally, we provide a comparison with similar results already present in the literature (Section 4.3). 4.1 Coupled Bound The following theorem provides the coupled bound on the \u03b3-discounted stationary distribution, that merges together the contributions of the con\ufb01guration and of the policy. Theorem 4.1. Let C be a Conf-MDP, let p\u03c0, pq, p\u03c01, p1q P \u2206A S \u02c6 \u2206S S\u02c6A be two policy-con\ufb01guration pairs such that \u03c01 is L\u03c01-LC and p1 is Lp1-LC. Then, if \u03b3Lp1p1 ` L\u03c01q \u0103 1, it holds that: W p\u00b5\u03c01,p1, \u00b5\u03c0,pq \u010f \u03b3 1 \u00b4 \u03b3Lp1p1 ` L\u03c01q \u017c S \u00b5\u03c0,ppdsqW ` p1 \u03c01p\u00a8|sq, p\u03c0p\u00a8|sq \u02d8 . Proof. The derivation presents similarities with the proof of Lemma 2 of Pirotta et al. (2015). Exploiting the recursive equation of the \u03b3-discounted state distribution (Equation 2), we can write the distributions difference as follows, in operator form: \u00b5\u03c01,p1 \u00b4 \u00b5\u03c0,p \u201c p1 \u00b4 \u03b3q\u03c1 ` \u03b3\u00b5\u03c01,p1p1 \u03c01 \u00b4 p1 \u00b4 \u03b3q\u03c1 \u00b4 \u03b3\u00b5\u03c0,pp\u03c0 \u201c \u03b3\u00b5\u03c01,p1p1 \u03c01 \u00b4 \u03b3\u00b5\u03c0,pp\u03c0 \u02d8 \u00b5\u03c0,pp1 \u03c01 \u201c \u03b3 p\u00b5\u03c01,p1 \u00b4 \u00b5\u03c0,pq p1 \u03c01 ` \u03b3\u00b5\u03c0,p ` p1 \u03c01 \u00b4 p\u03c0 \u02d8 \u201c \u03b3\u00b5\u03c0,p ` p1 \u03c01 \u00b4 p\u03c0 \u02d8 ` IdS \u00b4 \u03b3p1 \u03c01 \u02d8\u00b41 , (3) where we exploited the recursive de\ufb01nition of \u00b5\u03c01,p1 \u00b4 \u00b5\u03c0,p and recalled that \u03b3 \u0103 1. We proceed by computing the Wasserstein distance: Wp\u00b5\u03c01,p1, \u00b5\u03c0,pq \u201c sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u017c S p\u00b5\u03c01,p1pdsq \u00b4 \u00b5\u03c0,ppdsqq fpsq \u02c7 \u02c7 \u02c7 \u02c7 \u201c \u03b3 sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u017c S \u00b5\u03c0,ppds2q \u017c S ` p1 \u03c01pds1|s2q, p\u03c0pds1|s2q \u02d8 \u017c S ` IdS \u00b4 \u03b3p1 \u03c01 \u02d8\u00b41 pds|s1qfpsq looooooooooooooooooomooooooooooooooooooon \u201c:gfps1q \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 (4) \u010f \u03b3 \u017c S \u00b5\u03c0,ppds2q sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u017c S ` p1 \u03c01pds1|s2q, p\u03c0pds1|s2q \u02d8 gfps1q \u02c7 \u02c7 \u02c7 \u02c7 (5) \u010f \u03b3 \u017c S \u00b5\u03c0,ppds2qW ` p1 \u03c01p\u00a8|s2q, p\u03c0p\u00a8|s2q \u02d8 sup f:}f}L\u010f1 }gf}L , (6) where in line (4) we exploited Equation (3), line (5) follows from Jensen\u2019s inequality, and line (6) comes from the de\ufb01nition of Wasserstein distance. We now compute the Lipschitz semi-norm }gf}L. 
Let us introduce the distribution \u00b5\u03b3,\u03b4s \u03c01,p1 \u201c p1 \u00b4 \u03b3q\u03b4s ` \u03b3p1 \u03c01\u00b5\u03b3,\u03b4s \u03c01,p1 \u201c p1 \u00b4 \u03b3q\u03b4s pIdS \u00b4 \u03b3p1 \u03c01q\u00b41, i.e., the \u03b3-discounted stationary distribution when the initial state distribution is the Dirac delta \u03b4s. We observe that p1 \u00b4 \u03b3qgfpsq \u201c \u015f S \u00b5\u03b3,\u03b4s \u03c01,p1pds1qfps1q. Thus, to compute the Lipschitz semi-norm }gf}L, we proceed as follows for s, s P S: p1 \u00b4 \u03b3qgfpsq \u00b4 p1 \u00b4 \u03b3qgfpsq \u201c \u017c S \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds1q \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds1q \u00af fps1q \u010f W \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1, \u00b5\u03b3,\u03b4s \u03c01,p1 \u00af . 5 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT Thus, we have: W \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1, \u00b5\u03b3,\u03b4s \u03c01,p1 \u00af \u201c sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7p1 \u00b4 \u03b3q \u017c S ` \u03b4spds1q \u00b4 \u03b4spds1q \u02d8 fps1q ` \u03b3 \u017c S \u00b4\u00b4 p1 \u03c01\u00b5\u03b3,\u03b4s \u03c01,p1 \u00af pds1q \u00b4 \u00b4 p1 \u03c01\u00b5\u03b3,\u03b4s \u03c01,p1 \u00af pds1q \u00af fps1q \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 (7) \u010f p1 \u00b4 \u03b3qdSps, sq ` \u03b3 sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u017c S \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds2q \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds2q \u00af \u017c S p1 \u03c01pds1|s2qfps1q looooooooooomooooooooooon \u201c:hfps2q \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 \u02c7 (8) \u010f p1 \u00b4 \u03b3qdSps, sq ` \u03b3 sup f:}f}L\u010f1 }hf}L sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u017c S \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds2q \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1pds2q \u00af fps2q \u02c7 \u02c7 \u02c7 \u02c7 \u201c p1 \u00b4 \u03b3qdSps, sq ` \u03b3 sup f:}f}L\u010f1 }hf}L W \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1, \u00b5\u03b3,\u03b4s \u03c01,p1 \u00af , where line (7) follows from the de\ufb01nition of \u00b5\u03b3,\u03b4s \u03c01,p1, line (8) follows from observing that \u015f S p\u03b4spds1q \u00b4 \u03b4spds1qq fps1q \u201c fpsq\u00b4fpsq \u010f dSps, sq since }f}L \u010f 1 and by the de\ufb01nition of p1 \u03c01\u00b5\u03b3,\u03b4s \u03c01,p1. The Lipschitz semi-norm }hf}L is computed in Lemma 9.1, resulting in Lp1p1 ` L\u03c01q. Putting all together and exploiting the recursion, we obtain: W \u00b4 \u00b5\u03b3,\u03b4s \u03c01,p1, \u00b5\u03b3,\u03b4s \u03c01,p1 \u00af \u010f 1 \u00b4 \u03b3 1 \u00b4 \u03b3Lp1p1 ` L\u03c01qdSps, sq \u00f9 \u00f1 gfpsq \u00b4 gfpsq \u010f 1 1 \u00b4 \u03b3Lp1p1 ` L\u03c01qdSps, sq, that concludes the proof. Thus, we have bounded the Wasserstein distance between the \u03b3-discounted stationary distributions with the Wasserstein distance between the state transition kernels, averaged over the distribution \u00b5\u03c0,p. The resulting multiplicative constant involves the Lipschitz constants of the policy \u03c01 and of the con\ufb01guration p1. Therefore, remarkably, in order to obtain such a result it is not required that the policy-con\ufb01guration pair p\u03c0, pq is LC. 4.2 Decoupled Bound In some applications, it is useful to decouple the contribution of the con\ufb01guration p and that of the policy \u03c0 in the bound of the Wasserstein distance between the \u03b3-discounted stationary distributions. 
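Before decoupling the two contributions, Theorem 4.1 itself can be checked numerically in a small tabular Conf-MDP. The sketch below is illustrative and not from the paper: it uses the discrete metric, under which the Wasserstein distance reduces to the total variation distance, every policy and configuration is 1-LC, and the condition gamma * L_p'(1 + L_pi') < 1 becomes gamma < 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 4, 3, 0.4                     # gamma < 1/2, so gamma * L_p'(1 + L_pi') < 1

def random_simplex(*shape):
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

p, p_new = random_simplex(nS, nA, nS), random_simplex(nS, nA, nS)   # two configurations
pi, pi_new = random_simplex(nS, nA), random_simplex(nS, nA)         # two policies
rho = np.full(nS, 1 / nS)                                           # initial-state distribution

def kernel(pi, p):
    """State-state kernel p_pi(s'|s)."""
    return np.einsum("sa,sat->st", pi, p)

def mu(pi, p):
    """gamma-discounted stationary distribution (1 - gamma) rho (I - gamma p_pi)^{-1}."""
    return (1 - gamma) * rho @ np.linalg.inv(np.eye(nS) - gamma * kernel(pi, p))

tv = 0.5 * np.abs(kernel(pi_new, p_new) - kernel(pi, p)).sum(axis=1)  # W = TV under the discrete metric
lhs = 0.5 * np.abs(mu(pi_new, p_new) - mu(pi, p)).sum()               # W(mu', mu)
rhs = gamma / (1 - 2 * gamma) * (mu(pi, p) @ tv)                      # bound with L_p' = L_pi' = 1
assert lhs <= rhs + 1e-12                                             # Theorem 4.1
```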
Indeed, in several applicative scenarios of the Conf-MDP, as noted earlier, the con\ufb01guration activity (i.e., altering the transition model of the environment) and the policy learning are carried out by two different entities, possibly at different time scales. The following corollary provides a looser bound in which the contribution of policy and con\ufb01gurations are separated. Corollary 4.2. Let C be a Conf-MDP, let p\u03c0, pq, p\u03c01, p1q P \u2206A S \u02c6 \u2206S S\u02c6A be two policy-con\ufb01guration pairs such that \u03c01 is L\u03c01-LC and p1 is Lp1-LC. Then, if \u03b3Lp1p1 ` L\u03c01q \u0103 1, it holds that: W p\u00b5\u03c01,p1, \u00b5\u03c0,pq \u010f \u03b3 1 \u00b4 \u03b3Lp1p1 ` L\u03c01q \u017c S\u02c6A \u00b5\u03c0,ppdsq\u03c0pda|sqW ` p1p\u00a8|s, aq, pp\u00a8|s, aq \u02d8 ` \u03b3Lp1 1 \u00b4 \u03b3Lp1p1 ` L\u03c01q \u017c S \u00b5\u03c0,ppdsqW ` \u03c01p\u00a8|sq, \u03c0p\u00a8|sq \u02d8 . 4.3 Comparison with Existing Bounds The bounds presented in the previous sections generalize several existing results present the literature. In particular, Corollary 4.2 is an extension of Lemma 3 of Pirotta et al. (2015) for the case of Conf-MDPs in which we allow for the modi\ufb01cation of the con\ufb01guration (i.e., transition model) too. Furthermore, even in the non-con\ufb01gurable setting, i.e., when setting p \u201c p1 a.s., our result is stronger as it involves the Wasserstein distance between policies averaged over the \u03b3-discounted stationary distribution instead of its supremum over the state space: \u017c S \u00b5\u03c0,ppdsqW ` \u03c01p\u00a8|sq, \u03c0p\u00a8|sq \u02d8 loooooooooooooooooomoooooooooooooooooon Our Corollay 4.2 \u010f sup sPS W ` \u03c01p\u00a8|sq, \u03c0p\u00a8|sq \u02d8 loooooooooooomoooooooooooon Lemma 3 of Pirotta et al. (2015) . 6 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT A result connected to ours, with a bound involving the modi\ufb01cation of the transition model, appeared in Asadi et al. (2018) for model-based RL. However, the provided equation holds under a uniform bound on the Wasserstein distance between transition models supps,aqPS\u02c6A Wpp1p\u00a8|s, aq, pp\u00a8|s, aqq, that, instead, we relax considering the Wasserstein distance averaged over the \u03b3-discounted stationary distribution, i.e., \u015f S\u02c6A \u00b5\u03c0,ppdsq\u03c0pda|sqWpp1p\u00a8|s, aq, pp\u00a8|s, aqq. Finally, in Saleh et al. (2022) a bound on the Wasserstein distance between the \u03b3-discounted stationary distributions, as a function of the modi\ufb01cation of the policy only, is proposed under a different set of regularity assumptions. In particular, two conditions on the transition kernel are enforced for every \u03c0, \u03c01 P \u2206A S and \u03bd, \u03bd1 P \u2206S: W \u02c6\u017c S \u03bdpdsqp\u03c0p\u00a8|sq, \u017c S \u03bdpdsqp\u03c01p\u00a8|sq \u02d9 \u010f r L\u03c0 sup sPS W ` \u03c0p\u00a8|sq, \u03c01p\u00a8|sq \u02d8 , W \u02c6\u017c S \u03bdpdsqp\u03c0p\u00a8|sq, \u017c S \u03bd1pdsqp\u03c0p\u00a8|sq \u02d9 \u010f r L\u03bdW ` \u03bd, \u03bd1\u02d8 , where r L\u03c0 and r L\u03bd are suitable Lipschitz constants. These conditions, however, are not comparable with the ones considered in the present paper (Section 3), as they evaluate how fast the transition kernel changes when altering either the policy or the initial state distribution. Consequently, it is hard to argue whether of the resulting bound is tighter than ours. 
Nevertheless, it is worth noting that ours is the only one that involves the average Wasserstein distance between policies, rather than the supremum over the state (or state-action) space. 5 Relative Advantage Functions In this section, we revise the relative advantage functions for Conf-MDPs that are needed for the derivation of the performance improvement bounds, as introduced in Metelli et al. (2018); Metelli (2021). The notion of advantage function exists in the traditional MDP setting and evaluates the one-step improvement in executing an action a P A compared to executing the current policy \u03c0 P \u2206A S Puterman (1994). We now extend the advantage function notion to the Conf-MDP setting, properly accounting for the presence of the agent and the con\ufb01guration. Speci\ufb01cally, we introduce three notions: the policy advantage function, the con\ufb01guration advantage function, and the coupled advantage function, respectively de\ufb01ned for every ps, a, s1q P S \u02c6 A \u02c6 S as: A\u03c0,pps, aq \u201c q\u03c0,pps, aq \u00b4 v\u03c0,ppsq, A\u03c0,pps, a, s1q \u201c u\u03c0,pps, a, s1q \u00b4 q\u03c0,pps, aq, r A\u03c0,pps, a, s1q \u201c u\u03c0,pps, a, s1q \u00b4 v\u03c0,ppsq \u201c A\u03c0,pps, a, s1q ` A\u03c0,pps, aq. These functions evaluate the one-step performance improvement obtained in state s P S by either playing action a P A, for the policy advantage, selecting the next state s1 P S given that action a P A was played, for the model advantage, or both for the coupled advantage, compared to playing policy \u03c0 and transition model p. To quantify the one-step improvement in performance attained by a new policy \u03c01 or transition model p1 when the current policy is \u03c0 and the current model is p, we introduce the (decoupled) relative advantage functions (Kakade and Langford, 2002) de\ufb01ned for every state-action pair ps, aq P S \u02c6 A as: A\u03c01,p \u03c0,p psq \u201c \u017c A \u03c01pda|sqA\u03c0,pps, aq, A\u03c0,p1 \u03c0,p ps, aq \u201c \u017c S p1pds1|s, aqA\u03c0,pps, a, s1q, and the corresponding expected values under the \u03b3-discounted distributions: A\u03c01,p \u03c0,p,\u03c1 \u201c \u017c S \u00b5\u03c0,ppdsqA\u03c01,p \u03c0,p psq A\u03c0,p1 \u03c0,p,\u03c1 \u201c \u017c S \u017c A \u00b5\u03c0,ppdsq\u03c0pda|sqA\u03c0,p1 \u03c0,p ps, aq. In order to account for the combined effect of choosing the action with a new policy \u03c01 and the next state with the new con\ufb01guration p1, we introduce the coupled relative advantage function de\ufb01ned for every state s P S as: A\u03c01,p1 \u03c0,p psq \u201c \u017c S \u017c A \u03c01pda|sqp1pds1|s, aq r A\u03c0,pps, a, s1q. 7 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT Thus, A\u03c01,p1 \u03c0,p evaluates the one-step improvement obtained by the new policy-con\ufb01guration pair p\u03c01, p1q P \u2206A S \u02c6\u2206S S\u02c6A over the current one p\u03c0, pq P \u2206A S \u02c6 \u2206S S\u02c6A, i.e., the local gain in performance achieved by selecting an action with \u03c01 and the next state with p1. The corresponding expectation under the \u03b3-discounted distribution is given by: A\u03c01,p1 \u03c0,p,\u03c1 \u201c \u017c S \u00b5\u03c0,ppdsqA\u03c01,p1 \u03c0,p psq. To lighten the notation, we remove the subscript of the initial state distribution \u03c1 whenever clear from the context. Thus, we simply write A\u03c01,p \u03c0,p , A\u03c0,p1 \u03c0,p , and A\u03c01,p1 \u03c0,p . 
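In a finite Conf-MDP all of these quantities can be computed exactly; the following sketch (illustrative code, not from the paper) builds the tabular value functions v, q, u and the decoupled and coupled relative advantage functions defined above.

```python
import numpy as np

def value_functions(pi, p, r, gamma):
    """Tabular v, q, u for a policy-configuration pair (pi, p); r has shape (S, A, S)."""
    nS = p.shape[0]
    P_pi = np.einsum("sa,sat->st", pi, p)                 # state-state kernel p_pi
    r_sa = np.einsum("sat,sat->sa", p, r)                 # expected reward of (s, a)
    v = np.linalg.solve(np.eye(nS) - gamma * P_pi, np.einsum("sa,sa->s", pi, r_sa))
    q = r_sa + gamma * np.einsum("sat,t->sa", p, v)
    u = r + gamma * v[None, None, :]                      # u(s,a,s') = r(s,a,s') + gamma v(s')
    return v, q, u

def relative_advantages(pi, p, pi_new, p_new, r, gamma):
    """Decoupled and coupled relative advantages of (pi', p') over (pi, p)."""
    v, q, u = value_functions(pi, p, r, gamma)
    A_pol = np.einsum("sa,sa->s", pi_new, q - v[:, None])             # policy relative advantage
    A_conf = np.einsum("sat,sat->sa", p_new, u - q[:, :, None])       # configuration relative advantage
    A_coup = np.einsum("sa,sat,sat->s", pi_new, p_new, u - v[:, None, None])  # coupled relative advantage
    return A_pol, A_conf, A_coup

# The corresponding expected advantages follow by averaging these arrays with the
# gamma-discounted stationary distribution of the current pair (pi, p).
```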
The following result relates the coupled relative advantage function to the corresponding (decoupled) relative advantage functions.

Lemma 5.1 (Lemma A.1 of Metelli et al. (2018)). Let $A^{\pi',p'}_{\pi,p}$ be the coupled relative advantage function, and let $A^{\pi',p}_{\pi,p}$ and $A^{\pi,p'}_{\pi,p}$ be the (decoupled) policy and configuration relative advantage functions, respectively. Then, for every state $s \in \mathcal{S}$ it holds that:
$$A^{\pi',p'}_{\pi,p}(s) = A^{\pi',p}_{\pi,p}(s) + \int_{\mathcal{A}} \pi'(\mathrm{d}a|s)\, A^{\pi,p'}_{\pi,p}(s,a).$$

This result has a meaningful interpretation. To assess the one-step performance improvement in state $s \in \mathcal{S}$ obtained by moving from the policy-configuration pair $(\pi,p)$ to the new one $(\pi',p')$, i.e., $A^{\pi',p'}_{\pi,p}(s)$, we can add the contributions of two terms: (i) the one-step performance improvement due to the policy, i.e., $A^{\pi',p}_{\pi,p}(s)$; (ii) the one-step performance improvement due to the configuration, i.e., $A^{\pi,p'}_{\pi,p}(s,a)$, with the action sampled from the new policy $\pi'$.

6 Bound on the Performance Improvement

In this section, we use the results of the previous sections to derive lower bounds on the performance improvement yielded by changing the policy and the configuration. We start by revising the performance difference lemma for Conf-MDPs, as derived in Metelli et al. (2018). Then, we proceed to prove the performance improvement lower bounds. Specifically, in Section 6.1, we derive a tight lower bound that, unfortunately, is hardly usable in practice, since it involves quantities that are not easily computable and keeps the contributions of the policy and the configuration together (coupled). Then, in Section 6.2, we derive a looser bound that separates the effects of policy and configuration (decoupled) and requires the LC property only.

Theorem 6.1 (Performance Difference Lemma, Theorem 3.1 of Metelli et al. (2018)). Let $\mathcal{C}$ be a Conf-MDP. The performance improvement of the policy-configuration pair $(\pi',p') \in \Delta^{\mathcal{A}}_{\mathcal{S}} \times \Delta^{\mathcal{S}}_{\mathcal{S}\times\mathcal{A}}$ over $(\pi,p) \in \Delta^{\mathcal{A}}_{\mathcal{S}} \times \Delta^{\mathcal{S}}_{\mathcal{S}\times\mathcal{A}}$ is given by:
$$J^{\pi',p'} - J^{\pi,p} = \frac{1}{1-\gamma} \int_{\mathcal{S}} \mu^{\pi',p'}(\mathrm{d}s)\, A^{\pi',p'}_{\pi,p}(s).$$

This result extends the well-known performance difference lemma of Kakade and Langford (2002) to the case in which we are also allowed to change the transition model (i.e., the configuration). Its meaning can be summarized as follows: the performance improvement $J^{\pi',p'} - J^{\pi,p}$ attained by moving from the pair $(\pi,p)$ to the pair $(\pi',p')$ is computed by means of the coupled relative advantage function $A^{\pi',p'}_{\pi,p}$, averaged over the $\gamma$-discounted stationary distribution $\mu^{\pi',p'}$ induced by the new pair $(\pi',p')$.

6.1 Coupled Bound

From a practical perspective, the expression of Theorem 6.1 is an equality, but it cannot be used directly, since it requires computing (or estimating) an expectation w.r.t. the $\gamma$-discounted distribution $\mu^{\pi',p'}$, which depends on the new pair $(\pi',p')$ that we do not have access to. Instead, we would like a performance improvement lower bound that can be estimated using the current pair $(\pi,p)$. The following result serves the purpose.

Theorem 6.2 (Coupled Bound). Let $\mathcal{C}$ be an $L_r$-LC Conf-MDP, let $\pi,\pi' \in \Delta^{\mathcal{A}}_{\mathcal{S}}$ be $L_\pi$-LC and $L_{\pi'}$-LC policies, respectively, and let $p,p' \in \Delta^{\mathcal{S}}_{\mathcal{S}\times\mathcal{A}}$ be $L_p$-LC and $L_{p'}$-LC configurations. The performance improvement of the policy-configuration pair $(\pi',p')$ over $(\pi,p)$ is lower bounded as:
$$J^{\pi',p'} - J^{\pi,p} \;\ge\; \underbrace{\frac{1}{1-\gamma}\, A^{\pi',p'}_{\pi,p}}_{\text{advantage}} \;-\; \underbrace{\frac{\gamma}{(1-\gamma)\big(1-\gamma L_{p'}(1+L_{\pi'})\big)} \left\lVert A^{\pi',p'}_{\pi,p} \right\rVert_L \int_{\mathcal{S}} \mu^{\pi,p}(\mathrm{d}s)\, \mathcal{W}\big(p'^{\pi'}(\cdot|s),\, p^{\pi}(\cdot|s)\big)}_{\text{dissimilarity penalization}},$$
where the left-hand side is the performance improvement.

Proof. Exploiting the bound on the difference between $\gamma$-discounted state distributions (Theorem 4.1), we can derive the performance improvement bound:
$$J^{\pi',p'} - J^{\pi,p} = \frac{1}{1-\gamma} \int_{\mathcal{S}} \mu^{\pi',p'}(\mathrm{d}s)\, A^{\pi',p'}_{\pi,p}(s)$$
$$= \frac{1}{1-\gamma} \int_{\mathcal{S}} \mu^{\pi,p}(\mathrm{d}s)\, A^{\pi',p'}_{\pi,p}(s) + \frac{1}{1-\gamma} \int_{\mathcal{S}} \big(\mu^{\pi',p'}(\mathrm{d}s) - \mu^{\pi,p}(\mathrm{d}s)\big)\, A^{\pi',p'}_{\pi,p}(s) \quad (9)$$
$$\ge \frac{A^{\pi',p'}_{\pi,p}}{1-\gamma} - \frac{1}{1-\gamma} \left| \int_{\mathcal{S}} \big(\mu^{\pi',p'}(\mathrm{d}s) - \mu^{\pi,p}(\mathrm{d}s)\big)\, A^{\pi',p'}_{\pi,p}(s) \right| \quad (10)$$
$$\ge \frac{A^{\pi',p'}_{\pi,p}}{1-\gamma} - \frac{\gamma}{(1-\gamma)\big(1-\gamma L_{p'}(1+L_{\pi'})\big)} \int_{\mathcal{S}} \mu^{\pi,p}(\mathrm{d}s)\, \mathcal{W}\big(p'^{\pi'}(\cdot|s),\, p^{\pi}(\cdot|s)\big) \left\lVert A^{\pi',p'}_{\pi,p} \right\rVert_L, \quad (11)$$
where line (10) follows from line (9) by observing that $b \ge -|b|$ for any $b \in \mathbb{R}$, and line (11) is obtained by using Theorem 4.1 and the definition of the Lipschitz semi-norm.

The resulting bound is made of two terms, like the commonly known bounds in the literature (Kakade and Langford, 2002; Pirotta et al., 2013; Metelli et al., 2021). The first term, the advantage, accounts for the improvement in performance that can be obtained locally by replacing the pair $(\pi,p)$ with the new one $(\pi',p')$. The second term, the dissimilarity penalization, disincentivizes choosing a new pair that is too dissimilar from the current one. As expected, the multiplicative constant in front of the Wasserstein distance between the state-state transition kernels involves the Lipschitz constants of all the constitutive elements of the Conf-MDP, the policy, and the configuration.

6.2 Decoupled Bound

Despite its generality, the bound of Theorem 6.2 is hardly usable in practice because of the need to compute the Lipschitz semi-norm $\lVert A^{\pi',p'}_{\pi,p} \rVert_L$. Intuitively, this factor strictly depends on the similarity between the policy-configuration pairs $(\pi,p)$ and $(\pi',p')$. Indeed, when $(\pi,p) = (\pi',p')$ a.s., we have that $A^{\pi',p'}_{\pi,p}(s) = 0$ for all $s \in \mathcal{S}$ and, consequently, $\lVert A^{\pi',p'}_{\pi,p} \rVert_L = 0$. Unfortunately, obtaining a bound on this Lipschitz semi-norm that depends on some form of distance between the pairs $(\pi,p)$ and $(\pi',p')$ is technically challenging and may require additional assumptions.
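Before moving on, the following sketch illustrates how the right-hand side of Theorem 6.2 could be evaluated on a finite Conf-MDP whose states are embedded on the real line, so that the Wasserstein distance between next-state kernels has a closed form in terms of CDF differences; the state embedding, the Lipschitz constants, and the semi-norm $\lVert A^{\pi',p'}_{\pi,p} \rVert_L$ are quantities assumed to be supplied by the user, not derived here.

```python
import numpy as np

def wasserstein_1d(p, q, coords):
    """W1 between two discrete distributions supported on real coordinates coords."""
    order = np.argsort(coords)
    x = np.asarray(coords, dtype=float)[order]
    cdf_gap = np.abs(np.cumsum(p[order] - q[order]))[:-1]
    return float(np.sum(cdf_gap * np.diff(x)))

def coupled_lower_bound(adv, a_norm_L, mu, pi, pi_new, p, p_new, coords,
                        gamma, L_p_new, L_pi_new):
    """Right-hand side of the coupled bound (Theorem 6.2) for a finite 1-D state space.

    Requires gamma * L_p_new * (1 + L_pi_new) < 1 for the penalty constant to be valid."""
    # State-state kernels induced by (pi', p') and (pi, p): K(s'|s) = sum_a pi(a|s) p(s'|s, a).
    k_new = np.einsum("sa,sat->st", pi_new, p_new)
    k_old = np.einsum("sa,sat->st", pi, p)
    # Expected Wasserstein distance under the current gamma-discounted distribution mu.
    w_term = sum(mu[s] * wasserstein_1d(k_new[s], k_old[s], coords)
                 for s in range(len(mu)))
    penalty = gamma / ((1 - gamma) * (1 - gamma * L_p_new * (1 + L_pi_new)))
    return adv / (1 - gamma) - penalty * a_norm_L * w_term
```

The same routine makes explicit which quantities are estimable from data collected with the current pair (the expected advantage and the Wasserstein term under $\mu^{\pi,p}$) and which ones must be known or bounded a priori (the Lipschitz constants and the semi-norm).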
Thus, we defer this goal to future works, in this section, we focus on a simpler (and looser) bound that makes use of the LC condition only, accepting to bound }A\u03c01,p1 \u03c0,p }L with a constant independent from the distance between the pairs p\u03c0, pq and p\u03c01, p1q. The following result provides such a bound. Theorem 6.3 (Decoupled Bound for Lipschitz Conf-MDPs). Let C be a Lr-LC Conf-MDP, let \u03c0, \u03c01 P \u2206A S be two L\u03c0, L\u03c01-LC policies respectively, and let p, p1 P \u2206S S\u02c6A be two Lp, Lp1-LC con\ufb01gurations. The performance improvement of policy-con\ufb01guration pair p\u03c01, P 1q over p\u03c0, Pq is lower bounded as: J\u03c01,p1 \u00b4 J\u03c0,p \u011b A\u03c01,p \u03c0,p ` A\u03c0,p1 \u03c0,p 1 \u00b4 \u03b3 \u00b4 \u02dc c1 \u017c S\u02c6A \u00b5\u03c0,ppdsq\u03c0pda|sqW ` p1p\u00a8|s, aq, pp\u00a8|s, aq \u02d8 ` c2 \u017c S \u00b5\u03c0,ppdsqW ` \u03c01p\u00a8|sq, \u03c0p\u00a8|sq \u02d8 \u00b8 , where c1 and c2 are constant whose values are made explicit in the proof and depend on \u03b3, Lr, L\u03c0, L\u03c01, Lp, and Lp1. Some observations are in order. First, in order to obtain this result, we need to require that the Conf-MDP and both pairs p\u03c0, pq and p\u03c01, p1q to be LC. Indeed, the constants c1 and c2 will depend on the Lipschitz semi-norm of the Conf-MDP elements, and on the Lipschitz constants of both policies and both con\ufb01gurations. Second, Theorem 6.3 represents an decoupled bound, that separates the effects of the policy and the con\ufb01guration. This is made evident in the advantage term too that is now replaced with the sum of the expected decoupled relative advantages. Third, and more important, the bound displays a linear dependence on the Wasserstein distance between the transition models and the policies. Compared to well-known bounds in the literature in which the square of the divergence is present (typically TV divergence), such as Pirotta et al. (2013), this represents a looser dependence. 7 Conclusions In this paper, we have investigated the performance improvement bounds for Conf-MDP under Lipschitz regularity. We have \ufb01rst derived bounds on the Wasserstein distance between \u03b3-discounted stationary distributions that generalize 9 \fPerformance Improvement Bounds for Lipschitz Conf-MDPs A PREPRINT existing bounds in the literature. Then, we have provided two performance improvement lower bounds. Future promising works include the employment of the derived bounds for devising safe and trust region learning algorithms." + }, + { + "url": "http://arxiv.org/abs/2304.12966v1", + "title": "Towards Theoretical Understanding of Inverse Reinforcement Learning", + "abstract": "Inverse reinforcement learning (IRL) denotes a powerful family of algorithms\nfor recovering a reward function justifying the behavior demonstrated by an\nexpert agent. A well-known limitation of IRL is the ambiguity in the choice of\nthe reward function, due to the existence of multiple rewards that explain the\nobserved behavior. This limitation has been recently circumvented by\nformulating IRL as the problem of estimating the feasible reward set, i.e., the\nregion of the rewards compatible with the expert's behavior. In this paper, we\nmake a step towards closing the theory gap of IRL in the case of finite-horizon\nproblems with a generative model. 
We start by formally introducing the problem\nof estimating the feasible reward set, the corresponding PAC requirement, and\ndiscussing the properties of particular classes of rewards. Then, we provide\nthe first minimax lower bound on the sample complexity for the problem of\nestimating the feasible reward set of order ${\\Omega}\\Bigl(\n\\frac{H^3SA}{\\epsilon^2} \\bigl( \\log \\bigl(\\frac{1}{\\delta}\\bigl) + S\n\\bigl)\\Bigl)$, being $S$ and $A$ the number of states and actions respectively,\n$H$ the horizon, $\\epsilon$ the desired accuracy, and $\\delta$ the confidence.\nWe analyze the sample complexity of a uniform sampling strategy (US-IRL),\nproving a matching upper bound up to logarithmic factors. Finally, we outline\nseveral open questions in IRL and propose future research directions.", + "authors": "Alberto Maria Metelli, Filippo Lazzati, Marcello Restelli", + "published": "2023-04-25", + "updated": "2023-04-25", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Inverse reinforcement learning (IRL) aims at ef\ufb01ciently learning a desired behavior by observing an expert agent and inferring their intent encoded in a reward function (refer to Osa et al. (2018); Arora & Doshi (2021); Adams et al. (2022) for recent surveys on IRL). This ab*Equal contribution 1Politecnico di Milano, 32, Piazza Leonardo da Vinci, Milan, Italy. Correspondence to: Alberto Maria Metelli . Proceedings of the 39 th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). stract setting, that diverges from standard reinforcement learning (RL, Sutton & Barto, 1998), as the reward function has to be learned, arises in a large variety of real-world tasks. In particular, in a human-in-the-loop (Wu et al., 2022) scenario, when the expert is represented by a human solving a task, an explicit speci\ufb01cation of the reward function representing the human\u2019s goal is often unavailable. Experience suggests that humans are uncomfortable when asked to describe their intent and, thus, the underlying reward; while they are much more comfortable providing demonstrations of what is believed to be the right behavior. Indeed, human behavior is usually the product of many, possibly con\ufb02icting, objectives.1 Succeeding in retrieving a representation of the expert\u2019s reward has notable implications. First, we obtain explicit information for understanding the motivations behind the expert\u2019s choices (interpretability). Second, the reward can be employed in RL to train arti\ufb01cial agents, under shifts in the features of the underlying system (transferability). Since the beginning, the community recognized that the IRL problem is, per se, ill-posed, as multiple reward functions are compatible with the expert\u2019s behavior (Ng & Russell, 2000). This ambiguity was heterogeneously addressed by the algorithmic proposals that have followed over the years, which realized in several selection criteria, including maximum margin (Ratliff et al., 2006), maximum entropy (Zeng et al., 2022), minimum Hessian eigenvalue (Metelli et al., 2017). Some of these approaches come with theoretical guarantees on the sample complexity, although according to different performance indexes (e.g., Abbeel & Ng, 2004; Syed & Schapire, 2007; Pirotta & Restelli, 2016). A promising line of research that aspires to overcome the ambiguity issue has been recently investigated in (Metelli et al., 2021; Lindner et al., 2022). 
These works focus on estimating all the reward functions compatible with the expert\u2019s demonstrated behavior, namely the feasible rewards. Remarkably, this viewpoint that focuses on the feasible reward set, rather than on one reward obtained with a speci\ufb01c selection criterion, as previous works did, circumvents the ambiguity problem, postponing the reward 1In RL, the Sutton\u2019s hypothesis (Sutton & Barto, 1998) conjectures that a scalar reward is an adequate notion of goal. \fTowards Theoretical Understanding of Inverse Reinforcement Learning selection and pointing to the expert\u2019s intent. Although these works provide sample complexity guarantees in different settings, a rigorous understanding of the inherent complexity of the IRL problem is currently lacking. Contributions In this paper, we aim at taking a step toward the theoretical understanding of the IRL problem. As in (Metelli et al., 2021; Lindner et al., 2022), we consider the problem of estimating the feasible reward set. We focus on a generative model setting, where the agent can query the environment and the expert in any state, and consider \ufb01nite-horizon decision problems. The contributions of the paper can be summarized as follows. \u2022 We propose a novel framework to evaluate the accuracy in recovering the feasible reward set, based on the Hausdorff metric (Rockafellar & Wets, 1998). This tool generalizes existing performance indexes. Furthermore, we show that the feasible reward set enjoys a desirable Lipschitz continuity property w.r.t. the IRL problem (Section 3). \u2022 We devise a PAC (Probability Approximately Correct) framework for estimating the feasible reward set, providing the de\ufb01nition of p\u01eb, \u03b4q-PAC IRL algorithm. Then, we investigate the relationships between several performance indexes based on the Hausdorff metric (Section 4). \u2022 We conceive, based on the provided PAC requirements introduced, a novel sample complexity lower bound of order \u2126 \u00b4 H3SA \u01eb2 ` log ` 1 \u03b4 \u02d8 ` S \u02d8\u00af . This represents the most signi\ufb01cant contribution and, to the best of our knowledge, it is the \ufb01rst lower bound that values the importance of the relevant features of the IRL problem. From a technical perspective, the lower bound construction merges new proof ideas with reworks of existing techniques (Section 5). \u2022 We analyze a uniform sampling exploration strategy (UniformSampling-IRL, US-IRL) showing that, in the generative model setting, it matches the lower bound up to logarithmic factors (Section 6). The complete proofs of the results presented in the main paper are reported in Appendix B. 2. Preliminaries In this section, we provide the background that will be employed in the subsequent sections. Mathematical Background Let a, b P N with a \u010f b, we denote with Ja, bK :\u201c ta, . . . , bu and with JaK :\u201c J1, aK. Let X be a set, we denote with \u2206X the set of probability measures over X. Let Y be a set, we denote with \u2206X Y the set of functions with signature Y \u00d1 \u2206X . Let pX, dq be a (pre)metric space, where X is a set and d : X \u02c6 X \u00d1 r0, `8s is a (pre)metric.2 Let Y, Y1 \u010e X be non-empty sets, we de\ufb01ne the Hausdorff (pre)metric (Rockafellar & Wets, 1998) Hd : 2X \u02c6 2X \u00d1 r0, `8s between Y and Y1 induced by the (pre)metric d as follows: HdpY,Y1q:\u201cmax \" sup yPY inf y1PY1dpy,y1q, sup y1PY1 inf yPYdpy,y1q * . 
(1) Markov Decision Processes without Reward A timeinhomogeneous \ufb01nite-horizon Markov decision process without reward (MDP\\R) is de\ufb01ned as a 4-tuple M \u201c pS, A, p, Hq where S is a \ufb01nite state space (S \u201c |S|), A is a \ufb01nite action space (A \u201c |A|), p \u201c pphqhPJHK is the transition model where for every stage h P JHK we have ph P \u2206S S\u02c6A, and H P N is the horizon. An MDP\\R is time-homogeneous if, for every stage h P JH \u00b41K, we have ph \u201c ph`1 a.s.; in such a case, we denote the transition model with the symbol p only. A time-inhomogeneous reward function is de\ufb01ned as r \u201c prhqhPJHK, where for every stage h P JHK we have rh : S \u02c6 A \u00d1 r\u00b41, 1s.3 A Markov decision process (MDP, Puterman, 1994) is obtained by pairing an MDP\\R M with a reward function r. The agent\u2019s behavior is modeled with a time-inhomogeneous policy \u03c0 \u201c p\u03c0hqhPJHK where for every stage h P JHK, we have \u03c0h P \u2206A S . Let f P RS and g P RS\u02c6A, we denote with phfps, aq \u201c \u0159 s1PS phps1|s, aqfps1q and with \u03c0hgpsq \u201c \u0159 aPA \u03c0hpa|sqgps, aq the expectation operators w.r.t. the transition model and the policy, respectively. Value Functions and Optimality Given an MDP\\R M, a policy \u03c0, and a reward function r, the Q-function Q\u03c0p\u00a8; rq \u201c pQ\u03c0 hp\u00a8; rqqhPJHK induced by r represents the expected sum of rewards collected starting from ps, a, hq P S \u02c6 A \u02c6 JHK and following policy \u03c0 thereafter: Q\u03c0 hps, a; rq :\u201c E pM,\u03c0q \u00ab H \u00ff l\u201ch rlpsl, alq|sh \u201c s, ah \u201c a \ufb00 , where EpM,\u03c0q denotes the expectation w.r.t. M and \u03c0, i.e., ah \u201e \u03c0hp\u00a8|shq and sh`1 \u201e php\u00a8|sh, ahq for every stage h P Jh, HK. The Q-function ful\ufb01lls the Bellman equations (Puterman, 1994) for every ps, a, hq P S \u02c6 A \u02c6 JHK: Q\u03c0 hps, a; rq \u201c rhps, aq ` phV \u03c0 h`1ps, a; rq, V \u03c0 h ps; rq \u201c \u03c0hQ\u03c0 hps; rq and V \u03c0 H`1ps; rq \u201c 0, where V \u03c0p\u00a8; rq \u201c pV \u03c0 h p\u00a8; rqqhPJHK is the V-function. The advantage function A\u03c0 hps, a; rq \u201c Q\u03c0 hps, a; rq \u00b4 V \u03c0 h ps; rq represents the relative gain of playing action a P A rather than following policy \u03c0 in the state-stage pair ps, hq. A policy \u03c0\u02da is optimal if it has non-positive advantage ev2A premetric d satis\ufb01es the axioms: dpx, x1q \u011b 0 and dpx, xq \u201c 0 for all x, x1 P X . Any metric is clearly a premetric. 3For the sake of simplicity and w.l.o.g., we restrict to reward functions bounded by 1 in absolute value. \fTowards Theoretical Understanding of Inverse Reinforcement Learning erywhere, i.e., A\u03c0\u02da h ps, a; rq \u010f 0 for every ps, a, hq P S \u02c6A\u02c6JHK. The Qand V-functions of an optimal policy are denoted with Q\u02da hps, a; rq and V \u02da h ps; rq. Inverse Reinforcement Learning An inverse reinforcement learning problem (IRL, Ng & Russell, 2000) is de\ufb01ned as a pair pM, \u03c0Eq, where M is an MDP\\R and \u03c0E is an expert\u2019s policy. Informally, solving an IRL problem consists in \ufb01nding a reward function prhqhPJHK making \u03c0E optimal for the MDP\\R M paired with reward function r. Any reward function ful\ufb01lling this condition is called feasible and the set of all such reward functions is called feasible reward set (Metelli et al., 2021; Lindner et al., 2022), de\ufb01ned as: RpM,\u03c0Eq :\u201c ! 
prhqhPJHK \f \f \f@hPJHK : rh :S \u02c6A\u00d1r\u00b41,1s ^@ps,a,hqPS \u02c6A\u02c6JHK : A\u03c0E h ps,a;rq\u010f0 ) . (2) We will omit the subscript pM, \u03c0Eq whenever clear from the context. Empirical MDP and Empirical Expert\u2019s Policy Let D\u201ctpsl,al,hl,s1 l,aE l qulPJtK be a dataset of tPN tuples, where for every lPJtK, we have s1 l\u201ephlp\u00a8|sl,alq and aE l \u201e \u03c0E hlp\u00a8|slq. We introduce the counts for every ps,a,hqPS\u02c6 A\u02c6JHK: nt hps,a,s1q:\u201c\u0159t l\u201c11tpsl,al,hl,s1 lq\u201cps,a,h,s1qu, nt hps,aq:\u201c\u0159 s1PS nt hps,a,s1q, nt hpsq:\u201c\u0159 aPAnt hps,aq, and nt,E h ps,aq:\u201c\u0159t l\u201c11tpsl,aE l q\u201cps,aqu. These quantities allow de\ufb01ning the empirical transition model p pt\u201cpp pt hqhPJHK and empirical expert\u2019s policy p \u03c0t,E\u201cp\u03c0t,E h qhPJHK as follows: p pt hps1|s, aq :\u201c # nt hps,a,s1q nt hps,aq if nt hps, aq \u0105 0 1 S otherwise , p \u03c0E,t h pa|sq :\u201c # nE,t h ps,aq nt hpsq if nt hpsq \u0105 0 1 A otherwise . (3) In the time-homogeneous case, we simply merge the samples collected at different stages h P JHK. We denote with p x Mt, p \u03c0E,tq the empirical IRL problem, where x Mt \u201c pS, A, p pt, Hq the empirical MDP\\R induced by p pt. Finally, we denote with p Rt :\u201c Rp x Mt,p \u03c0E,tq the feasible reward set induced p x Mt, p \u03c0E,tq. We will omit the superscript t, whenever clear from the context and write p R. 3. Lipschitz Framework for IRL In this section, we analyze the regularity properties of the feasible reward set in terms of the Lipschitz continuity w.r.t. the IRL problem. To make the idea more concrete, suppose that R is the feasible reward set obtained from the IRL problem pM, \u03c0Eq and that p R is obtained with a different IRL problem p x M, p \u03c0Eq, which we can think to as an empirical version of pM, \u03c0Eq, with an estimated transition model p p replacing the true model p. Intuitively, to have any learning guarantee, \u201csimilar\u201d IRL problems (p \u00ab p p and \u03c0E \u00ab p \u03c0E) should lead to \u201csimilar\u201d feasible reward sets (R \u00ab p R).4 To formally de\ufb01ne a Lipschitz framework, we need to select a (pre)metric for evaluating dissimilarities between feasible reward sets and IRL problems. While we defer the presentation of the (pre)metric for the IRL problems to Section 3.1, where it will emerge naturally, for the feasible reward sets, we employ the Hausdorff (pre)metric HdpR, p Rq (Equation 1), induced by a (pre)metric dpr, p rq used to evaluate the dissimilarity between individual reward functions r P R and p r P p R. With this choice, two feasible reward sets are \u201csimilar\u201d if every reward r P R is \u201csimilar\u201d to some reward p r P p R in terms of the (pre)metric d. In the next sections, we employ as d the metric induced by the L8-norm between the reward functions r P R and p r P p R:5 dGpr, p rq :\u201c max ps,a,hqPS\u02c6A\u02c6JHK |rhps, aq \u00b4 p rhps, aq| , (4) where G stands for \u201cgenerative\u201d. In Section 3.1, we prove that the Lipschitz continuity is ful\ufb01lled when no restrictions on the reward function are enforced (besides boundedness in r\u00b41, 1s). Then, in Section 3.2, we show that, when further restrictions on the viable rewards are required (e.g., state-only reward), such a regularity property no longer holds. 3.1. 
Lipschitz Continuous Feasible Reward Sets In order to prove the Lipschitz continuity property, we use the explicit form of the feasible reward sets introduced in (Metelli et al., 2021) and extended by (Lindner et al., 2022) for the \ufb01nite-horizon case, that we report below. Lemma 3.1 (Lemma 4 of Lindner et al. (2022)). A reward function r \u201c prhqhPJHK is feasible for the IRL problem pM, \u03c0Eq if and only if there exist two functions pAh, VhqhPJHK where for every h P JHK we have Ah : S \u02c6 A \u00d1 R\u011b0, Vh : S \u02c6 A \u00d1 R, and VH`1 \u201c 0, such that for every ps, a, hq P S \u02c6 A \u02c6 JHK it holds that: rhps,aq\u201c\u00b4Ahps,aq1t\u03c0E h pa|sq\u201c0u `Vhpsq\u00b4phVh`1ps,aq. Furthermore, if |rhps, aq| \u010f 1, if follows that |Vhpsq| \u010f H \u00b4 h ` 1 and Ahps, aq \u010f H \u00b4 h ` 1. A form of regularity of the feasible reward set was already studied in Theorem of 3.1 of Metelli et al. (2021) and in Theorem 5 of Lindner et al. (2022), providing an error propagation analysis. These results are based on showing the existence of a particular reward r r feasible for the IRL 4If not, any arbitrary accurate estimate pp p, p \u03c0Eq of pp, \u03c0Eq, may induce feasible sets p R and R with \ufb01nite non-zero dissimilarity. 5We discuss other choices of d in Section 4. \fTowards Theoretical Understanding of Inverse Reinforcement Learning problem p x M, p \u03c0Eq, whose distance from the original reward function r P R is bounded by a dissimilarity term between pM, \u03c0Eq and p x M, p \u03c0Eq. Unfortunately, such a reward r r is not guaranteed to be bounded in r\u00b41, 1s even when the original reward r is (and, thus, it might be r r R p R according to Equation 2).6 In Lemma B.1, with a modi\ufb01ed construction, we show the existence of another particular feasible reward p r bounded in r\u00b41, 1s (and, thus, p r P p R). From this, the Lipschitz continuity of the feasible reward sets follows. Theorem 3.2 (Lipschitz Continuity). Let R and p R be the feasible reward sets of the IRL problems pM, \u03c0Eq and p x M, p \u03c0Eq. Then, it holds that:7 HdGpR, p Rq \u010f 2\u03c1GppM, \u03c0Eq, p x M, p \u03c0Eqq 1 ` \u03c1GppM, \u03c0Eq, p x M, p \u03c0Eqq , (5) where \u03c1Gp\u00a8, \u00a8q is a (pre)metric between IRL problems, de\ufb01ned as: \u03c1GppM,\u03c0Eq,p x M,p \u03c0Eqq:\u201c max ps,a,hqPS\u02c6A\u02c6JHKpH\u00b4h`1q \u02c6 \u00b4\u02c7 \u02c7 \u02c71t\u03c0E h pa|sq\u201c0u\u00b41tp \u03c0E h pa|sq\u201c0u \u02c7 \u02c7 \u02c7`}php\u00a8|s,aq\u00b4p php\u00a8|s,aq}1 \u00af . Some observations are in order. First, the function \u03c1G is indeed a (pre)metric since it is non-negative and takes value 0 when the IRL problems coincide. Second, as supported by intuition, \u03c1G is composed of two terms related to the estimation of the expert\u2019s policy and of the transition model. While for the transition model, the dissimilarity is formalized by the L1-norm distance }php\u00a8|s, aq \u00b4 p php\u00a8|s, aq}1, for the policy, the resulting term deserves some comments. Indeed, the dissimilarity |1t\u03c0E h pa|sq\u201c0u \u00b4 1tp \u03c0E h pa|sq\u201c0u| highlights that what matters is whether an action a P A is played by the expert and not the corresponding probability \u03c0E h pa|sq. Indeed, the expert\u2019s policy plays an action (with any non-zero probability) only if it is an optimal action. 3.2. 
Non-Lipschitz Continuous Feasible Reward Sets In this section, we illustrate three cases of feasible reward sets restrictions that turn out not to ful\ufb01ll the condition of Theorem 3.2. These examples consider three conditions commonly enforced in the literature: state-only reward function rhpsq (Example 3.1), time-homogeneous reward function rps, aq (Example 3.2), and \u03b2-margin reward function (Example 3.3). We present counter-examples in which in front of \u01eb-close transition models, the induced feasible sets are far apart by a constant independent of \u01eb. For space reasons, we report the complete derivation in Appendix C. Example 3.1 (State-only reward rhpsq). State-only reward functions have been widely considered in many IRL ap6We illustrate in Fact B.1 an example of this phenomenon. 7This implies the standard Lipschitz continuity, by simply bounding 2\u03c1GppM,\u03c0Eq,p x M,p \u03c0Eqq 1`\u03c1GppM,\u03c0Eq,p x M,p \u03c0Eqq \u010f 2\u03c1GppM, \u03c0Eq, p x M, p \u03c0Eqq. s0 s\u00b4 s` a1 a2 1{2 1{2 1 1 (a) s0 s1 a1 a2 1{2 1{2 1 (b) Figure 1. The MDP\\R employed in the examples of Section 3.2. denotes a transition executed for multiple actions. proaches (e.g., Ng & Russell, 2000; Abbeel & Ng, 2004; Syed & Schapire, 2007; Komanduru & Honorio, 2019). We formalize the state-only feasible reward set as follows: Rstate \u201c R X t@ps, a, a1, hq : rhps, aq \u201c rhps, a1qu. Consider the MDP\\R of Figure 1a with H \u201c2, \u03c0E h ps0q\u201c p \u03c0E h ps0q\u201ca1 with hPt1,2u. Set p1ps`|s0,a1q\u201c1{2`\u01eb{4 and p p1ps`|s0,a1q\u201c1{2\u00b4\u01eb{4 and, thus, }p1p\u00a8|s0,a1q\u00b4 p p1p\u00a8|s0,a1q}1 \u201c\u01eb. Let us set r2ps`q\u201c1 and r2ps\u00b4q\u201c\u00b41, which makes \u03c0E optimal under p. We observe that p R is de\ufb01ned by p r2ps\u00b4q\u010fp r2ps`q. Recalling that the rewards are bounded in r\u00b41,1s, we have HdGpRstate, p Rstateq\u011b1. Example 3.2 (Time-homogeneous reward rps, aq). Timehomogeneous reward functions have been employed in several RL (e.g., Dann & Brunskill, 2015) and IRL settings (e.g., Lindner et al., 2022). We formalize the timehomogeneous feasible reward set as follows: Rhom \u201c R X t@ps, a, h, h1q : rhps, aq \u201c rh1ps, aqu. Consider the MDP\\R of Figure 1b with H \u201c2, \u03c0E 1 ps0q\u201c p \u03c0E 1 ps0q\u201ca1 and \u03c0E 2 ps0q\u201cp \u03c0E 2 ps0q\u201ca2. For hPt1,2u, we set phps0|s0,a1q\u201c1{2`\u01eb{4 and p phps0|s0,a1q\u201c1{2\u00b4\u01eb{4, thus, }php\u00a8|s0,a1q\u00b4 p php\u00a8|s0,a1q}1 \u201c\u01eb. We set rps0,a1q\u201c1, rps0,a2q\u201c1\u00b4\u01eb{6, and rps1,a1q\u201crps1,a2q\u201c1{2 making \u03c0E optimal. We can prove that HdGpRhom, p Rhomq\u011b1{4. Example 3.3 (\u03b2-margin reward). A \u03b2-margin reward enforces a suboptimality gap of at least \u03b2 \u0105 0 (Ng & Russell, 2000; Komanduru & Honorio, 2019). We formalize it in the \ufb01nite-horizon case with a sequence \u03b2 \u201c p\u03b2hqhPJHK, possibly different for every stage: R\u03b2-mar \u201cRXt@ps,a,hq : A\u03c0E h ps,a;rqPt0uYp\u00b48,\u00b4\u03b2hsu. \fTowards Theoretical Understanding of Inverse Reinforcement Learning Consider the MDP\\R in Figure 1a with \u03c0E h ps0q \u201c p \u03c0E h ps0q \u201c a1 for h P t1, 2u. We set p1ps`|s0, a1q \u201c 1{2`\u01eb and p p1ps`|s0, a1q \u201c 1{2 \u00b4 \u01eb. We set for MDP\\R M the reward function as r1ps0, aq \u201c 0 and rhps`, aq \u201c \u00b4rhps\u00b4, aq \u201c 1 for a P ta1, a2u and h P J2, HK. 
In ps0, 1q the suboptimality gap is \u03b21 \u201c 2 ` 2\u01ebpH \u00b4 1q. By selecting H \u011b 1 ` 1{\u01eb, the feasible set p R\u03b2-mar is empty. These examples show that, under certain classes of restrictions, the feasible reward set is not Lipschitz continuous w.r.t. the transition model and, more in general, w.r.t. the IRL problem. The generalization of these examples to more abstract conditions for guaranteeing the Lipschitz continuity of the feasible reward set is beyond the scope of the paper. 4. PAC Framework for IRL with a Generative Model In this section, we discuss the PAC (Probably Approximately Correct) requirements for estimating the feasible reward set with access to a generative model of the environment. We \ufb01rst provide the notion of a learning algorithm estimating the feasible reward set with a generative model (Section 4.1). Then, we formally present the PAC requirement for the Hausdorff (pre)metric Hd (Section 4.2). Finally, we discuss the relationships between the PAC requirements with different choices of (pre)metric d (Section 4.3). 4.1. Learning Algorithms with a Generative Model A learning algorithm for estimating the feasible reward set is a pair A \u201c p\u00b5, \u03c4q, where \u00b5 \u201c p\u00b5tqtPN is a sampling strategy de\ufb01ned for every time step t P N as \u00b5t P \u2206S\u02c6A\u02c6JHK Dt\u00b41 with Dt \u201c pS \u02c6 A \u02c6 JHK \u02c6 S \u02c6 Aqt and \u03c4 is a stopping time w.r.t. a suitably de\ufb01ned \ufb01ltration. At every step t P N, the learning algorithm query the environment in a triple pst, at, htq, selected based on the sampling strategy \u00b5tp\u00a8|Dt\u00b41q, where Dt\u00b41 \u201c ppsl, al, hl, s1 l, aE l qqt\u00b41 l\u201c1 P Dt\u00b41 is the dataset of past samples. Then, the algorithm observes the next state s1 t \u201e phtp\u00a8|st, atq and expert\u2019s action aE t \u201e \u03c0E htp\u00a8|stq and updates the dataset Dt \u201c Dt\u00b41 \u2018 pst, at, ht, s1 t, aE t q. Based on the collected data D\u03c4, the algorithm computes the empirical IRL problem px M \u03c4, p \u03c0E,\u03c4q, based on Equation (3) and the empirical feasible reward set p R\u03c4. 4.2. PAC Requirement We now introduce a general notion of a PAC requirement for estimating the feasible reward set of an IRL problem. To this end, we consider the Hausdorff (pre)metric introduced in Section 3 de\ufb01ned in terms of the reward (pre)metric dpr, p rq. We denote with d-IRL the problem of estimating the feasible reward set under the Hausdorff (pre)metric Hd. De\ufb01nition 4.1 (PAC Algorithm for d-IRL). Let \u01eb P p0, 2q and \u03b4 P p0, 1q. An algorithm A \u201c p\u00b5, \u03c4q is p\u01eb, \u03b4q-PAC for d-IRL if: P pM,\u03c0Eq,A \u00b4 HdpR, p R\u03c4q \u010f \u01eb \u00af \u011b 1 \u00b4 \u03b4, where PpM,\u03c0Eq,A denotes the probability measure induced by executing the algorithm A in the IRL problem pM, \u03c0Eq and p R\u03c4 is the feasible reward set induced by the empirical IRL problem p x M\u03c4, p \u03c0E,\u03c4q estimated with the dataset D\u03c4. The sample complexity is de\ufb01ned as \u03c4 :\u201c |D\u03c4|. In the next section, we show the relationship between PAC requirements de\ufb01ned for notable choices of d. 4.3. Different Choices of d So far, we have evaluated the dissimilarity between the feasible reward sets by means of the Hausdorff induced by dG, i.e., the L8-norm of between individual reward functions. In the literature, other (pre)metrics d have been proposed (e.g., Metelli et al., 2021; Lindner et al., 2022). 
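As a concrete illustration of the Hausdorff (pre)metric of Equation (1) induced by the distance $d_G$ of Equation (4), the sketch below evaluates $H_{d_G}$ between two reward sets represented as finite collections of arrays; actual feasible reward sets are continuous regions, so the finite representation, the shapes, and the random instances are purely illustrative.

```python
import numpy as np

def d_g(r1, r2):
    """L-infinity distance between two reward functions of shape (H, S, A)."""
    return float(np.max(np.abs(r1 - r2)))

def hausdorff(set_r, set_r_hat, dist=d_g):
    """Hausdorff (pre)metric between two finite collections of reward functions."""
    def directed(a, b):
        return max(min(dist(r, r_hat) for r_hat in b) for r in a)
    return max(directed(set_r, set_r_hat), directed(set_r_hat, set_r))

# Illustrative usage on randomly drawn candidate rewards (H=2, S=3, A=2).
rng = np.random.default_rng(0)
R = [rng.uniform(-1, 1, size=(2, 3, 2)) for _ in range(5)]
R_hat = [r + rng.normal(scale=0.05, size=r.shape) for r in R]
print(hausdorff(R, R_hat))
```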
dG Q\u02da-IRL Since the recovered reward functions are often used for performing forward RL, an index of interest is the dissimilarity between optimal Q-functions obtained with the reward r P R and p r P p R in the original MDP\\R: dG Q\u02dapr, p rq :\u201c max ps,a,hqPS\u02c6A\u02c6JHK |Q\u02da hps, a; rq \u00b4 Q\u02da hps, a; p rq| . dG V \u02da-IRL We are often interested in not just being accurate in estimating the optimal Q-function, but rather in the performance of an optimal policy p \u03c0\u02da, learned with the recovered reward p r P p R, evaluated under the true reward r P R: dG V \u02dapr,p rq:\u201c sup p \u03c0\u02daP\u03a0\u02dapp rq max ps,hqPS\u02c6JHK \u02c7 \u02c7 \u02c7V \u02da h ps;rq\u00b4V p \u03c0\u02da h ps;rq \u02c7 \u02c7 \u02c7, where \u03a0\u02dapp rq:\u201ct\u03c0:@ps,a,hqPS\u02c6A\u02c6JHK:A\u03c0 hps,a;p rq\u010f0u is the set of optimal policies under the recovered reward p r. The following result formalizes the relationships between the presented d-IRL problems. Theorem 4.1 (Relationships between d-IRL problems). Let us introduce the graphical convention for c \u0105 0: x-IRL y-IRL c meaning that any p\u01eb, \u03b4q-PAC x-IRL algorithm is pc\u01eb, \u03b4qPAC y-IRL. Then, the following statements hold: \fTowards Theoretical Understanding of Inverse Reinforcement Learning dG-IRL dG Q\u02da-IRL dG V \u02da-IRL . 2H H 2H Theorem 4.1 shows that any p\u01eb, \u03b4q-PAC guarantee on dG, implies p\u01eb1, \u03b4q-PAC guarantees on both dG Q\u02da and dG V \u02da, where \u01eb1 \u201c \u0398pH\u01ebq is linear in the horizon H. This justi\ufb01es why focusing on dG-IRL, as in the following section where sample complexity lower bounds are derived. The lower bound analysis for dG Q\u02da-IRL and dG V \u02da-IRL is left to future works. 5. Lower Bounds In this section, we establish sample complexity lower bounds for the dG-IRL problem based on the PAC requirement of De\ufb01nition 4.1 in the generative model setting. We start presenting the general result (Section 5.1) and, then, we comment on its form and, subsequently, provide a sketch of the construction of the hard instances for obtaining the lower bound (Section 5.2). For the sake of presentation, we assume that the expert\u2019s policy \u03c0E is known; the extension to the case of unknown \u03c0E is reported in Appendix D. 5.1. Main Result In this section, we report the main result of the lower bound of the sample complexity of learning the feasible reward set. Theorem 5.1 (Lower Bound for dG-IRL). Let A \u201c p\u00b5, \u03c4q be an p\u01eb, \u03b4q-PAC algorithm for dG-IRL. Then, there exists an IRL problem pM, \u03c0Eq such that, if \u03b4 \u010f 1{32, S \u011b 9, A \u011b 2, and H \u011b 12, the expected sample complexity is lower bounded by: \u2022 if the transition model p is time-inhomogeneous: E pM,\u03c0Eq,A r\u03c4s \u011b 1 1024 H3SA \u01eb2 \u02c61 2 log \u02c61 \u03b4 \u02d9 ` 1 5S \u02d9 ; \u2022 if the transition model p is time-homogeneous: E pM,\u03c0Eq,A r\u03c4s \u011b 1 512 H2SA \u01eb2 \u02c61 2 log \u02c61 \u03b4 \u02d9 ` 1 5S \u02d9 , where EpM,\u03c0Eq,A denotes the expectation w.r.t. the probability measure PpM,\u03c0Eq,A. Some observations are in order. First, the derived lower bound displays a linear dependence on the number of actions A and dependence on the horizon H raised to a power 2 or 3, which depends on whether the underlying transition model is time-homogeneous, as common even for forward RL (e.g., Dann & Brunskill, 2015; Domingues et al., 2021). 
Second, we identify two different regimes visible inside the parenthesis related to the dependence on the number of states S and the con\ufb01dence \u03b4. Speci\ufb01cally, for small values of \u03b4 (i.e., \u03b4 \u00ab 0), the dominating part is log ` 1 \u03b4 \u02d8 , leading to a sample complexity of order \u2126 \u00b4 H3SA \u01eb2 log ` 1 \u03b4 \u02d8\u00af . Instead, for large \u03b4 (i.e., \u03b4 \u00ab 1{32), the most relevant part is the one corresponding to S, leading to sample complexity of order \u2126 \u00b4 H3S2A \u01eb2 \u00af (both for the time-inhomogeneous case). An analogous two-regime behavior has been previously observed in the reward-free exploration setting (Jin et al., 2020; Kaufmann et al., 2021; M\u00b4 enard et al., 2021). 5.2. Sketch of the Proof In this section, we provide a sketch of the construction of the lower bounds of Theorem 5.1. The idea consists in deriving two separate bounds depending on the regime of \u03b4, which are based on two building blocks reported in Figure 2. These instances are used to build lower bounds for a single state s\u02da and the extension to multiple states and stages follows standard constructions (e.g., Domingues et al., 2021). Small-\u03b4 regime Figure 2a reports the instances employed in this regime. The expert\u2019s policy is \u03c0Epsq \u201c a0. From state s\u02da, all actions bring the system to the absorbing states s` and s\u00b4 with equal probability, except for action a\u02da \u2030 a0 that increases by \u01eb1 \u0105 0 the probability of reaching state s`. The learner, in order to recover a correct feasible reward set, has to identify which is the action behaving like a\u02da (among the A available ones) to force action a0 to be optimal. Considering \u0398pAq instances, in which action a\u02da changes, an application of BretagnolleHuber inequality (Lattimore & Szepesv\u00b4 ari, 2020, Theorem 14.2) allows deriving a sample complexity lower bounded by \u2126 \u00b4 AH2 \u01eb2 log ` 1 \u03b4 \u02d8\u00af . Large-\u03b4 regime Figure 2b depicts the instances used in this regime. The expert\u2019s policy is again \u03c0Epsq \u201c a0. The system, instead, is made of S \u201c \u0398pSq next states reachable with equal probability by playing action a0. All other actions aj \u2030 a0 alter the probability distribution of the next state. Speci\ufb01cally, by playing the action aj \u2030 a0, the probability of reaching the next state s1 k is given by p1`\u01eb1vpjq k q{S, where vpjq P t\u00b41, 1uS is a vector such that \u0159S k\u201c1 vpjq k \u201c 0. By varying vj in a suitable set, de\ufb01ned by means of a packing argument, we obtain \u0398p2Sq instances each one separated by a \ufb01nite dissimilarity, depending on \u01eb1. We obtain the lower bound by means of an application of the Fano\u2019s inequality (Gerchinovitz et al., 2017, Proposition 4) which results in order \u2126 \u00b4 pp1\u00b4\u03b4q\u00b4log 2qS2AH2 \u01eb2 \u00af . Extension to Multiple States and Stages At the beginning, the system randomly chooses a problem between Fig\fTowards Theoretical Understanding of Inverse Reinforcement Learning s\u02da s\u00b4 s` a\u02da \u2030 a\u02da 1{2\u00b4\u01eb1 1{2 1{2`\u01eb1 1{2 1 1 (a) MDP\\R used for the small-\u03b4 regime. s\u02da sS . . . s1 s2 a0 aj \u2030 a0 1{2 1{2 1{2 p1`\u01eb1vSq{S p1`\u01eb1v1q{S p1`\u01eb1v2q{S 1 1 1 (b) MDP\\R used for the large-\u03b4 regime. Figure 2. The MDP\\R employed in the constructions of the lower bounds of Section 5. The expert\u2019s policy is \u03c0Epsq \u201c a0. 
denotes a transition executed for multiple actions. Input: signi\ufb01cance \u03b4 P p0, 1q, \u01eb target accuracy t \u00d0 0, \u01eb0 \u00d0 `8 while \u01ebt \u0105 \u01eb do t \u00d0 t ` SAH Collect one sample from each ps, a, hq P S \u02c6 A \u02c6 JHK Update p pt according with (3) Update \u01ebt \u201c maxps,a,hqPS\u02c6A\u02c6JHK Ct hps, aq (resp. r Ct hps, aq) end while Algorithm 1. UniformSampling-IRL (US-IRL) for timeinhomogeneous (resp. time-homogeneous) transition models. ure 2a and Figure 2b. Then, it transitions to the state in which the system may randomly remain for H \u0103 H stages after which it transitions with uniform probability to any of the \u0398pSq states. H \u201c \u0398pHq for the time-inhomogeneous (resp. H \u201c Op1q for the time-homogeneous) case. In any state s\u02da and stage h\u02da, the agent can face the problems shown in Figure 2. By varying s\u02da and h\u02da among its possible HS (resp. S) values, we get the bounds in Theorem 5.1. Remark 5.1 (Generative vs Forward models). This construction suf\ufb01ces for obtaining a bound for the generative model, but it can be easily extended to work with the forward model of the environment (in which the agent interacts via trajectories only) by means of a standard tree-based construction (Jin et al., 2020; Domingues et al., 2021). In such a case, the resulting PAC guarantee would no longer be expressed via the L8-norm distance dG between reward, but worst-case over the visitation distributions induced by the policies: dFpr, p rq :\u201c sup\u03c0 EM,\u03c0r|rhps, aq \u00b4 p rhps, aq|s. 6. Algorithm In this section, we analyze the sample complexity of a uniform sampling strategy (UniformSampling-IRL, US-IRL) for the dG-IRL problem (Algorithm 1). We start presenting the sample complexity analysis (Section 6.1) and, then, we provide a sketch of the proof (Section 6.2). 6.1. Main Result The US-IRL algorithm was presented in (Metelli et al., 2021; Lindner et al., 2022) but analyzed for different IRL formulations (see Section 7). We revise it since it matches our sample complexity lower bounds, provided that more sophisticated concentration tools w.r.t. those employed in (Metelli et al., 2021; Lindner et al., 2022). For the sake of presentation, we assume that the expert\u2019s policy \u03c0E is known; the extension to unknown \u03c0E is reported in Appendix D. At each iteration, the algorithm collects a sample from every ps, a, hq P S \u02c6 A \u02c6 JHK and, for timeinhomogeneous models, computes the con\ufb01dence function: Ct hps, aq :\u201c 2 ? 2pH \u00b4 h ` 1q d 2\u03b2 ` nt hps, aq, \u03b4 \u02d8 nt hps, aq , (6) where \u03b2 ` n, \u03b4 \u02d8:\u201c logpSAH{\u03b4q`pS \u00b41q log ` ep1`n{pS\u00b4 1q \u02d8 .8 The algorithm stops as soon as all con\ufb01dence functions fall below the threshold \u01eb. The following theorem provides the sample complexity of US-IRL. Theorem 6.1 (Sample Complexity of US-IRL). Let \u01eb \u0105 0 and \u03b4 P p0, 1q, US-IRL is p\u01eb, \u03b4q-PAC for dG-IRL and with probability at least 1 \u00b4 \u03b4 it stops after \u03c4 samples with: \u2022 if the transition model p is time-inhomogeneous: \u03c4 \u010f 8H3SA \u01eb2 \u02c6 log \u02c6SAH \u03b4 \u02d9 ` pS \u00b4 1qC \u02d9 , where C \u201c logpe{pS \u00b4 1q ` p8eH2q{ppS \u00b4 1q\u01eb2qplogpSAH{\u03b4q ` 4eqq; 8In the time-homogeneous case, the algorithm merges the samples collected at different h P JHK for the estimation of the transition model and replaces the con\ufb01dence function with: r Ct hps, aq :\u201c 2 ? 
2pH \u00b4 h ` 1q d 2r \u03b2 ` ntps, aq, \u03b4 \u02d8 ntps, aq , (7) where r \u03b2 ` n, \u03b4 \u02d8:\u201c logpSA{\u03b4q ` pS \u00b4 1q log ` ep1 ` n{pS \u00b4 1q \u02d8 and ntps, aq \u201c \u0159H h\u201c1 nt hps, aq. \fTowards Theoretical Understanding of Inverse Reinforcement Learning \u2022 if the transition model p is time-homogeneous and : \u03c4 \u010f 8H2SA \u01eb2 \u02c6 log \u02c6SA \u03b4 \u02d9 ` pS \u00b4 1qC2 \u02d9 , where r C \u201c logpe{pS \u00b4 1q ` p8eH2q{ppS \u00b4 1q\u01eb2qplogpSA{\u03b4q ` 4eqq. Thus, time-inhomogeneous (resp. time-homogeneous) transition models, US-IRL suffers a sample complexity bound of order r O \u00b4 H3SA \u01eb2 ` log ` 1 \u03b4 \u02d8 ` S \u02d8\u00af (resp. r O \u00b4 H2SA \u01eb2 ` log ` 1 \u03b4 \u02d8 ` S \u02d8\u00af ) matching the lower bounds of Theorem 5.1 up to logarithmic factors for both regimes of \u03b4. 6.2. Sketch of the Proof The idea of the proof is to exploit Theorem 3.2 to reduce the Hausdorff distance to the L1-norm between the transition model }p pt hp\u00a8|s, aq \u00b4 php\u00a8|s, aq}1. It is worth noting this term replaces |pp pt h \u00b4 phqVh| appearing in previous works (Metelli et al., 2021; Lindner et al., 2022) that was comfortably bounded using H\u00a8 oeffding\u2019s inequality. In our case, the L1-norm is unavoidable due to the Hausdorff distance that implies a worst-case choice of the reward function and, thus, of Vh. This term has to be carefully bounded using the stronger KL-divergence concentration result of (Jonsson et al., 2020, Proposition 1) to get the Oplogp1{\u03b4q ` Sq rate.9 7. Related Works In this section, we discuss the related works about sample complexity analysis and lower bounds for IRL. Additional related works are reported in Appendix A. Sample Complexity for Estimating the Feasible Reward Set The notion of feasible reward set R was introduced in (Ng & Russell, 2000) in an implicit form in the in\ufb01nite-horizon discounted case as a linear feasibility problem and, subsequently, adapted to the \ufb01nite-horizon case in (Lindner et al., 2022). Furthermore, in (Metelli et al., 2021; Lindner et al., 2022) an explicit form of the reward functions belonging to the feasible region R was provided. In these works, the problem of estimating the feasible reward set is studied for the \ufb01rst time considering a \u201creference\u201d pair of rewards pr, q rq P R \u02c6 p R against which to compare the rewards inside the recovered sets, leading to the (pre)metric: r HdpR, R, r, q rq :\u201c max \" inf p rP p R dpr, p rq, inf rPR dpr, q rq * . (8) 9A more na\u00a8 \u0131ve application of the L1-concentration of (Weissman et al., 2003) would lead to the worse OpS logp1{\u03b4qq rate. Compared to the Hausdorff (pre)metric (Equation 1), in Equation (8) there is no maximization over the choice of pr, q rq, leading to a simpler problem.10 In (Metelli et al., 2021), a uniform sampling approach (similar to Algorithm 1) is proved to achieve a sample complexity of order r O \u00b4 \u03b32SA p1\u00b4\u03b3q4\u01eb2 \u00af for the index of Equation (8) with d \u201c dG Q\u02da in the discounted setting with generative model. For the forward model case, the AceIRL algorithm (Lindner et al., 2022) suffers a sample complexity of order r O \u00b4 H5SA \u01eb2 \u00af for the index of Equation (8) with d \u201c dF V \u02da, in the \ufb01nitehorizon case.11 Unfortunately, the reward recovered by AceIRL reward function is not guaranteed to be bounded by a predetermined constant (e.g., r\u00b41, 1s). 
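Returning to the US-IRL procedure of Section 6, a minimal sketch of the confidence function of Equation (6) and of the resulting stopping rule is reported below; the uniform data collection from the generative model is abstracted away (after $t$ rounds each $(s,a,h)$ has exactly $t$ samples), and the constants follow Equation (6) in the time-inhomogeneous case.

```python
import numpy as np

def beta(n, S, A, H, delta):
    """log(SAH/delta) + (S-1) log(e (1 + n/(S-1))), as in Equation (6)."""
    return np.log(S * A * H / delta) + (S - 1) * np.log(np.e * (1 + n / (S - 1)))

def confidence(n, h, S, A, H, delta):
    """Confidence function C_h^t(s, a) for a visit count n = n_h^t(s, a) >= 1."""
    return 2 * np.sqrt(2) * (H - h + 1) * np.sqrt(2 * beta(n, S, A, H, delta) / n)

def us_irl_rounds(S, A, H, delta, eps, max_rounds=10**7):
    """Number of uniform-sampling rounds before every confidence falls below eps.
    One round draws one fresh sample from each (s, a, h)."""
    for t in range(1, max_rounds):
        worst = max(confidence(t, h, S, A, H, delta) for h in range(1, H + 1))
        if worst <= eps:
            return t, t * S * A * H   # rounds and total number of samples
    raise RuntimeError("target accuracy not reached within max_rounds")

print(us_irl_rounds(S=5, A=3, H=4, delta=0.05, eps=0.5))
```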
Modi\ufb01ed versions of these algorithms allow embedding problemdependent features under a speci\ufb01c choice of a reward within the set. Sample Complexity Lower Bounds in IRL To the best of our knowledge, the only work that proposes a sample complexity lower bound for IRL is (Komanduru & Honorio, 2021). The authors consider a \ufb01nite state and action MDP\\R and the IRL algorithm of (Ng & Russell, 2000) for \u03b2-strict separable IRL problems (i.e., with suboptimality gap at least \u03b2) with state-only rewards in the discounted setting. When only two actions are available (A \u201c 2) and the samples are collected starting in each state with equal probability, by means of a geometric construction and Fano\u2019s inequality, the authors derive an \u2126pS log Sq lower bound on the number of trajectories needed to identify a reward function. Note that this analysis limits to the identi\ufb01cation of a reward function within a \ufb01nite set, rather than evaluating the accuracy of recovering the feasible reward set. 8. Conclusions and Open Questions In this paper, we provided contributions to the understanding of the complexity of the IRL problem. We conceived a lower bound of order \u2126 \u00b4 H3SA \u01eb2 ` log ` 1 \u03b4 \u02d8 ` S \u02d8\u00af on the number samples collected with a generative model in the \ufb01nite-horizon setting. This result is of relevant interest since it sets, for the \ufb01rst time, the complexity of the IRL problem, de\ufb01ned as the problem of estimating the feasible reward set. Furthermore, we showed that a uniform sampling strategy matches the lower bound up to logarithmic factors. Nevertheless, the IRL problem is far from being closed. In the following, we outline a road map of open questions, hoping to inspire researchers to work in this appealing area. 10In this sense, a PAC guarantee according to De\ufb01nition 4.1, implies a PAC guarantee de\ufb01ned w.r.t. (pre)metric of Equation (8). 11As discussed in Remark 5.1, in the forward model case, the dissimilarity is in expectation w.r.t. the worst-case policy. \fTowards Theoretical Understanding of Inverse Reinforcement Learning Forward Model The most straightforward extension of our \ufb01ndings is moving to the forward model setting, in which the agent can interact with the environment through trajectories only. As we already noted, our lower bounds can be comfortably extended to this setting. However, in this case, the PAC requirement has to be relaxed since controlling the L8-norm between rewards is no longer a viable option (e.g., for the possible presence of almost unreachable states). Which distance notion should be used for this setting? Will the Lipschitz regularity of Section 3 still hold? Problem-Dependent Analysis Our analysis is worst-case in the class of IRL problems. Would it be possible to obtain a problem-dependent complexity results? Previous problem-dependent analyses provided results tightly connected to the properties of the speci\ufb01c reward selection procedure (Metelli et al., 2021; Lindner et al., 2022). Clearly, a currently open question, in all settings in which reward is missing, including reward-free exploration (Jin et al., 2020) and IRL, is how to de\ufb01ne a problem-dependent quantity in replacement of the suboptimality gaps. Reward Selection Our PAC guarantees concern with the complete feasible reward set. However, algorithmic solutions to IRL implement a speci\ufb01c criterion for selecting a reward (e.g., maximum entropy, maximum margin). 
How the PAC guarantee based on the Hausdorff distance relates to guarantees on a single reward selected with a speci\ufb01c criterion within R?" + }, + { + "url": "http://arxiv.org/abs/2304.05073v2", + "title": "A Tale of Sampling and Estimation in Discounted Reinforcement Learning", + "abstract": "The most relevant problems in discounted reinforcement learning involve\nestimating the mean of a function under the stationary distribution of a Markov\nreward process, such as the expected return in policy evaluation, or the policy\ngradient in policy optimization. In practice, these estimates are produced\nthrough a finite-horizon episodic sampling, which neglects the mixing\nproperties of the Markov process. It is mostly unclear how this mismatch\nbetween the practical and the ideal setting affects the estimation, and the\nliterature lacks a formal study on the pitfalls of episodic sampling, and how\nto do it optimally. In this paper, we present a minimax lower bound on the\ndiscounted mean estimation problem that explicitly connects the estimation\nerror with the mixing properties of the Markov process and the discount factor.\nThen, we provide a statistical analysis on a set of notable estimators and the\ncorresponding sampling procedures, which includes the finite-horizon estimators\noften used in practice. Crucially, we show that estimating the mean by directly\nsampling from the discounted kernel of the Markov process brings compelling\nstatistical properties w.r.t. the alternative estimators, as it matches the\nlower bound without requiring a careful tuning of the episode horizon.", + "authors": "Alberto Maria Metelli, Mirco Mutti, Marcello Restelli", + "published": "2023-04-11", + "updated": "2023-04-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION The discounted formulation of the Markov Decision Process (MDP, Puterman, 2014), initially studied in (Blackwell, 1962; Bellman, 1966), established itself as one of the most popular models for Reinforcement Learning (RL, Sutton and Barto, 2018) due to its favorable theoretical tractability and its link with temporal difference learnProceedings of the 26th International Conference on Arti\ufb01cial Intelligence and Statistics (AISTATS) 2023, Valencia, Spain. PMLR: Volume 206. Copyright 2023 by the author(s). ing (Sutton, 1988), a key ingredient behind several successful algorithms (e.g., Watkins and Dayan, 1992; Mnih et al., 2015; Lillicrap et al., 2016; Silver et al., 2016). On a technical level, discounted RL problems are based on the estimation of \u201cexponentially discounted\u201d quantities over an in\ufb01nite horizon. Speci\ufb01cally, policy evaluation requires estimating the \u03b3-discounted value function V \u00b5 \u03b3 of a policy \u00b5: V \u00b5 \u03b3 psq \u201c E \u00b5 \u00ab`8 \u00ff t\u201c0 \u03b3tRpst, atq \u02c7 \u02c7 \u02c7 \u02c7 s0 \u201c s \ufb00 , (1) whereas policy optimization involves the estimation of the policy gradient (Sutton et al., 1999): \u2207\u00b5V \u00b5 \u03b3 \u201c E \u00b5 \u00ab`8 \u00ff t\u201c0 \u03b3t\u2207\u00b5 log \u00b5pat|stqQ\u00b5 \u03b3pst, atq \ufb00 . (2) One can equivalently write those quantities as expectations over the state-action space Eps,aq\u201e\u03c0\u00b5 \u03b3 rfps, aqs, where \u03c0\u00b5 \u03b3 ps, aq :\u201c p1 \u00b4 \u03b3q E\u00b5r\u0159`8 t\u201c0 \u03b3t1tst \u201c s, at \u201c aus is the \u03b3-discounted state-action distribution induced by policy \u00b5. 
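In practice, quantities such as those in Equations (1) and (2) are estimated from episodes truncated at a finite horizon T. A minimal sketch of this common finite-horizon estimator, written for a fixed policy (i.e., a Markov reward process with kernel P), is given below; the two-state kernel, the function f, and the horizon are illustrative, and the leading (1 - gamma) factor turns the truncated discounted return into an estimate of the gamma-discounted mean (dropping it recovers the value of Equation (1)).

```python
import numpy as np

def finite_horizon_estimate(P, nu, f, gamma, T, num_episodes, rng):
    """(1 - gamma) * average truncated discounted return over episodes of length T.
    The truncation introduces a bias of order gamma**T."""
    S = len(nu)
    total = 0.0
    for _ in range(num_episodes):
        x = rng.choice(S, p=nu)
        ret = 0.0
        for t in range(T):
            ret += gamma ** t * f[x]
            x = rng.choice(S, p=P[x])
        total += (1 - gamma) * ret
    return total / num_episodes

rng = np.random.default_rng(1)
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative 2-state kernel
nu = np.array([1.0, 0.0])
f = np.array([0.0, 1.0])
print(finite_horizon_estimate(P, nu, f, gamma=0.95, T=200, num_episodes=500, rng=rng))
```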
The latter can be seen as the stationary distribution of a suitably de\ufb01ned Markov Chain (MC, Levin and Peres, 2017) obtained from the original MDP by \ufb01xing the policy \u00b5 and considering, at any step, a reset probability 1 \u00b4 \u03b3 of returning to the initial state. This means that discounted RL is rooted in the mean estimation of a function in an MC. Nonetheless, the latter technical problem has received little attention in the discounted RL literature, which has mostly focused on the pitfalls of common practices (e.g., Thomas, 2014; Lehnert et al., 2018; Nota and Thomas, 2020; Tang et al., 2021; Zhang et al., 2022) and their impact on the learning problem (Jiang et al., 2016; Van Seijen et al., 2019; Amit et al., 2020; Guo et al., 2022). Instead, how to optimally collect samples from the MC and how to appropriately perform the mean estimation remain mostly obscure. This paper formally studies the \u03b3-discounted mean estimation in MCs. First, we provide a general formulation of the problem by de\ufb01ning an estimation algorithm as a pairing of a reset policy, which is used to make decisions on whether to reset the chain (i.e., re-start from the initial state) at a given step, and an actual estimator, from which the estimate is computed on the collected samples. For this notion of estimation algorithm, we introduce a PAC requirement that guarantees a small estimation error with high probability when enough samples are collected. Most importantly, arXiv:2304.05073v2 [cs.LG] 14 Apr 2023 \fA Tale of Sampling and Estimation in Discounted Reinforcement Learning we derive a lower bound on the number of samples required by any estimation algorithm to meet the proposed PAC requirement, which relates the sample complexity to the discount factor \u03b3 and the mixing properties of the chain. Having established the statistical barriers of the problem, we shift our focus toward the properties of practical estimation algorithms. The most common practice in discounted RL is to compute the quantities in Equations (1, 2) through a \ufb01nite-horizon algorithm, i.e., that resets the chain every T steps. This approach is known to suffer from a meaningful bias (Thomas, 2014), which can be only partially mitigated with a careful choice of T and correcting factors (though they are seldom used in practice). Alternatively, one can design an unbiased estimation algorithm that, at every step, rejects the collected sample with probability \u03b3, otherwise it accepts the sample and resets the chain. However, this one-sample estimator has been mostly used as a theoretical tool (e.g., Thomas, 2014; Metelli et al., 2021), since it wastes a large portion of the samples and, thus, suffers from a large variance. Another option is to reset the chain like the one-sample estimator, but to compute the estimate over all the collected samples, like the \ufb01nite-horizon estimators. The latter all-samples approach introduces some bias by bringing dependent samples, but it mitigates the variance through a greater effective sample size. For the mentioned estimation algorithms, we study their computational properties, derive concentration inequalities, and certify the allsamples approach, which practitioners almost neglect, is the one with the best statistical pro\ufb01le, as it results nearly minimax optimal. 
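A minimal sketch contrasting the one-sample and all-samples strategies under the same gamma-reset sampling is reported below; the finite chain is illustrative, and the estimator names follow the informal description above rather than any fixed implementation.

```python
import numpy as np

def gamma_reset_estimates(P, nu, f, gamma, N, rng):
    """One-sample vs. all-samples estimates of the gamma-discounted mean of f,
    under a reset policy that restarts the chain with probability 1 - gamma."""
    S = len(nu)
    x = rng.choice(S, p=nu)
    accepted, visited = [], []
    for _ in range(N):
        visited.append(f[x])
        if rng.random() < 1 - gamma:
            # One-sample: keep only the state observed at the reset event (unbiased,
            # but only about (1 - gamma) * N samples are actually used).
            accepted.append(f[x])
            x = rng.choice(S, p=nu)
        else:
            x = rng.choice(S, p=P[x])
    one_sample = np.mean(accepted) if accepted else np.nan
    # All-samples: average f over every visited state (dependent samples, slight bias).
    all_samples = np.mean(visited)
    return one_sample, all_samples

rng = np.random.default_rng(3)
P = np.array([[0.9, 0.1], [0.2, 0.8]])
nu = np.array([1.0, 0.0])
f = np.array([0.0, 1.0])
print(gamma_reset_estimates(P, nu, f, gamma=0.95, N=10_000, rng=rng))
```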
Original Contributions In summary, we contribute: \u2022 A formal de\ufb01nition of the problem of \u03b3-discounted mean estimation in Markov chains and its corresponding PAC requirement (Section 3); \u2022 The \ufb01rst minimax lower bound of order r \u2126p1{ a Np1 \u00b4 \u03b2\u03b3qq on the error of \u03b3-discounted mean estimation in MCs, where N is the number of collected samples and 1 \u00b4 \u03b2 is the absolute spectral gap (Levin and Peres, 2017) of the chain (Section 4); \u2022 The analysis of the statistical properties of a family of estimation algorithms that includes the \ufb01nite-horizon, onesample, all-samples types (Section 5), in which the allsamples approach results in the best statistical pro\ufb01le; \u2022 An empirical evaluation of the mentioned estimation algorithms over simple yet illustrative problems, which uphold the compelling statistical properties of the allsamples estimator (Section 6). Finally, this paper aims to shed light on the statistical barriers of \u03b3-discounted mean estimation in MCs, which stands as the technical bedrock of the discounted RL formulation. As a by-product of this theoretical analysis, the all-samples estimation approach emerges as an interesting opportunity for the development of novel practical algorithms for discounted RL supported by compelling statistical properties. 2 PRELIMINARIES In this section, we introduce the necessary background that will be employed in the subsequent sections of the paper. Notation Let X be a set and F a \u03c3-algebra on X. We denote with PpXq the set of probability measures over pX, Fq. We denote with BpXq the set of F-measurable real-valued functions. Let \u03bd P PpXq and f P BpXq, with little abuse of notation, we denote with \u03bd : BpXq \u00d1 R the expectation operator \u03bdf \u201c \u015f X fpxq\u03bdpdxq. For p P r1, 8q, we de\ufb01ne the Lpp\u03c0q-norm as }f}p \u03c0,p \u201c \u015f X |fpxq|p\u03c0pdxq. Let T : BpXq \u00d1 BpXq be a linear operator, we de\ufb01ne the operator norm as }T}\u03c0,p\u00d1q \u201c sup}f}\u03c0,p\u010f1 }Tf}\u03c0,q, for p, q P r1, 8q. Let \u03bd, \u00b5 P PpXq, the chi-square divergence is de\ufb01ned as \u03c72p\u03bd}\u00b5q \u201c \u203a \u203apd\u03bd{d\u00b5 \u00b4 1q2\u203a \u203a2 \u00b5. For a, b P N with a \u010f b we employ the notation r ra, bs s \u201c ta, . . . , bu. Markov Chains A Markov kernel is an F-measurable function P : X \u00d1 PpXq mapping every state x P X to a probability measure Pp\u00a8|xq P PpXq. We denote with PpX, Xq the set of Markov kernels over pX, Fq. With little abuse of notation, we denote with the same symbol the operator P : BpXq \u00d1 BpXq de\ufb01ned as pPfqpxq \u201c \u015f X fpyqPpdy|xq for x P X. A probability measure \u03c0 P PpXq is invariant w.r.t. P if \u03c0 \u201c \u03c0P. Let \u03a0 \u201c 1\u03c0, we de\ufb01ne the absolute L2-spectral gap (Levin and Peres, 2017) as 1 \u00b4 \u03b2, where \u03b2 \u201c }P \u00b4 \u03a0}\u03c0,2\u00d12. Discounted Sampling Let \u03b3 P r0, 1s be a discount factor, P P PpX, Xq be a Markov kernel, and \u03bd P PpXq be an initial-state distribution, the \u03b3-discounted stationary distribution \u03c0\u03b3 P PpXq is de\ufb01ned in several equivalent forms: \u03c0\u03b3 \u201c p1 \u00b4 \u03b3q \u00ff tPN \u03b3t\u03bdP t \u201c p1 \u00b4 \u03b3q\u03bd pI \u00b4 \u03b3Pq\u00b41 \u201c p1 \u00b4 \u03b3q\u03bd ` \u03b3\u03c0\u03b3P. (3) \u03c0\u03b3 represents normalized expected count of the times each state is visited, where a visit at time t P N counts \u03b3t. 
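The equivalent forms of Equation (3) can be checked numerically on a small chain; the sketch below compares the truncated Neumann series with the closed form $(1-\gamma)\nu(I-\gamma P)^{-1}$ and verifies the fixed-point identity, using an illustrative two-state kernel.

```python
import numpy as np

P = np.array([[0.9, 0.1], [0.2, 0.8]])   # illustrative 2-state Markov kernel
nu = np.array([0.5, 0.5])
gamma = 0.9

# Form 1: truncated series (1 - gamma) * sum_t gamma^t nu P^t.
acc, nuPt = np.zeros(2), nu.copy()
for t in range(2000):
    acc += gamma ** t * nuPt
    nuPt = nuPt @ P
pi_series = (1 - gamma) * acc

# Form 2: closed form (1 - gamma) * nu (I - gamma P)^{-1}.
pi_closed = (1 - gamma) * nu @ np.linalg.inv(np.eye(2) - gamma * P)

# Form 3 (fixed point): pi_gamma = (1 - gamma) * nu + gamma * pi_gamma P.
fixed_point_ok = np.allclose(pi_closed, (1 - gamma) * nu + gamma * pi_closed @ P)

print(pi_series, pi_closed, fixed_point_ok)   # the forms should agree
```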
If \u03b3 \u0103 1, \u03c0\u03b3 is guaranteed to exist. When \u03b3 \u201c 1, we denote with \u03c0 \u201c lim\u03b3\u00d11 \u03c0\u03b3 the stationary distribution of P, if it exists. It is well-known that \u03c0\u03b3 is also the stationary distribution of the MC with kernel P\u03b3 \u201c p1 \u00b4 \u03b3q1\u03bd ` \u03b3P. 3 \u03b3-DISCOUNTED MEAN ESTIMATION In this section, we formally de\ufb01ne the problem of \u03b3discounted mean estimation in Markov chains. Then, we introduce a general framework for characterizing a broad class of estimators (Section 3.1), and we formally de\ufb01ne the PAC requirement to asses their quality (Section 3.2). Let \u03bd P PpXq be an initial-state distribution and P P PpX, Xq be a Markov kernel. Given a discount factor \u03b3 P r0, 1s and a measurable function f P BpXq, our goal \fAlberto Maria Metelli, Mirco Mutti, Marcello Restelli X0\u223c\u03bd X1\u223cP(\u00b7|X0) X2\u223cP(\u00b7|X1) X3\u223cP(\u00b7|X2) X4\u223cP(\u00b7|X3) X\u03c41\u22121\u223cP(\u00b7|X\u03c41\u22122) X\u03c41\u223c\u03bd X\u03c41+1\u223cP(\u00b7|X\u03c41) X8\u223cP(\u00b7|X7) X\u03c42\u22121\u223cP(\u00b7|X\u03c42\u22122) X\u03c4M \u223c\u03bd X\u03c4M+1\u223cP(\u00b7|X\u03c4M ) X\u03c4M+2\u223cP(\u00b7|X\u03c4M+1) X9\u223cP(\u00b7|X8) XN\u22121\u223cP(\u00b7|XN\u22121) Y0=0 Y1=0 Y2=0 Y3=0 Y\u03c41\u22122=0 Y\u03c41\u22121=1 reset Y\u03c41=0 Y\u03c41+1=0 Y\u03c42\u22122=0 Y\u03c42\u22121=1 reset Q reset Y\u03c4M =0 Y\u03c4M +1=0 Y\u03c4M +2=0 YN\u22121=0 \u03be1 \u03be2 \u03beM . . . Q Q Q Figure 1: Graphical representation of the sampling process in a Markov chain with a reset policy. Algorithm 1 Markov chain sampling with reset policy. Input: Markov kernel P, initial-state distribution \u03bd, discount factor \u03b3, reset policy \u03c1 \u201c p\u03c1tqtPN, number of samples N Output: dataset HN H0 \u201c pq, X0 \u201e \u03bd for t P J0, N \u00b4 1K do Yt \u201e \u03c1tp\u00a8|Ht, Xtq Ht`1 \u201c Ht \u2018 ppXt, Ytqq \u2018 denotes concatenation if Yt \u201c 0 then Xt`1 \u201e Pp\u00a8|Xtq else Xt`1 \u201e \u03bd end if end for return HN consists in estimating: \u03c0\u03b3f :\u201c E X\u201e\u03c0\u03b3rfpXqs \u201c \u017c X fpxq\u03c0\u03b3pdxq, (4) i.e., the expectation of f under the \u03b3-discounted stationary distribution \u03c0\u03b3 induced by \u03bd and P, as de\ufb01ned in Equation (3). Furthermore, we introduce the quantity:1 \u03c32 \u03b3f :\u201c Var X\u201e\u03c0\u03b3rfpXqs \u201c \u017c X pfpxq \u00b4 \u03c0\u03b3fq2\u03c0\u03b3pdxq, i.e., the variance of f under distribution \u03c0\u03b3. When \u03b3 \u201c 1, we denote with \u03c0f and \u03c32f the expectation and variance of f under the stationary distribution, when they exist. 3.1 Reset-based Estimation Algorithms In order to perform a reliable estimation of \u03c0\u03b3f, it is advisable to have the possibility of \u201cresetting\u201d the chain, i.e., to interrupt the natural evolution of the chain based on the Markov kernel P, and restart the simulation from a state sampled from the initial-state distribution \u03bd. For these reasons, we introduce the notion of reset policy, i.e., a device that decides whether to reset the chain, based on time, the current history of observed states and reset decisions. De\ufb01nition 3.1 (Reset Policy). A reset policy is a sequence \u03c1 \u201c p\u03c1tqtPN of functions \u03c1t : Ht \u02c6 X \u00d1 Ppt0, 1uq map1\u03c0\u03b3 : BpXq \u00d1 R and \u03c32 \u03b3 : BpXq \u00d1 R\u011b0 act as operators. 
ping for every t P N a history of past states and resets Ht \u201c pX0, Y0, . . . , Xt\u00b41, Yt\u00b41q P Ht \u201c pX \u02c6 t0, 1uqt and the current state Xt P X, to a probability measure \u03c1tp\u00a8|Ht, Xtq P Ppt0, 1uq. Thus, at every time instant t P N, based on Ht P Ht and Xt P X, we sample the reset decision Yt P t0, 1u from the current reset policy \u03c1tp\u00a8|Ht, Xtq. If Yt \u201c 1, then we reset, i.e., the next state Xt`1 is sampled from the initial state distribution \u03bd, whereas if Yt \u201c 0 the chain evolution proceeds and Xt`1 is sampled from the Markov kernel Pp\u00a8|Xtq. The resulting sampling algorithm is reported in Algorithm 1. Resettable and Resetted Processes Given a reset policy \u03c1 \u201c p\u03c1tqtPN, we can represent the distribution of the next state with the resettable process, de\ufb01ned through the resettable Markov kernel P\u03bd : X \u02c6t0, 1u \u00d1 PpXq, de\ufb01ned for every pX, Y q P X \u02c6 t0, 1u and measurable set B P F as: P\u03bdpB|X, Y q \u201c Y \u00a8 \u03bdpBq ` p1 \u00b4 Y q \u00a8 PpB|Xq. (5) Suppose we run the process for N P N steps, the product measure generating the history HN is given by P N \u03bd,\u03c1 \u201c \u03bd b \u03c10 b p\u00c2N\u00b41 t\u201c1 P\u03bd b \u03c1tq. The sequence of states resulting from applying a reset policy \u03c1 is a non-stationary nonMarkovian process, called resetted process, whose kernel P\u03bd,\u03c1t,t : Ht \u02c6 X \u00d1 PpXq is de\ufb01ned for t P N, history H P Ht, state X P X, and measurable set B P F as:2 P\u03bd,\u03c1t,tpB|X, Hq \u201c \u03c1tpt1u|H, Xq \u00a8 \u03bdpBq ` \u03c1tpt0u|H, Xq \u00a8 PpB|Xq. (6) Trajectories and Horizons We de\ufb01ne a trajectory as the sequence of states observed between two consecutive resets. The number of trajectories M can be computed in terms of the resets, i.e., M \u201c 1 ` \u0159N\u00b41 t\u201c0 Yi. We introduce the time instants \u03c4i in which a reset is performed as: \u03c4i\u201c $ \u2019 & \u2019 % 0 if i\u201c1 1`minttPN :t\u011b\u03c4i\u00b41^Yt\u201c1u if iPJ2,MK N if i\u201cM `1 . (7) 2If \u03c1t is stationary and/or Markovian then P\u03bd,\u03c1t,t is stationary and/or Markovian. \fA Tale of Sampling and Estimation in Discounted Reinforcement Learning Therefore, for every i P JMK, a trajectory is given by \u03bei \u201c pX\u03c4i, . . . , X\u03c4i`1\u00b41q, whose horizon is computed as Ti \u201c \u03c4i`1 \u00b4 \u03c4i. A graphical representation of the resulting sampling process is provided in Figure 1. 3.2 Estimators and PAC Requirement In addition to the reset policy \u03c1, to actually de\ufb01ne an estimation algorithm, we need an estimator, i.e., a function p \u03b7 : HN \u02c6 BpXq \u00d1 R that maps a history of observations HN P HN and a measurable function f P BpXq to a real number p \u03b7pHN, fq P R. Thus, an estimation algorithm is a pair A \u201c p\u03c1, p \u03b7q. We now introduce the PAC requirement to assess the quality of an estimation algorithm A. De\ufb01nition 3.2 (p\u03f5, \u03b4, Nq-PAC). Let \u03b3 P r0, 1s be a discount factor, let P P PpX, Xq be a Markov kernel, let \u03bd P PpXq be an initial-state distribution, and let f P BpXq be a measurable function. An estimation algorithm A \u201c p\u03c1, p \u03b7q for the \u03b3-discounted mean \u03c0\u03b3f, is p\u03f5, \u03b4, NqPAC if with probability at least 1 \u00b4 \u03b4 it holds that: |p \u03b7pHN, fq \u00b4 \u03c0\u03b3f| \u0103 \u03f5, where HN \u201e P N \u03bd,\u03c1 is collected with the reset policy \u03c1. 
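A minimal rendition of the sampling process of Algorithm 1, together with the decomposition into trajectories of Equation (7), may help fix ideas. The sketch below assumes a finite state space and represents a reset policy as a callable returning the reset probability; the function names and the specific indexing conventions are ours, not the paper's.

```python
import numpy as np

def sample_with_reset(P, nu, reset_policy, N, rng=None):
    """Sketch of Algorithm 1 for a finite state space: Markov chain sampling with a
    reset policy. reset_policy is a callable (t, history, x) -> reset probability;
    the returned history H_N is a list of (X_t, Y_t) pairs."""
    rng = np.random.default_rng() if rng is None else rng
    states = np.arange(P.shape[0])
    history, x = [], rng.choice(states, p=nu)                  # X_0 ~ nu
    for t in range(N):
        y = int(rng.random() < reset_policy(t, history, x))    # Y_t ~ rho_t(.|H_t, X_t)
        history.append((x, y))
        x = rng.choice(states, p=nu) if y == 1 else rng.choice(states, p=P[x])
    return history

def split_into_trajectories(history):
    """Split H_N into the trajectories xi_1, ..., xi_M delimited by the resets (Eq. 7);
    the horizons T_i are simply the trajectory lengths."""
    trajectories, current = [], []
    for x, y in history:
        current.append(x)
        if y == 1:
            trajectories.append(current)
            current = []
    if current:
        trajectories.append(current)
    return trajectories

# Example reset policies used later in the paper (indexing conventions are ours):
fixed_horizon = lambda T: (lambda t, h, x: float((t % T) == T - 1))   # FHR, Section 5.1
adaptive_horizon = lambda gamma: (lambda t, h, x: 1.0 - gamma)        # AHR, Bernoulli(1 - gamma)

P = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]])
nu = np.array([1.0, 0.0, 0.0])
H = sample_with_reset(P, nu, adaptive_horizon(0.9), N=50, rng=np.random.default_rng(0))
print([len(xi) for xi in split_into_trajectories(H)])
```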
In the next sections, we \ufb01rst dive into the study of the intrinsic complexity of estimating \u03c0\u03b3f (Section 4) and, then, we present a handful of practical estimators along with their computational and statistical properties (Section 5). 4 MINIMAX LOWER BOUND FOR \u03b3\u2013DISCOUNTED MEAN ESTIMATION In this section, we prove the \ufb01rst minimax lower bound for the problem of \u03b3\u2013discounted mean estimation in MCs. We \ufb01rst state a lower bound for a general estimation algorithm A \u201c p\u03c1, p \u03b7q. Then, we report a brief sketch of the proof, which includes how to construct the hard instance, while a complete derivation can be found in Appendix A.1. Theorem 4.1 (Minimax Lower Bound). For every discount factor \u03b3 P r0, 1s, suf\ufb01ciently small con\ufb01dence \u03b4,3 number of interactions N P N, and p\u03f5, \u03b4, Nq-PAC estimation algorithm A \u201c p\u03c1, p \u03b7q, there exists a class of Markov kernels P P PpX, Xq with absolute spectral gap 1 \u00b4 \u03b2 P p0, 1s, initial-state distributions \u03bd P PpXq, measurable function f P BpXq such that with probability at least \u03b4 it holds that: |p \u03b7pHN, fq \u00b4 \u03c0\u03b3f| \u011b d \u03c32 \u03b3f \u00a8 log 1 2\u03b4 Np1 \u00b4 \u03b2\u03b3q , where HN \u201e P N \u03bd,\u03c1 is collected with the reset policy \u03c1. Proof Sketch. The proof is based on the MC construction: 3The explicit regime for \u03b4 is reported in the proof sketch. A B p ` \u03b2 1 \u00b4 p \u00b4 \u03b2 1 \u00b4 p p having two states X \u201c tA, Bu, kernel P parametrized via \u03b2 P r0, 1q and p P p0, 1 \u00b4 \u03b2q, initial state distribution \u03bd \u201c pq, 1 \u00b4 qq parametrized via q P p0, 1q. By computing the invariant measure \u03c0 for the kernel P, it is easy to verify that the MC has spectral gap 1 \u00b4 \u03b2 for every value of p. We consider a pair of functions f1, f\u00b41 de\ufb01ned as f1pBq \u201c f\u00b41pBq \u201c 0, f1pAq \u201c 1, f\u00b41pAq \u201c \u00b41. Crucially, any estimator cannot distinguish the two functions if the state A is never visited. With this intuition, we can lower bound the probability of making an error \u03f5 P r0, 1s through the probability of visiting A. For p and q such that \u03c0\u03b3f1 \u201c \u03f5 (consequently \u03c0\u03b3f\u00b41 \u201c \u00b4\u03f5), we can derive: sup P,\u03bd,f with spectral gap 1 \u00b4 \u03b2 P HN\u201eP N \u03bd,\u03c1 p|p \u03b7pHN, fq \u00b4 \u03c0\u03b3f| \u011b \u03f5q \u011b 1 2 P HN\u201eP N \u03bd,\u03c1 pp \u03b7pHN, f\u00b41q \u201c p \u03b7pHN, f1qq \u011b 1 2 mint1 \u00b4 q, 1 \u00b4 puN. Then, we optimize the values of p and q to make the bound tight, and with some algebraic manipulations we get: 1 2 mint1 \u00b4 q, 1 \u00b4 puN \u011b exp \u02c6 \u00b4\u03f52Np1 \u00b4 \u03b2\u03b3q \u03c32 \u03b3f \u02d9 , in the regime \u03f5 P \u201c 0, 1\u00b4\u03b2 1\u00b4\u03b2\u03b3 \u2030 . The statement follows by reformulating the lower bound in terms of deviation \u03f5 for a small enough \u03b4 P ` 0, 1 2 exp ` \u00b4 Np1\u00b4\u03b2q2 \u03c32 \u03b3fp1\u00b4\u03b2\u03b3q \u02d8\u02d8 . The presented minimax lower bound establishes an instance-dependent rate of order r \u2126p1{ a Np1 \u00b4 \u03b2\u03b3qq for the deviation in \u03b3\u2013discounted mean estimation, which is the \ufb01rst result that connects the statistical complexity of the problem with both the mixing property of the chain and the discount factor through the term p1 \u00b4 \u03b2\u03b3q. 
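The two-state hard instance used in the proof sketch can be written down explicitly. The following check (our own, with arbitrary values of $\beta$ and $p$) verifies that the second-largest eigenvalue modulus of the construction is $\beta$ regardless of $p$, so that this two-state (hence reversible) chain has absolute spectral gap $1-\beta$; the stationary distribution $\pi = (p, 1-p-\beta)/(1-\beta)$ is our own computation.

```python
import numpy as np

def hard_instance_kernel(beta, p):
    """Two-state kernel (states A=0, B=1) from the lower-bound construction:
    P(A|A) = p + beta, P(B|A) = 1 - p - beta, P(A|B) = p, P(B|B) = 1 - p,
    with beta in [0, 1) and p in (0, 1 - beta)."""
    return np.array([[p + beta, 1.0 - p - beta],
                     [p,        1.0 - p       ]])

beta, p = 0.6, 0.1
P = hard_instance_kernel(beta, p)

eigvals = np.sort(np.abs(np.linalg.eigvals(P)))
# The second-largest eigenvalue modulus is beta, independently of p, so (for this
# two-state, reversible chain) the absolute spectral gap is 1 - beta.
assert np.isclose(eigvals[0], beta)
assert np.isclose(eigvals[1], 1.0)

# Stationary distribution pi = (p, 1 - p - beta) / (1 - beta).
pi = np.array([p, 1.0 - p - beta]) / (1.0 - beta)
assert np.allclose(pi @ P, pi)
```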
Thus, we can appreciate the role of the spectral gap $1-\beta$ in governing the complexity of the estimation problem. It is worth noting that, when $\beta = 0$ and the chain mixes instantly, making all the collected samples independent, the result reduces to Hoeffding's rate (Boucheron et al., 2013) for independent random variables, $\widetilde{\Omega}(1/\sqrt{N})$. When $\gamma = 1$, it reduces to Hoeffding's rate for general MCs, $\widetilde{\Omega}(1/\sqrt{N(1-\beta)})$ (Fan et al., 2021). Notably, in the latter setting, the problem reduces to the estimation of the mean of a function under the stationary distribution $\pi$ of an MC. Finally, when $\beta = 1$ and the chain never mixes, the $\gamma$-discounted estimation problem is still well-defined for $\gamma < 1$. In Appendix A.1, we report an additional result showing that resetting the chain is indeed necessary in the latter no-mixing regime.

5 ANALYSIS OF $\gamma$-DISCOUNTED MEAN ESTIMATORS

In this section, we analyze four estimation algorithms $A = (\rho, \widehat{\eta})$ for the $\gamma$-discounted mean $\pi_\gamma f$ from computational and statistical perspectives. We derive suitable concentration inequalities, compare the estimators, and discuss their tightness w.r.t. the provided lower bound. We consider two classes of estimation algorithms, based on the nature of the reset policy: Fixed-Horizon Reset (FHR, Section 5.1) and Adaptive-Horizon Reset (AHR, Section 5.2). Both classes of estimators assume knowledge of the discount factor $\gamma$ but not of the absolute spectral gap $1-\beta$. Table 1 summarizes the properties of the presented estimators. The proofs of the results of this section are reported in Appendix A.2.

5.1 Fixed-Horizon Estimation Algorithms

The Fixed-Horizon (FH) estimation algorithms perform a reset action after having experienced a fixed number of transitions $T \in \mathbb{N}$, i.e., they generate trajectories with a fixed horizon. Thus, given $N$ transitions, the number of trajectories is $M = \lceil N/T \rceil$, with the last one possibly shorter, $T_M = N - (M-1)T$. The reset policy takes the form $\rho^{\mathrm{FHR}}_t(\cdot|H_t, X_t) = \delta_{\mathbb{1}\{t \bmod T = 0\}}$. For the sake of the analysis, we assume that $N \bmod T = 0$, so that all trajectories have the same horizon $T$.

5.1.1 Computational Properties
The FHR reset policy $\rho^{\mathrm{FHR}}$ is easily parallelizable, as the horizon $T$ is known in advance. Thus, we need $M = N/T$ workers, each collecting a trajectory of $T$ samples, which also corresponds to the time complexity $O(T)$.

5.1.2 Statistical Properties
In the class of FH estimation algorithms, we consider two estimators $\widehat{\eta}$, the Fixed-Horizon Non-corrected (FHN) and the Fixed-Horizon Corrected (FHC) estimators:
$$\widehat{\eta}_{\mathrm{FHN}}(H_N, f) = \frac{1}{M} \sum_{i=0}^{M-1} (1-\gamma) \sum_{j=0}^{T-1} \gamma^j f(X_{Ti+j}), \qquad (8)$$
$$\widehat{\eta}_{\mathrm{FHC}}(H_N, f) = \frac{1}{M} \sum_{i=0}^{M-1} \frac{1-\gamma}{1-\gamma^T} \sum_{j=0}^{T-1} \gamma^j f(X_{Ti+j}). \qquad (9)$$
Both estimators are based on a sample average over the $M = N/T$ collected trajectories. The samples of each trajectory are weighted by the discount factor $\gamma$ raised to a suitable power. The difference between the two estimators lies in the normalizing coefficient that multiplies the inner summation (see also the sketch below).
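A direct implementation of Equations (8) and (9) makes the role of the two normalizing coefficients concrete. The sketch below is our own code and assumes the history is given as a flat array of visited states collected with the fixed-horizon reset policy, with $N$ a multiple of $T$.

```python
import numpy as np

def fh_estimators(states, f, gamma, T):
    """Fixed-Horizon Non-corrected (FHN, Eq. 8) and Corrected (FHC, Eq. 9) estimators.

    states: array of N visited states collected with the fixed-horizon reset policy,
            with N assumed to be a multiple of T; f: callable or array over states."""
    states = np.asarray(states)
    N = len(states)
    assert N % T == 0, "for simplicity we assume N mod T = 0, as in the paper's analysis"
    M = N // T
    fx = f(states) if callable(f) else np.asarray(f)[states]
    returns = fx.reshape(M, T) @ (gamma ** np.arange(T))   # per-trajectory discounted sums
    eta_fhn = (1.0 - gamma) * returns.mean()
    eta_fhc = (1.0 - gamma) / (1.0 - gamma ** T) * returns.mean()
    return eta_fhn, eta_fhc

# Example with f encoded as a vector over 3 states.
f_values = np.array([1.0, -1.0, 2.0])
states = np.array([0, 1, 2, 0, 1, 2])       # two trajectories of horizon T = 3
print(fh_estimators(states, f_values, gamma=0.9, T=3))
```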
While in the FHN this constant disregards the fact that the summation is limited to the horizon T employing 1 \u00b4 \u03b3 as normalizing constant, the FHC accounts for this by selecting the proper constant 1\u00b4\u03b3T 1\u00b4\u03b3 \u201c \u0159T \u00b41 j\u201c0 \u03b3j. Nonetheless, as we shall see, it is not guaranteed that one estimator always outperforms the other in all regimes. These estimators are the most widely employed approaches for estimating \u03b3\u2013discounted means in RL (e.g., Deisenroth et al., 2013; Thomas, 2014; Metelli et al., 2020). Bias Analysis As they truncate each trajectory after T transitions, the FH estimators are affected by a bias, vanishing for large T, which is bounded as follows. Proposition 5.1 (FH Estimators \u2013 Bias). Let HN \u201eP N \u03bd,\u03c1 with the reset policy \u03c1t\u201c\u03b41tt mod T \u201c0u, and let f :X \u00d1 r0,1s. The bias of the FHN and FHC estimators are upper bounded as: Bias HN\u201eP N \u03bd,\u03c1 rp \u03b7FHNpHN,fqs\u010fbFHN \u03b3,T :\u201c\u03b3T , Bias HN\u201eP N \u03bd,\u03c1 rp \u03b7FHCpHN,fqs\u010fbFHC \u03b2,\u03b3,T :\u201cp1\u00b4\u03b3q\u03b3T min t0PJ0,T K s0PN # 2\u00b4\u03b3t0 \u00b4\u03b3s0 1\u00b4\u03b3 ` \u02c6p\u03b2\u03b3qt0p1\u00b4p\u03b2\u03b3qT \u00b4t0q p1\u00b4\u03b3T qp1\u00b4\u03b2\u03b3q ` p\u03b2\u03b3qs0\u03b2T 1\u00b4\u03b2\u03b3 \u02d9a \u03c72p\u03bd}\u03c0q\u03c32f + . Some observations are in order. First, both estimators are asymptotically unbiased as the horizon T \u00d1 `8. However, none of them is asymptotically unbiased as the budget N \u00d1 `8 (provided that T does not depend on N). Second, the bias of the FHN estimator does not depend on the spectral gap 1 \u00b4 \u03b2 of the underlying MC. This is expected, as the normalizing constant generates a scale inhomogeneity, regardless the mixing properties of the MC. Third, the bias of the FHC estimator, instead, depends on the absolute spectral gap 1 \u00b4 \u03b2 and on the divergence \u03c72p\u03bd}\u03c0q between the initial-state distribution \u03bd and the stationary distribution \u03c0. Thus, in the special case in which \u03c0 \u201c \u03bd a.s., the bias of the FHC estimator vanishes. Nevertheless, the dependence on \u03b2 is quite convoluted, and the bound requires an optimization over the auxiliary integer variables t0 and s0. Although for the general case the optimization is nontrivial, for the extreme cases \u03b2 P t0, 1u, we obtain more interpretable expressions that are reported in the following. Corollary 5.2 (FHC Estimator \u2013 Bias). Let HN \u201eP N \u03bd,\u03c1 with the reset policy \u03c1t\u201c\u03b41tt mod T \u201c0u, and let f :X \u00d1r0,1s. The bias of the FHC estimator is upper bounded as: \u2022 if \u03b2\u201c0: bFHC 0,\u03b3,T \u201c 1\u00b4\u03b3 1\u00b4\u03b3T \u03b3T min !a \u03c72p\u03bd}\u03c0q\u03c32f,1\u00b4\u03b3T ) ; (10) \u2022 if \u03b2\u201c1: bFHC 1,\u03b3,T \u201c2\u03b3T min !a \u03c72p\u03bd}\u03c0q\u03c32f,1 ) . (11) Thus, when \u03c72p\u03bd}\u03c0q\u03c32f \" 1 the FHC estimator suffers a smaller bias than the FHN one when \u03b2 \u201c 0, but, surprisingly larger by a factor 2, when \u03b2 \u201c 1. Indeed, when the chain is slowly mixing (\u03b2 \u00ab 1), both will deliver poor estimations and the FHN estimator mitigates this by using a smaller normalization constant. 
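To see the factor-of-two reversal numerically, one can evaluate the bias bounds of Corollary 5.2 against the FHN bound $b^{\mathrm{FHN}}_{\gamma,T} = \gamma^T$ for a given value of the product $\chi^2(\nu\|\pi)\sigma^2 f$. The snippet below only restates the formulas above; the chosen numerical values (and in particular the value of the chi-square term) are arbitrary assumptions of ours.

```python
import numpy as np

def fh_bias_bounds(gamma, T, chi2_sigma2):
    """Bias upper bounds from Proposition 5.1 / Corollary 5.2 (for f taking values in [0, 1]):
    FHN: gamma^T; FHC with beta = 0 (Eq. 10) and beta = 1 (Eq. 11)."""
    root = np.sqrt(chi2_sigma2)
    b_fhn = gamma ** T
    b_fhc_beta0 = (1 - gamma) / (1 - gamma ** T) * gamma ** T * min(root, 1 - gamma ** T)
    b_fhc_beta1 = 2 * gamma ** T * min(root, 1.0)
    return b_fhn, b_fhc_beta0, b_fhc_beta1

# With chi2 * sigma^2 >> 1 the min() terms saturate: FHC is smaller than FHN for beta = 0
# ((1 - gamma) vs 1 in front of gamma^T) but larger by a factor 2 for beta = 1.
print(fh_bias_bounds(gamma=0.99, T=100, chi2_sigma2=25.0))
```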
This result, which, to the best of our knowledge, has never appeared in the literature, justi\ufb01es the use of the non-corrected estimator, especially \fA Tale of Sampling and Estimation in Discounted Reinforcement Learning Computational properties Statistical properties Concentration rate :, ; Estimator # parallel workers Time complexity \u02da \u03b2\u201c0 \u03b2\u201c1 Minimax optimal\u00a7 FHN N T T 1 ? N \u00b6 1 a Np1\u00b4\u03b3q \u00b6 \u0017\u2225 (\u0013 for \u03b2Pt0,1u) FHC 1 ? N \u00b6 1 a Np1\u00b4\u03b3q \u00b6 \u0017\u2225 (\u0013 for \u03b2Pt0,1u) OS M where M \u00b41\u201eBinpN \u00b41,1\u00b4\u03b3q min \" N, logpN 2{\u03b4q 1\u00b4\u03b3 * ; 1 a Np1\u00b4\u03b3q \u0017 AS 1 a Np1\u00b4\u03b2\u03b3q \u0013 \u02da Big-O. : Bigr O. ; With probability at least 1\u00b4\u03b4. \u00a7 According to our analysis. \u00b6 Selecting T \u201cOplogN{logp1{\u03b3qq. \u2225At least for \u03b2Pp\u03b2,1q with \u03b2\u01031. Table 1: Summary of the computational and statistical properties of the considered estimators. when it is known that the underlying MC is slowly mixing. Concentration Inequalities Let us now move to the derivation of the concentration inequalities for the FH estimators. The technical challenge in this task consists in effectively exploiting the mixing properties of the underlying MC in order to derive tight concentration results. The following provides the general concentration result, which we particularize for speci\ufb01c values of \u03b2 later. Theorem 5.3 (FH Estimators \u2013 Concentration). Let HN \u201e P N \u03bd,\u03c1 with the reset policy \u03c1t \u201c\u03b41tt mod T \u201c0u, and let f : X \u00d1r0,1s. Let us de\ufb01ne for j0 PJ0,TK: c\u03b2,\u03b3pj0q:\u201c p\u03b2\u03b3qj0 \u00b4p\u03b2\u03b3qT 1\u00b4\u03b2\u03b3 a \u03c72p\u03bd}\u03c0q\u03c32f, d\u03b2,\u03b3pj0,\u03b4q:\u201c d 8T ` logp\u03c72p\u03bdP j0}\u03c0q`1q`4log 2 \u03b4 \u02d8 N \u02c6 d p1\u00b4\u03b3j0q2 p1\u00b4\u03b3q2 ` p1`\u03b2qp\u03b32j0 \u00b4\u03b32T q p1\u00b4\u03b2qp1\u00b4\u03b32q . For every \u03b4Pp0,1q with probability at least 1\u00b4\u03b4, it holds: |p \u03b7FHNpHN,fq\u00b4\u03c0\u03b3f|\u010fbFHN \u03b3,T `p1\u00b4\u03b3q min j0PJ0,T Ktc\u03b2,\u03b3pj0q`d\u03b2,\u03b3pj0,\u03b4qu, (12) |p \u03b7FHCpHN,fq\u00b4\u03c0\u03b3f|\u010fbFHC \u03b2,\u03b3,T ` 1\u00b4\u03b3 1\u00b4\u03b3T min j0PJ0,T Ktc\u03b2,\u03b3pj0q`d\u03b2,\u03b3pj0,\u03b4qu. (13) Similarly to the bias case, the resulting expression requires the optimization over a free variable j0 P J0, TK. Intuitively, j0 should be selected (for analysis purpose only) as a function of \u03b2. Indeed, for slowly mixing chains (\u03b2 \u00ab 1), we should select a small value of j0 and vice versa. The following corollary provides the order of concentration for the extreme cases \u03b2 P t0, 1u. Corollary 5.4 (FH Estimators \u2013 Concentration). Let HN \u201e P N \u03bd,\u03c1 with the reset policy \u03c1t\u201c\u03b41tt mod T \u201c0u, and let f : X \u00d1r0,1s. 
Then, for any \u03b4Pp0,1q, with probability at least 1\u00b4\u03b4, it holds that:4 \u2022 if \u03b2\u201c0: |p \u03b7FHNpHN,fq\u00b4\u03c0\u03b3f|\u010fO \u00a8 \u02dd\u03b3T ` d Tp1\u00b4\u03b3qp1\u00b4\u03b3T qlog 2 \u03b4 N \u02db \u201a, (14) |p \u03b7FHCpHN,fq\u00b4\u03c0\u03b3f|\u010fO \u00a8 \u02ddp1\u00b4\u03b3q\u03b3T ` d Tp1\u00b4\u03b3qlog 2 \u03b4 Np1\u00b4\u03b3T q \u02db \u201a; (15) \u2022 if \u03b2\u201c1: |p \u03b7FHNpHN,fq\u00b4\u03c0\u03b3f|\u010fO \u00a8 \u02dd\u03b3T `p1\u00b4\u03b3T q d T log 2 \u03b4 N \u02db \u201a, (16) |p \u03b7FHCpHN,fq\u00b4\u03c0\u03b3f|\u010fO \u00a8 \u02dd\u03b3T ` d T log 2 \u03b4 N \u02db \u201a. (17) We note that the FHC estimator outperforms (in the constants, but not in rate) the FHN when \u03b2 \u00ab 1, whereas when \u03b2 \u00ab 0, the FHN estimator enjoys better concentration. Remark 5.1 (About Minimax Optimality of the FH Estimators). A natural question, at this point, is whether the FH estimators match the minimax lower bound of Theorem 4.1. One could, in principle, select a value of the horizon T depending on the spectral gap 1 \u00b4 \u03b2 to tighten the con\ufb01dence bounds. Unfortunately, \u03b2 is usually unknown in practice. Realistically, one should enforce a value of T that depends on the discount factor \u03b3, and, if necessary, on the con\ufb01dence \u03b4, and the number of samples N. The FH estimators, according to our analysis, do not match the minimax lower bound for general \u03b2. When \u03b2 P t0, 1u, we show in Appendix A.2.3 that the optimal \u03b2-independent choice of T is T \u02da \u03b3 \u201c O pplog Nq{plogp1{\u03b3qqq, leading to the rate r Op1{ ? Nq for \u03b2 \u201c 0 and r Op1{ a Np1 \u00b4 \u03b3qq for 4For interpretability reasons, we ignore the dependence on \u03c72p\u03bd}\u03c0q\u03c32f. \fAlberto Maria Metelli, Mirco Mutti, Marcello Restelli \u03b2 \u201c 1, respectively.5 In such regimes, both FH estimators nearly match the minimax lower bound. Nevertheless, in Appendix A.2.4, we show that there exists a regime of large values of \u03b2, namely \u03b2 P p\u03b2, 1q with \u03b2 \u201c p1`\u03b3\u00b42\u03b3T q{p1` \u03b3 \u00b42\u03b3T `1q \u0103 1 for which the concentration rate is at least r \u2126p1{ a Np1 \u00b4 \u03b3qq regardless the value of \u03b2 (when 0.3 \u010f \u03b3 \u0103 1), not matching the lower bound. 5.2 Adaptive-Horizon Estimation Algorithms The Adaptive-Horizon (AH) estimation algorithms generate trajectories with possibly different horizons. At t P N, a Bernoulli random variable with parameter 1\u00b4\u03b3 is sampled, leading to the reset policy \u03c1AHR t pHt, Xtq \u201c Berp1 \u00b4 \u03b3q. Thus, the horizon T of each trajectory is a random variable too, where T \u00b41 \u201e Geop1\u00b4\u03b3q is a geometric distribution.6 5.2.1 Computational Properties In the AHR case, the parallel execution requires computing in advance the horizons pTiqiPJMK of each trajectory until we ran out of the sample budget N and, subsequently, run in parallel the sample collection of each trajectory. From a technical perspective, characterizing the distribution of the individual Ti is challenging. Indeed, since we need to stop as soon as we reach the budget N, the random variables Ti become dependent. The following result characterizes the distribution of M and the time complexity. Theorem 5.5 (AH Estimators \u2013 Complexity). Let HN \u201e P N \u03bd,\u03c1 with the reset policy \u03c1AHR t pHt, Xtq \u201c Berp1 \u00b4 \u03b3q. 
Then, the number of trajectories M is distributed such that M \u00b4 1 \u201e BinpN \u00b4 1, 1 \u00b4 \u03b3q. Furthermore, for every \u03b4 P p0, 1q, with probability at least 1 \u00b4 \u03b4, the time complexity is bounded as: max iPJMK Ti \u010f O \u02dc min # N, log ` N 2{\u03b4 \u02d8 1 \u00b4 \u03b3 +\u00b8 . Thus, the time complexity is a minimum between N, as no trajectory can be longer than the maximum number of transitions, and a term that grows with \u03b3, since for large \u03b3 the trajectories will have, on average, longer lengths. 5.2.2 Statistical Properties In the family of AH estimators, we analyze the concentration properties of two speci\ufb01c estimation algorithms: OneSample (OS) and All-Samples (AS) estimators. One-Sample Estimator The idea behind the OS estimator is to regard the \u03b3-discounted distribution \u03c0\u03b3 \u201c \u0159 tPNp1 \u00b4 5Any choice of T independent of N (including the widely employed \u201ceffective horizon\u201d T \u201c 1{p1 \u00b4 \u03b3q) will never lead to a consistent estimator, since the bias will not vanish as N \u00d1 `8. 6In a different perspective, one may \ufb01rst sample T \u00b4 1 \u201e Geop1 \u00b4 \u03b3q and then simulate a trajectory of horizon T. \u03b3q\u03b3t\u03bdP t as the mixture of the distributions \u03bdP t with coef\ufb01cients p1 \u00b4 \u03b3q\u03b3t. The OS estimator offers a way of generating independent samples from \u03c0\u03b3, by retaining the ones right before resetting is performed, i.e., when Yt \u201c 1: p \u03b7OSpHN, fq \u201c 1 M \u00b4 1 N\u00b41 \u00ff t\u201c0 YtfpXtq. (18) This estimator has been used in (Thomas, 2014; Metelli et al., 2021), mostly for theoretical reasons, being unbiased. The following result provides the concentration. Theorem 5.6 (OS Estimator \u2013 Concentration). Let HN \u201e P N \u03bd,\u03c1 with the reset policy \u03c1AHR t pHt, Xtq \u201c Berp1\u00b4\u03b3q, and let f : X \u00d1 r0, 1s. For every \u03b4 P p0, 1q, with probability at least 1 \u00b4 \u03b4, it holds that: |p \u03b7OSpHN, fq \u00b4 \u03c0\u03b3f| \u010f d 2 log 8 \u03b4 Np1 \u00b4 \u03b3q. The concentration term is governed by an \u201ceffective number of samples\u201d that is Np1 \u00b4 \u03b3q. Indeed, the probability of retaining each of the N transitions is 1 \u00b4 \u03b3. It is worth noting that the concentration bound, as expected, does not depend on the absolute spectral gap 1 \u00b4 \u03b2, since just one sample per trajectory is considered and, consequently, the estimators guarantees vanish as \u03b3 \u00d1 1. Thus, this estimator is not minimax optimal, according to our analysis. All-Samples Estimator The AS estimator, instead, makes use of all the samples collected from the simulation. Clearly, this choice introduces a new trade-off since we have at our disposal a larger number of samples for estimation, but, unfortunately, within a single trajectory such samples are statistically dependent. Nevertheless, such a dependence is controlled by the mixing properties of the MC. The AS estimator takes the following form: p \u03b7ASpHN, fq \u201c 1 N N\u00b41 \u00ff t\u201c0 fpXtq. (19) This estimator has been employed in (Konda, 2002; Xu et al., 2020; Eldowa et al., 2022). The following result provides a concentration inequality for the AS estimator that highlights the dependence on the mixing properties. Theorem 5.7 (AS Estimator \u2013 Concentration). Let HN \u201e P N \u03bd,\u03c1 with the reset policy \u03c1AHR t pHt, Xtq \u201c Berp1\u00b4\u03b3q, and let f : X \u00d1 r0, 1s. 
For every $\delta \in (0, 1)$, with probability at least $1 - \delta$, it holds that:
$$|\widehat{\eta}_{\mathrm{AS}}(H_N, f) - \pi_\gamma f| \le \sqrt{\frac{8 \log \frac{2}{\delta} + 4 \log(\chi^2(\nu \| \pi_\gamma) + 1)}{N(1 - \beta\gamma)}}.$$
We note the dependence on the spectral gap through the term $1 - \beta\gamma$. Contrary to the OS estimator, the bound holds even for $\gamma \to 1$. We also observe a logarithmic dependence on the term $\chi^2(\nu \| \pi_\gamma)$ due to the small bias introduced by the sampling procedure. Most importantly, the AS estimator nearly matches the minimax lower bound of Theorem 4.1.

[Figure 2: $\gamma$-discounted mean estimation of the function $f(x) = (1, -1, 2)$ over the illustrative MC depicted in panel (a). For each combination of the parameter $\alpha$ and discount $\gamma$, we report the estimation error of the OS, AS, FHC, and FHN estimators (panels c-g). For FHC and FHN, we provide a finer analysis of the impact of $T$ (panel b). Panel settings: (b) $\alpha=0.99$, $\beta=0.99$, $\gamma=0.99$; (c) $\alpha=0.005$, $\beta=0.99$, $\gamma=0.99$; (d) $\alpha=0.5$, $\beta=0.5$, $\gamma=0.9$; (e) $\alpha=0.5$, $\beta=0.5$, $\gamma=0.99$; (f) $\alpha=0.99$, $\beta=0.99$, $\gamma=0.9$; (g) $\alpha=0.99$, $\beta=0.99$, $\gamma=0.99$. We report averages and 95% confidence intervals over 20 runs.]

6 NUMERICAL VALIDATION

In this section, we compare the $\gamma$-discounted mean estimators presented in Section 5 through numerical simulations, to both support and complement the analysis of their statistical properties. To this end, we consider an illustrative family of MCs parametrized by $\alpha$ (Figure 2a). The parameter $\alpha$ controls the mixing properties of the chain: setting $\alpha$ close to either 0 or 1 yields a slow-mixing chain (large $\beta$), in which every state is nearly transient or absorbing, respectively, while $\alpha$ close to $1/2$ yields a fast-mixing chain (small $\beta$). In this setting, we consider the problem of estimating the mean of the function $f(x) = (1, -1, 2)$ in different discounting regimes, namely $\gamma \in \{0.9, 0.99\}$, where the performance of each estimator $\widehat{\eta}$ is measured in terms of the corresponding estimation error $|\widehat{\eta} - \pi_\gamma f|$. In Figure 2, we report the results of the numerical analysis. As a further testament to its compelling statistical properties, the AS estimator dominates the alternatives, achieving the smallest estimation error in every mixing-discounting regime. Interestingly, the unbiased OS estimator fails to quickly converge to the true mean with high discounting (Figures 2c, 2e, 2g), which is likely caused by its inherently large variance. The finite-horizon estimators FHC and FHN show a significant bias instead, despite an overall stable behavior. Although the corrected estimator FHC outperforms (as expected) the non-corrected FHN in most of the regimes (see Figures 2c-2f), the correction can skyrocket the bias in some unfortunate settings (see Figure 2g).
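As an aside to the comparison above, the two adaptive-horizon estimators of Equations (18) and (19) used in these experiments admit one-line implementations. The sketch below is our own and assumes the same (state, reset) history format as in the earlier sampling sketch.

```python
import numpy as np

def os_estimator(history, f):
    """One-Sample estimator (Eq. 18): average f only over the states at which a reset
    is decided (Y_t = 1); up to the truncation at the budget N, these samples are
    independent and distributed according to pi_gamma."""
    kept = [f[x] for x, y in history if y == 1]
    return np.mean(kept) if kept else np.nan

def as_estimator(history, f):
    """All-Samples estimator (Eq. 19): average f over all collected states."""
    return np.mean([f[x] for x, _ in history])

# f given as a vector over states, as in the numerical validation of Section 6.
f = np.array([1.0, -1.0, 2.0])
history = [(0, 0), (1, 0), (2, 1), (0, 1), (1, 0), (2, 0)]
print(os_estimator(history, f), as_estimator(history, f))
```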
This is particularly underwhelming as we cannot trust FHC as a default option even when committed to a \ufb01nite-horizon estimation algorithm. Finally, in Figure 2b we provide a \ufb01ner analysis of the \ufb01nite-horizon estimators for different horizons T. Unsurprisingly, in a slow-mixing regime (note \u03b2 \u201c 0.99), increasing T bene\ufb01ts the overall quality of the estimates for both FHC and FHN, as the bias is visually reduced at the cost of a slightly increased instability. 7 CONCLUSIONS In this paper, we have studied the problem of estimating the mean of a function under the \u03b3\u2013discounted stationary distribution of an MC. We have formulated this problem with a general and \ufb02exible framework and then analyzed its intrinsic complexity through a minimax lower bound. Finally, we have considered two classes of estimation algorithms, for which we provided a study of computation and statistical properties, as well as a numerical validation. The aim of this paper is far from being a theoretical detour, as we believe that our contribution has signi\ufb01cant practical implications in discounted RL. Especially, the all-samples estimator resulted in the best statistical pro\ufb01le among the considered alternatives, while still supporting parallel sampling. This signals an avenue to develop improved \u201cdeep reinforcement learning\u201d (Franc \u00b8ois-Lavet et al., 2018) algorithms based on sampling from the discounted kernel. Other interesting future directions include the study of the \u03b3-discounted mean estimation for inhomogeneous functions, which is akin to the practical implementations of Q-learning (Watkins and Dayan, 1992), the estimation of other functionals beyond the expectation (Chandak et al., 2021), and extending our analysis to generalized notions of discount (Yoshida et al., 2013; Franc \u00b8ois-Lavet et al., 2015; Pitis, 2019; Fedus et al., 2019; Tang et al., 2021). Finally, our results can be of independent interest in the MC literature while bridging fundamental problems in discounted RL and concentration inequalities for MCs (Samson, 2000; Glynn and Ormoneit, 2002; Le\u00b4 on and Perron, 2004; Kontorovich and Ramanan, 2008; Paulin, 2015). \fAlberto Maria Metelli, Mirco Mutti, Marcello Restelli" + }, + { + "url": "http://arxiv.org/abs/2212.03798v1", + "title": "Stochastic Rising Bandits", + "abstract": "This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e.,\nthose sequential selection techniques able to learn online using only the\nfeedback given by the chosen option (a.k.a. arm). We study a particular case of\nthe rested and restless bandits in which the arms' expected payoff is\nmonotonically non-decreasing. This characteristic allows designing specifically\ncrafted algorithms that exploit the regularity of the payoffs to provide tight\nregret bounds. We design an algorithm for the rested case (R-ed-UCB) and one\nfor the restless case (R-less-UCB), providing a regret bound depending on the\nproperties of the instance and, under certain circumstances, of\n$\\widetilde{\\mathcal{O}}(T^{\\frac{2}{3}})$. We empirically compare our\nalgorithms with state-of-the-art methods for non-stationary MABs over several\nsynthetically generated tasks and an online model selection problem for a\nreal-world dataset. 
Finally, using synthetic and real-world data, we illustrate\nthe effectiveness of the proposed approaches compared with state-of-the-art\nalgorithms for the non-stationary bandits.", + "authors": "Alberto Maria Metelli, Francesco Trov\u00f2, Matteo Pirola, Marcello Restelli", + "published": "2022-12-07", + "updated": "2022-12-07", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The classical stochastic MAB framework (Lattimore & Szepesv\u00b4 ari, 2020) has been successfully applied to a number of applications, such as advertising, recommendation, and networking. MABs model the scenario in which a learner sequentially selects (a.k.a. pulls) an option (a.k.a. arm) in a \ufb01nite set, and receives a feedback (a.k.a. reward) corresponding to the chosen option. The goal of online learning algorithms is to guarantee the no-regret property, meaning that the loss due to not knowing the best arm is increasing sublinearly with the number of pulls. One of the assumptions that allows designing no-regret algorithms consists in requiring that the payoff (a.k.a. expected reward) provided 1Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, Milan, Italy. Correspondence to: Alberto Maria Metelli . Proceedings of the 39 th International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s). by the available options is stationary, i.e., rewards come from a \ufb01xed distribution. However, the arms\u2019 payoff may change over time due to intrinsic modi\ufb01cations of the arms or the environment. A no-regret approach is offered by the adversarial algorithms, in which no assumption on the nature of the reward is required. It has been shown that, in this setting, it is possible to design effective algorithms, e.g., EXP3 (Auer et al., 1995). However, in practice, their performance is unsatisfactory because the non-stationarity of real-world cases is far from being adversarial. Instead, non-stationarity is explicitly accounted for by a surge of methods that consider either abrupt changes (e.g., Garivier & Moulines, 2011), smoothly changing environments (e.g., Trov` o et al., 2020) or bounded reward variation (e.g., Besbes et al., 2014). While in non-stationary MABs the arms\u2019 payoff changes naturally over time, a different setting arises when the payoff changes as an effect of pulling the arm. This is the case of rotting bandits (Levine et al., 2017; Seznec et al., 2019), in which the payoff of the arms are monotonically nonincreasing over the pulls, modeling degradation phenomena. Knowing the monotonicity property allows deriving more specialized algorithms, exploiting the process characteristics and further decreasing the regret w.r.t. unrestricted cases. Notably, the symmetric problem of monotonically non-decreasing payoffs cannot be addressed with the same approaches. Indeed, it was shown that it represents a signi\ufb01cantly more complex problem, even for deterministic arms (Heidari et al., 2016). In this non-decreasing setting, a common assumption is the concavity of the payoff function that de\ufb01nes the rising bandits setting (Li et al., 2020). The goal of this paper is to study the stochastic MAB problem when the arms\u2019 payoff is monotonically non-decreasing. This setting arises in several real-world sequential selection problems. For instance, suppose we have to choose among a set of optimization algorithms to maximize an unknown stochastic concave function. 
In this setting, we expect that all the algorithms progressively increase (on average) the function value and eventually converge to an optimal value, possibly with different speeds. Therefore, we wonder which candidate algorithm to assign the available resources (e.g., computational power or samples) to identify the one that converges faster to the optimum. This online model selecarXiv:2212.03798v1 [cs.LG] 7 Dec 2022 \fStochastic Rising Bandits tion process can be modeled as a rested MAB (Tekin & Liu, 2012), like the rotting bandits (Levine et al., 2017), but with non-decreasing payoffs. Indeed, each optimization algorithm (arms) and the function value does not evolve if we do not select (pull) it. Another example that shows a non-decreasing expected reward is the selection of athletes for competitions. Athletes train in parallel and increase (on average) their performance. However, if participation in competitions is allowed to one athlete only, the trainer should select the one who has achieved the best performance so far. This problem is akin to the restless case (Tekin & Liu, 2012), like non-stationary bandits (Besbes et al., 2014), but with the additional assumption that payoffs are nondecreasing. Indeed, the athletes (arms) are evolving even if they are not selected (pulled) to compete. Original Contribution In this paper, we study the stochastic rising bandits, i.e., stochastic bandits in which the payoffs are monotonically non-decreasing and concave, in both restless and rested formulations. More speci\ufb01cally: \u2022 we show that the rested bandit with non-decreasing payoffs is non-learnable, i.e., the loss due to learning is linear with the number of pulls, unless additional assumptions on the payoff functions are enforced (e.g., concavity); \u2022 we design R-ed-UCB and R-less-UCB, optimistic algorithms for the rising rested and restless bandits; \u2022 we show that R-ed-UCB and R-less-UCB suffer an expected regret that depends on the payoff function pro\ufb01le and, under some conditions, of order r OpT 2 3 q;1 \u2022 we illustrate, using synthetic and real-world data, the effectiveness of our approaches, compared with state-ofthe-art algorithms for the non-stationary (restless) bandits. 2. Related Works Restless and Rested Bandits The rested and restless bandit settings have been introduced by Tekin & Liu (2012) and further developed by (Ortner et al., 2012; Russac et al., 2019) in the restless version and by (Mintz et al., 2020; Pike-Burke & Grunewalder, 2019) in the rested one. Originally the evolution of the payoff was modeled via a suitable process, e.g., a Markov chain with \ufb01nite state space or a linear regression process. For instance, Wang et al. (2020) proposes an optimistic approach based on the estimation of the transition kernel of the underlying chain. More recently, the terms rested and restless have been employed to denote arms whose payoff changes as time passes, for restless ones, or whenever being pulled, for rested ones (Seznec et al., 2019; 2020). That is the setting we target in this work. Non-Stationary Bandits The restless bandits, without a \ufb01xed temporal reward evolution, are usually addressed via non-stationary MAB approaches, that include both pas1With r Op\u00a8q we disregard logarithmic terms in the order. sive (e.g., Garivier & Moulines, 2011; Besbes et al., 2014; Auer et al., 2019; Trov` o et al., 2020) and active (e.g., Liu et al., 2018; Besson et al., 2019; Cao et al., 2019) methods. 
The former algorithms base their selection criterion on the most recent feedbacks, while the latter actively try to detect if a change in the arms\u2019 rewards occurred and use only data gathered after the last change. Garivier & Moulines (2011) employ a discounted reward approach (D-UCB) or an adaptive sliding window (SW-UCB), proving a r Op ? Tq regret when the number of abrupt changes is known. Similar results have been obtained by (Auer et al., 2019) without knowing the number of changes, at the price of resorting to the doubling trick. (Besbes et al., 2014) provides an algorithm, namely RExp3, a modi\ufb01cation EXP3, originally designed for adversarial MABs, to give a regret bound of OpT 2 3 q under the assumption that the total variation VT of the arms\u2019 expected reward is known. The knowledge of VT has been removed by Chen et al. (2019) using the doubling trick. In Trov` o et al. (2020), an approach in which the combined use of a sliding window on a Thompson Sampling-like algorithm provides theoretical guarantees both on abruptly and smoothly changing environments. Nonetheless, in our setting, their result might lead to linear regret for speci\ufb01c instances. Notably, none of the above explicitly use assumptions on the monotonicity of the payoff over time. Rising Bandits The rising bandit problem has been tackled in its deterministic version by (Heidari et al., 2016; Li et al., 2020). In Heidari et al. (2016), the authors design an online algorithm to minimize the regret of selecting an increasing and concave function among a \ufb01nite set. This study assumes that the learner receives feedback about the true value of the reward function, i.e., no stochasticity is present. In Li et al. (2020), the authors model the problem of parameter optimization for machine learning models as a rising bandit setting. They propose an online algorithm having good empirical performance, still in the case of deterministic rewards. A case where the reward is increasing in expectation (or equivalently decreasing in loss), but no longer deterministic, is provided by Cella et al. (2021). However, the payoff follows a given parametric form known to the learner, who estimates such parameters in the best-arm identi\ufb01cation and regret-minimization frameworks. The need for knowing the parametric form of the payoff makes these approaches hardly applicable for arbitrary increasing functions. Corralling Bandits It is also worth mentioning the corralling bandits (Agarwal et al., 2017; Pacchiano et al., 2020b; Abbasi-Yadkori et al., 2020; Pacchiano et al., 2020a; Arora et al., 2021), a setting in which the goal is to minimize the regret of a process choosing among a \ufb01nite set of bandit algorithms. This setting, close to online model selection, is characterized by particular assumptions. Indeed, each arm corresponds to a learning algorithm, operating on a bandit, endowed with a (possibly known) regret bound, sometimes \fStochastic Rising Bandits requiring additional conditions (e.g., stability). 3. Problem Setting A K-armed MAB (Lattimore & Szepesv\u00b4 ari, 2020) is de\ufb01ned as a vector of probability distributions \u03bd\u201cp\u03bdiqiPrKs, where \u03bdi:N2\u00d1\u2206pRq depends on a pair of parameters pt,nqPN2 for every iPrKs, where rKs\u2013t1,...,Ku. 
Let T PN be the optimization horizon, at each round tPrTs, the agent selects an arm ItPrKs and observes a reward Rt\u201e\u03bdItpt,NIt,tq, where Ni,t\u201c\u0159t l\u201c11tIl\u201ciu is the number of times arm iP rKs was pulled up to round t. Thus, the reward depends, in general, on the current round t and on the number of pulls NIt,t\u201cNIt,t\u00b41`1 of arm It up to t. For every arm iPrKs, we de\ufb01ne its payoff \u00b5i:N2\u00d1R as the expectation of the reward, i.e., \u00b5ipt,nq\u201cER\u201e\u03bdipt,nqrRs and denote the vector of payoffs as \u00b5\u201cp\u00b5iqiPrKs. We assume that the payoffs are bounded in r0,1s, and that the rewards are \u03c32-subgaussian, i.e., ER\u201e\u03bdipt,nqre\u03bbpR\u00b4\u00b5ipt,nqqs\u010fe \u03c3\u03bb2 2 , for every \u03bbPR. Rested and Restless Arms We revise the de\ufb01nition of rested and restless arms (Tekin & Liu, 2012).2 De\ufb01nition 3.1 (Rested and Restless Arms). Let \u03bd be a MAB and let iPrKs be an arm, we say that: \u2022 i is a rested arm if, for every round tPrTs and number of pulls nPN, we have \u00b5ipt,nq\u201c\u00b5ipnq; \u2022 i is a restless arm if, for every round tPrTs and number of pulls nPN, we have \u00b5ipt,nq\u201c\u00b5iptq. A K-armed bandit is rested (resp. restless) if all of its arms are rested (resp. restless). Thus, the payoff of a rested arm changes when being pulled and, therefore, it models phenomena that evolve as a consequence of the agent intervention. Instead, a restless arm is in all regards a non-stationary arm (Besbes et al., 2014), and it is suitable for modeling a natural phenomenon that evolves for time passing, independently of the agent intervention. Rising Bandits We revise the rising bandits notion, i.e., MABs with payoffs non-decreasing and concave as a function of pt,nq (Heidari et al., 2016).3 Assumption 3.1 (Non-Decreasing Payoff). Let \u03bd be a MAB, for every arm iPrKs, number of pulls nPN, and round tPrTs, functions \u00b5ip\u00a8,nq and \u00b5ipt,\u00a8q are non-decreasing. In particular, we de\ufb01ne the increments: 2We refer to the de\ufb01nition of (Levine et al., 2017; Seznec et al., 2020) and not to the one of (Tekin & Liu, 2012) that assumes an underlying Markov chain governing the arms\u2019 distributions. 3Deterministic bandits with non-decreasing payoffs were introduced in (Heidari et al., 2016) with the term improving. In (Li et al., 2020), the term rising was used to denote the improving bandits with concave payoffs (concavity was already employed by Heidari et al. (2016)). Rested arm: \u03b3ipnq:\u201c\u00b5ipn`1q\u00b4\u00b5ipnq\u011b0; Restless arm: \u03b3iptq:\u201c\u00b5ipt`1q\u00b4\u00b5iptq\u011b0. From an economic perspective, \u03b3ip\u00a8q represents the increase of total return (or payoff) we obtain by adding a factor of production, i.e., pulling the arm (rested) or letting time evolve for a unit (restless). In the next sections, we analyze how the following assumption de\ufb01nes a remarkable class of bandits with non-decreasing payoffs (Heidari et al., 2016). Assumption 3.2 (Concave Payoff). Let \u03bd be a MAB, for every arm iPrKs, number of pulls nPN, and round tPrTs, functions \u00b5ip\u00a8,nq and \u00b5ipt,\u00a8q are concave, i.e.: Rested arm: \u03b3ipn`1q\u00b4\u03b3ipnq\u010f0; Restless arm: \u03b3ipt`1q\u00b4\u03b3iptq\u010f0. As pointed out by Heidari et al. 
(2016), the concavity assumption corresponds, in economics, to the decrease of marginal returns that emerges when adding a factor of production, i.e., pulling the arm (rested) or letting time evolve for one unit (restless). Formally, we de\ufb01ne rising a stochastic MAB in which both Assumption 3.1 and Assumption 3.2 hold. Learning Problem Let tPrTs be a round, we denote with Ht\u201cpIl,Rlqt l\u201c1 the history of observations up to t. A (nonstationary) deterministic policy is a function \u03c0:Ht\u00b41\u00de\u00d1It mapping a history to an arm, that is abbreviated as \u03c0ptq:\u201c \u03c0pHt\u00b41q. The performance of a policy \u03c0 in a MAB with payoffs \u00b5 is the expected cumulative reward collected over the T rounds, formally: J\u00b5p\u03c0,Tq\u2013E \u201e \u00ff tPrT s \u00b5It pt,NIt,tq \uf6be , and the expectation is computed over the histories. A policy \u03c0\u02da \u00b5,T is optimal if it maximizes the expected cumulative reward: \u03c0\u02da \u00b5,T Pargmax\u03c0tJ\u00b5p\u03c0,Tqu. Denoting with J\u02da \u00b5pTq\u2013J\u00b5p\u03c0\u02da \u00b5,T ,Tq the expected cumulative reward of an optimal policy, the suboptimal policies \u03c0 are evaluated via the expected cumulative regret: R\u00b5p\u03c0,Tq\u2013J\u02da \u00b5pTq\u00b4J\u00b5p\u03c0,Tq. (1) Problem Characterization To characterize the problem instance, we introduce the following quantity, namely the cumulative increment, de\ufb01ned for every M PrTs and qP r0,1s as:4 \u03a5\u00b5pM,qq\u2013 M\u00b41 \u00ff l\u201c1 max iPrKst\u03b3iplqqu. (2) 4The de\ufb01nition of cumulative increment was incorrect in the conference version of the paper (Metelli et al., 2022). \fStochastic Rising Bandits Table 1. O rates of \u03a5\u00b5pM,qq in the case \u03b3iplq\u010ffplq for all iPrKs and lPN (see also Lemma C.6). fplq e\u00b4cl l\u00b4c (cq\u01051) l\u00b4c (cq\u201c1) l\u00b4c (cq\u010f1) \u03a5\u00b5pM,qq e\u00b4cq cq 1 cq\u00b41 logM M 1\u00b4cq 1\u00b4cq The cumulative increment accounts for how fast the payoffs reach their asymptotic value, i.e., become stationary. Intuitively, small values of \u03a5\u00b5pM,qq lead to simpler problems, as they are closer to stationary bandits. Table 1 reports some bounds on \u03a5\u00b5pM,qq for particular choices of \u03b3iplq and q. When q\u201c1, the cumulative increment \u03a5\u00b5pT,1q corresponds to the total variation VT \u2013\u0159T \u00b41 l\u201c1 maxiPrKst\u03b3iplqu (Besbes et al., 2014). In the next sections, we devise and analyze learning algorithms for rested (Section 4) and restless (Section 5) rising bandits. We will present optimistic algorithms, whose structure is summarized in Algorithm 1 and parametrized by an exploration index Biptq that will be designed case by case. 4. Stochastic Rising Rested Bandits In this section, we consider the Rising rested bandits (R-ed) setting in which the arms\u2019 expected payoff increases only when it is pulled, i.e., \u00b5ipt,Ni,tq\u201d\u00b5ipNi,tq.5 Oracle Policy We recall that the oracle constant policy, that always plays at each round tPrTs the arm that maximizes the sum of the payoffs over the horizon T, is optimal for the non-decreasing rested bandits. Theorem 4.1 (Heidari et al., 2016). Let \u03c0c \u00b5,T be the oracle constant policy: \u03c0c \u00b5,T ptqPargmax iPrKs # \u00ff lPrT s \u00b5iplq + , @tPrTs. Then, \u03c0c \u00b5,T is optimal for the rested non-decreasing bandits (i.e., under Assumption 3.1). 
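As a toy illustration of the rising payoff model, of the cumulative increment of Equation (2), and of the oracle constant policy just stated, one may write the following sketch; the exponential-saturation payoff curves are an arbitrary choice of ours, not taken from the paper.

```python
import numpy as np

def mu(i, n, c=(0.8, 1.0), tau=(20.0, 60.0)):
    """Toy rested payoffs mu_i(n) = c_i * (1 - exp(-n / tau_i)): non-decreasing and
    concave in the number of pulls n, so Assumptions 3.1 and 3.2 hold.
    (Illustrative curves; any payoff with these properties would do.)"""
    return c[i] * (1.0 - np.exp(-np.asarray(n, dtype=float) / tau[i]))

K, T = 2, 500
pulls = np.arange(1, T + 1)
payoffs = np.array([mu(i, pulls) for i in range(K)])          # shape (K, T)

# Oracle constant policy (Theorem 4.1): always play the arm maximizing sum_{l in [T]} mu_i(l).
oracle_arm = int(np.argmax(payoffs.sum(axis=1)))

# Cumulative increment (Eq. 2): Upsilon_mu(M, q) = sum_{l=1}^{M-1} max_i gamma_i(l)^q,
# with gamma_i(l) = mu_i(l+1) - mu_i(l).
increments = np.diff(payoffs, axis=1)
upsilon = lambda M, q: np.sum(np.max(increments[:, :M - 1] ** q, axis=0))

print(oracle_arm, upsilon(T, 0.5), upsilon(T, 1.0))
```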
The result holds under the non-decreasing property (Assumption 3.1) only, without requiring concavity (Assumption 3.2). However, this policy cannot be used in practice as it requires knowing the full function \u00b5ip\u00a8q in advance. 4.1. Non-Learnability We now prove a result highlighting the \u201chardness\u201d of the non-decreasing rested bandits. We show that with no as5We are employing the original de\ufb01nition of rested arms of (Levine et al., 2017) in which \u00b5ipnq is the payoff of arm i when it is pulled for the n-th time. Algorithm 1 R-l-UCB (lPtless,edu ) Input: K, pBiqiPrKs Initialize Ni\u00d00 for all iPrKs for tPp1,...,Tq do Pull ItPargmaxiPrKstBiptqu Observe Rt\u201e\u03bdItpt,NIt `1q Update BIt and NIt \u00d0NIt `1 end for sumptions on the payoff \u00b5ipnq (e.g., concavity), it is impossible to devise a no-regret algorithm. Theorem 4.2 (Non-Learnability). There exists a 2-armed non-decreasing (non-concave) deterministic rested bandit with \u03b3ipnq\u010f\u03b3max\u010f1 for all iPrKs and nPN, such that any learning policy \u03c0 suffers regret: R\u00b5p\u03c0,Tq\u011b Y\u03b3max 12 T ] . The intuition behind this result is that, if we enforce no condition on the increment \u03b3ipnq we cannot predict how much the arm payoff will increase in the future. Therefore, we face the dilemma of whether or not to pull an arm that is currently believed to be suboptimal, hoping its payoff will increase. If we decide to pull it and its payoff will not actually increase, or if we decide not to pull it and its payoff will actually increase, becoming optimal, we will suffer linear regret. Thus, Theorem 4.2 highlights the importance of the concavity assumption (Assumption 3.2), providing an answer to an open question posed in (Heidari et al., 2016). Remark 4.1 (About the Concavity Assumption). While without additional structure, e.g., concavity, the nondecreasing rested bandits are non-learnable (Theorem 4.2), the assumption is not necessary in other related settings. In particular, non-decreasing restless bandits are in all regard non-stationary bandits, for which no-regret algorithms exist under different assumptions about the number of change points (Garivier & Moulines, 2011) or a bounded total variation (Besbes et al., 2014). Furthermore, for non-increasing rested (rotting) bandits (Levine et al., 2017), a bounded payoff decrement between consecutive pulls is suf\ufb01cient to devise a no-regret algorithm. 4.2. Deterministic Setting To progressively introduce the core ideas, we begin with the case of deterministic arms (\u03c3\u201c0). We devise an optimistic estimator of \u00b5iptq, namely \u00b5R-ed i ptq, having observed the exact payoffs p\u00b5ipnqqNi,t\u00b41 n\u201c1 . Differently from the rotting setting, these payoffs are an underestimation of \u00b5iptq. Therefore, we exploit the non-decreasing assumption (As\fStochastic Rising Bandits Ni,t\u00b41\u00b41 Ni,t\u00b41 t 0.5 1 t\u00b4Ni,t\u00b41 \u03b3ipNi,t\u00b41\u00b41q \u00b5ipNi,t\u00b41q \u00b5R-ed i ptq \u00b5iptq n \u00b5ipnq Figure 1. Graphical representation of the estimator construction \u00b5R-ed i ptq for the rested deterministic setting. sumption 3.1) to derive the identity: \u00b5iptq\u201c \u00b5ipNi,t\u00b41q loooomoooon (most recent payoff) ` t\u00b41 \u00ff n\u201cNi,t\u00b41 \u03b3ipnq. 
looooooomooooooon (sum of future increments) (3) By exploiting the concavity (Assumption 3.2), we upper bound the sum of future increments with the last experienced increment \u03b3ipNi,t\u00b41\u00b41q that is projected for the future t\u00b4 Ni,t\u00b41 pulls, leading to the following estimator: \u00b5R-ed i ptq:\u201c\u00b5ipNi,t\u00b41q loooomoooon (most recent payoff) `pt\u00b4Ni,t\u00b41q\u03b3ipNi,t\u00b41\u00b41q, looooooomooooooon (most recent increment) (4) if Ni,t\u00b41\u011b2 else \u00b5R-ed i ptq:\u201c`8. Figure 1 illustrates the construction of the estimator. The optimism of \u00b5R-ed i and a bias bound are proved in Lemma A.2. Regret Analysis We are now ready to provide the regret analysis of R-ed-UCB, i.e., Algorithm 1 when we employ as exploration index Biptq\u201d\u00b5R-ed i ptq. Theorem 4.3. Let T PN, then R-ed-UCB (Algorithm 1) with Biptq\u201d\u00b5R-ed i ptq suffers an expected regret bounded, for every qPr0,1s, as: R\u00b5pR-ed-UCB,Tq\u010f2K`KT q\u03a5\u00b5 \u02c6R T K V ,q \u02d9 . The regret depends on a parameter qPr0,1s that can be selected to tighten the bound, whose optimal value depends on \u03a5\u00b5p\u00a8,qq, that is a function on the horizon T. Some examples, when \u03b3iptq\u010fl\u00b4c for c\u01050, are reported in Figure 2. 4.3. Stochastic Setting Moving to the R-ed stochastic setting (\u03c3\u01050), we cannot directly exploit the estimator in Equation (4). Indeed, we only observe the sequence of noisy rewards pRti,nqNi,t\u00b41 n\u201c1 , where ti,nPrTs is the round at which arm iPrKs was pulled for the n-th time. To cope with stochasticity, we need to employ an h-wide window made of the h most recent samples, similarly to what has been proposed by Seznec et al. (2020). The choice of h represents a bias-variance trade-off between employing few recent observations (less biased), compared to many past observations (less variance). For hPrNi,t\u00b41s, the resulting estimator p \u00b5R-ed,h i ptq is given by: p \u00b5R-ed,h i ptq:\u201c 1 h Ni,t\u00b41 \u00ff l\u201cNi,t\u00b41\u00b4h`1 \u02dc Rti,l lo omo on (estimated payoff) `pt\u00b4lq Rti,l \u00b4Rti,l\u00b4h h looooooomooooooon (estimated increment) \u00b8 , if h\u010ftNi,t\u00b41{2u, else p \u00b5R-ed,h i ptq:\u201c`8. The construction of the estimator is shown in Appendix A.1 and relies on the idea of averaging several estimators of the form of Equation (4) instanced using as starting points different number of pulls Ni,t\u00b41\u00b4l`1 for lPrhs and replacing the true payoff with the corresponding reward sample. An ef\ufb01cient way to compute this estimator is reported in Appendix D. Regret Analysis By making use of the presented estimator, we build the following optimistic exploration index: Biptq\u201dp \u00b5R-ed,hi,t i ptq`\u03b2R-ed,hi,t i ptq, where \u03b2R-ed,hi,t i pt,\u03b4tq:\u201c\u03c3pt\u00b4Ni,t\u00b41`hi,t\u00b41q d 10log 1 \u03b4t h3 i,t , and hi,t are arm-and-time-dependent window sizes and \u03b4t is a time-dependent con\ufb01dence parameter. By choosing the window size depending linearly on the number of pulls, we are able to provide the following regret bound. Theorem 4.4. 
Let T PN, then R-ed-UCB (Algorithm 1) with Biptq\u201dp \u00b5R-ed,hi,t i ptq`\u03b2R-ed,hi,t i ptq, hi,t\u201ct\u03f5Ni,t\u00b41u for \u03f5Pp0,1{2q and \u03b4t\u201ct\u00b4\u03b1 for \u03b1\u01052, suffers an expected regret bounded, for every qPr0,1s, as: R\u00b5pR-ed-UCB,Tq\u010fO \u02dc K \u03f5 p\u03c3Tq 2 3 p\u03b1logTq 1 3 ` KT q 1\u00b42\u03f5\u03a5\u00b5 \u02c6R p1\u00b42\u03f5q T K V ,q \u02d9\u00b8 . This result deserves some comments. First, compared with the corresponding deterministic R-ed regret bound (Theorem 4.3), it re\ufb02ects a similar dependence of the cumulative increment \u03a5\u00b5, although it now involves the \u03f5 parameter de\ufb01ning the window size hi,t\u201ct\u03f5Ni,t\u00b41u. Second, it includes an additional term of order r OpT 2 3 q that is due to the noise \u03c3 presence that increases inversely w.r.t. the \u03f5.6 Thus, 6In particular, when \u03b3ipnq decreases suf\ufb01ciently fast (see Table 1), the regret is dominated by the r OpT 2 3 q component. \fStochastic Rising Bandits c\u010f1 c\u011b1 Rested T KT 1 c Restless K 1`c 2 T 1\u00b4 c 2 KT 1 c`1 1 0.5 1 c Figure 2. Regret bounds r O rates optimized over q for R-less and R-ed deterministic bandits when \u03b3iplq\u010fl\u00b4c for c\u01050. we visualize a trade-off in the choice of \u03f5: larger windows (\u03f5\u00ab1) are bene\ufb01cial for the \ufb01rst term, but they enlarge the constant 1{p1\u00b42\u03f5q multiplying the second component. Remark 4.2 (Comparison with Adversarial Bandits). The R-ed setting can be mapped to an adversarial bandit (Auer et al., 2002) with an adaptive (i.e., non-oblivious) adversary. Indeed, the arm payoff \u00b5ipNi,tq can be thought to as selected by an adversary who has access to the previous learner choices (i.e., the history Ht\u00b41), speci\ufb01cally to the number of pulls Ni,t. However, although adversarial bandit algorithms, such as EXP3 (Auer et al., 2002) and OSMD (Audibert et al., 2014), suffer r Op ? Tq regret, these results are not comparable with ours. Indeed, while these correspond to guarantees on the external regret, the regret de\ufb01nition we employ in Section 3 is a notion of policy regret (Dekel et al., 2012). 5. Stochastic Rising Restless Bandits In this section, we consider the Rising restless bandits (R-less) in which the payoff increases at every round regardless the arm is pulled, i.e., \u00b5ipt,Ni,tq\u201d\u00b5iptq. Oracle Policy We start recalling that the oracle greedy policy, i.e., the policy selecting at each round tPrTs the arm with largest payoff, is optimal for the non-decreasing restless bandit setting. Theorem 5.1 (Seznec et al., 2020). Let \u03c0g \u00b5 be the oracle greedy policy: \u03c0g \u00b5ptqPargmax iPrKs t\u00b5iptqu, @tPrTs. Then, \u03c0g \u00b5 is optimal for the restless non-decreasing bandits (i.e., under Assumption 3.1). Notice that \u03c0g \u00b5 is optimal under the non-decreasing payoff assumption (Assumption 3.1) only, without requiring the concavity (Assumption 3.2). We can now \ufb01rst appreciate an important difference between rising and rotting bandits. While for the rotting bandits the oracle greedy policy is optimal for both the rested and restless settings, for the rising bandits it remains optimal in the restless case only. Indeed, for the rising rested case, as shown in Theorem 4.1, the oracle constant policy is needed to achieve optimality. 5.1. Deterministic Setting We begin with the case of deterministic arms (\u03c3\u201c0). 
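Both R-ed-UCB and R-less-UCB are obtained from the same index-based loop of Algorithm 1 and differ only in the exploration index B_i(t). A minimal sketch of that shared loop is reported below; the environment and index interfaces (pull_arm, compute_index) are illustrative placeholders of this sketch and are not part of the original pseudocode.

import numpy as np

def rising_ucb(T, K, pull_arm, compute_index):
    """Index-based loop of Algorithm 1: at each round pull the arm maximizing B_i(t).

    pull_arm(i, n, t): (possibly noisy) reward of arm i at its (n+1)-th pull in round t.
    compute_index(i, history, t): exploration index B_i(t), given the arm's history of
    (round, reward) pairs; it should return +inf for arms pulled too few times.
    """
    history = [[] for _ in range(K)]        # per-arm list of (round, reward) pairs
    pulls = np.zeros(K, dtype=int)          # N_i: number of pulls of each arm
    chosen = []
    for t in range(1, T + 1):
        b = [compute_index(i, history[i], t) for i in range(K)]
        i_t = int(np.argmax(b))             # I_t in argmax_i B_i(t)
        r = pull_arm(i_t, int(pulls[i_t]), t)   # observe the reward of the pulled arm
        history[i_t].append((t, r))
        pulls[i_t] += 1
        chosen.append(i_t)
    return chosen, history

Plugging in the rested or restless indices defined in Sections 4 and 5 yields the corresponding R-ed-UCB and R-less-UCB variants.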
Similarly to the rested case, we design an optimistic estimator of \u00b5iptq, namely \u00b5R-less i ptq, employing the exact payoffs p\u00b5ipti,nqqNi,t\u00b41 n\u201c1 . To this end, we exploit the non-decreasing assumption (Assumption 3.1) to derive the identity: \u00b5iptq\u201c \u00b5ipti,Ni,t\u00b41q looooomooooon (most recent payoff) ` t\u00b41 \u00ff l\u201cti,Ni,t\u00b41 \u03b3iplq. looooooomooooooon (sum of future increments) Then, we leverage the concavity (Assumption 3.2) to upper bound the sum of future increments with the last experienced increment that will be projected in the future for t\u00b4ti,Ni,t\u00b41 rounds, leading to the estimator: \u00b5R-less i ptq:\u201c \u00b5ipti,Ni,t\u00b41q looooomooooon (most recent payoff) `pt\u00b4ti,Ni,t\u00b41q \u00b5ipti,Ni,t\u00b41q\u00b4\u00b5ipti,Ni,t\u00b41\u00b41q ti,Ni,t\u00b41 \u00b4ti,Ni,t\u00b41\u00b41 looooooooooooooooomooooooooooooooooon (most recent increment) , (5) if Ni,t\u00b41\u011b2, else \u00b5R-less i ptq:\u201c`8. Lemma A.5 shows that \u00b5R-less i is optimistic and provides a bias bound. Regret Analysis We now provide the regret analysis of R-less-UCB that is obtained from Algorithm 1, when setting Biptq\u201d\u00b5R-less i ptq. Theorem 5.2. Let T PN, then R-less-UCB (Algorithm 1) with Biptq\u201d\u00b5R-less i ptq suffers an expected regret bounded, for every qPr0,1s, as: R\u00b5pR-less-UCB,Tq\u010f2K`KT q q`1 \u03a5\u00b5 \u02c6R T K V ,q \u02d9 1 q`1 . Similarly to Theorem 4.3, the result depends on the free parameter qPr0,1s, that can be chosen to tighten the bound. It is worth noting that the regret bound of the R-less deterministic case (Theorem 5.2) is always smaller than that of the R-ed deterministic case (Theorem 4.3). Indeed, ignoring the dependence on K, we have R\u00b5pR-less-UCB,Tq\u201c O \u00b4 R\u00b5pR-ed-UCB,Tq 1 q`1 \u00af . The following example clari\ufb01es the role of q for both the restless and rested case. Example 5.1. Suppose that for all iPrKs, we have \u03b3iplq\u010f l\u00b4c for c\u01050. The expressions of bounds on \u03a5\u00b5p\u00a8,qq have been shown in Table 1. Different values of qPr0,1s should be selected to tighten the regret bounds depending on the value of c. Figure 2 reports the optimized bounds for the deterministic R-less and R-ed (derivation in Appendix B). \fStochastic Rising Bandits 5.2. Stochastic Setting In the stochastic setting (\u03c3\u01050), we have access to noisy versions of \u00b5i only, i.e., pRti,nqNi,t\u00b41 n\u201c1 . Intuitively, we might be tempted to straightforwardly extend the derivation of the rested case by averaging h estimators like the ones in Equation (5), instanced with different time instants ti,Ni,t\u00b41. Unfortunately, this approach is unsuccessful for technical issues since the increment term would include the difference of time instants ti,Ni,t\u00b41 \u00b4ti,Ni,t\u00b41\u00b41 that, in the stochastic setting, are random variables correlated with the observed rewards Rti,n. For this reason, at the price of a larger bias, we employ the same estimator used in the stochastic rested case, de\ufb01ned for hPrNi,t\u00b41s: p \u00b5R-less,h i ptq\u2013 1 h Ni,t\u00b41 \u00ff l\u201cNi,t\u00b41\u00b4h`1 \u02c6 Rti,l lo omo on (estimated payoff) `pt\u00b4lq Rti,l \u00b4Rti,l\u00b4h h looooooomooooooon (estimated increment) \u02d9 , if hi,t\u010ftNi,t\u00b41{2u, else p \u00b5R-less,h i ptq\u2013`8. Additional details on the estimator construction is reported in Appendix A.2 together with its analysis. 
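A minimal sketch of this window-based estimator, combined with a confidence bonus of the same form already introduced for the rested case, is reported below; the noise level sigma and the parameters epsilon and alpha are assumed to be known, the default values are purely illustrative, and the signature of exploration_index mirrors the generic loop sketched earlier only for convenience.

import math

def window_estimate(rewards, t, h):
    """Window estimator of mu_i(t) from the h most recent pulls (same form in the rested and restless cases).

    rewards: observed rewards of arm i, ordered by pull index l = 1, ..., N_{i,t-1}.
    Returns +inf when h > floor(N/2), as prescribed in the text.
    """
    n = len(rewards)
    if h < 1 or h > n // 2:
        return math.inf
    total = 0.0
    for l in range(n - h + 1, n + 1):               # the h most recent pull indices
        r_l, r_lh = rewards[l - 1], rewards[l - h - 1]
        total += r_l + (t - l) * (r_l - r_lh) / h   # estimated payoff + projected increment
    return total / h

def exploration_index(i, history, t, sigma=0.1, eps=0.25, alpha=2.1):
    """Optimistic index: estimator plus bonus, with h_{i,t} = floor(eps * N) and delta_t = t^(-alpha)."""
    rewards = [r for _, r in history]               # i is unused, kept only for interface compatibility
    n = len(rewards)
    h = int(eps * n)
    mu_hat = window_estimate(rewards, t, h)
    if math.isinf(mu_hat):
        return math.inf                             # under-pulled arms are always explored
    beta = sigma * (t - n + h - 1) * math.sqrt(10 * alpha * math.log(t) / h ** 3)
    return mu_hat + beta

This is exactly the exploration index employed in the regret analysis that follows.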
Regret Analysis We provide the regret analysis of R-less-UCB when we employ the exploration index analogous to that of the rested case: Biptq\u201dp \u00b5R-less,hi,t i ptq`\u03b2R-less,hi,t i ptq, where \u03b2R-less,hi,t i pt,\u03b4tq:\u201c\u03c3pt\u00b4Ni,t\u00b41`hi,t\u00b41q d 10log 1 \u03b4t h3 i,t , and hi,t are a arm-and-time-dependent window sizes and \u03b4t is a time-dependent con\ufb01dence. The regret bound is given by the following result. Theorem 5.3. Let T PN, then R-less-UCB (Algorithm 1) with Biptq\u201dp \u00b5R-less,hi,t i ptq`\u03b2R-less,hi,t i ptq, hi,t\u201ct\u03f5Ni,t\u00b41u for \u03f5Pp0,1{2q, and \u03b4t\u201ct\u00b4\u03b1 for \u03b1\u01052, suffers an expected regret bounded, for every qPr0,1s, as: R\u00b5pR-less-UCB,Tq\u010fO \u02dc K \u03f5 p\u03c3Tq 2 3 p\u03b1logTq 1 3 ` KT 2q 1`q plogTq q 1`q \u03f5p1\u00b42\u03f5q \u03a5\u00b5 \u02c6R p1\u00b42\u03f5q T K V ,q \u02d9 1 1`q \u00b8 . Some observations are in order. First, compared to the bound for the rested case in Theorem 4.4, we note the same dependence of r OpT 2 3 q due to the noise presence \u03c3. Concerning the second term, compared with the one of the deterministic case (Theorem 5.2), we worsen the dependence on T and an inverse dependence on the \u03f5 and 1\u00b42\u03f5 parameters appear. This is due to the usage of the h-wide window instead of the last sample and that, all other things being equal, the estimator employed for the stochastic case, as already discussed, is looser compared to the one for the deterministic case. Finally, our result is not fully comparable with (Besbes et al., 2014) for generic non-stationary bandits with bounded variation because due to the presence of \u03a5\u00b5prT{Ks,qq. Moreover, we achieve such a bound with no knowledge about \u03a5\u00b5, while the work by (Besbes et al., 2014) requires knowing VT . 6. Numerical Simulations We numerically tested R-less-UCB and R-ed-UCB w.r.t. state-of-the-art algorithms for non-stationary MABs in the restless and rested settings, respectively.7 Algorithms We consider the following baseline algorithms: Rexp3 (Besbes et al., 2014), a non-stationary MAB algorithm based on variation budget, KL-UCB (Garivier & Capp\u00b4 e, 2011), one of the most effective stationary MAB algorithms, Ser4 (Allesiardo et al., 2017), which considers best arm switches during the process, and slidingwindow algorithms such as SW-UCB (Garivier & Moulines, 2011), SW-KL-UCB (Combes & Proutiere, 2014), and SW-TS (Trov` o et al., 2020) that are generally able to deal with non-stationary restless settings. The parameters for all the baseline algorithms have been set as recommended in the corresponding papers (see also Appendix E). For our algorithms, the window is set as hi,t\u201ct\u03f5Ni,t\u00b41u (as prescribed by Theorems 4.4 and 5.3). We remark that while the baseline algorithms are suited for the restless case, in the rested case, no algorithm has been designed to cope with the stochastic rising setting, provided that no knowledge on the payoff function is available. We compare the algorithms in terms of empirical cumulative regret p R\u00b5p\u03c0,tq, which is the empirical counterpart of the expected cumulative regret R\u00b5p\u03c0,tq at round t averaged over multiple independent runs. 6.1. Restless setting To evaluate R-less-UCB in the restless setting, we run the aforementioned algorithms on a problem with K\u201c15 arms over a time horizon of T \u201c200,000 rounds, setting \u03f5\u201c1{4. 
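As a reference for how performance is measured in the following experiments, one possible way to compute the empirical cumulative regret of a single run in the restless setting, where the oracle greedy policy of Theorem 5.1 is optimal, is sketched below; the payoff functions are assumed to be available to the evaluation code only, and all names are illustrative.

def empirical_cumulative_regret_restless(mu, chosen):
    """Empirical counterpart of the cumulative regret for one run in the restless setting.

    mu: list of K payoff functions, mu[i](t), used only to evaluate the run.
    chosen: sequence of arms I_1, ..., I_T pulled by the learner.
    The benchmark is the oracle greedy policy, pulling argmax_i mu_i(t) at every round.
    """
    regret, curve = 0.0, []
    for t, i_t in enumerate(chosen, start=1):
        regret += max(m(t) for m in mu) - mu[i_t](t)   # instantaneous regret at round t
        curve.append(regret)
    return curve

The reported restless curves average this quantity over independent runs; in the rested case the benchmark is the oracle constant policy instead.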
The payoff functions \u00b5ip\u00a8q are chosen in these families: Fexp\u201ctfptq\u201ccp1\u00b4e\u00b4atqu and Fpoly\u201c \u2423 fptq\u201cc ` 1\u00b4bpt`b1{\u03c1q\u00b4\u03c1\u02d8( , where a,c,\u03c1Pp0,1s and bP R\u011b0 are parameters, whose values have been selected randomly. By construction all functions f PFexpYFpoly satisfy Assumptions 3.1 and 3.2. The functions coming from Fexp 7Details of the experimental setting, and additional results are provided in Appendix E. The code to reproduce the experiments is available at https://github.com/albertometelli/ stochastic-rising-bandits. \fStochastic Rising Bandits 0 2,000 4,000 6,000 0 0.5 1 Time t / Pulls n \u00b5i(\u00b7) (a) 0 0.5 1 1.5 2 \u00d7105 0 0.5 1 \u00d7105 t b R\u00b5(\u03c0, t) (b) 0 2 4 \u00d7104 0 1 2 3 \u00d7104 t b R\u00b5(\u03c0, t) (c) R-less/ed-UCB KL-UCB Rexp3 SW-TS SW-UCB Ser4 SW-KL-UCB Figure 3. 15 arms bandit setting: (a) \ufb01rst 6000 rounds/pulls of the payoff functions, (b) cumulative regret in the R-less scenario, (c) cumulative regret in the R-ed scenario (100 runs 95% c.i.). 0 19T 400 T 8 T 4 0 0.5 1 Pulls n \u00b5i(\u00b7) (a) 0 0.5 1 1.5 2 \u00d7105 0 0.5 1 \u00d7105 t \u02c6 R\u00b5(\u03c0, t) (c) Figure 4. 2 arms R-ed bandit setting: (a) payoff functions, (b) cumulative regret (100 runs, 95% c.i.). 0 2 4 \u00d7104 0 0.5 1 \u00d7104 t b R\u00b5(\u03c0, t) R-ed-UCB KL-UCB Rexp3 SW-TS SW-UCB Ser4 SW-KL-UCB Figure 5. Cumulative regret in the online model selection on IMDB dataset (30 runs, 95% c.i.). (exponential functions) have a sudden increase, while ones from Fpoly (polynomial functions) have a slower growth rate, leading to different cumulative increments \u03a5\u00b5. The stochasticity is realized by adding a Gaussian noise with \u03c3\u201c0.1. The generated functions are shown in Figure 3a. The empirical cumulative regret p R\u00b5p\u03c0,tq is provided in Figure 3b. The results show that SW-TS is the algorithm that achieves the lowest regret at the horizon, even though its performance at the beginning is worse than the other algorithms. As commonly happening in practice, TS-based approaches tend to outperform UCB ones. Indeed, R-less-UCB displays the second-best curve overall and achieves the best performance among the UCB-like algorithms. 6.2. Rested setting We employ the same arms generated for the restless case to evaluate R-ed-UCB in the rested setting. We plot the empirical cumulative regret in Figure 3c. SW-TS is con\ufb01rmed as the best algorithm at the end of the time horizon, although other algorithms (SW-UCB and SW-KL-UCB) suffer less regret at the beginning of learning. R-ed-UCB pays the price of the initial exploration, but at the end of the horizon, it manages to achieve the second-best performance. Notice that, besides R-ed-UCB, all other baseline algorithms are designed for the restless setting and are not endowed with any guarantee on the regret in the rested scenario. To highlight this fact, we designed a particular 2-arms rising rested bandit in which the optimal arm reveals only when pulled a suf\ufb01cient number of times (linear in T). The payoff functions, ful\ufb01lling Assumptions 3.1 and 3.2, are shown in Figure 4a and the algorithms\u2019 empirical regrets in Figure 4b. Note that in this setting the expected (instantaneous) regret may be negative for t\u0103 19T 400 , and this is the case for most of the algorithms for t\u010320,000. 
While for the \ufb01rst \u00ab20,000 rounds R-ed-UCB is on par with the other algorithms, it outperforms all the other policies over a longer run. Note that the regret for Rexp3 and Ser4 is decreasing the slope for t\u010540,000, meaning that they are somehow reacting to the change in the reward of the two arms. SW-TS starts reacting even later, at around t\u00ab100,000. However, they are not prompt to detect such a change in the rewards and, therefore, collect a large regret in the \ufb01rst part of the learning process. The other algorithms suffer a linear regret at the end of the time horizon since they do not employ forgetting mechanisms or because the sliding window should be tuned knowing the characteristics of the expected reward. \fStochastic Rising Bandits 6.3. IMDB dataset (rested) We investigate the performance of R-ed-UCB on an online model selection task for a real-world dataset. We employ the IMDB dataset, made of 50,000 reviews of movies (scores from 0 to 10). We preprocessed the data as done by Maas et al. (2011) to obtain a binary classi\ufb01cation problem. Each review xt lies in a d\u201c10,000 dimensional feature space, where each feature is the frequency of the most common English words. Each arm corresponds to a different online optimization algorithm, i.e., two of them are Online Logistic Regression algorithms with different learning rate schemes, and the other \ufb01ve are Neural Networks with different topologies. We provide additional information on the arms of the bandit in Appendix E.2. At each round, a sample xt is randomly selected from the dataset, a reward of 1 is generated for a correct classi\ufb01cation, 0 otherwise, and, \ufb01nally, the online update step is performed for the chosen algorithm. The empirical regret is plotted in Figure 5. We can see that R-ed-UCB, with \u03f5\u201c1{32 outperforms the considered baselines. Compared to the synthetic simulations, the smaller window choice is justi\ufb01ed by the fact that we need to take into account that the average learning curves of the classi\ufb01cation algorithms are not guaranteed to be non-decreasing nor concave on the single run. However, keeping the window linear in Ni,t\u00b41 is crucial for the regret guarantees of Theorem 4.4. 7. Discussion and Conclusions This paper studied the MAB problem when the payoffs are non-decreasing functions that evolve either when pulling the corresponding arm (rested) or for time passing (restless). We showed that, for the rested case, an assumption on the payoff (e.g., concavity) is essential to make the problem learnable. We presented novel algorithms that suitably employ the concavity assumption to build proper estimators for both settings. These algorithms are proven to suffer a regret made of a \ufb01rst instance-independent component of r OpT 2 3 q and an instance-dependent component involving the cumulative increment function \u03a5\u00b5p\u00a8,qq. For the rested setting, ours represent the \ufb01rst no-regret algorithm for the stochastic rising bandits. The experimental evaluation con\ufb01rmed our theoretical \ufb01ndings showing advantages over state-of-the-art algorithms designed for non-stationary bandits, especially in the rested setting. The natural future research direction consists of studying the complexity of the learning problem in stochastic rising rested and restless bandits, focusing on deriving suitable regret lower bounds. 
Other future works include investigating the best-arm identi\ufb01cation setting and, motivated by the online model selection, analysing the alternative case in which the optimization algorithms associated with the arms act on a shared vector of parameters. Acknowledgements The authors would like to Emmanuel Esposito, Saeed Masoudian, and Alessandro Montenegro for their contribution to the \ufb01x of the minor issues present in the manuscript." + }, + { + "url": "http://arxiv.org/abs/2012.08225v1", + "title": "Policy Optimization as Online Learning with Mediator Feedback", + "abstract": "Policy Optimization (PO) is a widely used approach to address continuous\ncontrol tasks. In this paper, we introduce the notion of mediator feedback that\nframes PO as an online learning problem over the policy space. The additional\navailable information, compared to the standard bandit feedback, allows reusing\nsamples generated by one policy to estimate the performance of other policies.\nBased on this observation, we propose an algorithm, RANDomized-exploration\npolicy Optimization via Multiple Importance Sampling with Truncation\n(RANDOMIST), for regret minimization in PO, that employs a randomized\nexploration strategy, differently from the existing optimistic approaches. When\nthe policy space is finite, we show that under certain circumstances, it is\npossible to achieve constant regret, while always enjoying logarithmic regret.\nWe also derive problem-dependent regret lower bounds. Then, we extend RANDOMIST\nto compact policy spaces. Finally, we provide numerical simulations on finite\nand compact policy spaces, in comparison with PO and bandit baselines.", + "authors": "Alberto Maria Metelli, Matteo Papini, Pierluca D'Oro, Marcello Restelli", + "published": "2020-12-15", + "updated": "2020-12-15", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction Policy Optimization (PO, Deisenroth, Neumann, and Peters 2013) is a family of Reinforcement Learning (RL, Sutton and Barto 2018) algorithms based on the explicit optimization of the policy parameters. It represents the most promising approach for learning large-scale continuous control tasks and has already achieved marvelous results in video games (e.g., Vinyals et al. 2019) and robotics (e.g., Peng et al. 2020). These achievements, however, rely on massive amounts of simulation rollouts. The ef\ufb01cient use of experience data is essential both to reduce computational costs and to make learning online from real interaction possible. This is still largely an open problem and calls for better theoretical understanding. Any online-learning agent must face the exploration-exploitation dilemma: whether to leverage on its current knowledge to maximize performance or consider new alternatives. Fortunately, the Multi-Armed Bandit (MAB) literature (Bubeck and Cesa-Bianchi 2012; Lattimore and Szepesv\u00b4 ari 2018) provides a theoretical framework for the problem of ef\ufb01cient exploration under bandit *Equal contribution. Copyright \u00a9 2021, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. feedback, i.e., observing the effects of the chosen actions. The dilemma is addressed by minimizing the cumulative regret of the online performance w.r.t. the optimal one. 
The most popular exploration strategies are based on the Optimism in the Face of Uncertainty (OFU, Lai and Robbins 1985), of which UCB1 (Auer, Cesa-Bianchi, and Fischer 2002) is the prototypical algorithm, and on Thompson Sampling (TS, Thompson 1933). Both suffer only sublinear regret (Auer, Cesa-Bianchi, and Fischer 2002; Agrawal and Goyal 2012; Kaufmann, Korda, and Munos 2012). TS typically performs better in practice (Chapelle and Li 2011), but it is only computationally ef\ufb01cient in arti\ufb01cial settings (Kveton et al. 2019b). More recent randomized algorithms such as PHE (Perturbed History Exploration) (Kveton et al. 2019a) are able to match the theoretical and practical advantages of TS without the computational burden, and with no assumptions on the payoff distribution. The OFU principle has been applied to RL (Jaksch, Ortner, and Auer 2010) and recently also to PO (Chowdhury and Gopalan 2019; Efroni et al. 2020), at the level of action selection. These methods are promising but limited to \ufb01nite actions. A different perspective is proposed by Papini et al. (2019), where the decision problem is not de\ufb01ned over the agent\u2019s actions but over the policy parameters. This change of viewpoint allows exploiting the special structure of the PO problem: for each policy, a sequence of states and actions performed by the agent is collected, constituting, alongside the rewards, a vastly richer signal than the simple bandit feedback. In this paper, we call it mediator feedback since this extra information acts as a mediator variable between the policy parameters and the return. OPTIMIST (Papini et al. 2019) is an OFU algorithm that uses Multiple Importance Sampling (MIS, Veach and Guibas 1995) to exploit the mediator feedback, so that the results of one policy provide information on all the others. This allows, in principle, to optimize over an in\ufb01nite policy space with only \ufb01nite samples and no regularity assumptions on the underlying process. There are two important limitations in Papini et al. (2019). First, the advantages of the mediator feedback over the bandit feedback are not clear from a theoretical perspective since the regret of OPTIMIST is comparable with that of UCB1 with \ufb01nite policy space. Second, the policy selection of OPTIMIST requires maximizing a non-convex and non-differentiable index. In the continuous setting, this arXiv:2012.08225v1 [cs.LG] 15 Dec 2020 \fis addressed via discretization, with clear scalability issues. In this work, we provide two major advancements. From the theoretical side, we provide regret lower bounds for the policy optimization problem with \ufb01nite policy space, and we show that OPTIMIST actually enjoys constant regret under the assumptions made in (Papini et al. 2019). In fact, mediator feedback is so special that, under strongenough assumptions, a greedy algorithm enjoys the same guarantees. We also devise a PHE-inspired randomized algorithm, called RANDOMIST (RANDomized-exploration policy Optimization via Multiple Importance Sampling with Truncation), with similar regret guarantees as OPTIMIST. From the practical side, this allows replacing the unfeasible index maximization of OPTIMIST with a sampling procedure. Although our regret guarantees apply to the \ufb01nite setting only, we propose a heuristic version of RANDOMIST for continuous problems, using a Markov Chain Monte Carlo (MCMC, Owen 2013). 
We show the advantages of this algorithm over continuous OPTIMIST in terms of computational complexity and performance. The structure of the paper is as follows. We start in Section 2 with the basic background. In Section 3, we formalize the concept of mediator feedback in PO and derive two regret lower bounds. We illustrate, in Section 4, a possible way to exploit mediator feedback, based on importance sampling. Section 5 is devoted to the discussion of deterministic algorithms, providing the improved regret guarantees for OPTIMIST. In Section 6, we present RANDOMIST with its regret guarantees and the heuristic extension to the continuous case. In Section 7, we compare empirically RANDOMIST with relevant baselines on both illustrative examples and continuous-control problems. In Section 8, we discuss relationships with similar approaches from the bandit and RL literature. We conclude in Section 9, discussing the obtained results and proposing future research directions. The proofs of all the results can be found in Appendix D. 2 Preliminaries In this section, we introduce some notation, the background on Markov decision processes and policy optimization. Mathematical Background Let pX, Fq be a measurable space, we denote with PpXq the set of probability measures over X. Let P, Q P PpXq such that P ! Q,1 for any \u03b1 P r0, 8s the \u03b1-R\u00b4 enyi divergence (R\u00b4 enyi 1961) is de\ufb01ned as:2 D\u03b1pP}Qq \u201c 1 \u03b1 \u00b4 1 log \u017c X \u02c6dP dQ \u02d9\u03b1 dQ. We denote with d\u03b1pP}Qq \u201c exp rD\u03b1pP}Qqs the exponentiated R\u00b4 enyi divergence (Cortes, Mansour, and Mohri 2010). Markov Decision Processes and Policy Optimization A discrete-time Markov Decision Process (MDP, Puterman 1P is absolutely continuous w.r.t. Q, i.e., for every measurable set Y \u010e X we have QpYq \u201c 0 \u00f1 PpYq \u201c 0. 2In the limit, for \u03b1 \u00d1 1 we have D1pP}Qq \u201c DKLpP}Qq and for \u03b1 \u00d1 8 we have D8pP}Qq \u201c ess supX dP dQ. 1994) is a 6-tuple M \u201c pS, A, P, R, \u03b3, \u00b5q, where S is the state space, A is the action space, P is the transition model that for each ps, aq P S \u02c6 A provides the probability distribution of the next state Pp\u00a8|s, aq P PpSq, Rps, aq P R is the reward function, \u03b3 P r0, 1s is the discount factor, and \u00b5 P PpSq is the initial-state distribution. In Policy Optimization (PO, Peters and Schaal 2008), we model the agent\u2019s behavior by means of a policy \u03c0\u03b8p\u00a8|sq P PpAq belonging to a space of parametric policies \u03a0\u0398 \u201c t\u03c0\u03b8 : \u03b8 P \u0398u. The interaction between an agent and an MDP generates a sequence of state-action pairs, named trajectory: \u03c4 \u201c ps0, a0, s1, a1, . . . , sH\u00b41, aH\u00b41q where s0 \u201e \u00b5, for all h P t0, . . . , H \u00b4 1u we have ah \u201e \u03c0\u03b8p\u00a8|shq, sh`1 \u201e Pp\u00a8|sh, ahq and H P N is the trajectory length. Each parameter \u03b8 P \u0398 determines a policy \u03c0\u03b8 P \u03a0\u0398 which, in turn, induces a probability measure p\u03b8 P PpT q over the trajectory space T . To every trajectory \u03c4 P T , we associate an index of performance Rp\u03c4q \u201c \u0159H\u00b41 h\u201c0 \u03b3hRpsh, ahq, called return. Without loss of generality we assume that Rp\u03c4q P r0, 1s. Thus, we can evaluate the performance of a policy \u03c0\u03b8 P \u03a0\u0398 by means of its expected return: Jp\u03b8q \u201c E\u03c4\u201ep\u03b8 rRp\u03c4qs. 
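As a simple illustration of these quantities, the return of a trajectory and a plain Monte Carlo estimate of J(theta) obtained from rollouts of the policy can be sketched as follows; the rollout routine is a hypothetical placeholder.

import numpy as np

def trajectory_return(rewards, gamma):
    """Return R(tau) = sum_{h=0}^{H-1} gamma^h * R(s_h, a_h) of one trajectory."""
    return sum(gamma ** h * r for h, r in enumerate(rewards))

def monte_carlo_return(sample_trajectory_rewards, gamma, n_trajectories=100):
    """Plain Monte Carlo estimate of J(theta) = E_{tau ~ p_theta}[R(tau)].

    sample_trajectory_rewards(): hypothetical callable that rolls out pi_theta once and
    returns the list of rewards [R(s_0, a_0), ..., R(s_{H-1}, a_{H-1})].
    """
    returns = [trajectory_return(sample_trajectory_rewards(), gamma)
               for _ in range(n_trajectories)]
    return float(np.mean(returns))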
The goal of the agent consists in \ufb01nding an optimal parameter, i.e., any \u03b8\u02da maximizing Jp\u03b8q.3 3 Online Policy Optimization and Mediator Feedback The online PO protocol works as follows. At each round t P rns, we evaluate a parameter vector \u03b8t P \u0398 by running policy \u03c0\u03b8t, collecting one (or more) trajectory \u03c4t P T and observing the corresponding return Rp\u03c4tq. Then, based on the history Ht \u201c tp\u03b8i, \u03c4i, Rp\u03c4iqqut i\u201c1, we update \u03b8t to get \u03b8t`1. From an online learning perspective, the goal of the agent consists in maximizing the sum of the expected returns over n rounds or, equivalently, minimizing the cumulative regret Rpnq: max \u03b81,...\u03b8nP\u0398 n \u00ff t\u201c1 Jp\u03b8tq \u00f4 min \u03b81,...\u03b8nP\u0398 Rpnq \u201c n \u00ff t\u201c1 \u2206p\u03b8tq, where \u2206p\u03b8q \u201c J\u02da\u00b4Jp\u03b8q is the optimality gap of \u03b8 P \u0398 and J\u02da \u201c sup\u03b8P\u0398 Jp\u03b8q. Thus, whenever policy \u03c0\u03b8t is executed the agent receives the trajectory-return pair p\u03c4t, Rp\u03c4tqq, that we name mediator feedback (MF). The term \u201cmediator\u201d refers to the side information, the trajectory \u03c4t, that mediates between the parameter choice \u03b8t and the return Rp\u03c4tq. By na\u00a8 \u0131vely approaching PO as an online-learning problem over policy space, we would only consider bandit feedback, in which just the return Rp\u03c4tq is observable. In comparison, the MF allows to better exploit the structure underlying the PO problem (Figure 1).4 Indeed, while the return function R 3To simplify the presentation, we frame our results for the usual action-based PO. Our \ufb01ndings directly extend to parameter-based exploration (Sehnke et al. 2008), in which policies are indirectly optimized by learning a hyperpolicy that outputs the policy parameters. Coherently with Papini et al. (2019), the empirical evaluation of Section 7 is carried out in the parameter-based framework. 4In this paper, we employ the wording \u201cbandit feedback\u201d with a different meaning compared to some provably ef\ufb01cient approaches to PO (e.g., Efroni et al. 2020). See also Section 8. \f\u03b8t \u03c4t Rp\u03c4tq p\u03b8t R Mediator Feedback \u03b8t zt Rp\u03c4tq p\u03b8t \u02dd R\u00b41 Bandit Feedback Figure 1: Graphical models comparing mediator and bandit feedbacks. is unknown, the trajectory distribution p\u03b8 is partially known: p\u03b8p\u03c4q \u201c \u00b5ps0q H\u00b41 \u017a h\u201c0 \u03c0\u03b8pah|shqPpsh`1|sh, ahq. (1) The policy factors \u03c0\u03b8, that depend on \u03b8, are known to the agent, whereas the factors due to the environment (\u00b5 and P) are unknown but do not depend on \u03b8. Intuitively, if two policies \u03c0\u03b8 and \u03c0\u03b81 are suf\ufb01ciently \u201csimilar\u201d, given a trajectory \u03c4 from policy \u03c0\u03b8, the return Rp\u03c4q provides information on the expected return of policy \u03c0\u03b81 too. 3.1 Regret Lower Bounds for Finite Policy Space We focus on the intrinsic complexity of PO with \ufb01nite policy space, deriving two lower bounds to the regret. The results are phrased, for simplicity, for the case of two policies, i.e., |\u0398| \u201c 2, and the proof techniques are inspired to (Bubeck, Perchet, and Rigollet 2013). We start showing that, with enough structure between the policies, i.e., when the KLdivergence between the trajectory distributions is bounded, the best achievable regret is constant. Theorem 3.1. 
There exist an MDP and a parameter space \u0398 \u201c t\u03b81, \u03b82u with DKLpp\u03b81}p\u03b82q \u0103 8, DKLpp\u03b82}p\u03b81q \u0103 8 and Jp\u03b81q \u00b4 Jp\u03b82q \u201c \u2206such that, for suf\ufb01ciently large n, all algorithms suffer regret E Rpnq \u011b 1 32\u2206. Instead, the presence of policies that are uninformative of one another, i.e., with in\ufb01nite KL-divergence between the trajectory distributions, leads to a logarithmic regret. Theorem 3.2. There exist an MDP and a parameter space \u0398 \u201c t\u03b81, \u03b82u with DKLpp\u03b81}p\u03b82q\u201c8 or DKLpp\u03b82}p\u03b81q\u201c 8, and Jp\u03b81q \u00b4 Jp\u03b82q \u201c \u2206such that, for any n \u011b 1, all algorithms suffer regret E Rpnq \u011b 1 8\u2206logp\u22062nq. 4 Exploiting Mediator Feedback with Importance Sampling In this section, we illustrate how Importance Sampling techniques (IS, Cochran 1977; Owen 2013) can be employed to effectively exploit the mediator feedback in PO.5 Monte Carlo Estimation With the bandit feedback at each round t P rns, the agent has access to the history of parameter-return pairs Ht \u201c tp\u03b8i, Rp\u03c4iqut\u00b41 i\u201c1. Let Ttp\u03b8q \u201c \u0159t\u00b41 i\u201c1 1t\u03b8i \u201c \u03b8u be the number of trajectories collected with policy \u03c0\u03b8 P \u03a0\u0398 up to round t \u00b4 1. To estimate the 5We stress that IS is just one method, and not necessarily the best one, to exploit the structure of the PO problem. expected return Jp\u03b8q, if no additional structure is available, we can only use the samples collected when executing \u03c0\u03b8, leading to the Monte Carlo (MC) estimator: p JMC t p\u03b8q \u201c 1 Ttp\u03b8q t\u00b41 \u00ff i\u201c1 Rp\u03c4iq 1t\u03b8i \u201c \u03b8u. (2) p JMC t is unbiased for Jp\u03b8q and its variance scales with Varr p JMC t p\u03b8qs \u010f 1{Ttp\u03b8q. Clearly, p JMC t p\u03b8q can be computed only for the policies that have been executed at least once. Multiple Importance Sampling Estimation With the mediator feedback, at each round t P rns we have access to additional information, i.e., the history of parametertrajectory-return triples Ht \u201c tp\u03b8i, \u03c4i, Rp\u03c4iqqut\u00b41 i\u201c1. Thanks to the factorization in Equation (1), we can compute the trajectory distribution ratios without knowing P and \u00b5: p\u03b8p\u03c4q p\u03b81p\u03c4q \u201c H\u00b41 \u017a h\u201c0 \u03c0\u03b8pah|shq \u03c0\u03b81pah|shq. Thus, we can use all the samples to estimate the expected return of any policy. Let \u03a6t \u201c \u0159t\u00b41 j\u201c1 1 t\u00b41p\u03b8j be the mixture induced by the policies executed up to time t \u00b4 1: if p\u03b8 ! \u03a6t, we can employ a Multiple Importance Sampling (MIS, Veach and Guibas 1995) estimator (with balance heuristic):6 p Jtp\u03b8q \u201c 1 t \u00b4 1 t\u00b41 \u00ff i\u201c1 \u03c9\u03b8,tp\u03c4iqRp\u03c4iq, (3) where \u03c9\u03b8,tp\u03c4iq \u201c p\u03b8p\u03c4iq{\u03a6tp\u03c4iq is the importance weight. Thus, for estimating the expected return Jp\u03b8q of policy \u03c0\u03b8 we do not need to execute \u03c0\u03b8, but just require the absolute continuity p\u03b8 ! \u03a6t (surely ful\ufb01lled if Ttp\u03b8q \u011b 1). The statistical properties of the MIS estimator can be phrased in terms of the R\u00b4 enyi divergence. We can prove that 0 \u010f p Jtp\u03b8q \u010f d8pp\u03b8}\u03a6tq and the variance can be bounded as Varr p Jtp\u03b8qs \u010f d2pp\u03b8}\u03a6tq{pt \u00b4 1q (Metelli et al. 2018; Papini et al. 2019; Metelli et al. 2020). 
Since the variance of p Jtp\u03b8q scales with d2pp\u03b8}\u03a6tq{pt \u00b4 1q instead of 1{Ttp\u03b8q, as for p JMC t p\u03b8q, we refer to \u03b7tp\u03b8q :\u201c pt \u00b4 1q{d2pp\u03b8}\u03a6tq as the effective number of trajectories. It is worth noting that \u03b7tp\u03b8q \u011b Ttp\u03b8q (Lemma C.4); thus, thanks to the structure introduced by the mediator feedback, the MIS estimator variance is always smaller than the MC estimator variance.7 Truncated Multiple Importance Sampling Estimation The main limitation of the MIS estimator is that the importance weight \u03c9\u03b8,t displays a heavy-tail behavior, preventing exponential concentration, unless d8pp\u03b8}\u03a6tq is \ufb01nite (Metelli et al. 2018). A common solution consists in 6For an extensive discussion of importance sampling and heuristics (e.g., balance heuristic) refer to (Owen 2013). 7The effective number of trajectories \u03b7tp\u03b8q is, in fact, the effective sample size of p Jtp\u03b8q (Martino, Elvira, and Louzada 2017). \ftruncating the estimator (Ionides 2008) at the cost of introducing a negative bias. Given a (time-variant and policydependent) truncation threshold Mtp\u03b8q \u0103 8, the Truncated MIS (TMIS) was introduced by Papini et al. (2019): q Jtp\u03b8q \u201c 1 t \u00b4 1 t\u00b41 \u00ff i\u201c1 q \u03c9\u03b8,tp\u03c4iqRp\u03c4iq, (4) where q \u03c9\u03b8,tp\u03c4iq \u201c min tMtp\u03b8q, \u03c9\u03b8,tp\u03c4iqu. TMIS enjoys more desirable theoretical properties than plain MIS. While its variance scales similarly to p Jtp\u03b8q since Varr q Jtp\u03b8qs \u010f d2pp\u03b8}\u03a6tq{pt \u00b4 1q, the range can be bounded as 0 \u010f q Jtp\u03b8q \u010f Mtp\u03b8q. Thus, the range is controlled by Mtp\u03b8q and no longer by the divergence d8pp\u03b8}\u03a6tq, which may be in\ufb01nite. Similarly, the bias can be bounded as Jp\u03b8q \u00b4 E\u03c4i\u201ep\u03b8i r q Jtp\u03b8qs \u010f d2pp\u03b8}\u03a6tq{Mtp\u03b8q (see Papini et al. (2019) and Lemma C.1 for details). If we are interested in minimizing the joint contribution of bias and variance, this suggests to increase Mtp\u03b8q progressively over the rounds. 5 Deterministic Algorithms In this section, we consider \ufb01nite policy spaces (|\u0398| \u0103 8) and discuss algorithms for PO that select policies deterministically, i.e., \u03b8t is a deterministic function of history Ht\u00b41. Follow The Leader The simplest algorithm accounting for the mediator feedback is Follow The Leader (FTL). It maintains a TMIS estimator q Jtp\u03b8q and selects the policy with the highest estimated expected return, i.e., \u03b8t P arg max\u03b8P\u0398 q Jtp\u03b8q. This is a pure-exploitation algorithm, unsuited for bandit feedback. Surprisingly, under a strong form of mediator feedback, FTL enjoys constant regret. Theorem 5.1. Let \u0398 \u201c rKs, vp\u03b8q \u201c max\u03b81P\u0398 d2pp\u03b8}p\u03b81q for all \u03b8 P \u0398 and v\u02dap\u03b8q \u201c maxtvp\u03b8q, vp\u03b8\u02daqu, where \u03c0\u03b8\u02da is an optimal policy. If v :\u201c max\u03b8P\u0398 vp\u03b8q \u0103 8, then, for any \u03b1 \u0105 1, the expected regret of FTL using TMIS with truncation Mtp\u03b8q \u201c b td2pp\u03b8}\u03a6tq \u03b1 log t is bounded as: E Rpnq \u010f \u00ff \u03b8P\u0398:\u2206p\u03b8q\u01050 48\u03b1v\u02dap\u03b8q \u2206p\u03b8q log 24\u03b1v\u02dap\u03b8q \u2206p\u03b8q2 ` \u2206p\u03b81q ` 2K \u03b1 \u00b4 1 min ! 1, a 2 log v ) . 
(5) We refer to the condition when all pairwise R\u00b4 enyi divergences are \ufb01nite (i.e., v \u0103 8) as perfect mediator feedback. In such case, we have the remarkable property that running any policy in \u03a0\u0398 provides information for all the others. Indeed, the effective number of trajectories satis\ufb01es \u03b7tp\u03b8q \u011b pt \u00b4 1q{v (Lemma C.4). Unfortunately, when v \u201c 8, FTL degenerates to linear regret (Fact D.1). UCB1 We can always apply an algorithm for standard bandit feedback, like UCB1 (Lai and Robbins 1985; Auer, Cesa-Bianchi, and Fischer 2002), to PO with \ufb01nite policy space, ignoring the mediator feedback. UCB1 maintains the sample mean p JMC t p\u03b8q of the observed returns for Algorithm 1 OPTIMIST Input: initial parameter \u03b81, \u03b1 \u0105 1 Execute \u03c0\u03b81, observe \u03c41 \u201e p\u03b81 and Rp\u03c41q for t \u201c 2, . . . , n do Compute expected return estimate q Jtp\u03b8q Compute index: Btp\u03b8q \u201c q Jtp\u03b8q ` p1 ` ? 2q b \u03b1 log t \u03b7tp\u03b8q Select \u03b8t P arg max\u03b8P\u0398 Btp\u03b8q Execute \u03c0\u03b8t, observe \u03c4t \u201e p\u03b8t and Rp\u03c4tq end for each \u03b8 P \u0398 and selects the one that maximizes p JMC t p\u03b8q ` a p\u03b1 log tq{Ttp\u03b8q. The optimistic bonus favors policies that have been selected less often, in accordance with the OFU principle. Being designed for bandit feedback, UCB1 guarantees Op\u2206\u00b41 log nq regret (Auer, Cesa-Bianchi, and Fischer 2002) even if v \u201c 8, but it cannot exploit mediator feedback when actually present. In principle, we could employ FTL or UCB1 based on whether v is \ufb01nite or in\ufb01nite. There are two reasons why this approach might be inappropriate. First, we would disregard the possibility to share information among pairs of policies with \ufb01nite divergence, losing possible practical bene\ufb01ts (not captured by the current regret analysis). Second, even when v \u0103 8, the regret of FTL is Opv\u2206\u00b41 logpv\u2206\u00b42qq that, at \ufb01nite time, might be worse than Op\u2206\u00b41 log nq, especially for large v. Note that deriving the conditions on v so that the regret of UCB1 is smaller than that of FLT is not practical since it would require the knowledge of the gap \u2206. OPTIMIST The dif\ufb01culty in combining the advantages of FTL and UCB1 is overcome by OPTIMIST (Algorithm 1), an OFU-based algorithm introduced by Papini et al. (2019).8 It selects policies as to maximize an optimistic TMIS expected return estimate that favors policies with a lower effective number of trajectories. In the original paper (Papini et al. 2019), OPTIMIST is only shown to enjoy sublinear regret in high probability under perfect mediator feedback (v \u0103 8). We show here that OPTIMIST actually enjoys constant regret under perfect mediator feedback (like FTL) without ever degenerating into linear regret (like UCB1). Theorem 5.2. Let \u0398 \u201c rKs and vp\u03b8q \u201c max\u03b81P\u0398 d2pp\u03b8}p\u03b81q for all \u03b8 P \u0398 (vp\u03b8q can be in\ufb01nite). 
For any \u03b1 \u0105 1, the expected regret of OPTIMIST with truncation Mtp\u03b8q \u201c b td2pp\u03b8}\u03a6tq \u03b1 log t is bounded as: (a) if v :\u201c max\u03b8P\u0398 vp\u03b8q \u0103 8: E Rpnq \u010f \u00ff \u03b8P\u0398:\u2206p\u03b8q\u01050 48\u03b1vp\u03b8q \u2206p\u03b8q log 24\u03b1vp\u03b8q \u2206p\u03b8q2 8We consider here a slight variant of OPTIMIST with an explicit exploration parameter \u03b1 in place of the original con\ufb01dence parameter \u03b4 from (Papini et al. 2019), since we focus on expected regret rather than high-probability regret. \fAlgorithm 2 RANDOMIST Input: initial parameter \u03b81, scale a \u011b 0, translation b \u011b 0, \u03b1 \u0105 1 Execute \u03c0\u03b81, observe \u03c41 \u201e p\u03b81 and Rp\u03c41q for t \u201c 2, . . . , n do Compute expected return estimate q Jtp\u03b8q Generate perturbation: Utp\u03b8q \u201c 1 \u03b7tp\u03b8q \u0159a\u03b7tp\u03b8q l\u201c1 \u03c4l ` b, with \u03c4l \u201e Berp1{2q Select \u03b8t P arg max\u03b8P\u0398 q Jtp\u03b8q ` Utp\u03b8q Execute \u03c0\u03b8t, observe \u03c4t \u201e p\u03b8t and Rp\u03c4tq end for ` \u2206p\u03b81q ` 2K \u03b1 \u00b4 1 min ! 1, a 2 log v ) ; (b) in any case: E Rpnq \u010f \u00ff \u03b8P\u0398:\u2206p\u03b8q\u01050 24\u03b1 \u2206p\u03b8q log n ` \u03b1 ` 1 \u03b1 \u00b4 1K, with an instance-independent expected regret of E Rpnq \u010f 4?6\u03b1Kn log n ` p\u03b1 ` 1qK{p\u03b1 \u00b4 1q. Note also that the regret correctly goes to zero with the divergence (when v \u201c 1, all the policies are equivalent). It is an interesting open problem whether better regret guarantees can be provided for the intermediate case, i.e., when some (but not all) the R\u00b4 enyi divergences are \ufb01nite. 6 Randomized Algorithms In this section, we propose a novel algorithm for regret minimization in PO that selects the policies with a randomized strategy. RANDOMIST (RANDomized-exploration policy Optimization via Multiple Importance Sampling with Truncation, Algorithm 2) is based on PHE (Kveton et al. 2019a) and employs additional samples to perturb the TMIS expected return estimate q Jtp\u03b8q, enforcing exploration.9 Clearly, RANDOMIST shares the randomized nature of exploration with the Bayesian approaches for bandits (e.g., Thompson Sampling (Thompson 1933)) although no prior-posterior mechanism is explicitly implemented and no assumption (apart for boundedness) on the return distribution is needed. At each round t \u201c 2, . . . , n, we update the TMIS expected return estimate for each policy q Jtp\u03b8q and we generate the perturbation Utp\u03b8q that is obtained through a\u03b7tp\u03b8q pseudo-rewards sampled from a Bernoulli distribution Berp1{2q. Then, we play the policy maximizing the perturbed estimated expected return, i.e., the sum of the estimated expected return q Jtp\u03b8q and the perturbation Utp\u03b8q. The two hyperparameters are the perturbation scale a \u0105 0 and the perturbation translation b \u0105 0. Informally, a and b are responsible for the amount of exploration: a governs the variance of the perturbation, while b (which is absent in PHE) accounts for the negative bias introduced by the TMIS estimator. We now present the properties of RANDOMIST with \ufb01nite parameter space and propose an extension to deal with compact parameter spaces. 9In this sense, RANDOMIST, as well as PHE, resembles the Follow the Perturbed Leader (Hannan 1957) strategy. 
Finite Parameter Space If the policy space is \ufb01nite, we can show that RANDOMIST enjoys guarantees similar to those of OPTIMIST on the expected regret. Theorem 6.1. Let \u0398\u201crKs, vp\u03b8q\u201cmaxx1P\u0398d2pp\u03b8}p\u03b81q for all \u03b8P\u0398 (vp\u03b8q can be in\ufb01nite) and v\u02dap\u03b8q\u201c maxtvp\u03b8q,vp\u03b8\u02daqu where \u03c0\u03b8\u02da is an optimal policy. For any \u03b1\u01051, the expected regret of RANDOMIST with truncation Mtp\u03b8q\u201c b td2pp\u03b8}\u03a6tq \u03b1logt is bounded as follows: (a) if v:\u201cmax\u03b8P\u0398vp\u03b8q\u01038, b\u010f a p\u03b1logtq{\u03b7tp\u03b8q and a\u011b0: ERpnq\u010f \u00ff \u03b8P\u0398:\u2206p\u03b8q\u01050 p188`32aq\u03b1v\u02dap\u03b8q \u2206p\u03b8q log p94`16aq\u03b1v\u02dap\u03b8q \u2206p\u03b8q2 `\u2206p\u03b81q` \u03b1`3 \u03b1\u00b41 min ! 1, a 2logv ) K; (b) no matter the value of v, if a\u01058 and Jp\u03b8q\u00b4Er q Jtp\u03b8qs\u010f b\u010f a p\u03b1logtq{\u03b7tp\u03b8q: ERpnq\u010f \u00ff \u03b8P\u0398:\u2206p\u03b8q\u01050 p52`110aqc\u03b1 \u2206p\u03b8q logn`2\u03b1`1 \u03b1\u00b41K, where c\u201c2` e2?a ? 2\u03c0 exp \u201c 16 a\u00b48 \u2030\u00b4 1` b \u03c0a a\u00b48 \u00af , with an instance-independent regret of ERpnq\u010f 2 a p52`110aqc\u03b1Knlogn`2 \u03b1`1 \u03b1\u00b41K. Under perfect mediator feedback RANDOMIST enjoys constant regret, like OPTIMIST, although with a dependence on v\u02dap\u03b8q, which involves the divergence w.r.t. an optimal policy. Moreover, in such case, since exploration is not needed, we could even set a \u201c b \u201c 0 reducing RANDOMIST to FTL. Similarly to OPTIMIST, when we allow v \u201c 8, the regret becomes logarithmic and the hyperparameters a and b must be carefully set to enforce exploration. Compact Parameter Space When the parameter space is a compact set, i.e., \u0398 \u201c r\u00b4M, Msd, the arg max in Algorithm 2 cannot be explicitly computed. However, the random variable \u03b8 P arg max\u03b81P\u0398 q Jtp\u03b8q`Utp\u03b8q can be seen as sampled from the distribution for \u03b8 of being the parameter in \u0398 with the largest perturbed estimated expected return, whose p.d.f. is given by (D\u2019Eramo et al. 2017): g\u02da t p\u03b8q \u201c g \u00b4 q Jtp\u03b8q ` Utp\u03b8q \u201c sup \u03b81P\u0398 q Jtp\u03b81q ` Utp\u03b81q|Ht\u00b41 \u00af \u201c \u017c R g\u03b8pyq G\u03b8pyq R \u0398 G\u03b81pyqd\u03b81dy, (6) where R\u0398G\u03b8pyqd\u03b8 \u201c exp `\u015f \u0398 log G\u03b8pyqd\u03b8 \u02d8 is the product integral (Davis and Chat\ufb01eld 1970), g\u03b8 and G\u03b8 are the p.d.f. and the c.d.f. of the random variable q Jtp\u03b8q ` Utp\u03b8q conditioned to the history Ht\u00b41. The computation of g\u02da t (even up to a constant) is challenging as the product integral requires a numerical integration over the parameter space \u0398. Provided that an approximation (up to a constant) g: t of g\u02da t is available, we can use a Monte Carlo Markov Chain method (Owen \f0 0.5 1 \u00d7104 0 200 400 600 Rounds Cumulative Regret (a) \u03c3 = 0.1 \u03bb = 0.1 0 0.5 1 \u00d7104 0 200 400 Rounds (b) \u03c3 = 2 \u03bb = 0.1 0 0.5 1 \u00d7104 0 200 400 600 Rounds (c) \u03c3 = 0.1 \u03bb = 2 0 0.5 1 \u00d7104 0 20 40 60 Rounds (d) \u03c3 = 2 \u03bb = 2 UCB1 TS FTL PHE OPTIMIST RANDOMIST (8.1) RANDOMIST (1.1) Figure 2: Cumulative regret on the illustrative PO for four values of \u03c3 and \u03bb. 20 runs, 95% c.i. 2013) to generate a sample \u03b8t \u201e g: t. As a practical approximation, we consider the p.d.f. 
for \u03b8 of having a perturbed estimated expected return larger than that of the previously executed policies:10 g: tp\u03b8q9 \u015f R g\u03b8pyq \u015bt\u00b41 i\u201c1 G\u03b8ipyqdy. Since Opdq iterations of MCMC are suf\ufb01cient to generate a sample (Beskos and Stuart 2009), where d is the dimensionality of \u0398, and one evaluation of g: t can be performed in time Opt3q, the per-round complexity of RANDOMIST is Opdt3q. This can be further reduced to Opdt2q via clever caching (see Appendix F). OPTIMIST (Papini et al. 2019) can also be applied to continuous parameter spaces, with an r Op ? vdnq high-probability regret bound. However, it is not clear how to perform the maximization step of OPTIMIST ef\ufb01ciently in this setting, since the optimistic index is non-differentiable and non-convex in the parameter variable. Discretization is adopted in (Papini et al. 2019), leading to Opt1`d{2q time complexity, that is exponential in d. The RANDOMIST variant proposed here, although heuristic, has only polynomial dependence on d, thus scaling more favorably to high-dimensional problems. 7 Numerical Simulations We present the numerical simulations, starting with an illustrative example and then moving to RL benchmarks. For the RL experiments, similarly to Papini et al. (2019), the evaluation is carried out in the parameter-based PO setting (Sehnke et al. 2008), where the policy parameters \u03b8 are sampled from a hyperpolicy \u03bd\u03be and the optimization is performed in the space of hyperparameters \u039e (Appendix A). This setting is particularly convenient since the R\u00b4 enyi divergence between hyperpolicies can be computed exactly (at least for Gaussians). Details and an additional experiment on the Cartpole domain are reported in Appendix F. Illustrative Problems The goal of this experiment is to show the advantages of the additional structure offered by the mediator feedback over the bandit feedback. We design a class of 5-policy PO problems, isomorphic to bandit problems, in which trajectories are collapsed to a single real action T \u201c R and Rp\u03c4q \u201c maxt0, mint1, \u03c4{4uu. 10g: t can be seen as obtained from g\u02da t applying a quadrature with t\u03b81, . . . \u03b8t\u00b41u as nodes for the inner integral. The policies are Gaussians pNp0, \u03c32q, Np1, \u03c32q, Np2, \u03c32q, Np2.95, \u03bb2q, Np3, \u03c32qq de\ufb01ned in terms of the two values \u03c3, \u03bb \u0105 0. The optimal policy is the \ufb01fth one and we have a near-optimal parameter, the fourth, with a different variance. Intuitively, we can tune the parameters \u03c3 and \u03bb to vary the R\u00b4 enyi divergences. We compare RANDOMIST with a \u201c 8.1 (as prescribed in Theorem 6.1) and a \u201c 1.1, and b \u201c a p\u03b1 log tq{\u03b7tp\u03b8q for both cases, with OPTIMIST (Papini et al. 2019), FTL, UCB1 (Auer, Cesa-Bianchi, and Fischer 2002), PHE (Kveton et al. 2019a), and TS with Gaussian prior (Agrawal and Goyal 2013a). The cumulative regret is shown in Figure 2 for four combinations of \u03c3 and \u03bb. In (a) and (d) we are in a perfect mediator feedback, but in (a) log v \u00bb 2.25 and (d) log v \u00bb 900. Instead, in (b) or (c), we have v \u201c 8. We notice that FTL displays a (near-)linear regret in (a) as expected since v \u201c 8 but also in (c) where v is \ufb01nite but very large. RANDOMIST with theoretical value of a \u201c 8.1 always displays a good behavior and better than OPTIMIST, except in (d) where the latter shows a remarkable constant regret. 
We also note that when the amount of information shared among parameters is small, UCB1 performs better than OPTIMIST as well as PHE over RANDOMIST. Furthermore, TS with Gaussian prior performs very well across the tasks, although it considers the bandit feedback. This can be explained since TS assumes the correct return distribution. It also suggests that RANDOMIST could be improved when coped with other perturbation distributions (e.g., Gaussian). Finally, we observe that RANDOMIST with a \u201c 1.1, although violating the conditions of Theorem 6.1, keeps showing a sublinear regret even in (b) and (c) when v \u201c 8. Linear Quadratic Gaussian Regulator The Linear Quadratic Gaussian Regulator (LQG, Curtain 1997) is a benchmark for continuous control. We consider the monodimensional case and a Gaussian hyperpolicy \u03bd\u03be \u201c Np\u03be, 0.152q where \u03be is the learned parameter. From \u03bd\u03be, we sample the gain \u03b8 of a deterministic linear policy: ah \u201c \u03b8sh. This experiment aims at comparing RANDOMIST with UCB1 (Auer, Cesa-Bianchi, and Fischer 2002), GPUCB (Srinivas et al. 2010), and OPTIMIST (Papini et al. 2019) in a \ufb01nite policy space by discretizing r\u00b41, 1s in K \u201c 100 parameters. In Figure 3, we notice that OPTIMIST and RANDOMIST outperform UCB1. While RAN\f0 2000 4000 0 200 400 Episodes Cumulative Regret LQG 0 1000 2000 0 50 100 Episodes Cumulative Return Mountain Car RANDOMIST (8.1) UCB1 GPUCB OPTIMIST RANDOMIST (1.1) PGPE PB-POIS Figure 3: Cumulative regret in the LQG (30 runs, 95% c.i.) and cumulative return in the Mountain Car (5 runs, 95% c.i.). DOMIST with a \u201c 8.1 and OPTIMIST have similar performance, RANDOMIST improves signi\ufb01cantly when setting a to 1.1. As in (Papini et al. 2019), the good performance of GPUCB is paired with a lack of theoretical guarantees due to the arbitrary choice of the GP kernel. Mountain Car To test RANDOMIST in a continuous parameter space, we employ the approximation described above in the Mountain Car environment (Sutton and Barto 2018). We consider the setting of (Papini et al. 2019), employing PGPE (Sehnke et al. 2008) and PB-POIS (Metelli et al. 2018) as baselines. We use a Gaussian hyperpolicy \u03bd\u03be \u201c Np\u03be, diagp0.15, 3q2q with learned mean \u03be, from which we sample the parameters of a deterministic policy, linear in position and velocity. The exploration phase is performed by sampling from the approximate density g: t, taking 10 steps of the Metropolis-Hastings algorithm (Owen 2013) with Gaussian proposal qm \u201c Np\u03b8m, diagp0.15, 3q2q. Figure 3 shows that RANDOMIST outperforms both policy gradient baselines and OPTIMIST, in terms of learning speed and \ufb01nal performance. 8 Related Works In this section, we revise the related literature, with attention to bandits with expert advice and to provably ef\ufb01cient PO. Additional comparisons are reported in Appendix B. Mediator Feedback and Expert Advice A related formulation are the Bandits with Expert Advice (BEA, Bubeck and Cesa-Bianchi 2012, Section 4.2), introduced as an approach to adversarial contextual bandits. To draw a parallelism with PO, let T be the set of arms and \u0398 \u201c rKs the \ufb01nite set of experts. At each step t, the agent receives advice pt \u03b8 P PpT q from each expert \u03b8 P \u0398, selects one expert \u03b8t, and pulls arm \u03c4t \u201e pt \u03b8t. The goal is to minimize the in-class regret, competing with the best expert in hindsight. 
Differently from the trajectory distributions of PO, expert advice can change with time. A major concern of BEA, also relevant to PO, is the dependency of the regret on the number K of experts (resp. policies). A na\u00a8 \u0131ve application of Exp3 (Auer et al. 2002) yields Op?nK log Kq regret. Like our PO algorithms, this is impractical when the experts are exponentially many. Exp4 (Auer et al. 2002) achieves Op a n|T | log Kq regret, which scales well with K, but is vacuous in the case of in\ufb01nite arms. McMahan and Streeter (2009) replace |T | with the degree of agreement of the experts, which has interesting similarities with our distributional-divergence approach. Meta-bandit approaches (Agarwal et al. 2017; Pacchiano et al. 2020) are so general that could be applied both to continuous-arm BEA and PO, but also exhibit a superlogarithmic dependence on K. Beygelzimer et al. (2011) obtain r Op ? dnq regret competing with an in\ufb01nite set of experts of VC-dimension d, mirrored in PO by OPTIMIST on compact spaces of dimension d (Papini et al. 2019, Theorem 3). Provably Ef\ufb01cient PO Recently, a surge of approaches to deal with PO in a theoretically sound way, with both stochastic or adversarial environments, has emerged. These works consider either full-information, i.e., the agent observes the whole reward function tRpsh, aquaPA regardless the played action (e.g., Rosenberg and Mansour 2019; Cai et al. 2019), or the bandit feedback (with a different meaning compared to the use we have made in this paper), in which only the reward of the chosen action is observed Rpsh, ahq (e.g., Jin et al. 2019; Efroni et al. 2020). These methods are not directly comparable with the mediator feedback, although both settings exploit the structure of the PO problem. While with MF we explicitly model the policy space \u03a0\u0398, these methods search in the space of all Markovian stationary policies. Furthermore, they are limited to tabular MDPs, while MF can deal natively with continuous state-action spaces. 9 Discussion and Conclusions We have deepened the understanding of policy optimization as an online learning problem with additional feedback. We believe that mediator feedback has potential applications even beyond PO. Indeed, the problem of optimizing over probability distributions also encompasses GANs and variational inference (Chu, Blanchet, and Glynn 2019) and, more generally, MF emerges in any Bayesian network in which we control the conditional distributions on some vertexes, via parameters \u03b8, while the other are \ufb01xed and independent from \u03b8. Furthermore, we have introduced a novel randomized algorithm, RANDOMIST, and we have shown its advantages both in terms of computational complexity and performance. The algorithm could be improved by adopting a different perturbation, e.g., Gaussian, as already hinted in (Kveton et al. 2019b). Further work is needed to match the theoretical regret lower bounds. Currently, a major discrepancy is the use of the KL-divergence in the lower bounds instead of the larger R\u00b4 enyi divergence required by algorithms based on IS. Moreover, the algorithm employs the ratio importance weight and, thus, it might suffer from the curse of horizon (Liu et al. 2018). Finally, the case of non-perfect mediator feedback could be related to graphical bandits (Alon et al. 2017), where \ufb01nite R\u00b4 enyi divergences are the edges of a directed feedback graph, in order to capture the actual dif\ufb01culty of this intermediate case. 
\fAcknowledgments This work has been partially supported by the Italian MIUR PRIN 2017 Project ALGADIMAR \u201dAlgorithms, Games, and Digital Markets\u201d." + }, + { + "url": "http://arxiv.org/abs/2002.06836v2", + "title": "Control Frequency Adaptation via Action Persistence in Batch Reinforcement Learning", + "abstract": "The choice of the control frequency of a system has a relevant impact on the\nability of reinforcement learning algorithms to learn a highly performing\npolicy. In this paper, we introduce the notion of action persistence that\nconsists in the repetition of an action for a fixed number of decision steps,\nhaving the effect of modifying the control frequency. We start analyzing how\naction persistence affects the performance of the optimal policy, and then we\npresent a novel algorithm, Persistent Fitted Q-Iteration (PFQI), that extends\nFQI, with the goal of learning the optimal value function at a given\npersistence. After having provided a theoretical study of PFQI and a heuristic\napproach to identify the optimal persistence, we present an experimental\ncampaign on benchmark domains to show the advantages of action persistence and\nproving the effectiveness of our persistence selection method.", + "authors": "Alberto Maria Metelli, Flavio Mazzolini, Lorenzo Bisi, Luca Sabbioni, Marcello Restelli", + "published": "2020-02-17", + "updated": "2020-07-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In recent years, Reinforcement Learning (RL, Sutton & Barto, 2018) has proven to be a successful approach to address complex control tasks: from robotic locomotion (e.g., Peters & Schaal, 2008; Kober & Peters, 2014; Haarnoja et al., 2019; Kilinc et al., 2019) to continuous system control (e.g., Schulman et al., 2015; Lillicrap et al., 2016; Schulman et al., 2017). These classes of problems are usually formalized in the framework of the discrete\u2013time Markov Decision Processes (MDP, Puterman, 2014), assuming that the control signal is issued at discrete time instants. However, many relevant real\u2013world problems are more naturally de\ufb01ned in the continuous\u2013time domain (Luenberger, 1979). Even though a branch of literature has studied RL in 1Politecnico di Milano, Milan, Italy. 2Institute for Scienti\ufb01c Interchange Foundation, Turin, Italy. Correspondence to: Alberto Maria Metelli . Proceedings of the 37 th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s). continuous\u2013time MDPs (Bradtke & Duff, 1994; Munos & Bourgine, 1997; Doya, 2000), the majority of the research has focused on the discrete\u2013time formulation, which appears to be a necessary, but effective, approximation. Intuitively, increasing the control frequency of the system offers the agent more control opportunities, possibly leading to improved performance as the agent has access to a larger policy space. This might wrongly suggest that we should control the system with the highest frequency possible, within its physical limits. However, in the RL framework, the environment dynamics is unknown, thus, a too \ufb01ne discretization could result in the opposite effect, making the problem harder to solve. Indeed, any RL algorithm needs samples to \ufb01gure out (implicitly or explicitly) how the environment evolves as an effect of the agent\u2019s actions. 
When increasing the control frequency, the advantage of individual actions becomes in\ufb01nitesimal, making them almost indistinguishable for standard value-based RL approaches (Tallec et al., 2019). As a consequence, the sample complexity increases. Instead, low frequencies allow the environment to evolve longer, making the effect of individual actions more easily detectable. Furthermore, in the presence of a system characterized by a \u201cslowly evolving\u201d dynamics, the gain obtained by increasing the control frequency might become negligible. Finally, in robotics, lower frequencies help to overcome some partial observability issues, like action execution delays (Kober & Peters, 2014). Therefore, we experience a fundamental trade\u2013off in the control frequency choice that involves the policy space (larger at high frequency) and the sample complexity (smaller at low frequency). Thus, it seems natural to wonder: \u201cwhat is the optimal control frequency?\u201d An answer to this question can disregard neither the task we are facing nor the learning algorithm we intend to employ. Indeed, the performance loss we experience by reducing the control frequency depends strictly on the properties of the system and, thus, of the task. Similarly, the dependence of the sample complexity on the control frequency is related to how the learning algorithm will employ the collected samples. In this paper, we analyze and exploit this trade\u2013off in the context of batch RL (Lange et al., 2012), with the goal of enhancing the learning process and achieving higher perforarXiv:2002.06836v2 [cs.LG] 12 Jul 2020 \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning mance. We assume to have access to a discrete\u2013time MDP M\u2206t0, called base MDP, which is obtained from the time discretization of a continuous\u2013time MDP with \ufb01xed base control time step \u2206t0, or equivalently, a control frequency equal to f0\u201c 1 \u2206t0 . In this setting, we want to select a suitable control time step \u2206t that is an integer multiple of the base time step \u2206t0, i.e., \u2206t\u201ck\u2206t0 with kPN\u011b1.1 Any choice of k generates an MDP Mk\u2206t0 obtained from the base one M\u2206t0 by altering the transition model so that each action is repeated for k times. For this reason, we refer to k as the action persistence, i.e., the number of decision epochs in which an action is kept \ufb01xed. It is possible to appreciate the same effect in the base MDP M\u2206t0 by executing a (non-Markovian and non-stationary) policy that persists every action for k time steps. The idea of repeating actions has been previously employed, although heuristically, with deep RL architectures (Lakshminarayanan et al., 2017). The contributions of this paper are theoretical, algorithmic, and experimental. We \ufb01rst prove that action persistence (with a \ufb01xed k) can be represented by a suitable modi\ufb01cation of the Bellman operators, which preserves the contraction property and, consequently, allows deriving the corresponding value functions (Section 3). Since increasing the duration of the control time step k\u2206t0 has the effect of degrading the performance of the optimal policy, we derive an algorithm\u2013independent bound for the difference between the optimal value functions of MDPs M\u2206t0 and Mk\u2206t0, which holds under Lipschitz conditions. 
The result con\ufb01rms the intuition that the performance loss is strictly related to how fast the environment evolves as an effect of the actions (Section 4). Then, we apply the notion of action persistence in the batch RL scenario, proposing and analyzing an extension of Fitted Q-Iteration (FQI, Ernst et al., 2005). The resulting algorithm, Persistent Fitted Q-Iteration (PFQI) takes as input a target persistence k and estimates the corresponding optimal value function, assuming to have access to a dataset of samples collected in the base MDP M\u2206t0 (Section 5). Once we estimate the value function for a set of candidate persistences K\u0102N\u011b1, we aim at selecting the one that yields the best performing greedy policy. Thus, we introduce a persistence selection heuristic able to approximate the optimal persistence, without requiring further interactions with the environment (Section 6). After having revised the literature (Section 7), we present an experimental evaluation on benchmark domains, to con\ufb01rm our theoretical \ufb01ndings and evaluate our persistence selection method (Section 8). We conclude by discussing some 1We are considering the near\u2013continuous setting. This is almost w.l.o.g. compared to the continuous time since the discretization time step \u2206t0 can be chosen to be arbitrarily small. Typically, a lower bound on \u2206t0 is imposed by the physical limitations of the system. Thus, we restrict the search of \u2206t from the continuous set R\u01050 to the discrete set tk\u2206t0,kPN\u011b1u. Moreover, considering an already discretized MDP simpli\ufb01es the mathematical treatment. open questions related to action persistence (Section 9). The proofs of all the results are available in Appendix A. 2. Preliminaries In this section, we introduce the notation and the basic notions that we will employ in the remainder of the paper. Mathematical Background Let X be a set with a \u03c3algebra \u03c3X , we denote with PpXq the set of all probability measures and with BpXq the set of all bounded measurable functions over pX,\u03c3X q. If xPX, we denote with \u03b4x the Dirac measure de\ufb01ned on x. Given a probability measure \u03c1PPpXq and a measurable function f PBpXq, we abbreviate \u03c1f \u201c \u015f X fpxq\u03c1pdxq (i.e., we use \u03c1 as an operator). Moreover, we de\ufb01ne the Lpp\u03c1q-norm of f as }f}p p,\u03c1\u201c \u015f X |fpxq|p\u03c1pdxq for p\u011b1, whereas the L8-norm is de\ufb01ned as }f}8\u201csupxPX fpxq. Let D\u201ctxiun i\u201c1\u010eX we de\ufb01ne the Lpp\u03c1q empirical norm as }f}p p,D\u201c 1 n \u0159n i\u201c1|fpxiq|p. Markov Decision Processes A discrete-time Markov Decision Process (MDP, Puterman, 2014) is a 5-tuple M\u201c pS,A,P,R,\u03b3q, where S is a measurable set of states, A is a measurable set of actions, P :S\u02c6A\u00d1PpSq is the transition kernel that for each state-action pair ps,aqPS\u02c6A provides the probability distribution Pp\u00a8|s,aq of the next state, R:S\u02c6A\u00d1PpRq is the reward distribution Rp\u00a8|s,aq for performing action aPA in state sPS, whose expected value is denoted by rps,aq\u201c \u015f RxRpdx|s,aq and uniformly bounded by Rmax\u0103`8, and \u03b3Pr0,1q is the discount factor. A policy \u03c0\u201cp\u03c0tqtPN is a sequence of functions \u03c0t:Ht\u00d1 PpAq mapping a history Ht\u201cpS0,A0,...,St\u00b41,At\u00b41,Stq of length tPN to a probability distribution over A, where Ht\u201cpS\u02c6Aqt\u02c6S. 
If \u03c0t depends only on the last visited state St then it is called Markovian, i.e., \u03c0t:S\u00d1PpAq. Moreover, if \u03c0t does not depend on explicitly t it is stationary, in this case we remove the subscript t. We denote with \u03a0 the set of Markovian stationary policies. A policy \u03c0P\u03a0 induces a (state-action) transition kernel P \u03c0:S\u02c6A\u00d1PpS\u02c6Aq, de\ufb01ned for any measurable set B\u010eS\u02c6A as (Farahmand, 2011): pP \u03c0qpB|s,aq\u201c \u017c S Ppds1|s,aq \u017c A \u03c0pda1|s1q\u03b4ps1,a1qpBq. (1) The action-value function, or Q-function, of a policy \u03c0P\u03a0 is the expected discounted sum of the rewards obtained by performing action a in state s and following policy \u03c0 thereafter Q\u03c0ps,aq\u201cE \u201c\u0159`8 t\u201c0\u03b3tRt|S0\u201cs,A0\u201ca \u2030 , where Rt\u201eRp\u00a8|St,Atq, St`1\u201ePp\u00a8|St,Atq, and At`1\u201e\u03c0p\u00a8|St`1q for all tPN. The value function is the expectation of the Q-function over the actions: V \u03c0psq\u201c \u015f A\u03c0pda|sqQ\u03c0ps,aq. Given a distribution \u03c1PPpSq, we de\ufb01ne the expected return as J\u03c1,\u03c0psq\u201c \u015f S \u03c1pdsqV \u03c0psq. The optimal Q-function is given by: Q\u02daps,aq\u201csup\u03c0P\u03a0Q\u03c0ps,aq for all ps,aqPS\u02c6A. \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning A policy \u03c0 is greedy w.r.t. a function f PBpS\u02c6Aq if it plays only greedy actions, i.e., \u03c0p\u00a8|sqPPpargmaxaPAfps,aqq. An optimal policy \u03c0\u02daP\u03a0 is any policy greedy w.r.t. Q\u02da. Given a policy \u03c0P\u03a0, the Bellman Expectation Operator T \u03c0: BpS\u02c6Aq\u00d1BpS\u02c6Aq and the Bellman Optimal Operator T \u02da:BpS\u02c6Aq\u00d1BpS\u02c6Aq are de\ufb01ned for a bounded measurable function f PBpS\u02c6Aq and ps,aqPS\u02c6A as (Bertsekas & Shreve, 2004): pT \u03c0fqps,aq\u201crps,aq`pP \u03c0fqps,aq, pT \u02dafqps,aq\u201crps,aq`\u03b3 \u017c S Ppds1|s,aqmax a1PAfps1,a1q. Both T \u03c0 and T \u02da are \u03b3-contractions in L8-norm and, consequently, they have a unique \ufb01xed point, that are the Q-function of policy \u03c0 (T \u03c0Q\u03c0\u201cQ\u03c0) and the optimal Qfunction (T \u02daQ\u02da\u201cQ\u02da) respectively. Lipschitz MDPs Let pX,dX q and pY,dYq be two metric spaces, a function f :X \u00d1Y is called Lf-Lipschitz continuous (Lf-LC), where Lf \u011b0, if for all x,x1PX we have: dYpfpxq,fpx1qq\u010fLfdX px,x1q. (2) Moreover, we de\ufb01ne the Lipschitz semi-norm as }f}L\u201c supx,x1PX:x\u2030x1 dYpfpxq,fpx1qq dX px,x1q . For real functions we employ Euclidean distance dYpy,y1q\u201c}y\u00b4y1}2, while for probability distributions we use the Kantorovich (L1-Wasserstein) metric de\ufb01ned for \u00b5,\u03bdPPpZq as (Villani, 2008): dYp\u00b5,\u03bdq\u201cW1p\u00b5,\u03bdq\u201c sup f:}f}L\u010f1 \u02c7 \u02c7 \u02c7 \u02c7 \u017c Z fpzqp\u00b5\u00b4\u03bdqpdzq \u02c7 \u02c7 \u02c7 \u02c7. (3) We now introduce the notions of Lipschitz MDP and Lipschitz policy that we will employ in the following (Rachelson & Lagoudakis, 2010; Pirotta et al., 2015). Assumption 2.1 (Lipschitz MDP). Let M be an MDP. M is called pLP ,Lrq-LC if for all ps,aq,ps,aqPS\u02c6A: W1pPp\u00a8|s,aq,Pp\u00a8|s,aqq\u010fLP dS\u02c6Apps,aq,ps,aqq, |rps,aq\u00b4rps,aq|\u010fLrdS\u02c6Apps,aq,ps,aqq. Assumption 2.2 (Lipschitz Policy). Let \u03c0P\u03a0 be a Markovian stationary policy. \u03c0 is called L\u03c0-LC if for all s,sPS: W1p\u03c0p\u00a8|sq,\u03c0p\u00a8|sqq\u010fL\u03c0dS ps,sq. 3. 
Persisting Actions in MDPs By the phrase \u201cexecuting a policy \u03c0 at persistence k\u201d, with kPN\u011b1, we mean the following type of agent-environment interaction. At decision step t\u201c0, the agent selects an action according to its policy A0\u201e\u03c0p\u00a8|S0q. Action A0 is kept \ufb01xed, or persisted, for the subsequent k\u00b41 decision steps, i.e., actions A1,...,Ak\u00b41 are all equal to A0. Then, at decision step t\u201ck, the agent queries again the policy Ak\u201e\u03c0p\u00a8|Skq and persists action Ak for the subsequent k\u00b41 decision steps and so on. In other words, the agent employs its policy only at decision steps t that are integer multiples of the persistence k (t mod k\u201c0). Clearly, the usual execution of \u03c0 corresponds to persistence 1. 3.1. Duality of Action Persistence Unsurprisingly, the execution of a Markovian stationary policy \u03c0 at persistence k\u01051 produces a behavior that, in general, cannot be represented by executing any Markovian stationary policy at persistence 1. Indeed, at any decision step t, such a policy needs to remember which action was taken at the previous decision step t\u00b41 (thus it is nonMarkovian with memory 1) and has to understand whether to select a new action based on t (so it is non-stationary). De\ufb01nition 3.1 (k-persistent policy). Let \u03c0P\u03a0 be a Markovian stationary policy. For any kPN\u011b1, the k-persistent policy induced by \u03c0 is a non\u2013Markovian non\u2013stationary policy, de\ufb01ned for any measurable set B\u010eA and tPN as: \u03c0t,kpB|Htq\u201c # \u03c0pB|Stq if t mod k\u201c0 \u03b4At\u00b41pBq otherwise . (4) Moreover, we denote with \u03a0k\u201ctp\u03c0t,kqtPN :\u03c0P\u03a0u the set of the k-persistent policies. Clearly, for k\u201c1 we recover policy \u03c0 as we always satisfy the condition t mod k\u201c0 i.e., \u03c0\u201c\u03c0t,1 for all tPN. We refer to this interpretation of action persistence as policy view. A different perspective towards action persistence consists in looking at the effect of the original policy \u03c0 in a suitably modi\ufb01ed MDP. To this purpose, we introduce the (stateaction) persistent transition probability kernel P \u03b4:S\u02c6A\u00d1 PpS\u02c6Aq de\ufb01ned for any measurable set B\u010eS\u02c6A as: pP \u03b4qpB|s,aq\u201c \u017c S Ppds1|s,aq\u03b4ps1,aqpBq. (5) The crucial difference between P \u03c0 and P \u03b4 is that the former samples the action a1 to be executed in the next state s1 according to \u03c0, whereas the latter replicates in state s1 action a. We are now ready to de\ufb01ne the k-persistent MDP. De\ufb01nition 3.2 (k-persistent MDP). Let M be an MDP. For any kPN\u011b1, the k-persistent MDP is the following MDP Mk\u201c ` S,A,Pk,Rk,\u03b3k\u02d8 , where Pk and Rk are the k-persistent transition model and reward distribution respectively, de\ufb01ned for any measurable sets B\u010eS , C\u010eR and state-action pair ps,aqPS\u02c6A as: PkpB|s,aq\u201c ` pP \u03b4qk\u00b41P \u02d8 pB|s,aq, (6) RkpC|s,aq\u201c k\u00b41 \u00ff i\u201c0 \u03b3i` pP \u03b4qiR \u02d8 pC|s,aq, (7) and rkps,aq\u201c \u015f RxRkpdx|s,aq\u201c\u0159k\u00b41 i\u201c0 \u03b3i` pP \u03b4qir \u02d8 ps,aq is the expected reward, uniformly bounded by Rmax 1\u00b4\u03b3k 1\u00b4\u03b3 . The k-persistent transition model Pk keeps action a \ufb01xed for k\u00b41 steps while making the state evolve according to P. 
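As a practical illustration of this environment view, the following sketch (ours) wraps a Gym-style base environment, assumed to expose reset() and a step() returning (state, reward, done, info), so that each call to step persists the chosen action for k base steps and returns the k-step discounted reward of Equation (7).

```python
class PersistedEnv:
    """Environment view of action persistence: one step of this wrapper
    equals k steps of the base environment with the same action."""

    def __init__(self, base_env, k, gamma):
        self.base_env, self.k, self.gamma = base_env, k, gamma

    def reset(self):
        return self.base_env.reset()

    def step(self, action):
        state, total_reward, done, info = None, 0.0, False, {}
        for i in range(self.k):
            state, reward, done, info = self.base_env.step(action)
            total_reward += (self.gamma ** i) * reward   # k-step reward, Eq. (7)
            if done:                                     # episode may end early
                break
        return state, total_reward, done, info
```

A learner interacting with this wrapper should use the reduced discount factor gamma ** k, consistently with Definition 3.2.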
\fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning S0 S1 S2 Sk\u22121 Sk Sk+1 A0\u223c\u03c0(\u00b7|S0) A1\u223c\u03c0(\u00b7|S1) Ak\u22121\u223c\u03c0(\u00b7|Sk\u22121) Ak\u223c\u03c0(\u00b7|Sk) S0 S1 S2 Sk\u22121 Sk Sk+1 A0\u223c\u03c0(\u00b7|S0) [A1=A0] [Ak\u22121=A0] Ak\u223c\u03c0(\u00b7|Sk) A0 is persisted Ak is persisted Figure 1. Agent-environment interaction without (top) and with (bottom) action persistence, highlighting duality. The transition generated by the k-persistent MDP Mk is the cyan dashed arrow, while the actions played by the k-persistent policy are inside the cyan rectangle. Similarly, the k-persistent reward Rk provides the cumulative discounted reward over k steps in which a is persisted. We de\ufb01ne the transition kernel P \u03c0 k , analogously to P \u03c0, as in Equation (1). Clearly, for k\u201c1 we recover the base MDP, i.e., M\u201cM1.2 Therefore, executing policy \u03c0 in Mk at persistence 1 is equivalent to executing policy \u03c0 at persistence k in the original MDP M. We refer to this interpretation of persistence as environment view (Figure 1). Thus, solving the base MDP M in the space of k-persistent policies \u03a0k (De\ufb01nition 3.1), thanks to this duality, is equivalent to solving the k-persistent MDP Mk (De\ufb01nition 3.2) in the space of Markovian stationary policies \u03a0. It is worth noting that the persistence kPN\u011b1 can be seen as an environmental parameter (affecting P, R, and \u03b3), which can be externally con\ufb01gured with the goal to improve the learning process for the agent. In this sense, the MDP Mk can be seen as a Con\ufb01gurable Markov Decision Process with parameter kPN\u011b1 (Metelli et al., 2018; 2019). Furthermore, a persistence of k induces a k-persistent MDP Mk with smaller discount factor \u03b3k. Therefore, the effective horizon in Mk is 1 1\u00b4\u03b3k \u0103 1 1\u00b4\u03b3 . Interestingly, the end effect of persisting actions is similar to reducing the planning horizon, by explicitly reducing the discount factor of the task (Petrik & Scherrer, 2008; Jiang et al., 2016) or setting a maximum trajectory length (Farahmand et al., 2016). 3.2. Persistent Bellman Operators When executing policy \u03c0 at persistence k in the base MDP M, we can evaluate its performance starting from any stateaction pair ps,aqPS\u02c6A, inducing a Q-function that we denote with Q\u03c0 k and call k-persistent action-value function of \u03c0. Thanks to duality, Q\u03c0 k is also the action-value function of policy \u03c0 when executed in the k-persistent MDP Mk. Therefore, Q\u03c0 k is the \ufb01xed point of the Bellman Expectation Operator of Mk, i.e., the operator de\ufb01ned for any f P BpS\u02c6Aq as pT \u03c0 k fqps,aq\u201crkps,aq`\u03b3kpP \u03c0 k fqps,aq, that we call k-persistent Bellman Expectation Operator. Similarly, again thanks to duality, the optimal Q-function in the space of k-persistent policies \u03a0k, denoted by Q\u02da k and 2If M is the base MDP M\u2206t0, the k\u2013persistent MDP Mk corresponds to Mk\u2206t0. We remove the subscript \u2206t0 for brevity. called k-persistent optimal action-value function, corresponds to the optimal Q-function of the k-persistent MDP, i.e., Q\u02da kps,aq\u201csup\u03c0P\u03a0Q\u03c0 kps,aq for all ps,aqPS\u02c6A. 
As a consequence, Q\u02da k is the \ufb01xed point of the Bellman Optimal Operator of Mk, de\ufb01ned for f PBpS\u02c6Aq as pT \u02da k fqps,aq\u201c rkps,aq`\u03b3k\u015f S Pkpds1|s,aqmaxa1PAfps1,a1q, that we call k-persistent Bellman Optimal Operator. Clearly, both T \u03c0 k and T \u02da k are \u03b3k-contractions in L8-norm. We now prove that the k-persistent Bellman operators are obtained as composition of the base operators T \u03c0 and T \u02da. Theorem 3.1. Let M be an MDP, kPN\u011b1 and Mk be the kpersistent MDP. Let \u03c0P\u03a0 be a Markovian stationary policy. Then, T \u03c0 k and T \u02da k can be expressed as: T \u03c0 k \u201c ` T \u03b4\u02d8k\u00b41T \u03c0 and T \u02da k \u201c ` T \u03b4\u02d8k\u00b41T \u02da, (8) where T \u03b4:BpS\u02c6Aq\u00d1BpS\u02c6Aq is the Bellman Persistent Operator, de\ufb01ned for f PBpS\u02c6Aq and ps,aqPS\u02c6A: ` T \u03b4f \u02d8 ps,aq\u201crps,aq`\u03b3 ` P \u03b4f \u02d8 ps,aq. (9) The \ufb01xed point equations for the k-persistent Q-functions become: Q\u03c0 k \u201c ` T \u03b4\u02d8k\u00b41T \u03c0Q\u03c0 k and Q\u02da k \u201c ` T \u03b4\u02d8k\u00b41T \u02daQ\u02da k. 4. Bounding the Performance Loss Learning in the space of k-persistent policies \u03a0k can only lower the performance of the optimal policy, i.e., Q\u02daps,aq\u011b Q\u02da kps,aq for kPN\u011b1. The goal of this section is to bound }Q\u02da\u00b4Q\u02da k}p,\u03c1 as a function of the persistence kPN\u011b1. To this purpose, we focus on }Q\u03c0\u00b4Q\u03c0 k}p,\u03c1 for a \ufb01xed policy \u03c0P\u03a0, since denoting with \u03c0\u02da an optimal policy of M and with \u03c0\u02da k an optimal policy of Mk, we have that: Q\u02da\u00b4Q\u02da k \u201cQ\u03c0\u02da \u00b4Q \u03c0\u02da k k \u010fQ\u03c0\u02da \u00b4Q\u03c0\u02da k , since Q \u03c0\u02da k k ps,aq\u011bQ\u03c0\u02da k ps,aq. We start with the following result which makes no assumption about the structure of the MDP and then we particularize it for the Lipschitz MDPs. Theorem 4.1. Let M be an MDP and \u03c0P\u03a0 be a Markovian stationary policy. Let Qk\u201ct ` T \u03b4\u02d8k\u00b42\u00b4lT \u03c0Q\u03c0 k :lP t0,...,k\u00b42uu and for all ps,aqPS\u02c6A let us de\ufb01ne: d\u03c0 Qkps,aq\u201c sup fPQk \u02c7 \u02c7 \u02c7 \u02c7 \u017c S \u017c A ` P \u03c0pds1,da1|s,aq\u00b4P \u03b4pds1,da1|s,aq \u02d8 fps1,a1q \u02c7 \u02c7 \u02c7 \u02c7. \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning Then, for any \u03c1PPpS\u02c6Aq, p\u011b1, and kPN\u011b1, it holds that: }Q\u03c0\u00b4Q\u03c0 k}p,\u03c1\u010f \u03b3p1\u00b4\u03b3k\u00b41q p1\u00b4\u03b3qp1\u00b4\u03b3kq \u203a \u203ad\u03c0 Qk \u203a \u203a p,\u03b7\u03c1,\u03c0 k , where \u03b7\u03c1,\u03c0 k PPpS\u02c6Aq is a probability measure de\ufb01ned for any measurable set B\u010eS\u02c6A as: \u03b7\u03c1,\u03c0 k pBq\u201c p1\u00b4\u03b3qp1\u00b4\u03b3kq \u03b3p1\u00b4\u03b3k\u00b41q \u00ff iPN i mod k\u20300 \u03b3i\u00b4 \u03c1pP \u03c0qi\u00b41\u00af pBq. The bound shows that the Q-function difference depends on the discrepancy d\u03c0 Qk between the transition-kernel P \u03c0 and the corresponding persistent version P \u03b4, which is a form of integral probability metric (M\u00a8 uller, 1997), de\ufb01ned in terms of the set Qk. This term is averaged with the distribution \u03b7\u03c1,\u03c0 k , which encodes the (discounted) probability of visiting a state-action pair, ignoring the visitations made at decision steps i that are multiple of the persistence k. 
Indeed, in those steps, we play policy \u03c0 regardless of whether persistence is used.3 The dependence on k is represented in the term 1\u00b4\u03b3k\u00b41 1\u00b4\u03b3k . When k\u00d11 this term displays a linear growth in k, being asymptotic to pk\u00b41qlog 1 \u03b3 , and, clearly, vanishes for k\u201c1. Instead, when k\u00d18 this term tends to 1. If no structure on the MDP/policy is enforced, the dissimilarity term d\u03c0 Qk may become large enough to make the bound vacuous, i.e., larger than \u03b3Rmax 1\u00b4\u03b3 , even for k\u201c2 (see Appendix B.1). Intuitively, since the persistence will execute old actions in new states, we need to guarantee that the environment state changes slowly w.r.t. to time and the policy must play similar actions in similar states. This means that if an action is good in a state, it will also be almost good for states encountered in the near future. Although the condition on \u03c0 is directly enforced by Assumption 2.2, we need a new notion of regularity over time for the MDP. Assumption 4.1. Let M be an MDP. M is LT \u2013TimeLipschitz Continuous (LT \u2013TLC) if for all ps,aqPS\u02c6A: W1pPp\u00a8|s,aq,\u03b4sq\u010fLT . (10) This assumption requires that the Kantorovich distance between the distribution of the next state s1 and the deterministic distribution centered in the current state s is bounded by LT , i.e., the system does not evolve \u201ctoo fast\u201d (see Appendix B.3). We can now state the following result. Theorem 4.2. Let M be an MDP and \u03c0P\u03a0 be a Markovian stationary policy. Under Assumptions 2.1, 2.2, and 4.1, if \u03b3maxtLP `1,LP p1`L\u03c0qu\u01031 and if \u03c1ps,aq\u201c \u03c1Spsq\u03c0pa|sq with \u03c1S PPpSq, then for any kPN\u011b1: \u203a \u203ad\u03c0 Qk \u203a \u203a p,\u03b7\u03c1,\u03c0 k \u010fLQk rpL\u03c0`1qLT `\u03c3ps. where \u03c3p p\u201csupsPS \u015f A \u015f AdApa,a1qp\u03c0pda|sq\u03c0pda1|sq, and 3\u03b7\u03c1,\u03c0 k resambles the \u03b3-discounted state-action distribution (Sutton et al., 1999a), but ignoring the decision steps multiple of k. LQk \u201c Lr 1\u00b4\u03b3maxtLP `1,LP p1`L\u03c0qu. Thus, the dissimilarity d\u03c0 Qk between P \u03c0 and P \u03b4 can be bounded with four terms. i) LQk is (an upper-bound of) the Lipschitz constant of the functions in the set Qk. Indeed, under Assumptions 2.1 and 2.2 we can reduce the dissimilatity term to the Kantorivich distance (Lemma A.5): d\u03c0 Qkps,aq\u010fLQkW1 ` P \u03c0p\u00a8|s,aq,P \u03b4p\u00a8|s,aq \u02d8 . ii) pL\u03c0`1q accounts for the Lipschitz continuity of the policy, i.e., policies that prescribe similar actions in similar states have a small value of this quantity. iii) LT represents the speed at which the environment state evolves over time. iv) \u03c3p denotes the average distance (in Lp-norm) between two actions prescribed by the policy in the same state. This term is zero for deterministic policies and can be related to the maximum policy variance (Lemma A.6). A more detailed discussion on the conditions requested in Theorem 4.2 is reported in Appendix B.4. 5. Persistent Fitted Q-Iteration In this section, we introduce an extension of Fitted QIteration (FQI, Ernst et al., 2005) that employs the notion of persistence.4 Persisted Fitted Q-Iteration (PFQI(k)) takes as input a target persistence kPN\u011b1 and its goal is to approximate the k-persistent optimal action-value function Q\u02da k. 
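For reference, when the transition model is known and the MDP is finite, Q*_k could in principle be computed exactly by building M_k as in Definition 3.2 and iterating the k-persistent Bellman optimal operator. The sketch below (ours, with P stored as an |S| x |A| x |S| array and a deliberately tiny toy model) serves only as a baseline for the sample-based procedure described next.

```python
import numpy as np

def persistent_model(P, r, gamma, k):
    """Build P_k and r_k of Definition 3.2 from a tabular base model.
    P[s, a, s'] = P(s'|s, a); r[s, a] is the expected base reward."""
    S, A, _ = P.shape
    P_k, r_k = np.empty_like(P), np.zeros_like(r)
    for a in range(A):
        P_a = P[:, a, :]                 # |S| x |S| kernel for fixed action a
        P_a_pow = np.eye(S)              # (P^delta)^i restricted to action a
        for i in range(k):
            r_k[:, a] += (gamma ** i) * (P_a_pow @ r[:, a])   # Eq. (7)
            P_a_pow = P_a_pow @ P_a
        P_k[:, a, :] = P_a_pow           # (P^delta)^{k-1} P = P_a^k, Eq. (6)
    return P_k, r_k

def value_iteration(P_k, r_k, gamma_k, n_iter=200):
    """Fixed point of the k-persistent Bellman optimal operator T*_k."""
    Q = np.zeros_like(r_k)
    for _ in range(n_iter):
        V = Q.max(axis=1)
        Q = r_k + gamma_k * np.einsum('sab,b->sa', P_k, V)
    return Q

# Tiny illustrative 2-state, 2-action model (values are arbitrary).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
r = np.array([[1.0, 0.0], [0.0, 2.0]])
gamma, k = 0.95, 4
P_k, r_k = persistent_model(P, r, gamma, k)
Q_star_k = value_iteration(P_k, r_k, gamma ** k)
```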
Starting from an initial estimate Qp0q, at each iteration we compute the next estimate Qpj`1q by performing an approximate application of k-persistent Bellman optimal operator to the previous estimate Qpjq, i.e., Qpj`1q\u00abT \u02da k Qpjq. In practice, we have two sources of approximation in this process: i) the representation of the Q-function; ii) the estimation of the k-persistent Bellman optimal operator. (i) comes from the necessity of using functional space F \u0102BpS\u02c6Aq to represent Qpjq when dealing with continuous state spaces. (ii) derives from the approximate computation of T \u02da k which needs to be estimated from samples. Clearly, with samples collected in the k-persistent MDP Mk, the process described above reduces to the standard FQI. However, our algorithm needs to be able to estimate Q\u02da k for different values of k, using the same dataset of samples collected in the base MDP M (at persistence 1).5 For this purpose, we can exploit the decomposition T \u02da k \u201cpT \u03b4qk\u00b41T \u02da of Theorem 3.1 to reduce a single application of T \u02da k to a sequence of k applications of the 1-persistent operators. Speci\ufb01cally, at each iteration j with j mod k\u201c0, given the current estimate Qpjq, we need to perform (in this order) a single application of T \u02da followed by k\u00b41 applica4From now on, we assume that |A|\u0103`8. 5In real\u2013world cases, we might be unable to interact with the physical system to collect samples for any persistence k of interest. \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning Algorithm 1 Persistent Fitted Q-Iteration PFQI(k). Input: k persistence, J number of iterations (J mod k\u201c0), Qp0q initial action-value function, F functional space, D\u201c tpSi,Ai,S1 i,Riqun i\u201c1 batch samples Output: greedy policy \u03c0pJq for j\u201c0,...,J \u00b41 do if j mod k\u201c0 then Y pjq i \u201c p T \u02daQpjqpSi,Aiq, i\u201c1,...,n else Y pjq i \u201c p T \u03b4QpjqpSi,Aiq, i\u201c1,...,n end if Qpj`1qParginffPF \u203a \u203af \u00b4Y pjq\u203a \u203a2 2,D end for \u03c0pJqpsqPargmaxaPAQpJqps,aq, @sPS Phase 1 Phase 2 Phase 3 tions of T \u03b4, leading to the sequence of approximations: Qpj`1q\u00ab # T \u02daQpjq if j mod k\u201c0 T \u03b4Qpjq otherwise . (11) In order to estimate the Bellman operators, we have access to a dataset D\u201ctpSi,Ai,S1 i,Riqun i\u201c1 collected in the base MDP M, where pSi,Aiq\u201e\u03bd, S1 i\u201ePp\u00a8|Si,Aiq, Ri\u201e Rp\u00a8|Si,Aiq, and \u03bdPPpS\u02c6Aq is a sampling distribution. We employ D to compute the empirical Bellman operators (Farahmand, 2011) de\ufb01ned for f PBpS\u02c6Aq as: p p T \u02dafqpSi,Aiq\u201cRi`\u03b3maxaPAfpS1 i,aq i\u201c1,...,n p p T \u03b4fqpSi,Aiq\u201cRi`\u03b3fpS1 i,Aiq i\u201c1,...,n. These operators are unbiased conditioned to D (Farahmand, 2011): Erp p T \u02dafqpSi,Aiq|Si,Ais\u201cpT \u02dafqpSi,Aiq and Erp p T \u03b4fqpSi,Aiq|Si,Ais\u201cpT \u03b4fqpSi,Aiq. The pseudocode of PFQI(k) is summarized in Algorithm 1. At each iteration j\u201c0,...J \u00b41, we \ufb01rst compute the target values Y pjq by applying the empirical Bellman operators, p T \u02da or p T \u03b4, on the current estimate Qpjq (Phase 1). Then, we project the target Y pjq onto the functional space F by solving the least squares problem (Phase 2): Qpj`1qParginf fPF \u203a \u203a \u203af \u00b4Y pjq\u203a \u203a \u203a 2 2,D\u201c 1 n n \u00ff i\u201c1 \u02c7 \u02c7 \u02c7fpSi,Aiq\u00b4Y pjq i \u02c7 \u02c7 \u02c7 2 . 
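A compact sketch of Phases 1 and 2 is reported below (our own rendering, not the authors' code). It assumes the batch is given as NumPy arrays of states, action indices, rewards, and next states, uses a plain concatenation of state and action as regression features, and adopts an extra-trees regressor for the projection step, with illustrative hyperparameters.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def pfqi(S, A_idx, R, S_next, action_set, gamma, k, J):
    """PFQI(k): one empirical T* step followed by k-1 empirical T^delta steps
    (Eq. (11)), each projected onto the regressor's hypothesis space."""
    assert J % k == 0
    n = len(R)
    X = np.column_stack([S, A_idx])          # plain (state, action) features
    q = None                                 # Q^(0) = 0 everywhere

    def q_values(states, acts):
        if q is None:
            return np.zeros(len(states))
        return q.predict(np.column_stack([states, acts]))

    for j in range(J):
        if j % k == 0:    # empirical T*: maximize over the (finite) action set
            next_q = np.max(
                [q_values(S_next, np.full(n, a)) for a in action_set], axis=0)
        else:             # empirical T^delta: persist the action in the batch
            next_q = q_values(S_next, A_idx)
        y = R + gamma * next_q                               # Phase 1: targets
        q = ExtraTreesRegressor(n_estimators=50).fit(X, y)   # Phase 2: projection
    return q
```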
Finally, we compute the approximation of the optimal policy \u03c0pJq, i.e., the greedy policy w.r.t. QpJq (Phase 3). 5.1. Theoretical Analysis In this section, we present the computational complexity analysis and the study of the error propagation in PFQI(k). Computational Complexity The computational complexity of PFQI(k) decreases monotonically with the persistence k. Whenever applying p T \u03b4, we need a single evaluation of Qpjq, while |A| evaluations are needed for p T \u02da due to the max over A. Thus, the overall complexity of J iterations of PFQI(k) with n samples, disregarding the cost of regression and assuming that a single evaluation of Qpjq takes constant time, is given by OpJnp1`p|A|\u00b41q{kqq (Proposition A.1). Error Propagation We now consider the error propagation in PFQI(k). Given the sequence of Q-functions estimates pQpjqqJ j\u201c0\u0102F produced by PFQI(k), we de\ufb01ne the approximation error at each iteration j\u201c0,...,J \u00b41 as: \u03f5pjq\u201c # T \u02daQpjq\u00b4Qpj`1q if j mod k\u201c0 T \u03b4Qpjq\u00b4Qpj`1q otherwise . (12) The goal of this analysis is to bound the distance between the k\u2013persistent optimal Q-function Q\u02da k and the Q-function Q\u03c0pJq k of the greedy policy \u03c0pJq w.r.t. QpJq, after J iterations of PFQI(k). The following result extends Theorem 3.4 of Farahmand (2011) to account for action persistence. Theorem 5.1 (Error Propagation for PFQI(k)). Let p\u011b1, kPN\u011b1, J PN\u011b1 with J mod k\u201c0 and \u03c1PPpS\u02c6Aq. Then for any sequence pQpjqqJ j\u201c0\u0102F uniformly bounded by Qmax\u010f Rmax 1\u00b4\u03b3 , the corresponding p\u03f5pjqqJ\u00b41 j\u201c0 de\ufb01ned in Equation (12) and for any rPr0,1s and qPr1,`8s it holds that: \u203a \u203a \u203aQ\u02da k \u00b4Q\u03c0pJq k \u203a \u203a \u203a p,\u03c1\u010f 2\u03b3k p1\u00b4\u03b3qp1\u00b4\u03b3kq \u201e 2 1\u00b4\u03b3 \u03b3 J p Rmax `C 1 2p VI,\u03c1,\u03bdpJ,r,qqE 1 2p p\u03f5p0q,...,\u03f5pJ\u00b41q;r,qq \uf6be . The expression of CVI,\u03c1,\u03bdpJ;r,qq and Ep\u00a8;r,qq can be found in Appendix A.3. We immediately observe that for k\u201c1 we recover Theorem 3.4 of Farahmand (2011). The term CVI,\u03c1,\u03bdpJ;r,qq is de\ufb01ned in terms of suitable concentrability coef\ufb01cients (Definition A.1) and encodes the distribution shift between the sampling distribution \u03bd and the one induced by the greedy policy sequence p\u03c0pjqqJ j\u201c0 encountered along the execution of PFQI(k). Ep\u00a8;r,qq incorporates the approximation errors p\u03f5pjqqJ\u00b41 j\u201c0 . In principle, it is hard to compare the values of these terms for different persistences k since both the greedy policies and the regression problems are different. Nevertheless, it is worth noting that the multiplicative term \u03b3k 1\u00b4\u03b3k decreases in kPN\u011b1. Thus, other things being equal, the bound value decreases when increasing the persistence. Thus, the trade-off in the choice of control frequency, which motivates action persistence, can now be stated more formally. We aim at \ufb01nding the persistence kPN\u011b1 that, for a \ufb01xed J, allows learning a policy \u03c0pJq whose Q-function Q\u03c0pJq k is the closest to Q\u02da. Consider the decomposition: \u203a \u203a \u203aQ\u02da\u00b4Q\u03c0pJq k \u203a \u203a \u203a p,\u03c1\u010f}Q\u02da\u00b4Q\u02da k}p,\u03c1` \u203a \u203a \u203aQ\u02da k \u00b4Q\u03c0pJq k \u203a \u203a \u203a p,\u03c1. 
The term }Q\u02da\u00b4Q\u02da k}p,\u03c1 accounts for the performance degradation due to action persistence: it is algorithm\u2013independent, and it increases in k (Theorem 4.1). Instead, the second term }Q\u02da k \u00b4Q\u03c0pJq k }p,\u03c1 decreases with k and depends on the algo\fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning Algorithm 2 Heuristic Persistence Selection. Input: batch samples D\u201ctpSi 0,Ai 0,...,Si Hi\u00b41,Ai Hi\u00b41,Si Hiqum i\u201c1, set of persistences K, set of Q-function tQk:kPKu, regressor Reg Output: approximately optimal persistence r k for kPK do p J\u03c1 k \u201c 1 m \u0159m i\u201c1VkpSi 0q Use the Reg to get an estimate r Qk of T \u02da k Qk \u203a \u203a r Qk\u00b4Qk \u203a \u203a 1,D\u201c 1 \u0159m i\u201c1 Hi \u0159m i\u201c1 \u0159Hi\u00b41 t\u201c0 | r QkpSi t,Ai tq\u00b4QkpSi t,Ai tq| end for r kPargmaxkPKBk\u201c p J\u03c1 k \u00b4 1 1\u00b4\u03b3k \u203a \u203a r Qk\u00b4Qk \u203a \u203a 1,D. rithm (Theorem 5.1). Unfortunately, optimizing their sum is hard since the individual bounds contain terms that are not known in general (e.g., Lipschitz constants, \u03f5pjq). The next section proposes heuristics to overcome this problem. 6. Persistence Selection In this section, we discuss how to select a persistence k in a set K\u0102N\u011b1 of candidate persistences, when we are given a set of estimated Q-functions: tQk :kPKu.6 Each Qk induces a greedy policy \u03c0k. Our goal is to \ufb01nd the persistence kPK such that \u03c0k has the maximum expected return in the corresponding k\u2013persistent MDP Mk: k\u02daPargmax kPK J\u03c1,\u03c0k k , \u03c1PPpSq. (13) In principle, we could execute \u03c0k in Mk to get an estimate of J\u03c1,\u03c0k k and employ it to select the persistence k. However, in the batch setting, further interactions with the environment might be not allowed. On the other hand, directly using the estimated Q-function Qk is inappropriate, since we need to take into account how well Qk approximates Q\u03c0k k . This trade\u2013off is encoded in the following result, which makes use of the expected Bellman residual. Lemma 6.1. Let QPBpS\u02c6Aq and \u03c0 be a greedy policy w.r.t. Q. Let J\u03c1\u201c \u015f \u03c1pdsqV psq, with V psq\u201cmaxaPAQps,aq for all sPS. Then, for any kPN\u011b1, it holds that: J\u03c1,\u03c0 k \u011bJ\u03c1\u00b4 1 1\u00b4\u03b3k }T \u02da k Q\u00b4Q}1,\u03b7\u03c1,\u03c0 , (14) where \u03b7\u03c1,\u03c0\u201cp1\u00b4\u03b3kq\u03c1\u03c0 ` Id\u00b4\u03b3kP \u03c0 k \u02d8\u00b41, is the \u03b3discounted stationary distribution induced by policy \u03c0 and distribution \u03c1 in MDP Mk. To get a usable bound, we need to make some simpli\ufb01cations. First, we assume that D\u201e\u03bd is composed of m trajectories, i.e., D\u201ctpSi 0,Ai 0,...,Si Hi\u00b41,Ai Hi\u00b41,Si Hiqum i\u201c1, where Hi is the trajectory length and the initial states are sampled as Si 0\u201e\u03c1. In this way, J\u03c1 can be estimated from samples as p J\u03c1\u201c 1 m \u0159m i\u201c1V pSi 0q. Second, since we are un6For instance, the Qk can be obtained by executing PFQI(k) with different persistences kPK. able to compute expectations over \u03b7\u03c1,\u03c0, we replace it with the sampling distribution \u03bd.7 Lastly, estimating the expected Bellman residual is problematic since its empirical version is biased (Antos et al., 2008). 
Thus, we resort to an approach similar to (Farahmand & Szepesv\u00b4 ari, 2011), assuming to have a regressor Reg able to output an approximation r Qk of T \u02da k Q. In this way, we replace }T \u02da k Q\u00b4Q}1,\u03bd with } r Qk\u00b4Q}1,D (details in Appendix C). In practice, we set Q\u201cQpJq and we obtain r Qk running PFQI(k) for k additional iterations, setting r Qk\u201cQpJ`kq. Thus, the procedure (Algorithm 2) reduces to optimizing the index: r kPargmax kPK Bk\u201c p J\u03c1 k \u00b4 1 1\u00b4\u03b3k \u203a \u203a \u203a r Qk\u00b4Qk \u203a \u203a \u203a 1,D. (15) 7. Related Works In this section, we revise the works connected to persistence, focusing on continuous\u2013time RL and temporal abstractions. Continuous\u2013time RL Among the \ufb01rst attempts to extend value\u2013based RL to the continuous\u2013time domain there is advantage updating (Bradtke & Duff, 1994), in which Qlearning (Watkins, 1989) is modi\ufb01ed to account for in\ufb01nitesimal control timesteps. Instead of storing the Q-function, the advantage function Aps,aq\u201cQps,aq\u00b4V psq is recorder. The continuous time is addressed in Baird (1994) by means of the semi-Markov decision processes (Howard, 1963) for \ufb01nite\u2013state problems. The optimal control literature has extensively studied the solution of the Hamilton-JacobiBellman equation, i.e., the continuous\u2013time counterpart of the Bellman equation, when assuming the knowledge of the environment (Bertsekas, 2005; Fleming & Soner, 2006). The model\u2013free case has been tackled by resorting to time (and space) discretizations (Peterson, 1993), with also convergence guarantees (Munos, 1997; Munos & Bourgine, 1997), and coped with function approximation (Dayan & Singh, 1995; Doya, 2000). More recently, the sensitivity of deep RL algorithm to the time discretization has been analyzed in Tallec et al. (2019), proposing an adaptation of advantage updating to deal with small time scales, that can be employed with deep architectures. Temporal Abstractions The notion of action persistence can be seen as a form of temporal abstraction (Sutton et al., 1999b; Precup, 2001). Temporally extended actions have been extensively used in the hierarchical RL literature to model different time resolutions (Singh, 1992a;b), subgoals (Dietterich, 1998), and combined with the actor\u2013 critic architectures (Bacon et al., 2017). Persisting an action is a particular instance of a semi-Markov option, always lasting k steps. According to the \ufb02at option representation (Precup, 2001), we have as initiation set I\u201cS the set 7This introduces a bias that is negligible if }\u03b7\u03c1,\u03c0{\u03bd}8\u00ab1 (details in Appendix C.1). \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning Table 1. Results of PFQI in different environments and persistences. For each persistence k, we report the sample mean and the standard deviation of the estimated return of the last policy p J\u03c1,\u03c0k k . For each environment, the persistence with highest average performance and the ones not statistically signi\ufb01cantly different from that one (Welch\u2019s t-test with p\u01030.05) are in bold. The last column reports the mean and the standard deviation of the performance loss \u03b4 between the optimal persistence and the one selected by the index Bk (Equation (15)). 
Environment Expected return at persistence k ( p J \u03c1,\u03c0k k , mean \u02d8 std) Performance loss k\u201c1 k\u201c2 k\u201c4 k\u201c8 k\u201c16 k\u201c32 k\u201c64 (\u03b4 mean \u02d8 std) Cartpole 169.9\u02d85.8 176.5\u02d85.0 239.5\u02d84.4 10.0\u02d80.0 9.8\u02d80.0 9.8\u02d80.0 9.8\u02d80.0 0.0\u02d80.0 MountainCar \u00b4111.1\u02d81.5 \u00b4103.6\u02d81.6 \u00b497.2\u02d82.0 \u00b493.6\u02d82.1 \u00b494.4\u02d81.8 \u00b492.4\u02d81.5 \u00b4136.7\u02d80.9 1.88\u02d80.85 LunarLander \u00b4165.8\u02d850.4 \u00b412.8\u02d84.7 1.2\u02d83.6 2.0\u02d83.4 \u00b444.1\u02d86.9 \u00b4122.8\u02d810.5 \u00b4121.2\u02d88.6 2.12\u02d84.21 Pendulum \u00b4116.7\u02d816.7 \u00b4113.1\u02d816.3 \u00b4153.8\u02d823.0 \u00b4283.1\u02d818.0 \u00b4338.9\u02d816.3 \u00b4364.3\u02d822.1 \u00b4377.2\u02d821.7 3.52\u02d80.0 Acrobot \u00b489.2\u02d81.1 \u00b482.5\u02d81.7 \u00b483.4\u02d81.3 \u00b4122.8\u02d81.3 \u00b4266.2\u02d81.9 \u00b4287.3\u02d80.3 \u00b4286.7\u02d80.6 0.80\u02d80.27 Swimmer 21.3\u02d81.1 25.2\u02d80.8 25.0\u02d80.5 24.0\u02d80.3 22.4\u02d80.3 12.8\u02d81.2 14.0\u02d80.2 2.69\u02d81.71 Hopper 58.6\u02d84.8 61.9\u02d84.2 62.2\u02d81.7 59.7\u02d83.1 60.8\u02d81.0 66.7\u02d82.7 73.4\u02d81.2 5.33\u02d82.32 Walker 2D 61.6\u02d85.5 37.6\u02d84.0 62.7\u02d818.2 80.8\u02d86.6 102.1\u02d819.3 91.5\u02d813.0 97.2\u02d817.6 5.10\u02d83.74 of all states, as internal policy the policy that plays deterministically the action taken when the option was initiated, i.e., the k\u2013persistent policy, and as termination condition whether k timesteps have passed after the option started, i.e., \u03b2pHtq\u201c1tt mod k\u201c0u. Interestingly, in Mann et al. (2015) an approximate value iteration procedure for options lasting at least a given number of steps is proposed and analyzed. This approach shares some similarities with action persistence. Nevertheless, we believe that the option framework is more general and usually the time abstractions are related to the semantic of the tasks, rather than based on the modi\ufb01cation of the control frequency, like action persistence. 8. Experimental Evaluation In this section, we provide the empirical evaluation of PFQI, with the threefold goal: i) proving that a persistence k\u01051 can boost learning, leading to more pro\ufb01table policies, ii) assessing the quality of our persistence selection method, and iii) studying how the batch size in\ufb02uences the performance of PFQI policies for different persistences. Refer to Appendix D for detailed experimental settings. We train PFQI, using extra-trees (Geurts et al., 2006) as a regression model, for J iterations and different values of k, starting with the same dataset D collected at persistence 1. To compare the performance of the learned policies \u03c0k at the different persistences, we estimate their expected return J\u03c1,\u03c0k k in the corresponding MDP Mk. Table 1 shows the results for different continuous environments and different persistences averaged over 20 runs and highlighting in bold the persistence with the highest average performance and the ones that are not statistically signi\ufb01cantly different from that one. Across the different environments we observe some common trends in line with our theory: i) persistence 1 rarely leads to the best performance; ii) excessively increasing persistence prevents the control at all. In Cartpole (Barto et al., 1983), we easily identify a persistence (k\u201c4) that outperforms all the others. 
In the Lunar Lander (Brockman et al., 2016) persistences kPt4,8u are the only ones that lead to positive return (i.e., the lander does not crash) and in the Acrobot domain (Geramifard et al., 2015) we identify kPt2,4u as optimal persistences. A qualitatively different behavior is displayed in Mountain Car (Moore, 1991), Pendulum (Brockman et al., 2016), and Swimmer (Coulom, 2002), where we observe a plateau of three persistences with similar performance. An explanation for this phenomenon is that, in those domains, the optimal policy tends to persist actions on its own, making the difference less evident. Intriguingly, the more complex Mujoco domains, like Hopper and Walker 2D (Erickson et al., 2019), seem to bene\ufb01t from the higher persistences. To test the quality of our persistence selection method, we compare the performance of the estimated optimal persistence, i.e., the one with the highest estimated expected return p kPargmax p J\u03c1,\u03c0k k , and the performance of the persistence r k selected by maximizing the index Bk (Equation (15)). For each run i\u201c1,...,20, we compute the performance loss \u03b4i\u201c p J \u03c1,\u03c0p k p k \u00b4 p J \u03c1,\u03c0r ki r ki and we report it in the last column of Table 1. In the Cartpole experiment, we observe a zero loss, which means that our heuristic always selects the optimal persistence (k\u201c4). Differently, non\u2013zero loss occurs in the other domains, which means that sometimes the index Bk mispredicts the optimal persistence. Nevertheless, in almost all cases the average performance loss is signi\ufb01cantly smaller than the magnitude of the return, proving the effectiveness of our heuristics. In Figure 2, we show the learning curves for the Cartpole experiment, highlighting the components that contribute to the index Bk. The \ufb01rst plot reports the estimated expected return p J\u03c1,\u03c0k k , obtained by averaging 10 trajectories executing \u03c0k in the environment Mk, which con\ufb01rms that k\u201c4 is the optimal persistence. The second plot shows the estimated return p J\u03c1 k obtained by averaging the Q-function Qk learned with PFQI(k), over the initial states sampled from \u03c1. We can see that for kPt1,2u, PFQI(k) tends to overestimate the return, while for k\u201c4 we notice a slight underestimation. The overestimation phenomenon can be explained by the \fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning 0 200 400 0 100 200 Iteration Expected return b J \u03c1,\u03c0k k 0 200 400 0 100 200 300 Iteration Estimated return b J\u03c1 k 0 200 400 0 2 4 6 8 Iteration \u2225e Qk \u2212Qk\u22251,D 0 200 400 \u2212400 \u2212200 0 Iteration Index Bk k = 1 k = 2 k = 4 k = 8 k = 16 Figure 2. Expected return p J\u03c1,\u03c0k k , estimated return p J\u03c1 k, estimated expected Bellman residual } r Qk\u00b4Qk}1,D, and persistence selection index Bk in the Cartpole experiment as a function of the number of iterations for different persistences. 20 runs, 95 % c.i. 10 30 50 100 200 400 \u221220 0 20 40 Batch Size n Expected return b J \u03c1,\u03c0k k k = 1 k = 2 k = 4 k = 8 Figure 3. Expected return p J\u03c1,\u03c0k k in the Trading experiment as a function of the batch size. 10 runs, 95 % c.i. fact that with small persistences we perform a large number of applications of the operator p T \u02da, which involves a maximization over the action space, injecting an overestimation bias. 
By combining this curve with the expected Bellman residual (third plot), we get the value of our persistence selection index Bk (fourth plot). Finally, we observe that Bk correctly ranks persistences 4 and 8, but overestimates persistences 8 and 16, compared to persistence 1. To analyze the effect of the batch size, we run PFQI on the Trading environment (see Appendix D.4) varying the number of sampled trajectories. In Figure 3, we notice that the performance improves as the batch size increases, for all persistences. Moreover, we observe that if the batch size is small (nPt10,30,50u), higher persistences (kPt2,4,8u) result in better performances, while for larger batch sizes, k\u201c1 becomes the best choice. Since data is taken from real market prices, this environment is very noisy, thus, when the amount of samples is limited, PFQI can exploit higher persistences to mitigate the poor estimation. 9. Open Questions Improving Exploration with Persistence We analyzed the effect of action persistence on FQI with a \ufb01xed dataset, collected in the base MDP M. In principle, samples can be collected at arbitrary persistence. We may wonder how well the same sampling policy (e.g., the uniform policy over A), executed at different persistences, explores the environment. For instance, in Mountain Car, high persistences increase the probability of reaching the goal, generating more informative datasets (preliminary results in Appendix E.1). Learn in Mk and execute in Mk1 Deploying each policy \u03c0k in the corresponding MDP Mk allows for some guarantees (Lemma 6.1). However, we empirically discovered that using \u03c0k in an MDP Mk1 with smaller persistence k1 sometimes improves its performance. (preliminary results in Appendix E.2). We wonder what regularity conditions on the environment are needed to explain this phenomenon. Persistence in On\u2013line RL Our approach focuses on batch off\u2013line RL. However, the on\u2013line framework could open up new opportunities for action persistence. Speci\ufb01cally, we could dynamically adapt the persistence (and so the control frequency) to speed up learning. Intuition suggests that we should start with a low frequency, reaching a fairly good policy with few samples, and then increase it to re\ufb01ne the learned policy. 10. Discussion and Conclusions In this paper, we formalized the notion of action persistence, i.e., the repetition of a single action for a \ufb01xed number k of decision epochs, having the effect of altering the control frequency of the system. We have shown that persistence leads to the de\ufb01nition of new Bellman operators and that we are able to bound the induced performance loss, under some regularity conditions on the MDP. Based on these considerations, we presented and analyzed a novel batch RL algorithm, PFQI, able to approximate the value function at a given persistence. The experimental evaluation justi\ufb01es the introduction of persistence, since reducing the control frequency can lead to an improvement when dealing with a limited number of samples. Furthermore, we introduced a persistence selection heuristic, which is able to identify good persistence in most cases. We believe that our work makes a step towards understanding why repeating actions may be useful for solving complex control tasks. Numerous questions remain unanswered, leading to several appealing future research directions. 
\fControl Frequency Adaptation via Action Persistence in Batch Reinforcement Learning Acknowledgements The research was conducted under a cooperative agreement between ISI Foundation, Banca IMI and Intesa Sanpaolo Innovation Center." + }, + { + "url": "http://arxiv.org/abs/1909.03984v1", + "title": "Policy Space Identification in Configurable Environments", + "abstract": "We study the problem of identifying the policy space of a learning agent,\nhaving access to a set of demonstrations generated by its optimal policy. We\nintroduce an approach based on statistical testing to identify the set of\npolicy parameters the agent can control, within a larger parametric policy\nspace. After presenting two identification rules (combinatorial and\nsimplified), applicable under different assumptions on the policy space, we\nprovide a probabilistic analysis of the simplified one in the case of linear\npolicies belonging to the exponential family. To improve the performance of our\nidentification rules, we frame the problem in the recently introduced framework\nof the Configurable Markov Decision Processes, exploiting the opportunity of\nconfiguring the environment to induce the agent revealing which parameters it\ncan control. Finally, we provide an empirical evaluation, on both discrete and\ncontinuous domains, to prove the effectiveness of our identification rules.", + "authors": "Alberto Maria Metelli, Guglielmo Manneschi, Marcello Restelli", + "published": "2019-09-09", + "updated": "2019-09-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction Reinforcement Learning (RL, Sutton and Barto 2018) deals with sequential decision\u2013making problems in which an arti\ufb01cial agent interacts with an environment by sensing perceptions and performing actions. The agent\u2019s goal is to \ufb01nd an optimal policy, i.e., a prescription of actions that maximizes the (possibly discounted) cumulative reward collected during its interaction with the environment. The performance of an agent in an environment is constrained by its perception and actuation possibilities, along with the ability in mapping observations to actions. These three elements de\ufb01ne the agent\u2019s policy space. Agents with different policy spaces could display different optimal behaviors, even in the same environment. Therefore, the notion of optimality is necessarily connected to the agent\u2019s policy space. While in tabular RL we typically assume access to the full (and \ufb01nite) space of Markovian stationary policies, in continuous control, the policy space needs to be limited. In policy search methods (Deisenroth, Neumann, and Peters 2013), the policy is explicitly modeled considering a parametric functional space (Sutton et al. 1999; Preprint. Under review. Peters and Schaal 2008) or a kernel space (Deisenroth and Rasmussen 2011; Levine and Koltun 2013); but also in value\u2013based RL, a function approximator induces a set of representable (greedy) policies. The knowledge of the agent\u2019s policy space could be of crucial importance when the learning process involves the presence of an external supervisor. Recently, the notion of Con\ufb01gurable Markov Decision Process (Conf\u2013 MDP, Metelli, Mutti, and Restelli 2018) has been introduced to account for the real\u2013world scenarios in which it is possible to exercise a, maybe partial, control over the environment, by means of a set of environmental parameters (e.g., Silva, Melo, and Veloso 2018; Silva et al. 
2019). This activity, called environment con\ufb01guration, can be carried out by the agent itself or by an external supervisor. While previous works focused on the former case (e.g., Metelli, Ghel\ufb01, and Restelli 2019), in this paper, we explicitly consider the presence of a supervisor who acts on the environment with the goal of \ufb01nding the most suitable con\ufb01guration for the agent. Intuitively, the best environment con\ufb01guration is intimately related to the possibilities of the agent in terms of policy space. For instance, in a car racing problem, the best car con\ufb01guration depends on the car driver and has to be selected, by a track engineer (the supervisor), according to the driver\u2019s skills. Thus, the external supervisor has to be aware of the agent\u2019s policy space. Besides the Conf\u2013MDPs, there are other contexts in which knowing the policy space can be bene\ufb01cial, such as Imitation Learning, i.e., the framework in which an agent learns by observing an expert (Osa et al. 2018). In behavioral cloning, where recovering an imitating policy is cast as a supervised learning problem (Argall et al. 2009), knowing the expert\u2019s policy space means knowing a suitable hypothesis space, preventing possible over/under\ufb01tting phenomena. However, also Inverse Reinforcement Learning algorithms (IRL, Ng and Russell 2000), whose goal is to retrieve a reward function explaining the expert\u2019s behavior, can gain some advantages. In particular, the IRL approaches based on the policy gradient (e.g., Pirotta and Restelli 2016; Metelli, Pirotta, and Restelli 2017; Tateo et al. 2017) require a parametric representation of the expert\u2019s policy, whose choice might affect the quality of the recovered reward function. arXiv:1909.03984v1 [cs.LG] 9 Sep 2019 \fIn this paper, motivated by the examples presented above, we study the problem of identifying the agent\u2019s policy space in a Conf\u2013MDP, by observing the agent\u2019s behavior and, possibly, exploiting the con\ufb01guration opportunities of the environment. We consider the case in which the policy space of the agent is a subset of a known super\u2013policy space \u03a0\u0398 induced by a parameter space \u0398 \u010e Rd. Thus, any policy \u03c0\u03b8 is determined by a d\u2013dimensional parameter vector \u03b8 P \u0398. However, the agent has control over a smaller number d\u02da \u0103 d of parameters (which are unknown), while the remaining ones have a \ufb01xed value, namely zero.1 Our goal is to identify the parameters that the agent can control (and possibly change) by observing demonstrations of the optimal policy \u03c0\u02da. It is worth noting that the formulation based on the identi\ufb01cation of the parameters effectively covers the limitations of the policy space related to perception, actuation, and mapping. To this end, we formulate the problem as deciding whether each parameter \u03b8i for i P t1, ..., du is zero, and we address it by means of a statistical test. In other words, we check whether there is a statistically signi\ufb01cant difference between the likelihood of the agent\u2019s behavior with the full set of parameters and the one in which \u03b8i is set to zero. In such case, we conclude that \u03b8i is not zero and, consequently, the agent can control it. On the contrary, either the agent cannot control the parameter or zero is the value consciously chosen by the agent. 
Indeed, there could be parameters that, given the peculiarities of the environment, are useless for achieving the optimal behavior or whose optimal value is actually zero, while they could prove to be essential in a different environment. For instance, in a grid world where the goal is to reach the right edge, the vertical position of the agent is useless, while if the goal is to reach the upper right corner both horizontal and vertical positions become relevant. In this spirit, con\ufb01guring the environment can help the supervisor in identifying whether a parameter set to zero is actually uncontrollable by the agent or just useless in the current environment. Thus, the supervisor can change the environment con\ufb01guration \u03c9 P \u2126, so that the agent will adjust its policy, possibly changing the parameter value and revealing whether it can control such parameter. Thus, the new con\ufb01guration should induce an optimal policy in which the considered parameters have a value signi\ufb01cantly different from zero. We formalize this notion as the problem of \ufb01nding the new environment con\ufb01guration that maximizes the power of the statistical test and we propose a surrogate objective for this purpose. It is worth emphasizing that we use the Conf\u2013MDP notion for two purposes. First, we propose the problem of learning the optimal con\ufb01guration in a Conf\u2013MDP as a motivating example in which the knowledge of the policy space is valuable. Second, we use the environment con\ufb01gurability as a tool to improve the identi\ufb01cation of the policy space. The paper is organized as follows. In Section 2, we intro1The choice of zero, as \ufb01xed value, might appear arbitrary, but it is rather a common case in practice. For instance, in a linear policy the fact that the agent does not observe a state feature is equivalent to set the corresponding parameters to zero. In a neural network, removing a neuron is equivalent to neglect all its connections, which in turn can be realized by setting the relative weights to zero. duce the necessary background. The identi\ufb01cation rules to perform parameter identi\ufb01cation in a \ufb01xed environment are presented in Section 3 and analyzed in Section 4. Section 5 shows how to improve them by exploiting the environment con\ufb01gurability. Finally, the experimental evaluation, on discrete and continuous domains, is provided in Section 6. The proofs of all the results can be found in Appendix A. 2 Preliminaries In this section, we report the essential background that will be used in the subsequent sections. (Con\ufb01gurable) Markov Decision Processes A discrete\u2013 time Markov Decision Process (MDP, Puterman 2014) is de\ufb01ned by the tuple M \u201c pS, A, p, \u00b5, r, \u03b3q, where S and A are the state space and the action space respectively, p is the transition model that provides, for every state-action pair ps, aq P S \u02c6 A, a probability distribution over the next state pp\u00a8|s, aq, \u00b5 is the distribution of the initial state, r is the reward model de\ufb01ning the reward collected by the agent rps, aq when performing action a P A in state s P S, and \u03b3 P r0, 1s is the discount factor. The behavior of an agent is de\ufb01ned by means of a policy \u03c0 that provides a probability distribution over the actions \u03c0p\u00a8|sq for every state s P S. An MDP M paired with a policy \u03c0 induces a \u03b3\u2013discounted stationary distribution over the states (Sutton et al. 
1999), de\ufb01ned as d\u03c0 \u00b5psq \u201c p1 \u00b4 \u03b3q \u0159`8 t\u201c0 \u03b3t Pr pst \u201c s|M, \u03c0q. We limit the scope to parametric policy spaces \u03a0\u0398 \u201c t\u03c0\u03b8 : \u03b8 P \u0398u, where \u0398 \u010e Rd is the parameter space. The goal of the agent is to \ufb01nd an optimal policy, i.e., any policy parametrization that maximizes the expected return: \u03b8\u02da P arg max \u03b8P\u0398 JMp\u03b8q \u201c 1 1 \u00b4 \u03b3 E s\u201ed \u03c0\u03b8 \u00b5 a\u201e\u03c0\u03b8p\u00a8|sq rrps, aqs . (1) In this paper, we consider a slightly modi\ufb01ed version of the Conf\u2013MDPs (Metelli, Mutti, and Restelli 2018). De\ufb01nition 2.1. A Con\ufb01gurable Markov Decision Process (Conf\u2013MDP) induced by the con\ufb01guration space \u2126 \u010e Rp is de\ufb01ned as the set of MDPs C\u2126 \u201c tM\u03c9 \u201c pS, A, p\u03c9, \u00b5\u03c9, r, \u03b3q : \u03c9 P \u2126u. The main differences w.r.t. the original de\ufb01nition are: i) we allow the con\ufb01guration of the initial state distribution \u00b5\u03c9, in addition to the transition model p\u03c9; ii) we restrict to the case of parametric con\ufb01guration spaces \u2126; iii) we do not consider the policy space \u03a0\u0398 as a part of the Conf\u2013MDP. Generalized Likelihood Ratio Test The Generalized Likelihood Ratio test (GLR, Barnard 1959; Casella and Berger 2002) aims at testing the goodness of \ufb01t of two statistical models. Given a parametric model having density function p\u03b8 with \u03b8 P \u0398, we aim at testing the null hypothesis H0 : \u03b8\u02da P \u03980, where \u03980 \u0102 \u0398 is a subset of the parametric space, against the alternative H1 : \u03b8\u02da P \u0398z\u03980. Given a dataset D \u201c tXiun i\u201c1 sampled independently from p\u03b8\u02da, where \u03b8\u02da is the true parameter, the GLR statistic is: \u039b \u201c sup\u03b8P\u03980 p Lp\u03b8q sup\u03b8P\u0398 p Lp\u03b8q , (2) \fwhere p Lp\u03b8q \u201c \u015bn i\u201c1 p\u03b8pXiq is the likelihood function. We denote with p \u2113p\u03b8q \u201c \u00b4 log p Lp\u03b8q the negative log\u2013likelihood function, p \u03b8 P arg sup\u03b8P\u0398 p Lp\u03b8q and p \u03b80 P arg sup\u03b8P\u03980 p Lp\u03b8q, i.e., the maximum likelihood solutions in \u0398 and \u03980 respectively. Moreover, we de\ufb01ne the expectation of the likelihood under the true parameter: \u2113p\u03b8q \u201c EXi\u201ep\u03b8\u02darp \u2113p\u03b8qs. As the maximization is carried out employing the same dataset D and recalling that \u03980 \u0102 \u0398, we have that \u039b P r0, 1s. It is usually convenient to consider the logarithm of the GLR statistic: \u03bb \u201c \u00b42 log \u039b \u201c 2pp \u2113pp \u03b80q \u00b4 p \u2113pp \u03b8qq. Therefore, H0 is rejected for large values of \u03bb, i.e., when the maximim likelihood parameter searched in the restricted set \u03980 signi\ufb01cantly under\ufb01ts the data D, compared to \u0398. Wilk\u2019s theorem provides the asymptomatic distribution of \u03bb when H0 is true (Wilks 1938; Casella and Berger 2002). Theorem 2.1 (Casella and Berger 2002, Theorem 10.3.3). Let d \u201c dimp\u0398q and d0 \u201c dimp\u03980q \u0103 d. Under suitable regularity conditions (see Casella and Berger (2002) 10.6.2), if H0 is true, then when n \u00d1 `8, the distribution of \u03bb tends to a \u03c72 distribution with d\u00b4d0 degrees of freedom. 
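A minimal sketch of how the GLR statistic and the asymptotic threshold given by Wilks' theorem can be evaluated in practice is reported below. It assumes that the minimized negative log-likelihoods over the full set Theta and the restricted set Theta_0 are already available from some external maximum-likelihood fitting routine (left unspecified here), and it only relies on the chi-square quantile from scipy.

# Hedged sketch of a GLR test based on Wilks' asymptotic approximation; the
# minimized negative log-likelihoods are assumed to come from an external
# maximum-likelihood fitting routine, not specified here.
from scipy.stats import chi2

def glr_test(nll_restricted, nll_full, dim_full, dim_restricted, significance=0.05):
    """Reject H0 (the restricted model) when lambda exceeds the chi-square quantile.

    nll_restricted: minimized negative log-likelihood over Theta_0
    nll_full:       minimized negative log-likelihood over Theta
    """
    lam = 2.0 * (nll_restricted - nll_full)             # lambda = -2 log(GLR statistic)
    dof = dim_full - dim_restricted                      # d - d0 degrees of freedom (Wilks)
    critical_value = chi2.ppf(1.0 - significance, df=dof)
    return lam, critical_value, lam > critical_value

Rejecting when lambda exceeds the (1 - significance)-quantile keeps the asymptotic type I error at the chosen significance level.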
The signi\ufb01cance of a test \u03b1 P r0, 1s, or type I error probability, is the probability to reject H0 when H0 is true, while the power of a test 1 \u00b4 \u03b2 P r0, 1s is the probability to reject H0 when H0 is false, \u03b2 is the type II error probability. 3 Policy Space Identi\ufb01cation in a Fixed Environment As we introduced in Section 1, we aim at identifying the agent\u2019s policy space, by observing a set of demonstrations coming from the optimal policy \u03c0\u02da P \u03a0\u03982 only, i.e., D \u201c tpsi, aiqun i\u201c1 where si \u201e d\u03c0\u02da \u00b5 and ai \u201e \u03c0\u02dap\u00a8|siq sampled independently. In particular, we assume that the agent has control over a limited number of parameters d\u02da \u0103 d whose value can be changed during learning, while the remaining d \u00b4 d\u02da are kept \ufb01xed to zero.3 Given a set of indexes I \u010e t1, ..., du we de\ufb01ne the subset of the parameter space: \u0398I \u201c t\u03b8 P \u0398 : \u03b8i \u201c 0, @i P Izt1, ..., duu. Thus, the set I represents the indexes of the parameters that can be changed if the agent\u2019s parameter space were \u0398I. Our goal is to \ufb01nd a set of parameter indexes I\u02da that are suf\ufb01cient to explain the agent\u2019s policy, i.e., \u03c0\u02da P \u03a0\u0398I\u02da but also necessary, in the sense that when removing any i P I\u02da the remaining ones are insuf\ufb01cient to explain the agent\u2019s policy, i.e., \u03c0\u02da R \u03a0\u0398I\u02daztiu. We formalize these notions in the following de\ufb01nition. De\ufb01nition 3.1 (Correctness). Let \u03c0\u02da P \u03a0\u0398. A set of parameter indexes I\u02da \u010e t1, ..., du is correct w.r.t. \u03c0\u02da if: \u03c0\u02da P \u03a0\u0398I\u02da ^ @i P I\u02da : \u03c0\u02da R \u03a0\u0398I\u02daztiu. We denote with I\u02da the set of all correct I\u02da. The uniqueness of I\u02da is guaranteed under the assumption that each policy admits a unique representation in \u03a0\u0398. 2We do not explicitly report the dependence on the agent\u2019s parameter \u03b8\u02da P \u0398 as, in the general case, there might exist multiple parameters yielding the same policy \u03c0\u02da. 3The extension of the identi\ufb01cation rules to (known) \ufb01xed values different from zero is straightforward. Assumption 3.1 (Identi\ufb01ability). The policy space \u03a0\u0398 is identi\ufb01able, i.e., for all \u03b8, \u03b81 P \u0398, we have: \u03c0\u03b8 \u201c \u03c0\u03b81 almost surely \u00f9 \u00f1 \u03b8 \u201c \u03b81. The identi\ufb01ability property allows rephrasing De\ufb01nition 3.1 in terms of the policy parameters only. Lemma 3.1. Under Assumption 3.1, let \u03b8\u02da P \u0398 be the unique parameter such that \u03c0\u03b8\u02da \u201c \u03c0\u02da almost surely. Then, there exists a unique set of parameter indexes I\u02da \u010e t1, ..., du (i.e., I\u02da \u201c tI\u02dau) that is correct w.r.t. \u03c0\u02da: I\u02da \u201c ti P t1, ..., du : \u03b8\u02da i \u2030 0u . The following two subsections are devoted to the presentation of the identi\ufb01cation rules based on the application of De\ufb01nition 3.1 (Section 3.1) and Lemma 3.1 (Section 3.2) when we only have access to a dataset of samples D. The goal of an identi\ufb01cation rule consists in producing a set p I, approximating I\u02da. 3.1 Combinatorial Identi\ufb01cation Rule In principle, using D \u201c tpsi, aiqun i\u201c1, we could compute the maximum likelihood parameter p \u03b8 P arg sup\u03b8P\u0398 p Lp\u03b8q and employ it with De\ufb01nition 3.1. 
However, this approach has, at least, two drawbacks. First, when Assumption 3.1 is not ful\ufb01lled, it would produce a single approximate parameter, while multiple choices might be viable. Second, because of the estimation errors, we would hardly get a zero value for the parameters the agent might not control. For this reasons, we employ a GLR test to assess whether a speci\ufb01c set of parameters is zero. Speci\ufb01cally, for all I \u010e t1, ..., du we consider the pair of hypotheses H0,I : \u03c0\u02da P \u03a0\u0398I against H1,I : \u03c0\u02da P \u03a0\u0398z\u0398I and the GLR statistic: \u03bbI \u201c \u00b42 log sup\u03b8P\u0398I p Lp\u03b8q sup\u03b8P\u0398 p Lp\u03b8q \u201c 2 \u00b4 p \u2113pp \u2113pp \u03b8Iq \u00b4 p \u03b8q \u00af , (3) where the likelihood is de\ufb01ned as p Lp\u03b8q \u201c \u015bn i\u201c1 \u03c0\u03b8pai|siq, p \u03b8I P arg sup\u03b8P\u0398I p Lp\u03b8q and p \u03b8 P arg sup\u03b8P\u0398 p Lp\u03b8q. We now state the identi\ufb01cation rule derived from De\ufb01nition 3.1. Identi\ufb01cation Rule 3.1. p Ic contains all and only the sets of parameter indexes I \u010e t1, ..., du such that: \u03bbI \u010f cp|I|q ^ @i P I : \u03bbIztiu \u0105 cp|Iztiu|q, (4) where cplq are the critical values. Thus, I is de\ufb01ned in such a way that the null hypothesis H0,I is not rejected, i.e., I contains parameters that are suf\ufb01cient to explain the data D, and necessary since for all i P I the set Iztiu is no longer suf\ufb01cient, as H0,Iztiu is rejected. The critical values cplq, that depend on the cardinality l of the tested set of indexes, should be determined in order to enforce guarantees on the type I and II errors. We will show in Section 6 how to set them in practice. Refer to Algorithm 1 for the pseudocode of the identi\ufb01cation rule. 3.2 Simpli\ufb01ed Identi\ufb01cation Rule Identi\ufb01cation Rule 3.1 is usually impractical, as it requires performing O ` 2d\u02d8 statistical tests. However, under Assumption 3.1, to retrieve I\u02da we do not need to test all subsets, but we can just examine one parameter at a time (see \fAlgorithm 1 Identi\ufb01cation Rule 3.1 (Combinatorial) input: dataset D, parameter space \u0398, critical values c p Ic \u00d0 tu p L \u201c max\u03b8P\u0398 p Lp\u03b8q for I \u010e t1, ..., du in sorted by cardinality do p LI \u201c max\u03b8P\u0398I p Lp\u03b8q \u03bbI \u201c \u00b42 log p LI p L if \u03bbI \u010f cp|I|q and @i P I : \u03bbIztiu \u0105 cp|Iztiu|q then p Ic \u00d0 p Ic Y tIu end if end for return p Ic Lemma 3.1). Thus, for all i P t1, ..., du we consider the pair of hypotheses H0,i : \u03b8\u02da i \u201c 0 against H1,i : \u03b8\u02da i \u2030 0 and de\ufb01ne \u0398i \u201c t\u03b8 P \u0398 : \u03b8i \u201c 0u. The GLR test can be performed straightforwardly, using the statistic: \u03bbi \u201c \u00b42 log sup\u03b8P\u0398i p Lp\u03b8q sup\u03b8P\u0398 p Lp\u03b8q \u201c 2 \u00b4 p \u2113pp \u03b8iq \u00b4 p \u2113pp \u03b8q \u00af , (5) where the likelihood is de\ufb01ned as p Lp\u03b8q \u201c \u015bn i\u201c1 \u03c0\u03b8pai|siq, p \u03b8i \u201c arg sup\u03b8P\u0398i p Lp\u03b8q and p \u03b8 \u201c arg sup\u03b8P\u0398 p Lp\u03b8q.4 In the spirit of Lemma 3.1, we de\ufb01ne the identi\ufb01cation rule. Identi\ufb01cation Rule 3.2. p Ic contains the unique set of parameter indexes p Ic such that: p Ic \u201c ti P t1, ..., du : \u03bbi \u0105 cp1qu , (6) where cp1q is the critical value. 
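A possible implementation sketch of this rule is given below, under the assumption that the negative log-likelihood of the dataset can be minimized numerically over Theta and over each restricted set Theta_i; here `nll` is a hypothetical callable mapping a parameter vector to the negative log-likelihood of D, and the zero constraint is imposed through box bounds.

# Sketch of Identification Rule 3.2 (one GLR test per parameter); `nll` is a
# hypothetical function returning the negative log-likelihood of the dataset D
# for a given parameter vector theta, and `critical_value` plays the role of c(1).
import numpy as np
from scipy.optimize import minimize

def simplified_identification(nll, d, critical_value):
    theta0 = np.zeros(d)
    # Unconstrained maximum-likelihood fit over the full parameter space Theta.
    full = minimize(nll, theta0, method="L-BFGS-B")
    selected = set()
    for i in range(d):
        # Refit with the i-th parameter clamped to zero (the restricted set Theta_i).
        bounds = [(None, None)] * d
        bounds[i] = (0.0, 0.0)
        x0 = full.x.copy()
        x0[i] = 0.0
        restricted = minimize(nll, x0, method="L-BFGS-B", bounds=bounds)
        lambda_i = 2.0 * (restricted.fun - full.fun)    # GLR statistic for H_{0,i}
        if lambda_i > critical_value:                   # reject H_{0,i}: theta*_i = 0
            selected.add(i)
    return selected                                     # estimate of I* (selected indexes)

The combinatorial rule of the previous subsection follows the same pattern, with the loop ranging over subsets I ordered by cardinality instead of single indexes.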
Therefore, the identi\ufb01cation rule constructs p Ic by taking all the indexes i P t1, ..., du such that the corresponding null hypothesis H0,i : \u03b8\u02da i \u201c 0 is rejected, i.e., those for which there is statistical evidence that their value is not zero. We will show in Section 4 how the critical value cp1q can be computed, in a theoretically sound way, for linear policies belonging to the exponential family. This second procedure requires a test for every parameter, i.e., Opdq instead of Op2dq tests. However, it comes with the cost of assuming the identi\ufb01ability property. What happens if we employ this second procedure in a case where the assumption does not hold? Consider for instance the case in which two parameters are exchangeable, we will include none of them in p Ic as, individually, they are not necessary to explain the agent\u2019s policy. Refer to Algorithm 2 for the pseudocode of the identi\ufb01cation rule. 4 Analysis for the Exponential Family In this section, we provide an analysis of the Identi\ufb01cation Rule 3.2 for a policy \u03c0\u03b8 linear in some state features \u03c6 that belongs to the exponential family (Brown 1986).5 4This setting is equivalent to a particular case the combinatorial rule in which H\u2039,i \u201d H\u2039,t1,...,duztiu, with \u2039 P t0, 1u and, consequently, \u03bbi \u201d \u03bbt1,...,duztiu and \u0398i \u201c \u0398t1,...,duztiu. 5We limit our analysis to Identi\ufb01cation Rule 3.2 since we will show that, in the case of linear policies belonging to the exponential family, the identi\ufb01ability property can be easily enforced. Algorithm 2 Identi\ufb01cation Rule 3.2 (Simpli\ufb01ed) input: dataset D, parameter space \u0398, critical value c p Ic \u00d0 tu p L \u201c max\u03b8P\u0398 p Lp\u03b8q for i P t1, ..., du do p Li \u201c max\u03b8P\u0398i p Lp\u03b8q \u03bbi \u201c \u00b42 log p Li p L if \u03bbi \u0105 cp1q then p Ic \u00d0 p Ic Y tiu end if end for return tp Icu De\ufb01nition 4.1 (Exponential Family). Let \u03c6 : S \u00d1 Rq be a feature function. The policy space \u03a0\u0398 is a space of linear policies, belonging to the exponential family, if \u0398 \u201c Rd and all policies \u03c0\u03b8 P \u03a0\u0398 have form: \u03c0\u03b8pa|sq \u201c hpaq exp ! \u03b8T t ps, aq \u00b4 Ap\u03b8, sq ) , (7) where h is a positive function, t ps, aq is the suf\ufb01cient statistic that depends on the state via the feature function \u03c6 (i.e., t ps, aq \u201c tp\u03c6psq, aq) and Ap\u03b8, sq \u201c log \u015f A hpaq expt\u03b8T tps, aquda is the log partition function. We denote with tps, a, \u03b8q \u201c tps, aq\u00b4Ea\u201e\u03c0\u03b8p\u00a8|sq rtps, aqs the centered suf\ufb01cient statistic. This de\ufb01nition allows modelling the linear policies that are often used in RL (Deisenroth, Neumann, and Peters 2013). Table 1 shows how to map the Gaussian linear policy with \ufb01xed covariance, typically used in continuous action spaces, and the Boltzmann linear policy, suitable for \ufb01nite action spaces, to De\ufb01nition 4.1 (details in Appendix A.1). For the sake of the analysis, we enforce the following assumption concerning the tail behavior of the policy \u03c0\u03b8. Assumption 4.1 (Subgaussianity). For any \u03b8 P \u0398 and for any s P S the centered suf\ufb01cient statistic tps, a, \u03b8q is subgaussian with parameter \u03c3 \u011b 0, i.e., for any \u03b1 P Rd: E a\u201e\u03c0\u03b8p\u00a8|sq \u201c exp \u2423 \u03b1T tps, a, \u03b8q (\u2030 \u010f exp \"1 2 }\u03b1}2 2 \u03c32 * . 
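To make Definition 4.1 concrete, the sketch below instantiates it for a softmax (Boltzmann) policy linear in the state features, a standard choice for finite action spaces: the sufficient statistic of each action stacks the feature vector into the corresponding parameter block, the last action is used as a reference with an all-zero statistic, and the centered statistic of Assumption 4.1 is obtained by subtracting its expectation under the policy. Names and shapes are illustrative only.

# Hedged sketch of a Boltzmann (softmax) policy linear in state features, seen
# as a member of the exponential family of Definition 4.1; theta collects one
# parameter vector per action except the last, which serves as a reference.
import numpy as np

def log_policy(theta, phi_s):
    """theta: (k, q) parameters for the first k of k+1 actions; phi_s: (q,) features."""
    logits = np.append(theta @ phi_s, 0.0)              # reference action has logit 0
    return logits - np.logaddexp.reduce(logits)         # log pi_theta(. | s)

def centered_sufficient_statistic(theta, phi_s, action):
    """t(s, a) - E_{a ~ pi_theta(.|s)}[t(s, a)], flattened to a vector of length k*q."""
    k, q = theta.shape
    probs = np.exp(log_policy(theta, phi_s))             # (k+1,) action probabilities
    t_sa = np.zeros((k, q))
    if action < k:                                       # the reference action has t = 0
        t_sa[action] = phi_s
    expected_t = probs[:k, None] * phi_s[None, :]        # E[t(s, a)] block by block
    return (t_sa - expected_t).ravel()

Summing log_policy over the observed state-action pairs gives the log-likelihood required by the identification rules above.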
Proposition A.2 of Appendix A.2 proves that, when the features are uniformly bounded, i.e., }\u03c6psq}2 \u010f \u03a6max for all s P S, this assumption is ful\ufb01lled by both Gaussian and Boltzmann linear policies with parameter \u03c3 \u201c 2\u03a6max and \u03c3 \u201c \u03a6max{ a \u03bbminp\u03a3q respectively. Furthermore, limited to the policies complying with Definition 4.1, the identi\ufb01ability (Assumption 3.1) can be restated in terms of the Fisher Information matrix (Rothenberg and others 1971; Little, Heidenreich, and Li 2010). Lemma 4.1 (Rothenberg and others 1971, Theorem 3). Let \u03a0\u0398 be a policy space, as in De\ufb01nition 4.1. Then, under suitable regularity conditions (see Rothenberg and others 1971), if the Fisher Information matrix (FIM) Fp\u03b8q: Fp\u03b8q \u201c E s\u201ed\u03c0\u02da \u00b5 a\u201e\u03c0\u03b8p\u00a8|sq \u201c tps, a, \u03b8qtps, a, \u03b8qT \u2030 (8) is non\u2013singular for all \u03b8 P \u0398, then \u03a0\u0398 is identi\ufb01able. In this case, we denote with \u03bbmin \u201c inf\u03b8P\u0398 \u03bbmin pFp\u03b8qq \u0105 0. \fPolicy A \u03c0r \u03b8 t h Gaussian Rk \u03c0r \u03b8pa|sq \u201c expt\u00b4 1 2 pa\u00b4r \u03b8\u03c6psqqT \u03a3\u00b41pa\u00b4r \u03b8\u03c6psqqu p2\u03c0q k 2 detp\u03a3q 1 2 \u03a3\u00b41a b \u03c6psq exp \u2423 \u00b4 1 2aT \u03a3\u00b41a ( p2\u03c0q k 2 det p\u03a3q 1 2 Boltzmann ta1, ..., ak`1u \u03c0r \u03b8pai|sq \u201c $ \u2019 \u2019 & \u2019 \u2019 % er \u03b8T i \u03c6psq 1`\u0159k j\u201c1 e r \u03b8T j \u03c6psq if i \u010f k 1 1`\u0159k j\u201c1 e r \u03b8T j \u03c6psq if i \u201c k # ei b \u03c6psq if i \u010f k 0 if i \u201c k ` 1 1 Table 1: Action space A, probability density function \u03c0r \u03b8, suf\ufb01cient statistic t, and function h for the Gaussian linear policy with \ufb01xed covariance and the Boltzmann linear policy. For convenience of representation r \u03b8 P Rk\u02c6q is a matrix and \u03b8 \u201c vecpr \u03b8 T q P Rd, with d \u201c kq. We denote with ei the i\u2013th vector of the canonical basis of Rk and with b the Kronecker product. Proposition A.1 of Appendix A.2 shows that a suf\ufb01cient condition for the identi\ufb01ability in the case of Gaussian and Boltzmann linear policies is that the second moment matrix of the feature vector Es\u201ed\u03c0\u02da \u00b5 \u201c \u03c6psq\u03c6psqT \u2030 is non\u2013singular along with the fact that the policy \u03c0\u03b8 plays each action with positive probability for the Boltzmann policy. Concentration Result We are now ready to present a concentration result, of independent interest, for the parameters and the negative log\u2013likelihood that represents the central tool of our analysis (details and derivation in Appendix A.2). Theorem 4.1. Under Assumption 3.1 and Assumption 4.1, let D \u201c tpsi, aiqun i\u201c1 be a dataset of n \u0105 0 independent samples, where si \u201e d \u03c0\u03b8\u02da \u00b5 and ai \u201e \u03c0\u03b8\u02dap\u00a8|siq. Let p \u03b8 \u201c arg min\u03b8P\u0398 p \u2113p\u03b8q and \u03b8\u02da \u201c arg min\u03b8P\u0398 \u2113p\u03b8q . If the empirical FIM: p Fp\u03b8q \u201c 1 n n \u00ff i\u201c1 E a\u201e\u03c0\u03b8p\u00a8|sq \u201c tps, a, \u03b8qtps, a, \u03b8qT \u2030 (9) has a positive minimum eigenvalue p \u03bbmin \u0105 0 for all \u03b8 P \u0398, then, for any \u03b4 P r0, 1s, with probability at least 1 \u00b4 \u03b4: \u203a \u203a \u203ap \u03b8 \u00b4 \u03b8\u02da\u203a \u203a \u203a 2 \u010f \u03c3 p \u03bbmin c 2d n log 2d \u03b4 . 
Furthermore, with probability at least 1 \u00b4 \u03b4, individually: \u2113pp \u03b8q \u00b4 \u2113p\u03b8\u02daq \u010f d2\u03c34 p \u03bb2 minn log 2d \u03b4 p \u2113p\u03b8\u02daq \u00b4 p \u2113pp \u03b8q \u010f d2\u03c34 p \u03bb2 minn log 2d \u03b4 . The theorem shows that the L2\u2013norm of the difference between the maximum likelihood parameter p \u03b8 and the true parameter \u03b8\u02da concentrates with rate Opn\u00b41{2q while the likelihood p \u2113and its expectation \u2113concentrate with faster rate Opn\u00b41q. Note that the result assumes that the empirical FIM p Fp\u03b8q has a strictly positive eigenvalue p \u03bbmin \u0105 0. This condition can be enforced as long as the true Fisher matrix Fp\u03b8q has a positive minimum eigenvalue \u03bbmin, i.e., under identi\ufb01ability assumption (Lemma 4.1) and given a suf\ufb01ciently large number of samples. Proposition A.4 of Appendix A.2 provides the minimum number of samples such that with probability at least 1 \u00b4 \u03b4 it holds that p \u03bbmin \u0105 0. Identi\ufb01cation Rule Analysis The goal of the analysis of the identi\ufb01cation rule is to \ufb01nd the critical value cp1q so that the following probabilistic requirement is enforced. De\ufb01nition 4.2 (\u03b4\u2013correctness). Let \u03b4 P r0, 1s. An identi\ufb01cation rule producing p I is \u03b4\u2013correct if: Pr `p I \u2030 I\u02da\u02d8 \u010f \u03b4. We denote with \u03b1 \u201c 1 d\u00b4d\u02da E \u201c\u02c7 \u02c7\u2423 i R I\u02da : i P p Ic (\u02c7 \u02c7\u2030 the expected fraction of parameters that the agent does not control selected by the identi\ufb01cation rule and with \u03b2 \u201c 1 d\u02da E \u201c\u02c7 \u02c7\u2423 i P I\u02da : i R p Ic (\u02c7 \u02c7\u2030 the expected fraction of parameters that the agent does control not selected by the identi\ufb01cation rule.6 We now provide a result that bounds \u03b1 and \u03b2 and employs them to derive \u03b4\u2013correctness. Theorem 4.2. Let p Ic be the set of parameter indexes selected by the Identi\ufb01cation Rule 3.2 obtained using n \u0105 0 i.i.d. samples collected with \u03c0\u03b8\u02da, with \u03b8\u02da P \u0398. Then, under Assumption 3.1 and Assumption 4.1, let \u03b8\u02da i \u201c arg min\u03b8P\u0398i \u2113p\u03b8q for all i P t1, ..., du and \u03bd \u201c min \u2423 1, \u03bbmin \u03c32 ( . If p \u03bbmin \u011b \u03bbmin 2 ? 2 and \u2113p\u03b8\u02da i q \u00b4 lp\u03b8\u02daq \u011b cp1q, it holds that: \u03b1 \u010f 2d exp \" \u00b4cp1q\u03bb2 minn 16d2\u03c34 * \u03b2 \u010f 2d \u00b4 1 d\u02da \u00ff iPI\u02da exp # \u00b4 ` lp\u03b8\u02da i q \u00b4 lp\u03b8\u02daq \u00b4 cp1q \u02d8 \u03bbmin\u03bdn 16pd \u00b4 1q2\u03c32 + . Furthermore, the Identi\ufb01cation Rule 3.2 is ppd \u00b4 d\u02daq\u03b1 ` d\u02da\u03b2q\u2013correct. Since \u03b1 and \u03b2 are functions of cp1q, we could, in principle, employ Theorem 4.2 to enforce a value \u03b4, as in De\ufb01nition 4.2, and derive cp1q. However, Theorem 4.2 is not very attractive in practice as it holds under an assumption regarding the minimum eigenvalue of the FIM and the corresponding estimate, i.e., p \u03bbmin \u011b \u03bbmin 2 ? 2 , that cannot be veri\ufb01ed in practice since \u03bbmin is unknown. Similarly, the constants d\u02da, lp\u03b8\u02da i q and lp\u03b8\u02daq are typically unknown. We will provide in Section 6 a heuristic for setting cp1q. 6We use the symbols \u03b1 and \u03b2 to highlight the analogy between these probabilities and the type I and type II error probabilities of a statistical test. 
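Since lambda_min, sigma and the log-likelihood gaps appearing in Theorem 4.2 are typically unknown, the following sketch is purely illustrative: it estimates the empirical Fisher information of Equation (9) from hypothetical callables describing the policy model (for instance, the Boltzmann sketch above), whose minimum eigenvalue must be positive for Theorem 4.1 to apply, and inverts the bound on alpha to obtain a theoretically motivated critical value c(1) when lambda_min and sigma are assumed known.

# Illustrative only: lambda_min and sigma are generally not available in practice,
# and `action_probs` / `centered_stat` are hypothetical callables describing the
# policy model (e.g., the Boltzmann sketch given earlier).
import numpy as np

def empirical_fim(states_features, action_probs, centered_stat, d):
    """Empirical Fisher information of Equation (9): the average, over the observed
    states, of E_{a ~ pi_theta(.|s)}[t_bar(s, a, theta) t_bar(s, a, theta)^T]."""
    fim = np.zeros((d, d))
    for phi_s in states_features:
        for a, p_a in enumerate(action_probs(phi_s)):
            t_bar = centered_stat(phi_s, a)
            fim += p_a * np.outer(t_bar, t_bar)
    return fim / len(states_features)

def critical_value_from_bound(d, n, sigma, lambda_min, alpha_target):
    """Smallest c(1) such that the bound alpha <= 2d exp{-c(1) lambda_min^2 n / (16 d^2 sigma^4)}
    of Theorem 4.2 equals the target significance alpha_target."""
    return 16.0 * d**2 * sigma**4 * np.log(2.0 * d / alpha_target) / (lambda_min**2 * n)

# The minimum eigenvalue of the empirical FIM must be positive for Theorem 4.1 to apply:
# lambda_hat_min = np.linalg.eigvalsh(empirical_fim(...)).min()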
We sometimes refer to \u03b1 as signi\ufb01cance and to 1\u00b4\u03b2 as power of the identi\ufb01cation rules. \f5 Policy Space Identi\ufb01cation in a Con\ufb01gurable Environment The identi\ufb01cation rules presented so far are unable to distinguish between a parameter set to zero because the agent cannot control it, or because zero is its optimal value. To overcome this issue, we employ the Conf\u2013MDP properties to select a con\ufb01guration in which the parameters we want to examine have an optimal value other than zero. Intuitively, if we want to test whether the agent can control parameter \u03b8i, we should place the agent in an environment \u03c9i P \u2126where \u03b8i is \u201cmaximally important\u201d for the optimal policy. This intuition is justi\ufb01ed by Theorem 4.2, since to maximize the power of the test (1 \u00b4 \u03b2), all other things being equal, we should maximize the log\u2013likelihood gap lp\u03b8\u02da i q \u00b4 lp\u03b8\u02daq, i.e., parameter \u03b8i should be essential to justify the agent\u2019s behavior. Let I P t1, ..., du be a set of parameter indexes we want to test, our ideal goal is to \ufb01nd the environment \u03c9I such that: \u03c9I P arg max \u03c9P\u2126 \u2423 lp\u03b8\u02da I p\u03c9qq \u00b4 lp\u03b8\u02dap\u03c9qq ( , (10) where \u03b8\u02dap\u03c9q P arg max\u03b8P\u0398 JM\u03c9p\u03b8q and \u03b8\u02da I p\u03c9q P arg max\u03b8P\u0398I JM\u03c9p\u03b8q are the parameters of the optimal policies in the environment M\u03c9 in \u03a0\u0398 and \u03a0\u0398I respectively. Clearly, given the samples D collected with a single optimal policy \u03c0\u02dap\u03c90q in a single environment M\u03c90, solving problem (10) is hard as it requires performing an off\u2013distribution optimization both on the space of policy parameters and con\ufb01gurations. For these reasons, we consider a surrogate objective that assumes that the optimal parameter in the new con\ufb01guration can be reached by performing a single gradient step.7 Theorem 5.1. Let I P t1, ..., du and I \u201c t1, ..., duzI. For a vector v, we denote with v|I the vector obtained by setting to zero the components in I. Let \u03b8\u02dap\u03c90q P \u0398 the initial parameter. Let \u03b1 \u011b 0, \u03b8\u02da I p\u03c9q \u201c \u03b80 ` \u03b1\u2207\u03b8JM\u03c9p\u03b8\u02dap\u03c90qq|I and \u03b8\u02dap\u03c9q \u201c \u03b80 ` \u03b1\u2207\u03b8JM\u03c9p\u03b8\u02dap\u03c90qq. Then, under Assumption 3.1, we have: \u2113p\u03b8\u02da I p\u03c9qq \u00b4 \u2113p\u03b8\u02dap\u03c9qq \u011b \u03bbmin\u03b12 2 \u203a \u203a\u2207\u03b8JM\u03c9p\u03b8\u02dap\u03c90qq|I \u203a \u203a2 2 . Thus, we maximize the L2\u2013norm of the gradient components that correspond to the parameters we want to test. Since we have at our disposal only samples D collected with the current policy \u03c0\u03b8\u02dap\u03c90q and in the current environment \u03c90, we have to perform an off\u2013distribution optimization over \u03c9. To this end, we employ an approach analogous to that of Metelli et al. 
(2018) where we optimize the empirical version of the objective with a penalization that accounts for the distance between the distribution over trajectories: CIp\u03c9{\u03c90q \u201c \u203a \u203a \u203ap \u2207\u03b8JM\u03c9{\u03c90 p\u03b8\u02dap\u03c90qq|I \u203a \u203a \u203a 2 2 \u00b4 \u03b6 d p d2p\u03c9}\u03c90q n , (11) where p \u2207\u03b8JM\u03c9{\u03c90 p\u03b8\u02dap\u03c90qq is an off-distribution estimator of the gradient \u2207\u03b8JM\u03c9p\u03b8\u02dap\u03c90qq using samples collected with \u03c90, p d2 is the estimated 2-R\u00b4 enyi divergence (van Erven and Harremo\u00a8 es 2014) that works as a penalization to dis7This idea shares some analogies with the adapted parameter in the meta-learning setting (Finn, Abbeel, and Levine 2017). courage too large updates and \u03b6 \u011b 0 is a regularization parameter. The expression of the estimated gradient, 2-R\u00b4 enyi divergence and the pseudocode are reported in Appendix B. 6 Experimental Evaluation In this section, we present the experimental evaluation of the identi\ufb01cation rules in three RL domains. To set the values of cplq we resort to the Wilk\u2019s asymptotic approximation (Theorem 2.1) to enforce (asymptotic) guarantees on the type I error. For Identi\ufb01cation Rule 3.1 we perform 2d statistical tests by using the same dataset D. Thus, we partition \u03b4 using Bonferroni correction and setting cplq \u201c \u03c72 l,1\u00b4\u03b4{2d, where \u03c72 l,\u03be is the \u03be\u2013quintile of a chi square distribution with l degrees of freedom. Instead, for Identi\ufb01cation Rule 3.2, we perform d statistical test, and thus, we set cp1q \u201c \u03c72 1,1\u00b4\u03b4{d. 6.1 Discrete Grid World The grid world environment is a simple representation of a two-dimensional world (5\u02c65 cells) in which an agent has to reach a target position by moving in the four directions. The goal of this set of experiments is to show the advantages of con\ufb01guring the environment when performing the policy space identi\ufb01cation using rule 3.2. The initial position of the agent and the target position are drawn at the beginning of each episode from a Boltzmann distribution \u00b5\u03c9. The agent plays a Boltzmann linear policy \u03c0\u03b8 with binary features \u03c6 indicating its current row and column and the row and column of the goal.8 For each run, the agent can control a subset I\u02da of the parameters \u03b8I\u02da associated with those features, which is randomly selected. Furthermore, the supervisor can con\ufb01gure the environment by changing the parameters \u03c9 of the initial state distribution \u00b5\u03c9. Thus, the supervisor can induce the agent to explore certain regions of the grid world and, consequently, change the relevance of the corresponding parameters in the optimal policy. Figure 1 shows the empirical p \u03b1 and p \u03b2, i.e., the fraction of parameters that the agent does not control that are wrongly selected and the fraction of those the agent controls that are not selected respectively, as a function of the number n of episodes used to perform the identi\ufb01cation. We compare two cases: conf where the identi\ufb01cation is carried out by also con\ufb01guring the environment, i.e., optimizing Equation (11), and no-conf in which the identi\ufb01cation is performed in the original environment only. In both cases, we can see that p \u03b1 is almost independent of the number of samples, as it is directly controlled by the critical value cp1q. 
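The critical values used throughout these experiments reduce to chi-square quantiles and can be computed directly; a minimal sketch, where delta is the overall confidence parameter partitioned by the Bonferroni correction, is the following.

# Bonferroni-corrected critical values based on Wilks' asymptotic approximation,
# as described above for the combinatorial and simplified identification rules.
from scipy.stats import chi2

def combinatorial_critical_value(l, d, delta):
    # c(l): (1 - delta / 2^d)-quantile of a chi-square with l degrees of freedom
    return chi2.ppf(1.0 - delta / 2**d, df=l)

def simplified_critical_value(d, delta):
    # c(1): (1 - delta / d)-quantile of a chi-square with 1 degree of freedom
    return chi2.ppf(1.0 - delta / d, df=1)

For instance, with d = 10 and delta = 0.1, the simplified rule uses c(1) = chi2.ppf(0.99, 1), roughly 6.63.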
Differently, β̂ decreases as the number of samples increases, i.e., the power of the test 1 − β̂ increases with n. Remarkably, we observe that configuring the environment gives a significant advantage in understanding the parameters controlled by the agent w.r.t. using a fixed environment, as β̂ decreases faster in the conf case. This phenomenon also justifies empirically our choice of objective (Equation (11)) for selecting the new environment. Hyperparameters, further experimental results, together with experiments on a continuous version of the grid world, are reported in Appendix C.1–C.2. 8The features are selected to fulfill Lemma 4.1. Figure 1: Discrete Grid World: α̂ and β̂ error for the conf and no-conf cases varying the number of episodes (25 runs, 95% c.i.). Figure 2: Minigolf: performance of the optimal policy varying the putter length ω for agents A1 and A2 (left) and performance of the optimal policy for agent A2 with four different strategies for selecting ω (right) (100 runs, 95% c.i.). Figure 3: Simulated Car Driving: fraction of correct identifications varying the number of episodes (100 runs, 95% c.i.). 6.2 Minigolf In the Minigolf environment (Lazaric, Restelli, and Bonarini 2007), an agent hits a ball using a putter with the goal of reaching the hole in the minimum number of attempts. Surpassing the hole causes the termination of the episode and a large penalization. The agent selects the force applied to the putter by playing a Gaussian policy linear in some polynomial features (complying with Lemma 4.1) of the distance from the hole (x) and the friction of the green (f). We consider two agents: A1 has access to both x and f, whereas A2 knows only x. Thus, we expect that A1 learns a policy that allows reaching the hole in a smaller number of hits, compared to A2, as it can calibrate the force according to the friction; whereas A2 has to be more conservative, being unaware of f. There is also a supervisor in charge of selecting, for the two agents, the best putter length ω, i.e., the configurable parameter of the environment. In this experiment, we want to highlight that knowing the policy space might be of crucial importance when learning in a Conf–MDP. Figure 2-left shows the performance of the optimal policy as a function of the putter length ω. We can see that for agent A1 the optimal putter length is ω*_A1 = 5, while for agent A2 it is ω*_A2 = 11.5. Figure 2-right compares the performance of the optimal policy of agent A2 when the putter length ω is chosen by the supervisor using four different strategies. In (i) the configuration is sampled uniformly in the interval [1, 15]. In (ii) the supervisor employs the optimal configuration for agent A1 (ω = 5), i.e., assuming the agent is aware of the friction. (iii) is obtained by selecting the optimal configuration of the policy space produced by using our identification rule 3.2.
Finally, (iv) is derived by employing an oracle that knows the true agent\u2019s policy space (\u03c9 \u201c 11.5). We can see that the performance of the identi\ufb01cation procedure (iii) is comparable with that of the oracle (iv) and notably higher than the performance when employing an incorrect policy space (ii). Hyperparameters and additional experiments are reported in Appendix C.3. 6.3 Simulated Car Driving We consider a simple version of a car driving simulator, in which the agent has to reach the end of a road in the minimum amount of time, avoiding running off-road. The agent perceives its speed, four sensors placed at different angles that provide distance from the edge of the road and it can act on acceleration and steering. The purpose of this experiment is to show a case in which the identi\ufb01ability assumption (Assumption 3.1) may not be satis\ufb01ed. The policy \u03c0\u03b8 is modeled as a Gaussian policy whose mean is computed via a single hidden layer neural network with 8 neurons. Some of the sensors are not available to the agent, our goal is to identify which ones the agent can perceive. In Figure 3, we compare the performance of the Identi\ufb01cation Rules 3.1 (Combinatorial) and 3.2 (Simpli\ufb01ed), showing the fraction of runs that correctly identify the policy space. We note that, while for a small number of samples the simpli\ufb01ed rule seems to outperform, when the number of samples increases the combinatorial rule displays remarkable stability, approaching the correct identi\ufb01cation in all the runs. This is explained by the fact that, when multiple representations for the same policy are possible, considering one parameter at a time might induce the simpli\ufb01ed rule to select a wrong set of parameters. Hyperparameters are reported in Appendix C.4. 7 Discussion and Conclusions In this paper, we addressed the problem of identifying the policy space of an agent by simply observing its behavior when playing the optimal policy. We introduced two identi\ufb01cation rules, both based on the GLR test, which can be applied to select the parameters controlled by the agent. Additionally, we have shown how to use the con\ufb01gurability property of the environment to enhance the effectiveness of identi\ufb01cation rules. The experimental evaluation highlights two essential points. First, the identi\ufb01cation of the policy space brings advantages to the learning process in a Conf\u2013MDP, helping to choose wisely the most suitable environment con\ufb01guration. Second, we have illustrated that con\ufb01guring the environment is bene\ufb01cial to speed up the identi\ufb01cation process. We believe that this work opens numerous future research directions, both theoretical, such as the analysis of the combinatorial identi\ufb01cation rule, and empirical, like the application of our identi\ufb01cation rules to imitation learning settings." + }, + { + "url": "http://arxiv.org/abs/1809.06098v2", + "title": "Policy Optimization via Importance Sampling", + "abstract": "Policy optimization is an effective reinforcement learning approach to solve\ncontinuous control tasks. Recent achievements have shown that alternating\nonline and offline optimization is a successful choice for efficient trajectory\nreuse. However, deciding when to stop optimizing and collect new trajectories\nis non-trivial, as it requires to account for the variance of the objective\nfunction estimate. 
In this paper, we propose a novel, model-free, policy search\nalgorithm, POIS, applicable in both action-based and parameter-based settings.\nWe first derive a high-confidence bound for importance sampling estimation;\nthen we define a surrogate objective function, which is optimized offline\nwhenever a new batch of trajectories is collected. Finally, the algorithm is\ntested on a selection of continuous control tasks, with both linear and deep\npolicies, and compared with state-of-the-art policy optimization methods.", + "authors": "Alberto Maria Metelli, Matteo Papini, Francesco Faccio, Marcello Restelli", + "published": "2018-09-17", + "updated": "2018-10-31", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "cs.AI", + "stat.ML" + ], + "main_content": "Introduction In recent years, policy search methods [9] have proved to be valuable Reinforcement Learning (RL) [49] approaches thanks to their successful achievements in continuous control tasks [e.g., 22, 41, 43, 42], robotic locomotion [e.g., 52, 19] and partially observable environments [e.g., 27]. These algorithms can be roughly classi\ufb01ed into two categories: action-based methods [50, 33] and parameter-based methods [44]. The former, usually known as policy gradient (PG) methods, perform a search in a parametric policy space by following the gradient of the utility function estimated by means of a batch of trajectories collected from the environment [49]. In contrast, in parameterbased methods, the search is carried out directly in the space of parameters by exploiting global optimizers [e.g., 40, 15, 47, 51] or following a proper gradient direction like in Policy Gradients with Parameter-based Exploration (PGPE) [44, 62, 45]. A major question in policy search methods is: how should we use a batch of trajectories in order to exploit its information in the most ef\ufb01cient way? On one hand, on-policy methods leverage on the batch to perform a single gradient step, after which new trajectories are collected with the updated policy. Online PG methods are likely the most widespread policy search approaches: starting from the traditional algorithms based on stochastic policy gradient [50], like REINFORCE [63] and G(PO)MDP [3], moving toward more modern methods, such as Trust Region Policy Optimization (TRPO) [41]. These methods, however, rarely exploit the available trajectories in an ef\ufb01cient way, since each batch is thrown away after just one gradient update. On the other hand, off-policy methods maintain a behavioral policy, used to explore the environment and to collect samples, and a target policy which is optimized. The concept of offpolicy learning is rooted in value-based RL [61, 29, 26] and it was \ufb01rst adapted to PG in [8], using an actor-critic architecture. The approach has been extended to Deterministic Policy Gradient (DPG) [46], which allows optimizing deterministic policies while keeping a stochastic policy for exploration. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montr\u00e9al, Canada. arXiv:1809.06098v2 [cs.LG] 31 Oct 2018 \fMore recently, an ef\ufb01cient version of DPG coupled with a deep neural network to represent the policy has been proposed, named Deep Deterministic Policy Gradient (DDPG) [22]. In the parameter-based framework, even though the original formulation [44] introduces an online algorithm, an extension has been proposed to ef\ufb01ciently reuse the trajectories in an of\ufb02ine scenario [66]. 
Furthermore, PGPE-like approaches allow overcoming several limitations of classical PG, like the need for a stochastic policy and the high variance of the gradient estimates.1 While on-policy algorithms are, by nature, online, as they need to be fed with fresh samples whenever the policy is updated, off-policy methods can take advantage of mixing online and of\ufb02ine optimization. This can be done by alternately sampling trajectories and performing optimization epochs with the collected data. A prime example of this alternating procedure is Proximal Policy Optimization (PPO) [43], that has displayed remarkable performance on continuous control tasks. Off-line optimization, however, introduces further sources of approximation, as the gradient w.r.t. the target policy needs to be estimated (off-policy) with samples collected with a behavioral policy. A common choice is to adopt an importance sampling (IS) [28, 16] estimator in which each sample is reweighted proportionally to the likelihood of being generated by the target policy. However, directly optimizing this utility function is impractical since it displays a wide variance most of the times [28]. Intuitively, the variance increases proportionally to the distance between the behavioral and the target policy; thus, the estimate is reliable as long as the two policies are close enough. Preventing uncontrolled updates in the space of policy parameters is at the core of the natural gradient approaches [1] applied effectively both on PG methods [17, 32, 62] and on PGPE methods [25]. More recently, this idea has been captured (albeit indirectly) by TRPO, which optimizes via (approximate) natural gradient a surrogate objective function, derived from safe RL [17, 34], subject to a constraint on the KullbackLeibler divergence between the behavioral and target policy.2 Similarly, PPO performs a truncation of the importance weights to discourage the optimization process from going too far. Although TRPO and PPO, together with DDPG, represent the state-of-the-art policy optimization methods in RL for continuous control, they do not explicitly encode in their objective function the uncertainty injected by the importance sampling procedure. A more theoretically grounded analysis has been provided for policy selection [10], model-free [55] and model-based [53] policy evaluation (also accounting for samples collected with multiple behavioral policies), and combined with options [14]. Subsequently, in [54] these methods have been extended for policy improvement, deriving a suitable concentration inequality for the case of truncated importance weights. Unfortunately, these methods are hardly scalable to complex control tasks. A more detailed review of the state-of-the-art policy optimization algorithms is reported in Appendix A. In this paper, we propose a novel, model-free, actor-only, policy optimization algorithm, named Policy Optimization via Importance Sampling (POIS) that mixes online and of\ufb02ine optimization to ef\ufb01ciently exploit the information contained in the collected trajectories. POIS explicitly accounts for the uncertainty introduced by the importance sampling by optimizing a surrogate objective function. The latter captures the trade-off between the estimated performance improvement and the variance injected by the importance sampling. The main contributions of this paper are theoretical, algorithmic and experimental. 
After revising some notions about importance sampling (Section 3), we propose a concentration inequality, of independent interest, for high-con\ufb01dence \u201coff-distribution\u201d optimization of objective functions estimated via importance sampling (Section 4). Then we show how this bound can be customized into a surrogate objective function in order to either search in the space of policies (Action-based POIS) or to search in the space of parameters (Parameter-bases POIS). The resulting algorithm (in both the action-based and the parameter-based \ufb02avor) collects, at each iteration, a set of trajectories. These are used to perform of\ufb02ine optimization of the surrogate objective via gradient ascent (Section 5), after which a new batch of trajectories is collected using the optimized policy. Finally, we provide an experimental evaluation with both linear policies and deep neural policies to illustrate the advantages and limitations of our approach compared to state-of-the-art algorithms (Section 6) on classical control tasks [11, 56]. The proofs for all Theorems and Lemmas are reported in Appendix B. The implementation of POIS can be found at https://github.com/T3p/pois. 1Other solutions to these problems have been proposed in the action-based literature, like the aforementioned DPG algorithm, the gradient baselines [33] and the actor-critic architectures [20]. 2Note that this regularization term appears in the performance improvement bound, which contains exact quantities only. Thus, it does not really account for the uncertainty derived from the importance sampling. 2 \f2 Preliminaries A discrete-time Markov Decision Process (MDP) [36] is de\ufb01ned as a tuple M = (S, A, P, R, \u03b3, D) where S is the state space, A is the action space, P(\u00b7|s, a) is a Markovian transition model that assigns for each state-action pair (s, a) the probability of reaching the next state s\u2032, \u03b3 \u2208[0, 1] is the discount factor, R(s, a) \u2208[\u2212Rmax, Rmax] assigns the expected reward for performing action a in state s and D is the distribution of the initial state. The behavior of an agent is described by a policy \u03c0(\u00b7|s) that assigns for each state s the probability of performing action a. A trajectory \u03c4 \u2208T is a sequence of state-action pairs \u03c4 = (s\u03c4,0, a\u03c4,0, . . . , s\u03c4,H\u22121, a\u03c4,H\u22121, s\u03c4,H), where H is the actual trajectory horizon. The performance of an agent is evaluated in terms of the expected return, i.e., the expected discounted sum of the rewards collected along the trajectory: E\u03c4 [R(\u03c4)], where R(\u03c4) = PH\u22121 t=0 \u03b3tR(s\u03c4,t, a\u03c4,t) is the trajectory return. We focus our attention to the case in which the policy belongs to a parametric policy space \u03a0\u0398 = {\u03c0\u03b8 : \u03b8 \u2208\u0398 \u2286Rp}. In parameter-based approaches, the agent is equipped with a hyperpolicy \u03bd used to sample the policy parameters at the beginning of each episode. The hyperpolicy belongs itself to a parametric hyperpolicy space NP = {\u03bd\u03c1 : \u03c1 \u2208P \u2286Rr}. The expected return can be expressed, in the parameter-based case, as a double expectation: one over the policy parameter space \u0398 and one over the trajectory space T : JD(\u03c1) = Z \u0398 Z T \u03bd\u03c1(\u03b8)p(\u03c4|\u03b8)R(\u03c4) d\u03c4 d\u03b8, (1) where p(\u03c4|\u03b8) = D(s0) QH\u22121 t=0 \u03c0\u03b8(at|st)P(st+1|st, at) is the trajectory density function. 
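As a minimal illustration of Equation (1), the sketch below estimates the parameter-based objective by Monte Carlo, assuming a Gaussian hyperpolicy with independent components and a hypothetical `rollout(theta)` routine that runs the deterministic policy u_theta for one episode and returns the discounted return R(tau); both are assumptions made only for this example.

# Monte Carlo sketch of the parameter-based objective J_D(rho) of Equation (1);
# the Gaussian hyperpolicy and the `rollout` routine are illustrative assumptions.
import numpy as np

def estimate_hyperpolicy_return(mu, sigma, rollout, n_episodes=100, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    returns = []
    for _ in range(n_episodes):
        theta = rng.normal(mu, sigma)       # policy parameters sampled from nu_rho, rho = (mu, sigma)
        returns.append(rollout(theta))      # discounted return R(tau), with tau ~ p(. | theta)
    return float(np.mean(returns))          # unbiased Monte Carlo estimate of J_D(rho)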
The goal of a parameter-based learning agent is to determine the hyperparameters \u03c1\u2217so as to maximize JD(\u03c1). If \u03bd\u03c1 is stochastic and differentiable, the hyperparameters can be learned according to the gradient ascent update: \u03c1\u2032 = \u03c1 + \u03b1\u2207\u03c1JD(\u03c1), where \u03b1 > 0 is the step size and \u2207\u03c1JD(\u03c1) = R \u0398 R T \u03bd\u03c1(\u03b8)p(\u03c4|\u03b8)\u2207\u03c1 log \u03bd\u03c1(\u03b8)R(\u03c4) d\u03c4 d\u03b8. Since the stochasticity of the hyperpolicy is a suf\ufb01cient source of exploration, deterministic action policies of the kind \u03c0\u03b8(a|s) = \u03b4(a \u2212u\u03b8(s)) are typically considered, where \u03b4 is the Dirac delta function and u\u03b8 is a deterministic mapping from S to A. In the action-based case, on the contrary, the hyperpolicy \u03bd\u03c1 is a deterministic distribution \u03bd\u03c1(\u03b8) = \u03b4(\u03b8 \u2212g(\u03c1)), where g(\u03c1) is a deterministic mapping from P to \u0398. For this reason, the dependence on \u03c1 is typically not represented and the expected return expression simpli\ufb01es into a single expectation over the trajectory space T : JD(\u03b8) = Z T p(\u03c4|\u03b8)R(\u03c4) d\u03c4. (2) An action-based learning agent aims to \ufb01nd the policy parameters \u03b8\u2217that maximize JD(\u03b8). In this case, we need to enforce exploration by means of the stochasticity of \u03c0\u03b8. For stochastic and differentiable policies, learning can be performed via gradient ascent: \u03b8\u2032 = \u03b8 + \u03b1\u2207\u03b8JD(\u03b8), where \u2207\u03b8JD(\u03b8) = R T p(\u03c4|\u03b8)\u2207\u03b8 log p(\u03c4|\u03b8)R(\u03c4) d\u03c4. 3 Evaluation via Importance Sampling In off-policy evaluation [55, 53], we aim to estimate the performance of a target policy \u03c0T (or hyperpolicy \u03bdT ) given samples collected with a behavioral policy \u03c0B (or hyperpolicy \u03bdB). More generally, we face the problem of estimating the expected value of a deterministic bounded function f (\u2225f\u2225\u221e< +\u221e) of random variable x taking values in X under a target distribution P, after having collected samples from a behavioral distribution Q. The importance sampling estimator (IS) [6, 28] corrects the distribution with the importance weights (or Radon\u2013Nikodym derivative or likelihood ratio) wP/Q(x) = p(x)/q(x): b \u00b5P/Q = 1 N N X i=1 p(xi) q(xi)f(xi) = 1 N N X i=1 wP/Q(xi)f(xi), (3) where x = (x1, x2, . . . , xN)T is sampled from Q and we assume q(x) > 0 whenever f(x)p(x) \u0338= 0. This estimator is unbiased (Ex\u223cQ[b \u00b5P/Q] = Ex\u223cP [f(x)]) but it may exhibit an undesirable behavior due to the variability of the importance weights, showing, in some cases, in\ufb01nite variance. Intuitively, the magnitude of the importance weights provides an indication of how much the probability measures P and Q are dissimilar. This notion can be formalized by the R\u00e9nyi divergence [39, 58], an information-theoretic dissimilarity index between probability measures. 3 \fR\u00e9nyi divergence Let P and Q be two probability measures on a measurable space (X, F) such that P \u226aQ (P is absolutely continuous w.r.t. Q) and Q is \u03c3-\ufb01nite. Let P and Q admit p and q as Lebesgue probability density functions (p.d.f.), respectively. 
The \u03b1-R\u00e9nyi divergence is de\ufb01ned as: D\u03b1(P\u2225Q) = 1 \u03b1 \u22121 log Z X \u0012 dP dQ \u0013\u03b1 dQ = 1 \u03b1 \u22121 log Z X q(x) \u0012p(x) q(x) \u0013\u03b1 dx, (4) where dP/ dQ is the Radon\u2013Nikodym derivative of P w.r.t. Q and \u03b1 \u2208[0, \u221e]. Some remarkable cases are: \u03b1 = 1 when D1(P\u2225Q) = DKL(P\u2225Q) and \u03b1 = \u221eyielding D\u221e(P\u2225Q) = log ess supX dP/ dQ. Importing the notation from [7], we indicate the exponentiated \u03b1-R\u00e9nyi divergence as d\u03b1(P\u2225Q) = exp (D\u03b1(P\u2225Q)). With little abuse of notation, we will replace D\u03b1(P\u2225Q) with D\u03b1(p\u2225q) whenever possible within the context. The R\u00e9nyi divergence provides a convenient expression for the moments of the importance weights: Ex\u223cQ \u0002 wP/Q(x)\u03b1\u0003 = d\u03b1(P\u2225Q). Moreover, Varx\u223cQ \u0002 wP/Q(x) \u0003 = d2(P\u2225Q) \u22121 and ess supx\u223cQ wP/Q(x) = d\u221e(P\u2225Q) [7]. To mitigate the variance problem of the IS estimator, we can resort to the self-normalized importance sampling estimator (SN) [6]: e \u00b5P/Q = PN i=1 wP/Q(xi)f(xi) PN i=1 wP/Q(xi) = N X i=1 e wP/Q(xi)f(xi), (5) where e wP/Q(x) = wP/Q(x)/ PN i=1 wP/Q(xi) is the self-normalized importance weight. Differently from b \u00b5P/Q, e \u00b5P/Q is biased but consistent [28] and it typically displays a more desirable behavior because of its smaller variance.3 Given the realization x1, x2, . . . , xN we can interpret the SN estimator as the expected value of f under an approximation of the distribution P made by N deltas, i.e., e p(x) = PN i=1 e wP/Q(x)\u03b4(x \u2212xi). The problem of assessing the quality of the SN estimator has been extensively studied by the simulation community, producing several diagnostic indexes to indicate when the weights might display problematic behavior [28]. The effective sample size (ESS) was introduced in [21] as the number of samples drawn from P so that the variance of the Monte Carlo estimator e \u00b5P/P is approximately equal to the variance of the SN estimator e \u00b5P/Q computed with N samples. Here we report the original de\ufb01nition and its most common estimate: ESS(P\u2225Q) = N Varx\u223cQ \u0002 wP/Q(x) \u0003 + 1 = N d2(P\u2225Q), d ESS(P\u2225Q) = 1 PN i=1 e wP/Q(xi)2 . (6) The ESS has an interesting interpretation: if d2(P\u2225Q) = 1, i.e., P = Q almost everywhere, then ESS = N since we are performing Monte Carlo estimation. Otherwise, the ESS decreases as the dissimilarity between the two distributions increases. In the literature, other ESS-like diagnostics have been proposed that also account for the nature of f [23]. 4 Optimization via Importance Sampling The off-policy optimization problem [54] can be formulated as \ufb01nding the best target policy \u03c0T (or hyperpolicy \u03bdT ), i.e., the one maximizing the expected return, having access to a set of samples collected with a behavioral policy \u03c0B (or hyperpolicy \u03bdB). In a more abstract sense, we aim to determine the target distribution P that maximizes Ex\u223cP [f(x)] having samples collected from the \ufb01xed behavioral distribution Q. In this section, we analyze the problem of de\ufb01ning a proper objective function for this purpose. Directly optimizing the estimator b \u00b5P/Q or e \u00b5P/Q is, in most of the cases, unsuccessful. With enough freedom in choosing P, the optimal solution would assign as much probability mass as possible to the maximum value among f(xi). 
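The estimators and the diagnostic recalled in this section can be summarized in a few lines; the sketch below takes samples drawn from the behavioral distribution Q, together with the log-densities of P and Q at those samples, and returns the IS estimate (3), the SN estimate (5) and the empirical effective sample size (6). The univariate Gaussian check at the end is only illustrative.

# Importance sampling (IS) and self-normalized (SN) estimators plus empirical ESS,
# computed from log-densities of the target P and the behavioral Q at samples from Q.
import numpy as np

def importance_sampling_estimates(f_values, log_p_target, log_q_behavioral):
    w = np.exp(log_p_target - log_q_behavioral)        # importance weights w_{P/Q}(x_i)
    mu_is = np.mean(w * f_values)                      # IS estimator (3), unbiased
    w_tilde = w / np.sum(w)                            # self-normalized weights
    mu_sn = np.sum(w_tilde * f_values)                 # SN estimator (5), biased but consistent
    ess = 1.0 / np.sum(w_tilde ** 2)                   # empirical ESS, Equation (6)
    return mu_is, mu_sn, ess

# Illustrative check with univariate Gaussians: Q = N(0, 1), P = N(0.5, 1), f(x) = x.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=1000)
log_q = -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)
log_p = -0.5 * (x - 0.5) ** 2 - 0.5 * np.log(2 * np.pi)
print(importance_sampling_estimates(x, log_p, log_q))  # both estimates should be close to 0.5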
Clearly, in this scenario, the estimator is unreliable and displays a large variance. For this reason, we adopt a risk-averse approach and we decide to optimize a statistical lower bound of the expected value Ex\u223cP [f(x)] that holds with high con\ufb01dence. We start by analyzing the behavior of the IS estimator and we provide the following result that bounds the variance of b \u00b5P/Q in terms of the Renyi divergence. Lemma 4.1. Let P and Q be two probability measures on the measurable space (X, F) such that P \u226aQ. Let x = (x1, x2, . . . , xN)T i.i.d. random variables sampled from Q and f : X \u2192R be a 3Note that \f \fe \u00b5P/Q \f \f \u2264\u2225f\u2225\u221e. Therefore, its variance is always \ufb01nite. 4 \fbounded function (\u2225f\u2225\u221e< +\u221e). Then, for any N > 0, the variance of the IS estimator b \u00b5P/Q can be upper bounded as: Var x\u223cQ \u0002 b \u00b5P/Q \u0003 \u22641 N \u2225f\u22252 \u221ed2 (P\u2225Q) . (7) When P = Q almost everywhere, we get Varx\u223cQ \u0002 b \u00b5Q/Q \u0003 \u22641 N \u2225f\u22252 \u221e, a well-known bound on the variance of a Monte Carlo estimator. Recalling the de\ufb01nition of ESS (6) we can rewrite the previous bound as: Varx\u223cQ \u0002 b \u00b5P/Q \u0003 \u2264 \u2225f\u22252 \u221e ESS(P \u2225Q), i.e., the variance scales with ESS instead of N. While b \u00b5P/Q can have unbounded variance even if f is bounded, the SN estimator e \u00b5P/Q is always bounded by \u2225f\u2225\u221eand therefore it always has a \ufb01nite variance. Since the normalization term makes all the samples e wP/Q(xi)f(xi) interdependent, an exact analysis of its bias and variance is more challenging. Several works adopted approximate methods to provide an expression for the variance [16]. We propose an analysis of bias and variance of the SN estimator in Appendix D. 4.1 Concentration Inequality Finding a suitable concentration inequality for off-policy learning was studied in [55] for of\ufb02ine policy evaluation and subsequently in [54] for optimization. On one hand, fully empirical concentration inequalities, like Student-T, besides the asymptotic approximation, are not suitable in this case since the empirical variance needs to be estimated with importance sampling as well injecting further uncertainty [28]. On the other hand, several distribution-free inequalities like Hoeffding require knowing the maximum of the estimator, which might not exist (d\u221e(P\u2225Q) = \u221e) for the IS estimator. Constraining d\u221e(P\u2225Q) to be \ufb01nite often introduces unacceptable limitations. For instance, in the case of univariate Gaussian distributions, it prevents a step that selects a target variance larger than the behavioral one from being performed (see Appendix C).4 Even Bernstein inequalities [4], are hardly applicable since, for instance, in the case of univariate Gaussian distributions, the importance weights display a fat tail behavior (see Appendix C). We believe that a reasonable trade-off is to require the variance of the importance weights to be \ufb01nite, that is equivalent to require d2(P\u2225Q) < \u221e, i.e., \u03c3P < 2\u03c3Q for univariate Gaussians. For this reason, we resort to Chebyshev-like inequalities and we propose the following concentration bound derived from Cantelli\u2019s inequality and customized for the IS estimator. Theorem 4.1. Let P and Q be two probability measures on the measurable space (X, F) such that P \u226aQ and d2(P\u2225Q) < +\u221e. Let x1, x2, . . . , xN be i.i.d. 
random variables sampled from Q, and f : X \u2192R be a bounded function (\u2225f\u2225\u221e< +\u221e). Then, for any 0 < \u03b4 \u22641 and N > 0 with probability at least 1 \u2212\u03b4 it holds that: E x\u223cP [f(x)] \u22651 N N X i=1 wP/Q(xi)f(xi) \u2212\u2225f\u2225\u221e r (1 \u2212\u03b4)d2(P\u2225Q) \u03b4N . (8) The bound highlights the interesting trade-off between the estimated performance and the uncertainty introduced by changing the distribution. The latter enters in the bound as the 2-R\u00e9nyi divergence between the target distribution P and the behavioral distribution Q. Intuitively, we should trust the estimator b \u00b5P/Q as long as P is not too far from Q. For the SN estimator, accounting for the bias, we are able to obtain a bound (reported in Appendix D), with a similar dependence on P as in Theorem 4.1, albeit with different constants. Renaming all constants involved in the bound of Theorem 4.1 as \u03bb = \u2225f\u2225\u221e p (1 \u2212\u03b4)/\u03b4, we get a surrogate objective function. The optimization can be carried out in different ways. The following section shows why using the natural gradient could be a successful choice in case P and Q can be expressed as parametric differentiable distributions. 4.2 Importance Sampling and Natural Gradient We can look at a parametric distribution P\u03c9, having p\u03c9 as a density function, as a point on a probability manifold with coordinates \u03c9 \u2208\u2126. If p\u03c9 is differentiable, the Fisher Information Matrix (FIM) [38, 2] is de\ufb01ned as: F(\u03c9) = R X p\u03c9(x)\u2207\u03c9 log p\u03c9(x)\u2207\u03c9 log p\u03c9(x)T dx. This matrix is, up to 4Although the variance tends to be reduced in the learning process, there might be cases in which it needs to be increased (e.g., suppose we start with a behavioral policy with small variance, it might be bene\ufb01cial increasing the variance to enforce exploration). 5 \fAlgorithm 1 Action-based POIS Initialize \u03b80 0 arbitrarily for j = 0, 1, 2, ..., until convergence do Collect N trajectories with \u03c0\u03b8j 0 for k = 0, 1, 2, ..., until convergence do Compute G(\u03b8j k), \u2207\u03b8j kL(\u03b8j k/\u03b8j 0) and \u03b1k \u03b8j k+1 = \u03b8j k + \u03b1kG(\u03b8j k)\u22121\u2207\u03b8j kL(\u03b8j k/\u03b8j 0) end for \u03b8j+1 0 = \u03b8j k end for Algorithm 2 Parameter-based POIS Initialize \u03c10 0 arbitrarily for j = 0, 1, 2, ..., until convergence do Sample N policy parameters \u03b8j i from \u03bd\u03c1j 0 Collect a trajectory with each \u03c0\u03b8j i for k = 0, 1, 2, ..., until convergence do Compute G(\u03c1j k), \u2207\u03c1j kL(\u03c1j k/\u03c1j 0) and \u03b1k \u03c1j k+1 = \u03c1j k + \u03b1kG(\u03c1j k)\u22121\u2207\u03c1j kL(\u03c1j k/\u03c1j 0) end for \u03c1j+1 0 = \u03c1j k end for a scale, an invariant metric [1] on parameter space \u2126, i.e., \u03ba(\u03c9\u2032 \u2212\u03c9)T F(\u03c9)(\u03c9\u2032 \u2212\u03c9) is independent on the speci\ufb01c parameterization and provides a second order approximation of the distance between p\u03c9 and p\u03c9\u2032 on the probability manifold up to a scale factor \u03ba \u2208R. Given a loss function L(\u03c9), we de\ufb01ne the natural gradient [1, 18] as e \u2207\u03c9L(\u03c9) = F\u22121(\u03c9)\u2207\u03c9L(\u03c9), which represents the steepest ascent direction in the probability manifold. Thanks to the invariance property, there is a tight connection between the geometry induced by the R\u00e9nyi divergence and the Fisher information metric. Theorem 4.2. Let p\u03c9 be a p.d.f. 
differentiable w.r.t. \u03c9 \u2208\u2126. Then, it holds that, for the R\u00e9nyi divergence: D\u03b1(p\u03c9\u2032\u2225p\u03c9) = \u03b1 2 (\u03c9\u2032 \u2212\u03c9)T F(\u03c9) (\u03c9\u2032 \u2212\u03c9)+o(\u2225\u03c9\u2032\u2212\u03c9\u22252 2), and for the exponentiated R\u00e9nyi divergence: d\u03b1(p\u03c9\u2032\u2225p\u03c9) = 1 + \u03b1 2 (\u03c9\u2032 \u2212\u03c9)T F(\u03c9) (\u03c9\u2032 \u2212\u03c9) + o(\u2225\u03c9\u2032 \u2212\u03c9\u22252 2). This result provides an approximate expression for the variance of the importance weights, as Varx\u223cp\u03c9 \u0002 w\u03c9\u2032/\u03c9(x) \u0003 = d2(p\u03c9\u2032\u2225p\u03c9) \u22121 \u2243\u03b1 2 (\u03c9\u2032 \u2212\u03c9)T F(\u03c9) (\u03c9\u2032 \u2212\u03c9). It also justi\ufb01es the use of natural gradients in off-distribution optimization, since a step in natural gradient direction has a controllable effect on the variance of the importance weights. 5 Policy Optimization via Importance Sampling In this section, we discuss how to customize the bound provided in Theorem 4.1 for policy optimization, developing a novel model-free actor-only policy search algorithm, named Policy Optimization via Importance Sampling (POIS). We propose two versions of POIS: Action-based POIS (A-POIS), which is based on a policy gradient approach, and Parameter-based POIS (P-POIS), which adopts the PGPE framework. A more detailed description of the implementation aspects is reported in Appendix E. 5.1 Action-based POIS In Action-based POIS (A-POIS) we search for a policy that maximizes the performance index JD(\u03b8) within a parametric space \u03a0\u0398 = {\u03c0\u03b8 : \u03b8 \u2208\u0398 \u2286Rp} of stochastic differentiable policies. In this context, the behavioral (resp. target) distribution Q (resp. P) becomes the distribution over trajectories p(\u00b7|\u03b8) (resp. p(\u00b7|\u03b8\u2032)) induced by the behavioral policy \u03c0\u03b8 (resp. target policy \u03c0\u03b8\u2032) and f is the trajectory return R(\u03c4) which is uniformly bounded as |R(\u03c4)| \u2264Rmax 1\u2212\u03b3H 1\u2212\u03b3 .5 The surrogate loss function cannot be directly optimized via gradient ascent since computing d\u03b1 \u0000p(\u00b7|\u03b8\u2032)\u2225p(\u00b7|\u03b8) \u0001 requires the approximation of an integral over the trajectory space and, for stochastic environments, to know the transition model P, which is unknown in a model-free setting. Simple bounds to this quantity, like d\u03b1 \u0000p(\u00b7|\u03b8\u2032)\u2225p(\u00b7|\u03b8) \u0001 \u2264sups\u2208S d\u03b1 (\u03c0\u03b8\u2032(\u00b7|s)\u2225\u03c0\u03b8(\u00b7|s))H, besides being hard to compute due to the presence of the supremum, are extremely conservative since the R\u00e9nyi divergence is raised to the horizon H. We suggest the replacement of the R\u00e9nyi divergence with an estimate b d2 \u0000p(\u00b7|\u03b8\u2032)\u2225p(\u00b7|\u03b8) \u0001 = 1 N PN i=1 QH\u22121 t=0 d2 (\u03c0\u03b8\u2032(\u00b7|s\u03c4i,t)\u2225\u03c0\u03b8(\u00b7|s\u03c4i,t)) de\ufb01ned only in terms of the policy R\u00e9nyi divergence (see Appendix E.2 for details). Thus, we obtain the following surrogate 5When \u03b3 \u21921 the bound becomes HRmax. 
6 \fobjective: LA\u2212POIS \u03bb (\u03b8\u2032/\u03b8) = 1 N N X i=1 w\u03b8\u2032/\u03b8(\u03c4i)R(\u03c4i) \u2212\u03bb s b d2 \u0000p(\u00b7|\u03b8\u2032)\u2225p(\u00b7|\u03b8) \u0001 N , (9) where w\u03b8\u2032/\u03b8(\u03c4i) = p(\u03c4i|\u03b8\u2032) p(\u03c4i|\u03b8) = QH\u22121 t=0 \u03c0\u03b8\u2032(a\u03c4i,t|s\u03c4i,t) \u03c0\u03b8(a\u03c4i,t|s\u03c4i,t) . We consider the case in which \u03c0\u03b8(\u00b7|s) is a Gaussian distribution over actions whose mean depends on the state and whose covariance is stateindependent and diagonal: N(u\u00b5(s), diag(\u03c32)), where \u03b8 = (\u00b5, \u03c3). The learning process mixes online and of\ufb02ine optimization. At each online iteration j, a dataset of N trajectories is collected by executing in the environment the current policy \u03c0\u03b8j 0. These trajectories are used to optimize the surrogate loss function LA\u2212POIS \u03bb . At each of\ufb02ine iteration k, the parameters are updated via gradient ascent: \u03b8j k+1 = \u03b8j k + \u03b1kG(\u03b8j k)\u22121\u2207\u03b8j kL(\u03b8j k/\u03b8j 0), where \u03b1k > 0 is the step size which is chosen via line search (see Appendix E.1) and G(\u03b8j k) is a positive semi-de\ufb01nite matrix (e.g., F(\u03b8j k), the FIM, for natural gradient)6. The pseudo-code of POIS is reported in Algorithm 1. 5.2 Parameter-based POIS In the Parameter-based POIS (P-POIS) we again consider a parametrized policy space \u03a0\u0398 = {\u03c0\u03b8 : \u03b8 \u2208\u0398 \u2286Rp}, but \u03c0\u03b8 needs not be differentiable. The policy parameters \u03b8 are sampled at the beginning of each episode from a parametric hyperpolicy \u03bd\u03c1 selected in a parametric space NP = {\u03bd\u03c1 : \u03c1 \u2208P \u2286Rr}. The goal is to learn the hyperparameters \u03c1 so as to maximize JD(\u03c1). In this setting, the distributions Q and P of Section 4 correspond to the behavioral \u03bd\u03c1 and target \u03bd\u03c1\u2032 hyperpolicies, while f remains the trajectory return R(\u03c4). The importance weights [66] must take into account all sources of randomness, derived from sampling a policy parameter \u03b8 and a trajectory \u03c4: w\u03c1\u2032/\u03c1(\u03b8) = \u03bd\u03c1\u2032(\u03b8)p(\u03c4|\u03b8) \u03bd\u03c1(\u03b8)p(\u03c4|\u03b8) = \u03bd\u03c1\u2032(\u03b8) \u03bd\u03c1(\u03b8) . In practice, a Gaussian hyperpolicy \u03bd\u03c1 with diagonal covariance matrix is often used, i.e., N(\u00b5, diag(\u03c32)) with \u03c1 = (\u00b5, \u03c3). The policy is assumed to be deterministic: \u03c0\u03b8(a|s) = \u03b4(a \u2212u\u03b8(s)), where u\u03b8 is a deterministic function of the state s [e.g., 45, 13]. A \ufb01rst advantage over the action-based setting is that the distribution of the importance weights is entirely known, as it is the ratio of two Gaussians and the R\u00e9nyi divergence d2(\u03bd\u03c1\u2032\u2225\u03bd\u03c1) can be computed exactly [5] (see Appendix C). This leads to the following surrogate objective: LP\u2212POIS \u03bb (\u03c1\u2032/\u03c1) = 1 N N X i=1 w\u03c1\u2032/\u03c1(\u03b8i)R(\u03c4i) \u2212\u03bb r d2 (\u03bd\u03c1\u2032\u2225\u03bd\u03c1) N , (10) where each trajectory \u03c4i is obtained by running an episode with action policy \u03c0\u03b8i, and the corresponding policy parameters \u03b8i are sampled independently from hyperpolicy \u03bd\u03c1, at the beginning of each episode. 
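As a rough illustration of how the surrogate of Equation (9) can be assembled for one-dimensional Gaussian policies with state-independent standard deviation, here is a sketch (not from the paper; `gauss_d2` uses the standard closed form of the exponentiated 2-Rényi divergence between univariate Gaussians, and all function names are assumptions).

```python
import numpy as np

def gauss_log_pdf(a, mean, std):
    return -0.5 * ((a - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2.0 * np.pi)

def gauss_d2(mean_t, std_t, mean_b, std_b):
    """d_2(target || behavioral) for univariate Gaussians (standard closed form);
    finite only when std_t**2 < 2 * std_b**2."""
    var_star = 2.0 * std_b ** 2 - std_t ** 2
    assert var_star > 0.0, "d_2 is infinite: target variance too large w.r.t. the behavioral one"
    return (std_b ** 2 / (std_t * np.sqrt(var_star))) * np.exp((mean_t - mean_b) ** 2 / var_star)

def a_pois_surrogate(trajs, mean_b, mean_t, std, lam, gamma):
    """Surrogate of Eq. (9): IS-weighted return minus the Renyi-based penalization.

    trajs  : list of trajectories, each a list of (state, action, reward) tuples
    mean_b : behavioral mean function  s -> mean of pi_theta(.|s)
    mean_t : candidate target mean function  s -> mean of pi_theta'(.|s)
    """
    n = len(trajs)
    weights, returns, d2_per_traj = [], [], []
    for tau in trajs:
        log_w, ret, d2 = 0.0, 0.0, 1.0
        for t, (s, a, r) in enumerate(tau):
            log_w += gauss_log_pdf(a, mean_t(s), std) - gauss_log_pdf(a, mean_b(s), std)
            d2 *= gauss_d2(mean_t(s), std, mean_b(s), std)   # product over the horizon
            ret += (gamma ** t) * r                          # discounted trajectory return R(tau)
        weights.append(np.exp(log_w)); returns.append(ret); d2_per_traj.append(d2)
    d2_hat = float(np.mean(d2_per_traj))                     # sample-based Renyi estimate
    return float(np.mean(np.array(weights) * np.array(returns))) - lam * np.sqrt(d2_hat / n)
```

Each offline iteration would then ascend this function in the target parameters (ideally along the natural-gradient direction discussed above), with the behavioral parameters kept fixed until the next batch of trajectories is collected.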
The hyperpolicy parameters are then updated of\ufb02ine as \u03c1j k+1 = \u03c1j k + \u03b1kG(\u03c1j k)\u22121\u2207\u03c1j kL(\u03c1j k/\u03c1j 0) (see Algorithm 2 for the complete pseudo-code). A further advantage w.r.t. the action-based case is that the FIM F(\u03c1) can be computed exactly, and it is diagonal in the case of a Gaussian hyperpolicy with diagonal covariance matrix, turning a problematic inversion into a trivial division (the FIM is block-diagonal in the more general case of a Gaussian hyperpolicy, as observed in [25]). This makes natural gradient much more enticing for P-POIS. 6 Experimental Evaluation In this section, we present the experimental evaluation of POIS in its two \ufb02avors (action-based and parameter-based). We \ufb01rst provide a set of empirical comparisons on classical continuous control tasks with linearly parametrized policies; we then show how POIS can be also adopted for learning deep neural policies. In all experiments, for the A-POIS we used the IS estimator, while for P-POIS we employed the SN estimator. All experimental details are provided in Appendix F. 6.1 Linear Policies Linear parametrized Gaussian policies proved their ability to scale on complex control tasks [37]. In this section, we compare the learning performance of A-POIS and P-POIS against TRPO [41] and 6The FIM needs to be estimated via importance sampling as well, as shown in Appendix E.3. 7 \fTask A-POIS P-POIS TRPO PPO (a) 0.4 0.4 0.1 0.01 (b) 0.1 0.1 0.1 1 (c) 0.7 0.2 1 1 (d) 0.9 1 0.01 1 (e) 0.9 0.8 0.01 0.01 A-POIS P-POIS TRPO PPO 0 1 2 3 4 5 \u00d7104 0 2000 4000 trajectories average return (a) Cartpole 0 1 2 3 4 5 \u00d7104 0 1000 2000 3000 4000 5000 trajectories average return (b) Inverted Double Pendulum 0 1 2 3 4 5 \u00d7104 \u22121500 \u22121000 \u2212500 trajectories average return (c) Acrobot 0 1 2 3 4 5 \u00d7104 \u2212400 \u2212300 \u2212200 \u2212100 trajectories average return (d) Mountain Car 0 1 2 3 4 5 \u00d7104 \u2212150 \u2212100 \u221250 0 50 trajectories average return (e) Inverted Pendulum Figure 1: Average return as a function of the number of trajectories for A-POIS, P-POIS and TRPO with linear policy (20 runs, 95% c.i.). The table reports the best hyperparameters found (\u03b4 for POIS and the step size for TRPO and PPO). PPO [43] on classical continuous control benchmarks [11]. In Figure 1, we can see that both versions of POIS are able to signi\ufb01cantly outperform both TRPO and PPO in the Cartpole environments, especially the P-POIS. In the Inverted Double Pendulum environment the learning curve of P-POIS is remarkable while A-POIS displays a behavior comparable to PPO. In the Acrobot task, P-POIS displays a better performance w.r.t. TRPO and PPO, but A-POIS does not keep up. In Mountain Car, we see yet another behavior: the learning curves of TRPO, PPO and P-POIS are almost one-shot (even if PPO shows a small instability), while A-POIS fails to display such a fast convergence. Finally, in the Inverted Pendulum environment, TRPO and PPO outperform both versions of POIS. This example highlights a limitation of our approach. Since POIS performs an importance sampling procedure at trajectory level, it cannot assign credit to good actions in bad trajectories. On the contrary, weighting each sample, TRPO and PPO are able also to exploit good trajectory segments. In principle, this problem can be mitigated in POIS by resorting to per-decision importance sampling [35], in which the weight is assigned to individual rewards instead of trajectory returns. 
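The two computational advantages mentioned for P-POIS (exact Rényi divergence and a diagonal FIM) are easy to spell out for a Gaussian hyperpolicy with diagonal covariance. The following sketch is illustrative and assumes the hyperpolicy is parameterized directly by its mean and standard-deviation vectors.

```python
import numpy as np

def d2_diag_gauss(mu_t, sig_t, mu_b, sig_b):
    """Exact d_2(nu_rho' || nu_rho) for diagonal Gaussian hyperpolicies
    (product of the univariate closed forms); finite only if sig_t**2 < 2*sig_b**2 elementwise."""
    var_star = 2.0 * sig_b ** 2 - sig_t ** 2
    assert np.all(var_star > 0.0)
    terms = (sig_b ** 2 / (sig_t * np.sqrt(var_star))) * np.exp((mu_t - mu_b) ** 2 / var_star)
    return float(np.prod(terms))

def natural_gradient_step(mu, sig, grad_mu, grad_sig, alpha):
    """One offline update rho_{k+1} = rho_k + alpha * F(rho_k)^{-1} grad L.

    For N(mu, diag(sig^2)) parameterized by (mu, sig), the FIM is diagonal with
    entries 1/sig_i^2 (mean block) and 2/sig_i^2 (std block), so the inversion
    reduces to an elementwise division.
    """
    new_mu = mu + alpha * grad_mu * sig ** 2            # multiply by the inverse of 1/sig^2
    new_sig = sig + alpha * grad_sig * sig ** 2 / 2.0   # multiply by the inverse of 2/sig^2
    return new_mu, new_sig
```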
Overall, POIS displays a performance comparable with TRPO and PPO across the tasks. In particular, P-POIS displays a better performance w.r.t. A-POIS. However, this ordering is not maintained when moving to more complex policy architectures, as shown in the next section. In Figure 2 we show, for several metrics, the behavior of A-POIS when changing the \u03b4 parameter in the Cartpole environment. We can see that when \u03b4 is small (e.g., 0.2), the Effective Sample Size (ESS) remains large and, consequently, the variance of the importance weights (Var[w]) is small. This means that the penalization term in the objective function discourages the optimization process from selecting policies which are far from the behavioral policy. As a consequence, the displayed behavior is very conservative, preventing the policy from reaching the optimum. On the contrary, when \u03b4 approaches 1, the ESS is smaller and the variance of the weights tends to increase signi\ufb01cantly. Again, the performance remains suboptimal as the penalization term in the objective function is too light. The best behavior is obtained with an intermediate value of \u03b4, speci\ufb01cally 0.4. 6.2 Deep Neural Policies In this section, we adopt a deep neural network (3 layers: 100, 50, 25 neurons each) to represent the policy. The experiment setup is fully compatible with the classical benchmark [11]. While A-POIS can be directly applied to deep neural networks, P-POIS exhibits some critical issues. A highly dimensional hyperpolicy (like a Gaussian from which the weights of an MLP policy are 8 \f0 1 2 3 4 5 \u00d7104 0 1000 2000 3000 4000 5000 trajectories average return 0 1 2 3 4 5 \u00d7104 20 40 60 trajectories ESS 0 1 2 3 4 5 \u00d7104 0 10 20 30 40 trajectories Var[w] \u03b4 = 0.2 \u03b4 = 0.4 \u03b4 = 0.6 \u03b4 = 0.8 \u03b4 = 1 Figure 2: Average return, Effective Sample Size (ESS) and variance of the importance weights (Var[w]) as a function of the number of trajectories for A-POIS for different values of the parameter \u03b4 in the Cartpole environment (20 runs, 95% c.i.). Table 1: Performance of POIS compated with [11] on deep neural policies (5 runs, 95% c.i.). In bold, the performances that are not statistically signi\ufb01cantly different from the best algorithm in each task. Cart-Pole Double Inverted Algorithm Balancing Mountain Car Pendulum Swimmer REINFORCE 4693.7 \u00b1 14.0 \u221267.1 \u00b1 1.0 4116.5 \u00b1 65.2 92.3 \u00b1 0.1 TRPO 4869.8 \u00b1 37.6 \u221261.7 \u00b1 0.9 4412.4 \u00b1 50.4 96.0 \u00b1 0.2 DDPG 4634.4 \u00b1 87.6 \u2212288.4 \u00b1 170.3 2863.4 \u00b1 154.0 85.8 \u00b1 1.8 A-POIS 4842.8 \u00b1 13.0 \u221263.7 \u00b1 0.5 4232.1 \u00b1 189.5 88.7 \u00b1 0.55 CEM 4815.4 \u00b1 4.8 \u221266.0 \u00b1 2.4 2566.2 \u00b1 178.9 68.8 \u00b1 2.4 P-POIS 4428.1 \u00b1 138.6 \u221278.9 \u00b1 2.5 3161.4 \u00b1 959.2 76.8 \u00b1 1.6 sampled) can make d2(\u03bd\u03c1\u2032\u2225\u03bd\u03c1) extremely sensitive to small parameter changes, leading to overconservative updates.7 A \ufb01rst practical variant comes from the insight that d2(\u03bd\u03c1\u2032\u2225\u03bd\u03c1)/N is the inverse of the effective sample size, as reported in Equation 6. We can obtain a less conservative (although approximate) surrogate function by replacing it with 1/ d ESS(\u03bd\u03c1\u2032\u2225\u03bd\u03c1). Another trick is to model the hyperpolicy as a set of independent Gaussians, each de\ufb01ned over a disjoint subspace of \u0398 (implementation details are provided in Appendix E.5). 
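A sketch of the less conservative surrogate described above, where the exact d_2(ν_ρ'‖ν_ρ)/N term is replaced by the inverse of the estimated effective sample size (illustrative names; the returns are those of the trajectories generated by the sampled policy parameters).

```python
import numpy as np

def p_pois_ess_surrogate(log_nu_t, log_nu_b, returns, lam):
    """P-POIS objective with 1 / ESS-hat in place of d_2(nu' || nu) / N.

    log_nu_t, log_nu_b : log-densities of the target and behavioral hyperpolicies
                         at the sampled policy parameters theta_1, ..., theta_N
    returns            : trajectory returns R(tau_i) obtained by running pi_{theta_i}
    """
    log_w = log_nu_t - log_nu_b
    w = np.exp(log_w - np.max(log_w))            # numerically stable importance weights
    w_tilde = w / w.sum()                        # self-normalized weights
    ess_hat = 1.0 / np.sum(w_tilde ** 2)         # ESS estimate of Eq. (6)
    sn_return = np.dot(w_tilde, returns)         # SN estimator of the expected return
    return sn_return - lam * np.sqrt(1.0 / ess_hat)
```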
In Table 1, we augmented the results provided in [11] with the performance of POIS for the considered tasks. We can see that A-POIS is able to reach an overall behavior comparable with the best of the action-based algorithms, approaching TRPO and beating DDPG. Similarly, P-POIS exhibits a performance similar to CEM [51], the best performing among the parameter-based methods. The complete results are reported in Appendix F. 7 Discussion and Conclusions In this paper, we presented a new actor-only policy optimization algorithm, POIS, which alternates online and of\ufb02ine optimization in order to ef\ufb01ciently exploit the collected trajectories, and can be used in combination with action-based and parameter-based exploration. In contrast to the state-ofthe-art algorithms, POIS has a strong theoretical grounding, since its surrogate objective function derives from a statistical bound on the estimated performance, that is able to capture the uncertainty induced by importance sampling. The experimental evaluation showed that POIS, in both its versions (action-based and parameter-based), is able to achieve a performance comparable with TRPO, PPO and other classical algorithms on continuous control tasks. Natural extensions of POIS could focus on employing per-decision importance sampling, adaptive batch size, and trajectory reuse. Future work also includes scaling POIS to high-dimensional tasks and highly-stochastic environments. We believe that this work represents a valuable starting point for a deeper understanding of modern policy optimization and for the development of effective and scalable policy search methods. 7This curse of dimensionality, related to dim(\u03b8), has some similarities with the dependence of the R\u00e9nyi divergence on the actual horizon H in the action-based case. 9 \fAcknowledgments The study was partially funded by Lombardy Region (Announcement PORFESR 2014-2020). F.F. was partially funded through ERC Advanced Grant (no: 742870)." + }, + { + "url": "http://arxiv.org/abs/1806.05415v1", + "title": "Configurable Markov Decision Processes", + "abstract": "In many real-world problems, there is the possibility to configure, to a\nlimited extent, some environmental parameters to improve the performance of a\nlearning agent. In this paper, we propose a novel framework, Configurable\nMarkov Decision Processes (Conf-MDPs), to model this new type of interaction\nwith the environment. Furthermore, we provide a new learning algorithm, Safe\nPolicy-Model Iteration (SPMI), to jointly and adaptively optimize the policy\nand the environment configuration. After having introduced our approach and\nderived some theoretical results, we present the experimental evaluation in two\nexplicative problems to show the benefits of the environment configurability on\nthe performance of the learned policy.", + "authors": "Alberto Maria Metelli, Mirco Mutti, Marcello Restelli", + "published": "2018-06-14", + "updated": "2018-06-14", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI" + ], + "main_content": "Introduction Markov Decision Processes (MDPs) (Puterman, 2014) are a popular formalism to model sequential decision-making problems. Solving an MDP means to \ufb01nd a policy, i.e., a prescription of actions, that maximizes a given utility function. Typically, the environment dynamics is assumed to be \ufb01xed, unknown and out of the control of the agent. 
Several exceptions to this scenario can be found in the literature, especially in the context of Markov Decision Processes with imprecise probabilities (MDPIPs) (Satia & Lave Jr, 1973; White III & Eldeib, 1994; Bueno et al., 2017) and non-stationary environments (Bowerman, 1974; Hopp et al., 1987). In the former case, the transition kernel is known under uncertainty. Therefore, it cannot be speci\ufb01ed using a conditional probability distribution, but it must be de\ufb01ned through a set of probability distributions (Delgado et al., 2009). In this context, Bounded-parameter Markov Decision Processes (BMDPs) consider a special case in which upper and lower bounds on transition probabilities are spec*Equal contribution 1Politecnico di Milano, 32, Piazza Leonardo da Vinci, Milan, Italy. Correspondence to: Alberto Maria Metelli . Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). i\ufb01ed (Givan et al., 1997; Ni & Liu, 2008). A common approach is to solve a minimax problem to \ufb01nd a robust policy maximizing the expected return under the worst possible transition model (Osogami, 2015). In non-stationary environments, the transition probabilities (possibly also the reward function) change over time (Bowerman, 1974). Several works tackle the problem of de\ufb01ning an optimality criterion (Hopp et al., 1987) and \ufb01nding optimal policies in non-stationary environments (Garcia & Smith, 2000; Cheevaprawatdomrong et al., 2007; Ghate & Smith, 2013). Although the environment is no longer \ufb01xed, both Markov decision processes with imprecise probabilities and nonstationary Markov decision processes do not admit the possibility to dynamically alter the environmental parameters. However, there exist several real-world scenarios in which the environment is partially controllable and, therefore, it might be bene\ufb01cial to con\ufb01gure some of its features in order to select the most convenient MDP to solve. For instance, a human car driver has at her/his disposal a number of possible vehicle con\ufb01gurations she/he can act on (e.g., seasonal tires, stability and vehicle attitude, engine model, automatic speed control, parking aid system) to improve the driving style or quicken the process of learning a good driving policy. Another example is the interaction between a student and an automatic teaching system: the teaching model can be tailored to improve the student\u2019s learning experience (e.g., increasing or decreasing the dif\ufb01culties of the questions or the speed at which the concepts are presented). It is worth noting that the active entity in the con\ufb01guration process might be the agent itself or an external supervisor guiding the learning process. In the latter case, for instance, a supervisor can dynamically adapt where to place the products in a supermarket in order to maximize the customer (agent) satisfaction. Similarly, the design of a street network could be con\ufb01gured, by changing the semaphore transition times or the direction of motion, to reduce the drivers\u2019 journey time. In a more abstract sense, the possibility to act on the environmental parameters can have essentially two bene\ufb01ts: i) it allows improving the agent performance; ii) it may allow to speed up the learning process. 
This second instance has been previously addressed in (Ciosek & Whiteson, 2017; Florensa et al., 2017), where the transition model and the initial state distribution are altered in order to reach a faster convergence to the optimal policy. However, in both the arXiv:1806.05415v1 [cs.AI] 14 Jun 2018 \fCon\ufb01gurable Markov Decision Processes cases the environment modi\ufb01cation is only simulated, while the underlying environment dynamic remains unchanged. In this paper, we propose a framework to model a Con\ufb01gurable Markov Decision Process (Conf-MDP), i.e., an MDP in which the environment can be con\ufb01gured to a limited extent. In principle, any of the Conf-MDP\u2019s parameters can be tuned, but we restrict our attention to the transition model and we focus to the problem of identifying the environment that allows achieving the highest performance possible. At an intuitive level, there exists a tight connection between environment and policy: variations of the environment induce modi\ufb01cations of the optimal policy. Furthermore, even for the same task, in presence of agents having access to different policy spaces, the optimal environment might be different.1 The spirit of this work is to investigate and exercise the tight connection between policy and model, pursuing the goal of improving the \ufb01nal policy performance. After having introduced the de\ufb01nition of Conf-MDP, we propose a method to jointly and adaptively optimize the policy and the transition model, named Safe Policy-Model Iteration (SPMI). The algorithm adopts a safe learning approach (Pirotta et al., 2013b) based on the maximization of a lower bound on the guaranteed performance improvement, yielding a sequence of model-policy pairs with monotonically increasing performance. The safe learning perspective makes our approach suitable for critical applications where performance degradation during learning is not allowed (e.g., industrial scenarios where extensive exploration of the policy space might damage the machinery). In the standard Reinforcement Learning (RL) framework (Sutton & Barto, 1998), the usage of a lower bound to guide the choice of the policy has been \ufb01rst introduced by Conservative Policy Iteration (CPI) (Kakade & Langford, 2002), improved by Safe Policy Iteration (SPI) (Pirotta et al., 2013b) and subsequently exploited by (Ghavamzadeh et al., 2016; Abbasi-Yadkori et al., 2016; Papini et al., 2017). These methods revealed their potential thanks to the preference towards small policy updates, preventing from moving in a single step too far away from the current policy and avoiding premature convergence to suboptimal policies. A similar rationale is at the basis of Relative Entropy Policy Search (REPS) (Peters et al., 2010), and, more recently, Trust Region Policy Optimization (TRPO) (Schulman et al., 2015) and Proximal Policy Optimization (PPO) (Schulman et al., 2017). In order to introduce our framework and highlight its bene\ufb01ts, we limit our analysis to the scenario in which the model space (and the policy space) is known. However, when the model space is unknown, we could resort to a sample-based version of SPMI, which could be derived by adapting that of SPI (Pirotta et al., 2013b). 1In general, a modi\ufb01cation of the environment (e.g., changing the con\ufb01guration of a car) is more expensive and more constrained w.r.t. to a modi\ufb01cation of the policy. We start in Section 2 by recalling some basic notions about MDPs and providing the de\ufb01nition of Conf-MDP. 
In Section 3 we derive the performance improvement bound and we outline the main features of SPMI (Section 4) along with some theoretical results (Section 5).2 Then, we present the experimental evaluation (Section 6) in two explicative domains, representing simple abstractions of the main application of Conf-MDPs, with the purpose of showing how con\ufb01guring the transition model can be bene\ufb01cial for the \ufb01nal policy performance. 2. Preliminaries A discrete-time Markov Decision Process (MDP) (Puterman, 2014) is de\ufb01ned as M = (S, A, P, R, \u03b3, \u00b5) where S is the state space, A is the action space, P(s\u2032|s, a) is a Markovian transition model that de\ufb01nes the conditional distribution of the next state s\u2032 given the current state s and the current action a, \u03b3 \u2208(0, 1) is the discount factor, R(s, a) \u2208[0, 1] is the reward for performing action a in state s and \u00b5 is the distribution of the initial state. A policy \u03c0(a|s) de\ufb01nes the probability distribution of an action a given the current state s. Given a model-policy pair (P, \u03c0) we indicate with P \u03c0 the state kernel function de\ufb01ned as P \u03c0(s\u2032|s) = R A \u03c0(a|s)P(s\u2032|s, a)da. We now formalize the Con\ufb01gurable Markov Decision Process (Conf-MDP). De\ufb01nition 2.1. A Con\ufb01gurable Markov Decision Process is a tuple CM = (S, A, R, \u03b3, \u00b5, P, \u03a0) where (S, A, R, \u03b3, \u00b5) is an MDP without the transition model and P and \u03a0 are the model and policy spaces. More speci\ufb01cally, \u03a0 is the set of policies the agent has access to and P is the set of available environment con\ufb01gurations (transition models). The performance of a modelpolicy pair (P, \u03c0) \u2208P\u00d7\u03a0 is evaluated through the expected return, i.e., the expected discounted sum of the rewards collected along a trajectory: JP,\u03c0 \u00b5 = 1 1 \u2212\u03b3 Z S dP,\u03c0 \u00b5 (s) Z A \u03c0(a|s)R(s, a)dads, (1) where dP,\u03c0 \u00b5 is the \u03b3-discounted state distribution (Sutton et al., 2000), de\ufb01ned recursively as: dP,\u03c0 \u00b5 (s) = (1 \u2212\u03b3)\u00b5(s) + \u03b3 Z S dP,\u03c0 \u00b5 (s\u2032)P \u03c0(s\u2032|s)ds\u2032. (2) We can also de\ufb01ne the \u03b3-discounted state-action distribution as \u03b4P,\u03c0 \u00b5 (s, a) = \u03c0(a|s)dP,\u03c0 \u00b5 (s). While solving an MDP consists in \ufb01nding a policy \u03c0\u2217that maximizes JP,\u03c0 \u00b5 under the given \ufb01xed environment P, solving a Conf-MDP consists in \ufb01nding a model-policy pair (P \u2217, \u03c0\u2217) such that P \u2217, \u03c0\u2217= arg maxP \u2208P,\u03c0\u2208\u03a0 J\u03c0,P \u00b5 . For control purposes, the state-action value function, or Q-function, is introduced as the expected return starting from a state s and performing 2The proofs of all the lemmas and theorems can be found in Appendix A. \fCon\ufb01gurable Markov Decision Processes action a: QP,\u03c0(s, a) = R(s, a) + \u03b3 Z S P(s\u2032|s, a)V P,\u03c0(s\u2032)ds\u2032. (3) For learning the transition model we introduce the stateaction-next-state value function or U-function, de\ufb01ned as the expected return starting from the state s, performing action a and landing to state s\u2032: U P,\u03c0(s, a, s\u2032) = R(s, a) + \u03b3V P,\u03c0(s\u2032), (4) where V P,\u03c0 is the state value function or V-function. 
These three functions are tightly connected due to the trivial relations: V P,\u03c0(s) = R A \u03c0(a|s)QP,\u03c0(s, a)da and QP,\u03c0(s, a) = R S P(s\u2032|s, a)U P,\u03c0(s, a, s\u2032)ds\u2032. Furthermore, we de\ufb01ne the policy advantage function AP,\u03c0(s, a) = QP,\u03c0(s, a) \u2212V P,\u03c0(s) that quanti\ufb01es how much an action is better than the others and the model advantage function AP,\u03c0(s, a, s\u2032) = U P,\u03c0(s, a, s\u2032) \u2212QP,\u03c0(s, a) that quanti\ufb01es how much the next state is better than the other ones. In order to evaluate the one-step improvement in performance attained by a new policy \u03c0\u2032 or model P \u2032 when the current policy is \u03c0 and the current model is P, we introduce the relative advantage functions (Kakade & Langford, 2002): AP,\u03c0\u2032 P,\u03c0 (s) = Z A \u03c0\u2032(a|s)AP,\u03c0(s, a)da, AP \u2032,\u03c0 P,\u03c0 (s, a) = Z S P \u2032(s\u2032|s, a)AP,\u03c0(s, a, s\u2032)ds\u2032, and the corresponding expected values under the \u03b3discounted distributions: AP,\u03c0\u2032 P,\u03c0,\u00b5 = R S dP,\u03c0 \u00b5 (s)AP,\u03c0\u2032 P,\u03c0 (s)ds and AP \u2032,\u03c0 P,\u03c0,\u00b5 = R S R A \u03b4P,\u03c0 \u00b5 (s, a)AP \u2032,\u03c0 P,\u03c0 (s, a)dsda. 3. Performance Improvement The goal of this section is to provide a lower bound to the performance improvement obtained by moving from a model-policy pair (P, \u03c0) to another pair (P \u2032, \u03c0\u2032). 3.1. Bound on the \u03b3-discounted distribution We start providing a bound for the difference of \u03b3discounted distributions under different model-policy pairs. Proposition 3.1. Let (P, \u03c0) and (P \u2032, \u03c0\u2032) be two modelpolicy pairs, the \u21131-norm of the difference between the \u03b3discounted state distributions can be upper bounded as: \r \r \rdP \u2032,\u03c0\u2032 \u00b5 \u2212dP,\u03c0 \u00b5 \r \r \r 1 \u2264 \u03b3 1 \u2212\u03b3 DP \u2032\u03c0\u2032,P \u03c0 E , where DP \u2032\u03c0\u2032,P \u03c0 E = Es\u223cdP,\u03c0 \u00b5 \r \rP \u2032\u03c0\u2032 (\u00b7|s) \u2212P \u03c0(\u00b7|s) \r \r 1. This proposition provides a way to upper bound the difference of the \u03b3-discounted state distributions in terms of the state kernel dissimilarity.3 The state kernel couples the effects of the policy and the transition model, but it is 3More formally, DP \u2032\u03c0\u2032 ,P \u03c0 E is just a premetric (Deza & Deza, 2009) and not a metric (see Appendix B for details). convenient to keep their contribution separated, getting the following looser bound. Corollary 3.1. Let (P, \u03c0) and (P \u2032, \u03c0\u2032) be two modelpolicy pairs, the \u21131-norm of the difference between the \u03b3discounted state distributions can be upper bounded as: \r \r \rdP \u2032,\u03c0\u2032 \u00b5 \u2212dP,\u03c0 \u00b5 \r \r \r 1 \u2264 \u03b3 1 \u2212\u03b3 \u0010 D\u03c0\u2032,\u03c0 E + DP \u2032,P E \u0011 , where D\u03c0\u2032,\u03c0 E = Es\u223cdP,\u03c0 \u00b5 \r \r\u03c0\u2032(\u00b7|s) \u2212\u03c0(\u00b7|s) \r \r 1 and DP \u2032,P E = E(s,a)\u223c\u03b4P,\u03c0 \u00b5 \r \rP \u2032(\u00b7|s, a) \u2212P(\u00b7|s, a) \r \r 1. 
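All of the quantities introduced in this section have closed forms in a small tabular Conf-MDP, which may help fix the notation before the bounds below. The sketch is illustrative (not from the paper) and uses NumPy arrays P (S×A×S), R (S×A), pi (S×A) and mu (S).

```python
import numpy as np

def evaluate_pair(P, R, pi, mu, gamma):
    """V, Q, U, gamma-discounted state distribution d, and expected return J for (P, pi)."""
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, P)                  # state kernel P^pi(s'|s)
    r_pi = np.einsum('sa,sa->s', pi, R)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)    # V^{P,pi}
    Q = R + gamma * np.einsum('sat,t->sa', P, V)           # Eq. (3)
    U = R[:, :, None] + gamma * V[None, None, :]           # Eq. (4)
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1.0 - gamma) * mu)   # fixed point of Eq. (2)
    J = np.dot(d, r_pi) / (1.0 - gamma)                    # Eq. (1)
    return V, Q, U, d, J

def expected_relative_advantages(P, R, pi, mu, gamma, pi_new, P_new):
    """A^{P,pi'}_{P,pi,mu} (policy update) and A^{P',pi}_{P,pi,mu} (model update)."""
    V, Q, U, d, _ = evaluate_pair(P, R, pi, mu, gamma)
    A_pol = np.einsum('sa,sa->s', pi_new, Q) - V           # relative advantage A^{P,pi'}_{P,pi}(s)
    A_mod = np.einsum('sat,sat->sa', P_new, U) - Q         # relative advantage A^{P',pi}_{P,pi}(s,a)
    delta = d[:, None] * pi                                # delta^{P,pi}_mu(s,a)
    return np.dot(d, A_pol), np.sum(delta * A_mod)
```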
It is worth noting that when P = P \u2032 the bound resembles Corollary 3.2 in (Pirotta et al., 2013b), but it is tighter as: E s\u223cdP,\u03c0 \u00b5 \r \r\u03c0\u2032(\u00b7|s) \u2212\u03c0(\u00b7|s) \r \r 1 \u2264sup s\u2208S \r \r\u03c0\u2032(\u00b7|s) \u2212\u03c0(\u00b7|s) \r \r 1, in particular the bound of (Pirotta et al., 2013b) might yield a large bound value in case there exist states in which the policies are very different even if those states are rarely visited according to dP,\u03c0 \u00b5 . In the context of policy learning, a lower bound employing the same dissimilarity index D\u03c0\u2032,\u03c0 E in the penalization term has been previously proposed in (Achiam et al., 2017). 3.2. Bound on the Performance Improvement In this section, we exploit the previous results to obtain a lower bound on the performance improvement as an effect of the policy and model updates. We start introducing the coupled relative advantage function: AP \u2032,\u03c0\u2032 P,\u03c0 (s) = Z S Z A \u03c0\u2032(a|s)P \u2032(s\u2032|s, a) \u02dc AP,\u03c0(s, a, s\u2032)ds\u2032da, where \u02dc AP,\u03c0(s, a, s\u2032) = U P,\u03c0(s, a, s\u2032) \u2212V P,\u03c0(s). AP \u2032,\u03c0\u2032 P,\u03c0 represents the one-step improvement attained by the new model-policy pair (P \u2032, \u03c0\u2032) over the current one (P, \u03c0), i.e., the local gain in performance yielded by selecting an action with \u03c0\u2032 and the next state with P \u2032. The corresponding expectation under the \u03b3-discounted distribution is given by: AP \u2032,\u03c0\u2032 P,\u03c0,\u00b5 = R S dP,\u03c0 \u00b5 (s)AP \u2032,\u03c0\u2032 P,\u03c0 (s)ds. Now, we have all the elements to express the performance improvement in terms of the relative advantage functions and the \u03b3-discounted distributions. Theorem 3.1. The performance improvement of modelpolicy pair (P \u2032, \u03c0\u2032) over (P, \u03c0) is given by: JP \u2032,\u03c0\u2032 \u00b5 \u2212JP,\u03c0 \u00b5 = 1 1 \u2212\u03b3 Z S dP \u2032,\u03c0\u2032 \u00b5 (s)AP \u2032,\u03c0\u2032 P,\u03c0 (s)ds. This theorem is the natural extension of the result proposed by Kakade & Langford (2002), but, unfortunately, it cannot be directly exploited in an algorithm as the dependence of dP \u2032,\u03c0\u2032 \u00b5 on the candidate model-policy pair (P \u2032, \u03c0\u2032) is highly nonlinear and dif\ufb01cult to treat. We aim to obtain, from this result, a lower bound on JP \u2032,\u03c0\u2032 \u00b5 \u2212JP,\u03c0 \u00b5 that can be ef\ufb01ciently computed using information on the current pair (P, \u03c0). Theorem 3.2 (Coupled Bound). The performance improve\fCon\ufb01gurable Markov Decision Processes ment of model-policy pair (P \u2032, \u03c0\u2032) over (P, \u03c0) can be lower bounded as: JP \u2032,\u03c0\u2032 \u00b5 \u2212JP,\u03c0 \u00b5 | {z } performance improvement \u2265 AP \u2032,\u03c0\u2032 P,\u03c0,\u00b5 1 \u2212\u03b3 | {z } advantage \u2212 \u03b3\u2206AP \u2032,\u03c0\u2032 P,\u03c0 DP \u2032\u03c0\u2032,P \u03c0 E 2(1 \u2212\u03b3)2 | {z } dissimilarity penalization , where \u2206AP \u2032,\u03c0\u2032 P,\u03c0 = sups,s\u2032\u2208S \f \fAP \u2032,\u03c0\u2032 P,\u03c0 (s\u2032) \u2212AP \u2032,\u03c0\u2032 P,\u03c0 (s) \f \f. 
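Theorem 3.1 is an exact identity, so it can be checked numerically on a random tabular Conf-MDP; the following self-contained sketch (illustrative, not from the paper) does so.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 4, 3, 0.9

def rand_simplex(*shape):
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

P, P_new = rand_simplex(S, A, S), rand_simplex(S, A, S)    # current / new transition models
pi, pi_new = rand_simplex(S, A), rand_simplex(S, A)        # current / new policies
R, mu = rng.random((S, A)), rand_simplex(S)

def return_and_distribution(P, pi):
    P_pi = np.einsum('sa,sat->st', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, R)
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1.0 - gamma) * mu)
    return np.dot(d, r_pi) / (1.0 - gamma), d

# coupled relative advantage A^{P',pi'}_{P,pi}(s) of the candidate pair over the current one
P_pi = np.einsum('sa,sat->st', pi, P)
V = np.linalg.solve(np.eye(S) - gamma * P_pi, np.einsum('sa,sa->s', pi, R))
U = R[:, :, None] + gamma * V[None, None, :]
A_coupled = np.einsum('sa,sat,sat->s', pi_new, P_new, U) - V

J_old, _ = return_and_distribution(P, pi)
J_new, d_new = return_and_distribution(P_new, pi_new)
print(np.isclose(J_new - J_old, np.dot(d_new, A_coupled) / (1.0 - gamma)))   # True
```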
The bound is composed of two terms, like in (Kakade & Langford, 2002; Pirotta et al., 2013b): the \ufb01rst term, advantage, represents how much gain in performance can be locally obtained by moving from (P, \u03c0) to (P \u2032, \u03c0\u2032), whereas the second term, dissimilarity penalization, discourages updates towards model-policy pairs that are too far away. The coupled bound, however, is not suitable to be used in an algorithm as it does not separate the contribution of the policy and that of the model. In practice, an agent cannot directly update the kernel function P \u03c0 since the environment model can only partially be controlled, whereas, in many cases, we can assume a full control on the policy. For this reason, it is convenient to derive a bound in which the policy and model effects are decoupled. Theorem 3.3 (Decoupled Bound). The performance improvement of model-policy pair (P \u2032, \u03c0\u2032) over (P, \u03c0) can be lower bounded as: JP \u2032,\u03c0\u2032 \u00b5 \u2212JP,\u03c0 \u00b5 | {z } performance improvement \u2265B(P \u2032, \u03c0\u2032) = = AP \u2032,\u03c0 P,\u03c0,\u00b5 + AP,\u03c0\u2032 P,\u03c0,\u00b5 1 \u2212\u03b3 | {z } advantage \u2212\u03b3\u2206QP,\u03c0D 2(1 \u2212\u03b3)2 | {z } dissimilarity penalization , where D is a dissimilarity term de\ufb01ned as: D = D\u03c0\u2032,\u03c0 E \u0000D\u03c0\u2032,\u03c0 \u221e + DP \u2032,P \u221e \u0001 + DP \u2032,P E \u0000D\u03c0\u2032,\u03c0 \u221e + \u03b3DP \u2032,P \u221e \u0001 , D\u03c0\u2032,\u03c0 \u221e = sups\u2208S \r \r\u03c0\u2032(\u00b7|s) \u2212\u03c0(\u00b7|s) \r \r 1, DP \u2032,P \u221e = sups\u2208S,a\u2208A \r \rP \u2032(\u00b7|s, a) \u2212P(\u00b7|s, a) \r \r 1 and \u2206QP,\u03c0 = sups,s\u2032\u2208S,a,a\u2032\u2208A \f \fQP,\u03c0(s\u2032, a\u2032) \u2212QP,\u03c0(s, a) \f \f. 4. Safe Policy Model Iteration To deal with the learning problem in the Conf-MDP framework we could, in principle, learn the optimal policy by using a classical RL algorithm and adapt it to learn the optimal model, sequentially or in parallel. Alternatively, we could resort to general-purpose global optimization tools, like CEM (Rubinstein, 1999) or genetic algorithms (Holland & Goldberg, 1989), using as objective function the performance of the policy learned by a standard RL algorithm. Nonetheless, they may not correspond to the preferable, nor the safest, choices in this context as there exists an inherent connection between policy and model we could not overlook during the learning process. Indeed, a policy learned by interacting with a sub-optimal model could result in poor Algorithm 1 Safe Policy Model Iteration initialize \u03c00, P0. for i = 0, 1, 2, ... until \u03f5-convergence do \u03c0i = PolicyChooser(\u03c0i) P i = ModelChooser(Pi) Vi = {(\u03b1\u2217 0,i, 0), (\u03b1\u2217 1,i, 1), (0, \u03b2\u2217 0,i), (1, \u03b2\u2217 1,i)} \u03b1\u2217 i , \u03b2\u2217 i = arg max\u03b1,\u03b2{B(\u03b1, \u03b2) : (\u03b1, \u03b2) \u2208Vi} \u03c0i+1 = \u03b1\u2217 i \u03c0i + (1 \u2212\u03b1\u2217 i )\u03c0i Pi+1 = \u03b2\u2217 i P i + (1 \u2212\u03b2\u2217 i )Pi end for performance paired with a different, optimal model. At the same time, a policy far from the optimum could mislead the search for the optimal model. The goal of this section is to present an approach, Safe Policy-Model Iteration (SPMI), inspired by (Pirotta et al., 2013b), capable of learning the policy and the model simultaneously,4 possibly taking advantage of the inter-connection mentioned above. 4.1. 
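For completeness, here is an illustrative tabular evaluation of the decoupled bound B(P', π') of Theorem 3.3; all names are assumptions, and the function only relies on the quantities defined in Section 2.

```python
import numpy as np

def decoupled_bound(P, R, pi, mu, gamma, P_new, pi_new):
    """Guaranteed improvement B(P', pi') of Theorem 3.3 for a tabular Conf-MDP."""
    S = P.shape[0]
    P_pi = np.einsum('sa,sat->st', pi, P)
    r_pi = np.einsum('sa,sa->s', pi, R)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    Q = R + gamma * np.einsum('sat,t->sa', P, V)
    U = R[:, :, None] + gamma * V[None, None, :]
    d = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1.0 - gamma) * mu)
    delta = d[:, None] * pi
    # expected relative advantages of the policy update and of the model update
    A_pol = np.dot(d, np.einsum('sa,sa->s', pi_new, Q) - V)
    A_mod = np.sum(delta * (np.einsum('sat,sat->sa', P_new, U) - Q))
    # dissimilarity terms: expected and sup L1 distances
    pol_l1 = np.abs(pi_new - pi).sum(axis=1)               # ||pi'(.|s) - pi(.|s)||_1
    mod_l1 = np.abs(P_new - P).sum(axis=2)                 # ||P'(.|s,a) - P(.|s,a)||_1
    D_pol_E, D_pol_inf = np.dot(d, pol_l1), pol_l1.max()
    D_mod_E, D_mod_inf = np.sum(delta * mod_l1), mod_l1.max()
    delta_Q = Q.max() - Q.min()
    D = D_pol_E * (D_pol_inf + D_mod_inf) + D_mod_E * (D_pol_inf + gamma * D_mod_inf)
    return (A_pol + A_mod) / (1.0 - gamma) - gamma * delta_Q * D / (2.0 * (1.0 - gamma) ** 2)
```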
The Algorithm Following the approach proposed in (Pirotta et al., 2013b), we de\ufb01ne the policy and model improvement update rules: \u03c0\u2032 = \u03b1\u03c0 + (1 \u2212\u03b1)\u03c0, P \u2032 = \u03b2P + (1 \u2212\u03b2)P, where \u03b1, \u03b2 \u2208[0, 1], \u03c0 \u2208\u03a0 and P \u2208P are the target policy and the target model respectively. Extending the rationale of (Pirotta et al., 2013b) to our context, we aim to determine the values of \u03b1 and \u03b2 which jointly maximize the decoupled bound (Theorem 3.3). In the following we will abbreviate B(P \u2032, \u03c0\u2032) with B(\u03b1, \u03b2). Theorem 4.1. For any \u03c0 \u2208\u03a0 and P \u2208P, the decoupled bound is optimized for: \u03b1\u2217, \u03b2\u2217= arg max \u03b1,\u03b2 {B(\u03b1, \u03b2) : (\u03b1, \u03b2) \u2208V}, where V = {(\u03b1\u2217 0, 0), (\u03b1\u2217 1, 1), (0, \u03b2\u2217 0), (1, \u03b2\u2217 1)} and the values of \u03b1\u2217 0, \u03b1\u2217 1, \u03b2\u2217 0 and \u03b2\u2217 1 are reported in Table 1. The theorem expresses the fact that the optimal (\u03b1, \u03b2) pair lies on the boundary of [0, 1]\u00d7[0, 1], i.e., either one between policy and model is moved and the other is kept unchanged or one is moved and the other is set to target. Algorithm 1 reports the basic structure of SPMI. The algorithm stops when both the expected relative advantages fall below a threshold \u03f5. The procedures PolicyChooser and ModelChooser are designated for selecting the target policy and model (see Section 4.3). 4.2. Policy and Model Spaces The selection of the target policy and model is a rather crucial component of the algorithm since the quality of the 4In the context of Conf-MDPs we believe that knowing the model of the con\ufb01gurable part of the environment is a reasonable requirement. \fCon\ufb01gurable Markov Decision Processes Table 1. The four possible optimal (\u03b1, \u03b2) pairs, the optimal pair is the one yielding the maximum bound value (all values are clipped in [0, 1]). The corresponding guaranteed performance improvements can be found in Appendix A. \u03b2\u2217= 0 \u03b1\u2217= 0 \u03b2\u2217= 1 \u03b1\u2217= 1 \u03b1\u2217 0 = (1\u2212\u03b3)AP,\u03c0 P,\u03c0,\u00b5 \u03b3\u2206QP,\u03c0D\u03c0,\u03c0 \u221e D\u03c0,\u03c0 E \u03b2\u2217 0 = (1\u2212\u03b3)AP ,\u03c0 P,\u03c0,\u00b5 \u03b32\u2206QP,\u03c0DP ,P \u221e DP ,P E \u03b1\u2217 1 = \u03b1\u2217 0 \u22121 2 \u0012 DP ,P E D\u03c0,\u03c0 E + DP ,P \u221e D\u03c0,\u03c0 \u221e \u0013 \u03b2\u2217 1 = \u03b2\u2217 0 \u2212 1 2\u03b3 \u0012 D\u03c0,\u03c0 E DP ,P E + D\u03c0,\u03c0 \u221e DP ,P \u221e \u0013 updates largely depends on it. To effectively adopt a target selection strategy we have to know which are the degrees of freedom on the policy and model spaces. Focusing on the model space \ufb01rst, it is easy to discriminate two macroclasses in real-world scenarios. In some cases, there are almost no constraints on the direction in which to update the model. In other cases, only a limited model portion, typically a set of parameters inducing transition probabilities, can be accessed. While we can naturally design the \ufb01rst scenario as an unconstrained model space, to represent the second scenario we limit the model space to the convex hull co(P ), where P is a set of extreme (or vertex) models. Since only the convex combination coef\ufb01cients can be controlled, we refer to the latter as a parametric model space. 
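Reading Table 1 off the text above (the formulas are partly garbled by extraction, so this is a best-effort transcription rather than an authoritative one), the core step of Theorem 4.1 and Algorithm 1 can be sketched as follows; `bound` is any callable evaluating B(α, β), e.g., a wrapper around the decoupled bound of the previous snippet.

```python
import numpy as np

def candidate_pairs(A_pol, A_mod, D_pol_E, D_pol_inf, D_mod_E, D_mod_inf, delta_Q, gamma):
    """The four vertices V of Theorem 4.1, with step sizes clipped to [0, 1] (Table 1)."""
    clip = lambda x: float(np.clip(x, 0.0, 1.0))
    alpha0 = (1.0 - gamma) * A_pol / (gamma * delta_Q * D_pol_inf * D_pol_E)
    beta0 = (1.0 - gamma) * A_mod / (gamma ** 2 * delta_Q * D_mod_inf * D_mod_E)
    alpha1 = alpha0 - 0.5 * (D_mod_E / D_pol_E + D_mod_inf / D_pol_inf)
    beta1 = beta0 - (D_pol_E / D_mod_E + D_pol_inf / D_mod_inf) / (2.0 * gamma)
    return [(clip(alpha0), 0.0), (clip(alpha1), 1.0), (0.0, clip(beta0)), (1.0, clip(beta1))]

def spmi_iteration(pi, P, pi_target, P_target, candidates, bound):
    """Pick the vertex maximizing the bound, then apply the convex-combination updates of Algorithm 1."""
    alpha, beta = max(candidates, key=lambda ab: bound(*ab))
    pi_next = alpha * pi_target + (1.0 - alpha) * pi
    P_next = beta * P_target + (1.0 - beta) * P
    return pi_next, P_next
```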
It is noteworthy that we can symmetrically extend the dichotomy to the policy space, although the need to limit the agent on the direction of policy updates is less signi\ufb01cant in our perspective. 4.3. Target Choice To deal with unconstrained spaces, it is quite natural to adopt the target selection strategy presented in (Pirotta et al., 2013b), by introducing the concept of greedy model as P +(\u00b7|s, a) \u2208arg maxs\u2032\u2208S U P,\u03c0(s, a, s\u2032), i.e., the model that maximizes the relative advantage in each state-action pair. At each step, the greedy policy and model w.r.t. the QP,\u03c0 and U P,\u03c0 are selected as targets. When we are not free to choose the greedy model, like in the parametric setting, we select the vertex model that maximizes the expected relative advantage (greedy choice). The greedy strategy is based on local information and is not guaranteed to provide a model-policy pair maximizing the bound. However, testing all the model-policy pairs is highly inef\ufb01cient in the presence of large model-policy spaces. A reasonable compromise is to select, as a target, the model that yields the maximum bound value between the greedy target P i \u2208arg maxP \u2208P AP,\u03c0 Pi,\u03c0,\u00b5 and the previous target P i\u22121 (the same procedure can be employed for the policy). This procedure, named persistent choice, effectively avoids the oscillating behavior, common with the greedy choice. 5. Theoretical Analysis In this section, we outline some relevant theoretical results related to SPMI. We start by analyzing the scenario in which the model/policy space is parametric, i.e., is limited to the convex hull of a set of vertex models/policies, and then we provide some rationales for the target choices adopted. In most of the section, we restrict our attention to the transition model, as for the policy all results apply symmetrically. 5.1. Parametric Model Space We consider the setting in which the transition model space is limited to the convex hull of a \ufb01nite set of vertex models (e.g., a set of deterministic models): P = co(P ), where P = {P1, P2, ..., PM}. Each model in co(P ) is de\ufb01ned by means of a coef\ufb01cient vector \u03c9 belonging to the Mdimensional fundamental simplex: P\u03c9 = PM i=1 \u03c9iPi. For the sake of brevity, we omit the dependency on \u03c0 of all the quantities. Moreover, we de\ufb01ne the optimal transition model P\u03c9\u2217as the model that maximizes the expected return, i.e., JP\u03c9\u2217 \u00b5 \u2265JP\u03c9 \u00b5 for all P\u03c9 \u2208co(P ). We start by stating some results on the expected relative advantage functions. Lemma 5.1. For any transition model P\u03c9 \u2208co(P ) it holds that: PM i=1 \u03c9iAPi P\u03c9(s, a) = 0 for all s \u2208S and a \u2208A. As a consequence, we observe that also the expected relative advantage functions APi P\u03c9,\u00b5 sums up to zero when weighted by the coef\ufb01cients \u03c9. An analogous statement holds when the policy is de\ufb01ned as a convex combination of vertex policies. The following theorem establishes an essential property of the optimal transition model. Theorem 5.1. For any transition model P\u03c9 \u2208co(P ) it holds that AP\u03c9 P\u03c9\u2217,\u00b5 \u22640. Moreover, for all P\u03c9 \u2208co \u0000{Pi \u2208 P : \u03c9\u2217 i > 0} \u0001 , it holds that AP\u03c9 P\u03c9\u2217,\u00b5 = 0. 
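Lemma 5.1 can be verified numerically for a randomly mixed model in a small tabular setting; this self-contained check is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, M, gamma = 4, 2, 3, 0.9

def rand_simplex(*shape):
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

vertices = [rand_simplex(S, A, S) for _ in range(M)]          # vertex models P_1, ..., P_M
omega = rand_simplex(M)                                       # a point of the simplex
P_omega = sum(w * Pi for w, Pi in zip(omega, vertices))       # P_omega = sum_i omega_i P_i
pi, R = rand_simplex(S, A), rng.random((S, A))

# Q and U under the pair (P_omega, pi)
P_pi = np.einsum('sa,sat->st', pi, P_omega)
V = np.linalg.solve(np.eye(S) - gamma * P_pi, np.einsum('sa,sa->s', pi, R))
Q = R + gamma * np.einsum('sat,t->sa', P_omega, V)
U = R[:, :, None] + gamma * V[None, None, :]

# relative advantage of each vertex model w.r.t. P_omega, then the omega-weighted mixture
A_vertex = [np.einsum('sat,sat->sa', Pi, U) - Q for Pi in vertices]
print(np.allclose(sum(w * Ai for w, Ai in zip(omega, A_vertex)), 0.0))   # True (Lemma 5.1)
```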
The theorem provides a necessary condition for a transition model to be optimal, i.e., all the expected relative advantages must be non-positive and, moreover, those of the vertex transition models associated with non-zero coef\ufb01cients must be zero. It is worth noting that the expected relative advantage AP\u03c9\u2032 P\u03c9,\u00b5 represents only a local measure of the performance improvement, as it is de\ufb01ned by taking the expectation of the relative advantage AP\u03c9\u2032 P\u03c9 (s, a) w.r.t. the current \u03b4P\u03c9 \u00b5 . On the other hand, the actual performance improvement JP\u03c9\u2032 \u00b5 \u2212JP\u03c9 \u00b5 is a global measure, being obtained by averaging the relative advantage AP\u03c9\u2032 P\u03c9 (s, a) over the new \u03b4P\u03c9\u2032 \u00b5 (Theorem 3.1). This is intimately related to the measure mismatch claim provided in (Kakade et al., 2003) \fCon\ufb01gurable Markov Decision Processes as the model expected relative advantage AP\u03c9\u2217 P\u03c9,\u00b5 might be null even if JP\u03c9\u2217 \u00b5 > JP\u03c9 \u00b5 , making SPMI, like CPI and SPI, stop into locally optimal models. Furthermore, it is intuitive to get convinced that asking for a guaranteed performance improvement may prevent from \ufb01nding the global optimum, as this may require visiting a lower performance region (see Appendix C.1 for an example). Nevertheless, we can provide a bound for the performance gap between a locally optimal model and the global optimal model. Proposition 5.1. Let P\u03c9 be a transition model having nonpositive relative advantage functions w.r.t. the target models. Then: JP\u03c9\u2217 \u00b5 \u2212JP\u03c9 \u00b5 \u2264 1 1 \u2212\u03b3 sup s\u2208S,a\u2208A max i=1,2,...,M APi P\u03c9(s, a). From this result we notice that a suf\ufb01cient condition for a model to be optimal is that APi P\u03c9(s, a) = 0 for all stateaction pairs. This is a stronger requirement than the maximization of JP\u03c9 \u00b5 as it asks the model to be optimal in every state-action pair independently of the initial state distribution \u00b5;5 such a model might not exist when considering a model space P that does not include all the possible transition models (see Appendix C.2 for an example). 5.2. Analogy with Policy Gradient Methods In this section, we elucidate the relationship between the relative advantage function and the gradient of the expected return. Let us start by stating the expression of the gradient of the expected return w.r.t. a parametric transition model. This is the equivalent of the Policy Gradient Theorem (Sutton et al., 2000) for the transition model. Theorem 5.2 (P-Gradient Theorem). Let P\u03c9 be a class of parametric stochastic transition models differentiable in \u03c9, the gradient of the expected return w.r.t. \u03c9 is given by: \u2207\u03c9JP\u03c9 \u00b5 = Z S Z A \u03b4P\u03c9 \u00b5 (s, a) Z S \u2207\u03c9P\u03c9(s\u2032|s, a)\u00d7 \u00d7 U P\u03c9(s, a, s\u2032)ds\u2032dads. Let us now show the connection between \u2207\u03c9JP\u03c9 \u00b5 and the expected relative advantage functions. This result extends that of Kakade et al. (2003) to multiple parameter updates. Proposition 5.2. Let P be the current transition model. Let us consider a target model which is a convex combination of the models in P: P = PM i=1 \u03b7iPi and the update rule: P \u2032 = \u03b2P + (1 \u2212\u03b2)P, \u03b2 \u2208[0, 1]. Then, the derivative of the expected return of P \u2032 w.r.t. 
the \u03b2 coef\ufb01cients evaluated in P is given by: \u2202JP \u2032 \u00b5 \u2202\u03b2 \f \f \f \f \u03b2=0 = M X i=1 \u03b7iAPi P,\u00b5. 5This is the same difference between a policy that maximizes the value function V \u03c0 in all states and a policy that maximizes the expected return J\u03c0. The proposition provides an interesting interpretation of the expected relative advantage function. Suppose that P\u03c9 is the current model and we have to choose which target model(s) we should move toward. The local performance improvement, at the \ufb01rst order, is given by JP \u2032 \u00b5 \u2212JP \u00b5 \u2243 \u2202JP \u2032 \u00b5 \u2202\u03b2 \f \f \u03b2=0\u03b2 = \u03b2 PM i=1 \u03b7iAPi P,\u00b5. Given that \u03b2 will be determined later by maximizing the bound, the local performance improvement is maximized by assigning one to the coef\ufb01cient of the model yielding the maximal advantage. Therefore, the choice of the direction to follow, when considering the greedy target choice, is based on local information only (gradient), while the step size \u03b2 is obtained by maximizing the bound on the guaranteed performance improvement (safe), as done in (Pirotta et al., 2013a). 6. Experimental Evaluation The goal of this section is to show the bene\ufb01ts of con\ufb01guring the environment while the policy learning goes on. The experiments are conducted on two explicative domains: the Student-Teacher domain (unconstrained model space) the Racetrack Simulator (parametric model space). We compare different target choices (greedy and persistent, see Section 4.3) and different update strategies. Speci\ufb01cally, SPMI, that adaptively updates policy and model, is compared with some alternative model learning approaches: SPMI-alt(ernated) in which model and policy updates are forced to be alternated, SPMI-sup that uses a looser bound, obtained from Theorem 3.3 by replacing D\u22c6\u2032,\u22c6 E with D\u22c6\u2032,\u22c6 \u221e,6 SPI+SMI7 that optimizes policy and model in sequence and SMI+SPI that does the opposite. 6.1. Student-Teacher domain The Student-Teacher domain is a simple model of concept learning, inspired by (Rafferty et al., 2011). A student (agent) learns to perform consistent assignments to literals as a result of the statements (e.g., \u201cA+C=3\u201d) provided by an automatic teacher (environment, e.g., online platform). The student has a limited policy space as she/he can only change the values of the literals by a \ufb01nite quantity, but it is possible to con\ufb01gure the dif\ufb01culty of the teacher\u2019s statements, selecting the number of literals in the statement, in order to improve the student\u2019s performance (detailed description in Appendix D.1).8 We start considering the illustrative example in which there are two binary literals, and the student can change only one 6When considering only policy updates, this is equivalent to the bound used in SPI (Pirotta et al., 2013b). 7SMI (Safe Model Iteration) is SPMI without policy updates. 8A problem setting is de\ufb01ned by the 4-tuple number of literals maximum literal value maximum update allowed maximum number of literals in the statement (e.g., 2-1-1-2) \fCon\ufb01gurable Markov Decision Processes 102 103 104 4 6 8 10 iteration expected return 102 103 104 10\u22122 100 iteration \u03b1 102 103 104 10\u22124 10\u22122 100 iteration \u03b2 SPMI SPMI-sup SPMI-alt Figure 1. Expected return, \u03b1 and \u03b2 coef\ufb01cients for the Student-Teacher domain 2-1-1-2 for different update strategies. 
102 103 104 0 0.5 1 iteration policy dissimilarity SPMI-persistent SPMI-greedy Figure 2. Policy dissimilarity for greedy and persistent target choices in the 2-1-1-2 case. 0 10,000 20,000 4 6 8 10 SPI SMI iteration expected return 102 103 104 2 4 6 8 SPI SMI iteration expected return SPMI SPMI-sup SPMI-alt SPI+SMI SMI+SPI 000 102 103 104 2 4 6 8 SPI SMI iteration expected return SPMI-alt SPI+SMI SMI+SPI Figure 3. Expected return for the Student-Teacher domains 2-1-1-2 (left) and 2-3-1-2 (right) for different update strategies. literal at a time (2-1-1-2). This example aims to illustrate bene\ufb01ts of SPMI over other update strategies and target choices. Further scenarios are reported in Appendix E.1. In Figure 1, we show the behavior of the different update strategies starting from a uniform initialization. We can see that both SPMI and SPMI-sup perform the policy updates and the model updates in sequence. This is a consequence of the fact that, by looking only at the local advantage function, it is more convenient for the student to learn an almost optimal policy with no intervention on the teacher and then re\ufb01ning the teacher model to gain further reward. The joint and adaptive strategy of SPMI outperforms both SPMI-sup and SPMI-alt. The alternated model-policy update (SPMI-alt) is not convenient since, with an initial poor-performing policy, updating the model does not yield a signi\ufb01cant performance improvement. It is worth noting that all the methods converge in a \ufb01nite number of steps and the learning rates \u03b1 and \u03b2 exhibit an exponential growth trend. In Figure 2, we compare the greedy target selection with the persistent target selection. The former, while being the best local choice maximizing the advantage, might result in an unstable behavior that slows down the convergence of the algorithm. In Figure 3, we can notice that learning both policy and model is convenient since the performance of SPMI at convergence is higher than the one of SPI (only policy learned) and SMI (only model learned), corresponding to the markers in Figure 3. Although SPMI adopts the tightest bound, its update strategy is not guaranteed to yield globally the fastest convergence as it is based on local information, i.e., expected relative advantage (Figure 3 right). 6.2. Racetrack simulator The Racetrack simulator is an abstract representation of a car driving problem. The autonomous driver (agent) has to optimize a driving policy to run the vehicle on the track, reaching the \ufb01nish line as fast as possible. During the process, the agent can con\ufb01gure two vehicle settings to improve her/his driving performance: the vehicle stability and the engine boost (detailed description in Appendix D.2). We \ufb01rst present an introductory example on a simple track (T1) in which only the vehicle stability can be con\ufb01gured and then we show a case on a different track (T2) including also engine boost con\ufb01guration. These examples show that the optimal model is not necessarily one of the vertex models. Results on other tracks are reported in Appendix E.2. In Figure 4 left, we highlight the effectiveness of SPMI updates over SPMI-sup and SPMI-alt and sequential executions of SMI and SPI on track T1. Furthermore, the SPMI-greedy, which selects the target greedily in each iteration, results in lower performance w.r.t. SPMI. 
Comparing SPMI with the sequential approaches, we can easily deduce that it is not valuable to configure the vehicle stability, i.e., to update the model, while the driving policy is still very rough. Although, in the example shown, the difference between SPMI and SPI+SMI is far less significant in terms of expected return, their learning paths are quite distinctive. In Figure 4 (right), we show the trend of the model coefficient related to high-speed stability.

[Figure 4: Expected return and coefficient of the high-speed-stability vertex model for different update strategies in track T1.]
[Figure 5: Expected return in track T2 with 4 vertex models.]

While the optimal configuration results in a mixed model for vehicle stability, SPMI exploits the maximal high-speed stability to learn the driving policy efficiently in an early stage; SPI+SMI, instead, executes all the policy updates and then directly leads the model to the optimal configuration. SPMI-greedy prefers to avoid the maximal high-speed-stability region as well. It is worth underlining that SPMI may temporarily steer the process away from the optimum if doing so yields higher performance from a local perspective. We consider this behavior quite valuable, especially in scenarios where performance degradations during learning are unacceptable. Figure 5 shows how the previous considerations generalize to an example on a morphologically different track (T2), in which the engine boost can also be configured. The learning process is characterized by a long exploration phase, both in the model and in the policy space, in which the driver cannot lead the vehicle to the finish line to collect any reward. Then, we observe a fast growth in expected return once the agent has acquired enough information to reach the finish line consistently. SPMI displays a more efficient exploration phase compared to the other update strategies and target choices, leading the process to a quicker convergence to the optimal model, which prefers high-speed stability and an intermediate engine boost configuration.

7. Discussion and Conclusions
In this paper, we proposed a novel framework (Conf-MDP) to model a set of real-world decision-making scenarios that, from our perspective, have not been covered by the literature so far. In Conf-MDPs the environment dynamics can be partially modified to improve the performance of the learning agent. Conf-MDPs allow modeling many relevant sequential decision-making problems that we believe cannot be effectively addressed using traditional frameworks. Why not a unique agent? Representing the environment configurability in the agent model when the environment is under the control of a supervisor is certainly inappropriate. Even when the environment configuration is carried out by the agent, this approach would require the inclusion of "configuration actions" in the action space to allow the agent to configure the environment directly as a part of the policy optimization. However, in our framework, the environment configuration is performed just once at the beginning of the episode.
Moreover, with con\ufb01guration actions the agent is not really learning a probability distribution on actions, i.e., a policy, but a probability distribution on state-state couples, i.e., a state kernel. This formulation prevents distinguishing, during the process, the effects of the policy from those of the model, making it dif\ufb01cult to \ufb01nely constrain the con\ufb01gurations, limit the feasible model space, and recovering, a posteriori, the optimal model-policy pair. Why not a multi-agent system? When there is no supervisor, the agent is the only learning entity and the environment is completely passive. In the presence of a supervisor, it would be misleading to adopt a cooperative multi-agent approach. The supervisor acts externally, at a different level and could be, possibly, totally transparent to the learning agent. Indeed, the supervisor does not operate inside the environment but it is in charge of selecting the most suitable con\ufb01guration, whereas the agent needs to learn the optimal policy for the given environment. The second signi\ufb01cant contribution of this paper is the formulation of a safe approach, suitable to manage critical tasks, to solve a learning problem in the context of the newly introduced Conf-MDP framework. To this purpose, we proposed a novel tight lower bound on the performance improvement and an algorithm (SPMI) optimizing this bound to learn the policy and the model con\ufb01guration simultaneously. We then presented an empirical study to show the effectiveness of SPMI in our context and to uphold the introduction of the Conf-MDP framework. This is a seminal paper on Conf-MDPs and the proposed approach represents only a \ufb01rst step in solving these kinds of problems: many future research directions are open. Clearly, a \ufb01rst extension could tackle the problem from a samplebased perspective, removing the requirement of knowing the full model space. Furthermore, we could consider different learning approaches, like policy search methods, suitable for continuous state-action spaces. \fCon\ufb01gurable Markov Decision Processes" + } + ], + "Matteo Papini": [ + { + "url": "http://arxiv.org/abs/2110.14798v1", + "title": "Reinforcement Learning in Linear MDPs: Constant Regret and Representation Selection", + "abstract": "We study the role of the representation of state-action value functions in\nregret minimization in finite-horizon Markov Decision Processes (MDPs) with\nlinear structure. We first derive a necessary condition on the representation,\ncalled universally spanning optimal features (UNISOFT), to achieve constant\nregret in any MDP with linear reward function. This result encompasses the\nwell-known settings of low-rank MDPs and, more generally, zero inherent Bellman\nerror (also known as the Bellman closure assumption). 
We then demonstrate that\nthis condition is also sufficient for these classes of problems by deriving a\nconstant regret bound for two optimistic algorithms (LSVI-UCB and ELEANOR).\nFinally, we propose an algorithm for representation selection and we prove that\nit achieves constant regret when one of the given representations, or a\nsuitable combination of them, satisfies the UNISOFT condition.", + "authors": "Matteo Papini, Andrea Tirinzoni, Aldo Pacchiano, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta", + "published": "2021-10-27", + "updated": "2021-10-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction The ability of an agent to learn an informative mapping from complex observations to a succinct representation is one of the essential factors for the success of machine learning in \ufb01elds such as computer vision, language modeling, and more broadly in deep learning (Bengio et al., 2013). In supervised learning, it is well understood that a \u201cgood\u201d representation is one that allows to accurately \ufb01t any target function of interest (e.g., correctly classify a set of objects in an image). In Reinforcement Learning (RL), this concept is more subtle, as it can be applied to di\ufb00erent aspects of the problem, such as the optimal value function or the optimal policy. Furthermore, recent works have shown that realizability (e.g., being able to represent the optimal value function) is not a su\ufb03cient condition for solving an RL problem, as the sample complexity using realizable representations is exponential in the worst case (e.g., Weisz et al., 2021). As such, a desirable property of a \u201cgood\u201d representation in RL is to enable learning a near-optimal policy with a polynomial sample complexity (or similarly sublinear regret bound). Several works have focused on online learning \u2014 considering sample complexity or regret minimization \u2014 and identi\ufb01ed su\ufb03cient assumptions for e\ufb03cient learning. Standard examples are tabular Markov Decision Processes (MDPs) (e.g., Jaksch et al., 2010; Azar et al., 2012, 2017), low or zero inherent Bellman error (e.g., Jin et al., 2020; Zanette et al., 2020b,a; Jin et al., 2021) and linear mixture MDPs (e.g., Yang and Wang, 2019; Ayoub et al., 2020; Zhang et al., 2021). While, in these settings, the representation is provided as input to the algorithm, an alternative scenario is \u2217Work done while at Facebook AI Research. 1 arXiv:2110.14798v1 [cs.LG] 27 Oct 2021 \fto learn such representations. In this case, research has focused either on the problem of online representation selection for regret minimization (e.g., Ortner et al., 2014, 2019; Lee et al., 2021) or, more recently, on the sample complexity of online representation learning (e.g., Du et al., 2019; Agarwal et al., 2020; Modi et al., 2021). Refer to App. A for more details. While this literature has focused on \ufb01nding a representation enabling learning a near-optimal policy with sublinear regret or polynomial sample complexity, there may be several of such \u201cgood\u201d representations with signi\ufb01cantly di\ufb00erent learning performance and existing approaches are not guaranteed to \ufb01nd the most e\ufb03cient one. Intuitively, we would like to \ufb01nd representations that require the minimum level of exploration to solve the task. 
For example, representations that would allow the algorithm to stop exploring after a \ufb01nite time and play only optimal actions forever (i.e., achieving constant regret), if they exist. This aspect of representation learning was recently studied by Hao et al. (2020); Papini et al. (2021) in contextual linear bandits, where they showed that certain representations display non-trivial properties that enable much better learning performance. While it is well-known that properties such as dimensionality and norm of the features have an impact on the learning performance, Hao et al. (2020); Papini et al. (2021) proved that it is possible to achieve constant regret (i.e., not scaling with the number of learning steps) if a certain (necessary and su\ufb03cient) condition on the features associated with the optimal actions is satis\ufb01ed. To the best of our knowledge, the impact of similar properties on RL algorithms and how to \ufb01nd such representations is largely unexplored. Contributions. In this paper, we investigate the concept of \u201cgood\u201d representations in the context of regret minimization in \ufb01nite-horizon MDPs with linear structure. In particular, we consider the settings of zero inherent Bellman error (also referred to as Bellman closure) (Zanette et al., 2020b) and low-rank structure (e.g., Jin et al., 2020). Similarly to the bandit case (Hao et al., 2020; Papini et al., 2021), we study the impact of representations on the learning process. Our contributions are both fundamental and algorithmic. 1) We provide a necessary condition (called UniSOFT) for a representation to enable constant regret in any problem with linear reward parametrization. Notably, this result encompasses MDPs with zero inherent Bellman error, and linear mixture MDPs with linearly parametrized rewards. Intuitively, the condition generalizes a similar condition for linear contextual bandits and it requires that the features observed along trajectories generated by the optimal actions provide information on the whole feature space (see Asm. 4). 2) We provide the \ufb01rst constant regret bound for MDPs for both ELEANOR (Zanette et al., 2020b) and LSVI-UCB (Jin et al., 2020) when the UniSOFT condition is satis\ufb01ed. As a consequence, we show that good representations are not only necessary but also su\ufb03cient for constant regret in MDPs with zero inherent Bellman error or low-rank assumptions. 3) We develop an algorithm, called LSVI-LEADER, for representation selection in low-rank MDPs. We prove that in low-rank MDPs, LSVI-LEADER su\ufb00ers the regret of the best representation without knowing it in advance. Furthermore, LSVI-LEADER achieves constant regret even when only a suitable combination of the representations satis\ufb01es the UniSOFT condition despite none of them being \u201cgood\u201d. This is indeed possible thanks to its ability to select a di\ufb00erent representation for each stage, state, and action. 2 Preliminaries We consider a time-inhomogeneous \ufb01nite-horizon Markov decision process (MDP) M = \u0000S, A, H, {rh}H h=1, {ph}H h=1, \u00b5 \u0001 where S is the state space and A is the action space, H is the length of the episode, {rh} and {ph} are reward functions and state-transition probability measures, and \u00b5 is the initial state distribution. We denote by rh(s, a) the expected reward of a pair (s, a) \u2208S \u00d7 A at stage h. We assume that S is a measurable space with a possibly in\ufb01nite number of elements and A is a \ufb01nite set. 
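For concreteness, a minimal tabular container matching the tuple just defined is sketched below. It is an illustrative simplification (the setting allows measurable, possibly infinite state spaces), with states and actions encoded as integer indices and stages 0-indexed.

```python
import numpy as np

class FiniteHorizonMDP:
    """Tabular, time-inhomogeneous finite-horizon MDP M = (S, A, H, {r_h}, {p_h}, mu).

    r has shape (H, S, A) with expected rewards r_h(s, a);
    p has shape (H, S, A, S), with p[h, s, a] a distribution over next states;
    mu is the initial-state distribution (shape (S,))."""

    def __init__(self, r, p, mu, seed=0):
        self.H, self.S, self.A = r.shape
        self.r, self.p, self.mu = r, p, mu
        self.rng = np.random.default_rng(seed)

    def reset(self):
        """Sample the initial state s_1 ~ mu."""
        return self.rng.choice(self.S, p=self.mu)

    def step(self, h, s, a):
        """Return the expected reward r_h(s, a) and a next state s' ~ p_h(.|s, a)."""
        s_next = self.rng.choice(self.S, p=self.p[h, s, a])
        return self.r[h, s, a], s_next
```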
A policy \u03c0 = (\u03c01, . . . , \u03c0H) \u2208\u03a0 is a sequence of decision rules \u03c0h : S \u2192A. For every h \u2208[H] := {1, . . . , H} and (s, a) \u2208S \u00d7 A, we de\ufb01ne the value functions of a policy \u03c0 as Q\u03c0 h(s, a) = rh(s, a) + E\u03c0 \" H X i=h+1 ri(si, ai) # , V \u03c0 h (s, a) = Q\u03c0 h(s, \u03c0h(s)), where the expectation is over probability measures induced by the policy and the MDP over state-action sequences of length H \u2212h. Under certain regularity conditions (e.g., Bertsekas and 2 \fShreve, 2004), there always exists an optimal policy \u03c0\u22c6whose value functions are de\ufb01ned by V \u03c0\u22c6 h (s) := V \u22c6 h (s) = sup\u03c0 V \u03c0 h (s) and Q\u03c0\u22c6 h (s, a) := Q\u22c6 h(s, a) = sup\u03c0 Q\u03c0 h(s, a). The optimal Bellman equation (and Bellman operator Lh) at stage h \u2208[H] is de\ufb01ned as: Q\u22c6 h(s, a) := LhQ\u22c6 h+1(s, a) = rh(s, a) + Es\u2032\u223cph(s,a) h max a\u2032 Q\u22c6 h+1(s\u2032, a\u2032) i . The value iteration algorithm (a.k.a. backward induction) computes Q\u22c6or Q\u03c0 by applying the Bellman equations starting from stage H down to 1, with V \u03c0 H+1(s) = 0 by de\ufb01nition for any s and \u03c0. The optimal policy is simply the greedy policy w.r.t. Q\u22c6: \u03c0\u22c6 h(s) = argmaxa\u2208A Q\u22c6 h(s, a). In online learning, the agent interacts with an unknown MDP in a sequence of K episodes. At each episode k, the agent observes an initial state sk 1, it selects a policy \u03c0k, it collects the samples observed along a trajectory obtained by executing \u03c0k, it updates the policy, and reiterates over the next episode. We evaluate the performance of a learning agent through the regret: R(K) := PK k=1 V \u22c6 1 (sk 1)\u2212V \u03c0k 1 (sk 1). Linear Representation. When the state space is large or continuous, value functions are often described through a parametric representation. A standard approach is to use linear representations of the state-action function Qh(s, a) = \u03c6h(s, a)T\u03b8h, where \u03c6h : S \u00d7 A \u2192Rd is a time-inhomogeneous feature map and \u03b8h \u2208Rd is an unknown parameter vector.1 In this paper, we consider MDPs satisfying Bellman closure (i.e., zero Inherent Bellman Error) (Zanette et al., 2020b) or low-rank assumptions (e.g., Yang and Wang, 2019; Jin et al., 2020). Assumption 1 (Bellman Closure). De\ufb01ne the set of bounded value function Qh = {Qh|\u03b8h \u2208\u0398h : Qh(s, a) = \u03c6h(s, a)T\u03b8h, \u2200(s, a)} and the associated parameter space \u0398h = {\u03b8h \u2208Rd : |\u03c6h(s, a)T\u03b8h| \u2264 D}. An MDP has zero Inherent Bellman Error (IBE) if \u2200h \u2208[H], sup Qh+1\u2208Qh+1 inf Qh\u2208Qh \u2225Qh \u2212LhQh+1\u2225\u221e= 0. This de\ufb01nition implies that the optimal value function is realizable as Q\u22c6 h \u2208Qh. Furthermore, the function space Q is closed under the Bellman operator, i.e., for all Qh+1 \u2208Qh+1, LhQh+1 \u2208Qh. Under this assumption, value-iteration-based algorithms are guaranteed to converge to the optimal policy in the limit of samples and iterations (Munos and Szepesv\u00e1ri, 2008). In the context of regret minimization, Zanette et al. (2020b) proposed a model-free algorithm, called ELEANOR, that achieves sublinear regret under the Bellman closure assumption, but at the cost of computational intractability.2 The design of a tractable algorithm for regret minimization under low IBE assumption is still an open question in the literature. Assumption 2 (Low-Rank MDP). 
Let \u0398h = Rd, then an MDP has low-rank structure if \u2200s, a, h, s\u2032, rh(s, a) = \u03c6h(s, a)T\u03b8h, ph(s\u2032|s, a) = \u03c6h(s, a)T\u00b5h(s\u2032) where \u00b5h : S \u2192Rd. Then, for any policy \u03c0 \u2208\u03a0, \u2203\u03b8\u03c0 h \u2208\u0398h such that Q\u03c0 h(s, a) = \u03c6h(s, a)T\u03b8\u03c0 h. We assume \u2225\u03b8h\u22252 \u2264 \u221a d, \u2225 R s\u2032 \u00b5h(s\u2032)v(s\u2032)ds\u2032\u22252 \u2264 \u221a d\u2225v\u2225\u221eand \u2225\u03c6h(s, a)\u22252 \u22641, for any s, a, h, and function v : S \u2192R. This assumption is strictly stronger than Bellman closure (Zanette et al., 2020b) and it implies the value function of any policy is linear in the features. Furthermore, under Asm. 2 sublinear regret is achievable using, e.g., LSVI-UCB (Jin et al., 2020), a tractable algorithm for low-rank MDPs. He et al. (2020) have recently established a problem-dependent logarithmic regret bound for LSVI-UCB under a strictly-positive minimum gap. The minimum positive gap provides a natural measure of the di\ufb03culty of an MDP. 1It is possible to extend the setting to di\ufb00erent feature dimensions {dh}h\u2208[H]. 2ELEANOR works under the weaker assumption of low IBE. Jin et al. (2021) considered the more general case of low Bellman Eluder dimension. Their algorithm reduces to ELEANOR in the case of low IBE. 3 \fAlgorithm (setting) Minimax Problem-Dependent Logarithmic Constant with UniSOFT (this work) ELEANOR (Bellman Closure) e O( \u221a d2H3T) (Zanette et al., 2020b) N/A d2H4 \u2206min\u03bb3/2 + log1/2 \u0012 d2H5 \u03b4\u22062 min\u03bb3 + \u0013 (Thm. 8) LSVI-UCB (low-rank MDPs) e O( \u221a d3H3T) (Jin et al., 2020) O( d3H5 \u2206min log2(T)) (He et al., 2020) d3H5 \u2206min log \u0012 d4H6 \u03b4\u22062 min\u03bb3 + \u0013 (Thm. 9) Lower Bound \u2126( \u221a d2H2T) (Zhou et al., 2020, Remark 5.8) \u2126( dH \u2206min ) (He et al., 2020) N/A Table 1: Regret comparisons of ELEANOR and LSVI-UCB. For ELEANOR, we consider the special case of Bellman closure. Assumption 3. The suboptimality gap of taking action a in state s at stage h is de\ufb01ned as: \u2206h(s, a) = V \u22c6 h (s) \u2212Q\u22c6 h(s, a). (1) We assume the minimum positive gap \u2206min = mins,a,h{\u2206h(s, a)|\u2206h(s, a) > 0} is well de\ufb01ned and that the optimal action is unique, i.e., | argmaxa{Q\u22c6 h(s, a)}| = 1, for any s \u2208S, h \u2208[H]. In Tab. 1, we summarize existing bounds in the two settings. Another structural assumption that has gained popularity in the literature is the linear-mixture structure (Jia et al., 2020; Ayoub et al., 2020; Zhou et al., 2020), where the transition function admits a form ph(s\u2032|s, a) = \u03c6h(s\u2032|s, a)T\u03b8h. No structural requirement is made on the reward, which is typically assumed to be known. As a consequence, the value function may not be linearly representable. However, the fact the reward is known and that it is possible to directly learn the parameters \u03b8h of the transition function allow to achieve sublinear regret (even logarithmic) through model-based algorithms. While in this paper we mostly focus on Asm. 1 and 2, in Sect. 3.1 we show that our condition is necessary for constant regret also for linear-mixture MDPs with unknown linear reward. 3 Constant Regret for Linear MDPs In this section, we introduce UniSOFT, a necessary condition for constant regret in any MDP with linear rewards. We show that this condition is also su\ufb03cient in MDPs with Bellman closure. Assumption 4. 
A feature map is UniSOFT (Universally Spanning Optimal FeaTures) for an MDP if it satis\ufb01es Asm. 1 or 2, and for all h \u2208[H] the following holds: span n \u03c6h(s, a) | \u2200(s, a), \u2203\u03c0 \u2208\u03a0 : \u03c1\u03c0 h(s, a) > 0 o = span n \u03c6\u22c6 h(s) | \u2200s, \u03c1\u22c6 h(s) > 0 o , where \u03c1\u03c0 h(s) = E[1 {sh = s} |M, \u03c0] is the occupancy measure of a policy \u03c0, \u03c1\u03c0 h(s, a) = \u03c1h(s)1 {\u03c0h(s) = a}, \u03c1\u22c6 h(s) := \u03c1\u03c0\u22c6 h (s), and \u03c6\u22c6 h(s) := \u03c6h(s, \u03c0\u22c6 h(s)). Intuitively, features that are observed by only playing optimal actions must provide information on the whole space of reachable features at each stage h. We notice that Asm. 4 reduces to the HLS property for contextual bandits considered by Hao et al. (2020); Papini et al. (2021). The key di\ufb00erence is that, in RL, the reachability of a state plays a fundamental role. For example, features of states that are not reachable by any policy are irrelevant, while features of optimal actions in states that are not reachable by the optimal policy (i.e., \u03c6\u22c6 h(s) in a state with \u03c1\u22c6 h(s) = 0) do not contribute to the span of optimal features since they can only be reached by acting sub-optimally. In RL, a related structural assumption to Asm. 4 is the \u201cuniformly excited feature\u201d assumption used by Abbasi-Yadkori et al. (2019, Asm. A4) for average reward problems. Their assumption is strictly stronger than ours since it requires that all policies generate an occupancy measure under which the features span all directions uniformly well. Such an assumption can be related to the ergodicity 4 \fassumption for tabular MDPs, which is known to be restrictive. Another related quantity is the \u201cexplorability\u201d coe\ufb03cient introduced by Zanette et al. (2020c). This term represents how explorative (in the feature space) are the optimal policies of the tasks compatible with the MDP, i.e., considering any possible parameter \u03b8h \u2208\u0398h. This coe\ufb03cient is important in reward-free exploration where the objective is to learn a near optimal policy for any task, which is revealed only once learning has completed. In our setting, we focus only on the properties of the optimal policy for the single task we aim to solve. It is interesting to look into Asm. 4 from an alternative perspective. Denote by 0 \u2264\u03bbh,1 \u2264. . . \u2264 \u03bbh,d the eigenvalues of the matrix \u039bh := Es\u223c\u03c1\u22c6 h \u0002 \u03c6\u22c6 h(s)\u03c6\u22c6 h(s)T\u0003 and by \u03bb+ h = min{\u03bbh,i > 0, i \u2208[d]} the minimum positive eigenvalue. We notice that when the features are non-redundant (i.e., {\u03c6h(s, a)} spans Rd) and the UniSOFT assumption holds, then \u03bb+ h = \u03bbh,1 > 0. As we will see, the minimum positive eigenvalue \u03bb+ h plays a fundamental role in the constant regret bound, together with the minimum gap \u2206min. We provide examples of UniSOFT and Non-UniSOFT representations in App. G, as well as their impact on the learning process. 3.1 UniSOFT is Necessary for Constant Regret The following theorem shows that the UniSOFT condition is necessary to achieve constant regret in a large class of MDPs. Theorem 5. Let M be any MDP with \ufb01nite states, arbitrary dynamics p, linear rewards (i.e., rh(s, a) = \u03c6h(s, a)T\u03b8h) with Gaussian N(0, 1) noise, unique optimal policy \u03c0\u22c6, and where condition UniSOFT (Asm. 4) is not satis\ufb01ed. 
Let M be the set of MDPs with same dynamics as M but di\ufb00erent reward parameters {\u03b8h}h\u2208[H]. Then, there exists no algorithm that su\ufb00ers sub-linear regret in all MDPs in M while su\ufb00ering constant regret in M. Thm. 5 states that in MDPs with linear reward, the UniSOFT condition is necessary to achieve constant regret for any \u201cprovably e\ufb03cient\u201d algorithm. Notably, this result does not put any restriction on the transition model, which can be arbitrary and known. This means that as soon as the reward is linear and unknown to the learning agent, the UniSOFT condition is necessary to attain constant regret. This result applies to low-rank MDPs, linear-mixture MDPs with unknown linear rewards, and MDPs with Bellman closure (Bellman closure implies linear rewards, see Prop. 2 by Zanette et al. (2020b)). Proof sketch of Theorem 5. The key intuition behind the proof is that an algorithm achieving a constant regret must select sub-optimal actions only a \ufb01nite number of times. Nonetheless, in order to learn the optimal policy, all features associated with suboptimal actions should be explored enough. Since UniSOFT does not hold, this cannot happen by executing the optimal policy alone and requires selecting suboptimal policies for long enough, thus preventing constant regret. More formally, we call an algorithm \u201cprovably e\ufb03cient\u201d if it su\ufb00ers sub-linear regret on the given class of MDPs M. Formally, we use the following de\ufb01nition, which is standard to prove problem-dependent lower bounds (e.g., Simchowitz and Jamieson, 2019; Xu et al., 2021). De\ufb01nition 6 (\u03b1-consistency). Let \u03b1 \u2208(0, 1), then an algorithm A is \u03b1-consistent on a class of MDPs M if, for each M \u2208M and K \u22651, there exists a constant cM (independent from K) such that EA M [R(K)] \u2264cMK\u03b1.3 The following lemma is the key result for proving Thm. 5 and it might be of independent interest. It shows that any consistent algorithm must explore su\ufb03ciently all relevant directions in the feature space to discriminate any sub-optimal policy from the optimal one. The proof (reported in App. C) leverages techniques for deriving asymptotic lower bounds for linear contextual bandits (e.g., Lattimore and Szepesvari, 2017; Hao et al., 2020; Tirinzoni et al., 2020). 3In practice, all existing \u201cprovably-e\ufb03cient\u201d algorithms we are interested in are included in this class and cM is polynomial in all problem-dependent quantities (e.g., d, H). For instance, LSVI-UCB and ELEANOR are 1/2-consistent on the class of low-rank and Bellman-closure MDPs, where they enjoy worst-case e O( \u221a K) regret bounds (with cM being O( \u221a d3H4) and O( \u221a d2H4), respectively). 5 \fLemma 7. Let M, M be as in Thm. 5 and A be any \u03b1-consistent algorithm on M. For any \u03c0 \u2208\u03a0, denote by \u03a8\u03c0 h := P s,a \u03c1\u03c0 h(s, a)\u03c6h(s, a) its expected features at stage h and \u2206(\u03c0) := V \u22c6 1 \u2212V \u03c0 1 its sub-optimality gap. Then, for any \u03c0 \u2208\u03a0 with \u2206(\u03c0) > 0 and h \u2208[H], lim sup K\u2192\u221e log(K)\u2225\u03a8\u03c0 h \u2212\u03a8\u22c6 h\u22252 EA M[\u039bK h ]\u22121 \u2264 \u2206(\u03c0)2 2(1 \u2212\u03b1), where \u03a8\u22c6 h := \u03a8\u03c0\u22c6 h and \u039bK h := PK k=1 \u03c6h(sk h, ak h)\u03c6h(sk h, ak h)T. We now proceed by contradiction: suppose that A su\ufb00ers constant expected regret on M even though the MDP does not satisfy the UniSOFT condition. 
Then, since A plays sub-optimal actions only a \ufb01nite number of times, it is possible to show that, for each h \u2208[H], there exists a positive constant \u03bbM > 0 such that EA M[\u039bK h ] \u2aaf\u039b\u22c6 h + \u03bbMI, where \u039b\u22c6 h := K P s:\u03c1\u22c6 h(s)>0 \u03c6\u22c6 h(s)\u03c6\u22c6 h(s)T. Furthermore, since UniSOFT does not hold, there exists a stage h \u2208[H] and a sub-optimal policy \u03c0 (i.e., with \u2206(\u03c0) > 0) such that the vector \u03a8\u03c0 h \u2212\u03a8\u22c6 h does not belong to span {\u03c6\u22c6 h(s)|\u03c1\u22c6 h(s) > 0}. Then, since such space is exactly the one spanned by all the eigenvectors of \u039b\u22c6 h associated with a non-zero eigenvalue, there exists a positive constant \u03f5 > 0 (independent of K) such that \u2225\u03a8\u03c0 h\u2212\u03a8\u22c6 h\u22252 (\u039b\u22c6 h+\u03bbMI)\u22121 \u2265 \u03f52/\u03bbM. That is, even if the (positive) eigenvalues of \u039b\u22c6 h grow with K, the weighted norm of \u03a8\u03c0 h \u2212\u03a8\u22c6 h, which is not in the span of the eigenvectors of such matrix, cannot decrease below a positive constant. Combining these steps with Lem. 7, we obtain \u2206(\u03c0)2 2(1 \u2212\u03b1) \u2265lim sup K\u2192\u221e log(K)\u2225\u03a8\u03c0 h \u2212\u03a8\u22c6 h\u22252 (\u039b\u22c6 h+\u03b7I)\u22121 \u2265\u03f52 \u03bbM lim sup K\u2192\u221e log(K), which is clearly a contradiction. Therefore, A cannot su\ufb00er constant regret in M while su\ufb00ering sub-linear regret in all other MDPs in M, and our claim follows. 3.2 UniSOFT is Su\ufb03cient for Constant Regret While the UniSOFT condition is necessary for achieving constant regret in a large class of MDPs, in the following, we prove that ELEANOR and LSVI-UCB attain constant regret when the UniSOFT assumption holds, thus implying that it is a su\ufb03cient condition in MDPs with low-rank and Bellman closure structure. Theorem 8. Consider an MDP and a representation {\u03c6h}h\u2208[H] satisfying the Bellman closure (Asm. 1) and UniSOFT assumptions (Asm. 4). Under Asm. 3, with probability at least 1 \u22123\u03b4, ELEANOR4 su\ufb00ers a constant regret R(K) \u2272H3/2d r \u03c4 log \u03c4 \u03b4 , where \u03c4 = H\u03ba and \u03ba is the last episode ELEANOR su\ufb00ers a non-zero regret. Furthermore, \u03ba \u2272 max n d2H4 \u03bb2 + , dH4 \u22062 min\u03bb3 + o 5, where \u03bb+ := minh{\u03bb+ h } > 0. Alternatively, we can prove the following result for LSVI-UCB. Theorem 9. Consider an MDP and a representation {\u03c6h}h\u2208[H] satisfying the low-rank (Asm. 2) and UniSOFT assumptions (Asm. 4). Under Asm. 3, with probability 1 \u22123\u03b4, LSVI-UCB su\ufb00ers a constant regret R(K) \u2272d3H5 \u2206min log \u0000dH2\u03ba/\u03b4 \u0001 , where \u03ba is the last episode LSVI-UCB su\ufb00ers a non-zero regret and is upper-bounded as \u03ba \u2272 max n d3H4 \u03bb2 + , d2H4 \u22062 min\u03bb3 + o , where \u03bb+ := minh{\u03bb+ h } > 0. 4ELEANOR and LSVI-UCB are de\ufb01ned up to a regularization parameter \u03bb that we set to \u03bb = 1. 5Here \u2272hides logarithmic terms in \u03bb+, H, and d, but not in K. 6 \fIn both cases, \u03ba is polynomial in all the problem-dependent terms and independent of the number of episodes K (see Lem. 21 and 20). As a result, ELEANOR and LSVI-UCB achieves a constant regret that only depends on \u201cstatic\u201d MDP and representation characteristics, thus indicating that after a \ufb01nite time the agent only executes the optimal policy. 
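To make the role of λ+ and of Asm. 4 in the bounds above concrete, the following is a hypothetical diagnostic for a single stage h of a tabular problem. It assumes access to the optimal occupancy ρ⋆_h and to the features, which a learning agent of course does not have, and it is not part of either ELEANOR or LSVI-UCB.

```python
import numpy as np

def min_positive_eig(phi_star, rho_star, tol=1e-10):
    """lambda^+_h: minimum positive eigenvalue of
    Lambda_h = E_{s ~ rho*_h}[ phi*_h(s) phi*_h(s)^T ].
    phi_star: (S, d) optimal features; rho_star: (S,) optimal occupancy."""
    Lambda = (phi_star * rho_star[:, None]).T @ phi_star
    eigvals = np.linalg.eigvalsh(Lambda)
    positive = eigvals[eigvals > tol]
    return float(positive.min()) if positive.size else 0.0

def is_unisoft_at_stage(phi_reachable, phi_star, rho_star, tol=1e-8):
    """Asm. 4 at stage h: the optimal features of states visited by the optimal
    policy must span the same subspace as the features of all reachable
    state-action pairs (phi_reachable, shape (n_pairs, d)). Since the former
    are a subset of the latter, comparing matrix ranks is enough."""
    phi_opt = phi_star[rho_star > 0]
    return (np.linalg.matrix_rank(phi_opt, tol=tol)
            == np.linalg.matrix_rank(phi_reachable, tol=tol))
```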
Notice also that the bounds should be read as minimum between the constant regret and the minimax regret O( \u221a K), which may be tighter for small K.The main di\ufb00erence between the two previous bounds is that for ELEANOR we build on the anytime minimax regret bound, while for LSVI-UCB, we derive a more re\ufb01ned constant-regret guarantee by building on its problem-dependent bound of He et al. (2020). Unfortunately, limiting factor for applying the analysis in (He et al., 2020) seems to be the fact that ELEANOR is not optimistic at each stage h but rather only at the \ufb01rst stage. As such, whether ELEANOR can achieve a problem-dependent logarithmic regret based on local gaps that can be leverage to improve our analysis is an open question in the literature. Combined proof sketch of Thm. 8 and Thm. 9. We provide a general proof sketch that can be instantiated to both ELEANOR and LSVI-UCB. The purpose is to illustrate what properties an algorithm must have to exploit good representations, and how this leads to constant regret. Consider a learnable feature map {\u03c6h}h\u2208[H] and an algorithm with the following properties: (a) Greedy w.r.t. a Q-function estimate: \u03c0k h(s) = arg maxa\u2208A{Q k h(s, a)}. (b) Global optimism: V k 1(s) \u2265V \u22c6 1 (s) where, for all h \u22651, we set V k h(s) = maxa\u2208A{Q k h(s, a)}. (c) Almost local optimism: \u2200h > 1, \u2203Ch \u22650 s.t. Q k h(s, a) + Ch\u03b2k \u2225\u03c6h(s, a)\u2225(\u039bk h)\u22121 \u2265Q\u22c6 h(s, a). (d) Con\ufb01dence set: let \u039bk h = Pk\u22121 i=1 \u03c6h(si h, ai h)\u03c6h(si h, ai h)T + \u03bbI and \u03b2k \u2208R+ be logarithmic in k, then V k h(sk h) \u2212V \u03c0k h (sk h) \u22642\u03b2k \r \r\u03c6h(sk h, ak h) \r \r (\u039bk h)\u22121 + Es\u2032\u223cph(sk h,ak h) h V k h+1(s\u2032) \u2212V \u03c0k h+1(s\u2032) i . These properties are veri\ufb01ed by ELEANOR (Zanette et al., 2020b, App. C) and LSVI-UCB (Jin et al., 2020, Lem. B.4, B.5). Note that for LSVI-UCB condition (c) is trivially veri\ufb01ed since the algorithm is optimistic at each stage (Ch = 0). On the other hand, ELEANOR is only guaranteed to be optimistic at the \ufb01rst stage, and (c) is thus important (Ch = 2). First, we use existing techniques to establish an any-time regret bound, either worst-case or problem-dependent. We call this g(k) and prove that R(k) \u2264g(k) \u2264e O( \u221a k) for any k with probability 1 \u22122\u03b4. Next, we show that, under Asm. 4, the eigenvalues of the design matrix grow almost linearly, making the con\ufb01dence intervals decrease at a 1/ \u221a k rate. From some algebra and a martingale argument, \u039bk+1 h \u2ab0k\u039b\u22c6 h + \u03bbI \u2212\u2206\u22121 ming(k)I \u2212e O( \u221a k)I, (2) where \u039b\u22c6 h = Es\u223c\u03c1\u22c6 h[\u03c6\u22c6 h(s)\u03c6\u22c6 h(s)T]. The UniSOFT property ensures that the linear term is nonzero in relevant directions, while the regret bound of the algorithm makes the penalty term sublinear. Then, we show that, for any reachable (s, a), \u03b2k \u2225\u03c6h(s, a)\u2225(\u039bk h)\u22121 \u2264\u03b2k k \u2212e O( \u221a k) (k\u03bb+ h \u2212e O( \u221a k))3/2 = e O(k\u22121/2), (3) where \u03bb+ h is the minimum nonzero eigenvalue of \u039b\u22c6 h. From (3), we can see that \u03bb+ h plays a fundamental role in the rate of decrease. Finally, we show that, under the gap assumption, these uniformlydecreasing con\ufb01dence intervals allow learning the optimal policy in a \ufb01nite time. 
From the Bellman equations, we have that V \u22c6 1 (sk 1) \u2212V \u03c0k 1 (sk 1) = E\u03c0k \" H X h=1 \u2206h(sh, ah)|s1 = sk 1 # , (4) while from (a)-(d), for any reachable state, \u2206h(s, \u03c0k h(s)) \u22642 E\u03c0k \" H X i=h \u03b2k \u2225\u03c6i(si, ai)\u2225(\u039bk i )\u22121 |sh = s # + 1h>1Ch\u03b2k \u2225\u03c6\u22c6 h(s)\u2225(\u039bk h)\u22121 . 7 \fAlgorithm 1: LSVI-LEADER Input: Representations {\u03a6j}j\u2208[N], con\ufb01dence values {\u03b2k}k\u2208[K] for k = 1, . . . , K do Receive the initial state sk 1 for h = H, . . . , 1 do \u039bk h(j) = \u03bbI + Pk\u22121 i=1 \u03c6(j) h (si h, ai h)\u03c6(j) h (si h, ai h)T \u2200j \u2208[N]. wk h(j) = \u039bk h(j)\u22121 Pk\u22121 i=1 \u03c6(j) h (si h, ai h) \u0012 rh(si h, ai h) + max a\u2208A Qk h+1(si h+1, a) \u0013 , \u2200j \u2208[N] Qk h(s, a) = min \u001a H, minj\u2208[N] \u0012 \u03c6(j) h (s, a)Twk h(j) + \u03b2k \r \r \r\u03c6(j) h (s, a) \r \r \r \u039bk h(j)\u22121 \u0013\u001b end for h = 1, . . . , H do Execute action ak h = \u03c0k h(sk h) := argmaxa\u2208A Qk h(sk h, a). end end The second term (with 1h>1) accounts for the almost-optimism of ELEANOR, while it is zero in LSVI-UCB due to the stage-wise optimism. Then, for every h \u2208[H], we can use (3) to control the feature norms. Thus, there exists an episode \u03bah independent of K satisfying \u2206h(s, \u03c0k h(s)) \u2264\u03b2\u03bah H X i=h (2 + 1i=h>1Ch) \u03bah \u22128 p \u03bah log(2d\u03bahH/\u03b4) \u2212g(\u03bah) (\u03bah\u03bb+ i \u22128 p \u03bah log(2d\u03bahH/\u03b4) \u2212g(\u03bah))3/2 < \u2206min, (5) By de\ufb01nition of minimum gap, then \u2206h(s, \u03c0k h(s)) = 0 for k > \u03bah. Then, for k > \u03ba = maxh{\u03bah}, V \u22c6 1 (sk 1) \u2212V \u03c0k 1 (sk 1) = 0. But this means the algorithm only accumulates regret up to \u03ba, that is, R(K) = R(\u03ba) \u2264g(\u03ba) = O(1) for all K > \u03ba. This holds with probability 1 \u22123\u03b4, also taking into account the martingale argument from (2). Note that {\u03bah} are by de\ufb01nition monotone for LSVI-UCB. The \ufb01nal bounds are then obtained by instantiating the speci\ufb01c values of \u03b2k and g(k) for the two algorithms we analyzed. 4 Representation Selection in Low-Rank MDPs In Sec. 3, we have highlighted the bene\ufb01ts that a UniSOFT representation brings to optimistic algorithms in MDPs with Bellman closure and low rank structure. In this section, we take one step further and investigate the representation selection problem. Since ELEANOR is a computationally intractable algorithm, we build on LSVI-UCB and low-rank MDPs (Asm. 2) and we introduce LSVI-LEADER (Alg. 1), an algorithm that adaptively selects representations in a given set. Given a set of N representations {\u03a6j}j\u2208[N] satisfying Asm. 2, where \u03a6j = \b \u03c6(j) h \t h\u2208[H], at each stage h \u2208[H] of episode k \u2208[K], LSVI-LEADER solves N di\ufb00erent regression problems to compute an optimistic value function for each representation. Then, the \ufb01nal estimate Q k h(s, a) is taken as the minimum across these di\ufb00erent optimistic value functions. Notably, this implies that LSVI-LEADER implicitly combines representations, in the sense that the selected representations (i.e., those with tightest optimism) might vary for di\ufb00erent stages. This is exploited in the following result, which shows that constant regret is achievable even if none of the given representations is globally UniSOFT. Theorem 10. 
Given an MDP M and a set of representations {\u03a6j}j\u2208[N] satisfying the low-rank assumption (Asm. 2), let Z be the set of HN representations obtained by combining those in {\u03a6j}j\u2208[N] across di\ufb00erent stages.6 Then, with probability at least 1 \u22122\u03b4, LSVI-LEADER su\ufb00ers at most a regret R(K) \u2264min z\u2208Z e R(K, z, {\u03b2k}), 6Note that any combination of features in \u03a6j is learnable, since each representation is learnable in the low-rank MDP sense. 8 \fwhere e R(K, z, \u03b2k) is either the worst-case regret bound of LSVI-UCB (Jin et al., 2020) or the problemdependent one (He et al., 2020) when the algorithm is executed with representation z and con\ufb01dence values \u03b2k \u221ddH p N log(2dNHk/\u03b4). Moreover, if Z contains a UniSOFT representation z\u22c6, then LSVI-LEADER achieves constant regret with problem-dependent values of z\u22c6(see Thm. 9). This result shows that LSVI-LEADER adapts to the best representation automatically, i.e., without any prior knowledge about the properties of the representations. In particular, it shows a problem-dependent (or worst-case) bound when there is no UniSOFT representation, while it attains constant regret when a representation, potentially mixed through stages, is UniSOFT. This is similar to what was obtained by Papini et al. (2021) for linear contextual bandits. Indeed, LSVI-LEADER reduces to their algorithm in the case H = 1. While the cost of representation selection is only logarithmic in linear bandits, the cost becomes polynomial (i.e., \u221a N in the worst-case bound and N in the problem-dependent one) in RL. This is due to the structure induced by the Bellman equation, which requires a cover argument over HN functions (more details in the proof sketch). Note that for H = 1, the analysis can be re\ufb01ned to obtain a log(N) dependence, due to the lack of propagation through stages, and recover the result in (Papini et al., 2021). We refer the read to App. G for a numerical validation. Proof sketch of Thm. 10. The proof relies on the following important result, which extends Lem. B.4 of Jin et al. (2020) and shows that the deviation between the optimistic value function computed by LSVI-LEADER and the true one scales with the minimum con\ufb01dence interval across the di\ufb00erent representations. Formally, with probability 1 \u22122\u03b4, for any \u03c0 \u2208\u03a0, s \u2208S, a \u2208A, h \u2208 [H], k \u2208[K], Q k h(s, a) \u2212Q\u03c0 h(s, a) \u22642\u03b2k min j\u2208[N] \r \r \r\u03c6(j) h (s, a) \r \r \r \u039bk h(j)\u22121 + Es\u2032\u223cph(s,a) h V k h+1(s\u2032) \u2212V \u03c0 h+1(s\u2032) i . As in (Jin et al., 2020), the derivation of this result combines the well-known self-normalized martingale bound in (Abbasi-Yadkori et al., 2011) with a covering argument over the space of possible optimistic value functions. In our setting, the structure of such function space requires us to build N di\ufb00erent covers, one for each di\ufb00erent representation. This, in turn, requires the con\ufb01dence values \u03b2k to be in\ufb02ated by an extra factor \u221a N w.r.t. learning with a single representation. The generality of this result allows us to easily derive, for any \ufb01xed representation z \u2208Z, both the worst-case regret bound of Jin et al. (2020) and the problem-dependent one of He et al. (2020). 
To see this, note that the regret decompositions in both of these two papers rely on an upper bound to V k h(sk h) \u2212V \u03c0k h (sk h) as a function of the \ufb01xed representation used by LSVI-UCB (see the proof of Theorem 3.1 of Jin et al. (2020) and Lemma 6.2 of He et al. (2020)). Then, \ufb01x any z \u2208Z and call zh its features at stage h. Note that zh \u2208{\u03c6(j) h }j\u2208[M]. Moreover, by de\ufb01nition of low-rank structure, since each \u03a6j induces a low-rank MDP, their combination does too. Thus, z is learnable. Then, instantiating the concentration bound stated above for policy \u03c0k, state sk h, action ak h, stage h, and by upper bounding the minimum with the representation selected in zh, we get V k h(sk h) \u2212V \u03c0k h (sk h) \u22642\u03b2k \r \rzh(sk h, ak h) \r \r \u039bk h(j)\u22121 + Es\u2032\u223cph(sk h,ak h) h V k h+1(s\u2032) \u2212V \u03c0k h+1(s\u2032) i . From here, one can carry out exactly the same proofs of Jin et al. (2020) and He et al. (2020), thus obtaining the same regret bound that LSVI-UCB enjoys when executed with the \ufb01xed representation z \u2208Z and con\ufb01dence values {\u03b2k}k\u2208[K]. Hence, we conclude that the regret of LSVI-LEADER is upper bounded by the minimum of these regret bounds for all representations z \u2208Z, thus proving the \ufb01rst result. To obtain the second result, simply notice that, if z\u22c6\u2208Z is UniSOFT, then we can use the re\ufb01ned analysis for LSVI-UCB of Thm. 9 to show that e R(K, z\u22c6, {\u03b2k}) is upper bounded by a constant independent of K, hence proving constant regret for LSVI-LEADER. 4.1 Representation Selection under a Mixing Condition We show that the LSVI-LEADER algorithm not only is able to select the best representation among a set of viable representations, and to combine representations for the di\ufb00erent stages, but also to 9 \fstitch representations together across states and actions. With this in mind we introduce the notion of a mixed ensemble of representations. De\ufb01nition 11. Consider an MDP M and a set of representations {\u03a6j}j\u2208[N] satisfying the low-rank assumption (Asm. 2). The collection of feature maps {\u03a6j}j\u2208[M] is UniSOFT-mixing if for all s, a \u2208S \u00d7 A and h \u2208[H], there exists j such that \u03c6(j) h (s, a) \u2208span n \u03c6(j) h (s, \u03c0\u22c6 h(s))|\u03c1\u22c6 h(s) > 0 o . We show that when presented with a UniSOFT-mixing family of representations, LSVI-LEADER is able to successfully combine these and obtain a regret guarantee that may be better than what is achievable by running LSVI-UCB using any of these representations in isolation. Theorem 12. Consider an MDP M and a set of representations {\u03a6j}j\u2208[N] satisfying the lowrank (Asm. 2) and UniSOFT-mixing assumptions. If \u2206min > 0 (Asm. 3), then with probability at least 1 \u22123\u03b4, there exist a constant e \u03ba = maxh{\u03bah} independent from K such that the regret of LSVI-LEADER after K episodes is at most: R(K) \u2264min z\u2208Z e R \u0000e \u03ba, z, {\u03b2k} \u0001 , where Z, e R and \u03b2k are de\ufb01ned as in Thm. 10. Under the UniSOFT-mixing condition, LSVI-LEADER may not converge to selecting a single representation for each stage h but rather to mixing multiple representations. In fact, it may select a di\ufb00erent representation in di\ufb00erent regions of the state-action space. This is the main di\ufb00erence w.r.t. Thm. 
10, where constant regret is shown when there exists a representation z\u22c6that is UniSOFT, and the value \u03bah depends on the minimum positive eigenvalue of z\u22c6 h. In the case of UniSOFT-mixing, \u03bah depends on properties of a combination of representations at stage h. We provide a characterization of \u03bah in the full proof in App. E. 5 Conclusions We investigated the properties that make a representation e\ufb03cient for online learning in MDPs with Bellman closure. We introduced UniSOFT, a necessary and su\ufb03cient condition to achieve a constant regret bound in this class of MDPs. We demonstrate that existing optimistic algorithms are able to adapt to the structure of the problem and achieve constant regret. Furthermore, we introduce an algorithm able to achieve constant regret by mixing representations across states, actions and stages in the case of low-rank MDPs. An interesting direction raised by our paper is whether it is possible to leverage the UniSOFT structure for probably-e\ufb03cient representation learning, rather than selection. Another direction can be to leverage these insights to drive the design of auxiliary losses for representation learning, for example in deep RL." + }, + { + "url": "http://arxiv.org/abs/2104.03781v1", + "title": "Leveraging Good Representations in Linear Contextual Bandits", + "abstract": "The linear contextual bandit literature is mostly focused on the design of\nefficient learning algorithms for a given representation. However, a contextual\nbandit problem may admit multiple linear representations, each one with\ndifferent characteristics that directly impact the regret of the learning\nalgorithm. In particular, recent works showed that there exist \"good\"\nrepresentations for which constant problem-dependent regret can be achieved. In\nthis paper, we first provide a systematic analysis of the different definitions\nof \"good\" representations proposed in the literature. We then propose a novel\nselection algorithm able to adapt to the best representation in a set of $M$\ncandidates. We show that the regret is indeed never worse than the regret\nobtained by running LinUCB on the best representation (up to a $\\ln M$ factor).\nAs a result, our algorithm achieves constant regret whenever a \"good\"\nrepresentation is available in the set. Furthermore, we show that the algorithm\nmay still achieve constant regret by implicitly constructing a \"good\"\nrepresentation, even when none of the initial representations is \"good\".\nFinally, we empirically validate our theoretical findings in a number of\nstandard contextual bandit problems.", + "authors": "Matteo Papini, Andrea Tirinzoni, Marcello Restelli, Alessandro Lazaric, Matteo Pirotta", + "published": "2021-04-08", + "updated": "2021-04-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction The stochastic contextual bandit is a general framework to formalize sequential decision-making problems in which at each step the learner observes a context drawn from a \ufb01xed distribution, it plays an action, and it receives a noisy reward. The goal of the learner is to maximize the reward accumulated over n rounds, and the performance is typically measured by the regret w.r.t. playing the optimal action in each context. This paradigm has found application in a large range of domains, including recommendation systems, online advertising, and clinical trials (e.g., Boune\ufb00ouf and Rish, 2019). 
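As a minimal illustration of the interaction protocol just described, one could simulate a finite contextual bandit as follows; the tabular mean-reward matrix and the Gaussian noise are assumptions made for this example only (the setting only requires sub-Gaussian noise).

```python
import numpy as np

class ContextualBandit:
    """Finite stochastic contextual bandit: at each round a context is drawn
    i.i.d. from rho, the learner plays an arm, and it observes the mean reward
    corrupted by Gaussian (hence sub-Gaussian) noise."""

    def __init__(self, mu, rho, sigma=1.0, seed=0):
        self.mu = np.asarray(mu)    # (n_contexts, n_arms) mean rewards
        self.rho = np.asarray(rho)  # context distribution
        self.sigma = sigma
        self.rng = np.random.default_rng(seed)

    def sample_context(self):
        return self.rng.choice(len(self.rho), p=self.rho)

    def pull(self, x, a):
        return self.mu[x, a] + self.sigma * self.rng.normal()
```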
Linear contextual bandit (Lattimore and Szepesvári, 2020) is one of the most studied instances of contextual bandit due to its efficiency and strong theoretical guarantees. In this setting, the reward for each context x and action a is assumed to be representable as the linear combination between d-dimensional features φ(x, a) ∈ R^d and an unknown parameter θ⋆ ∈ R^d. In this case, we refer to φ as a realizable representation. Algorithms based on the optimism-in-the-face-of-uncertainty principle, such as LinUCB (Chu et al., 2011) and OFUL (Abbasi-Yadkori et al., 2011), have been proved to achieve minimax regret bound \(O(S d \sqrt{n} \ln(nL))\) and problem-dependent regret \(O(S^2 d^2 \ln^2(nL) / \Delta)\), where Δ is the minimum gap between the reward of the best and second-best action across contexts, and L and S are upper bounds to the ℓ2-norm of the features φ and θ⋆, respectively. Unfortunately, the dimension d, and the norm upper bounds L and S, are not the only characteristics of a representation to have an effect on the regret, and existing bounds may fail at capturing the impact of the context-action features on the performance of the algorithm. In fact, as illustrated in Fig. 1, running LinUCB with different realizable representations with same parameters d and S may lead to significantly different performance.

[Figure 1: Regret of LinUCB with different realizable representations φ0, ..., φ5 with same dimension d and parameter bound S. The dashed blue line is LEADER, our proposed representation selection algorithm. Details in App. G.1.]
(∗ Work done when Matteo Papini was with Facebook AI Research.)

Notably, there are "good" representations for which LinUCB achieves constant regret, i.e., not scaling with the horizon n. Recent works identified different conditions on the representation that can be exploited to achieve constant regret for LinUCB (Hao et al., 2020; Wu et al., 2020). Similar conditions have also been leveraged to prove other interesting learning properties, such as sub-linear regret for greedy algorithms (Bastani et al., 2020), or regret guarantees for model selection between linear and multi-arm representations (Chatterji et al., 2020; Ghosh et al., 2020). While all these conditions, often referred to as diversity conditions, depend on how certain context-arm features span the full R^d space, there is no systematic analysis of their connections and of which ones can be leveraged to achieve constant regret in linear contextual bandits. In this paper, we further investigate the concept of "good" representations in linear bandit and we provide the following contributions: 1) We review the diversity conditions available in the literature, clarify their relationships, and discuss how they are used. We then focus on our primary goal, which is to characterize the assumptions needed to achieve constant regret for LinUCB. 2) We introduce a novel algorithm that effectively selects the best representation in a given set, thus achieving constant regret whenever at least one "good" representation is provided.
3) Furthermore, we show that, in certain problems, the algorithm is able to combine given representations to implicitly form a \u201cgood\u201d one, thus achieving constant problem-dependent regret even when running LinUCB on any of the representations would not. 4) Finally, we empirically validate our theoretical \ufb01ndings on a number of contextual bandit problems. Related work. The problem of selecting the best representation in a given set can be seen as a speci\ufb01c instance of the problem of model selection in bandits. In model selection, the objective is to choose the best candidate in a set of base learning algorithms. At each step, a master algorithm is responsible for selecting a base algorithm, which in turn prescribes the action to play and the reward is then provided as feedback to the base algorithms. Examples of model selection methods include adversarial masters \u2013e.g., EXP4 (Auer et al., 2002; Maillard and Munos, 2011) and Corral (Agarwal et al., 2017; Pacchiano et al., 2020b)\u2013 and stochastic masters (Abbasi-Yadkori et al., 2020; Lee et al., 2020; Bibaut et al., 2020; Pacchiano et al., 2020a). For a broader discussion refer to App. A or (Pacchiano et al., 2020a, Sec. 2). Most of these algorithms achieve the regret of the best base algorithm up to a polynomial dependence on the number M of base algorithms (Agarwal et al., 2017). While existing model selection methods are general and can be applied to any type of base algorithms, 1 they may not be e\ufb00ective in problems with a speci\ufb01c structure. An alternative approach is to design the master algorithm for a speci\ufb01c category of base algorithms. An instance of this case is the representation-selection problem, where the base algorithms only di\ufb00er by the representation used to estimate the reward. Foster et al. (2019) and Ghosh et al. (2020) consider a set of nested representations, where the best representation is the one with the smallest dimensionality for which the reward is realizable. Finally, Chatterji et al. (2020) focus on the problem of selecting between a linear and a multi-armed bandit representation. In this paper, we consider an alternative representation-selection problem in linear contextual bandits, where the objective is to exploit constant-regret \u201cgood\u201d representations. Di\ufb00erently from our work, Lattimore et al. (2020) say that a linear representation is \u201cgood\u201d if it has a low misspeci\ufb01cation (i.e., it represents the reward up to a small approximation error), while we focus on realizable representations for which LinUCB achieves constant-regret. 2 Preliminaries We consider the stochastic contextual bandit problem (contextual problem for short) with context space X and \ufb01nite action set A = [K] = {1, . . . , K}. At each round t \u22651, the learner observes a context xt sampled i.i.d. from a dis1Most of existing methods only require prior knowledge of the regret of the optimal base algorithm or a bound on the regret of all base algorithms. Corral also requires the base algorithms to satisfy certain stability conditions. 2 \ftribution \u03c1 over X, it selects an arm at \u2208[K] and it receives a reward yt = \u00b5(xt, at)+\u03b7t where \u03b7t is a \u03c3-subgaussian noise. The learner\u2019s objective is to minimize the pseudo-regret Rn = Pn t=1 \u00b5\u22c6(xt) \u2212\u00b5(xt, at) for any n > 0, where \u00b5\u22c6(xt) := maxa\u2208[K] \u00b5(xt, a). 
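The pseudo-regret defined above can be computed offline from the played sequence whenever the mean rewards are known, e.g., in simulation; the helper below is a small illustrative sketch, and the tabular mean-reward matrix is an assumption of the example, not of the setting.

```python
import numpy as np

def pseudo_regret(mu, contexts, actions):
    """R_n = sum_t mu*(x_t) - mu(x_t, a_t), with mu*(x) = max_a mu(x, a).
    mu: (n_contexts, n_arms) mean rewards; contexts, actions: played sequences."""
    mu = np.asarray(mu)
    contexts = np.asarray(contexts)
    actions = np.asarray(actions)
    best = mu[contexts].max(axis=1)    # mu*(x_t) at each round
    played = mu[contexts, actions]     # mu(x_t, a_t) at each round
    return float(np.sum(best - played))
```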
We de\ufb01ne the minimum gap as \u2206= infx\u2208X:\u03c1(x)>0,a\u2208[K],\u2206(x,a)>0{\u2206(x, a)} where \u2206(x, a) = \u00b5\u22c6(x) \u2212\u00b5(x, a). A realizable d\u03c6-dimensional linear representation is a feature map \u03c6 : X \u00d7 [K] \u2192Rd\u03c6 for which there exists an unknown parameter vector \u03b8\u22c6 \u03c6 \u2208Rd\u03c6 such that \u00b5(x, a) = \u27e8\u03c6(x, a), \u03b8\u22c6 \u03c6\u27e9. When a realizable linear representation is available, the problem is called (stochastic) linear contextual bandit and can be solved using, among others, optimistic algorithms like LinUCB (Chu et al., 2011) or OFUL (Abbasi-Yadkori et al., 2011). Given a realizable representation \u03c6, at each round t, LinUCB builds an estimate \u03b8t\u03c6 of \u03b8\u22c6 \u03c6 by ridge regression using the observed data. Denote by Vt\u03c6 = \u03bbId\u03c6 + Pt\u22121 k=1 \u03c6(xk, ak)\u03c6(xk, ak)T the (\u03bb > 0)-regularized design matrix at round t, then \u03b8t\u03c6 = V \u22121 t\u03c6 Pt\u22121 k=1 \u03c6(xk, ak)yk. Assuming that \u2225\u03b8\u22c6 \u03c6\u22252 \u2264S\u03c6 and supx,a \u2225\u03c6(x, a)\u22252 \u2264L\u03c6, LinUCB builds a con\ufb01dence ellipsoid Ct\u03c6(\u03b4) = \b \u03b8 \u2208Rd\u03c6 : \r \r\u03b8t\u03c6 \u2212\u03b8 \r \r Vt\u03c6 \u2264\u03b2t\u03c6(\u03b4) \t . As shown in (Abbasi-Yadkori et al., 2011, Thm. 1), when \u03b2t\u03c6(\u03b4) := \u03c3 s 2 ln \u0012det(Vt\u03c6)1/2 det(\u03bbId\u03c6)\u22121/2 \u03b4 \u0013 + \u221a \u03bbS\u03c6, then P(\u2200t \u22651, \u03b8\u22c6 \u03c6 \u2208Ct\u03c6(\u03b4)) \u22651 \u2212\u03b4. At each step t, LinUCB plays the action with the highest upper-con\ufb01dence bound at = argmaxa\u2208[K] max\u03b8\u2208Ct\u03c6(\u03b4)\u27e8\u03c6(xt, a), \u03b8\u27e9, and it is shown to achieve a regret bounded as reported in the following proposition. Proposition 1 (Abbasi-Yadkori et al., 2011, Thm. 3, 4). For any linear contextual bandit problem with d\u03c6dimensional features, supx,a \u2225\u03c6(x, a)\u22252 \u2264L\u03c6, an unknown parameter vector \u2225\u03b8\u22c6 \u03c6\u22252 \u2264S\u03c6, with probability at least 1 \u2212\u03b4, LinUCB su\ufb00ers regret Rn = O \u0000S\u03c6d\u03c6 \u221an ln(nL\u03c6/\u03b4) \u0001 . Furthermore, if the problem has a minimum gap \u2206> 0, then the regret is bounded as2 Rn = O \u0012 S2 \u03c6d2 \u03c6 \u2206 ln2(nL\u03c6/\u03b4) \u0013 . In the rest of the paper, we assume w.l.o.g. that all terms \u03bb, \u2206max = maxx,a \u2206(x, a), S\u03c6, \u03c3 are larger than 1 to simplify the expression of the bounds. 3 Diversity Conditions Several assumptions, usually referred to as diversity conditions, have been proposed to de\ufb01ne linear bandit problems with speci\ufb01c properties that can be leveraged to derive improved learning results. While only a few of them were actually leveraged to derive constant regret guarantees for LinUCB (others have been used to prove e.g., sub-linear regret for the greedy algorithm, or regret guarantees for model selection algorithms), they all rely on very similar conditions on how certain context-action features span the full Rd\u03c6 space. In this section, we provide a thorough review of these assumptions, their connections, and how they are used in the literature. As diversity conditions are getting more widely used in bandit literature, we believe this review may be of independent interest. Sect. 4 will then speci\ufb01cally focus on the notion of good representation for LinUCB. We \ufb01rst introduce additional notation. 
For a realizable representation \u03c6, let \u03c6\u22c6(x) := \u03c6(x, a\u22c6 x), where a\u22c6 x \u2208argmaxa\u2208[K] \u00b5(x, a) is an optimal action, be the vector of optimal features for context x. In the following we make the assumption that \u03c6\u22c6(x) is unique. Also, let X \u22c6(a) = {x \u2208X : \u00b5(x, a) = \u00b5\u22c6(x)} denote the set of contexts where a is optimal. Finally, for any matrix A, we denote by \u03bbmin(A) its minimum eigenvalue. For any contextual problem with reward \u00b5 and context distribution \u03c1, the diversity conditions introduced in the literature are summarized in Tab. 2 together with how they were leveraged to obtain regret bounds in di\ufb00erent settings.3 We \ufb01rst notice that all conditions refer to the smallest eigenvalue of a design matrix constructed on speci\ufb01c context-action features. In other words, diversity conditions require certain features to span the full Rd\u03c6 space. The non-redundancy condition is a common technical assumption (e.g., Foster et al., 2019) and it simply de\ufb01nes a problem whose dimensionality cannot be reduced without losing information. Assuming the context distribution \u03c1 is full support, BBK and CMB are structural properties of the representation that are independent from the reward. For example, BBK requires that, for each action, there must be feature vectors lying in all orthants of 2The logarithmic bound reported in Prop. 1 is slightly di\ufb00erent than the one in (Abbasi-Yadkori et al., 2011) since we do not assume that the optimal feature is unique. 3In some cases, we adapted conditions originally de\ufb01ned in the disjoint-parameter setting, where features only depend on the context (i.e., \u03c6(x)) and the unknown parameter \u03b8\u22c6 a is di\ufb00erent for each action a, to the shared-parameter setting (i.e., where features are functions of both contexts and actions) introduced in Sect. 2. 3 \fName De\ufb01nition Application Nonredundant \u03bbmin \u0010 1/K P a\u2208[K] Ex\u223c\u03c1 \u0002 \u03c6(x, a)\u03c6(x, a)T\u0003\u0011 > 0 CMB \u2200a, \u03bbmin \u0010 Ex\u223c\u03c1 \u0002 \u03c6(x, a)\u03c6(x, a)T\u0003\u0011 > 0 Model selection BBK \u2200a, u \u2208Rd, \u03bbmin \u0010 Ex \u0002 \u03c6(x, a)\u03c6(x, a)T1 \b \u03c6(x, a)Tu \u22650 \t\u0003 \u0011 > 0 Logarithmic regret for greedy HLS \u03bbmin \u0010 Ex\u223c\u03c1 \u0002 \u03c6\u22c6(x)\u03c6\u22c6(x)T\u0003\u0011 > 0 Constant regret for LinUCB WYS \u2200a, \u03bbmin \u0010 Ex\u223c\u03c1 \u0002 \u03c6(x, a)\u03c6(x, a)T1 {x \u2208X \u22c6(a)} \u0003\u0011 > 0 Constant regret for LinUCB Figure 2: Diversity conditions proposed in the literature adapted to the sharedparameter setting. The names refer to the authors who \ufb01rst introduced similar conditions. HLS CMB BBK WYS Non-redundant Figure 3: Categorization of diversity conditions. Rd\u03c6. In the case of \ufb01nite contexts, this implies there must be at least 2d\u03c6 contexts. WYS and HLS involve the notion of reward optimality. In particular, WYS requires that all actions are optimal for at least a context (in the continuous case, for a non-negligible set of contexts), while HLS only focuses on optimal actions. We now review how these conditions (or variations thereof) were applied in the literature. CMB is a rather strong condition that requires the features associated with each individual action to span the whole Rd\u03c6 space. Chatterji et al. 
(2020) leverage a CMB-like assumption to prove regret bounds for OSOM, a model-selection algorithm that uni\ufb01es multi-armed and linear contextual bandits. More precisely, they consider a variation of CMB, where the context distribution induces stochastic feature vectors for each action that are independent and centered. The same condition was adopted by Ghosh et al. (2020) to study representation-selection problems and derive algorithms able to adapt to the (unknown) norm of \u03b8\u22c6 \u03c6 or select the smallest realizable representation in a set of nested representations. Bastani et al. (2020, Assumption 3) introduced a condition similar to BBK for the disjoint-parameter case. In their setting, they prove that a non-explorative greedy algorithm achieves O(ln(n)) problem-dependent regret in linear contextual bandits (with 2 actions).4 Hao et al. (2020, Theorem 3.9) showed that HLS representations can be leveraged to prove constant problem-dependent regret for LinUCB in the shared-parameter case. Concurrently, Wu et al. (2020) showed that, under WYS, LinUCB achieves constant expected regret in the disjoint-parameter case. A WYS-like condition was also used by Bastani et al. (2020, Assumption 4) to extend the result of sublinear regret for the greedy algorithm to more than two actions. The relationship between all these conditions is derived in the following lemma. Lemma 1. For any contextual problem with reward \u00b5 and context distribution \u03c1, let \u03c6 be a realizable linear representation. The relationship between the diversity conditions in Tab. 2 is summarized in Fig. 3, where each inclusion is in a strict sense and each intersection is non-empty. This lemma reveals non-trivial connections between the diversity conditions, better understood through the examples provided in the proof (see App. B.1). BBK is indeed stronger than CMB, and thus it is su\ufb03cient for the model selection results by Chatterji et al. (2020). By super\ufb01cially examining their de\ufb01nitions, CMB may appear stronger than HLS, but the two properties are actually non-comparable, as there are representations that satisfy one condition but not the other. The implications of Fig. 3 on constant-regret guarantees are particularly relevant for our purposes. There are representations that satisfy BBK or CMB and are neither HLS nor WYS and thus may not enable constant regret for LinUCB. We notice that WYS is a stronger condition than HLS. Although WYS may be necessary for LinUCB to achieve constant regret in the disjoint-parameter case, HLS is su\ufb03cient for the shared-parameter case we consider in this paper. For this reason, in the following section we adopt HLS to de\ufb01ne good representations for LinUCB and provide a more complete characterization. 4Whether this is enough for the optimality of the greedy algorithm in the shared-parameter setting is an interesting problem, but it is beyond the scope of this paper. 4 \f4 Good Representations for Constant Regret The HLS condition was introduced by Hao et al. (2020), who provided a \ufb01rst analysis of its properties. In this section, we complement those results by providing a complete proof of a constant regret bound, a proof of the fact that HLS is actually necessary for constant regret, and a novel characterization of the existence of HLS representations. In the following we de\ufb01ne \u03bb\u03c6,HLS := \u03bbmin \u0000Ex\u223c\u03c1 \u0002 \u03c6\u22c6(x)\u03c6\u22c6(x)T\u0003\u0001 , which is strictly positive for HLS representations. 
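To make the quantities defined in this section concrete, the following is a minimal numerical sketch (not code from the paper) of a finite-context linear contextual bandit: it runs the LinUCB update of Sect. 2 with the confidence radius of Prop. 1 and estimates $\lambda_{\phi,\mathrm{HLS}}$ for a given representation. All function names, the Gaussian noise simulator, and the default values ($\sigma = 0.3$, $\lambda = 1$, $\delta = 0.01$, mirroring the experimental section) are illustrative assumptions; the true parameter is used only to simulate rewards and to set the norm bound $S_\phi$.

```python
import numpy as np

def confidence_radius(V, lam, d, S, sigma, delta):
    # beta_t(delta) = sigma * sqrt(2 ln(det(V)^{1/2} det(lam I)^{-1/2} / delta)) + sqrt(lam) * S
    log_det_ratio = 0.5 * (np.linalg.slogdet(V)[1] - d * np.log(lam))
    return sigma * np.sqrt(2.0 * (log_det_ratio + np.log(1.0 / delta))) + np.sqrt(lam) * S

def run_linucb(phi, theta_star, rho, sigma=0.3, lam=1.0, delta=0.01, n=10_000, seed=0):
    """phi: (n_contexts, K, d) feature tensor; rho: context distribution over its rows."""
    rng = np.random.default_rng(seed)
    n_ctx, K, d = phi.shape
    S = np.linalg.norm(theta_star)              # known bound on ||theta*||_2
    V, b = lam * np.eye(d), np.zeros(d)
    mu = phi @ theta_star                       # (n_ctx, K) true mean rewards
    regret = 0.0
    for _ in range(n):
        x = rng.choice(n_ctx, p=rho)
        V_inv = np.linalg.inv(V)
        theta_hat = V_inv @ b
        beta = confidence_radius(V, lam, d, S, sigma, delta)
        # optimistic index <phi(x,a), theta_hat> + beta * ||phi(x,a)||_{V^{-1}}
        width = np.sqrt(np.einsum('ad,de,ae->a', phi[x], V_inv, phi[x]))
        a = int(np.argmax(phi[x] @ theta_hat + beta * width))
        y = mu[x, a] + sigma * rng.standard_normal()
        V += np.outer(phi[x, a], phi[x, a])
        b += y * phi[x, a]
        regret += mu[x].max() - mu[x, a]
    return regret

def hls_eigenvalue(phi, theta_star, rho):
    """lambda_min of E_x[phi*(x) phi*(x)^T]: strictly positive iff the representation is HLS."""
    mu = phi @ theta_star
    opt = phi[np.arange(phi.shape[0]), mu.argmax(axis=1)]   # optimal features phi*(x)
    second_moment = np.einsum('x,xd,xe->de', rho, opt, opt)
    return float(np.linalg.eigvalsh(second_moment).min())
```

In this sketch, a strictly positive value returned by hls_eigenvalue corresponds to the HLS condition of Tab. 2, and the cumulative regret returned by run_linucb should stop growing after a problem-dependent number of rounds whenever that value is bounded away from zero, consistently with the results of the next section.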
4.1 Constant Regret Bound We begin by deriving a constant problem-dependent regret bound for LinUCB under the HLS condition. Lemma 2. Consider a contextual bandit problem with realizable linear representation \u03c6 satisfying the HLS condition (see Tab. 2). Assume \u2206> 0, maxx,a \u2225\u03c6(x, a)\u22252 \u2264L and \u2225\u03b8\u22c6 \u03c6\u22252 \u2264S. Then, with probability at least 1 \u22122\u03b4, the regret of OFUL after n \u22651 steps is at most Rn \u226432\u03bb\u22062 maxS2 \u03c6\u03c32 \u2206 2 ln \u00121 \u03b4 \u0013 + d\u03c6 ln 1 + \u03c4\u03c6L2 \u03c6 \u03bbd\u03c6 !!2 , where \u2206max = maxx,a \u2206(x, a) is the maximum gap and \u03c4\u03c6 \u2264max \u001a3842d2 \u03c6L2 \u03c6S2 \u03c6\u03c32\u03bb \u03bb\u03c6,HLS\u22062 ln2 64d2 \u03c6L3 \u03c6\u03c3S\u03c6 \u221a \u03bb p \u03bb\u03c6,HLS\u2206\u03b4 ! , 768L4 \u03c6 \u03bb2 \u03c6,HLS ln 512d\u03c6L4 \u03c6 \u03b4\u03bb2 \u03c6,HLS ! \u001b . We \ufb01rst notice that \u03c4\u03c6 is independent from the horizon n, thus making the previous bound a constant only depending on the problem formulation (i.e., gap \u2206, norms L\u03c6 and S\u03c6) and the value \u03bb\u03c6,HLS which measures \u201chow much\u201d the representation \u03c6 satis\ufb01es the HLS condition. Furthermore, one can always take the minimum between the constant regret in Lem. 2 and any other valid regret bound for OFUL (e.g., O(log(n)/\u2206))), which may be tighter for small values of n. While Lem. 2 provides high-probability guarantees, we can easily derive a constant expected-regret bound by running LinUCB with a decreasing schedule for \u03b4 (e.g., \u03b4t \u221d1/t3) and with a slightly di\ufb00erent proof (see App. C and the proof sketch below). Proof sketch (full proof in App. C). Following Hao et al. (2020), the idea is to show that the instantaneous regret rt+1 = \u27e8\u03b8\u22c6, \u03c6\u22c6(xt+1) \u2212\u03c6(xt+1, at+1)\u27e9is zero for su\ufb03ciently large (but constant) time t. By using the standard regret analysis, we have rt+1 \u22642\u03b2t+1(\u03b4) \u2225\u03c6(xt+1, at+1)\u2225V \u22121 t+1 \u2264 2L\u03b2t+1(\u03b4) p \u03bbmin(Vt+1) . Given the minimum-gap assumption, a su\ufb03cient condition for rt+1 = 0 is that the previous upper bound is smaller than \u2206, which gives \u03bbmin(Vt+1) > 4L2\u03b22 t+1(\u03b4)/\u22062. Since \u2206> 0, the problem-dependent regret bound in Prop. 1 holds, and the number of pulls to suboptimal arms up to time t is bounded by gt(\u03b4) = O \u0000(d ln(t/\u03b4)/\u2206)2\u0001 . Hence, the optimal arms are pulled linearly often and, by leveraging the HLS assumption, we are able to show that the minimum eigenvalue of the design matrix grows linearly in time as \u03bbmin(Vt+1) \u2265\u03bb + t\u03bbHLS \u22128L2 s t ln \u00122dt \u03b4 \u0013 \u2212L2gt(\u03b4). By relating the last two equations, we obtain an inequality of the form t\u03bbHLS \u2212o(t) > o(t). If we de\ufb01ne \u03c4 < \u221eas the smallest (deterministic) time such that this inequality holds, we have that after \u03c4 the immediate regret is zero, thus concluding the proof. Note that, if we wanted to bound the expected regret, we could set \u03b4t \u221d1/t3 and the above inequality would still be of the same form (although the resulting \u03c4 would be slightly di\ufb00erent). 5 \fComparison with existing bounds. Hao et al. (2020, Theorem 3.9) prove that LinUCB with HLS representations achieves lim supn\u2192\u221eRn < \u221e, without characterizing the time at which the regret vanishes. Instead, our Lem. 
2 provides an explicit problem-dependent constant regret bound. Wu et al. (2020, Theorem 2) consider the disjoint-parameter setting and rely on the WYS condition. While they indeed prove a constant regret result, their bound depends on the the minimum probability of observing a context (or, in the continuous case, a properly de\ufb01ned meta-context). This re\ufb02ects the general tendency, in previous works, to frame diversity conditions simply as a property of the context distribution \u03c1. On the other hand, our characterization of \u03c4 in terms of \u03bb\u03c6,HLS (Lem. 2) allows relating the regret to the \u201cgoodness\u201d of the representation \u03c6 for the problem at hand. 4.2 Removing the Minimum-Gap Assumption Constant-regret bounds for LinUCB rely on a minimum-gap assumption (\u2206> 0). In this section we show that LinUCB can still bene\ufb01t from HLS representations when \u2206= 0, but a margin condition holds (e.g., Rigollet and Zeevi, 2010; Reeve et al., 2018). Intuitively, we require that the probability of observing a context x decays proportionally to its minimum gap \u2206(x) = mina \u2206(x, a). Assumption 1 (Margin condition). There exists C, \u03b1 > 2 such that for all \u03f5 > 0: \u03c1 \u0000{x \u2208X : \u2206(x) \u2264\u03f5} \u0001 \u2264C\u03f5\u03b1. The following theorem provides a problem-dependent regret bound for LinUCB under this margin assumption. Theorem 1. Consider a linear contextual bandit problem satisfying the margin condition (Asm. 1). Assume maxx,a \u2225\u03c6(x, a)\u22252 \u2264L\u03c6 and \u2225\u03b8\u22c6 \u03c6\u22252 \u2264S\u03c6. Then, given a representation \u03c6, with probability at least 1 \u22123\u03b4, the regret of OFUL after n \u22651 steps is at most Rn \u2264O \u0012\u0010 \u03bb(\u2206maxS\u03c6\u03c3d\u03c6)2n1/\u03b1 + p Cd\u03c6 \u0011 ln2(L\u03c6n/\u03b4) \u0013 . When \u03c6 is HLS (\u03bb\u03c6,HLS > 0), let \u03c4\u03c6 \u221d(\u03bb\u03c6,HLS) \u03b1 2\u2212\u03b1 , then Rn \u2264O \u0010 \u2206max\u03c4\u03c6 + p Cd\u03c6 ln2(L\u03c6n/\u03b4) \u0011 . We \ufb01rst notice that in general, LinUCB su\ufb00ers e O(n1/\u03b1) regret, which can be signi\ufb01cantly larger than in the minimum-gap case. On the other hand, with HLS representations, LinUCB achieves logarithmic regret, regardless of the value of \u03b1. The intuition is that, when the HLS condition holds, the algorithm collects su\ufb03cient information about \u03b8\u22c6 \u03c6 by pulling the optimal arms in rounds with large minimum gap, which occur with high probability by the margin condition. This yields at most constant regret in such rounds (\ufb01rst term above), while it can be shown that the regret in steps when the minimum gap is very small is at most logarithmic (second term above). 4.3 Further Analysis of the HLS Condition While Lem. 2 shows that HLS is su\ufb03cient for achieving constant regret, the following proposition shows that it is also necessary. While this property was \ufb01rst mentioned by Hao et al. (2020) as a remark in a footnote, we provide a formal proof in App. C.5. Proposition 2. For any contextual problem with \ufb01nite contexts, full-support context distribution, and given a realizable representation \u03c6, LinUCB achieves sub-logarithmic regret if and only if \u03c6 satis\ufb01es the HLS condition. Finally, we derive the following important existence result. Lemma 3. 
For any contextual bandit problem with optimal reward $\mu^\star(x) \neq 0$ for all $x \in \mathcal{X}$ (a technical condition that can be easily relaxed), that has either i) a finite context set with at least $d$ contexts with nonzero probability, or ii) a Borel context space and a non-degenerate context distribution (for instance, $\mathcal{X} = \mathbb{R}^m$ with a context distribution having positive variance in all directions), for any dimension $d \ge 1$, there exists an infinite number of $d$-dimensional realizable HLS representations.
This result crucially shows that the HLS condition is "robust", since in any contextual problem it is possible to construct an infinite number of representations satisfying the HLS condition. In App. B.2, we indeed provide an oracle procedure for constructing an HLS representation. This result also supports the starting point of the next section, where we assume that a learner is provided with a set of representations that may contain at least one "good", i.e., HLS, representation.
5 Representation Selection
In this section, we study the problem of representation selection in linear bandits. We consider a linear contextual problem with reward $\mu$ and context distribution $\rho$. Given a set of $M$ realizable linear representations $\{\phi_i : \mathcal{X} \times [K] \to \mathbb{R}^{d_i}\}$, the objective is to design a learning algorithm able to perform as well as the best representation, and thus achieve constant regret when a "good" representation is available. As usual, we assume $\theta^\star_i \in \mathbb{R}^{d_i}$ is unknown, but the algorithm is provided with a bound on the parameter and feature norms of the different representations.
5.1 The LEADER Algorithm
We introduce LEADER (Linear rEpresentation bAnDit mixER), see Alg. 1. At each round $t$, LEADER builds an estimate $\theta_{ti}$ of the unknown parameter $\theta^\star_i$ of each representation $\phi_i$ (we use the subscript $i \in [M]$ instead of $\phi_i$ to denote quantities related to representation $\phi_i$). These estimates are by nature off-policy, and thus all the samples $(x_l, a_l, y_l)_{l < t}$ collected so far can be used to update every representation. Each representation then provides an upper confidence bound $U_{ti}(x,a)$ on $\mu(x,a)$, built from $\theta_{ti}$ and $V_{ti}$ as in Sect. 2, and LEADER plays the action maximizing the smallest bound across representations, $a_t \in \operatorname{argmax}_{a \in [K]} \min_{i \in [M]} U_{ti}(x_t, a)$ (see Alg. 1). For the sake of the analysis, we consider problems with a minimum gap ($\Delta > 0$). The analysis can be generalized to $\Delta = 0$ as done in Sec. 4.2. Thm. 2 establishes the regret guarantee of LEADER (Alg. 1).
Theorem 2. Consider a contextual bandit problem with reward $\mu$, context distribution $\rho$ and $\Delta > 0$. Let $(\phi_i)$ be a set of $M$ linearly realizable representations such that $\max_{x,a} \|\phi_i(x,a)\|_2 \le L_i$ and $\|\theta^\star_i\|_2 \le S_i$. Then, for any $n \ge 1$, with probability $1 - 2\delta$, LEADER suffers a regret
$R_n \le \min_{i \in [M]} \Big\{ \frac{32 \lambda \Delta_{\max}^2 S_i^2 \sigma^2}{\Delta} \Big( 2\ln\big(\tfrac{M}{\delta}\big) + d_i \ln\big(1 + \tfrac{\min\{\tau_i, n\} L_i^2}{\lambda d_i}\big) \Big)^2 \Big\}$,
where $\tau_i \propto (\lambda_{i,\mathrm{HLS}} \Delta)^{-2}$ if $\phi_i$ is HLS and $\tau_i = +\infty$ otherwise.
Algorithm 1: LEADER Algorithm
Input: representations $(\phi_i)_{i \in [M]}$ with values $(L_i, S_i)_{i \in [M]}$, regularization factor $\lambda \ge 1$, confidence level $\delta \in (0,1)$.
Initialize $V_{1i} = \lambda I_{d_i}$, $\theta_{1i} = 0_{d_i}$ for each $i \in [M]$.
for $t = 1, \dots$ do
  Observe context $x_t$
  Pull action $a_t \in \operatorname{argmax}_{a \in [K]} \min_{i \in [M]} \{U_{ti}(x_t, a)\}$
  Observe reward $r_t$ and, for each $i \in [M]$, set $V_{t+1,i} = V_{ti} + \phi_i(x_t,a_t)\phi_i(x_t,a_t)^{\mathsf{T}}$ and $\theta_{t+1,i} = V_{t+1,i}^{-1} \sum_{l=1}^{t} \phi_i(x_l, a_l)\, r_l$
end for
This shows that the problem-dependent regret bound of LEADER is not worse than the one of the best representation (see Prop.
1), up to a ln M factor. This means that the cost of representation selection is almost negligible. Furthermore, Thm. 2 shows that LEADER not only achieves a constant regret bound when an HLS representation is available, but this bound scales as the one of the best HLS representation. In fact, notice that the \u201cquality\u201d of an HLS representation does not depend only on known quantities such as di, Li, Si, but crucially on HLS eigenvalue \u03bbi,HLS, which is usually not known in advance, as it depends on the features of the optimal arms. 5.2 Combining Representations In the previous section, we have shown that LEADER can perform as well as the best representation in the set. However, by inspecting the action selection rule (Eq. 2), we notice that, to evaluate the reward of an action in the current context, LEADER selects the representation with the smallest uncertainty, thus potentially using di\ufb00erent representations for di\ufb00erent context-action pairs. This leads to the question: can LEADER do better than the best representation in the set? We show that, in certain cases, LEADER is able to combine representations and achieve constant regret when none of the individual representations would. The intuition is that a subset of \u201clocally good\u201d representations can be combined to recover a condition similar to HLS. This property is formally stated in the following de\ufb01nition. De\ufb01nition 1 (Mixing HLS). Consider a linear contextual problem with reward \u00b5 and context distribution \u03c1, and a set of M realizable linear representations \u03c61, . . . , \u03c6M. De\ufb01ne Mi = Ex\u223c\u03c1 h \u03c6\u22c6 i (x)\u03c6\u22c6 i (x)Ti and let Zi = {(x, a) \u2208X \u00d7 A | \u03c6i(s, a) \u2208Im(Mi)} be the set of context-action pairs whose features belong to the column space of Mi, i.e., that lie in the span of optimal features. We say that the set (\u03c6i) satis\ufb01es the mixed-HLS condition if X \u00d7 A \u2286SM i=1 Zi. Let \u03bb+ i = \u03bb+ min(Mi) be the minimum nonzero eigenvalue of Mi. Intuitively, the previous condition relies on the observation that every representation satis\ufb01es a \u201crestricted\u201d HLS condition on the context-action pairs (x, a) whose features \u03c6i(x, a) are spanned by optimal features \u03c6\u22c6(x). In this case, the characterizing eigenvalue is \u03bb+ i , instead of the smallest eigenvalue \u03bbi,HLS (which may be zero). If every context-action pair is in the restriction Zi of some representation, we have the mixed-HLS property. In particular, if representation i is HLS, \u03bb+ i = \u03bbi,HLS and Zi = S \u00d7 A. So, HLS is a special case of mixed-HLS. In App. E.2, we provide simple examples of sets of representations satisfying Def. 1. Note that, strictly speaking, there is not a single \u201cmixed representation\u201d solving the whole problem. Even de\ufb01ning one would be problematic since each representation may have a di\ufb00erent parameter and even a di\ufb00erent dimension. Instead, each representation \u201cspecializes\u201d on a di\ufb00erent portion of the context-action space. If together they cover the whole space, the bene\ufb01ts of HLS are recovered, as illustrated in the following theorem. Theorem 3. Consider a stochastic bandit problem with reward \u00b5, context distribution \u03c1 and \u2206> 0. Let (\u03c6i) be a set of M realizable linear representations satisfying the mixed-HLS property in Def. 1. 
Then, with probability at least $1 - 2\delta$, there exists a time $\tau < \infty$ independent from $n$ such that, for any $n \ge 1$, the pseudo-regret of LEADER is bounded as
$R_n \le \min_{i \in [M]} \Big\{ \frac{32 \lambda \Delta_{\max}^2 S_i^2 \sigma^2}{\Delta} \Big( 2\ln\big(\tfrac{M}{\delta}\big) + d_i \ln\big(1 + \tfrac{\tau L_i^2}{\lambda d_i}\big) \Big)^2 \Big\}$.
[Figure 4: Regret of LEADER and model-selection baselines on different linear contextual bandit problems. (left) Synthetic problem with varying dimensions. (middle left) Representation mixing. (middle right) Comparison to model selection baselines. (right) Jester dataset.]
First, note that we are still scaling with the characteristics of the best representation in the set (i.e., $d_i$, $L_i$ and $S_i$). However, the time $\tau$ to constant regret is a global value rather than being different for each representation. This highlights that mixed-HLS is a global property of the set of representations rather than an individual one as before. In particular, whenever no representation is (globally) HLS (i.e., $\lambda_{i,\mathrm{HLS}} = 0$ for all $\phi_i$), we can show that in the worst case $\tau$ scales as $(\min_i \lambda_i^+)^{-2}$. In practice, we may expect LEADER to behave even better than that, since i) not all the representations may contribute actively to the mixed-HLS condition; and ii) multiple representations may cover the same region of the context-action space. In the latter case, since LEADER leverages all the representations at once, its regret would rather scale with the largest minimum non-zero eigenvalue $\lambda_i^+$ among all the representations covering such a region. We refer to App. E.2 for a more complete discussion.
5.3 Discussion
Most of the model selection algorithms reviewed in the introduction could be readily applied to select the best representation for LinUCB. However, the generality of their objective comes with several shortcomings when instantiated in our specific problem (see App. A for a more detailed comparison). First, model selection methods achieve the performance of the best base algorithm only up to a polynomial dependence on the number $M$ of models. This already makes them a weaker choice compared to LEADER, which, by leveraging the specific structure of the problem, suffers only a logarithmic dependence on $M$. Second, model selection algorithms are often studied in a worst-case analysis, which reveals a high cost for adaptation. For instance, corralling algorithms (Agarwal et al., 2017; Pacchiano et al., 2020b) pay an extra $\sqrt{n}$ regret, which would make them unsuitable to target the constant regret of good representations. Similar costs are common to other approaches (Abbasi-Yadkori et al., 2020; Pacchiano et al., 2020a). It is unclear whether a problem-dependent analysis can be carried out and whether this could shave off such dependence. Third, these algorithms are generally designed to adapt to a specific best base algorithm.
At the best of our knowledge, there is no evidence that model selection methods could combine algorithms to achieve better performance than the best candidate, a behavior that we proved for LEADER in our setting. On the other hand, model selection algorithms e\ufb00ectively deal with non-realizable representations in certain cases (e.g., Foster et al., 2020; Abbasi-Yadkori et al., 2020; Pacchiano et al., 2020a), while LEADER is limited to the realizable case. While a complete study of the model misspeci\ufb01cation case is beyond the scope of this paper, in App. F, we discuss how a variation of the approach presented in (Agarwal et al., 2012b) could be paired to LEADER to discard misspeci\ufb01ed representations and possibly recover the properties of \u201cgood\u201d representations. 9 \f6 Experiments In this section, we report experimental results on two synthetic and one dataset-based problems. For each problem, we evaluate the behavior of LEADER with LinUCB and model selection algorithms: EXP4.IX (Neu, 2015), Corral and EXP3.P in the stochastic version by Pacchiano et al. (2020b) and Regret Balancing with and without elimination (RegBalElim and RegBal) (Abbasi-Yadkori et al., 2020; Pacchiano et al., 2020a). See App. G for a detailed discussion and additional experiments. All results are averaged over 20 independent runs, with shaded areas corresponding to 2 standard deviations. We always set the parameters to \u03bb = 1, \u03b4 = 0.01, and \u03c3 = 0.3. All the representations we consider are normalized to have \u2225\u03b8\u22c6 i \u2225= 1. Synthetic Problems. We de\ufb01ne a randomly-generated contextual bandit problem, for which we construct sets of realizable linear representations with di\ufb00erent properties (see App. G.1 for details). The purpose of these experiments is twofold: to show the di\ufb00erent behavior of LinUCB with di\ufb00erent representations, and to evaluate the ability of LEADER of selecting and mixing representations. Varying dimension. We construct six representations of varying dimension from 2 up to 6. Of the two representations of dimension d = 6, one is HLS. Fig. 4(left) shows that in this case, LinUCB with the HLS representation outperforms any non-HLS representation, even if they have smaller dimension. This property is inherited by LEADER, which performs better than LinUCB with non-HLS representations even of much smaller dimension 2. Mixing representations. We construct six representations of the same dimension d = 6, none of which is HLS. However, they are constructed so that together they satisfy the weaker mixed-HLS assumption (Def. 1). Fig. 4(middle left) shows that, as predicted by Thm. 3, LEADER leverages di\ufb00erent representations in di\ufb00erent context-action regions and it thus performs signi\ufb01cantly better than any LinUCB using non-HLS representations. The superiority of LEADER w.r.t. the model-selection baselines is evident in this case (Fig. 4(middle right) ), since only LEADER is able to mix representations, whereas model-selection algorithms target the best in a set of \u201cbad\u201d representations. Additional experiments in App. G con\ufb01rm that LEADER consistently outperforms all model-selection algorithms. Jester Dataset. In the last experiment, we extract multiple linear representations from the Jester dataset (Goldberg et al., 2001), which consists of joke ratings in a continuous range from \u221210 to 10 for a total of 100 jokes and 73421 users. 
For a subset of 40 jokes and 19181 users rating all these 40 jokes, we build a linear contextual problem as follows. First, we \ufb01t a 32 \u00d7 32 neural network to predict the ratings from features extracted via a low-rank factorization of the full matrix. Then, we take the last layer of the network as our \u201cground truth\u201d linear model and \ufb01t multiple smaller networks to clone its predictions, while making sure that the resulting misspeci\ufb01cation is small. We thus obtain 7 representations with di\ufb00erent dimensions among which, interestingly, we \ufb01nd that 6 are HLS. Figure 4(right) reports the comparison between LEADER using all representations and LinUCB with each single representation on a log-scale. Notably, the ability of LEADER to mix representations makes it perform better than the best candidate, while transitioning to constant regret much sooner. Finally, the fact that HLS representations arise so \u201cnaturally\u201d raises the question of whether this is a more general pattern in context-action features learned from data. 7 Conclusion We provided a complete characterization of \u201cgood\u201d realizable representations for LinUCB, ranging from existence to a su\ufb03cient and necessary condition to achieve problem-dependent constant regret. We introduced LEADER, a novel algorithm that, given a set of realizable linear representations, is able to adapt to the best one and even leverage their combination to achieve constant regret under the milder mixed-HLS condition. While we have focused on LinUCB, other algorithms (e.g., LinTS (Abeille and Lazaric, 2017)) as well as other settings (e.g., low-rank RL (Jin et al., 2020)) may also bene\ufb01t from HLS-like assumptions. We have mentioned an approach for eliminating misspeci\ufb01ed representations, but a non-trivial trade-o\ufb00may exist between the level of misspeci\ufb01cation and the goodness of the representation. A slightly imprecise but very informative representation may be preferable to most bad realizable ones. Finally, we believe that moving from selection to representation learning \u2013e.g., provided a class of features such as a neural network\u2013 is an important direction both from a theoretical and practical perspective. 10" + }, + { + "url": "http://arxiv.org/abs/1905.03231v2", + "title": "Smoothing Policies and Safe Policy Gradients", + "abstract": "Policy Gradient (PG) algorithms are among the best candidates for the\nmuch-anticipated applications of reinforcement learning to real-world control\ntasks, such as robotics. However, the trial-and-error nature of these methods\nposes safety issues whenever the learning process itself must be performed on a\nphysical system or involves any form of human-computer interaction. In this\npaper, we address a specific safety formulation, where both goals and dangers\nare encoded in a scalar reward signal and the learning agent is constrained to\nnever worsen its performance, measured as the expected sum of rewards. By\nstudying actor-only policy gradient from a stochastic optimization perspective,\nwe establish improvement guarantees for a wide class of parametric policies,\ngeneralizing existing results on Gaussian policies. This, together with novel\nupper bounds on the variance of policy gradient estimators, allows us to\nidentify meta-parameter schedules that guarantee monotonic improvement with\nhigh probability. The two key meta-parameters are the step size of the\nparameter updates and the batch size of the gradient estimates. 
Through a\njoint, adaptive selection of these meta-parameters, we obtain a policy gradient\nalgorithm with monotonic improvement guarantees.", + "authors": "Matteo Papini, Matteo Pirotta, Marcello Restelli", + "published": "2019-05-08", + "updated": "2022-06-17", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Reinforcement Learning (RL) (Sutton & Barto, 2018) has achieved astounding successes in games (Mnih et al., 2015; OpenAI, 2018; Silver et al., 2018; Vinyals et al., 2019), matching or surpassing human performance in several occasions. However, the much-anticipated applications of RL to real-world tasks such as robotics (Kober, Bagnell, & Peters, 2013), autonomous driving (Okuda, Kajiwara, & Terashima, 2014) and \ufb01nance (Li & Hoi, 2014) seem still far. This technological delay may be due to the very nature of RL, which relies on the repeated interaction of the learning machine with the surrounding environment, e.g., a manufacturing plant, a tra\ufb03cked road, a stock market. The trial-and-error process resulting from this interaction is what makes RL so powerful and general. However, it also poses signi\ufb01cant challenges in terms of sample e\ufb03ciency (Recht, 2019) and safety (Amodei et al., 2016). In reinforcement learning, the term safety can actually refer to a variety of problems (Garc\u00b4 \u0131a & Fern\u00b4 andez, 2015). The general concern is always the same: avoiding or limiting damage. In \ufb01nancial applications, it is typically a loss of money. In robotics and autonomous driving, one should also consider direct damage to people and property. In this work, we do not make assumptions about the nature of the damage, but we assume it is entirely encoded in the scalar reward signal that is presented to the agent in order to evaluate its actions. Other works (e.g., Turchetta, Berkenkamp, & Krause, 2016) employ a distinct safety signal, separate from rewards. A further distinction is necessary on the scope of safety constraints with respect to the agent\u2019s life. One may simply require the \ufb01nal behavior, the one that is deployed at the end of the learning process, to be safe. This is typically the case when learning is performed in simulation, but the \ufb01nal controller has to be deployed in the real world. The main challenges there are in transferring safety properties from simulation to reality (e.g., Tan et al., 2018). In other cases, learning must be performed, or at least \ufb01nalized, on the actual system, because no reliable simulator is available (e.g., Peters & Schaal, 2008). In such a scenario, safety must be enforced for the whole duration of the learning process. This poses a further challenge, as the agent must necessarily go through a sequence of sub-optimal behaviors before learning its \ufb01nal policy. The problem of learning while containing the damage is also known as safe exploration (Amodei et al., 2016) and will be the focus of this work.1 Garc\u00b4 \u0131a and Fern\u00b4 andez (2015) provide a comprehensive survey on safe RL, where the existing approaches are organized into two main families: methods 1We only use \u201dsafe exploration\u201d in the general sense of (Amodei et al., 2016). Indeed, in this work, we are not concerned with how exploration should be performed to maximize e\ufb03ciency, but only with ensuring safety in a context where some form of exploration is necessary. 
Exploration in RL is a very profound problem with a vast research tradition (Fruit, Lazaric, & Pirotta, 2019). The problem of e\ufb03cient exploration under performance-improvement constraints has been studied in the multi-armed-bandit literature under the name of conservative bandits (Garcelon, Ghavamzadeh, Lazaric, & Pirotta, 2020b; Kazerouni, Ghavamzadeh, Abbasi, & Roy, 2017; Wu, Shari\ufb00, Lattimore, & Szepesv\u00b4 ari, 2016), with recent extensions to \ufb01nite MDPs (Garcelon, Ghavamzadeh, Lazaric, & Pirotta, 2020a). In RL, safe exploration is mainly concerned with avoiding unsafe states during the learning process (Berkenkamp, 2019; Dalal et al., 2018; Hans, Schneega\u00df, Sch\u00a8 afer, & Udluft, 2008; Pecka & Svoboda, 2014; Turchetta et al., 2016). \fSmoothing Policies and Safe Policy Gradients 3 that modify the exploration process directly in order to explicitly avoid dangerous actions (e.g., Gehring & Precup, 2013), and methods that constrain exploration in a more indirect way by modifying the reward optimization process. The former typically require some sort of external knowledge, such as human demonstrations or advice (e.g., Abbeel, Coates, & Ng, 2010; Clouse & Utgo\ufb00, 1992). In this work, we only assume online access to a su\ufb03ciently informative reward signal and prior knowledge of some worst-case constants that are easy to obtain. Optimization-based methods (those belonging to the second class) are more suited for this scenario. A particular kind, identi\ufb01ed by Garc\u00b4 \u0131a and Fern\u00b4 andez as constrained criteria (Castro, Tamar, & Mannor, 2012; Kadota, Kurano, & Yasuda, 2006; Moldovan & Abbeel, 2012), enforces safety by introducing constraints in the optimization problem, i.e., reward maximization.2 A typical constraint is that the agent\u2019s performance, i.e., the sum of rewards, must never be less than a user-speci\ufb01ed threshold (Geibel & Wysotzki, 2005; Thomas, Theocharous, & Ghavamzadeh, 2015), which may be the average performance of a trusted baseline policy. Under the assumption that the reward signal also encodes danger, low performances can be matched with dangerous behaviors, so that the performance threshold works as a safety threshold. This falls into the general framework of Seldonian machine learning introduced by Thomas et al. (2019). If we only cared about the safety of the \ufb01nal controller, the traditional RL objective \u2014 maximizing cumulated reward \u2014 would be enough. However, most RL algorithms are known to yield oscillating performances during the learning phase. Regardless of the \ufb01nal solution, the intermediate ones may violate the threshold, hence yield unsafe behavior. This problem is known as policy oscillation (Bertsekas, 2011; Wagner, 2011). A similar constraint, which confronts the policy oscillation problem even more directly, is Monotonic Improvment (MI, S. Kakade & Langford, 2002; Pirotta, Restelli, Pecorino, & Calandriello, 2013), and is the one adopted in this work. The requirement is that each new policy implemented by the agent during the learning process does not perform worse than the previous one. In this way, if the initial policy is safe, so will be all the subsequent ones. The way safety constraints such as MI can be imposed on the optimization process depends, of course, on what kind of policies are considered as candidates and on how the optimization itself is performed. These two aspects are often tied and will depend on the speci\ufb01c kind of RL algorithm that is employed. 
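As a purely illustrative sketch of the monotonic-improvement constraint discussed above (and not of the SPG algorithm developed later in the paper), one can picture a conservative update rule that keeps a candidate policy only when a high-confidence lower bound on its estimated performance does not fall below the estimate for the current policy. The Hoeffding-style confidence interval, the bound j_max on the absolute return (e.g., R/(1-gamma) for rewards bounded by R), and all names below are assumptions made for the example.

```python
import numpy as np

def accept_update(returns_old, returns_new, j_max, delta=0.05):
    """Keep the candidate policy only if LCB(J(theta')) >= mean estimate of J(theta).

    returns_old / returns_new: arrays of Monte-Carlo discounted returns collected
    under the current and the candidate policy; j_max bounds |return|.
    """
    n = len(returns_new)
    # Hoeffding half-width for i.i.d. returns in [-j_max, j_max], confidence 1 - delta
    eps = 2.0 * j_max * np.sqrt(np.log(1.0 / delta) / (2.0 * n))
    return float(np.mean(returns_new)) - eps >= float(np.mean(returns_old))
```

A test of this kind only sketches the idea; the analysis in the paper instead derives improvement guarantees from the smoothness of the objective and from the joint choice of step size and batch size.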
Policy Search or Optimization (PO, Deisenroth, Neumann, Peters, et al., 2013) is a family of RL algorithms where the class of candidate policies is \ufb01xed in advance and a direct search for the best one within the class is performed. This makes PO algorithms radically di\ufb00erent from value-based 2Notably, the approach proposed by Chow, Nachum, Du\u00b4 e\u02dc nez-Guzm\u00b4 an, and Ghavamzadeh (2018) lays between the two classes. It relies on the framework of constrained MDPs to guarantee the safety of a behavior policy during training via a set of local, linear constraints de\ufb01ned using an external cost signal. Similar techniques have been used by Berkenkamp, Turchetta, Schoellig, and Krause (2017) to guarantee the ability to re-enter a \u201csafe region\u201d during exploration. \f4 Smoothing Policies and Safe Policy Gradients algorithms such as Deep Q-Networks (Mnih et al., 2015), where the optimal policy is a byproduct of a learned value function. Although value-based methods gained great popularity from their successes in games, PO algorithms are better suited for real-world tasks, especially the ones involving cyberphysical systems. The main reasons are the ability of PO methods to deal with high-dimensional continuous state and action spaces, convergence guarantees (Sutton, McAllester, Singh, & Mansour, 2000), robustness to sensor noise, and the superior control on the set of feasible policies. The latter allows introducing domain knowledge into the optimization process, possibly including some safety constraints. In this work, we focus on Policy Gradient methods (PG, Peters & Schaal, 2008; Sutton et al., 2000), where the set of candidate policies is a class of parametric distributions and the optimization is performed via stochastic gradient ascent on the performance objective as a function of the policy parameters. In particular, we analyze the prototypical PG algorithm, REINFORCE (Williams, 1992) and see how the MI constraints can be imposed by adaptively selecting its meta-parameters during the learning process. To achieve this, we study in more depth the stochastic gradient-based optimization process that is at the core of all PG methods (Robbins & Monro, 1951). In particular, we identify a general family of parametric policies that makes the optimization objective Lipschitz-smooth (Nesterov, 2013) and allows easy upper-bounding of the related smoothness constant. This family, referred to as smoothing policies, includes commonly used policy classes from the PG literature, namely Gaussian and Softmax policies. Using known properties of Lipschitz-smooth functions, we then provide lower bounds on the performance improvement produced by gradient-based updates, as a function of tunable meta-parameters. This, in turn, allows identifying those meta-parameter schedules that guarantee MI with high probability. In previous work, a similar result was achieved only for Gaussian policies (Papini, Pirotta, & Restelli, 2017; Pirotta, Restelli, & Bascetta, 2013).3 The meta-parameters studied here are the step size of the policy updates, or learning rate, and the batch size of gradient estimates, i.e., the number of trials that are performed within a single policy update. These meta-parameters, already present in the original REINFORCE algorithm, are typically selected by hand and \ufb01xed for the whole learning process (Duan, Chen, Houthooft, Schulman, & Abbeel, 2016). 
Besides guaranteeing monotonic improvement, our proposed method removes the burden of selecting these meta-parameters. This safe, automatic selection within the REINFORCE algorithmic framework yields SPG, our Safe Policy Gradient algorithm. The paper is organized as follows: in Section 2 we introduce the necessary background on Markov decision processes, policy optimization, and smooth functions. In Section 3, we introduce smoothing policies and show the useful properties they induce on the policy optimization problem, most importantly 3See Section 6 for a discussion of related approaches, like the popular TRPO (Schulman, Levine, Abbeel, Jordan, & Moritz, 2015). \fSmoothing Policies and Safe Policy Gradients 5 a lower bound on the performance improvement yielded by an arbitrary policy parameter update (Theorem 7). In Section 4, we exploit these properties to select the step size of REINFORCE in a way that guarantees MI with high probability when the batch size is \ufb01xed, then we achieve similar results with an adaptive batch size. In Section 5, we design a monotonically improving policy gradient algorithm with adaptive batch size, called Safe Policy Gradient (SPG), and show how the latter can also be adapted to weaker improvement constraints. In Section 6, we o\ufb00er a detailed comparison of our contributions with the most closely related literature. In Section 7 we empirically evaluate SPG on simulated control tasks. Finally, we discuss the limitations of our approach and propose directions for future work in Section 8. 2 Preliminaries In this section, we revise continuous Markov Decision Processes (MDPs, Puterman, 2014), actor-only Policy Gradient algorithms (PG, Deisenroth et al., 2013), and some general properties of smooth functions. 2.1 Markov Decision Processes A Markov Decision Process (MDP, Puterman, 2014) is a tuple M = \u27e8S, A, p, r, \u03b3, \u00b5\u27e9, comprised of a measurable state space S, a measurable action space A, a Markovian transition kernel p : S \u00d7 A \u2192\u2206S, where \u2206S denotes the set of probability distributions over S, a reward function r : S \u00d7 A \u2192R, a discount factor \u03b3 \u2208(0, 1) and an initial-state distribution \u00b5 \u2208\u2206S. We only consider bounded-reward MDPs, and denote with R \u2265sups\u2208S,a\u2208A |r(s, a)| (a known upper bound on) the maximum absolute reward. This is the only prior knowledge we have on the task. The MDP is used to model the interaction of a rational agent with the environment. We model the agent\u2019s behavior with a policy \u03c0 : S \u2192\u2206A, a stochastic mapping from states to actions. The initial state is drawn as s0 \u223c\u00b5. For each time step t = 0, 1, . . . , the agent draws an action at \u223c\u03c0(\u00b7|st), conditional on the current state st. Then, the agent obtains a reward rt+1 = r(st, at) and the state of the environment transitions to st+1 \u223cp(\u00b7|st, at). The goal of the agent is to maximize the expected sum of discounted rewards, or performance measure: J(\u03c0) := E \" \u221e X t=0 \u03b3trt+1|s0 \u223c\u00b5, at \u223c\u03c0(\u00b7|st), st+1 \u223cp(\u00b7|st, at) # . (1) We focus on continuous MDPs, where states and actions are real vectors: S \u2286RdS and A \u2286RdA. However, all the results naturally extend to the discrete case by replacing integrals with summations. See (Bertsekas & Shreve, 2004; Puterman, 2014) on matters of measurability and integrability, which just require common technical assumptions. 
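The interaction protocol and the performance measure in Eq. (1) can be summarized by the following minimal sketch (an illustration, not code from the paper): here `env` is any simulator exposing reset() and step(a), and `policy(s, rng)` samples an action. Both interfaces, as well as the truncation at a finite horizon (which, for bounded rewards, introduces only a small bias when the horizon is long enough, as recalled later in this section), are assumptions made for the example.

```python
import numpy as np

def estimate_performance(env, policy, gamma=0.99, horizon=200, n_episodes=100, seed=0):
    """Monte-Carlo estimate of the (truncated) performance measure J(pi) of Eq. (1)."""
    rng = np.random.default_rng(seed)
    returns = []
    for _ in range(n_episodes):
        s = env.reset()
        ret, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s, rng)        # draw a_t ~ pi(.|s_t)
            s, r = env.step(a)        # observe r_{t+1} and s_{t+1}
            ret += discount * r       # accumulate gamma^t * r_{t+1}
            discount *= gamma
        returns.append(ret)
    return float(np.mean(returns))
```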
We slightly abuse notation by denoting probability measures (assumed to be absolutely continuous) and density functions with the same symbol. \f6 Smoothing Policies and Safe Policy Gradients Given an MDP, the purpose of RL is to \ufb01nd an optimal policy \u03c0\u2217\u2208 arg max\u03c0 J(\u03c0) without knowing the transition kernel p and the reward function r in advance, but only through interaction with the environment. To better characterize this optimization objective, it is convenient to introduce further quantities. We denote with p\u03c0 the transition kernel of the Markov Process induced by policy \u03c0, i.e., p\u03c0(\u00b7|s) := R A \u03c0(a|s)p(\u00b7|s, a) da. The t-step transition kernel under policy \u03c0 is de\ufb01ned inductively as follows: p0 \u03c0(\u00b7|s) = 1 {s = s\u2032} , p1 \u03c0(\u00b7|s) := p\u03c0(\u00b7|s), pt+1 \u03c0 (\u00b7|s) := Z S pt \u03c0(s\u2032|s)p\u03c0(\u00b7|s\u2032) ds\u2032, (2) for all s \u2208S and t \u22651. The t-step transition kernel allows to de\ufb01ne the following conditional state-occupancy measure: \u03c1\u03c0 s (\u00b7) = (1 \u2212\u03b3) \u221e X t=0 \u03b3tpt \u03c0(\u00b7|s), (3) measuring the (discounted) probability of visiting a state starting from s and following policy \u03c0. The following property of \u03c1\u03c0 s \u2014a variant of the generalized eigenfunction property by Ciosek and Whiteson (2020, Lemma 20)\u2014will be useful (proof in Appendix A.1): Proposition 1 Let \u03c0 be any policy and f be any integrable function on S satisfying the following recursive equation: f(s) = g(s) + \u03b3 Z S p\u03c0(s\u2032|s)f(s\u2032) ds\u2032, for all s \u2208S and some integrable function g on S. Then: f(s) = 1 1 \u2212\u03b3 Z S \u03c1\u03c0 s (s\u2032)g(s\u2032) ds\u2032, for all s \u2208S. The state-value function V \u03c0(s) = E\u03c0 \u0002P\u221e t=0 r(St, At)|S0 = s \u0003 is the discounted sum of rewards obtained, in expectation, by following policy \u03c0 from state s, and satis\ufb01es Bellman\u2019s equation (Puterman, 2014): V \u03c0(s) = E a\u223c\u03c0(\u00b7|s) \u0014 r(s, a) + \u03b3 E s\u2032\u223cp(\u00b7|s,a) [V \u03c0(s\u2032)] \u0015 , (4) Similarly, the action-value function: Q\u03c0(s, a) = r(s, a) + \u03b3 E s\u2032\u223cp(\u00b7|s,a) [V \u03c0(s\u2032)] , (5) \fSmoothing Policies and Safe Policy Gradients 7 is the discounted sum of rewards obtained, in expectation, by taking action a in state s and following \u03c0 afterwards. The two value functions are closely related: V \u03c0(s) = Z A \u03c0(a|s)Q\u03c0(s, a) da, (6) Q\u03c0(s, a) = r(s, a) + \u03b3 Z S p(s\u2032|s, a)V\u03c0(s\u2032) ds\u2032. (7) For bounded-reward MDPs, the value functions are bounded for every policy \u03c0: \u2225V \u03c0\u2225\u221e\u2264\u2225Q\u03c0\u2225\u221e\u2264 R 1 \u2212\u03b3 , (8) where \u2225V \u03c0\u2225\u221e= sups\u2208S |V \u03c0(s)| and \u2225Q\u03c0\u2225\u221e= sups\u2208S,a\u2208A |Q\u03c0(s, a)|. Using the de\ufb01nition of state-value function we can rewrite the performance measure as follows: J(\u03c0) = Z S \u00b5(s)V \u03c0(s) ds = 1 1 \u2212\u03b3 Z S \u03c1\u03c0(s) Z A \u03c0(a|s)r(s, a) da ds, (9) where: \u03c1\u03c0(\u00b7) = Z S \u00b5(s)\u03c1\u03c0 s (\u00b7) ds, (10) is the state-occupancy probability under the starting-state distribution \u00b5. 2.2 Parametric policies In this work, we only consider parametric policies. 
Given a d-dimensional parameter vector \u03b8 \u2208 \u0398 \u2286 Rd, a parametric policy is a stochastic mapping from states to actions parametrized by \u03b8, denoted with \u03c0\u03b8. The search for the optimal policy is thus limited to the policy class \u03a0\u0398 = {\u03c0\u03b8 | \u03b8 \u2208\u0398}. This corresponds to \ufb01nding an optimal parameter, i.e., \u03b8\u2217\u2208arg max\u03b8\u2208\u0398 J(\u03c0\u03b8). For ease of notation, we often write \u03b8 in place of \u03c0\u03b8 in function arguments and superscripts, e.g., J(\u03b8), \u03c1\u03b8(s) and V \u03b8(s) in place of J(\u03c0\u03b8), \u03c1\u03c0\u03b8 and V \u03c0\u03b8(s), respectively.4 We restrict our attention to policies that are twice di\ufb00erentiable w.r.t. \u03b8, for which the gradient \u2207\u03b8\u03c0\u03b8(a|s) and the Hessian \u22072 \u03b8\u03c0\u03b8(a|s) are de\ufb01ned everywhere and \ufb01nite. For ease of notation, we omit the \u03b8 subscript in \u2207\u03b8 when clear from the context. Given any twicedi\ufb00erentiable scalar function f : \u0398 \u2192R, we denote with Dif the i-th gradient component, i.e., \u2202f \u2202\u03b8i , and with Dijf the Hessian element of coordinates (i, j), i.e., \u22022f \u2202\u03b8i\u2202\u03b8j . We also write \u2207f(\u03b8) to denote \u2207e \u03b8f(e \u03b8) \f \f \fe \u03b8=\u03b8 when this does not introduce any ambiguity. 4Note that J : \u0398 \u2192R, as a function of policy parameters, may have a di\ufb00erent geometry than J : \u03a0\u0398 \u2192R, as a function of the policy. In particular, policy parametrization can be an additional source of non-convexity. \f8 Smoothing Policies and Safe Policy Gradients The Policy Gradient Theorem (Konda & Tsitsiklis, 1999; Sutton et al., 2000) allows us to characterize the gradient of the performance measure J(\u03b8) as an expectation over states and actions visited under \u03c0\u03b8:5 \u2207J(\u03b8) = 1 1 \u2212\u03b3 Z S \u03c1\u03b8(s) Z A \u03c0\u03b8(a|s)\u2207log \u03c0\u03b8(a|s)Q\u03b8(s, a) da ds. (11) The gradient of the log-likelihood \u2207log \u03c0\u03b8(\u00b7|s) is called score function, while the Hessian of the log-likelihood \u22072 log \u03c0\u03b8(\u00b7|s) is sometimes called observed information. 2.3 Actor-only policy gradient In practice, we always consider \ufb01nite episodes of length T. We call this the e\ufb00ective horizon of the MDP, chosen to be su\ufb03ciently large so that the problem does not lose generality.6 We denote with \u03c4 := (s0, a0, s1, a1, . . . , sT \u22121, aT \u22121) a trajectory, i.e., a sequence of states and actions of length T such that s0 \u223c\u00b5, at \u223c\u03c0(\u00b7|st), st \u223cp(\u00b7|st\u22121, at\u22121) for t = 0, . . . , T \u22121 and some policy \u03c0. In this context, the performance measure of a parametric policy \u03c0\u03b8 can be de\ufb01ned as: J(\u03b8) = E \u03c4\u223cp\u03b8 \"T \u22121 X t=0 \u03b3tr(st, at) # , (12) where p\u03b8(\u03c4) is the probability density of the trajectory \u03c4 that can be generated by following policy \u03c0\u03b8, i.e., p\u03b8(\u03c4) = \u00b5(s0)\u03c0\u03b8(a0|s0)p(s1|s0, a0) . . . \u03c0\u03b8(aT \u22121|sT \u22121). Let D \u223c p\u03b8 be a batch {\u03c41, \u03c42, . . . , \u03c4N} of N trajectories generated with \u03c0\u03b8, i.e., \u03c4i \u223cp\u03b8 i.i.d. for i = 1, . . . , N. Let b \u2207J(\u03b8; D) be an estimate of the policy gradient \u2207J(\u03b8) based on D. 
Such an estimate can be used to perform stochastic gradient ascent on the performance objective $J(\theta)$:
$\theta' \leftarrow \theta + \alpha \widehat{\nabla} J(\theta; \mathcal{D})$, (13)
where $\alpha \ge 0$ is a step size and $N = |\mathcal{D}|$ is called batch size. This yields an Actor-only Policy Gradient method, summarized in Algorithm 1. Under mild conditions, this algorithm is guaranteed to converge to a local optimum (Sutton et al., 2000). This is reasonable since the objective $J(\theta)$ is non-convex in general (see footnote 7).
Algorithm 1: Actor-only policy gradient
1: Input: initial policy parameters $\theta_0$, step size $\alpha$, batch size $N$, number of iterations $K$
2: for $k = 0, \dots, K-1$ do
3:   Collect $N$ trajectories with $\theta_k$ to obtain dataset $\mathcal{D}_k$
4:   Compute policy gradient estimate $\widehat{\nabla} J(\theta_k; \mathcal{D}_k)$
5:   Update policy parameters as $\theta_{k+1} \leftarrow \theta_k + \alpha \widehat{\nabla} J(\theta_k; \mathcal{D}_k)$
6: end for
As for the gradient estimator, we can use REINFORCE (Glynn, 1986; Williams, 1992) (see footnote 8):
$\widehat{\nabla} J(\theta; \mathcal{D}) = \frac{1}{N} \sum_{i=1}^{N} \Big( \sum_{t=0}^{T-1} \gamma^t r(a_t^i, s_t^i) - b \Big) \Big( \sum_{t=0}^{T-1} \nabla \log \pi_{\theta}(a_t^i | s_t^i) \Big)$, (14)
or its refinement, G(PO)MDP (Baxter & Bartlett, 2001), which typically suffers from less variance (Peters & Schaal, 2008):
$\widehat{\nabla} J(\theta; \mathcal{D}) = \frac{1}{N} \sum_{i=1}^{N} \sum_{t=0}^{T-1} \Big[ \big( \gamma^t r(a_t^i, s_t^i) - b_t \big) \sum_{h=0}^{t} \nabla \log \pi_{\theta}(a_h^i | s_h^i) \Big]$, (15)
where the superscript on states and actions denotes the $i$-th trajectory of the dataset and $b$ is a (possibly time-dependent and vector-valued) control variate, or baseline. Both estimators are unbiased for any action-independent baseline (see footnote 9). Peters and Schaal (2008) prove that Algorithm 1 with the G(PO)MDP estimator is equivalent to Monte-Carlo PGT (Policy Gradient Theorem, Sutton et al., 2000), and provide variance-minimizing baselines for both REINFORCE and G(PO)MDP. Algorithm 1 is called actor-only to discriminate it from actor-critic policy gradient algorithms (Konda & Tsitsiklis, 1999), where an approximate value function, or critic, is employed in the gradient computation.
Footnote 5: As observed by Nota and Thomas (2020), it is important that the state-occupancy measure is discounted as in (3) for the Policy Gradient Theorem to hold. An intuitive way to see the discounted occupancy $\rho^{\pi}(s)$ is as the probability of visiting state $s$ in an indefinite-horizon undiscounted MDP that is reset to the initial-state distribution with probability $1-\gamma$ at each step.
Footnote 6: We consider infinite-horizon discounted MDPs in our theoretical analysis, but consider a finite horizon when introducing specific policy gradient estimators. This mismatch is justified by the following result: when the reward is uniformly bounded by $R$, by setting $T = O(\log(R/\epsilon)/(1-\gamma))$, the discounted truncated sum of rewards is $\epsilon$-close to the infinite sum (see, e.g., S. M. Kakade et al., 2003, Sec. 2.3.3). See Appendix C.2 for a way to remove this bias by randomizing the horizon.
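As a concrete illustration of Eqs. (14)-(15), the snippet below is a minimal sketch (not the implementation evaluated in the paper) of both estimators for a Gaussian policy with linear mean, $\pi_\theta(a|s) = \mathcal{N}(\theta^{\mathsf{T}}\phi(s), \sigma^2)$, whose score is $\nabla \log \pi_\theta(a|s) = (a - \theta^{\mathsf{T}}\phi(s))\,\phi(s)/\sigma^2$. The trajectory format and the constant scalar baseline are assumptions made for the example.

```python
import numpy as np

def gaussian_score(theta, sigma, s_feat, a):
    # grad_theta log pi_theta(a|s) for pi_theta(a|s) = N(theta^T phi(s), sigma^2)
    return (a - theta @ s_feat) * s_feat / sigma ** 2

def reinforce_gradient(trajectories, theta, sigma, gamma, baseline=0.0):
    """Eq. (14): each trajectory is a list of (state_features, action, reward) tuples."""
    grads = []
    for traj in trajectories:
        ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(traj)) - baseline
        total_score = sum(gaussian_score(theta, sigma, s, a) for s, a, _ in traj)
        grads.append(ret * total_score)
    return np.mean(grads, axis=0)

def gpomdp_gradient(trajectories, theta, sigma, gamma, baseline=0.0):
    """Eq. (15) with a constant scalar baseline (the paper allows time-dependent ones)."""
    grads = []
    for traj in trajectories:
        grad = np.zeros_like(theta)
        cum_score = np.zeros_like(theta)
        for t, (s_feat, a, r) in enumerate(traj):
            cum_score += gaussian_score(theta, sigma, s_feat, a)   # sum of scores up to step t
            grad += (gamma ** t * r - baseline) * cum_score
        grads.append(grad)
    return np.mean(grads, axis=0)

# One iteration of Algorithm 1 would then read, for some data-collection routine:
# theta = theta + alpha * gpomdp_gradient(collect_trajectories(theta, N), theta, sigma, gamma)
```

The only difference between the two estimators is that G(PO)MDP weighs each reward by the scores of past actions only, which is what typically reduces its variance.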
In this work, we will focus on actor-only algorithms, for which safety guarantees are more easily proven.10 Generalizations of Algorithm 1 include reducing the variance of gradient estimates through baselines and other stochastic-optimization techniques (e.g., Papini, Binaghi, Canonaco, Pirotta, & Restelli, 2018; Shen, Ribeiro, Hassani, Qian, & Mi, 2019; Xu, Gao, & Gu, 2020) using a vector 7Recent works show that policy gradient algorithms can converge to globally optimal policies in some interesting special cases (Agarwal, Kakade, Lee, & Mahajan, 2020; Bhandari & Russo, 2019; Zhang, Kim, O\u2019Donoghue, & Boyd, 2020). 8In the literature, the term REINFORCE is often used to denote actor-only policy gradient methods in general. In this paper, REINFORCE refers to the algorithm by Williams (1992), which also applies to more general stochastic optimization problems. 9Also valid action-dependent baselines have been proposed. See (Tucker et al., 2018) for a discussion. 10The distinction is not so sharp, as a critic can be seen as a baseline and vice-versa. We call critic an explicit value function estimate used in policy gradient estimation. \f10 Smoothing Policies and Safe Policy Gradients step size (Papini et al., 2017; Yu, Aberdeen, & Schraudolph, 2006); making the step size adaptive, i.e., iteration and/or data-dependent (Pirotta, Restelli, & Bascetta, 2013); making the batch size N also adaptive (Papini et al., 2017); applying a preconditioning matrix to the gradient, as in Natural Policy Gradient (S. Kakade, 2002) and second-order methods (Furmston & Barber, 2012). 2.4 Smooth functions In the following we denote with \u2225x\u2225p the \u2113p-norm of vector x, which is the Euclidean norm for p = 2. . For a matrix A, \u2225A\u2225p = sup{\u2225Ax\u2225p : \u2225x\u2225p = 1} denotes the induced norm, which is the spectral norm for p = 2. When the p subscript is omitted, we always mean p = 2. Let g : X \u2286Rd \u2192Rn be a (non-convex) vector-valued function. We call g Lipschitz continuous if there exists L > 0 such that, for every x, x\u2032 \u2208X: \u2225g(x\u2032) \u2212g(x)\u2225\u2264L \u2225x\u2032 \u2212x\u2225. (16) Let f : X \u2286Rd \u2192R be a real-valued di\ufb00erentiable function. We call f Lipschitz smooth if its gradient is Lipschitz continuous, i.e., there exists L > 0 such that, for every x, x\u2032 \u2208X: \u2225\u2207f(x\u2032) \u2212\u2207f(x)\u2225\u2264L \u2225x\u2032 \u2212x\u2225. (17) Whenever we want to specify the Lipschitz constant L of the gradient, we call f L-smooth.11 We also call L the smoothness constant of f. For a twicedi\ufb00erentiable function, the following holds:12 Proposition 2 Let X be a convex subset of Rd and f : X \u2192R be a twicedi\ufb00erentiable function. If the Hessian is uniformly bounded in spectral norm by L > 0, i.e., supx\u2208X \r \r \r\u22072f(x) \r \r \r 2 \u2264L, then f is L-smooth. Lipschitz smooth functions admit a quadratic bound on the deviation from linear behavior: Proposition 3 (Quadratic Bound) Let X be a convex subset of Rd and f : X \u2192R be an L-smooth function. Then, for every x, x\u2032 \u2208X: \f \ff(x\u2032) \u2212f(x) \u2212 x\u2032 \u2212x, \u2207f(x) \u000b\f \f \u2264L 2 \r \rx\u2032 \u2212x \r \r2 , (18) where \u27e8\u00b7, \u00b7\u27e9denotes the dot product. This bound is often useful for optimization purposes (Nesterov, 2013). 11The Lipschitz constant is usually de\ufb01ned as the smallest constant satisfying the Lipschitz condition. 
In this paper, we accept any constant for which the Lipschitz condition holds. 12The results from this section are well known in the optimization literature (Nesterov, 2013). However, proofs of Lemma 2 and 3 are reported in Appendix A.2 for the sake of completeness. \fSmoothing Policies and Safe Policy Gradients 11 3 Smooth Policy Gradient In this section, we provide lower bounds on performance improvement based on general assumptions on the policy class. 3.1 Smoothing policies We introduce a family of parametric stochastic policies having properties that we deem desirable for policy-gradient learning. We call them smoothing, as they are characterized by the smoothness of the performance measure: De\ufb01nition 1 Let \u03a0\u0398 = {\u03c0\u03b8 | \u03b8 \u2208\u0398} be a class of twice-di\ufb00erentiable parametric stochastic policies, where \u0398 \u2282Rd is convex. We call it smoothing if there exist nonnegative constants \u03be1, \u03be2, \u03be3 such that, for every state and in expectation over actions, the Euclidean norm of the score function: sup s\u2208S Ea\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u2225 i \u2264\u03be1, (19) the squared Euclidean norm of the score function: sup s\u2208S Ea\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u22252 i \u2264\u03be2, (20) and the spectral norm of the observed information: sup s\u2208S Ea\u223c\u03c0\u03b8(\u00b7|s) h \r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r i \u2264\u03be3, (21) are upper-bounded. Note that the de\ufb01nition requires that the bounding constants \u03be1, \u03be2, \u03be3 be independent of the policy parameters and the state. For this reason, the existence of such constants depends on the policy parameterization.13 We call a policy class (\u03be1, \u03be2, \u03be3)-smoothing when we want to specify the bounding constants. In Appendix B, we show that some of the most commonly used policies, such as the Gaussian policy for continuous actions and the Softmax policy for discrete actions, are smoothing. The smoothing constants for these classes are reported in Table 1. In the following sections, we will exploit the smoothness of the performance measure induced by smoothing policies to develop a monotonically improving policy gradient algorithm. However, smoothing policies have other interesting properties. For instance, variance upper bounds for REINFORCE/G(PO)MDP with Gaussian policies (Pirotta, Restelli, & Bascetta, 2013; Zhao, Hachiya, Niu, & Sugiyama, 2011) can be generalized to smoothing policies (see Appendix D for details). Other nice properties of smoothing policies, such as Lipschitzness of the performance measure, are discussed in (R. Yuan et al., 2021, Lemma D.1). 13Notice that, by Jensen\u2019s inequality, one can always remove the \ufb01rst requirement (19) by letting \u03be1 = \u221a\u03be2, as observed by R. Yuan, Gower, and Lazaric (2021). However, a smaller value of \u03be1 can sometimes be obtained. See Lemma 23 for an example. \f12 Smoothing Policies and Safe Policy Gradients Table 1: Smoothing constants \u03be1, \u03be2, \u03be3 and smoothness constant L for Gaussian and Softmax policies, where M is an upper bound on the Euclidean norm of the feature function, R is the maximum absolute-value reward, \u03b3 is the discount factor, \u03c3 is the standard deviation of the Gaussian policy and \u03c4 is the temperature of the Softmax policy. We also report the improved smoothness constant by R. Yuan et al. (2021) as L\u22c6. 
Gaussian Softmax \u03be1 2M \u221a 2\u03c0\u03c3 2M \u03c4 \u03be2 M2 \u03c32 4M2 \u03c42 \u03be3 M2 \u03c32 2M2 \u03c42 L 2M2R \u03c32(1\u2212\u03b3)2 \u0010 1 + 2\u03b3 \u03c0(1\u2212\u03b3) \u0011 2M2R \u03c42(1\u2212\u03b3)2 \u0010 3 + 4\u03b3 1\u2212\u03b3 \u0011 L\u22c6 2M2R \u03c32(1\u2212\u03b3)2 6M2R \u03c42(1\u2212\u03b3)2 3.2 Policy Hessian We now show that the Hessian of the performance measure \u22072J(\u03b8) for a smoothing policy has bounded spectral norm. We start by writing the policy Hessian for a general parametric policy as follows. The result is well known (S. Kakade, 2001), but we report a proof in Appendix A.4 for completeness. Also, note that our smoothing-policy assumption is weaker than the typical one (uniformly bounded policy derivatives). See Appendix A.3 for details. Proposition 4 Let \u03c0\u03b8 be a smoothing policy. The Hessian of the performance measure is: \u22072J(\u03b8) = 1 1 \u2212\u03b3 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h \u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) + \u0010 \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s) + \u22072 log \u03c0\u03b8(a|s) \u0011 Q\u03b8(s, a) i . For smoothing policies, we can bound the policy Hessian in terms of the constants from De\ufb01nition 1: Lemma 5 Given a (\u03be1, \u03be2, \u03be3)-smoothing policy \u03c0\u03b8, the spectral norm of the policy Hessian can be upper-bounded as follows: \r \r \r\u22072J(\u03b8) \r \r \r \u2264 R (1 \u2212\u03b3)2 \u0012 2\u03b3\u03be2 1 1 \u2212\u03b3 + \u03be2 + \u03be3 \u0013 . Proof By the Policy Gradient Theorem (see the proof of Theorem 1 by Sutton et al., 2000): \u2207V \u03b8(s) = 1 1 \u2212\u03b3 Z S \u03c1\u03b8 s (s\u2032) Z A \u03c0\u03b8(a|s\u2032)\u2207log \u03c0\u03b8(a|s\u2032)Q\u03b8(s, a) da ds\u2032. (22) \fSmoothing Policies and Safe Policy Gradients 13 Using (22), we bound the gradient of the value function in Euclidean norm: \r \r \r\u2207V \u03b8(s) \r \r \r \u2264 1 1 \u2212\u03b3 E s\u2032\u223c\u03c1\u03b8 s a\u223c\u03c0\u03b8(\u00b7|s\u2032) h\r \r \r\u2207log \u03c0\u03b8(a|s\u2032)Q\u03b8(s\u2032, a) \r \r \r i \u2264 R (1 \u2212\u03b3)2 E s\u2032\u223c\u03c1\u03b8 s a\u223c\u03c0\u03b8(\u00b7|s\u2032) \u0002\r \r\u2207log \u03c0\u03b8(a|s\u2032) \r \r\u0003 (23) \u2264 R (1 \u2212\u03b3)2 sup s\u2032\u2208S E a\u223c\u03c0\u03b8(\u00b7|s\u2032) \u0002\r \r\u2207log \u03c0\u03b8(a|s\u2032) \r \r\u0003 \u2264 \u03be1R (1 \u2212\u03b3)2 , (24) where (23) is from the Cauchy-Schwarz inequality and (8), and (24) is from the smoothing-policy assumption. Next, we bound the gradient of the action-value function. From (7): \r \r \r\u2207Q\u03b8(s, a) \r \r \r = \r \r \r \r\u2207 \u0012 r(s, a) + \u03b3 E s\u2032\u223cp(\u00b7|s,a) h V \u03b8(s\u2032) i\u0013\r \r \r \r (25) = \u03b3 \r \r \r \r E s\u2032\u223cp(\u00b7|s,a) h \u2207V \u03b8(s\u2032) i\r \r \r \r (26) \u2264\u03b3 E s\u2032\u223cp(\u00b7|s,a) h\r \r \r\u2207V \u03b8(s) \r \r \r i \u2264 \u03b3\u03be1R (1 \u2212\u03b3)2 , (27) where the interchange of gradient and expectation in (26) is justi\ufb01ed by the smoothing-policy assumption (see Appendix A.3 for details) and (27) is from (24). 
Finally, from Proposition 4: (1 \u2212\u03b3) \r \r \r\u22072J(\u03b8) \r \r \r \u2264 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) \r \r \r i + E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) \r \r \r i + E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s)Q\u03b8(s, a) \r \r \r i + E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u22072 log \u03c0\u03b8(a|s)Q\u03b8(s, a) \r \r \r i (28) \u22642 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u2225 \r \r \r\u2207Q\u03b8(s, a) \r \r \r i + E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u22252 \f \f \fQ\u03b8(s, a) \f \f \f i + E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r \f \f \fQ\u03b8(s, a) \f \f \f i (29) \u2264 2\u03b3\u03be1R (1 \u2212\u03b3)2 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u2225\u2207log \u03c0\u03b8(a|s)\u2225 \u0003 \f14 Smoothing Policies and Safe Policy Gradients + R 1 \u2212\u03b3 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u22252i + R 1 \u2212\u03b3 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r i (30) \u2264 R (1 \u2212\u03b3) \u0012 2\u03b3\u03be2 1 1 \u2212\u03b3 + \u03be2 + \u03be3 \u0013 , (31) where (28) is from Jensen inequality (all norms are convex) and the triangle inequality, (29) is from \r \r \rxy\u22a4\r \r \r = \u2225x\u2225\u2225y\u2225for any two vectors x and y, (30) is from (8) and (27), and the last inequality is from the smoothing-policy assumption. \u25a1 3.3 Smooth Performance For a smoothing policy, the performance measure J(\u03b8) is Lipschitz smooth with a smoothness constant that only depends on the smoothing constants, the reward magnitude, and the discount factor. This result is of independent interest as it can be used to establish convergence rates for policy gradient algorithms (R. Yuan et al., 2021). Lemma 6 Given a (\u03be1, \u03be2, \u03be3)-smoothing policy class \u03a0\u0398, the performance measure J(\u03b8) is L-smooth with the following smoothness constant: L = R (1 \u2212\u03b3)2 \u0012 2\u03b3\u03be2 1 1 \u2212\u03b3 + \u03be2 + \u03be3 \u0013 . (32) Proof From Lemma 5, L is a bound on the spectral norm of the policy Hessian. From Lemma 2, this is a valid Lipschitz constant for the policy gradient, hence the performance measure is L-smooth. \u25a1 The smoothness of the performance measure, in turn, yields the following property on the guaranteed performance improvement: Theorem 7 Let \u03a0\u0398 be a (\u03be1, \u03be2, \u03be3)-smoothing policy class. For every \u03b8, \u03b8\u2032 \u2208\u0398: J(\u03b8\u2032) \u2212J(\u03b8) \u2265\u27e8\u2206\u03b8, \u2207J(\u03b8)\u27e9\u2212L 2 \u2225\u2206\u03b8\u22252 , where \u2206\u03b8 = \u03b8\u2032 \u2212\u03b8 and L = R (1\u2212\u03b3)2 \u0010 2\u03b3\u03be2 1 1\u2212\u03b3 + \u03be2 + \u03be3 \u0011 . Proof It su\ufb03ces to apply Lemma 3 with the Lipschitz constant from Lemma 6. \u25a1 The smoothness constant L for Gaussian and Softmax policies is reported in Table 1. 
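To make the constants of Table 1 and the bound of Theorem 7 concrete, here is a minimal sketch for the linear Gaussian case; it simply transcribes the closed-form expressions above, assuming the feature-norm bound M, reward bound R, discount γ, and standard deviation σ are known.

```python
import numpy as np

def gaussian_smoothing_constants(M, R, gamma, sigma):
    """Table 1 constants for a linear Gaussian policy."""
    xi1 = 2 * M / (np.sqrt(2 * np.pi) * sigma)
    xi2 = M ** 2 / sigma ** 2
    xi3 = M ** 2 / sigma ** 2
    # smoothness constant of Lemma 6 / Table 1
    L = 2 * M**2 * R / (sigma**2 * (1 - gamma)**2) * (1 + 2 * gamma / (np.pi * (1 - gamma)))
    # improved constant L* of R. Yuan et al. (2021), Equation (33)
    L_star = 2 * M**2 * R / (sigma**2 * (1 - gamma)**2)
    return xi1, xi2, xi3, L, L_star

def guaranteed_improvement(delta_theta, grad, L):
    """Lower bound of Theorem 7 on J(theta + delta_theta) - J(theta)."""
    return np.dot(delta_theta, grad) - 0.5 * L * np.dot(delta_theta, delta_theta)
```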
In the following, we will exploit this property of smoothing policies to enforce safety guarantees on the policy updates performed by Algorithm 1, \fSmoothing Policies and Safe Policy Gradients 15 i.e., stochastic gradient ascent updates. However, Theorem 7 applies to any policy update \u2206\u03b8 \u2208Rd as long as \u03b8 + \u2206\u03b8 \u2208\u0398. Very recently, R. Yuan et al. (2021, Lemma 4.4) provided an improved smoothness constant for smoothing policies: L\u22c6= R(\u03be2 + \u03be3) (1 \u2212\u03b3)2 . (33) This is a signi\ufb01cant step forward since it improves the dependence on the e\ufb00ective horizon by a (1\u2212\u03b3)\u22121 factor. In Table 1 we report explicit expressions for L\u22c6in the case of linear Gaussian and Softmax policies. We will use these superior smoothness constant in the numerical simulations of Section 7. 4 Optimal Safe Meta-Parameters In this section, we provide a step size for Algorithm 1 that maximizes a lower bound on the performance improvement for smoothing policies. This yields safety in the sense of Monotonic Improvement (MI), i.e., non-negative performance improvements at each policy update: J(\u03b8k) \u2212J(\u03b8k+1) \u22650, (34) at least with high probability. In policy optimization, at each learning iteration k, we ideally want to \ufb01nd the policy update \u2206\u03b8 that maximizes the new performance J(\u03b8k + \u2206\u03b8), or equivalently: max \u2206\u03b8 J(\u03b8k + \u2206\u03b8) \u2212J(\u03b8k), (35) since J(\u03b8k) is \ufb01xed. Unfortunately, the performance of the updated policy cannot be known in advance.14 For this reason, we replace the optimization objective in (35) with a lower bound, i.e., a guaranteed improvement. In particular, taking Algorithm 1 as our starting point, we maximize the guaranteed improvement of a policy gradient update (line 5) by selecting optimal metaparameters. The solution of this meta-optimization problem provides a lower bound on the actual performance improvement. As long as this is always non-negative, MI is guaranteed. 4.1 Adaptive Step Size \u2013 Exact Framework To decouple the pure optimization aspects of this problem from gradient estimation issues, we \ufb01rst consider an exact policy gradient update, i.e., \u03b8k+1 \u2190\u03b8k +\u03b1\u2207J(\u03b8k), where we assume to have a \ufb01rst-order oracle, i.e., to be 14The performance of the updated policy could be estimated with o\ufb00-policy evaluation techniques, but this would introduce an additional, non-negligible source of variance. The idea of using o\ufb00-policy evaluation to select meta-parameters was explored by Paul, Kurin, and Whiteson (2019). \f16 Smoothing Policies and Safe Policy Gradients able to compute the exact policy gradient \u2207J(\u03b8k). This assumption is clearly not realistic, and will be removed in Section 4.2. In this simpli\ufb01ed framework, performance improvement can be guaranteed deterministically. Furthermore, the only relevant meta-parameter is the step size \u03b1 of the update. We \ufb01rst need a lower bound on the performance improvement J(\u03b8k+1) \u2212J(\u03b8k). For a smoothing policy, we can use the following: Theorem 8 Let \u03a0\u0398 be a (\u03be1, \u03be2, \u03be3)-smoothing policy class. Let \u03b8k \u2208\u0398 and \u03b8k+1 = \u03b8k + \u03b1\u2207J(\u03b8k), where \u03b1 > 0. Provided \u03b8k+1 \u2208\u0398, the performance improvement of \u03b8k+1 w.r.t. 
\u03b8k can be lower bounded as follows: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u03b1 \u2225\u2207J(\u03b8k)\u22252 \u2212\u03b12 L 2 \u2225\u2207J(\u03b8k)\u22252 := B(\u03b1; \u03b8k), where L = R (1\u2212\u03b3)2 \u0010 2\u03b3\u03be2 1 1\u2212\u03b3 + \u03be2 + \u03be3 \u0011 . Proof This is a direct consequence of Theorem 7 with \u2206\u03b8 = \u03b1\u2207J(\u03b8k). \u25a1 This bound is in the typical form of performance improvement bounds (e.g., Cohen, Yu, & Wright, 2018; S. Kakade & Langford, 2002; Pirotta, Restelli, & Bascetta, 2013; Schulman et al., 2015): a positive term accounting for the anticipated advantage of \u03b8k+1 over \u03b8k, and a penalty term accounting for the mismatch between the two policies, which makes the anticipated advantage less reliable. In our case, the mismatch is measured by the curvature of the performance measure w.r.t. the policy parameters, via the smoothness constant L. This lower bound is quadratic in \u03b1, hence we can easily \ufb01nd the optimal step size \u03b1\u2217. Corollary 9 Let B(\u03b1; \u03b8k) be the guaranteed performance improvement of an exact policy gradient update, as de\ufb01ned in Theorem 8. Under the same assumptions, B(\u03b1; \u03b8k) is maximized by the constant step size \u03b1\u2217= 1 L, which guarantees the following non-negative performance improvement: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u2225\u2207J(\u03b8k)\u22252 2L . Proof We just maximize B(\u03b1; \u03b8k) as a (quadratic) function of \u03b1. The global optimum B(\u03b1\u2217; \u03b8k) = \u2225\u2207J(\u03b8k)\u22252 2L is attained by \u03b1\u2217= 1 L. The improvement guarantee follows from Theorem 8. \u25a1 4.2 Adaptive Step Size \u2013 Approximate Framework In practice, we cannot compute the exact gradient \u2207J(\u03b8k), but only an estimate b \u2207J(\u03b8; D) obtained from a \ufb01nite dataset D of trajectories. In this section, N denotes the \ufb01xed size of D. To \ufb01nd the optimal step size, we just need to adapt the performance-improvement lower bound of Theorem 8 to \fSmoothing Policies and Safe Policy Gradients 17 stochastic-gradient updates. Since sample trajectories are involved, this new lower bound will only hold with high probability. To establish statistical guarantees, we make the following assumption on how the (unbiased) gradient estimate concentrates around its expected value: Assumption 1 Fixed a parameter \u03b8 \u2208\u0398, a batch size N \u2208N and a failure probability \u03b4 \u2208(0, 1), with probability at least 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8; D) \u2212\u2207J(\u03b8) \r \r \r \u2264\u03f5(\u03b4) \u221a N , where |D| is a dataset of N i.i.d. trajectories collected with \u03c0\u03b8 and \u03f5 : (0, 1) \u2192R is a known function. We will discuss how this assumption is satis\ufb01ed in cases of interest in Section 5 and Appendix C. Under the above assumption, we can adapt Theorem 8 to the stochastic-gradient case as follows: Theorem 10 Let \u03a0\u0398 be a (\u03be1, \u03be2, \u03be3)-smoothing policy class. Let \u03b8k \u2208\u0398 \u2286Rd and \u03b8k+1 = \u03b8k + \u03b1b \u2207J(\u03b8k; Dk), where \u03b1 \u22650, N = |Dk| \u22651. Under Assumption 1, provided \u03b8k+1 \u2208\u0398, the performance improvement of \u03b8k+1 w.r.t. 
\u03b8k can be lower bounded, with probability at least 1 \u2212\u03b4k, as follows: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u03b1 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5(\u03b4k) \u221a N \u0013 \u00d7 max \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r , \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r + \u03f5(\u03b4k) \u221a N 2 \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe \u2212\u03b12L 2 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 := e Bk(\u03b1; N), where L = R (1\u2212\u03b3)2 \u0010 2\u03b3\u03be2 1 1\u2212\u03b3 + \u03be2 + \u03be3 \u0011 . Proof Consider the good event Ek = n\r \r \rb \u2207J(\u03b8; D) \u2212\u2207J(\u03b8) \r \r \r \u2264\u03f5(\u03b4k)/ \u221a N o . By Assumption 1, Ek holds with probability at least 1 \u2212\u03b4k. For the rest of the proof, we will assume Ek holds. Let \u03f5k := \u03f5(\u03b4k)/ \u221a N for short. Under Ek, by the triangular inequality: \u2225\u2207J(\u03b8k)\u2225\u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212 \r \r \r\u2207J(\u03b8k) \u2212b \u2207J(\u03b8k; Dk) \r \r \r \u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k, (36) thus: \u2225\u2207J(\u03b8k)\u22252 \u2265max n\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k, 0 o2 . (37) Then, by the polarization identity: D b \u2207J(\u03b8k; Dk), \u2207J(\u03b8k) E = 1 2 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 + \u2225\u2207J(\u03b8k)\u22252 \f18 Smoothing Policies and Safe Policy Gradients \u2212 \r \r \r\u2207J(\u03b8k) \u2212b \u2207J(\u03b8k; Dk) \r \r \r 2\u0013 \u22651 2 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 + max n\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k, 0 o2 \u2212\u03f52 k \u0013 , where the latter inequality is from (37). We \ufb01rst consider the case in which \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r > \u03f5k: D b \u2207J(\u03b8k; Dk), \u2207J(\u03b8k) E \u22651 2 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 + \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k \u00112 \u2212\u03f52 k \u0013 = \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k \u0011 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r . (38) Then, we consider the case in which \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2264\u03f5k: D b \u2207J(\u03b8k; Dk), \u2207J(\u03b8k) E \u22651 2 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 \u2212\u03f52 k \u0013 (39) = \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k \u0011 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r + \u03f5k 2 . (40) The two cases can be uni\ufb01ed as follows: D b \u2207J(\u03b8k; Dk), \u2207J(\u03b8k) E \u2265 \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k \u0011 \u00d7 max \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r , \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r + \u03f5k 2 \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe . (41) From Theorem 7 with \u2206\u03b8 = \u03b1b \u2207J(\u03b8k; Dk) we obtain: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u27e8\u03b8k+1 \u2212\u03b8k, \u2207J(\u03b8k)\u27e9\u2212L 2 \u2225\u03b8k+1 \u2212\u03b8k\u22252 = \u03b1 D b \u2207J(\u03b8k; Dk), \u2207J(\u03b8k) E \u2212\u03b12L 2 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 \u2265\u03b1 \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5k \u0011 \u00d7 max \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r , \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r + \u03f5k 2 \uf8fc \uf8f4 \uf8fd \uf8f4 \uf8fe \u2212\u03b12L 2 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 , (42) where the last inequality is from (41). 
\u25a1 From Theorem 10 we can easily obtain an optimal step size, as done in the exact setting, provided the batch size is su\ufb03ciently large: Corollary 11 Let e B(\u03b1, N; \u03b8k) be the guaranteed performance improvement of a stochastic policy gradient update, as de\ufb01ned in Theorem 10. Under the same assumptions, provided the batch size satis\ufb01es: N \u2265 \u03f52(\u03b4k) \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 , (43) \fSmoothing Policies and Safe Policy Gradients 19 e B(\u03b1, N; \u03b8k) is maximized by the following adaptive step size: \u03b1\u2217 k = 1 L \uf8eb \uf8ec \uf8ed1 \u2212 \u03f5(\u03b4k) \u221a N \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \uf8f6 \uf8f7 \uf8f8, (44) which guarantees, with probability at least 1 \u2212\u03b4k, the following non-negative performance improvement: J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5(\u03b4k) \u221a N \u00112 2L . (45) Proof Let N0 = \u03f52(\u03b4k) \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u22122 . When N \u2264N0, the second argument of the max operator in (41) is selected. In this case, no positive improvement can be guaranteed and the optimal non-negative step size is \u03b1 = 0. Thus, we focus on the case N > N0. In this case, the \ufb01rst argument of the max operator is selected. Optimizing e B(\u03b1, N) as a function of \u03b1 alone, which is again quadratic, yields (44) as the optimal step size and (45) as the maximum guaranteed improvement. \u25a1 In this case, the optimal step size is adaptive, i.e., time-varying and datadependent. The constant, optimal step size for the exact case (Corollary 9) is recovered in the limit of in\ufb01nite data, i.e., N \u2192\u221e. In the following we discuss why this adaptive step size should not be used in practice, and propose an alternative solution. 4.3 Adaptive Batch Size The safe step size from Corollary 11 requires the batch size to be large enough. As soon as the condition (43) fails to hold, the user is left with the decision whether to interrupt the learning process or collect more data \u2014 an undesirable property for a fully autonomous system. To avoid this, a large batch size must be selected from the start, which results in a pointless waste of data in the early learning iterations. Even so, Equation (43), used as a stopping condition, would be susceptible to random oscillations of the stochastic gradient magnitude, interrupting the learning process prematurely. As observed in (Papini et al., 2017), controlling also the batch size N of the gradient estimation can be advantageous. Intuitively, a larger batch size yields a more reliable estimate, which in turn allows a safer policy gradient update. The larger the batch size, the higher the guaranteed improvement, which would lead to selecting the highest possible value of N. However, we must take into account the cost of collecting the trajectories, which is non-negligible in real-world problems (e.g., robotics). For this reason, we would like the metaparameters to maximize the per-trajectory performance improvement: \u03b1k, Nk = arg max \u03b1,N J(\u03b8k + \u03b1b \u2207J(\u03b8k; D)) \u2212J(\u03b8k) N , (46) \f20 Smoothing Policies and Safe Policy Gradients where D is a dataset of N i.i.d. trajectories sampled with \u03c0\u03b8k. 
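To see why the adaptive step size of Corollary 11 is problematic with a fixed batch, consider the following minimal sketch of conditions (43)-(44); the function and argument names are ours. Whenever the estimated gradient norm drops below ε(δ_k)/√N, the only safe step is zero and learning stalls, which is exactly what the joint optimization of (46) is designed to avoid.

```python
def safe_step_size(grad_norm_hat, eps_delta, N, L):
    """Adaptive step size of Corollary 11 (a sketch).

    grad_norm_hat: estimated gradient norm ||hat-grad J(theta_k; D_k)||.
    eps_delta:     high-probability error constant eps(delta_k) of Assumption 1.
    N:             batch size |D_k|.
    L:             smoothness constant of the policy class.
    Returns the largest step size with guaranteed improvement
    (0 if the batch-size condition (43) fails).
    """
    error = eps_delta / N ** 0.5
    if grad_norm_hat <= error:            # condition (43) violated
        return 0.0
    return (1.0 / L) * (1.0 - error / grad_norm_hat)
```

As N grows to infinity this rule recovers the constant step size 1/L of Corollary 9; with a small batch, random dips of the estimated gradient norm force the step to zero.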
We can then use the lower bound from Theorem 10 to \ufb01nd the jointly optimal safe step size and batch size, similarly to what was done in (Papini et al., 2017) for the special case of Gaussian policies: Corollary 12 Let e Bk(\u03b1; N) be the lower bound on the performance improvement of a stochastic policy gradient update, as de\ufb01ned in Theorem 10. Under the same assumptions, the continuous relaxation of e Bk(\u03b1; N)/N is maximized by the following step size \u03b1\u2217and batch size N\u2217 k: \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 \u03b1\u2217= 1 2L N\u2217 k = 4\u03f52(\u03b4k) \r \r \r b \u2207J(\u03b8k; Dk) \r \r \r 2 . (47) Using \u03b1\u2217and \u2308N\u2217 k\u2309in the stochastic gradient ascent update guarantees, with probability at least 1 \u2212\u03b4k, the following non-negative performance improvement: J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 8L . (48) Proof Fix k and let \u03a5(\u03b1, N) = e Bk(\u03b1; N)/N and N0 = \u03f52(\u03b4k) \u000e \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 . We consider the continuous relaxation of \u03a5(\u03b1, N), where N can be any positive real number. For N \u2265N0, the \ufb01rst argument of the max operator in (36) can be selected. Note that the second argument is always a valid choice, since it is a lower bound on the \ufb01rst one for every N \u22651. Thus, we separately solve the following constrained optimization problems: \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 max\u03b1,N 1 N \u0010 \u03b1 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u0010\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r \u2212\u03f5(\u03b4k) \u221a N \u0011 \u2212\u03b12 L 2 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2\u0013 s.t. \u03b1 \u22650, N > \u03f52(\u03b4k) \r \r \r b \u2207J(\u03b8k; Dk) \r \r \r 2 , (49) and: \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f3 max\u03b1,N 1 N \u0012 \u03b1 2 \u0012\r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 \u2212\u03f52(\u03b4k) N \u0013 \u2212\u03b12 L 2 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2\u0013 s.t. \u03b1 \u22650, N > 0. (50) Both problems can be solved in closed form using KKT conditions. The \ufb01rst one (49) yields \u03a5\u2217 = \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 4 \u000e \u0010 32L\u03f52(\u03b4k) \u0011 with the values of \u03b1\u2217 and N\u2217 k given in (47). The second one (50) yields a worse optimum \u03a5\u2217 = \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 4 \u000e \u0010 54L\u03f52(\u03b4k) \u0011 with \u03b1 = 1 3L and N = 3\u03f52(\u03b4) \u000e \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 . Hence, we keep the \ufb01rst solution. From Theorem 10, using \u03b1\u2217and N\u2217 k would guarantee J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 \u000e (8L). Of course, only integer batch sizes \fSmoothing Policies and Safe Policy Gradients 21 can be used. However, for N \u2265N0, the right-hand side of (36) is monotonically increasing in N. Since N\u2217 k \u2265N0 and \u2308N\u2217 k\u2309\u2265N\u2217 k, the guarantee (48) is still valid when \u03b1\u2217and \u2308N\u2217 k\u2309are employed in the stochastic gradient ascent update. \u25a1 In this case, the optimal step size is constant, and is exactly half the one for the exact case (Corollary 9). In turn, the batch size is adaptive: when the norm of the (estimated) gradient is small, a large batch size is selected. 
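A minimal sketch of the resulting meta-parameter rule (47), under the same assumptions as above (smoothness constant L and error constant ε(δ_k) known), is as follows.

```python
import math

def spg_meta_parameters(grad_norm_hat, eps_delta, L):
    """Jointly optimal safe meta-parameters of Corollary 12 (a sketch).

    Returns the constant step size alpha* = 1/(2L) and the adaptive batch size
    N*_k = ceil(4 * eps(delta_k)^2 / ||hat-grad J(theta_k; D_k)||^2).
    """
    alpha_star = 1.0 / (2.0 * L)
    N_star = math.ceil(4.0 * eps_delta ** 2 / grad_norm_hat ** 2)
    return alpha_star, N_star
```

Rounding the batch size up with the ceiling is what the proof of Corollary 12 relies on: since the guaranteed improvement is monotonically increasing in N above the threshold N0, using ⌈N*_k⌉ preserves the guarantee (48).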
Intuitively, this allows to counteract the variance of the estimator, which is large relatively to the gradient magnitude. One may worry about the recursive dependence of N \u2217 k on itself through Dk. We will overcome this issue in the next section. 5 Algorithm In this section, we leverage the theoretical results of the previous sections to design a reinforcement learning algorithm with monotonic improvement guarantees. For the reasons discussed above, we adopt the adaptive-batch-size approach from Section 4.3. Corollary 12 provides a constant step size \u03b1\u2217and a schedule for the batch size (\u2308N \u2217 k\u2309)k\u22651 that jointly maximize per-trajectory performance improvement under a monotonic-improvement constraint. Plugging these meta-parameters into Algorithm 1, we could obtain a safe policy gradient algorithm. Unfortunately, the closed-form expression for N \u2217 k provided in (47) cannot be used directly. We must compute the batch size before collecting the batch of trajectories Dk, but N \u2217 k depends on Dk itself. To overcome this issue, we collect trajectories in an incremental fashion until the optimal batch size is achieved. We call this algorithm Safe Policy Gradient (SPG), outlined in Algorithm 2. The user speci\ufb01es the failure probability \u03b4k for each iteration k, while the smoothness constant L and the concentration bound \u03f5 : (0, 1) \u2192R can be computed depending on the policy class and the gradient estimator (see Tables 1 and 2). We can study the data-collecting process of SPG as a stopping problem. Fixed an iteration k, let Fk,i = \u03c3({\u03c4k,1, . . . , \u03c4k,i\u22121}) be the sigma-algebra generated by the \ufb01rst i trajectories collected at that iteration. Let Ei[X] be short for E[X|Fi\u22121].15 In Section 4 and 4.3 we assumed the Euclidean norm of the gradient estimation error to be bounded by \u03f5(\u03b4)/ \u221a N with probability 1 \u2212\u03b4 for some function \u03f5 : (0, 1) \u2192R+. For Algorithm 2 to be well-behaved, we need gradient estimates to concentrate exponentially, which translates into the following, stronger assumption: Assumption 2 Fixed a parameter \u03b8 \u2208\u0398, a batch size N \u2208N and a failure probability \u03b4 \u2208(0, 1), with probability at least 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8; D) \u2212\u2207J(\u03b8) \r \r \r \u2264\u03f5(\u03b4) \u221a N , 15In the analysis that follows, expectation without a subscript actually denotes E[\u00b7|\u03b8k] for a \ufb01xed (outer) iteration k of Algorithm 2. However, we do not need this level of detail since data are discarded at the end of each iteration, dependence between di\ufb00erent iterations is only through \u03b8k, and we can analyze each iteration in isolation until the very end of the section. \f22 Smoothing Policies and Safe Policy Gradients Algorithm 2 Safe Policy Gradient (SPG) 1: Input: initial policy parameter \u03b80, smoothness constant L, concentration bound \u03f5, failure probabilities (\u03b4k)k\u22651, mini-batch size n 2: \u03b1 = 1 2L \u25b7\ufb01xed step size 3: for k = 1, 2, . . . 
do 4: i = 0, Dk,0 = \u2205 5: do 6: i = i + 1 7: Collect trajectory \u03c4k,i \u223cp\u03b8k 8: Dk,i = Dk,i\u22121 \u222a{\u03c4k,i} 9: Compute policy gradient estimate gk,i = b \u2207J(\u03b8k; Dk,i) 10: \u03b4k,i = \u03b4k i(i+1) 11: while i < 4\u03f52(\u03b4k,i) \u2225gk,i\u22252 12: Nk = i, Dk = Dk,i \u25b7adaptive batch size 13: Update policy parameters as \u03b8k+1 \u2190\u03b8k + \u03b1b \u2207J(\u03b8k; Dk) 14: end for where |D| is a dataset of N i.i.d. trajectories collected with \u03c0\u03b8 and \u03f5(\u03b4) = C p d log(6/\u03b4) for some problem-dependent constant C that is independent of \u03b4, d and N. This is satis\ufb01ed by REINFORCE/G(PO)MDP with Softmax and Gaussian policies, as shown in Appendix C. In Table 2 we summarize the value of the error bound \u03f5(\u03b4) to be used in the di\ufb00erent scenarios. Equipped with this exponential tail bound we can prove that, at any given (outer) iteration of SPG, the data-collecting process (inner loop) terminates: Lemma 13 Fix an iteration k of Algorithm 2 and let Nk the number of trajectories that are collected at that iteration. Under Assumption 2, provided \u2225\u2207J(\u03b8k)\u2225> 0, E[Nk] < \u221e. Proof First, note that Nk is a stopping time w.r.t. the \ufb01ltration (Fk,i)i\u22651. Consider the event Ek,i = \b \r \rgk,i \u2212\u2207J(\u03b8k) \r \r \u2264\u03f5(\u03b4k,i)/ \u221a i \t . By Assumption 2, P(\u00acEk,i) \u2264 \u03b4k,i. This allows to upper bound the expectation of Nk as follows: E[Nk] \u2264E \" \u221e X i=1 I \u221a i < 2\u03f5(\u03b4k,i) \r \rgk,i \r \r !# (51) = E \" \u221e X i=1 I \u221a i < 2\u03f5(\u03b4k,i) \r \rgk,i \r \r , Ek,i !# + E \" \u221e X i=1 I \u221a i < 2\u03f5(\u03b4k,i) \r \rgk,i \r \r , \u00acEk,i !# (52) \u2264 \u221e X i=1 I \u221a i < 2\u03f5(\u03b4k,i) \u2225\u2207J(\u03b8k)\u2225\u2212\u03f5(\u03b4k,i)/ \u221a i ! + \u221e X i=1 P(\u00acEk,i) (53) \fSmoothing Policies and Safe Policy Gradients 23 Table 2: Gradient estimation error bound \u03f5(\u03b4) for Gaussian and Softmax policies using REINFORCE (RE.), GPOMDP (GP.), or the random-horizon estimator discussed in Appendix C.2 (RH.) as gradient estimator, where d is the dimension of the policy parameter, M is an upper bound on the max norm of the feature function, R is the maximum absolute-valued reward, \u03b3 is the discount factor, T is the task horizon, \u03c3 is the standard deviation of the Gaussian policy and \u03c4 is the temperature of the Softmax policy. Gaussian Softmax RE. 4MRT (1\u2212\u03b3\u22a4) \u03c3(1\u2212\u03b3) p 14d log(6/\u03b4) 4MRT (1\u2212\u03b3\u22a4) \u03c4(1\u2212\u03b3) p 2d log(6/\u03b4) GP. 4MR[1\u2212\u03b3\u22a4\u2212T (\u03b3T \u2212\u03b3T +1)] \u03c3(1\u2212\u03b3)2 p 14d log(6/\u03b4) 4MR[1\u2212\u03b3\u22a4\u2212T (\u03b3T \u2212\u03b3T +1)] \u03c4(1\u2212\u03b3)2 p 2d log(6/\u03b4) RH. 
4MR \u03c3(1\u2212\u03b31/2)2 p 14d log(6/\u03b4) 4MR \u03c4(1\u2212\u03b31/2)2 p 2d log(6/\u03b4) \u2264min i\u22651 ( \u221a i \u2265 2\u03f5(\u03b4k,i) \u2225\u2207J(\u03b8k)\u2225\u2212\u03f5(\u03b4k,i)/ \u221a i ) + \u221e X i=1 \u03b4k,i (54) \u2264min i\u22651 n \u2225\u2207J(\u03b8k)\u2225 \u221a i \u22653\u03f5(\u03b4k,i) o ) + \u03b4k \u221e X i=1 1 i(i + 1) (55) \u2264min i\u22651 n \u2225\u2207J(\u03b8k)\u2225 \u221a i \u22653C p d log(6i(i + 1)/\u03b4k) o + 1 (56) \u2264min i\u22651 n \u2225\u2207J(\u03b8k)\u22252 i \u226518C2d log(6i/\u03b4k) o + 1 (57) \u2264 & 36C2d \u2225\u2207J(\u03b8k)\u22252 log 108C2d \u2225\u2207J(\u03b8k)\u22252 \u03b4k ' + 1, (58) where (56) is by Assumption 2 and the last inequality is by Lemma 21 assuming \u2225\u2207J(\u03b8k)\u2225\u2264C. If the latter is not true, we still get: E[Nk] \u2264min i\u22651 n \u2225\u2207J(\u03b8k)\u22252 i \u226518C2d log(6i/\u03b4k) o + 1 (59) \u2264min i\u22651 {i \u226518d log(6i/\u03b4k)} + 1 (60) \u2264\u230836d log(108d/\u03b4k)\u2309+ 1. (61) \u25a1 We can now prove that the policy updates of SPG are safe. Theorem 14 Consider Algorithm 2 applied to a smoothing policy, where b \u2207J is an unbiased policy gradient estimator. Under Assumption 2, for any iteration k \u22651, provided \u2207J(\u03b8k) \u0338= 0, with probability at least 1 \u2212\u03b4k: J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 8L \u22650. \f24 Smoothing Policies and Safe Policy Gradients Proof Fix an (outer) iteration k of Algorithm 2 and let gk,i = b \u2207J(\u03b8k; Dk,i) for short. Using an unbiased policy gradient estimator we ensure Ei[gk,i \u2212\u2207J(\u03b8k)] = 0, so Xi = gk,i \u2212\u2207J(\u03b8k) is a martingale di\ufb00erence sequence adapted to (Fk,i)i\u22651. We use an optional stopping argument to show that gk,Nk is an unbiased policy gradient estimate. Lemma 13 shows that Nk is a stopping time w.r.t. the \ufb01ltration (Fk,i)i\u22651 that is \ufb01nite in expectation. Furthermore, by Assumption 2, integrating the tail: Ei[\u2225Xi\u2225] = Z \u221e 0 P \u0000\u2225X\u2225> x|Fk,i \u0001 dx (62) \u22646 Z \u221e 0 exp(\u2212x2i/(C2d)) dx (63) \u22646C r \u03c0d 4i \u22646C r \u03c0d 4 for all i \u22651. (64) Hence, by optional stopping (Lemma 22), E[XNk] = 0. Since XNk = b \u2207J(\u03b8k; Dk) \u2212 \u2207J(\u03b8k), we have E[b \u2207J(\u03b8k; Dk)] = \u2207J(\u03b8k). This shows that the policy update of Algorithm 2 is an unbiased policy-gradient update. By the stopping condition: Nk \u2265 4\u03f52(\u03b4k,Nk) \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 . (65) Now consider the following good event: Ek = \b \u2200i \u22651 : \r \rgk,i \u2212\u2207J(\u03b8k) \r \r \u2264\u03f5(\u03b4k,i)/i \t . (66) Under Assumption 2, by union bound: P (\u00acEk) \u2264 \u221e X i=1 \u03b4k,i = \u221e X i=1 \u03b4k i(i + 1) = \u03b4k. (67) So Ek holds with probability at least 1\u2212\u03b4k. Under Ek, the performance improvement guarantee is by Corollary 12, Equation (65), and the choice of the step size \u03b1. \u25a1 We have shown that the policy updates of SPG are safe with probability 1 \u2212\u03b4k, where the failure probability \u03b4k can be speci\ufb01ed by the user for each iteration k. Typically, one would like to ensure monotonic improvement for the whole duration of the learning process. This can be achieved by appropriate con\ufb01dence schedules. If the number of updates K is \ufb01xed a priori, \u03b4k = \u03b4/K guarantees monotonic improvement with probability 1 \u2212\u03b4. 
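To make the data-collection loop of Algorithm 2 concrete, here is a minimal sketch of its inner loop, assuming the generic error bound of Assumption 2 with a known constant C and the per-trajectory confidence schedule δ_{k,i} = δ_k/(i(i+1)) of line 10. The routines `collect_trajectory` and `grad_estimate` are hypothetical placeholders for the sampling procedure and an unbiased gradient estimator (e.g., G(PO)MDP), and the trajectory cap is a practical safeguard that is not part of Algorithm 2.

```python
import math

def spg_inner_loop(theta_k, delta_k, C, d, collect_trajectory, grad_estimate,
                   max_trajectories=10**6):
    """Inner loop of Algorithm 2 (SPG): collect trajectories one at a time
    until the adaptive batch-size condition of line 11 is met.
    Returns the batch D_k and the gradient estimate used for the update."""
    data, i = [], 0
    while i < max_trajectories:
        i += 1
        data.append(collect_trajectory(theta_k))
        g = grad_estimate(theta_k, data)                     # estimate on D_{k,i}
        delta_ki = delta_k / (i * (i + 1))                   # confidence schedule (line 10)
        eps = C * math.sqrt(d * math.log(6.0 / delta_ki))    # eps(delta_{k,i}), Assumption 2
        grad_sq = float(sum(x * x for x in g))
        if grad_sq > 0 and i >= 4.0 * eps ** 2 / grad_sq:    # stopping rule (line 11)
            break
    return data, g
```

The outer loop of Algorithm 2 then performs the fixed-step update θ_{k+1} ← θ_k + (1/(2L)) ∇̂J(θ_k; D_k), whose safety is established by Theorem 14.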
The same can be obtained by using an adaptive con\ufb01dence schedule \u03b4k = \u03b4 k(k+1), even when the number of updates is not known in advance. Both results are easily shown by taking a union bound over k \u22651. Notice how having an exponential tail bound like the one from Assumption 2 is fundamental for the batch size to have a logarithmic dependence on the number of policy updates. 5.1 Towards a Practical Algorithm The version of SPG we have just analyzed is very conservative. The price for guaranteeing monotonic improvement is slow convergence, even in small problems (see Section 7.1 for an example). In this section, we discuss possible variants and generalizations of Algorithm 2 aimed at the development of a more practical method. In doing so, we still stay faithful to the principle of satisfying the safety requirement speci\ufb01ed by the user with no compromises. We just list the changes here. See Appendix E for a more rigorous discussion. \fSmoothing Policies and Safe Policy Gradients 25 Improved smoothness constant. As mentioned in Section 3.3, we can use the improved smoothness constant by R. Yuan et al. (2021), denoted L\u22c6in the following, which has a better dependence on the e\ufb00ective horizon. This yields a larger step size with the same theoretical guarantees, and allows to tackle problems with longer horizons in practice. Mini-batches. In the inner loop of Algorithm 2, instead of just one trajectory at a time, we can collect mini-batches of n independent trajectories. For instance, n \u22652 is required to employ the variance-reducing baselines discussed in Section 2. Moreover, a carefully picked mini-batch size n can make the early gradient estimates more stable, leading to an earlier stopping of the inner loop and a smaller batch size Nk. We leave the investigation of the optimal value of n to future work. Largest safe step size. The meta parameters of Algorithm 2 were selected to maximize a lower bound on the per-trajectory performance improvement. Although we believe this is the most theoretically justi\ufb01ed choice, we could gain some convergence speed by using a larger step size. From Theorem 10, it is easy to check that \u03b1 = 1/L is the largest constant step size we can use with our choice of adaptive batch size from Algorithm 2. We leave the investigation of alternative safe combinations of batch size and (possibly adaptive) step size to future work. Empirical Bernstein bound. The stopping condition of Algorithm 2 (line 11) is based on a Hoe\ufb00ding-style bound on the gradient estimation error. In the case of policies with bounded score function, such as Softmax policies (see Appendix B.2), we can use an empirical Bernstein bound instead (Maurer & Pontil, 2009). This requires some modi\ufb01cations to the algorithm, but yields a smaller adaptive batch size with the same safety guarantees. See Appendix E for details. Unfortunately, we cannot use the empirical Bernstein bound with the Gaussian policy because of its unbounded score function (see Appendix B.1). Weaker safety requirements. Monotonic improvement is a very strong requirement, so we do expect an algorithm with strict monotonic improvement guarantees like SPG to be very data-hungry and slow to converge. However, with little e\ufb00ort, Algorithm 2 can be modi\ufb01ed to handle weaker safety requirements. 
A common one is the baseline constraint (Garcelon et al., 2020a; Laroche, Trichelair, & Des Combes, 2019, e.g.,), where the performance of the policy is required to never be (signi\ufb01cantly) lower than the performance of a baseline policy \u03c0b. In a real safety-critical application, the reward could be designed so that policies with \f26 Smoothing Policies and Safe Policy Gradients performance greater than J(\u03c0b) are always safe. In other applications, \u03c0b can be an existing, reliable controller that the user wants to replace with an adaptive RL agent. In this case, assuming \u03c0\u03b80 = \u03c0b, the baseline constraint guarantees that the learning agent never performs worse than the original controller. In our numerical simulations of Section 7, we will consider a stronger version of the baseline constraint that we call milestone constraint. In this case, the agent\u2019s policy must never perform (signi\ufb01cantly) worse than the best performance observed so far. Formally, for all k \u22651: J(\u03b8k+1) \u2265\u03bb max j=1,2,...,k{J(\u03b8j)}, (68) where \u03bb \u2208[0, 1] is a user-de\ufb01ned signi\ufb01cance parameter. The idea is as follows: every time the agent reaches a new level of performance (a milestone), it should never do signi\ufb01cantly worse than that. When \u03bb = 1, this reduces to monotonic improvement. When \u03bb < 1, some amount of performance oscillation is allowed, but this relaxation can signi\ufb01cantly improve the learning speed. Of course, the user has full control on this trade-o\ufb00through the meta-parameter \u03bb. In Appendix E we show that variants of Algorithm 2 satisfy the milestone constraint (and other requirements, such as the baseline constraint) with probability 1 \u2212\u03b4 for given signi\ufb01cance \u03bb and failure probability \u03b4. We experiment with the milestone constraint in Section 7.2. 6 Related Works In this section, we discuss previous results on MI guarantees for policy gradient algorithms. The seminal work on monotonic performance improvement is by S. Kakade and Langford (2002). In this work, policy gradient approaches are soon dismissed because of their lack of exploration, although they guarantee MI in the limit of an in\ufb01nitesimally small step size. The authors hence focus on valuebased RL, proposing the Conservative Policy Iteration (CPI) algorithm, where the new policy \u03c0k+1is a mixture of the old policy \u03c0k and a greedy one \u03c0+ k . The guaranteed improvement of this new policy (S. Kakade & Langford, 2002, Theorem 4.1) depends on the coe\ufb03cient \u03b1 of this convex combination, which plays a similar role as the learning rate in our Theorem 8: J(\u03c0k+1) \u2212J(\u03c0k) \u2265 \u03b1 (1 \u2212\u03b3) E s\u223c\u03c1\u03c0k a\u223c\u03c0+ k [A\u03c0k(s, a)] \u2212 2\u03b12\u03b3\u03f5 (1 \u2212\u03b3)2(1 \u2212\u03b1), (69) where \u03f5 = maxs\u2208S | Ea\u223c\u03c0+ k (\u00b7|s) [A\u03c0k(s, a)] | and A\u03c0(s, a) = Q\u03c0(s, a) \u2212V \u03c0(s) denotes the advantage function of policy \u03c0. In fact, both lower bounds have a positive term that accounts for the expected improvement of the new policy w.r.t. the old one, and a penalization term due to the mismatch between the \fSmoothing Policies and Safe Policy Gradients 27 two. The CPI approach is re\ufb01ned by Pirotta, Restelli, Pecorino, and Calandriello (2013), who propose the Safe Policy Iteration (SPI) algorithm (see also Metelli, Pirotta, Calandriello, & Restelli, 2021). 
Speci\ufb01c performance improvement bounds for policy gradient algorithms were \ufb01rst provided by Pirotta, Restelli, and Bascetta (2013) by adapting previous results on policy iteration (Pirotta, Restelli, Pecorino, & Calandriello, 2013) to continuous MDPs. However, the penalty term can only be computed for shallow Gaussian policies (App. B.1) in practice. The bound for the exact framework is: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u03b1k \u2225\u2207J(\u03b8k)\u22252 \u2212\u03b12 k M 2R \u03c32(1 \u2212\u03b3)2 \u0012 |A| \u221a 2\u03c0\u03c3 + \u03b3 2(1 \u2212\u03b3) \u0013 \u00d7 \u2225\u2207J(\u03b8k)\u22252 1 , (70) where |A| denotes the volume of the action space. From Table 1, our bound for the same setting is (Corollary 9): J(\u03b8k+1)\u2212J(\u03b8k) \u2265\u03b1k \u2225\u2207J(\u03b8k)\u22252\u2212\u03b12 k M 2R \u03c32(1 \u2212\u03b3)2 \u0012 1 + 2\u03b3 \u03c0(1 \u2212\u03b3) \u0013 \u2225\u2207J(\u03b8k)\u22252 , which has the same dependence on the step size, the policy standard deviation \u03c3, the e\ufb00ective horizon (1 \u2212\u03b3)\u22121, the maximum reward R and the maximum feature norm M. Besides being more general, our penalty term does not depend on the problematic |A| term (the action space is theoretically unbounded for Gaussian policies) and replaces the l1 norm of (70) with the smaller l2 norm. Due to the di\ufb00erent constants, we cannot say our penalty is always smaller, but the change of norm could make a big di\ufb00erence in practice, especially for large parameter dimension d. Pirotta, Restelli, and Bascetta (2013) also study the approximate framework. However, albeit formulated in terms of the estimated gradient, their lower bound (Theorem 5.2) still pertains exact policy gradient updates, since \u03b8k+1 is de\ufb01ned as \u03b8k+\u03b1k\u2207J(\u03b8k). This easy-to-overlook observation makes our Theorem 10 the \ufb01rst rigorous monotonic improvement guarantee for stochastic policy gradient updates of the form \u03b8k+1 = \u03b8k + \u03b1k b \u2207J(\u03b8k). Pirotta, Restelli, and Bascetta (2013) use their results to design an adaptive step-size schedule for REINFORCE and G(PO)MDP, similarly to what we propose in this paper, but limited to Gaussian policies. Papini et al. (2017) rely on the same improvement lower bound (70) to design an adaptivebatch size algorithm, the most similar to our SPG. Again, their monotonic improvement guarantees are limited to shallow Gaussian policies. Another related family of performance improvement lower bounds, inspired once again by S. Kakade and Langford (2002), is that of TRPO. These are very general results that apply to arbitrary pairs of stochastic policies, although they are mostly used to construct policy gradient algorithms in practice. Specializing Theorem 1 by Schulman et al. (2015) to our setting and applying the \f28 Smoothing Policies and Safe Policy Gradients KL lower bound suggested by the authors we can get the following: J(\u03b8k+1) \u2212J(\u03b8k) \u2265 1 1 \u2212\u03b3 E s\u223c\u03c1\u03b8k a\u223c\u03c0\u03b8k+1 \u0002 A\u03b8k(s, a) \u0003 \u2212 2\u03b3R (1 \u2212\u03b3)3 max s\u2208S \b KL(\u03c0\u03b8k(\u00b7|s)\u2225\u03c0\u03b8k+1(\u00b7|s)) \t , (71) where \u03c0\u03b8 is a stochastic policy. Unfortunately, the lower bound for a policy gradient update (exact or stochastic) cannot be computed exactly. Approximations can lead to very good practical algorithms such as TRPO, but not to actually implementable algorithms with rigorous monotonic improvement guarantees like our SPG. 
Achiam, Held, Tamar, and Abbeel (2017) and Pajarinen, Thai, Akrour, Peters, and Neumann (2019) are able to remove some approximations, but not all.16 If we were to derive a computable worst-case lower bound starting from (71), we would get a result similar to (70). In fact, Pirotta, Restelli, and Bascetta (2013) explicitly upper-bound the KL divergence in their derivations, which is why the final result is limited to Gaussian policies. We overcome this difficulty by directly upper-bounding the curvature of the objective function (Lemma 5). Furthermore, Theorem 7 suggests that our theory is not limited to policy gradient updates. Arbitrary update directions are considered in (Papini, Battistello, & Restelli, 2020). Pirotta, Restelli, and Bascetta (2015) provide performance improvement lower bounds (Lemma 8) and adaptive-step algorithms for policy gradients under Lipschitz continuity assumptions on the MDP and the policy. Our assumptions on the environment are much weaker, since we only require boundedness of the reward. Intuitively, stochastic policies smooth out the irregularities of the environment in computing expected-return objectives. In turn, the results of Pirotta et al. (2015) also apply to deterministic policies. Cohen et al. (2018) provide a general safe policy improvement strategy that can also be applied to policy gradient updates. However, it requires maintaining and evaluating a set of policies per iteration instead of a single one. As mentioned, R. Yuan et al. (2021, Lemma 4.4) also study policy gradients with smoothing policies, providing an improved smoothness constant and proving Lipschitz continuity of the objective function. However, their main focus is the sample complexity of vanilla policy gradient.

16 This is not a critique of the TRPO algorithm per se. Besides the celebrated empirical results, TRPO is also theoretically justified (Neu, Jonsson, & Gómez, 2017), only not best as a monotonically improving gradient-descent algorithm (see also Shani, Efroni, & Mannor, 2020).

7 Experiments

In this section, we test our SPG algorithm on simulated control tasks. We first test Algorithm 2 with monotonic improvement guarantees on a small continuous-control problem. We then experiment with the milestone-constraint relaxation proposed in Section 5.1 on a classic RL benchmark: cart-pole balancing.

Fig. 1: Performance of SPG and AdaBatch (Papini et al., 2017) on the LQR task with Gaussian policy (expected performance vs. number of episodes). Results are averaged over 5 independent runs. The shaded areas correspond to 10 standard deviations. A marker corresponds to 100 policy updates.

7.1 Linear-Quadratic Regulator with Gaussian Policy

The first task is a 1-dimensional Linear-Quadratic Regulator (LQR, Dorato, Cerone, & Abdallah, 1994), a typical continuous-control benchmark. See Appendix F.1 for a detailed task specification. We use a Gaussian policy (Appendix B.1) that is linear in the state, π_θ(a|s) = N(a; θs, σ²). The task horizon is T = 10 and we use γ = 0.9 as a discount factor. The policy mean parameter is initialized to θ₀ = 0 and the standard deviation is fixed to σ = 1. For this task, the maximum reward (in absolute value) is R = 1 and the only feature is the state itself, giving M = 1.
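Plugging these quantities into the Gaussian column of Table 1, with the improved constant L⋆ of Equation (33), gives the following quick arithmetic check:

```latex
L^{\star} \;=\; \frac{2 M^{2} R}{\sigma^{2}(1-\gamma)^{2}}
          \;=\; \frac{2 \cdot 1^{2} \cdot 1}{1^{2}\,(1-0.9)^{2}}
          \;=\; \frac{2}{0.01} \;=\; 200 .
```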
Hence, the smoothness constant L⋆ = 200 is easily computed (see Table 1). Similarly, the error bound can be retrieved from Table 2. We compare SPG (Algorithm 2) with an existing adaptive-batch-size policy gradient algorithm for Gaussian policies (Papini et al., 2017), discussed in the previous section and labeled AdaBatch in the plots. SPG is run with a mini-batch size of n = 100 (see Section 5.1), and AdaBatch (in the version with Bernstein's inequality, as recommended in the original paper) with an initial batch size of N₀ = 100. Both use the adaptive confidence schedule δ_k = δ/(k(k+1)) discussed in Section 5, with an overall failure probability of δ = 0.05. We also consider SPG with a twice-as-large step size α = 1/L⋆, as discussed in Section 5.1.

Figure 1 shows the expected performance of the algorithms on the LQR task. For this task, we are able to compute the expected performance in closed form given the policy parameters (Peters, 2002). This allows us to filter out the oscillations due to the stochasticity of policy and environment, focusing on actual (expected) performance oscillations. It is also why the variability among different seeds is so small (note that, for this figure, shaded areas correspond to 10 standard deviations; they correspond to a single standard deviation in the other figures). Performance is plotted against the total number of collected trajectories for fair comparison. The distribution of policy updates can be deduced from the markers. We can see that indeed all the safe PG algorithms exhibit monotonic improvement. SPG converges faster than AdaBatch. This is mostly due to the larger step size of SPG (we observed that the step size of SPG was more than 100 times larger than that of AdaBatch in most of the updates). This allows SPG to converge faster even with fewer policy updates. The variant of SPG with a larger step size (α = 1/L⋆) converges faster to a good policy, but the original version from Algorithm 2 achieves higher performance in the long run. This indicates that maximizing the lower bound on per-trajectory performance improvement from Theorem 10 is indeed meaningful.

Fig. 2: Batch size of SPG and AdaBatch on the LQR task (batch size vs. number of episodes). Results are averaged over 5 independent runs. The shaded areas correspond to one standard deviation. A marker corresponds to 100 policy updates.

Figure 2 shows the batch size of the different algorithms. The batch size of SPG is mostly larger than that of AdaBatch. From Section 6 we know that the monotonic improvement guarantee of SPG is more rigorous, so a larger batch size is justified. Notice also that the batch size of SPG is smaller than that of AdaBatch in the early iterations, suggesting that the former is more adaptive.

7.2 Cart-Pole with Softmax Policy

The second task is cart-pole (Barto, Sutton, & Anderson, 1983). We use the implementation from openai/gym, which has 4-dimensional continuous states and finite actions, a ∈ {1, 2}. See Appendix F.2 for further details.
The policy is Softmax (Appendix B.2), linear in the state: π_θ(a|s) ∝ exp(θ_a⊤ s), with a separate parameter vector for each action (θ = [θ₁; θ₂]). We use a fixed temperature τ = 1, initial policy parameters set to zero (this corresponds to a uniform policy), and γ = 0.9 as a discount factor. For SPG, we employ all the practical variants proposed in Section 5.1. In particular, since the Softmax policy has a bounded score function, we can use the empirical Bernstein bound. Note that we could not have done the same for the LQR task, since the score function of the Gaussian policy is unbounded (see Appendix C). Moreover, we consider the relaxed milestone constraint for different values of the significance parameter, λ ∈ {0.1, 0.2, 0.4}. The overall failure probability is always δ = 0.2, the mini-batch size is n = 100, and the step size is α = 1/L⋆.17 We compare with GPOMDP with the same step size but a fixed batch size of N = 100, which comes with no safety guarantees and corresponds to λ = 0.

Fig. 3: Performance of GPOMDP and SPG (for different values of the significance parameter λ) on the cart-pole task with Softmax policy (performance vs. number of episodes). Results are averaged over 5 independent runs. The shaded areas correspond to one standard deviation. A marker corresponds to 1000 policy updates.

In Figure 3 we plot the performance against the total number of collected trajectories. As expected, a more relaxed constraint yields faster convergence. However, no significant performance oscillations are observed, not even in the case of GPOMDP, suggesting that the choice of meta-parameters is still over-conservative. In Figure 4 (left) we report the evolution of the batch size of SPG during the learning process. Note how, in this case, the batch size seems to converge to a constant value. In Figure 4 (right) we illustrate the milestone constraint. The solid line is the performance of SPG with λ = 0.1, while the dotted line is the performance lower-threshold enforced by the milestone constraint, representing 90% of the highest performance achieved so far. As desired, the actual performance never falls below the threshold.

17 Although both theory and our LQR experiments indicate that α = 1/(2L⋆) is ultimately the best choice, we prioritize convergence speed over long-term performance on this larger task.

Fig. 4: Further results for SPG on the cart-pole task. On the left, the batch size is plotted against the total number of trajectories. A marker corresponds to 1000 policy updates. On the right, the performance at each policy update (solid line) is compared with the performance threshold (dashed line) when λ = 0.1. In both plots, shaded areas correspond to one standard deviation.
We have exploited this property to select meta-parameters for actor-only policy gradient that guarantee monotonic performance improvement. We have shown that an adaptive batch size can be used in combination with a constant step size for improved efficiency, especially in the early stages of learning. We have designed a monotonically improving policy gradient algorithm, called Safe Policy Gradient (SPG), with adaptive batch size. We have shown how SPG can also be applied to weaker performance-improvement constraints. Finally, we have tested SPG on simulated control tasks. Although the safety motivations are clearly of practical interest, our contribution is mostly theoretical. The meta-parameters proposed in Section 4 and used in SPG are based on worst-case problem-dependent constants that are known and easy to compute, but can be very large. This would lead to overconservative behavior in most problems of interest. However, we believe that this work provides a solid starting point to develop safe and efficient policy gradient algorithms that are rooted in theory. To conclude, we propose some possible ideas for future work aimed at closing this gap between theory and practice. While we used the empirical Bernstein bound to characterize the gradient estimation error for Softmax policies, the same cannot be done for Gaussian policies due to their unbounded score function. Tighter concentration inequalities should be studied for this case. The convergence rate of SPG should also be studied. The main challenge here is the growing batch size. The numerical simulations of Section 7.1 suggest that the growth is sublinear. Moreover, we have observed convergence to a fixed batch size under the weaker milestone constraint in Section 7.2. It is also worth investigating whether SPG can be combined with stochastic variance-reduction techniques (e.g., Papini et al., 2018; H. Yuan, Lian, Liu, & Zhou, 2020). Convergence to global optima should also be investigated, as is now common in the policy optimization literature (Agarwal et al., 2020; Bhandari & Russo, 2019; Zhang et al., 2020). Actor-critic algorithms (Konda & Tsitsiklis, 1999) are more widely used than actor-only algorithms in practice (e.g., Haarnoja, Zhou, Abbeel, & Levine, 2018) due to their reduced variance. Thus, extending our improvement guarantees to this class of algorithms is also important. The main challenge lies in handling the bias due to the critic. A promising first step is to consider compatible critics that yield unbiased gradient estimates (Konda & Tsitsiklis, 1999; Sutton et al., 2000). Although the class of smoothing policies is very broad, we have restricted our attention to Gaussian and Softmax policies with given features. Other policy classes, such as beta policies (Chou, Maturana, & Scherer, 2017), should be considered. Most importantly, deep policies, which also learn the features from data, should be considered, especially given their success in practice (Duan et al., 2016). See Appendix B.3 for a brief discussion. Other possible extensions include generalizing the monotonic improvement guarantees to other concepts of safety, such as learning under constraints, or risk-averse RL (Bisi, Sabbioni, Vittori, Papini, & Restelli, 2020). Finally, the conservative approach adopted in this work could prevent exploration, making some tasks very hard to learn. We studied the case of Gaussian policies with adaptive standard deviation in Papini et al. (2020).
Future work should consider the trade-o\ufb00between safety, e\ufb03ciency and exploration in greater generality. Acknowledgments. The authors would like to thank Gergely Neu for his suggestions on how to improve the Safe Policy Gradient algorithm and its theoretical analysis. \f34 Smoothing Policies and Safe Policy Gradients Appendix A Omitted Proofs A.1 Markov Decision Processes Lemma 15 For all \u03c0 : S \u2192\u2206A and s0 \u2208S: \u03c1\u03c0 s0(\u00b7) = (1 \u2212\u03b3)1 {s = s0} + \u03b3 Z S \u03c1\u03c0 s0(s)p(\u00b7|s) ds. Proof \u03b3 Z S \u03c1\u03c0 s0(s)p(\u00b7|s) ds = \u03b3 Z S (1 \u2212\u03b3) \u221e X t=0 \u03b3tpt \u03c0(s|s0)p\u03c0(\u00b7|s) ds = \u03b3(1 \u2212\u03b3) \u221e X t=0 \u03b3t Z S pt \u03c0(s|s0)p\u03c0(\u00b7|s) ds = (1 \u2212\u03b3) \u221e X t=0 \u03b3t+1 Z S pt \u03c0(s|s0)p\u03c0(\u00b7|s) ds = (1 \u2212\u03b3) \u221e X t=0 \u03b3t+1pt+1 \u03c0 (\u00b7|s0) = (1 \u2212\u03b3) \u221e X t=1 \u03b3tpt \u03c0(\u00b7|s0) = (1 \u2212\u03b3) \u221e X t=0 \u03b3tpt \u03c0(\u00b7|s0) \u2212(1 \u2212\u03b3) = \u03c1\u03c0 s0(\u00b7) \u2212(1 \u2212\u03b3)1 {s = s0} . \u25a1 Proposition 16 Let \u03c0 be any policy and f be any integrable function on S satisfying the following recursive equation: f(s) = g(s) + \u03b3 Z S p\u03c0(s\u2032|s)f(s\u2032) ds\u2032, for all s \u2208S and some integrable function g on S. Then: f(s) = 1 1 \u2212\u03b3 Z S \u03c1\u03c0 s (s\u2032)g(s\u2032) ds\u2032, for all s \u2208S. Proof Z S \u03c1\u03c0 s (s\u2032)g(s\u2032) ds\u2032 = Z S \u03c1\u03c0 s (s\u2032)f(s\u2032) ds\u2032 \u2212 Z S \u03c1\u03c0 s (s\u2032)\u03b3 Z S p\u03c0(s\u2032\u2032|s\u2032)f(s\u2032\u2032) ds\u2032\u2032 ds\u2032 = Z S \u03c1\u03c0 s (s\u2032)f(s\u2032) ds\u2032 \u2212 Z S \u03b3 Z S \u03c1\u03c0 s (s\u2032)p\u03c0(s\u2032\u2032|s\u2032) ds\u2032f(s\u2032\u2032) ds\u2032\u2032 \fSmoothing Policies and Safe Policy Gradients 35 = Z S \u03c1\u03c0 s (s\u2032)f(s\u2032) ds\u2032 \u2212 Z S \u0000\u03c1\u03c0 s (s\u2032\u2032) \u2212(1 \u2212\u03b3)1 \b s\u2032\u2032 = s \t\u0001 f(s\u2032\u2032) ds\u2032\u2032 (A1) = (1 \u2212\u03b3)f(s), where (A1) is from Lemma 15. \u25a1 A.2 Lipschitz-Smooth Functions The following results, reported in Section 2, are well known in the literature (Nesterov, 1998), but we also report proofs for the sake of completeness: Proposition 17 Let X be a convex subset of Rd and f : X \u2192R be a twicedi\ufb00erentiable function. If the Hessian is uniformly bounded in spectral norm by L > 0, i.e., supx\u2208X \r \r \r\u22072f(x) \r \r \r 2 \u2264L, then f is L-smooth. Proof Let x, x\u2032 \u2208X, h := x\u2032 \u2212x and g : [0, 1] \u2192R, g(\u03bb) \u2261\u2207xf(x + \u03bbh). Convexity of X guarantees x+\u03bbh \u2208X for \u03bb \u2208[0, 1]. Twice-di\ufb00erentiability of f implies \u2207xf is continuous, which in turn implies g is continuous. From the Fundamental Theorem of Calculus: \u2207xf(x\u2032) \u2212\u2207xf(x) = \u2207xf(x + h) \u2212\u2207xf(x) = g(1) \u2212g(0) = Z 1 0 g\u2032(\u03bb) d\u03bb = Z 1 0 h\u22a4\u22072 xf(x + \u03bbh) d\u03bb. (A2) Hence: \r \r\u2207xf(x\u2032) \u2212\u2207xf(x) \r \r = \r \r \r \r \r Z 1 0 h\u22a4\u22072 xf(x + \u03bbh) d\u03bb \r \r \r \r \r 2 \u2264 Z 1 0 \r \r \r\u22072 xf(x + \u03bbh)h \r \r \r 2 d\u03bb \u2264 Z 1 0 \r \r \r\u22072 xf(x + \u03bbh) \r \r \r 2 \u2225h\u22252 d\u03bb (A3) \u2264L \u2225h\u22252 = L \r \rx\u2032 \u2212x \r \r 2 , (A4) where (A3) is from the consistency of induced norms, i.e., \u2225Ax\u2225p \u2264\u2225A\u2225p \u2225x\u2225p. 
\u25a1 Proposition 18 (Quadratic Bound) Let X be a convex subset of Rd and f : X \u2192R be an L-smooth function. Then, for every x, x\u2032 \u2208X: \f \ff(x\u2032) \u2212f(x) \u2212 x\u2032 \u2212x, \u2207f(x) \u000b\f \f \u2264L 2 \r \rx\u2032 \u2212x \r \r2 , (18) where \u27e8\u00b7, \u00b7\u27e9denotes the dot product. \f36 Smoothing Policies and Safe Policy Gradients Proof Let x, x\u2032 \u2208X, h := x\u2032 \u2212x and g : [0, 1] \u2192R, g(\u03bb) \u2261f(x + \u03bbh). Convexity of X guarantees x+\u03bbh \u2208X for \u03bb \u2208[0, 1]. Lipschitz smoothness implies continuity of f, which in turn implies g is continuous. From the Fundamental Theorem of Calculus: f(x\u2032) \u2212f(x) = g(1) \u2212g(0) = Z 1 0 g\u2032(\u03bb) d\u03bb. (A5) Hence: \f \ff(x\u2032) \u2212f(x) \u2212 x\u2032\u2212x, \u2207xf(x)\u27e9| = \f \f \f \f \f Z 1 0 g\u2032(\u03bb) d\u03bb \u2212\u27e8h, \u2207xf(x)\u27e9 \f \f \f \f \f = \f \f \f \f \f Z 1 0 \u27e8h, \u2207xf(x + \u03bbh)\u27e9d\u03bb \u2212\u27e8h, \u2207xf(x)\u27e9 \f \f \f \f \f = \f \f \f \f \f Z 1 0 \u27e8h, \u2207xf(x + \u03bbh) \u2212\u2207xf(x)\u27e9d\u03bb \f \f \f \f \f \u2264 Z 1 0 |\u27e8h, \u2207xf(x + \u03bbh) \u2212\u2207xf(x)\u27e9| d\u03bb \u2264 Z 1 0 \u2225\u2207xf(x + \u03bbh) \u2212\u2207xf(x)\u22252 \u2225h\u22252 d\u03bb (A6) \u2264L \u2225h\u22252 2 Z 1 0 \u03bb d\u03bb (A7) = L 2 \r \rx\u2032 \u2212x \r \r2 2 , where (A6) is from the Cauchy-Schwartz inequality and (A7) is from the Lipschitz smoothness of f. \u25a1 A.3 Smoothing Policies and Di\ufb00erentiability Our proofs of the results of Section 3.2 rely on the interchange of integrals (w.r.t. states and actions) and derivatives (w.r.t. policy parameters). In the policy gradient literature (cf. S. Kakade, 2001; Konda & Tsitsiklis, 1999; Sutton et al., 2000), these are typically justi\ufb01ed by assuming the derivatives of the policy are bounded uniformly over states and actions, that is: \f \f \f \f \u2202 \u2202\u03b8i \u03c0\u03b8(a|s) \f \f \f \f \u2264C1, \f \f \f \f \u22022 \u2202\u03b8i\u2202\u03b8j \u03c0\u03b8(a|s) \f \f \f \f \u2264C2, (A8) for all s \u2208S, a \u2208A, \u03b8 \u2208\u0398 \u2286Rd, and i, j = 1, 2, . . . , d. The policy gradient itself originally relies on this assumption (Konda & Tsitsiklis, 1999), although weaker requirements are possible (see Bhandari & Russo, 2019, Section 5.1, for a recent discussion). The main problem with (A8) is that the uniform bounds may depend on huge quantities such as the diameter of the parameter space. Even worse, for (linear) Gaussian policies, the \ufb01rst derivative is unbounded: \u2207\u03c0\u03b8(a|s) = \u03c0\u03b8(a|s)a \u2212\u03b8\u22a4\u03c6(s) \u03c32 \u03c6(s), (A9) \fSmoothing Policies and Safe Policy Gradients 37 even when \u03c6(s) is bounded, since a \u2208A = R. However, these policies are smoothing (see Appendix B.1). The following application of the Leibniz Integral Rule (cf. Klenke, 2013, Theorem 6.28) shows that our smoothing-policy assumption (De\ufb01nition 1), can replace the stronger (A8) in di\ufb00erentiating expectations: Lemma 19 Let {\u03c0\u03b8|\u03b8 \u2208\u0398}, be a class of smoothing policies and f : S \u00d7 A \u2192 R be any function such that supa\u2208A |f(s, a)| is integrable on S. 
Then R S R A \u03c0\u03b8(a|s)f(s, a) da ds is twice di\ufb00erentiable and: \u2202 \u2202\u03b8i Z S Z A \u03c0\u03b8(a|s)f(s, a) da ds = Z S Z A \u2202 \u2202\u03b8i \u03c0\u03b8(a|s)f(s, a) da ds, (A10) \u22022 \u2202\u03b8i\u2202\u03b8j Z S Z A \u03c0\u03b8(a|s)f(s, a) da ds = Z S Z A \u22022 \u2202\u03b8i\u2202\u03b8j \u03c0\u03b8(a|s)f(s, a) da ds, (A11) for all i, j = 1, 2, . . . , d. Proof Let Bs = supa\u2208A |f(s, a)| and \ufb01x an index i \u2264d. Let: us(\u03b8) = Z A \u03c0\u03b8(a|s)f(s, a) da. (A12) By de\ufb01nition: \u2202 \u2202\u03b8i us(\u03b8) = lim h\u21920 u(\u03b8 + hei) \u2212u(\u03b8) h , (A13) where ei is the element of the canonical basis of Rd corresponding to the i-th coordinate. By linearity of integration: \u2202 \u2202\u03b8i us(\u03b8) = lim h\u21920 Z A \u03c0\u03b8+hei(a|s) \u2212\u03c0\u03b8(a|s) h f(s, a) | {z } g\u03b8(s,a) da. (A14) By assumption, \u03c0\u03b8(a|s) is di\ufb00erentiable, so it is continuous. Fix an h \u2208R. By the mean value theorem, there exist a \u03b8 on the segment connecting \u03b8 and \u03b8 + hei such that: \u03c0\u03b8+hei(a|s) \u2212\u03c0\u03b8(a|s) h = \u2202 \u2202\u03b8i \u03c0\u03b8(a|s) \f \f \f \f \f \u03b8=\u03b8 . (A15) Hence, by upper bounding the l\u221enorm with the l2 norm: |g\u03b8(s, a)| \u2264Bs \r \r\u2207\u03b8\u03c0\u03b8(a|s) \r \r . (A16) By the smoothing-policy assumption, \u0398 is convex, so \u03b8 \u2208\u0398, and again by the smoothing-policy assumption: Z A \r \r\u2207\u03b8\u03c0\u03b8(a|s) \r \r da \u2264 Z A \u03c0\u03b8(a|s) \r \r\u2207\u03b8 log \u03c0\u03b8(a|s) \r \r da \u2264\u03be1, (A17) showing that g\u03b8(s, a) is bounded by a function that is integrable w.r.t. a. By the dominated convergence theorem, we can interchange the limit and the integral in (A14) to obtain: \u2202 \u2202\u03b8i us(\u03b8) = Z A lim h\u21920 \u03c0\u03b8+hei(a|s) \u2212\u03c0\u03b8(a|s) h f(s, a) da = Z A \u2202 \u2202\u03b8i \u03c0\u03b8(a|s)f(s, a) da. \f38 Smoothing Policies and Safe Policy Gradients By (A17) and Holder\u2019s inequality, |\u2202/\u2202\u03b8ius(\u03b8)| \u2264Bs\u03be1, which is integrable on S. We can then use the same interchange argument to show that: \u2202 \u2202\u03b8i Z S us(\u03b8) ds = Z S \u2202 \u2202\u03b8i us(\u03b8) ds = Z S Z A \u2202 \u2202\u03b8i \u03c0\u03b8(a|s)f(s, a) da. (A18) For the second derivative, we can just repeat the whole argument from the previous paragraph on \u2202 \u2202\u03b8i us(\u03b8). Continuity of the integrand, which is necessary to apply the mean value theorem, follows from twice di\ufb00erentiability of the policy. To apply the dominated convergence theorem, we use the following: Z A \r \r \r\u22072\u03c0\u03b8(a|s) \r \r \r da = Z A \u03c0\u03b8(a|s) \r \r \r\u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s) + \u22072 log \u03c0\u03b8(a|s) \r \r \r da \u2264 Z A \u03c0\u03b8(a|s) \u2225\u2207log \u03c0\u03b8(a|s)\u22252 da + Z A \u03c0\u03b8(a|s) \r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r da \u2264\u03be2 + \u03be3, by the triangular inequality and the smoothing-policy assumption. \u25a1 With some work, one can use Lemma 19 to justify all the interchanges of di\ufb00erentiation and integrals from Section 3.2 and Appendix A.4, as the original derivations (S. Kakade, 2001; Sutton et al., 2000) were justi\ufb01ed by (A8). A.4 Policy Hessian In the following, the interchange of di\ufb00erentiation and integrals is justi\ufb01ed by our smoothing-policy assumption. See Appendix A.3 for details. 
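As a quick numerical illustration of this interchange (Lemma 19) — a sketch added for illustration, not part of the original analysis — one can check the log-derivative identity ∂/∂θ E_{a∼π_θ}[f(a)] = E_{a∼π_θ}[f(a) ∂/∂θ log π_θ(a|s)] against a finite-difference estimate for a scalar Gaussian policy with a single bounded feature; the test function f and all numerical values below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def score_identity_check(theta=0.5, sigma=0.3, phi=1.0, n=200_000, h=1e-3):
    # Test function f(a); any smooth function with moderate growth works for this check.
    f = lambda a: np.sin(a) + a ** 2

    # Log-derivative (score-function) form: E_a[ f(a) * d/dtheta log N(a | theta*phi, sigma^2) ].
    a = rng.normal(theta * phi, sigma, size=n)
    score = (a - theta * phi) / sigma ** 2 * phi
    grad_score = np.mean(f(a) * score)

    # Finite-difference reference for d/dtheta E_a[f(a)], using common random numbers.
    z = rng.normal(size=n)
    grad_fd = np.mean(f((theta + h) * phi + sigma * z) - f((theta - h) * phi + sigma * z)) / (2 * h)

    return grad_score, grad_fd

print(score_identity_check())  # the two numbers should agree up to Monte-Carlo error
```

Up to Monte-Carlo error the two estimates coincide, which is exactly the interchange of derivative and integral that Lemma 19 licenses for smoothing policies.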
Proposition 20 Let \u03c0\u03b8 be a smoothing policy. The Hessian of the performance measure is: \u22072J(\u03b8) = 1 1 \u2212\u03b3 E s\u223c\u03c1\u03b8 a\u223c\u03c0\u03b8(\u00b7|s) h \u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) + \u0010 \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s) + \u22072 log \u03c0\u03b8(a|s) \u0011 Q\u03b8(s, a) i . Proof We \ufb01rst compute the Hessian of the state-value function: \u22072V \u03b8(s) = \u22072 Z A \u03c0\u03b8(a|s)Q\u03b8(s, a) da = Z A \u2207 h \u03c0\u03b8(a|s) \u0010 \u2207\u22a4log \u03c0\u03b8(a|s)Q\u03b8(s, a) + \u2207\u22a4Q\u03b8(s, a) \u0011i da (A19) = Z A \u03c0\u03b8(a|s) h (\u22072 log \u03c0\u03b8(a|s) + \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s))Q\u03b8(s, a) +\u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) + \u22072Q\u03b8(s, a) i da (A20) = Z A \u03c0\u03b8(a|s) \u0014 (\u22072 log \u03c0\u03b8(a|s) + \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s))Q\u03b8(s, a) +\u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) \fSmoothing Policies and Safe Policy Gradients 39 +\u22072 \u0012 r(s, a) + \u03b3 Z S p(s\u2032|s, a)V \u03b8(s\u2032) ds\u2032 \u0013\u0015 da (A21) = Z A \u03c0\u03b8(a|s) h (\u22072 log \u03c0\u03b8(a|s) + \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s))Q\u03b8(s, a) +\u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) i da + \u03b3 Z S p\u03b8(s\u2032|s)\u22072V \u03b8(s\u2032) ds\u2032 = g(s) + \u03b3 1 \u2212\u03b3 Z S \u03c1\u03b8 s (s\u2032)g(s\u2032) ds\u2032, (A22) where g(s) = Z A \u03c0\u03b8(a|s) h\u0010 \u2207log \u03c0\u03b8(a|s)\u2207\u22a4log \u03c0\u03b8(a|s) + \u22072 log \u03c0\u03b8(a|s) \u0011 Q\u03b8(s, a) +\u2207log \u03c0\u03b8(a|s)\u2207\u22a4Q\u03b8(s, a) + \u2207Q\u03b8(s, a)\u2207\u22a4log \u03c0\u03b8(a|s) i da, (A19) is from the log-derivative trick, (A20) is from another application of the logderivative trick, (A21) is from (5), and (A22) is from Lemma 1 with \u22072V \u03b8(s\u2032) as the recursive term. Computing the Hessian of the performance measure is then trivial: \u22072J(\u03b8) = \u22072 Z S \u00b5(s)V \u03b8(s) ds = Z S \u00b5(s)\u22072V \u03b8(s) ds, (A23) where the \ufb01rst equality is from (9). Combining (A22), (A23) and (10) we obtain the statement of the lemma. \u25a1 A.5 Auxiliary Lemmas Lemma 21 For any a, b > 0 such that ab > 1, a su\ufb03cient condition for x \u2265a log(bx) is x \u22652a log(ab). Proof This can be deduced from the properties of the Lambert function. However, it is easier to verify it directly. Letting x = 2a log(ab), the \ufb01rst inequality becomes: 2a log(ab) \u2265a log(2ab log(ab)) = a log(ab) + a log(2 log(ab)), (A24) and log(2 log(y)) \u2264log y for any y > 1. Finally, notice that x\u2212a log(bx) is increasing for x > a, and 2a log(ab) > a for ab > 1. \u25a1 Lemma 22 (Optional Stopping) Let (Xt)t\u22651 be a d-dimensional vector-valued martingale di\ufb00erence sequence and \u03c4 be a stopping time, both with respect to a \ufb01ltration (Ft)t\u22650. If E[\u03c4] < \u221eand there exists c \u22650 such that E[\u2225Xt\u2225|Ft\u22121] \u2264c for every t \u22651, then E[X\u03c4] = 0. Proof Consider any martingale Yt such that Xt = Yt \u2212Yt\u22121. 
We are going to apply Doob\u2019s optional stopping theorem (See Thm 12.5.9 from Grimmett & Stirzaker, 2020)18 to each element Y (i) t of Yt, where i = 1, . . . , d. Su\ufb03cient conditions for E[Y (i) \u03c4 ] = E[Y (i) 0 ] are: 18In the theorem, it is also required that P(\u03c4 < \u221e) = 1, but this is implied by E[\u03c4] < \u221esince \u03c4 is nonnegative. \f40 Smoothing Policies and Safe Policy Gradients 1. E[\u03c4] < \u221e, 2. E[|Y (i) t+1 \u2212Y (i) t | | Ft] \u2264c for all t \u22650. The \ufb01rst one is by hypothesis. For the second one: max i\u2208[d] E[|Y (i) t+1 \u2212Y (i) t | | Ft] = max i\u2208[d] E[|X(i) t+1| | Ft] \u2264E[\u2225Xt+1\u2225\u221e| | Ft] \u2264E[\u2225Xt\u22252 | | Ft] \u2264c, (A25) where the last inequality is by hypothesis. So, by optional stopping, E[Y (i) \u03c4 ] = E[Y (i) 0 ] for all i \u2208[d]. We can repeat the same argument for \u03c4 \u22121. Hence E[X\u03c4] = E[Y\u03c4] \u2212 E[Y\u03c4\u22121] = E[Y0] \u2212E[Y0] = 0. \u25a1 Appendix B Common Smoothing Policies In this section, we show that some of the most commonly used parametric policies are smoothing and provide the corresponding Lipschitz constants for the policy gradient. B.1 Gaussian policy Consider a scalar-action, \ufb01xed-variance, shallow Gaussian policy:19 \u03c0\u03b8(a|s) = N \u0010 a|\u03b8\u22a4\u03c6(s), \u03c32\u0011 = 1 \u221a 2\u03c0\u03c3 exp \uf8f1 \uf8f2 \uf8f3\u22121 2 a \u2212\u03b8\u22a4\u03c6(s) \u03c3 !2\uf8fc \uf8fd \uf8fe, (B26) where \u03b8 \u2208\u0398 \u2286Rd, \u03c3 > 0 is the standard deviation, and \u03c6 : S \u2192Rd is a vector-valued feature function that is bounded in Euclidean norm, i.e., sups\u2208S \u2225\u03c6(s)\u2225< \u221e. This common policy turns out to be smoothing. Lemma 23 Let \u03a0\u0398 be the set of Gaussian policies de\ufb01ned in (B26), with parameter set \u0398, standard deviation \u03c3 and feature function \u03c6. Let M be a non-negative constant such that sups\u2208S \u2225\u03c6(s)\u2225\u2264M. Then \u03a0\u0398 is (\u03be1, \u03be2, \u03be3)-smoothing with the following constants: \u03be1 = 2M \u221a 2\u03c0\u03c3 , \u03be2 = \u03be3 = M2 \u03c32 . The corresponding Lipschitz constant of the policy gradient is: L = 2M2R \u03c32(1 \u2212\u03b3)2 \u0012 1 + 2\u03b3 \u03c0(1 \u2212\u03b3) \u0013 . (B27) 19In this section, \u03c0 with no subscript always denotes the mathematical constant. \fSmoothing Policies and Safe Policy Gradients 41 Proof Fix a \u03b8 \u2208\u0398. Let x \u2261a\u2212\u03b8\u22a4\u03c6(s) \u03c3 . Note that A = R and da = \u03c3 dx. We need the following derivatives: \u2207log \u03c0\u03b8(a|s) = \u03c6(s) \u03c3 x, (B28) \u22072 log \u03c0\u03b8(a|s) = \u2212\u03c6(s)\u03c6(s)\u22a4 \u03c32 . (B29) First, we compute \u03be1: E a\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u2225\u2207log \u03c0\u03b8(a|s)\u2225 \u0003 = Z R 1 \u221a 2\u03c0\u03c3 e\u2212x2/2 \r \r \r \r \u03c6(s) \u03c3 x \r \r \r \r \u03c3 dx \u2264 M \u221a 2\u03c0\u03c3 Z R e\u2212x2/2|x| dx = 2M \u221a 2\u03c0\u03c3 := \u03be1. (B30) Then, we compute \u03be2: E a\u223c\u03c0\u03b8(\u00b7|s) h \u2225\u2207log \u03c0\u03b8(a|s)\u22252i = Z R 1 \u221a 2\u03c0\u03c3 e\u2212x2/2 \r \r \r \r \u03c6(s) \u03c3 x \r \r \r \r 2 \u03c3 dx \u2264 M2 \u221a 2\u03c0\u03c32 Z R e\u2212x2/2x2 dx = M2 \u03c32 := \u03be2. 
(B31) Finally, we compute \u03be3: E a\u223c\u03c0\u03b8(\u00b7|s) h\r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r i = Z R 1 \u221a 2\u03c0\u03c3 e\u2212x2/2 \r \r \r \r \u03c6(s) \u03c3 x \r \r \r \r 2 \u03c3 dx \u2264 M2 \u221a 2\u03c0\u03c32 Z R e\u2212x2/2x2 dx = M2 \u03c32 := \u03be3. (B32) From these constants, the Lipschitz constant of the policy gradient is easily computed (Lemma 6). \u25a1 B.2 Softmax policy Consider a \ufb01xed-temperature, shallow Softmax policy for a discrete action space: \u03c0\u03b8(a|s) = exp n \u03b8\u22a4\u03c6(s,a) \u03c4 o P a\u2032\u2208A exp n \u03b8\u22a4\u03c6(s,a\u2032) \u03c4 o, (B33) where \u03b8 \u2208\u0398 \u2286Rd, \u03c4 > 0 is the temperature, and \u03c6 : S \u00d7 A \u2192Rd is a vector-valued feature function that is bounded in Euclidean norm, i.e., sups\u2208S,a\u2208A \u2225\u03c6(s, a)\u2225< \u221e. This policy is smoothing. \f42 Smoothing Policies and Safe Policy Gradients Lemma 24 Let \u03a0\u0398 be the set of Softmax policies de\ufb01ned in (B33), with parameter set \u0398, temperature \u03c4 and feature function \u03c6. Let M be a non-negative constant such that sups\u2208S,a\u2208A \u2225\u03c6(s, a)\u2225\u2264M. Then, \u03a0\u0398 is (\u03be1, \u03be2, \u03be3)-smoothing with the following constants: \u03be1 = 2M \u03c4 , \u03be2 = 4M2 \u03c42 , \u03be3 = 2M2 \u03c42 . The corresponding Lipschitz constant of the policy gradient is: L = 2M2R \u03c42(1 \u2212\u03b3)2 \u0012 3 + 4\u03b3 1 \u2212\u03b3 \u0013 . (B34) Proof In this case, we can simply bound \u2225\u2207log \u03c0\u03b8(a|s)\u2225and \r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r uniformly over states and actions. The smoothing conditions follow trivially. We need the following derivatives: \u2207log \u03c0\u03b8(a|s) = 1 \u03c4 \u0012 \u03c6(s, a) \u2212 E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u03c6(s, a\u2032) \u0003\u0013 , (B35) \u22072 log \u03c0\u03b8(a|s) = 1 \u03c42 E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \" \u03c6(s, a\u2032) \u0012 E a\u2032\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u03c6(s, a\u2032\u2032) \u0003 \u2212\u03c6(s, a\u2032) \u0013\u22a4# . (B36) First, we compute \u03be1 and \u03be2: \u2225\u2207log \u03c0\u03b8(a|s)\u2225\u22641 \u03c4 \u2225\u03c6(s, a)\u2225+ \r \r \r \r E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u03c6(s, a\u2032) \u0003\r \r \r \r ! \u22642M \u03c4 , (B37) hence sups\u2208S Ea\u223c\u03c0\u03b8 \u0002 \u2225\u2207log \u03c0\u03b8(a|s)\u2225 \u0003 \u2264 2M \u03c4 := \u03be1 and sups\u2208S Ea\u223c\u03c0\u03b8 h \u2225\u2207log \u03c0\u03b8(a|s)\u22252i \u22644M 2 \u03c4 2 := \u03be2. Finally, we compute \u03be3: \r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r \u22641 \u03c42 E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \"\r \r \r \r \r\u03c6(s, a\u2032) \u0012 E a\u2032\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u03c6(s, a\u2032\u2032) \u0003 \u2212\u03c6(s, a\u2032) \u0013\u22a4\r \r \r \r \r # \u22641 \u03c42 E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \" \r \r\u03c6(s, a\u2032) \r \r \r \r \r \r E a\u2032\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002 \u03c6(s, a\u2032\u2032) \u2212\u03c6(s, a\u2032) \u0003\r \r \r \r # \u22641 \u03c42 E a\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0014\r \r\u03c6(s, a\u2032) \r \r E a\u2032\u2032\u223c\u03c0\u03b8(\u00b7|s) \u0002\r \r\u03c6(s, a\u2032\u2032) \r \r + \r \r\u03c6(s, a\u2032) \r \r\u0003\u0015 \u22642M2 \u03c42 , (B38) hence sups\u2208S Ea\u223c\u03c0\u03b8 h\r \r \r\u22072 log \u03c0\u03b8(a|s) \r \r \r i \u2264 2M 2 \u03c4 2 := \u03be3. 
From these constants, the Lipschitz constant of the policy gradient is easily computed (Lemma 6). \u25a1 Note the similarity with the Gaussian constants from Lemma 23. The temperature parameter \u03c4 plays a similar role to the standard deviation \u03c3. The smoothness constants for Gaussian and Softmax policies are summarized in Table 1. \fSmoothing Policies and Safe Policy Gradients 43 B.3 Preliminary Results on Deep Policies The policies we have considered so far rely on given feature maps from state (and action) space to low-dimensional linear space. For many applications, a linear map is not expressive enough to represent good policies. Deep policies (Duan et al., 2016) use Neural Networks (NN) to extract more powerful representations from data. Here we provide a \ufb01rst analysis on how the properties of the NN a\ufb00ect the smoothing properties of the policy. As an example, consider a Gaussian policy with mean parametrized by a NN, that is: \u03c0\u03b8(a|s) \u223cN(a|\u00b5\u03b8(s), \u03c32), (B39) where \u00b5\u03b8 : S \u2192A is a NN with weights \u03b8. The score function is then: \u2207\u03b8 log \u03c0\u03b8(a|s) = a \u2212\u00b5\u03b8(s) \u03c32 \u2207\u03b8\u00b5\u03b8(s), (B40) and its second-order counterpart: \u2207\u03b8 log \u03c0\u03b8(a|s) = a \u2212\u00b5\u03b8(s) \u03c32 \u22072 \u03b8\u00b5\u03b8(s) \u2212\u2207\u03b8\u00b5\u03b8(s)\u2207\u22a4 \u03b8 \u00b5\u03b8(s) \u03c32 . (B41) For the policy to be smoothing, we need bounds on the gradient and Hessian of the NN w.r.t. its weights (in Euclidean and spectral norm, respectively), both uniformly over the state space. This may suggest the use of activation functions that are smooth and have bounded derivatives for any input, such as tanh or sigmoid activations. We shall study the impact of the network architecture on the smoothing constants in future work. Appendix C Exponential Concentration of Policy Gradient Estimators In this section, we provide exponential tail inequalities for REINFORCE and G(PO)MDP (see Section 2) policy gradient estimators with policy classes of interest. For the G(PO)MDP estimator, it is useful to notice that it can be equivalently written as (Peters & Schaal, 2008; Sutton et al., 2000): b \u2207J(\u03b8; D) = 1 N N X i=1 T \u22121 X t=0 \" \u2207log \u03c0\u03b8(ai t|si t) T \u22121 X h=t \u03b3hR(ai h, si h) # , (C42) just by reordering. For simplicity, we will consider estimators without variancereducing baselines. First, let us consider the case of a bounded score function: \f44 Smoothing Policies and Safe Policy Gradients Lemma 25 Let \u2225\u2207log \u03c0\u03b8(a|s)\u2225\u2264W for all \u03b8 \u2208Rd, s \u2208S and a \u2208A. Then, for any \u03b8 \u2208Rd, with probability 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r \u22642WRT r 2d log(6/\u03b4) N , (C43) where RT = RT (1\u2212\u03b3\u22a4) 1\u2212\u03b3 for REINFORCE and RT = R 1\u2212\u03b3\u22a4\u2212T (\u03b3T \u2212\u03b3T +1) (1\u2212\u03b3)2 for G(PO)MDP. Proof For REINFORCE let: Rt(\u03c4) := T \u22121 X h=0 \u03b3hrh \u2264R(1 \u2212\u03b3\u22a4) 1 \u2212\u03b3 := Rt for all t \u22650, (C44) RT := T \u22121 X t=0 Rt = RT(1 \u2212\u03b3\u22a4) 1 \u2212\u03b3 . (C45) For G(PO)MDP, let: Rt(\u03c4) := T \u22121 X h=t \u03b3hrh \u2264R(\u03b3t \u2212\u03b3\u22a4) 1 \u2212\u03b3 := Rt, (C46) RT := T \u22121 X t=0 Rt = R1 \u2212\u03b3\u22a4\u2212T(\u03b3T \u2212\u03b3T +1) (1 \u2212\u03b3)2 . (C47) Let Sd\u22121 = {v \u2208Rd : \u2225v\u2225= 1} be the unit sphere in Rd. 
Fix a vector v \u2208Sd\u22121 and let b \u2207J(\u03b8) denote the policy gradient estimate obtained from a single trajectory \u03c4 sampled from p\u03b8. For both gradient estimators: \u27e8v, b \u2207J(\u03b8)\u27e9= T \u22121 X t=0 \u27e8v, \u2207log \u03c0\u03b8(at|st)\u27e9Rt(\u03c4) (C48) \u2264 T \u22121 X t=0 \u2225\u2207log \u03c0\u03b8(at|st)\u2225Rt(\u03c4) (C49) \u2264WRT , (C50) where the \ufb01rst inequality uses the fact that, for any x \u2208 Rd, \u2225x\u2225 = maxv\u2208Sd\u22121\u27e8v, x\u27e9. By linearity of expectation and unbiasedness of the gradient estimator, E[\u27e8v, b \u2207J(\u03b8)\u27e9] = \u27e8v, \u2207J(\u03b8)\u27e9. Hence, by (C50) and Hoe\ufb00ding\u2019s inequality, with probability 1 \u2212\u03b4v: \u27e8v, b \u2207J(\u03b8; D) \u2212\u2207J(\u03b8)\u27e9\u2264WRT r 2 log(1/\u03b4v) N , (C51) where N = |D|. To turn this into a bound on the Euclidean norm, we need a covering argument. For arbitrary \u03b7 > 0, consider an \u03b7-cover C\u03b7 of Sd\u22121, that is: max v\u2208Sd\u22121,w\u2208C\u03b7 \u2225v \u2212w\u2225\u2264\u03b7. (C52) It is a well known result that a \ufb01nite cover C\u03b7 exists such that |C\u03b7| \u2264(3/\u03b7)d. Then, with probability 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r = max v\u2208Sd\u22121\u27e8v, b \u2207J(\u03b8; D) \u2212\u2207J(\u03b8)\u27e9 (C53) \fSmoothing Policies and Safe Policy Gradients 45 \u2264max v\u2208C\u03b7 \u27e8v, b \u2207J(\u03b8; D) \u2212\u2207J(\u03b8)\u27e9+ \u03b7 \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r (C54) \u2264WRT r 2 log(|C\u03b7|/\u03b4) N + \u03b7 \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r (C55) \u2264WRT v u u t2d log \u0010 3 \u03b7\u03b4 \u0011 N + \u03b7 \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r , (C56) where the \ufb01rst inequality is by Cauchy-Schwarz inequality and de\ufb01nition of C\u03b7, the second one is by union bound over the \ufb01nite elements of C\u03b7, and the last inequality uses the covering number of the sphere in Rd. Finally, by letting \u03b7 = 1/2: \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r \u2264WRT 1 \u2212\u03b7 v u u t2d log \u0010 3 \u03b7\u03b4 \u0011 N = 2WRT r 2d log(6/\u03b4) N . (C57) \u25a1 The Softmax policy described in Appendix B.2 satis\ufb01es the assumption of Lemma 25 with W = 2M \u03c4 where M is an upper bound on \u2225\u03c6(s)\u2225, as shown in the proof of Lemma 24. Unfortunately, the Gaussian policy class from Appendix B.1 is not covered by Lemma 25, since its score function is unbounded. Motivated by the broad use of Gaussian policies in applications, we provide an ad-hoc bound for this class: Lemma 26 Let \u03a0\u0398 be the class of shallow Gaussian policies from Lemma 23. Then, for any \u03b8 \u2208\u0398, with probability 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r \u22644MRT \u03c3 r 14d log(6/\u03b4) N , (C58) where RT = RT (1\u2212\u03b3\u22a4) 1\u2212\u03b3 for REINFORCE and RT = R 1\u2212\u03b3\u22a4\u2212T (\u03b3T \u2212\u03b3T +1) (1\u2212\u03b3)2 for G(PO)MDP. Proof Let Rt(\u03c4), Rt, and RT be de\ufb01ned (di\ufb00erently for the two gradients estimators) as in the proof of Lemma 25. Again, let Sd\u22121 be the unit sphere in Rd , \ufb01x a vector v \u2208Sd\u22121 and let b \u2207J(\u03b8) denote the policy gradient estimate obtained from a single trajectory \u03c4 sampled from p\u03b8. Consider the \ufb01ltration (Ft)T \u22121 t=0 where Ft = \u03c3(s0, a0, . . . 
, st) is the sigma-algebra representing all the knowledge up to the t-th state included. Conditional on st, at \u223cN(\u03b8\u22a4\u03c6(s), \u03c32). Hence, conditionally on Ft: \u27e8v, \u2207log \u03c0\u03b8(at|st)\u27e9= at \u2212\u03b8\u22a4\u03c6(st) \u03c32 \u27e8v, \u03c6(st)\u27e9\u223cN \u0012 0, \u27e8v, \u03c6(st)\u27e92 \u03c32 \u0013 . (C59) Let Xt = \u27e8v, \u2207log \u03c0\u03b8(at|st)\u27e9 for brevity. Since Xt is Ft-measurable and E[Xt|Ft\u22121] = 0, (Xt)t is a martingale di\ufb00erence sequence adapted to (Ft)t. Furthermore, (C59) shows that, for any \u03bb > 0: E[exp(\u03bbXt)|Ft] = exp \u0012\u03bb\u27e8v, \u03c6(st)\u27e92 2\u03c32 \u0013 \u2264exp \u03bb \u2225\u03c6(st)\u22252 2\u03c32 ! \u2264exp \u0012\u03bbM2 2\u03c32 \u0013 , (C60) \f46 Smoothing Policies and Safe Policy Gradients where the \ufb01rst inequality is by \u2225x\u2225= maxv\u2208Sd\u22121\u27e8v, x\u27e9for any x \u2208Rd. Hence, Xt is conditionally (M/\u03c3)-subgaussian and RtXt is conditionally (RtM/\u03c3)-subgaussian. Using Azuma\u2019s inequality, for any b > 0:20 P \u0010 \u27e8v, b \u2207J(\u03b8)\u27e9> b \u0011 = P T \u22121 X t=0 XtRt(\u03c4) > b ! (C61) \u2264P T \u22121 X t=0 XtRt > b ! (C62) \u2264exp \u2212 \u03c32b2 56M2 PT \u22121 t=0 R2 t ! (C63) \u2264exp \u2212 \u03c32b2 56M2R2 T ! , (C64) showing that \u27e8v, b \u2207J(\u03b8)\u27e9 is \u221a 28MRT /\u03c3-subgaussian. From this and E[\u27e8v, b \u2207J(\u03b8; D)\u27e9] = \u27e8v, \u2207J(\u03b8)\u27e9, using Hoe\ufb00ding\u2019s inequality for averages of i.i.d. subgaussian random variables: \u27e8v, b \u2207J(\u03b8; D) \u2212\u2207J(\u03b8)\u27e9\u2264MRT \u03c3 r 56 log(1/\u03b4v) N , (C65) with probability 1 \u2212\u03b4v. Finally, using the same covering argument as in the proof of Lemma 25, with probability 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8) \u2212\u2207J(\u03b8) \r \r \r \u22642MRT \u03c3 r 56d log(6/\u03b4) N . (C66) \u25a1 The values of \u03f5(\u03b4) for Gaussian and Softmax policies are summarized in Table 2. C.1 Empirical Bernstein Bound For bounded-score policies (such as the Softmax), we can improve Lemma 25 using an empirical Bernstein inequality (Maurer & Pontil, 2009): Lemma 27 Let \u2225\u2207log \u03c0\u03b8(a|s)\u2225\u2264W for all \u03b8 \u2208Rd, s \u2208S and a \u2208A. Then, for any \u03b8 \u2208Rd, with probability 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8; D) \u2212\u2207J(\u03b8) \r \r \r \u2264 s 8db V log(12/\u03b4) N + 14dWRT log(6/\u03b4) 3(N \u22121) . (C67) where \u02c6 V = 1 N\u22121 PN i=1 \r \r \rb \u2207J(\u03b8; \u03c4i) \u2212b \u2207J(\u03b8; D) \r \r \r 2 , and RT is de\ufb01ned as in Lemma 25. 20We use the version by Shamir (2011). \fSmoothing Policies and Safe Policy Gradients 47 Proof Recall that D = {\u03c41, . . . , \u03c4N} is a set of trajectories sampled independently from p\u03b8. Let b \u2207J(\u03b8; \u03c4i) denote the policy gradient estimate obtained from trajectory \u03c4i, and recall J(\u03b8; D) = 1 N PN i=1 b \u2207J(\u03b8; \u03c4i) denotes the sample mean. Fix a vector v \u2208Sd\u22121, the unit sphere in Rd, and let Xi = \u27e8v, b \u2207J(\u03b8; \u03c4i)\u27e9for short. Then, as shown in (C50): |Xi| \u2264WRT , (C68) and E[Xi] = \u27e8v, \u2207J(\u03b8)\u27e9. Moreover, (Xi)N i=1 are i.i.d. (conditionally on \u03b8, which is \ufb01xed in this case). 
By Theorem 4 from (Maurer & Pontil, 2009), with probability 1 \u2212\u03b4v: \u27e8v, b \u2207J(\u03b8; D) \u2212\u2207J(\u03b8)\u27e9\u2264 s 2b Vv log(2/\u03b4v) N + 7WRT log(2/\u03b4v) 3(N \u22121) , (C69) where and b Vv is the (unbiased) sample variance of the (Xi)N i=1: b Vv = 1 N \u22121 N X i=1 D v, b \u2207J(\u03b8; \u03c4i) \u2212b \u2207J(\u03b8; D) E2 (C70) \u2264 1 N \u22121 N X i=1 \r \r \rb \u2207J(\u03b8; \u03c4i) \u2212b \u2207J(\u03b8; D) \r \r \r 2 := \u02c6 V , (C71) where the inequality is by Cauchy-Schwarz and \u2225v\u2225= 1. Since \u02c6 V does not depend on v, we can use the same covering argument as in the proof of Lemma 25 to obtain the desired result. \u25a1 To use this concentration inequality in SPG, Algorithm 2 must be modi\ufb01ed, as discussed in Appendix E. C.2 In\ufb01nite-Horizon Estimators To obtain an unbiased estimate of the gradient for the original in\ufb01nite-horizon performance measure considered in the paper, we can modify our simulation protocol as suggested in (Bedi, Parayil, Zhang, Wang, & Koppel, 2021). Consider a random-horizon G(PO)MDP estimator that, for each episode: 1. Samples a random horizon T \u223cGeom(1 \u2212\u03b3t/2) from a geometric distribution; 2. Generates a trajectory \u03c4 of length T with the current policy \u03c0\u03b8; 3. Outputs b \u2207J(\u03b8; \u03c4, T) = PT \u22121 t=0 \u0010 \u03b3t/2r(ai t, si t) Pt h=0 \u2207log \u03c0\u03b8(ai h|si h) \u0011 . The result can be averaged over a batch of independent trajectories, each with its own independently sampled random length. This policy gradient estimator is unbiased (Bedi et al., 2021, Lemma 1). The random horizon should be accounted for in the concentration bounds of Lemma 25, 26, and 27. However, note that the term RT , for the G(PO)MDP estimator, is bounded as follows: RT = R1 \u2212\u03b3\u22a4\u2212T(\u03b3T \u2212\u03b3T +1) (1 \u2212\u03b3)2 \u2264 R (1 \u2212\u03b3)2 , (C72) \f48 Smoothing Policies and Safe Policy Gradients for any T \u22650. Hence, Lemma 25, 26, and 27 all hold for the random-horizon estimator with RT = R/(1 \u2212\u03b31/2)2. The corresponding error bounds are reported in Table 2. We leave a more re\ufb01ned analysis of the variance and tail behavior of this random-horizon estimator to future work. Appendix D Variance of Policy Gradient Estimators In this section, we provide upper bounds on the variance of the (\ufb01nite-horizon) REINFORCE and G(PO)MDP estimators, generalizing existing results for Gaussian policies (Pirotta, Restelli, & Bascetta, 2013; Zhao et al., 2011) to smoothing policies. We begin by bounding the variance of the REINFORCE estimator: Lemma 28 Given a (\u03be1, \u03be2, \u03be3)-smoothing policy class \u03a0\u0398 and an e\ufb00ective task horizon T, for every \u03b8 \u2208\u0398, the variance of the REINFORCE estimator (with zero baseline) is upper-bounded as follows: Var h b \u2207J(\u03b8; D) i \u2264T\u03be2R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 . (D73) Proof Let g\u03b8(\u03c4) := \u0010PT \u22121 t=0 \u03b3tr(at, st) \u0011 \u0010PT \u22121 t=0 \u2207log \u03c0\u03b8(at|st) \u0011 with st, at \u2208\u03c4 for t = 0, . . . , T \u22121. 
Using the de\ufb01nition of REINFORCE (14) with b = 0: Var D\u223cp\u03b8 h b \u2207J(\u03b8; D) i = 1 N Var \u03c4\u223cp\u03b8 [g\u03b8(\u03c4)] \u22641 N E \u03c4\u223cp\u03b8 h \u2225g\u03b8(\u03c4)\u22252i \u2264R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 E \u03c4\u223cp\u03b8 \uf8ee \uf8f0 \r \r \r \r \r T \u22121 X t=0 \u2207log \u03c0\u03b8(at|st) \r \r \r \r \r 2\uf8f9 \uf8fb \u2264R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 m X i=1 E \u03c4\u223cp\u03b8 \"T \u22121 X t=0 (Di log \u03c0\u03b8(at|st))2 + 2 T \u22122 X t=0 T \u22121 X h=t+1 Di log \u03c0\u03b8(at|st)Di log \u03c0\u03b8(ah|sh) \uf8f9 \uf8fb = R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 E \u03c4\u223cp\u03b8 \"T \u22121 X t=0 \u2225\u2207log \u03c0\u03b8(at|st)\u22252 # (D74) = R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 T \u22121 X t=0 E s0\u223c\u00b5 \u0014 . . . E at\u223c\u03c0\u03b8(\u00b7|st) h \u2225\u2207log \u03c0\u03b8(at|st)\u22252 \f \f \f st i . . . \u0015 \u2264T\u03be2R2(1 \u2212\u03b3\u22a4)2 N(1 \u2212\u03b3)2 , (D75) \fSmoothing Policies and Safe Policy Gradients 49 where (D74) is from the following: E \u03c4\u223cp\u03b8 \uf8ee \uf8f0 T \u22122 X t=0 T \u22121 X h=t+1 Di log \u03c0\u03b8(at|st)Di log \u03c0\u03b8(ah|sh) \uf8f9 \uf8fb = T \u22122 X t=0 E s0\u223c\u00b5 \u0014 . . . E at\u223c\u03c0\u03b8(\u00b7|st) [Di log \u03c0\u03b8(at|st) (D76) T \u22121 X h=t+1 E st+1\u223cp(\u00b7|st,at) \u0014 . . . E ah\u223c\u03c0\u03b8(\u00b7|sh) [Di log \u03c0\u03b8(ah|sh) | sh] . . . \f \f \f \f at \u0015 \f \f \f \f \f \f st \uf8f9 \uf8fb. . . \uf8f9 \uf8fb = 0, (D77) where the last equality is from Eah\u223c\u03c0\u03b8(\u00b7|sh) [Di log \u03c0\u03b8(ah|sh)] = 0. \u25a1 This is a generalization of Lemma 5.3 from Pirotta, Restelli, and Bascetta (2013), which in turn is an adaptation of Theorem 2 from Zhao et al. (2011). In the Gaussian case, the original lemma is recovered by plugging the smoothing constant \u03be2 = M 2 \u03c32 from Lemma 23. Note also that, from the de\ufb01nition of smoothing policy, only the second condition (20) is actually necessary for Lemma 28 to hold. For the G(PO)MDP estimator, we obtain an upper bound that does not grow linearly with the horizon T: Lemma 29 Given a (\u03be1, \u03be2, \u03be3)-smoothing policy class \u03a0\u0398 and an e\ufb00ective task horizon T, for every \u03b8 \u2208\u0398, the variance of the G(PO)MDP estimator (with zero baseline) is upper-bounded as follows: Var h b \u2207J(\u03b8; D) i \u2264 \u03be2R2\u0010 1 \u2212\u03b3\u22a4\u0011 N(1 \u2212\u03b3)3 . (D78) Proof Let g\u03b8(\u03c4) := PT \u22121 t=0 \u03b3tr(at, st) \u0010Pt h=0 \u2207log \u03c0\u03b8(ah|sh) \u0011 with st, at \u2208\u03c4 for t = 0, . . . , T \u22121. Using the de\ufb01nition of G(PO)MDP (15) with b = 0: Var D\u223cp\u03b8 h b \u2207J(\u03b8; D) i = 1 N Var \u03c4\u223cp\u03b8 \"T \u22121 X t=0 \u03b3tr(at, st) t X h=0 \u2207log \u03c0\u03b8(ah|sh) !# (D79) \u22641 N E \u03c4\u223cp\u03b8 \uf8ee \uf8f0 T \u22121 X t=0 \u03b3 t/2r(at, st)\u03b3 t/2 t X h=0 \u2207log \u03c0\u03b8(ah|sh) !!2\uf8f9 \uf8fb \u22641 N E \u03c4\u223cp\u03b8 \uf8ee \uf8f0 T \u22121 X t=0 \u03b3tr(at, st)2 ! 
\uf8eb \uf8ed T \u22121 X t=0 \u03b3t t X h=0 \u2207log \u03c0\u03b8(ah|sh) !2\uf8f6 \uf8f8 \uf8f9 \uf8fb (D80) \u2264R2(1 \u2212\u03b3\u22a4) N(1 \u2212\u03b3) E \u03c4\u223cp\u03b8 \uf8ee \uf8f0 T \u22121 X t=0 \u03b3t t X h=0 \u2207log \u03c0\u03b8(ah|sh) !2\uf8f9 \uf8fb \f50 Smoothing Policies and Safe Policy Gradients Table D1: Upper bounds on the variance V(\u03b8) for common policy gradient estimators (single trajectory, no baseline), assuming the policy is smoothing (De\ufb01nition 1). Here R is the maximum absolute-valued reward, \u03b3 is the discount factor, T is the task horizon, and the smoothing constant \u03be2 can be retrieved from Table 1 depending on the policy class. REINFORCE G(PO)MDP T \u03be2R2(1\u2212\u03b3\u22a4)2 (1\u2212\u03b3)2 \u03be2R2(1\u2212\u03b3\u22a4) (1\u2212\u03b3)3 \u2264\u03be2R2(1 \u2212\u03b3\u22a4) N(1 \u2212\u03b3) T \u22121 X t=0 \u03b3t(t + 1) (D81) = \u03be2R2(1 \u2212\u03b3\u22a4) N(1 \u2212\u03b3)3 \uf8ee \uf8ef \uf8f01 \u2212T \u0010 \u03b3\u22a4\u2212\u03b3T +1 | {z } \u22650 \u0011 \u2212\u03b3\u22a4 \uf8f9 \uf8fa \uf8fb (D82) \u2264 \u03be2R2\u0010 1 \u2212\u03b3\u22a4\u0011 N(1 \u2212\u03b3)3 , where (D79) is from the fact that the trajectories are i.i.d., (D80) is from the CauchySchwarz inequality, (D81) is from the same argument used for (D74) in the proof of Lemma 28, and (D82) is from the sum of the arithmetico-geometric sequence. \u25a1 This is a generalization of Lemma 5.5 from Pirotta, Restelli, and Bascetta (2013). Again, in the Gaussian case, the original lemma is recovered by plugging the smoothing constant \u03be2 = M \u03c32 from Lemma 23. Note that this variance upper bound stays \ufb01nite in the limit T \u2192\u221e, which is not the case for REINFORCE. Appendix E Analysis of Relaxed Algorithm In this section, we will analyze in more detailed the variants of SPG introduced in Section 5.1. In particular, we will consider a very general relaxed improvement guarantee, then we will specialize it to the baseline and milestone constraints discussed in the main paper. The pseudocode for the relaxed version of SPG is provided in Algorithm 3. The algorithm takes as additional inputs the mini-batch size n and a sequence of degradation thresholds \u2206k \u22650. Moreover, it assumes access to a generic gradient estimation error function \u03f5 with the following property: Assumption 3 Fixed a parameter \u03b8 \u2208\u0398, a batch size N \u2208N and a failure probability \u03b4 \u2208(0, 1), with probability at least 1 \u2212\u03b4: \r \r \rb \u2207J(\u03b8; D) \u2212\u2207J(\u03b8) \r \r \r \u2264\u03f5(N, \u03b4), where |D| is a dataset of N i.i.d. trajectories collected with \u03c0\u03b8 and: \u03f5(N, \u03b4) = O \u0012log(1/\u03b4) \u221a N \u0013 . (E83) \fSmoothing Policies and Safe Policy Gradients 51 Algorithm 3 Relaxed SPG 1: Input: initial policy parameter \u03b80, smoothness constant L, concentration bound \u03f5, failure probabilities (\u03b4k)k\u22651, degradation thresholds (\u2206k)k\u22651, mini-batch size n 2: \u03b1 = 1 L \u25b7\ufb01xed step size 3: for k = 1, 2, . . . do 4: i = 0, Dk,0 = \u2205 5: do 6: i = i + 1 7: Collect trajectories \u03c4k,i,1 . . . \u03c4k,i,n iid \u223cp\u03b8k 8: Dk,i = Dk,i\u22121 \u222a{\u03c4k,i,1 . . . 
\u03c4k,i,n} 9: Compute policy gradient estimate gk,i = b \u2207J(\u03b8k; Dk,i) 10: \u03b4k,i = \u03b4k i(i+1) 11: while \u03f5(ni, \u03b4k,i) > \u2225gk,i\u2225 2 + L\u2206k \u2225gk,i\u2225 12: Nk = ni, Dk = Dk,i \u25b7adaptive batch size 13: Update policy parameters as \u03b8k+1 \u2190\u03b8k + \u03b1b \u2207J(\u03b8k; Dk) 14: end for The assumption, as the analysis that will follow, is less precise than Assumption 2, but more general. Indeed, it allows to use the empirical Bernstein bound from Lemma 27 for Softmax and other bounded-score policies. We can prove that the per-iteration performance degradation of Algorithm 3 is bounded by the user-de\ufb01ned threshold \u2206k with high probability. Of course, when \u2206k = 0, this is still a monotonic improvement guarantee. Theorem 30 Consider Algorithm 3 applied to a smoothing policy, where b \u2207J is an unbiased policy gradient estimator. Under Assumption 3, for any iteration k \u22651, provided \u2207J(\u03b8k) \u0338= 0, with probability at least 1 \u2212\u03b4k: J(\u03b8k+1) \u2212J(\u03b8k) \u2265\u2212\u2206k. Proof Let the \ufb01ltration (Fk,i)i\u22651 be de\ufb01ned as in Section 5 and note that Nk is a stopping time w.r.t. this \ufb01ltration. Consider the event Ek,i = \b \r \rgk,i \u2212\u2207J(\u03b8k) \r \r \u2264 \u03f5(i, \u03b4k,i) \t . By Assumption 2, P(\u00acEk,i) \u2264\u03b4k,i. Hence, by the same arguments used in the proof of Lemma 13: E[Nk] \u2264E \" \u221e X i=1 I \u03f5(ni, \u03b4k,i) > \r \rgk,i \r \r 2 + L\u2206k \r \rgk,i \r \r !# (E84) \u2264E \" \u221e X i=1 I \u03f5(ni, \u03b4k,i) > \r \rgk,i \r \r 2 !# (E85) = E \" \u221e X i=1 I \u03f5(ni, \u03b4k,i) > \r \rgk,i \r \r 2 , Ek,i !# \f52 Smoothing Policies and Safe Policy Gradients + E \" \u221e X i=1 I \u03f5(i, \u03b4k,i) > \r \rgk,i \r \r 2 , \u00acEk,i !# (E86) \u2264 \u221e X i=1 I \u03f5(ni, \u03b4k,i) > \u0000\u2225\u2207J(\u03b8k)\u2225\u2212\u03f5(ni, \u03b4k,i) \u0001 2 ! + \u221e X i=1 P(\u00acEk,i) (E87) \u2264min i\u22651 ( \u03f5(ni, \u03b4k,i) \u2264 \u0000\u2225\u2207J(\u03b8k)\u2225\u2212\u03f5(ni, \u03b4k,i) \u0001 2 ) + \u221e X i=1 \u03b4k,i (E88) \u2264min i\u22651 \u001a \u03f5(ni, \u03b4k,i) \u2264\u2225\u2207J(\u03b8k)\u2225 3 \u001b ) + \u03b4k \u221e X i=1 1 i(i + 1) (E89) = min i\u22651 \u001a \u03f5(ni, \u03b4k,i) \u2264\u2225\u2207J(\u03b8k)\u2225 3 \u001b ) + \u03b4k, (E90) which is \ufb01nite since, by Assumption 3: \u03f5(ni, \u03b4k,i) = O \u0012log(1/\u03b4k,i) \u221a ni \u0013 = O \u0012log i \u221a i \u0013 . (E91) This shows that the inner loop of Algorithm 3 always terminates with a \ufb01nite batch size. By the same optional-stopping argument as in the proof of Theorem 14, E[b \u2207J(\u03b8k; Dk)] = \u2207J(\u03b8k), which means the gradient estimate is unbiased. By the stopping condition, for all k: \u03f5(Nk, \u03b4k,Nk) \u2264 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 + L\u2206k \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r , (E92) with probability at least 1 \u2212P\u221e i=1 \u03b4k,i = 1 \u2212P\u221e i=1 \u03b4k/(i(i + 1)) = 1 \u2212\u03b4k. By Theorem 10 and the choice of step size \u03b1 = 1/L, with the same probability: J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r L \uf8eb \uf8ec \uf8ed \r \r \rb \u2207J(\u03b8k; Dk) \r \r \r 2 \u2212\u03f5(Nk, \u03b4k,Nk) \uf8f6 \uf8f7 \uf8f8 (E93) \u2265\u2212\u2206k, (E94) where the last inequality is from (E92). \u25a1 In the following we discuss some applications of Algorithm 3 to speci\ufb01c safety requirements. Baseline Constraint. 
A common requirement is for the updated policy not to perform (signi\ufb01cantly) worse than a known baseline policy (e.g., Garcelon et al., 2020a; Laroche et al., 2019). The safety constraint is thus: J(\u03b8k+1) \u2265\u03bbJb, (E95) where Jb is the (discounted) performance of the baseline policy and \u03bb \u2208[0, 1] is a user-de\ufb01ned signi\ufb01cance parameter. Equivalently, J(\u03b8k+1) \u2212J(\u03b8k) \u2265 \fSmoothing Policies and Safe Policy Gradients 53 Algorithm 4 SPG with baseline constraint 1: Input: initial policy parameter \u03b80, smoothness constant L, concentration bound \u03f5, failure probabilities (\u03b4k)k\u22651, baseline performance Jb, signi\ufb01cance parameter \u03bb, mini-batch size n 2: \u03b1 = 1 L \u25b7\ufb01xed step size 3: for k = 1, 2, . . . do 4: i = 0, Dk,0 = \u2205 5: do 6: i = i + 1 7: Collect trajectories \u03c4k,i,1 . . . \u03c4k,i,n iid \u223cp\u03b8k 8: Dk,i = Dk,i\u22121 \u222a{\u03c4k,i,1 . . . \u03c4k,i,n} 9: Compute policy gradient estimate gk,i = b \u2207J(\u03b8k; Dk,i) 10: \u03b4k,i = \u03b4k 2i(i+1) 11: Estimate performance mean \u02c6 J and variance \u02c6 V from Dk,i 12: J = \u02c6 J \u2212 q 2 \u02c6 V log(2/\u03b4k,i) ni \u22127R log(2/\u03b4k,i) 3(1\u2212\u03b3)(ni\u22121) 13: \u2206k,i = max{J \u2212\u03bbJb, 0} \u25b7baseline constraint 14: while \u03f5(ni, \u03b4k,i) > \u2225gk,i\u2225 2 + L\u2206k,i \u2225gk,i\u2225 15: Nk = ni, Dk = Dk,i \u25b7adaptive batch size 16: Update policy parameters as \u03b8k+1 \u2190\u03b8k + \u03b1b \u2207J(\u03b8k; Dk) 17: end for \u03bbJb \u2212J(\u03b8k), and Algorithm 3 satis\ufb01es this safety requirement if we set the degradation threshold as follows: \u2206k = max{J(\u03b8k) \u2212\u03bbJb, 0}. (E96) However, the performance of the current policy must also be estimated from data, and accidentally over-estimating it may result in excessive performance degradation. Hence, we replace it with a lower con\ufb01dence bound based on the empirical Bernstein inequality (Maurer & Pontil, 2009). See Algorithm 4 for details. Note how the failure probability in line 10 is adjusted w.r.t. Algorithm 3 to account for this additional estimation step. With this small caveat, the analysis of Algorithm 4 can be carried out analogously to the one of Algorithm 3. Milestone Constraint. In our numerical simulations we consider the following safety constraint: J(\u03b8k+1) \u2265\u03bb max h=1,...,k J(\u03b8h), (E97) \f54 Smoothing Policies and Safe Policy Gradients Algorithm 5 SPG with milestone constraint 1: Input: initial policy parameter \u03b80, smoothness constant L, concentration bound \u03f5, failure probabilities (\u03b4k)k\u22651, baseline performance Jb, signi\ufb01cance parameter \u03bb, mini-batch size n 2: \u03b1 = 1 L \u25b7\ufb01xed step size 3: J\u22c6= \u2212\u221e 4: for k = 1, 2, . . . do 5: i = 0, Dk,0 = \u2205 6: do 7: i = i + 1 8: Collect trajectories \u03c4k,i,1 . . . \u03c4k,i,n iid \u223cp\u03b8k 9: Dk,i = Dk,i\u22121 \u222a{\u03c4k,i,1 . . . 
\u03c4k,i,n} 10: Compute policy gradient estimate gk,i = b \u2207J(\u03b8k; Dk,i) 11: \u03b4k,i = \u03b4k 2i(i+1) 12: Estimate performance mean \u02c6 J and variance \u02c6 V from Dk,i 13: J = \u02c6 J \u2212 q 2 \u02c6 V log(2/\u03b4k,i) ni \u22127R log(2/\u03b4k,i) 3(1\u2212\u03b3)(ni\u22121) 14: J = max \u001a \u02c6 J + q 2 \u02c6 V log(2/\u03b4k,i) ni + 7R log(2/\u03b4k,i) 3(1\u2212\u03b3)(ni\u22121), J\u22c6 \u001b 15: \u2206k,i = max{J \u2212\u03bbJ, 0} \u25b7milestone constraint 16: while \u03f5(ni, \u03b4k,i) > \u2225gk,i\u2225 2 + L\u2206k,i \u2225gk,i\u2225 17: Nk = ni, Dk = Dk,i \u25b7adaptive batch size 18: J\u22c6= max{J, J\u22c6} 19: Update policy parameters as \u03b8k+1 \u2190\u03b8k + \u03b1b \u2207J(\u03b8k; Dk) 20: end for which can be enforced by setting the degradation threshold in Algorithm 3 as: \u2206k = max \u001a J(\u03b8k) \u2212\u03bb max h=1,...,k J(\u03b8h), 0 \u001b . (E98) Again, we must replace the unknown performance J(\u03b8k) with a lower con\ufb01dence bound. In this case, we also need to overestimate the best historical performance. See Algorithm 5 for details. Note that this safety constraint reduces to monotonic improvement if the signi\ufb01cance parameter is set to \u03bb = 1, since maxh=1,...,k J(\u03b8h) \u2265J(\u03b8k). Appendix F Task Speci\ufb01cations In this Appendix, we provide detailed descriptions of the control tasks used in the numerical simulations. \fSmoothing Policies and Safe Policy Gradients 55 F.1 LQR The LQR is a classical optimal control problem (Dorato et al., 1994). It models the very general task of controlling a set of variables to zero with the minimum e\ufb00ort. Given a state space S \u2286Rn and an action space A \u2286Rm, the next state is a linear function of current state and action:21 st+1 = Ast + Bat, (F99) where A \u2208Rn\u00d7n and B \u2208Rn\u00d7m. The reward is quadratic in both state and action: rt+1 = s\u22a4 t Cst + a\u22a4 t Dat, (F100) where C \u2208Rn\u00d7n and D \u2208Rm\u00d7m are positive de\ufb01nite matrices. A linear controller is optimal for this task (Dorato et al., 1994) and can be computed in closed form with dynamic-programming techniques. In our experiments, we always consider shallow Gaussian policies of the form: \u03c0(\u00b7|st) = N(\u03b8\u22a4st, \u03c32I), (F101) where \u03b8 \u2208Rn and \u03c3 > 0 can be \ufb01xed or learned as an additional policy parameter. This version of LQR with Gaussian policies is also called LQG (Linear-Quadratic Gaussian Regulator, Peters & Schaal, 2008). States and actions are clipped in practice when they happen to fall outside S and A, respectively. We have ignored nonlinearities stemming from this fact. The LQR problem used in Section 7 is 1-dimensional with S = A = [\u22121, 1], A = B = C = D = 1. F.2 Cart-Pole This is the CartPole-v1 environment from openai/gym (Brockman et al., 2016). It has 4-dimensional continuous states and \ufb01nite (two) actions. The goal is to keep a pole balanced by controlling a cart to which the pole is attached. Reward is +1 for every time-step until the pole falls. We set a maximum episode length of 100. See the o\ufb03cial documentation for more details (https:// gym.openai.com/envs/CartPole-v1/). 21A zero-mean Gaussian noise is typically added to the next state to model disturbances. However, since we always consider Gaussian policies with \ufb01xed standard deviation, we can ignore the system noise without loss of generality. 
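For concreteness, the 1-dimensional LQR used in Section 7 can be evaluated by Monte-Carlo roll-outs of the Gaussian linear policy (F101). The following sketch is illustrative only: the negative-quadratic-cost sign convention, the uniform initial-state distribution, the horizon, and the discount factor are assumptions made for this example, not values taken from the text.

```python
import numpy as np

def lqr_return(theta, sigma=1.0, gamma=0.9, horizon=20, n_episodes=2000, seed=0):
    # 1-D LQR of Appendix F.1: A = B = C = D = 1, states and actions clipped to [-1, 1].
    # Policy: a ~ N(theta * s, sigma^2), i.e., (F101) with scalar feature phi(s) = s.
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_episodes):
        s = rng.uniform(-1.0, 1.0)                         # assumed initial-state distribution
        for t in range(horizon):
            a = np.clip(rng.normal(theta * s, sigma), -1.0, 1.0)
            total += gamma ** t * -(s ** 2 + a ** 2)       # assumed cost sign (reward to maximize)
            s = np.clip(s + a, -1.0, 1.0)                  # s' = A s + B a, clipped back into S
    return total / n_episodes
```

Such a roll-out estimate can be compared against the closed-form expected performance mentioned in Section 7.1 when tuning theta by hand.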
Indeed, from linearity of the next state, said at the expected action under (F101), st+1 = Ast + Bat + B\u03f5, where \u03f5 \u223cN (0, \u03c32I). From the property of Gaussians, we can write \u03f5 = \u03f5a + \u03f5b where \u03f5a \u223cN (0, \u03c32 aI) is from the actual stochasticity of the agent and \u03f5b \u223cN (0, \u03c32 bB\u2020) is the system noise, which can be subsumed by the policy noise in numerical simulations for simplicity. \f56 Smoothing Policies and Safe Policy Gradients" + }, + { + "url": "http://arxiv.org/abs/1806.05618v1", + "title": "Stochastic Variance-Reduced Policy Gradient", + "abstract": "In this paper, we propose a novel reinforcement- learning algorithm\nconsisting in a stochastic variance-reduced version of policy gradient for\nsolving Markov Decision Processes (MDPs). Stochastic variance-reduced gradient\n(SVRG) methods have proven to be very successful in supervised learning.\nHowever, their adaptation to policy gradient is not straightforward and needs\nto account for I) a non-concave objective func- tion; II) approximations in the\nfull gradient com- putation; and III) a non-stationary sampling pro- cess. The\nresult is SVRPG, a stochastic variance- reduced policy gradient algorithm that\nleverages on importance weights to preserve the unbiased- ness of the gradient\nestimate. Under standard as- sumptions on the MDP, we provide convergence\nguarantees for SVRPG with a convergence rate that is linear under increasing\nbatch sizes. Finally, we suggest practical variants of SVRPG, and we\nempirically evaluate them on continuous MDPs.", + "authors": "Matteo Papini, Damiano Binaghi, Giuseppe Canonaco, Matteo Pirotta, Marcello Restelli", + "published": "2018-06-14", + "updated": "2018-06-14", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction On a very general level, arti\ufb01cial intelligence addresses the problem of an agent that must select the right actions to solve a task. The approach of Reinforcement Learning (RL) (Sutton & Barto, 1998) is to learn the best actions by direct interaction with the environment and evaluation of the performance in the form of a reward signal. This makes RL fundamentally different from Supervised Learning (SL), where correct actions are explicitly prescribed by a human teacher (e.g., for classi\ufb01cation, in the form of class labels). However, the two approaches share many challenges and tools. The problem of estimating a model from samples, which is at the core of SL, is equally fundamental in RL, whether we choose to model the environment, *Equal contribution 1Politecnico di Milano, Milano, Italy 2Inria, Lille, France. Correspondence to: Matteo Papini . Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). a value function, or directly a policy de\ufb01ning the agent\u2019s behaviour. Furthermore, when the tasks are characterized by large or continuous state-action spaces, RL needs the powerful function approximators (e.g., neural networks) that are the main subject of study of SL. In a typical SL setting, a performance function J(\u03b8) has to be optimized w.r.t. to model parameters \u03b8. The set of data that are available for training is often a subset of all the cases of interest, which may even be in\ufb01nite, leading to optimization of \ufb01nite sums that approximate the expected performance over an unknown data distribution. 
When generalization to the complete dataset is not taken into consideration, we talk about Empirical Risk Minimization (ERM). Even in this case, stochastic optimization is often used for reasons of ef\ufb01ciency. The idea of stochastic gradient (SG) ascent (Nesterov, 2013) is to iteratively focus on a random subset of the available data to obtain an approximate improvement direction. At the level of the single iteration, this can be much less expensive than taking into account all the data. However, the sub-sampling of data is a source of variance that can potentially compromise convergence, so that periteration ef\ufb01ciency and convergence rate must be traded off with proper handling of meta-parameters. Variance-reduced gradient algorithms such as SAG (Roux et al., 2012), SVRG (Johnson & Zhang, 2013) and SAGA (Defazio et al., 2014a) offer better ways of solving this trade-off, with signi\ufb01cant results both in theory and practice. Although designed explicitly for ERM, these algorithms address a problem that affects more general machine learning problems. In RL, stochastic optimization is rarely a matter of choice, since data must be actively sampled by interacting with an initially unknown environment. In this scenario, limiting the variance of the estimates is a necessity that cannot be avoided, which makes variance-reduced algorithms very interesting. Among RL approaches, policy gradient (Sutton et al., 2000) is the one that bears the closest similarity to SL solutions. The fundamental principle of these methods is to optimize a parametric policy through stochastic gradient ascent. Compared to other applications of SG, the cost of collecting samples can be very high since it requires to interact with the environment. This makes SVRG-like methods potentially much more ef\ufb01cient than, e.g., batch learning. Unfortunately, RL has a series of dif\ufb01culties that are not present in ERM. First, in SL the objective can often be dearXiv:1806.05618v1 [cs.LG] 14 Jun 2018 \fStochastic Variance-Reduced Policy Gradient signed to be strongly concave (we aim to maximize). This is not the case for RL, so we have to deal with non-concave objective functions. Then, as mentioned before, the dataset is not initially available and may even be in\ufb01nite, which makes approximations unavoidable. This rules out SAG and SAGA because of their storage requirements, which leaves SVRG as the most promising choice. Finally, the distribution used to sample data is not under direct control of the algorithm designer, but it is a function of policy parameters that change over time as the policy is optimized, which is a form of non-stationarity. SVRG has been used in RL as an ef\ufb01cient technique for optimizing the per-iteration problem in Trust-Region Policy Optimization (Xu et al., 2017) or for policy evaluation (Du et al., 2017). In both the cases, the optimization problems faced resemble the SL scenario and are not affected by all the previously mentioned issues. After providing background on policy gradient and SVRG in Section 2, we propose SVRPG, a variant of SVRG for the policy gradient framework, addressing all the dif\ufb01culties mentioned above (see Section 3). In Section 4 we provide convergence guarantees for our algorithm, and we show a convergence rate that has an O(1/T) dependence on the number T of iterations. In Section 5.2 we suggest how to set the meta-parameters of SVRPG, while in Section 5.3 we discuss some practical variants of the algorithm. 
Finally, in Section 7 we empirically evaluate the performance of our method on popular continuous RL tasks. 2. Preliminaries In this section, we provide the essential background on policy gradient methods and stochastic variance-reduced gradient methods for \ufb01nite-sum optimization. 2.1. Policy Gradient A Reinforcement Learning task (Sutton & Barto, 1998) can be modelled with a discrete-time continuous Markov Decision Process (MDP) M = {S, A, P, R, \u03b3, \u03c1}, where S is a continuous state space; A is a continuous action space; P is a Markovian transition model, where P(s\u2032|s, a) de\ufb01nes the transition density from state s to s\u2032 under action a; R is the reward function, where R(s, a) \u2208[\u2212R, R] is the expected reward for state-action pair (s, a); \u03b3 \u2208[0, 1) is the discount factor; and \u03c1 is the initial state distribution. The agent\u2019s behaviour is modelled as a policy \u03c0, where \u03c0(\u00b7|s) is the density distribution over A in state s. We consider episodic MDPs with effective horizon H.1 In this setting, we can limit our attention to trajectories of length H. A trajectory \u03c4 is a sequence of states and actions (s0, a0, s1, a1, . . . , sH\u22121, aH\u22121) observed by follow1The episode duration is a random variable, but the optimal policy can reach the target state (i.e., absorbing state) in less than H steps. This has not to be confused with a \ufb01nite horizon problem where the optimal policy is non-stationary. ing a stationary policy, where s0 \u223c\u03c1. We denote with p(\u03c4|\u03c0) the density distribution induced by policy \u03c0 on the set T of all possible trajectories (see Appendix A for the de\ufb01nition), and with R(\u03c4) the total discounted reward provided by trajectory \u03c4: R(\u03c4) = PH\u22121 t=0 \u03b3tR(st, at). Policies can be ranked based on their expected total reward: J(\u03c0) = E\u03c4\u223cp(\u00b7|\u03c0) [R(\u03c4)|M]. Solving an MDP M means \ufb01nding \u03c0\u2217\u2208arg max\u03c0{J(\u03c0)}. Policy gradient methods restrict the search for the best performing policy over a class of parametrized policies \u03a0\u03b8 = {\u03c0\u03b8 : \u03b8 \u2208Rd}, with the only constraint that \u03c0\u03b8 is differentiable w.r.t. \u03b8. For sake of brevity, we will denote the performance of a parametric policy with J(\u03b8) and the probability of a trajectory \u03c4 with p(\u03c4|\u03b8) (in some occasions, p(\u03c4|\u03b8) will be replaced by p\u03b8(\u03c4) for the sake of readability). The search for a locally optimal policy is performed through gradient ascent, where the policy gradient is (Sutton et al., 2000; Peters & Schaal, 2008a): \u2207J(\u03b8) = E \u03c4\u223cp(\u00b7|\u03b8) [\u2207log p\u03b8(\u03c4)R(\u03c4)] . (1) Notice that the distribution de\ufb01ning the gradient is induced by the current policy. This aspect introduces a nonstationarity in the sampling process. Since the underlying distribution changes over time, it is necessary to resample at each update or use weighting techniques such as importance sampling. Here, we consider the online learning scenario, where trajectories are sampled by interacting with the environment at each policy change. In this setting, stochastic gradient ascent is typically employed. At each iteration k > 0, a batch Dk N = {\u03c4i}N i=0 of N > 0 trajectories is collected using policy \u03c0\u03b8k. The policy is then updated as \u03b8k+1 = \u03b8k + \u03b1b \u2207NJ(\u03b8k), where \u03b1 is a step size and b \u2207NJ(\u03b8) is an estimate of Eq. 
(1) using Dk N. The most common policy gradient estimators (e.g., REINFORCE (Williams, 1992) and G(PO)MDP (Baxter & Bartlett, 2001)) can be expressed as follows b \u2207NJ(\u03b8) = 1 N N X n=1 g(\u03c4i|\u03b8), \u03c4i \u2208Dk N, (2) where g(\u03c4i|\u03b8) is an estimate of \u2207log p\u03b8(\u03c4i)R(\u03c4i). Although the REINFORCE de\ufb01nition is simpler than the G(PO)MDP one, the latter is usually preferred due to its lower variance. We refer the reader to Appendix A for details and a formal de\ufb01nition of g. The main limitation of plain policy gradient is the high variance of these estimators. The na\u00a8 \u0131ve approach of increasing the batch size is not an option in RL due to the high cost of collecting samples, i.e., by interacting with the environment. For this reason, literature has focused on the introduction of baselines (i.e., functions b : S \u00d7 A \u2192R) aiming to reduce the variance (e.g., Williams, 1992; Peters & Schaal, \fStochastic Variance-Reduced Policy Gradient 2008a; Thomas & Brunskill, 2017; Wu et al., 2018), see Appendix A for a formal de\ufb01nition of b. These baselines are usually designed to minimize the variance of the gradient estimate, but even them need to be estimated from data, partially reducing their effectiveness. On the other hand, there has been a surge of recent interest in variance reduction techniques for gradient optimization in supervised learning (SL). Although these techniques have been mainly derived for \ufb01nite-sum problems, we will show in Section 3 how they can be used in RL. In particular, we will show that the proposed SVRPG algorithm can take the best of both worlds (i.e., SL and RL) since it can be plugged into a policy gradient estimate using baselines. The next section has the aim to describe variance reduction techniques for \ufb01nite-sum problems. In particular, we will present the SVRG algorithm that is at the core of this work. 2.2. Stochastic Variance-Reduced Gradient Finite-sum optimization is the problem of maximizing an objective function f(\u03b8) which can be decomposed into the sum or average of a \ufb01nite number of functions zi(\u03b8): max \u03b8 ( f(\u03b8) = 1 N N X i=1 zi(\u03b8) ) . This kind of optimization is very common in machine learning, where each zi may correspond to a data sample xi from a dataset DN of size N (i.e., zi(\u03b8) = z(xi|\u03b8)). A common requirement is that z must be smooth and concave in \u03b8.2 Under this hypothesis, full gradient (FG) ascent (Cauchy, 1847) with a constant step size achieves a linear convergence rate in the number T of iterations (i.e., parameter updates) (Nesterov, 2013). However, each iteration requires N gradient computations, which can be too expensive for large values of N. Stochastic Gradient (SG) ascent (e.g., Robbins & Monro, 1951; Bottou & LeCun, 2004) overcomes this problem by sampling a single sample xi per iteration, but a vanishing step size is required to control the variance introduced by sampling. As a consequence, the lower per-iteration cost is paid with a worse, sub-linear convergence rate (Nemirovskii et al., 1983). Starting from SAG, a series of variations to SG have been proposed to achieve a better trade-off between convergence speed and cost per iteration: e.g., SAG (Roux et al., 2012), SVRG (Johnson & Zhang, 2013), SAGA (Defazio et al., 2014a), Finito (Defazio et al., 2014b), and MISO (Mairal, 2015). The common idea is to reuse past gradient computations to reduce the variance of the current estimate. 
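As a concrete illustration of the estimator in Eq. (2) above, the sketch below computes a REINFORCE-style per-trajectory term g(tau | theta) and averages it over a batch of trajectories. The one-dimensional linear-Gaussian policy and the (state, action, reward) trajectory encoding are assumptions made only for this example, not the paper's implementation.

```python
import numpy as np

def grad_log_pi(theta, sigma, s, a):
    # d/dtheta of log N(a; theta * s, sigma^2) = (a - theta * s) * s / sigma^2
    return (a - theta * s) * s / sigma ** 2

def g_reinforce(theta, sigma, trajectory, gamma=0.99):
    """Per-trajectory term g(tau | theta) of Eq. (2) in REINFORCE form:
    the summed score function times the discounted return R(tau)."""
    ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(trajectory))
    score = sum(grad_log_pi(theta, sigma, s, a) for s, a, _ in trajectory)
    return score * ret

def grad_estimate(theta, sigma, trajectories):
    """hat nabla_N J(theta) = (1/N) sum_i g(tau_i | theta)."""
    return float(np.mean([g_reinforce(theta, sigma, tau) for tau in trajectories]))
```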
In particular, Stochastic Variance-Reduced Gradient (SVRG) is often preferred to other similar methods for its limited storage requirements, which is a signi\ufb01cant advantage when deep and/or wide neural networks are employed. 2Note that we are considering a maximization problem instead of the classical minimization one. Algorithm 1 SVRG Input: a dataset DN, number of epochs S, epoch size m, step size \u03b1, initial parameter \u03b80 m := e \u03b8 0 for s = 0 to S \u22121 do \u03b8s+1 0 := e \u03b8 s = \u03b8s m e \u00b5 = \u2207f(e \u03b8 s) for t = 0 to m \u22121 do x \u223cU (DN) vs+1 t = e \u00b5 + \u2207z(x|\u03b8s+1 t ) \u2212\u2207z(x|e \u03b8 s) \u03b8s+1 t+1 = \u03b8s+1 t + \u03b1vs+1 t end for end for Concave case: return \u03b8S m Non-Concave case: return \u03b8s+1 t with (s, t) picked uniformly at random from {[0, S \u22121] \u00d7 [0, m \u22121]} The idea of SVRG (Algorithm 1) is to alternate full and stochastic gradient updates. Each m = O(N) iterations, a snapshot e \u03b8 of the current parameter is saved together with its full gradient \u2207f(e \u03b8) = 1 N P i \u2207z(xi|e \u03b8). Between snapshots, the parameter is updated with \u25bcf(\u03b8), a gradient estimate corrected using stochastic gradient. For any t \u2208 {0, . . . , m \u22121}: \u25bcf(\u03b8t) := vt = \u2207f(e \u03b8) + \u2207z(x|\u03b8t) \u2212\u2207z(x|e \u03b8), (3) where x is sampled uniformly at random from DN (i.e., x \u223cU(DN)). Note that t = 0 corresponds to a FG step (i.e., \u25bcf(\u03b80) = \u2207f(e \u03b8)) since \u03b80 := e \u03b8. The corrected gradient \u25bcf(\u03b8) is an unbiased estimate of \u2207f(\u03b8), and it is able to control the variance introduced by sampling even with a \ufb01xed step size, achieving a linear convergence rate without resorting to a plain full gradient. More recently, some extensions of variance reduction algorithms to the non-concave objectives have been proposed (e.g., Allen-Zhu & Hazan, 2016; Reddi et al., 2016a;b). In this scenario, f is typically required to be L-smooth, i.e., \r \r\u2207f(\u03b8\u2032) \u2212\u2207f(\u03b8) \r \r 2 \u2264L \r \r\u03b8\u2032 \u2212\u03b8 \r \r 2 for each \u03b8, \u03b8\u2032 \u2208Rn and for some Lipschitz constant L. Under this hypothesis, the convergence rate of SG is O(1/ \u221a T) (Ghadimi & Lan, 2013), i.e., T = O(1/\u03f52) iterations are required to get \u2225\u2207f(\u03b8)\u22252 2 \u2264\u03f5. Again, SVRG achieves the same rate as FG (Reddi et al., 2016a), which is O( 1 T ) in this case (Nesterov, 2013). The only additional requirement is to select \u03b8\u2217uniformly at random among all the \u03b8k instead of simply setting it to the \ufb01nal value (k being the iterations). 3. SVRG in Reinforcement Learning In online RL problems, the usual approach is to tune the batch size of SG to \ufb01nd the optimal trade-off between variance and speed. Recall that, compared to SL, the samples \fStochastic Variance-Reduced Policy Gradient are not \ufb01xed in advance but we need to collect them at each policy change. Since this operation may be costly, we would like to minimize the number of interactions with the environment. For these reasons, we would like to apply SVRG to RL problems in order to limit the variance introduced by sampling trajectories, which would ultimately lead to faster convergence. However, a direct application of SVRG to RL is not possible due to the following issues: Non-concavity: the objective function J(\u03b8) is typically non-concave. 
In\ufb01nite dataset: the RL optimization cannot be expressed as a \ufb01nite-sum problem. The objective function is an expected value over the trajectory density p\u03b8(\u03c4) of the total discounted reward, for which we would need an in\ufb01nite dataset. Non-stationarity: the distribution of the samples changes over time. In particular, the value of the policy parameter \u03b8 in\ufb02uences the sampling process. To deal with non-concavity, we require J(\u03b8) to be L-smooth, which is a reasonable assumption for common policy classes such as Gaussian3 and softmax (e.g., Furmston & Barber, 2012; Pirotta et al., 2015). Because of the in\ufb01nite dataset, we can only rely on an estimate of the full gradient. Harikandeh et al. (2015) analysed this scenario under the assumptions of z being concave, showing that SVRG is robust to an inexact computation of the full gradient. In particular, it is still possible to recover the original convergence rate if the error decreases at an appropriate rate. Bietti & Mairal (2017) performed a similar analysis on MISO. In Section 4 we will show how the estimation accuracy impacts on the convergence results with a non-concave objective. Finally, the non-stationarity of the optimization problem introduces a bias into the SVRG estimator in Eq. (3). To overcome this limitation we employ importance weighting (e.g., Rubinstein, 1981; Precup, 2000) to correct the distribution shift. We can now introduce Stochastic Variance-Reduced Policy Gradient (SVRPG) for a generic policy gradient estimator g. Pseudo-code is provided in Algorithm 2. The overall structure is the same as Algorithm 1, but the snapshot gradient is not exact and the gradient estimate used between snapshots is corrected using importance weighting:4 \u25bcJ(\u03b8t) = b \u2207NJ(e \u03b8) + g(\u03c4|\u03b8t) \u2212\u03c9(\u03c4|\u03b8t, e \u03b8)g(\u03c4|e \u03b8) for any t \u2208{0, . . . , m \u22121}, where b \u2207NJ(e \u03b8) is as in Eq. (2) where DN is sampled using the snapshot policy \u03c0e \u03b8, \u03c4 is 3See Appendix C for more details on the Gaussian policy case. 4Note that g can be any unbiased estimator, with or without baseline. The unbiasedness is required for theoretical results (e.g., Appendix A). Algorithm 2 SVRPG Input: number of epochs S, epoch size m, step size \u03b1, batch size N, mini-batch size B, gradient estimator g, initial parameter \u03b80 m := e \u03b8 0 := \u03b80 for s = 0 to S \u22121 do \u03b8s+1 0 := e \u03b8 s = \u03b8s m Sample N trajectories {\u03c4j} from p(\u00b7|e \u03b8 s) e \u00b5 = b \u2207NJ(e \u03b8 s) (see Eq. (2)) for t = 0 to m \u22121 do Sample B trajectories {\u03c4i} from p(\u00b7|\u03b8s+1 t ) cs+1 t = 1 B B\u22121 P i=0 \u0010 g(\u03c4i|\u03b8s+1 t ) \u2212\u03c9(\u03c4i|\u03b8s+1 t , e \u03b8 s)g(\u03c4i|e \u03b8 s) \u0011 vs+1 t = e \u00b5 + cs+1 t \u03b8s+1 t+1 = \u03b8s+1 t + \u03b1vs+1 t end for end for return \u03b8A := \u03b8s+1 t with (s, t) picked uniformly at random from {[0, S \u22121] \u00d7 [0, m \u22121]} sampled from the current policy \u03c0\u03b8t, and \u03c9(\u03c4|\u03b8t, e \u03b8) = p(\u03c4|e \u03b8) p(\u03c4|\u03b8t) is an importance weight from \u03c0\u03b8t to the snapshot policy \u03c0e \u03b8. Similarly to SVRG, we have that \u03b80 := e \u03b8, and the update is a FG step. Our update is still fundamentally on-policy since the weighting concerns only the correction term. However, this partial \u201coff-policyness\u201d represents an additional source of variance. 
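The corrected direction used between snapshots can be sketched as follows for the same one-dimensional linear-Gaussian policy assumed earlier; the helper names are illustrative and not taken from the paper's code. With all importance weights equal to one, the update reduces to the plain SVRG correction of Eq. (3).

```python
import numpy as np

def log_weight(theta_snap, theta_cur, sigma, trajectory):
    """log omega(tau | theta_cur, theta_snap) = log p(tau|theta_snap) - log p(tau|theta_cur).
    Transition terms cancel because the dynamics do not depend on theta."""
    lw = 0.0
    for s, a, _ in trajectory:
        lw += ((a - theta_cur * s) ** 2 - (a - theta_snap * s) ** 2) / (2 * sigma ** 2)
    return lw

def svrpg_direction(full_grad_snap, theta_cur, theta_snap, sigma, minibatch, g):
    """v_t = hat nabla_N J(theta_snap)
           + (1/B) sum_i [ g(tau_i|theta_cur) - omega_i * g(tau_i|theta_snap) ],
    with the mini-batch tau_i drawn from the current policy."""
    corr = [g(theta_cur, tau)
            - np.exp(log_weight(theta_snap, theta_cur, sigma, tau)) * g(theta_snap, tau)
            for tau in minibatch]
    return full_grad_snap + np.mean(corr, axis=0)
```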
This is a well-known issue of importance sampling (e.g., Thomas et al., 2015). To mitigate it, we use mini-batches of trajectories of size B \u226aN to average the correction, i.e., \u25bcJ(\u03b8t) := vt = b \u2207NJ(e \u03b8) (4) + 1 B B\u22121 X i=0 h g(\u03c4i|\u03b8t) \u2212\u03c9(\u03c4i|\u03b8t, e \u03b8)g(\u03c4i|e \u03b8) i ct . It is worth noting that the full gradient and the correction term have the same expected value: E\u03c4i\u223cp(\u00b7|\u03b8t) h 1 B PB\u22121 i=0 \u03c9(\u03c4i|\u03b8t, e \u03b8)g(\u03c4i|e \u03b8) i = \u2207J(e \u03b8).5 This property will be used to prove Lemma 3.1. The use of mini-batches is also common practice in SVRG since it can yield a performance improvement even in the supervised case (Harikandeh et al., 2015; Kone\u02c7 cn` y et al., 2016). It is easy to show that the SVRPG estimator has the following, desirable properties: Lemma 3.1. Let b \u2207NJ(\u03b8) be an unbiased estimator of (1) and let \u03b8\u2217\u2208arg min\u03b8{J(\u03b8)}. Then, the SVRG estimate 5The reader can refer to Appendix A for off-policy gradients and variants of REINFORCE and G(PO)MDP. \fStochastic Variance-Reduced Policy Gradient in (4) is unbiased E [\u25bcJ(\u03b8)] = \u2207J(\u03b8). (5) and regardless of the mini-batch size B:6 Var [\u25bcJ(\u03b8\u2217)] = Var h b \u2207NJ(\u03b8\u2217) i . (6) Previous results hold for both REINFORCE and G(PO)MDP. In particular, the latter result suggests that an SVRG-like algorithm using \u25bcJ(\u03b8) can achieve faster convergence, by performing much more parameter updates with the same data without introducing additional variance (at least asymptotically). Note that the randomized return value of Algorithm 2 does not affect online learning at all, but will be used as a theoretical tool in the next section. 4. Convergence Guarantees of SVRPG In this section, we state the convergence guarantees for SVRPG with REINFORCE or G(PO)MDP gradient estimator. We mainly leverage on the recent analysis of nonconcave SVRG (Reddi et al., 2016a; Allen-Zhu & Hazan, 2016). Each of the three challenges presented at the beginning of Section 3 can potentially prevent convergence, so we need additional assumptions. In Appendix C we show how Gaussian policies satisfy these assumptions. 1) Non-concavity. A common assumption, in this case, is to assume the objective function to be L-smooth. However, in RL we can consider the following assumption which is suf\ufb01cient for the L-smoothness of the objective (see Lemma B.2). Assumption 4.1 (On policy derivatives). For each stateaction pair (s, a), any value of \u03b8, and all parameter components i, j there exist constants 0 \u2264G, F < \u221esuch that: |\u2207\u03b8i log \u03c0\u03b8(a|s)| \u2264G, \f \f \f \f \u22022 \u2202\u03b8i\u2202\u03b8j log \u03c0\u03b8(a|s) \f \f \f \f \u2264F. 2) FG Approximation. Since we cannot compute an exact full gradient, we require the variance of the estimator to be bounded. This assumption is similar in spirit to the one in (Harikandeh et al., 2015). Assumption 4.2 (On the variance of the gradient estimator). There is a constant V < \u221esuch that, for any policy \u03c0\u03b8: Var [g(\u00b7|\u03b8)] \u2264V. 3) Non-stationarity. Similarly to what is done in SL (Cortes et al., 2010), we require the variance of the importance weight to be bounded. 6 For any vector x, we use Var[x] to denote the trace of the covariance matrix, i.e., Tr(E \u0002 (x \u2212E [x])(x \u2212E [x])T \u0003 ). Assumption 4.3 (On the variance of importance weights). 
There is a constant W < \u221esuch that, for each pair of policies encountered in Algorithm 2 and for each trajectory, Var [\u03c9(\u03c4|\u03b81, \u03b82)] \u2264W, \u2200\u03b81, \u03b82 \u2208Rd, \u03c4 \u223cp(\u00b7|\u03b81). Differently from Assumptions 4.1 and 4.2, Assumption 4.3 must be enforced by a proper handling of the epoch size m. We can now state the convergence guarantees for SVRPG. Theorem 4.4 (Convergence of the SVRPG algorithm). Assume the REINFORCE or the G(PO)MDP gradient estimator is used in SVRPG (see Equation (4)). Under Assumptions 4.1, 4.2 and 4.3, the parameter vector \u03b8A returned by Algorithm 2 after T = m \u00d7 S iterations has, for some positive constants \u03c8, \u03b6, \u03be and for proper choice of the step size \u03b1 and the epoch size m, the following property: E h \u2225\u2207J(\u03b8A)\u22252 2 i \u2264J(\u03b8\u2217) \u2212J(\u03b80) \u03c8T + \u03b6 N + \u03be B , where \u03b8\u2217is a global optimum and \u03c8, \u03b6, \u03be depend only on G, F, V, W, \u03b1 and m. Refer to Appendix B for a detailed proof involving the de\ufb01nition of the constants and the meta-parameter constraints. By analysing the upper-bound in Theorem 4.4 we observe that: I) the O(1/T) term is coherent with results on nonconcave SVRG (e.g., Reddi et al., 2016a); II) the O(1/N) term is due to the FG approximation and is analogous to the one in (Harikandeh et al., 2015); III) the O(1/B) term is due to importance weighting. To achieve asymptotic convergence, the batch size N and the mini-batch size B should increase over time. In practice, it is enough to choose N and B large enough to make the second and the third term negligible, i.e., to mitigate the variance introduced by FG approximation and importance sampling, respectively. Once the last two terms can be neglected, the number of trajectories needed to achieve \u2225\u2207J(\u03b8)\u22252 2 \u2264\u03f5 is O( B+N/m \u03f5 ). In this sense, an advantage over batch gradient ascent can be achieved with properly selected meta-parameters. In Section 5.2 we propose a joint selection of step size \u03b1 and epoch size m. Finally, from the return statement of Algorithm 2, it is worth noting that J(\u03b8A) can be seen as the average performance of all the policies tried by the algorithm. This is particularly meaningful in the context of online learning that we are considering in this paper. 5. Remarks on SVRPG The convergence guarantees presented in the previous section come with requirements on the meta-parameters (i.e., \u03b1 and m) that may be too conservative for practical applications. Here we provide a practical and automatic way to choose the step size \u03b1 and the number of sub-iterations m performed between snapshots. Additionally, we provide \fStochastic Variance-Reduced Policy Gradient a variant of SVRPG exploiting a variance-reduction technique for importance weights. Despite lacking theoretical guarantees, we will show in Section 7 that this method can outperform the baseline SVRPG (Algorithm 2). 5.1. Full Gradient Update As noted in Section 3, the update performed at the beginning of each epoch is equivalent to a full-gradient update. In our setting, where collecting samples is particularly expensive, the B trajectories collected using the snapshot trajectory \u03c0e \u03b8 s feels like a waste of data (the term P i g(\u03c4i) \u2212\u03c9(\u03c4i)g(\u03c4i) = 0 since \u03b80 = e \u03b8). 
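For reference, the bound of Theorem 4.4 stated above reads, in display form,
\[
\mathbb{E}\!\left[\,\lVert \nabla J(\theta_A) \rVert_2^2\,\right]
\;\le\; \frac{J(\theta^{\star}) - J(\theta_0)}{\psi\, T}
\;+\; \frac{\zeta}{N} \;+\; \frac{\xi}{B},
\]
with T = m S the total number of iterations and psi, zeta, xi depending only on G, F, V, W, alpha, and m.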
In practice, we just perform an approximate full gradient update using the N trajectories sampled to compute b \u2207NJ(e \u03b8 s), i.e., \u03b8s+1 1 = e \u03b8 s + \u03b1b \u2207NJ(e \u03b8 s) \u03b8s+1 t+1 = \u03b8s+1 t + \u03b1\u25bcJ(\u03b8s+1 t ) for t = 1, . . . , m \u22121. In the following, we will always use this practical variant. 5.2. Meta-Parameter Selection The step size \u03b1 is crucial to balance variance reduction and ef\ufb01ciency, while the epoch length m in\ufb02uences the variance introduced by the importance weights. Low values of m are associated with small variance but increase the frequency of snapshot points (which means many FG computations). High values of m may move policy \u03c0\u03b8t far away from the snapshot policy \u03c0e \u03b8, causing large variance in the importance weights. We will jointly set the two meta-parameters. Adaptive step size. A standard way to deal with noisy gradients is to use adaptive strategies to compute the step size. ADAptive Moment estimation (ADAM) (Kingma & Ba, 2014) stabilizes the parameter update by computing learning rates for each parameter based on an incremental estimate of the gradient variance. Due to this feature, we would like to incorporate ADAM in the structure of the SVRPG update. Recall that SVRPG performs two different updates of the parameters \u03b8: I) FG update in the snapshot; II) corrected gradient update in the sub-iterations. Given this structure, we suggest using two separate ADAM estimators: \u03b8s+1 1 = e \u03b8 s + \u03b1FG s \u0010 b \u2207NJ(e \u03b8 s) \u0011 \u03b8s+1 t+1 = \u03b8s+1 t + \u03b1SI s+1,t \u0000\u25bcJ(\u03b8s+1 t ) \u0001 for t = 1, . . . , m \u22121, where \u03b1FG s is associated with the snapshot and \u03b1SI s+1,t with the sub-iterations (see Appendix D for details). By doing so, we decouple the contribution of the variance due to the approximate FG from the one introduced by the subiterations. Note that these two terms have different orders of magnitude since are estimated with a different number of trajectories (B \u226aN) and the estimator in the snapshot does not require importance weights. The use of two ADAM estimators allows to capture and exploit this property. Adaptive epoch length. It is easy to imagine that a prede\ufb01ned schedule (e.g., m \ufb01xed in advance or changed with a policy-independent process) may poorly perform due to the high variability of the updates. In particular, given a \ufb01xed number of sub-iterations m, the variance of the updates in the sub-iterations depends on the snapshot policy and the sampled trajectories. Since the ADAM estimate partly captures such variability, we propose to take a new snapshot (i.e., interrupt the sub-iterations) whenever the step size \u03b1SI proposed by ADAM for the sub-iterations is smaller than the one for the FG (i.e., \u03b1FG). If the latter condition is veri\ufb01ed, it amounts to say that the noise in the corrected gradient has overcome the information of the FG. Formally, the stopping condition is as follows If \u03b1FG N > \u03b1SI B then take snapshot, where we have introduced N and B to take into account the trajectory ef\ufb01ciency (i.e., weighted advantage). The less the number of trajectories used to update the policy, the better. Including the batch sizes in the stopping condition allows us to optimize the trade-off between the quality of the updates and the cost of performing them. 5.3. 
Normalized Importance Sampling As mentioned in Section 5.2, importance weights are an additional source of variance. A standard way to cope with this issue is self-normalization (e.g., Precup, 2000; Owen, 2013). This technique can reduce the variance of the importance weights at the cost of introducing some bias (Owen, 2013, Chapter 9). Whether the trade-off is advantageous depends on the speci\ufb01c task. Introducing self-normalization in the context of our algorithm, we switch from Eq. (4) to: \u25bcJ(\u03b8t) = b \u2207NJ(e \u03b8) + 1 B B\u22121 X i=0 [g(\u03c4i|\u03b8t)] \u22121 \u2126 B\u22121 X i=0 h \u03c9(\u03c4i|\u03b8t, e \u03b8)g(\u03c4i|e \u03b8) i . where \u2126= PB\u22121 i=0 \u03c9(\u03c4i|\u03b8t, e \u03b8). In Section 7 we show that self-normalization can provide a performance improvement. 6. Related Work Despite the considerable interest received in SL, variancereduced gradient approaches have not attracted the RL community. As far as we know, there are just two applications of SVRG in RL. The \ufb01rst approach (Du et al., 2017) aims to exploit SVRG for policy evaluation. The policy evaluation problem is more straightforward than the one faced in this paper (control problem). In particular, since the goal is to evaluate just the performance of a prede\ufb01ned policy, the optimization problem is stationary. The setting considered in the paper is the one of policy evaluation by minimizing the \fStochastic Variance-Reduced Policy Gradient empirical mean squared projected Bellman error (MSPBE) with a linear approximation of the value function. Du et al. (2017) shown that this problem can be equivalently reformulated as a convex-concave saddle-point problem that is characterized by a \ufb01nite-sum structure. This problem can be solved using a variant of SVRG (Palaniappan & Bach, 2016) for which convergence guarantees have been provided. The second approach (Xu et al., 2017) uses SVRG as a practical method to solve the optimization problem faced by Trust Region Policy Optimization (TRPO) at each iteration. This is just a direct application of SVRG to a problem having \ufb01nite-sum structure since no speci\ufb01c structure of the RL problem is exploited. It is worth to mention that, for practical reasons, the authors proposed to use a Newton conjugate gradient method with SVRG. In the recent past, there has been a surge of studies investigating variance reduction techniques for policy gradient methods. The speci\ufb01c structure of the policy gradient allows incorporating a baseline (i.e., a function b : S \u00d7 A \u2192R) without affecting the unbiasedness of the gradient (e.g., Williams, 1992; Weaver & Tao, 2001; Peters & Schaal, 2008b; Thomas & Brunskill, 2017; Wu et al., 2018). Although the baseline can be arbitrarily selected, literature often refers to the optimal baseline as the one minimizing the variance of the estimate. Nevertheless, even the baseline needs to be estimated from data. This fact may partially reduce its effectiveness by introducing variance. Even if these approaches share the same goal as SVRG, they are substantially different. In particular, the proposed SVRPG does not make explicit use of the structure of the policy gradient framework, and it is independent of the underlying gradient estimate (i.e., with or without baseline). This suggests that would be possible to integrate an ad-hoc SVRPG baseline to further reduce the variance of the estimate. 
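Returning to the self-normalized estimator of Section 5.3 above, the only change with respect to Eq. (4) is that the omega-weighted snapshot term is divided by Omega = sum_i omega_i rather than by the mini-batch size B, as in the following sketch (function and argument names are illustrative).

```python
import numpy as np

def self_normalized_correction(omegas, g_cur, g_snap):
    """Correction term c_t with self-normalization (Section 5.3).
    omegas: importance weights, shape (B,); g_cur, g_snap: per-trajectory
    gradients under the current and snapshot parameters, shape (B, d)
    (or (B,) for a scalar parameter)."""
    omegas = np.asarray(omegas, dtype=float)
    B = len(omegas)
    g_cur = np.asarray(g_cur, dtype=float).reshape(B, -1)
    g_snap = np.asarray(g_snap, dtype=float).reshape(B, -1)
    return g_cur.mean(axis=0) - (omegas[:, None] * g_snap).sum(axis=0) / omegas.sum()

print(self_normalized_correction([0.8, 1.3], [[0.5], [0.7]], [[0.4], [0.9]]))
```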
Since this paper is about the applicability of SVRG technique to RL, we consider this topic as future work. Additionally, the experiments show that SVRPG has an advantage over G(PO)MPD even when the baseline is used (see the half-cheetah domain in Section 7). Concerning importance weighting techniques, RL has made extensive use of them for off-policy problems (e.g., Precup, 2000; Thomas et al., 2015). However, as mentioned before, SVRPG cannot be compared to such methods since it is in all respects an on-policy algorithm. Here, importance weighting is just a statistical tool used to preserve the unbiasedness of the corrected gradient. 7. Experiments In this section, we evaluate the performance of SVRPG and compare it with policy gradient (PG) on well known continuous RL tasks: Cart-pole balancing and Swimmer (e.g., Duan et al., 2016). We consider G(PO)MDP since it has a smaller variance than REINFORCE. For our algorithm, we use a batch size N = 100, a mini-batch size B = 10, and the jointly adaptive step size \u03b1 and epoch length m proposed in Section 5.2. Since the aim of this comparison is to show the improvement that SVRG-\ufb02avored variance reduction brings to SG in the policy gradient framework, we set the batch size of the baseline policy gradient algorithm to B. In this sense, we measure the improvement yielded by computing snapshot gradients and using them to adjust parameter updates. Since we evaluate on-line performance over the number of sampled trajectories, the cost of computing such snapshot gradients is automatically taken into consideration. To make the comparison fair, we also use Adam in the baseline PG algorithm, which we will denote simply as G(PO)MDP in the following. In all the experiments, we use deep Gaussian policies with adaptive standard deviation (details on network architecture in Appendix E). Each experiment is run 10 times with a random policy initialization and seed, but this initialization is shared among the algorithms under comparison. The length of the experiment, i.e., the total number of trajectories, is \ufb01xed for each task. Performance is evaluated by using test-trajectories on a subset of the policies considered during the learning process. We provide average performance with 90% bootstrap con\ufb01dence intervals. Task implementations are from the rllab library (Duan et al., 2016), on which our agents are also based.7 More details on meta-parameters and exhaustive task descriptions are provided in Appendix E. Figure 1a compares SVRPG with G(PO)MDP on a continuous variant of the classical Cart-pole task, which is a 2D balancing task. Despite using more trajectories on average for each parameter update, our algorithm shows faster convergence, which can be ascribed to the better quality of updates due to variance reduction. The Swimmer task is a 3D continuous-control locomotion task. This task is more dif\ufb01cult than cart-pole. In particular, the longer horizon and the more complex dynamics can have a dangerous impact on the variance of importance weights. In this case, the self-normalization technique proposed in Section 5.3 brings an improvement (even if not statistically signi\ufb01cant), as shown in Figure 1b. Figure 1c shows self-normalized SVRPG against G(PO)MDP. Our algorithm outperforms G(PO)MDP for almost the entire learning process. Also here, we note an increase of speed in early iterations, and, toward the end of the learning process, the improvement becomes statistically signi\ufb01cant. Preliminary results on actor-critic. 
Another variance-reduction technique in policy gradient consists of using baselines or critics. This tool is orthogonal to the methods described in this paper, and the theoretical results of Section 4 are general in this sense. In the experiments described so far, we compared against the so-called actor-only G(PO)MDP, i.e., without the baseline.
Footnote 7: Code available at github.com/Dam930/rllab.
[Figure 1: Comparison of on-line performance over sampled trajectories, with 90% confidence intervals. Panels: (a) SVRPG vs G(PO)MDP on Cart-pole; (b) Self-Normalized SVRPG vs SVRPG on Swimmer; (c) Self-Normalized SVRPG vs G(PO)MDP on Swimmer; (d) Self-Normalized SVRPG vs G(PO)MDP on Half-Cheetah. Axes show trajectories against average return; the plotted curves are not recoverable from the extraction.]
To move towards a more general understanding of the variance issue in policy gradient, we also test SVRPG in an actor-critic scenario. To do so, we consider the more challenging MuJoCo (Todorov et al., 2012) Half-Cheetah task, a 3D locomotion task that has a larger state-action space than Swimmer. Figure 1d compares self-normalized SVRPG and G(PO)MDP on Half-Cheetah, using the critic suggested in (Duan et al., 2016) for both algorithms. Results are promising, showing that a combination of the baseline usage and SVRG-like variance reduction can yield an improvement that the two techniques alone are not able to achieve. Moreover, SVRPG presents a noticeably lower variance. The performance of actor-critic G(PO)MDP on Half-Cheetah is coherent with the one reported in (Duan et al., 2016). Other results are not comparable since we did not use the critic.
Footnote 8: Duan et al. (2016) report results on REINFORCE. However, inspection of the rllab code and documentation reveals that it is actually PGT (Sutton et al., 2000), which is equivalent to G(PO)MDP (shown by Peters & Schaal, 2008b). Using the name REINFORCE in a general way is inaccurate, but widespread.
8. Conclusion. In this paper, we introduced SVRPG, a variant of SVRG designed explicitly for RL problems. The control problem considered in the paper has a series of difficulties that are not common in SL. Among them, non-concavity and approximate estimates of the FG have been analysed independently in SL (e.g., Allen-Zhu & Hazan, 2016; Reddi et al., 2016a; Harikandeh et al., 2015) but never combined. Nevertheless, the main issue in RL is the non-stationarity of the sampling process, since the distribution underlying the objective function is policy-dependent. We have shown that, by exploiting importance weighting techniques, it is possible to overcome this issue and preserve the unbiasedness of the corrected gradient. We have additionally shown that, under mild assumptions that are often verified in RL applications, it is possible to derive convergence guarantees for SVRPG. Finally, we have empirically shown that practical variants of the theoretical SVRPG version can outperform classical actor-only approaches on benchmark tasks. Preliminary results support the effectiveness of SVRPG also with a commonly used baseline for the policy gradient.
Despite that, we believe that it will be possible to derive a baseline designed explicitly for SVRPG to exploit the RL structure and the SVRG idea jointly. Another possible improvement would be to employ the natural gradient (Kakade, 2002) to better control the effects of parameter updates on the variance of importance weights. Future work should also focus on making batch sizes N and B adaptive, as suggested in (Papini et al., 2017). \fStochastic Variance-Reduced Policy Gradient Acknowledgments This research was supported in part by French Ministry of Higher Education and Research, Nord-Pas-de-Calais Regional Council and French National Research Agency (ANR) under project ExTra-Learn (n.ANR-14-CE24-001001)." + } + ], + "Mirco Mutti": [ + { + "url": "http://arxiv.org/abs/2310.07518v2", + "title": "Exploiting Causal Graph Priors with Posterior Sampling for Reinforcement Learning", + "abstract": "Posterior sampling allows exploitation of prior knowledge on the\nenvironment's transition dynamics to improve the sample efficiency of\nreinforcement learning. The prior is typically specified as a class of\nparametric distributions, the design of which can be cumbersome in practice,\noften resulting in the choice of uninformative priors. In this work, we propose\na novel posterior sampling approach in which the prior is given as a (partial)\ncausal graph over the environment's variables. The latter is often more natural\nto design, such as listing known causal dependencies between biometric features\nin a medical treatment study. Specifically, we propose a hierarchical Bayesian\nprocedure, called C-PSRL, simultaneously learning the full causal graph at the\nhigher level and the parameters of the resulting factored dynamics at the lower\nlevel. We provide an analysis of the Bayesian regret of C-PSRL that explicitly\nconnects the regret rate with the degree of prior knowledge. Our numerical\nevaluation conducted in illustrative domains confirms that C-PSRL strongly\nimproves the efficiency of posterior sampling with an uninformative prior while\nperforming close to posterior sampling with the full causal graph.", + "authors": "Mirco Mutti, Riccardo De Santi, Marcello Restelli, Alexander Marx, Giorgia Ramponi", + "published": "2023-10-11", + "updated": "2024-04-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION Posterior sampling (Thompson, 1933), a.k.a. Thompson sampling, is a powerful alternative to classic optimistic methods for Reinforcement Learning (RL, Sutton & Barto, 2018) as it guarantees outstanding sample efficiency (Osband et al., 2013) through an explicit model of the epistemic uncertainty that allows exploiting prior knowledge over the environment\u2019s dynamics. Specifically, Posterior Sampling for Reinforcement Learning (PSRL, Strens, 2000; Osband et al., 2013) implements a Bayesian procedure in which, at every episode k, (1) a model of the environment\u2019s dynamics is sampled from a parametric prior distribution Pk, (2) an optimal policy \u03c0k is computed (e.g., through value iteration (Bellman, 1957)) according to the sampled model, (3) a posterior update is performed on the prior parameters to incorporate in Pk+1 the evidence collected by running \u03c0k in the true environment. 
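The three steps enumerated above translate into the schematic loop below; sample_model, plan, update_posterior, and the environment's rollout interface are placeholders assumed for this sketch rather than components specified by the paper.

```python
def psrl(prior, true_env, n_episodes, sample_model, plan, update_posterior):
    """Schematic posterior sampling for RL (PSRL): sample a model from the
    current posterior, act greedily with respect to it, then update the posterior."""
    posterior = prior
    for _ in range(n_episodes):
        model = sample_model(posterior)       # (1) draw the transition dynamics
        policy = plan(model)                  # (2) e.g., value iteration on the sample
        episode = true_env.rollout(policy)    # interact with the real environment
        posterior = update_posterior(posterior, episode)  # (3) Bayesian update
    return posterior
```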
Under the assumption that the true environment\u2019s dynamics are sampled with positive probability from the prior P0, the latter procedure is provably efficient as it showcases a Bayesian regret that scales with O( \u221a K) being K the total number of episodes (Osband & Van Roy, 2017). Although posterior sampling has been also praised for its empirical prowess (Chapelle & Li, 2011), specifying the prior through a class of parametric distributions, a crucial requirement of PSRL, can be cumbersome in practice. Let us take a Dynamic Treatment Regime (DTR, Murphy, 2003) as an illustrative application. Here, we aim to overcome a patient\u2019s disease by choosing, at each stage, a treatment based on the patient\u2019s evolving conditions and previously administered treatments. The goal is to identify the best treatment for the specific patient quickly. Medicine provides plenty of prior knowledge to help solve the DTR problem. However, it is not easy to translate this knowledge into a parametric prior distribution that is general enough to include the model of any patient while being sufficiently narrow to foster efficiency. Instead, it is remarkably easy to list some known causal relationships between patient\u2019s state variables, such as heart rate and blood pressure, or diabetes and glucose level. Those causal edges might come from experts\u2019 knowledge (e.g., physicians) or previous clinical studies. A prior in the form of a causal graph is more natural to specify for practitioners, \u2217Work done while the author was at Politecnico di Milano. \u2020Joint senior-authorship. 1 arXiv:2310.07518v2 [cs.LG] 8 Apr 2024 \fPublished as a conference paper at ICLR 2024 who might be unaware of the intricacies of Bayesian statistics. Posterior sampling does not currently support the specification of the prior through a causal graph, which limits its applicability. This paper proposes a novel posterior sampling methodology that can exploit a prior specified through a partial causal graph over the environment\u2019s variables. Notably, a complete causal graph allows for a factorization of the environment\u2019s dynamics, which can be then expressed as a Factored Markov Decision Process (FMDP, Boutilier et al., 2000). PSRL can be applied to FMDPs, as demonstrated by previous work (Osband & Van Roy, 2014), where the authors assume to know the complete causal graph. However, this assumption is often unreasonable in practical applications.1 Instead, we assume to have partial knowledge of the causal graph, which leads to considering a set of plausible FMDPs. Taking inspiration from (Hong et al., 2020; 2022b;a; Kveton et al., 2021), we design a hierarchical Bayesian procedure, called Causal PSRL (C-PSRL), extending PSRL to the setting where the true model lies within a set of FMDPs (induced by the causal graph prior). At each episode, C-PSRL first samples a factorization consistent with the causal graph prior. Then, it samples the model of the FMDP from a lower-level prior that is conditioned on the sampled factorization. After that, the algorithm proceeds similarly to PSRL on the sampled FMDP. Having introduced C-PSRL, we study the Bayesian regret it induces on the footsteps of previous analyses for PSRL in FMDPs (Osband & Van Roy, 2014) and hierarchical posterior sampling (Hong et al., 2022b). 
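A minimal sketch of the hierarchical procedure just described, assuming hypothetical helpers sample_graph, sample_fmdp, plan, and update that only mark where the two posteriors are sampled and refreshed:

```python
def c_psrl(graph_prior, param_prior, true_env, n_episodes,
           sample_graph, sample_fmdp, plan, update):
    """Schematic C-PSRL: the higher level samples a factorization consistent
    with the (partial) causal graph prior, the lower level samples the factored
    dynamics conditioned on it, and both posteriors are updated after acting."""
    graph_post, param_post = graph_prior, param_prior
    for _ in range(n_episodes):
        graph = sample_graph(graph_post)         # higher level: candidate factorization
        fmdp = sample_fmdp(param_post, graph)    # lower level: factored dynamics
        policy = plan(fmdp)                      # e.g., value iteration on the sample
        episode = true_env.rollout(policy)
        graph_post, param_post = update(graph_post, param_post, graph, episode)
    return graph_post, param_post
```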
Our analysis shows that C-PSRL takes the best of both worlds by avoiding a direct dependence on the number of states in the regret (as in FMDPs) and without requiring a full causal graph prior (as in hierarchical posterior sampling). Moreover, we can analytically capture the dependency of the Bayesian regret on the number of causal edges known a priori and encoded in the (partial) causal graph prior. Finally, we empirically validate C-PSRL against two relevant baselines: PSRL with an uninformative prior, i.e., that does not model potential factorizations in the dynamics, and PSRL equipped with the full knowledge of the causal graph (an oracle prior). We carry out the comparison in simple yet illustrative domains, which show that exploiting a causal graph prior improves efficiency over uninformative priors while being only slightly inferior to the oracle prior. In summary, the main contributions of this paper include the following: \u2022 A novel problem formulation that links PSRL with a prior expressed as a partial causal graph to the problem of learning an FMDP with unknown factorization (Section 2); \u2022 A methodology (C-PSRL) that extends PSRL to exploit a partial causal graph prior (Section 3); \u2022 The analysis of the Bayesian regret of C-PSRL, which is \u02dc O( p K/2\u03b7)2 where K is the total number of episodes and \u03b7 is the degree of prior knowledge (Section 4); \u2022 An ancillary result on causal discovery that shows how a (sparse) super-graph of the true causal graph can be extracted from a run of C-PSRL as a byproduct (Section 5); \u2022 An experimental evaluation of the performance of C-PSRL against PSRL with uninformative or oracle priors in illustrative domains (Section 6). Finally, the aim of this work is to enable the use of posterior sampling for RL in relevant applications through a causal perspective on prior specification. We believe this contribution can help to close the gap between PSRL research and actual adoption of PSRL methods in real-world problems. 2 PROBLEM FORMULATION In this section, we first provide preliminary background on graphical causal models (Section 2.1) and Markov decision processes (Section 2.2). Then, we explain how a causal graph on the variables of a Markov decision process induces a factorization of its dynamics (Section 2.3). Finally, we formalize the reinforcement learning problem in the presence of a causal graph prior (Section 2.4). Notation. With few exceptions, we will denote a set or space as A, their elements as a \u2208A, constants or random variables with A, and functions as f. We denote \u2206(A) the probability simplex over A, and [A] the set of integers {1, . . . , A}. For a d-dimensional vector x, we define the scope operator x[I] := N i\u2208I xi for any set I \u2286[d]. When I = {i} is a singleton, we use x[i] as a shortcut for x[{i}]. A recap of the notation, which is admittedly involved, can be found in Appendix A. 1DTR is an example, where several causal relations affecting patient\u2019s conditions remain a mystery. 2We report regret rates with the common \u201cBig-O\u201d notation, in which \u02dc O hides logarithmic factors. Note that the rate here is simplified to highlight the most relevant factors. A complete rate can be found in Theorem 4.1. 2 \fPublished as a conference paper at ICLR 2024 2.1 CAUSAL GRAPHS Let X = {Xj}dX j=1 and Y = {Yj}dY j=1 be sets of random variables taking values xj, yj \u2208[N] respectively, and let p : X \u2192\u2206(Y) a strictly positive probability density. 
Further, let G = (X, Y, z) be a bipartite Directed Acyclic Graph (DAG), or bigraph, having left variables X, right variables Y, and a set of edges z \u2286X \u00d7 Y. We denote as zj the parents of the variable Yj \u2208Y, such as zj = {i \u2208[dX] | (Xi, Yj) \u2208z} and z = S j\u2208[dY ] S i\u2208zj{(Xi, Yj)}. We say that G is Z-sparse if maxj\u2208[dY ] |zj| \u2264Z \u2264dX, and we call Z the degree of sparseness of G. The tuple (p, G) is called a graphical causal model (Pearl, 2009) if p fulfills the Markov factorization property with respect to G, that is p(X, Y) = p(X)p(Y|X) = p(X) Q j\u2208[dY ] pj(y[j]|x[zj]) and all interventional distributions are well defined. 3 Note that the causal model that we consider in this paper does not admit confounding. Further, we can exclude \u201cvertical\u201d edges in Y \u00d7 Y and directed edges Y \u00d7 X. Finally, we call causal graph the component G of a graphical causal model. 2.2 MARKOV DECISION PROCESSES A finite episodic Markov Decision Process (MDP, Puterman, 2014) is defined throug the tuple M := (S, A, p, r, \u00b5, H), where S is a state space of size S, A is an action space of size A, p : S \u00d7 A \u2192 \u2206(S) is a Markovian transition model such that p(s\u2032|s, a) denotes the conditional probability of the next state s\u2032 given the state s and action a, r : S \u00d7 A \u2192\u2206([0, 1]) is a reward function such that the reward collected performing action a in state s is distributed as r(s, a) with mean R(s, a) = E[r(s, a)], \u00b5 \u2208\u2206(S) is the initial state distribution, H < \u221eis the episode horizon. An agent interacts with the MDP as follows. First, the initial state is drawn s1 \u223c\u00b5. For each step h < H, the agent selects an action ah \u2208A. Then, they collect a reward rh \u223cr(sh, ah) while the state transitions to sh+1 \u223cp(\u00b7|sh, ah). The episode ends when sH is reached. The strategy from which the agent selects an action at each step is defined through a non-stationary, stochastic policy \u03c0 = {\u03c0h}h\u2208[H] \u2208\u03a0, where each \u03c0h : S \u2192\u2206(A) is a function such that \u03c0h(a|s) denotes the conditional probability of selecting action a in state s at step h, and \u03a0 is the policy space. A policy \u03c0 \u2208\u03a0 can be evaluated through its value function V \u03c0 h : S \u2192[0, H], which is the expected sum of rewards collected under \u03c0 starting from state s at step h, i.e., V \u03c0 h (s) := E \u03c0 \" H X h\u2032=h R(sh\u2032, ah\u2032) \f \f \fsh = s # , \u2200s \u2208S, h \u2208[H]. We further define the value function of \u03c0 in the MDP M under \u00b5 as VM(\u03c0) := Es\u223c\u00b5[V \u03c0 1 (s)]. 2.3 CAUSAL STRUCTURE INDUCES FACTORIZATION In the previous section, we formulated the MDP in a tabular representation, where each state (action) is identified by a unique symbol s \u2208S (a \u2208A). However, in relevant real-world applications, the states and actions may be represented through a finite number of features, say dS and dA features respectively. The DTR problem is an example, where state features can be, e.g., blood pressure and glucose level, action features can be indicators on whether a particular medication is administered. Let those state and action features be modeled by random variables in the interaction between an agent and the MDP, we can consider additional structure in the process by considering the causal graph of its variables, such that the value of a variable only depends on the values of its causal parents. 
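To spell out how the causal structure induces the factorization discussed above, the sketch below represents a Z-sparse bipartite graph by its parent sets z_j and samples a next feature vector from p(y|x) = prod_j p_j(y[j] | x[z_j]). The binary features and the conditional probability tables are illustrative assumptions; indices are 0-based here, whereas the paper uses 1-based index sets [d].

```python
import itertools
import numpy as np

def is_z_sparse(parents, Z):
    """parents[j] = z_j, the parent set of Y_j; the graph is Z-sparse if
    every right node has at most Z parents."""
    return all(len(zj) <= Z for zj in parents)

def factored_transition(x, parents, cond_probs, rng):
    """Sample y ~ p(.|x) = prod_j p_j(y[j] | x[z_j]) for binary features,
    where cond_probs[j] maps each configuration of x[z_j] to P(y_j = 1)."""
    y = []
    for zj, pj in zip(parents, cond_probs):
        key = tuple(x[i] for i in zj)            # the scope x[z_j]
        y.append(int(rng.random() < pj[key]))
    return tuple(y)

rng = np.random.default_rng(0)
parents = [(0,), (0, 1)]                         # z_1 = {X_1}, z_2 = {X_1, X_2}
cond_probs = [
    {(0,): 0.1, (1,): 0.9},
    {k: 0.5 for k in itertools.product((0, 1), repeat=2)},
]
assert is_z_sparse(parents, Z=2)
print(factored_transition((1, 0), parents, cond_probs, rng))
```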
Looking back to DTR, we might know that the value of the blood pressure at step h + 1 only depends on its value at step h and whether a particular medication has been administered. Formally, we can show that combining an MDP M = (S, A, p, r, \u00b5, H) with a causal graph over its variables, which we denote as GM = (X, Y, z), gives a factored MDP (Boutilier et al., 2000) F = ({Xj}dX j=1, {Yj, zj, pj, rj}dY j=1, \u00b5, H, Z, N), 3Both assumptions come naturally with factored MDPs (see Section 2.3). Informally, an intervention on a node Yj \u2208Y (or in X) assigns y[j] to a constant c, i.e., pj(y[j] = c) = 1. All mechanisms pi, where i is not intervened on, remain invariant. For more details on the causal notation we refer to (Pearl, 2009, Chapter 1). 3 \fPublished as a conference paper at ICLR 2024 0 AOVHicnVfdjhs1FJ62FEpglxYuZlSrYRQNmR2G7q9WNFtqYqXdruXys1YeW ZOUnM2uPB9mwSRvME3ML78BRIvAsXHDuTzXiSbSMsxXG+850fHx/JEwZVbrd/ufa9Rsf3Pzwo1s fNz75dG39s9t3Pj9VIpMRnESCfkmJAoYTeBEU83gTSqB8JDB6/D8iZG/vgCpqEiO9SFHieDhPZ pRDRCr9pnt+1W23b/MVBUA7uf+XZ9vLsztru91YRBmHREeMKPU2aKe6lxOpacSgaHQzBSmJzsk A3uIwIRxUL7eRFv5GpogWfgrSp8y3ILgaur/Ty2mSZhqSCBV8wpgY+Yhv7vgWd/jHQS/vi2RKxrb ho9Df2Qyp9o8D34hUVSEfYhKkhL5lWwWLYP7OXWIm2YxTEhXlKQP/5HDfN6lUoDVNBo5SKMS5JqG aW0+l6IMy+Sds89eMKonPlIYuO4IVzbYuWbIUBQKImOfEz301YSHgrlaCY2gL0k014oER6GesTE D0g+3Wr6oKOWo8tpJIWZSDHTnSFiIEk6nDiJji9oqsrFHJer6Ve8IuCGpkUsEqFBXZo3iG8hl0j CjBE5LlyUnv+2GcXF3IdB7HRiSjBAXsfGwhJ9ZAXy2EauQKeMSxYwVxUZWFEUrM13EKGcSqkqfL 4l0zpUNSCNesjV8VjUY3hj7uRZuQ/IKGkiT6McugyA+fPS7ydtMPgofY7TwolnKfTEhScjuB5TX9 re2d5eRjIKxiuLON3fZ3y7kvsBvMwkCLZSCdYDn9EOIZt3f0JB/RcgHmIJEk6pSw8eLuc/kzAp yUHbxF12mL2N3VaY8PfOzjy+zbmtBDwKOE+wois3qNjc1VWn0HLpQPAlrgNmqiA74pQWmiwc4K z4Jpde3ScZDkCOsM5rsliH08qm1GlXREM+awS4KezkDzsl7OJGQgqF0sjDelQq47N6fZeVEnwP y4WXcEcnmqpUCVz1UVbdW2fCMxKEk/PLg1j/X9Wc6xwZJbqh+k0DkWIO7c0nXdhUOTQGrSa3Ufd 91ApbhraglWoI6mLfNSLd2qkFe+IE5qPZ7LGblbj6n/VsziW3oGN7y7hYaMoigVF0OeME937X KMcqD4qaNMZ8dJtWLHke18V701lP9SPC8r0646jOKozDivCw2KebpOgF3hb4xtCfoM0TsZ4kJzl 5rtmIuJpxchBXcxjV/wz+nQZzyvy53X1C9wV5sTrhjzvTsd1Bs9KMQ6unMHT09JLGOZPr6adEjn nRb4q5YvAfUnPGOlBE5sEnD727TjN5FpMmMSJN6eo/LBNjvs6iA3uSOZzLI1oDJqyGC4zd6Vf vNx5kdu+4VTpiKgJPmpQkYHGT19zIZKuBEbGU0zSwXAB7GeM1TEMsg4Zh6BzrDa5U7Ccen4KHP0 Dmp2DkBGmZxUoVNIMlUFnuJbegoEekQnmcprcRi3RKdScfTCV6lrt2fINVZAlXoJcu0qALPANeS VpF9cAj75o52POMFlFKHsycpOJ6PSbnsTi5kxfCrlp9dxJ5joVhTvfp2AgAZK7nfZd+xF0T5WH lhC2M+7ilMGk7zAtuGaGi+YkubpUtPvSzFKJQG3GcJn4yG1DxUywdMp4NPrs592xVLqYvM7fpbh 0/C+atvy/LMp8A/XEH979Xi4HSrFWy3tl617z16OP3n5d3yvS+8r72Au+B98j70XvpnXiRB97v3 h/en2t/r/27fmP95pR6/Vqp84XntPX1/wDVgw3g 1 AOVHicnVfdjhs1FJ62FEpglxYuZlSrYRQNmR2G7q9WNFtqYqXdruXys1YeW ZOUnM2uPB9mwSRvME3ML78BRIvAsXHDuTzXiSbSMsxXG+850fHx/JEwZVbrd/ufa9Rsf3Pzwo1s fNz75dG39s9t3Pj9VIpMRnESCfkmJAoYTeBEU83gTSqB8JDB6/D8iZG/vgCpqEiO9SFHieDhPZ pRDRCr4Kz2/farbZt/uIgKAf3v/Ls+3l2Z213W4soxDoiNGlHobtFPdy4nUNGJQNLqZgpRE52Q Ab3GYEA6ql9tIC38jU0QLPwXpU+ZbEFwN3d/p5TRJMw1JhAo+YUyMfMQ3d3yLO/zjoJf3RTIlY9v wUejvbIZU+8eBb0SqpAPMQlSQt+yrYJFMH/nLjGTbMYpiYrylIF/crjvm1Qq0JomA0cpFOJck1D NradS9EGZ/BO2+WtGNUTHykMXHeEKxvsXDNkKAoFkbHPiR76asJDwVythEbQlySa0WCo1DP2Jg B6QfbjV90FHL0eU0ksJMpJjpzhAxkCQdTpxExc0VeVijsvV9CteEXBD0yIWidCgLs0bxLeQSyR hxogcFy5Kz3/bjOJi7sMgdjoxJRgr+WPDYSkesiL5TCNXAHPGBasYC6qsjAiqdkabiHDOBXSVHn 8S6Z0KGrBmvWRq+KRqMbQx/3ok1IfkFDSRL9mGVQ5IfPHhd5u+kHwUPsdh4US7lPJiQpuZ3A8pr+ 1vbOcvIxEFYx3NnGbvu75dwX2A1mYaDFMpBOsJx+CPGM275vaMi/IuQDTEGiSdW0pQcPl/OfSZiU 5KBt4i47zN7G7iqtseHvHRz5ZfZtTegh4FHCfQWRWb3GxuYqrb4DF8oHAS1wGzXRAd+UoDTRYGeF Z5+E0uvbJOMhyBHWGU12yxB6+dRajapoiGfNYBeFvZwB5+Q9nEhIwVA6WeRhPSqV8Vm9vstKCb6H 
where $X = S \times A = X_1 \times \cdots \times X_{d_X}$ is a factored state-action space with dX = dS + dA discrete variables, $Y = S = Y_1 \times \cdots \times Y_{d_Y}$ is a factored state space with dY = dS variables, and zj are the causal parents of each state variable, which are obtained from the edges z of GM. Then, p is a factored transition model specified as $p(y \mid x) = \prod_{j=1}^{d_Y} p_j(y[j] \mid x[z_j])$ for all y ∈ Y, x ∈ X, and r is a factored reward function $r(x) = \sum_{j=1}^{d_Y} r(x[z_j])$, with mean $R(x) = \sum_{j=1}^{d_Y} R(x[z_j])$, for all x ∈ X. Finally, µ ∈ ∆(Y) and H are the initial state distribution and episode horizon as specified in M, Z is the degree of sparseness of GM, and N is a constant such that all the variables are supported in [N].
Figure 1: (Left) Illustrative causal graph prior G0 with dX = 4, dY = 2 features, degree of sparseness Z = 3. The hidden true graph GF* includes all the edges in G0 plus the red-dashed edge (3, 1). (Right) Visualization of Z, the set of factorizations consistent with G0, which is the support of the hyper-prior P0. The factorization z* of the true FMDP F* is highlighted in red.
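To make the factored structure above concrete, the short sketch below evaluates a factored transition probability and a factored reward as a product and a sum of local terms. It is a minimal illustration assuming tabular factors stored as Python dictionaries; the names parents, p_factors, and r_factors are ours and not notation from the paper.

def factored_transition_prob(x, y, parents, p_factors):
    # p(y | x) = prod_j p_j(y[j] | x[z_j]) for a factored MDP.
    # x: tuple of d_X state-action features; y: tuple of d_Y next-state features.
    # parents[j]: tuple of indices z_j into x; p_factors[j]: dict {(scope, y_j): prob}.
    prob = 1.0
    for j, z_j in enumerate(parents):
        scope = tuple(x[i] for i in z_j)
        prob *= p_factors[j].get((scope, y[j]), 0.0)
    return prob

def factored_reward(x, parents, r_factors):
    # r(x) = sum_j r_j(x[z_j]), one local reward term per state variable.
    return sum(r_factors[j].get(tuple(x[i] for i in z_j), 0.0)
               for j, z_j in enumerate(parents))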
The interaction between an agent and the FMDP can be described exactly as we did in Section 2.2 for a tabular MDP, and the policies with their corresponding value functions are analogously defined. With the latter formalization of the FMDP induced by a causal graph, we now have all the components to introduce our learning problem in the next section.
2.4 REINFORCEMENT LEARNING WITH PARTIAL CAUSAL GRAPH PRIORS
In the previous section, we showed how the prior knowledge of a causal graph over the MDP variables can be exploited to obtain an FMDP representation of the problem, which is well-known to allow for more efficient reinforcement learning thanks to the factorization of the transition model and reward function (Osband & Van Roy, 2014; Xu & Tewari, 2020; Tian et al., 2020; Chen et al., 2020; Talebi et al., 2021; Rosenberg & Mansour, 2021). However, in several applications it is unreasonable to assume prior knowledge of the full causal graph, and causal identification is costly in general (Gillispie & Perlman, 2001; Shah & Peters, 2020). Nonetheless, some prior knowledge of the causal graph, i.e., a portion of the edges, may be easily available. For instance, in a DTR problem some edges of the causal graph on the patient's variables are commonly known, whereas several others are elusive. In this paper, we study the reinforcement learning problem when a partial causal graph prior G0 ⊆ GM on the MDP M is available.4 We formulate the learning problem in a Bayesian sense, in which the instance F* is sampled from a prior distribution PG0 consistent with the causal graph prior G0.5 In Figure 1 (left), we illustrate both the causal graph prior G0 and the (hidden) true graph GF* of the true instance F*. Analogously to previous works on Bayesian RL formulations, e.g., (Osband et al., 2013), we evaluate the performance of a learning algorithm in terms of its induced Bayesian regret.
Definition 1 (Bayesian Regret). Let A be a learning algorithm and let PG0 be a prior distribution on FMDPs consistent with the partial causal graph prior G0. The K-episodes Bayesian regret of A is
$BR(K) := \mathbb{E}_{F_* \sim P_{G_0}}\Big[\sum_{k=1}^{K} V_*(\pi_*) - V_*(\pi_k)\Big]$,
where V*(π) = VF*(π) is the value of the policy π in F* under µ, π* ∈ arg max_{π ∈ Π} V*(π) is the optimal policy in F*, and πk is the policy played by algorithm A at step k ∈ [K].
The Bayesian regret allows us to evaluate a learning algorithm on average over multiple instances. This is particularly suitable in some domains, such as DTR, in which it is crucial to achieve a good performance of the treatment policy on different patients. In the next section, we introduce an algorithm that achieves a Bayesian regret rate that is sublinear in the number of episodes K.
4 For two bigraphs G⋆ = (X, Y, z⋆) and G• = (X, Y, z•), we let G⋆ ⊆ G• if z⋆ ⊆ z•.
5 We will specify in the next Section 3 how the prior PG0 can be constructed.
3 CAUSAL PSRL
To face the learning problem described in the previous section, we cannot naïvely apply the PSRL algorithm for FMDPs (Osband & Van Roy, 2014), since we cannot access the factorization z* of the true instance F*, but only a causal graph prior G0 = (X, Y, z0) such that z0 ⊆ z*. Moreover, z* is always latent in the interaction process, in which we can only observe state-action-reward realizations from F*.
The latter can be consistent with several factorizations of the transition dynamics of F*, which means we cannot extract z* directly from the data either. This is the common setting of hierarchical Bayesian methods (Hong et al., 2020; 2022a;b; Kveton et al., 2021), where a latent state is sampled from a latent hypothesis space on top of the hierarchy, which then conditions the sampling of the observed state down the hierarchy. In our setting, we can see the latent hypothesis space as the space of all the possible factorizations that are consistent with G0, whereas the observed states are the model parameters of the FMDP, from which we observe realizations. The algorithm that we propose, Causal PSRL (C-PSRL), builds on this intuition to implement a principled hierarchical posterior sampling procedure to minimize the Bayesian regret exploiting the causal graph prior.
Algorithm 1 Causal PSRL (C-PSRL)
1: input: causal graph prior G0 ⊆ GF*, degree of sparseness Z
2: Compute the set of consistent factorizations $\mathcal{Z} = \mathcal{Z}_1 \times \cdots \times \mathcal{Z}_{d_Y} = \{\, z = \{z_j\}_{j \in [d_Y]} \,:\, |z_j| < Z \text{ and } z_{0,j} \subseteq z_j \ \forall j \in [d_Y] \,\}$
3: Build the hyper-prior P0 and the prior P0(·|z) for each z ∈ Z
4: for episode k = 0, 1, . . . , K − 1 do
5:   Sample z ∼ Pk and p ∼ Pk(·|z) to build the FMDP Fk
6:   Compute the policy πk ← arg max_{π ∈ Π} V_{Fk}(π) and collect an episode with πk in F*
7:   Compute the posteriors Pk+1 and Pk+1(·|z) with the collected data
8: end for
First, C-PSRL computes the set Z, illustrated in Figure 1 (right), of the factorizations consistent with G0, i.e., which are both Z-sparse and include all of the edges in z0 (line 2). Then, it specifies a parametric distribution P0, called hyper-prior, over the latent hypothesis space Z, and, for each z ∈ Z, a further parametric distribution P0(·|z), which is a prior on the model parameters, i.e., transition probabilities, conditioned on the latent state z (line 3). The former represents the agent's belief over the factorization of the true instance F*, the latter on the factored transition model p*.6 Having translated the causal graph prior G0 into proper parametric prior distributions, C-PSRL executes a hierarchical posterior sampling procedure (lines 4-8). For each episode k, the algorithm samples a factorization z from the current hyper-prior Pk, and a transition model p from the prior Pk(·|z), such that p is factored according to z (line 5). With these two, it builds the FMDP Fk (line 5), for which it computes the optimal policy πk by solving the corresponding planning problem; the policy is then deployed on the true instance F* for one episode (line 6). Finally, the evidence collected in F* serves to compute the closed-form posterior updates of the prior and hyper-prior (line 7). As we shall see, Algorithm 1 has compelling statistical properties, a regret sublinear in K (Section 4) with a notion of causal discovery (Section 5), and promising empirical performance (Section 6).
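As a rough illustration of lines 4-8 of Algorithm 1, the sketch below shows one possible way to implement the hierarchical sampling loop, assuming categorical hyper-posteriors over the candidate parent sets and Dirichlet posteriors over the local transition factors. The paper defers the exact parametric forms and their updates to its Appendix B, so these conjugate choices, as well as the helper callables plan, run_episode, and update, are our assumptions for illustration only.

import numpy as np

def sample_factorization(Z_sets, hyper_post, rng):
    # Line 5 (first half): independently sample one parent set z_j per state variable.
    # Z_sets[j]: list of candidate parent sets (tuples); hyper_post[j]: probability vector.
    return [Z_sets[j][rng.choice(len(Z_sets[j]), p=hyper_post[j])]
            for j in range(len(Z_sets))]

def sample_factored_model(z, dirichlet_params, rng):
    # Line 5 (second half): sample each local factor p_j(. | x[z_j]) from its posterior.
    # dirichlet_params[(j, z_j)][scope] is a Dirichlet parameter vector (our assumption).
    model = {}
    for j, z_j in enumerate(z):
        model[j] = {scope: rng.dirichlet(alpha)
                    for scope, alpha in dirichlet_params[(j, tuple(z_j))].items()}
    return model

def c_psrl(Z_sets, hyper_post, dirichlet_params, plan, run_episode, update, K, seed=0):
    # Structural sketch of Algorithm 1, lines 4-8. `plan`, `run_episode`, and `update`
    # are user-supplied callables (planner, environment interaction, posterior updates).
    rng = np.random.default_rng(seed)
    for k in range(K):
        z = sample_factorization(Z_sets, hyper_post, rng)                    # line 5
        p = sample_factored_model(z, dirichlet_params, rng)                  # line 5
        policy = plan(z, p)                                                  # line 6
        episode = run_episode(policy)                                        # line 6
        hyper_post, dirichlet_params = update(hyper_post, dirichlet_params,
                                              z, episode)                    # line 7
    return hyper_post, dirichlet_params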
Recipe. Three key ingredients concur to make the algorithm successful. First, C-PSRL links RL of an FMDP F* with unknown factorization to a hierarchical Bayesian learning problem, in which the factorization acts as a latent state on top of the hierarchy, and the transition probabilities are the observed state down the hierarchy. Secondly, C-PSRL exploits the causal graph prior G0 to reduce the size of the latent hypothesis space Z, which is super-exponential in dX, dY in general (Robinson, 1973). Finally, C-PSRL harnesses the specific causal structure of the problem to get a factorization z (line 5) through independent sampling of the parents zj ∈ Zj for each Yj, which significantly reduces the number of hyper-prior parameters. Crucially, this can be done as we do not admit "vertical" edges in Y and edges from Y to X, such that the parents' assignment cannot lead to a cycle.
Degree of sparseness. C-PSRL takes as input (line 1) the degree of sparseness Z of the true FMDP F*, which might be unknown in practice. In that case, Z can be seen as a hyper-parameter of the algorithm, which can be either implied through domain expertise or tuned independently.
6 A description of parametric distributions P0 and P0(·|z) and their posterior updates is in Appendix B.
Planning in FMDPs. C-PSRL requires exact planning in an FMDP (line 6), which is intractable in general (Mundhenk et al., 2000; Lusena et al., 2001). While we do not address computational issues in this paper, we note that efficient approximation schemes have been developed (Guestrin et al., 2003). Moreover, under linear realizability assumptions for the transition model or value functions, exact planning methods exist (Yang & Wang, 2019; Jin et al., 2020b; Deng et al., 2022).
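For the small instances considered in the experiments of Section 6, one illustrative stand-in for the exact planner required at line 6 is plain finite-horizon value iteration over the flattened (enumerable) state space. The sketch below is only meant to clarify the planning step and is not one of the approximation schemes cited above.

import numpy as np

def value_iteration_fmdp(states, actions, trans_prob, reward, H):
    # Backward induction over a small FMDP flattened to a tabular MDP.
    # trans_prob(s, a, s2) and reward(s, a) can be built from the factored model,
    # e.g., with factored_transition_prob / factored_reward defined earlier.
    V = {s: 0.0 for s in states}
    policy = [dict() for _ in range(H)]
    for h in reversed(range(H)):
        V_new = {}
        for s in states:
            q = [reward(s, a) + sum(trans_prob(s, a, s2) * V[s2] for s2 in states)
                 for a in actions]
            best = int(np.argmax(q))
            policy[h][s] = actions[best]
            V_new[s] = q[best]
        V = V_new
    return policy, V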
4 REGRET ANALYSIS OF C-PSRL
In this section, we study the Bayesian regret induced by C-PSRL with a Z-sparse causal graph prior G0 = (X, Y, z0). First, we define the degree of prior knowledge $\eta \le \min_{j \in [d_Y]} |z_{0,j}|$, which is a lower bound on the number of causal parents revealed by the prior G0 for each state variable Yj. We then provide an upper bound on the Bayesian regret of C-PSRL, which we discuss in Section 4.1.
Theorem 4.1. Let G0 be a causal graph prior with degree of sparseness Z and degree of prior knowledge η. The K-episodes Bayesian regret incurred by C-PSRL is
$BR(K) = \tilde{O}\Big(\big(H^{5/2} N^{1 + Z/2} d_Y + \sqrt{H\, 2^{d_X - \eta}}\big)\sqrt{K}\Big)$.
While we defer the proof of the result to Appendix E, we report a sketch of its main steps below.
Step 1. The first step of our proof bridges the previous analysis of a hierarchical version of PSRL, which is reported in (Hong et al., 2022b), with the one of PSRL for factored MDPs (Osband & Van Roy, 2014). In short, we can decompose the Bayesian regret (see Definition 1) as
$BR(K) = \mathbb{E}\Big[\sum_{k=1}^{K} \mathbb{E}_k\big[V_*(\pi_*) - \bar{V}_k(\pi_*, Z_*)\big]\Big] + \mathbb{E}\Big[\sum_{k=1}^{K} \mathbb{E}_k\big[\bar{V}_k(\pi_k, Z_k) - V_*(\pi_k)\big]\Big]$,
where $\mathbb{E}_k[\cdot]$ is the conditional expectation given the evidence collected until episode k, and $\bar{V}_k(\pi, z) = \mathbb{E}_{F \sim P_k(\cdot \mid z)}[V_F(\pi)]$ is the value function of π on average over the posterior Pk(·|z). Informally, the first term captures the regret due to the concentration of the posterior Pk(·|z*) around the true transition model p*, having fixed the true factorization z*. Instead, the second term captures the regret due to the concentration of the hyper-posterior Pk around the true factorization z*. Through a non-trivial adaptation of the analysis in (Hong et al., 2022b) to the FMDP setting, we can bound each term separately to obtain $\tilde{O}\big(\big(H^{5/2} N^{1+Z/2} d_Y + \sqrt{H |\mathcal{Z}|}\big)\sqrt{K}\big)$.
Step 2. The upper bound of the previous step is close to the final result up to a factor $\sqrt{|\mathcal{Z}|}$ related to the size of the latent hypothesis space. Since C-PSRL performs local sampling from the product space $\mathcal{Z} = \mathcal{Z}_1 \times \cdots \times \mathcal{Z}_{d_Y}$, by combining independent samples zj ∈ Zj for each variable Yj as we briefly explained in Section 3, we can refine the dependence on $|\mathcal{Z}|$ to $\max_{j \in [d_Y]} |\mathcal{Z}_j| \le |\mathcal{Z}|$.
Step 3. Finally, to obtain the final rate reported in Theorem 4.1, we have to capture the dependency on the degree of prior knowledge η in the Bayesian regret by upper bounding $\max_{j \in [d_Y]} |\mathcal{Z}_j|$ as
$\max_{j \in [d_Y]} |\mathcal{Z}_j| = \sum_{i=0}^{Z - \eta} \binom{d_X - \eta}{i} \le 2^{d_X - \eta}$.
4.1 DISCUSSION OF THE BAYESIAN REGRET
The regret bound in Theorem 4.1 contains two terms, which informally capture the regret to learn the transition model having the true factorization (left), and to learn the true factorization (right). The first term is typical in previous analyses of vanilla posterior sampling. Especially, the best known rate for the MDP setting is $\tilde{O}(H\sqrt{SAK})$ (Osband & Van Roy, 2017). In an FMDP setting with known factorization, the direct dependencies on the size S, A of the state and action spaces can be refined to obtain $\tilde{O}(H d_Y^{3/2} N^{Z/2} \sqrt{K})$ (Osband & Van Roy, 2014). Our rate includes additional factors of H and N, but a better dependency on the number of state features dY. The second term of the regret rate is instead unique to hierarchical Bayesian settings, which include an additional source of randomization in the sampling of the latent state from the hyper-prior. In Theorem 4.1, we are able to express this term in the degree of prior knowledge η, resulting in a rate $\tilde{O}(\sqrt{K / 2^{\eta}})$. The latter naturally demonstrates that a richer causal graph prior G0 will benefit the efficiency of PSRL, bringing the regret rate closer to the one for an FMDP with known factorization. We believe that the rate in Theorem 4.1 sheds light on how prior causal knowledge, here expressed through a partial causal graph, impacts the efficiency of posterior sampling for RL.
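The effect of η on the size of each local hypothesis space can be checked numerically from the binomial sum in Step 3; the snippet below is purely illustrative.

from math import comb

def local_hypotheses(d_X, Z, eta):
    # max_j |Z_j| = sum_{i=0}^{Z - eta} C(d_X - eta, i) <= 2^(d_X - eta)
    return sum(comb(d_X - eta, i) for i in range(Z - eta + 1))

# For instance, with d_X = 9 and Z = 5 (the Random FMDP parameters used below),
# revealing eta = 0, 2, 4 parents gives 382, 64, 6 candidate parent sets per
# variable, each within the bound 2^(9 - eta).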
5 C-PSRL EMBEDS A NOTION OF CAUSAL DISCOVERY
In this section, we provide an ancillary result that links Bayesian regret minimization with C-PSRL to a notion of causal discovery, which we call weak causal discovery. Especially, we show that we can extract a Z-sparse super-graph of the causal graph GF* of the true instance F* as a byproduct. A run of C-PSRL produces a sequence $\{\pi_k\}_{k=0}^{K-1}$ of optimal policies for the FMDPs $\{F_k\}_{k=0}^{K-1}$ drawn from the posteriors. Every FMDP Fk is linked to a corresponding graph (or factorization) GFk = (X, Y, zk), where zk ∼ Pk is sampled from the hyper-posterior. Note that the algorithm does not enforce any causal meaning on the edges zk of GFk. Nonetheless, we aim to show that we can extract a Z-sparse super-graph of GF* from the sequence $\{F_k\}_{k=0}^{K-1}$ with high probability. First, we need to assume that any misspecification in GFk negatively affects the value function of πk. Thus, we extend the traditional notion of causal minimality (Spirtes et al., 2000) to value functions.
Definition 2 (ϵ-Value Minimality). An FMDP F fulfills ϵ-value minimality if, for any FMDP F′ encoding a proper subgraph of GF, i.e., GF′ ⊂ GF, it holds that $V^*_F > V^*_{F'} + \epsilon$, where $V^*_F$, $V^*_{F'}$ are the value functions of the optimal policies in F, F′ respectively.
Then, as a corollary of Theorem 4.1, we can prove the following result.
Corollary 5.1 (Weak Causal Discovery). Let F* be an FMDP in which the transition model p* fulfills the causal minimality assumption with respect to GF*, and let F* fulfill ϵ-value minimality. Then, GF* ⊆ GFK holds with high probability, where GFK is a Z-sparse graph randomly selected within the sequence $\{G_{F_k}\}_{k=0}^{K-1}$ produced by C-PSRL over $K = \tilde{O}(H^5 d_Y^2\, 2^{d_X - \eta} / \epsilon^2)$ episodes.
The latter result shows that C-PSRL discovers the causal relationships between the FMDP variables, but cannot easily prune the non-causal edges, making GFK a super-graph of GF*. In Appendix D, we report a detailed derivation of the previous result. Interestingly, Corollary 5.1 suggests a direct link between regret minimization in an FMDP with unknown factorization and a (weak) notion of causal discovery, which might be further explored in future works.
6 EXPERIMENTS
In this section, we provide experiments to both support the design of C-PSRL (Section 3) and validate its regret rate (Section 4). We consider two simple yet illustrative domains. The first, which we call Random FMDP, benchmarks the performance of C-PSRL on randomly generated FMDP instances, a setting akin to the Bayesian learning problem (see Section 2.4) that we considered in previous sections. The second is a traditional Taxi environment (Dietterich, 2000), which is naturally factored and hints at a potential application. In those domains, we compare C-PSRL against two natural baselines: PSRL for tabular MDPs (Strens, 2000) and Factored PSRL (F-PSRL), which extends PSRL to factored MDP settings (Osband & Van Roy, 2014). Note that F-PSRL is equivalent to an instance of C-PSRL that receives the true causal graph prior as input, i.e., has an oracle prior.
Random FMDPs. An FMDP (relevant parameters are reported in the caption of Figure 2) is sampled uniformly from the prior specified through a random causal graph, which is Z-sparse with at least two edges for every state variable (η = 2). Then, the regret is minimized by running PSRL, F-PSRL, and C-PSRL (η = 2) for 500 episodes. Figure 2a shows that C-PSRL achieves a regret that is significantly smaller than PSRL, thus outperforming the baseline with an uninformative prior, while being surprisingly close to F-PSRL, which has the oracle prior. Indeed, C-PSRL proved efficient in estimating the transition model of the sampled FMDP, as we can see from Figure 2b, which reports the ℓ1 distance between the true model p* and the pk sampled by the algorithm at episode k.
Figure 2: (a,b) Regret and model error as a function of the episodes in the Random FMDP domain with dX = 9, dY = 6, Z = 5, N = 2, H = 100. (c,d) Regret as a function of the episodes in Taxi 3 × 3 with dX = 5, dY = 4, Z = 5, N = [3, 3, 2, 1, 6], H = 10, and Taxi 5 × 5 with dX = 5, dY = 4, Z = 5, N = [5, 5, 2, 1, 6], H = 15. The plots report the mean and 95% c.i. over 20 runs.
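One plausible way to generate the random Z-sparse causal graphs used in this experiment is sketched below: each state variable keeps η = 2 known parents, which form the prior G0, and receives additional random parents while respecting the sparseness constraint of Algorithm 1. This is our reconstruction of a generator consistent with the description above, not the authors' code.

import numpy as np

def sample_sparse_graph(d_X, d_Y, Z, eta, rng):
    # Returns (z0, z_star): a partial prior with eta edges per state variable and a
    # sparse true factorization that contains it, as in the Random FMDP setup.
    z0, z_star = [], []
    for j in range(d_Y):
        known = rng.choice(d_X, size=eta, replace=False)
        extra = rng.choice([i for i in range(d_X) if i not in known],
                           size=rng.integers(0, Z - eta), replace=False)
        z0.append(sorted(known.tolist()))
        z_star.append(sorted(known.tolist() + extra.tolist()))
    return z0, z_star

rng = np.random.default_rng(0)
z0, z_star = sample_sparse_graph(d_X=9, d_Y=6, Z=5, eta=2, rng=rng)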
Taxi. For the Taxi domain, we use the common Gym implementation (Brockman et al., 2016). In this environment, a taxi driver needs to pick up a passenger at a specific location, and then it has to bring the passenger to their destination. The environment is represented as a grid, with some special cells identifying the passenger location and destination. As reported in Simão & Spaan (2019), this domain is inherently factored since the state space is represented by four independent features: the position of the taxi (row and column), the passenger's location and whether they are on the taxi, and the destination. We perform the experiment on two grids of varying size (3 × 3 and 5 × 5, respectively), for which we report the relevant parameters in Figure 2. Here we compare the proposed algorithm C-PSRL (η = 2) with PSRL, while F-PSRL is omitted as the knowledge of the oracle prior is not available. Both algorithms eventually converge to a good policy in the smaller grid (see the regret in Figure 2c). Instead, when the size of the grid increases, PSRL is still suffering a linear regret after 400 episodes, whereas C-PSRL succeeds in finding a good policy efficiently (see Figure 2d). Notably, this domain resembles the problem of learning optimal routing in a taxi service, and our results show that exploiting common knowledge (such as the location of the taxi and the passenger's destination) in the form of a causal graph prior can be a game changer.
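For reference, the four factored features can be recovered from the scalar observation of the standard 5 × 5 Gym Taxi environment with the decoding below; the packing order (row, column, passenger location, destination) follows the usual Taxi-v3 layout and should be treated as an assumption to check against the installed Gym version.

def decode_taxi_state(s):
    # Standard Taxi-v3 packing: ((row * 5 + col) * 5 + passenger_loc) * 4 + destination.
    destination = s % 4
    s //= 4
    passenger_loc = s % 5   # 0-3: one of the four depots, 4: passenger in the taxi
    s //= 5
    col = s % 5
    row = s // 5
    return row, col, passenger_loc, destination

# e.g., decode_taxi_state(328) -> (3, 1, 2, 0)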
7 RELATED WORK
We review here the most relevant related work in posterior sampling, factored MDPs, and causal RL.
Posterior sampling. Thompson sampling (Thompson, 1933) is a well-known Bayesian algorithm that has been extensively analyzed in both multi-armed bandit problems (Kaufmann et al., 2012; Agrawal & Goyal, 2012) and RL (Osband et al., 2013). Osband & Van Roy (2017) provide a regret rate $\tilde{O}(H\sqrt{SAK})$ for vanilla Thompson sampling in RL, which is called the PSRL algorithm. Recently, other works adapted Thompson sampling to hierarchical Bayesian problems (Hong et al., 2020; 2022a;b; Kveton et al., 2021). Mixture Thompson sampling (Hong et al., 2022b), which is similar to PSRL but samples the unknown MDP from a mixture prior, is arguably the closest to our setting. In this paper, we take inspiration from their algorithm to design C-PSRL and derive its analysis, even though, instead of their tabular setting, we tackle a fundamentally different problem on factored MDPs resulting from a causal graph prior, which induces unique challenges.
Factored MDPs. Previous works considered RL in FMDPs (Boutilier et al., 2000) with either known (Osband & Van Roy, 2014; Xu & Tewari, 2020; Talebi et al., 2021; Tian et al., 2020; Chen et al., 2020) or unknown (Strehl et al., 2007; Vigorito & Barto, 2009; Diuk et al., 2009; Chakraborty & Stone, 2011; Hallak et al., 2015; Guo & Brunskill, 2017; Rosenberg & Mansour, 2021) factorization. The PSRL algorithm has been adapted to both finite-horizon (Osband & Van Roy, 2014) and infinite-horizon (Xu & Tewari, 2020) FMDPs. The former assumes knowledge of the factorization, close to our setting with an oracle prior, and provides a Bayesian regret of order $\tilde{O}(H d_Y^{3/2} N^{Z/2} \sqrt{K})$. Previous works also studied RL in FMDPs in a frequentist sense, either with known (Chen et al., 2020) or unknown (Rosenberg & Mansour, 2021) factorization. Rosenberg & Mansour (2021) employ an optimistic method that is orthogonal to ours, whereas they leave as an open problem capturing the effect of prior knowledge, for which we provide answers in a Bayesian setting.
Causal RL. Various works addressed RL with a causal perspective (see Kaddour et al., 2022, Chapter 7). Causal principles are typically exploited to obtain compact representations of states and transitions (Tomar et al., 2021; Gasse et al., 2021), or to pursue generalization across tasks and environments (Zhang et al., 2020; Huang et al., 2022; Feng et al., 2022; Mutti et al., 2023). Closer to our setting, Lu et al. (2022) aim to exploit prior causal knowledge to learn in both MDPs and FMDPs. Our work differs from theirs in two key aspects: We show how to exploit a partial causal graph prior instead of assuming knowledge of the full causal graph, and we consider a Bayesian formulation of the problem while they tackle a frequentist setting through optimism principles. Zhang (2020b) shows an interesting application of causal RL for designing treatments in a DTR problem.
Causal bandits. Another research line connecting causal models and sequential decision-making is the one on causal bandits (Lattimore et al., 2016; Sen et al., 2017; Lee & Bareinboim, 2018; 2019; Lu et al., 2020; 2021; Nair et al., 2021; Xiong & Chen, 2022; Feng & Chen, 2023), in which the actions of the bandit problem correspond to interventions on variables of a causal graph. There, the causal model specifies a particular structure on the actions, modelling their dependence with the rewarding variable, instead of the transition dynamics as in our work. Moreover, they typically assume the causal model to be known, with the exception of (Lu et al., 2021), and they study the simple regret in a frequentist sense rather than the Bayesian regret given a partial causal prior.
8 CONCLUSION
In this paper, we presented how to exploit prior knowledge expressed through a partial causal graph to improve the statistical efficiency of reinforcement learning. Before reporting some concluding remarks, it is worth commenting on where such a causal graph prior might originate from.
Exploiting experts' knowledge. One natural application of our methodology is to exploit domain-specific knowledge coming from experts. In several domains, e.g., medical or scientific applications, expert practitioners have some knowledge over the causal relationships between the domain's variables. However, they might not have a full picture of the causal structure, especially when they face complex systems such as the human body or biological processes.
Our methodology allows those practitioners to easily encode their partial knowledge into a graph prior, instead of having to deal with technically involved Bayesian statistics to specify parametric prior distributions, and then let C-PSRL figure out a competent decision policy with the given information.
Exploiting causal discovery. Identifying the causal graph over the domain's variables, which is usually referred to as causal discovery, is a main focus of causality (Pearl, 2009, Chapter 3). The literature provides plenty of methods to perform causal discovery from data (Peters et al., 2017, Chapter 4), including learning causal variables and their relationships in MDP settings (Zhang et al., 2020; Mutti et al., 2023). However, learning the full causal graph, even when it is represented with a bigraph as in MDP settings (Mutti et al., 2023), can be statistically costly or even prohibitive (Gillispie & Perlman, 2001; Wadhwa & Dong, 2021). Moreover, not all the causal edges are guaranteed to transfer across environments (Mutti et al., 2023), which would force us to perform causal discovery anew for any slight variation of the domain (e.g., changing the patient in a DTR setting). Our methodology allows us to focus on learning the universal causal relationships (Mutti et al., 2023), which transfer across environments, e.g., different patients, and then specify the prior through a partial causal graph.
The latter paragraphs describe two scenarios in which our work enhances the applicability of PSRL, bridging the gap between how the prior might be specified in practical applications and what previous methods currently require, i.e., a parametric prior distribution. To summarize our contributions, we first provided a Bayesian formulation of reinforcement learning with prior knowledge expressed through a partial causal graph. Then, we presented an algorithm, C-PSRL, tailored for the latter problem, and we analyzed its regret to obtain a rate that is sublinear in the number of episodes and shows a direct dependence on the degree of causal knowledge. Finally, we derived an ancillary result to show that C-PSRL embeds a notion of causal discovery, and we provided an empirical validation of the algorithm against relevant baselines. C-PSRL proved nearly competitive with F-PSRL, which enjoys a richer prior, while clearly outperforming PSRL with an uninformative prior. Future works may derive a tighter analysis of the Bayesian regret of C-PSRL, as well as a stronger causal discovery result that allows recovering a minimal causal graph instead of a super-graph. Another important aspect is to address computational issues inherent to planning in FMDPs to scale the implementation of C-PSRL to complex domains. Finally, interesting future directions include extending our framework to model-free PSRL (Dann et al., 2021; Tiapkin et al., 2023), in which the prior may specify causal knowledge of the reward or the value function directly, and to study how prior misspecification (Simchowitz et al., 2021) affects the regret rate."
    },
    {
        "url": "http://arxiv.org/abs/2202.11079v1",
        "title": "Reward-Free Policy Space Compression for Reinforcement Learning",
        "abstract": "In reinforcement learning, we encode the potential behaviors of an agent\ninteracting with an environment into an infinite set of policies, the policy\nspace, typically represented by a family of parametric functions. 
Dealing with\nsuch a policy space is a hefty challenge, which often causes sample and\ncomputation inefficiencies. However, we argue that a limited number of policies\nare actually relevant when we also account for the structure of the environment\nand of the policy parameterization, as many of them would induce very similar\ninteractions, i.e., state-action distributions. In this paper, we seek for a\nreward-free compression of the policy space into a finite set of representative\npolicies, such that, given any policy $\\pi$, the minimum R\\'enyi divergence\nbetween the state-action distributions of the representative policies and the\nstate-action distribution of $\\pi$ is bounded. We show that this compression of\nthe policy space can be formulated as a set cover problem, and it is inherently\nNP-hard. Nonetheless, we propose a game-theoretic reformulation for which a\nlocally optimal solution can be efficiently found by iteratively stretching the\ncompressed space to cover an adversarial policy. Finally, we provide an\nempirical evaluation to illustrate the compression procedure in simple domains,\nand its ripple effects in reinforcement learning.", + "authors": "Mirco Mutti, Stefano Del Col, Marcello Restelli", + "published": "2022-02-22", + "updated": "2022-02-22", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "INTRODUCTION In the Reinforcement Learning (RL) (Sutton and Barto, 2018) framework, an arti\ufb01cial agent interacts Proceedings of the 25th International Conference on Arti\ufb01cial Intelligence and Statistics (AISTATS) 2022, Valencia, Spain. PMLR: Volume 151. Copyright 2022 by the author(s). (*) Correspondence to with an environment, typically modeled through a Markov Decision Process (MDP) (Puterman, 2014), to maximize some form of long-term performance, which is usually the sum of the discounted rewards collected in the process. The agent\u2019s behavior is encoded in a Markovian policy, i.e., a function that maps the current state of the environment with a probability distribution over the next action to be taken. In principle, if the underlying MDP is small enough, we can represent a Markovian policy with a table that includes an entry for each state-action pair, and we call it a tabular policy. However, most relevant scenarios have too many (possibly in\ufb01nite) states and actions to allow for a tabular representation. In this case, we can turn to function approximation (Sutton and Barto, 2018) to encode the policy within a family of parametric functions, e.g., a linear basis combination or a deep neural network, and we call it a parametric policy. This set of parametric policies, which we call the policy space, is typically in\ufb01nite. Therefore, learning a policy that maximizes the performance can be a hefty challenge, and the sheer size of the policy space often causes sample and computation ine\ufb03ciencies. A setting where these ine\ufb03ciencies arise clearly and naturally is Policy Optimization (PO) (Deisenroth et al., 2013). In PO, we aim to \ufb01nd a policy that maximizes the performance within the policy space, i.e., an optimal policy, with the least amount of interactions (Sutton et al., 1999; Silver et al., 2014; Schulman et al., 2015; Metelli et al., 2018). If we also account for the performance of the policies that are actually deployed to collect these interactions, we come up with an online PO (Papini et al., 2019; Cai et al., 2020). 
In this setting, we try to minimize the regret that the agent su\ufb00ers by taking interactions with a sub-optimal behavior before converging to an optimal policy. Recent results showed that the regret of online PO is directly related to the size of the policy space (Papini et al., 2019; Metelli et al., 2021a). In particular, online PO with a \ufb01nite policy space can enjoy a constant regret, i.e., it does not scale with the number of interactions, under certain conditions (Metelli et al., 2021a). Instead, the regret of online PO with an in\ufb01arXiv:2202.11079v1 [cs.LG] 22 Feb 2022 \fReward-Free Policy Space Compression for Reinforcement Learning nite policy space does scale with the square root of the number of interactions in general (Papini et al., 2019), which means that we only have asymptotic guarantees of reaching an optimal policy. In view of these results, one could wonder whether the expressive power of an in\ufb01nite policy space is worth the additional regret it causes: Are all of these in\ufb01nitely many policies really necessary for PO? The expressive power of a policy space is related to the di\ufb00erent distributions that its policies can induce over the states and actions of the environment, as the whole point of PO is to \ufb01nd a policy that maximizes the probability of reaching state-action pairs associated with high rewards. However, di\ufb00erent parameterizations might actually induce equivalent policies due to the speci\ufb01c structure of the policy space. Similarly, even di\ufb00erent policies can induce the same state-action distribution in a given environment. These two types of policies are arguably redundant for PO and we would like to \ufb01nd a policy space that does not include either. Especially, we aim to answer the following question: Having an in\ufb01nite parametric policy space \u0398 in a given environment M, can we compress \u0398 into a \ufb01nite subset that retains most of its expressive power? In this paper, we formulate this question into the Policy Space Compression problem, where we exploit the inherent structure of M and \u0398 to compute the compressed policy space. The general idea is to identify a \ufb01nite set of representative policies, such that for any policy \u03c0 of the original space, the minimum R\u00b4 enyi divergence between the state-action distributions of the representative policies and the state-action distribution of \u03c0 is bounded by a given constant. This compression is agnostic to the reward function, and thus the resulting policy space can bene\ufb01t the computational and sample complexity of any RL task one can later specify over M, as it is typical in reward-free RL (Hazan et al., 2019; Jin et al., 2020a). Speci\ufb01cally, the paper includes the following contributions. First, we provide a formal de\ufb01nition of the policy space compression problem (Section 3). We note that the problem can be formulated equivalently as a set cover, and that \ufb01nding an optimal compression of the policy space is NP-hard in general (Feige, 1998). Despite this negative result, we propose a game-theoretic reformulation (Section 4) that casts the problem to the one of reaching a di\ufb00erential Stackelberg equilibrium (Fiez et al., 2020) of a two-player sequential game, in which the \ufb01rst player tries to cover the policy space with a \ufb01nite set of policies and the second player tries to \ufb01nd a policy that falls outside this coverage. 
Then, we present a planning algorithm (Section 5) to e\ufb03ciently compute a compression of the policy space in a given environment, by repeatedly solving, with a \ufb01rst-order method, the two-player game for an increasing number of covering policies, until the compression requirement is met globally. In Section 6, we provide a theoretical analysis of the performance guarantees attained by the compressed policy space in relevant RL tasks, namely policy evaluation and policy optimization. Finally, in Section 7 we provide a brief numerical validation of both the compression algorithm and RL with the compressed policy space. The proofs of the theorems can be found in Appendix A. 2 PRELIMINARIES In this section, we introduce the essential background on controlled Markov processes, policy optimization, importance sampling estimation and R\u00b4 enyi divergence. Throughout the paper, we will denote a vector v with a bold typeface, as opposed to a scalar v. 2.1 Controlled Markov Processes A discrete-time Controlled Markov Process (CMP) is de\ufb01ned as a tuple M := (S, A, P, \u00b5, \u03b3), in which S is the state space, A is the action space, P : S \u00d7 A \u2192 \u2206(S) is a transition model such that the next state is drawn as s\u2032 \u223cP(\u00b7|s, a) given the current state s \u2208S and action a \u2208A, \u00b5 : \u2206(S) is an initial state distribution such that the initial state is drawn as s \u223c\u00b5(\u00b7), and \u03b3 \u2208[0, 1] is the discount factor. The behavior of an agent interacting with a CMP can be modeled through a Markovian parametric policy \u03c0\u03b8 : S \u2192\u2206(A) such that an action is drawn as a \u223c\u03c0\u03b8(\u00b7|s) given the current state s \u2208S, where \u03b8 \u2208\u0398 \u2286Rm are the policy parameters, and the set \u03a0\u0398 is called the policy space. A policy \u03c0\u03b8 induces a \u03b3-discounted state distribution ds \u03c0\u03b8 : \u2206(S) over the state space of the CMP M, which is given by ds \u03c0\u03b8(s) = (1 \u2212\u03b3) P\u221e t=1 \u03b3tPr(st = s) or the equivalent recursive relation ds \u03c0\u03b8(s) = (1 \u2212 \u03b3)\u00b5(s)\u2212\u03b3 R SA ds \u03c0\u03b8(s\u2032)\u03c0\u03b8(a\u2032|s\u2032)P(s|s\u2032, a\u2032) ds\u2032 da\u2032. Similarly, we de\ufb01ne the \u03b3-discounted state-action distribution dsa \u03c0\u03b8 : \u2206(S\u00d7A) given by dsa \u03c0\u03b8(s, a) = \u03c0\u03b8(a|s)ds \u03c0\u03b8(s). With a slight overloading of notation, we will indi\ufb00erently denote the parametric policy space \u03a0\u0398 by \u0398, a parametric policy \u03c0\u03b8 \u2208\u03a0\u0398 by \u03b8, and its induced distributions ds \u03c0\u03b8(s), dsa \u03c0\u03b8(s, a) by ds \u03b8(s), dsa \u03b8 (s, a). 2.2 Policy Optimization The process of looking for the policy that maximizes the agent\u2019s performance on a given RL task with a direct search in the policy space is called Policy Optimization (PO) (Deisenroth et al., 2013). The task is generally modeled through a Markov Decision Process (MDP) (Puterman, 2014) MR := M \u222aR, i.e., \fMirco Mutti, Stefano Del Col, Marcello Restelli the combination of a CMP M and a reward function R : S \u00d7 A \u2192[\u2212Rmax, Rmax] such that R(s, a) is the bounded reward that the agent collects by selecting action a \u2208A in state s \u2208S, and Rmax < \u221e. 
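Both the performance defined next and the compression criterion of Section 3 are expressed through the γ-discounted state-action distribution d^sa_θ of Section 2.1. As a concrete point of reference, here is a minimal sketch of how d^sa_θ can be computed exactly for a small tabular CMP, using the standard fixed-point form of the recursion; the function and variable names are ours, not code from the paper.

```python
import numpy as np

def discounted_state_action_distribution(P, pi, mu, gamma):
    """Exact gamma-discounted state-action distribution of a tabular CMP.

    P  : array (S, A, S), transition model P(s' | s, a)
    pi : array (S, A),    policy pi(a | s)
    mu : array (S,),      initial state distribution
    gamma : discount factor in [0, 1)
    Returns d_sa with shape (S, A), summing to 1.
    """
    S, A, _ = P.shape
    # State-to-state kernel under pi: P_pi[s, s'] = sum_a pi(a|s) P(s'|s, a)
    P_pi = np.einsum("sa,sap->sp", pi, P)
    # d_s is the fixed point of d = (1 - gamma) * mu + gamma * P_pi^T d
    d_s = np.linalg.solve(np.eye(S) - gamma * P_pi.T, (1 - gamma) * mu)
    # d_sa(s, a) = d_s(s) * pi(a|s)
    return d_s[:, None] * pi
```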
The agent\u2019s performance is de\ufb01ned by the expected sum of discounted rewards collected by its policy, i.e., J(\u03b8) := E s0\u223c\u00b5(\u00b7),at\u223c\u03c0\u03b8(\u00b7|st) st+1\u223cP (\u00b7|st,at) \u0014 \u221e X t=1 \u03b3tR(st, at) \u0015 = 1 (1 \u2212\u03b3) E (s,a)\u223cdsa \u03b8 \u0002 R(s, a) \u0003 , A Monte-Carlo estimate of the performance can be computed from a batch of N samples {sn, an}N n=1 taken with the policy \u03c0\u03b8 in the \u03b3-discounted MDP MR as b J(\u03b8) = 1 (1\u2212\u03b3)N PN n=1 R(sn, an). 2.3 Importance Sampling and R\u00b4 enyi Divergence Importance Sampling (IS) (Cochran, 2007; Owen, 2013) is a common technique to estimate the expectation of a function under a target distribution by taking samples from a di\ufb00erent distribution. In PO, importance sampling allows for estimating the performance of a target policy \u03c0\u03b8\u2032 through a batch of samples {sn, an}N n=1 taken with a policy \u03c0\u03b8. Especially, we de\ufb01ne the importance weight w\u03b8\u2032/\u03b8(s, a) := dsa \u03b8\u2032 (s, a)/dsa \u03b8 (s, a). A Monte-Carlo estimate of J(\u03b8\u2032) via importance sampling is given by b JIS(\u03b8\u2032/\u03b8) = 1 (1 \u2212\u03b3)N N X n=1 w\u03b8\u2032/\u03b8(sn, an)R(sn, an). The latter estimator is known to be unbiased, i.e., E\u03b8[ b JIS(\u03b8\u2032/\u03b8)] = J(\u03b8\u2032) (Owen, 2013). However, b JIS(\u03b8\u2032/\u03b8) might su\ufb00er from a large variance whenever the importance weights w\u03b8\u2032/\u03b8(s, a) have a large variance. The variance of the importance weights is related to the exponentiated 2-R\u00b4 enyi divergence D2(dsa \u03b8\u2032 ||dsa \u03b8 ) (R\u00b4 enyi et al., 1961) through Var(s,a)\u223cdsa \u03b8 [w\u03b8\u2032/\u03b8(s, a)] = D2(dsa \u03b8\u2032 ||dsa \u03b8 ) \u22121 (Cortes et al., 2010), where D2(dsa \u03b8\u2032 ||dsa \u03b8 ) := Z SA dsa \u03b8 (s, a) \u0012dsa \u03b8\u2032 (s, a) dsa \u03b8 (s, a) \u00132 ds da. The latter has been employed in (Metelli et al., 2018) to upper bound the variance of the importance sampling estimator as Var(s,a)\u223cdsa \u03b8 [ b JIS(\u03b8\u2032/\u03b8)] \u2264 \u0000 Rmax 1\u2212\u03b3 \u00012D2(dsa \u03b8\u2032 ||dsa \u03b8 )/N. In the following, we will refer to the exponentiated 2-R\u00b4 enyi divergence as the R\u00b4 enyi divergence. 3 THE POLICY SPACE COMPRESSION PROBLEM Let us suppose to have a CMP M the agent can interact with, and a parametric policy space \u0398 from which the agent can select its strategy of interaction. For the common parameterization choices, ranging from linear policies to deep neural networks, the policy space \u0398 is typically in\ufb01nite. Dealing with such a large policy space to address the usual RL tasks, e.g., \ufb01nding a convenient task-agnostic sampling strategy (Hazan et al., 2019) or seeking for an optimal policy within the set (Deisenroth et al., 2013), is often a huge challenge. Furthermore, many policies in \u0398 are unnecessary for these purposes, as they induce very similar interactions, and thus they have very similar performance. On the one hand, di\ufb00erent policy parameters \u03b8 \u2208\u0398 might induce nearly identical distributions over actions. On the other hand, even di\ufb00erent distributions over actions can lead to comparable state-action distributions due to the structure of the environment. Since we do not have any reward encoded in M, it would be unwise to deem any state-action distribution irrelevant without additional information on the task structure. 
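As a companion to the estimators above, the following sketch (our own naming and layout, not code from the paper) computes the exponentiated 2-Rényi divergence and the importance sampling return estimate for discrete state-action distributions.

```python
import numpy as np

def renyi_2_exp(d_target, d_behavior, eps=1e-12):
    """Exponentiated 2-Renyi divergence D2(d_target || d_behavior) between two
    discrete state-action distributions given as arrays of matching shape."""
    ratio = d_target / np.maximum(d_behavior, eps)  # eps only guards divisions by zero
    return float(np.sum(d_behavior * ratio ** 2))

def importance_sampling_return(samples, d_target, d_behavior, R, gamma):
    """Monte-Carlo IS estimate of J(theta') from (s, a) index pairs drawn with d_behavior.

    samples : list of (s, a) pairs sampled from d_behavior
    R       : array (S, A) of rewards
    """
    w = np.array([d_target[s, a] / d_behavior[s, a] for s, a in samples])
    r = np.array([R[s, a] for s, a in samples])
    return float(np.mean(w * r) / (1.0 - gamma))
```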
In this work, we aim to identify a subset of the policy space \u0398\u2032 \u2286\u0398 that retains most of the expressive power of \u0398, i.e., the set of the state-action distributions it can induce, while dramatically reducing its size, to the advantage of the computational and sample e\ufb03ciency of future RL tasks. Especially, we consider a \u03c3-soft compression of \u0398, where for any policy \u03b8 \u2208\u0398 we would like to have a policy \u03b8\u2032 \u2208\u0398\u2032 such that the R\u00b4 enyi divergence between their respective state-action distributions dsa \u03b8 , dsa \u03b8\u2032 is bounded by a positive constant \u03c3. The R\u00b4 enyi divergence is particularly convenient in this setting due to its relationship with the variance of the importance sampling in the o\ufb00-policy estimation (Cortes et al., 2010; Metelli et al., 2018). The following statement provides a more formal de\ufb01nition of this \u03c3-soft compression. De\ufb01nition 3.1 (\u03c3-compression). Let M be a CMP, let \u0398 be a parametric policy space for M, and let \u03c3 > 0 be a constant. We call \u0398\u03c3 a \u03c3-compression of \u0398 in M if it holds that |\u0398\u03c3| < \u221eand \u2200\u03b8 \u2208\u0398, min \u03b8\u2032\u2208\u0398\u03c3 D2(dsa \u03b8 ||dsa \u03b8\u2032 ) \u2264\u03c3. We call the task of \ufb01nding a \u03c3-compression of \u0398 in M the policy space compression problem. Notably, for some M, \u0398, \u03c3, a \u03c3-compression of \u0398 in M might not exist, as in\ufb01nitely many policies \u03b8 \u2208\u0398 might induce relevant state-action distributions. However, we note that those scenarios are not interesting for our purposes, as the PO problem would be far-fetched as well, since one should try in\ufb01nitely many policies \fReward-Free Policy Space Compression for Reinforcement Learning to \ufb01nd an optimal policy. Instead, we only consider scenarios in which the \u03c3-compression is feasible. In these cases, given M and \u0398, we would like to extract the smallest set of policies \u0398\u2032 that is a \u03c3-compression of \u0398 in M, and then keep this reduced policy space to address any RL task one can de\ufb01ne over M. Let \u2126\u0398 := {dsa \u03b8 | \u2200\u03b8 \u2208\u0398} be the set of state-action distributions induced by the policy space \u0398, the compression problem can be formulated as a typical set cover problem, i.e., minimize X \u03c9\u2208\u2126\u0398 x\u03c9 subject to X \u03c9:D2(\u03c5||\u03c9)\u2264\u03c3 x\u03c9 \u22651, \u2200\u03c5 \u2208\u2126\u0398 x\u03c9 \u2208{0, 1}, \u2200\u03c9 \u2208\u2126\u0398 (1) where the positive integers x\u03c9 denote the state-action distributions that are active in the covering, and the corresponding \u03c3-compression of \u0398 in M can be retrieved as \u0398\u03c3 = {\u03b8 \u2208\u0398 | dsa \u03b8 = \u03c9 \u2227 x\u03c9 = 1}. Unfortunately, the problem (1) is known to be NPhard (Feige, 1998), even when the model of M is fully available. Two aspects arguably make this problem extremely hard: On the one hand, we are looking for an e\ufb03cient solution in the number of active state-action distributions, secondly, we are covering the set \u2126\u0398 all at once rather than incrementally. Instead of considering common relaxations of (1) (Johnson, 1974; Lov\u00b4 asz, 1975), which would not strictly meet the requirements of De\ufb01nition 3.1 (Feige, 1998), in the next section we build on these insights to reformulate the policy space compression problem in a tractable way. 
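The reformulation of the next section is motivated precisely by the hardness of (1). Purely to illustrate the set-cover structure of the problem, the sketch below runs a plain greedy cover over a finite grid of candidate policies whose pairwise divergences have been precomputed; this greedy heuristic is not the algorithm proposed in the paper, and it only targets a finite grid of candidates rather than the whole policy space Θ.

```python
import numpy as np

def greedy_sigma_cover(D2, sigma):
    """Greedy heuristic for the set-cover formulation (1) on a finite candidate grid.

    D2[i, j] holds the exponentiated 2-Renyi divergence D2(d_i || d_j) between the
    state-action distributions of candidate policies i and j.
    Returns indices of a (possibly sub-optimal) covering set: every candidate i has
    some selected j with D2[i, j] <= sigma.
    """
    n = D2.shape[0]
    covers = D2 <= sigma                # covers[i, j]: candidate j covers candidate i
    uncovered = np.ones(n, dtype=bool)
    selected = []
    while uncovered.any():
        # pick the candidate covering the largest number of still-uncovered policies
        gains = (covers & uncovered[:, None]).sum(axis=0)
        j = int(np.argmax(gains))
        if gains[j] == 0:               # nothing covers the remainder: infeasible on this grid
            break
        selected.append(j)
        uncovered &= ~covers[:, j]
    return selected
```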
4 A GAME THEORETIC REFORMULATION Due to its inherent hardness, we aim to \ufb01nd a tractable reformulation of the policy space compression problem (1) whose solution is a valid \u03c3-compression of \u0398 in M. Let us consider a game-theoretic perspective to the set cover problem. A \ufb01rst player distributes a set of K policies (\u03b81, . . . , \u03b8K) \u2208\u0398K with the intention of covering the set of state-action distributions \u2126\u0398. A second player tries to \ufb01nd a policy \u00b5 \u2208\u0398 that is not well covered by (\u03b81, . . . , \u03b8K), i.e., a policy that maximizes the R\u00b4 enyi divergence between its state-action distribution and the one of the closest \u03b8k \u2208(\u03b81, . . . , \u03b8K). The former player moves \ufb01rst, and we call it a leader. The latter player makes his move in response to the other player, and it is then called a follower. The two-player, zero-sum, sequential game that we have informally described can be represented as the optimization problem min \u03b8\u2208\u0398K max \u00b5\u2208\u0398 f(\u03b8, \u00b5), (2) f(\u03b8, \u00b5) := min k\u2208[K] D2(dsa \u00b5 ||dsa \u03b8k), where \u03b8 = (\u03b81, . . . , \u03b8K) and [K] = {1, . . . , K}. It is straightforward to see that if the \u03c3-compression is feasible for \u0398 in M and K is large enough, then any optimal leader\u2019s strategy for the game (2), i.e., \u03b8\u2217\u2208arg min\u03b8\u2208\u0398K max\u00b5\u2208\u0398 f(\u03b8, \u00b5), is a \u03c3-compression of \u0398 in M. Unfortunately, f(\u03b8, \u00b5) is a non-convex non-concave function, and \ufb01nding a globally optimal strategy for the game (2) is still a NP-hard problem. However, we do not actually need to \ufb01nd a globally optimal strategy for the leader, as any \u03b8 \u2208\u0398K such that min\u00b5\u2208\u0398 f(\u03b8, \u00b5) \u2264\u03c3 would be a valid \u03c3-compression of \u0398. Thus, we might instead target a locally optimal strategy for (2), which is a stationary point of f that is both a local maximum w.r.t. \u03b8 and a local minimum w.r.t. \u00b5. We formalize this solution concept as a Differential Stackelberg Equilibrium (DSE) (Fiez et al., 2020). De\ufb01nition 4.1 (Di\ufb00erential Stackelberg (Fiez et al., 2020)). The joint strategy (\u03b8\u2217, \u00b5\u2217) \u2208 \u0398K+1 in which \u03b8\u2217 k \u2208 arg mink\u2208[K](dsa \u00b5\u2217||dsa \u03b8\u2217 k) is a di\ufb00erential Stackelberg equilibrium of the game (2) if it holds \u2207\u03b8\u2217 kf(\u03b8\u2217, \u00b5\u2217) = 0, \u2207\u00b5\u2217f(\u03b8\u2217, \u00b5\u2217) = 0, |\u2207\u03b8\u2217 k\u2207\u22a4 \u03b8\u2217 kf(\u03b8\u2217, \u00b5\u2217)| > 0, and |\u2207\u00b5\u2217\u2207\u22a4 \u00b5\u2217f(\u03b8\u2217, \u00b5\u2217)| < 0. 1 Luckily, several recent works have established a favorable complexity for the problem of \ufb01nding a DSE (Jin et al., 2020b; Fiez et al., 2020; Fiez and Ratli\ufb00, 2020) in a sequential game. Especially, Jin et al. (2020b) showed that a basic \ufb01rst-order method, i.e., Gradient Descent Ascent (GDA), with an in\ufb01nite time-scale separation between the leader\u2019s and follower\u2019s updates is guaranteed to converge to a DSE under mild conditions. This result might be surprising, as we started with a fundamentally hard problem (1) and ended up with a way easier formulation (2) that we can address with a common methodology, without making any strong assumption on the structure of the problem. 
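Given the state-action distributions of the K leader policies and of the follower, the cover-game objective (2) is straightforward to evaluate. A minimal sketch (our own code, on tabular distributions) is reported below; it returns both the value of f and the index of the leader component attaining the minimum, which Section 5 refers to as the active component.

```python
import numpy as np

def cover_game_objective(d_leader, d_follower, eps=1e-12):
    """f(theta, mu) = min_k D2(d_mu || d_theta_k) for the cover game (2).

    d_leader   : array (K, S, A), state-action distributions of the K leader policies
    d_follower : array (S, A),    state-action distribution of the follower policy mu
    Returns the objective value and the index of the active leader's component.
    """
    ratios = d_follower[None] / np.maximum(d_leader, eps)   # (K, S, A)
    divs = (d_leader * ratios ** 2).sum(axis=(1, 2))        # D2(d_mu || d_theta_k) for each k
    k = int(np.argmin(divs))
    return float(divs[k]), k
```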
However, we still have to deal with two crucial issues to solve the policy space compression problem through the game-theoretic formulation. On the one hand, it is not enough to look at the value f(\u03b8\u2217, \u00b5\u2217) attained by a DSE (\u03b8\u2217, \u00b5\u2217) to guarantee that \u03b8 is a \u03c3-compression of \u0398, as we should check that max\u00b5\u2208\u0398 f(\u03b8\u2217, \u00b5) \u2264\u03c3, where \u00b5 is a global maximizer. On the other hand, it is not clear how to set a convenient value of K beforehand. In the next section, we present a \ufb01rst-order 1Let f(x) be a function of x \u2208Rm, we denote its gradient vector as \u2207xf(x), its Hessian matrix as \u2207x\u2207\u22a4 x f(x), and the determinant of its Hessian matrix as |\u2207x\u2207\u22a4 x f(x)|. \fMirco Mutti, Stefano Del Col, Marcello Restelli method that addresses these two issues by \ufb01nding a DSE of iteratively larger instances of the game (2) (which we will henceforth call the cover game) until a conservative approximation of the global condition max\u00b5\u2208\u0398 f(\u03b8\u2217, \u00b5) \u2264\u03c3 is \ufb01nally met. 5 A PLANNING ALGORITHM TO SOLVE THE PROBLEM Optimization problems of the kind of (2) are typically addressed with a GDA procedure, in which the leader\u2019s parameters (\u03b8) and the follower\u2019s parameters (\u00b5) are updated iteratively according to \u03b8 \u2190\u03b8 \u2212\u03b1\u2207\u03b8f(\u03b8, \u00b5), \u00b5 \u2190\u00b5 + \u03b2\u2207\u00b5f(\u03b8, \u00b5), where \u2207\u03b8f(\u03b8, \u00b5) and \u2207\u00b5f(\u03b8, \u00b5) are the respective gradients of the joint objective function, \u03b1 > 0 and \u03b2 > 0 are learning rates. Especially, if we consider a su\ufb03ciently large time-scale separation \u03c4 := \u03b2/\u03b1, we are guaranteed to converge to a DSE of the game (2) (Jin et al., 2020b; Fiez and Ratli\ufb00, 2020). In this case, we can consider \u03c4 = \u221e, which means that we update the follower\u2019s parameters until a stationary point is reached, i.e., \u2207\u00b5f(\u03b8, \u00b5) = 0, before updating the leader\u2019s parameters. However, to instantiate the cover game, we still need to specify the number K of leader-controlled policies \u03b8 = (\u03b81, . . . , \u03b8K). A straightforward solution is to start with a small number of policies \ufb01rst, say K = 1, then retrieve a DSE (\u03b8\u2217, \u00b5\u2217) via GDA for a cover-game instance with K policies, and \ufb01nally check if the resulting leader\u2019s strategy \u03b8\u2217meets the global requirement max\u00b5\u2208\u0398 f(\u03b8\u2217, \u00b5) \u2264\u03c3. If the answer is positive, the policy space compression problem is solved, and \u03b8\u2217 is a \u03c3-compression of \u0398 in M. Otherwise, we increment K and we repeat the process to see if we can solve the problem with more policies in \u03b8. If the policy space compression problem is feasible, with this simple procedure we are guaranteed to get a valid \u03c3compression eventually. We call this method the Policy Space Compression Algorithm (PSCA) and we report the pseudocode in Algorithm 1. In the following sections, we describe in details how the optimization of the follower\u2019s parameters (Section 5.1) and the leader\u2019s parameters (Section 5.2) are carried out in an adaptation of the GDA method to the speci\ufb01c setting of the cover game. 
In Section 5.3, we discuss how to verify the global requirement max\u00b5\u2208\u0398 f(\u03b8\u2217, \u00b5) \u2264\u03c3 without actually having to \ufb01nd a globally optimal follower\u2019s strategy, but instead optimizing a surrogate objective through a tractable linear program. Algorithm 1 PSCA Input: CMP M, policy space \u0398, constant \u03c3 initialize K = 0 and the cover guarantee Z\u03b8 = \u221e while (Z\u03b8)2 > \u03c3 do K \u2190K + 1 initialize the leader \u03b8 = (\u03b81, . . . , \u03b8K) \u2208\u0398K for epoch = 1, 2, . . . , until convergence do compute the best response \u00b5br to \u03b8 identify the active leader\u2019s component \u03b8k update the leader \u03b8k \u2190\u03b8k \u2212\u03b1\u2207\u03b8kf(\u03b8, \u00b5br) end for compute the cover guarantee Z\u03b8 with (6) end while Output: return \u03b8, a \u03c3-compression of \u0398 in M 5.1 Optimizing the Follower\u2019s Parameters In principle, we would like to compute the gradient \u2207\u00b5f(\u03b8, \u00b5) to perform the update \u00b5 \u2190\u00b5+\u03b2\u2207\u00b5f(\u03b8, \u00b5) as in a common GDA procedure. Unfortunately, the objective function f(\u03b8, \u00b5) = mink\u2208[K] D2(dsa \u00b5 ||dsa \u03b8k) is not di\ufb00erentiable due to the minimum over the K components of \u03b8. However, only the leader\u2019s component \u03b8k that attains the minimum of f is actually relevant for the follower\u2019s update, as the other K \u22121 components do not a\ufb00ect the value of the objective. Thus, we call \u03b8k \u2208arg min\u03b8i\u2208\u03b8 D2(dsa \u00b5 ||dsa \u03b8k) the active leader\u2019s component. Conveniently, we can update the follower\u2019s parameters w.r.t. the gradient \u2207\u00b5f(\u03b8k, \u00b5), which is di\ufb00erentiable w.r.t. \u00b5. The following proposition provides the formula for this gradient. Proposition 5.1 (Follower\u2019s Gradient). Let (\u03b8, \u00b5) \u2208 \u0398K, the gradient of f(\u03b8, \u00b5) w.r.t. \u00b5 is given by \u2207\u00b5f(\u03b8, \u00b5) = 2 E (s,a)\u223cdsa \u03b8k \u0014\u0012 dsa \u00b5 (s, a) dsa \u03b8k(s, a) \u00132 \u2207\u00b5 log dsa \u00b5 (s, a) \u0015 , (3) where \u03b8k is the active leader\u2019s component such that \u03b8k \u2208arg min\u03b8i\u2208\u03b8 D2(dsa \u00b5 ||dsa \u03b8i ). To perform a full optimization of the follower\u2019s parameters, we just need to repeatedly apply the gradient ascent update with the gradient \u2207\u00b5f(\u03b8, \u00b5) computed as in (3). Under mild conditions on the learning rate (Robbins and Monro, 1951), this process is guaranteed to converge to a stationary point such that \u2207\u00b5f(\u03b8, \u00b5) = 0. We call the follower\u2019s parameters \u00b5 at this stationary point the best response to the leader\u2019s parameter \u03b8, and we denote it as \u00b5br. 5.2 Optimizing the Leader\u2019s Parameters Whenever the follower converges at the best response \u00b5br to the current leader\u2019s parameters, we would like \fReward-Free Policy Space Compression for Reinforcement Learning to make an update to \u03b8 in the direction of the gradient \u2207\u03b8f(\u03b8, \u00b5), i.e., \u03b8 \u2190\u03b8 \u2212\u03b1\u2207\u03b8f(\u03b8, \u00b5). Just as before, we can pre-compute the active leader\u2019s component \u03b8k \u2208arg min\u03b8i\u2208\u03b8 D2(dsa \u00b5 ||dsa \u03b8i ) to make an update to \u03b8k in the direction of the gradient \u2207\u03b8kf(\u03b8k, \u00b5), which is di\ufb00erentiable in \u03b8k. 
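To make the bookkeeping of Algorithm 1 concrete, the following skeleton renders its outer loop in code; best_response, active_component, leader_grad, and cover_guarantee are placeholder callables standing in for the procedures of Sections 5.1, 5.2, and 5.3, so this is an illustrative sketch rather than an implementation of the paper's method.

```python
import numpy as np

def psca(init_leader, best_response, active_component, leader_grad, cover_guarantee,
         sigma, alpha=1e-2, epochs=300, max_K=50):
    """Skeleton of the PSCA outer loop (Algorithm 1).

    init_leader(K)              -> array (K, m): fresh leader parameters
    best_response(theta)        -> array (m,):   follower's best response mu_br (Sec. 5.1)
    active_component(theta, mu) -> int:          k attaining min_k D2(d_mu || d_theta_k)
    leader_grad(theta_k, mu)    -> array (m,):   gradient of Proposition 5.2
    cover_guarantee(theta)      -> float:        conservative bound obtained from (6)
    """
    K, guarantee = 0, np.inf
    theta = None
    while guarantee > sigma and K < max_K:     # max_K is a safety cap we add
        K += 1
        theta = init_leader(K)
        for _ in range(epochs):
            mu_br = best_response(theta)           # follower plays its best response
            k = active_component(theta, mu_br)     # only the active component is updated
            theta[k] = theta[k] - alpha * leader_grad(theta[k], mu_br)
        # stop once the conservative guarantee meets the requirement (while-test of Alg. 1)
        guarantee = cover_guarantee(theta)
    return theta
```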
Indeed, an update to any other leader\u2019s component would not have a meaningful impact on the value of the objective, whereas updating \u03b8k with a su\ufb03ciently small learning rate \u03b1 is guaranteed to decrease f(\u03b8, \u00b5), possibly forcing the follower to change its best response in the next epoch. The following proposition provides the formula for the gradient. Proposition 5.2 (Leader\u2019s Gradient). Let (\u03b8, \u00b5) \u2208 \u0398K, the gradient of f(\u03b8, \u00b5) w.r.t. \u03b8k is given by \u2207\u03b8kf(\u03b8, \u00b5) = \u2212 E (s,a)\u223cdsa \u03b8k \u0014\u0012 dsa \u00b5 (s, a) dsa \u03b8k(s, a) \u00132 \u2207\u03b8k log dsa \u03b8k(s, a) \u0015 . (4) 5.3 Assessing the Global Value of the Leader\u2019s Parameters The last missing piece of the PSCA algorithm requires verifying that the leader\u2019s strategy in the DSE (\u03b8\u2217, \u00b5\u2217) obtained from the GDA procedure is actually a \u03c3compression of \u0398 in M. In principle, we should verify that mink\u2208[K] D2(\u03b8\u2217 k, \u00b5) \u2264\u03c3 for any \u00b5 \u2208\u0398, which is equivalent to controlling if max\u00b5\u2208\u0398 f(\u03b8\u2217, \u00b5) \u2264\u03c3. Unfortunately, the follower\u2019s strategy \u00b5\u2217is only locally optimal. Thus, checking f(\u03b8\u2217, \u00b5\u2217) \u2264\u03c3 is not su\ufb03cient, as the globally optimal follower\u2019s strategy might attain a greater value of f than \u00b5\u2217. Instead, we should check Z\u03b8\u2217\u2264\u03c3, where Z\u03b8\u2217is given by Z\u03b8\u2217= max \u03c9\u2208\u2126\u0398 min k\u2208[K] Z SA \u0000\u03c9(s, a) \u00012 dsa \u03b8\u2217 k(s, a) ds da, (5) which can be written as a quadratically constrained quadratic program (see Appendix B.1). It might come as no surprise that solving this problem is NP-hard. Indeed, this is equivalent to the problem (2) with a \ufb01xed leader\u2019s strategy \u03b8\u2217, but the objective f(\u03b8\u2217, \u00b5) is still non-concave w.r.t. \u00b5. Luckily, we can reformulate this NP-hard problem in the surrogate linear program (see Appendix B.2): \u0000Z\u03b8\u2217\u0001\u22121 2 = max \u03c9\u2208\u2126\u0398 min k\u2208[K] Z SA \u03c9(s, a) \u0000dsa \u03b8\u2217 k(s, a) \u0001\u22121 2 ds da, (6) where the value Z\u03b8\u2217is a conservative approximation of Z\u03b8\u2217, as stated in the following theorem. Theorem 5.3. The value Z\u03b8\u2217is an upper bound to the value Z\u03b8\u2217, i.e., Z\u03b8\u2217\u2265Z\u03b8\u2217, \u2200\u03b8\u2217\u2208\u0398K. 6 GUARANTEES OF RL WITH A COMPRESSED POLICY SPACE In the previous sections, we have motivated the pursuit of a compression \u0398\u03c3 of the original policy space \u0398 in the CMP M as a way to improve the computation and sample e\ufb03ciency of solving RL tasks de\ufb01ned upon M. Since this compression procedure induces a loss, albeit bounded, in the expressive power of the policy space, it is worth investigating the performance guarantees that we have when addressing RL tasks with \u0398\u03c3. We \ufb01rst analyze policy evaluation (Section 6.1) and then policy optimization (Section 6.2). The reported theoretical results mostly combine techniques from (Metelli et al., 2018; Papini et al., 2019). 6.1 Policy Evaluation In policy evaluation (Sutton and Barto, 2018), we aim to estimate the performance J(\u03b8) of a target policy \u03b8 \u2208\u0398 through sampled interactions with an MDP MR. 
In our case, we can only draw samples with the policies in \u0398\u03c3, and we have to provide an o\ufb00-policy estimate of J(\u03b8) via importance sampling. Since for any target policy \u03b8 we are guaranteed to have a sampling policy \u03b8\u2032 \u2208\u0398\u03c3 such that D2(dsa \u03b8 ||dsa \u03b8\u2032 ) \u2264\u03c3, by choosing a convenient sampling policy in \u0398\u03c3, we can enjoy the following guarantee on the error we make when evaluating any target policy \u03b8 \u2208\u0398 in any MDP MR one can build upon M. Theorem 6.1 (Policy Evaluation Error). Let \u0398\u03c3 be a \u03c3-compression of \u0398 in M, let R be a reward function for M uniformly bounded by Rmax, let \u03b8 \u2208\u0398 be a target policy, and let \u03b4 \u2208(0, 1) be a con\ufb01dence. There exists \u03b8\u2032 \u2208\u0398\u03c3 such that, given N i.i.d. samples from dsa \u03b8\u2032 ,2 the error of the importance sampling evaluation of J(\u03b8) in MR, i.e., b JIS(\u03b8/\u03b8\u2032) = 1 (1 \u2212\u03b3)N N X n=1 w\u03b8/\u03b8\u2032(sn, an)R(sn, an), is upper bounded with probability at least 1 \u2212\u03b4 as |J(\u03b8) \u2212b JIS(\u03b8/\u03b8\u2032)| \u2264Rmax 1 \u2212\u03b3 r \u03c3 \u03b4N . Notably, given a budget of samples N, a con\ufb01dence \u03b4, and a requirement on the evaluation error beforehand, we could select a proper \u03c3 to build a \u03c3-compression that meets the requirement in any policy evaluation 2One can generate a sample from dsa \u03b8\u2032 by drawing s0 \u223c\u00b5 and then following the policy \u03b8\u2032. At each step t, the state st and action at are accepted with probability \u03b3, whereas the simulation ends with probability 1 \u2212\u03b3 (Metelli et al., 2021b). \fMirco Mutti, Stefano Del Col, Marcello Restelli task. However, choosing a sampling policy \u03b8\u2032 \u2208\u0398\u03c3 that is best suited for a given task might be non-trivial. Thus, one can instead take a batch of Nk samples with each policy in \u0398\u03c3, and then perform the policy evaluation via Multiple Importance Sampling (MIS) (Owen, 2013; Papini et al., 2019). Corollary 6.2. Let \u0398\u03c3 be a \u03c3-compression of \u0398 in M such that |\u0398\u03c3| = K, let R be a reward function for M uniformly bounded by Rmax, let \u03b8 \u2208\u0398 be a target policy, and let \u03b4 \u2208(0, 1) be a con\ufb01dence. Given Nk i.i.d. samples from each dsa \u03b8k, \u03b8k \u2208\u0398\u03c3, the error of the multiple importance sampling evaluation of J(\u03b8) in MR, i.e., b JMIS(\u03b8/\u03b81, . . . , \u03b8K) = 1 (1 \u2212\u03b3) K X k=1 Nk X n=1 dsa \u03b8 (sn,k, an,k) PK j=1 Njdsa \u03b8j(sn,k, an,k) R(sn,k, an,k), is upper bounded with probability at least 1 \u2212\u03b4 as |J(\u03b8) \u2212b JMIS(\u03b8/\u03b81, . . . , \u03b8K)| \u2264Rmax 1 \u2212\u03b3 r D2(dsa \u03b8 ||\u03a6) \u03b4N where N = PK k=1 Nk is the total number of samples and \u03a6 = PK k=1 Nk N dsa \u03b8k is a \ufb01nite mixture. Thanks to the result in (Metelli et al., 2020, Theorem 1), in tabular MDPs the evaluation error of the MIS estimator is guaranteed to be lower than the one of the IS estimator of Theorem 6.1 (as long as Nk \u2265N, where N is the number of samples considered by the IS estimator). 6.2 Policy Optimization In policy optimization (see Section 2.2), we seek for the policy \u03b8 that maximizes J(\u03b8) within a parametric policy space. 
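A compact sketch of the multiple importance sampling estimate of Corollary 6.2 (balance-heuristic weights over the K behavioral policies; the data layout is our own assumption) is the following.

```python
import numpy as np

def mis_return_estimate(batches, d_target, d_behaviors, R, gamma):
    """Multiple importance sampling estimate of J(theta), cf. Corollary 6.2.

    batches     : list over k of lists of (s, a) pairs drawn with policy theta_k
    d_target    : array (S, A), state-action distribution of the target policy
    d_behaviors : list over k of arrays (S, A), distributions of the K behavioral policies
    R           : array (S, A) of rewards
    """
    N_k = np.array([len(b) for b in batches], dtype=float)
    total = 0.0
    for batch in batches:
        for s, a in batch:
            # balance-heuristic denominator: sum_j N_j * d_theta_j(s, a)
            mixture = sum(N_j * d_j[s, a] for N_j, d_j in zip(N_k, d_behaviors))
            total += d_target[s, a] / mixture * R[s, a]
    return total / (1.0 - gamma)
```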
In principle, we could look for the policy that maximizes the performance within the \u03c3compression \u0398\u03c3, which can be found e\ufb03ciently with the OPTIMIST algorithm (Papini et al., 2019). Especially, in this setting OPTIMIST yields constant regret for tabular MDPs (Metelli et al., 2021a), as the set \u0398\u03c3 is \ufb01nite and it is composed of stochastic policies such that \u2200\u03b8, \u03b8\u2032 \u2208\u0398\u03c3, D2(dsa \u03b8 ||dsa \u03b8\u2032 ) < \u221e. However, this optimal policy within \u0398\u03c3 might be sub-optimal w.r.t. the optimal policy within the original policy space \u0398. We can still upper bound this sub-optimality, as reported in the following theorem. Theorem 6.3 (Policy Optimization in \u0398\u03c3). Let \u0398\u03c3 be a \u03c3-compression of \u0398 in M, and let R be a reward function for M uniformly bounded by Rmax. The policy \u03b8\u2217 \u03c3 \u2208arg max\u03b8\u2208\u0398\u03c3 J(\u03b8) is \u03f5-optimal for the MDP MR, where \u03f5 := | max \u03b8\u2208\u0398 J(\u03b8) \u2212J(\u03b8\u2217 \u03c3)| \u2264Rmax 1 \u2212\u03b3 p log \u03c3. Notably, the latter guarantee does not involve any estimation, and the policy \u03b8\u2217can be obtained in a \ufb01nite number of interactions. Nonetheless, one can shrink the sub-optimality \u03f5, and without deteriorating the sample complexity, by coupling the OPTIMIST algorithm with an additional o\ufb04ine optimization procedure. The idea is to return the policy \u03b8 \u2208\u0398 that maximizes the importance sampling evaluation obtained with the samples from the policies in \u0398\u03c3. Theorem 6.4 (O\ufb00-Policy Optimization in \u0398). Let \u0398\u03c3 be a \u03c3-compression of \u0398 in M such that |\u0398\u03c3| = K, let R be a reward function for M uniformly bounded by Rmax, and let \u03b4 \u2208(0, 1) be a con\ufb01dence. Given Nk samples from each dsa \u03b8k, \u03b8k \u2208\u0398\u03c3, we can recover an \u03f5-optimal policy for MR as \u0000, \u03b8\u2217 IS \u0001 \u2208 arg max \u03b8k\u2208\u0398\u03c3,\u03b8\u2208\u0398:D2(dsa \u03b8 ||dsa \u03b8k ) 1 (1 \u2212\u03b3)Nk Nk X n=1 w\u03b8/\u03b8k(sn, an)R(sn, an), (7) such that with probability at least 1 \u2212\u03b4 \u03f5 := \f \f max \u03b8\u2208\u0398 J(\u03b8) \u2212J(\u03b8\u2217 IS) \f \f \u2264Rmax 1 \u2212\u03b3 p 2\u03c3/Nk\u03b4. Although, contrary to the guarantee in Theorem 6.3, \u03f5 vanishes with the number of samples in the latter result, solving the o\ufb04ine problem (7) is non-trivial in general, as the policy space \u0398 is often in\ufb01nite. 7 NUMERICAL VALIDATION In this section, we provide a brief numerical validation of the policy space compression problem (Section 7.1) and how it bene\ufb01ts RL (Section 7.2, 7.3). To the purpose of the analysis, we consider the River Swim domain (Strehl and Littman, 2008), in which an agent navigates a chain of six states by taking one of two actions: either swim up, to move upstream towards the upper states, or swim down, to go downstream back to the lower states. Swimming upstream is harder than swimming downstream, thus the action swim up fails with a positive probability, such that only a sequence of swim up is likely to lead to the \ufb01nal state (an illustration of the corresponding CMP is reported in Figure 1a). In Appendix C, we report further details on the experimental settings, along with some additional results in a Grid World environment. 
We leave as future work a more extensive experimental evaluation of the policy space compression problem beyond toy domains. Figure 1: Set of experiments in the River Swim domain, which is illustrated in (a). (b) The value of the compression guarantee Z, its upper bound, and the requirement σ as a function of the number of policies K (left) and as a function of the iterations with K = 1 (right) obtained with PSCA. (c) The average return J(θ) obtained by OPTIMIST with the σ-compression Θσ (3 policies), a 3-policies discretization Θ3, and a 20-policies discretization Θ20 (95% c.i. over 50 runs). (d,e) IS and MIS evaluation of J(θ) by taking samples with θ, θk ∈ Θσ, a uniform policy θU, the mixture Θσ, or a mixture of 3 random policies Θ3. We provide both the empirical (left, 95% c.i. over 50 runs) and the hindsight (right) values. 7.1 Policy Space Compression In the River Swim, we consider the policy space Θ ⊆ R^{|S|×(|A|−1)} of the softmax policies π_θ(a|s) = exp(θ_{sa}) / Σ_{j∈A} exp(θ_{sj}), and we seek a compression Θσ with the requirement σ = 10, such that Θσ is a valid σ-compression if min_{θ∈Θσ} max_{µ∈Θ} f(θ, µ) ≤ 10. In Figure 1b, we report the value of the compression guarantee Z = max_{µ∈Θ} f(θ, µ) from (5) and of its upper bound from (6). Especially, we can see that PSCA effectively found a valid σ-compression Θσ of just K = 3 policies (Figure 1b, left), and that both quantities smoothly decrease during the GDA procedure for a fixed number of policies (Figure 1b, right). Notably, K = 2 policies are actually sufficient to meet the σ requirement in this setting. However, PSCA cannot access Z directly but only its conservative approximation from (6), and thus it stops as soon as the latter drops below σ. In Appendix C, we report an illustration of the obtained policies θk ∈ Θσ. This set coarsely includes two policies that swim up most of the time, either mixing the actions when the rightmost state is reached (θ1) or swimming up there as well (θ2), and a policy that swims down in the leftmost state and swims up in the others (θ3). 7.2 Policy Evaluation with a Compressed Policy Space We now show that the obtained σ-compression Θσ can be employed with benefit in the most challenging policy evaluation task one can define in the River Swim, which is the off-policy evaluation of an ε-greedy policy θ for the reward function that assigns Rmax = 100 for taking the action swim up in the rightmost state. 
In Figure 1d, we show that sampling with the policies \u03b81, \u03b82 \u2208\u0398\u03c3 lead to an IS o\ufb00-policy evaluation that is comparable to the exact J(\u03b8) (dashed line) and its onpolicy estimate (\u03b8). Instead, the policy \u03b83 and a uniform policy \u03b8U lead to signi\ufb01cantly worse evaluations, as they collect too many samples in the leftmost state. Even by sampling from a uniform mixture of the policies in \u0398\u03c3, the performance of the MIS evaluation is signi\ufb01cantly better than the one obtained by a uniform mixture of three random policies (\u03983), as reported in Figure 1e. For both the IS and the MIS regime, we provide the empirical evaluations (on the left) and the hindsight evaluations (right) obtained with the exact values of the importance weights w\u03b8/\u03b8\u2032 and the con\ufb01dence bounds of the Theorem 6.1, 6.2 respectively. \fMirco Mutti, Stefano Del Col, Marcello Restelli 7.3 Policy Optimization with a Compressed Policy Space Finally, we show that the compression \u0398\u03c3 allows for e\ufb03cient policy optimization. We consider the same reward function of the previous section, and the OPTIMIST (Papini et al., 2019) algorithm equipped with \u0398\u03c3, or a uniform discretization of the original policy space \u0398 with either three policies (\u03983) or twenty policies (\u039820). In Figure 1c, we show that OPTIMIST with \u0398\u03c3 swiftly converges (less than \ufb01ve iterations) to the optimal policy within the space. Instead, the policy space \u03983 leads to a huge sub-optimality in the \ufb01nal performance, and OPTIMIST with \u039820 is way slower to converge to the optimal policy within the space. These results are a testament of the ability of PSCA to incorporate the peculiar structure of the domain in a small set of representative policies \u0398\u03c3, and to allows for a remarkable balance between sample e\ufb03ciency and sub-optimality in subsequent policy optimization. 8 DISCUSSION AND CONCLUSION In this paper, we considered the problem of compressing an in\ufb01nite parametric policy space into a \ufb01nite set of representative policies for a given environment. First, we provided a formal de\ufb01nition of the problem, and we highlighted its inherent hardness. Then, we proposed a tractable game-theoretic reformulation, for which a locally optimal solution can be e\ufb03ciently found through an iterative GDA procedure. Finally, we provided a theoretical characterization of the guarantees that the compression brings to subsequent RL tasks, and a numerical validation of the approach. 8.1 Related Works Previous works (Gregor et al., 2016; Eysenbach et al., 2018; Achiam et al., 2018; Hansen et al., 2019) have considered heuristic methods to extract a convenient set of policies from the policy space, but they lack the formalization and the theoretical guarantees that we provided. Especially, Eysenbach et al. (2021) argue that the set of policies learned by those methods cannot be used to solve all the relevant policy optimization tasks. Those policies should be generally intended as e\ufb00ective initializations for subsequent adaptation procedures, operating in the original policy space once the task is revealed, rather than a minimal set of su\ufb03cient policies. To the best of our knowledge, the only other work considering a formal criterion to operate a selection of the policies is (Zahavy et al., 2021). Having some similarieties, our work and (Zahavy et al., 2021) still di\ufb00er for some crucial aspects. 
Whereas they look for a set of policies that maximizes the performance under the worst-case reward, we look for a set of policies that guarantees \u03f5-optimality for any task. They do not consider the parameterization of the policy space as an additional source of structure, and thus they do not fully exploit the interplay between the policy space and the environment as we do. Their problem formulation is multi-task, as they restrict the class of rewards to linear combinations of a feature vector, our formulation is instead fully reward-free. Overall, our policy space compression problem is more general, as it is solving the problem in (Zahavy et al., 2021) as a by-product. However, their problem might be easier in nature,3 and thus preferable if one only cares about the worst-case performance. Finally, Eysenbach et al. (2021) provide interesting insights on the information geometry of the space of the state distributions induced by a policy in a CMP, which can lead to compelling geometric interpretations of our policy space compression problem. 8.2 Limitations and Future Directions The main limitation of our work is that the proposed algorithm is assuming full knowledge of the environment, which is uncommon in RL literature. However, we believe that PSCA is providing a clear blueprint for future works that might target the compression problem from interactions with an unknown environment, to pave the way for scalable policy space compression. Especially, such an extension would require sample-based estimates of the gradients (3), (4), and the global guarantee (6). Whereas estimating the gradients of state-action distributions is not an easy feat, previous works provide useful inspiration (Morimura et al., 2010; Schroecker and Isbell, 2017; Schroecker et al., 2018). Similarly, sample-based estimates of (6) can take inspiration from approximate linear programming methods for MDPs (De Farias and Van Roy, 2003; Pazis and Parr, 2011). Another potential limitation of the proposed approach is the memory complexity required to store the compression, in contrast to the compact representations of common policy spaces, such as a small set of basis functions or a neural network architecture. A future work might focus on compact representations for a given compression. Other interesting future directions include an extension of the policy space compression problem to the parameterbased perspective (Sehnke et al., 2008; Metelli et al., 2018; Papini et al., 2019), and the development of policy optimization algorithms that are tailored to exploit a compression of the policy space. 3This is purely speculative as (Zahavy et al., 2021) does not provide a formal study of the computational complexity of the problem. \fReward-Free Policy Space Compression for Reinforcement Learning" + }, + { + "url": "http://arxiv.org/abs/2202.06545v3", + "title": "Provably Efficient Causal Model-Based Reinforcement Learning for Systematic Generalization", + "abstract": "In the sequential decision making setting, an agent aims to achieve\nsystematic generalization over a large, possibly infinite, set of environments.\nSuch environments are modeled as discrete Markov decision processes with both\nstates and actions represented through a feature vector. The underlying\nstructure of the environments allows the transition dynamics to be factored\ninto two components: one that is environment-specific and another that is\nshared. Consider a set of environments that share the laws of motion as an\nexample. 
In this setting, the agent can take a finite amount of reward-free\ninteractions from a subset of these environments. The agent then must be able\nto approximately solve any planning task defined over any environment in the\noriginal set, relying on the above interactions only. Can we design a provably\nefficient algorithm that achieves this ambitious goal of systematic\ngeneralization? In this paper, we give a partially positive answer to this\nquestion. First, we provide a tractable formulation of systematic\ngeneralization by employing a causal viewpoint. Then, under specific structural\nassumptions, we provide a simple learning algorithm that guarantees any desired\nplanning error up to an unavoidable sub-optimality term, while showcasing a\npolynomial sample complexity.", + "authors": "Mirco Mutti, Riccardo De Santi, Emanuele Rossi, Juan Felipe Calderon, Michael Bronstein, Marcello Restelli", + "published": "2022-02-14", + "updated": "2023-03-30", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Whereas recent breakthroughs have established Reinforcement Learning (RL, Sutton and Barto 2018) as a powerful tool to address a wide range of sequential decision making problems, the curse of generalization (Kirk et al. 2021) is still a main limitation of commonly used techniques. RL algorithms deployed on a given task are usually effective in discovering the correlation between an agent\u2019s behavior and the resulting performance from large amounts of labeled samples (Jaksch, Ortner, and Auer 2010; Lange, Gabel, and Riedmiller 2012). However, those algorithms are usually unable to discover basic cause-effect relations between the agent\u2019s behavior and the environment dynamics. Crucially, *These authors contributed equally. The appendix of this paper can be found at https://arxiv.org/abs/2202.06545. Copyright \u00a9 2023, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. the aforementioned correlations are oftentimes speci\ufb01c to the task at hand, and they are unlikely to be of any use for addressing different tasks or environments. Instead, some universal causal relations generalize over the environments, and once learned they can be exploited for solving any task. Let us consider as an illustrative example an agent interacting with a large set of physical environments. While each of these environments can have its speci\ufb01c dynamics, we expect the basic laws of motion to hold across the environments, as they encode general causal relations. Once they are learned, there is no need to discover them again from scratch when facing a new task, or an unseen environment. Even if the dynamics over these relations can change, such as moving underwater is different than moving in the air, or the gravity can change from planet to planet, the underlying causal structure still holds. This knowledge alone often allows the agent to solve new tasks in unseen environments by taking a few, or even zero, interactions. We argue that we should pursue this kind of generalization in RL, which we call systematic generalization, where learning universal causal relations from interactions with a few environments allows us to approximately solve any task in any other environment without further interactions. 
Although this problem setting might seem overly ambitious or even far-fetched, in this paper we provide the first tractable formulation of systematic generalization, thanks to a set of structural assumptions that are motivated by a causal viewpoint. The problem formulation is partially inspired by reward-free RL (Jin et al. 2020a), in which the agent can take unlabelled interactions with an environment to learn a model that allows approximate planning for any reward function. Here, we extend this formulation to a large, potentially infinite, set of reward-free environments, or a universe, the agent can freely interact with. We consider discrete environments, such that both their states and actions can be described through vectors of discrete features. Crucially, these environments share a common causal structure that explains a significant portion, but not all, of their transition dynamics. Can we design a provably efficient algorithm that guarantees an arbitrarily small planning error for any possible task that can be defined over the set of environments, by taking reward-free interactions with a generative model? Table 1: Sample complexity overview. Causal Structure Estimation: target Pr(Ĝ ≠ G_ε) ≤ δ, with K = O(n log(d_S^2 d_A / δ) / ε^2) for discrete MDPs and K = O(log(S^2 A / δ) / ε^2) for tabular MDPs. Bayesian Network Estimation: target Pr(||P̂_G − P_G||_1 ≥ ε) ≤ δ, with K = Õ(d_S^3 n^{3Z+1} / ε^2) and K = Õ(S^2 2^{2Z} / ε^2). Systematic Generalization: target Pr(V*_1 − V^π_1 ≥ ε_λ + ε) ≤ δ, with K = Õ(M H^6 d_S^3 Z^2 n^{5Z+3} / ε^2) and K = Õ(M H^6 S^4 A^2 Z^2 / ε^2). In this paper, we provide a partially positive answer to this question by presenting a simple but principled causal model-based approach (see Figure 1). This algorithm interacts with a finite subset of the universe to learn the causal structure underlying the set of environments in the form of a causal dependency graph G. Then, the causal transition model, which encodes the dynamics that is common across the environments, is obtained by estimating the Bayesian network PG over G from a mixture of the environments. Finally, the causal transition model is employed by a planning oracle to provide an approximately optimal policy for an unknown environment and a given reward function. We can show that this simple recipe, with a sample complexity that is polynomial in all the relevant quantities, allows achieving any desired planning error up to an unavoidable error term. The latter is inherent to the setting, which demands generalization over an infinite set of environments, and cannot be overcome without additional samples from the test environment. The contributions of this paper include: (c1) The first tractable formulation of the systematic generalization problem in RL, thanks to structural assumptions motivated by causal considerations (§ 3); (c2) A provably efficient algorithm to learn systematic generalization over an infinite set of environments (§ 4.1); (c3) The sample complexity of estimating the causal structure underlying a discrete MDP (§ 4.2); (c4) The sample complexity of estimating the Bayesian network underlying a discrete MDP (§ 4.3); (c5) A brief numerical validation of the main results (§ 5). 
On a technical level, (c3, c4) require the adaptation of known results in causal discovery (Wadhwa and Dong 2021) and Bayesian network estimation (Dasgupta 1997) to the speci\ufb01c MDP setting, which are then employed as building blocks to obtain the rate for systematic generalization (c2). See Table 1 for a summary of the main sample complexity results. With this work we aim to connect several active research areas on model-based RL (Sutton and Barto 2018), rewardfree RL (Jin et al. 2020a), causal RL (Zhang et al. 2020), factored MDPs (Rosenberg and Mansour 2021), independence testing (Canonne et al. 2018), experimental design (Ghassami et al. 2018) in a general framework where individual progresses can be enhanced beyond the sum of their parts. 2 Preliminaries We start with some notions about graphs, causality, and Markov decision processes for later use. We denote a set of integers {1, . . . , a} as [a], and the probability simplex over the space A as \u2206A. For a factored space A = A1 \u00d7. . .\u00d7Aa and a set of indices Z \u2286[a], which we call a scope, we denote the scope operator as A[Z] := N i\u2208Z Ai, in which N is a cardinal product. For any A \u2208A, we denote with A[Z] the vector (Ai)i\u2208Z. For singletons we write A[i] as a shorthand for A[{i}]. Given two probability measures P and Q over a discrete space A, their L1-distance is \u2225P \u2212Q\u22251 = P A\u2208A |P(A)\u2212Q(A)|, and their Kullback-Leibler (KL) divergence is dKL(P||Q) = P A\u2208A P(A) log(P(A)/Q(A)). Graphs We de\ufb01ne a graph G as a pair G := (V, E), where V is a set of nodes and E \u2286N \u00d7 N is a set of edges between them. We call G a directed graph if all of its edges E are directed (i.e., ordered pairs of nodes). We also de\ufb01ne the in-degree of a node to be its number of incoming edges: degreein(A) = |{(B, A) : (B, A) \u2208E, \u2200B}|. G is said to be a Directed Acyclic Graph (DAG) if it is a directed graph without cycles. We call G a bipartite graph if there exists a partition X \u222aY = V such that none of the nodes in X and Y are connected by an edge, i.e., E \u2229(X \u00d7 X) = E \u2229 (Y \u00d7 Y ) = \u2205. For any subset of nodes S \u2282V, we de\ufb01ne the subgraph induced by S as G[S] := (S, E[S]), in which E[S] = E \u2229(S \u00d7 S). The skeleton of a graph G is the undirected graph that is obtained from G by replacing all the directed edges in E with undirected ones. Finally, the graph edit distance between two graphs is the minimum number of graph edits (addition or deletion of either a node or an edge) necessary to transform one graph into the other. Causal Graphs and Bayesian Networks For a set X of random variables, we represent the causal structure over X with a DAG GX = (X, E),1 which we call the causal graph of X. For each pair of variables A, B \u2208X, a directed edge (A, B) \u2208GX denotes that B is conditionally dependent on A. For every variable A \u2208X, we denote as Pa(A) the causal parents of A, i.e., the set of all the variables B \u2208X on which A is conditionally dependent, (B, A) \u2208GX . A Bayesian network (Dean and Kanazawa 1989) over the set X is de\ufb01ned as N := (GX , P), where GX speci\ufb01es the structure of the network, i.e., the dependencies between the variables in X, and the distribution P : X \u2192\u2206X speci\ufb01es the conditional probabilities of the variables in X, such that P(X) = Q Xi\u2208X Pi(Xi| Pa(Xi)). 
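For reference, the factorization P(X) = ∏_i P_i(X_i | Pa(X_i)) can be evaluated in a few lines; the sketch below assumes a simple data layout of ours (a dictionary of parent scopes and per-node conditional probability tables) and is not tied to any specific library.

```python
import numpy as np

def bayes_net_prob(x, parents, cpts):
    """Probability of a full assignment x under a Bayesian network N = (G, P).

    x       : tuple of feature values, one per node, each in [n]
    parents : dict node index -> tuple of parent node indices (the sets Pa(X_i))
    cpts    : dict node index -> numpy array of shape (n,) * len(parents[i]) + (n,),
              giving the conditional table P_i(X_i | Pa(X_i))
    """
    prob = 1.0
    for i, xi in enumerate(x):
        pa_vals = tuple(x[j] for j in parents[i])   # values taken by the parents of X_i
        prob *= cpts[i][pa_vals][xi]                 # P_i(x_i | pa_vals)
    return prob
```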
Markov Decision Processes A tabular episodic Markov Decision Process (MDP, Puterman 2014) is de\ufb01ned as M := (S, A, P, H, r), where S is a set of |S| = S states, A is a set of |A| = A actions, P is a transition model such that P(s\u2032|s, a) gives the conditional probability of the next state s\u2032 having taken action a in state s, H is the episode horizon, r : S \u00d7 A \u2192[0, 1] is a deterministic reward function. The strategy of an agent interacting with M is represented 1We will omit the subscript X whenever clear from the context. \f} Causal Structure Causal Transition Model \u001f G \u001f P \u001f G MDP1 MDP2 MDPM \u001f \u03c0 unseen MDP Approximate Planning Figure 1: High-level illustration of the causal model-based approach to systematic generalization. by a non-stationary, stochastic policy, a collection of functions (\u03c0h : S \u2192\u2206A)h\u2208[H] where \u03c0h(a|s) denotes the conditional probability of taking action a in state s at step h. The value function V \u03c0 h : S \u2192R associated to \u03c0 is de\ufb01ned as the expected sum of the rewards that will be collected, under the policy \u03c0, starting from s at step h, i.e., V \u03c0 h (s) := E\u03c0 \u0014 H X h\u2032=h r(sh\u2032, ah\u2032) \f \f \f sh = s \u0015 . For later convenience, we further de\ufb01ne PV \u03c0 h+1(s, a) := Es\u2032\u223cP (\u00b7|s,a)[V \u03c0 h+1(s\u2032)] and V \u03c0 1 := Es\u223cP [V \u03c0 1 (s)]. We will write V \u03c0 M,r to denote V \u03c0 1 in the MDP M with reward function r (if not obvious from the context). For an MDP M with \ufb01nite states, actions, and horizon, there always exists an optimal policy \u03c0\u2217that gives the value V \u2217 h (s) = sup\u03c0 V \u03c0 h (s) for every s, a, h. The goal of the agent is to \ufb01nd a policy \u03c0 that is \u03f5-close to the optimal one, i.e., V \u2217 1 \u2212V \u03c0 1 \u2264\u03f5. Finally, we de\ufb01ne a discrete Markov decision process as M := ((S, dS, n), (A, dA, n), P, H, r), where S, A, P, H, r are speci\ufb01ed as before, and where the states and actions spaces admit additional structure, such that every s \u2208S can be represented through a dS-dimensional vector of discrete features taking value in [n], and every a \u2208A can be represented through a dA-dimensional vector of discrete features taking value in [n]. Note that any tabular MDP can be formulated under this alternative formalism through one-hot encoding by taking n = 2, dS = S, and dA = A. 3 Problem Formulation In our setting, a learning agent aims to master a large, potentially in\ufb01nite, set U of environments modeled as discrete MDPs without rewards that we call a universe U := \b Mi = ((S, dS, n), (A, dA, n), Pi, \u00b5) \t\u221e i=1. The agent can draw a \ufb01nite amount of experience by interacting with the MDPs in U. From these interactions alone, the agent aims to acquire suf\ufb01cient knowledge to approximately solve any task that can be speci\ufb01ed over the universe U. A task is de\ufb01ned as any pairing of an MDP M \u2208U and a reward function r, whereas solving it refers to providing a slightly sub-optimal policy via planning, i.e., without taking additional interactions. We call this problem systematic generalization, which we can formalize as follows. De\ufb01nition 1 (Systematic Generalization). 
For any unknown MDP M \u2208U and any given reward function r : S \u00d7 A \u2192 [0, 1], the systematic generalization problem requires the agent to provide a policy \u03c0, such that V \u2217 M,r \u2212V \u03c0 M,r \u2264\u03f5 up to any desired sub-optimality \u03f5 > 0. Since the set U is in\ufb01nite, we clearly require additional structure to make the problem feasible. On the one hand, the state space (S, dS, n), action space (A, dA, n), and initial state distribution \u00b5 are shared across M \u2208U. The transition dynamics Pi is instead speci\ufb01c to each MDP Mi \u2208U. However, we assume the presence of a common causal structure that underlies the transition dynamics of the universe, and relates the single transition models Pi. 3.1 Causal Structure of the Transition Dynamics The transition dynamics of a discrete MDP gives the conditional probability of next state features s\u2032 given the current state-action features (s, a). To ease the notation, from now on we will denote the state-action features with a random vector X = (Xi)i\u2208[dS+dA], in which each Xi is supported in [n], and the next state features with a random vector Y = (Yi)i\u2208[dS], in which each Yi is supported in [n]. For each environment Mi \u2208U, the conditional dependencies between the next state features Y and the current state-action features X are represented through a bipartite dependency graph Gi, such that (X[z], Y [j]) \u2208Gi if and only if Y [j] is conditionally dependent on X[z]. Clearly, each environment can display its own dependencies, but we assume there is a set of dependencies that represent general causal relationships between the features, and that appear in any Mi \u2208U. In particular, we call the intersection G := \u2229\u221e i=0Gi the causal structure of U, which is the set of conditional dependencies that are common across the universe. In Figure 2, we show an illustration of such a causal structure. Since it represents universal causal relationships, the causal structure G is time-consistent, i.e., G(h) = G(1) for any step h \u2208[H], and we further assume that G is sparse, which means that the number of features X[z] on which a feature Y [j] is dependent on is bounded from above. Assumption 1 (Z-sparseness). The causal structure G is Zsparse if maxj\u2208[dS] degreein(Y [j]) \u2264Z. Given a causal structure G, without loosing generality2 we can express each transition model Pi as Pi(Y |X) = PG(Y |X)Fi(Y |X), in which PG is the Bayesian network over the causal structure G, whereas Fi includes environment-speci\ufb01c factors.3 Since it represents the conditional probabilities due to universal causal relations in U, we call PG the causal transition model of U. Thanks to the 2Note that one can always take PG(Y |Z) = 1, \u2200(X, Y ). 3The parameters in Fi are numerical values such that Pi remains a well-de\ufb01ned probability measure. \f{ { { dS { X Y causal edge e \u2208G environment speci\ufb01c edge e / \u2208G [n] dS + dA [n] Figure 2: Causal structure G of U. structure G, PG can be further factored as PG(Y |X) = dS Y j=1 Pj(Y [j]|X[Zj]), (1) where the scopes Zj are the the causal parents of Y [j], i.e., (X[z], Y [j]) \u2208G, \u2200z \u2208Zj. In Figure 3, we show an illustration of the causal transition model and its factorization. Similarly to the underlying structure G, the causal transition model PG is also time-consistent, i.e., P (h) G = P (1) G for any step h \u2208[H]. 
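As an illustration of the factorization in (1), the sketch below samples the next-state features one component at a time, each conditioned only on the state-action features in its scope. The scopes and conditional tables are hypothetical placeholders, not quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 2            # each feature takes values in [n]
d_S, d_A = 3, 2  # state and action feature dimensions
d_X = d_S + d_A  # state-action feature dimension

# Hypothetical Z-sparse scopes: the parents of each next-state feature Y[j].
scopes = {0: [0, 3], 1: [1], 2: [2, 4]}   # Z = 2 in this example

# Hypothetical conditional tables P_j(Y[j] | X[Z_j]), one row per parent configuration.
tables = {j: rng.dirichlet(np.ones(n), size=n ** len(Z)) for j, Z in scopes.items()}

def index_of(values, n):
    """Map a tuple of parent values to a row index of the conditional table."""
    idx = 0
    for v in values:
        idx = idx * n + v
    return idx

def sample_next_state(x):
    """Sample Y ~ P_G(.|X = x) = prod_j P_j(Y[j] | X[Z_j])."""
    y = np.zeros(d_S, dtype=int)
    for j, Z in scopes.items():
        row = index_of(tuple(x[z] for z in Z), n)
        y[j] = rng.choice(n, p=tables[j][row])
    return y

x = rng.integers(0, n, size=d_X)   # a state-action feature vector
print(x, "->", sample_next_state(x))
```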
In this work, we assume that the causal transition model is non-vacuous and that it explains a signi\ufb01cant part of the transition dynamics of Mi \u2208U. Assumption 2 (\u03bb-suf\ufb01ciency). Let \u03bb \u2208[0, 1] be a constant. The causal transition model PG is causally \u03bb-suf\ufb01cient if supX \u2225PG(\u00b7|X) \u2212Pi(\u00b7|X)\u22251 \u2264\u03bb, \u2200Pi \u2208Mi \u2208U. The parameter \u03bb controls the amount of the transition dynamics that is due to the universal causal relations G (\u03bb = 0 means that PG is suf\ufb01cient to explain the transition dynamics of any Mi \u2208U, whereas \u03bb = 1 implies no shared structure). In this paper, we argue that learning the causal transition model PG is a good target for systematic generalization and we provide theoretical support for this claim in \u00a7 4. 3.2 A Class of Training Environments Even if the universe U admits the structure that we presented in the last section, it is still an in\ufb01nite set. Instead, the agent can only interact with a \ufb01nite subset of discrete MDPs M := {Mi = ((S, dS, n), (A, dA, n), Pi, \u00b5)}M i=1 \u2282U, which we call a class of size M. Crucially, the causal structure G is a property of the full set U, and if we aim to infer it from interactions with a \ufb01nite class M, we have to assume that M is informative enough on the structure of U. Assumption 3 (Diversity). Let M \u2282U be class of size M. We say that M is causally diverse if G = \u2229M i=1Gi = \u2229\u221e i=1Gi.4 Analogously, if we aim to infer the causal transition model PG from interactions with the transition models Pi of the single MDPs Mi \u2208M, we have to assume that M is balanced in terms of the conditional probabilities displayed by its components, so that the factors that do not represent universal causal relations even out while learning. 4W.l.o.g., we assume that the indices i \u2208[M] refers to the Mi \u2208M, and i \u2208(M, \u221e) to the Mi \u2208U \\ M. dS { X Y PG(Y | X) = dS \u001f j=1 Pj(Y [j] | X[Zj]) Figure 3: Causal transition model PG of U. Assumption 4 (Evenness). Let M \u2282U a class of size M. We say that M is causally even if 5 Ei\u223cU[M] \u0002 Fi(Y [j]|X) \u0003 = 1, \u2200j \u2208[dS]. In this paper we assume that M is diverse and even by design, while we leave as future work the problem of selecting such a class from active interactions with U, which would add to our formulation \ufb02avors of active learning and experimental design (Hauser and B\u00a8 uhlmann 2014; Kocaoglu, Shanmugam, and Bareinboim 2017; Ghassami et al. 2018). 3.3 Learning Systematic Generalization Before addressing the sample complexity of systematic generalization, it is worth considering the kind of interactions that we need in order to learn the causal transition model PG and its underlying causal structure G. Especially, thanks to the peculiar con\ufb01guration of the causal structure G, i.e., a bipartite graph in which the edges are necessarily directed from the state-action features X to the next state features Y , as a causation can only happen from the past to the future, learning the skeleton of G is equivalent to learning its full structure. Crucially, learning the skeleton of a causal graph does not need speci\ufb01c interventions, as it can be done from observational data alone (Hauser and B\u00a8 uhlmann 2014). Proposition 1. The causal structure G of U can be identi\ufb01ed from purely observational data. 
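Proposition 1 and the diversity assumption suggest a direct observational recipe, sketched below under the assumption that a dependency graph has already been estimated for each training environment (e.g., with the pairwise tests discussed in Section 4.2): the causal structure is recovered as the intersection of the per-environment edge sets. The edge encoding and the toy graphs are illustrative choices.

```python
def shared_causal_structure(estimated_graphs):
    """Diversity assumption in code: the causal structure of the universe is the
    intersection of the per-environment dependency graphs, G = G_1 ∩ ... ∩ G_M.
    Each graph is a set of directed edges (z, j) from X[z] to Y[j]."""
    return set.intersection(*map(set, estimated_graphs))

# Hypothetical example with three training environments, d_S + d_A = 5 and d_S = 3.
g1 = {(0, 0), (3, 0), (1, 1), (2, 2), (4, 2), (4, 1)}   # environment-specific edge (4, 1)
g2 = {(0, 0), (3, 0), (1, 1), (2, 2), (4, 2), (0, 1)}   # environment-specific edge (0, 1)
g3 = {(0, 0), (3, 0), (1, 1), (2, 2), (4, 2)}
print(shared_causal_structure([g1, g2, g3]))  # only the five shared (causal) edges survive
```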
In this paper, we will consider the online learning setting with a generative model for estimating G and PG from sampled interactions with a class M of size M. A generative model allows the agent to set the state of an MDP before sampling a transition, instead of drawing sequential interactions from the process. Finally, analogous results to what we obtain here can apply to the of\ufb02ine setting as well, in addition to convenient coverage assumptions on the dataset. 4 Sample Complexity Analysis We provide a sample complexity analysis of the problem, which stands as a core contribution of this paper along with the problem formulation itself (\u00a7 3). First, we consider the sample complexity of systematic generalization (\u00a7 4.1). Then, we provide ancillary results on the estimation of the causal structure (\u00a7 4.2) and the Bayesian network (\u00a7 4.3) of an MDP, which can be of independent interest. 5We denote by U[M] the uniform distribution over [M]. \f4.1 Sample Complexity of Systematic Generalization with a Generative Model We have access to a class M of discrete MDPs within a universe U, from which we draw interactions with a generative model P(X). We aim to solve the systematic generalization problem as described in De\ufb01nition 1. This problem requires to provide, for any combination of an (unknown) MDP M \u2208U, and a given reward function r, a planning policy b \u03c0 such that V \u2217 M,r \u2212V b \u03c0 M,r \u2264\u03f5. Especially, can we design an algorithm that guarantees this requirement with high probability by taking a number of samples K that is polynomial in \u03f5 and the relevant parameters of M? Here we give a partially positive answer to this question, by providing a simple but provably ef\ufb01cient algorithm that guarantees systematic generalization over U up to an unavoidable suboptimality term \u03f5\u03bb that we will later specify. The algorithm implements a model-based approach into two separated components. The \ufb01rst component is the procedure that actually interacts with the class M to obtain a principled estimation b P b G of the causal transition model PG of U. The second, is a planning oracle that takes as input a reward function r and the estimated causal transition model, and returns an optimal policy b \u03c0 operating on b P b G as an approximation of the transition model Pi of the true MDP Mi.6 First, we provide the sample complexity of the causal transition model estimation (Algorithm 1), which in turn is based on repeated causal structure estimations (Algorithm 2) to obtain b G, and an estimation procedure of the Bayesian network over b G (Algorithm 3) to obtain b P b G. Lemma 4.1. Let M = {Mi}M i=1 be a class of M discrete MDPs, let \u03b4 \u2208(0, 1), \u03f5 > 0. The Algorithm 1 returns a causal transition model b P b G such that Pr(\u2225b P b G \u2212PG\u22251 \u2265 \u03f5) \u2264\u03b4 with a sample complexity K = O \u0010 Md3 SZ2n3Z+1 log \u0000 4Md2 SdAnZ \u03b4 \u0001. \u03f52\u0011 . An analogous result can be derived for tabular MDPs. Lemma 4.2. Let M = {Mi}M i=1 be a class of M tabular MDPs. The result of Lemma 4.1 reduces to K = O \u0010 MS2Z222Z log \u0000 4MS2A2Z \u03b4 \u0001. \u03f52\u0011 . Having established the sample complexity of the causal transition model estimation, we can now show how the learned model b P b G allows us to approximately solve, via a planning oracle, any task de\ufb01ned by a combination of a latent MDP Mi \u2208U and a given reward function r. 
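In the tabular case, the planning oracle mentioned above can be instantiated by finite-horizon backward induction on the estimated model. The following is a minimal sketch of that step, assuming the estimated causal transition model has been expanded into a dense (S, A, S) array; this is convenient for illustration but forgoes the factored representation that the sample complexity analysis exploits.

```python
import numpy as np

def finite_horizon_plan(P_hat, r, H):
    """Backward induction on an estimated tabular model.
    P_hat: array of shape (S, A, S) with P_hat[s, a, s'] ~ Pr(s'|s, a).
    r:     array of shape (S, A) with rewards in [0, 1].
    Returns a non-stationary deterministic policy pi[h, s] and the values V[h, s].
    A minimal sketch of the planning oracle, assuming the tabular representation."""
    S, A, _ = P_hat.shape
    V = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = r + P_hat @ V[h + 1]          # shape (S, A): r(s, a) + E[V_{h+1}(s')]
        pi[h] = np.argmax(Q, axis=1)
        V[h] = np.max(Q, axis=1)
    return pi, V
```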
To provide this result in the discrete MDP setting, we have to further assume that the transition dynamics Pi of the target MDP Mi admits factorization analogous to (1), such that we can write Pi(Y |X) = QdS j=1 Pi,j(Y [j]|X[Z \u2032 j]), where the scopes Z \u2032 j are given by the environment causal structure Gi, which we assume to be 2Z-sparse (Assumption 1). 6The planning oracle can be substituted with a principled approximate planning solver (see Jin et al. 2020a, Section 3.3). Algorithm 1 Causal Transition Model Estimation Input: class of MDPs M, error \u03f5, con\ufb01dence \u03b4 let K\u2032 = C\u2032\u0000d2 SZ2n log(2Md2 SdA/\u03b4) \u000e \u03f52\u0001 set the generative model P(X) = UX for i = 1, . . . , M do let Pi(Y |X) the transition model of Mi \u2208M b Gi \u2190Causal Structure Estimation (Pi, P(X), K\u2032) end for let b G = \u2229M i=1 b Gi let K\u2032\u2032 = C\u2032\u2032\u0000d3 Sn3Z+1 log(4dSnZ/\u03b4) \u000e \u03f52\u0001 let PM(Y |X) be the mixture 1 M PM i=1 Pi(Y |X) b P b G \u2190Bayesian Network Estimation (PM, b G, K\u2032\u2032) Output: causal transition model b P b G Theorem 4.3. Let \u03b4 \u2208(0, 1) and \u03f5 > 0. For an unknown discrete MDP M \u2208U, and a given reward function r, a planning oracle operating on the causal transition model b P b G as an approximation of M returns a policy b \u03c0 such that Pr \u0000V \u2217 Mi,r \u2212V b \u03c0 Mi,r \u2265\u03f5\u03bb + \u03f5 \u0001 \u2264\u03b4, where \u03f5\u03bb = 2\u03bbH3dSn2Z+1, and b P b G is obtained from Algorithm 1 with \u03b4\u2032 = \u03b4 and \u03f5\u2032 = \u03f5/2H3nZ+1. Without the additional factorization of the environmentspeci\ufb01c transition model, the result of Theorem 4.3 reduces to the analogous for the tabular MDP setting. Corollary 4.4. Let M a tabular MDP, the result of Theorem 4.3 holds with \u03f5\u03bb = 2\u03bbSAH3, \u03f5\u2032 = \u03f5/2SAH3. Theorem 4.3 and Corollary 4.4 establish the sample complexity of systematic generalization through Lemma 4.1 and Lemma 4.2 respectively. For the discrete MDP setting, we have that e O(MH6d3 SZ2n5Z+3) samples are required, which reduces to e O(MH6S4A2Z2) in the tabular setting. Unfortunately, we are only able to obtain systematic generalization up to an unavoidable sub-optimality term \u03f5\u03bb. This error term is related to the \u03bb-suf\ufb01ciency of the causal transition model (Assumption 2), and it accounts for the fact that PG cannot fully explain the transition dynamics of each M \u2208U, even when it is estimated exactly. This is inherent to the ambitious problem setting, and can be only overcome with additional interactions with the test MDP M. 4.2 Sample Complexity of Learning the Causal Structure of a Discrete MDP As a byproduct of the main result in Theorem 4.3, we can provide a sample complexity result for the problem of learning the causal structure G underlying a discrete MDP M with a generative model. We believe that this problem can be of independent interest, mainly in consideration of previous work on causal discovery of general stochastic processes (e.g., Wadhwa and Dong 2021), for which we re\ufb01ne known results to account for the structure of an MDP, which allows for a tighter analysis of the sample complexity. 
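The basic primitive behind the estimation of G is a pairwise test of dependence between a state-action feature X[z] and a next-state feature Y[j]. The sketch below draws transitions with a generative model and declares an edge whenever the empirical joint is far, in L1 distance, from the product of its marginals; the plug-in statistic, the threshold eps, and the sampler sample_transition are illustrative stand-ins for the (ε, δ)-independence tester invoked later in this section.

```python
import numpy as np

def empirical_dependence(samples_x, samples_y, n):
    """L1 distance between the empirical joint of (X[z], Y[j]) and the product of
    its marginals; large values indicate dependence."""
    joint = np.zeros((n, n))
    for xv, yv in zip(samples_x, samples_y):
        joint[xv, yv] += 1.0
    joint /= len(samples_x)
    product = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    return np.abs(joint - product).sum()

def estimate_dependency_graph(sample_transition, d_X, d_S, n, k, eps, rng):
    """Estimate the bipartite dependency graph of one environment with a generative
    model: draw k transitions (x, y) with x uniform, then test every feature pair.
    `sample_transition(x, rng)` is a hypothetical sampler returning y ~ P(.|x)."""
    xs = rng.integers(0, n, size=(k, d_X))
    ys = np.array([sample_transition(x, rng) for x in xs])
    return {(z, j) for z in range(d_X) for j in range(d_S)
            if empirical_dependence(xs[:, z], ys[:, j], n) >= eps}
```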
Instead of the exact dependency graph G, which can include dependencies that are too weak to be detected with a \fAlgorithm 2 MDP Causal Structure Estimation Input: sampling model P(Y |X), generative model P(X), batch parameter K draw (xk, yk)K k=1 iid \u223cP(Y |X)P(X) initialize b G = \u2205 for each pair of nodes Xz, Yj do compute the independence test I(Xz, Yj) if dependent add (Xz, Yj) to b G end for Output: causal dependency graph b G \ufb01nite number of samples, we only address the dependencies above a given threshold. De\ufb01nition 2. We call G\u03f5 \u2286G the \u03f5-dependency subgraph of G if it holds, for each pair (A, B) \u2208G distributed as PA,B, (A, B) \u2208G\u03f5 iff infQ\u2208{\u2206A\u00d7\u2206B} \u2225PA,B \u2212Q\u22251 \u2265\u03f5. Before presenting the result, we state the existence of a principled independence testing procedure. Lemma 4.5 (Diakonikolas et al. (2021)). There exists an (\u03f5, \u03b4)-independence tester I(A, B) for distributions PA,B on [n] \u00d7 [n], which returns yes if A, B are independent, no if infQ\u2208{\u2206A\u00d7\u2206B} \u2225PA,B \u2212Q\u22251 \u2265\u03f5, both with probability at least 1 \u2212\u03b4 and sample complexity O(n log(1/\u03b4)/\u03f52). We can now provide an upper bound to the number of samples required by a simple estimation procedure to return an (\u03f5, \u03b4)-estimate b G of the causal dependency graph G. Theorem 4.6. Let M a discrete MDP with causal structure G, let \u03b4 \u2208(0, 1), and let \u03f5 > 0. The Algorithm 2 returns a dependency graph b G such that Pr( b G \u0338= G\u03f5) \u2264\u03b4 with a sample complexity K = O \u0000n log(d2 SdA/\u03b4)/\u03f52\u0001 . Corollary 4.7. Let M a tabular MDP. The result of Theorem 4.6 reduces to K = O \u0000log(S2A/\u03b4)/\u03f52\u0001 . 4.3 Sample Complexity of Learning the Bayesian Network of a Discrete MDP We present as a standalone result an upper bound to the sample complexity of learning the parameters of a Bayesian network PG with a \ufb01xed structure G. Especially, we re\ufb01ne known results (e.g., Dasgupta 1997) by considering the speci\ufb01c structure G of an MDP. If the structure G is dense, the number of parameters of PG grows exponentially, making the estimation problem mostly intractable. Thus, we consider a Z-sparse G (Assumption 1), as in previous works (Dasgupta 1997). Then, we can provide a polynomial sample complexity for the problem of learning the Bayesian network PG of a an MDP M. Theorem 4.8. Let M a discrete MDP with causal structure G, let \u03b4 \u2208 (0, 1), and let \u03f5 > 0. The Algorithm 3 returns a Bayesian network b PG such that Pr(\u2225b PG \u2212 PG\u22251 \u2265 \u03f5) \u2264 \u03b4 with a sample complexity K = O \u0000d3 Sn3Z+1 log(dSnZ/\u03b4)/\u03f52\u0001 . Corollary 4.9. Let M a tabular MDP. The result of Theorem 4.8 reduces to K = O \u0000S222Z log(S2Z/\u03b4)/\u03f52\u0001 . Algorithm 3 MDP Bayesian Network Estimation Input: sampling model P(Y |X), dependency graph G, batch parameter K let K\u2032 = \u2308K/dSnZ\u2309 for j = 1, . . . , dS do let Zj the scopes (X[Zj], Y [j]) \u2286G initialize the counts N(X[Zj], Y [j]) = 0 for each value x \u2208[n]|Zj| do for k = 1, . . . 
, K\u2032 do draw y \u223cP(Y [j]|X[Zj] = x) increment N(X[Zj] = x, Y [j] = y) end for end for compute b Pj(Y [j]|X[Zj]) = N(X[Zj],Y [j]) K\u2032 end for let b PG(Y |X) = QdS j=1 b Pj(Y [j]|X[Zj]) Output: Bayesian network b PG 5 Numerical Validation We empirically validate the theoretical \ufb01ndings of this work by experimenting on a synthetic example where each environment is a person, and the MDP represents how a series of actions the person can take in\ufb02uences their weight (W) and academic performance (A). As actions we consider hours of physical training (P), hours of sleep (S), hours of study (St), amount of vegetables in the diet (D), and the amount of caffeine intake (C). The obvious use-case for such a model would be a tracking device that monitors how the actions of a person in\ufb02uence their weight and academic performance and provides personalized recommendations to reach the person\u2019s goals. While the physiological responses of different individuals can vary, there are some underlying mechanisms shared by all humans, and therefore deemed causal in our terminology. Examples of such causal links are the dependency of weight on the type of diet, and the dependency of academic performance on the number of hours of study. Other links, such as the dependency of weight on the amount of caffeine, are present in some individuals, but are generally not shared and therefore not causal. For simplicity, all variables are treated as discrete with values 0 (below average), 1 (average) or 2 (above average). See Appendix B for details on how transition models of different environments are generated. A class M of 3 environments is used to estimate the causal transition model. All experiments are repeated 10 times and report the average and standard deviation. Causal Structure Estimation We \ufb01rst empirically investigate the graph edit distance between estimated and ground-truth causal structures GED(G, b G) as a function of number of samples (K\u2032 in Algorithm 1). The causal structure is estimated by obtaining the causal graph for each training environment (using a series of independence tests), and taking the intersection of their edges. As expected, the distance converges to zero as we increase the number of samples, and we can recover the exact causal graph (Figure 4a). Causal Transition Model Estimation Figure 4b shows the L1-distance between the estimated and ground-truth \f0 1000 2000 3000 N. of samples 0 2 4 6 Graph-edit-distance (a) Causal Structure 0 20000 40000 60000 80000 N. of samples 0.06 0.08 0.10 0.12 L1-distance (b) Causal Transition Model 0 20000 40000 60000 80000 N. of samples 0.00 0.02 0.04 L1-distance (c) Value Function Figure 4: Estimation errors of key quantities as a function of the number of samples. causal transition model, as a function of the number of samples (K\u2032 + K\u2032\u2032 in Algorithm 1). As the samples grow, the L1-distance shrinks towards 0.05, which is due to the environments not fully respecting the evenness assumption. Value Function Estimation Finally, we investigate whether we can approximate the optimal value function for an unseen environment. From Figure 4c, we observe that our algorithm is able to approximate the optimal value function up to a small error with a reasonable number of samples. 6 Related Work Finally, we revise the relevant literature and discuss how it relates with our problem formulation and results. 
Causal Discovery and Bayesian Networks On a technical level, our work is related to previous efforts on the sample complexity of causal discovery (Wadhwa and Dong 2021) and Bayesian network estimation (Friedman and Yakhini 1996; Dasgupta 1997; Bhattacharyya, Canonne, and Yang 2022). None of these works consider the MDP setting. Instead, we account for the peculiar MDP structure to get sharper rates w.r.t. a blind application of previous results. Reward-Free RL Reward-free RL (Jin et al. 2020a) is akin to a special case of our systematic generalization framework in which the set of MDPs is a singleton (Wang et al. 2020; Zanette et al. 2020; Kaufmann et al. 2021; M\u00b4 enard et al. 2021; Zhang, Du, and Ji 2021; Qiu et al. 2021). It is worth comparing our sample complexity result to independent reward-free exploration for each MDP. Let |U| = U, the latter would require at least \u2126(UH3S2A/\u03f52) samples to obtain systematic generalization up to an \u03f5 threshold over a set of tabular MDPs U (Jin et al. 2020a). This compares favorably with our rate e O(MH6S4A2/\u03f52) whenever U is small, but leveraging the inner structure of U becomes crucial as U grows to in\ufb01nity, while M remains constant. Our approach pays this further generality with the additional error term \u03f5\u03bb, which is unavoidable. It is an interesting direction to see whether additional factors in S, A, H are also unavoidable. Hidden Structures in RL Previous works have considered learning an hidden structure of the MDP for sample ef\ufb01cient RL (Du et al. 2019; Misra et al. 2020a,b; Agarwal et al. 2020). Their focus is on learning latent representations of states assuming a linear structure in the MDP. This is orthogonal to our work, which instead targets the causal structure shared by in\ufb01nitely many MDPs, while assuming access to the state features. Other works (e.g., Jin et al. 2020b; Cai et al. 2020; Yin et al. 2022) study the impact of structural properties of the MDP assuming access to the features. Our structural assumption is strictly more general than the linear structures they consider, but their work could provide useful inspiration to extend our results beyond discrete settings. Model-Based RL Model-based RL (Sutton and Barto 2018) prescribes learning an approximate model of the transition dynamics to extract an optimal policy. Theoretical works (e.g., Jaksch, Ortner, and Auer 2010; Ayoub et al. 2020) generally focus on the estimation of the approximate value functions obtained through the learned model, rather than the estimation of the model itself. A notable exception is (Tarbouriech et al. 2020), which targets point-wise high probability guarantees on the model estimation as we do in Lemma 4.1, 4.2. However, they address the model estimation of a single MDP, instead of the shared transition dynamics of an in\ufb01nite set of MDPs that we target in this paper. Factored MDPs The factored MDP formalism (Kearns and Koller 1999) allows encoding transition dynamics that are the product of multiple independent factors. This is closely related to how we de\ufb01ne the causal transition model in (1), which can be seen as a factored MDP. 
Previous works have considered learning in factored MDPs, either assuming full knowledge of the factorization (Delgado, Sanner, and De Barros 2011; Xu and Tewari 2020; Talebi, Jonsson, and Maillard 2021; Tian, Qian, and Sra 2020), or by estimating its structure from data (Strehl, Diuk, and Littman 2007; Vigorito and Barto 2009; Chakraborty and Stone 2011; Osband and Van Roy 2014; Rosenberg and Mansour 2021). To the best of our knowledge, none of the existing works have considered the factored MDP framework in combination with a reward-free setting and systematic generalization, which bring unique challenges to the identi\ufb01cation of the underlying factorization and the estimation of the transition factors. Causal RL Previous works (Zhang et al. 2020; Tomar et al. 2021; Gasse et al. 2021; Feng et al. 2022) address model-based RL from a causal perspective. The motivations behind (Zhang et al. 2020) are especially similar to ours, but they have come to different structural assumptions, which lead to non-overlapping results. To the best of our knowledge, we are the \ufb01rst to prove a polynomial sample complexity for causal model-based RL in systematic generalization. Similarly to our paper, Feng et al. (2022) employ causal structure learning to build a factored representation of the MDP, but they tackle non-stationary changes in the environment instead of systematic generalization. Finally, Lu, Meisami, and Tewari (2021) show how to exploit a known causal representation for sample ef\ufb01cient RL, which can complement our work on how to learn such representation." + }, + { + "url": "http://arxiv.org/abs/2202.03060v2", + "title": "The Importance of Non-Markovianity in Maximum State Entropy Exploration", + "abstract": "In the maximum state entropy exploration framework, an agent interacts with a\nreward-free environment to learn a policy that maximizes the entropy of the\nexpected state visitations it is inducing. Hazan et al. (2019) noted that the\nclass of Markovian stochastic policies is sufficient for the maximum state\nentropy objective, and exploiting non-Markovianity is generally considered\npointless in this setting. In this paper, we argue that non-Markovianity is\ninstead paramount for maximum state entropy exploration in a finite-sample\nregime. Especially, we recast the objective to target the expected entropy of\nthe induced state visitations in a single trial. Then, we show that the class\nof non-Markovian deterministic policies is sufficient for the introduced\nobjective, while Markovian policies suffer non-zero regret in general. However,\nwe prove that the problem of finding an optimal non-Markovian policy is\nNP-hard. Despite this negative result, we discuss avenues to address the\nproblem in a tractable way and how non-Markovian exploration could benefit the\nsample efficiency of online reinforcement learning in future works.", + "authors": "Mirco Mutti, Riccardo De Santi, Marcello Restelli", + "published": "2022-02-07", + "updated": "2022-07-08", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Several recent works have addressed Maximum State Entropy (MSE) exploration (Hazan et al., 2019; Tarbouriech & Lazaric, 2019; Lee et al., 2019; Mutti & Restelli, 2020; Mutti et al., 2021; Zhang et al., 2021; Guo et al., 2021; Liu & Abbeel, 2021b;a; Seo et al., 2021; Yarats et al., 2021; Mutti et al., 2022; Nedergaard & Cook, 2022) as an objective for unsupevised Reinforcement Learning (RL) (Sutton & Barto, 2018). 
In this line of work, an agent interacts with a reward-free environment (Jin et al., 2020) in order to maximize an entropic measure of the state distribution induced by its behavior over the environment, effectively targeting a uniform exploration of the state space. Previous works motivated this MSE objective in two main directions. On the one hand, this learning procedure can be seen as a form of unsupervised pre-training of the base model (Laskin et al., 2021), which has been extremely successful in supervised learning (Erhan et al., 2009; 2010; Brown et al., 2020). In this view, an MSE policy can serve as an exploratory initialization for standard learning techniques, such as Q-learning (Watkins & Dayan, 1992) or policy gradient (Peters & Schaal, 2008), and this has been shown to benefit the sample efficiency of a variety of RL tasks that can be specified over the pre-training environment (e.g., Mutti et al., 2021; Liu & Abbeel, 2021b; Laskin et al., 2021). On the other hand, pursuing an MSE objective leads to an even coverage of the state space, which can be instrumental to address the sparse reward discovery problem (Tarbouriech et al., 2021). Especially, even when the fine-tuning is slow (Campos et al., 2021), the MSE policy might allow solving hard-exploration tasks that are out of reach of RL from scratch (Mutti et al., 2021; Liu & Abbeel, 2021b). As we find these premises fascinating, and of general interest to the RL community, we believe it is worth providing a theoretical reconsideration of the MSE problem. Specifically, we aim to study the minimal class of policies that is necessary to optimize a well-posed MSE objective, and the general complexity of the resulting learning problem. All of the existing works pursuing an MSE objective solely focus on optimizing Markovian exploration strategies, in which each decision is conditioned on the current state of the environment rather than the full history of the visited states. The resulting learning problem is known to admit provably efficient solutions in tabular domains (Hazan et al., 2019; Zhang et al., 2020). Moreover, this choice is common in RL, as it is well known that an optimal deterministic Markovian strategy maximizes the usual cumulative sum of rewards objective (Puterman, 2014). Similarly, Hazan et al. (2019, Lemma 3.3) note that the class of Markovian strategies is sufficient for the standard MSE objective. A carefully constructed Markovian strategy is able to induce the same state distribution as any history-based (non-Markovian) one by exploiting randomization. Crucially, this result holds not only for asymptotic state distributions, but also for state distributions that are marginalized over a finite horizon (Puterman, 2014). [Figure 1: Illustrative two-rooms domain. The agent starts in the middle; colored traces represent optimal strategies to explore the left and the right room.] Hence, there is little incentive to consider more complicated strategies, as they provide no benefit on the value of the entropy objective.
However, the intuition suggests that exploiting the history of the interactions is useful when the agent\u2019s goal is to uniformly explore the environment: If you know what you have visited already, you can take decisions accordingly. To this point, let us consider an illustrative example in which the agent \ufb01nds itself in the middle of a two-rooms domain (as depicted in Figure 1), having a budget of interactions that is just enough to visit every state within a single episode. It is easy to see that an optimal Markovian strategy for the MSE objective would randomize between going left and right in the initial position, and then would follow the optimal route within a room, \ufb01nally ending in the initial position again. An episode either results in visiting the left room twice, or the right room twice, or each room once, and all of this outcomes have the same probability. Thus, the agent might explore poorly when considering a single episode, but the exploration is uniform in the average of in\ufb01nite trials. Arguably, this is quite different from how a human being would tackle this problem, i.e., taking intentional decisions in the middle position to visit a room before the other. This strategy leads to uniform exploration of the environment in any trial, but it is inherently non-Markovian. Backed by this intuition, we argue that prior work does not recognize the importance of non-Markovianity in MSE exploration due to an hidden in\ufb01nite-samples assumption in the objective formulation, which is in sharp contrast with the objective function it is actually optimized by empirical methods, i.e., the state entropy computed over a \ufb01nite batch of interactions. In this paper, we introduce a new \ufb01nite-sample MSE objective that is akin to the practical formulation, as it targets the expected entropy of the state visitation frequency induced within an episode instead of the entropy of the expected state visitation frequency over in\ufb01nite samples. In this \ufb01nite-sample formulation non-Markovian strategies are crucial, and we believe they can bene\ufb01t a signi\ufb01cant range of relevant applications. For example, collecting task-speci\ufb01c samples might be costly in some real-world domains, and a pre-trained non-Markovian strategy is essential to guarantee quality exploration even in a single-trial setting. In another instance, one might aim to pre-train an exploration strategy for a class of multiple environments instead of a single one. A non-Markovian strategy could exploit the history of interactions to swiftly identify the structure of the environment, then employing the environment-speci\ufb01c optimal strategy thereafter. Unfortunately, learning a nonMarkovian strategy is in general much harder than a Markovian one, and we are able to show that it is NP-hard in this setting. Nonetheless, this paper aims to highlight the importance of non-Markovinaity to ful\ufb01ll the promises of maximum state entropy exploration, thereby motivating the development of tractable formulations of the problem as future work. The contributions are organized as follows. First, in Section 3, we report a known result (Puterman, 2014) to show that the class of Markovian strategies is suf\ufb01cient for any in\ufb01nite-samples MSE objective, including the entropy of the induced marginal state distributions in episodic settings. Then, in Section 4, we propose a novel \ufb01nite-sample MSE objective and a corresponding regret formulation. 
Especially, we prove that the class of non-Markovian strategies is suf\ufb01cient for the introduced objective, whereas the optimal Markovian strategy suffers a non-zero regret. However, in Section 5, we show that the problem of \ufb01nding an optimal non-Markovian strategy for the \ufb01nite-sample MSE objective is NP-hard in general. Despite the hardness result, we provide a numerical validation of the theory (Section 6), and we comment some potential options to address the problem in a tractable way (Section 7). In Appendix A, we discuss the related work in the MSE literature, while the missing proofs can be found in Appendix B. 2. Preliminaries In the following, we will denote with \u2206(X) the simplex of a space X, with [T] the set of integers {0, . . . , T \u22121}, and with v \u2295u a concatenation of the vectors v, u. Controlled Markov Process A Controlled Markov Process (CMP) is a tuple M := (S, A, P, \u00b5), where S is a \ufb01nite state space (|S| = S), A is a \ufb01nite action space (|A| = A), P : S \u00d7 A \u2192\u2206(S) is the transition model, such that P(s\u2032|a, s) denotes the probability of reaching state s\u2032 \u2208S when taking action a \u2208A in state s \u2208S, and \u00b5 \u2208\u2206(S) is the initial state distribution. Policies A policy \u03c0 de\ufb01nes the behavior of an agent interacting with an environment modelled by a CMP. It consists of a sequence of decision rules \u03c0 := (\u03c0t)\u221e t=0. Each of them is a map between histories h := (sj, aj)t j=0 \u2208Ht and actions \u03c0t : Ht \u2192\u2206(A), such that \u03c0t(a|h) de\ufb01nes \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration the conditional probability of taking action a \u2208A having experienced the history h \u2208Ht. We denote as H the space of the histories of arbitrary length. We denote as \u03a0 the set of all the policies, and as \u03a0D the set of deterministic policies \u03c0 = (\u03c0t)\u221e t=1 such that \u03c0t : Ht \u2192A. We further de\ufb01ne: \u2022 Non-Markovian (NM) policies \u03a0NM, where each \u03c0 \u2208 \u03a0NM collapses to a single time-invariant decision rule \u03c0 = (\u03c0, \u03c0, . . .) such that \u03c0 : H \u2192\u2206(A); \u2022 Markovian (M) policies \u03a0M, where each \u03c0 \u2208\u03a0M is de\ufb01ned through a sequence of Markovian decision rules \u03c0 = (\u03c0t)\u221e t=0 such that \u03c0t : S \u2192\u2206(A). A Markovian policy that collapses into a single time-invariant decision rule \u03c0 = (\u03c0, \u03c0, . . .) is called a stationary policy. State Distributions and Visitation Frequency A policy \u03c0 \u2208\u03a0 interacting with a CMP induces a t-step state distribution d\u03c0 t (s) := Pr(st = s|\u03c0) over S (Puterman, 2014). This distribution is described by the temporal relation d\u03c0 t (s) = R S R A d\u03c0 t\u22121(s\u2032, a\u2032)P(s|s\u2032, a\u2032) ds\u2032 da\u2032, where d\u03c0 t (\u00b7, \u00b7) \u2208\u2206(S \u00d7 A) is the t-step state-action distribution. We call the asymptotic \ufb01xed point of this temporal relation the stationary state distribution d\u03c0 \u221e(s) := limt\u2192\u221ed\u03c0 t (s), and we denote as d\u03c0 \u03b3(s) := (1 \u2212\u03b3) P\u221e t=0 \u03b3td\u03c0 t (s) its \u03b3discounted counterpart, where \u03b3 \u2208(0, 1) is the discount factor. A marginalization of the t-step state distribution over a \ufb01nite horizon T, i.e., d\u03c0 T (s) := 1 T P t\u2208[T ] d\u03c0 t (s), is called the marginal state distribution. 
The state visitation frequency dh(s) = 1 T P t\u2208[T ] 1(st = s|h) is a realization of the marginal state distribution, such that Eh\u223cp\u03c0 T \u0002 dh(s) \u0003 = d\u03c0 T (s), where the distribution over histories p\u03c0 T \u2208\u2206(HT ) is de\ufb01ned as p\u03c0 T (h) = \u00b5(s0) Q t\u2208[T \u22121] \u03c0(at|ht)P(st+1|at, st). Markov Decision Process A CMP M paired with a reward function R : S \u00d7 A \u2192R is called a Markov Decision Process (MDP) (Puterman, 2014) MR := M \u222aR. We denote with R(s, a) the expected immediate reward when taking action a \u2208A in s \u2208S, and with R(h) = P t\u2208[T ] R(st, at) the return over the horizon T. The performance of a policy \u03c0 over the MDP MR is de\ufb01ned as the average return JMR(\u03c0) = Eh\u223cp\u03c0 T [R(h)], and \u03c0\u2217 J \u2208 arg max\u03c0\u2208\u03a0 JMR(\u03c0) is called an optimal policy. For any MDP MR, there always exists a deterministic Markovian policy \u03c0 \u2208\u03a0D M that is optimal (Puterman, 2014). Extended MDP The problem of \ufb01nding an optimal nonMarkovian policy with history-length T in an MDP MR, i.e., \u03c0\u2217 NM \u2208arg max\u03c0\u2208\u03a0NM JMR(\u03c0), can be reformulated as the one of \ufb01nding an optimal Markovian policy \u03c0\u2217 M \u2208 arg max\u03c0\u2208\u03a0M J f MR T (\u03c0) in an extended MDP f MR T . The extended MDP is de\ufb01ned as f MR T := ( e S, e A, e P, e R, e \u00b5), in which e S \u2286H[T ] = H1 \u222a. . . \u222aHT , and e s := (e s0, . . . , e s\u22121) corresponds to a history in MR of length |e s|, e A = A, e P(e s\u2032|e s, e a) = P(s\u2032 = e s\u2032 \u22121|s = e s\u22121, a = e a), e R(e s, e a) = R(s = e s\u22121, a = e a), and e \u00b5(e s) = \u00b5(s = e s) for any e s \u2208e S of unit length. Partially Observable MDP A Partially Observable Markov Decision Process (POMDP) (Astrom, 1965; Kaelbling et al., 1998) is described by MR \u2126 := (S, A, P, R, \u00b5, \u2126, O), where S, A, P, R, \u00b5 are de\ufb01ned as in an MDP, \u2126is a \ufb01nite observation space, and O : S \u00d7 A \u2192 \u2206(\u2126) is the observation function, such that O(o|s\u2032, a) denotes the conditional probability of the observation o \u2208\u2126 when selecting action a \u2208A in state s \u2208S. Crucially, while interacting with a POMDP the agent cannot observe the state s \u2208S, but just the observation o \u2208\u2126. The performance of a policy \u03c0 is de\ufb01ned as in an MDP. 3. In\ufb01nite Samples: Non-Markovianity Does Not Matter Previous works pursuing maximum state entropy exploration of a CMP consider an objective of the kind E\u221e(\u03c0) := H \u0000d\u03c0(\u00b7) \u0001 = \u2212E s\u223cd\u03c0 \u0002 log d\u03c0(s) \u0003 , (1) where d\u03c0(\u00b7) is either a stationary state distribution (Mutti & Restelli, 2020), a discounted state distribution (Hazan et al., 2019; Tarbouriech & Lazaric, 2019), or a marginal state distribution (Lee et al., 2019; Mutti et al., 2021). While it is well-known (Puterman, 2014) that there exists an optimal deterministic policy \u03c0\u2217\u2208\u03a0D M for the common average return objective JMR, it is not pointless to wonder whether the objective in (1) requires a more powerful policy class than \u03a0M. Hazan et al. (2019, Lemma 3.3) con\ufb01rm that the set of (randomized) Markovian policies \u03a0M is indeed suf\ufb01cient for E\u221ede\ufb01ned over asymptotic (stationary or discounted) state distributions. 
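For a tabular CMP and a stationary Markovian policy, the objective in (1) instantiated with the marginal state distribution can be evaluated exactly by unrolling the temporal relation of Section 2. The sketch below does so on a hypothetical randomly generated CMP; the model, policy, and horizon are arbitrary placeholders.

```python
import numpy as np

def marginal_state_distribution(P, pi, mu, T):
    """d_T(s) = (1/T) * sum_{t in [T]} d_t(s) for a stationary Markovian policy.
    P: (S, A, S) transition model, pi: (S, A) policy, mu: (S,) initial distribution."""
    d_t = mu.copy()
    d_T = np.zeros_like(mu)
    for _ in range(T):
        d_T += d_t / T
        # d_{t+1}(s') = sum_{s,a} d_t(s) pi(a|s) P(s'|s, a)
        d_t = np.einsum('s,sa,sap->p', d_t, pi, P)
    return d_T

def entropy(d, eps=1e-12):
    d = d[d > eps]
    return -np.sum(d * np.log(d))

# Hypothetical 5-state, 2-action CMP, uniform policy.
rng = np.random.default_rng(0)
S, A, T = 5, 2, 10
P = rng.dirichlet(np.ones(S), size=(S, A))
mu = np.ones(S) / S
pi = np.ones((S, A)) / A
print(entropy(marginal_state_distribution(P, pi, mu, T)))  # objective (1) with d_T
```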
In the following theorem and corollary, we report a common MDP result (Puterman, 2014) to show that \u03a0M suf\ufb01ces for E\u221ede\ufb01ned over (nonasymptotic) marginal state distributions as well. Theorem 3.1. Let x \u2208{\u221e, \u03b3, T}, and let Dx NM = {d\u03c0 x(\u00b7) : \u03c0 \u2208\u03a0NM}, Dx M = {d\u03c0 x(\u00b7) : \u03c0 \u2208\u03a0M} the corresponding sets of state distributions over a CMP. We can prove that: (i) The sets of stationary state distributions are equivalent D\u221e NM \u2261D\u221e M ; (ii) The sets of discounted state distributions are equivalent D\u03b3 NM \u2261D\u03b3 M for any \u03b3; (iii) The sets of marginal state distributions are equivalent DT NM \u2261DT M for any T. Proof Sketch. For any non-Markovian policy \u03c0 \u2208\u03a0NM inducing distributions d\u03c0 t (\u00b7), d\u03c0 t (\u00b7, \u00b7) over the states and the \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration state-action pairs of the CMP, we can build a Markovian policy \u03c0\u2032 \u2208\u03a0M, \u03c0\u2032 = (\u03c0\u2032 t)\u221e t=0 through the construction \u03c0\u2032 t(a|s) = d\u03c0 t (s, a) \u000e d\u03c0 t (s), \u2200s \u2208S, \u2200a \u2208A. From (Puterman, 2014, Theorem 5.5.1) we know that d\u03c0 t (s) = d\u03c0\u2032 t (s) holds for any t \u22650 and \u2200s \u2208S. This implies that d\u03c0 \u221e(\u00b7) = d\u03c0\u2032 \u221e(\u00b7), d\u03c0 \u03b3(\u00b7) = d\u03c0\u2032 \u03b3 (\u00b7), d\u03c0 T (\u00b7) = d\u03c0\u2032 T (\u00b7), from which Dx NM \u2261Dx M follows. See Appendix B.1 for a detailed proof. From the equivalence of the sets of induced distributions, it is straightforward to derive the optimality of Markovian policies for objective (1). Corollary 3.2. For every CMP, there exists a Markovian policy \u03c0\u2217\u2208\u03a0M such that \u03c0\u2217\u2208arg max\u03c0\u2208\u03a0 E\u221e(\u03c0). As a consequence of Corollary 3.2, there is little incentive to consider non-Markovian policies when optimizing objective (1), since there is no clear advantage to make up for the additional complexity of the policy. This result might be unsurprising when considering asymptotic distributions, as one can expect a carefully constructed Markovian policy to be able to tie the distribution induced by a non-Markovian policy in the limit of the interaction steps. However, it is less evident that a similar property holds for the expectation of \ufb01nal-length interactions alike. Yet, we were able to show that a Markovian policy that properly exploits randomization can always achieve equivalent state distributions w.r.t. nonMarkovian counterparts. Note that state distributions are actually expected state visitation frequency, and the expectation practically implies an in\ufb01nite number of realizations. In this paper, we show that this underlying in\ufb01nite-sample regime is the reason why the bene\ufb01t of non-Markovianity, albeit backed up by intuition, does not matter. Instead, we propose a relevant \ufb01nite-sample entropy objective in which non-Markovianity is crucial. 4. Finite Samples: Non-Markovianity Matters In this section, we reformulate the typical maximum state entropy exploration objective of a CMP (1) to account for a \ufb01nite-sample regime. 
Crucially, we consider the expected entropy of the state visitation frequency rather than the entropy of the expected state visitation frequency, which results in E(\u03c0) := E h\u223cp\u03c0 T \u0002 H \u0000dh(\u00b7) \u0001\u0003 = \u2212 E h\u223cp\u03c0 T E s\u223cdh \u0002 log dh(s) \u0003 . (2) We note that E(\u03c0) \u2264E\u221e(\u03c0) for any \u03c0 \u2208\u03a0, which is trivial by the concavity of the entropy function and the Jensen\u2019s inequality. Whereas (2) is ultimately an expectation as it is (1), the entropy is not computed over the in\ufb01nite-sample state distribution d\u03c0 T (\u00b7) but its \ufb01nite-sample realization dh(\u00b7). Thus, to maximize E(\u03c0) we have to \ufb01nd a policy inducing high-entropy state visits within a single trajectory rather than high-entropy state visits over in\ufb01nitely many trajectories. Crucially, while Markovian policies are as powerful as any other policy class in terms of induced state distributions (Theorem 3.1), this is no longer true when looking at induced trajectory distributions p\u03c0 T . Indeed, we will show that non-Markovianity provides a superior policy class for objective (2). First, we de\ufb01ne a performance measure to formally assess this bene\ufb01t, which we call the regret-to-go.1 De\ufb01nition 4.1 (Expected Regret-to-go). Consider a policy \u03c0 \u2208\u03a0 interacting with a CMP over T \u2212t steps starting from the trajectory ht. We de\ufb01ne the expected regret-to-go RT \u2212t, i.e., from step t onwards, as RT \u2212t(\u03c0, ht) = H\u2217\u2212 E hT \u2212t\u223cp\u03c0 T \u2212t \u0002 H \u0000dht\u2295hT \u2212t(\u00b7) \u0001\u0003 , where H\u2217= max\u03c0\u2217\u2208\u03a0 Eh\u2217 T \u2212t\u223cp\u03c0\u2217 T \u2212t \u0002 H \u0000dht\u2295h\u2217 T \u2212t(\u00b7) \u0001\u0003 is the expected entropy achieved by an optimal policy \u03c0\u2217. The term RT (\u03c0) denotes the expected regret-to-go of a T-step trajectory hT starting from s \u223c\u00b5. The intuition behind the regret-to-go is quite simple. Suppose to have drawn a trajectory ht upon step t. If we take the subsequent action with the (possibly sub-optimal) policy \u03c0, by how much would we decrease (in expectation) the entropy of the state visits H(dhT (\u00b7)) w.r.t. an optimal policy \u03c0\u2217? In particular, we would like to know how limiting the policy \u03c0 to a speci\ufb01c policy class would affect the expected regret-to-go and the value of E(\u03c0) we could achieve. The following theorem and subsequent corollary, which constitute the main contribution of this paper, state that an optimal non-Markovian policy suffers zero expected regret-to-go in any case, whereas an optimal Markovian policy suffers non-zero expected regret-to-go in general. Theorem 4.2 (Non-Markovian Optimality). For every CMP M and trajectory ht \u2208H[T ], there exists a deterministic non-Markovian policy \u03c0NM \u2208\u03a0D NM that suffers zero regretto-go RT \u2212t(\u03c0NM, ht) = 0, whereas for any \u03c0M \u2208\u03a0M we have RT \u2212t(\u03c0M, ht) \u22650. Corollary 4.3 (Suf\ufb01cient Condition). For every CMP M and trajectory ht \u2208H[T ] for which any optimal Markovian policy \u03c0M \u2208\u03a0M is randomized (i.e., stochastic) in st, we have strictly positive regret-to-go RT \u2212t(\u03c0M, ht) > 0. 
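The finite-sample objective (2) can be estimated by plain Monte Carlo: roll out trajectories, compute the entropy of each trajectory's visitation frequency, and average. A minimal sketch on a hypothetical tabular CMP with a uniform Markovian policy follows; by the concavity of the entropy, the first returned value (an estimate of E(π)) never exceeds the second (the entropy of the averaged visitations, an estimate of E∞(π)).

```python
import numpy as np

def entropy(d, eps=1e-12):
    d = d[d > eps]
    return -np.sum(d * np.log(d))

def rollout_visitation(P, pi, mu, T, rng):
    """Empirical state visitation frequency d_h of a single T-step trajectory."""
    S = mu.shape[0]
    counts = np.zeros(S)
    s = rng.choice(S, p=mu)
    for _ in range(T):
        counts[s] += 1.0
        a = rng.choice(pi.shape[1], p=pi[s])
        s = rng.choice(S, p=P[s, a])
    return counts / T

def finite_sample_objective(P, pi, mu, T, n_trajectories, rng):
    """E(pi) = E_h[H(d_h)], estimated over n_trajectories rollouts (objective (2))."""
    freqs = [rollout_visitation(P, pi, mu, T, rng) for _ in range(n_trajectories)]
    expected_entropy = np.mean([entropy(d) for d in freqs])       # estimate of E(pi)
    entropy_of_expectation = entropy(np.mean(freqs, axis=0))      # estimate of E_infty(pi)
    return expected_entropy, entropy_of_expectation

rng = np.random.default_rng(1)
S, A, T = 5, 2, 10
P = rng.dirichlet(np.ones(S), size=(S, A))
mu = np.ones(S) / S
pi = np.ones((S, A)) / A
print(finite_sample_objective(P, pi, mu, T, 500, rng))  # first value <= second value
```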
The result of Theorem 4.2 highlights the importance of non-Markovianity for optimizing the \ufb01nite-sample MSE objective (2), as the class of Markovian policies is dominated by the class of non-Markovian policies. Most importantly, Corollary 4.3 shows that non-Markovian policies are strictly 1Note that the entropy function does not enjoy additivity, thus we cannot adopt the usual expected cumulative regret formulation in this setting. \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration better than Markovian policies in any CMP of practical interest, i.e., those in which any optimal Markovian policy has to be randomized (Hazan et al., 2019) in order to maximize (2). The intuition behind this result is that a Markovian policy would randomize to make up for the uncertainty over the history, whereas a non-Markovian policy does not suffer from this partial observability, and it can deterministically select an optimal action. Clearly, this partial observability is harmless when dealing with the standard RL objective, in which the reward is fully Markovian and does not depend on the history, but it is instead relevant in the peculiar MSE setting, in which the objective is a concave function of the state visitation frequency. In the following section, we report a sketch of the derivation underlying Theorem 4.2 and Corollary 4.3, while we refer to the Appendix B.2 for complete proofs. 4.1. Regret Analysis To the purpose of the regret analysis, we will consider the following assumption to ease the notation.2 Assumption 1 (Unique Optimal Action). For every CMP M and trajectory ht \u2208H[T ], there exists a unique optimal action a\u2217\u2208A w.r.t. the objective (2). First, we show that the class of deterministic non-Markovian policies is suf\ufb01cient for the minimization of the regret-to-go, and thus for the maximization of (2). Lemma 4.4. For every CMP M and trajectory ht \u2208H[T ], there exists a deterministic non-Markovian policy \u03c0NM \u2208 \u03a0D NM such that \u03c0NM \u2208arg max\u03c0\u2208\u03a0NM E(\u03c0), which suffers zero regret-to-go RT \u2212t(\u03c0NM, ht) = 0. Proof. The result RT \u2212t(\u03c0NM, ht) = 0 is straightforward by noting that the set of non-Markovian policies \u03a0NM with arbitrary history-length is as powerful as the general set of policies \u03a0. To show that there exists a deterministic \u03c0NM, we consider the extended MDP f MR T obtained from the CMP M as in Section 2, in which the extended reward function is e R(e s, e a) = H(de s(\u00b7)) for every e a \u2208e A, and every e s \u2208e S such that |e s| = T, and e R(e s, e a) = 0 otherwise. Since a Markovian policy e \u03c0M \u2208\u03a0D M on f MR T can be mapped to a non-Markovian policy \u03c0NM \u2208\u03a0D NM on M, and it is well-known (Puterman, 2014) that for any MDP there exists an optimal deterministic Markovian policy, we have that e \u03c0M \u2208arg max\u03c0\u2208\u03a0M J f MR T (\u03c0) implies \u03c0NM \u2208arg max\u03c0\u2208\u03a0NM E(\u03c0). 2Note that this assumption could be easily removed by partitioning the action space in ht as A(ht) = Aopt(ht) \u222a Asub\u2212opt(ht), such that Aopt(ht) are optimal actions and Asub\u2212opt(ht) are sub-optimal, and substituting any term \u03c0(a\u2217|ht) with P a\u2208Aopt(ht) \u03c0(a|ht) in the results. 
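The extended-MDP construction used in the proof of Lemma 4.4 can be made concrete by brute force on small instances: histories act as extended states, the entropy of the visitation frequency is granted as a terminal reward, and values are backed up over histories. The sketch below follows this recipe on a hypothetical tiny CMP; its cost grows as (SA)^T, anticipating the blow-up discussed in Section 5, and the returned quantity corresponds to max_π E(π) as stated by Lemma 4.4.

```python
import numpy as np

def entropy(counts):
    d = counts / counts.sum()
    d = d[d > 0]
    return -np.sum(d * np.log(d))

def optimal_nonmarkovian_value(P, mu, T):
    """Backward induction on the extended MDP of Lemma 4.4: extended states are
    histories of visited states, and the reward H(d_h) is granted at the horizon.
    Exponential in T, so only viable for tiny instances."""
    S, A, _ = P.shape

    def value(history):
        if len(history) == T:
            return entropy(np.bincount(history, minlength=S).astype(float))
        s = history[-1]
        # Deterministically pick the best action given the full history.
        return max(
            sum(P[s, a, s2] * value(history + (s2,)) for s2 in range(S))
            for a in range(A)
        )

    return sum(mu[s0] * value((s0,)) for s0 in range(S))

# Hypothetical 3-state, 2-action CMP with a short horizon.
rng = np.random.default_rng(2)
S, A, T = 3, 2, 4
P = rng.dirichlet(np.ones(S), size=(S, A))
mu = np.ones(S) / S
print(optimal_nonmarkovian_value(P, mu, T))
```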
Then, in order to prove that the class of non-Markovian policies is also necessary for regret minimization, it is worth showing that Markovian policies can instead rely on randomization to optimize objective (2). Lemma 4.5. Let \u03c0NM \u2208\u03a0D NM a non-Markovian policy such that \u03c0NM \u2208arg max\u03c0\u2208\u03a0 E(\u03c0) on a CMP M. For a \ufb01xed history ht \u2208Ht ending in state s, the variance of the event of an optimal Markovian policy \u03c0M \u2208 arg max\u03c0\u2208\u03a0M E(\u03c0) taking a\u2217= \u03c0NM(ht) in s is given by Var \u0002 B(\u03c0M(a\u2217|s, t)) \u0003 = Var hs\u223cp \u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hs)) \u0003\u0003 , where hs \u2208Ht is any history of length t such that the \ufb01nal state is s, i.e., hs := (ht\u22121 \u2208Ht\u22121) \u2295s, and B(x) is a Bernoulli with parameter x. Proof Sketch. We can prove the result through the Law of Total Variance (LoTV) (see Bertsekas & Tsitsiklis, 2002), which gives Var \u0002 B(\u03c0M(a\u2217|s, t)) \u0003 = E hs\u223cp \u03c0NM t \u0002 Var \u0002 B(\u03c0NM(a\u2217|hs)) \u0003\u0003 + Var hs\u223cp \u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hs)) \u0003\u0003 , \u2200s \u2208S. Then, exploiting the determinism of \u03c0NM (through Lemma 4.4), it is straightforward to see that Ehs\u223cp \u03c0NM t \u0002 Var \u0002 B(\u03c0NM(a\u2217|hs)) \u0003\u0003 = 0, which concludes the proof.3 Unsurprisingly, Lemma 4.5 shows that, whenever the optimal strategy for (2) (i.e., the non-Markovian \u03c0NM) requires to adapt its decision in a state s according to the history that led to it (hs), an optimal Markovian policy for the same objective (i.e., \u03c0M) must necessarily be randomized. This is crucial to prove the following result, which establishes lower and upper bounds RT \u2212t, RT \u2212t to the expected regret-to-go of any Markovian policy that optimizes (2). Lemma 4.6. Let \u03c0M be an optimal Markovian policy \u03c0M \u2208 arg max\u03c0\u2208\u03a0M E(\u03c0) on a CMP M. For any ht \u2208H[T ], it holds RT \u2212t(\u03c0M) \u2264RT \u2212t(\u03c0M) \u2264RT \u2212t(\u03c0M) such that RT \u2212t(\u03c0M) = H\u2217\u2212H\u2217 2 \u03c0M(a\u2217|st) Var hst\u223cp \u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hst)) \u0003\u0003 , RT \u2212t(\u03c0M) = H\u2217\u2212H\u2217 \u03c0M(a\u2217|st) Var hst\u223cp \u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hst)) \u0003\u0003 , where \u03c0NM \u2208arg max\u03c0\u2208\u03a0D NM E(\u03c0), and H\u2217, H\u2217 2 are given 3Note that the determinism of \u03c0NM does not also imply Varhs\u223cp\u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hs)) \u0003\u0003 = 0, as the optimal action a = \u03c0NM(hs) may vary for different histories, which results in the inner expectations E \u0002 B(\u03c0NM(a\u2217|hs)) \u0003 being either 1 or 0. \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration by H\u2217= min h\u2208HT \u2212t H(dht\u2295h(\u00b7)), H\u2217 2 = max h\u2208HT \u2212t\\H\u2217 T \u2212t H(dht\u2295h(\u00b7)) s.t. H\u2217 T \u2212t = arg max h\u2208HT \u2212t H(dht\u2295h(\u00b7)). Proof Sketch. The crucial idea to derive lower and upper bounds to the regret-to-go is to consider the impact of a sub-optimal action in the best-case and the worst-case CMP respectively (see Lemma B.2, B.1). 
This gives RT \u2212t(\u03c0M) \u2265H\u2217\u2212\u03c0M(a\u2217|st)H\u2217\u2212 \u00001 \u2212 \u03c0M(a\u2217|st) \u0001 H\u2217 2 and RT \u2212t(\u03c0M) \u2264H\u2217\u2212\u03c0M(a\u2217|st)H\u2217\u2212 \u00001 \u2212\u03c0M(a\u2217|st) \u0001 H\u2217. Then, with Lemma 4.5 we get Var \u0002 B(\u03c0M(a\u2217|st)) \u0003 = \u03c0M(a\u2217|st) \u00001 \u2212\u03c0M(a\u2217|st) \u0001 = Varhs\u223cp \u03c0NM t \u0002 E \u0002 B(\u03c0NM(a\u2217|hst)) \u0003\u0003 , which concludes the proof. Finally, the result in Theorem 4.2 is a direct consequence of Lemma 4.6. Note that the upper and lower bounds on the regret-to-go are strictly positive whenever \u03c0M(a\u2217|st) < 1, as it is stated in Corollary 4.3. 5. Complexity Analysis Having established the importance of non-Markovianity in dealing with MSE exploration in a \ufb01nite-sample regime, it is worth considering how hard it is to optimize the objective 2 within the class of non-Markovian policies. Especially, we aim at characterizing the complexity of the problem: \u03a80 := maximize \u03c0\u2208\u03a0NM E(\u03c0), de\ufb01ned over a CMP M. Before going into the details of the analysis, we provide a couple of useful de\ufb01nitions for the remainder of the section, whereas we leave to (Arora & Barak, 2009) an extended review of complexity theory. De\ufb01nition 5.1 (Many-to-one Reductions). We denote as A \u2264m B a many-to-one reduction from A to B. De\ufb01nition 5.2 (Polynomial Reductions). We denote as A \u2264p B a polynomial-time (Turing) reduction from A to B. Then, we recall that \u03a80 can be rewritten as the problem of \ufb01nding a reward-maximizing Markovian policy, i.e., e \u03c0M \u2208arg max\u03c0\u2208\u03a0M J f MR T (\u03c0), over a convenient extended MDP f MR T obtained from CMP M (see the proof of Lemma 4.4 for further details). We call this problem e \u03a80 and we note that e \u03a80 \u2208P, as the problem of \ufb01nding a reward-maximizing Markovian policy is well-known to be in P for any MDP (Papadimitriou & Tsitsiklis, 1987). However, the following lemma shows that it does not exist a many-to-one reduction from \u03a80 to e \u03a80. Lemma 5.3. A reduction \u03a80 \u2264m e \u03a80 does not exist. Proof. In general, coding any instance of \u03a80 in the representation required by e \u03a80, which is an extended MDP f MR T , holds exponential complexity w.r.t. the input of the initial instance of \u03a80, i.e., a CMP M. Indeed, to build the extended MDP f MR T from M, we need to de\ufb01ne the transition probabilities e P(e s\u2032|e s, e a) for every e s\u2032 \u2208e S, e a \u2208e A, e s \u2208e S. Whereas the action space remains unchanged e A = A, the extended state space e S has cardinality | e S| = ST in general, which grows exponentially in T. The latter result informally suggests that \u03a80 / \u2208P. Indeed, we can now prove the main theorem of this section, which shows that \u03a80 is NP-hard under the common assumption that P \u0338= NP. Theorem 5.4. \u03a80 is NP-hard. Proof Sketch. To prove the theorem, it is suf\ufb01cient to show that there exists a problem \u03a8c \u2208NP-hard so that \u03a8c \u2264p \u03a80. We show this by reducing 3SAT, which is a well-known NP-complete problem, to \u03a80. To derive the reduction we consider two intermediate problems, namely \u03a81 and \u03a82. Especially, we aim to show that the following chain of reductions holds \u03a80 \u2265m \u03a81 \u2265p \u03a82 \u2265p 3SAT. First, we de\ufb01ne \u03a81 and we prove that \u03a80 \u2265m \u03a81. 
Informally, \u03a81 is the problem of \ufb01nding a reward-maximizing Markovian policy \u03c0M \u2208\u03a0M w.r.t. the entropy objective (2) encoded through a reward function in a convenient POMDP f MR \u2126. We can build f MR \u2126from the CMP M similarly as the extended MDP f MR T (see Section 2 and the proof of Lemma 4.4 for details), except that the agent only access the observation space e \u2126instead of the extended state space e S. In particular, we de\ufb01ne e \u2126= S (note that S is the state space of the original CMP M), and e O(e o|e s) = e s\u22121. Then, the reduction \u03a80 \u2265m \u03a81 works as follows. We denote as I\u03a8i the set of possible instances of problem \u03a8i. We show that \u03a80 is harder than \u03a81 by de\ufb01ning the polynomial-time functions \u03c8 and \u03c6 such that any instance of \u03a81 can be rewritten through \u03c8 as an instance of \u03a80, and a solution \u03c0\u2217 NM \u2208\u03a0NM for \u03a80 can be converted through \u03c6 into a solution \u03c0\u2217 M \u2208\u03a0M for the original instance of \u03a81. The function \u03c8 sets S = e \u2126 and derives the transition model of M from the one of f MR \u2126, while \u03c6 converts the optimal solution of \u03a80 by computing \u03c0\u2217 M(a|o, t) = P ho\u2208Ho p\u03c0\u2217 NM T (ho)\u03c0\u2217 NM(a|ho), where Ho stands for the set of histories h \u2208Ht ending in the observation o \u2208\u2126. Thus, we have that \u03a80 \u2265m \u03a81 holds. We now de\ufb01ne \u03a82 as the policy existence problem w.r.t. the problem statement of \u03a81. Hence, \u03a82 is the problem of determining whether the value of a reward-maximizing \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration 0 2 1 (a) 3State \u03c0M \u03c0NM 0 0.5 1 E(\u03c0) (b) Average Entropy 0.4 0.6 0.8 1 0 50 100 Entropy(d\u03c4 (\u00b7)) N(\u03c4) \u03c0M \u03c0NM (c) Entropy Frequency 0 1 2 0.3 0.7 0.3 0.7 (d) River Swim \u03c0M \u03c0NM 0 0.5 1 E(\u03c0) (e) Average Entropy 0 1 2 0.25 0.5 d\u03c4 (s) \u03c0M \u03c0NM (f) State Visitation Frequency Figure 2. In (a, d), we illustrates the 3State and River Swim CMPs. Then, we report the average entropy induced by an optimal (stationary) Markovian policy \u03c0M and an optimal non-Markovian policy \u03c0NM in the 3State (T = 9) (b) and the River Swim (T = 10) (e). In (c) we report the entropy frequency in the 3State, in (f) the state visitation frequency in the River Swim. We provide 95% c.i. over 100 runs. Markovian policy \u03c0\u2217 M \u2208arg max\u03c0\u2208\u03a0M J f MR \u2126(\u03c0) is greater than 0. Since computing an optimal policy in POMDPs is in general harder than the relative policy existence problem (Lusena et al., 2001, Section 3), we have that \u03a81 \u2265p \u03a82. For the last reduction, i.e., \u03a82 \u2265p 3SAT, we extend the proof of Theorem 4.13 in (Mundhenk et al., 2000), which states that the policy existence problem for POMDPs is NP-complete. In particular, we show that this holds within the restricted class of POMDPs de\ufb01ned in \u03a81. Since the chain \u03a80 \u2265m \u03a81 \u2265p \u03a82 \u2265p 3SAT holds, we have that \u03a80 \u2265p 3SAT. Since 3SAT \u2208NP-complete, we can conclude that \u03a80 is NP-hard. Having established the hardness of the optimization of \u03a80, one could now question whether the problem \u03a80 is instead easy to verify (\u03a80 \u2208NP), from which we would conclude that \u03a80 \u2208NP-complete. 
Whereas we doubt that this problem is signi\ufb01cantly easier to verify than to optimize, the focus of this work is on its optimization version, and we thus leave as future work a \ufb01ner analysis to show that \u03a80 / \u2208NP. 6. Numerical Validation Despite the hardness result of Theorem 5.4, we provide a brief numerical validation around the potential of nonMarkovianity in MSE exploration. Crucially, the reported analysis is limited to simple domains and short time horizons, and it has to be intended as an illustration of the theoretical claims reported in previous sections. For the sake of simplicity, in this analysis we consider stationary policies for the Markovian set, though similar results can be obtained for time-variant strategies as well (in stochastic environments). Whereas a comprehensive evaluation of the practical bene\ufb01ts of non-Markovianity in MSE exploration is left as future work, we discuss in Section 7 why we believe that the development of scalable methods is not hopeless even in this challenging setting. In this section, we consider a 3State (S = 3, A = 2, T = 9), which is a simple abstraction of the two-rooms in Figure 1, and a River Swim (Strehl & Littman, 2008) (S = 3, A = 2, T = 10) that are depicted in Figure 2a, 2d respectively. Especially, we compare the expected entropy (2) achieved by an optimal non-Markovian policy \u03c0NM \u2208 arg max\u03c0\u2208\u03a0NM E(\u03c0), which is obtained by solving the extended MDP as described in the proof of Lemma 4.4, against an optimal Markovian policy \u03c0M \u2208arg max\u03c0\u2208\u03a0M E(\u03c0). In con\ufb01rmation of the result in Theorem 4.2, \u03c0M cannot match the performance of \u03c0NM (see Figure 2b, 2e). In 3State, an optimal strategy requires going left when arriving in state 0 from state 2 and vice versa. The policy \u03c0NM is able to do that, and it always realizes the optimal trajectory (Figure 2c). Instead, \u03c0M is uniform in 0 and it often runs into sub-optimal trajectories. In the River Swim, the main hurdle is to reach state 2 from the initial one. Whereas \u03c0M and \u03c0NM are equivalently good in doing so, as reported in Figure 2f, only the non-Markovian strategy is able to balance the visitations in the previous states when it eventually reaches 2. The difference is already noticeable with a short horizon and it would further increase with a longer T. 7. Discussion and Conclusion In the previous sections, we detailed the importance of nonMarkovianity when optimizing a \ufb01nite-sample MSE objective, but we also proved that the corresponding optimization problem is NP-hard in its general formulation. Despite the hardness result, we believe that it is not hopeless to learn exploration policies with some form of non-Markovianity, while still preserving an edge over Markovian strategies. \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration In the following paragraphs, we discuss potential avenues to derive practical methods for relevant relaxations to the general class of non-Markovian policies. Finite-Length Histories Throughout the paper, we considered non-Markovian policies that condition their decisions on histories of arbitrary length, i.e., \u03c0 : H \u2192\u2206(A). However, the complexity of optimizing such policies grows exponentially with the length of the history. 
To avoid this exponential blowup, one can de\ufb01ne a class of non-Markovian policies \u03c0 : HH \u2192\u2206(A) in which the decisions are conditioned on histories of a \ufb01nite length H > 1 that are obtained from a sliding window on the full history. The optimal policy within this class would still retain better regret guarantees than an optimal Markovian policy, but it would not achieve zero regret in general. With the length parameter H one can trade-off the learning complexity with the regret according to the structure of the domain. For instance, H = 2 would be suf\ufb01cient to achieve zero regret in the 3State domain, whereas in the River Swim domain any H < T would cause some positive regret. Compact Representations of the History Instead of setting a \ufb01nite length H, one can choose to perform function approximation on the full history to obtain a class of policies \u03c0 : f(H) \u2192\u2206(A), where f is a function that maps an history h to some compact representation. An interesting option is to use the notion of eligibility traces (Sutton & Barto, 2018) to encode the information of h in a vector of length S, which is updated as zt+1 \u2190\u03bbzt + 1st, where \u03bb \u2208(0, 1) is a discount factor, 1st is a vector with a unit entry at the index st, and z0 = 0. The discount factor \u03bb acts as a smoothed version of the length parameter H, and it can be dynamically adapted while learning. Indeed, this eligibility traces representation is particularly convenient for policy optimization (Deisenroth et al., 2013), in which we could optimize in turn a parametric policy over actions \u03c0\u03b8(\u00b7|z, \u03bb) and a parametric policy over the discount \u03c0\u03bd(\u03bb). To avoid a direct dependence on S, one can de\ufb01ne the vector z over a discretization of the state space. Deep Recurrent Policies Another noteworthy way to do function approximation on the history is to employ recurrent neural networks (Williams & Zipser, 1989; Hochreiter & Schmidhuber, 1997) to represent the non-Markovian policy. This kind of recurrent architecture is already popular in RL. In this paper we are providing the theoretical ground to motivate the use of deep recurrent policies to address maximum state entropy exploration. Non-Markovian Control with Tree Search In principle, one can get a realization of actions from the optimal non-Markovian policy without ever computing it, e.g., by employing a Monte-Carlo Tree Search (MCTS) (Kocsis & Szepesv\u00b4 ari, 2006) approach to select the next action to take. Given the current state st as a root, we can build the tree of trajectories from the root through repeated simulations of potential action sequences. With a suf\ufb01cient number of simulations and a suf\ufb01ciently deep tree, we are guaranteed to select the optimal action at the root. If the horizon is too long, we can still cut the tree at any depth and approximately evaluate a leaf node with the entropy induced by the path from the root to the leaf. The drawback of this procedure is that we require to access a simulator with reset (or a reliable estimate of the transition model) to actually build the tree. Having reported interesting directions to learn nonMarkovian exploration policies in practice, we would like to mention some relevant online RL settings that might bene\ufb01t from such exploration policies. We leave as future work a formal de\ufb01nition of the settings and an empirical study. 
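Before listing those settings, the eligibility-trace encoding sketched above can be made concrete. The snippet below is an illustrative sketch: the linear-softmax policy, the toy chain, and all parameter values are assumptions introduced for the example, not part of the paper.

```python
# The trace z is a smoothed, fixed-size summary of the visited states, updated as
# z_{t+1} <- lambda * z_t + 1_{s_t}, and a parametric policy conditions on z instead
# of the full history.
import numpy as np

S, A, lam = 5, 2, 0.8
theta = np.zeros((A, S))                 # parameters of an illustrative linear-softmax policy
rng = np.random.default_rng(0)

def update_trace(z, s, lam):
    one_hot = np.zeros(S)
    one_hot[s] = 1.0
    return lam * z + one_hot             # z_{t+1} <- lambda * z_t + 1_{s_t},  z_0 = 0

def policy(z, theta):
    logits = theta @ z
    p = np.exp(logits - logits.max())
    return p / p.sum()

z, s = np.zeros(S), 0
for t in range(10):
    z = update_trace(z, s, lam)
    a = rng.choice(A, p=policy(z, theta))
    s = (s + 1) % S if a == 1 else (s - 1) % S   # a toy chain used only to drive the loop
print("trace after 10 steps:", np.round(z, 3))
```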
Single-Trial RL In many relevant real-world scenarios, where data collection might be costly or non-episodic in nature, we cannot afford multiple trials to achieve the desired exploration of the environment. Non-Markovian exploration policies guarantee a good coverage of the environment in a single trial and they are particularly suitable for online learning processes. Learning in Latent MDPs In a latent MDP scenario (Hallak et al., 2015; Kwon et al., 2021) an agent interacts with an (unknown) environment drawn from a class of MDPs to solve an online RL task. A non-Markovian exploration policy pre-trained on the whole class could exploit the memory to perform a fast identi\ufb01cation of the speci\ufb01c context that has been drawn, quickly adapting to the optimal environmentspeci\ufb01c policy. In this paper we focus on the gap between non-Markovian and Markovian policies, which can be either stationary or time-variant. Future works might consider the role of stationarity (see also Akshay et al., 2013; Laroche et al., 2022), such as establishing under which conditions stationary strategies are suf\ufb01cient in this setting. Finally, here we focus on state distributions, which is most common in the MSE literature, but similar results could be extended to state-action distributions with minor modi\ufb01cations. To conclude, we believe that this work sheds some light on the, previously neglected, importance of non-Markovianity to address maximum state entropy exploration. Although it brings a negative result about the computational complexity of the problem, we believe it can provide inspiration for future empirical and theoretical contributions on the matter. Acknowledgements We would like to thank the reviewers of this paper for their feedbacks and useful advices. We also thank Romain Laroche and Remi Tachet des Combes for having signalled a technical error in a previous draft of the manuscript. \fThe Importance of Non-Markovianity in Maximum State Entropy Exploration" + }, + { + "url": "http://arxiv.org/abs/2202.01511v3", + "title": "Challenging Common Assumptions in Convex Reinforcement Learning", + "abstract": "The classic Reinforcement Learning (RL) formulation concerns the maximization\nof a scalar reward function. More recently, convex RL has been introduced to\nextend the RL formulation to all the objectives that are convex functions of\nthe state distribution induced by a policy. Notably, convex RL covers several\nrelevant applications that do not fall into the scalar formulation, including\nimitation learning, risk-averse RL, and pure exploration. In classic RL, it is\ncommon to optimize an infinite trials objective, which accounts for the state\ndistribution instead of the empirical state visitation frequencies, even though\nthe actual number of trajectories is always finite in practice. This is\ntheoretically sound since the infinite trials and finite trials objectives can\nbe proved to coincide and thus lead to the same optimal policy. In this paper,\nwe show that this hidden assumption does not hold in the convex RL setting. In\nparticular, we show that erroneously optimizing the infinite trials objective\nin place of the actual finite trials one, as it is usually done, can lead to a\nsignificant approximation error. 
Since the finite trials setting is the default\nin both simulated and real-world RL, we believe shedding light on this issue\nwill lead to better approaches and methodologies for convex RL, impacting\nrelevant research areas such as imitation learning, risk-averse RL, and pure\nexploration among others.", + "authors": "Mirco Mutti, Riccardo De Santi, Piersilvio De Bartolomeis, Marcello Restelli", + "published": "2022-02-03", + "updated": "2023-01-27", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Standard Reinforcement Learning (RL) [50] is concerned with sequential decision-making problems in which the utility can be expressed through a linear combination of scalar reward terms. The coef\ufb01cients of this linear combination are given by the state visitation distribution induced by the agent\u2019s policy. Thus, the objective function can be equivalently written as the inner product between the mentioned state distribution and a reward vector. However, not all the relevant objectives can be encoded through this linear representation [2]. Several works have thus extended the standard RL formulation to address non-linear objectives of practical interest. These include imitation learning [30, 42], or the problem of \ufb01nding a policy that minimizes the distance between the induced state distribution and the state distribution provided by experts\u2019 interactions [1, 17, 22, 29, 32, 33], risk-averse RL [20], in which the objective is sensitive to the tail behavior of the agent\u2019s policy [8, 15, 16, 43, 51, 52, 61], pure exploration [27], where the goal is to \ufb01nd a policy that maximizes the entropy of the induced state distribution [24, 33, 36, 38, 39, 40, 41, 47, 53, 55, 59], diverse skills \u2217Equal contribution 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2202.01511v3 [cs.LG] 27 Jan 2023 \fdiscovery [11, 19, 23, 25, 28, 35, 48, 58], constrained RL [3, 4, 6, 9, 37, 45, 56], and others. All this large body of work has been recently uni\ufb01ed into a unique framework, called convex RL [21, 57, 60], which admits as an objective any convex function of the state distribution induced by the agent\u2019s policy. The convex RL problem has been showed to be largely tractable either computationally, as it admits a dual formulation akin to standard RL [44], or statistically, as principled algorithms achieving sub-linear regret rates that are slightly worse than standard RL have been developed [57, 60]. Finite Trials In\ufb01nite Trials RL Convex RL r \u00b7 d\u03c0 F(d\u03c0) E dn\u223cp\u03c0 n \u001f r \u00b7 dn \u001e E dn\u223cp\u03c0 n \u001f F(dn) \u001e = \u0338= Figure 1: Summary of the main \ufb01nding of this paper: the equivalence between \ufb01nite and in\ufb01nite trials objectives does not hold for the convex RL formulation. However, we note that the usual convex RL formulation makes an implicit in\ufb01nite trials assumption which is rarely met in practice. Indeed, the objective is written as a function of the state distribution, which is an expectation over the empirical state distributions that are actually obtained by running the policy in a given episode. In practice, we always run our policy for a \ufb01nite number of episodes (or trials), which in general prevents the empirical state distribution from converging to its expectation. 
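As a small illustration of this gap between empirical and expected state distributions, the following simulation (a hypothetical two-state chain, not an example from the paper) measures how far the empirical distribution over n episodes stays from its expectation when n is small.

```python
# The empirical state distribution d_n over n episodes deviates from its expectation d^pi,
# and the deviation only vanishes as n grows.
import numpy as np

rng = np.random.default_rng(0)
T = 10                                   # episode horizon

def one_episode_visits():
    """Visitation frequencies of a 2-state chain that flips state with prob 0.3."""
    s, counts = 0, np.zeros(2)
    for _ in range(T):
        counts[s] += 1
        s = 1 - s if rng.random() < 0.3 else s
    return counts / T

d_pi = np.mean([one_episode_visits() for _ in range(50_000)], axis=0)   # proxy for the expectation
for n in (1, 10, 100, 1000):
    d_n = np.mean([one_episode_visits() for _ in range(n)], axis=0)
    print(f"n = {n:4d}   ||d_n - d_pi||_1 = {np.abs(d_n - d_pi).sum():.3f}")
```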
This has never been a problem in standard RL: due to the scalar objective, optimizing the policy over in\ufb01nite trials or \ufb01nite trials is equivalent, as it leads to the same optimal policy. Crucially, in this paper, we show that this property does not hold for the convex RL formulation: a policy optimized over in\ufb01nite trials can be signi\ufb01cantly sub-optimal when deployed over \ufb01nite trials (Figure 1). In light of this observation, we reformulate the convex RL problem from a \ufb01nite trials perspective, developing insights that can be used to partially rethink the way convex objectives have been previously addressed in RL, with potential ripple effects to research areas of signi\ufb01cant interest, such as imitation learning, risk-averse RL, pure exploration, and others. In this paper, we formalize the notion of a \ufb01nite trials RL problem, in which the objective is a function of the empirical state distribution induced by the agent\u2019s policy over n trials rather than its expectation over in\ufb01nite trials. As an illustrative example, consider a \ufb01nancial application, in which we aim to optimize a trading strategy. In the real world, we can only deploy the strategy over a single trial. Thus, we are only interested in the performance of the strategy in the real-world realization, rather than the performance of the strategy when averaging different realizations. Similar considerations apply to other relevant real-world applications, such as autonomous driving or treatment optimization in a healthcare domain. Following this intuition, we \ufb01rst de\ufb01ne the (linear) \ufb01nite trial RL formulation (Section 3), for which it is trivial to prove the equivalence with standard RL. In Section 4, we provide the \ufb01nite trial convex RL formulation, for which we prove an upper bound on the approximation error made by optimizing the in\ufb01nite trials as a proxy of the \ufb01nite trials objective. In light of this \ufb01nding, we challenge the hidden assumption that (1) convex RL can be equivalently addressed with an in\ufb01nite trials formulation, even if the setting is \ufb01nite-trial. We corroborate this result with an additional numerical analysis showing that the approximation bound is non-vacuous for relevant applications (Section 6). Finally, in Section 5 we include an in-depth analysis of the single trial convex RL, which suggests that other common assumptions in the convex RL literature, i.e., that (2) the problem is always computationally tractable and that (3) stationary policies are in general suf\ufb01cient, should be reconsidered as well. The proofs of the reported results can be found in the Appendix. 2 Preliminaries In this section, we report the notation and background useful to understand the paper. We denote with [T] a set of integers {1, . . . , T}, and with a lower case letter a a scalar or a vector, according to the context. For two vectors a, b \u2208Rd, we denote with a \u00b7 b = Pd i=1 aibi the inner product between a, b. 2.1 Probabilities and Percentiles Let X denote a measurable space, we will denote with \u2206(X) the probability simplex over X, and with p \u2208\u2206(X) a probability measure over X. For two probability measures p, q over X, we de\ufb01ne their \u2113p-distance as \u2225p \u2212q\u2225p := \u0000 P x\u2208X \f \fp(x) \u2212q(x) \f \fp\u00011/p, and their Kullback-Leibler (KL) divergence 2 \fas KL(p||q) := P x\u2208X p(x) log \u0000p(x)/q(x) \u0001 . 
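For concreteness, the two divergences just defined can be transcribed directly for discrete distributions given as probability vectors; the numerical values in the example below are arbitrary.

```python
import numpy as np

def lp_distance(p, q, p_norm=1):
    """(sum_x |p(x) - q(x)|^p)^(1/p), as defined in Section 2.1."""
    return np.sum(np.abs(p - q) ** p_norm) ** (1.0 / p_norm)

def kl_divergence(p, q):
    """sum_x p(x) log(p(x)/q(x)); terms with p(x) = 0 contribute 0 by convention."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

p, q = np.array([0.5, 0.3, 0.2]), np.array([0.4, 0.4, 0.2])
print(lp_distance(p, q), kl_divergence(p, q))   # approx 0.2 and 0.025
```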
Let X be a random variable distributed according to p, having a cumulative density function FX(x) = Pr(X \u2264x). We denote with E[X] its expected value, and its \u03b1-percentile is denoted as VaR\u03b1(X) = inf \b x | FX(x) \u2265\u03b1 \t = F \u22121 X (\u03b1), where \u03b1 \u2208(0, 1) is a con\ufb01dence level, and VaR\u03b1 stands for Value at Risk (VaR) at level \u03b1. We denote the expected value of X within its \u03b1-percentile as CVaR\u03b1(X) = E \u0002 X | X \u2264VaR\u03b1(X) \u0003 , where CVaR\u03b1 stands for Conditional Value at Risk (CVaR) at level \u03b1. 2.2 Markov Decision Processes A tabular Markov Decision Process (MDP) [44] is de\ufb01ned as M := (S, A, P, T, \u00b5, r), where S is a state space of size S, A is an action space of size A, P is a Markovian transition model P : S \u00d7A \u2192\u2206(S), such that P(s\u2032|s, a) denotes the conditional probability of the next state s\u2032 given the current state s and action a, T is the episode horizon, \u00b5 is an initial state distribution \u00b5 : \u2206(S), and r is a scalar reward function r : S \u2192R, such that r(s) is the reward collected in the state s. In the typical interaction episode, an agent \ufb01rst observes the initial state s0 \u223c\u00b5 of the MDP. Then, the agent select an action a0, so that the MDP transitions to the next state s1 \u223cP(\u00b7|s0, a0), and the agent collects the reward r(s1). Having observed s1, the agent then selects an action a1 triggering a subsequent transition to s2 \u223cP(\u00b7|s1, a1). This process carries on repeatedly until the episode ends. A policy \u03c0 de\ufb01nes the behavior of an agent interacting with an MDP, i.e., the strategy for which an action is selected at any step of the episode. It consists of a sequence of decision rules (\u03c0t)\u221e t=0 that maps the current trajectory2 ht = (si, ai)t\u22121 i=0 \u2208Ht with a distribution over actions \u03c0t : Ht \u2192\u2206(A), where Ht denotes the set of trajectories of length t. A non-stationary policy is a sequence of decision rules \u03c0t : S \u2192\u2206(A). A stationary (Markovian) policy is a time-consistent decision rule \u03c0 : S \u2192\u2206(A), such that \u03c0(a|s) denotes the conditional probability of taking action a in state s. A trajectory h, obtained from an interaction episode, induces an empirical distribution d over the states of the MDP M, such that d(s) = 1 |h| P st\u2208h 1(st = s). We denote with p\u03c0 the probability of drawing d by following the policy \u03c0. For n \u2208N, we denote with dn the empirical distribution dn(s) = 1 n Pn i=1 di(s), and with p\u03c0 n the probability of drawing dn by following the policy \u03c0 for n episodes. Finally, we call the expectation d\u03c0 = Ed\u223cp\u03c0[d] the state distribution induced by \u03c0. 3 Reinforcement Learning in Finite Trials In the standard RL formulation [50], an agent aims to learn an optimal policy by interacting with an unknown MDP M. An optimal policy is a decision strategy that maximizes the expected sum of rewards collected during an episode. Especially, we can represent the value of a policy \u03c0 through the value function V \u03c0 t (s) := E\u03c0 \u0002 PT t\u2032=t r(st\u2032) \f \f st = s \u0003 . The value function allows us to write the RL objective as max\u03c0\u2208\u03a0 Es1\u223c\u00b5[V \u03c0 1 (s1)], where \u03a0 is the set of all the stationary policies. 
Equivalently, we can rewrite the RL objective into its dual formulation [44], i.e., RL max \u03c0\u2208\u03a0 \u0010 r \u00b7 d\u03c0\u0011 =: J\u221e(\u03c0) (1) where we denote with r \u2208RS a reward vector, and with d\u03c0 the state distribution induced by \u03c0. We call the problem (1) the in\ufb01nite trials RL formulation. Indeed, the objective J\u221e(\u03c0) considers the sum of the rewards collected during an episode, i.e., r \u00b7 d\u03c0, that we can achieve on the average of an in\ufb01nite number of episodes drawn with \u03c0. This is due to the state distribution d\u03c0 being an expectation of empirical distributions d\u03c0 = Ed\u223cp\u03c0[d]. However, in practice, we can never draw in\ufb01nitely many episodes following a policy \u03c0. Instead, we draw a small batch of episodes dn \u223cp\u03c0 n. Thus, we can instead conceive a \ufb01nite trials RL formulation that is closer to what is optimized in practice. 2We will call a sequence of states and actions a trajectory or a history indifferently. 3 \fTable 1: Various convex RL objectives and applications. The last column states the equivalence between in\ufb01nite trials and \ufb01nite trials settings, as derived in Proposition 1 (Appendix). OBJECTIVE F APPLICATION INFINITE \u2261FINITE r \u00b7 d r \u2208RS, d \u2208\u2206(S) RL \u0013 \u2225d \u2212dE\u2225p p KL(d||dE) d, dE \u2208\u2206(S) IMITATION LEARNING \u0017 \u2212d \u00b7 log (d) d \u2208\u2206(S) PURE EXPLORATION \u0017 CVaR\u03b1[r \u00b7 d] r \u00b7 d \u2212Var[r \u00b7 d] r \u2208RS, d \u2208\u2206(S) RISK-AVERSE RL \u0017 r \u00b7 d, S.T. \u03bb \u00b7 d \u2264c r, \u03bb \u2208RS, c \u2208R, d \u2208\u2206(S) LINEARLY CONSTRAINED RL \u0013 \u2212Ez KL (dz|| Ek dk) z \u2208Rd, dz, dk \u2208\u2206(S) DIVERSE SKILL DISCOVERY \u0017 Finite Trials RL max \u03c0\u2208\u03a0 \u0010 E dn\u223cp\u03c0 n \u0002 r \u00b7 dn \u0003\u0011 =: Jn(\u03c0) (2) One could then wonder whether optimizing the \ufb01nite trials objective (2) leads to results that differ from the in\ufb01nite trials one (1). To this point, it is straightforward to see that the two objective functions are actually equivalent Jn(\u03c0) = E dn\u223cp\u03c0 n \u0002 r \u00b7 dn \u0003 = r \u00b7 E dn\u223cp\u03c0 n \u0002 dn \u0003 = r \u00b7 d\u03c0 = J\u221e(\u03c0), since r is a constant vector and the expectation is a linear operator. It follows that the in\ufb01nite trials and the \ufb01nite trials RL formulations admit the same optimal policies. Hence, one can enjoy both the computational tractability of the in\ufb01nite trials formulation and, at the same time, optimize the objective function that is employed in practice. In the next section, we will show that a similar result does not hold true for the convex RL formulation. 4 Convex Reinforcement Learning in Finite Trials Even though the RL formulation covers a wide range of sequential decision-making problems, several relevant applications cannot be expressed by means of the inner product between a linear reward vector r and a state distribution d\u03c0 [2, 49]. These include imitation learning, pure exploration, constrained problems, and risk-sensitive objectives, among others. Recently, a convex RL formulation [21, 57, 60] has been proposed to unify these applications in a unique general framework, which is Convex RL max \u03c0\u2208\u03a0 \u0010 F(d\u03c0) \u0011 =: \u03b6\u221e(\u03c0) (3) where F : \u2206(S) \u2192R is a function3 of the state distribution d\u03c0. 
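A short Monte Carlo check makes the contrast concrete: with a linear objective, averaging the return across episodes and evaluating the reward on the averaged visitation give the same number, whereas for a nonlinear F such as the entropy (which is concave, so Jensen's inequality runs in the opposite direction) the two quantities can differ sharply. The two-state behaviour below is a hypothetical toy, not an experiment from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_visits():
    """Toy behaviour: each episode commits to a single state (0 or 1) for all steps."""
    d = np.zeros(2)
    d[rng.integers(0, 2)] = 1.0
    return d

def entropy(d):
    nz = d[d > 0]
    return -np.sum(nz * np.log(nz))

r = np.array([0.0, 1.0])
batch = np.array([episode_visits() for _ in range(10_000)])

# Linear objective: E[r . d_n] equals r . E[d_n] (Section 3).
print(np.mean(batch @ r), r @ batch.mean(axis=0))
# Entropy objective: F(E[d]) = log 2, but E[F(d)] = 0 for this behaviour.
print(entropy(batch.mean(axis=0)), np.mean([entropy(d) for d in batch]))
```

Here the value computed on the average visitation is log 2, while every single realization has zero entropy, which is exactly the kind of mismatch quantified in the next section.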
In Table 1, we recap some of the most relevant problems that fall under the convex RL formulation, along with their speci\ufb01c F function. As it happens for linear RL, in any practical simulated or real-world scenario, we can only draw \ufb01nite number of episodes with a policy \u03c0. From these episodes, we obtain an empirical distribution dn \u223cp\u03c0 n rather than the actual state distribution d\u03c0, where n is the number of episodes. This can cause a mismatch from the objective that is typically considered in convex RL [57, 60], and what can be optimized in practice. To overcome this mismatch, we de\ufb01ne a \ufb01nite trials formulation of the convex RL objective, as we did in the previous section for the linear RL formulation. Finite Trials Convex RL max \u03c0\u2208\u03a0 \u0010 E dn\u223cp\u03c0 n \u0002 F(dn) \u0003\u0011 =: \u03b6n(\u03c0) (4) 3In this context, we use the term convex to distinguish it from the standard linear RL objective. However, in the following we will consider functions F that are either convex, concave or even non-convex. In general, problem (3) takes the form of a max problem for concave F, or a min problem for convex F. 4 \fComparing objectives (3) and (4), one can notice that both of them include an expectation over the episodes, being d\u03c0 = Ed\u223cp\u03c0[d]. Especially, we can write \u03b6\u221e(\u03c0) = F(d\u03c0) = F( E dn\u223cp\u03c0 n [dn]) \u2264 E dn\u223cp\u03c0 n [F(dn)] = \u03b6n(\u03c0) through the Jensen\u2019s inequality. As a consequence, optimizing the in\ufb01nite trials objective \u03b6\u221e(\u03c0) does not necessarily lead to an optimal behavior for the \ufb01nite trials objective \u03b6n(\u03c0). From a mathematical perspective, this is due to the fact that the empirical distributions dn induced by the policy \u03c0 are averaged by the expectation d\u03c0 before computing the F function into objective (3), thus losing a measure of the performance F for each batch of episodes, which we instead keep in the objective (4). 4.1 Approximating the Finite Trials Objective with In\ufb01nite Trials Despite the evident mismatch between the \ufb01nite trials and the in\ufb01nite trials formulation of the convex RL problem, most existing works consider (3) as the standard objective, even if only a \ufb01nite number of episodes can be drawn in practice. Thus, it is worth investigating how much we can lose by approximating a \ufb01nite trials objective with an in\ufb01nite trials one. First, we report a useful assumption on the structure of the function F. Assumption 4.1 (Lipschitz). A function F : X \u2192R is Lipschitz-continuous if it holds for some constant L \f \fF(x) \u2212F(y) \f \f \u2264L \r \rx \u2212y \r \r 1, \u2200(x, y) \u2208X 2. Then, we provide an upper bound on the approximation error in the following theorem. Theorem 4.1 (Approximation Error). Let n \u2208N be a number of trials, let \u03b4 \u2208(0, 1] be a con\ufb01dence level, let \u03c0\u2020 \u2208arg max\u03c0\u2208\u03a0 \u03b6n(\u03c0) and \u03c0\u22c6\u2208arg max\u03c0\u2208\u03a0 \u03b6\u221e(\u03c0). Then, it holds with probability at least 1 \u2212\u03b4 err := \f \f\u03b6n(\u03c0\u2020) \u2212\u03b6n(\u03c0\u22c6) \f \f \u22644LT r 2S log(4T/\u03b4) n The previous result establishes an approximation error rate err = O(LT p S/n) that is polynomial in the number of episodes n. 
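To get a feel for the rate in Theorem 4.1, one can plug in a hypothetical instance (the constants below, L = 1, T = 50, S = 20, delta = 0.05, are arbitrary) and tabulate the bound as n grows.

```python
import numpy as np

L, T, S, delta = 1.0, 50, 20, 0.05
for n in (10, 100, 1_000, 10_000, 100_000):
    bound = 4 * L * T * np.sqrt(2 * S * np.log(4 * T / delta) / n)
    print(f"n = {n:6d}   err <= {bound:8.2f}")
```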
Unsurprisingly, the guarantees over the approximation error scales with O(1/\u221an), as one can expect the empirical distribution dn to concentrate around its expected value for large n [54]. This implies that approximating the \ufb01nite trials objective \u03b6n(\u03c0) with the in\ufb01nite trials \u03b6\u221e(\u03c0) can be particularly harmful in those settings in which n is necessarily small. As an example, consider training a robot through a simulator and deploying the obtained policy in the real world, where the performance measures are often based on a single episode (n = 1). The performance that we experience from the deployment can be much lower than the expected \u03b6\u221e(\u03c0), which might result in undesirable or unsafe behaviors. However, Theorem 4.1 only reports an instance-agnostic upper bound, and it does not necessarily imply that there would be a signi\ufb01cant approximation error in a speci\ufb01c instance, i.e., a speci\ufb01c pairing of an MDP M and a function F. Nevertheless, in this paper we argue that the upper bound of the approximation error is not vacuous in several relevant applications, and we provide an illustrative numerical corroboration of this claim in Section 6. Challenged assumption 1. The convex RL problem can be equivalently addressed with an in\ufb01nite trials formulation. Finally, in Figure 2 we report a visual representation4 of the approximation error de\ufb01ned in Theorem 4.1. Notice that the \ufb01nite trials objective \u03b6n converges uniformly to the in\ufb01nite trials objective \u03b6\u221e as a trivial consequence of Theorem 4.1. This is particularly interesting as it results in \u03c0\u2020 converging to \u03c0\u22c6in the limit of large n as shown Figure 2. 5 In-Depth Analysis of Single Trial Convex Reinforcement Learning Setting Having established a signi\ufb01cant mismatch between the in\ufb01nite trials convex RL setting that is usually considered in previous works, i.e., \u03b6\u221e(\u03c0), and the \ufb01nite trials formulation that is actually targeted in 4Note that it is not possible to represent the objective functions in two dimensions in general. Nevertheless, we provide an abstract one-dimensional representation of the policy space to bring the intuition. 5 \f\u03a0 \u03b6\u221e \u03b6n \u03c0\u2217 \u03c0\u2020 \u03a0 \u03b6\u221e \u03b6n \u03c0\u2217 \u03c0\u2020 Figure 2: The two illustrations report an abstract visualization of \u03b6n and \u03b6\u221efor small values of n (left) and large values of n (right) respectively. The green bar visualize the distance \r \rdn \u2212d\u03c0\u22c6\r \r 1, in which dn \u223cp\u03c0\u2020 n . The blue bar visualize the distance \f \f\u03b6n(\u03c0\u2020) \u2212\u03b6\u221e(\u03c0\u22c6) \f \f. The orange bar visualize the approximation error, i.e., the distance \f \f\u03b6n(\u03c0\u2020) \u2212\u03b6n(\u03c0\u22c6) \f \f. practice, i.e., \u03b6n(\u03c0), it is now worth taking a closer look at the \ufb01nite trials optimization problem (4). Indeed, to avoid the approximation error that can occur by optimizing (4) through the in\ufb01nite trials formulation (Theorem 4.1), one could instead directly address the optimization of (4). Especially, how does the \ufb01nite trials convex RL problem compare to its in\ufb01nite trials formulation and the linear RL problem? What kind of policies do we need to optimize the \ufb01nite trials objective? Is the underlying learning process statistically harder than in\ufb01nite trials convex RL? 
In this section, we investigate the answers to these relevant questions. To this purpose, we will focus on a single trial setting, i.e., \u03b6n(\u03c0) with n = 1, which allows for a clearer analysis, while analogous considerations should extend to a general number of trials n > 1. Single Trial Convex RL max \u03c0\u2208\u03a0 \u0010 E d\u223cp\u03c0 \u0002 F(d) \u0003\u0011 =: \u03b61(\u03c0) (5) Taking inspiration from [57], we can cast the problem (5) de\ufb01ned over an MDP M into a convex MDP CM := (S, A, P, T, \u00b5, F), where S, A, P, T, \u00b5 are de\ufb01ned as in a standard MDP (see Section 2), and F : \u2206(S) \u2192R is a convex function that de\ufb01nes the objective \u03b61(\u03c0). Is solving a convex MDP CM signi\ufb01cantly harder than solving an MDP M? 5.1 Extended MDP Formulation of the Single Trial Convex RL Setting We can show that any \ufb01nite-horizon convex MDP CM can be actually translated into an equivalent MDP M\u2113= (S\u2113, A\u2113, P\u2113, \u00b5\u2113, r\u2113), which we call an extended MDP. The main idea is to temporallyextend CM so that each state contains the information of the full trajectory leading to it, so that the convex objective can be cast into a linear reward. To do this, we de\ufb01ne the extended state space S\u2113to be the set of all the possible histories up to length T, so that s\u2113\u2208S\u2113now represents a history. Then, we can keep A\u2113, P\u2113, \u00b5\u2113equivalent to A, P, \u00b5 of the original CM, where for the extended transition model P\u2113(s\u2032 \u2113|s\u2113, a) we solely consider the last state in the history s\u2113to de\ufb01ne the conditional probability to the next history s\u2032 \u2113. Finally, we just need to de\ufb01ne a scalar reward function r\u2113: S\u2113\u2192R such that r\u2113(s\u2113) = F(ds\u2113) for all the histories s\u2113of length T and r\u2113(s\u2113) = 0 otherwise, where we denoted with ds\u2113the empirical state distribution induced by s\u2113. Notably, the problem of \ufb01nding an optimal policy for the extended MDP M\u2113, i.e., \u03c0\u2217\u2208 arg max\u03c0\u2208\u03a0 r\u2113\u00b7 d\u03c0, is equivalent to solve the problem (5). Indeed, we have r\u2113\u00b7 d\u03c0 = X s\u2113\u2208S\u2113 r\u2113(s\u2113)d\u03c0(s\u2113) = X s\u2113\u2208S\u2113 F(ds\u2113)1(|s\u2113| = T)p\u03c0(ds\u2113) = E ds\u2113\u223cp\u03c0[F(ds\u2113)]. Whereas M\u2113can be solved with classical MDP methods [44], the size of the policy \u03c0 : S\u2113\u2192\u2206(A\u2113) to be learned does scale with the size of M\u2113, which grows exponentially in the episode horizon as we have |S\u2113| > ST . Thus, the extended MDP formulation and the resulting insight cast some doubts on the notion that the convex RL is not signi\ufb01cantly harder than standard RL [57, 60]. 6 \fChallenged assumption 2. Convex RL is only slightly harder than the standard RL formulation. 5.2 Partially Observable MDP Formulation of the Single Trial Convex RL Setting Instead of temporally extending the convex MDP CM as in the previous section, which causes the policy space to grow exponentially with the episode horizon T, we can alternatively formulate CM as a Partially Observable MDP (POMDP) [5, 31] PM = (S\u2113, A\u2113, P\u2113, \u00b5\u2113, r\u2113, \u2126, O), in which \u2126denotes an observation space, and O : S\u2113\u2192\u2206(\u2126) is an observation function. 
The process to build PM is rather similar to the one we employed for the extended MDP M\u2113, and the components S\u2113, A\u2113, P\u2113, \u00b5\u2113, r\u2113remain indeed unchanged. However, in a POMDP the agent does not directly access a state s\u2113\u2208S\u2113, but just a partial observation o \u2208\u2126that is given by the observation function O. Here O(s\u2113) = o is a deterministic function such that the given observation o is the last state in the history s\u2113. Since the agent only observes o, a stationary policy can be de\ufb01ned as a function \u03c0 : \u2126\u2192\u2206(A), for which the size depends on the number of states S of the convex MDP CM, being \u2126= S. However, it is well known [31] that history-dependent policies should be considered for the problem of optimizing a POMDP. This is in sharp contrast with the current convex MDP literature, which only considers stationary policies due to the in\ufb01nite trials formulation [60]. Challenged assumption 3. The set of stationary randomized policies is suf\ufb01cient for convex RL. 5.3 Online Learning in Single Trial Convex RL Let us assume to have access to a planning oracle that returns an optimal policy \u03c0\u2217for a given CM, so that we can sidestep the concerns on the computational feasibility reported in previous sections. It is worth investigating the complexity of learning \u03c0\u2217from online interactions with an unknown CM. A typical measure of this complexity is the online learning regret R(N), which is de\ufb01ned as R(N) := N X t=1 V \u2217\u2212V (t), where N is the number of learning episodes, V \u2217= V \u03c0\u2217 1 (s1) is the value of the optimal policy, V (t) = V \u03c0t is the value of the policy \u03c0t deployed at the episode t. We now aim to assess whether there exists a principled algorithm that can achieve a sub-linear regret R(N) in the worst case. To this purpose, we can cast our learning problem in the Once-Per-Episode (OPE) RL formulation [12]. In the latter setting, the agent interacts with the MDP for T steps, receiving a 0/1 feedback at the end of the episode, where the feedback is computed according to a logistic model that is function of the history. To translate our objective \u03b61(\u03c0) = Ed\u223cp\u03c0[F(d)] into the OPE framework [12], we have to encode F into a linear representation. With the following assumption, we state the existence of such representation. Assumption 5.1 (Linear Realizability). The function F is linearly-realizable if it holds F(d) = w\u22a4 \u2217\u03c6(d), where w\u2217\u2208Rdw is a vector of parameters such that \u2225w\u2217\u22252 \u2264B for some known B > 0, and \u03c6(d) = (\u03c6j(d))dw j=1 is a known vector of basis functions such that \u2225\u03c6(d)\u22252 \u22641, \u2200d \u2208\u2206(S). With the Assumption 5.1 and other minor changes that are detailed in the Appendix, we can invoke the analysis of OPE-UCBVI in [12] to provide an upper bound to the regret R(N) in our setting. Theorem 5.1 (Regret). For any con\ufb01dence \u03b4 \u2208(0, 1] and unknown convex MDP CM, the regret of the OPE-UCBVI algorithm is upper bounded as R(N) \u2264O \u0010h d7/2 w B3/2T 2SA1/2i\u221a N \u0011 with probability 1 \u2212\u03b4. The latter result states that the problem of learning an optimal policy in a unknown convex MDP is at least statistically ef\ufb01cient assuming linear realizability and the access to a planning oracle. 
Those are fairly strong assumptions, but principled approximate solvers may be designed to overcome the planning oracle assumption, whereas in several convex RL settings the function F is assumed to be known, and thus trivially realizable. 7 \f0 2 1 (a) Pure exploration 1 0 2 \u03f5 1 \u2212\u03f5 1 \u2212\u03f5 \u03f5 (b) Risk-averse RL 0 1 (c) Imitation learning Figure 3: Visualization of the illustrative MDPs. In (b), state 0 is a low-reward (r) low-risk state, state 2 is a high-reward (R) high-risk state, and state 1 is a penalty state with zero reward. 6 Numerical Validation In this section, we evaluate the performance over the \ufb01nite trials objective (4) achieved by a policy \u03c0\u2020 \u2208arg max\u03c0\u2208\u03a0 \u03b6n(\u03c0) maximizing the same \ufb01nite trials objective (4) against a policy \u03c0\u22c6\u2208 arg max\u03c0\u2208\u03a0 \u03b6\u221e(\u03c0) maximizing the in\ufb01nite trials objective (3) instead. The latter in\ufb01nite trials \u03c0\u2217 can be obtained by solving a dual optimization on the convex MDP (see Sec. 6.2 in [41]), max \u03c9\u2208\u2206(S\u00d7A) F(\u03c9), subject to X a\u2208A \u03c9(s, a) = X s\u2032\u2208S,a\u2032\u2208A P(s|s\u2032, a\u2032)\u03c9(s\u2032, a\u2032), \u2200s \u2208S, To get the \ufb01nite trials \u03c0\u2020, we \ufb01rst recover the extended MDP as explained in Section 5.1, and then we apply standard dynamic programming [7]. In the experiments, we show that optimizing the in\ufb01nite trials objective can lead to sub-optimal policies across a wide range of applications. In particular, we cover examples from pure exploration, risk-averse RL, and imitation learning. We carefully selected MDPs that are as simple as possible in order to stress the generality of our results. For the sake of clarity, we restrict the discussion to the single trial setting (n = 1). Pure Exploration For the pure exploration setting, we consider the state entropy objective [27], i.e., F(d) = H(d) = \u2212d \u00b7 log d, and the convex MDP in Figure 3a. In this example, the agent aims to maximize the state entropy over \ufb01nite trajectories of T steps. Notice that this happens when a policy induces an empirical state distribution that is close to uniform. In Figure 4a, we compare the average entropy induced by the optimal \ufb01nite trials policy \u03c0\u2020 and the optimal in\ufb01nite trials policy \u03c0\u22c6. An agent following the policy \u03c0\u2020 always achieves a uniform empirical state distribution leading to the maximum entropy. Moreover, \u03c0\u2020 is a non-Markovian deterministic policy. In contrast, the policy \u03c0\u2217is randomized in all the three states. As a result, this policy induces sub-optimal empirical state distributions with strictly positive probability, as shown in Figure 4d. Risk-Averse RL For the risk-averse RL setting, we consider a Conditional Value-at-Risk (CVaR) objective [46] given by F(d) = CVaR\u03b1[r \u00b7 d], where r \u2208[0, 1]S is a reward vector, and the convex MDP in Figure 3b, in which the agent aims to maximize the CVaR over a \ufb01nite-length trajectory of T steps. First, notice that a \ufb01nancial semantics can be attributed to the given MDP. An agent, starting in state 2, can decide whether to invest in risky assets, e.g., crypto-currencies, or in safe ones, e.g., treasury bills. Because of the stochastic transitions, a policy would need to be reactive to the realizations of the transition model in order to maximize the single trial objective (5). 
This kind of behavior is achieved by an optimal \ufb01nite trials policy \u03c0\u2020. Indeed, \u03c0\u2020 is a non-Markovian deterministic policy, which can take decisions as a function of history, and thus takes into account the current realization. On the other hand, an optimal in\ufb01nite trials policy \u03c0\u2217is a Markovian policy, and it cannot take into account the current history. As a result, the policy \u03c0\u2217induces sub-optimal trajectories with strictly positive probability (see Figure 4e). Finally, in Figure 4b we compare the single trial performance induced by the optimal single trial policy \u03c0\u2020 and the optimal in\ufb01nite trials policy \u03c0\u22c6. Overall, \u03c0\u2020 performs signi\ufb01cantly better than \u03c0\u22c6. Imitation Learning For the imitation learning setting, we consider the distribution matching objective [32], i.e., F(d) = KL (d||dE) , and the convex MDP in Figure 3c. The agent aims to learn a policy \u03c0 inducing an empirical state distribution d close to the empirical state distribution dE demonstrated by an expert. In Figure 4c, we compare single trial performance induced by the optimal single trial policy \u03c0\u2020 and the optimal in\ufb01nite trials policy \u03c0\u22c6. An agent following \u03c0\u2020 induces an empirical state distribution that perfectly matches the expert. In contrast, an agent following \u03c0\u2217 induces sub-optimal realizations with strictly positive probability (see Figure 4f). 8 \f(a) Entropy average (b) CVaR average (c) KL average (d) Entropy distribution (e) CVaR distribution (f) KL distribution Figure 4: \u03c0\u2020 denotes an optimal single trial policy, \u03c0\u22c6denotes an optimal in\ufb01nite trials policy. In (a, d) we report the average and the empirical distribution of the single trial utility H(d) achieved in the pure exploration convex MDP (T = 6) of Figure 3a. In (b, e) we report the average and the empirical distribution of the single trial utility CVaR\u03b1[r \u00b7 d] (with \u03b1 = 0.4) achieved in the risk-averse convex MDP (T = 5) of Figure 3b. In (c, f) we report the average and the empirical distribution of the single trial utility KL(d||dE) (with expert distribution dE = (1/3, 2/3)) achieved in the imitation learning convex MDP (T = 12) of Figure 3c. For all the results, we provide 95 % c.i. over 1000 runs. 7 Related Work To the best of our knowledge, [27] were the \ufb01rst to introduce the convex RL problem, as a generalization of the standard RL formulation to non-linear utilities, especially the entropy of the state distribution. They show that the convex RL objective, while being concave (convex) in the state distribution, can be non-concave (non-convex) in the policy parameters. Anyway, they provide a provably ef\ufb01cient algorithm that overcomes the non-convexity through a Frank-Wolfe approach. [60] study the convex RL problem under the name of RL with general utilities. Especially, they investigated a hidden convexity of the convex RL objective that allows for statistically ef\ufb01cient policy optimization in the in\ufb01nite-trials setting. Recently, the in\ufb01nite trials convex RL formulation has been reinterpreted from game-theoretic perspectives [21, 57]. The former [57] notes that the convex RL problem can be seen as a min-max game between the policy player and a cost player. The latter [21] shows that the convex RL problem is a subclass of mean-\ufb01eld games. 
Another relevant branch of literature is the one investigating the expressivity of scalar (Markovian) rewards [2, 49]. Especially, [2] shows that not all the notions of task, such as inducing a set of admissible policies, a (partial) policy ordering, a trajectory ordering, can be naturally encoded with a scalar reward function. Whereas the convex RL formulation extends the expressivity of scalar RL w.r.t. all these three notions of task, it is still not suf\ufb01cient to cover any instance. Convex RL is powerful in terms of the policy ordering it can induce, but it is inherently limited on the trajectory ordering as it only accounts for the in\ufb01nite trials state distribution. Instead, the \ufb01nite trials convex RL setting that we presented in this paper is naturally expressive in terms of trajectory orderings, at the expense of a diminished expressivity on the policy orderings w.r.t. in\ufb01nite trials convex RL. Previous works concerning RL in the presence of trajectory feedback are also related to this work. Most of this literature assumes an underlying scalar reward model [e.g., 18] which only delays the feedback at the end of the episode. One notable exception is the once-per-episode formulation in [12], which we have already commented on in Section 5. Finally, the work in [13, 14] considers in\ufb01nite-horizon MDPs with vectorial rewards as a mean to encode convex objectives in RL with a multi-objective \ufb02avor. They show that stationary policies are in general sub-optimal for the introduced online learning setting, where non-stationarity becomes 9 \fessential. In this setting, they provide principled procedures to learn an optimal policy with sub-linear regret. Their work essentially complement our analysis in the in\ufb01nite-horizon problem formulation, where the difference between \ufb01nite trials and in\ufb01nite trials fades away. 8 Conclusion and Future Directions While in classic RL the optimization of an in\ufb01nite trials objective leads to the optimal policy for the \ufb01nite trials counterpart, we have shown that true convex RL does not have this property. First, we have formalized the concept of \ufb01nite trials convex RL, which captures a problem that until now has been cast into an unfounded optimization problem. Then, we have given an upper bound on the approximation error obtained by erroneously optimizing the in\ufb01nite trials objective, as it is currently done in practice. Finally, we have presented intuitive, yet general, experimental examples to show that the approximation error can be signi\ufb01cant in relevant applications. Since the \ufb01nite trials setting is the standard in both simulated and real-world RL, we believe that shedding light on the above mentioned performance gap will lead to better approaches for convex RL and related areas. Future work could target approximate solutions to the \ufb01nite trials objective rather than the in\ufb01nite trials one, which can cause sub-optimality even when solved exactly. Methods in POMDPs [26, 34] or optimistic planning algorithms [10] could provide useful inspiration. Acknowledgements Riccardo De Santi and Piersilvio De Bartolomeis thank professor Niao He for offering graduate students at ETH the opportunity to work in exciting research areas within the \u201cFoundations of Reinforcement Learning\u201d course. Further, we thank Ali Batuhan Yardim for his generous feedback on an early version of this work." 
+ }, + { + "url": "http://arxiv.org/abs/2112.08746v1", + "title": "Unsupervised Reinforcement Learning in Multiple Environments", + "abstract": "Several recent works have been dedicated to unsupervised reinforcement\nlearning in a single environment, in which a policy is first pre-trained with\nunsupervised interactions, and then fine-tuned towards the optimal policy for\nseveral downstream supervised tasks defined over the same environment. Along\nthis line, we address the problem of unsupervised reinforcement learning in a\nclass of multiple environments, in which the policy is pre-trained with\ninteractions from the whole class, and then fine-tuned for several tasks in any\nenvironment of the class. Notably, the problem is inherently multi-objective as\nwe can trade off the pre-training objective between environments in many ways.\nIn this work, we foster an exploration strategy that is sensitive to the most\nadverse cases within the class. Hence, we cast the exploration problem as the\nmaximization of the mean of a critical percentile of the state visitation\nentropy induced by the exploration strategy over the class of environments.\nThen, we present a policy gradient algorithm, $\\alpha$MEPOL, to optimize the\nintroduced objective through mediated interactions with the class. Finally, we\nempirically demonstrate the ability of the algorithm in learning to explore\nchallenging classes of continuous environments and we show that reinforcement\nlearning greatly benefits from the pre-trained exploration strategy w.r.t.\nlearning from scratch.", + "authors": "Mirco Mutti, Mattia Mancassola, Marcello Restelli", + "published": "2021-12-16", + "updated": "2021-12-16", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction The typical Reinforcement Learning (RL, Sutton and Barto 2018) setting involves a learning agent interacting with an environment in order to maximize a reward signal. In principle, the reward signal is a given and perfectly encodes the task. In practice, the reward is usually hand-crafted, and designing it to make the agent learn a desirable behavior is often a huge challenge. This poses a serious roadblock on the way of autonomous learning, as any task requires a costly and speci\ufb01c formulation, while the synergy between solving one RL problem and another is very limited. To address this crucial limitation, several recent works (Mutti, Pratissoli, and Restelli 2021; Liu and Abbeel 2021b,a; Seo et al. 2021; Yarats et al. 2021) have been dedicated to unsupervised RL. In this framework, originally envisioned in (Hazan et al. 2019; Mutti and Restelli 2020), the agent \ufb01rst pre-trains its policy by taking a *These authors contributed equally. The implementation of \u03b1MEPOL is available at https://github.com/muttimirco/alphamepol. Copyright \u00a9 2022, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. large amount of unsupervised interactions with the environment (unsupervised pre-training). Then, the pre-trained policy is transferred to several downstream tasks, each of them de\ufb01ned through a reward function, and the agent has to learn an optimal policy by taking additional supervised interactions with the environment (supervised \ufb01ne-tuning). Whereas most of the existing works in unsupervised RL (Campos et al. 
(2021) make for a notable exception) converged to a straightforward \ufb01ne-tuning strategy, in which the pre-trained policy is employed as an exploratory initialization of a standard RL algorithm, there is lesser consensus on which unsupervised objective is best suited for the pre-training phase. Traditional intrinsic motivation bonuses that were originally designed to address exploration in supervised RL (e.g., Pathak et al. 2017; Burda et al. 2019) can be employed in the unsupervised RL setting as well (Laskin et al. 2021). However, these bonuses are designed to vanish over time, which makes it hard to converge to a stable policy during the unsupervised pre-training. The Maximum State Visitation Entropy (MSVE, Hazan et al. 2019) objective, which incentives the agent to learn a policy that maximizes the entropy of the induced state visitation, emerged as a powerful alternative in both continuous control and visual domains (Laskin et al. 2021). The intuition underlying the MSVE objective is that a pre-trained exploration strategy should visit with high probability any state where the agent might be rewarded in a subsequent supervised task, so that the \ufb01ne-tuning to the optimal policy is feasible. Although unsupervised pre-training methods effectively reduce the reliance on a reward function and lead to remarkable \ufb01ne-tuning performances w.r.t. RL from scratch, all of the previous solutions to unsupervised RL assume the existence of a single environment. In this work, we aim to push the generality of this framework even further, by addressing the problem of unsupervised RL in multiple environments. In this setting, during the pretraining the agent faces a class of reward-free environments that belong to the same domain but differ in their transition dynamics. At each turn of the learning process, the agent is drawn into an environment within the class, where it can interact for a \ufb01nite number of steps before facing another turn. The ultimate goal of the agent is to pre-train an exploration strategy that helps to solve any subsequent \ufb01ne-tuning task that can be speci\ufb01ed over any environment of the class. Our contribution to the problem of unsupervised RL in arXiv:2112.08746v1 [cs.LG] 16 Dec 2021 \fmultiple environments is three-fold: First, we frame the problem into a tractable formulation (Section 4), then, we propose a methodology to address it (Section 5), for which we provide a thorough empirical evaluation (Section 6). Specifically, we extend the pre-training objective to the multipleenvironments setting. Notably, when dealing with multiple environments the pre-training becomes a multi-objective problem, as one could establish any combination of preferences over the environments. Previous unsupervised RL methods would blindly optimize the average of the pre-training objective across the class, implicitly establishing a uniform preference. Instead, in this work we consider the mean of a critical percentile of the objective function, i.e., its Conditional Value-at-Risk (CVaR, Rockafellar, Uryasev et al. 2000) at level \u03b1, to prioritize the performance in particularly rare or adverse environments. In line with the MSVE literature, we chose the CVaR of the induced state visitation entropy as the pre-training objective, and we propose a policy gradient algorithm (Deisenroth, Neumann, and Peters 2013), \u03b1-sensitive Maximum Entropy POLicy optimization (\u03b1MEPOL), to optimize it via mere interactions with the class of environments. 
As in recent works (Mutti, Pratissoli, and Restelli 2021; Liu and Abbeel 2021b; Seo et al. 2021), the algorithm employs non-parametric methods to deal with state entropy estimation in continuous and high-dimensional environments. Then, it leverages these estimated values to optimize the CVaR of the entropy by following its policy gradient (Tamar, Glassner, and Mannor 2015). Finally, we provide an extensive experimental analysis of the proposed method in both the unsupervised pre-training over classes of multiple environments, and the supervised \ufb01ne-tuning over several tasks de\ufb01ned over the class. The exploration policy pre-trained with \u03b1MEPOL allows to solve sparse-rewards tasks that are impractical to learn from scratch, while consistently improving the performance of a pre-training that is blind to the unfavorable cases. 2 Related Work In this section, we revise the works that relates the most with the setting of unsupervised RL in multiple environments. A more comprehensive discussion can be found in Appendix A. In a previous work, Rajendran et al. (2020) considered a learning process composed of agnostic pre-training (called a practice) and supervised \ufb01ne-tuning (a match) in a class of environments. However, in their setting the two phases are alternated, and the supervision signal of the matches allows to learn the reward for the practice through a meta-gradient. Parisi et al. (2021) addresses the unsupervised RL in multiple environments concurrently to our work. Whereas their setting is akin to ours, they come up with an essentially orthogonal solution. Especially, they consider a pre-training objective inspired by count-based methods (Bellemare et al. 2016) in place of our entropy objective. Whereas they design a speci\ufb01c bonus for the multiple-environments setting, they essentially establish a uniform preference over the class instead of prioritizing the worst-case environment as we do. Finally, our framework resembles the meta-RL setting (Finn, Abbeel, and Levine 2017), in which we would call meta-training the unsupervised pre-training, and meta-testing the supervised \ufb01ne-tuning. However, none of the existing works combine unsupervised meta-training (Gupta et al. 2018a) with a multiple-environments setting. 3 Preliminaries A vector v is denoted in bold, and vi stands for its i-th entry. Probability and Percentiles Let X be a random variable distributed according to a cumulative density function (cdf) FX(x) = Pr(X \u2264x). We denote with E[X], Var[X] the expected value and the variance of X respectively. Let \u03b1 \u2208(0, 1) be a con\ufb01dence level, we call the \u03b1-percentile (shortened to \u03b1%) of the variable X its Value-at-Risk (VaR), which is de\ufb01ned as VaR\u03b1(X) = inf \b x | FX(x) \u2265\u03b1 \t . Analogously, we call the mean of this same \u03b1-percentile the Conditional Value-at-Risk (CVaR) of X, CVaR\u03b1(X) = E \u0002 X | X \u2264VaR\u03b1(X) \u0003 . Markov Decision Processes A Controlled Markov Process (CMP) is a tuple M := (S, A, P, D), where S is the state space, A is the action space, the transition model P(s\u2032|a, s) denotes the conditional probability of reaching state s\u2032 when selecting action a in state s, and D is the initial state distribution. The behavior of an agent is described by a policy \u03c0(a|s), which de\ufb01nes the probability of taking action a in s. Let \u03a0 be the set of all the policies. 
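To make the percentile quantities above concrete, the following is a minimal Python sketch (ours, not part of the original paper) of how the VaR and the CVaR of a batch of scalar outcomes, e.g., per-trajectory entropy values, can be estimated from samples; the function name and the example numbers are purely illustrative.
import numpy as np

def empirical_var_cvar(values, alpha):
    # Order statistics of the batch; VaR_alpha is the ceil(alpha * N)-th smallest value.
    order = np.sort(np.asarray(values, dtype=float))
    var_alpha = order[int(np.ceil(alpha * len(order))) - 1]
    # CVaR_alpha is the mean of the outcomes lying at or below the VaR (the alpha-tail).
    cvar_alpha = order[order <= var_alpha].mean()
    return var_alpha, cvar_alpha

# Illustrative batch of per-trajectory entropies (made-up numbers):
# with alpha = 0.25 the CVaR averages the two worst outcomes.
entropies = [0.9, 1.1, -0.5, 0.8, 1.0, -0.7, 0.95, 1.05]
var_25, cvar_25 = empirical_var_cvar(entropies, alpha=0.25)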
Executing a policy \u03c0 in a CMP over T steps generates a trajectory \u03c4 = (s0,\u03c4, a0,\u03c4, . . . , aT \u22122,\u03c4, sT \u22121,\u03c4) such that p\u03c0,M(\u03c4) = D(s0,\u03c4) QT \u22121 t=0 \u03c0(at,\u03c4|st,\u03c4)P(st+1,\u03c4|st,\u03c4, at,\u03c4) denotes its probability. We denote the state-visitation frequencies induced by \u03c4 with d\u03c4(s) = 1 T PT \u22121 t=0 1(st,\u03c4 = s), and we call dM \u03c0 = E\u03c4\u223cp\u03c0,M[d\u03c4] the marginal state distribution. We de\ufb01ne the differential entropy (Shannon 1948) of d\u03c4 as H(d\u03c4) = \u2212 R S d\u03c4(s) log d\u03c4(s) ds. For simplicity, we will write H(d\u03c4) as a random variable H\u03c4 \u223c\u03b4(h \u2212 H(d\u03c4))p\u03c0,M(\u03c4), where \u03b4(h) is a Dirac delta. By coupling a CMP M with a reward function R we obtain a Markov Decision Process (MDP, Puterman 2014) MR := M \u222aR. Let R(s, a) be the expected immediate reward when taking a \u2208A in s \u2208S and let R(\u03c4) = PT \u22121 t=0 R(st,\u03c4), the performance of a policy \u03c0 over the MDP MR is de\ufb01ned as JMR(\u03c0) = E \u03c4\u223cp\u03c0,M \u0002 R(\u03c4) \u0003 . (1) The goal of reinforcement learning (Sutton and Barto 2018) is to \ufb01nd an optimal policy \u03c0\u2217 J \u2208arg max JMR(\u03c0) through sampled interactions with an unknown MDP MR. 4 Unsupervised RL in Multiple Environments Let M = {M1, . . . , MI} be a class of unknown CMPs, in which every element Mi = (S, A, Pi, D) has a speci\ufb01c transition model Pi, while S, A, D are homogeneous across the class. At each turn, the agent is able to interact with a single environment M \u2208M. The selection of the environment to interact with is mediated by a distribution pM over M, \fFigure 1: On the left, we highlight the unsupervised pre-training, in which the agent iteratively interacts with a CMP M \u2208M drawn from pM. The pre-trained policy \u03c0\u2217 E conveys the initialization to the subsequent supervised \ufb01ne-tuning (on the right), which outputs a reward maximizing policy \u03c0\u2217 J for an MDP M \u222aR that pairs M \u2208M with an arbitrary reward function R. outside the control of the agent. The aim of the agent is to pre-train an exploration strategy that is general across all the MDPs MR one can build upon M. In a single-environment setting, this problem has been assimilated to learning a policy that maximizes the entropy of the induced state visitation frequencies (Hazan et al. 2019; Mutti and Restelli 2020). One can straightforwardly extend the objective to multiple environments by considering the expectation over the class of CMPs, EM(\u03c0) = EM\u223cpM \u03c4\u223cp\u03c0,M \u0002 H\u03c4 \u0003 , where the usual entropy objective over the single environment Mi can be easily recovered by setting pMi = 1. However, this objective function does not account for the tail behavior of H\u03c4, i.e., for the performance in environments of M that are rare or particularly unfavorable. This is decidedly undesirable as the agent may be tasked with an MDP built upon one of these adverse environments in the subsequent supervised \ufb01ne-tuning, where even an optimal strategy w.r.t. EM(\u03c0) may fail to provide suf\ufb01cient exploration. To overcome this limitation, we look for a more nuanced exploration objective that balances the expected performance with the sensitivity to the tail behavior. By taking inspiration from the risk-averse optimization literature (Rockafellar, Uryasev et al. 
2000), we consider the CVaR of the state visitation entropy induced by \u03c0 over M, E\u03b1 M(\u03c0) = CVaR\u03b1(H\u03c4) = E M\u223cpM \u03c4\u223cp\u03c0,M \u0002 H\u03c4 | H\u03c4 \u2264VaR\u03b1(H\u03c4) \u0003 , (2) where \u03b1 is a con\ufb01dence level and E1 M(\u03c0) := EM(\u03c0). The lower we set the value of \u03b1, the more we hedge against the possibility of a bad exploration outcome in some M \u2208M. In the following sections, we propose a method to effectively learn a policy \u03c0\u2217 E \u2208arg max E\u03b1 M(\u03c0) through mere interactions with M, and we show how this serves as a pre-training for RL (the full process is depicted in Figure 1). A preliminary theoretical characterization of the problem of optimizing E\u03b1 M(\u03c0) is provided in Appendix B. 5 A Policy Gradient Approach In this section, we present an algorithm, called \u03b1-sensitive Maximum Entropy POLicy optimization (\u03b1MEPOL), to optimize the exploration objective in (2) through mediated interactions with a class of continuous environments. \u03b1MEPOL operates as a typical policy gradient approach (Deisenroth, Neumann, and Peters 2013). It directly searches for an optimal policy by navigating a set of parametric differentiable policies \u03a0\u0398 := {\u03c0\u03b8 : \u03b8 \u2208\u0398 \u2286Rn}. It does so by repeatedly updating the parameters \u03b8 in the gradient direction, until a stationary point is reached. This update has the form \u03b8\u2032 = \u03b8 + \u03b2\u2207\u03b8E\u03b1 M(\u03c0\u03b8), where \u03b2 is a learning rate, and \u2207\u03b8E\u03b1 M(\u03c0\u03b8) is the gradient of (2) w.r.t. \u03b8. The following proposition provides the formula of \u2207\u03b8E\u03b1 M(\u03c0\u03b8). The derivation follows closely the one in (Tamar, Glassner, and Mannor 2015, Proposition 1), which we have adapted to our objective function of interest (2). Proposition 5.1. The policy gradient of the exploration objective E\u03b1 M(\u03c0\u03b8) w.r.t. \u03b8 is given by \u2207\u03b8E\u03b1 M(\u03c0\u03b8) = E M\u223cpM \u03c4\u223cp\u03c0\u03b8,M \u0014\u0012 T \u22121 X t=0 \u2207\u03b8 log \u03c0\u03b8(at,\u03c4|st,\u03c4) \u0013 \u00d7 \u0012 H\u03c4 \u2212VaR\u03b1(H\u03c4) \u0013\f \f \f \fH\u03c4 \u2264VaR\u03b1(H\u03c4) \u0015 . However, in this work we do not assume full knowledge of the class of CMPs M, and the expected value in Proposition 5.1 cannot be computed without having access to pM and p\u03c0\u03b8,M. Instead, \u03b1MEPOL computes the policy update via a Monte Carlo estimation of \u2207\u03b8E\u03b1 M from the sampled interactions {(Mi, \u03c4i)}N i=1 with the class of environments M. The policy gradient estimate itself relies on a Monte Carlo estimate of each entropy value H\u03c4i from \u03c4i, and a Monte Carlo estimate of VaR\u03b1(H\u03c4) given the estimated {H\u03c4i}N i=1. The following paragraphs describe how these estimates are carried out, while Algorithm 1 provides the pseudocode of \u03b1MEPOL. Additional details and implementation choices can be found in Appendix D. Entropy Estimation We would like to compute the entropy H\u03c4i of the state visitation frequencies d\u03c4i from a single realization {st,\u03c4i}T \u22121 t=0 \u2282\u03c4i. This estimation is notoriously challenging when the state space is continuous and highdimensional S \u2286Rp. Taking inspiration from recent works pursuing the MSVE objective (Mutti, Pratissoli, and Restelli 2021; Liu and Abbeel 2021b; Seo et al. 
2021), we employ a principled k-Nearest Neighbors (k-NN) entropy estimator (Singh et al. 2003) of the form b H\u03c4i \u221d\u22121 T T \u22121 X t=0 log k \u0393( p 2 + 1) T \r \rst,\u03c4i \u2212sk-NN t,\u03c4i \r \rp \u03c0 p 2 , (3) where \u0393 is the Gamma function, \u2225\u00b7\u2225is the Euclidean distance, and sk-NN t,\u03c4i \u2208\u03c4i is the k-nearest neighbor of st,\u03c4i. The intuition behind the estimator in (3) is simple: We can suppose \fAlgorithm 1: \u03b1MEPOL Input: percentile \u03b1, learning rate \u03b2 Output: policy \u03c0\u03b8 1: initialize \u03b8 2: for epoch = 0, 1, . . ., until convergence do 3: for i = 1, 2, . . . , N do 4: sample an environment Mi \u223cpM 5: sample a trajectory \u03c4i \u223cp\u03c0\u03b8,Mi 6: estimate H\u03c4i with (3) 7: end for 8: estimate VaR\u03b1(H\u03c4) with (4) 9: estimate \u2207\u03b8E\u03b1 M(\u03c0\u03b8) with (5) 10: update parameters \u03b8 \u2190\u03b8 + \u03b2 b \u2207\u03b8E\u03b1 M(\u03c0\u03b8) 11: end for the state visitation frequencies d\u03c4i to have a high entropy as long as the average distance between any encountered state and its k-NN is large. Despite its simplicity, a Euclidean metric suf\ufb01ces to get reliable entropy estimates in continuous control domains (Mutti, Pratissoli, and Restelli 2021). VaR Estimation The last missing piece to get a Monte Carlo estimate of the policy gradient \u2207\u03b8E\u03b1 M is the value of VaR\u03b1(H\u03c4). Being H[1], . . . , H[N] the order statistics out of the estimated values { b H\u03c4i}N i=1, we can na\u00a8 \u0131vely estimate the VaR as [ VaR\u03b1(H\u03c4) = H[\u2308\u03b1N\u2309]. (4) Albeit asymptotically unbiased, the VaR estimator in (4) is known to suffer from a large variance in \ufb01nite sample regimes (Kolla et al. 2019), which is aggravated by the error in the upstream entropy estimates, which provide the order statistics. This variance is mostly harmless when we use the estimate to \ufb01lter out entropy values beyond the \u03b1%, i.e., the condition H\u03c4 \u2264VaR\u03b1(H\u03c4) in Proposition 5.1. Instead, its impact is signi\ufb01cant when we subtract it from the values within the \u03b1%, i.e., the term H\u03c4 \u2212VaR\u03b1(H\u03c4) in Proposition 5.1. To mitigate this issue, we consider a convenient baseline b = \u2212VaR\u03b1(H\u03c4) to be subtracted from the latter, which gives the Monte Carlo policy gradient estimator b \u2207\u03b8E\u03b1 M(\u03c0\u03b8) = N X i=1 f\u03c4i b H\u03c4i 1( b H\u03c4i \u2264[ VaR\u03b1(H\u03c4)), (5) where f\u03c4i = PT \u22121 t=0 \u2207\u03b8 log \u03c0\u03b8(at,\u03c4i|st,\u03c4i). Notably, the baseline b trades off a lower estimation error for a slight additional bias in the estimation (5). We found that this baseline leads to empirically good results and we provide some theoretical corroboration over its bene\ufb01ts in Appendix D.1. 
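As a concrete, hedged illustration of how these pieces combine, the short PyTorch sketch below reflects our reading of Equations (4)-(5) and is not the authors' released code: it builds a surrogate objective whose gradient matches the Monte Carlo estimator (5), assuming that logp_sums[i] stores the differentiable sum of log pi_theta(a_t|s_t) along trajectory i (so that autograd recovers the score f_tau_i) and that entropies[i] holds the k-NN estimate of H_tau_i.
import math
import torch

def alpha_mepol_surrogate(logp_sums, entropies, alpha):
    # Eq. (4): estimate VaR_alpha as the ceil(alpha * N)-th order statistic of the entropies.
    n = entropies.shape[0]
    var_hat = torch.sort(entropies).values[int(math.ceil(alpha * n)) - 1]
    # Keep only the trajectories falling in the alpha-percentile (the worst outcomes).
    mask = (entropies <= var_hat).float()
    # Eq. (5): sum_i f_tau_i * H_hat_tau_i * 1{H_hat_tau_i <= VaR_hat}; the VaR baseline
    # is already absorbed, leaving the bare entropy estimate as the per-trajectory weight.
    return (logp_sums * entropies.detach() * mask).sum()

# Ascent step sketch: theta <- theta + beta * grad_theta of the surrogate, e.g.
# surrogate = alpha_mepol_surrogate(logp_sums, entropies, alpha=0.2)
# (-surrogate).backward(); optimizer.step()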
6 Empirical Evaluation We provide an extensive empirical evaluation of the proposed methodology over the two-phase learning process described in Figure 1, which is organized as follows: 6.1 We show the ability of our method in pre-training an exploration policy in a class of continuous gridworlds, emphasizing the importance of the percentile sensitivity; 6.2 We discuss how the choice of the percentile of interest affects the exploration strategy; 6.3 We highlight the bene\ufb01t that the pre-trained strategy provides to the supervised \ufb01ne-tuning on the same class; 6.4 We verify the scalability of our method with the size of the class, by considering a class of 10 continuous gridworlds; 6.5 We verify the scalability of our method with the dimensionality of the environments, by considering a class of 29D continuous control Ant domains; 6.6 We verify the scalability of our method with visual inputs, by considering a class of 147D MiniGrid domains; 6.7 We show that the pre-trained strategy outperforms a policy meta-trained with MAML (Finn, Abbeel, and Levine 2017; Gupta et al. 2018a) on the same class. A thorough description of the experimental setting is provided in Appendix E. 6.1 Unsupervised Pre-Training with Percentile Sensitivity We consider a class M composed of two different con\ufb01gurations of a continuous gridworld domain with 2D states and 2D actions, which we call the GridWorld with Slope. In each con\ufb01guration, the agent navigates through four rooms connected by narrow hallways, by choosing a (bounded) increment along the coordinate directions. A visual representation of the setting can be found in Figure 2a, where the shaded areas denote the initial state distribution and the arrows render a slope that favors or contrasts the agent\u2019s movement. The con\ufb01guration on the left has a south-facing slope, and thus it is called GridWorld with South slope (GWS). Instead, the one on the right is called GridWorld with North slope (GWN) as it has a north-facing slope. This class of environments is unbalanced (and thus interesting to our purpose) for two reasons: First, the GWN con\ufb01guration is more challenging from a pure exploration standpoint, since the slope prevents the agent from easily reaching the two bottom rooms; secondly, the distribution over the class is also unbalanced, as it is pM = [Pr(GWS), Pr(GWN)] = [0.8, 0.2]. In this setting, we compare \u03b1MEPOL against MEPOL (Mutti, Pratissoli, and Restelli 2021), which is akin to \u03b1MEPOL with \u03b1 = 1,1 to highlight the importance of percentile sensitivity w.r.t. a na\u00a8 \u0131ve approach to the multiple-environments scenario. The methods are evaluated in terms of the state visitation entropy E1 M induced by the exploration strategies they learn. In Figure 2, we compare the performance of the optimal exploration strategy obtained by running \u03b1MEPOL (\u03b1 = 0.2) and MEPOL for 150 epochs on the GridWorld with Slope class (pM = [0.8, 0.2]). We show that the two methods achieve a very similar expected performance over the class (Figure 2b). However, this expected performance is the result of a (weighted) average of very different contributions. As anticipated, MEPOL has a strong performance in GWS (pM = [1, 0], Figure 2c), which is close to the con\ufb01gurationspeci\ufb01c optimum (dashed line), but it displays a bad showing in the adverse GWN (pM = [0, 1], Figure 2d). Conversely, 1The pseudocode is identical to Algorithm 1 except that all trajectories affect the gradient estimate in (5). 
Figure 2: Pre-training performance E^1_M obtained by \u03b1MEPOL (\u03b1 = 0.2) and MEPOL in the GridWorld with Slope domain (a). The policies are trained on (b) and tested on (b, c, d). The dashed lines in (c, d) represent the optimal performance. The empirical distribution whose mean is reported in (b) is shown in (e). The behavior of \u03b1MEPOL for different values of \u03b1 is reported in (f). For every plot, we provide 95% c.i. over 10 runs. Panels: (a) GridWorld with Slope; (b) pM = [0.8, 0.2]; (c) pM = [1, 0]; (d) pM = [0, 1]; (e) H\u03c4 distribution for MEPOL (left) and \u03b1MEPOL (right); (f) \u03b1 sensitivity of \u03b1MEPOL. \u03b1MEPOL learns a strategy that is much more robust to the configuration, showing a similar performance in GWS and GWN, as the percentile sensitivity prioritizes the worst case during training. To confirm this conclusion, we look at the actual distribution that is generating the expected performance in Figure 2b. In Figure 2e, we provide the empirical distribution of the trajectory-wise performance (H\u03c4), considering a batch of 200 trajectories with pM = [0.8, 0.2]. It clearly shows that MEPOL is heavy-tailed towards lower outcomes, whereas \u03b1MEPOL concentrates around the mean. This suggests that with a conservative choice of \u03b1 we can induce a good exploration outcome for every trajectory (and any configuration), while without percentile sensitivity we cannot hedge against the risk of particularly bad outcomes. However, let us point out that not all classes of environments would expose such an issue for a naive, risk-neutral approach (see Appendix E.4 for a counterexample), but it is fair to assume that this would arguably generalize to any setting where there is an imbalance (either in the hardness of the configurations, or in their sampling probability) in the class. These are the settings we care about, as they require nuanced solutions (e.g., \u03b1MEPOL) for scenarios with multiple environments. 6.2 On the Value of the Percentile In this section, we consider repeatedly training \u03b1MEPOL with different values of \u03b1 in the GridWorld with Slope domain, and we compare the resulting exploration performance E^1_M as before. In Figure 2f, we can see that the lower \u03b1 we choose, the more we prioritize GWN (right bar for every \u03b1) at the expense of GWS (left bar). Note that this trend carries on with increasing \u03b1, ending in the values of Figures 2c, 2d. The reason for this behavior is straightforward: the smaller \u03b1 is, the larger the share of trajectories from the adverse configuration (GWN) that ends up in the percentile, and thus the more GWN affects the policy update (see the gradient in (5)). Note that the value of the percentile \u03b1 should not be intended as a hyper-parameter to tune via trial and error, but rather as a parameter to select the desired risk profile of the algorithm.
Indeed, there is not a way to say which of the outcomes in Figure 2f is preferable, as they are all reasonable trade-offs between the average and worst-case performance, which might be suited for speci\ufb01c applications. For the sake of consistency, in every experiment of our analysis we report results with a value of \u03b1 that matches the sampling probability of the worst-case con\ufb01guration, but similar arguments could be made for different choices of \u03b1. 6.3 Supervised Fine-Tuning To assess the bene\ufb01t of the pre-trained strategy, we design a family of MDPs MR, where M \u2208{GWS, GWN}, and R is any sparse reward function that gives 1 when the agent reaches the area nearby a random goal location and 0 otherwise. On this family, we compare the performance achieved by TRPO (Schulman et al. 2015) with different initializations: The exploration strategies learned (as in Section 6.1) by \u03b1MEPOL (\u03b1 = 0.2) and MEPOL, or a randomly initialized policy (Random). These three variations are evaluated in terms of their average return JMR, which is de\ufb01ned in (1), over 50 randomly generated goal locations (Figure 3b). As expected, the performance of TRPO with MEPOL is competitive in the GWS con\ufb01guration (Figure 3), but it falls sharply in the GWN con\ufb01guration, where it is not signi\ufb01cantly better than TRPO with Random. Instead, the performance of TRPO with \u03b1MEPOL is strong on both GWS and GWN. Despite the simplicity of the domain, solving an RL problem in GWN with an adverse goal location is far-fetched for both a random initialization and a na\u00a8 \u0131ve solution to the problem of unsupervised RL in multiple environments. \f0 50 100 0 0.5 1 epoch JMR 0 50 100 0 0.5 1 epoch JMR \u03b1MEPOL MEPOL Random (a) Fine-tuning on GWS (left) and GWN (right) y x (b) Sampled goals Figure 3: Fine-tuning performance JMR as a function of learning epochs achieved by TRPO initialized with \u03b1MEPOL (\u03b1 = 0.2), MEPOL, and random exploration strategies, when dealing with a set of RL tasks speci\ufb01ed on the GridWorld with Slope domain (a). We provide 95% c.i. over 50 randomly sampled goal locations (b). \u03b1MEPOL MEPOL Random MEPOL \u03b1MEPOL \u22120.5 0 0.5 1 E1 M 0 50 100 0 0.5 1 epoch JMR (a) MultiGrid: Pre-training (left) and \ufb01ne-tuning (right) MEPOL \u03b1MEPOL 0 1 2 3 E1 M 0 50 100 0 0.2 epoch JMR (b) Ant: Pre-training (left) and \ufb01ne-tuning (right) (c) MiniGrid: EasyG (left) and AdvG (right) 0 100 200 0 0.5 1 epoch JMR 0 100 200 0 0.5 1 epoch JMR (d) MiniGrid: \ufb01ne-tuning on EasyG (left) and AdvG (right) Figure 4: Pre-training performance E1 M (95% c.i. over 10 runs) achieved by \u03b1MEPOL (\u03b1 = 0.1 (a), \u03b1 = 0.2 (b)) and MEPOL in the in the MultiGrid (a) and Ant (b) domains. Fine-tuning performance JMR (95% c.i. over 50 tasks (a), 8 tasks (b), 13 tasks (d)) obtained by TRPO with corresponding initialization (\u03b1MEPOL, MEPOL, Random), in the MultiGrid (a), Ant (b), and MiniGrid (d) domains. MiniGrid domains are illustrated in (c). 6.4 Scaling to Larger Classes of Environments In this section, we consider a class M composed of 10 different con\ufb01gurations of the continuous gridworlds presented in Section 6.1 (including the GWN as the worst-case con\ufb01guration) which we call the MultiGrid domain. As before, we compare \u03b1MEPOL (\u03b1 = 0.1) and MEPOL on the exploration performance E1 M achieved by the optimal strategy, in this case considering a uniformly distributed pM. 
While the average performance of MEPOL is slightly higher across the class (Figure 4a left, left bar), \u03b1MEPOL still has a decisive advantage in the worst-case con\ufb01guration (Figure 4a left, right bar). Just as in Section 6.3, this advantage transfer to the \ufb01ne-tuning, where we compare the average return JMR achieved by TRPO with \u03b1MEPOL, MEPOL, and Random initializations over 50 random goal locations in the GWN con\ufb01guration (Figure 4a right). Whereas in the following sections we will only consider classes of two environments, this experiment shows that the arguments made for small classes of environments can easily generalize to larger classes. 6.5 Scaling to Increasing Dimensions In this section, we consider a class M consisting of two Ant environments, with 29D states and 8D actions. In the \ufb01rst, sampled with probability pM1 = 0.8, the Ant faces a wide descending staircase (Ant Stairs Down). In the second, the Ant faces a narrow ascending staircase (Ant Stairs Up, sampled with probability pM2 = 0.2), which is significantly harder to explore than the former. In the mold of the gridworlds in Section 6.1, these two con\ufb01gurations are speci\ufb01cally designed to create an imbalance in the class. As in Section 6.1, we compare \u03b1MEPOL (\u03b1 = 0.2) against MEPOL on the exploration performance E1 M achieved after 500 epochs. \u03b1MEPOL fares slightly better than MEPOL both in the worst-case con\ufb01guration (Figure 4b left, right bar) and, \f\u03b1MEPOL MAML+R MAML+DIAYN 0 50 100 0 0.5 1 epoch JMR 0 50 100 0 0.5 1 epoch JMR (a) GridWorld with Slope: GWS (left) and GWN (right) 0 50 100 0 0.5 1 epoch JMR (b) MultiGrid Figure 5: Fine-tuning performance JMR achieved by TRPO initialized with \u03b1MEPOL (\u03b1 = 0.2 (a), \u03b1 = 0.1 (b)), a MAML+R meta-policy, and a MAML+DIAYN meta-policy, when dealing with a set of RL tasks in the GridWorld with Slope (a) and the MultiGrid (b) domains. We provide 95% c.i. over 50 tasks. surprisingly, in the easier one (Figure 4b left, left bar).2 Then, we design a set of incrementally challenging \ufb01ne-tuning tasks in the Ant Stairs Up, which give reward 1 upon reaching a certain step of the staircase. Also in this setting, TRPO with \u03b1MEPOL initialization outperforms TRPO with MEPOL and Random in terms of the average return JMR (Figure 4b right). Note that these sparse-reward continuous control tasks are particularly arduous: TRPO with MEPOL and Random barely learns anything, while even TRPO with \u03b1MEPOL does not handily reach the optimal average return (1). 6.6 Scaling to Visual Inputs In this section, we consider a class M of two partiallyobservable MiniGrid (Chevalier-Boisvert, Willems, and Pal 2018) environments, in which the observation is a 147D image of the agent\u2019s \ufb01eld of view. In Figure 4c, we provide a visualization of the domain: The easier con\ufb01guration (EasyG, left) is sampled with probability pM1 = 0.8, the adverse con\ufb01guration (AdvG, right) is sampled with probability pM2 = 0.2. Two factors make the AdvG more challenging to explore, which are the presence of a door at the top-left of the grid, and reversing the effect of agent\u2019s movements (e.g., the agent goes backward when it tries to go forward). Whereas in all the previous experiments we estimated the entropy on the raw input features, visual inputs require a wiser choice of a metric. As proposed in (Seo et al. 
2021), we process the observations through a random encoder before computing the entropy estimate in (3), while keeping everything else as in Algorithm 1. We run this slightly modi\ufb01ed version of \u03b1MEPOL (\u03b1 = 0.2) and MEPOL for 300 epochs. Then, we compare TRPO with the learned initializations (as well as Random) on sparse-reward \ufb01ne-tuning tasks de\ufb01ned upon the class. As in previous settings, TRPO with \u03b1MEPOL results slightly worse than TRPO with MEPOL in the easier con\ufb01guration (Figure 4d, left), but signi\ufb01cantly better in the worst-case (Figure 4d, right). Notably, TRPO from scratch struggles to learn the tasks, especially in the AdvG (Figure 4d, right). Although the MiniGrid domain is extremely simple 2Note that this would not happen in general, as we expect \u03b1MEPOL to be better in the worst-case but worse on average. In this setting, the percentile sensitivity positively biases the average performance due to the peculiar structure of the environments. from a vision standpoint, we note that the same architecture can be employed in more challenging scenarios (Seo et al. 2021), while the focus of this experiment is the combination between visual inputs and multiple environments. 6.7 Comparison with Meta-RL In this section, we compare our approach against metatraining a policy with MAML (Finn, Abbeel, and Levine 2017) on the same GridWorld with Slope (pM = [0.8, 0.2]) and MultiGrid (uniformly distributed pM) domains that we have previously presented. Especially, we consider two relevant baselines. The \ufb01rst is MAML+R, to which we provide full access to the tasks (i.e., rewards) during metatraining. Note that this gives MAML+R an edge over \u03b1MEPOL, which operates reward-free training. The second is MAML+DIAYN (Gupta et al. 2018a), which operates unsupervised meta-training through an intrinsic reward function learned with DIAYN (Eysenbach et al. 2018). As in previous sections, we consider the average return JMR achieved by TRPO initialized with the exploration strategy learned by \u03b1MEPOL or the meta-policy learned by MAML+R and MAML+DIAYN. TRPO with \u03b1MEPOL fares clearly better than TRPO with the meta-policies in all the con\ufb01gurations (Figures 5a, 5b). Even if it works \ufb01ne in fast adaptation (see Appendix E.5), MAML struggles to encode the diversity of task distribution into a single meta-policy and to deal with the most adverse tasks in the long run. Moreover, DIAYN does not speci\ufb01cally handle multiple environments, and it fails to cope with the larger MultiGrid class. 7 Conclusions In this paper, we addressed the problem of unsupervised RL in a class of multiple environments. First, we formulated the problem within a tractable objective, which is inspired by MSVE but includes an additional percentile sensitivity. Then, we presented a policy gradient algorithm, \u03b1MEPOL, to optimize this objective. Finally, we provided an extensive experimental analysis to show its ability in the unsupervised pre-training and the bene\ufb01ts it brings to the subsequent supervised \ufb01ne-tuning. We believe that this paper motivates the importance of designing speci\ufb01c solutions to the relevant problem of unsupervised RL in multiple environments." 
+ }, + { + "url": "http://arxiv.org/abs/2007.04640v2", + "title": "Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate", + "abstract": "In a reward-free environment, what is a suitable intrinsic objective for an\nagent to pursue so that it can learn an optimal task-agnostic exploration\npolicy? In this paper, we argue that the entropy of the state distribution\ninduced by finite-horizon trajectories is a sensible target. Especially, we\npresent a novel and practical policy-search algorithm, Maximum Entropy POLicy\noptimization (MEPOL), to learn a policy that maximizes a non-parametric,\n$k$-nearest neighbors estimate of the state distribution entropy. In contrast\nto known methods, MEPOL is completely model-free as it requires neither to\nestimate the state distribution of any policy nor to model transition dynamics.\nThen, we empirically show that MEPOL allows learning a maximum-entropy\nexploration policy in high-dimensional, continuous-control domains, and how\nthis policy facilitates learning a variety of meaningful reward-based tasks\ndownstream.", + "authors": "Mirco Mutti, Lorenzo Pratissoli, Marcello Restelli", + "published": "2020-07-09", + "updated": "2021-02-26", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In recent years, Reinforcement Learning (RL) (Sutton and Barto 2018) has achieved outstanding results in remarkable tasks, such as Atari games (Mnih et al. 2015), Go (Silver et al. 2016), Dota 2 (Berner et al. 2019), and dexterous manipulation (Andrychowicz et al. 2020). To accomplish these feats, the learning process usually requires a considerable amount of human supervision, especially a hand-crafted reward function (Had\ufb01eld-Menell et al. 2017), while the outcome rarely generalizes beyond a single task (Cobbe et al. 2019). This barely mirrors human-like learning, which is far less dependent on exogenous guidance and exceptionally general. Notably, an infant would go through an intrinsically-driven, nearly exhaustive, exploration of the environment in an early stage, without knowing much about the tasks she/he will face. Still, this same unsupervised process will be consequential to solve those complex, externally-driven tasks, when they will eventually arise. In this perspective, what is a suitable task-agnostic exploration objective to set for the agent in an unsupervised phase, so that the acquired knowledge would facilitate learning a variety of reward-based tasks afterwards? Lately, several works have addressed this question in different directions. In (Bechtle et al. 2019; Zheng et al. 2020), authors investigate how to embed task-agnostic knowledge into *Equal contribution. Copyright \u00a9 2021, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. a transferable meta-reward function. Other works (Jin et al. 2020; Tarbouriech et al. 2020) consider the active estimation of the environment dynamics as an unsupervised objective. Another promising approach, which is the one we focus in this paper, is to incorporate the unsupervised knowledge into a task-agnostic exploration policy, obtained by maximizing some entropic measure over the state space (Hazan et al. 2019; Tarbouriech and Lazaric 2019; Mutti and Restelli 2020; Lee et al. 2019). 
Intuitively, an exploration policy might be easier to transfer than a transition model, which would be hardly robust to changes in the environment, and more ready to use than a meta-reward function, which would still require optimizing a policy as an intermediate step. An ideal maximum-entropy policy, thus inducing a uniform distribution over states, is an extremely general starting point to solve any (unknown) subsequent goal-reaching task, as it minimizes the so-called worst-case regret (Gupta et al. 2018, Lemma 1). In addition, by providing an ef\ufb01cient estimation of any, possibly sparse, reward function, it signi\ufb01cantly reduces the burden on reward design. In tabular settings, Tarbouriech and Lazaric (2019); Mutti and Restelli (2020) propose theoretically-grounded methods for learning an exploration policy that maximizes the entropy of the asymptotic state distribution, while Mutti and Restelli (2020) concurrently consider the minimization of the mixing time as a secondary objective. In (Hazan et al. 2019), authors present a principled algorithm (MaxEnt) to optimize the entropy of the discounted state distribution of a tabular policy, and a theoretically-relaxed implementation to deal with function approximation. Similarly, Lee et al. (2019) design a method (SMM) to maximize the entropy of the \ufb01nite-horizon state distribution. Both SMM and MaxEnt learn a maximum-entropy mixture of policies following this iterative procedure: \ufb01rst, they estimate the state distribution induced by the current mixture to de\ufb01ne an intrinsic reward, then, they learn a policy that optimizes this reward to be added to the mixture. Unfortunately, the literature approaches to state entropy maximization either consider impractical in\ufb01nite-horizon settings (Tarbouriech and Lazaric 2019; Mutti and Restelli 2020), or output a mixture of policies that would be inadequate for non-episodic tasks (Hazan et al. 2019; Lee et al. 2019). In addition, they would still require a full model of the transition dynamics (Tarbouriech and Lazaric 2019; Mutti and Restelli 2020), or a state density estimation (Hazan et al. 2019; Lee arXiv:2007.04640v2 [cs.LG] 26 Feb 2021 \fet al. 2019), which hardly scale to complex domains. In this paper, we present a novel policy-search algorithm (Deisenroth et al. 2013), to deal with task-agnostic exploration via state entropy maximization over a \ufb01nite horizon, which gracefully scales to continuous, high-dimensional domains. The algorithm, which we call Maximum Entropy POLicy optimization (MEPOL), allows learning a single maximum-entropy parameterized policy from mere interactions with the environment, combining non-parametric state entropy estimation and function approximation. It is completely model-free as it requires neither to model the environment transition dynamics nor to directly estimate the state distribution of any policy. The entropy of continuous distributions can be speculated by looking at how random samples drawn from them laid out over the support surface (Beirlant et al. 1997). Intuitively, samples from a high entropy distribution would evenly cover the surface, while samples drawn from low entropy distributions would concentrate over narrow regions. Backed by this intuition, MEPOL relies on a k-nearest neighbors entropy estimator (Singh et al. 2003) to asses the quality of a given policy from a batch of interactions. Hence, it searches for a policy that maximizes this entropy index within a parametric policy space. 
To do so, it combines ideas from two successful, state-of-the-art policy-search methods: TRPO (Schulman et al. 2015), as it performs iterative optimizations of the entropy index within trust regions around the current policies, and POIS (Metelli et al. 2018), as these optimizations are performed of\ufb02ine via importance sampling. This recipe allows MEPOL to learn a maximum-entropy task-agnostic exploration policy while showing stable behavior during optimization. The paper is organized as follows. First, we report the basic background (Section 2) and some relevant theoretical properties (Section 3) that will be instrumental to subsequent sections. Then, we present the task-agnostic exploration objective (Section 4), and a learning algorithm, MEPOL, to optimize it (Section 5), which is empirically evaluated in Section 6. In Appendix A, we discuss related work. The proofs of the theorems are reported in Appendix B. The implementation of MEPOL can be found at https://github.com/muttimirco/mepol. 2 Preliminaries In this section, we report background and notation. Markov Decision Processes A discrete-time Markov Decision Process (MDP) (Puterman 2014) is de\ufb01ned by a tuple M = (S, A, P, R, d0), where S and A are the state space and the action space respectively, P(s\u2032|s, a) is a Markovian transition model that de\ufb01nes the conditional probability of the next state s\u2032 given the current state s and action a, R(s) is the expected immediate reward when arriving in state s, and d0 is the initial state distribution. A trajectory \u03c4 \u2208T is a sequence of state-action pairs \u03c4 = (s0, a0, s1, a1, . . .). A policy \u03c0(a|s) de\ufb01nes the probability of taking action a given the current state s. We denote by \u03a0 the set of all stationary Markovian policies. A policy \u03c0 that interacts with an MDP, induces a t-step state distribution de\ufb01ned as (let d\u03c0 0 = d0): d\u03c0 t (s) = Pr(st = s|\u03c0) = Z T Pr(\u03c4|\u03c0, st = s) d\u03c4, d\u03c0 t (s) = Z S d\u03c0 t\u22121(s\u2032) Z A \u03c0(a|s\u2032)P(s|s\u2032, a) da ds\u2032, for every t > 0. If the MDP is ergodic, it admits a unique steady-state distribution which is limt\u2192\u221ed\u03c0 t (s) = d\u03c0(s). The mixing time tmix describes how fast the state distribution d\u03c0 t converges to its limit, given a mixing threshold \u03f5: tmix = \b t \u2208N : sup s\u2208S \f \fd\u03c0 t (s) \u2212d\u03c0 t\u22121(s) \f \f \u2264\u03f5 \t . Differential Entropy Let f(x) be a probability density function of a random vector X taking values in Rp, then its differential entropy (Shannon 1948) is de\ufb01ned as: H(f) = \u2212 Z f(x) ln f(x) dx. When the distribution f is not available, this quantity can be estimated given a realization of X = {xi}N i=1 (Beirlant et al. 1997). In particular, to deal with high-dimensional data, we can turn to non-parametric, k-Nearest Neighbors (k-NN) entropy estimators of the form (Singh et al. 2003): b Hk(f) = \u22121 N N X i=1 ln k NV k i + ln k \u2212\u03a8(k), (1) where \u03a8 is the digamma function, ln k \u2212\u03a8(k) is a bias correction term, V k i is the volume of the hyper-sphere of radius Ri = |xi \u2212xk-NN i |, which is the Euclidean distance between xi an its k-nearest neighbor xk-NN i , so that: V k i = \f \fxi \u2212xk-NN i \f \fp \u00b7 \u03c0 p/2 \u0393( p 2 + 1) , where \u0393 is the gamma function, and p the dimensions of X. The estimator (1) is known to be asymptotically unbiased and consistent (Singh et al. 2003). 
When the target distribution f \u2032 differs from the sampling distribution f, we can provide an estimate of H(f \u2032) by means of an Importance-Weighted (IW) k-NN estimator (Ajgl and \u02c7 Simandl 2011): b Hk(f \u2032|f) = \u2212 N X i=1 Wi k ln Wi V k i + ln k \u2212\u03a8(k), (2) where Wi = P j\u2208N k i wj, such that N k i is the set of indices of the k-NN of xi, and wj are the normalized importance weights of samples xj, which are de\ufb01ned as: wj = f \u2032(xj)/f(xj) PN n=1 f \u2032(xn)/f(xn) . As a by-product, we have access to a non-parametric IW kNN estimate of the Kullback-Leibler (KL) divergence, given by (Ajgl and \u02c7 Simandl 2011): b DKL \u0000f \f \f\f \ff \u2032) = 1 N N X i=1 ln k \u000e N P j\u2208N k i wj . (3) Note that, when f \u2032 = f, wj = 1/N, the estimator (2) is equivalent to (1), while b DKL(f||f \u2032) is zero. \f3 Analysis of the Importance-Weighted Entropy Estimator In this section, we present a theoretical analysis over the quality of the estimation provided by (2). Especially, we provide a novel detailed proof of the bias, and a new characterization of its variance. Similarly as in (Singh et al. 2003, Theorem 8) for the estimator (1), we can prove the following. Theorem 3.1. (Ajgl and \u02c7 Simandl 2011, Sec. 4.1) Let f be a sampling distribution, f \u2032 a target distribution. The estimator b Hk(f \u2032|f) is asymptotically unbiased for any choice of k. Therefore, given a suf\ufb01ciently large batch of samples from an unknown distribution f, we can get an unbiased estimate of the entropy of any distribution f \u2032, irrespective of the form of f and f \u2032. However, if the distance between the two grows large, a high variance might negatively affect the estimation. Theorem 3.2. Let f be a sampling distribution, f \u2032 a target distribution. The asymptotic variance of the estimator b Hk(f \u2032|f) is given by: lim N\u2192\u221eVar x\u223cf \u0002 b Hk(f \u2032|f) \u0003 = 1 N \u0012 Var x\u223cf \u0002 w ln w \u0003 + Var x\u223cf \u0002 w ln Rp\u0003 + \u0000ln C \u00012 Var x\u223cf \u0002 w \u0003\u0013 , where w = f \u2032(x) f(x) , and C = N\u03c0 p /2 k\u0393(p/2+1) is a constant. 4 A Task-Agnostic Exploration Objective In this section, we de\ufb01ne a learning objective for taskagnostic exploration, which is a fully unsupervised phase that potentially precedes a set of diverse goal-based RL phases. First, we make a common regularity assumption on the class of the considered MDPs, which allows us to exclude the presence of unsafe behaviors or dangerous states. Assumption 4.1. For any policy \u03c0 \u2208\u03a0, the corresponding Markov chain P \u03c0 is ergodic. Then, following a common thread in maximum-entropy exploration (Hazan et al. 2019; Tarbouriech and Lazaric 2019; Mutti and Restelli 2020), and particularly (Lee et al. 2019), which focuses on a \ufb01nite-horizon setting as we do, we de\ufb01ne the task-agnostic exploration problem: maximize \u03c0\u2208\u03a0 FTAE(\u03c0) = H \u0012 1 T T X t=1 d\u03c0 t \u0013 , (4) where \u00af dT = 1 T PT t=1 d\u03c0 t is the average state distribution. An optimal policy w.r.t. this objective favors a maximal coverage of the state space into the \ufb01nite-horizon T, irrespective of the state-visitation order. 
Notably, the exploration horizon T has not to be intended as a given trajectory length, but rather as a parameter of the unsupervised exploration phase which allows to tradeoff exploration quality (i.e., state-space coverage) with exploration ef\ufb01ciency (i.e., mixing properties). As the thoughtful reader might realize, optimizing Objective (4) is not an easy task. Known approaches would require either to estimate the transition model in order to obtain average state distributions (Tarbouriech and Lazaric 2019; Mutti Algorithm 1 MEPOL Input: exploration horizon T, sample-size N, trust-region threshold \u03b4, learning rate \u03b1, nearest neighbors k initialize \u03b8 for epoch = 1, 2, . . . , until convergence do draw a batch of \u2308N T \u2309trajectories of length T with \u03c0\u03b8 build a dataset of particles D\u03c4 = {(\u03c4 t i , si)}N i=1 \u03b8\u2032 = IS-Optimizer(D\u03c4, \u03b8) \u03b8 \u2190\u03b8\u2032 end for Output: task-agnostic exploration policy \u03c0\u03b8 IS-Optimizer Input: dataset of particles D\u03c4, sampling parameters \u03b8 initialize h = 0 and \u03b8h = \u03b8 while DKL( \u00af dT (\u03b80)|| \u00af dT (\u03b8h)) \u2264\u03b4 do compute a gradient step: \u03b8h+1 = \u03b8h + \u03b1\u2207\u03b8h b Hk \u0000 \u00af dT (\u03b8h)| \u00af dT (\u03b80) \u0001 h \u2190h + 1 end while Output: parameters \u03b8h and Restelli 2020), or to directly estimate these distributions through a density model (Hazan et al. 2019; Lee et al. 2019). In contrast to the literature, we turn to non-parametric entropy estimation without explicit state distributions modeling, deriving a more practical policy-search approach that we present in the following section. 5 The Algorithm In this section, we present a model-free policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to deal with the task-agnostic exploration problem (4) in continuous, high-dimensional domains. MEPOL searches for a policy that maximizes the performance index b Hk( \u00af dT (\u03b8)) within a parametric space of stochastic differentiable policies \u03a0\u0398 = {\u03c0\u03b8 : \u03b8 \u2208\u0398 \u2286Rq}. The performance index is given by the non-parametric entropy estimator (1) where we replace f with the average state distribution \u00af dT (\u00b7|\u03c0\u03b8) = \u00af dT (\u03b8). The approach combines ideas from two successful policy-search algorithms, TRPO (Schulman et al. 2015) and POIS (Metelli et al. 2018), as it is reported in the following paragraphs. Algorithm 1 provides the pseudocode for MEPOL. Trust-Region Entropy Maximization The algorithm is designed as a sequence of entropy index maximizations, called epochs, within a trust-region around the current policy \u03c0\u03b8 (Schulman et al. 2015). First, we select an exploration horizon T and an estimator parameter k \u2208N. Then, at each epoch, a batch of trajectories of length T is sampled from the environment with \u03c0\u03b8, so as to take a total of N samples. By considering each state encountered in these trajectories as an unweighted particle, we have D = {si}N i=1 where si \u223c\u00af dT (\u03b8). Then, given a trust-region threshold \u03b4, we aim to solve the following optimization problem: maximize \u03b8\u2032\u2208\u0398 b Hk \u0000 \u00af dT (\u03b8\u2032) \u0001 subject to DKL \u0000 \u00af dT (\u03b8) \f \f\f \f \u00af dT (\u03b8\u2032) \u0001 \u2264\u03b4. (5) \fThe idea is to optimize Problem (5) via Importance Sampling (IS) (Owen 2013), in a fully off-policy manner partially inspired by (Metelli et al. 
2018), exploiting the IW entropy estimator (2) to calculate the objective and the KL estimator (3) to compute the trust-region constraint. We detail the off-policy optimization in the following paragraph. Importance Sampling Optimization We \ufb01rst expand the set of particles D by introducing D\u03c4 = {(\u03c4 t i , si)}N i=1, where \u03c4 t i = (s0 i , . . . , st i = si) is the portion of the trajectory that leads to state si. In this way, for any policy \u03c0\u03b8\u2032, we can associate to each particle its normalized importance weight: wi = Pr(\u03c4 t i |\u03c0\u03b8\u2032) Pr(\u03c4 t i |\u03c0\u03b8) = t Y z=0 \u03c0\u03b8\u2032(az i |sz i ) \u03c0\u03b8(az i |sz i ) , wi = wi PN n=0 wn . Then, having set a constant learning rate \u03b1 and the initial parameters \u03b80 = \u03b8, we consider a gradient ascent optimization of the IW entropy estimator (2), \u03b8h+1 = \u03b8h + \u03b1\u2207\u03b8h b Hk \u0000 \u00af dT (\u03b8h)| \u00af dT (\u03b80) \u0001 , (6) until the trust-region boundary is reached, i.e., when it holds: b DKL \u0000 \u00af dT (\u03b80) \f \f\f \f \u00af dT (\u03b8h+1) \u0001 > \u03b4. The following theorem provides the expression for the gradient of the IW entropy estimator in Equation (6). Theorem 5.1. Let \u03c0\u03b8 be the current policy and \u03c0\u03b8\u2032 a target policy. The gradient of the IW estimator b Hk( \u00af dT (\u03b8\u2032)| \u00af dT (\u03b8)) w.r.t. \u03b8\u2032 is given by: \u2207\u03b8\u2032 b Hk( \u00af dT (\u03b8\u2032)| \u00af dT (\u03b8)) = \u2212 N X i=0 \u2207\u03b8\u2032Wi k \u0012 V k i + ln Wi V k i \u0013 , where: \u2207\u03b8\u2032Wi = X j\u2208N k i wj \u00d7 \u0012 t X z=0 \u2207\u03b8\u2032 ln \u03c0\u03b8\u2032(az j|sz j) \u2212 PN n=1 Qt z=0 \u03c0\u03b8\u2032(az n|sz n) \u03c0\u03b8(az n|sz n) Pt z=0 \u2207\u03b8\u2032 ln \u03c0\u03b8\u2032(az n|sz n) PN n=1 Qt z=0 \u03c0\u03b8\u2032(az n|sz n) \u03c0\u03b8(az n|sz n) \u0013 . 6 Empirical Analysis In this section, we present a comprehensive empirical analysis, which is organized as follows: 6.1) We illustrate that MEPOL allows learning a maximumentropy policy in a variety of continuous domains, outperforming the current state of the art (MaxEnt); 6.2) We illustrate how the exploration horizon T, over which the policy is optimized, maximally impacts the trade-off between state entropy and mixing time; 6.3) We reveal the signi\ufb01cant bene\ufb01t of initializing an RL algorithm (TRPO) with a MEPOL policy to solve numerous challenging continuous control tasks. A thorough description of the experimental set-up, additional results, and visualizations are provided in Appendix C. 6.1 Task-Agnostic Exploration Learning In this section, we consider the ability of MEPOL to learn a task-agnostic exploration policy according to the proposed objective (4). Such a policy is evaluated in terms of its induced entropy value b Hk( \u00af dT (\u03b8)), which we henceforth refer as entropy index. We chose k to optimize the performance of the estimator, albeit experiencing little to none sensitivity to this parameter (Appendix C.3). In any considered domain, we picked a speci\ufb01c T according to the time horizon we aimed to test in the subsequent goal-based setting (Section 6.3). This choice is not relevant in the policy optimization process, while we discuss how it affects the properties of the optimal policy in the next section. Note that, in all the experiments, we adopt a neural network to represent the parametric policy \u03c0\u03b8 (see Appendix C.2). 
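As an aside on implementation, the numpy sketch below illustrates, under our own assumptions (it is not the released code), two ingredients of the off-policy step of Section 5: the self-normalized particle weights and the k-NN KL estimate of Equation (3) used to detect when the trust-region boundary delta has been crossed. Here logp_new and logp_old are assumed to hold, for each particle, the summed log-probabilities of its trajectory prefix under pi_theta-prime and pi_theta, and knn_idx the indices of its k nearest neighbors in state space.
import numpy as np

def normalized_weights(logp_new, logp_old):
    # Per-particle importance weights w_i, normalized to sum to one;
    # subtracting the max log-ratio keeps the exponentials numerically stable.
    log_ratio = logp_new - logp_old
    w = np.exp(log_ratio - log_ratio.max())
    return w / w.sum()

def knn_kl_estimate(weights, knn_idx, k):
    # Eq. (3): (1/N) * sum_i ln( k / (N * sum over the k-NN of s_i of w_j) ).
    n = weights.shape[0]
    w_neigh = weights[knn_idx].sum(axis=1)
    return float(np.mean(np.log(k / (n * w_neigh))))

# Trust-region check sketch: keep ascending the IW entropy estimate (2) while
# knn_kl_estimate(normalized_weights(logp_new, logp_old), knn_idx, k) <= delta.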
We compare our algorithm with MaxEnt (Hazan et al. 2019). To this end, we considered their practical implementation1 of the algorithm to deal with continuous, non-discretized domains (see Appendix C.3 for further details). Note that MaxEnt learns a mixture of policies rather than a single policy. To measure its entropy index, we stick with the original implementation by generating a batch as follows: for each step of a trajectory, we sample a policy from the mixture and we take an action with it. This is not our design choice, while we found that using the mixture in the usual way leads to inferior performance anyway. We also investigated SMM (Lee et al. 2019) as a potential comparison. We do not report its results here for two reasons: we cannot achieve signi\ufb01cant performance w.r.t. the random baseline, the difference with MaxEnt is merely in the implementation. First, we evaluate task-agnostic exploration learning over two continuous illustrative domains: GridWorld (2D states, 2D actions) and MountainCar (2D, 1D). In these two domains, MEPOL successfully learns a policy that evenly covers the state space in a single batch of trajectories (state-visitation heatmaps are reported in Appendix C.3), while showcasing minimal variance across different runs (Figure 1a, 1b). Notably, it signi\ufb01cantly outperforms MaxEnt in the MountainCar domain.2 Additionally, In Figure 1c we show how a batch of samples drawn with a random policy (left) compares to one drawn with an optimal policy (right, the color fades with the time step). Then, we consider a set of continuous control, high-dimensional environments from the Mujoco suite (Todorov, Erez, and Tassa 2012): Ant (29D, 8D), Humanoid (47D, 20D), HandReach (63D, 20D). While we learn a policy that maps full state representations to actions, we maximize the entropy index over a subset of the state space dimensions: 7D for Ant (3D position and 4D torso orientation), 24D for Humanoid (3D position, 4D body orientation, and all the joint angles), 24D for HandReach (full set of joint angles). As we report in Figure 1d, 1e, 1f, MEPOL is able to learn policies with striking entropy values in all the environments. As a by-product, it unlocks several meaningful high-level skills during the process, such as jumping, rotating, navigation (Ant), crawling, standing up (Humanoid), and ba1https://github.com/abbyvansoest/maxent/tree/master/ humanoid 2We avoid the comparison in GridWorld, since the environment resulted particularly averse to MaxEnt. \f0 2 4 \u00b7106 3.5 4 4.5 sample entropy index (a) GridWorld 0 2.5 5 \u00b7106 \u22124 \u22123 \u22122 sample entropy index (b) MountainCar (c) GridWorld Visualization 0 1 2 \u00b7107 \u22124 0 3.5 7 sample entropy index (d) Ant 0 1 2 \u00b7107 \u22128 0 8 sample entropy index (e) Humanoid 0 2.5 5 \u00b7106 \u221215 \u221210 \u22125 0 sample entropy index (f) HandReach MEPOL Random MaxEnt Figure 1: Comparison of the entropy index as a function of training samples achieved by MEPOL, MaxEnt, and a random policy. (95% c.i. over 8 runs. MEPOL: k: 4 (c, d, e, f), 50 (b); T: 400 (c), 500 (d, e, f), 1200 (b). MaxEnt epochs: 20 (c), 30 (d, e, f)). sic coordination (Humanoid, HandReach). Most importantly, the learning process is not negatively affected by the increasing number of dimensions, which is, instead, a well-known weakness of approaches based on explicit density estimation to compute the entropy (Beirlant et al. 1997). 
This issue is documented by the poor results of MaxEnt, which struggles to match the performance of MEPOL in the considered domains, as it prematurely converges to a low-entropy mixture. Scalability As we detail above, in the experiments over continuous control domains we do not maximize the entropy over the full state representation. Note that this selection of features is not dictated by the inability of MEPOL to cope with even more dimensions, but to obtain reliable and visually interpretable behaviors (see Appendix C.3 for further details). To prove this point we conduct an additional experiment over a massively high-dimensional GirdWorld domain (200D, 200D). As we report in Figure 2b, even in this setting MEPOL handily learns a policy to maximize the entropy index. On MaxEnt Results One might realize that the performance reported for MaxEnt appears to be much lower than the one presented in (Hazan et al. 2019). In this regard, some aspects need to be considered. First, their objective is different, as they focus on the entropy of discounted stationary distributions instead of \u00af dT . However, in the practical implementation, they consider undiscounted, \ufb01nite-length trajectories as we do. Secondly, their results are computed over all samples collected during the learning process, while we measure the entropy over a single batch. Lastly, one could argue that an evaluation over the same measure (k-NN entropy estimate) that our method explicitly optimize is unfair. Nevertheless, even evaluating over the entropy of the 2D-discretized state space, which is the measure considered in (Hazan et al. 2019), leads to similar results (as reported in Figure 2a). 6.2 Impact of the Exploration Horizon Parameter In this section, we discuss how choosing an exploration horizon T affects the properties of the learned policy. First, it is useful to distinguish between a training horizon T, which is an input parameter to MEPOL, and a testing horizon h on which the policy is evaluated. Especially, it is of particular interest to consider how an exploratory policy trained over T-steps fares in exploring the environment for a mismatching number of steps h. To this end, we carried out a set of experiments in the aforementioned GridWorld and Humanoid domains. We denote by \u03c0\u2217 T a policy obtained by executing MEPOL with a training horizon T and we consider the entropy of the h-step state distribution induced by \u03c0\u2217 T . Figure 2c (left), referring to the GridWorld experiment, shows that a policy trained over a shorter T might hit a peak in the entropy measure earlier (fast mixing), but other policies achieve higher entropy values at their optimum (highly exploring).3 It is worth noting that the policy trained over 200-steps becomes overzealous when the testing horizon extends to higher values, while derailing towards a poor h-step entropy. In such a short horizon, the learned policy cannot evenly cover the four rooms and it over\ufb01ts over easy-to-reach locations. Unsurprisingly, also the average state entropy over h-steps ( \u00af dh), which is the actual objective we aim to maximize in taskagnostic exploration, is negatively affected, as we report in Figure 2c (right). This result points out the importance of properly choosing the training horizon in accordance with the downstream-task horizon the policy will eventually face. However, in other cases a policy learned over T-steps might gracefully generalize to longer horizons, as con\ufb01rmed by the Humanoid experiment (Figure 2d). 
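To make the distinction between the training horizon T and the testing horizon h concrete, a rough Monte-Carlo estimate of the h-step entropy H(d_h) can be computed as below. The Gym-style environment interface, the fixed discretization bounds, and the function names are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def h_step_state_entropy(env, policy, h, n_episodes=100, bins=20):
    """Monte-Carlo estimate of the entropy of the state distribution at step h.

    env: environment assumed to expose a Gym-style reset()/step() interface.
    policy: function state -> action.
    The state at step h of each rollout is discretized into a histogram
    (assuming roughly normalized observations in [-1, 1]) and the entropy of
    the empirical distribution over cells is returned.
    """
    visits = {}
    for _ in range(n_episodes):
        s = env.reset()
        for _ in range(h):
            s, _, done, *_ = env.step(policy(s))
            if done:
                break
        cell = tuple(np.digitize(np.asarray(s), np.linspace(-1.0, 1.0, bins)))
        visits[cell] = visits.get(cell, 0) + 1
    p = np.array(list(visits.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum()
```

Averaging such estimates over steps 1, ..., h would instead approximate the average entropy objective discussed above.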
The environment is free of obstacles that can limit the agent\u2019s motion, so there is no incentive to over\ufb01t an exploration behavior over a shorter T. 3The trade-off between entropy and mixing time has been substantiated for steady-state distributions in (Mutti and Restelli 2020). \fMountainCar Ant Humanoid samples 5 \u00b7 106 2 \u00b7 107 2 \u00b7 107 MEPOL 4.31 \u00b1 0.04 3.67 \u00b1 0.05 1.92 \u00b1 0.08 MaxEnt 3.36 \u00b1 0.4 1.92 \u00b1 0.05 0.96 \u00b1 0.06 Random 1.98 \u00b1 0.05 1.86 \u00b1 0.06 0.84 \u00b1 0.04 (a) Comparison of the entropy over the 2D-discretized states achieved by MEPOL, MaxEnt, and a random policy (95% c.i. over 8 runs). 0 2.5 5 \u00b7106 0 20 40 60 sample entropy index MEPOL Random (b) 200D-GridWorld 0 500 1,000 0 2 4 testing horizon h h-step entropy 0 500 1,000 0 2 4 testing horizon h average entropy \u03c0\u2217 200 \u03c0\u2217 400 \u03c0\u2217 800 (c) GridWorld 0 1,000 2,000 0 2 4 6 testing horizon h h-step entropy \u03c0\u2217 500 \u03c0\u2217 1000 \u03c0\u2217 2000 (d) Humanoid Figure 2: Comparison of the entropy index over an extended (200D, 200D) GridWorld domain (b). Comparison of the h-step entropy (H(d\u03c0 h)) and average entropy (H( \u00af dh)) achieved by a set of policies trained over different horizons T as a function of the testing horizon h (c, d). (95% c.i. over 8 runs). 6.3 Goal-Based Reinforcement Learning In this section, we illustrate how a learning agent can bene\ufb01t from an exploration policy learned by MEPOL when dealing with a variety of goal-based RL tasks. Especially, we compare the performance achieved by TRPO (Schulman et al. 2015) initialized with a MEPOL policy (the one we learned in Section 6.1) w.r.t. a set of signi\ufb01cant baselines that learn from scratch, i.e., starting from a randomly initialized policy. These baselines are: TRPO, SAC (Haarnoja et al. 2018), which promotes exploration over actions, SMM (Lee et al. 2019), which has an intrinsic reward related to the state-space entropy, ICM (Pathak et al. 2017), which favors exploration by fostering prediction errors, and Pseudocount (Bellemare et al. 2016), which assigns high rewards to rarely visited states. The algorithms are evaluated in terms of average return on a series of sparse-reward RL tasks de\ufb01ned over the environments we considered in the previous sections. Note that we purposefully chose an algorithm without a smart exploration mechanism, i.e., TRPO, to employ the MEPOL initialization. In this way we can clearly show the merits of the initial policy in providing the necessary exploration. However, the MEPOL initialization can be combined with any other RL algorithm, potentially improving the reported performance. In view of previous results in taskagnostic exploration learning (Section 6.1), where MaxEnt is plainly dominated by our approach, we do not compare with TRPO initialized with a MaxEnt policy, as it would not be a challenging baseline in this setting. In GridWorld, we test three navigation tasks with different goal locations (see Figure 3a). The reward is 1 in the states having Euclidean distance to the goal lower than 0.1. For the Ant environment, we de\ufb01ne three, incrementally challenging, tasks: Escape, Jump, Navigate. In the \ufb01rst, the Ant starts from an upside-down position and it receives a reward of 1 whenever it rotates to a straight position (Figure 3b). In Jump, the agent gets a reward of 1 whenever it jumps higher than three units from the ground (Figure 3c). 
In Navigate, the reward is 1 in all the states further than 7 units from the initial location (Figure 3d). Finally, in Humanoid Up, the agent initially lies on the ground and it receives a reward of 1 when it is able to stand-up (Figure 3e). In all the considered tasks, the reward is zero anywhere except for the goal states, an episode ends when the goal is reached. As we show in Figure 3, the MEPOL initialization leads to a striking performance across the board, while the tasks resulted extremely hard to learn from scratch. In some cases (Figure 3b), MEPOL allows for zero-shot policy optimization, as the optimal behavior has been already learned in the unsupervised exploration stage. In other tasks (e.g., Figure 3a), the MEPOL-initialized policy has lower return, but it permits for lighting fast adaptation w.r.t. random initialization. Note that, to match the tasks\u2019 higher-level of abstraction, in Ant Navigate and Humanoid Up we employed MEPOL initialization learned by maximizing the entropy over mere spatial coordinates (x-y in Ant, x-y-z in Humanoid). However, also the exact policies learned in Section 6.1 fares remarkably well in those scenarios (see Appendix C.4), albeit experiencing slower convergence. 7 Discussion and Conclusions In this paper, we addressed task-agnostic exploration in environments with non-existent rewards by pursuing state entropy maximization. We presented a practical policy-search algorithm, MEPOL, to learn an optimal task-agnostic exploration policy in continuous, high-dimensional domains. We empirically showed that MEPOL performs outstandingly in terms of \f0 50 100 0 0.5 1 epoch average return goal 1 0 50 100 0 0.5 1 epoch average return goal 2 0 50 100 0 0.5 1 epoch average return goal 3 (a) GridWorld Navigation 0 250 500 0 0.5 1 epoch average return (b) Ant Escape 0 500 1,000 0 0.5 1 epoch average return (c) Ant Jump 0 500 1,000 0 0.5 1 epoch average return (d) Ant Navigate 0 500 1,000 0 0.5 1 epoch average return (e) Humanoid Up TRPO with MEPOL init. TRPO SAC SMM ICM Pseudocount Figure 3: Comparison of the average return as a function of learning epochs achieved by TRPO with MEPOL initialization, TRPO, SAC, SMM, ICM, and Pseudocount over a set of sparse-reward RL tasks. For each task, we report a visual representation and learning curves. (95% c.i. over 8 runs). state entropy maximization, and that the learned policy paves the way for solving several reward-based tasks downstream. Extensions and Future Directions First, we note that the results reported for the goal-based setting (Section 6.3) can be easily extended, either considering a wider range of tasks or combining the MEPOL initialization with a variety of RL algorithms (other than TRPO). In principle, any algorithm can bene\ufb01t from task-agnostic exploration, especially when dealing with sparse-reward tasks. Secondly, while we solely focused on \ufb01nite-horizon exploration, it is straightforward to adapt the presented approach to the discounted case: We could simply generate a batch of trajectories with a probability 1 \u2212\u03b3 to end at any step instead of stopping at step T, and then keep everything else as in Algorithm 1. This could be bene\ufb01cial when dealing with discounted tasks downstream. Future work might address an adaptive control over the exploration horizon T, so to induce a curriculum of exploration problems, starting from an easier one (with a short T) and going forward to more challenging problems (longer T). 
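The discounted-case adaptation mentioned above amounts to truncating each trajectory with probability 1 - gamma at every step, instead of stopping at step T. A minimal sketch, assuming a Gym-style environment interface, is the following.

```python
import numpy as np

def sample_geometric_horizon_states(env, policy, gamma, max_steps=10_000, rng=None):
    """Roll out `policy`, terminating each step with probability 1 - gamma.

    The resulting trajectory lengths are geometrically distributed, so the
    visited states are (approximately) drawn from the gamma-discounted state
    distribution rather than the T-step average distribution.
    """
    rng = rng or np.random.default_rng()
    states = []
    s = env.reset()
    for _ in range(max_steps):
        states.append(s)
        if rng.random() > gamma:      # stop with probability 1 - gamma
            break
        s, _, done, *_ = env.step(policy(s))
        if done:
            break
    return states
```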
Promising future directions also include learning task-agnostic exploration across a collection of environments, and contemplating the use of a non-parametric state entropy regularization in reward-based policy optimization. Other Remarks It is worth mentioning that the choice of a proper metric for the k-NN computation might signi\ufb01cantly impact the \ufb01nal performance. In our continuous control experiments, we were able to get outstanding results with a simple Euclidean metric. However, different domains, such as learning from images, might require the de\ufb01nition of a more thoughtful metric space in order to get reliable entropy estimates. In this regard, some recent works (e.g., Misra et al. 2020) provide a blueprint to learn state embeddings in rewardfree rich-observation problems. Another theme that is worth exploring to get even better performance over future tasks is sample reuse. In MEPOL, the samples collected during task-agnostic training are discarded, while only the resulting policy is retained. An orthogonal line of research focuses on the problem of collecting a meaningful batch of samples in a reward-free setting (Jin et al. 2020), while discarding sampling policies. Surely a combination of the two objectives will be necessary to develop truly ef\ufb01cient methods for task-agnostic exploration, but we believe that these two lines of work still require signi\ufb01cant individual advances before being combined into a unique, broadly-applicable approach. To conclude, we hope that this work can shed some light on the great potential of state entropy maximization approaches to perform task-agnostic exploration." + }, + { + "url": "http://arxiv.org/abs/1907.04662v2", + "title": "An Intrinsically-Motivated Approach for Learning Highly Exploring and Fast Mixing Policies", + "abstract": "What is a good exploration strategy for an agent that interacts with an\nenvironment in the absence of external rewards? Ideally, we would like to get a\npolicy driving towards a uniform state-action visitation (highly exploring) in\na minimum number of steps (fast mixing), in order to ease efficient learning of\nany goal-conditioned policy later on. Unfortunately, it is remarkably arduous\nto directly learn an optimal policy of this nature. In this paper, we propose a\nnovel surrogate objective for learning highly exploring and fast mixing\npolicies, which focuses on maximizing a lower bound to the entropy of the\nsteady-state distribution induced by the policy. In particular, we introduce\nthree novel lower bounds, that lead to as many optimization problems, that\ntradeoff the theoretical guarantees with computational complexity. Then, we\npresent a model-based reinforcement learning algorithm, IDE$^{3}$AL, to learn\nan optimal policy according to the introduced objective. Finally, we provide an\nempirical evaluation of this algorithm on a set of hard-exploration tasks.", + "authors": "Mirco Mutti, Marcello Restelli", + "published": "2019-07-10", + "updated": "2019-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In general, the Reinforcement Learning (RL) framework (Sutton and Barto 2018) assumes the presence of a reward signal coming from a, potentially unknown, environment to a learning agent. When this signal is suf\ufb01ciently informative about the utility of the agent\u2019s decisions, RL has proved to be rather successful in solving challenging tasks, even at a super-human level (Mnih et al. 2015; Silver et al. 2017). 
However, in most real-world scenarios, we cannot rely on a well-shaped, complete reward signal. This may prevent the agent from learning anything until, while performing random actions, it eventually stumbles into some sort of external reward. Thus, what is a good objective for a learning agent to pursue, in the absence of an external reward signal, to prepare itself to learn ef\ufb01ciently, eventually, a goal-conditioned policy? Intrinsic motivation (Chentanez, Barto, and Singh 2005; Oudeyer and Kaplan 2009) traditionally tries to answer this pressing question by designing self-motivated goals that favor exploration. In a curiosity-driven approach, \ufb01rst proposed in (Schmidhuber 1991), the intrinsic objective encourages the agent to explore novel states by rewarding prediction Copyright c \u20dd2020, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. errors (Stadie, Levine, and Abbeel 2015; Pathak et al. 2017; Burda et al. 2019a; 2019b). On a similar \ufb02avor, other works propose to relate an intrinsic reward to some sort of learning progress (Lopes et al. 2012) or information gain (Mohamed and Rezende 2015; Houthooft et al. 2016), stimulating the agent\u2019s empowerment over the environment. Countbased approaches (Bellemare et al. 2016; Tang et al. 2017; Ostrovski et al. 2017) consider exploration bonuses proportional to the state visitation frequencies, assigning high rewards to rarely visited states. Athough the mentioned approaches have been relatively effective in solving sparserewards, hard-exploration tasks (Pathak et al. 2017; Burda et al. 2019b), they have some common limitations that may affect their ability to methodically explore an environment in the absence of external rewards, as pointed out in (Ecoffet et al. 2019). Especially, due to the consumable nature of their intrinsic bonuses, the learning agent could prematurely lose interest in a frontier of high rewards (detachment). Furthermore, the agent may suffer from derailment by trying to return to a promising state, previously discovered, if a na\u00a8 \u0131ve exploratory mechanism, such as \u03f5-greedy, is combined to the intrinsic motivation mechanism (which is often the case). To overcome these limitations, recent works suggest alternative approaches to motivate the agent towards a more systematic exploration of the environment (Hazan et al. 2019; Ecoffet et al. 2019). Especially, in (Hazan et al. 2019) the authors consider an intrinsic objective which is directed to the maximization of an entropic measure over the state distribution induced by a policy. Then, they provide a provably ef\ufb01cient algorithm to learn a mixture of deterministic policies that is overall optimal w.r.t. the maximum-entropy exploration objective. To the best of our knowledge, none of the mentioned approaches explicitly address the related aspect of the mixing time of an exploratory policy, which represents the time it takes for the policy to reach its full capacity in terms of exploration. Nonetheless, in many cases we would like to maximize the probability of reaching any potential target state having a fairly limited number of interactions at hand for exploring the environment. Notably, this context presents some analogies to the problem of maximizing the ef\ufb01ciency of a random walk (Hassibi et al. 2014). In this paper, we present a novel approach to learn exploratory policies that are, at the same time, highly exploring and fast mixing. 
In Section 3, we propose a surrogate objecarXiv:1907.04662v2 [cs.LG] 19 Dec 2019 \ftive to address the problem of maximum-entropy exploration over both the state space (Section 3.1) and the action space (Section 3.2). The idea is to search for a policy that maximizes a lower bound to the entropy of the induced steadystate distribution. We introduce three new lower bounds and the corresponding optimization problems, discussing their pros and cons. Furthermore, we discuss how to complement the introduced objective to account for the mixing time of the learned policy (Section 3.3). In Section 4, we present the Intrinsically-Driven Effective and Ef\ufb01cient Exploration ALgorithm (IDE3AL), a novel, model-based, reinforcement learning method to learn highly exploring and fast mixing policies through iterative optimizations of the introduced objective. In Section 5, we provide an empirical evaluation to illustrate the merits of our approach on hard-exploration, \ufb01nite domains, and to show how it fares in comparison to count-based and maximum-entropy approaches. Finally, in Section 6, we discuss the proposed approach and related works. The proofs of the Theorems are reported in Appendix A1. 2 Preliminaries A discrete-time Markov Decision Process (MDP) (Puterman 2014) is de\ufb01ned as a tuple M = (S, A, P, R, d0), where S is the state space, A is the action space, P(s\u2032|s, a) is a Markovian transition model de\ufb01ning the distribution of the next state s\u2032 given the current state s and action a, R is the reward function, such that R(s, a) is the expected immediate reward when taking action a from state s, and d0 is the initial state distribution. A policy \u03c0(a|s) de\ufb01nes the probability of taking an action a in state s. In the following we will indifferently turn to scalar or matrix notation, where v denotes a vector, M denotes a matrix, and vT , M T denote their transpose. A matrix is row (column) stochastic if it has non-negative entries and all of its rows (columns) sum to one. A matrix is doubly stochastic if it is both row and column stochastic. We denote with P the space of doubly stochastic matrices. The L\u221e-norm \u2225M\u2225\u221e of a matrix is its maximum absolute row sum, while \u2225M\u22252 = \u0000max eig M T M \u0001 1 2 and \u2225M\u2225F = \u0000 P i P j(M(i, j))2\u0001 1 2 are its L2 and Frobenius norms respectively. We denote with 1n a column vector of n ones and with 1n\u00d7m a matrix of ones with n rows and m columns. Using matrix notation, d0 is a column vector of size |S| having elements d0(s), P is a row stochastic matrix of size (|S||A| \u00d7 |S|) that describes the transition model P ((s, a), s\u2032) = P(s\u2032|s, a), \u03a0 is a row stochastic matrix of size (|S| \u00d7 |S||A|) that contains the policy \u03a0(s, (s, a)) = \u03c0(a|s), and P \u03c0 = \u03a0P is a row stochastic matrix of size (|S| \u00d7 |S|) that represents the state transition matrix under policy \u03c0. We denote with \u03a0 the space of all the stationary Markovian policies. In the absence of any reward, i.e., when R(s, a) = 0 for every (s, a), a policy \u03c0 induces, over the MDP M, a Markov Chain (MC) (Levin and Peres 2017) de\ufb01ned by C = (S, P \u03c0, d0) where P \u03c0(s\u2032|s) = P \u03c0(s, s\u2032) is the state transition model. 
Having de\ufb01ned the t-step transition matrix 1A complete version of the paper, which includes the Appendix, is available at https://arxiv.org/abs/1907.04662 as P \u03c0 t = (P \u03c0)t, the state distribution of the MC at time step t is d\u03c0 t = (P \u03c0 t )T d0, while d\u03c0 = limt\u2192\u221ed\u03c0 t is the steady state distribution. If the MC is ergodic, i.e., aperiodic and recurrent, it admits a unique steady-state distribution, such that d\u03c0 = (P \u03c0)T d\u03c0. The mixing time tmix of the MC describes how fast the state distribution converges to the steady state: tmix = min \b t \u2208N : supd0 \u2225d\u03c0 t \u2212d\u03c0\u2225\u221e\u2264\u03f5 \t , (1) where \u03f5 is the mixing threshold. An MC is reversible if the condition P \u03c0d\u03c0 = (P \u03c0)T d\u03c0 holds. Let \u03bb\u03c0 be the eigenvalues of P \u03c0. For ergodic reversible MCs the largest eigenvalue is 1 with multiplicity 1. Then, we can de\ufb01ne the second largest eigenvalue modulus \u03bb\u03c0(2) and the spectral gap \u03b3\u03c0 as: \u03bb\u03c0(2) = max \u03bb\u03c0(i)\u0338=1 |\u03bb\u03c0(i)|, \u03b3\u03c0 = 1 \u2212\u03bb\u03c0(2). (2) 3 Optimization Problems for Highly Exploring and Fast Mixing Policies In this section, we de\ufb01ne a set of optimization problems whose goal is to identify a stationary Markovian policy that effectively explores the state-action space. The optimization problem is introduced in three steps: \ufb01rst we ask for a policy that maximizes some lower bound to the steady-state distribution entropy, then we foster exploration over the action space by adding a constraint on the minimum action probability, and \ufb01nally we add another constraint to reduce the mixing time of the Markov chain induced by the policy. 3.1 Highly Exploring Policies over the State Space Intuitively, a good exploration policy should guarantee to visit the state space as uniformly as possible. In this view, a potential objective function is the entropy of the steady-state distribution induced by a policy over the MDP (Hazan et al. 2019). The resulting optimal policy is: \u03c0\u2217\u2208arg max \u03c0\u2208\u03a0 H(d\u03c0), (3) where H(d\u03c0) = \u2212Es\u223cd\u03c0\u0002 log d\u03c0(s) \u0003 is the state distribution entropy. Unfortunately, a direct optimization of this objective is particularly arduous since the steady-state distribution entropy is not a concave function of the policy (Hazan et al. 2019). To overcome this issue, a possible solution (Hazan et al. 2019) is to use the conditional gradient method, such that the gradients of the steady-state distribution entropy become the intrinsic reward in a sequence of approximate dynamic programming problems (Bertsekas 1995). In this paper, we follow an alternative route that consists in maximizing a lower bound to the policy entropy. In particular, in the following we will consider three lower bounds that lead to as many optimization problems (named In\ufb01nity, Frobenius, Column Sum) that show different trade-offs between theoretical guarantees and computational complexity. In\ufb01nity From the theory of Markov chains (Levin and Peres 2017), we know a necessary and suf\ufb01cient condition for a policy to induce a uniform steady-state distribution (i.e., to achieve the maximum possible entropy). We report this result in the following theorem. \fTheorem 3.1. Let P be the transition matrix of a given MDP. 
The steady-state distribution d\u03c0 induced by a policy \u03c0 is uniform over S iff the matrix P \u03c0 = \u03a0P is doubly stochastic. Unfortunately, given the constraints speci\ufb01ed by the transition matrix P , a stationary Markovian policy that induces a doubly stochastic P \u03c0 may not exist. On the other hand, it is possible to lower bound the entropy of the steady-state distribution induced by policy \u03c0 as a function of the minimum L\u221e-norm between P \u03c0 and any doubly stochastic matrix. Theorem 3.2. Let P be the transition matrix of a given MDP and P the space of doubly stochastic matrices. The entropy of the steady-state distribution d\u03c0 induced by a policy \u03c0 is lower bounded by: H(d\u03c0) \u2265log |S| \u2212|S| inf P u\u2208P \u2225P u \u2212\u03a0P \u22252 \u221e. The maximization of this lower bound leads to the following constrained optimization problem: minimize P u\u2208P,\u03a0\u2208\u03a0 \u2225P u \u2212\u03a0P \u2225\u221e (4) It is worth noting that this optimization problem can be reformulated as a linear program with |S|2 + |S||A| + |S| optimization variables and 2|S||S| + |S|2 + |S||A| inequality constraints and 3|S| equality constraints (the linear program formulation can be found in Appendix B.1). In order to avoid the exponential growth of the number of constraints as a function of the number of states, we are going to introduce alternative optimization problems. Frobenius It is worth noting that different transition matrices P \u03c0 having equal \u2225P u \u2212P \u03c0\u2225\u221emight lead to significantly different state distribution entropies H(d\u03c0), as the L\u221e-norm only accounts for the state corresponding to the maximum absolute row sum. The Frobenius norm can better captures the distance between P u and P \u03c0 over all the states, as discussed in Appendix C. For this reason, we have derived a lower bound to the policy entropy that replace the L\u221e-norm with the Frobenius one. Theorem 3.3. Let P be the transition matrix of a given MDP and P the space of doubly stochastic matrices. The entropy of the steady-state distribution d\u03c0 induced by a policy \u03c0 is lower bounded by: H(d\u03c0) \u2265log |S| \u2212|S|2 inf P u\u2208P \u2225P u \u2212\u03a0P \u22252 F . It can be shown (see Corollary A.1 in Appendix A) that the lower bound based on the Frobenius norm cannot be better (i.e., larger) than the one with the In\ufb01nite norm. However, we have the advantage that the resulting optimization problem has signi\ufb01cantly less constraints than Problem (4): minimize P u\u2208P,\u03a0\u2208\u03a0 \u2225P u \u2212\u03a0P \u2225F . (5) This problem is a (linearly constrained) quadratic problem with |S|2 + |S||A| optimization variables and |S|2 + |S||A| inequality constraints and 3|S| equality constraints. Column Sum Problems (4) and (5) are aiming at \ufb01nding a policy associated with a state transition matrix that is doubly stochastic. To achieve this result it is enough to guarantee that the column sums of the matrix P \u03c0 are all equal to one (Kirkland 2010). A measure that can be used to evaluate the distance to a doubly stochastic matrix can be the absolute sum of the difference between one and the column sums: P s\u2208S |1 \u2212P s\u2032\u2208S P \u03c0(s|s\u2032)| = \r \r \r \u0000I \u2212(\u03a0P )T \u0001 \u00b7 1|S| \r \r \r 1. The following theorem provides a lower bound to the policy entropy as a function of this measure. Theorem 3.4. Let P be the transition matrix of a given MDP. 
The entropy of the steady-state distribution d\u03c0 induced by a policy \u03c0 is lower bounded by: H(d\u03c0) \u2265log |S| \u2212|S| \r \r \r \r \u0010 I \u2212(\u03a0P )T \u0011 \u00b7 1|S| \r \r \r \r 2 1 . The optimization of this lower bound leads to the following linear program: minimize \u03a0\u2208\u03a0 \r \r \r \r \u0010 I \u2212(\u03a0P )T \u0011 \u00b7 1|S| \r \r \r \r 1 . (6) Besides being a linear program, unlike the other optimization problems presented, Problem (6) does not require to optimize over the space of all the doubly stochastic matrices, thus signi\ufb01cantly reducing the number of optimization variables (|S| + |S||A|) and constraints (2|S| + |S||A| inequalities and |S| equalities). The linear program formulation of Problem (6) can be found in Appendix B.2. 3.2 Highly Exploring Policies over the State and Action Space Although the policy resulting from the optimization of one of the above problems may lead to the most uniform exploration of the state space, the actual goal of the exploration phase is to collect enough information on the environment to optimize, at some point, a goal-conditioned policy (Pong et al. 2019). To this end, it is essential to have an exploratory policy that adequately covers the action space A in any visited state. Unfortunately, the optimization of Problems (4), (5), (6) does not guarantee even that the obtained policy is stochastic. Thus, we need to embed in the problem a secondary objective that takes into account the exploration over A. This can be done by enforcing a minimal entropy over actions in the policy to be learned, adding to (4), (5), (6) the following constraints: \u03c0(a|s) \u2265\u03be, \u2200s \u2208S, \u2200a \u2208A, (7) where \u03be \u2208[0, 1 |A|]. This secondary objective is actually in competition with the objective of uniform exploration over states. Indeed, an overblown incentive in the exploration over actions may limit the state distribution entropy of the optimal policy. Having a low probability of visiting a state decreases the likelihood of sampling an action from that state, hence, also reducing the exploration over actions. To illustrate that, Figure 1 shows state distribution entropies (H(d\u03c0)) and state-action distribution entropies, i.e., H(d\u03c0\u03a0), achieved by the optimal policy w.r.t. Problem (5) on the Single Chain domain (Furmston and Barber 2010) for different values of \u03be. \f0 0.25 0.5 0.6 0.8 1 \u03be H(d\u03c0) H(d\u03c0\u03a0) Figure 1: State distribution entropy (H(d\u03c0)), state-action distribution entropy (H(d\u03c0\u03a0)) for different values of \u03be on the Single Chain domain. 0.2 0.4 0.6 0.8 1 0.2 0.4 0.6 0.8 1 \u03b6 H(d\u03c0) \u03b3\u03c0 d\u03c0 80 (\u03b6 = 1) d\u03c0 80 (\u03b6 = 0.2) 0 0.02 0.04 Figure 2: State distribution entropy (H(d\u03c0)), spectral gap (\u03b3\u03c0) for different values of \u03b6 on the Single Chain domain (left). Color-coded state distribution overlaid on a 4-rooms gridworld for different values of \u03b6 (right). 3.3 An Objective to Make Highly Exploring Policies Mix Faster In many cases, such as in episodic tasks where the horizon for exploration is capped, we may have interest in trading inferior state entropy for faster convergence of the learned policy. Although the doubly stochastic matrices are equally valid in terms of steady-state distribution, the choice of the target P u strongly affects the mixing properties of the P \u03c0 induced by the policy. 
Indeed, while an MC with a uniform transition matrix, i.e., transition probabilities P u(s, s\u2032) = 1 |S| for any s, s\u2032, mixes in no time, an MC with probability one on the self-loops never converges to a steady state. This is evident considering that the mixing time tmix of an MC is trapped as follows (Levin and Peres 2017, Theorems 12.3 and 12.4): 1 \u2212\u03b3\u03c0 \u03b3\u03c0 log 1 2\u03f5 \u2264tmix \u22641 \u03b3\u03c0 log 1 d\u03c0 min\u03f5, (8) where \u03f5 is the mixing threshold, d\u03c0 min is a minorization of d\u03c0, and \u03b3\u03c0 is the spectral gap of P \u03c0 (2). From the literature of MCs, we know that a variant of the Problems (4), (5) having the uniform transition matrix as target P u and the L2 as matrix norm, is equivalent to the problem of \ufb01nding the fastest mixing transition matrix P \u03c0 (Boyd, Diaconis, and Xiao 2004). However, the choice of this target may overly limit the entropy over the state distribution induced by the optimal policy. Instead, we look for a generalization that allows us to prioritize fast exploration at will. Thus, we consider a continuum of relaxations in the fastest mixing objective by embedding in Problems (4) and (5) (but not in Problem (6)) the following constraints: P u(s, s\u2032) \u2264\u03b6, \u2200s, s\u2032 \u2208S, (9) where \u03b6 \u2208[ 1 |S|, 1]. By setting \u03b6 = 1 |S|, we force the optimization problem to consider the uniform transition matrix as a target, thus aiming to reduce the mixing time, while larger values of \u03b6 relax this objective, allowing us to get a higher steady-state distribution entropy. In Figure 2 we show how the parameter \u03b6 affects the trade-off between high steady-state entropy and low mixing times (i.e., high spectral gaps), reporting the values obtained by optimal policies w.r.t. Problem (5) for different \u03b6. 4 A Model-Based Algorithm for Highly Exploring and Fast Mixing Policies In this section, we present an approach to incrementally learn a highly exploring and fast mixing policy through interactions with an unknown environment, developing a novel modelbased exploration algorithm called Intrinsically-Driven Effective and Ef\ufb01cient Exploration ALgorithm (IDE3AL). Since Problems (4), (5), (6) requires an explicit representation of the matrix P , we need to estimate the transition model from samples before performing an objective optimization (modelbased approach). In tabular settings, this can be easily done by adopting the transition frequency as a proxy for the (unknown) transition probabilities, obtaining an estimated transition model \u02c6 P(s\u2032|s, a). However, in hard-exploration tasks, it can be arbitrarily arduous to sample transitions from the most dif\ufb01cult-to-reach states by relying on na\u00a8 \u0131ve exploration mechanisms, such as a random policy. To address the issue, we lean on an iterative approach in which we alternate model estimation phases with optimization sweeps of the objectives (4), (5) or (6). In this way, we combine the bene\ufb01t of collecting samples with highly exploring policies to better estimate the transition model and the bene\ufb01t of having a better-estimated model to learn superior exploratory policies. In order to foster the policy towards (s, a) pairs that have never been sampled, we keep their corresponding distribution \u02c6 P(\u00b7|s, a) to be uniform over all possible states, thus making the pair (s, a) particularly valuable in the perspective of the optimization problem. 
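One possible way to instantiate the optimization sweep of the Frobenius objective (5), together with the action-entropy constraint (7) and the fast-mixing constraint (9), is as a convex program, for instance with cvxpy. The encoding below is an illustrative assumption, not the authors' implementation.

```python
import cvxpy as cp
import numpy as np

def frobenius_exploration_policy(P, xi, zeta):
    """Solve a version of Problem (5) with constraints (7) and (9).

    P: numpy array of shape (S, A, S) with P[s, a, s'] = transition probability
       (the estimated model in the model-based setting).
    xi: minimum action probability (constraint (7)); must satisfy xi <= 1/A.
    zeta: cap on the entries of the target doubly stochastic matrix (constraint (9)).
    Returns the optimized tabular policy pi of shape (S, A).
    """
    S, A, _ = P.shape
    pi = cp.Variable((S, A), nonneg=True)      # policy pi(a|s)
    Pu = cp.Variable((S, S), nonneg=True)      # target doubly stochastic matrix
    # state transition matrix induced by pi: P_pi[s, s'] = sum_a pi[s, a] P[s, a, s']
    P_pi = cp.vstack([pi[s, :] @ P[s] for s in range(S)])
    constraints = [
        cp.sum(pi, axis=1) == 1,               # pi is row stochastic
        pi >= xi,                              # constraint (7)
        cp.sum(Pu, axis=1) == 1,               # Pu is doubly stochastic
        cp.sum(Pu, axis=0) == 1,
        Pu <= zeta,                            # constraint (9)
    ]
    problem = cp.Problem(cp.Minimize(cp.norm(Pu - P_pi, "fro")), constraints)
    problem.solve()
    return pi.value
```

A routine of this kind would be invoked at every iteration of the alternation between model estimation and optimization sweeps described above.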
The algorithm converges whenever the exploratory policy remains unchanged during consecutive optimization sweeps and, if we know the size of the MDP, when all state-action pairs have been suf\ufb01ciently explored. In Algorithm 1 we report the pseudo-code of IDE3AL. Finally, in Figure 3 we compare the iterative formulation against a not-iterative one, i.e., an approach that collects samples with a random policy and then optimizes the exploration objective off-line. Considering an exploration task on the Double Chain domain (Furmston and Barber 2010), we show that the iterative form has a clear edge in reducing the model estimation error \u2225P \u2212\u02c6 P \u2225F . Both the approaches employ a Frobenius formulation. \fAlgorithm 1 IDE3AL Input: \u03be, \u03b6, batch size N Initialize \u03c00 and transition counts C \u2208N|S|2\u00d7|A| for i = 0, 1, 2, . . . , until convergence do Collect N steps with \u03c0i and update C Estimate the transition model as: \u02c6 Pi(s\u2032|s, a) = ( C(s\u2032|s,a) P s\u2032 C(s\u2032|s,a), if C(\u00b7|s,a)>0 1/|S|, otherwise \u03c0i+1 \u2190optimal policy for (4) (or (5) or (6)), given the parameters \u03be, \u03b6, and \u02c6 Pi end for Output: exploratory policy \u03c0i 0 1,000 2,000 3,000 2 4 number of samples model estimation iterative not-iterative Figure 3: Model estimation error on the Double Chain with \u03be = 0.1, \u03b6 = 0.7, N = 10 (100 runs, 95% c.i.). 5 Experimental Evaluation In this section, we provide the experimental evaluation of IDE3AL. First, we show a set of experiments on the illustrative Single Chain and Double Chain domains (Furmston and Barber 2010; Peters, Mulling, and Altun 2010). The Single Chain consists of 10 states having 2 possible actions, one to climb up the chain from state 0 to 9, and the other to directly fall to the initial state 0. The two actions are \ufb02ipped with a probability pslip = 0.1, making the environment stochastic and reducing the probability of visiting the higher states. The Double Chain concatenates two Single Chain into a bigger one sharing the central state 9, which is the initial state. Thus, the chain can be climbed in two directions. These two domains, albeit rather simple from a dimensionality standpoint, are actually hard to explore uniformly, due to the high shares of actions returning to the initial state and preventing the agent to consistently reach the higher states. Then, we present an experiment on the much more complex Knight Quest environment (Fruit et al. 2018, Appendix), having |S| = 360 and |A| = 8. This domain takes inspiration from classical arcade games, in which a knight has to rescue a princess in the shortest possible time without being killed by the dragon. To accomplish this feat, the knight has to perform an intricate sequence of actions. In the absence of any reward, it is a fairly challenging environment for exploration. On these domains, we address the task of learning the best exploratory policy in a limited number of samples. Especially, we evaluate these policies in terms of the induced state entropy H(d\u03c0) and state-action entropy H(d\u03c0\u03a0). We compare our approach with MaxEnt (Hazan et al. 2019), the model-based algorithm to learn maximum entropy exploration that we have previously discussed in the paper, and a count-based approach inspired by the exploration bonuses of MBIE-EB (Strehl and Littman 2008), which we refer as CountBased in the following. 
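Before turning to the baselines, the iterative structure of Algorithm 1 (IDE3AL) can be summarized with the following schematic loop. It assumes a tabular Gym-style environment with integer states, inlines the transition-model estimation with uniform fallback, and calls the illustrative frobenius_exploration_policy sketch above; the convergence checks are omitted for brevity.

```python
import numpy as np

def ide3al(env, S, A, xi, zeta, batch_size, n_iterations, rng=None):
    """Schematic IDE3AL loop: alternate data collection, model estimation with
    uniform fallback for unseen (s, a) pairs, and an optimization sweep."""
    rng = rng or np.random.default_rng()
    counts = np.zeros((S, A, S))
    pi = np.full((S, A), 1.0 / A)                 # pi_0: uniform policy
    s = env.reset()
    for _ in range(n_iterations):
        # collect N steps with the current policy and update the transition counts
        for _ in range(batch_size):
            a = rng.choice(A, p=pi[s])
            s_next, _, done, *_ = env.step(a)
            counts[s, a, s_next] += 1
            s = env.reset() if done else s_next
        # estimate P_hat, keeping never-sampled (s, a) pairs uniform over next states
        totals = counts.sum(axis=2, keepdims=True)
        P_hat = np.where(totals > 0, counts / np.maximum(totals, 1), 1.0 / S)
        # optimization sweep of the chosen objective (Frobenius formulation)
        pi = frobenius_exploration_policy(P_hat, xi=xi, zeta=zeta)
        pi = np.clip(pi, xi, None)                # numerical hygiene on the solver output
        pi /= pi.sum(axis=1, keepdims=True)
    return pi
```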
The latter shares the same structure of our algorithm, but replace the policy optimization sweeps with approximate value iterations (Bertsekas 1995), where the reward for a given state is inversely proportional to the visit count of that state. It is worth noting that the results reported for the MaxEnt algorithm are related to the mixture policy \u03c0mix = (D, \u03b1), where D = (\u03c00, . . . , \u03c0k\u22121) is a set of k \u03f5-deterministic policies, and \u03b1 \u2208\u2206k is a probability distribution over D. For the sake of simplicity, we have equipped all the approaches with a little domain knowledge, i.e., the cardinality of S and A. However, this can be avoided without a signi\ufb01cant impact on the presented results. For every experiment, we will report the batch-size N, and the parameters \u03be, \u03b6 of IDE3AL. CountBased and MaxEnt employ \u03f5-greedy policies having \u03f5 = \u03be in all the experiments. In any plot, we will additionally provide the performance of a baseline policy, denoted as Random, that randomly selects an action in every state. Detailed information about the presented results, along with an additional experiment, can be found in Appendix D. First, in Figure 4, we compare the Problems (4), (5), (6) on the Single Chain environment. On one hand, we show the performance achieved by the exact solutions, i.e., computed with a full knowledge of P . While the plain formulations (\u03be = 0, \u03b6 = 1) are remarkably similar, adding a constraint over the action entropy (\u03be = 0.1) has a signi\ufb01cantly different impact. On the other hand, we illustrate the performance of IDE3AL, equipped with the alternative optimization objectives, in learning a good exploratory policy from samples. In this case, the Frobenius clearly achieves a better performance. In the following, we will report the results of IDE3AL considering only the best-performing formulation, which, for all the presented experiments, corresponds to the Frobenius. In Figure 5a, we show that IDE3AL compares well against the other approaches in exploring the Double Chain domain. It achieves superior state entropy and state-action entropy, and it converges faster to the optimum. It displays also a higher probability of visiting the least favorable state, and it behaves positively in the estimation of \u02c6 P . Notably, the CountBased algorithm fails to reach high exploration due to a detachment problem (Ecoffet et al. 2019), since it \ufb02uctuates between two exploratory policies that are greedy towards the two directions of the chain. By contrast, in a domain having a clear direction for exploration, such as the simpler Single Chain domain, CountBased ties the explorative performances of IDE3AL (Figure 5b). On the other hand, MaxEnt is effective in the exploration performance, but much more slower to converge, both in the Double Chain and the Single Chain. Note that in Figure 5a, the model estimation error of MaxEnt starts higher than the other, since it employs a different strategy to \ufb01ll the transition probabilities of never reached states, inspired by (Brafman and Tennenholtz 2002). 
[Figure 4: State distribution entropy (H(dπ)) and probability of the least favorable state (min dπ) for different objective formulations on the Single Chain domain. We report exact solutions with ζ = 0 (left), and approximate optimizations with ξ = 0.1, ζ = 0.7, N = 10 (100 runs, 95% c.i.) (right). Exact solutions, reported as H(dπ) / min dπ: Frobenius (ξ = 0): 0.98 / 6.4·10^-2; Infinity (ξ = 0): 0.98 / 6.4·10^-2; Column Sum (ξ = 0): 0.98 / 6·10^-2; Frobenius (ξ = 0.1): 0.94 / 4.1·10^-2; Infinity (ξ = 0.1): 0.89 / 2.6·10^-2; Column Sum (ξ = 0.1): 0.95 / 3.8·10^-2.] In Figure 5c, we present an experiment on the higher-dimensional Knight Quest environment. IDE3AL achieves a remarkable state entropy, while MaxEnt struggles to converge towards a satisfying exploratory policy. CountBased (not reported in Figure 5c, see Appendix D), fails to explore the environment altogether, oscillating between policies with low entropy. In Figure 5d, we illustrate how the exploratory policies learned in the Double Chain environment are effective to ease learning of any possible goal-conditioned policy afterwards. To this end, the exploratory policies, learned by the three approaches through 3000 samples (Figure 5a), are employed to collect samples in a fixed horizon (within a range from 10 to 100 steps). Then, a goal-conditioned policy is learned off-line through approximate value iteration (Bertsekas 1995) on this small amount of samples. The goal is to optimize a reward function that is 1 for the hardest state to reach (i.e., the state that is less frequently visited with a random policy), 0 in all the other states. In this setting, all the methods prove to be rather successful w.r.t. the baseline, though IDE3AL compares positively against the other strategies. 6 Discussion In this section, we first discuss how the proposed approach might be extended beyond tabular settings and an alternative formulation for the policy entropy optimization. Then, we consider some relevant work related to this paper. 6.1 Potential Extension to Continuous We believe that the proposed approach has potential to be extended to more general, continuous, settings, by exploiting the core idea of avoiding a probability concentration on a subset of outgoing transitions from a state. Indeed, a compelling feature of the presented lower bounds is that they characterize an infinite-step property, the entropy of the steady-state distribution, relying only on one-step quantities, i.e., without requiring to unroll several times the state transition matrix Pπ. In addition to this, the lower bounds provide an evaluation for the current policy, and they can be computed for any policy. Thus, we could potentially operate a direct search in the policy space through the gradient of an approximation of these lower bounds. To perform the approximation we could use a kernel for a soft aggregation over regions of the, now continuous, state space. 6.2 A Dual Formulation A potential alternative to deal with the optimization of the objective (3) is to consider its dual formulation. This is rather similar to the approach proposed in (Tarbouriech and Lazaric 2019) to address the different problem of active exploration in an MDP.
The basic idea is to directly maximize the entropy over the state-action stationary distribution and then to recover the policy afterwards. In this setting, we de\ufb01ne the state-action stationary distribution induced by a policy \u03c0 as \u03c9\u03c0 = d\u03c0\u03a0, where \u03c9\u03c0 is a vector of size |S||A| having elements \u03c9\u03c0(s, a). Since not all the distribution over the state-action space can be actually induced by a policy over the MDP, we characterize the set of feasible distributions: \u2126= \b \u03c9 \u2208\u2206(S \u00d7 A) : \u2200s \u2208S, X a\u2208A \u03c9(s, a) = X s\u2032\u2208S,a\u2032\u2208A P(s|s\u2032, a\u2032)\u03c9(s\u2032, a\u2032) \t . Then, we can formulate the Dual Problem as: maximize \u03c9\u2208\u2126 H(\u03c9) (10) Finally, let \u03c9\u2217denotes the solution of Problem (10), we can recover the policy inducing the optimal state-action entropy as \u03c0\u03c9\u2217(a|s) = \u03c9\u2217(s, a)/ P a\u2032\u2208A \u03c9\u2217(s, a\u2032), \u2200s \u2208S, \u2200a \u2208A. The Dual Problem displays some appealing features. Especially, the objective in (10) is already convex, so that it can be optimized right away, and it allows to explicitly maximize the entropy over the state-action space. Nonetheless, we think that this alternative formulation has three major shortcomings. First, the optimization of the convex program (10) could be way slower than the optimization of the linear programs Column Sum and In\ufb01nity (Gr\u00a8 otschel, Lov\u00b4 asz, and Schrijver 1993). Secondly, it does not allow to control the mixing time of the learned policy, which can be extremely relevant. Lastly, the applicability of the Dual Problem to continuous environments seems far-fetched. It is worth noting that, from an empirical evaluation, the dual formulation does not provide any signi\ufb01cant bene\ufb01t in the entropy of the learned policy w.r.t. the lower bounds formulations (see Appendix D). Figure 5e shows how the solve time of the Column Sum scales better with the number of variables (|S||A|) in incrementally large Knight Quest domains. \f0 1,000 2,000 3,000 0.6 0.7 0.8 0.9 number of samples state entropy 0 1,000 2,000 3,000 0.6 0.7 0.8 0.9 number of samples state-action entropy 0 1,000 2,000 3,000 0 0.5 1 1.5 \u00b710\u22122 number of samples min d\u03c0 0 1,000 2,000 3,000 0 2 4 6 8 number of samples model estimation (a) Double Chain 0 1,000 2,000 3,000 0.5 0.7 0.9 number of samples state entropy (b) Single Chain 0 0.5 1 \u00b7106 0.7 0.75 0.8 number of samples state entropy (c) Knight Quest 50 100 0 5 \u00b710\u22122 number of samples expected return (d) Goal-conditioned 0.5 1 \u00b7104 0 1 2 3 4 5 6 |S| \u00b7 |A| solve time (sec) (e) Dual comparison BBBBBB IDE3AL CountBased MaxEnt Random BBBB Column Sum Dual Figure 5: Comparison of the algorithms on exploration tasks (a, b, c) and goal-conditioned learning (d), with parameters \u03be = 0.1, \u03b6 = 0.7, N = 10 (a, b, d) and \u03be = 0.01, \u03b6 = 1, N = 2500 (c). (95% c.i. over 100 runs (a, b), 40 runs (c), 500 runs (d)). Comparison of the solve time (e) achieved by Column Sum and Dual formulations as a function of the number of variables. 6.3 Related Work As discussed in the previous sections, Hazan et al. (2019) consider an objective not that dissimilar to the one presented in this paper, even if they propose a fairly different solution to the problem. Their method learns a mixture of deterministic policies instead of a single stochastic policy. 
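For completeness, the dual formulation in Problem (10) above can also be written directly as a convex program. The cvxpy sketch below is an illustration under assumed array layouts, not the paper's code: it maximizes the entropy of the state-action stationary distribution under the flow constraints defining Ω and then recovers the associated policy.

```python
import cvxpy as cp
import numpy as np

def dual_max_entropy_policy(P):
    """Solve the dual formulation (Problem (10)) over the state-action distribution.

    P: numpy array of shape (S, A, S) with P[s, a, s'] = transition probability.
    Maximizes H(omega) subject to the stationarity constraints, then recovers
    pi(a|s) = omega(s, a) / sum_a' omega(s, a').
    """
    S, A, _ = P.shape
    omega = cp.Variable((S, A), nonneg=True)
    # inflow into each state s: sum_{s', a'} P[s', a', s] * omega[s', a']
    inflow = cp.hstack([cp.sum(cp.multiply(P[:, :, s], omega)) for s in range(S)])
    constraints = [
        cp.sum(omega) == 1,                       # omega is a distribution
        cp.sum(omega, axis=1) == inflow,          # stationarity (flow) constraint
    ]
    problem = cp.Problem(cp.Maximize(cp.sum(cp.entr(omega))), constraints)
    problem.solve()
    w = np.maximum(omega.value, 1e-12)
    return w / w.sum(axis=1, keepdims=True)       # pi(a|s)
```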
In a similar \ufb02avor, Tarbouriech and Lazaric (2019) develop an approach, based on a dual formulation of the objective, to learn a mixture of stochastic policies for active exploration. Other propose to intrinsically motivate the agent towards learning to reach all possible states in the environment (Lim and Auer 2012). To extend this same idea from the tabular setting to the context of a continuous, high-dimensional state space, Pong et al. (2019) employ a generative model to seek for a maximum-entropy goal distribution. Ecoffet et al. (2019) propose a method, called Go-Explore, to methodically reach any state by keeping an archive of any visited state and the best trajectory that brought the agent there. At each iteration, the agent draws a promising state from the archive, returns there replicating the stored trajectory (Go), then explores from this state trying to discover new states (Explore). Another promising intrinsic objective is to make value out of the exploration phase by acquiring a set of reusable skills, typically formulated by means of the option framework (Sutton, Precup, and Singh 1999). In (Barto, Singh, and Chentanez 2004), a set of options is learned by maximizing an intrinsic reward that is generated at the occurrence of some, user-de\ufb01ned, salient event. The approach proposed by Bonarini, Lazaric, and Restelli (2006), which presents some similarities with the work in (Ecoffet et al. 2019), is based on learning a set of options to return with high probability to promising states. In their context, a promising state presents high unbalance between the probabilities of the input and output transitions (Bonarini et al. 2006), so that it is both a hard state to reach, and a doorway to reach many other states. In this way, the learned options heuristically favor an even exploration of the state space. 7 Conclusions In this paper, we proposed a new model-based algorithm, IDE3AL, to learn highly exploring and fast mixing policies. The algorithm outputs a policy that maximizes a lower bound to the entropy of the steady-state distribution. We presented three formulations of the lower bound that differently tradeoff tightness with computational complexity of the optimization. The experimental evaluation showed that IDE3AL is able to achieve superior performance than other approaches striving for uniform exploration of the environment, while it avoids the risk of detachment and derailment (Ecoffet et al. 2019). Future works could focus on extending the applicability of the presented approach to non-tabular environments, following the blueprint in Section 6.1. We believe that this work provides a valuable contribution in view of solving the conundrum on what should a reinforcement learning agent learn in the absence of any reward coming from the environment. Acknowledgments This work has been partially supported by the Italian MIUR PRIN 2017 Project ALGADIMAR \u201cAlgorithms, Games, and Digital Market\u201d." + } + ], + "Filippo Lazzati": [ + { + "url": "http://arxiv.org/abs/2402.15392v1", + "title": "Offline Inverse RL: New Solution Concepts and Provably Efficient Algorithms", + "abstract": "Inverse reinforcement learning (IRL) aims to recover the reward function of\nan expert agent from demonstrations of behavior. It is well known that the IRL\nproblem is fundamentally ill-posed, i.e., many reward functions can explain the\ndemonstrations. 
For this reason, IRL has been recently reframed in terms of\nestimating the feasible reward set, thus, postponing the selection of a single\nreward. However, so far, the available formulations and algorithmic solutions\nhave been proposed and analyzed mainly for the online setting, where the\nlearner can interact with the environment and query the expert at will. This is\nclearly unrealistic in most practical applications, where the availability of\nan offline dataset is a much more common scenario. In this paper, we introduce\na novel notion of feasible reward set capturing the opportunities and\nlimitations of the offline setting and we analyze the complexity of its\nestimation. This requires the introduction an original learning framework that\ncopes with the intrinsic difficulty of the setting, for which the data coverage\nis not under control. Then, we propose two computationally and statistically\nefficient algorithms, IRLO and PIRLO, for addressing the problem. In\nparticular, the latter adopts a specific form of pessimism to enforce the novel\ndesirable property of inclusion monotonicity of the delivered feasible set.\nWith this work, we aim to provide a panorama of the challenges of the offline\nIRL problem and how they can be fruitfully addressed.", + "authors": "Filippo Lazzati, Mirco Mutti, Alberto Maria Metelli", + "published": "2024-02-23", + "updated": "2024-02-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction Inverse reinforcement learning (IRL), also called inverse optimal control, consists of recovering a reward function from expert\u2019s demonstrations (Russell, 1998). Speci\ufb01cally, the reward is required to be compatible with the expert\u2019s 1Politecnico di Milano, Milan, Italy 2Technion, Haifa, Israel. Correspondence to: Filippo Lazzati <\ufb01lippo.lazzati@polimi.it>. behavior, i.e., it shall make the expert\u2019s policy optimal. As pointed out in Arora & Doshi (2018), IRL allows mitigating the challenging task of the manual speci\ufb01cation of the reward function, thanks to the presence of demonstrations, and provides an effective method for imitation learning (Osa et al., 2018). In opposition to mere behavioral cloning, IRL allows focusing on the expert intent (instead of behavior) and, for this reason, it has the potential to reveal the underlying objectives that drive the expert\u2019s choices. In this sense, IRL enables interpretability, improving the interaction with the expert by explaining and predicting its behavior, and transferability, as the reward (more than a policy) can be employed under environment shifts (Adams et al., 2022). One of the main concerns of IRL is that the problem is inherently ill-posed or ambiguous (Ng & Russell, 2000), i.e., there exists a variety of reward functions compatible with expert\u2019s demonstrations. In the literature, many criteria for the selection of a single reward among the compatible ones were proposed (e.g., Ng & Russell, 2000; Ratliff et al., 2006; Ziebart et al., 2008; Boularias et al., 2011). Nevertheless, the ambiguity issue has limited the theoretical understanding of the IRL problem for a long time. Recently, IRL has been reframed by Metelli et al. (2021) into the problem of computing the set of all rewards compatible with expert\u2019s demonstrations, named feasible reward set (or just feasible set). 
By postponing the choice of a speci\ufb01c reward within the feasible set, this formulation has opened the doors to a new perspective that has enabled a deeper theoretical understanding of the IRL problem. The majority of previous works on the reconstruction of the feasible set have focused mostly on the online setting (e.g., Metelli et al., 2021; Lindner et al., 2022; Zhao et al., 2023; Metelli et al., 2023), in which the learner is allowed to actively interact with the environment and with the expert to collect samples. Although these works succeeded in obtaining sample ef\ufb01cient algorithms and represent a fundamental step ahead in the understanding of the challenges of the IRL problem (e.g., providing sample complexity lower bounds), the underlying basic assumption that the learner is allowed to govern the exploration and query the expert wherever is far from being realistic. Indeed, the most common IRL ap1 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms plications are naturally framed in an of\ufb02ine scenario, in which the learner is given in advance a dataset of trajectories of the expert (and, possibly, an additional dataset collected with a behavioral policy, e.g., Boularias et al. (2011)). Typically, no further interaction with the environment and with the expert is allowed (Likmeta et al., 2021). The of\ufb02ine setting has been widely studied in (forward) reinforcement learning (RL, Sutton & Barto, 2018), and a surge of works have analyzed the problem from theoretical and practical perspectives (e.g., Munos, 2007; Levine et al., 2020; Buckman et al., 2020; Yu et al., 2020; Jin et al., 2021). In this context, a powerful technique is represented by pessimism, which discourages the learner from assigning credit to options that have not been suf\ufb01ciently explored in the available dataset, allowing for sample ef\ufb01ciency guarantees (Buckman et al., 2020). The IRL of\ufb02ine setting has been investigated for the problem of recovering the feasible set in the recent preprint (Zhao et al., 2023). The authors consider the same feasible set de\ufb01nition employed for the online case, which enforces the optimality of the expert\u2019s policy in every state (Metelli et al., 2021; Lindner et al., 2022). However, in the of\ufb02ine setting, this learning target is unrealistic unless the dataset covers the full space. This implies that the produced rewards can be safely used in forward RL when the behavioral policy covers the whole reachable portion of the state-action space only. For this reason, Zhao et al. (2023) apply a form of pessimism which allows delivering rewards that make the expert\u2019s policy \u01eb-optimal even in the presence of partial covering of the behavioral policy but only when the latter is suf\ufb01ciently close to the expert\u2019s. This demanding requirements, however, collide with the intuition that, regardless the sampling policy, if we observe the expert\u2019s actions, we can deliver at least one reward making the expert\u2019s optimal.1 Desired Properties In this paper, we seek to develop novel appropriate solution concepts for the feasible reward set and new effective actionable algorithms for recovering them in the of\ufb02ine IRL setting. Speci\ufb01cally, we aim at ful\ufb01lling the following three key properties: (i) (Sample Ef\ufb01ciency) We should output, with high probability, an estimated feasible set using a number of samples polynomial w.r.t. 
the desired accuracy, error probability, and relevant sizes of the problem. (ii) (Computational Ef\ufb01ciency) We should be able to check the membership of a candidate reward in the feasible set in polynomial time w.r.t. the relevant sizes of the problem. (iii) (Inclusion Monotonicity) We should output one estimated feasible set that includes and one that is in1For instance, simply assign 0 when playing the expert actions and \u00b41 otherwise. R p RY R p R p RX Figure 1. R = set of all rewards, R = true feasible set, p RX and p RY = examples of inclusion monotonic estimated feasible set (i.e., p RY \u010eR\u010e p RY), p R = example of inclusion non-monotonic estimated feasible set (i.e., p R\u0118R and R\u0118 p R). cluded in the true feasible set with high probability. While properties (i) and (ii) are commonly requested, (iii) deserves some comments. Inclusion monotonicity, intuitively, guarantees that we produce a set that does not exclude any reward function that can be feasible and a set that includes only reward functions that are surely feasible, given the current samples (Figure 1). This, remarkably, allows delivering (with high probability) reward functions that make the expert\u2019s policy optimal (not just \u01eb-optimal) regardless of the accuracy with which the feasible set is recovered. Contributions The contributions of this paper are summarized as follows: \u2022 We propose a novel de\ufb01nition of feasible set that takes into account the intrinsic challenges of the of\ufb02ine setting (i.e., partial covering). Moreover, we introduce appropriate solution concepts which are learnable based on the coverage of the given dataset (Section 3). \u2022 We adapt the probably approximately correct (PAC) framework from Metelli et al. (2023) to our of\ufb02ine setting by proposing novel semimetrics which, differently from previous works, allow us to naturally deal with unbounded rewards (Section 4). \u2022 We present a novel algorithm, named IRLO (Inverse Reinforcement Learning for Of\ufb02ine data), for solving of\ufb02ine IRL. We show that it satis\ufb01es the requirements of (i) sample and (ii) computational ef\ufb01ciency (Section 5). \u2022 After having formally de\ufb01ned the notion of (iii) inclusion monotonicity, we propose a pessimism-based algorithm, named PIRLO (Pessimistic Inverse Reinforcement Learning for Of\ufb02ine data), that achieves (iii) inclusion monotonicity preserving sample and computational ef\ufb01ciency, at the price of a larger sample complexity (Section 6). \u2022 We discuss a speci\ufb01c application of our algorithm PIRLO for reward sanity check (Section 7). \u2022 We present a negative result for of\ufb02ine IRL when only 2 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms data from a deterministic expert are available (Section 8). Additional related works are reported in Appendix A. The proofs of all the results are reported in the Appendix B-J. 2. Preliminaries Notation Given a \ufb01nite set X, we denote by |X| its cardinality and by \u2206X :\u201ctqPr0,1s|X || \u0159 xPX qpxq\u201c1u the simplex on X. Given two sets X and Y, we denote the set of conditional distributions as \u2206X Y :\u201ctq:Y \u00d1\u2206X u. Given N PN, we denote JNK:\u201ct1,...,Nu. Given an equivalence relation \u201d\u010eX \u02c6X, and an item xPX, we denote by rxs\u201d the equivalence class of x. 
Markov Decision Processes (MDPs) without Reward A \ufb01nite-horizon Markov decision process (MDP, Puterman, 1994) without reward is de\ufb01ned as M:\u201cxS,A,\u00b50,p,Hy, where S is the \ufb01nite state space (S :\u201c|S|), A is the \ufb01nite action space (A:\u201c|A|), \u00b50 P\u2206S is the initial-state distribution, p\u201ctphuhPJHK where ph P\u2206S S\u02c6A for every hPJHK is the transition model, and H PN is the horizon. A policy is de\ufb01ned as \u03c0\u201ct\u03c0huhPJHK where \u03c0h P\u2206A S for every hPJHK. Pp,\u03c0 denotes the trajectory distribution induced by \u03c0 and Ep,\u03c0 the expectation w.r.t. Pp,\u03c0 (we omit \u00b50 in the notation). The state-action visitation distribution induced by p and \u03c0 is de\ufb01ned as \u03c1p,\u03c0 h ps,aq:\u201c Pp,\u03c0psh \u201cs,ah \u201caq and the state visitation distribution as \u03c1p,\u03c0 h psq:\u201c\u0159 aPA \u03c1p,\u03c0 h ps,aq, so that \u0159 sPS \u03c1p,\u03c0 h psq\u201c1 for every hPJHK. Additional De\ufb01nitions The sets of transition models, policies, and rewards are denoted as P :\u201c\u2206S S\u02c6A\u02c6JHK, \u03a0:\u201c \u2206A S\u02c6JHK, and R:\u201ctr:S \u02c6A\u02c6JHK\u00d1Ru, respectively.2 For every hPJHK, we de\ufb01ne the set of states and stateaction pairs reachable by \u03c0 at stage hPJHK as Sp,\u03c0 h :\u201ctsP S |\u03c1p,\u03c0 h psq\u01050u and Zp,\u03c0 h :\u201ctps,aqPS \u02c6A|\u03c1p,\u03c0 h ps,aq\u0105 0u, respectively. Moreover, we de\ufb01ne Sp,\u03c0 :\u201ctps,hq: hPJHK, sPSp,\u03c0 h u and Zp,\u03c0 :\u201ctps,a,hq:hPJHK, ps,aqP Zp,\u03c0 h u, with cardinality Sp,\u03c0 \u010fSH and Zp,\u03c0 \u010fSAH, respectively. We refer to these sets as the \u201csupport\u201d of \u03c1p,\u03c0. We denote the cardinality of the largest set Sp,\u03c0 h varying hPJHK, as Sp,\u03c0 max :\u201cmaxhPJHK |Sp,\u03c0 h |\u010fS. Finally, we denote the minimum of the state-action distribution on set Y \u010eS \u02c6A\u02c6JHK as \u03c1\u03c0,Y min :\u201cminps,a,hqPY \u03c1p,\u03c0 h ps,aq. Value Functions and Optimality The Q-function of policy \u03c0 with transition model p and reward function r is de\ufb01ned as Q\u03c0 hps,a;p,rq:\u201cEp,\u03c0r\u0159H t\u201ch rtpst,atq|sh \u201c s,ah \u201cas and the optimal Q-function as Q\u02da hps,a;p,rq:\u201c max\u03c0P\u03a0 Q\u03c0 hps,a;p,rq. The utility (i.e., expected return) of policy \u03c0 under the initial-state distribution \u00b50 is given by 2We remark that we consider real-valued rewards without requiring boundedness. Jp\u03c0;\u00b50,p,rq:\u201cEs\u201e\u00b50,a\u201e\u03c0p\u00a8|sqrQ\u03c0 1ps,a;p,rqs and the optimal utility by J\u02dap\u00b50,p,rq:\u201cmax\u03c0P\u03a0 Jp\u03c0;\u00b50,p,rq. An optimal policy \u03c0\u02da is a policy that maximizes the utility \u03c0\u02da Pargmax\u03c0P\u03a0 Jp\u03c0;\u00b50,p,rq. The existence of a deterministic optimal policy is guaranteed (Puterman, 1994). Equivalence Relations We introduce two equivalence relations: \u201dS (over policies) and \u201dZ (over transition models), de\ufb01ned for arbitrary S \u010eS \u02c6JHK and Z \u010eS \u02c6A\u02c6 JHK. Speci\ufb01cally, let \u03c0,\u03c01 P\u03a0 be two policies, we have: \u03c0\u201dS \u03c01 iff @ps,hqPS : \u03c0hp\u00a8|sq\u201c\u03c01 hp\u00a8|sq. (1) Similarly, let p,p1 PP, be two transition models, we have: p\u201dZ p1 iff @ps,a,hqPZ : php\u00a8|s,aq\u201cphp\u00a8|s,aq. (2) We will often use S \u201cSp,\u03c0 and Z \u201cZp,\u03c0 for some pPP and \u03c0P\u03a0. 
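To make the finite-horizon quantities above concrete, the following minimal NumPy sketch computes the state-action visitation distributions $\rho^{p,\pi}_h$ by forward propagation and the Q-functions $Q^\pi_h$ by backward induction; the array shapes, function names, and the dense tabular representation are assumptions made only for illustration.

```python
import numpy as np

# Tabular sketch of the quantities defined above (shapes are assumptions):
#   P[h, s, a, s1] : transition model p_h(s1 | s, a)
#   pi[h, s, a]    : policy pi_h(a | s)
#   mu0[s]         : initial-state distribution
#   r[h, s, a]     : a candidate reward function

def visitation_distributions(P, pi, mu0):
    """rho[h, s, a] = Pr(s_h = s, a_h = a) under the pair (p, pi)."""
    H, S, A, _ = P.shape
    rho = np.zeros((H, S, A))
    d = mu0.copy()                                   # state distribution at stage h
    for h in range(H):
        rho[h] = d[:, None] * pi[h]                  # joint state-action distribution
        d = np.einsum("sa,sat->t", rho[h], P[h])     # propagate one stage forward
    return rho

def q_functions(P, pi, r):
    """Backward induction for Q^pi_h(s, a; p, r) over the horizon H."""
    H, S, A, _ = P.shape
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)                             # value at stage H is zero
    for h in reversed(range(H)):
        Q[h] = r[h] + np.einsum("sat,t->sa", P[h], V_next)
        V_next = np.einsum("sa,sa->s", pi[h], Q[h])
    return Q
```

The supports $\mathcal{S}^{p,\pi}_h$ and $\mathcal{Z}^{p,\pi}_h$ used by the equivalence relations above correspond to the entries of rho with strictly positive probability.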
Intuitively, the equivalence relation \u201dSp,\u03c0 (resp. \u201dZp,\u03c0) group policies (resp. transition models) indistinguishable given the support Sp,\u03c0 (resp. Zp,\u03c0) of \u03c1p,\u03c0. Of\ufb02ine Setting We assume the availability of two datasets Db \u201ctxsb,i 1 ,ab,i 1 ,...,sb,i H\u00b41,ab,i H\u00b41,sb,i H yuiPJ\u03c4 bK and DE \u201ctxsE,i 1 ,aE,i 1 ,...,sE,i H\u00b41,aE,i H\u00b41,sE,i H yuiPJ\u03c4 EK of \u03c4 b and \u03c4 E independent trajectories collected by playing a behavioral policy \u03c0b and the expert\u2019s policy \u03c0E, respectively. Furthermore, we enforce the following assumption. Assumption 2.1 (Expert\u2019s covering). The behavioral policy \u03c0b plays with non-zero probability the actions prescribed by the expert\u2019s policy \u03c0E in its support Sp,\u03c0E: @ps,hqPSp,\u03c0E : \u03c0b hp\u03c0E h psq|sq\u01050. Assumption 2.1 holds when \u03c0b \u201c\u03c0E and generalizes that setting when the behavioral policy \u03c0b is \u201cmore explorative\u201d, possibly playing actions other than expert\u2019s ones.3 3. Solution Concepts for Of\ufb02ine IRL In this section, we introduce a novel de\ufb01nition of feasible reward set, discuss its learnability properties, and propose suitable solution concepts to be targeted for the of\ufb02ine IRL. A New De\ufb01nition of Feasible Set Let us start by recalling the original de\ufb01nition of feasible set presented in the literature and discussing its limitations for of\ufb02ine IRL. De\ufb01nition 3.1 (\u201cOld\u201d Feasible Set Rp,\u03c0E (Metelli et al., 2021)). Let M be an MDP without reward and let \u03c0E be the deterministic expert\u2019s policy. The \u201cold\u201d feasible set Rp,\u03c0E of rewards compatible with \u03c0E in M is de\ufb01ned as:4 Rp,\u03c0E :\u201ctrPR|@ps,hqPS \u02c6JHK, @aPA: 3We elaborate on the limits of learning with just a dataset collected with the expert\u2019s policy \u03c0E in Section 8. Moreover, we discuss how we can use a single dataset collected with \u03c0b, at the price of a slightly larger sample complexity, in Appendix D.1. 4Actually, Metelli et al. (2021) consider rewards bounded in r0,1s, while we consider all real-valued rewards in R. 3 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms Q\u03c0E h ps,\u03c0E h psq;p,rq\u011bQ\u03c0E h ps,a;p,rqu. (3) In words, Rp,\u03c0E contains all the reward functions that make the expert\u2019s policy optimal in every state-stage pair ps,hqPS \u02c6JHK. However, forcing the optimality of \u03c0E in states that are never reached from the initial-state distribution \u00b50 is unnecessary if our ultimate goal is to use the learned reward function r to train a policy \u03c0\u02da that achieves the maximum utility, i.e., \u03c0\u02da Pargmax\u03c0P\u03a0 Jp\u03c0;\u00b50,p,rq. This suggests an alternative de\ufb01nition of feasible set. De\ufb01nition 3.2 (Feasible Set Rp,\u03c0E). Let M be an MDP without reward and let \u03c0E be the deterministic expert\u2019s policy. The feasible set Rp,\u03c0E of rewards compatible with \u03c0E in M is de\ufb01ned as: Rp,\u03c0E :\u201ctrPR|Jp\u03c0E;\u00b50,p,rq\u201cJ\u02dap\u00b50,p,rqu. In words, Rp,\u03c0E contains all the reward functions that make the expert\u2019s policy \u03c0E a utility maximizer. Clearly, since De\ufb01nition 3.1 enforces optimality uniformly over S \u02c6JHK, we have the inclusion Rp,\u03c0E \u010eRp,\u03c0E, where the equality holds when Sp,\u03c0E \u201cS \u02c6JHK, i.e., r\u03c0Es\u201dSp,\u03c0E \u201c t\u03c0Eu. 
The following result formalizes the intuition that for Rp,\u03c0E, differently from Rp,\u03c0E, the expert\u2019s policy \u03c0E has to be optimal (as in Equation 3) in a subset of S \u02c6JHK only. Theorem 3.1. In the setting of De\ufb01nition 3.2, the feasible reward set Rp,\u03c0E satis\ufb01es: Rp,\u03c0E \u201ctrPR|@\u03c0 Pr\u03c0Es\u201dSp,\u03c0E ,@ps,hqPSp,\u03c0E, @aPA: Q\u03c0 hps,\u03c0E h psq;p,rq\u011bQ\u03c0 hps,a;p,rqu. (4) Theorem 3.1 shows that the optimal action induced by a reward rPRp,\u03c0E outside Sp,\u03c0E, i.e., outside the support of \u03c1p,\u03c0E induced by the expert\u2019s policy \u03c0E, is not relevant. The optimality condition of Equation (4) is requested for all the policies \u03c0 that play the expert\u2019s action within its support. Intuitively, those policies cover the same portion of state space as \u03c0E, i.e., Sp,\u03c0 \u201cSp,\u03c0E and, since they all prescribe the same action in there,5 they all achieve the same utility, i.e., Jp\u03c0;\u00b50,p,rq\u201c Jp\u03c0E;\u00b50,p,rq\u201cJ\u02dap\u00b50,p,rq. Thus, if we train an RL agent with a reward function p rPRp,\u03c0EzRp,\u03c0E, we obtain a policy p \u03c0 Pr\u03c0Es\u201dSp,\u03c0E , i.e., playing optimal actions inside Sp,\u03c0E. Clearly, p \u03c0 will prescribe different actions than \u03c0E outside Sp,\u03c0E, but this is irrelevant since those states will never be reached by p \u03c0. This has important consequences from the of\ufb02ine IRL perspective. Indeed, we can recover this new notion Rp,\u03c0E (De\ufb01nition 3.2) without the knowledge of \u03c0E in the states outside Sp,\u03c0E. Instead, to learn the old notion Rp,\u03c0E (De\ufb01nition 3.1), we would need to 5It is worth noting that, since ps,hqPSp,\u03c0E, the following identity hold: Q\u03c0 hps,\u03c0E h psq;p,rq\u201cQ\u03c0E h ps,\u03c0E h psq;p,rq. enforce that the policy used to collect samples (either \u03c0E or \u03c0b) covers the full space S \u02c6JHK.6 Solution Concepts and Learnability To compute the feasible set Rp,\u03c0E, we need to learn the expert\u2019s policy \u03c0E h psq in every ps,hqPSp,\u03c0E and the transition model php\u00a8|s,aq in every ps,a,hqPS \u02c6A\u02c6JHK (to compare the Q-functions). In the online setting (e.g., Metelli et al., 2021) this is a reasonable requirement because the learner can explore the environment, and, thus, collect samples over the whole S \u02c6A\u02c6JHK space.7 However, in our of\ufb02ine setting, even in the limit of in\ufb01nite samples, triples ps,a,hqRZp,\u03c0b, i.e., outside the support of \u03c1p,\u03c0b are never sampled. Thus, we can identify the transition model p up to its equivalence class rps\u201d Zp,\u03c0b only. Intuitively, this means that, unless Zp,\u03c0b \u201cS \u02c6A\u02c6JHK, i.e., \u03c0b covers the entire space, the problem of estimating the feasible set Rp,\u03c0E of\ufb02ine is not learnable.6 Thus, instead of Rp,\u03c0E directly, we propose to target as solution concepts (i) the largest learnable set of rewards contained into Rp,\u03c0E, and (ii) the smallest learnable set of rewards that contains Rp,\u03c0E, de\ufb01ned as follows. De\ufb01nition 3.3 (Suband Super-Feasible Sets). Let M be an MDP without reward and let \u03c0E be the deterministic expert\u2019s policy. We de\ufb01ne the sub-feasible set RX p,\u03c0E and the super-feasible set RY p,\u03c0E as: RX p,\u03c0E :\u201c \u010d p1Prps\u201d Zp,\u03c0b Rp1,\u03c0E, RY p,\u03c0E :\u201c \u010f p1Prps\u201d Zp,\u03c0b Rp1,\u03c0E. 
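For a small tabular instance in which the transition model is fully known, membership of a candidate reward in the feasible set of Definition 3.2 can be checked by brute force with backward induction, reusing the q_functions routine sketched after the preliminaries. This only illustrates the definition; the sub- and super-feasible sets of Definition 3.3 additionally require reasoning over the whole equivalence class of transition models, which is what the algorithms of the following sections address.

```python
import numpy as np

def optimal_q_functions(P, r):
    """Backward induction for the optimal Q-function Q*_h(s, a; p, r)."""
    H, S, A, _ = P.shape
    Q = np.zeros((H, S, A))
    V_next = np.zeros(S)
    for h in reversed(range(H)):
        Q[h] = r[h] + np.einsum("sat,t->sa", P[h], V_next)
        V_next = Q[h].max(axis=1)                    # greedy value
    return Q

def is_feasible(P, pi_E, mu0, r, tol=1e-9):
    """Definition 3.2: r is feasible iff J(pi_E; mu0, p, r) = J*(mu0, p, r)."""
    Q_E = q_functions(P, pi_E, r)                    # routine from the earlier sketch
    Q_star = optimal_q_functions(P, r)
    J_E = float(np.einsum("s,sa,sa->", mu0, pi_E[0], Q_E[0]))
    J_star = float(mu0 @ Q_star[0].max(axis=1))
    return J_E >= J_star - tol
```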
Since pPrps\u201d Zp,\u03c0b , we \u201csqueeze\u201d the feasible set Rp,\u03c0E between these two learnable solution, i.e., RX p,\u03c0E \u010e Rp,\u03c0E \u010eRY p,\u03c0E. A more explicit representation is given as follows: RX p,\u03c0E \u201ctrPR|@p1 Prps\u201d Zp,\u03c0b ,@\u03c0 Pr\u03c0Es\u201d Sp,\u03c0E , @ps,hqPSp,\u03c0E,@aPA : Q\u03c0 hps,\u03c0Epsq;p1,rq\u011bQ\u03c0 hps,a;p1,rqu, RY p,\u03c0E \u201ctrPR|Dp1Prps\u201d Zp,\u03c0b ,@\u03c0 Pr\u03c0Es\u201dSp,\u03c0E , @ps,hqPSp,\u03c0E,@aPA : Q\u03c0 hps,\u03c0Epsq;p1,rq\u011bQ\u03c0 hps,a;p1,rqu. Intuitively, to be robust against the missing knowledge of the transition model outside Zp,\u03c0b, we have to account for all the possible p1 Prps\u201d Zp,\u03c0b and retain the rewards compatible with all of them (for the sub-feasible set RX p,\u03c0E) and with at least one of them (for super-feasible set RY p,\u03c0E), 6A formal de\ufb01nition of learnability and the proofs that Rp,\u03c0E and Rp,\u03c0E are not learnable under partial cover (i.e., Sp,\u03c0E \u2030 S \u02c6JHK and Zp,\u03c0b \u2030S \u02c6A\u02c6JHK) are reported in Appendix C. 7This is true for the generative model case. In a forward model, in which we are allowed to interact through trajectories, we just need to learn the transition model in all state-action pairs ps,a,hq reachable from \u00b50 with any policy, i.e., ps,a,hqP \u0164 \u03c0P\u03a0 Zp,\u03c0. 4 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms as apparent from the quanti\ufb01ers. Moreover, when Zp,\u03c0b \u201c S \u02c6A\u02c6JHK, i.e., rps\u201d Zp,\u03c0b \u201ctpu, we have the equality: RX p,\u03c0E \u201cRp,\u03c0E \u201cRY p,\u03c0E. We now show that the RX p,\u03c0E and RY p,\u03c0E are indeed the tightest learnable subset and superset of Rp,\u03c0E (formal statement and proof in Appendix B). Theorem 3.2. (Informal) Let M be an MDP without reward, let \u03c0E and \u03c0b be the deterministic expert\u2019s policy and the behavioral policy, respectively. Then, RX p,\u03c0E and RY p,\u03c0E are the tightest subset and superset of Rp,\u03c0E learnable from data collected in M by executing \u03c0b and \u03c0E. 4. PAC Framework We now propose a PAC framework for learning RX p,\u03c0E and RY p,\u03c0E from datasets DE and Db, collected with \u03c0E and \u03c0b. We \ufb01rst present the functions to evaluate the dissimilarity between feasible sets and, then, de\ufb01ne the PAC requirement. Dissimilarity Functions Being RX p,\u03c0E and RY p,\u03c0E sets of rewards, we need (i) a function to assess the dissimilarity between items (i.e., reward functions), and (ii) a way of converting it into a dissimilarity function between sets (i.e., the suband super-feasible sets) (Metelli et al., 2021). For (i), we propose the following two semimetrics. De\ufb01nition 4.1 (Semimetrics d and d8 between rewards). Let M be an MDP without reward and let \u03c0E be the expert\u2019s policy. Let \u03c0b be the behavioral policy and let tZp,\u03c0b h uh be its support. Given two reward functions r,p rP R, we de\ufb01ne d:R\u02c6R\u00d1R and d8 :R\u02c6R\u00d1R as: dpr,p rq:\u201c 1 Mpr,p rq \u00ff hPJHK \u00b4 E ps,aq\u201e\u03c1p,\u03c0b h \u02c7 \u02c7rhps,aq\u00b4p rhps,aq \u02c7 \u02c7 ` max ps,aqRZp,\u03c0b h \u02c7 \u02c7rhps,aq\u00b4p rhps,aq \u02c7 \u02c7 \u00af , d8pr,p rq:\u201c 1 Mpr,p rq \u00ff hPJHK }rh \u00b4p rh}8, where Mpr,p rq:\u201cmax \u2423 }r}8,}p r}8 ( . Moreover, we conventionally set both d and d8 to 0 when Mpr,p rq\u201c0. 
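For tabular rewards, the two semimetrics of Definition 4.1 can be evaluated literally as follows; the array layout is an assumption, and the support $\mathcal{Z}^{p,\pi^b}_h$ is taken to be the set of entries with positive visitation probability.

```python
import numpy as np

def reward_semimetrics(r1, r2, rho_b):
    """Semimetrics d and d_inf of Definition 4.1 for tabular rewards r[h, s, a].
    rho_b[h, s, a] is the visitation distribution of the behavioral policy; its
    support Z^{p,pi_b}_h is taken as the entries with rho_b > 0."""
    M = max(np.abs(r1).max(), np.abs(r2).max())
    if M == 0.0:
        return 0.0, 0.0                              # conventional value when M(r, r') = 0
    diff = np.abs(r1 - r2)
    d, d_inf = 0.0, 0.0
    for h in range(r1.shape[0]):
        inside = rho_b[h] > 0
        expected = float((rho_b[h] * diff[h]).sum())         # E_{rho_h} |r - r'|
        outside = float(diff[h][~inside].max()) if (~inside).any() else 0.0
        d += expected + outside                              # l1 term inside Z, sup outside
        d_inf += float(diff[h].max())                        # plain sup norm per stage
    return d / M, d_inf / M
```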
First, d8 corresponds to the \u21138-norm between reward functions, while d combines the \u21131-norm between rewards in Zp,\u03c0b weighted by the visitation distribution of the behavioral policy \u03c1p,\u03c0b and the \u21138-norm outside Zp,\u03c0b. The intuition is that, inside Zp,\u03c0b, we weigh the error based on the number of samples, which are collected by \u03c0b. Instead, outside Zp,\u03c0b, we can afford the \u21138-norm because we adopt as solution concepts RX p,\u03c0E and RY p,\u03c0E that intrinsically manage the lack of samples, so that we can con\ufb01dently achieve zero error in that region. Second, it is easy to verify that both d and d8 are semimetrics.8 Third, 8A semimetric ful\ufb01lls all the properties of a metric except for the two semimetrics are related by the following double inequality: dpr,r1q\u010f2d8pr,r1q\u010f2dpr,r1q{\u03c1\u03c0b,Zp,\u03c0b min (see Proposition E.2), where \u03c1\u03c0b,Zp,\u03c0b min \u01050 by de\ufb01nition. Finally, the normalization term 1{Mpr,p rq enforces that dpr,r1q and d8pr,r1q lie in r0,2Hs for every r,r1 PR. Differently from previous works (e.g., Metelli et al., 2021; Lindner et al., 2022), this term allows to deal with (unbounded) realvalued rewards more naturally and effectively, at the price of accepting a relaxed triangular inequality. Then, for obtaining a dissimilarity function between reward sets (ii), we make use of the Hausdorff distance. De\ufb01nition 4.2 (Hausdorff distance (Rockafellar & Wets, 1998)). Let R, p R\u010eR be two sets of reward functions, and let cPtd,d8u. The Hausdorff distance between R and p R with inner distance c:R\u02c6R\u00d1R is de\ufb01ned as: HcpR, p Rq:\u201cmax ! sup rPR inf p rP p R cpr,p rq,sup p rP p R inf rPRcpr,p rq ) . (5) Moreover, we abbreviate Hd8 with H8. Since the feasible sets are closed (see Appendix I), using d or d8, the Hausdorff distance is a semimetric and sati\ufb01es a relaxed triangle inequality too. Thus, HcpR, p Rq\u201c0 if and only if the two sets coincide, i.e., R\u201c p R. p\u01eb,\u03b4q-PAC Requirement We now formally de\ufb01ne the sample ef\ufb01ciency requirement. To distinguish between the two semimetrics d and d8, we denote by c-IRL the problem of estimating RX p,\u03c0E and RY p,\u03c0E under Hc, where cPtd,d8u. De\ufb01nition 4.3 (p\u01eb,\u03b4q-PAC Algorithm). Let \u01ebPr0,2Hs and \u03b4Pp0,1q. An algorithm A outputting the estimated suband super-feasible sets p RX and p RY is p\u01eb,\u03b4q-PAC for c-IRL if: P pp,\u03c0E,\u03c0bq `\u2423 HcpRX p,\u03c0E, p RXq\u010f\u01eb ( X \u2423 HcpRY p,\u03c0E, p RYq\u010f\u01eb (\u02d8 \u011b1\u00b4\u03b4, where Ppp,\u03c0E,\u03c0bq denotes the probability measure induced by \u03c0E and \u03c0b in M. The sample complexity is the number of trajectories \u03c4 E and \u03c4 b in DE and Db, respectively. 5. Inverse Reinforcement Learning for Of\ufb02ine data (IRLO) Our goal is to devise an algorithm that is (i) statistically, (ii) computationally ef\ufb01cient, and that (iii) guarantees the inclusion monotonicity property. As a warm-up, in this section we present IRLO (Inverse Reinforcement Learning for Of\ufb02ine data), ful\ufb01lling (i) and (ii), but not (iii). Algorithm The pseudo-code of IRLO is reported in Algorithm 1 (IRLO box). It receives two datasets DE and Db of trajectories collected by policies \u03c0E and \u03c0b, respecthe triangular inequality. We show in Appendix I that our semimetrics ful\ufb01ll a \u201crelaxed\u201d form of triangular inequality. 
5 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms Algorithm 1 IRLO and PIRLO. Input :Datasets DE \u201ctxsE,i h ,aE,i h yhui, Db \u201ctxsb,i h ,ab,i h yhui Output :Estimated suband super-feasible sets p RX, p RY 1 Estimate the expert\u2019s support: p Sp,\u03c0E \u00d0tps,hqPS \u02c6JHK|DiPJ\u03c4 EK: sE,i h \u201csu 2 Estimate the expert\u2019s policy: for ps,hqP p Sp,\u03c0E do 3 p \u03c0E h psq\u00d0aE,i h for some iPJ\u03c4 EK s.t. si h \u201cs 4 end 5 Estimate the state-action behavioral policy support: p Zp,\u03c0b \u00d0tps,a,hqPS\u02c6A\u02c6JHK|DiPJ\u03c4 bK:psb,i h ,ab,i h q\u201cps,aqu 6 Compute the counts for every ps,a,hqP p Zp,\u03c0b and s1 PS: N b hps,a,s1q\u00d0\u0159 iPJ\u03c4bK 1tpsb,i h ,ab,i h ,sb,i h`1q\u201cps,a,s1qu N b hps,aq\u00d0\u0159 s1PS N b hps,a,s1q 7 Estimate the transition model: for ps,a,hqP p Zp,\u03c0b do 8 for s1 PS do 9 p phps1|s,aq\u00d0 Nb hps,a,s1q maxt1,Nb hps,aqu 10 end 11 end 12 IRLO Compute RX p p,p \u03c0E and RY p p,p \u03c0E with De\ufb01nition 3.3 using p p, p \u03c0E, p Zp,\u03c0b, and p Sp,\u03c0E return pRX p p,p \u03c0E,RY p p,p \u03c0Eq 13 PIRLO Compute the con\ufb01dence set Cpp p,bq via Eq. (7) Compute r RX p p,p \u03c0E and r RY p p,p \u03c0E with Eq. (9) using p p, p \u03c0E, p Zp,\u03c0b, and p Sp,\u03c0E return p r RX p p,p \u03c0E, r RY p p,p \u03c0Eq tively, and it outputs the estimated suband super-feasible sets p RX and p RY as estimates of RX p,\u03c0E and RY p,\u03c0E, respectively. IRLO leverages DE to compute the empirical estimates of the expert\u2019s support Sp,\u03c0E and policy \u03c0E, denoted by p Sp,\u03c0E and p \u03c0E (lines 1-3), and it uses Db to compute the empirical estimates of the behavioral policy support Zp,\u03c0b, and of the transition model p, denoted by p Zp,\u03c0b and p p (lines 5-9). Finally, it returns the suband super-feasible sets computed with the estimated supports, expert\u2019s policy, and transition model: p RX \u201cRX p p,p \u03c0E and p RY \u201cRY p p,p \u03c0E (line 12). Computationally Ef\ufb01cient Implementation In Algorithm 1, IRLO outputs the estimated feasible sets p RY and p RX obtained by computing the intersection and the union of a continuous set of transition models (De\ufb01nition 3.3). To show the computational ef\ufb01ciency of IRLO, we provide in Appendix G (Algorithm 2, IRLO box) a polynomialtime membership checker that tests whether a candidate reward function rPR belongs to p RY and/or p RX. We apply extended value iteration (EVI, Auer et al., 2008) to compute an upper bound Q` and a lower bound Q\u00b4 of the Qfunction induced by the candidate reward r and varying the transition model in a set C. For the IRLO algorithm, C corresponds to the equivalence class of the empirical estimate p p induced by the empirical support p Zp,\u03c0b, i.e., rp ps\u201d x Zp,\u03c0b : C :\u201c ! p1 PP |@ps,a,hqP p Zp,\u03c0b : p1 hp\u00a8|s,aq\u201c p php\u00a8|s,aq ) . (6) The algorithm has a time complexity of order OpHS2Aq. Sample Complexity Analysis We now show that the IRLO algorithm is statistically ef\ufb01cient. The following theorem provides a polynomial upper bound to its sample complexity. Theorem 5.1. Let M be an MDP without reward and let \u03c0E be the expert\u2019s policy. Let DE and Db be two datasets of \u03c4 E and \u03c4 b trajectories collected with policies \u03c0E and \u03c0b in M, respectively. 
Under Assumption 2.1, IRLO is p\u01eb,\u03b4qPAC for d-IRL with a sample complexity at most: \u03c4 b \u010f r O \u02dc H3Zp,\u03c0b ln 1 \u03b4 \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0b,Zp,\u03c0b min \u00b8 , \u03c4 E \u010f r O \u02dc ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0E ,Zp,\u03c0E min \u00b8 . Some comments are in order. First, we observe that the sample complexity for the expert\u2019s dataset \u03c4 E is constant and depends on the minimum non-zero value of the visitation distribution \u03c1\u03c0E,Zp,\u03c0E min \u01050, but it does not depend on the desired accuracy \u01eb. This accounts for the minimum number of samples to have p Sp,\u03c0E \u201cSp,\u03c0E, with high probability. Second, the sample complexity for the behavioral policy dataset \u03c4 b displays a tight dependence on the desired accuracy \u01eb and a dependence of order H4 on the horizon, since, in the worst case, Zp,\u03c0b \u010fSAH. Moreover, we notice the two-regime behavior represented by lnp1{\u03b4q`Sp,\u03c0b max (i.e., small and large \u03b4) as in previous works (Kaufmann et al., 2021; Metelli et al., 2023). This term is multiplied by an additional lnp1{\u03b4q term, which always appears in of\ufb02ine (forward) RL (Xie et al., 2021) and it is needed to control the minimum number of samples collected from every reachable state-action pair. Finally, we observe a dependence analogous to that of \u03c4 E on the minimum non-zero value of the visitation distribution \u03c1\u03c0b,Zp,\u03c0b min \u01050, to ensure that p Zp,\u03c0b \u201cZp,\u03c0b. Note that when \u03c0b \u201c\u03c0E, Assumption 2.1 is ful\ufb01lled, and the sample complexity reduces to: \u03c4 E \u010f r O \u02dc H3Sp,\u03c0E ln 1 \u03b4 \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0E max \u02d9 ` ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0E ,Zp,\u03c0E min \u00b8 . Since Sp,\u03c0E \u010fS, the dependence on the number of actions is no longer present. An analogous result holds for d8. Theorem 5.2. Under the conditions of Theorem 5.1, IRLO 6 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms is p\u01eb,\u03b4q-PAC for d8-IRL with a sample complexity at most: \u03c4 b \u010f r O \u02dc H4 ln 1 \u03b4 \u03c1\u03c0b,Zp,\u03c0b min \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0b,Zp,\u03c0b min \u00b8 , and \u03c4 E is bounded as in Theorem 5.1. We note that, since 1{\u03c1\u03c0b,Zp,\u03c0b min \u011bSA, Theorem 5.2 delivers a larger sample complexity w.r.t. Theorem 5.1. This is expected because of the relation dpr,r1q\u010f2d8pr,r1q between the two semimetrics (see Proposition E.2). 6. Pessimistic Inverse Reinforcement Learning for Of\ufb02ine data (PIRLO) In this section, we present our main algorithm, PIRLO (Pessimistic Inverse Reinforcement Learning for Of\ufb02ine data). Beyond statistical and computational ef\ufb01ciency, PIRLO provides guarantees on the inclusion monotonicity of the proposed feasible sets by embedding a form of pessimism.9 Before presenting the algorithm, we formally introduce the notion of inclusion monotonicity and intuitively justify it. Thanks to the PAC property (Theorem 5.1), in the limit of in\ufb01nite samples \u03c4 b,\u03c4 E \u00d1`8, IRLO recovers exactly the subp RX \u00d1RX p,\u03c0E and the superp RY \u00d1RY p,\u03c0E feasible sets, and, consequently, the property p RX \u010eRp,\u03c0E \u010e p RY holds. 
Because of the meaning of these sets, i.e., the tightest learnable subset RX p,\u03c0E and superset RY p,\u03c0E of the feasible set Rp,\u03c0E, it is desirable to ensure the property p RX \u010eRp,\u03c0E \u010e p RY (in high probability) in the \ufb01nite samples regime \u03c4 b,\u03c4 E \u010f`8 too. The following de\ufb01nition formalizes the property. De\ufb01nition 6.1 (Inclusion Monotonic Algorithm). Let \u03b4P p0,1q. An algorithm A outputting the estimated suband super-feasible sets p RX and p RY is \u03b4-inclusion monotonic if: P pp,\u03c0E,\u03c0bq \u00b4 p RX \u010eRp,\u03c0E \u010e p RY\u00af \u011b1\u00b4\u03b4. Clearly, one can always choose p RX \u201ctu and p RY \u201cR to satisfy De\ufb01nition 6.1. Thus, the inclusion monotonicity property will be always employed in combination with the PAC requirement (De\ufb01nition 4.3). The importance of monotonicity will arise from a practical viewpoint in Section 7. Algorithm The pseudocode of PIRLO is shown in Algorithm 1 (PIRLO box). The \ufb01rst part (lines 1-11) is analo9We remark on the substantial difference between our use of pessimism and that of Zhao et al. (2023). Indeed, we apply pessimism to feasible sets to ensure that the estimated set ful\ufb01lls the inclusion monotonicity property, while Zhao et al. (2023) apply pessimism to ensure the entry-wise monotonicity of the reward function, i.e., p rps,aq\u013arps,aq, for all p rP p R and r PR. gous to IRLO and the main difference lies in the presence of the con\ufb01dence set Cpp p,bq\u010eP (line 13), containing the transition models in P close in \u21131-norm to the empirical estimate p p, except the ones that are not compatible with expert\u2019s actions. Formally, Cpp p,bq is de\ufb01ned as:10 Cpp p,bq:\u201c ! p1 PP| @ps,hqP p Sp,\u03c0E, s1 R p Sp,\u03c0E h`1 : p1 hps1|s,p \u03c0E h psqq\u201c0 @ps,a,hqP p Zp,\u03c0b : }p1 hp\u00a8|s,aq\u00b4 p php\u00a8|s,aq}1 \u010fp bhps,aq ) , (7) where p bhps,aq is de\ufb01ned in Equation (18). The intuition is that, with high probability, the true transition model p, and its equivalence class rps\u201d Zp,\u03c0b , will belong to Cpp p,bq. Drawing inspiration from pessimism in RL, PIRLO \u201cpenalizes\u201d the estimates of the feasible set by removing from p RX the rewards for which we are not con\ufb01dent enough of their membership to RX p,\u03c0E, and by adding to p RY the rewards for which we are not con\ufb01dent enough of their nonmembership to RY p,\u03c0E, based on the con\ufb01dence set Cpp p,bq on the transition model. This translates into the following expressions: p RX \u201c \u010d p1PCpp p,bq RX p1,p \u03c0E, p RY \u201c \u010f p1PCpp p,bq RY p1,p \u03c0E. (8) This way, if pPCpp p,bq and p \u03c0E \u201c\u03c0E with hight probability, we have that, simultaneously, p RX \u010eRX p,\u03c0E and RY p,\u03c0E \u010e p RY. This entails the inclusion monotonicity property (Definition 6.1) thanks to De\ufb01nition 3.3. 
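Two computational ingredients underlie this construction: an $\ell_1$ confidence set around the empirical transition model and the ability to take the most optimistic or pessimistic expectation of a value vector within that set, which is the inner step of extended value iteration (Auer et al., 2008) already used for IRLO. The sketch below illustrates both; since the exact radius $\widehat{b}_h(s,a)$ of Equation (18) is not reproduced here, a generic Hoeffding/Weissman-style radius is used as a stand-in assumption.

```python
import numpy as np

def l1_radius(counts, num_states, delta):
    """Stand-in l1 confidence radius for p_hat_h(.|s,a); the paper's exact b-hat
    (Eq. 18) is not reproduced, so a generic Weissman-style bound is assumed."""
    counts = np.maximum(counts, 1)
    return np.sqrt(2.0 * (num_states * np.log(2.0) + np.log(1.0 / delta)) / counts)

def extreme_expectation(p_hat, v, radius, maximize=True):
    """max (or min) of <p, v> over {p in the simplex : ||p - p_hat||_1 <= radius}.
    Inner step of extended value iteration (Auer et al., 2008): shift probability
    mass toward the best (or worst) next state and remove it from the other end."""
    order = np.argsort(v)[::-1] if maximize else np.argsort(v)
    p = p_hat.astype(float).copy()
    best = order[0]
    p[best] = min(1.0, p[best] + radius / 2.0)
    excess = p.sum() - 1.0
    for s in order[::-1]:                            # least favorable states first
        if excess <= 1e-12:
            break
        if s == best:
            continue
        take = min(p[s], excess)
        p[s] -= take
        excess -= take
    return float(p @ v)
```

Upper and lower bounds $Q^+$ and $Q^-$ on the Q-function are then obtained by running backward induction with extreme_expectation in place of the expectation under $\widehat{p}$.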
Computationally Ef\ufb01cient Implementation Differently from IRLO, computing the set operations of Equation (8) cannot be directly carried out by EVI.11 For this reason, we propose a relaxation which achieves the double objective of: (i) enabling a computationally ef\ufb01cient implementation of PIRLO (Algorithm 2, PIRLO box); and (ii) allowing for a simpler statistical analysis, preserving both the PAC and the inclusion monotonicity properties (details in Appendix G): r RX:\u201ctrPR|@\u03c0Prp \u03c0Es\u201d p Sp,\u03c0E ,@ps,hqP p Sp,\u03c0E,@aPA: min p1PCpp p,bqQp \u03c0E h ps,p \u03c0E h psq;p1,rq\u011b max p2PCpp p,bqQ\u03c0 hps,a;p2,rqu, r RY:\u201ctrPR|@\u03c0Prp \u03c0Es\u201d p Sp,\u03c0E ,@ps,hqP p Sp,\u03c0E,@aPA: max p1PCpp p,bqQp \u03c0E h ps,p \u03c0E h psq;p1,rq\u011b min p2PCpp p,bqQ\u03c0 hps,a;p2,rqu, where the universal/existential quanti\ufb01cation over the transition model of De\ufb01nition 3.3 has been relaxed by the two max\u00b4min. In other words, we allow a choice of different transition models for the two Q-functions appearing in 10Actually, this de\ufb01nition does not take into account a corner case. See Appendix D.5 for details and a more precise de\ufb01nition. 11Membership testing can be here implemented with a bilinear program, which is, in general, a dif\ufb01cult problem (Appendix G). 7 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms the two members of the inequality. Thus, r RX \u010e p RX and p RY \u010e r RY, preserving the inclusion monotonicity. For the membership checking of a candidate reward rPR, similarly to the IRLO case, we compute upper and lower bounds Q` and Q\u00b4 to the Q-function by using EVI varying the transition model in the con\ufb01dence set Cpp p,bq de\ufb01ned in Equation (7). Now, the con\ufb01dence set is made of \u21131 constraints and the corresponding max and min programs can be solved by using the approach of (Auer et al., 2008, Figure 2). The overall time complexity is of order OpHS2AlogSq. Sample Ef\ufb01ciency and Inclusion Monotonicity We now show that PIRLO is statistically ef\ufb01cient, with the additional guarantee (w.r.t. IRLO) of the inclusion monotonicity. Theorem 6.1. Let M be an MDP without reward and let \u03c0E be the expert\u2019s policy. Let DE and Db be two datasets of \u03c4 E and \u03c4 b trajectories collected by executing policies \u03c0E and \u03c0b in M. Under Assumption 2.1, PIRLO is p\u01eb,\u03b4qPAC for d-IRL with a sample complexity at most: \u03c4 b \u010f r O \u02dc H3Zp,\u03c0b ln 1 \u03b4 \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` H6 ln 1 \u03b4 \u03c1\u03c0b,Zp,\u03c0E min \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0b,Zp,\u03c0b min \u00b8 , and \u03c4 E is bounded as in Theorem 5.1. Furthermore, PIRLO is inclusion monotonic. The price for the inclusion monotonicity is the additional term in the sample complexity which grows with H6 and with 1{\u03c1\u03c0b,Zp,\u03c0E min . The latter represents the minimum nonzero visitation probability with which policy \u03c0b covers Zp,\u03c0E, i.e., the support of \u03c1p,\u03c0E. Intuitively, since the expert\u2019s policy is optimal, this additional term is due to a mismatch between optimal Q-functions under the different transition models of Cpp p,bq. Notice that, under Assumption 2.1, Zp,\u03c0E \u010eZp,\u03c0b, consequently, \u03c1\u03c0b,Zp,\u03c0E min \u011b \u03c1\u03c0b,Zp,\u03c0b min . We can provide an analogous result for d8. Theorem 6.2. 
Under the conditions of Theorem 6.1, PIRLO is p\u01eb,\u03b4q-PAC for d8-IRL with a sample complexity at most: \u03c4 b \u010f r O \u02dc H4 ln 1 \u03b4 \u03c1\u03c0b,Zp,\u03c0b min \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` H6 ln 1 \u03b4 \u03c1\u03c0b,Zp,\u03c0E min \u01eb2 \u02c6 ln 1 \u03b4 `Sp,\u03c0b max \u02d9 ` ln 1 \u03b4 ln 1 1\u00b4\u03c1\u03c0b,Zp,\u03c0b min \u00b8 , and \u03c4 E is bounded as in Theorem 5.1. Furthermore, PIRLO is inclusion monotonic. Notice that both bounds in Theorem 6.1 and Theorem 6.2 hold also for the objectives de\ufb01ned in Equation (8).12 7. Reward Sanity Check with PIRLO In the literature, IRL algorithms (Ratliff et al., 2006; Ziebart et al., 2008) provide criteria to select a speci\ufb01c reward function from the feasible set. Our algorithm, PIRLO, thanks to the inclusion monotonicity property, provides a partition of the space of rewards R in three sets: (i) rewards contained in the sub-feasible set p RX (i.e., feasible w.h.p.), (ii) rewards not contained in the super-feasible set Rz p RY (i.e., not feasible w.h.p.), and (iii) rewards that we cannot discriminate with the given con\ufb01dence ( p RYz p RX). The situation is illustrated in Figure 1. Thus, PIRLO can be used both as a sanity checker on the rewards outputted by a speci\ufb01c IRL algorithm, and for de\ufb01ning the set of rewards from which selecting one. To exemplify this application, we have run PIRLO using highway driving data from Likmeta et al. (2021) and some human-interpretable reward. We provide the experimental details and the results in Appendix K. 8. A Bitter Lesson Up to now, we assumed to have two datasets DE and Db of trajectories collected by policies \u03c0E and \u03c0b, respectively. As already noted, this setting generalizes the most common IRL scenario where the only dataset DE is collected by the deterministic expert\u2019s policy \u03c0E and there is no possibility of collecting further data. A natural question arises: Why not directly considering the setting with DE only? The reason lies in the following negative result showing that the reward functions that can be learned from a single expert\u2019s dataset DE are not completely satisfactory. Proposition 8.1. Let M be the usual MDP without reward with A\u011b2 and let \u03c0E be the deterministic expert\u2019s policy. Let DE be a dataset of trajectories collected by following \u03c0E in M. Then, for any reward in rPRX p,\u03c0E it holds that: @ps,hqPSp,\u03c0E, @aPA: rhps,\u03c0E h psqq\u011brhps,aq. (9) Thus, if we have no information about the transition model in non-expert\u2019s actions (as when we have DE only), there exists no reward function r that simultaneously: (i) surely belongs to the feasible set (rPRX p,\u03c0E) and (ii) assigns to a non-expert\u2019s action a reward value greater than that assigned to the expert\u2019s action in the same ps,hq pair. This is clearly a property that is undesirable as it signi\ufb01cantly limits the expressive power of the reward function, making IRL closer to behavioral cloning and, consequently, inher12In Appendix F.5, we provide a tighter bound for the superset p RY without using the relaxation. Moreover, in Appendix F.4, we prove a larger sample complexity upper bound, when including an additional requirement. 8 \fOf\ufb02ine Inverse RL: New Solution Concepts and Provably Ef\ufb01cient Algorithms iting its limitations. As mentioned above, this issue can be overcome with a behavioral policy \u03c0b that explores enough. Proposition 8.2. 
Under the conditions of Theorem 8.1, assume to know php\u00a8|s,aq, where aPA is a non-expert\u2019s action in ps,hqPSp,\u03c0E. Then, if php\u00a8|s,aq\u2030php\u00a8|s,\u03c0E h psqq, there exists a reward rPRX p,\u03c0E such that: rhps,\u03c0E h psqq\u0103rhps,aq. 9. Conclusion In this paper, we have introduced a novel notion of feasible set and an innovative learning framework for managing the intrinsic dif\ufb01culties of the of\ufb02ine IRL setting. Furthermore, we have motivated the importance of inclusion monotonicity and we have devised an original form of pessimism to achieve it. Then, we have presented two provably ef\ufb01cient algorithms, IRLO and PIRLO, we have shown that the latter provides guarantees of inclusion monotonicity, and that it can be employed as a reward sanity checker. Finally, we have highlighted an intrinsic limitation of the of\ufb02ine IRL setting when samples from the experts are the only available. Limitations and Future Works To understand whether our algorithms are minimax optimal, future works should focus on the derivation of sample complexity lower bounds for of\ufb02ine IRL. Moreover, it would be appealing to extend our framework to more challenging (non-tabular) environments." + } + ], + "Matteo Pirotta": [ + { + "url": "http://arxiv.org/abs/1712.03428v1", + "title": "Cost-Sensitive Approach to Batch Size Adaptation for Gradient Descent", + "abstract": "In this paper, we propose a novel approach to automatically determine the\nbatch size in stochastic gradient descent methods. The choice of the batch size\ninduces a trade-off between the accuracy of the gradient estimate and the cost\nin terms of samples of each update. We propose to determine the batch size by\noptimizing the ratio between a lower bound to a linear or quadratic Taylor\napproximation of the expected improvement and the number of samples used to\nestimate the gradient. The performance of the proposed approach is empirically\ncompared with related methods on popular classification tasks.\n The work was presented at the NIPS workshop on Optimizing the Optimizers.\nBarcelona, Spain, 2016.", + "authors": "Matteo Pirotta, Marcello Restelli", + "published": "2017-12-09", + "updated": "2017-12-09", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The optimization of the expectation of a function is a relevant problem in large-scale machine learning and in many stochastic optimization problems involving \ufb01nance, signal processing, neural networks, just to mention a few. The availability of large datasets has called the attention on algorithms that scale favorably both with the number of trainable parameters and the size of the input data. Batch approaches that exploit a large number of samples to compute an approximation of the gradient have been gradually replaced by stochastic approaches that sample a small dataset (usually a single point) per iteration. For example, stochastic gradient descent (SGD) methods have been observed to yield faster convergence and (sometimes) lower test errors than standard batch methods [Bottou and Bousquet, 2011]. Despite the optimality results [Bottou and LeCun, 2003] and the successful applications, in practice SGD requires several steps of manual adjustment of the parameters to obtain good performance. For example, the initial step size together with the design of an appropriate annealing schema is required for learning with stationary data [Bottou, 2012, Schaul et al., 2013]. 
In addition, to limit the effects of noisy updates, it is often necessary to exploit mini-batch techniques that require the choice of an additional parameter: the batch size. This optimization is costly and tedious since parameters have to be tested over several iterations. Such problems get even worse when nonstationary settings are considered [Schaul et al., 2013]. Several techniques have been designed for tuning the step size of the pure SGD method. Although these approaches have been successfully applied to mini-batch settings, the design of an appropriate batch size is still an open problem. The contribution of this paper is the derivation of a novel algorithm for the selection of the batch size that compromises between noisy updates and more certain but expensive steps. The proposed algorithm automatically adapts the batch size at each iteration in order to maximize a lower bound to the expected improvement while accounting for the cost of processing samples. In particular, we consider both a first-order and a second-order Taylor approximation of the expected improvement and, exploiting concentration inequalities, we compute lower bounds to such approximations. The batch size is chosen by maximizing the ratio between the lower bound to the expected improvement and the number of samples used to estimate the gradient. This optimization problem trades off the desire to increase the batch size to obtain more accurate estimates against the cost of using more samples. The only parameter to be handled is the probability $\delta$ that regulates the confidence level of the lower bound to the improvement step. The rest of the paper is organized as follows. In the next section we give a brief overview of stochastic gradient descent methods. In Section 3 we define the optimization problem used to select the batch size. Section 4 introduces an approximation of the expected improvement exploiting a Taylor expansion, and Sections 5 and 6 deal respectively with a linear and a quadratic Taylor approximation. Section 7 discusses the application of diagonal preconditioning to define dimension-dependent step sizes. Empirical comparisons of the proposed methods with related approaches are reported in Section 8, while Section 9 draws conclusions and outlines future work.
2 Background
Stochastic gradient descent (SGD) is one of the most important optimization methods in machine learning. Most of the research on SGD has focused on the choice of the step size [Peters, 2007, Roux and Fitzgibbon, 2010, Duchi et al., 2011, Zeiler, 2012, Schaul et al., 2013, Orabona, 2014]. Several annealing schemes have been proposed in the literature based on the standard rule $\eta^{(t)} = \eta_0 (1 + \gamma t)^{-1}$, originally proposed in [Herbert Robbins, 1951] and analyzed in [Xu, 2011, Bach and Moulines, 2011]. More recently, researchers have proposed techniques to adapt the step size online according to the observed samples and gradients. These techniques derive a global step size or adapt the step size for each parameter (diagonal preconditioning). Refer to [George and Powell, 2006] for a survey on annealing schemes and adaptive step-size rules. Traditional SGD processes one example per iteration. This sequential nature makes SGD challenging for distributed inference.
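For reference, the single-sample update with the annealing rule above can be written as the following minimal sketch; the gradient oracle grad_fn and the ascent sign, matching the maximization setting adopted later, are assumptions of the example.

```python
import numpy as np

def sgd_annealed(grad_fn, data, theta0, eta0=0.1, gamma=1e-3, epochs=1, seed=0):
    """Single-sample stochastic gradient ascent with eta_t = eta0 / (1 + gamma t)."""
    rng = np.random.default_rng(seed)
    theta, t = theta0.copy(), 0
    for _ in range(epochs):
        for i in rng.permutation(len(data)):
            eta = eta0 / (1.0 + gamma * t)           # annealed step size
            theta = theta + eta * grad_fn(data[i], theta)
            t += 1
    return theta
```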
A common practical solution is to employ minibatch training, which aggregates multiple examples at each iteration. On the other hand, the choice of the batch size is critical since too small batches lead to high communication costs, while large batches may slow down convergence rate in practice [Li et al., 2014]. Despite the increasing amount of research in this \ufb01eld, all the mentioned approaches focus on obtaining (sub)optimal convergence rate of SGD without considering the possibility to adapt the size of the mini-batch. A notable exception is the work presented in [Byrd et al., 2012] where the authors proposed to adapt the sample size during the algorithm progression. The batch size is selected according to the variance of the gradient estimated from observed samples. Starting from the geometrical de\ufb01nition of descent direction, through several manipulations, the authors derived the following condition |S| \u2265\u2225Var [\u2207\u03b8]\u22251 \u03b32 \r \r\u2207S \u03b8 \r \r2 2 , (1) where \u03b3 \u2208(0, 1) and Var [\u2207\u03b8] is the vector storing the population variance for each component (Var [\u2207\u03b8] = [\u03c32 1, . . . , \u03c32 d] T). The population variance is then approximated through its unbiased estimate Var \u0002 \u2207S \u03b8 \u0003 computed on sample set S. However, the interplay between sample size and step size is not investigated resulting in an algorithm with hyper-parameters for both the selection of the batch size (the meaning of \u03b3 is not clearly de\ufb01ned) and the tuning of the step size. 3 Cost Sensitive Scenario In this section we formalize the problem and the methodology that will be used through all the paper. Consider the problem of maximizing the expected value of a function f (we assume that f is Lipschitz continuous with Lipschitz constant L): max \u03b8 J (\u03b8) = max \u03b8 E x\u223cP [f(x, \u03b8)] , where \u03b8 \u2208Rd is the trainable parameter vector and the samples x are drawn i.i.d. from a distribution P. A common approach is to optimize the previous function through gradient ascent. However, since P is unknown it is not possible to compute the exact gradient \u2207\u03b8J , but we can estimate it through 2 \fsamples. Given a training set S = {xi|xi \u223cP, i = 1, . . . , N}, the mini-batch stochastic gradient (SG) ascent is a stochastic process \u03b8(t+1) = \u03b8(t) + \u2206\u03b8n = \u03b8(t) + \u03b7 (t) \u2207\u03b8J n, t \u2208N+ (2) where \u2206\u03b8n is random variable associated to a n-dimensional subset of S (e.g., randomly drawn). Formally \u2206\u03b8n is de\ufb01ned as the product of a positive scalar (or positive semi-de\ufb01nite matrix) \u03b7 (t) and a the gradient estimate built on a n samples drawn from S \u2207\u03b8J n = 1 n X i\u2208In \u2207\u03b8f(xi, \u03b8), where In is an index set used to identify elements in S. \u2207\u03b8J n is a random variable that depends on the selection of the subset of S, i.e., the index set In. In the following we will show how to select the batch size n for each gradient update. To evaluate the quality of an update we consider the improvement \u2206J n = J (\u03b8 + \u2206\u03b8n) \u2212J (\u03b8) that is again a random variable. As the number of samples n increases, the gradient estimate (and consequently the estimated improvement) gets more and more certain. So, adopting a risk-averse approach, we consider as a goal the maximization of some statistical lower bound \u03a5 n to the expected improvement \u2206J . 
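As a concrete reading of the update rule (2) and of condition (1), the sketch below performs one mini-batch ascent step and computes the smallest batch size passing the test of [Byrd et al., 2012], with the population variance replaced by its sample estimate; grad_fn, the random index set, and the array layout are illustrative assumptions.

```python
import numpy as np

def minibatch_step(grad_fn, data, theta, eta, n, rng):
    """One step of the stochastic process (2): theta <- theta + eta * grad_J_n."""
    idx = rng.choice(len(data), size=min(n, len(data)), replace=False)   # index set I_n
    grads = np.stack([grad_fn(data[i], theta) for i in idx])             # per-sample gradients
    return theta + eta * grads.mean(axis=0), grads

def byrd_batch_size(grads, gamma):
    """Smallest |S| satisfying condition (1) of [Byrd et al., 2012], using the
    sample variance of the observed per-sample gradients."""
    grad_n = grads.mean(axis=0)
    var_hat = grads.var(axis=0, ddof=1)              # component-wise variance estimate
    return int(np.ceil(var_hat.sum() / (gamma ** 2 * float(grad_n @ grad_n))))
```

Condition (1) still requires choosing $\gamma$ by hand, whereas the statistical lower bound $\Upsilon^n$ introduced above replaces this choice with the confidence level $\delta$.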
This allows to account for the uncertainty in the stochastic process de\ufb01ned in 2. On the other hand, this problem is trivially solved by taking the batch size as large as possible, thus not considering the additional computational cost of processing a larger batch size. In practice, this means that the batch dimension n induces a trade-off between a secure but costly update (the estimate converges to the true value as n \u2192+\u221e) and a noisy one. In order to formalize the trade-off, in this paper we consider that any additional sample comes at a price and when the addition of a new sample does not provide any signi\ufb01cant improvement in the estimated performance it is not worth to pay that price. As a consequence, we can formalize the batch size selection problem as a cost sensitive optimization n\u2217= arg max n\u2208N+ \u03a5 n n . (3) 4 Lower Bound to the Improvement This section focuses on the derivation of the lower bound to the improvement \u2206J n. Given an increment \u2206\u03b8n, a realization of the random variable \u2206J n can be computed. However, we do not know the analytical relationship that ties the two terms. The lack of this information prevents a closedform solution for the optimal batch size. On the other hand, resorting to black-box optimization methods (e.g., grid search) is generally not a suitable alternative due to their high cost. In order to simplify the optimization problem, we can consider the Taylor expansion of the expected improvement. For example, the \ufb01rst order expansion is given by \u2206J n = \u2207\u03b8J T\u2206\u03b8n + R1 (\u2206\u03b8n) , (4) where R1 (\u2206\u03b8n) is the remainder. A lower bound to the remainder is easily derived by minimizing the remainder along the line connecting the current parameterization \u03b8 and the value \u03b8 + \u2206\u03b8: R1(\u2206\u03b8) \u22651 2 infc\u2208(0,1) \u0000\u2206\u03b8TH\u03b8J (\u03b8 + c \u2206\u03b8) \u2206\u03b8 \u0001 . By plugging in this result in (4), a deterministic lower bound to the improvement is derived. This formalization has two issues. First, the computation of such lower bound needs to solve a minimization problem that requires the evaluation of the Hessian in several points along the gradient direction. Secondly, the above lower bound does not explicitly depend on the batch size, since it does not take into consideration the uncertainty in the gradient estimate. The \ufb01rst issue will be solved by considering an approximation of the expected improvement obtained by considering a truncation of the Taylor expansion, while the second issue is addressed by considering a probabilistic lower bound to the expected improvement, that explicitly depends on the batch size (uncertainty reduces as batch size increases). 3 \fAlgorithm 1 L-PAST Inputs: Sample set S = {x1, . . . , xN}, initial batch size n, con\ufb01dence level \u03b4 for t=1 to T do In \u2190{i|i \u22081, . . . , N \u2227|In| = n} \u2207\u03b8J n \u21901 n P i\u2208In \u2207\u03b8f \u0010 xi, \u03b8(t)\u0011 \u03b8(t+1) \u2190\u03b8(t) + \u03b7\u2207\u03b8J n n \u2190arg maxn\u2208N+ \u03a5 n,\u03b4 L n end for 4.1 Approximation of the Expected Improvement As mentioned, the computation of the lower bound to the remainder of the \ufb01rst-order Taylor expansion requires the evaluation of the Hessian in several points (depending on c) which has a quadratic cost in the number of parameters d. 
One way to deal with this issue is to require a higher-order Lipschitz continuity condition on the objective function in order to derive a bound on the Hessian, or to exploit knowledge of the objective function [Pirotta et al., 2013]. However, in practice this information is hard to retrieve and, since our goal is to derive a practical algorithm, we suggest exploiting approximations of the improvement $\Delta J^n$. A first formulation is obtained by considering a local linear expansion: $\Delta J^n \approx \nabla_\theta J^{\mathrm{T}} \Delta\theta^n$. As we will see in the next section, this simplification has several advantages. As a second option, in Section 6, we suggest replacing the infimum with the evaluation of the Hessian at the current parameterization. Equivalently, this means that we consider a quadratic expansion, a choice that is common in the literature [Roux and Fitzgibbon, 2010, Schaul et al., 2013]. Formally, we consider
$$\Delta J^n \approx \nabla_\theta J^{\mathrm{T}} \Delta\theta^n + \frac{1}{2} \Delta\theta^{n\,\mathrm{T}} H_\theta J\, \Delta\theta^n,$$
where the second-order remainder has been dropped.
5 Linear Probabilistic Adaptive Sample Technique (L-PAST)
The linear expansion allows the batch size to be selected in a way that is complementary to the step-size selection technique, since it is independent of the selected step size. In other words, the advantage of this approach is that the step size can be tuned using any automatic technique provided in the literature, while the batch size is selected automatically according to the quality of the observed samples. Let $\widehat{\Delta J}^n = \nabla_\theta J^{\mathrm{T}} \Delta\theta^n$ be the linear simplification of the expected improvement. We still need to manipulate this formulation in order to remove the dependence on the true gradient. This goal can be achieved by exploiting concentration inequalities on the exact gradient $\nabla_\theta J$. Formally, we consider that the following inequality holds with probability (w.p.) $1 - \delta$:
$$\|\nabla_\theta J - \nabla_\theta J^n\|_2 < B_{\|\nabla\|}(n, \delta). \qquad (5)$$
Given the previous inequality, it is easy to prove the following bound, w.p. $1 - \delta$ (see the appendix, Sec. 10.2):
$$\widehat{\Delta J}^n = \eta\, \nabla_\theta J^{\mathrm{T}} \nabla_\theta J^n > \eta \left( \|\nabla_\theta J^n\|_2 - B_{\|\nabla\|}(n, \delta) \right) \|\nabla_\theta J^n\|_2 = \Upsilon_L^{n,\delta}, \qquad (6)$$
where we have considered the global step size ($\eta \in \mathbb{R}^+$). As expected, the lower bound to the expected improvement depends on the batch size through the concentration bound ($\|\nabla_\theta J^n\|_2$ is a realization of the random variable given the current set $I_n$). In particular, as the number of samples increases, the empirical error (according to the concentration inequality) decreases, leading to better estimates of the expected improvement. Having derived a sample-based bound to the expected improvement, we can solve the cost-sensitive problem (3) for the "optimal" batch size $n$. L-PAST is outlined in Algorithm 1.
5.1 Concentration Inequalities and Batch Size
The bound in (6) provides a generic lower bound to the expected improvement that is independent from the specific concentration inequality that is used. It is now necessary to provide an explicit formulation in order to solve Problem (3). Several concentration inequalities have been provided in the literature; in this paper we consider Hoeffding's, Chebyshev's, and Bernstein's inequalities [Massart, 2007]. Chebyshev's inequality has been widely exploited in the literature due to its simplicity: it can be applied to any arbitrary distribution (provided its variance is known). On the other hand, Hoeffding's and Bernstein's inequalities require a bounded support of the distribution, i.e., the knowledge of the range of the random variables (here $\nabla_\theta f(x_i, \theta)$). We use the term distribution aware to refer to the scenario where the properties of the distribution are known (e.g., variance and range). Although these values can be estimated online from the observed values, the results may be unreliable in the event of poor estimates. Empirical versions that directly account for the estimation error have been presented in the literature [Saw et al., 1984, Mnih et al., 2008, Stellato et al., 2016]. The advantage of using these inequalities is that the batch size can be easily computed in closed form; see Table 1 for the distribution-aware scenario.

Table 1: Batch size obtained by solving Problem (3) using different concentration inequalities. The symbols $a = \frac{2}{3} L \ln\left(\frac{d+1}{\delta}\right)$ and $b = 2 \|\mathrm{Var}[\nabla_\theta J]\|_2 \ln\left(\frac{d+1}{\delta}\right)$ are used to simplify the Bernstein size.
Hoeffding: $n \geq \dfrac{18 L^2 \ln\left(\frac{d+1}{\delta}\right)}{\|\nabla_\theta J^n\|_2^2}$
Chebyshev: $n \geq \dfrac{9 \|\mathrm{Var}[\nabla_\theta J]\|_1}{4 \delta \|\nabla_\theta J^n\|_2^2}$
Bernstein: $n \geq \dfrac{9b + 16 a \|\nabla_\theta J^n\|_2 + 3\sqrt{9 b^2 + 32 a b \|\nabla_\theta J^n\|_2}}{8 \|\nabla_\theta J^n\|_2^2}$

It is worth noticing that all the proposed approaches retain one hyper-parameter, $\delta \in (0, 1)$, which denotes the desired confidence level. This parameter can be easily set thanks to its clear meaning, and typically its contribution is small since it appears inside a logarithm. It is also worth noticing that, when we consider Chebyshev's inequality, i.e., $B_{\|\nabla\|}(n, \delta) = \sqrt{\frac{\|\mathrm{Var}[\nabla_\theta J]\|_1}{n \delta}}$, our approach provides a probabilistic interpretation of the AGSS algorithm presented in [Byrd et al., 2012] and reported in (1). Nevertheless, our derivation gives a different and more formal interpretation of their approach and gives an explicit meaning to the hyper-parameter by mapping $\frac{4\delta}{9}$ to $\gamma^2$. Note that this result is obtained by considering the distribution-aware Chebyshev inequality instead of the empirical version; by replacing the variance with its empirical estimate, the result may be unreliable.
6 Quadratic Probabilistic Adaptive Sample Technique (Q-PAST)
The simplicity of the previous approach comes at the cost of a low expressive power. The quadratic expansion of the expected improvement (with global step size)
$$\widehat{\Delta J}^n = \nabla_\theta J^{\mathrm{T}} \Delta\theta^n + \frac{1}{2} \Delta\theta^{n\,\mathrm{T}} H_\theta J\, \Delta\theta^n = \eta\, \nabla_\theta J^{\mathrm{T}} \nabla_\theta J^n + \frac{1}{2} \eta^2 (\nabla_\theta J^n)^{\mathrm{T}} H_\theta J\, \nabla_\theta J^n \qquad (7)$$
allows local curvatures of the space to be accounted for. Before describing the Quadratic-PAST (Q-PAST), as done with L-PAST, we need to manipulate the expected improvement in order to remove the dependence on the exact gradient and Hessian.
While the linear term can be lower bounded as done in Section 5, here we show how to handle the quadratic form in a similar way. Consider a component-wise concentration inequality for the Hessian estimate such that, w.p. $1 - \delta/(2d^2)$:

$\left| H^{(ij)}_\theta J - H^{(ij)}_\theta J_n \right| < B^{(ij)}_H\!\left(n, \tfrac{\delta}{2d^2}\right) \quad \forall i, j$.   (8)

Then,

$\nabla_\theta J_n^T \, H_\theta J \, \nabla_\theta J_n > \nabla_\theta J_n^T \, \widetilde{H}_\theta J_n \, \nabla_\theta J_n$,   (9)

where $\widetilde{H}^{(ij)}_\theta J_n = H^{(ij)}_\theta J_n - B^{(ij)}_H(n, \delta)$. By plugging inequalities (6)–(9) into (7), we obtain a lower bound on the quadratic expansion of the improvement:

$\widehat{\Delta J}_n > \Upsilon_L^{n, \delta/2} + \frac{1}{2}\eta^2 \, \nabla_\theta J_n^T \, \widetilde{H}_\theta J_n \, \nabla_\theta J_n =: \Upsilon_Q^{n,\delta}$.   (10)

Given the step size $\eta$ and a set $I_n$, we can optimize the lower bound for the batch size $n$. Finally, we can exploit this sample-based bound to compute the "optimal" batch size as in Problem (3). The concentration inequalities mentioned in Section 5.1 can be used to bound the Hessian components. By exploiting these bounds, it is possible to derive closed-form solutions for $n$ even in this context.

Algorithm 2 Q-PAST
Inputs: sample set $S = \{x_1, \ldots, x_N\}$, initial batch size $n$, confidence level $\delta$
for $t = 1$ to $T$ do
    $I_n \leftarrow \{i \mid i \in 1, \ldots, N\}$ with $|I_n| = n$
    $\nabla_\theta J_n \leftarrow \frac{1}{n}\sum_{i \in I_n} \nabla_\theta f(x_i, \theta^{(t)})$
    $\theta^{(t+1)} \leftarrow \theta^{(t)} + \eta \nabla_\theta J_n$
    $H_\theta J_n \leftarrow \frac{1}{n}\sum_{i \in I_n} H_\theta f(x_i, \theta^{(t)})$
    $n \leftarrow \arg\max_{n \in \mathbb{N}_+} \Upsilon_Q^{n,\delta} / n$
end for

7 Diagonal Preconditioning

Until now we have considered the global step-size scenario, where each parameter is scaled by the same amount $\eta$. In practice, it may be necessary to consider individual step sizes ($\Delta\theta_i = \eta_i \nabla_\theta J_{n,i}$) in order to account for the different magnitudes of the parameters. There are several ways to deal with such a scenario. We start by considering the linear expansion $\widehat{\Delta J}_n = (\eta^{1/2} \circ \nabla_\theta J)^T (\eta^{1/2} \circ \nabla_\theta J_n)$, where $\eta$ is a $d$-dimensional vector and $\circ$ is the Hadamard (element-wise) product. If we now consider the gradient to be scaled by a factor $\eta^{1/2}$, we can apply the same procedure presented in Section 5 to the scaled gradient. This means that it is necessary to recompute the concentration inequalities to take into account the change of magnitude. For example, Hoeffding's inequality requires knowing an upper bound on the $L_2$-norm of the random vector involved in the estimate. In our setting (see Section 3) we have assumed that $\|\nabla_\theta f(x_k, \theta)\|_2 \le L$ for any $x_k$ and $\theta$. To use diagonal preconditioning with L-PAST and Hoeffding, we just need to compute an upper bound on $\|\eta \circ \nabla_\theta f(x_k, \theta)\|_2$, which in a trivial form is $\|\eta \circ \nabla_\theta f(x_k, \theta)\|_2 \le L \max_i \eta_i$. Similar considerations can be derived for the other concentration inequalities. Another possible way to deal with diagonal preconditioning is to exploit a component-wise concentration inequality. Let $B_\nabla$ be a vector such that $|\nabla_\theta J_i - \nabla_\theta J_{n,i}| < B^{(i)}_\nabla$ w.p. $1 - \delta/(2d)$. This is the element-wise counterpart of the concentration inequality considered in (5). Notice that the following inequality always holds: $B_{\|\nabla\|} \le d \max_i B^{(i)}_\nabla$.
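The following sketch shows how the lower bound (10) combines the linear term of Eq. (6) with the pessimistic Hessian of Eq. (9). The concentration bounds are taken as inputs, and the function name is ours; it is only an illustration of the bound, not of the full algorithm.

```python
import numpy as np

def qpast_lower_bound(grad_est, hess_est, eta, grad_bound, hess_bound):
    """Lower bound Upsilon_Q^{n,delta} on the quadratic expansion, Eq. (10).

    grad_est   : minibatch gradient estimate, shape (d,)
    hess_est   : minibatch Hessian estimate, shape (d, d)
    eta        : global step size
    grad_bound : scalar concentration bound B_{||grad||}(n, delta/2) on the gradient error
    hess_bound : (d, d) matrix of component-wise bounds B_H^{(ij)} on the Hessian error
    """
    g_norm = np.linalg.norm(grad_est)
    # Linear part: Upsilon_L^{n, delta/2} from Eq. (6)
    upsilon_l = eta * (g_norm - grad_bound) * g_norm
    # Quadratic part: pessimistic Hessian as in Eq. (9)
    hess_pessimistic = hess_est - hess_bound
    quad = 0.5 * eta**2 * grad_est @ hess_pessimistic @ grad_est
    return upsilon_l + quad
```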
Let us consider this scenario together with the quadratic expansion; then, w.p. $1-\delta$,

$\widehat{\Delta J}_n \ge (\eta \circ \nabla_\theta J_n)^T \widetilde{\nabla}_\theta J_n + \frac{1}{2}(\eta \circ \nabla_\theta J_n)^T \widetilde{H}_\theta J_n \, (\eta \circ \nabla_\theta J_n)$,   (11)

where $\widetilde{\nabla}_\theta J_n = \nabla_\theta J_n - B_\nabla$ and $\widetilde{H}_\theta J_n$ is defined as in (9). A different way to deal with diagonal preconditioning is to assume the problem to be separable [Schaul et al., 2013]. In our setting this maps to a diagonal approximation of the Hessian $\widetilde{H}_\theta J_n$.

8 Experiments

We tested the approaches on digit recognition and news classification tasks, with both convex (logistic regression) and non-convex (multi-layer perceptron) models. Mini-batch SG (SG-n) with fixed batch size n is the standard approach to stochastic optimization problems. This section compares SG-n with the adaptive algorithms introduced above (three versions of PAST and DSG [Byrd et al., 2012]). A critical parameter in SG optimization is the definition of the step length $\eta$. In order to remove the dependence on this parameter, we tested several adaptive strategies (e.g., AdaGrad, Adam, RMSprop, AdaDelta). We finally decided to use RMSprop, which provided the most consistent results across the different settings (parameters are set as suggested in [Tieleman and Hinton, 2012]). Finally, the subset $I_n$ of $n$ samples from $S$ is selected sequentially, without shuffling at the beginning of each epoch.

Evaluation. The main measure to be considered is the loss $J(\theta)$. However, the evaluation needs to take into account two orthogonal dimensions: samples and iterations. The number of samples processed by the algorithm is relevant in applications where the samples need to be actively collected. For example, it is a relevant measure in reinforcement learning problems, where samples are obtained by interacting with a real or simulated environment. On the contrary, in offline applications (e.g., supervised learning) the iterations play a central role because there is no cost in collecting samples. For example, the highest cost in deep learning approaches is the computation of the gradient and the consequent update of the parameters; clearly this cost is proportional to the number of iterations. In the following we investigate both dimensions.

Datasets. We chose to test the algorithms on both classification and regression tasks. The classification tasks are: the MNIST digit recognition task [LeCun et al., 1998] (with 60k training samples, 10k test samples, and 10 classes), and a subset of the Reuters newswire topics classification1 (with 8982 training samples, 2246 test samples, and 46 classes). For the Reuters task we select the 1000 most frequent words and use them as binary features. The regression task is performed on the Parkinsons Telemonitoring dataset [Tsanas et al., 2010].2 This dataset is composed of 5875 voice measurements and the goal is to predict the total_UPDRS field using the 19 available features. We did not use any form of preprocessing for the classification tasks, while we performed normalization for the Parkinsons one (zero mean and unit variance).

Estimators. The multi-class classifier was modeled through different architectures of feed-forward neural networks. The simplest one is a logistic regression (i.e., a network without hidden layers). This model has a convex loss (categorical cross-entropy) in the parameters.
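Before detailing the remaining architectures, a minimal sketch of the data pipeline described in the Datasets paragraph is given below. It relies on the Keras dataset loaders referenced in footnote 1; the authors' exact preprocessing may differ, so this is only an illustration.

```python
import numpy as np
from tensorflow import keras

# MNIST: 60k train / 10k test images, 10 classes, flattened to 784 raw features (no preprocessing).
(x_tr, y_tr), (x_te, y_te) = keras.datasets.mnist.load_data()
x_tr = x_tr.reshape(-1, 784).astype("float32")
x_te = x_te.reshape(-1, 784).astype("float32")

# Reuters: keep the 1000 most frequent words and use their presence as binary features
# (the default split yields 8982 training and 2246 test documents over 46 classes).
(r_tr, ry_tr), (r_te, ry_te) = keras.datasets.reuters.load_data(num_words=1000)

def to_binary_bow(sequences, vocab_size=1000):
    out = np.zeros((len(sequences), vocab_size), dtype="float32")
    for i, seq in enumerate(sequences):
        out[i, seq] = 1.0  # word index present in the document -> feature set to 1
    return out

r_tr, r_te = to_binary_bow(r_tr), to_binary_bow(r_te)
```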
The logistic regression just described is denoted as 'M0'. The second configuration is a fully connected multi-layer perceptron with one hidden layer and ReLU activations. In the MNIST task this network (denoted 'M1') has configuration [784, 128, 10]; it is not used on Reuters. Finally, we test a deeper fully connected multi-layer perceptron with two hidden layers and ReLU activations. This architecture has been used only in the MNIST problem (denoted 'M2') with layers [784, 256, 128, 10]. The multi-layer perceptrons have a non-convex loss (cross-entropy) in the parameters. For the regression task we decided to exploit a simple linear regressor. We are aware of the limited power of such an estimator, but the focus is not on the final performance but on the relationship between the different batch-size strategies.

8.1 L-PAST Behavior. In this section we compare the behavior of the L-PAST approaches with the state of the art on the classification tasks. We start by considering the total number of processed samples as the evaluation dimension (together with the accuracy score). As shown in Figure 1 (top row), the best accuracy is obtained by the algorithms that select small batches (e.g., Bernstein L-PAST). This is clearly a consequence of the higher number of updates performed by the algorithms that select small batch sizes. In particular, Bernstein L-PAST is able to outperform the other algorithms in the MNIST task due to its ability to quickly approach the optimal solution in the initial phase. The rightmost figure shows the number of samples selected by Bernstein L-PAST w.r.t. SG-256.

[Figure 1 (plots omitted): accuracy on the test set as a function of processed samples and of iterations on MNIST and Reuters, together with the batch size selected by Bernstein L-PAST compared with SG-256. The network is M0 in both domains.]

Table 2: MNIST task. Accuracy and number of iterations with different confidence levels. SG-256, SG-512 and SG-1024 obtained 96.20% (705), 95.45% (354) and 94.92% (177) with model M2, and 95.20%, 94.23% and 93.12% with model M1. The algorithms have been trained over 3 epochs.
Model | δ    | DSG         | H. L-PAST   | C. L-PAST   | B. L-PAST
M2    | 0.10 | 84.93% (13) | 89.69% (37) | 83.37% (16) | 96.55% (791)
M2    | 0.25 | 87.31% (25) | 89.86% (36) | 88.33% (27) | 96.64% (850)
M2    | 0.50 | 90.28% (42) | 89.25% (43) | 91.69% (44) | 96.63% (902)
M1    | 0.1  | 86.77% (21) | 86.95% (21) | 88.20% (36) | 94.55% (368)
M1    | 0.2  | 89.10% (31) | 89.09% (32) | 89.63% (37) | 94.67% (422)
M1    | 0.5  | 89.94% (46) | 90.75% (56) | 88.79% (38) | 94.74% (464)

1 Reuters data are available at https://keras.io/datasets/.
2 The dataset is available at https://archive.ics.uci.edu/ml/datasets/Parkinsons+Telemonitoring.
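For reference, the three classifier architectures (M0, M1, M2) described in the Estimators paragraph can be sketched in Keras as follows. The RMSprop optimizer and the cross-entropy loss follow the text; everything else (builder function, label encoding) is our own illustration.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_classifier(hidden_sizes, n_inputs=784, n_classes=10):
    """M0: hidden_sizes=[], M1: [128], M2: [256, 128] (MNIST configurations from the text)."""
    inputs = keras.Input(shape=(n_inputs,))
    x = inputs
    for h in hidden_sizes:
        x = layers.Dense(h, activation="relu")(x)
    # No hidden layer -> multinomial logistic regression (convex loss in the parameters).
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.RMSprop(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

m0 = build_classifier([])           # logistic regression
m1 = build_classifier([128])        # [784, 128, 10]
m2 = build_classifier([256, 128])   # [784, 256, 128, 10]
```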
Other approaches (DSG, Hoeffding/Chebyshev L-PAST) that exploit more general inequalities are prone to select bigger batch sizes, which results in fewer updates and slower convergence rates. When we consider the number of iterations, the ranking of the algorithms changes. In particular, the ones that select the smallest batch sizes are penalized by the noisy estimate of the gradient. The other algorithms perform updates that are more certain and lead to higher scores. Finally, we also tested different confidence levels. Table 2 shows that, as expected, the confidence δ has a small influence on the overall behavior. It is worth noticing that smaller batches generally lead to better (possibly noisy) performance, since they are able to perform a larger number of updates.

8.2 Q-PAST Behavior. In this section we evaluate the performance of the quadratic approximation on the regression task. We assume the problem to be separable in order to consider only the diagonal component of the Hessian. Figure 2 shows that Q-PAST outperforms all the other approaches. It is worth noticing that it is less aggressive in changing the batch size, in particular when compared with L-PAST (bottom figure). Table 3 shows the R2-score achieved by the algorithms with different confidence levels. It is possible to observe that the influence of δ on the final performance is very limited. This means that the choice of this value is not critical.

[Figure 2 (plots omitted): Parkinsons dataset. R2-score on the test set as a function of processed samples, and total number of processed points as a function of iterations (model M0), for SG-n, DSG and the L-PAST/Q-PAST variants. The algorithms have been trained for 30 epochs.]

Table 3: Regression task. R2-score with different confidence levels. SG-256, SG-512 and SG-1024 obtained 0.1449, 0.1419 and 0.1330, respectively. The algorithms have been trained over 30 epochs.
δ   | DSG    | C. L-PAST | H. L-PAST | B. L-PAST | C. Q-PAST | H. Q-PAST | B. Q-PAST
0.1 | 0.0932 | 0.0980    | 0.1350    | 0.1303    | 0.1443    | 0.1451    | 0.1456
0.2 | 0.1083 | 0.1242    | 0.1391    | 0.1344    | 0.1448    | 0.1460    | 0.1458
0.5 | 0.1227 | 0.1364    | 0.1381    | 0.1374    | 0.1462    | 0.1452    | 0.1466

8.3 Non-Stationary Scenario. PAST approaches regulate the batch size according to the statistical information associated with the estimated gradient. In particular, when we are far away from the optimal solution we can exploit noisy steps (i.e., small batches) to rapidly approach a "good" solution. Instead, when we approach the optimal solution, we need an accurate estimate of the gradient to converge closely to the optimum. This property is relevant in realistic applications (e.g., online scenarios) where the optimal solution may change (even drastically) over time. To simulate this scenario we considered a regression problem (degree-2 polynomial) where the optimal solution is changed every 35 iterations (i.e., parameter updates). Figure 3 shows how Bernstein L-PAST handles such a scenario. Initially, small batches are exploited to approach the optimum; then the batch size is increased proportionally to the noise-to-signal ratio.
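A minimal sketch of the non-stationary protocol just described is given below. The coefficient ranges, noise level and fixed batch size are our own assumptions; in the experiment the batch size is re-selected online by (L/Q)-PAST.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_coeffs():
    # Coefficients of the noisy degree-2 polynomial target y = c2*x^2 + c1*x + c0 + noise.
    return rng.uniform(-3, 3, size=3)

coeffs = sample_coeffs()
batch_size = 32  # in the experiment this is re-selected at every update by (L/Q)-PAST
for t in range(1, 301):
    if t % 35 == 0:
        coeffs = sample_coeffs()  # the optimal solution changes every 35 parameter updates
    x = rng.uniform(-1, 1, size=batch_size)
    y = coeffs[2] * x**2 + coeffs[1] * x + coeffs[0] + rng.normal(0.0, 0.5, size=batch_size)
    # ... compute the minibatch gradient of the squared loss, update the regressor,
    #     and re-select batch_size from the chosen concentration bound ...
```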
Intuitively, when the noise-to-signal ratio is high we need to average the gradient over many samples in order to lower the influence of the noise. A good proxy for the noise-to-signal ratio is provided by the ratio between the variance and the squared Euclidean norm of the gradient ($\|\mathrm{Var}[\nabla_\theta J_n]\|_1 / \|\nabla_\theta J_n\|_2^2$); see Figure 3. When the optimum changes, the algorithm detects a decrease of the noise-to-signal ratio (the gradient norm increases w.r.t. the variance) and adapts the batch size to the new scenario. This analysis is even clearer when we consider Chebyshev L-PAST, since it directly optimizes the batch size according to the noise-to-signal ratio. On the contrary, Hoeffding L-PAST, which is less informed than the other approaches, considers only the gradient magnitude. The same figure also reports the performance of Bernstein Q-PAST. It is possible to observe that Q-PAST selects smaller batches than L-PAST. This means that it performs noisy steps, which lead to less overfitting with respect to L-PAST. In fact, when a change in the objective happens, Q-PAST enjoys lower losses than L-PAST.

9 Conclusions

Pure SG has proved to be effective in several applications, but it is highly time consuming since it exploits one sample for each update. We have shown that it is possible to exploit automatic techniques that are able to adapt the batch size over time. Moreover, these techniques can be used in conjunction with any scheme for the update of the parameters. While L-PAST based on Bernstein's inequality has proved to be effective on the well-known MNIST task and the Reuters dataset, Q-PAST has proved to be more effective in the regression problem. However, the computation or estimation of the Hessian may be prohibitive in big-data applications such as deep neural networks. Although the batch size may not play a fundamental role in supervised applications, it is a critical parameter in reinforcement learning, especially when the environment is highly stochastic (updating the estimate with a single sample may be too optimistic). Future work will apply the proposed techniques to refine policy gradient approaches.

[Figure 3 (plots omitted): Bernstein L-PAST and Q-PAST in a non-stationary scenario. The model function f is a noisy degree-2 polynomial whose coefficients change every 35 iterations (dashed vertical lines). Top: MSE as a function of the iteration; center: the corresponding batch size; bottom: the noise-to-signal ratio, which is connected to the batch size selected by B. L-PAST.]" + } + ], + "Andrea Tirinzoni": [ + { + "url": "http://arxiv.org/abs/2212.09429v1", + "title": "On the Complexity of Representation Learning in Contextual Linear Bandits", + "abstract": "In contextual linear bandits, the reward function is assumed to be a linear\ncombination of an unknown reward vector and a given embedding of context-arm\npairs.
In practice, the embedding is often learned at the same time as the\nreward vector, thus leading to an online representation learning problem.\nExisting approaches to representation learning in contextual bandits are either\nvery generic (e.g., model-selection techniques or algorithms for learning with\narbitrary function classes) or specialized to particular structures (e.g.,\nnested features or representations with certain spectral properties). As a\nresult, the understanding of the cost of representation learning in contextual\nlinear bandit is still limited. In this paper, we take a systematic approach to\nthe problem and provide a comprehensive study through an instance-dependent\nperspective. We show that representation learning is fundamentally more complex\nthan linear bandits (i.e., learning with a given representation). In\nparticular, learning with a given set of representations is never simpler than\nlearning with the worst realizable representation in the set, while we show\ncases where it can be arbitrarily harder. We complement this result with an\nextensive discussion of how it relates to existing literature and we illustrate\npositive instances where representation learning is as complex as learning with\na fixed representation and where sub-logarithmic regret is achievable.", + "authors": "Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric", + "published": "2022-12-19", + "updated": "2022-12-19", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Stochastic contextual linear bandits (CLBs) focus on the interplay between exploration and exploitation when the reward f \u22c6(x, a) of each context-arm pair (x, a) \u2208X \u00d7A is a Preprint. linear function of a known feature map \u03c6\u22c6: X \u00d7A \u2192Rd\u03c6\u22c6 and an unknown parameter \u03b8\u22c6. CLBs have been widely studied due to their broad applicability and strong theoretical guarantees (e.g., Lattimore and Szepesv\u00b4 ari, 2020, and references therein). Unfortunately, the assumption that a realizable linear representation is known is often violated in real applications, where one only observes raw contextarm data and a suitable representation has to be learned online. Representation learning in CLBs relaxes this assumption by providing the learner with a set of representations \u03a6 = {\u03c6 : X \u00d7 A \u2192Rd\u03c6} (e.g., a neural network) among which a realizable one exists (i.e., \u03c6\u22c6\u2208\u03a6). Representation learning can be viewed as a special case of learning with a general realizable function class (i.e., F\u03a6 := {f(\u00b7, \u00b7) = \u03c6(\u00b7, \u00b7)T\u03b8 | \u03c6 \u2208 \u03a6, \u03b8 \u2208 Rd\u03c6}), which has been extensively studied in the literature (e.g., Agarwal et al., 2014; Foster and Rakhlin, 2020; Simchi-Levi and Xu, 2020) with algorithms achieving O( p AT log(|F\u03a6|)) worst-case regret, where |F\u03a6| is the covering number of F\u03a6. However, these algorithms do not explicitly leverage the bi-linear structure of the function class F\u03a6. Another direction is to leverage model-selection techniques. While generic modelselection approaches (e.g. Abbasi-Yadkori et al., 2020; Pacchiano et al., 2020; Cutkosky et al., 2021) can be directly applied when \u03a6 is \ufb01nite, more specialized techniques can be used when \u03a6 has additional structure (e.g., nested features (Foster et al., 2019)). 
Interestingly, some of these algorithms (e.g., Foster et al., 2019; Cutkosky et al., 2021; Ghosh et al., 2021) achieve regret guarantees matching the performance of the best representation in the set, up to a representation learning cost that depends on the number of representations |\u03a6|, the problem horizon T , or other quantities speci\ufb01c to the structure of \u03a6. Nonetheless, these results are worst-case in nature and general model-selection algorithms are limited by an unavoidable \u2126( \u221a T) regret (Pacchiano et al., 2020), which may hinder them from fully exploiting the structure of \u03a6 and achieve instance-optimal performance (e.g., logarithmic regret). Alternatively, Papini et al. (2021) and Tirinzoni et al. (2022) proposed specialized representation learning algorithms that exploit the bi-linear structure of F\u03a6 to obtain the instance-dependent regret bound of the best unknown realizable representation up to a logarithmic factor in |\u03a6|. \fOn the Complexity of Representation Learning in Contextual Linear Bandits Furthermore, they showed that constant regret is achievable (i.e., after a \ufb01nite time \u03c4 the algorithm only plays optimal arms) when a realizable representation satis\ufb01es a certain spectral property. However, these results rely on the strong assumption that either all the representations in \u03a6 are realizable or any misspeci\ufb01ed representation can be identi\ufb01ed by playing any sequence of arms. In this paper, we focus on the following question: What is the cost of representation learning compared to a CLB with a given representation? In order to address this question, we \ufb01rst provide a systematic analysis of representation learning in CLBs through an instance-dependent lens. By specializing existing results, we derive an instance-dependent lower bound on the regret of any \u201cgood\u201d representation learning algorithm which shows that the asymptotic regret must be at least C(f \u22c6, F\u03a6) log(T ), where C(f \u22c6, F\u03a6) is a complexity measure depending both on the reward function f \u22c6and the given set of representations \u03a6. Moreover, this complexity is tight, as there exist algorithms attaining C(f \u22c6, F\u03a6) log(T ) regret in the large T regime. This instance-dependent view allows us to have a more \ufb01ne grained comparison to CLBs with a given representation, thus providing insights on the complexity of representation learning that may remain \u201chidden\u201d in worst-case studies. Leveraging this lower bound we are then able to derive the following results: (1) We show that the regret of representation learning is never smaller than the regret of learning with the worst realizable representation in the set, i.e., C(f \u22c6, F\u03a6) \u2265sup\u03c6\u2208\u03a6,realizable C(f \u22c6, F{\u03c6}). This reveals a fundamental limit to representation learning, showing that it is impossible to adapt to representations with better complexity. Surprisingly, this result holds even for instances f \u22c6where all representations \u03c6 \u2208\u03a6 are realizable. Indeed, this is due to a subtle but crucial effect of representation learning: as in general all representations \u03c6 \u2208\u03a6 may be misspeci\ufb01ed for some of the reward functions f \u2032 \u2208F\u03a6, an algorithm needs to be robust to such misspeci\ufb01cation and it cannnot fully adapt to cases that are favorable for some representations. 
(2) We further strengthen this result by showing examples where the inequality is strict and the gap arbitrarily large. In particular, we construct instances where all representations are realizable and have small dimensionality and yet the regret can be as large as learning with \u201ctabular\u201d features assigning a distinct dimension to each context-arm pair. (3) We characterize favorable instances where misspeci\ufb01ed representations in \u03a6 can be discarded without increasing the regret so that C(f \u22c6, F\u03a6) = sup\u03c6\u2208\u03a6,realizable C(f \u22c6, F{\u03c6}). Finally, we instantiate our analysis in widely studied representation structures (e.g., tabular, nested features, features with spectral properties, and the special case where all representations are realizable) and provide novel insights on the complexity of representation learning in these settings. 2 Preliminaries We consider a stochastic contextual bandit problem with a \ufb01nite set of contexts X and a \ufb01nite set of arms A. Let X := |X| and A := |A|. At each time step t \u2208N, the learner \ufb01rst observes a context xt \u2208X drawn i.i.d. from a distribution \u03c11, it selects an arm at \u2208A, and it receives a scalar reward drawn from a Gaussian distribution with mean f \u22c6(xt, at) and unit variance. Let \u03a6 be a set of representations, where each \u03c6 \u2208\u03a6 is a d\u03c6-dimensional feature map \u03c6 : X \u00d7 A \u2192Rd\u03c6. We de\ufb01ne the associated function class F\u03a6 := {f(\u00b7, \u00b7) = \u03c6(\u00b7, \u00b7)T\u03b8 | \u03c6 \u2208\u03a6, \u03b8 \u2208Rd\u03c6}. The set \u03a6 and function class F\u03a6 are realizable when: Assumption 1 (Realizability). There exist \u03c6\u22c6\u2208\u03a6 and \u03b8\u22c6\u2208Rd\u22c6, where d\u22c6:= d\u03c6\u22c6, such that f \u22c6(x, a) = \u03c6\u22c6(x, a)T\u03b8\u22c6 \u2200x \u2208X, a \u2208A. This assumption is required only for a representation \u03c6\u22c6\u2208 \u03a6 (which is said to be realizable), while, for any \u03c6 \u0338= \u03c6\u22c6, the approximation error maxx,a |f \u22c6(x, a)\u2212\u03c6(x, a)T\u03b8| may be non-zero for any \u03b8, meaning that f \u22c6cannot be approximated as a linear function of \u03c6. In this case, we shall say that representation \u03c6 is misspeci\ufb01ed. Learning problem. We consider the problem of (bilinear) representation learning for regret minimization. De\ufb01nition 1 (Representation learning problem (f \u22c6, F\u03a6)). Consider an unknown stochastic contextual bandit problem with reward function f \u22c6. The learner is provided only with a set of representations \u03a6 (equiv. function class F\u03a6) satisfying Assumption 1 (\u03c6\u22c6unknown) and it aims at minimizing the cumulative regret over T steps, RT (f \u22c6) := T X t=1 \u0012 max a\u2208A f \u22c6(xt, a) \u2212f \u22c6(xt, at) \u0013 . (1) When \u03a6 = {\u03c6\u22c6}, the learning problem (f \u22c6, F{\u03c6\u22c6}) is known as stochastic contextual linear bandit (CLB), where the learner knows the realizable representation \u03c6\u22c6, while in representation learning the learner needs to learn within the realizable non-linear function class F\u03a6. Note also that \u03a6 may be an in\ufb01nite uncountable set. Notation We use M \u2020 to denote the pseudo-inverse of a matrix M \u2208Rn\u00d7m, while Im(M) and Ker(M) denote its column and null spaces, respectively. For a vector v \u2208Rd and a matrix M \u2208Rd\u00d7d, we let \u2225v\u22252 M := vTMv. 
We use \u03c0\u22c6 f \u22c6(x) := arg maxa\u2208A f \u22c6(x, a) to denote the optimal 1We assume \u03c1 to be full-support over X w.l.o.g. \fAndrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric arm for context x when facing a problem with reward f \u22c6. We assume \u03c0\u22c6 f \u22c6(x) to be unique for all x. We de\ufb01ne the sub-optimality gap of arm a \u2208A for context x \u2208X as \u2206f \u22c6(x, a) := f \u22c6(x, \u03c0\u22c6 f \u22c6(x)) \u2212f \u22c6(x, a). Note that, under Assumption 1, we have \u2206f \u22c6(x, a) = z\u22c6 \u03c6\u22c6(x, a)T\u03b8\u22c6, where we call z\u22c6 \u03c6(x, a) := \u03c6(x, \u03c0\u22c6 f \u22c6(x))\u2212\u03c6(x, a) the feature gap. We will often use a matrix notation for all quantities. We denote by f \u22c6\u2208RXA a vectorized reward function and by D\u03b7 := diag({\u03b7(x, a)}x\u2208X ,a\u2208A) the XA\u00d7XA matrix containing a function \u03b7 : X \u00d7 A \u2192[0, \u221e). For any \u03c6 \u2208\u03a6, let F\u03c6 \u2208RXA\u00d7d\u03c6 be the matrix containing the feature vectors {\u03c6(x, a)}x\u2208X ,a\u2208A as rows and V\u03b7(\u03c6) := F T \u03c6 D\u03b7F\u03c6 = P x,a \u03b7(x, a)\u03c6(x, a)\u03c6(x, a)T. Note that f \u22c6= F\u03c6\u22c6\u03b8\u22c6. Using this notation, \u2225f \u22c6\u2212F\u03c6\u03b8\u22252 D\u03b7 is exactly the mean square error of the function \u03c6(\u00b7, \u00b7)T\u03b8 in predicting f \u22c6when the learner has \u03b7(x, a) samples from each (x, a). We de\ufb01ne \u03b8\u22c6 \u03b7(\u03c6) := arg min\u03b8\u2208Rd\u03c6 \u2225f \u22c6\u2212F\u03c6\u03b8\u22252 D\u03b7 as the best \ufb01t for the reward parameter using representation \u03c6. By standard regression theory, it is easy to show that \u03b8\u22c6 \u03b7(\u03c6) = V\u03b7(\u03c6)\u2020 P x,a \u03b7(x, a)\u03c6(x, a)f \u22c6(x, a). Similarly, the quantity \u2225f \u22c6\u2212F\u03c6\u03b8\u22c6 \u03b7(\u03c6)\u22252 D\u03b7 is related to the misspeci\ufb01cation of representation \u03c6: it is zero for all \u03b7 if \u03c6 is realizable, while it is positive for at least one \u03b7 if \u03c6 is misspeci\ufb01ed. 3 Instance-dependent Regret Lower Bound We start by stating a novel asymptotic regret lower bound for the representation learning problem (f \u22c6, F\u03a6) (see Definition 1). Let A be any bandit strategy, i.e., a sequence {At}t\u22651 where each At : (X \u00d7 A \u00d7 R)t\u22121 \u00d7 X \u2192A is a measurable mapping w.r.t. the history up to time step t \u22121. We say that a A is uniformly good on a function class F if EA f \u0002 RT (f) \u0003 = o(T \u03b1) for any \u03b1 > 0 and any f \u2208F2, where EA f denotes the expectation under algorithm A in a contextual bandit problem with reward function f \u2208F. Theorem 1. Let A be a uniformly good strategy on the class F\u03a6 and suppose that \u03c0\u22c6 f \u22c6is unique. Then, lim inf T \u2192\u221e EA f \u22c6 \u0002 RT (f \u22c6) \u0003 log(T ) \u2265C(f \u22c6, F\u03a6), where C(f \u22c6, F\u03a6) is the value of the optimization problem inf {\u03b7(x,a)}\u22650 X x\u2208X X a\u2208A \u03b7(x, a)\u2206f \u22c6(x, a) s.t. inf \u03c6\u2208\u03a6 min x,a\u0338=\u03c0\u22c6 f\u22c6(x) \u0010 \u2225f \u22c6\u2212F\u03c6\u03b8\u22c6 \u03b7(\u03c6)\u22252 D\u03b7 + c\u03b7 x,a(f \u22c6, \u03c6) \u0011 \u22652, 2Our analysis easily extends to the weaker notion of uniformly good algorithm requiring O(T \u03b1) regret on all f \u2208F only for some \u03b1 \u2208(0, 1). 
In this case, the stated lower bound remains the same as in Theorem 1 up to a factor 1\u2212\u03b1 (Tirinzoni et al., 2021). with c\u03b7 x,a(f \u22c6, \u03c6) = \uf8f1 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f3 0 if z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7(\u03c6) \u22640, 0 if z\u22c6 \u03c6(x, a) / \u2208Im(V\u03b7(\u03c6)), (z\u22c6 \u03c6(x,a)T\u03b8\u22c6 \u03b7(\u03c6))2 \u2225z\u22c6 \u03c6(x,a)\u22252 V\u03b7(\u03c6)\u2020 otherwise. The proof (see Appendix B) builds on the asymptotic regret lower bound for contextual bandits with general function classes (a.k.a. structured bandits), which can be extracted as a special case of the one for Markov decision processes (Ok et al., 2018). While Ok et al. (2018) provide an implicit complexity measure C(f, F) for learning any instance f when knowing that it belongs to a given class F, we derive a more explicit complexity measure C(f \u22c6, F\u03a6) for representation learning. The general lower bound follows from a fundamental result stating that any uniformly good algorithm must guarantee PT t=1 EA f \u22c6[(f \u22c6(xt, at) \u2212 f(xt, at))2] \u22652 log(T ) as T \u2192\u221efor any alternative reward f \u2208F that induces a different optimal policy than \u03c0\u22c6 f \u22c6. Our explicit complexity follows by leveraging a novel reformulation of the set of such alternative rewards for representation learning which allows us to derive a closedform expression of the above general condition. As common in existing instance-dependent lower bounds (e.g., Combes et al., 2017; Ok et al., 2018), the complexity C(f \u22c6, F\u03a6) is the value of an optimization problem which seeks an allocation of samples \u03b7 minimizing the regret while collecting suf\ufb01cient information about the instance f \u22c6. Such an information constraint is the peculiar component in our setting as it formally establishes the minimal level of exploration that any uniformly good representation learning algorithm must guarantee. In particular, for any representation \u03c6 \u2208\u03a6, context x \u2208X, and sub-optimal action a \u0338= \u03c0\u22c6 f \u22c6(x), any feasible allocation \u03b7 must guarantee \u2225f \u22c6\u2212F\u03c6\u03b8\u22c6 \u03b7(\u03c6)\u22252 D\u03b7 | {z } misspeci\ufb01cation + c\u03b7 x,a(f \u22c6, \u03c6) | {z } sub-optimality \u22652. (2) Here we recognize the contribution of two terms. The \ufb01rst one is related to the misspeci\ufb01cation error of representation \u03c6 induced by \u03b7 (i.e., the minimum achievable mean square error when linearly estimating f \u22c6with \u03c6 using samples collected according to \u03b7). It is trivially zero for any \u03b7 if \u03c6 is realizable. The second term is related to the complexity for learning that a is a sub-optimal action for context x when using representation \u03c6 to estimate the reward. Interestingly, c\u03b7 x,a(f \u22c6, \u03c6) resembles the complexity term appearing in the existing lower bound for a CLB problem with given representation \u03c6 (e.g., Hao et al., 2020; Tirinzoni et al., 2020). The constraint requires the sum of these two terms to be large. This means that any feasible allocation \u03b7, and thus any uniformly good representation learning algorithm, must either learn that \u03c6 is misspeci\ufb01ed or that a is suboptimal in context x under the best \ufb01t of f \u22c6with represen\fOn the Complexity of Representation Learning in Contextual Linear Bandits tation \u03c6. 
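To make the two quantities in constraint (2) concrete, the following sketch computes the misspecification term and c^η_{x,a}(f⋆, φ) for a given allocation η using plain linear algebra. It is only an illustration of the definitions in Theorem 1 (the function name and the NumPy formulation are ours), not part of any algorithm.

```python
import numpy as np

def constraint_terms(F_phi, f_star, eta, pair, opt_pair):
    """Misspecification and sub-optimality terms of constraint (2) for one representation
    and one sub-optimal context-arm pair.

    F_phi    : (X*A, d) matrix whose rows are the features phi(x, a)
    f_star   : (X*A,)  vectorized reward function
    eta      : (X*A,)  allocation of samples over context-arm pairs
    pair     : row index of the sub-optimal pair (x, a)
    opt_pair : row index of (x, pi*(x))
    """
    V = F_phi.T @ (eta[:, None] * F_phi)                    # V_eta(phi)
    V_pinv = np.linalg.pinv(V)
    theta = V_pinv @ F_phi.T @ (eta * f_star)               # theta*_eta(phi), best weighted fit
    misspec = np.sum(eta * (f_star - F_phi @ theta) ** 2)   # ||f* - F_phi theta||^2_{D_eta}
    z = F_phi[opt_pair] - F_phi[pair]                       # feature gap z*_phi(x, a)
    in_image = np.allclose(V @ V_pinv @ z, z)               # is z in Im(V_eta(phi))?
    gap_hat = z @ theta
    if gap_hat <= 0 or not in_image:
        c = 0.0
    else:
        c = gap_hat ** 2 / (z @ V_pinv @ z)                 # c^eta_{x,a}(f*, phi)
    return misspec, c                                       # feasibility requires misspec + c >= 2
```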
We now discuss relevant possible cases to better undestand the complexity for achieving so. Case 1. \u03c6 is realizable and z\u22c6 \u03c6(x, a) \u2208Im(V\u03b7(\u03c6)). In this case, the misspeci\ufb01cation term in (2) is zero for any \u03b7 and z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7(\u03c6) = \u2206f \u22c6(x, a) > 0 by realizability, de\ufb01nition of z\u22c6 \u03c6, and sub-optimality of a. From (2), this implies that \u03b7 must guarantee that \u2225z\u22c6 \u03c6(x, a)\u22252 V\u03b7(\u03c6)\u2020 \u2264 \u2206f \u22c6(x, a)2/2. It turns out that this is exactly the same complexity measure we have for learning that (x, a) is suboptimal in the CLB (f \u22c6, F{\u03c6}). Since \u2225z\u22c6 \u03c6(x, a)\u2225V\u03b7(\u03c6)\u2020 represents the uncertainty that allocation \u03b7 has on the rewards of (x, a) and (x, \u03c0\u22c6 f \u22c6(x)), this condition simply requires any uniformly good algorithm to reduce such uncertainty below a factor of the gap of (x, a). Case 2. \u03c6 is realizable and z\u22c6 \u03c6(x, a) / \u2208Im(V\u03b7(\u03c6)). In this case, both the misspeci\ufb01cation term and c\u03b7 x,a(f \u22c6, \u03c6) are zero. From (2), this means that \u03b7 is infeasible. This is intuitive since, when the feature gap z\u22c6 \u03c6(x, a) is not in the column space of the design matrix V\u03b7(\u03c6), the allocation \u03b7 does not provide any information about arm a in the representation space of \u03c6, and thus it cannot learn whether a is sub-optimal or not. Therefore, any feasible \u03b7 must guarantee z\u22c6 \u03c6(x, a) \u2208Im(V\u03b7(\u03c6)) for all (x, a) when \u03c6 is realizable, i.e., any good algorithm must explore all feature directions. This has an interesting implication: when span({\u03c6(x, a)}x,a) = d\u03c6, any feasible design matrix must be invertible. This result was already proved by Lattimore and Szepesv\u00b4 ari (2017) in the linear bandit setting using an ad-hoc derivation, while here we establish it in greater generality as a consequence of our lower bound. Case 3. \u03c6 is misspeci\ufb01ed and c\u03b7 x,a(f \u22c6, \u03c6) = 0. This can happen in two cases: either z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7(\u03c6) \u22640, which means that the sub-optimality gap of (x, a) cannot be accurately estimated using representation \u03c6, or z\u22c6 \u03c6(x, a) / \u2208 Im(V\u03b7(\u03c6)). In both cases, a feasible \u03b7 must make the \ufb01rst term in (2) large, i.e., it must learn that \u03c6 is misspeci\ufb01ed. Interestingly, this implies that, differently from the realizable case, a feasible allocation does not need to explore the whole feature space for \u03c6 (e.g., it does not have to make the design matrix V\u03b7(\u03c6) invertible). This is particularly relevant when \u03c6 is high-dimensional, as identifying the misspeci\ufb01cation may be easier than covering all dimensions. Case 4. \u03c6 is misspeci\ufb01ed and c\u03b7 x,a(f \u22c6, \u03c6) > 0. This is the case with most freedom: a feasible allocation can either learn that \u03c6 is misspeci\ufb01ed or that (x, a) is sub-optimal. As we shall see in Section 4.3, this \ufb02exibility may be exploited to \ufb01nd allocations that manage to \u201cdiscard\u201d representations without signi\ufb01cantly affecting the regret. 3.1 Known-representation Case As expected, when instantiating Theorem 1 in the standard CLB (f \u22c6, F{\u03c6\u22c6}), we recover the existing lower bound for such a setting (Hao et al., 2020; Tirinzoni et al., 2020).3 Corollary 1. 
Let span({\u03c6\u22c6(x, a)}x,a) = d\u22c6. In the CLB (f \u22c6, F{\u03c6\u22c6}), the complexity C(f \u22c6, F{\u03c6\u22c6}) of Theorem 1 is inf \u03b7:V\u03b7(\u03c6\u22c6)\u22121exists X x,a \u03b7(x, a)\u2206f \u22c6(x, a) s.t. min x,a\u0338=\u03c0\u22c6 f\u22c6(x) \u2206f \u22c6(x, a)2 \u2225z\u03c6\u22c6(x, a)\u22252 V\u03b7(\u03c6\u22c6)\u22121 \u22652. Comparing this result with Theorem 1, we notice that adding one representation to the set \u03a6 implies adding one constraint to the optimization problem, hence making the problem harder. On the positive side, Theorem 1 does not impose the strong constraint of Corollary 1 for every \u03c6 \u2208\u03a6, which would require any good algorithm to learn an optimal action at every context for all representations. In fact, it may be possible to leverage the misspeci\ufb01cation of a representation \u03c6 to lower the additional complexity w.r.t. the one imposed in the realizable case (see Equation 2 and, e.g., Case 4 above). In Section 4, we further elaborate on how the complexity of representation learning is impacted by these elements and how it compares with the complexity of CLBs when given a realizable representation. 3.2 The Lower Bound is Attainable It is known that instance-dependent lower bounds in the general form of Ok et al. (2018) can be attained. Since Theorem 1 is an instantiation of such a result, this implies that C(f \u22c6, F\u03a6) is a tight complexity measure for representation learning as there exist algorithms matching it. Proposition 1. There exists an algorithm A (e.g., Dong and Ma, 2022) such that, for any representation learning problem (f \u22c6, F\u03a6), lim sup T \u2192\u221e EA f \u22c6 \u0002 RT (f \u22c6) \u0003 log(T ) \u2264C(f \u22c6, F\u03a6). While, to the best of our knowledge, the algorithm of Dong and Ma (2022) is the only one attaining instance-optimal complexity in contextual bandits with general function classes, it is actually easy to adapt existing strategies for non-contextual bandits to our setting (Combes et al., 2017; Degenne et al., 2020; Jun and Zhang, 2020). In particular, the algorithm of Jun and Zhang (2020) would obtain an anytime regret of order O(C(f \u22c6, F\u03a6) log(T ) + log log(T )). This shows that C(f \u22c6, F\u03a6) is also a relevant \ufb01nite-time complexity measure (and not only asymptotic), up to a O(log log(T )) term depending on other instance-dependent factors. 3Existing lower bounds are derived under the assumption that the full set of features {\u03c6\u22c6(x, a)}x,a span Rd\u22c6. This is without loss of generality since one can always remove redundant features by computing the low-rank SVD of F\u03c6\u22c6. \fAndrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric 4 Complexity of Representation Learning We now provide a series of results to better characterize the instance-dependenet complexity C(f \u22c6, F\u03a6) of representation learning in comparison with the complexity C(f \u22c6, F{\u03c6\u22c6}) of the single-representation CLB problem. 4.1 Representation learning cannot be easier than learning with a given representation We \ufb01rst prove that the complexity of learning with a single representation is a lower bound for representation learning. Proposition 2. For any \u03a6 such that f \u22c6\u2208F\u03a6, C(f \u22c6, F\u03a6) \u2265 sup\u03c6\u2208\u03a6:f \u22c6\u2208F{\u03c6} C(f \u22c6, F{\u03c6}). 
This result leverages the instance-dependent nature of the complexity derived in Theorem 1 to compare representation learning with a single-representation CLB for every reward function f \u22c6. This is in contrast with a worst-case analysis, where we would compare the two approaches w.r.t. their respective worst-case reward functions. Whenever there is only one realizable representation \u03c6\u22c6in \u03a6, the result is intuitive, since adding misspeci\ufb01ed representations to \u03a6 cannot make the problem any easier. Nonetheless, Proposition 2 has another, less obvious, implication: representation learning is at least as hard as the hardest CLB (f \u22c6, F{\u03c6}) among all realizable representations. More surprisingly, this result holds even when all the representations in \u03a6 are realizable for f \u22c6. In fact, this is the unavoidable price for an algorithm to be robust (i.e., uniformly good) to any other reward function f \u2032 \u2208F\u03a6 for which some representation \u03c6 may not be realizable and it de\ufb01nes an intrinsic limit to the level of adaptivity to f \u22c6that we can expect in representation learning (see Section 5 for a discussion on how this result relates to existing literature). 4.2 There exist instances where representation learning is strictly harder than learning with a given representation After establishing that representation learning cannot be easier than CLBs, a natural question is: how much harder can it be? Here we show that, for any reward function f \u22c6, there exists a set of representations \u03a6 with f \u22c6\u2208F\u03a6 such that any uniformly good representation learning algorithm must suffer regret scaling linearly with the number of contexts and actions, whereas the regret of learning with any realizable representation in the set only scales with the feature dimensionality d \u226aXA. Proposition 3. Let X, A \u22651 and 2 \u2264d \u2264XA. Fix an arbitrary instance f \u22c6: X \u00d7 A \u2192R and denote by \u2206min its minimum positive gap. Then, there exists a set of d-dimensional representations \u03a6 of cardinality |\u03a6| = \u2308X(A\u22121) d\u22121 \u2309such that f \u22c6\u2208\u2229\u03c6\u2208\u03a6F{\u03c6} and C(f \u22c6, F\u03a6) = X x\u2208X X a\u0338=\u03c0\u22c6 f\u22c6(x) 2 \u2206f \u22c6(x, a). Moreover, for any \u03c6 \u2208\u03a6, C(f \u22c6, F{\u03c6}) \u22642(d \u22121) \u2206min . Note that the complexity C(f \u22c6, F\u03a6) of the representation learning problem built in Proposition 3 is exactly the complexity for learning the contextual bandit problem f \u22c6when ignoring the set of representations \u03a6, i.e., the (unstructured) tabular setting4. Therefore, Proposition 3 proves that there exist \u201chard\u201d representation learning problems whose complexity is the same as learning without any prior knowledge about f \u22c6. While this may be expected as the set \u03a6 is constructed to be worst-case for f \u22c6, the second statement of Proposition 3 is more surprising. In fact, \u03a6 is constructed using only realizable representations for f \u22c6with dimension d \u226aXA. As such, the complexity C(f \u22c6, F{\u03c6}) for learning with any \u03c6 \u2208\u03a6 only scales (in the worst case) with d and it can be arbitrarily smaller than C(f \u22c6, F\u03a6). Remark 1. 
Rather than constructing a single hard instance (f \u22c6, F\u03a6) where representation learning is dif\ufb01cult, we prove that for any reward function f \u22c6we can \ufb01nd a set \u03a6 such that (f \u22c6, F\u03a6) is dif\ufb01cult. Hence, representation learning can be dif\ufb01cult regardless of the reward function. 4.3 There exist instances where representation learning is not harder than learning with a given representation Unlike the previous hardness results, here we show that there exist favorable instances (f \u22c6, F\u03a6) where the complexity of representation learning is the same as the one of a CLB with a realizable representation in \u03a6. This means that representation learning comes \u201cfor free\u201d on such instances. Proposition 4. Let \u03b7\u22c6(x, a) = 1 \u0010 a = \u03c0\u22c6 f \u22c6(x) \u0011 . Let \u03a6 contain a unique realizable representation \u03c6\u22c6and suppose that there exists \u03b5 > 0 such that, for all \u03c6 \u2208\u03a6 with f \u22c6 / \u2208F{\u03c6}, \u2225f \u22c6\u2212F\u03c6\u03b8\u22c6 \u03b7\u22c6(\u03c6)\u22252 D\u03b7\u22c6 \u2265\u03b5. Then, C(f \u22c6, F\u03a6) = C(f \u22c6, F{\u03c6\u22c6}). Intuitively, the condition on \u03a6 in Proposition 4 requires every misspeci\ufb01ed representation to have a minimum positive mean square error in \ufb01tting f \u22c6when samples are collected by an optimal policy. This means that a learner is able to 4Since the context-action space is \ufb01nite, we can always run a trivial variant of UCB (Auer et al., 2002a) that estimates the reward of each (x, a) independently and achieve regret Ef\u22c6[RT (f \u22c6)] \u2272P x\u2208X P a\u0338=\u03c0\u22c6 f\u22c6(x) log(T ) \u2206f\u22c6(x,a). This is also the instance-optimal rate of the unstructured setting (Ok et al., 2018). \fOn the Complexity of Representation Learning in Contextual Linear Bandits detect all misspeci\ufb01ed representation by playing optimal actions, i.e., while suffering zero regret, hence making representation learning costless in the long run. Consider the following scheme as an example of how a simple strategy can leverage the condition Proposition 4. Assuming \ufb01nite \u03a6, take any algorithm with sub-linear regret on the class F\u03a6, e.g., any of the algorithms for general function classes (e.g., Foster and Rakhlin, 2020; Simchi-Levi and Xu, 2020). Run the algorithm in combination with an elimination rule for misspeci\ufb01ed representations (e.g., the one proposed by Tirinzoni et al. (2022)) until only one representation remains active. When this happens, switch to playing an instance-optimal algorithm for CLBs on the remaining representation. It is easy to see that, since the starting algorithm has sub-linear regret, it plays optimal actions linearly often and, thus, thanks to the assumption in Proposition 4, it collects suf\ufb01cient information to eliminate all misspeci\ufb01ed representations in a \ufb01nite time. This means that the algorithm suffers only constant regret for eliminating misspeci\ufb01ed representations, while it never discards \u03c6\u22c6with high probability. After that, playing an instance-optimal strategy on \u03c6\u22c6implies that the total regret is roughly C(f \u22c6, F{\u03c6\u22c6}) log(T ) in the long run, which is exactly the same regret we would have by running the instance optimal algorithm on \u03c6\u22c6from the very beginning. 
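The two-phase scheme discussed in the previous paragraph can be summarized as follows. This is only a schematic rendering: sublinear_regret_algorithm, eliminate_misspecified and instance_optimal_clb are placeholders for the referenced components (an algorithm for general function classes, an elimination rule such as the one of Tirinzoni et al. (2022), and an asymptotically optimal CLB strategy), not concrete implementations.

```python
def representation_learning(Phi, T, env):
    """Schematic two-phase strategy described above (placeholders, not a concrete implementation)."""
    active = set(Phi)
    base = sublinear_regret_algorithm(Phi)   # any sub-linear-regret algorithm on the class F_Phi
    optimal = None                           # instance-optimal CLB strategy, instantiated later
    for t in range(T):
        x = env.observe_context()
        if optimal is None:
            a = base.act(x)
            r = env.play(a)
            base.update(x, a, r)
            # drop representations whose misspecification has been detected
            active = eliminate_misspecified(active, base.history)
            if len(active) == 1:
                # switch phase: one representation remains active (w.h.p. the realizable one)
                optimal = instance_optimal_clb(active.pop())
        else:
            a = optimal.act(x)
            optimal.update(x, a, env.play(a))
```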
5 Speci\ufb01c Representation Structures We now discuss some of the speci\ufb01c representation learning problems studied in the literature, while providing additional insights on their instance-dependent complexity. 5.1 Trivial Representations It is well known that the realizability of F\u03a6 (Assumption 1) is crucial for ef\ufb01cient learning, as sub-linear regret may be impossible otherwise (Lattimore et al., 2020). In practice, when little prior knowledge about the reward function f \u22c6 is available to design a suitable class F\u03a6, a common technique is to reduce the approximation error by expanding \u03a6, in the hope of ensuring realizability. When the context-arm pairs are \ufb01nite, a trivial realizable representation can always be constructed as the canonical basis of RXA. Let {(xi, ai)}N i=1 be an enumeration of all N = XA context-arm pairs. Then, we can de\ufb01ne the XAdimensional features \u00af \u03c6 as \u00af \u03c6i(x, a) := 1 (x = xi, a = ai). It is easy to see that f \u22c6= F\u00af \u03c6\u03b8 for \u03b8i = f \u22c6(xi, ai). A natural idea to build a class for representation learning is thus to start from a set \u03a6 of \u201cgood\u201d features (e.g., low dimensional or with nice spectral properties) and then add the trivial representation \u00af \u03c6 so as to ensure realizability. The hope is that a good algorithm would still be able to leverage the \u201cgood\u201d representations to achieve better results whenever possible. The following result shows that this is impossible: every uniformly good algorithm must pay the complexity of learning without any prior knowledge on f \u22c6as far as \u00af \u03c6 is in the set of candidate representations. Proposition 5. Let \u03a6 be any set of representations (not necessarily realizable for f \u22c6). Then, C(f \u22c6, F\u03a6\u222a{ \u00af \u03c6}) = X x\u2208X X a\u0338=\u03c0\u22c6 f\u22c6(x) 2 \u2206f \u22c6(x, a). As already noted in the discussion of Proposition 3, the complexity C(f \u22c6, F\u03a6\u222a{ \u00af \u03c6}) of Proposition 5 is equivalent to the complexity of learning f \u22c6without any prior knowledge. Hence, no uniformly good algorithm can leverage the representations in \u03a6 when \u00af \u03c6 is also considered, no matter how good they are. For instance, the set \u03a6 could even be a singleton {\u03c6\u22c6} containing a realizable representation of dimension d \u226aXA, and still representation learning over the set {\u03c6\u22c6, \u00af \u03c6} so as to achieve regret scaling with the properties of \u03c6\u22c6is impossible. A similar result was derived by R\u00b4 eda et al. (2021), who showed that learning an instance f \u22c6which is known to be approximately linear in given features \u03c6 without knowing the amount of misspeci\ufb01cation is as complex as learning f \u22c6without any prior knowledge. 5.2 Nested Features A popular design choice for representation learning is to be build a set of nested features (Foster et al., 2019; Pacchiano et al., 2020; Cutkosky et al., 2021; Ghosh et al., 2021) \u03a6 = {\u03c61, . . . , \u03c6N} of increasing dimension (i.e., such that di := d\u03c6i < di+1 := d\u03c6i+1 for all i \u2208[N \u22121]) that satisfy the following property: for all i \u2208[N \u22121] and (x, a), the \ufb01rst di components of \u03c6i+1(x, a) are equal to \u03c6i(x, a). Let i\u22c6\u2208[N] be such that \u03c6i\u22c6is the realizable representation of smallest dimension (which exists by Assumption 1). 
The nestedness implies that \u03c6i is realizable for all i \u2265i\u22c6. Several approaches have been proposed for this setting. While Foster et al. (2019) designed a strategy with regret e O(\u221adi\u22c6T + T 3/4), model-selection algorithms (e.g., Pacchiano et al., 2020; Cutkosky et al., 2021) achieve regret of order e O(poly(N)\u221adi\u22c6T). Interestingly, Ghosh et al. (2021) obtained e O(\u221adi\u22c6T) regret, that is of the same order as the worst-case regret achievable by, e.g., LinUCB on the (unknown) smallest realizable representation \u03c6i\u22c6. We show that things are considerably more complex from an instance-dependent perspective. Proposition 6. Let \u03a6 be a set of N nested features and f \u22c6\u2208F\u03a6. Then, C(f \u22c6, F\u03a6) = C(f \u22c6, F{\u03c6N}). (3) This result claims that representation learning on a set of nested features \u03a6 is as dif\ufb01cult as a CLB problem with the \fAndrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric realizable representation of largest dimension (\u03c6N). This is somehow surprising since it essentially states that representation learning over nested features is useless, and one may simply learn with the highest dimensional representation (which is known to be realizable) from the very beginning. The intuition is that, while the learner knows \u03c6N to be realizable for any reward function (by assumption), it does not know whether this is true for \u03c6N\u22121, \u03c6N\u22122, etc. Even if, say, \u03c6N\u22121 is realizable for f \u22c6, there might be another reward function f \u2032 where this is not true. Any good algorithm must explore suf\ufb01ciently to eventually discriminate between f \u22c6and f \u2032 in the long run, and it turns out that the complexity for doing so is exactly C(f \u22c6, F{\u03c6N}), hence making any \ufb01ner level of adaptivity impossible. Moreover, we prove in Appendix D that there exist problems with di\u22c6\u226adN where C(f \u22c6, F{\u03c6N}) \u2273dN but C(f \u22c6, F{\u03c6i\u22c6}) \u2272 di\u22c6. This implies that, in the worst-case, any uniformly good algorithm must suffer a dependence on the dimensionality of the largest representation, regardless of the fact that a smaller realizable representation is nested into it. Note that this does not contradict existing results for modelselection (e.g., Foster et al., 2019; Ghosh et al., 2021; Cutkosky et al., 2021). In fact, while they achieve a dependence on the worst-case regret of the best representation \u03c6i\u22c6, they also feature some representation learning cost which dominates in the long run. For instance, the bound of Ghosh et al. (2021) has a O(d2 N log(T )) additive term. Therefore, while some gains are possible in the small T regime (e.g., scaling with \u221adi\u22c6T instead of \u221adNT), in the long run the logarithmic term dominates and C(f \u22c6, F{\u03c6N}) log(T ) becomes the optimal complexity. 5.3 HLS Representations and Sub-logarithmic Regret Hao et al. (2020) and Papini et al. (2021) recently showed that in a CLB (f \u22c6, F{\u03c6}) it is possible to achieve constant regret when the given realizable representation \u03c6 satis\ufb01es a certain spectral condition. De\ufb01nition 2 (HLS representation). 
A representation \u03c6 is HLS for an instance f \u22c6if, for all x \u2208X and a \u0338= \u03c0\u22c6 f \u22c6(x), \u03c6(x, a) \u2208span({\u03c6(x, \u03c0\u22c6 f \u22c6(x))}x\u2208X ).5 Intuitively, a representation satisfying this property allows exploring the full feature space by playing an optimal policy. That is, playing optimal actions allows re\ufb01ning the reward estimates at all (x, a), even those that are not played. Interestingly, Papini et al. (2021) showed that this condition is both necessary and suf\ufb01cient to achieve constant regret. 5The original de\ufb01nition (Hao et al., 2020) requires the stronger condition span({\u03c6(x, \u03c0\u22c6 f\u22c6(x))}x\u2208X) = Rd\u03c6. This is because the authors assumed that span({\u03c6(x, a)}x,a) = Rd\u03c6. Here we state a generalization that works even without such an assumption. Theorem 2 (Papini et al. (2021)). Constant regret is achievable on an instance f \u22c6if, and only if, the learner is provided with a HLS realizable representation \u03c6\u22c6. When a realizable HLS representation \u03c6\u22c6is not known a-priori and one must perform representation learning, it is natural to ask whether such a strong result can still be achieved. Papini et al. (2021); Tirinzoni et al. (2022) showed that this is indeed the case under strong conditions on \u03a6: either 1) all the representation are realizable or 2) misspeci\ufb01ed representations are detectable by any policy (i.e., such that min\u03b8\u2208Rd\u03c6 \u2225f \u22c6\u2212F\u03c6\u03b8\u22252 D\u03b7 > 0 for any \u03b7). We now provide a necessary and suf\ufb01cient condition on the representations \u03a6 to allow C(f \u22c6, F\u03a6) = 0. This implies that, whenever such a condition is not met, any form of sub-logarithmic regret (e.g., constant) is impossible for any uniformly good algorithm. On the other hand, when the condition is met, sub-logarithmic regret is achievable (and it is achieved by the algorithm mentioned in Section 3.2).6 Proposition 7. A necessary and suf\ufb01cient condition for C(f \u22c6, F\u03a6) = 0 is that the following two properties hold for any \u03c6 \u2208\u03a6 such that min\u03b8\u2208Rd\u03c6 \u2225f \u22c6\u2212F\u03c6\u03b8\u22252 D\u03b7\u22c6= 0 and for all x \u2208X, a \u0338= \u03c0\u22c6 f \u22c6(x): 1. z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7\u22c6(\u03c6) > 0; 2. \u03c6(x, a) \u2208Im(V\u03b7\u22c6(\u03c6)). Proposition 7 can be read as follows. Any representation \u03c6 whose misspeci\ufb01cation is detectable by an optimal policy (i.e., such that min\u03b8\u2208Rd\u03c6 \u2225f \u22c6\u2212F\u03c6\u03b8\u22252 D\u03b7\u22c6> 0) does not bring any contribution to the regret lower bound, as already noted in Section 4.3. For any other representation \u03c6, the optimal policy must be able to detect that all sub-optimal pairs (x, a) of f \u22c6are indeed sub-optimal. This, in turns, requires \u03c6(x, a) to be in the span of V\u03b7\u22c6(\u03c6) (i.e., the optimal policy explores the direction \u03c6(x, a)) and that z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7\u22c6(\u03c6) > 0 (i.e., the best approximation to the gap of (x, a) using \u03c6 remains strictly positive). On the other hand, suppose that, for some \u03c6 \u2208\u03a6 with zero misspeci\ufb01cation under an optimal policy, one of the two properties in Proposition 7 does not hold. 
Then, if z\u22c6 \u03c6(x, a)T\u03b8\u22c6 \u03b7\u22c6(\u03c6) \u22640, (x, a) has higher reward than (x, \u03c0\u22c6 f \u22c6(x)) in the linear instance (\u03c6, \u03b8\u22c6 \u03b7\u22c6(\u03c6)), which means that it is impossible to learn its sub-optimality. Similarly, if \u03c6(x, a) / \u2208Im(V\u03b7\u22c6(\u03c6)), an optimal policy does not explore the direction \u03c6(x, a) at all, which means that it cannot estimate the corresponding reward. In both cases, it is necessary to repeatedly play at least some sub-optimal action, which implies that C(f \u22c6, F\u03a6) > 0 and sub-logarithmic is thus impossible. Perhaps surprisingly, an immediate consequence of Proposition 7 is that sub-logarithmic regret is impossible if \u03a6 contains at least one realizable non-HLS representation. 6The best algorithm mentioned in Section 3.2 achieves O(log log(T )) regret when \u03a6 satis\ufb01es Proposition 7. How to achieve constant regret in this setting remains an open question. \fOn the Complexity of Representation Learning in Contextual Linear Bandits Corollary 2. If there exists a realizable representation \u03c6 \u2208 \u03a6 which does not satisfy the HLS condition (De\ufb01nition 2), C(f \u22c6, F\u03a6) > 0 (i.e., sub-logarithmic regret is impossible). This implies that, even when \u03a6 contains only realizable representations and all but one are HLS, constant regret cannot be attained by any uniformly good algorithm. 5.4 Fully-Realizable Representations Another speci\ufb01c structure is when all representantions in \u03a6 are realizable for all reward functions of interest. Papini et al. (2021) proved that, in this case, a LinUCBbased algorithm can adapt to the best instance-dependent regret bound of a representation in \u03a6 (e.g., it achieves constant regret when at least one representation is HLS). While our Proposition 2 and Corollary 2 seem to contradict their result, it turns out that Papini et al. (2021) consider a simpler problem: they assume the learner to be aware of \u03a6 containing only realizable representations, while we consider the more general setting where it only knows that one of them is realizable. Intuitively, when the learner has such a strong prior knowledge, specialized strategies can be designed to achieve better results. However, such strategies would not be uniformly good on all problems in our class F\u03a6 as they do not account for misspeci\ufb01ed representations. Therefore, the price to pay for being robust to misspeci\ufb01cation is in general very large, no matter how \u201cgood\u201d \u03a6 is. As a complement to the results of Papini et al. (2021), in Appendix F we show that the instance-optimal complexity of representation learning with prior knowledge about full realizability is indeed never larger than the instanceoptimal complexity of a CLB with any representation in \u03a6, while there even exist cases where the former complexity is signi\ufb01cantly smaller. This makes the problem of fullyrealizable representation learning statistically \u201ceasier\u201d than CLBs, as opposed to our general setting (De\ufb01nition 1). 6 General Functions and Worst-case Regret While the instance-dependent viewpoint we considered so far allowed us to provide sharp insights on the complexity of representation learning, it is still asymptotic in nature and may \u201chide\u201d other phenomena happening in the \ufb01nite-time regime. 
Existing algorithms for general function classes (e.g., Foster and Rakhlin, 2020; Simchi-Levi and Xu, 2020) achieve O( p AT log(|F|)) regret when given an arbitrary class F. This is known to be optimal in the worst possible choice of F (Agarwal et al., 2012). When applied to representation learning with \ufb01nite |\u03a6|, i.e., to learn any instance f \u22c6\u2208F\u03a6 given F\u03a6, their regret bound reduces to O( p AT (log(|\u03a6|) + d)) and it is an open question whether this is optimal in the worst possible set \u03a6. A similar result was obtained by Moradipari et al. (2022). In particular, one might be wondering whether a polynomial dependence on the number of actions A is really unavoidable even when the learner is provided with a set of d-dimensional representations with d \u226aA. The question arises mostly because some model-selection algorithms (Cutkosky et al., 2021) achieve O( p dT log(A)) regret on this problem, with some extra dependences on other problem-independent variables, like |\u03a6| or T . Such bounds give hope that adapting to the worst-case complexity of a CLB with one of the realizable representations in \u03a6 may be possible at least in the small T regime. Once again, we show that this is impossible. We state a worst-case lower bound for representation learning proving that a polynomial depedence on the number of actions is unavoidable. Theorem 3. Let N \u22651, A \u22654 and d \u226512 log2(A). There exists a context distribution, a set of d-dimensional representations \u03a6 of size |\u03a6| = N over A arms, and a universal constant c > 0 such that, for any learning algorithm A and T \u2265max{\u230alog(dN)/ log(A)\u230b, d/ log2(A)}, sup f\u2208F\u03a6 EA f [RT (f)] \u2265c s T \u0012 d log2(A) + A \u0016log(dN) log(A) \u0017\u0013 . The proof of this result combines techniques used to derive two existing lower bounds: the \u2126( p dT log(A)) lower bound for CLB problems of He et al. (2022) and the \u2126( p AT log(|F|)) lower bound for general function classes of Agarwal et al. (2012). Differently from existing upper bounds that scale as \u2126( \u221a Ad), our lower bound decouples the polynomial dependencies on A and d. Whether this is matchable by a specialized algorithm for representation learning, or whether existing algorithm for general function classes are already worst-case optimal in our setting, remains an intriguing open question. 7 Discussion Our main contributions can be summarized in two fundamental hardness results. 1) Through an instance-dependent lens, representation learning is never easier than a CLB with a given realizable representation, while the former problem can be strictly harder, up to the point that knowing that one of some given low-dimensional representations is realizable is useless. 2) Adaptivity to the best representation is impossible in general, both in the instace-dependent long-horizon and in the worst-case small-horizon regimes. In particular, as opposed to worst-case results, instancedependent adaptivity is impossible for representation learning on nested features, and the same holds when all representations are realizable if the learner does not know it a-priori. On the positive side, we characterized \u201csimple\u201d instances where representation learning is not harder than a CLB and where sub-logarithmic regret can be achieved. 
\fAndrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric Following literature on linear bandits (Tirinzoni et al., 2020; Kirschner et al., 2021), an interesting open question is how to design computationally-ef\ufb01cient representation learning strategies with good (e.g., worst-case optimal) \ufb01nite-time regret and asymptotically instance-optimal performance." + }, + { + "url": "http://arxiv.org/abs/2210.13083v1", + "title": "Scalable Representation Learning in Linear Contextual Bandits with Constant Regret Guarantees", + "abstract": "We study the problem of representation learning in stochastic contextual\nlinear bandits. While the primary concern in this domain is usually to find\nrealizable representations (i.e., those that allow predicting the reward\nfunction at any context-action pair exactly), it has been recently shown that\nrepresentations with certain spectral properties (called HLS) may be more\neffective for the exploration-exploitation task, enabling LinUCB to achieve\nconstant (i.e., horizon-independent) regret. In this paper, we propose\nBanditSRL, a representation learning algorithm that combines a novel\nconstrained optimization problem to learn a realizable representation with good\nspectral properties with a generalized likelihood ratio test to exploit the\nrecovered representation and avoid excessive exploration. We prove that\nBanditSRL can be paired with any no-regret algorithm and achieve constant\nregret whenever an HLS representation is available. Furthermore, BanditSRL can\nbe easily combined with deep neural networks and we show how regularizing\ntowards HLS representations is beneficial in standard benchmarks.", + "authors": "Andrea Tirinzoni, Matteo Papini, Ahmed Touati, Alessandro Lazaric, Matteo Pirotta", + "published": "2022-10-24", + "updated": "2022-10-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction The contextual bandit is a general framework to formalize the exploration-exploitation dilemma arising in sequential decision-making problems such as recommendation systems, online advertising, and clinical trials [e.g., 1]. When solving real-world problems, where contexts and actions are complex and high-dimensional (e.g., users\u2019 social graph, items\u2019 visual description), it is crucial to provide the bandit algorithm with a suitable representation of the context-action space. While several representation learning algorithms have been proposed in supervised learning and obtained impressing empirical results [e.g., 2, 3], how to ef\ufb01ciently learn representations that are effective for the exploration-exploitation problem is still relatively an open question. The primary objective in representation learning is to \ufb01nd features that map the context-action space into a lower-dimensional embedding that allows \ufb01tting the reward function accurately, i.e., realizable representations [e.g., 4\u201310]. Within the space of realizable representations, bandit algorithms leveraging features of smaller dimension are expected to learn faster and thus have smaller regret. Nonetheless, Papini et al. [11] have recently shown that, even among realizable features, certain representations are naturally better suited to solve the exploration-exploitation problem. In particular, they proved that LINUCB [12, 13] can achieve constant regret when provided with a \u201cgood\u201d representation. 
Interestingly, this property is not related to \u201cglobal\u201d characteristics of the feature map (e.g., dimension, norms), but rather on a spectral property of the representation (the space associated to optimal actions should cover the context-action space, see HLS property in Def. 2.1). This naturally 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2210.13083v1 [cs.LG] 24 Oct 2022 \fraises the question whether it is possible to learn such representation at the same time as solving the contextual bandit problem. Papini et al. [11] provided a \ufb01rst positive answer with the LEADER algorithm, which is proved to perform as well as the best realizable representation in a given set up to a logarithmic factor in the number of representations. While this allows constant regret when a realizable HLS representation is available, the algorithm suffers from two main limitations: 1) it is entangled with LINUCB and it can hardly be generalized to other bandit algorithms; 2) it learns a different representation for each context-action pair, thus making it hard to extend beyond \ufb01nite representations to arbitrary functional space (e.g., deep neural networks). In this paper, we address those limitations through BANDITSRL, a novel algorithm that decouples representation learning and exploration-exploitation so as to work with any no-regret contextual bandit algorithm and to be easily extended to general representation spaces. BANDITSRL combines two components: 1) a representation learning mechanism based on a constrained optimization problem that promotes \u201cgood\u201d representations while preserving realizability; and 2) a generalized likelihood ratio test (GLRT) to avoid over exploration and fully exploit the properties of \u201cgood\u201d representations. The main contributions of the paper can be summarized as follows: 1. We show that adding a GLRT on the top of any no-regret algorithm enables it to exploit the properties of a HLS representation and achieve constant regret. This generalizes the constant regret result for LINUCB in [11] to any no-regret algorithm. 2. Similarly, we show that BANDITSRL can be paired with any no-regret algorithm and perform effective representation selection, including achieving constant regret whenever a HLS representation is available in a given set. This generalizes the result of LEADER beyond LINUCB. In doing this we also improve the analysis of the misspeci\ufb01ed case and prove a tighter bound on the time to converge to realizable representations. Furthermore, numerical simulations in synthetic problems con\ufb01rm that BANDITSRL is empirically competitive with LEADER. 3. Finally, in contrast to LEADER, BANDITSRL can be easily scaled to complex problems where representations are encoded through deep neural networks. In particular, we show that the Lagrangian relaxation of the constrained optimization problem for representation learning becomes a regression problem with an auxiliary representation loss promoting HLS-like representations. We test different variants of the resulting NN-BANDITSRL algorithm showing how the auxiliary representation loss improves performance in a number of dataset-based benchmarks. 2 Preliminaries We consider a stochastic contextual bandit problem with context space X and \ufb01nite action set A. At each round t \u22651, the learner observes a context xt sampled i.i.d. 
from a distribution \u03c1 over X, selects an action at \u2208A, and receives a reward yt = \u00b5(xt, at)+\u03b7t where \u03b7t is a zero-mean noise and \u00b5 : X \u00d7 A \u2192R is the expected reward. The objective of a learner A is to minimize its pseudo-regret RT := PT t=1 \u0000\u00b5\u22c6(xt) \u2212\u00b5(xt, at) \u0001 for any T \u22651, where \u00b5\u22c6(xt) := maxa\u2208A \u00b5(xt, a). We assume that for any x \u2208X the optimal action a\u22c6 x := argmaxa\u2208A \u00b5(x, a) is unique and we de\ufb01ne the gap \u2206(x, a) := \u00b5\u22c6(x) \u2212\u00b5(x, a). We say that A is a no-regret algorithm if, for any instance of \u00b5, it achieves sublinear regret, i.e., RT = o(T). We consider the problem of representation learning in given a candidate function space \u03a6 \u2286 \b \u03c6 : X \u00d7 A \u2192Rd\u03c6\t , where the dimensionality d\u03c6 may depend on the feature \u03c6. Let \u03b8\u22c6 \u03c6 = argmin\u03b8\u2208Rd\u03c6 Ex\u223c\u03c1 \u0002 P a(\u03c6(x, a)T\u03b8 \u2212\u00b5(x, a))2\u0003 be the best linear \ufb01t of \u00b5 for representation \u03c6. We assume that \u03a6 contains a linearly realizable representation. Assumption 1 (Realizability). There exists an (unknown) subset \u03a6\u22c6\u2286\u03a6 such that, for each \u03c6 \u2208\u03a6\u22c6, \u00b5(x, a) = \u03c6(x, a)T\u03b8\u22c6 \u03c6, \u2200x \u2208X, a \u2208A. Assumption 2 (Regularity). Let B\u03c6 := {\u03b8 \u2208Rd\u03c6 : \u2225\u03b8\u22252 \u2264B\u03c6} be a ball in Rd\u03c6. We assume that, for each \u03c6 \u2208\u03a6, supx,a \u2225\u03c6(x, a)\u22252 \u2264L\u03c6, \u2225\u03b8\u22c6 \u03c6\u22252 \u2264B\u03c6, supx,a |\u03c6(x, a)T\u03b8| \u22641 for any \u03b8 \u2208B\u03c6 and |yt| \u22641 almost surely for all t. We assume parameters L\u03c6 and B\u03c6 are known. We also assume the minimum gap \u2206= infx\u2208X:\u03c1(x)>0,a\u2208A,\u2206(x,a)>0{\u2206(x, a)} > 0 and that \u03bbmin \u0010 1 |A| P a Ex\u223c\u03c1[\u03c6(x, a)\u03c6(x, a)T] \u0011 > 0 for any \u03c6 \u2208\u03a6\u22c6, i.e, all realizable representations are non-redundant. 2 \fUnder Asm. 1, when |\u03a6| = 1, the problem reduces to a stochastic linear contextual bandit and can be solved using standard algorithms, such as LINUCB/OFUL [12, 13], LinTS [14], and \u03f5-greedy [15], which enjoy sublinear regret and, in some cases, logarithmic problem-dependent regret. Recently, Papini et al. [11] showed that LINUCB only suffers constant regret when a realizable representation is HLS, i.e., when the features of optimal actions span the entire d\u03c6-dimensional space. HLS De\ufb01nition 2.1 (HLS Representation). A representation \u03c6 is HLS (the acronym refers to the last names of the authors of [16]) if \u03bb\u22c6(\u03c6) := \u03bbmin \u0000Ex\u223c\u03c1 \u0002 \u03c6(x, a\u22c6 x)\u03c6(x, a\u22c6 x)T\u0003\u0001 > 0 where \u03bbmin(A) denotes the minimum eigenvalue of a matrix A. Papini et al. showed that HLS, together with realizability, is a suf\ufb01cient and necessary property for achieving constant regret in contextual stochastic linear bandits for non-redundant representations. In order to deal with the general case where \u03a6 may contain non-realizable representations, we rely on the following misspeci\ufb01cation assumption from [11]. Assumption 3 (Misspeci\ufb01cation). 
For each \u03c6 / \u2208\u03a6\u22c6, there exists \u03f5\u03c6 > 0 such that min \u03b8\u2208B\u03c6 min \u03c0:X\u2192A Ex\u223c\u03c1 h\u0000\u03c6(x, \u03c0(x))T\u03b8 \u2212\u00b5(x, \u03c0(x)) \u00012i \u2265\u03f5\u03c6. This assumption states that any non-realizable representation has a minimum level of misspeci\ufb01cation on average over contexts and for any context-action policy. In the \ufb01nite-context case, a suf\ufb01cient condition for Asm. 3 is that, for each \u03c6 / \u2208\u03a6\u22c6, there exists a context x \u2208X with \u03c1(x) > 0 such that \u03c6(x, a)T\u03b8 \u0338= \u00b5(x, a) for all a \u2208A and \u03b8 \u2208B\u03c6. Related work. Several papers have focused on contextual bandits with an arbitrary function space to estimate the reward function under realizability assumptions [e.g., 4, 5, 7]. While these works consider a similar setting to ours, they do not aim to learn \u201cgood\u201d representations, but rather focus on the exploration-exploitation problem to obtain sublinear regret guarantees. This often corresponds to recovering the maximum likelihood representation, which may not lead to the best regret. After the work in [11], the problem of representation learning with constant regret guarantees has also been studied in reinforcement learning [17, 18]. As these approaches build on the ideas in [11], they inherit the same limitations as [11]. Another related literature is the one of expert learning and model selection in bandits [e.g., 19\u201325], where the objective is to select the best candidate among a set of base learning algorithms or experts. While these algorithms are general and can be applied to different settings, including representation learning with a \ufb01nite set of candidates, they may not be able to effectively leverage the speci\ufb01c structure of the problem. Furthermore, at the best of our knowledge, these algorithms suffers a polynomial dependence in the number of base algorithms (|\u03a6| in our setting) and are limited to worst-case regret guarantees. Whether the \u221a T or poly(|\u03a6|) dependency can be improved in general is an open question (see [25] and [11, App. A]). Finally, [8, 26] studied the speci\ufb01c problem of model selection with nested linear representations, where the best representation is the one with the smallest dimension for which the reward is realizable. Several works have recently focused on theoretical and practical investigation of contextual bandits with neural networks (NNs) [27\u201329]. While their focus was on leveraging the representation power of NNs to correctly predict the rewards, here we focus on learning representations with good spectral properties through a novel auxiliary loss. A related approach to our is [29] where the authors leverage self-supervised auxiliary losses for representation learning in image-based bandit problems. 3 A General Framework for Representation Learning We introduce BANDITSRL (Bandit Spectral Representation Learner), an algorithm for stochastic contextual linear bandit that ef\ufb01ciently decouples representation learning from exploration-exploitation. As illustrated in Alg. 
1, BANDITSRL has access to a \ufb01xed-representation contextual bandit algorithm A, the base algorithm, and it is built around two key mechanisms: \u0082 a constrained optimization problem where the objective is to minimize a representation loss L to favor representations with HLS properties, whereas the constraint ensures realizability; \u0083 a generalized likelihood ratio test (GLRT) 3 \fAlgorithm 1 BANDITSRL 1: Input: representations \u03a6, no-regret algorithm A, con\ufb01dence \u03b4 \u2208(0, 1), update schedule \u03b3 > 1 2: Initialize j = 0, \u03c6j, \u03b8\u03c6j,0 arbitrarily, V0(\u03c6j) = \u03bbId\u03c6j , tj = 1, let \u03b4j := \u03b4/(2(j + 1)2) 3: for t = 1, . . . do 4: Observe context xt 5: if GLRt\u22121(xt; \u03c6j) > \u03b2t\u22121,\u03b4/|\u03a6|(\u03c6j) then 6: Play at = argmaxa\u2208A \b \u03c6j(xt, a)T\u03b8\u03c6j,t\u22121 \t and observe reward yt 7: else 8: Play at = At \u0000xt; \u03c6j, \u03b4j/|\u03a6| \u0001 , observe reward yt, and feed it into A 9: end if 10: if t = \u2308\u03b3tj\u2309and |\u03a6| > 1 then 11: Set j = j + 1 and tj = t 12: Compute \u03c6j = argmin\u03c6\u2208\u03a6t \b Lt(\u03c6) \t and reset A 13: end if 14: end for to ensure that, if a HLS representation is learned, the base algorithm A does not over-explore and the \u201cgood\u201d representation is exploited to obtain constant regret. Mechanism \u0082 (line 12). The \ufb01rst challenge when provided with a generic set \u03a6 is to ensure that the algorithm does not converge to selecting misspeci\ufb01ed representations, which may lead to linear regret. This is achieved by introducing a hard constraint in the representation optimization, so that BANDITSRL only selects representations in the set (see also [11, App. F]), \u03a6t := \u001a \u03c6 \u2208\u03a6 : min \u03b8\u2208B\u03c6 Et(\u03c6, \u03b8) \u2264min \u03c6\u2032\u2208\u03a6 min \u03b8\u2208B\u03c6\u2032 \b Et(\u03c6\u2032, \u03b8) + \u03b1t,\u03b4(\u03c6\u2032) \t\u001b (1) where Et(\u03c6, \u03b8) := 1 t Pt s=1 \u0000\u03c6(xs, as)T \u03b8 \u2212ys \u00012 is the empirical mean-square error (MSE) of model (\u03c6, \u03b8) and \u03b1t,\u03b4(\u03c6) := 40 t log \u0010 8|\u03a6|2(12L\u03c6B\u03c6t)d\u03c6t3 \u03b4 \u0011 + 2 t . This condition leverages the existence of a realizable representation in \u03a6t to eliminate representations whose MSE is not compatible with the one of the realizable representation, once accounted for the statistical uncertainty (i.e., \u03b1t,\u03b4(\u03c6)). Subject to the realizability constraint, the representation loss Lt(\u03c6) favours learning a HLS representation (if possible). As illustrated in Def. 2.1, a HLS representation is such that the expected design matrix associated to the optimal actions has a positive minimum eigenvalue. Unfortunately it is not possible to directly optimize for this condition, since we have access to neither the context distribution \u03c1 nor the optimal action in each context. Nonetheless, we can design a loss that works as a proxy for the HLS property whenever A is a no-regret algorithm. Let Vt(\u03c6) = \u03bbId\u03c6 + Pt s=1 \u03c6(xs, as)\u03c6(xs, as)T be the empirical design matrix built on the context-actions pairs observed up to time t, then we de\ufb01ne Leig,t(\u03c6) := \u2212\u03bbmin \u0000Vt(\u03c6) \u2212\u03bbId\u03c6 \u0001 /L2 \u03c6, where the normalization factor ensures invariance w.r.t. the feature norm. 
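To make this loss concrete, a minimal Python sketch is given below (our own illustration, not code from the paper): it accumulates the empirical design matrix over the logged context-action pairs and returns Leig,t; the names phi, data and L_phi are assumed inputs (the feature map, the observed pairs, and the feature-norm bound).

import numpy as np

def eig_loss(phi, data, L_phi=1.0):
    # Sketch of L_eig,t(phi) = -lambda_min(V_t(phi) - lam*I) / L_phi^2, with
    # V_t(phi) = lam*I + sum_s phi(x_s, a_s) phi(x_s, a_s)^T over the logged pairs,
    # so the matrix accumulated here is exactly V_t(phi) - lam*I.
    d = len(np.asarray(phi(*data[0])))
    M = np.zeros((d, d))
    for (x, a) in data:
        f = np.asarray(phi(x, a))
        M += np.outer(f, f)
    # eigvalsh returns eigenvalues in ascending order, so index 0 is the smallest.
    return -np.linalg.eigvalsh(M)[0] / L_phi ** 2

Minimizing this quantity over the candidate representations (subject to the realizability constraint above) favours representations whose observed design matrix is well conditioned in every direction.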
Intuitively, the empirical distribution of contexts (xt)t\u22651 converges to \u03c1 and the frequency of optimal actions selected by a no-regret algorithm increases over time, thus ensuring that Vt(\u03c6)/t tends to behave as the design matrix under optimal arms Ex\u223c\u03c1[\u03c6(x, a\u22c6 x)\u03c6(x, a\u22c6 x)T]. As discussed in Sect. 5 alternative losses can be used to favour learning HLS representations. Mechanism \u0083 (line 5). While Papini et al. [11] proved that LINUCB is able to exploit HLS representations, other algorithms such as \u03f5-greedy may keep forcing exploration and do not fully take advantage of HLS properties, thus failing to achieve constant regret. In order to prevent this, we introduce a generalized likelihood ratio test (GLRT). At each round t, let \u03c6t\u22121 be the representation used at time t, then BANDITSRL decides whether to act according to the base algorithm A with representation \u03c6t\u22121 or fully exploit the learned representation and play greedily w.r.t. it. Denote by \u03b8\u03c6,t\u22121 = Vt\u22121(\u03c6)\u22121 Pt\u22121 s=1 \u03c6(xs, as)ys the regularized least-squares parameter at time t for representation \u03c6 and by \u03c0\u22c6 t\u22121(x; \u03c6) = argmaxa\u2208A \b \u03c6(x, a)T\u03b8\u03c6,t\u22121 \t the associated greedy policy. Then, BANDITSRL selects the greedy action \u03c0\u22c6 t\u22121(xt; \u03c6t\u22121) when the GLR test is active, otherwise it selects the action proposed by the base algorithm A. Formally, for any \u03c6 \u2208\u03a6 and x \u2208X, we de\ufb01ne 4 \fthe generalized likelihood ratio as GLRt\u22121(x; \u03c6) := min a\u0338=\u03c0\u22c6 t\u22121(x;\u03c6) \u0000\u03c6(x, \u03c0\u22c6 t\u22121(x; \u03c6)) \u2212\u03c6(x, a) \u0001T\u03b8\u03c6,t\u22121 \u2225\u03c6(x, \u03c0\u22c6 t\u22121(x; \u03c6)) \u2212\u03c6(s, a)\u2225Vt\u22121(\u03c6)\u22121 (2) and, given \u03b2t\u22121,\u03b4(\u03c6) = \u03c3 q 2 log(1/\u03b4) + d\u03c6 log(1 + (t \u22121)L2 \u03c6/(\u03bbd\u03c6)) + \u221a \u03bbB\u03c6, the GLR test is GLRt\u22121(x; \u03c6) > \u03b2t\u22121,\u03b4/|\u03a6|(\u03c6) [16, 30, 31]. If this happens at time t and \u03c6t\u22121 is realizable, then we have enough con\ufb01dence to conclude that the greedy action is optimal, i.e., \u03c0\u22c6 t\u22121(xt; \u03c6t\u22121) = a\u22c6 xt. An important aspect of this test is that it is run on the current context xt and it does not require evaluating global properties of the representation. While at any time t it is possible that a non-HLS non-realizable representation may pass the test, the GLRT is sound as 1) exploration through A and the representation learning mechanism work in synergy to guarantee that eventually a realizable representation is always provided to the GLRT; 2) only HLS representations are guaranteed to consistently trigger the test at any context x. In practice, BANDITSRL does not update the representation at each step but in phases. This is necessary to avoid too frequent representation changes and control the regret, but also to make the algorithm more computationally ef\ufb01cient and practical. Indeed, updating the representation may be computationally expensive in practice (e.g., retraining a NN) and a phased scheme with \u03b3 parameter reduces the number of representation learning steps to J \u2248\u2308log\u03b3(T)\u2309. The algorithm A is reset at the beginning of a phase j when the representation is selected and it is run on the samples collected during the current phase when the base algorithm is selected. 
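For concreteness, the GLR statistic of Eq. (2) and the resulting test can be sketched as follows (our own illustration; theta and V stand for the regularized least-squares estimate theta_{phi,t-1} and the design matrix V_{t-1}(phi), and all helper names are hypothetical):

import numpy as np

def glr_test(phi, x, actions, theta, V, beta):
    # Greedy action w.r.t. the current least-squares estimate on representation phi.
    feats = {a: np.asarray(phi(x, a)) for a in actions}
    greedy = max(actions, key=lambda a: feats[a] @ theta)
    V_inv = np.linalg.inv(V)
    # GLR_{t-1}(x; phi): smallest estimated gap to the greedy action, normalized
    # by the V^{-1}-norm of the feature difference (Eq. 2).
    glr = min(
        (feats[greedy] - feats[a]) @ theta
        / np.sqrt((feats[greedy] - feats[a]) @ V_inv @ (feats[greedy] - feats[a]))
        for a in actions if a != greedy
    )
    # Line 5 of Algorithm 1: if glr exceeds the threshold beta, play greedily;
    # otherwise defer the action choice to the base no-regret algorithm A.
    return greedy, glr > beta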
If A is able to leverage off-policy data, at the beginning of a phase j, we can warm-start it by providing \u03c6j and all the past data (xs, as, ys)s\u2264tj. While the reset is necessary for dealing with any no-regret algorithm, it can be removed for algorithms such as LINUCB and \u03f5-greedy without affecting the theoretical guarantees. Comparison to LEADER. We \ufb01rst recall the basic structure of LEADER. Denote by UCBt(x, a, \u03c6) the upper-con\ufb01dence bound computed by LINUCB for the context-action pair (x, a) and representation \u03c6 after t steps. Then LEADER selects the action at \u2208argmaxa\u2208A min\u03c6\u2208\u03a6t UCBt(xt, a, \u03c6). Unlike the constrained optimization problem in BANDITSRL, this mechanism couples representation learning and exploration-exploitation and it requires optimizing a representation for the current xt and for each action a. Indeed, LEADER does not output a single representation and possibly chooses different representations for each context-action pair. While this enables LEADER to mix representations and achieve constant regret in some cases even when \u03a6 does not include any HLS representation, it leads to two major drawbacks: 1) the representation selection is directly entangled with the LINUCB exploration-exploitation strategy, 2) it is impractical in problems where \u03a6 is an in\ufb01nite functional space (e.g., a deep neural network). The mechanisms \u0082 and \u0083 successfully address these limitations and enable BANDITSRL to be paired with any no-regret algorithm and to be scaled to any representation class as illustrated in the next section. 3.1 Extension to Neural Networks We now consider a representation space \u03a6 de\ufb01ned by the last layer of a NN. We denote by \u03c6 : X \u00d7 A \u2192Rd the last layer and by f(x, a) = \u03c6(x, a)T\u03b8 the full NN, where \u03b8 are the last-layer weights. We show how BANDITSRL can be easily adapted to work with deep neural networks (NN). First, the GLRT requires only to have access to the current context xt and representation \u03c6j, i.e., the features de\ufb01ned by the last layer of the current network, and its cost is linear in the number of actions. Second, the phased scheme allows lazy updates, where we retrain the network only log\u03b3(T) times. Third, we can run any bandit algorithm with a representation provided by the NN, including LINUCB, LinTS, and \u03f5-greedy. Fourth, the representation learning step can be adapted to allow ef\ufb01cient optimization of a NN. We consider a regularized problem obtained through an approximation of the constrained problem: argmin \u03c6 \u001a Lt(\u03c6) \u2212creg \u0012 min \u03c6\u2032,\u03b8\u2032 \b Et(\u03c6\u2032, \u03b8\u2032) + \u03b1t,\u03b4(\u03c6\u2032) \t \u2212min \u03b8 Et(\u03c6, \u03b8) \u0013\u001b = argmin \u03c6 min \u03b8 {Lt(\u03c6) + creg Et(\u03c6, \u03b8)} . (3) where creg \u22650 is a tunable parameter. The fact we consider creg constant allows us to ignore terms that do not depend on either \u03c6 or \u03b8. This leads to a convenient regularized loss that aims to minimize 5 \fthe MSE (second term) while enforcing some spectral property on the last layer of the NN (\ufb01rst term). In practice, we can optimize this loss by stochastic gradient descent over a replay buffer containing the samples observed over time. The resulting algorithm, called NN-BANDITSRL, is a direct and elegant generalization of the theoretically-grounded algorithm. 
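A minimal PyTorch-style sketch of this regularized objective is the following (our own illustration under simplifying assumptions: features is the batch of last-layer embeddings phi(x_s, a_s) produced by the representation network, rewards the corresponding observed rewards, and head a linear last layer; none of these names come from the paper):

import torch

def nn_banditsrl_loss(features, rewards, head, c_reg=1.0, L_phi=1.0):
    # Spectral term: L_eig computed on the batch design matrix sum_s phi_s phi_s^T.
    design = features.T @ features
    eig_term = -torch.linalg.eigvalsh(design)[0] / L_phi ** 2
    # Squared-error term E_t(phi, theta) on the same batch.
    mse = torch.mean((head(features).squeeze(-1) - rewards) ** 2)
    return eig_term + c_reg * mse

Both terms are differentiable, so the representation and the last layer can be updated jointly by stochastic gradient descent over the replay buffers described next.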
While in theory we can optimize the regularized loss (3) with all the samples, in practice it is important to better control the sample distribution. As the algorithm progresses, we expect the replay buffer to contain an increasing number of samples obtained by optimal actions, which may lead the representation to solely \ufb01t optimal actions while increasing misspeci\ufb01cation on suboptimal actions. This may compromise the behavior of the algorithm and ultimately lead to high regret. This is an instance of catastrophic forgetting induced by a biased/shifting sample distribution [e.g., 32]. To prevent this phenomenon, we store two replay buffers: i) an explorative buffer DA,t with samples obtained when A was selected; ii) an exploitative buffer Dglrt,t with samples obtained when GLRT triggered and greedy actions were selected. The explorative buffer DA,t is used to compute the MSE Et(\u03c6, \u03b8). While this reduces the number of samples, it improves the robustness of the algorithm by promoting realizability. On the other hand, we use all the samples Dt = DA,t \u222aDglrt,t for the representation loss L(\u03c6). This is coherent with the intuition that mechanism \u0082 works when the design matrix Vt drifts towards the design matrix of optimal actions, which is at the core of the HLS property. Refer to App. C for a more detailed description of NN-BANDITSRL. 4 Theoretical Guarantees In this section, we provide a complete characterization of the theoretical guarantees of BANDITSRL when \u03a6 is a \ufb01nite set of representations, i.e., |\u03a6| < \u221e. We consider the update scheme with \u03b3 = 2. 4.1 Constant Regret Bound for HLS Representations We \ufb01rst study the case where a realizable HLS representation is available. For the characterization of the behavior of the algorithm, we need to introduce the following times: \u2022 \u03c4elim: an upper-bound to the time at which all non-realizable representations are eliminated, i.e., for all t \u2265\u03c4elim, \u03a6t = \u03a6\u22c6; \u2022 \u03c4HLS: an upper-bound to the time (if it exists) after which the HLS representation is selected, i.e., \u03c6t = \u03c6\u22c6for all t \u2265\u03c4HLS, where \u03c6\u22c6\u2208\u03a6\u22c6is the unique HLS realizable representation; \u2022 \u03c4glrt: an upper-bound to the time (if it exists) such that the GLR test triggers for the HLS representation \u03c6\u22c6for all t \u2265\u03c4glrt. We begin by deriving a constant problem-dependent regret bound for BANDITSRL with HLS representations. The proof and explicit values of the constants are reported in App. B.1 Theorem 4.1. Let A be any no-regret algorithm for stochastic contextual linear bandits, \u03a6 satisfy Asm. 13, |\u03a6| < \u221e, \u03b3 = 2, and Lt(\u03c6) = Leig,t(\u03c6) := \u2212\u03bbmin(Vt(\u03c6) \u2212\u03bbId\u03c6)/L2 \u03c6. Moreover, let \u03a6\u22c6contains a unique HLS representation \u03c6\u22c6. Then, for any \u03b4 \u2208(0, 1) and T \u2208N, the regret of BANDITSRL is bounded, with probability at least 1 \u22124\u03b4, as2 RT \u22642\u03c4elim + max \u03c6\u2208\u03a6\u22c6RA((\u03c4opt \u2212\u03c4elim) \u2227T, \u03c6, \u03b4log2(\u03c4opt\u2227T )/|\u03a6|) log2(\u03c4opt \u2227T), where \u03b4j := \u03b4/(2(j + 1)2) and \u03c4opt = \u03c4glrt \u2228\u03c4HLS \u2228\u03c4elim \u2272\u03c4alg + L2 \u03c6\u22c6log(|\u03a6|/\u03b4) \u03bb\u22c6(\u03c6\u22c6) L2 \u03c6\u22c6 \u03bb\u22c6(\u03c6\u22c6) + d\u03c6\u22c6 \u22062 + d (min\u03c6/ \u2208\u03a6\u22c6\u03f5\u03c6)\u2206 ! 
, (4) with \u03c4alg a \ufb01nite (independent from the horizon T) constant depending on algorithm A (see Tab. 1) and RA(\u03c4, \u03c6, \u03b4) an anytime bound (non-decreasing in \u03c4 and 1/\u03b4) on the regret accumulated over \u03c4 steps by A using representation \u03c6 and con\ufb01dence level \u03b4. 1While Thm. 4.1 provides high-probability guarantees, we can easily derive a constant expected-regret bound by running BANDITSRL with a decreasing schedule for \u03b4 and with a slightly different proof. 2We denote by a \u2227b (resp. a \u2228b) the minimum (resp. the maximum) between a and b. 6 \fThe key \ufb01nding of the previous result is that BANDITSRL achieves constant regret whenever a realizable HLS representation is available in the set \u03a6, which may contain non-realizable as well as realizable non-HLS representations. The regret bound above also illustrates the \u201cdynamics\u201d of the algorithm and three main regimes. In the early stages, non-realizable representations may be included in \u03a6t, which may lead to suffering linear regret until time \u03c4elim when the constraint in the representation learning step \ufb01lters out all non-realizable representations (\ufb01rst term in the regret bound). At this point, BANDITSRL leverages the loss L to favor HLS representations and the base algorithm A to perform effective exploration-exploitation. This leads to the second term in the bound, which corresponds to an upper-bound to the sum of the regrets of A in each phase in between \u03c4elim and \u03c4glrt \u2228\u03c4HLS, which is roughly P j\u03c4elim 1 but no realizable HLS exists (\u03c4glrt = \u221e), BANDITSRL still enjoys a sublinear regret. Corollary 4.3 (Regret bound without HLS representation). Consider the same setting in Thm. 4.1 and assume that \u03a6\u22c6does not contain any HLS representation. Then, for any \u03b4 \u2208(0, 1) and T \u2208N, the regret of BANDITSRL is bounded, with probability at least 1 \u22124\u03b4, as follows: RT \u22642\u03c4elim + max \u03c6\u2208\u03a6\u22c6RA(T, \u03c6, \u03b4log2(T )/|\u03a6|) log2(T). This shows that the regret of BANDITSRL is of the same order as the base no-regret algorithm A when running with the worst realizable representation. While such worst-case dependency is undesirable, it is common to many representation learning algorithms, both in bandits and reinforcement learning [e.g. 4, 33].3 In App. C, we show that an alternative representation loss could address this problem and lead to a bound scaling with the regret of the best realizable representation (RT \u22642\u03c4elim + min\u03c6\u2208\u03a6\u22c6RA(T, \u03c6, \u03b4/|\u03a6|) log2(T)), while preserving the guarantees for the HLS case. Since the representation loss requires an upper-bound on the number of suboptimal actions and a carefully tuned schedule for guessing the gap \u2206, it is less practical than the smallest eigenvalue, which we use as the basis for our practical version of BANDITSRL. Algorithm-dependent instances and comparison to LEADER. Table 1 reports the regret bound of BANDITSRL for different base algorithms. These results make explicit the dependence in the number of representations |\u03a6| and show that the cost of representation learning is only logarithmic. 
In the speci\ufb01c case of LINUCB for HLS representations, we highlight that the upper-bound to the time \u03c4opt 3Notice that the worst-representation dependency is often hidden in the de\ufb01nition of \u03a6, which is assumed to contain features with \ufb01xed dimension and bounded norm, i.e., \u03a6 = {\u03c6 : X \u00d7A \u2192Rd, supx,a \u2225\u03c6(x, a)\u22252 \u2264L}. As d and B are often the only representation-dependent terms in the regret bound RA, no worst-representation dependency is reported. 7 \fAlgorithm RA(T, \u03c6, \u03b4/|\u03a6|) \u03c4alg LINUCB d2 \u03c6 log(|\u03a6|T/\u03b4)2/\u2206 L2 \u03c6\u22c6d2 log(|\u03a6|/\u03b4)2 \u03bb\u22c6(\u03c6\u22c6)\u22062 \u03f5-greedy with \u03f5t = t\u22121/3 p d\u03c6|A| log(|\u03a6|/\u03b4)T 2/3 L6 \u03c6\u22c6(d|A|)3/2L3 log(|\u03a6|/\u03b4)3 \u03bb\u22c6(\u03c6\u22c6)3\u22063 Table 1: Speci\ufb01c regret bounds when using LINUCB or \u03f5-greedy as base algorithms. We omit numerical constants and logarithmic factors. in Thm. 4.1 improves over the result of LEADER. While LEADER has no explicit concept of \u03c4alg, a term with the same dependence of \u03c4alg in Tab. 1 appears also in the LEADER analysis. This term encodes an upper bound to the pulls of suboptimal actions and depends on the LINUCB strategy. As a result, the \ufb01rst three terms in Eq. 4 are equivalent to the ones of LEADER. The improvement comes from the last term (\u03c4elim), where, thanks to a re\ufb01ned analysis of the elimination condition, we are able to improve the dependence on the inverse minimum misspeci\ufb01cation (1/ min\u03c6/ \u2208\u03a6\u22c6\u03f5\u03c6) from quadratic to linear (see App. B for a detailed comparison). On the other hand, BANDITSRL suffers from the worst regret among realizable representations, whereas LEADER scales with the best representation. As discussed above, this mismatch can be mitigated by using by a different choice of representation loss. In the case of \u03f5-greedy, the T 2/3 regret upper-bound induces a worse \u03c4alg due to a larger number of suboptimal pulls. This in turns re\ufb02ects into a higher regret to the constant regime. Finally, LEADER is still guaranteed to achieve constant regret by selecting different representations at different context-action pairs whenever non-HLS representations satisfy a certain mixing condition [cf. 11, Sec. 5.2]. This result is not possible with BANDITSRL, where one representation is selected in each phase. At the same time, it is the single-representation structure of BANDITSRL that allows us to accommodate different base algorithms and scale it to any representation space. 5 Experiments We provide an empirical validation of BANDITSRL both in synthetic contextual linear bandit problems and in non-linear contextual problems [see e.g., 6, 27]. Linear Benchmarks. We \ufb01rst evaluate BANDITSRL on synthetic linear problems to empirically validate our theoretical \ufb01ndings. In particular, we test BANDITSRL with different base algorithms and representation learning losses and we compare it with LEADER.4 We consider the \u201cvarying dimension\u201d problem introduced in [11] which consists of six realizable representations with dimension from 2 to 6. Of the two representations of dimension d = 6, one is HLS. In addition seven misspeci\ufb01ed representations are available. Details are provided in App. D. 
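For reference, the \u03f5-greedy base algorithm of Table 1 with schedule \u03f5t = t^{-1/3} reduces to the following sketch (our own illustration; phi is the representation currently selected by BANDITSRL, theta its least-squares estimate, and rng a numpy random generator):

import numpy as np

def eps_greedy_action(phi, x, actions, theta, t, rng):
    # Exploration schedule eps_t = t^(-1/3) from Table 1.
    if rng.random() < t ** (-1.0 / 3.0):
        return actions[rng.integers(len(actions))]      # explore uniformly
    # Exploit: greedy action w.r.t. the linear estimate on representation phi.
    return max(actions, key=lambda a: np.asarray(phi(x, a)) @ theta)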
We consider LINUCB and \u03f5-greedy as base algorithms and we use the theoretical parameters, but we perform warm start using all the past data when a new representation is selected. Similarly, for BANDITSRL we use the theoretical parameters (\u03b3 = 2) and Lt(\u03c6) := Leig,t(\u03c6). Fig. 1 shows that, as expected, BANDITSRL with both base algorithms is able to achieve constant regret when a HLS representation exists. As expected from the theoretical analysis, \u03f5-greedy leads to a higher regret than LINUCB. Furthermore, empirically BANDITSRL with LINUCB obtains a performance that is comparable with the one of LEADER both with and without a realizable HLS representation. Note that when no HLS representation exists, the regret of BANDITSRL with \u03f5-greedy is T^{2/3}, while LINUCB-based algorithms are able to achieve log(T) regret. When \u03a6 contains misspecified representations (Fig. 1(center-left)), we observe that in the first regime [1, \u03c4elim] the algorithm suffers linear regret; after that we enter the regime of the base algorithm ([\u03c4elim, \u03c4glrt \u2228\u03c4HLS]), up to the point where the GLRT leads to selecting only optimal actions. Weak HLS. Papini et al. [11] showed that when realizable representations are redundant (i.e., \u03bb\u22c6(\u03c6\u22c6) = 0), it is still possible to achieve constant regret if the representation is \u201cweakly\u201d-HLS, i.e., the features of the optimal actions span the features \u03c6(x, a) associated to any context-action pair, but not necessarily Rd\u03c6. To test this case, we pad a 5-dimensional vector of ones to all the features of the six realizable representations in the previous experiment. To deal with the weak-HLS condition, we introduce the alternative representation loss Lweak,t(\u03c6) := \u2212min_{s \u2264 t} { \u03c6(xs, as)^T (Vt(\u03c6) \u2212\u03bbId\u03c6) \u03c6(xs, as) / L\u03c6^2 }. Since Vt(\u03c6) \u2212\u03bbId\u03c6 tends to behave as Ex[\u03c6\u22c6(x)\u03c6\u22c6(x)^T], this loss encourages representations where all the observed features are spanned by the optimal arms, thus promoting weak-HLS representations (see App. C for more details).4 As expected, Fig. 1(right) shows that the min-eigenvalue loss Leig,t fails to identify the correct representation in this domain. On the other hand, BANDITSRL with the novel loss converges to constant regret (we cut the figure for readability) and behaves as LEADER when using LINUCB. 4 We do not report the performance of model selection algorithms. An extensive analysis can be found in [11], where the authors showed that LEADER outperformed all the baselines. [Figure 1: Varying dimension experiment with all realizable representations (left), misspecified representations (center-left), realizable non-HLS representations (center-right) and weak-HLS (right). Experiments are averaged over 40 repetitions. Each panel plots cumulative regret vs. time for BanditSRL-LinUCB and BanditSRL-\u03f5-greedy (with the Leig and Lweak losses), Leader, and LinUCB run directly on the HLS representation.]
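The weak-HLS loss Lweak,t introduced above admits a sketch analogous to the one given for Leig,t (again our own illustration with hypothetical names; phi is the feature map and data the logged context-action pairs):

import numpy as np

def weak_hls_loss(phi, data, L_phi=1.0):
    # Feature matrix of all observed pairs and the matrix V_t(phi) - lam*I.
    F = np.array([np.asarray(phi(x, a)) for (x, a) in data])   # shape (t, d)
    M = F.T @ F
    # L_weak,t(phi) = -min_{s<=t} phi(x_s,a_s)^T M phi(x_s,a_s) / L_phi^2:
    # every direction that was actually observed must receive mass from the data.
    return -np.min(np.einsum('sd,de,se->s', F, M, F)) / L_phi ** 2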
[Figure 2: Average cumulative regret (over 20 runs) in non-linear domains. Panels: wheel, covertype, magic, mushroom, statlog (x-axis: Time, y-axis: Pseudo Regret). Curves: Neural-\u03f5-greedy, Neural-LinUCB, Neural-TS, NeuralUCB and IGW (both tuned per problem), and the NN-BanditSRL variants (\u03f5-greedy, IGW, LinUCB, TS) with the Lweak,t loss and \u03b1GLRT = 5; NN architecture 50,50,50,50,10 with ReLu activations and \u03f5t = t^{\u22121/3}.] Non-Linear Benchmarks. We study the performance of NN-BANDITSRL in classical benchmarks where non-linear representations are required. We only consider the weak-HLS loss Lweak,t(\u03c6) as it is more general than full HLS. As base algorithms we consider \u03f5-greedy and inverse gap weighting (IGW) with \u03f5t = t^{\u22121/3}, and LINUCB and LINTS with theoretical parameters. These algorithms are run on the representation \u03c6j provided by the NN at each phase j. We compare NN-BANDITSRL against the base algorithms using the maximum-likelihood representation (i.e., Neural-(\u03f5-greedy, LINTS) [6] and Neural-LINUCB [28]), supervised learning with the IGW strategy [e.g., 7, 10], and NeuralUCB [27].5 See App. C-D for details. 5 For ease of comparison, all the algorithms use the same phased schema for fitting the reward and recomputing the parameters. NeuralUCB uses a diagonal approximation of the design matrix. In all the problems6 the reward function is highly non-linear w.r.t. contexts and actions and we use a network composed of layers of dimension [50, 50, 50, 50, 10] and ReLu activation to learn the representation (i.e., d = 10). Fig. 2 shows that all the base algorithms (\u03f5-greedy, IGW, LINUCB, LINTS) achieve better performance through representation learning, outperforming the corresponding baselines run on the maximum-likelihood representation. This provides evidence that NN-BANDITSRL is effective even beyond the theoretical scenario. For the baseline algorithms (NEURALUCB, IGW) we report the regret of the best configuration on each individual dataset, while for NN-BANDITSRL we fix the parameters across datasets (i.e., \u03b1GLRT = 5). While this comparison clearly favours the baselines, it also shows that NN-BANDITSRL is a robust algorithm that behaves better than or on par with the state-of-the-art algorithms. In particular, NN-BANDITSRL uses theoretical parameters while the baselines use tuned configurations. Optimizing the parameters of NN-BANDITSRL is outside the scope of these experiments. 6 Conclusion We proposed a novel algorithm, BANDITSRL, for representation selection in stochastic contextual linear bandits. BANDITSRL combines a mechanism for representation learning that aims to recover representations with good spectral properties, with a generalized likelihood ratio test to exploit the recovered representation.
We proved that, thanks to these mechanisms, BANDITSRL is not only able to achieve sublinear regret with any no-regret algorithm A but, when a HLS representation exists, it is able to achieve constant regret. We demonstrated that BANDITSRL can be implemented using NNs and showed its effectiveness in standard benchmarks. A direction for future investigation is to extend the approach to a weaker misspeci\ufb01cation assumption than Asm. 3. Another direction is to leverage the technical and algorithmic tools introduced in this paper for representation learning in reinforcement learning, e.g., in low-rank problems [e.g. 38]. Acknowledgments and Disclosure of Funding M. Papini was supported by the European Research Council (ERC) under the European Union\u2019s Horizon 2020 research and innovation programme (Grant agreement No. 950180)." + }, + { + "url": "http://arxiv.org/abs/2207.05852v1", + "title": "Optimistic PAC Reinforcement Learning: the Instance-Dependent View", + "abstract": "Optimistic algorithms have been extensively studied for regret minimization\nin episodic tabular MDPs, both from a minimax and an instance-dependent view.\nHowever, for the PAC RL problem, where the goal is to identify a near-optimal\npolicy with high probability, little is known about their instance-dependent\nsample complexity. A negative result of Wagenmaker et al. (2021) suggests that\noptimistic sampling rules cannot be used to attain the (still elusive) optimal\ninstance-dependent sample complexity. On the positive side, we provide the\nfirst instance-dependent bound for an optimistic algorithm for PAC RL,\nBPI-UCRL, for which only minimax guarantees were available (Kaufmann et al.,\n2021). While our bound features some minimal visitation probabilities, it also\nfeatures a refined notion of sub-optimality gap compared to the value gaps that\nappear in prior work. Moreover, in MDPs with deterministic transitions, we show\nthat BPI-UCRL is actually near-optimal. On the technical side, our analysis is\nvery simple thanks to a new \"target trick\" of independent interest. We\ncomplement these findings with a novel hardness result explaining why the\ninstance-dependent complexity of PAC RL cannot be easily related to that of\nregret minimization, unlike in the minimax regime.", + "authors": "Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann", + "published": "2022-07-12", + "updated": "2022-07-12", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction We are interested in the probably approximately correct (PAC) identi\ufb01cation of the best policy in an episodic Markov Decision Process (MDP) with \ufb01nite state space S, action space A, and horizon H. We denote by M := (S, A, (ph, \u03bdh)h\u2208[H], s1, H) such an MDP. Each episode starts in the initial state s1 \u2208S and lasts H steps (called stages). In each stage h \u2208[H], the agent is in some state sh \u2208S, it takes an action ah \u2208A, it receives a random reward drawn from a distributions \u03bdh(s, a) with expectation rh(s, a), and it transitions to a next state sh+1 \u2208S with probability ph(\u00b7|sh, ah). A (deterministic) policy \u03c0 = (\u03c0h)h\u2208[H] is a sequence of mappings \u03c0h : S \u2192A. The action-value function Q\u03c0 h(s, a) quanti\ufb01es the expected cumulative reward when starting in state s at stage h, taking action a and following policy \u03c0 until the end of the episode. 
It satis\ufb01es the Bellman equations: for all h \u2208[H], s \u2208S, and a \u2208A, Q\u03c0 h(s, a) = rh(s, a) + X s\u2032\u2208S ph(s\u2032|s, a)V \u03c0 h+1(s\u2032), where V \u03c0 h (s) := Q\u03c0 h(s, \u03c0h(s)) is the corresponding value function (with V \u03c0 H+1 = 0). A policy \u03c0\u22c6is optimal if V \u03c0\u22c6 1 (s1) = max\u03c0 V \u03c0 1 (s1). From the theory of MDPs (Puterman, 1994), a suf\ufb01cient condition is that \u03c0\u22c6 h(s) \u2208 arg maxa\u2208A Q\u22c6 h(s, a), where the optimal Q-function satis\ufb01es Q\u22c6 h(s, a) = rh(s, a) + P s\u2032\u2208S ph(s\u2032|s, a)V \u22c6 h+1(s\u2032), with V \u22c6 h (s) = maxa\u2208A Q\u22c6 h(s, a) and V \u22c6 H+1(s) = 0. This condition implies that \u03c0\u22c6maximizes the expected return at any state and stage simultaneously, while the (weaker) optimality condition only requires so at the initial state s1. In online episodic reinforcement learning (RL), the agent interacts with the MDP M by choosing, in each episode t \u2208N, a policy \u03c0t and collecting a trajectory in the MDP under this policy: (st h, at h, rt h)h\u2208[H] where st 1 = s1 and, for all h \u2208[H], at h = \u03c0t h(st h), rt h \u223c\u03bdh(st h, at h), and st h+1 \u223cph(\u00b7|st h, at h). The choice of \u03c0t based on previously observed 1 arXiv:2207.05852v1 [cs.LG] 12 Jul 2022 \fTIRINZONI, AL-MARJANI, AND KAUFMANN trajectories is called the sampling rule. Several objectives have been studied in the literature. An agent seeking to maximize the total reward received in T episodes equivalently aims at minimizing the (pseudo) regret RM(T) := T X t=1 \u0010 V \u22c6 1 (s1) \u2212V \u03c0t 1 (s1) \u0011 . In PAC identi\ufb01cation (or PAC RL), the agent\u2019s sampling rule is coupled with a (possibly adaptive) stopping rule \u03c4 after which the agent stops collecting trajectories and returns a guess for the optimal policy b \u03c0. Given two parameters \u03b5, \u03b4 > 0 with \u03b4 \u2208(0, 1), the algorithm ((\u03c0t)t\u2208N, \u03c4, b \u03c0) is (\u03b5, \u03b4)-PAC if it returns an \u03b5-optimal policy with high probability, i.e., PM \u0010 V b \u03c0 1 (s1) \u2265V \u22c6 1 (s1) \u2212\u03b5 \u0011 \u22651 \u2212\u03b4. The goal is to have (\u03b5, \u03b4)-PAC algorithms using a small number of exploration episodes \u03c4 (a.k.a. sample complexity). The PAC RL framework was originally introduced by Fiechter (1994) and there exists algorithms attaining a sample complexity O((SAH3/\u03b52) log(1/\u03b4)) (Dann and Brunskill, 2015; M\u00e9nard et al., 2021), which is optimal in a minimax sense in time-inhomogeneous MDPs (Domingues et al., 2021). These algorithms use an optimistic sampling rule coupled with a well-chosen stopping rule. Optimistic sampling rules, in which the policy \u03c0t is the greedy policy with respect to an upper con\ufb01dence bound on the optimal Q function, have been mostly proposed for regret minimization (see Neu and Pike-Burke (2020) for a survey). In particular, the UCBVI algorithm of Azar et al. (2017a) (with Bernstein bonuses) attains minimax optimal regret in episodic MDPs. Recent works have provided instance-dependent upper bounds on the regret for optimistic algorithms (Simchowitz and Jamieson, 2019; Xu et al., 2021; Dann et al., 2021). An instance-dependent bound features some complexity term which depends on the MDP instance, typically through some notion of sub-optimality gap. 
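The Bellman recursions above translate directly into a backward-induction routine. The following minimal sketch (ours, not from the paper) computes Q*, V* and an optimal deterministic policy for a tabular time-inhomogeneous MDP, assuming r[h] is an (S, A) array of mean rewards and p[h] an (S, A, S) array of transition probabilities:

import numpy as np

def backward_induction(r, p):
    H = len(r)
    S, A = r[0].shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))            # V*_{H+1} = 0
    pi = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        # Q*_h(s, a) = r_h(s, a) + sum_{s'} p_h(s'|s, a) V*_{h+1}(s')
        Q[h] = r[h] + p[h] @ V[h + 1]
        V[h] = Q[h].max(axis=1)         # V*_h(s) = max_a Q*_h(s, a)
        pi[h] = Q[h].argmax(axis=1)     # greedy, hence optimal, policy
    return Q, V, pi

The same recursion, run on estimated rewards and transitions with exploration bonuses added or subtracted, yields the optimistic and pessimistic Q-functions used by the algorithm studied below.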
To the best of our knowledge, for PAC RL in episodic MDPs the only algorithms with instance-dependent upper bound on their sample complexity are MOCA (Wagenmaker et al., 2021) and EPRL (Tirinzoni et al., 2022), the latter being analyzed for MDPs with deterministic transitions. Neither of these algorithms are based on an optimistic sampling rule. Notably, Wagenmaker et al. (2021) proved that no-regret sampling rules (including optimistic ones) cannot achieve the instance-optimal rate for PAC identi\ufb01cation. The intuition is quite simple: an optimal algorithm for PAC RL must visit every state-action pair at least a certain amount of times, and this requires playing policies that cover the whole MDP in the minimum amount of episodes. On the other hand, a regret-minimizer focuses on playing high-reward policies which, depending on the MDP instance, might be arbitrarily bad at visiting hard-to-reach states. Despite not being instance-optimal, optimistic sampling rules are simple (e.g., as opposed to the complex design of MOCA), computationally ef\ufb01cient, and do not require any sophisticated elimination rule (e.g., as opposed to the one proposed by Tirinzoni et al. (2022) to obtain the optimal gap dependence in deterministic MDPs). However, it remains an open question what instance-dependent complexity they can achieve. Contributions Our main contribution is a new instance-dependent analysis for (a variant of) BPI-UCRL, a PAC RL algorithm based on an optimistic sampling rule proposed by Kaufmann et al. (2021) with only a worst-case sample complexity bound. In particular, in Theorem 2 we show that the sample complexity of BPI-UCRL can be bounded by \u03c4 \u2272 X h\u2208[H] X s\u2208S X a\u2208A H4 log(1/\u03b4) pmin h (s, a) max{e \u2206h(s, a), \u03b5}2 , where pmin h (s, a) is the minimum positive probability to reach (s, a) at stage h across all deterministic policies, while e \u2206h(s, a) := min\u03c0:p\u03c0 h(s,a)>0 max\u2113\u2208[H] maxs\u2032:p\u03c0 \u2113(s\u2032)>0(V \u22c6 \u2113(s\u2032) \u2212V \u03c0 \u2113(s\u2032)) is a new notion of sub-optimality gap that we call the conditional return gap.1 Interestingly, we show that the gaps e \u2206h(s, a) are larger than both the value gaps of Wagenmaker et al. (2021) and of the (deterministic) return gaps of Tirinzoni et al. (2022). Notably, we prove this result with a remarkably simple analysis based on a new \u201ctarget trick\u201d: instead of bounding the number of times each state-action-stage triplet (s, a, h) is visited (as it is common in the bandit literature), we control the number of times the played policy visits (s, a, h) with positive probability with (s, a, h) being the least visited triplet so far, an event that we refer to as (s, a, h) being \u201ctargeted\u201d. 1. We denote by p\u03c0 h(s, a) (resp. p\u03c0 h(s)) the probability that \u03c0 visits (s, a) (resp. s) at stage h. 2 \fOPTIMISTIC PAC REINFORCEMENT LEARNING: THE INSTANCE-DEPENDENT VIEW Our second contribution is to prove that, unlike what happens in the minimax setting, there is no clear relationship between regret and sample complexity in the instance-dependent framework. Indeed, the \u201cregret-to-PAC conversion\u201d often proposed to turn a regret minimizer into an (\u03b5, \u03b4)-PAC algorithm for PAC RL (e.g., Jin et al., 2018; M\u00e9nard et al., 2021; Wagenmaker et al., 2021) cannot directly exploit an instance-dependent upper bound on the regret. 
In Theorem 4, we construct an MDP for which the sample complexity suggested by a regret-to-PAC conversion cannot be attained by any (\u03b5, \u03b4)-correct algorithm for PAC RL. In particular, this implies that one cannot take an instance-dependent regret bound for an optimistic algorithm (e.g., Simchowitz and Jamieson, 2019) and turn it into an instance-dependent sample complexity bound of the form above: a speci\ufb01c analysis for PAC RL, like the one proposed in this paper, is actually required. 2. The BPI-UCRL Algorithm Let nt h(s, a) := Pt j=1 1 \u0010 sj h = s, aj h = a \u0011 be the number of times the state-action pair (s, a) has been visited at stage h up to episode t. We introduce the maximum-likelihood estimators b rt h(s, a) := 1 nt h(s, a) t X j=1 1 \u0010 sj h = s, aj h = a \u0011 rj h and b pt h(s\u2032|s, a) := 1 nt h(s, a) t X j=1 1 \u0010 sj h = s, aj h = a, sj h+1 = s\u2032\u0011 for rh(s, a) and ph(s\u2032|s, a), respectively. As common, and without loss of generality, we shall assume that reward distributions are supported on [0, 1]. We de\ufb01ne inductively the following upper and lower bounds on the optimal value function. Letting Q t H+1 = Qt H+1 = 0, for all h \u2208[H] we have Q t h(s, a) = min \u0012 H \u2212h + 1, b rt h(s, a) + bt h(s, a) + X s\u2032\u2208S b pt h(s\u2032|s, a)V t h+1(s\u2032) \u0013 , V t h(s) = max a\u2208A Q t h(s, a), Qt h(s, a) = max \u0012 0, b rt h(s, a) \u2212bt h(s, a) + X s\u2032\u2208S b pt h(s\u2032|s, a)V t h+1(s\u2032) \u0013 , V t h(s) = max a\u2208A Qt h(s, a), where bt h(s, a) is a con\ufb01dence bonus de\ufb01ned as bt h(s, a) := (H \u2212h + 1) s \u03b2(nt h(s, a), \u03b4) nt h(s, a) \u22271 ! for a suitable threshold \u03b2 that we shall specify in the analysis. The BPI-UCRL algorithm (Kaufmann et al., 2021) can be described as follows:2 \u2022 the sampling rule prescribes \u03c0t+1 h (s) = arg maxa\u2208A Q t h(s, a) for each t \u2208N; \u2022 the stopping rule is \u03c4 = inf n t \u2208N : maxa Q t 1(s1, a) \u2212maxa Qt 1(s1, a) \u2264\u03b5 o ; \u2022 the recommendation rule is b \u03c0\u03c4 h(s) = arg maxa\u2208A Q\u03c4 h(s, a). Note that the sampling rule of BPI-UCRL is essentially the UCBVI algorithm with Hoeffding\u2019s bonuses proposed by Azar et al. (2017b) for regret minimization. Such bonuses can be improved using Bernstein\u2019s inequality, yielding either UCBVI with Bernstein\u2019s bonuses (Azar et al., 2017b) or EULER (Zanette and Brunskill, 2019). While this would likely reduce the dependence on the horizon from H4 to H3 in our \ufb01nal sample complexity bound, we focus on Hoeffding\u2019s bonuses for simplicity since the extension to Bernstein\u2019s bonuses is somewhat straightforward given existing analyses. 2. The original BPI-UCRL algorithm uses slightly different Q-function bounds which do not feature b rt h(s, a) and b pt h(s\u2032|s, a) explicitly but rather scale with KL con\ufb01dence regions around them (see Appendix D of Kaufmann et al. (2021)). Here we write the explicit version obtained by appyling Pinsker\u2019s inequality, though our analysis also holds for the original con\ufb01dence intervals. 3 \fTIRINZONI, AL-MARJANI, AND KAUFMANN 3. An Instance-dependent Analysis of BPI-UCRL Before stating and proving our main result, we introduce our novel notion of sub-optimality gap. 
Formally, the conditional return gap of any state-action pair (s, a) at stage h \u2208[H] is e \u2206h(s, a) := min \u03c0\u2208\u03a0:p\u03c0 h(s,a)>0 max \u2113\u2208[H] max s\u2032\u2208S:p\u03c0 \u2113(s\u2032)>0 \u0000V \u22c6 \u2113(s\u2032) \u2212V \u03c0 \u2113(s\u2032) \u0001 , (1) where we recall that p\u03c0 h(s) := P\u03c0(sh = s) and p\u03c0 h(s, a) = p\u03c0 h(s)1 (\u03c0h(s) = a). The intuition behind this de\ufb01nition is quite simple: in order to \ufb01gure out whether (s, a) is sub-optimal at stage h, the agent must learn that all policies visiting (s, a) at stage h with positive probability are indeed sub-optimal. The complexity for detecting whether any of such policies (say, \u03c0) is sub-optimal is proportional to the maximum gap between the optimal value function and the one of \u03c0 across all possible states visited by \u03c0 itself. This is a gap between expected returns conditioned on different starting states and stages (hence the name conditional return gap). It turns out that these gaps are larger than both the value gaps \u2206h(s, a) := V \u22c6 h (s) \u2212Q\u22c6 h(s, a) (Wagenmaker et al., 2021) and the variant of the return gaps \u2206h(s, a) = V \u22c6 1 (s1) \u2212max\u03c0\u2208\u03a0:p\u03c0 h(s,a)>0 V \u03c0 1 (s1) introduced by Tirinzoni et al. (2022).3 Proposition 1 For all s \u2208S, a \u2208A, h \u2208[H], e \u2206h(s, a) \u2265\u2206h(s, a) and e \u2206h(s, a) \u2265\u2206h(s, a). Moreover, if the MDP has deterministic transitions, e \u2206h(s, a) = \u2206h(s, a). Proof For the \ufb01rst inequality, we have e \u2206h(s, a) \u2265V \u22c6 h (s) \u2212 max \u03c0:p\u03c0 h(s,a)>0 V \u03c0 h (s) = V \u22c6 h (s) \u2212 max \u03c0:p\u03c0 h(s,a)>0 Q\u03c0 h(s, a) = V \u22c6 h (s) \u2212Q\u22c6 h(s, a) = \u2206h(s, a). The second one is trivial by lower bounding the maximum with s\u2032 = s1 and \u2113= 1. To see the equality, note that V \u22c6 h (s) \u2212V \u03c0 h (s) = E\u03c0 hPH \u2113=h \u2206\u2113(s\u2113, \u03c0\u2113(s\u2113)) | sh = s i . In the deterministic case, this implies that V \u22c6 h (s) \u2212V \u03c0 h (s) is a sum of H \u2212h + 1 \ufb01xed (non-negative) value gaps. Therefore, the maximum in (1) must be attained at the initial stage and state, which implies the statement. The \ufb01rst return gaps were actually introduced in the regret-minimization literature by Dann et al. (2021) as gaph(s, a) = \u2206h(s, a) \u22281 H min \u03c0\u2208\u03a0:P(Bh(s,a))>0 E\u03c0 \" h X \u2113=1 \u2206\u2113(s\u2113, a\u2113) | Bh(s, a) # , where Bh(s, a) = {sh = s, ah = a, \u2203\u2113\u2264h : \u2206\u2113(s\u2113, a\u2113) > 0} is the event that policy (s, a) is visited at stage h after at least one mistake was made. We found no clear relationship between gaph(s, a) and e \u2206h(s, a) besides the fact that the former is also comparing returns (from stage 1), as for any policy playing optimally from stage h + 1, V \u22c6(s1) \u2212V \u03c0(s1) = E\u03c0 hPh \u2113=1 \u2206\u2113(s\u2113, a\u2113) i . We now state and prove our main result. Theorem 2 Let \u03b2(t, \u03b4) := ( p \u03b2r(t, \u03b4) + p 2\u03b2p(t, \u03b4))2, where \u03b2r(t, \u03b4) := 1 2(log(3SAH/\u03b4) + log(e(1 + t))) and \u03b2p(t, \u03b4) := log(3SAH/\u03b4) + (S \u22121) log(e(1 + t/(S \u22121))). 
With probability at least 1 \u2212\u03b4, BPI-UCRL outputs a policy b \u03c0\u03c4 satisfying V b \u03c0\u03c4 1 (s1) \u2265V \u22c6 1 (s1) \u2212\u03b5 using a number of episodes upper bounded as \u03c4 \u2264H4 H X h=1 X s\u2208S X a\u2208A 720 log 3SAH \u03b4 + 1729S log \u0010 1152 S2AH5 pmin\u03b52 log 3SAH \u03b4 \u0011 pmin h (s, a) max{e \u2206h(s, a), \u03b5}2 , where pmin h (s, a) := min\u03c0\u2208\u03a0:p\u03c0 h(s,a)>0 p\u03c0 h(s, a) and pmin h (s, a) = +\u221ewhen (s, a, h) is unreachable by any policy. 3. The return gaps were introduced by Tirinzoni et al. (2022) only for deterministic MDPs. Here we replace their maximum over policies visiting (s, a, h) with probability 1 by the one over policies visiting it with positive probability. 4 \fOPTIMISTIC PAC REINFORCEMENT LEARNING: THE INSTANCE-DEPENDENT VIEW Theorem 2 shows that the sample complexity of BPI-UCRL is upper bounded by a function that scales inversely with the conditional return gaps squared multipled by the minimum visitation probabilities of each triplet (s, a, h). We recall that BPI-UCRL also enjoys the worst-case sample complexity bound \u03c4 \u2264e O(SAH4 log(1/\u03b4)/\u03b52) proved by Kaufmann et al. (2021), which is minimax optimal up to a factor H. Thus, one can always take the minimum between this worst-case bound and the instance-dependent one of Theorem 2. Before proving our main theorem, we brie\ufb02y discuss how it relates to existing results. Comparison to Wagenmaker et al. (2021) The sample complexity upper bound achieved by the MOCA algorithm of Wagenmaker et al. (2021) is roughly \u03c4 \u2264e O \uf8eb \uf8edH2 log(1/\u03b4) X h\u2208[H] X s\u2208S X a\u2208A min \u0012 1 pmax h (s, a)\u2206h(s, a)2 , pmax h (s, a) \u03b52 \u0013 + H4|OPT(\u03b5)| log(1/\u03b4) \u03b52 \uf8f6 \uf8f8, where pmax h (s, a) := max\u03c0\u2208\u03a0:p\u03c0 h(s,a)>0 p\u03c0 h(s, a) and OPT(\u03b5) is roughly the set of all \u03b5-optimal triplets. In contrast to the bound we obtained for BPI-UCRL, this one scales with the maximum probabilities for reaching the different state-action pairs. This is obtained thanks to the clever exploration strategy of MOCA which focuses on ef\ufb01ciently covering the whole MDP. However, the bound of Wagenmaker et al. (2021) scales with value gaps which, from Proposition 1, are provably smaller than our conditional return gaps. Overall, the two bounds result non-comparable as there exist MDP instances where the one of BPI-UCRL is smaller, and viceversa for the one of MOCA. While we are able to show this improved gap dependence thanks to optimism alone, we are not sure how to achieve it with a suitable elimination rule that could be plugged into the MOCA exploration strategy to obtain the best of these two bounds. The dependence on pmin h (s, a) One might be wondering whether a better dependence than pmin h (s, a) can be achieved with an optimistic rule like BPI-UCRL. We conjecture that this is not possible, at least in a worst-case sense. In fact, Wagenmaker et al. (2021) already proved that there exists an MDP instance in which any no-regret sampling rule (thus including optimistic ones) suffers a depence on the minimum visitation probabilities, while a \u201csmart\u201d PAC RL algorithm does not. The intuition is that a no-regret algorithm focuses on playing high-reward policies which, depending on the MDP instance, might be arbitrarily bad at exploring the state space. 
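As an aside, both the conditional return gaps of Eq. (1) and the probabilities pmin_h(s, a) appearing in the bound can be computed by brute force on very small instances, which is convenient for sanity checks. The sketch below enumerates all deterministic time-dependent policies (as the identity p^pi_h(s, a) = p^pi_h(s) 1(pi_h(s) = a) suggests is the intended policy class); the array layout and the single initial state are illustrative assumptions.

```python
import itertools
import numpy as np

def gaps_and_pmin(p, r, s1=0):
    """Brute-force conditional return gaps (Eq. 1) and minimum reaching probabilities.

    Enumerates all A**(S*H) deterministic time-dependent policies, so it is only
    meant as a sanity check on very small tabular MDPs. p: (H, S, A, S) kernels,
    r: (H, S, A) mean rewards. Returns arrays gap[h, s, a] and p_min[h, s, a],
    both equal to np.inf on triplets that no policy can reach.
    """
    H, S, A, _ = p.shape

    # Optimal value functions by backward induction.
    V_star = np.zeros((H + 1, S))
    for h in reversed(range(H)):
        V_star[h] = (r[h] + p[h] @ V_star[h + 1]).max(axis=1)

    gaps = np.full((H, S, A), np.inf)
    p_min = np.full((H, S, A), np.inf)
    for assignment in itertools.product(range(A), repeat=H * S):
        pi = np.array(assignment).reshape(H, S)
        # Policy evaluation (backward pass) and state visitation probabilities (forward pass).
        V = np.zeros((H + 1, S))
        for h in reversed(range(H)):
            for s in range(S):
                V[h, s] = r[h, s, pi[h, s]] + p[h, s, pi[h, s]] @ V[h + 1]
        d = np.zeros((H, S))
        d[0, s1] = 1.0
        for h in range(H - 1):
            for s in range(S):
                if d[h, s] > 0:
                    d[h + 1] += d[h, s] * p[h, s, pi[h, s]]
        visited = d > 0
        # max over visited (l, s2) of V*_l(s2) - V^pi_l(s2): the conditional return deficit of pi.
        worst = (V_star[:H][visited] - V[:H][visited]).max()
        for h in range(H):
            for s in range(S):
                if d[h, s] > 0:
                    a = pi[h, s]
                    gaps[h, s, a] = min(gaps[h, s, a], worst)
                    p_min[h, s, a] = min(p_min[h, s, a], d[h, s])
    return gaps, p_min
```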
In our context, this means that, if the policy visiting (s, a, h) with largest reward is also the one that visits it with lowest probability, an optimistic sampling rule is likely to play such policy quite frequently and thus its sample complexity will scale inversely by pmin h (s, a) as we show. Deterministic MDPs (comparison to Tirinzoni et al. (2022)) If the MDP has deterministic transitions, we have e \u2206h(s, a) = \u2206h(s, a) (see Proposition 1) and pmin h (s, a) = 1 if state s is reachable by some policy at stage h, while pmin h (s, a) = +\u221ein the opposite case. Theorem 2 then implies that \u03c4 \u2264e O \uf8eb \uf8edH4 X h\u2208[H] X s\u2208Sh X a\u2208A log(1/\u03b4) + S log log(1/\u03b4) max{\u2206h(s, a), \u03b5}2 \uf8f6 \uf8f8, where Sh is the subset of states reachable at stage h. Up to the extra multiplicative H2 and S log log(1/\u03b4) terms, this matches the bound obtained by Tirinzoni et al. (2022) for the EPRL algorithm with a maximum-diameter sampling rule that is informed a-priori about the MDP being deterministic. These extra terms arise because BPI-UCRL needs to concentrate the transition probabilities to work for general stochastic MDPs. If we knew that the MDP is deterministic, we could modify the bonuses as bt h(s, a) := q \u03b2(nt h(s,a),\u03b4) nt h(s,a) \u22271 and the thresholds as \u03b2(t, \u03b4) := \u03b2r(t, \u03b4). This would yield sample complexity \u03c4 \u2264e O \u0010 H2 P h\u2208[H] P s\u2208Sh P a\u2208A log(1/\u03b4)/ max{\u2206h(s, a), \u03b5}2\u0011 which matches exactly the one of EPRL with maximum-diameter sampling and which is at most a factor of H3 sub-optimal w.r.t. the instance-dependent lower bound of Tirinzoni et al. (2022). This is quite remarkable since EPRL obtains the \u201coptimal\u201d dependence on the deterministic return gaps \u2206h(s, a) using a clever elimination rule, while here we show optimism alone is enough. We note, however, that reducing the sub-optimal dependence on H3 still requires smarter exploration strategies than optimism, like the maximum-coverage one proposed by Tirinzoni et al. (2022). 3.1 Proof of Theorem 2 All lemmas and proofs not explicitly reported here can be found in Appendix A. 5 \fTIRINZONI, AL-MARJANI, AND KAUFMANN We carry out the proof under the \u201cgood event\u201d E := Er \u2229Ep \u2229Ec, where Er := ( \u2200t \u2208N>0, s \u2208S, a \u2208A, h \u2208[H] : \f \f \frh(s, a) \u2212b rt h(s, a) \f \f \f \u2264 s \u03b2r(nt h(s, a), \u03b4) nt h(s, a) \u22281 ) , Ep := \u001a \u2200t \u2208N>0, s \u2208S, a \u2208A, h \u2208[H] : KL \u0000b pt h(\u00b7|s, a), ph(\u00b7|s, a) \u0001 \u2264\u03b2p(nt h(s, a), \u03b4) nt h(s, a) \u22281 \u001b , Ec := \u001a \u2200t \u2208N>0, s \u2208S, a \u2208A, h \u2208[H] : nt h(s, a) \u22651 2nt h(s, a) \u2212log(3SAH/\u03b4) \u001b . Note that event Ec relates the counts nt h(s, a) to the pseudo-counts nt h(s, a) := Pt j=1 p\u03c0j h (s, a). Thanks to Lemma 5, we have P(E) \u22651 \u2212\u03b4 and, thus, the \ufb01nal result will hold with the same probability. This good event is identical to the one used in the original (minimax) analysis of BPI-UCRL (Kaufmann et al., 2021). On this good event, one can prove that our (slighlty different) bounds Q t h(s, a), Qt h(s, a) are respectively upper and lower bounds on the optimal action value Q\u22c6 h(s, a), for all (s, a, h) (see Lemma 6, which justi\ufb01es the choice of threshold \u03b2). The correctness follows from this fact using the same arguments as Theorem 11 of Kaufmann et al. (2021). 
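To make the planning step of Section 2 concrete, the sketch below reconstructs the upper and lower Q-bounds with Hoeffding bonuses (using the threshold of Theorem 2) and checks the stopping condition. The 0-indexed stages, the handling of unvisited pairs, the exact placement of the clipping at 1 inside the bonus, and the use of the lower bound for the recommendation are illustrative assumptions where the extracted text is ambiguous.

```python
import numpy as np

def beta_threshold(t, delta, S, A, H):
    """Threshold beta(t, delta) = (sqrt(beta_r(t, delta)) + sqrt(2 * beta_p(t, delta)))**2
    with beta_r and beta_p as in Theorem 2 (applied elementwise when t is an array of counts)."""
    beta_r = 0.5 * (np.log(3 * S * A * H / delta) + np.log(np.e * (1 + t)))
    beta_p = np.log(3 * S * A * H / delta) + (S - 1) * np.log(np.e * (1 + t / (S - 1)))
    return (np.sqrt(beta_r) + np.sqrt(2 * beta_p)) ** 2

def bpi_ucrl_planning(r_hat, p_hat, n, delta, eps, s1=0):
    """One planning step of a BPI-UCRL-style rule with Hoeffding bonuses (a sketch).

    r_hat: (H, S, A) empirical mean rewards; p_hat: (H, S, A, S) empirical kernels;
    n: (H, S, A) visit counts. Stage h is 0-indexed, so H - h plays the role of
    H - h + 1 in the text. Returns the optimistic sampling policy, a recommended
    policy, and whether the stopping rule at the initial state s1 triggers.
    """
    H, S, A = r_hat.shape
    Q_up = np.zeros((H + 1, S, A))   # upper confidence bounds, zero terminal values
    Q_lo = np.zeros((H + 1, S, A))   # lower confidence bounds
    for h in reversed(range(H)):
        beta = beta_threshold(n[h], delta, S, A, H)
        # Bonus b_h^t(s, a); whether the minimum with 1 clips beta/n or the whole
        # square root is an assumption of this sketch.
        bonus = (H - h) * np.sqrt(np.minimum(beta / np.maximum(n[h], 1), 1.0))
        V_up = Q_up[h + 1].max(axis=1)
        V_lo = Q_lo[h + 1].max(axis=1)
        Q_up[h] = np.minimum(H - h, r_hat[h] + bonus + p_hat[h] @ V_up)
        Q_lo[h] = np.maximum(0.0, r_hat[h] - bonus + p_hat[h] @ V_lo)
    pi_sample = Q_up[:H].argmax(axis=2)   # optimistic (UCBVI-like) sampling rule
    pi_rec = Q_lo[:H].argmax(axis=2)      # recommendation; greedy w.r.t. the lower bound here
    stop = Q_up[0, s1].max() - Q_lo[0, s1].max() <= eps
    return pi_sample, pi_rec, stop
```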
The original part of our proof is the way we upper bound the sample complexity on the good event E. Our proof is based on the following \u201ctarget trick\u201d which extends the one used by Tirinzoni et al. (2022) to MDPs with stochastic transitions. Fix any s \u2208S, a \u2208A, and h \u2208[H]. Let us introduce the event \u201c(s, a, h) is targeted at time t\u201d as Gt s,a,h := ( p\u03c0t h (s, a) > 0, (s, a, h) \u2208 arg max (s\u2032,a\u2032,\u2113):p\u03c0t \u2113(s\u2032,a\u2032)>0 bt\u22121 \u2113 (s\u2032, a\u2032) ) . Intutively, (s, a, h) is targeted at time t if (1) it is visited with positive probability by \u03c0t and (2) it maximizes the bonuses at time t \u22121 (i.e., the current uncertainty) across all triplets visited by \u03c0t. Let Z\u03c4 h(s, a) := P\u03c4 t=1 1 \u0010 Gt s,a,h \u0011 be the number of times (s, a, h) is targeted up the stopping time. Since at each time step at least one triplet is targeted, \u03c4 \u2264 H X h=1 X s\u2208S X a\u2208A Z\u03c4\u22121 h (s, a) + 1. (2) We shall now focus on bounding ZT h (s, a) for some time T > 0 at the end of which the algorithm did not stop. Thanks to (2), this will imply a bound on the \ufb01nal stopping time. We \ufb01rst state the following crucial result which relates con\ufb01dence intervals and conditional return gaps. Lemma 3 Under event E, for any t \u2208N>0, s \u2208S, h \u2208[H], V \u22c6 h (s) \u2212V \u03c0t+1 h (s) \u22642 H X \u2113=h X s\u2032\u2208S p\u03c0t+1 \u2113 (s\u2032|s, h)bt \u2113(s\u2032, \u03c0t+1 \u2113 (s\u2032)), where p\u03c0 \u2113(s\u2032|s, h) := P\u03c0(s\u2113= s\u2032|sh = s). Let (\u02dc st, \u02dc ht) \u2208arg max(s\u2032,\u2113):p\u03c0t \u2113(s\u2032)>0(V \u22c6 \u2113(s\u2032) \u2212V \u03c0t \u2113(s\u2032)). Using Lemma 3 with this couple, max \u2113\u2208[H] max s\u2032\u2208S:p\u03c0t \u2113(s\u2032)>0 \u0000V \u22c6 \u2113(s\u2032) \u2212V \u03c0t \u2113(s\u2032) \u0001 \u22642 H X \u2113=\u02dc ht X s\u2032\u2208S p\u03c0t \u2113(s\u2032|\u02dc st, \u02dc ht)bt\u22121 \u2113 (s\u2032, \u03c0t \u2113(s\u2032)). Summing both sides for all episodes where (s, a, h) is targeted up to T and using that p\u03c0t h (s, a) > 0 under Gt s,a,h, 2 T X t=1 1 \u0000Gt s,a,h \u0001 H X \u2113=\u02dc ht X s\u2032\u2208S p\u03c0t \u2113(s\u2032|\u02dc st, \u02dc ht)bt\u22121 \u2113 (s\u2032, \u03c0t \u2113(s\u2032)) \u2265ZT h (s, a)e \u2206h(s, a). (3) 6 \fOPTIMISTIC PAC REINFORCEMENT LEARNING: THE INSTANCE-DEPENDENT VIEW Note that, for each time t, since p\u03c0t \u02dc ht(\u02dc st) > 0, then p\u03c0t \u2113(s\u2032|\u02dc st, \u02dc ht) > 0 implies that p\u03c0t \u2113(s\u2032) > 0. Using that, under Gt s,a,h, (s, a, h) maximizes the bonuses at time t \u22121 over all triplets visited by \u03c0t, we can upper bound the left-hand side as T X t=1 1 \u0000Gt s,a,h \u0001 H X \u2113=\u02dc ht X s\u2032\u2208S p\u03c0t \u2113(s\u2032|\u02dc st, \u02dc ht)bt\u22121 \u2113 (s\u2032, \u03c0t \u2113(s\u2032)) \u2264H T X t=1 1 \u0000Gt s,a,h \u0001 bt\u22121 h (s, a) (a) \u22642H2 T X t=1 1 \u0000Gt s,a,h \u0001 s \u03b2(nt\u22121 h (s, a), \u03b4) nt\u22121 h (s, a) \u22281 (b) \u22642H2 T X t=1 1 \u0000Gt s,a,h \u0001 s \u03b2(T, \u03b4) Zt\u22121 h (s, a)pmin h (s, a) \u22281 (c) \u22644H2 s \u03b2(T, \u03b4)ZT h (s, a) pmin h (s, a) . where (a) uses Lemma 7 of Kaufmann et al. 
(2021) together with the de\ufb01nition of bt\u22121 h (s, a), (b) uses that nt\u22121 h (s, a) \u2265 Pt\u22121 j=1 1 \u0010 Gj s,a,h \u0011 p\u03c0j h (s, a) \u2265Zt\u22121 h (s, a)pmin h (s, a), and (c) uses the pigeon-hole principle (see Lemma 8). Plugging this into (3) and solving the resulting inequality in ZT h (s, a), we obtain ZT h (s, a) \u2264 64H4\u03b2(T, \u03b4) pmin h (s, a)e \u2206h(s, a)2 . A similar derivation using the stopping rule de\ufb01nition together with the fact that the algorithm did not stop at T also allows us to prove that ZT h (s, a) \u2264144H4\u03b2(T,\u03b4) pmin h (s,a)\u03b52 (see Lemma 9). Plugging these two bounds into (2) with T = \u03c4 \u22121, \u03c4 \u2264 H X h=1 X s\u2208S X a\u2208A 144H4\u03b2(\u03c4 \u22121, \u03b4) pmin h (s, a) max{e \u2206h(s, a), \u03b5}2 + 1. (4) The proof is concluded by noting that \u03b2(\u03c4 \u22121, \u03b4) \u22645 log 3SAH \u03b4 + 4S + 4S log (\u03c4) (see Lemma 10) and by using Lemma 11 to solve the resulting inequality in \u03c4 (see Appendix A.4). \u25a0 4. On the Regret-to-PAC Conversion In the minimax setting, the complexity of PAC RL and that of regret minimization are very related. Indeed, Jin et al. (2018) suggest the following regret-to-PAC conversion: one can take a regret minimizer, run it for T episodes, and output a policy b \u03c0 uniformly drawn from the T played. Then, by Markov\u2019s inequality, P \u0000V b \u03c0 1 (s1) < V \u22c6 1 (s1) \u2212\u03b5 \u0001 \u2264 1 T \u03b5 PT t=1 E[V \u22c6 1 (s1) \u2212V \u03c0t 1 (s1)] = 1 \u03b5E[RM(T)/T]. Thus, choosing T such that the expected average regret is smaller than \u03b5\u03b4 yields an \u03b5-optimal policy with probability 1 \u2212\u03b4. This is why in the literature it is common to derive an upper bound R(T) on the expected average regret and then claim that the resulting sample complexity for PAC RL is T\u03b5 := infT \u2208N \b T : R(T) \u2264\u03b5\u03b4 \t . However, this claim can be misleading. Applying this regret-to-PAC conversion to the UCBVI algorithm with Bernstein bonuses (Azar et al., 2017a), we get a sample complexity of order O(SAH3 log(1/\u03b4)/(\u03b52\u03b42)), which is optimal in a minimax sense in all dependencies except \u03b4.4 However, this trick can only be perfomed when R(T) contains quantities known by the algorithm (e.g., it can be a worst-case bound but not an instance-dependent one). In fact, the regret minimizer is used as a sampling rule for PAC identi\ufb01cation coupled with a deterministic stopping rule which simply stops after T\u03b5 episodes. When T\u03b5 is unknown, we need to use an adaptive stopping rule, in which case the claimed sample complexity T\u03b5 might not be attainable. This is proved in the following theorem, where we show that there exist MDPs where T\u03b5 can be exponentially (in S, A) smaller than the actual stopping time of any (\u03b5, \u03b4)-PAC algorithm. 4. The dependence on \u03b4 can be improved to log(1/\u03b4)2, see Appendix F of Kaufmann et al. (2021). 7 \fTIRINZONI, AL-MARJANI, AND KAUFMANN Theorem 4 For any S \u22654, A \u22652 and H \u2265\u2308log2(S)\u2309+ 1, there exists an MDP M with S states, A actions, and horizon H, and a regret minimization algorithm such that T\u03b5 := inf T \u2208N ( T : 1 T T X t=1 EM h V \u22c6 1 (s1) \u2212V \u03c0t 1 (s1) i \u2264\u03b5\u03b4 ) \u2264 2 \u03b52\u03b4 \u0012 36 log(2SAH) + 16 log 17 \u03b52\u03b4 + 9\u03b52 \u0013 + 1. 
Moreover, on the same instance any (\u03b5, \u03b4)-PAC identi\ufb01cation algorithm must satisfy EM[\u03c4] \u2265SA log(1/4\u03b4) 16\u03b52 . Our proof (see Appendix B) essentially builds an MDP instance with many optimal actions. The intuition is that, in such MDP, it is relatively easy for a regret minimizer to start behaving near optimally (i.e., to have average regret below \u03b5\u03b4). However, when this occurs the regret minimizer has still not enough con\ufb01dence to produce an \u03b5-optimal policy with probability at least 1 \u2212\u03b4. That is, a stopping rule for identi\ufb01cation would not trigger, hence the separation between the two times. The main implication is that the time T\u03b5 at which the average regret goes below \u03b5\u03b4 is not always a good proxy for the sample complexity that a regret minimizer would take for (\u03b5, \u03b4)-PAC identi\ufb01cation. In particular, one cannot simply take an existing instance-dependent regret bound (e.g., Simchowitz and Jamieson, 2019; Dann et al., 2021; Xu et al., 2021) and turn it into a sample complexity bound by the regret-to-PAC conversion suggested above. A speci\ufb01c analysis for the PAC setting, like the one we propose in Section 3 or those of Wagenmaker et al. (2021); Tirinzoni et al. (2022), is actually needed. Finally, we note that this result also solves an open question left by Wagenmaker et al. (2021) in their conclusion. First, it shows that the sample complexity stated in Equation (7.1) of Wagenmaker et al. (2021) for a regret-to-PAC conversion from an instance-dependent regret bound cannot always be attained by a PAC RL algorithm. Second, it shows that the extra term |OPT(\u03b5)|/\u03b52 that appears in the complexity of MOCA is actually tight, at least in a worst-case sense, as our proof essentially builds an MDP where all \u03b5-optimal state-action pairs must be visited \u2126(1/\u03b52) times. 5. Discussion We derived the \ufb01rst instance-dependent sample complexity bound for an optimistic sampling rule (BPI-UCRL). It features a new notion of sub-optimality gap that we call \u201cconditional return gap\u201d and that is tighter than existing value gaps and (deterministic) return gaps. We proved this bound with a remarkably simple analysis based on a new \u201ctarget trick\u201d that could be of independent interest. We complemented this result by showing that one cannot directly leverage the standard regret-to-PAC conversion in the instance-dependent regime, thus making our novel analysis non-trivial. In the bandit setting, it is known that optimism, when coupled with an appropriate stopping and recommendation rule, is near instance-optimal for best-arm identi\ufb01cation with (sub)Gaussian distributions (Jamieson et al., 2014). In this work, we obtained a similar result for deterministic MDPs, where optimistic sampling rules are sub-optimal only by a factor H3. This also explains the good empirical performance of BPI-UCRL observed by Tirinzoni et al. (2022) in such a setting. However, there seems to be a large gap for general stochastic MDPs, where our sample complexity scales with some minimal visitation probabilities that are avoided by algorithms like MOCA. This can be related to known results for structured bandits (Lattimore and Szepesvari, 2017), as a stochastic MDP presents a complex trade-off between collecting rewards and gathering information (i.e., exploring the state space) for which an optimistic algorithm can be arbitrarily sub-optimal. 
Finding the right complexity (matching upper and lower bounds) for PAC RL in general stochastic MDPs remains the main open problem. In deterministic MDPs, upper and lower bounds are nearly matching and are expressed as (complex) functions of the (simple) deterministic return gaps (Tirinzoni et al., 2022). They were obtained by properly combining a coverage-based exploration strategy with a suitable elimination rule. We conjecture that a similar algorithmic design could be a good direction towards instance optimality in stochastic MDPs. This would involve the combination of (1) a coverage-based exploration strategy like MOCA (Wagenmaker et al., 2021) that ensures scaling with the \u201cright\u201d visitation probabilities, and (2) some elimination rule to avoid over-sampling that ensures scaling with the \u201cright\u201d notion of gap. Unfortunately, an instance-dependent lower bound for the general setting is still unknown and thus it remains 8 \fOPTIMISTIC PAC REINFORCEMENT LEARNING: THE INSTANCE-DEPENDENT VIEW unclear what these \u201cright\u201d notions are. In this work, we take a step forward by proposing a novel and tighter gap de\ufb01nition, though it remains an open question whether our conditional return gaps can be related to an actual sample complexity lower bound." + }, + { + "url": "http://arxiv.org/abs/2205.10936v2", + "title": "On Elimination Strategies for Bandit Fixed-Confidence Identification", + "abstract": "Elimination algorithms for bandit identification, which prune the plausible\ncorrect answers sequentially until only one remains, are computationally\nconvenient since they reduce the problem size over time. However, existing\nelimination strategies are often not fully adaptive (they update their sampling\nrule infrequently) and are not easy to extend to combinatorial settings, where\nthe set of answers is exponentially large in the problem dimension. On the\nother hand, most existing fully-adaptive strategies to tackle general\nidentification problems are computationally demanding since they repeatedly\ntest the correctness of every answer, without ever reducing the problem size.\nWe show that adaptive methods can be modified to use elimination in both their\nstopping and sampling rules, hence obtaining the best of these two worlds: the\nalgorithms (1) remain fully adaptive, (2) suffer a sample complexity that is\nnever worse of their non-elimination counterpart, and (3) provably eliminate\ncertain wrong answers early. We confirm these benefits experimentally, where\nelimination improves significantly the computational complexity of adaptive\nmethods on common tasks like best-arm identification in linear bandits.", + "authors": "Andrea Tirinzoni, R\u00e9my Degenne", + "published": "2022-05-22", + "updated": "2022-10-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The multi-armed bandit is a sequential decision-making task which is now extensively studied (see, e.g., [1] for a recent review). In this problem, an algorithm interacts with its environment by sequentially \u201cpulling\u201d one among K \u2208N arms and observing a sample from a corresponding distribution. Among the possible objectives, we focus on \ufb01xed-con\ufb01dence identi\ufb01cation [2\u20135]. In this setting, the algorithm successively collects samples until it decides to stop and return an answer to a given query about the distributions. 
Its task is to return the correct answer with at most a given probability of error \u03b4, and its secondary goal is to do so while stopping as early as possible. This problem is called \u201c\ufb01xed-con\ufb01dence\u201d as opposed to \u201c\ufb01xed-budget\u201d, where the goal is to minimize the error probability with at most a given number of samples [6\u201310]. The most studied query is best arm identi\ufb01cation (BAI), where the aim is to return the arm whose distribution has highest mean. A variant is Top-m identi\ufb01cation [11], where the goal is to \ufb01nd the m arms with highest means. While these are the most common, other queries have been studied, including thresholding bandits [9], minimum threshold [12], and multiple correct answers [13]. \u2217Work done while at Inria Lille. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2205.10936v2 [cs.LG] 24 Oct 2022 \fAlgorithms for \ufb01xed-con\ufb01dence identi\ufb01cation can be generally divided into two classes: those based on adaptive sampling and those based on elimination. Adaptive algorithms [e.g., 8, 11, 4, 14] update their sampling strategy at each round and typically stop when they can simultaneously assess the correctness of every answer. They often enjoy strong theoretical guarantees. For instance, some of them [4, 15\u201317] have been shown to be optimal as \u03b4 \u21920. However, since they repeatedly test the correctness of every answer, they are often computationally demanding. Elimination-based strategies [e.g., 2, 18\u201321] maintain a set of \u201cactive\u201d answers (those that are still likely to be the correct one) and stop when only one remains. They typically update their sampling rules and/or the active answers infrequently. This, together with the fact that eliminations reduce the problem size over time, makes them more computationally ef\ufb01cient but also yields large sample complexity in practice. Moreover, while adaptive algorithms for general identi\ufb01cation problems (i.e., with arbitrary queries) exist [4, 15, 17], elimination-based strategies are not easy to design at such a level of generality. In particular, they are not easy to extend to structured combinatorial problems (such as Top-m), where the number of answers is exponential in the problem dimension.2 In this paper, we design a novel elimination rule for general identi\ufb01cation problems which we call selective elimination. It can be easily combined with existing adaptive strategies, both in their stopping and sampling rules, making them achieve the best properties of the two classes mentioned above. In particular, we prove that (1) selective elimination never suffers worse sample complexity than the original algorithm, and hence remain asymptotically optimal whenever the base algorithm is; (2) It provably discards some answers much earlier than the stopping time; (3) It improves the computational complexity of the original algorithm when some answers are eliminated early. Experimentally, we compare several existing algorithms for three identi\ufb01cation problems (BAI, Top-m, and thresholding bandits) on two bandit structures (linear and unstructured). We \ufb01nd that, coherently across all experiments, existing adaptive strategies achieve signi\ufb01cant gains in computation time and, to a smaller extent, in sample complexity when combined with selective elimination. 1.1 Bandit \ufb01xed-con\ufb01dence identi\ufb01cation An algorithm interacts with an environment composed of K > 1 arms. 
At each time t \u2208N, the algorithm picks an arm kt and observes Xkt t \u223c\u03bdkt, where \u03bdkt is the distribution of arm kt. At a time \u03c4, the algorithm stops and returns an answer \u02c6 \u0131 from a \ufb01nite set I. Formally, let Ft be the \u03c3-algebra generated by the observations up to time t. An identi\ufb01cation algorithm is composed of 1. Sampling rule: the sequence (kt)t\u2208N, where kt is Ft\u22121-measurable. 2. Stopping rule: a stopping time \u03c4 with respect to (Ft)t\u2208N and a random variable \u02c6 \u0131 \u2208I, i.e., the answer returned when stopping at time \u03c4. Note that, while it is common to decouple \u03c4 and \u02c6 \u0131, we group them to emphasize that the time at which an algorithm stops depends strongly on the answer it plans on returning. We assume that the arm distributions depend on some unknown parameter \u03b8 \u2208M, where M \u2286Rd is the set of possible parameters, and write \u03bdk(\u03b8) for k \u2208[K] to make this dependence explicit. For simplicity, we shall use \u03b8 to refer to the bandit problem (\u03bdk(\u03b8))k\u2208[K]. This assumption allows us to include linear bandits in our analysis. We let i\u22c6: M \u2192I be the function, known to the algorithm, which returns the unique correct answer for each problem. The algorithm is correct on \u03b8 if \u02c6 \u0131 = i\u22c6(\u03b8). De\ufb01nition 1.1 (\u03b4-correct algorithm). An algorithm is said to be \u03b4-correct on M \u2286Rd if for all \u03b8 \u2208M, \u03c4 < +\u221ealmost surely and P\u03b8(\u02c6 \u0131 \u0338= i\u22c6(\u03b8)) \u2264\u03b4 . We want to design algorithms that, given a value \u03b4, are \u03b4-correct on M and have minimal expected sample complexity E\u03b8[\u03c4] for all \u03b8 \u2208M. A lower bound on E\u03b8[\u03c4] was proved in [4]. In order to present it, we introduce the concept of alternative set to an answer i \u2208I: \u039b(i) := {\u03bb \u2208M | i\u22c6(\u03bb) \u0338= i}, the set of parameters for which the correct answer is not i. Let us denote by KLk(\u03b8, \u03bb) the KullbackLeibler (KL) divergence between the distribution of arm k under \u03b8 and \u03bb. Then the lower bound states that for any algorithm that is \u03b4-correct on M and any problem \u03b8 \u2208M, E\u03b8[\u03c4] \u2265log(1/(2.4\u03b4))/H\u22c6(\u03b8) , with H\u22c6(\u03b8) := max \u03c9\u2208\u2206K inf \u03bb\u2208\u039b(i\u22c6(\u03b8)) X k\u2208[K] \u03c9k KLk(\u03b8, \u03bb) . (1) 2An elimination strategy for speci\ufb01c unstructured combinatorial problems has been introduced in [22]. 2 \fExample: BAI in Gaussian linear bandits While our results apply to general queries, we illustrate all statements of this paper on the widely-studied task of BAI in Gaussian linear bandits [19, 14, 23, 16, 24]. In this setting, each arm k \u2208[K] has a Gaussian distribution N(\u00b5k(\u03b8), 1) with mean \u00b5k(\u03b8) = \u03c6\u22a4 k \u03b8, a linear function of the unknown parameter \u03b8 \u2208Rd (and M = Rd) and of known arm features \u03c6k \u2208Rd. The set of answers is I = [K] and the correct answer is i\u22c6(\u03b8) := arg maxk\u2208[K] \u03c6\u22a4 k \u03b8. Finally, for x \u2208Rd and A \u2208Rd\u00d7d, we de\ufb01ne \u2225x\u2225A := \u221a x\u22a4Ax. For \u03c9 \u2208RK, let V\u03c9 := PK k=1 \u03c9k\u03c6k\u03c6\u22a4 k . With this notation, we have P k\u2208[K] \u03c9k KLk(\u03b8, \u03bb) = 1 2\u2225\u03b8 \u2212\u03bb\u22252 V\u03c9. 
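For the unstructured special case of the example above (K = d, canonical features, unit variance), the characteristic quantity H*(theta) from the lower bound can be estimated directly: the inner infimum over the alternative of a sub-optimal arm has the standard pooled-mean closed form, and the outer maximization over the simplex is approximated below by crude Dirichlet random search. This is only a numerical sanity check, not a procedure used by any algorithm in the paper.

```python
import numpy as np

def best_response(w, mu):
    """inf over the alternative of sum_k w_k KL_k(theta, lambda) for unstructured
    Gaussian BAI with unit variance: for each sub-optimal arm j the value is
    (w_star * w_j / (w_star + w_j)) * (mu_star - mu_j)**2 / 2 (pooled-mean formula)."""
    i_star = int(np.argmax(mu))
    vals = []
    for j in range(len(mu)):
        if j == i_star:
            continue
        denom = w[i_star] + w[j]
        vals.append(0.0 if denom == 0 else
                    w[i_star] * w[j] / denom * (mu[i_star] - mu[j]) ** 2 / 2)
    return min(vals)

def characteristic_value(mu, n_samples=20000, seed=0):
    """Crude estimate of H*(theta) = max_w min_j (...) by Dirichlet random search."""
    rng = np.random.default_rng(seed)
    ws = rng.dirichlet(np.ones(len(mu)), size=n_samples)
    return max(best_response(w, mu) for w in ws)
```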
1.2 Log-likelihood ratio stopping rules Most existing adaptive algorithms use a log-likelihood ratio (LLR) test in order to decide when to stop. Informally, they check whether suf\ufb01cient information has been collected to con\ufb01dently discard at once all answers except one. Since such LLR tests are crucial for the design of our general elimination rules, we now describe their principle. Given two parameters \u03b8, \u03bb \u2208M, the LLR of observations X[t] = (Xk1 1 , . . . , Xkt t ) between models \u03b8 and \u03bb is Lt(\u03b8, \u03bb) := log dP\u03b8 dP\u03bb (X[t]) = Pt s=1 log dP\u03b8 dP\u03bb (Xks s ) . Let \u02c6 \u03b8t := arg max\u03bb\u2208M log P\u03bb(X[t]) be the maximum likelihood estimator of \u03b8 from t observations. In Gaussian linear bandits, we have Lt(\u03b8, \u03bb) = 1 2\u2225\u03b8 \u2212\u03bb\u22252 VNt + (\u03b8 \u2212\u03bb)\u22a4VNt(\u02c6 \u03b8t \u2212\u03b8) , where N k t := Pt s=1 1 (ks = k). See Appendix C for more details. Lt(\u03b8, \u03bb) is closely related to PK k=1 N k t KLk(\u03b8, \u03bb), a quantity that appears frequently in our results. Indeed, the difference between these quantities is a martingale, which is a lower order term compared to them. The LLR stopping rule was introduced to the bandit literature in [4]. At each step t \u2208N, the algorithm computes the in\ufb01mum LLR to the alternative set of i\u22c6(\u02c6 \u03b8t) and stops if it exceeds a threshold, i.e., if inf \u03bb\u2208\u039b(i\u22c6(\u02c6 \u03b8t)) Lt(\u02c6 \u03b8t, \u03bb) \u2265\u03b2t,\u03b4 , (2) where the function \u03b2t,\u03b4 can vary, notably based on the shape of the alternative sets. The recommendation rule is then \u02c6 \u0131 = i\u22c6(\u02c6 \u03b8t). Informally, the algorithm stops if it has enough information to exclude all points \u03bb for which the answer is not i\u22c6(\u02c6 \u03b8t). This stopping rule enforces \u03b4-correctness, provided that the sampling rule ensures \u03c4 < +\u221ea.s. and that \u03b2t,\u03b4 is properly chosen. The most popular choice is to ensure a concentration property of Lt(\u02c6 \u03b8t, \u03b8). For example, if for all \u03b4, \u03b2t,\u03b4 guarantees that P \u0010 \u2203t \u22651 : Lt(\u02c6 \u03b8t, \u03b8) \u2265\u03b2t,\u03b4 \u0011 \u2264\u03b4, (3) LLR stopping with that threshold returns a wrong answer with probability at most \u03b4. Such concentration bounds can be found in [25, 26] for linear and unstructured bandits, respectively. This LLR stopping rule is used in many algorithms [4, 14\u201316, 24, 17]3. Some of them have been proven to be asymptotically optimal: their sample complexity upper bound matches the lower bound (1) when \u03b4 \u21920. However, improvements are still possible: their sample complexity for moderate \u03b4 may not be optimal and their computational complexity may be reduced, as we will see. 2 Elimination stopping rules for adaptive algorithms We show how to modify the stopping rule of adaptive algorithms using LLR stopping to perform elimination. We assume that the alternatives sets \u039b(i) can be decomposed into a union of sets which we refer to as alternative pieces (or simply pieces), with the property that computing the in\ufb01mum LLR over these sets is computationally easy. Assumption 2.1. For all i \u2208I, there exist pieces (\u039bp(i))p\u2208P(i), where P(i) is a \ufb01nite set of piece indexes, such that \u039b(i) = S p\u2208P(i) \u039bp(i) and inf\u03bb\u2208\u039bp(i) Lt(\u02c6 \u03b8t, \u03bb) can be ef\ufb01ciently computed for all p \u2208P(i) and t > 0. 
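A minimal sketch of the quantities behind this stopping rule in the unit-variance linear-Gaussian case is given below: a running least-squares estimate (with a tiny ridge for early invertibility, an assumption of the sketch), the design matrix V_Nt, the LLR evaluated at the maximum-likelihood estimate, and the projection formula for the infimum over a half-space alternative, which is exactly the kind of efficiently computable piece Assumption 2.1 asks for. The factor 1/2 follows the unit-variance Gaussian convention.

```python
import numpy as np

class LinearGaussianLLR:
    """Running quantities behind the LLR stopping rule in unit-variance linear bandits (a sketch)."""

    def __init__(self, d, ridge=1e-6):
        self.V = ridge * np.eye(d)   # tiny ridge so V_Nt is invertible early on (assumption)
        self.b = np.zeros(d)

    def update(self, phi_k, x):
        """Record one pull of the arm with feature vector phi_k and observed reward x."""
        self.V += np.outer(phi_k, phi_k)
        self.b += x * phi_k

    def theta_hat(self):
        """Maximum-likelihood (least-squares) estimate of theta."""
        return np.linalg.solve(self.V, self.b)

    def llr(self, lam):
        """L_t(theta_hat_t, lam) = 1/2 ||theta_hat_t - lam||^2_{V_Nt}; the martingale
        cross term in the general formula vanishes when evaluated at theta_hat_t."""
        diff = self.theta_hat() - lam
        return 0.5 * diff @ self.V @ diff

    def inf_llr_halfspace(self, a):
        """Infimum of L_t(theta_hat_t, lam) over the half-space {lam : a^T lam >= 0}
        (standard projection formula, with the 1/2 of the unit-variance convention)."""
        theta = self.theta_hat()
        return max(-(a @ theta), 0.0) ** 2 / (2.0 * a @ np.linalg.solve(self.V, a))
```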
3LinGapE [14] does not use LLR stopping explicitly, but its stopping rule is equivalent to it. We can write it as: stop if for all points inside a con\ufb01dence region a gap is small enough, that is if all those points do not belong to the alternative of i\u22c6(\u02c6 \u03b8t). The contrapositive of that statement is exactly LLR stopping. 3 \fThis assumption is satis\ufb01ed in many problems of interest, including BAI, Top-m identi\ufb01cation, and thresholding bandits (see Appendix B). Indeed, in all applications we consider in this paper, the sets of Assumption 2.1 are half-spaces. In our linear BAI example, the piece indexes are simply arms. For i, j \u2208[K] we can de\ufb01ne \u039bj(i) = {\u03bb \u2208M | \u03c6\u22a4 j \u03bb > \u03c6\u22a4 i \u03bb}. Then, \u039b(i) = S j\u2208[K]\\{i} \u039bj(i). Moreover, the in\ufb01mum LLR (and the corresponding minimizer) can be computed in closed form as [e.g., 20] inf\u03bb\u2208\u039bj(i) Lt(\u02c6 \u03b8t, \u03bb) = max{\u02c6 \u03b8T t (\u03c6i \u2212\u03c6j), 0}2/\u2225\u03c6i \u2212\u03c6j\u22252 V \u22121 Nt . Elimination stopping The main idea is that it is not necessary to exclude all \u039bp(i) for p \u2208P(i) at the same time, as LLR stopping (2) does4, in order to know that the algorithm can stop and return answer i. Instead, each piece can be pruned as soon as we have enough information to do so. De\ufb01nition 2.2. A set S \u2286Rd is said to be eliminated at time t if, for all \u03bb \u2208S, Lt(\u02c6 \u03b8t, \u03bb) \u2265\u03b2t,\u03b4. From the concentration property (3), we obtain that the probability that \u03b8 \u2208S and S is eliminated is less than \u03b4. LLR stopping interrupts the algorithm when the alternative set \u039b(i\u22c6(\u02c6 \u03b8t)) can be eliminated. In elimination stopping, we eliminate smaller sets gradually, instead of the whole alternative at once. Formally, let us de\ufb01ne, for all i \u2208I, Pt(i; \u03b2t,\u03b4) = \u001a p \u2208P(i) : inf \u03bb\u2208\u039bp(i) Lt(\u02c6 \u03b8t, \u03bb) < \u03b2t,\u03b4 \u001b (4) as the subset of pieces for answer i \u2208I whose in\ufb01mum LLR at time t is below a threshold \u03b2t,\u03b4. That is, the indexes of pieces that are not eliminated at time t. Moreover, we de\ufb01ne, for all i \u2208I, a set of active pieces Pstp t (i) which is initialized as Pstp 0 (i) = P(i) (all piece indexes). Our selective elimination rule updates, at each time t, only the active pieces of the empirical answer i\u22c6(\u02c6 \u03b8t). That is, for i = i\u22c6(\u02c6 \u03b8t), it sets Pstp t (i) := Pstp t\u22121(i) \u2229Pt(i; \u03b2t,\u03b4), (5) while it sets Pstp t (i) := Pstp t\u22121(i) for all i \u0338= i\u22c6(\u02c6 \u03b8t). One might be wondering why not updating all answers at each round. The main reason is computational: as we better discuss at the end of this section, checking LLR stopping requires one minimization for each piece p \u2208P(i\u22c6(\u02c6 \u03b8t)), while selective elimination requires only one for each active piece p \u2208Pstp t\u22121(i\u22c6(\u02c6 \u03b8t)). Thus, the latter becomes increasingly more computationally ef\ufb01cient as pieces are eliminated. For completeness, we also analyze the variant, that we call full elimination, which updates the active pieces according to (5) for all answers i \u2208I at each round. While we establish slightly better theoretical guarantees for this rule, it is computationally demanding and, as we shall see in our experiments, it does not signi\ufb01cantly improve sample complexity w.r.t. 
selective elimination, which remains our recommended choice. Let \u03c4s.elim = inft\u22651{t | Pstp t (i\u22c6(\u02c6 \u03b8t)) = \u2205} and \u03c4f.elim := inft\u22651{t | \u2203i \u2208I : Pstp t (i) = \u2205} be the stopping times of selective and full elimination, respectively. Intuitively, these two rules stop when one of the updated answers has all its pieces eliminated (and return that answer). We show that, as far as \u03b2t,\u03b4 is chosen to ensure concentration of \u02c6 \u03b8t to \u03b8, those two stopping rules are \u03b4-correct. Lemma 2.3 (\u03b4-correctness). Suppose that \u03b2t,\u03b4 guarantees (3) and that the algorithm veri\ufb01es that, whenever it stops, there exists i\u2205\u2208I such that Pstp \u03c4 (i\u2205) = \u2205and \u02c6 \u0131 = i\u2205. Then, P\u03b8(\u02c6 \u0131 \u0338= i\u22c6(\u03b8)) \u2264\u03b4. All proofs for this section are in Appendix D. If an algorithm veri\ufb01es the conditions of Lemma 2.3 and has a sampling rule that makes it stop almost surely, then it is \u03b4-correct. Interestingly, we can prove a stronger result than \u03b4-correctness: under the same sampling rule, the elimination stopping rules never trigger later than the LLR one almost surely. In other words, any algorithm equipped with elimination stopping suffers a sample complexity that is never worse than the one of the same algorithm equipped with LLR stopping. Let \u03c4llr := inft\u22651{t | inf\u03bb\u2208\u039b(i\u22c6(\u02c6 \u03b8t)) Lt(\u02c6 \u03b8t, \u03bb) \u2265\u03b2t,\u03b4}. Theorem 2.4. For any sampling rule, almost surely \u03c4f.elim \u2264\u03c4s.elim \u2264\u03c4llr . The proof of this theorem is very simple: if \u03c4llr = t, then at t all pieces \u039bp(i\u22c6(\u02c6 \u03b8t)) for p \u2208P(i\u22c6(\u02c6 \u03b8t)) can be eliminated, hence \u03c4s.elim \u2264t. The proof that \u03c4f.elim \u2264\u03c4s.elim follows from the observation 4Under Assumption 2.1, LLR stopping is written as minp\u2208P(i) inf\u03bb\u2208\u039bp(i) Lt(\u02c6 \u03b8t, \u03bb) \u2265\u03b2t,\u03b4 for i = i\u22c6(\u02c6 \u03b8t), which implies that all alternative pieces of answer i are discarded at once. 4 \fthat full elimination always has less active pieces than selective elimination. Note that all three stopping rules must use the same threshold \u03b2t,\u03b4 to be comparable. Although simple, Theorem 2.4 has an important implication: we can take any existing algorithm that uses LLR stopping, equip it with elimination stopping instead, and obtain a new strategy that is never worse in terms of sample complexity and for which the original theoretical results on the stopping time still hold. Finally, it is important to note that, while de\ufb01ning the elimination rule in the general form (5) allows us to unify many settings, storing/iterating over all sets Pstp t (i) would be intractable in problems with large number of answers (e.g., top-m identi\ufb01cation or thresholding bandits, where the latter is exponential in K). Fortunately, we show in Appendix B that this is not needed and ef\ufb01cient implementations exist for these problems that take only polynomial time and memory. 2.1 Elimination time of alternative pieces We now show that elimination stopping can indeed discard certain alternative pieces much earlier that the stopping time. While all results so far hold for any distribution and bandit structure, in the remaining we focus on Gaussian linear bandits. 
Other distribution classes beyond Gaussians could be used with minor modi\ufb01cations (see Appendix C.2) but the Gaussian case simpli\ufb01es the exposition. Since most existing adaptive sampling rules target the optimal proportions from the lower bound of [4], we unify them under the following assumption. Assumption 2.5. Consider the concentration events Et := n \u2200s \u2264t : Ls(\u02c6 \u03b8s, \u03b8) \u2264\u03b2t,1/t2 o . (6) A sampling rule is said to have low information regret if there exists a problem-dependent function R(\u03b8, t) which is sub-linear in t such that for each time t where Et holds, inf \u03bb\u2208\u039b(i\u22c6(\u03b8)) X k\u2208[K] N k t KLk(\u03b8, \u03bb) \u2265tH\u22c6(\u03b8) \u2212R(\u03b8, t). (7) The left-hand side of (7) can be understood as the information collected by the sampling rule at time t to discriminate \u03b8 with all its alternatives. Therefore, Assumption 2.5 requires that information to be comparable (up to a low-order term R(\u03b8, t)) with the maximal one from the lower bound. In Appendix F, we show that this is satis\ufb01ed by both Track-and-Stop [4] and the approach in [15]. Let Hp(\u03c9, \u03b8) := inf\u03bb\u2208\u039bp(i\u22c6(\u03b8)) P k\u2208[K] \u03c9k KLk(\u03b8, \u03bb), the information that sampling with proportions \u03c9 brings to discriminate \u03b8 from the alternative piece \u039bp(i\u22c6(\u03b8)). Note that H\u22c6(\u03b8) = max\u03c9\u2208\u2206K minp\u2208P(i\u22c6(\u03b8)) Hp(\u03c9, \u03b8). For \u03f5 \u2265 0, let \u2126\u03f5(\u03b8) := {\u03c9 \u2208 \u2206K | inf\u03bb\u2208\u039b(i\u22c6(\u03b8)) P k \u03c9k KLk(\u03b8, \u03bb) \u2265H\u22c6(\u03b8) \u2212\u03f5} be the set of \u03f5-optimal proportions. Theorem 2.6 (Piece elimination). The stopping time of any sampling rule having low information regret, combined with LLR stopping, satis\ufb01es E[\u03c4] \u2264\u00af t + 2, where \u00af t is the \ufb01rst integer such that t \u2265 \u0012\u0010p \u03b2t,\u03b4 + q \u03b2t,1/t2 \u00112 + R(\u03b8, t) \u0013 /H\u22c6(\u03b8). (8) When the same sampling rule is combined with elimination stopping, let \u03c4p be the time at which p \u2208P(i\u22c6(\u03b8)) is eliminated. Then, E[\u03c4p] \u2264min{\u00af tp, \u00af t} + 2, where \u00af tp is the \ufb01rst integer such that t \u2265max ( \u0000p \u03b2t,\u03b4 + p\u03b2t,1/t2\u00012 min\u03c9\u2208\u2126R(\u03b8,t)/t(\u03b8) Hp(\u03c9, \u03b8), G(\u03b8, t) ) , (9) with G(\u03b8, t) = 0 for full elimination and G(\u03b8, t) = 4\u03b2t,1/t2+R(\u03b8,t) H\u22c6(\u03b8) for selective elimination. First, the bound we obtain on the elimination time of pieces in P(i\u22c6(\u03b8)) is not worse than the bound we obtain on the stopping time of LLR stopping. Second, with elimination stopping, such eliminations can actually happen much sooner. Intuitively, sampling rules with low information regret play arms with proportions that are close to the optimal ones. If all of such \u201cgood\u201d proportions provide large information for eliminating some piece p \u2208P(i\u22c6(\u03b8)), then p is eliminated much sooner than the actual stopping time (which requires eliminating the worst-case piece in the same set). 5 \fWhile both elimination rules are provably ef\ufb01cient, with full elimination enjoying slighly better guarantees5, selective elimination provably never worsens (and possibly improves) the computational complexity over LLR stopping. 
In all applications we consider, implementing LLR stopping requires one minimization for each of the same alternative pieces we use for elimination stopping. Therefore, the total number of minimizations required by LLR stopping is P\u03c4llr t=1 |P(i\u22c6(\u02c6 \u03b8t))| versus P\u03c4s.elim t=1 |Pstp t (i\u22c6(\u02c6 \u03b8t))| for selective elimination. The second is never larger since \u03c4s.elim \u2264\u03c4llr by Theorem 2.4 and Pstp t (i\u22c6(\u02c6 \u03b8t)) \u2286P(i\u22c6(\u02c6 \u03b8t)) for all t, and much smaller if eliminations happen early, as we shall verify in experiments. In our linear BAI example we need to perform (K \u22121) minimizations at each step, one for each sub-optimal arm, in order to implement LLR stopping. On the other hand, we need only |Pstp t (i\u22c6(\u02c6 \u03b8t))| minimizations with selective elimination, one for each active sub-optimal arm, while full elimination takes P i\u2208[K] |Pstp t (i)| to update all the sets. Note that Theorem 2.6 does not provide a better bound on E[\u03c4] for elimination stopping than for LLR stopping. In fact, when evaluating the bound on E[\u03c4p] for the worst-case piece in p \u2208P(i\u22c6(\u03b8)), we recover the one on E[\u03c4]. This is intuitive since the sampling rule is playing proportions that try to eliminate all alternative pieces at once. The following result formalizes this intuition. Theorem 2.7. Suppose that we can write \u03b2t,\u03b4 = log 1 \u03b4 + \u03be(t, \u03b4) with lim\u03b4\u21920 \u03be(t, \u03b4)/ log(1/\u03b4) = 0. Then for any sampling rule that satis\ufb01es Assumption 2.5, E[\u03c4llr] \u2264E[\u03c4elim] + f(\u03b8, \u03b4) . with lim\u03b4\u21920 f(\u03b8, \u03b4)/ log(1/\u03b4) = 0. Here \u03c4elim can stand for either full or selective elimination. See Appendix D.4 for f. This result shows that when the sampling rule is tailored to the LLR stopping rule, the expected LLR and elimination stopping times differ by at most low-order (in log(1/\u03b4)) terms. As \u03b4 \u21920 the two expected stopping times converge to the same value H\u22c6(\u03b8)\u22121 log(1/\u03b4), which is the asymptotically-optimal sample complexity prescribed by the lower bound (1). We showed that, for both elimination rules, some pieces of the alternative are discarded sooner than the stopping time, and that the overall sample complexity of the method can only improve over LLR stopping. However, since the sampling rule of the algorithm was not changed, elimination does not change the computational cost of each sampling step, only the cost of checking the stopping rule. 2.2 An example We compare LLR and elimination stopping on a simple example so as to better quantify the elimination times of Theorem 2.6 and their computational impact (see Appendix H.1 for a full discussion). 1 1 1 \u2212\u03f5 \u03b8 \u03d51 \u03d52 \u03d53 \u03d54 \u03d55 H2 \u0000\u03c9\u22c6, \u03b8\u0001 = \u03f52/8 Hk \u0000\u03c9\u22c6, \u03b8\u0001 \u22651/32 Figure 1: Example of BAI instance with d = 2 and K = 5. Consider BAI in a Gaussian linear bandit instance with unit variance, d = 2, and arbitrary number of arms K \u22653 (see Figure 1). The arm features are \u03c61 = (1, 0)T , \u03c62 = (0, 1)T , and, for all i = 3, . . . , K, \u03c6i = (ai, bi)T with ai, bi arbitrary values in (\u22121, 0) such that \u2225\u03c6i\u22252 = 1. The true parameter is \u03b8 = (1, 1\u2212\u03b5)T , for \u03b5 \u2208(0, 1/2) a possibly very small value. 
Arm 1 is optimal with mean \u00b51(\u03b8) = 1, while arm 2 is sub-optimal with mean \u00b52(\u03b8) = 1 \u2212\u03b5. For all other arms i = 3, . . . , K, \u00b5i(\u03b8) \u22640. Let \u03c9 \u2208\u2206K be any allocation. Recall that in BAI each piece index is simply an arm, and P(i\u22c6(\u03b8)) = P(1) = {2, . . . , K}. Let k \u2208 P(1) be any sub-optimal arm. The distance to the k-th alternative piece Hk(\u03c9, \u03b8) can be computed in closed form as Hk(\u03c9, \u03b8) = ((\u03c61 \u2212\u03c6k)T \u03b8)2/(2\u2225\u03c61 \u2212\u03c6k\u22252 V \u22121 \u03c9 ). The optimal allocation is \u03c9\u22c6= arg max\u03c9 mink Hk(\u03c9, \u03b8) = (1/2, 1/2, 0, . . . , 0)\u22a4. The intuition why this example is interesting is as follows. Any correct strategy is required to discriminate between arm 1 and 2 (i.e., to \ufb01gure out that arm 1 is optimal), which requires roughly O(1/\u03b52) samples from both. An optimal strategy plays these two arms nearly with the same proportions. Since \u03c61 and \u03c62 form the canonical basis of R2, the samples collected by this strategy are informative for estimating the 5Note that G(\u03b8, t) for selective elimination contributes only a \ufb01nite (in \u03b4) sample complexity. 6 \fmean reward of every arm, even those than are not played. Then, since arms 3, . . . , K have at least a sub-optimality gap of 1, an elimination-based strategy discards them with a number of samples not scaling with 1/\u03b52. This means that a non-elimination strategy runs for O(1/\u03b52) steps over the original problem with K arms, while an elimination-based one quickly reduces the problem to one with only 2 arms. The main impact is computational: since most algorithms need to compute some statistics for each active arm at each round (e.g., closest alternatives, con\ufb01dence intervals, etc.), the computational complexity of a non-elimination algorithm is at least O(K/\u03b52), while the one of an elimination-based variant is roughly O(K + 1/\u03b52), a potentially very large improvement. We now quantify the elimination times and computational complexity on this example. Since such quantities depend on the speci\ufb01c sampling rule, we do it for an oracle strategy that samples according to \u03c9\u22c6. Similar results can be derived for any low information regret sampling rule (see Appendix H.1). Proposition 2.8. For any K \u22653 and \u03b5 \u2208(0, 1/2), for any \u03b4 \u2208(0, 1), the oracle strategy combined with LLR stopping satis\ufb01es on the example instance E[\u03c4] \u2265\u2126 \u0012log(1/\u03b4) \u03b52 \u0013 . On the same instance, for the oracle strategy with elimination at stopping and a threshold \u03b2t,\u03b4 = log(1/\u03b4) + O(log(t)), the expected elimination time of any piece (i.e., arm) k \u22653 is E[\u03c4k] \u2264e O(log(1/\u03b4)) for full elimination, E[\u03c4k] \u2264e O \u0012 log(1/\u03b4) + 1 \u03b52 \u0013 for selective elimination. Moreover, the expected per-round computation time of the oracle strategy with LLR stopping is \u2126(K), while it is at most O(K2\u03b52) for full elimination and O(K\u03b52 +K/ log(1/\u03b4)) for selective elimination. 3 Elimination at sampling We show how to adapt sampling rules in order to accommodate piece elimination. 
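(Before detailing this, the quantities quoted for the example of Section 2.2 are easy to check numerically with the closed form for H_k(omega, theta); the features of the third arm below are one arbitrary choice satisfying the stated constraints, and the printed values should equal eps**2 / 8 and exceed 1/32, respectively.)

```python
import numpy as np

def H_k(w, theta, phi, k, i_star=0):
    """H_k(w, theta) = ((phi_1 - phi_k)^T theta)^2 / (2 ||phi_1 - phi_k||^2_{V_w^{-1}})."""
    V_w = sum(w[j] * np.outer(phi[j], phi[j]) for j in range(len(w)))
    diff = phi[i_star] - phi[k]
    return (diff @ theta) ** 2 / (2.0 * diff @ np.linalg.inv(V_w) @ diff)

eps = 0.1
# phi_1, phi_2 are the canonical basis; the third arm is one unit-norm vector with
# both coordinates in (-1, 0), as the example requires (arbitrary illustrative choice).
phi = np.array([[1.0, 0.0], [0.0, 1.0], [-0.6, -0.8]])
theta = np.array([1.0, 1.0 - eps])
w_star = np.array([0.5, 0.5, 0.0])

print(H_k(w_star, theta, phi, k=1))   # eps**2 / 8: the hardest piece (arm 2)
print(H_k(w_star, theta, phi, k=2))   # well above 1/32 for any such third arm
```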
There are two reasons for doing this: \ufb01rst, adapting the sampling to ignore pieces that have been discarded could reduce the sample complexity; second, the amount of computations needed to update the sampling strategy is often proportional to the number of pieces and decreasing it can reduce the overall time. We start from an algorithm using LLR stopping, for which we change the stopping rule as above. The sampling strategies that we can adapt are those that aggregate information from each alternative piece. For example, in linear BAI, methods that mimic the lower bound allocation (1), like Track-and-Stop [4], LinGame [16], or FWS [17], and even LinGapE [14], all compute distances or closest points to each piece in the decomposition {\u03bb | \u03c6\u22a4 j \u03bb \u2265\u03c6\u22a4 i\u22c6(\u02c6 \u03b8t)\u03bb}. Eliminating pieces at sampling simply means omitting from such computations the arms that were deemed sub-optimal. Algorithm 1 shows how Track-and-Stop [4] can be modi\ufb01ed to incorporate elimination at sampling and stopping. Algorithm 1 Track-and-Stop [4]: vanilla (left) and with selective elimination (right) while not stopped do for p \u2208P(i\u22c6(\u02c6 \u03b8t)) do \u25b7stopping Lp,t = inf\u03bb\u2208\u039bp(i\u22c6(\u02c6 \u03b8t)) Lt(\u02c6 \u03b8t, \u03bb) end for if \u2200p \u2208P(i\u22c6(\u02c6 \u03b8t)) : Lp,t > \u03b2t,\u03b4 then STOP wt = arg max\u03c9 minp\u2208P(i\u22c6(\u02c6 \u03b8t)) Hp(\u03c9, \u02c6 \u03b8t) if \u2203k : N k t < \u221a t pull kt+1 = arg mink N k t else pull kt+1 = arg mink(N k t \u2212twk t ) end while while not stopped do Set Pstp t (i\u22c6(\u02c6 \u03b8t)) = Pstp t\u22121(i\u22c6(\u02c6 \u03b8t)) for p \u2208Pstp t\u22121(i\u22c6(\u02c6 \u03b8t)) do \u25b7stopping Lp,t = inf\u03bb\u2208\u039bp(i\u22c6(\u02c6 \u03b8t)) Lt(\u02c6 \u03b8t, \u03bb) if Lp,t > \u03b2t,\u03b4 delete p from Pstp t (i\u22c6(\u02c6 \u03b8t)) end for if Pstp t (i\u22c6(\u02c6 \u03b8t)) = \u2205then STOP wt = arg max\u03c9 minp\u2208Psmp t (i\u22c6(\u02c6 \u03b8t)) Hp(\u03c9, \u02c6 \u03b8t) if \u2203k : N k t < \u221a t pull kt+1 = arg mink N k t else pull kt+1 = arg mink(N k t \u2212twk t ) Update Psmp t+1 (i\u22c6(\u02c6 \u03b8t)) (Algorithm 3) end while Similarly to elimination stopping, the idea is to maintain sets of active pieces at sampling Psmp t (i) for each i \u2208I. Note that these are different from the ones introduced in Section 2 for the stopping 7 \frule. The set is updated at each step like we did for the stopping sets, but with a different threshold \u03b1t,\u03b4 (see Appendix E for details). Additionally, we reset it very infrequently at steps t \u2208{\u00af t2j 0 }j\u22650, where \u00af t0 \u22652. Formally, let us de\ufb01ne the helper sets \u02dc Psmp t (i) as \u02dc Psmp 0 (i) := P(i) and \u02dc Psmp t (i) := ( \u02dc Psmp t\u22121 (i) \u2229Pt(i; \u03b1t,\u03b4) if t / \u2208{\u00af t2j 0 }j\u22650 Pt(i; \u03b1t,\u03b4) otherwise, where Pt was de\ufb01ned in (4). Let tj := \u00af t2j 0 be the time step at which the j-th reset is performed and j(t) := \u230alog2 log\u00af t0 t\u230bbe the index of the last reset before t. We de\ufb01ne Psmp t (i) := \u02dc Psmp t (i) \u2229 \u02dc Psmp tj(t)\u22121(i), such that Psmp t (i) is the intersection of all active pieces from the second-last reset up to t, i.e., Psmp t (i) = Tt s=tj(t)\u22121 Ps(i; \u03b1s,\u03b4). Since the resets are very infrequent, this de\ufb01nition only drops a small number of rounds from the intersection (less than \u221a t). 
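The following sketch puts these pieces together for linear BAI, in the spirit of the right column of Algorithm 1 extended with the sampling-side sets just defined: stopping-side pieces are pruned selectively with threshold beta, sampling-side pieces with a separate threshold alpha and the doubly-exponential reset schedule, and the max-min weight computation is delegated to an assumed oracle. The thresholds, the tiny ridge, and the fallback used when the sampling set happens to be empty are illustrative simplifications of Algorithm 3's bookkeeping.

```python
import numpy as np

def tas_with_elimination(pull, phi, beta, alpha, oracle_weights, t0=2, T_max=100000):
    """Sketch of Track-and-Stop with selective elimination at stopping and sampling
    for linear BAI with unit-variance Gaussian rewards.

    pull(k) is assumed to return one reward of arm k; phi is the (K, d) feature
    matrix; beta(t) and alpha(t) are the stopping and sampling thresholds;
    oracle_weights(theta_hat, pieces) is assumed to solve
    argmax_w min_{j in pieces} H_j(w, theta_hat). The factor 1/2 in the
    closed-form infimum LLR is the unit-variance convention.
    """
    K, d = phi.shape
    V, b, N = 1e-6 * np.eye(d), np.zeros(d), np.zeros(K)   # tiny ridge: an assumption
    P_stp = {i: set(range(K)) - {i} for i in range(K)}      # stopping-side active pieces
    helper = {i: set(range(K)) - {i} for i in range(K)}     # sampling-side helper sets
    snapshot = {i: set(range(K)) - {i} for i in range(K)}   # helper just before the last reset
    resets = {t0 ** (2 ** j) for j in range(6)}             # enough resets for any practical run
    for t in range(1, T_max + 1):
        theta_hat = np.linalg.solve(V, b)
        i_hat = int(np.argmax(phi @ theta_hat))
        V_inv = np.linalg.inv(V)

        def inf_llr(i, j):
            diff = phi[i] - phi[j]
            return max(diff @ theta_hat, 0.0) ** 2 / (2.0 * diff @ V_inv @ diff)

        # Selective elimination at stopping: prune only the empirical answer's pieces.
        P_stp[i_hat] = {j for j in P_stp[i_hat] if inf_llr(i_hat, j) < beta(t)}
        if not P_stp[i_hat]:
            return i_hat, t                                  # stop and recommend i_hat
        # Sampling-side sets: survivors under threshold alpha, with infrequent resets.
        survivors = {j for j in range(K) if j != i_hat and inf_llr(i_hat, j) < alpha(t)}
        if t in resets:
            snapshot[i_hat] = helper[i_hat]
            helper[i_hat] = survivors
        else:
            helper[i_hat] &= survivors
        P_smp = helper[i_hat] & snapshot[i_hat]
        # Sampling: track oracle weights restricted to the active sampling pieces
        # (fall back to all sub-optimal arms if the set is momentarily empty).
        w = oracle_weights(theta_hat, P_smp if P_smp else {j for j in range(K) if j != i_hat})
        k = int(np.argmin(N)) if N.min() < np.sqrt(t) else int(np.argmin(N - t * w))
        x = pull(k)
        V += np.outer(phi[k], phi[k])
        b += x * phi[k]
        N[k] += 1
    return int(np.argmax(phi @ np.linalg.solve(V, b))), T_max
```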
The detailed procedure to update these sets is summarized in Algorithm 3. As before, we can instantiate both selective and full elimination. The reason for the resets is two-fold. First, they ensure that the algorithm stops almost surely as required by De\ufb01nition 1.1. In fact, without resets, it might happen with some small (less than \u03b4) probability that pieces containing the true parameter are eliminated, in which case the sampling rule could diverge. Second, they guarantee that the thresholds (\u03b1s,\u03b4)t s=tj(t)\u22121 used in Psmp t (i) are within a constant factor of each other. This is crucial to relate the LLR of active pieces at different times. 3.1 Properties We consider a counterpart of Assumption 2.5 for sampling rules combined with piece elimination. Assumption 3.1. There exists a sub-linear (in t) problem-dependent function R(\u03b8, t) such that, for each time t where Et (de\ufb01ned in Equation 6) holds, min p\u2208Psmp t (i\u22c6(\u03b8)) Hp(Nt, \u03b8) \u2265max \u03c9\u2208\u2206K t X s=1 min p\u2208Psmp s\u22121(i\u22c6(\u03b8)) Hp(\u03c9, \u03b8)\u2212R(\u03b8, t). Intuitively, the sampling rule maximizes the information for discriminating \u03b8 with all its alternatives from the sequence of active pieces (Psmp s\u22121(i\u22c6(\u03b8)))t s=1. We prove in Appendix F that the algorithms for which we proved Assumption 2.5 also satisfy Assumption 3.1 when their sampling rules are combined with either full or selective elimination. Theorem 3.2. Consider a sampling rule that veri\ufb01es Assumption 3.1 and uses either full or selective elimination with the sets Psmp t . Then, Assumption 2.5 holds as well. Moreover, when using the same elimination rule at stopping, such a sampling rule veri\ufb01es Theorem 2.6, i.e., it enjoys the same guarantees as without elimination at sampling. The proof is in Appendix E. Theorem 3.2 shows that for an algorithm using elimination at sampling and stopping, we get bounds on the times at which pieces of \u039b(i\u22c6(\u03b8)) are discarded from the stopping rule which are not worse than those we obtained for the same algorithm without elimination at sampling. This result is non-trivial. We know that the sampling rule collects information to discriminate \u03b8 with its closest alternatives, and eliminating a piece cannot make the resulting \u201coptimal\u201d proportions worse at this task. However, it could make them worse at discriminating \u03b8 with alternatives that are not the closest. This would imply that the elimination times for certain pieces could actually increase w.r.t. not eliminating at sampling. Theorem 3.2 guarantees that this does not happen: eliminating pieces at sampling cannot worsen our guarantees. We shall see in our experiments that eliminating pieces in both the sampling and stopping rules often yields improved sample complexity. 4 Experiments Our experiments aim at addressing the following questions: (1) how do existing adaptive strategies behave when combined with elimination at stopping and (when possible) at sampling? How do they compare with native elimination-based methods? (2) What is the difference between selective and full elimination? (3) How do LLR and elimination stopping compare as a function of \u03b4?6 6Our code is available at https://github.com/AndreaTirinzoni/bandit-elimination. 8 \fFigure 2: Experiments on linear instances with K = 50, d = 10, averaged over 100 runs, with the right plot showing standard deviations. 
(left) How different adaptive algorithms eliminate arms in BAI when using elimination stopping. (middle) LinGame on BAI when combined with full and selective elimination rules, either only at stopping or both at stopping and at sampling. (right) Ratio between the LLR and elimination stopping times of different algorithms as a function of log(1/\u03b4). We ran experiments on two bandit structures: linear (where d < K) and unstructured (where K = d and the arms are the canonical basis of Rd). For each of them, we considered 3 pure exploration problems: BAI, Top-m, and online sign identi\ufb01cation (OSI) [9, 27], also called thresholding bandits. All experiments use \u03b4 = 0.01 and are averaged over 100 runs. We combined adaptive algorithms which are natively based on LLR stopping with our elimination stopping rules and, whenever possible, we extended their sampling rule to use elimination. The selected baselines are the following. For linear BAI, LinGapE [14], LinGame [16], Frank-Wolfe Sampling (FWS) [17], Lazy Track-and-Stop (TaS) [24], XY-Adaptive [19], and RAGE [20] (the latter two are natively elimination based). For linear Top-m, m-LinGapE [28], MisLid [29], FWS, Lazy TaS7, and LinGIFA [28]. For linear OSI, LinGapE8, LinGame, FWS, and Lazy TaS. For unstructured instances linear algorithms are still applicable, and we further implemented LUCB [11], UGapE [8], and the Racing algorithm [18] for BAI and Top-m. We also tested an \u201coracle\u201d sampling rule which uses the optimal proportions from the lower bound. Due to space constraints, we present only the results on linear structures. Those on unstructured problems can be found in Appendix G. The \ufb01rst experiments use randomly generated instances with K = 50 arms and dimension d = 10. Comparison of elimination times. We analyze how different adaptive algorithms eliminate pieces when combined with selective elimination at stopping. To this purpose we focus on BAI, where the sets of pieces can be conveniently reduced to a set of active arms, those that are still likely to be the optimal one. Figure 2(left) shows how the set of active arms evolves over time for the 5 adaptive baselines. Notably, many arms are eliminated very quickly, with most baselines able to halve the set of active arms in the \ufb01rst 3000 steps. The problem size is quickly reduced over time. As we shall see in the last experiment, this will yield signi\ufb01cant computational gains. We further note that the \u201coracle\u201d strategy, which plays \ufb01xed proportions, seems the slowest at eliminating arms. The reason is that the optimal proportions from the lower bound focus on discriminating the \u201chardest\u201d arms, while the extra randomization in adaptive rules might indeed eliminate certain \u201ceasier\u201d arms sooner. Full versus selective elimination. We combine the different algorithms with full and selective elimination, both at sampling and stopping. Due to space constrains, Figure 2(middle) shows the results only for LinGame (see Appendix G for the others). We note that full elimination seems faster at discarding arms in earlier steps, as we would expect theoretically. However, it never stops earlier than its selective counterpart. Moreover, its computational overhead is not advantageous. Overall, we concluded that our selective elimination rule is the best choice and we shall thus focus on it in the remaining. 
Finally, we remark that combining the sampling rule with elimination (no matter of what type) seems to discard arms faster in later steps, and could eventually make the algorithm stop sooner. LLR versus elimination stopping. We now compare LLR and elimination stopping as a function of \u03b4. We know from theory that both stopping rules allow to achieve asymptotic optimality. Hence for asymptotically optimal sampling rules the resulting stopping times with LLR and elimination should tend to the same quantity as \u03b4 \u21920. Figure 2(right), where we report the ratio between the LLR stopping time and the elimination one for different algorithms, con\ufb01rms that this is the case. Some algorithms (LinGapE and FWS) seem to bene\ufb01t less from elimination stopping than the others, 7Lazy TaS, while analyzed only for BAI, can be applied to any problem since it is a variant of Track-and-Stop. 8LinGapE was originally proposed only for BAI in [14], but its extension to OSI is trivial. 9 \fNo elimination (LLR) Elim. stopping Elim. stopping + sampling Algorithm Samples Time Samples Time Samples Time BAI LinGapE 33.19 \u00b1 8.7 0.23 33.11 \u00b1 8.7 0.2 29.89 \u00b1 8.6 0.18(\u221222%) LinGame 45.34 \u00b1 14.2 0.23 43.67 \u00b1 13.4 0.21 32.49 \u00b1 8.1 0.18(\u221222%) FWS 42.26 \u00b1 60.1 0.73 42.25 \u00b1 60.1 0.7 32.62 \u00b1 18.0 0.45(\u221238%) Lazy TaS 76.33 \u00b1 65.8 0.15 74.08 \u00b1 65.8 0.13 64.48 \u00b1 81.8 0.12(\u221220%) Oracle 56.36 \u00b1 9.1 0.05 55.36 \u00b1 9.3 0.02 XY-Adaptive 87.08 \u00b1 29.1 0.44 RAGE 106.87 \u00b1 30.7 0.02 Top-m (m = 5) m-LinGapE 63.69 \u00b1 11.1 0.56 63.48 \u00b1 11.0 0.41 59.57 \u00b1 9.4 0.24(\u221257%) MisLid 87.77 \u00b1 20.4 0.55 85.95 \u00b1 20.5 0.4 69.58 \u00b1 16.0 0.25(\u221255%) FWS 78.28 \u00b1 65.0 3.0 78.23 \u00b1 65.0 2.85 77.79 \u00b1 65.0 0.97(\u221267%) Lazy TaS 161.43 \u00b1 96.9 0.57 159.86 \u00b1 96.9 0.43 146.06 \u00b1 82.6 0.36(\u221236%) Oracle 102.45 \u00b1 16.1 0.2 101.53 \u00b1 16.4 0.08 LinGIFA 58.31 \u00b1 10.8 2.46 58.31 \u00b1 10.8 2.33 OSI LinGapE 17.31 \u00b1 2.3 0.22 17.29 \u00b1 2.2 0.19 14.71 \u00b1 2.0 0.17(\u221223%) LinGame 23.77 \u00b1 4.1 0.25 23.05 \u00b1 3.9 0.21 14.87 \u00b1 2.0 0.19(\u221224%) FWS 15.26 \u00b1 2.0 0.83 15.24 \u00b1 2.0 0.81 14.99 \u00b1 2.1 0.56(\u221232%) Lazy TaS 35.11 \u00b1 10.2 0.32 33.98 \u00b1 9.7 0.3 23.51 \u00b1 5.6 0.24(\u221225%) Oracle 29.1 \u00b1 4.8 0.06 28.65 \u00b1 5.0 0.03 Table 1: Experiments on linear instances with K = 50, d = 20. The \"Time\" columns report average times per iteration in milliseconds. The percentage in the last column is the change w.r.t. the time without elimination. Each entry reports the mean across 100 runs plus/minus standard deviation (which is omitted for compute times due to space constraints). Algorithms for which the third column is missing cannot be combined with elimination at sampling, while algorithms for which the \ufb01rst two columns are missing are natively elimination-based. Samples are scaled down by a factor 103. i.e., they achieve smaller ratios of stopping times. We believe this to be a consequence of their mostly \u201cgreedy\u201d nature, while the extra randomization of the other algorithms might help in this aspect. Sample complexities and computation times. We \ufb01nally compare our baselines in all three exploration tasks, in terms of sample complexity and computation time. For this experiment, we selected a larger linear instance with K = 50 and d = 20, randomly generated (see the protocol in Appendix G). 
From the results in Table 1, we highlight three points. (1) The computation times of all adaptive algorithms decrease when using selective elimination stopping instead of LLR and further decrease when also using elimination at sampling. In the case of Top-m (i.e., the hardest combinatorial problem), most adaptive algorithms become at least twice faster with elimination at stopping and sampling instead of LLR. (2) Elimination at sampling improves the sample complexity of all algorithms. (3) For BAI, the natively elimination-based algorithm RAGE, which updates its strategy infrequently, is the fastest in terms of computation time but the slowest in terms of samples. Adaptive algorithms using elimination achieve run times that are within an order of magnitude of those of RAGE, while outperforming it in terms of sample complexity by a factor 2 to 3. 5 Conclusion We proposed a selective elimination rule, which successively prunes the pieces of the empirical answer, that can be easily combined with existing adaptive algorithms for general identi\ufb01cation problems. We proved that it reduces their computational complexity, it never worsens their sample complexity guarantees, and it provably discards certain answers early. Our experiments on different pure exploration problems and bandit structures show that existing adaptive algorithms often bene\ufb01t from a reduced sample complexity when combined with selective elimination, while achieving signi\ufb01cant gains in computation time. Moreover, they show that selective elimination is overall better (in terms of samples vs time) than its full variant which repeatedly updates the pieces of all answers. Interesting directions for future work include investigating whether better guarantees on the stopping time can be derived for algorithms combined with elimination as compared to their LLR counterparts, and designing adaptive algorithms which are speci\ufb01cally tailored for elimination. 10" + }, + { + "url": "http://arxiv.org/abs/2203.09251v3", + "title": "Near Instance-Optimal PAC Reinforcement Learning for Deterministic MDPs", + "abstract": "In probably approximately correct (PAC) reinforcement learning (RL), an agent\nis required to identify an $\\epsilon$-optimal policy with probability\n$1-\\delta$. While minimax optimal algorithms exist for this problem, its\ninstance-dependent complexity remains elusive in episodic Markov decision\nprocesses (MDPs). In this paper, we propose the first nearly matching (up to a\nhorizon squared factor and logarithmic terms) upper and lower bounds on the\nsample complexity of PAC RL in deterministic episodic MDPs with finite state\nand action spaces. In particular, our bounds feature a new notion of\nsub-optimality gap for state-action pairs that we call the deterministic return\ngap. While our instance-dependent lower bound is written as a linear program,\nour algorithms are very simple and do not require solving such an optimization\nproblem during learning. 
Their design and analyses employ novel ideas,\nincluding graph-theoretical concepts (minimum flows) and a new maximum-coverage\nexploration strategy.", + "authors": "Andrea Tirinzoni, Aymen Al-Marjani, Emilie Kaufmann", + "published": "2022-03-17", + "updated": "2022-10-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction In reinforcement learning [RL, 1], an agent interacts with an environment modeled as a Markov decision process (MDP) by sequentially selecting actions and receiving feedback in the form of reward signals. Depending on the application, the agent may seek to maximize the cumulative rewards received during learning (which is typically phrased as a regret minimization problem) or to minimize the number of learning interactions (i.e., the sample complexity) for identifying a near-optimal policy. The latter pure exploration problem was introduced in [2] under the name of Probably Approximately Correct (PAC) RL: given two parameters \u03b5, \u03b4 > 0, the agent must return a policy that is \u03b5-optimal with probability at least 1 \u2212\u03b4. Our work focuses on this problem in the context of episodic (a.k.a. \ufb01nite-horizon) tabular MDPs. The PAC RL problem has been mostly studied under the lens of minimax (or worst-case) optimality. In the episodic setting, the algorithm proposed in [3] has sample complexity bounded by O(SAH2 log(1/\u03b4)/\u03b52) for an MDP with S states, A actions, horizon H, and time-homogeneous transitions and rewards (i.e., not depending on the stage). This is minimax optimal for such a context [4]. Similarly, in [5] the authors designed a strategy with O(SAH3 log(1/\u03b4)/\u03b52) complexity in time-inhomogeneous MDPs, which was later shown to be minimax optimal [6]. \u2217Work done while at Inria Lille. 36th Conference on Neural Information Processing Systems (NeurIPS 2022). arXiv:2203.09251v3 [cs.LG] 24 Oct 2022 \fWhile the minimax framework provides a strong notion of statistical optimality, it does not account for one of the most desirable properties for an RL algorithm: the ability to adapt to the dif\ufb01culty of the MDP instance. For this reason, researchers recently started to investigate the instance-dependent complexity of PAC RL. Earlier attempts were made in the simpli\ufb01ed setting where the agent has access to a generative model (i.e., it can query observations from any state-action pair using a simulator) in \u03b3-discounted in\ufb01nite-horizon MDPs [7, 8]. The online setting, where the agent can only sample trajectories from the environment, has been studied in [9] for discounted MDPs and in [10] for episodic time-inhomogeneous MDPs. All these works derive sample complexity bounds that scale with certain gaps between optimal value functions. For instance, in the episodic setting, the value gap \u2206h(s, a) := V \u22c6 h (s) \u2212Q\u22c6 h(s, a)2 intuitively characterizes the degree of sub-optimality of action a for state s at stage h. Unfortunately, these bounds are known to be sub-optimal and how to achieve instance optimality remains one of the main open questions. In fact, recent works on regret minimization [11, 12] showed that value gaps are often overly conservative, and the same holds for PAC RL. We refer the reader to Appendix A for a deeper discussion on problem-dependent results in RL and the review of other related PAC learning frameworks. 
The main challenge towards instance optimality is that existing lower bounds for exploration problems in MDPs [8, 11, 9, 12] are written in terms of non-convex optimization problems. Their \u201cimplicit\u201d form makes it hard to understand the actual complexity of the setting and, thus, to design optimal algorithms. Existing solutions either derive explicit suf\ufb01cient complexity measures that inspire algorithmic design [10], or solve (a relaxation of) the optimization problem from the lower bound using the empirical MDP as a proxy for the unknown MDP [9]. The latter extends the Track-and-Stop idea originally proposed in [13] for bandits (H = 1), and requires in particular a large amount of forced exploration. Both solutions have limitations. On the one hand, it is not clear if and how such suf\ufb01cient complexity measures or relaxations are related to an actual lower bound. On the other hand, strategies solving a black-box optimization problem to \ufb01nd an optimal exploration strategy are typically very inef\ufb01cient and often come with either only asymptotic (\u03b4 \u21920) guarantees or with poor (far from minimax optimal) sample complexity in the regime of moderate \u03b4. Contributions This paper presents a complete study of PAC RL in tabular deterministic episodic MDPs with time-inhomogeneous transitions, a sub-class of stochastic MDPs where state transitions are deterministic and the agent observes stochastic rewards from unknown distributions. Our \ufb01rst contribution is an instance-dependent lower bound on the sample complexity of any PAC algorithm. We show that the number of visits n\u03c4 h(s, a) to any state-action-stage triplet (s, a, h) at the stopping time \u03c4 satis\ufb01es E[n\u03c4 h(s, a)] \u2273 log(1/\u03b4) max(\u2206h(s, a), \u03b5)2 , (1) where \u2206h(s, a) := V \u22c6 1 \u2212max\u03c0\u2208\u03a0s,a,h V \u03c0 1 , with V \u03c0 1 the expected return of policy \u03c0, V \u22c6 1 the optimal expected return, and \u03a0s,a,h the set of all deterministic policies that visit (s, a) at stage h. We call these quantities the deterministic return gaps due to their closeness with the return gaps introduced in [12] for general MDPs. In deterministic MDPs, the deterministic return gaps are actually H times larger than the return gaps and they are never smaller than value gaps. Our lower bound on the sample complexity \u03c4 is then the value of a minimum \ufb02ow with local lower bounds (1), i.e., roughly the minimum number of policies that must be played to ensure (1) for all (s, a, h). To our knowledge, this is the \ufb01rst instance-dependent lower bound for the PAC setting in episodic MDPs. On the algorithmic side, we design EPRL, a generic elimination-based method for PAC RL, and couple it with a novel adaptive sampling rule called maximum-coverage sampling. The latter is a simple strategy which does not require solving the optimization problem from the lower bound at learning time in a Track-and-Stop fashion. Instead, it greedily selects the policy that maximizes the number of visited under-sampled triplets (s, a, h), i.e. those having received the least amount of visits so far. We prove that EPRL is (\u03b5, \u03b4)-correct under any sampling rule. Moreover, we show that the sample complexity of EPRL with max-coverage sampling matches our instance-dependent lower bound up to logarithmic factors and a multiplicative O(H2) term, while also being minimax optimal. 
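To make the notion of deterministic return gap concrete before the formal development, here is a small sketch that computes it on a toy instance: in a deterministic MDP, the best return among policies forced through (s, a, h) splits into the best reward collected before reaching s at stage h (a forward recursion) plus r_h(s, a) and the optimal continuation from f_h(s, a) (a backward recursion). The array layout, the random instance, and the convention that the initial state has index 0 are assumptions of this sketch.

```python
import numpy as np

def deterministic_return_gaps(f, r):
    """Deterministic return gaps
        Delta_bar_h(s,a) = V*_1(s_1) - max_{pi in Pi_{s,a,h}} V^pi_1(s_1)
    for a deterministic finite-horizon MDP given by f[h,s,a] (next state) and
    r[h,s,a] (mean reward), with initial state index 0 (assumption of this
    sketch). Triplets unreachable from s_1 get an infinite gap."""
    H, S, A = r.shape
    # Backward pass: optimal continuation values V*_h(s).
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):
        V[h] = (r[h] + V[h + 1][f[h]]).max(axis=1)
    # Forward pass: G_h(s) = best reward collected before reaching s at stage h.
    G = np.full((H, S), -np.inf)
    G[0, 0] = 0.0
    for h in range(H - 1):
        for s in range(S):
            if G[h, s] > -np.inf:
                for a in range(A):
                    ns = f[h, s, a]
                    G[h + 1, ns] = max(G[h + 1, ns], G[h, s] + r[h, s, a])
    # Best return of any policy visiting (s, a) at stage h, and the gaps.
    best_through = G[:, :, None] + r + V[1:][np.arange(H)[:, None, None], f]
    return V[0, 0] - best_through

# toy instance (sizes and values are arbitrary, for illustration only)
rng = np.random.default_rng(1)
S, A, H = 4, 2, 3
f = rng.integers(0, S, size=(H, S, A))
r = rng.uniform(0.0, 1.0, size=(H, S, A))
gaps = deterministic_return_gaps(f, r)
print("deterministic return gaps at stage 1:\n", np.round(gaps[0], 3))
print("each stage has a zero-gap (optimal) triplet:",
      bool(np.all(np.isclose(gaps.min(axis=(1, 2)), 0.0))))
```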
Finally, we perform numerical simulations on random deterministic MDPs which reveal that EPRL can indeed improve over existing minimax-optimal algorithms tailored for the deterministic case. 2V \u22c6and Q\u22c6respectively denote the optimal value and action-value functions, that are de\ufb01ned in Section 2. 2 \f2 Preliminaries Let M := (S, A, {fh, \u03bdh}h\u2208[H], s1, H) be a deterministic time-inhomogeneous \ufb01nite-horizon MDP, where S is a \ufb01nite set of S states, A is a \ufb01nite set of A actions, fh : S \u00d7 A \u2192S and \u03bdh : S \u00d7 A \u2192 P(R) are respectively the transition function and the reward distribution at stage h \u2208[H], s1 \u2208S is the unique initial state, and H is the horizon. Without loss of generality, we assume that, at each stage h \u2208[H] and state s \u2208S, only a subset Ah(s) \u2286A of actions is available. We denote by rh(s, a) := Ex\u223c\u03bdh(s,a)[x] the expected reward after taking action a in state s at stage h. A deterministic policy \u03c0 = {\u03c0h}h\u2208[H] is a sequence of mappings \u03c0h : S \u2192A. We let \u03a0 := {\u03c0 | \u2200h \u2208[H], s \u2208S : \u03c0h(s) \u2208Ah(s)} be the set of all valid deterministic policies. Executing a policy \u03c0 \u2208\u03a0 on MDP M yields a deterministic sequence of states and actions (s\u03c0 h, a\u03c0 h)h\u2208[H], where s\u03c0 1 = s1, a\u03c0 h = \u03c0h(s\u03c0 h) for all h \u2208[H], and s\u03c0 h = fh\u22121(s\u03c0 h\u22121, a\u03c0 h\u22121) for all h \u2208{2, . . . , H}. We let Sh := {s \u2208S | \u2203\u03c0 \u2208\u03a0 : s\u03c0 h = s} be the subset of states that are reachable at stage h \u2208[H]. Finally, we de\ufb01ne N := PH h=1 P s\u2208Sh |Ah(s)| as the total number of reachable state-action-stage triplets. For each (s, a, h), the action-value function Q\u03c0 h(s, a) of a policy \u03c0 \u2208\u03a0 quanti\ufb01es the expected return when starting from s at stage h, playing a and following \u03c0 thereafter. In deterministic MDPs, it has the simple expression Q\u03c0 h(s, a) = rh(s, a)+V \u03c0 h+1(fh(s, a)), where V \u03c0 h (s) := Q\u03c0 h(s, \u03c0h(s)) is the corresponding value function (with V \u03c0 H+1(s) = 0). The expected return of \u03c0 is simply its value at the initial state, i.e., V \u03c0 1 (s1) = PH h=1 rh(s\u03c0 h, a\u03c0 h). We let \u03a0\u22c6:= {\u03c0\u22c6\u2208\u03a0 : V \u03c0\u22c6 1 (s1) = max\u03c0\u2208\u03a0 V \u03c0 1 (s1)} be the set of optimal policies, i.e., those with maximal return. Finally, we denote by V \u22c6 h (s) and Q\u22c6 h(s, a) the optimal value and action-value function, respectively. These are related by the Bellman optimality equations as Q\u22c6 h(s, a) = rh(s, a) + V \u22c6 h+1(fh(s, a)) and V \u22c6 h (s) = maxa\u2208Ah(s) Q\u22c6 h(s, a). Learning problem The agent interacts with an MDP M in episodes indexed by t \u2208N. At the beginning of the t-th episode, the agent selects a policy \u03c0t \u2208\u03a0 based on past history through its sampling rule, executes it on M, and observes the corresponding deterministic trajectory (s\u03c0t h , a\u03c0t h )h\u2208[H] together with random rewards (yt h)h\u2208[H], where yt h \u223c\u03bdh(s\u03c0t h , a\u03c0t h ). At the end of each episode, the agent may decide to terminate the process through its stopping rule and return a policy b \u03c0 prescribed by its recommendation rule. We denote by \u03c4 its random stopping time. An algorithm for PAC identi\ufb01cation is thus made of a triplet ({\u03c0t}t\u2208N, \u03c4, b \u03c0). 
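The interaction protocol just described is straightforward to simulate. Below is a minimal environment sketch with deterministic transitions and Gaussian rewards; the class name, the array encoding, and the convention that the initial state has index 0 are assumptions made for this illustration, not part of the paper.

```python
import numpy as np

class DeterministicMDP:
    """Simulator for the interaction protocol: deterministic transitions
    f[h, s, a], unknown Gaussian rewards with means r[h, s, a] and standard
    deviation sigma, and a single initial state (index 0 by convention here)."""

    def __init__(self, f, r, sigma=1.0, seed=0):
        self.f, self.r, self.sigma = f, r, sigma
        self.H = f.shape[0]
        self.rng = np.random.default_rng(seed)

    def play_episode(self, pi):
        """Execute policy pi[h, s] for one episode; return the deterministic
        trajectory (s_h, a_h) for h = 1..H and the random rewards (y_h)."""
        s, traj, rewards = 0, [], []
        for h in range(self.H):
            a = int(pi[h, s])
            y = self.rng.normal(self.r[h, s, a], self.sigma)
            traj.append((s, a))
            rewards.append(y)
            s = self.f[h, s, a]
        return traj, rewards

# toy usage with a random instance and an arbitrary (here: all-zeros) policy
rng = np.random.default_rng(2)
S, A, H = 4, 2, 3
f = rng.integers(0, S, size=(H, S, A))
r = rng.uniform(0.0, 1.0, size=(H, S, A))
env = DeterministicMDP(f, r, sigma=1.0)
pi = np.zeros((H, S), dtype=int)
traj, ys = env.play_episode(pi)
print("trajectory (s_h, a_h):", traj)
print("empirical return:", round(sum(ys), 3),
      "| expected return:", round(sum(r[h, s, a] for h, (s, a) in enumerate(traj)), 3))
```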
The goal of the agent is two-fold. First, for given parameters \u03b5, \u03b4 > 0, it must return an \u03b5-optimal policy with probability at least 1 \u2212\u03b4. De\ufb01nition 1. An algorithm is (\u03b5, \u03b4)-PAC on a set of MDPs M if, for all M \u2208M, it stops a.s. with PM \u0010 V b \u03c0 1 (s1) \u2265V \u22c6 1 (s1) \u2212\u03b5 \u0011 \u22651 \u2212\u03b4. Second, it should stop as early as possible, i.e., by minimizing the sample complexity \u03c4. Henceforth, we assume that the transition function f is known but not the reward distribution \u03bd. Note that if the transitions are unknown, the agent can still estimate them (since it knows that M is deterministic) with at most N \u2264SAH episodes. Minimum \ufb02ows We review some basic concepts from graph theory which will be at the core of our algorithms and analyses later. Full details can be found in Appendix B. First note that a deterministic MDP (without reward) can be represented as a directed acyclic graph (DAG) with one arc for each available state-action-stage triplet. Let E := {(s, a, h) : h \u2208[H], s \u2208Sh, a \u2208Ah(s)} be the set of arcs in the DAG. The minimum \ufb02ow problem, originally introduced in [14] and later studied in, e.g., [15\u201317], consists of \ufb01ndining a \ufb02ow (i.e., an allocation of visits) of minimal value which satis\ufb01es certain demand constraints in each arc of the graph. In our speci\ufb01c setting, we de\ufb01ne a \ufb02ow as any non-negative function \u03b7 : E \u2192[0, \u221e) that belongs to the following set \u2126:= n \u03b7 : E \u2192[0, \u221e) | P (s\u2032,a\u2032):fh\u22121(s\u2032,a\u2032)=s \u03b7h\u22121(s\u2032, a\u2032) = P a\u2208Ah(s) \u03b7h(s, a) \u2200h > 1, s \u2208Sh o . This implies that a \ufb02ow, seen as an allocation of visits to the arcs, satis\ufb01es the navigation constraints (i.e., incoming and outcoming \ufb02ows are equal at each state). The minimum \ufb02ow for a non-negative lower-bound function c : E \u2192[0, \u221e) is the solution to the following linear program (LP): \u03d5\u22c6(c) := min \u03b7\u2208\u2126 X a\u2208A1(s1) \u03b71(s1, a) s.t. \u03b7h(s, a) \u2265ch(s, a) \u2200(s, a, h) \u2208E. 3 \fIntuitively, the goal is to minimize the amount of \ufb02ow leaving the initial state while satisfying the navigation and demand constraints. We note that more ef\ufb01cient algorithms exist for this problem than the LP formulation, e.g., the variant of the Ford-Fulkerson method proposed in [17] which is guaranteed to \ufb01nd an integer solution when the lower bound function is integer-valued. 3 The Complexity of PAC RL in Deterministic MDPs Before stating our lower bound, we formally introduce the new notion of sub-optimality gap it features and compare it with other notions that appeared in the literature. On sub-optimality gaps The most popular notion of sub-optimality gap is the so-called value gap. It was introduced \ufb01rst in the discounted in\ufb01nite-horizon setting [e.g., 7] and later for episodic MDPs [e.g., 18, 19]. Formally, in the latter context, the value gap of any action a \u2208Ah(s) in state s \u2208Sh at stage h \u2208[H] is \u2206h(s, a) := V \u22c6 h (s) \u2212Q\u22c6 h(s, a). Such a notion of gap appears in the complexity measure for PAC RL proposed in [10]. 
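The LP above can be solved directly with an off-the-shelf solver. The sketch below builds the flow-conservation constraints and the demand lower bounds and calls scipy.optimize.linprog; the array encoding of the transition function, the restriction of the flow to triplets reachable from the initial state (assumed to have index 0), and the toy demand in the usage example are assumptions of this sketch rather than the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

def min_flow(f, c):
    """Solve the minimum-flow LP phi*(c): minimise the flow leaving the initial
    state subject to flow conservation and eta_h(s,a) >= c_h(s,a) on every
    reachable triplet. f[h,s,a] is the next state, c[h,s,a] the demand; the
    initial state is assumed to be index 0."""
    H, S, A = c.shape
    idx = lambda h, s, a: (h * S + s) * A + a          # variable index of eta_h(s,a)
    n_var = H * S * A

    # forward reachability from s_1 = 0
    reach = np.zeros((H, S), dtype=bool)
    reach[0, 0] = True
    for h in range(H - 1):
        for s in range(S):
            if reach[h, s]:
                reach[h + 1, f[h, s]] = True

    # demands as variable bounds (unreachable triplets are forced to zero)
    bounds = [(float(c[h, s, a]), None) if reach[h, s] else (0.0, 0.0)
              for h in range(H) for s in range(S) for a in range(A)]

    # flow conservation at every state and stage 2..H (1-indexed): inflow = outflow
    A_eq, b_eq = [], []
    for h in range(1, H):
        for s in range(S):
            row = np.zeros(n_var)
            for sp in range(S):
                for ap in range(A):
                    if f[h - 1, sp, ap] == s:
                        row[idx(h - 1, sp, ap)] += 1.0
            for a in range(A):
                row[idx(h, s, a)] -= 1.0
            A_eq.append(row)
            b_eq.append(0.0)

    obj = np.zeros(n_var)
    for a in range(A):
        obj[idx(0, 0, a)] = 1.0                        # flow leaving s_1 at stage 1
    res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=bounds)
    assert res.success
    return res.fun, res.x.reshape(H, S, A)

# toy demand: one unit of flow required on every reachable triplet
rng = np.random.default_rng(3)
S, A, H = 4, 2, 3
f = rng.integers(0, S, size=(H, S, A))
value, eta = min_flow(f, np.ones((H, S, A)))
print("minimum flow value phi*(c):", round(value, 2))
```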
In the deterministic setting, such a complexity measure can be written as C(M, \u03b5) = P (s,a,h) 1 max(e \u2206h(s,a),\u03b5)2 , where e \u2206h(s, a) = mina\u2032:\u2206h(s,a\u2032)>0 \u2206h(s, a\u2032) if a is the unique optimal action at (s, h), and e \u2206h(s, a) = \u2206h(s, a) otherwise. Intuitively, the (inverse) value gap is proportional to the dif\ufb01culty of learning whether an action a is sub-optimal for state s at stage h. Then, C(M, \u03b5) is proportional to the dif\ufb01culty of learning a near optimal action at all states and stages. Recent works [11, 12] showed that this is actually not necessary: if one only cares about computing a policy maximizing the return at the initial state, it is not necessary to learn an optimal action at states which are not visited by such an optimal policy, in particular when the return of all policies visiting the state is small. The return gap [12] was introduced to cope with this limitation. In deterministic MDPs, it can be expressed as gaph(s, a) := 1 H min\u03c0\u2208\u03a0s,a,h Ph \u2113=1 \u2206\u2113(s\u03c0 \u2113, a\u03c0 \u2113), where we denote by \u03a0s,a,h := {\u03c0 \u2208\u03a0 : s\u03c0 h = s, a\u03c0 h = a} the subset of deterministic policies that visit (s, a) at stage h. In words, the return gap of (s, a, h) is proportional to the sum of value gaps along the best trajectory (i.e., one with maximal return) that visits (s, a) at stage h. Intuitively, this means that, if \u2206h(s, a) is extremely small but all policies visiting (s, a) at stage h need to play a highly sub-optimal action before, then \u2206h(s, a) \u226agaph(s, a). In the deterministic case, our lower bound reveals that the normalization by H is not necessary, and we de\ufb01ne the deterministic return gap to be \u2206h(s, a) := V \u22c6(s1) \u2212 max \u03c0\u2208\u03a0s,a,h V \u03c0(s1). (2) Using the well-known relationship V \u22c6 1 (s1) \u2212V \u03c0 1 (s1) = PH h=1 \u2206h(s\u03c0 h, a\u03c0 h) [e.g., 11, Proposition 5], it is easy to see that \u2206h(s, a) \u2264\u2206h(s, a) = H \u00d7 gaph(s, a). Lower Bound We now present our instance-dependent lower bound based on deterministic return gaps, which will guide us in the design and analysis of sample ef\ufb01cient algorithms. This result is the \ufb01rst instance-dependent lower bound for PAC RL in the episodic setting. Lower bounds for \u03b5-best arm identi\ufb01cation in a bandit model (which corresponds to H = S = 1) were derived in [20\u201322], while problem-dependent regret lower bounds for \ufb01nite-horizon MDPs are provided in [12, 11]. We consider the class M\u03c32 of deterministic MDPs with \u03c32-Gaussian rewards, in which \u03bdh(s, a) = N(rh(s, a), \u03c32). Let \u03a0\u03b5 := {\u03c0 \u2208\u03a0 : V \u03c0 1 (s1) \u2265V \u22c6 1 (s1) \u2212\u03b5} be the set of all \u03b5-optimal policies and denote by Z\u03b5 h := {s \u2208Sh, a \u2208Ah(s) : \u03a0s,a,h \u2229\u03a0\u03b5 \u0338= \u2205} the set of state-action pairs that are reachable at stage h by some \u03b5-optimal policy. Note that \u2206h(s, a) \u2264\u03b5 for all (s, a) \u2208Z\u03b5 h. Theorem 1. Let \u03c32 > 0 and \ufb01x any MDP M \u2208M\u03c32. 
Then, any algorithm which is (\u03b5, \u03b4)-PAC on the class M\u03c32 must satisfy, for any h \u2208[H], s \u2208Sh, and a \u2208Ah(s), EM[n\u03c4 h(s, a)] \u2265ch(s, a) := \u03c32 log(1/4\u03b4) 4 max(\u2206h(s, a), \u2206 h min, \u03b5)2 , (3) where \u2206 h min := min(s\u2032,a\u2032):\u2206h(s\u2032,a\u2032)>0 \u2206h(s\u2032, a\u2032) if |Z\u03b5 h| = 1 and \u2206 h min := 0 otherwise. Moreover, for c : E \u2192[0, \u221e) the lower bound function de\ufb01ned above, EM[\u03c4] \u2265\u03d5\u22c6(c). (4) The \ufb01rst lower bound (3) is on the number of visits required for any state-action-stage triplet. It intuitively shows that an (\u03b5, \u03b4)-PAC algorithm must visit each triplet proportionally to its inverse 4 \fdeterministic return gap. The second one (4) shows that the actual sample complexity of the algorithm must be at least the value of a minimum \ufb02ow computed with the local lower bounds (3), i.e. that the algorithm must play the minimum number of episodes (i.e., policies) that guarantees (3) for each (s, a, h). Intuitively, due to the navigation constraints of the MDP, there might be no algorithm which tightly matches (3) for each (s, a, h), and (4) is exactly enforcing these constraints. While \u03d5\u22c6(c) has no explicit form, Lemma 6 in Appendix B gives an idea of how it scales with the gaps: max h\u2208[H] X s\u2208Sh X a\u2208Ah(s) \u03c32 log(1/4\u03b4) 4 max(\u2206h(s, a), \u2206 h min, \u03b5)2 \u2264\u03d5\u22c6(c) \u2264 X h\u2208[H] X s\u2208Sh X a\u2208Ah(s) \u03c32 log(1/4\u03b4) 4 max(\u2206h(s, a), \u2206 h min, \u03b5)2 . Observe that the quantity on the right-hand side resembles the complexity measure C(M, \u03b5) [10], except that value gaps are replaced by return gaps. This implies that, in general, our lower bound can be much smaller than this complexity. For instance, in an MDP with extremely small value gaps in states which are not visited by an optimal policy, \u03d5\u22c6(c) does not scale with such gaps at all. In Appendix C.2 we further provide a minimax lower bound for PAC RL in deterministic MDPs scaling as \u2126 \u0000SAH2 log (1/\u03b4)/\u03b52\u0001 , with a reduced H2 dependency compared to the H3 that appear in the stochastic case [6]. We note that faster rates for deterministic MDPs have already been obtained in other RL settings [e.g., 23]. The BPI-UCRL algorithm [24] particularized to deterministic MDPs is matching this lower bound and is thus minixal optimal. We now present the \ufb01rst algorithm which is simultaneously minimax optimal for deterministic MDPs and nearly matching (up to O(H2) and logarithmic factors) the lower bound of Theorem 1. 4 EPRL and Max-Coverage Sampling We propose a general Elimination-based scheme for PAC RL, called EPRL (Algorithm 1). At each episode t \u2208N, the algorithm plays a policy \u03c0t selected by some sampling rule. Then, based on the collected samples, the algorithm updates its statistics and eliminates all actions which are detected as sub-optimal with enough con\ufb01dence. This procedure is repeated until a stopping rule triggers. Formally, EPRL maintains an estimate \u02c6 rt h(s, a) := 1 nt h(s,a) Pt l=1 yl h1 \u0000sl h = s, al h = a \u0001 , with \u02c6 r0 h = 0, of the unknown mean reward rh(s, a) for each (s, a, h). Here nt h(s, a) := Pt l=1 1 \u0000sl h = s, al h = a \u0001 is the number of times (s, a) is visited at stage h up to episode t. 
We de\ufb01ne the following upper and lower con\ufb01dence intervals to the value functions of a policy \u03c0 \u2208\u03a0: Q t,\u03c0 h (s, a) := \u02c6 rt h(s, a) + bt h(s, a) + V t,\u03c0 h+1(fh(s, a)), V t,\u03c0 h (s) := Q t,\u03c0 h (s, \u03c0h(s)), Qt,\u03c0 h (s, a) := \u02c6 rt h(s, a) \u2212bt h(s, a) + V t,\u03c0 h+1(fh(s, a)), V t,\u03c0 h (s) := Qt,\u03c0 h (s, \u03c0h(s)), where bt h(s, a) is a bonus function, i.e., the width of the con\ufb01dence interval at (s, a, h). We assume that rewards are \u03c32-sub-Gaussian with a known factor \u03c32,3 which allows us to choose bt h(s, a) := s \u03b2(nt h(s, a), \u03b4) nt h(s, a) , \u03b2(t, \u03b4) := 2\u03c32 log \u00124t2N \u03b4 \u0013 . (5) Elimination rule Algorithm 1 keeps a set of active (or candidate) actions At h(s) for each stage h \u2208[H], state s \u2208Sh, and episode t \u2208N. Let \u03a0t := {\u03c0 \u2208\u03a0 | \u2200s, h : \u03c0h(s) \u2208At h(s)\u2228At h(s) = \u2205} be the subset of active policies that only play active actions at episode t. Note that an active policy can play an arbitrary action in states where all actions have been eliminated. As can be seen in Line 7 of Algorithm 1, action a is eliminated from At h(s) if max\u03c0\u2208\u03a0s,a,h\u2229\u03a0t\u22121 V t,\u03c0 1 (s1) \u2264max\u03c0\u2208\u03a0 V t,\u03c0 1 (s1), that is, when we are con\ufb01dent that none of the policies visiting (s, a) at stage h is optimal. We recall that \u03a0s,a,h denotes the set of all deterministic policies that visit s, a at stage h. The maximum restricted to \u03a0s,a,h can be easily computed by standard dynamic programming (e.g., it is enough to set the reward to \u2212\u221efor all state-action pairs different than (s, a) at stage h). If \u03a0s,a,h \u2229\u03a0t\u22121 = \u2205, we set the maximum to \u2212\u221eso that the elimination rule triggers. Remark 1. While de\ufb01ning \u03a0t simpli\ufb01es the presentation, EPRL neither stores nor enumerates the set of active policies. In particular, EPRL does not eliminate policies but rather (s, a, h) triplets. The sets At h(s) can be updated in polynomial time by dynamic programming without ever computing \u03a0t. 3Note that sub-Gaussianity generalizes the common assumption of bounded rewards in [0, 1] (in which case \u03c32 = 1/4) and the one of Gaussian rewards with variance \u03c32 (as used in the lower bound of Theorem 1). 5 \fAlgorithm 1 Elimination-based PAC RL (EPRL) for deterministic MDPs 1: Input: deterministic MDP (without reward) M := (S, A, {fh}h\u2208[H], s1, H), \u03b5, \u03b4 2: Initialize A0 h(s) \u2190Ah(s) for all h \u2208[H], s \u2208Sh 3: Set n0 h(s, a) \u21900 for all h \u2208[H], s \u2208Sh, a \u2208Ah(s) 4: for t = 1, . . . 
do 5: Play \u03c0t \u2190SAMPLINGRULE() 6: Update statistics nt h(s, a), \u02c6 rt h(s, a) 7: At h(s) \u2190At\u22121 h (s) \u2229 n a \u2208A : max\u03c0\u2208\u03a0s,a,h\u2229\u03a0t\u22121 V t,\u03c0 1 (s1) \u2265max\u03c0\u2208\u03a0 V t,\u03c0 1 (s1) o 8: where \u03a0t\u22121 \u2190 \b \u03c0 \u2208\u03a0 | \u2200s, h : \u03c0h(s) \u2208At\u22121 h (s) \u2228At\u22121 h (s) = \u2205 \t (need not be stored/computed) 9: if max\u03c0\u2208\u03a0t \u0010 V \u03c0,t 1 (s1) \u2212V \u03c0,t 1 (s1) \u0011 \u2264\u03b5 or \u2200h \u2208[H], s \u2208Sh : |At h(s)| \u22641 then 10: Stop and recommend b \u03c0 \u2208arg max\u03c0\u2208\u03a0t V \u03c0,t 1 (s1) 11: end if 12: end for 13: function MAXCOVERAGE() 14: Let kt \u2190minh\u2208[H],s\u2208Sh,a\u2208At\u22121 h (s) nt\u22121 h (s, a) + 1 and \u00af tkt \u2190infl\u2208N{l : kl = kt} 15: if t mod 2 = 1 then 16: return \u03c0t \u2190arg max\u03c0\u2208\u03a0 PH h=1 1 \u0010 a\u03c0 h \u2208A \u00af tkt\u22121 h (s\u03c0 h), nt\u22121 h (s\u03c0 h, a\u03c0 h) < kt \u0011 17: else 18: return \u03c0t \u2190arg max\u03c0\u2208\u03a0t\u22121 PH h=1 bt\u22121 h (s\u03c0 h, a\u03c0 h) (MAXDIAMETER) 19: end if Stopping rule EPRL uses two different stopping rules (Line 9). The \ufb01rst one checks whether, for all active policies \u03c0 \u2208\u03a0t, the con\ufb01dence interval on the return, V \u03c0,t 1 (s1) \u2212V \u03c0,t 1 (s1) = 2 PH h=1 bt h(s\u03c0 h, a\u03c0 h), which we refer to as diameter, is below \u03b5. The second one checks whether each set At h(s) contains either 1 action or 0 actions (which happens when the state is unreachable by an optimal policy). In both cases, we recommend the optimistic (active) policy (Line 10). Sampling rule While EPRL may be used with different sampling rules, we recommend the maxcoverage sampling rule described in Algorithm 1. This sampling rule aims at ensuring that no (s, a, h) triplet remains under-visited for too long. This is achieved by selecting the policy which greedily maximizes the number of visited under-sampled triplets, denoted by Ut. The quantity kt = min(s,a,h):a\u2208At\u22121 h (s) nt\u22121 h (s, a) + 1 can be interpreted as the target minimum number of visits from active triplets that we want to achieve in round t and permit to de\ufb01ne \u03c0t = arg max \u03c0\u2208\u03a0 H X h=1 1 ((s\u03c0 h, a\u03c0 h, h) \u2208Ut) with Ut = n (s, a, h) : a \u2208A tkt\u22121 h (s), nt\u22121 h (s, a) < kt o , where tk = inf{t : kt = k} is the \ufb01rst round in which the target is set to k. The argmax over \u03a0 can be computed using dynamic programming. We emphasize that this argmax is not restricted to the set of active policies, meaning that we may play eliminated actions in order to augment the coverage (that is, the minimal number of visits) faster. Every even round, max-coverage instead chooses an active policy maximizing the diameter featured in the stopping rule (max-diameter sampling). As we shall see in our analysis, this dichotomous behavior is needeed in order to maintain minimax-optimality. Comparison with other elimination-based algorithms The work of [25] provides a heuristic using action eliminations to \ufb01nd an \u03b5-optimal policy in a discounted MDP. However, no sample complexity guarantees are given for this algorithm, which uses a different elimination rule, based on con\ufb01dence intervals on the optimal value function, and a uniform sampling rule. The MOCA algorithm [10] also uses a different action elimination rule compared to ours. 
In particular, the decision to eliminate (s, a, h) is made based only on rewards that can be obtained after visiting (s, a, h). Moreover, this algorithm uses a complex phase-based sampling rule, while the sampling rule of EPRL is fully adaptive. 6 \f5 Theoretical Guarantees Our \ufb01rst result, proved in Appendix D.2.1, shows that EPRL is (\u03b5, \u03b4)-PAC under any sampling rule. It follows from the fact that 1) the choice of bonus function (5) ensures that all the con\ufb01dence intervals are valid and 2) state-action pairs from optimal trajectories are never eliminated when this holds. Theorem 2. Algorithm 1 is (\u03b5, \u03b4)-PAC provided that the sampling rule makes it stop almost surely. We now analyze the sample complexity of EPRL combined with max-coverage sampling. Theorem 3. (Informal version of Theorem 8 in Appendix D.3) With probability at least 1 \u2212\u03b4, the sample complexity of EPRL combined with the maximum-coverage sampling rule satis\ufb01es \u03c4 = e O(\u03d5\u22c6(g)), where g : E \u2192[0, \u221e) is the lower bound function de\ufb01ned by gh(s, a) := 32\u03c32H2 max \u0000\u2206h(s, a), \u2206min, \u03b5 \u00012 \uf8eb \uf8edlog \u00124N 3 \u03b4 \u0013 + 8 log \uf8eb \uf8ed 16\u03c3H log \u0010 4N3 \u03b4 \u0011 max \u0000\u2206h(s, a), \u2206min, \u03b5 \u0001 \uf8f6 \uf8f8 \uf8f6 \uf8f8+ 2. Moreover, with the same probability, \u03c4 = e O( SAH2 \u03b52 log(1/\u03b4)), where e O hides logarithmic terms. First note that EPRL combined with such a sampling rule is minimax optimal, since it matches the worst-case lower bound derived in Appendix C.2. In addition, the leading term in the instancedependent complexity is the value of a minimum \ufb02ow with a lower bound function g that, in case multiple disjoint optimal trajectories exist4, matches the gap-dependence in (1). If we suppose that there exist at least two disjoint optimal trajectories, in which case \u2206min = \u2206 h min = 0, then, thanks to Lemma 7 in Appendix B, one can easily see that \u03d5\u22c6(g) \u2264\u03b1H2\u03d5\u22c6(c) + \u03d5\u22c6(g\u2032), where g\u2032 h(s, a) := e O(H2/ max \u0000\u2206h(s, a), \u03b5 \u00012) does not depend on \u03b4, c is the \u201coptimal\u201d lower bound function from (1), and \u03b1 is a numerical constant. Hence, in the asymptotic regime (\u03b4 \u21920), \u03d5\u22c6(g) matches our lower bound up to a O(H2) multiplicative factor. Remark 2. Since Theorem 1 was derived for Gaussian rewards, EPRL is instance-optimal only when the reward distribution is Gaussian. This is not surprising since it is well known from the bandit literature [e.g., 26] that sample complexity bounds scaling with a sum of inverse squared gaps are optimal only for Gaussian distributions. Note, however, that EPRL works in greater generality and achieves complexity \u03d5\u22c6(c) for any \u03c32-sub-Gaussian distribution without knowing its speci\ufb01c form (e.g., whether it is Gaussian or not). What is the optimal rate for other common distributions (e.g., bounded rewards in [0, 1]) and how to achieve it remains an open question. Finally, our sample complexity bound has an extra multiplicative logarithmic term which roughly scales as O(log(H) log(H log(1/\u03b4)/\u03b5)). While this term makes the dependence on \u03b4 sub-optimal by a log log(1/\u03b4) factor, we show in Appendix E that it can be removed in the speci\ufb01c case of tree-based MDPs [12]. Remark 3. We believe that the sub-optimality on H could be reduced to a single H factor by boosting the lower bound. 
In Appendix E, we show that this is indeed possible in tree-based MDPs. As for the upper bound, reducing H2 to H is likely to require tighter concentration bounds on values. Remark 4. In Appendix D.4, we prove that, when using the max-diameter sampling rule (Line 18 in Algorithm 1) at each step, the sample complexity is e O(P (s,a,h) H2/ max(\u2206h(s, a), \u2206min, \u03b5)2). While this scales with the same gaps as Theorem 3, it is only a naive upper bound to the minimum \ufb02ow value (see Section 3). The intuition is that max-diameter sampling alone does not ensure that all triplets are visited suf\ufb01ciently often, which prevents us from tightly controlling their elimination times. Proof sketch The complete proof is given in Appendix D.3. It \ufb01rst relies on the following crucial result which relates the deterministic return gaps to the sum of con\ufb01dence bonuses. Lemma 1 (Diameter vs gaps). With probability at least 1 \u2212\u03b4, for any t \u2208N, h \u2208[H], s \u2208Sh, a \u2208 Ah(s), if a \u2208At h(s) and the algorithm did not stop at the end of episode t, max \u0012\u2206h(s, a) 4 , \u2206min 4 , \u03b5 2 \u0013 \u2264max \u03c0\u2208\u03a0t\u22121 H X h=1 bt h(s\u03c0 h, a\u03c0 h), 4When there is a unique optimal trajectory, our upper bound scales with \u2206min = minh\u2208[H] \u2206 h min at all stages h, while the lower bound scales with \u2206 h min at stage h. We believe the latter should be improvable to obtain a dependence on \u2206min matching the one in the upper bound. 7 \fwhere \u2206min := minh\u2208[H] mins\u2208Sh mina:\u2206h(s,a)>0 \u2206h(s, a) if there exists a unique optimal trajectory (s\u22c6 h, a\u22c6 h)h\u2208[H], and \u2206min := 0 in the opposite case. In our analysis, we refer to the set of consecutive time steps {t \u2208N : kt = k} as the k-th period. Using the fact that in period k + 1 each active triplet has been visited at least k times (which allows to upper bound each bonus bt h(s\u03c0 h, a\u03c0 h) for \u03c0 \u2208\u03a0t\u22121 by a quantity scaling in p 1/k), one can use Lemma 1 to obtain an upper bound \u03bas,a,h \u2243 H2 log(1/\u03b4) max(\u2206h(s,a),\u2206min,\u03b5) 2 on the last period in which (s, a, h) is active (Lemma 18 in Appendix D.3). A crucial step of the proof is then to upper bound the duration of the k-th period, dk := P\u03c4 t=1 1 (kt = k). Lemma 2. dk \u22642(log(H) + 1)\u03d5\u22c6(ck) where ck h(s, a) = 1(a \u2208Atk\u22121 h (s), ntk\u22121 h (s, a) < k). The intuition behind this result is as follows. Recall that the goal of the max-coverage sampling rule in period k is to visit at least once each (s, a, h) that is active (i.e., a \u2208Atk\u22121 h (s)) and undersampled (i.e., ntk\u22121 h (s, a) < k). By de\ufb01nition, the minimum \ufb02ow \u03d5\u22c6(ck) is the minimum number of policies that need to be played to achieve this goal. Interestingly, Lemma 2 shows that the number of policies played by max-coverage to visit all active undersampled triplets is very close to its theoretical minimum, despite the fact that the algorithm never computes an actual minimum \ufb02ow. We prove this by interpreting max-coverage sampling as a greedy maximization of some coverage function (related to a minimum \ufb02ow problem) and leveraging the theory of sub-modular maximization [e.g., 27]. Thanks to Lemma 2, we have that \u03c4 \u22642(log(H) + 1) k\u03c4 X k=1 \u03d5\u22c6(ck), where k\u03c4 is the index of the period at which the algorithm stops. 
To bound this quantity we carefully apply the theory of minimum \ufb02ows and their dual problem of maximum cuts. Let us de\ufb01ne a cut C as any subset of states containing the initial state and let E(C) be the set of arcs that connect states in C with states not in C. The well-known min-\ufb02ow-max-cut theorem (Theorem 4 stated in Appendix B) states that, for any lower bound function c, \u03d5\u22c6(c) = maxC\u2208C P (s,a,h) ch(s, a), where C denotes the set of all valid cuts. Then, k\u03d5\u22c6(ck) \u2264max C\u2208C X (s,a,h)\u2208E(C) k1 \u0010 a \u2208A \u00af tk\u22121 h (s) \u0011 \u2264max C\u2208C X (s,a,h)\u2208E(C) (\u03bas,a,h + 1) = \u03d5\u22c6(g), where g : E \u2192[0, \u221e) is de\ufb01ned by gh(s, a) = \u03bas,a,h + 1. It follows that \u03c4 \u2264 2(log(H) + 1) k\u03c4 X k=1 1 k \u03d5\u22c6(g) \u2264 2(log(H) + 1) (log(k\u03c4) + 1) \u03d5\u22c6(g) \u2264 2(log(H) + 1) \u0012 max (s,a,h) log(\u03bas,a,h) + 1 \u0013 \u03d5\u22c6(g). Using the expression of \u03bas,a,h given in Lemma 18 of Appendix D.3 concludes the proof of the stated e O(\u03d5\u22c6(g)) instance-dependent bound. For the worse-case bound, we refer the reader to Theorem 12. 6 Experiments We compare numerically EPRL to the minimax optimal BPI-UCRL algorithm [24], adapted to the deterministic setting, on synthetic MDP instances. For EPRL, we experiment with two sampling rules: max-coverage (maxCov) and max-diameter (maxD, see Line 18 of Algorithm 1). We defer to Appendix F some implementation details, including a precise description of the BPI-UCRL baseline. We generate random \u201ceasy\u201d deteministic MDP instances with Gaussian rewards of variance 1 using the following protocol. For \ufb01xed S, A, H the mean rewards rh(s, a) are drawn i.i.d. from a uniform 8 \fdistribution over [0, 1] and for each state-action pair, the next state is chosen uniformly at random in {1, . . . , S}. Finally, we only keep MDP instances whose minimum value gap, denoted by \u2206min, is larger than 0.1. Our \ufb01rst observation is that depending on the MDP, the identity of the best performing algorithm can be different. In Figure 1 we show the distribution of the sample complexity (estimated over 10 Monte Carlo simulations) for three different MDPs obtained from our sampling procedure with S, A = 2 and H = 3 and for algorithms that are run with parameters \u03b4 = 0.1 and \u03b5 = 1.5\u2206min. BPI_UCRL maxD maxCov 2500 3000 3500 4000 4500 5000 5500 6000 BPI_UCRL maxD maxCov 2500 3000 3500 4000 4500 5000 5500 BPI_UCRL maxD maxCov 600 800 1000 1200 1400 1600 Figure 1: Distribution of stopping times on particular MDPs over 10 runs, with \u03b5 = 1.5\u2206min. The horizontal lines represent the average sample complexity. To get a better understanding of this phenomenon, we then generated 10 MDP instances of size (S, A, H) = (2, 2, 3) and for each MDP we ran EPRL and BPI-UCRL for 25 values of \u03b5 in a grid [0.05\u2206min, 10\u2206min] and \u03b4 = 0.1. We ran 10 Monte-Carlo simulations for each value of the triplet (MDP, algorithm A, \u03b5), in order to estimate the expected sample complexity EA[\u03c4\u03b4]. In Figure 2 we plot the relative performance (ratio of sample complexities) of different algorithms as a function of the value of \u03b5/\u2206min: each point corresponds to a different MDP and a different value of \u03b5. We observe that for large values of \u03b5/\u2206min, BPI-UCRL has a smaller sample complexity than both versions of EPRL, with a ratio never exceeding 2 (resp. 3) for max-diameter (resp. 
max-coverage). However, in the more interesting small \u03b5/\u2206min regime EPRL is better by several orders of magnitude. This is expected since, for small \u03b5, EPRL is able, through its elimination rule, to identify the optimal policy long before the diameter goes below \u03b5. We observe that the threshold of \u03b5/\u2206min at which EPRL algorithms become a better choice than BPI-UCRL seems to vary with the MDP. Figure 2: Ratios in log-scale E_A[\u03c4_\u03b4]/E_BPI-UCRL[\u03c4_\u03b4] for A in {maxD, maxCov} (left) and E_maxD[\u03c4_\u03b4]/E_maxCov[\u03c4_\u03b4] (right) as a function of \u03b5/\u2206min. Our experiments also reveal an intriguing phenomenon: the use of max-diameter sampling within EPRL often outperforms max-coverage sampling, even if there exist MDPs (2 out of 10 in our experiments) in which max-coverage is indeed empirically better. We leave as future work to obtain a better characterization of MDPs for which EPRL with max-coverage sampling performs best. \f7 Discussion We derived an instance-dependent and a worst-case lower bound characterizing the complexity of PAC RL in deterministic MDPs, and proposed a general elimination algorithm together with a novel maximum-coverage sampling rule that nearly matches them (up to O(H^2) and logarithmic factors). We conclude with some discussion about our results and future directions. Max-coverage vs max-diameter While minimax optimality can be easily achieved with very simple strategies (like max-diameter or BPI-UCRL), instance optimality requires careful algorithmic design. Our coverage-based strategy is built around the idea of \u201cuniformly\u201d exploring the whole MDP, while using an elimination strategy to ensure that no (s, a, h) is sampled much more than what the lower bound prescribes. Notably, this sampling rule is very simple, while existing PAC RL algorithms with instance-dependent complexity are all quite involved [10, 9]. Moreover, max-coverage sampling naturally extends to stochastic MDPs, e.g., by doing optimistic planning on an MDP with a reward function equal to 1 for under-sampled triplets and 0 for the others. Finally, in our experiments on random instances, we observed that max-diameter is often comparable or better than max-coverage. We leave as future work to investigate whether the latter is also provably near instance-optimal. Computational aspects Our sampling rule requires solving one dynamic program per episode, which takes O(N) time. The bottleneck is the elimination rule, which requires O(N^2) per-episode time complexity to solve one dynamic program for each active triplet. However, we note that eliminations could be checked periodically (e.g., even at exponentially-separated times) without significantly compromising the sample complexity guarantees. Improving our results Our instance-dependent upper bound for max-coverage sampling is suboptimal by a factor H^2 and a multiplicative O(log log(1/\u03b4)) term. In Appendix E, we show that, for the specific sub-class of tree-based MDPs [12], we can obtain improved results in all these aspects. In particular, we show that (1) the lower bound scales with an extra factor H and it is fully explicit, (2) the multiplicative log terms in the sample complexity of coverage-based sampling can be removed, and (3) maximum-diameter sampling also achieves near instance-optimal guarantees. 
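As a complement to the computational remarks above (the max-coverage rule reduces to one dynamic program per episode) and to the experimental protocol of Section 6, the following sketch generates a random "easy" deterministic instance along the lines of that protocol and runs a few episodes of max-coverage sampling. The seed, the array layout, the convention that the initial state has index 0, and the use of a fixed active set (no eliminations) are simplifying assumptions; this is not the code used for the experiments reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_deterministic_mdp(S, A, H, min_gap=0.1):
    """Random 'easy' instance in the spirit of the protocol of Section 6:
    mean rewards i.i.d. uniform on [0, 1], deterministic next states uniform on
    the state set, resampled until the minimum positive value gap exceeds
    `min_gap`. The initial state is taken to be index 0 (an assumption here)."""
    while True:
        f = rng.integers(0, S, size=(H, S, A))        # next states f_h(s, a)
        r = rng.uniform(0.0, 1.0, size=(H, S, A))     # mean rewards r_h(s, a)
        V = np.zeros((H + 1, S))
        Q = np.zeros((H, S, A))
        for h in range(H - 1, -1, -1):                # backward induction
            Q[h] = r[h] + V[h + 1][f[h]]
            V[h] = Q[h].max(axis=1)
        gaps = V[:H, :, None] - Q                     # value gaps Delta_h(s, a)
        pos = gaps[gaps > 1e-12]
        if pos.size and pos.min() > min_gap:
            return f, r, float(pos.min())

def max_coverage_policy(f, n, active, k):
    """One step of max-coverage sampling: dynamic programming for the policy
    visiting the most triplets (s, a, h) that are active and have n_h(s,a) < k."""
    H, S, A = n.shape
    W = np.zeros((H + 1, S))
    pi = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        cover = (active[h] & (n[h] < k)).astype(float)    # reward 1 on the under-sampled set
        val = cover + W[h + 1][f[h]]
        pi[h] = val.argmax(axis=1)
        W[h] = val.max(axis=1)
    return pi, W[0, 0]          # greedy policy and number of covered triplets from s_1 = 0

S, A, H = 4, 2, 3
f, r, dmin = random_deterministic_mdp(S, A, H)
print(f"generated instance with minimum value gap {dmin:.3f}")

n = np.zeros((H, S, A), dtype=int)          # visit counts
active = np.ones((H, S, A), dtype=bool)     # no eliminations in this sketch
for t in range(1, 6):
    k = n[active].min() + 1                 # coverage target k_t
    pi, covered = max_coverage_policy(f, n, active, k)
    s = 0                                   # roll the policy out from s_1
    for h in range(H):
        a = pi[h, s]
        n[h, s, a] += 1
        s = f[h, s, a]
    print(f"episode {t}: target k_t = {k}, under-sampled triplets visited = {int(covered)}")
```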
Beyond Gaussian distributions As it is common, e.g., in the bandit literature, the gaps in our lower and upper bounds are optimal only for Gaussian reward distributions. Extending Theorem 1 to general distributions is actually simple (see, e.g., [28] and Lemma 8 in Appendix C). However, this would yield gaps written in terms of KL divergences between arm distributions rather than differences of mean rewards as in the Gaussian case. How to match such gaps is an interesting open question. Instance optimality in stochastic MDPs The main open question is how to achieve (near) instanceoptimality for PAC RL in stochastic MDPs. We believe that many of the results presented in this paper could help in this direction. First, our instance-dependent lower bound could be extended to the stochastic case by modifying return gaps to include visitation probabilities and minimum \ufb02ows to account for stochastic navigation constraints. Second, on the algorithmic side, our maximumcoverage sampling rule easily extends to stochastic MDPs as mentioned above, while our elimination rule could also be adapted by computing the optimistic return of policies visiting a certain (s, a, h) with a least some probability, which corresponds to a constrained MDP problem [e.g., 29]. Studying how these components behave in stochastic MDPs is an exciting direction for future work. Acknowledgments and Disclosure of Funding Aymen Al-Marjani ackowledges the support of the Chaire SeqALO (ANR-20-CHIA-0020). Emilie Kaufmann acknoweldges the support of the French National Research Agency under the BOLD project (ANR-19-CE23-0026-04)." + }, + { + "url": "http://arxiv.org/abs/2106.13013v1", + "title": "A Fully Problem-Dependent Regret Lower Bound for Finite-Horizon MDPs", + "abstract": "We derive a novel asymptotic problem-dependent lower-bound for regret\nminimization in finite-horizon tabular Markov Decision Processes (MDPs). While,\nsimilar to prior work (e.g., for ergodic MDPs), the lower-bound is the solution\nto an optimization problem, our derivation reveals the need for an additional\nconstraint on the visitation distribution over state-action pairs that\nexplicitly accounts for the dynamics of the MDP. We provide a characterization\nof our lower-bound through a series of examples illustrating how different MDPs\nmay have significantly different complexity. 1) We first consider a \"difficult\"\nMDP instance, where the novel constraint based on the dynamics leads to a\nlarger lower-bound (i.e., a larger regret) compared to the classical analysis.\n2) We then show that our lower-bound recovers results previously derived for\nspecific MDP instances. 3) Finally, we show that, in certain \"simple\" MDPs, the\nlower bound is considerably smaller than in the general case and it does not\nscale with the minimum action gap at all. 
We show that this last result is\nattainable (up to $poly(H)$ terms, where $H$ is the horizon) by providing a\nregret upper-bound based on policy gaps for an optimistic algorithm.", + "authors": "Andrea Tirinzoni, Matteo Pirotta, Alessandro Lazaric", + "published": "2021-06-24", + "updated": "2021-06-24", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG" + ], + "main_content": "Introduction There has been a recent surge of interest for problem-dependent analyses of reinforcement learning (RL) algorithms, both in the context of best policy identi\ufb01cation (e.g., Zanette et al., 2019; Marjani and Proutiere, 2021) and regret minimization (e.g., Simchowitz and Jamieson, 2019; He et al., 2020; Yang et al., 2021; Xu et al., 2021). Before this recent trend, problem-dependent bounds were limited to regret minimization in average-reward Markov decision processes (MDPs) (e.g. Burnetas and Katehakis, 1997; Tewari and Bartlett, 2007; Jaksch et al., 2010; Ok et al., 2018). Notably, Burnetas and Katehakis (1997) derived the \ufb01rst problem-dependent asymptotic lower bound for regret minimization in ergodic averagereward MDPs and designed an algorithm matching this fundamental limit. Their lower bound was successively extended by Ok et al. (2018) to structured MDPs. However, these results remain restricted to ergodic MDPs, where the need of exploration is limited to the action space since states are repeatedly visited under any policy. In \ufb01nite-horizon MDPs, the literature has focused on deriving problem-dependent \u201cworst-case\u201d lower bounds for regret minimization (Simchowitz and Jamieson, 2019; Xu et al., 2021) with no state reachability assumption (i.e., the counterpart of ergodicity for \ufb01nite-horizon MDPs). These results are simultaneously i) problem-dependent since they scale with instance-speci\ufb01c quantities (e.g., the action-gaps); ii) \u201cworst-case\u201d since they are derived only for \u201chard\u201d instances. Notably, Xu et al. (2021) proved that there exists a speci\ufb01c MDP such that any consistent algorithm must su\ufb00er a regret depending on the inverse of the minimum gap and derived an algorithm with matching regret upper bound. Despite these results, \u201cfully\u201d problem-dependent lower bounds are still missing, i.e., bounds that depend on the properties of any given MDP, instead of relying on speci\ufb01c worst-case instances. In this paper, we 1 \ftake a step in this direction by deriving the \ufb01rst \u201cfully\u201d problem-dependent asymptotic regret lower bound for \ufb01nite-horizon MDPs. Our lower bound generalizes existing results and provides new insights on the \u201ctrue\u201d complexity of exploration in this setting. Similarly to average-reward MDPs, our lower-bound is the solution to an optimization problem, but it does not require any assumption on state reachability. Our derivation reveals the need for a constraint on the visitation distribution over state-action pairs that explicitly accounts for the dynamics of the MDP. Interestingly, we show examples where this constraint is crucial to obtain tight lower-bounds and to match existing results derived for speci\ufb01c MDP instances. Finally, we show that, in certain \u201csimple\u201d MDPs, the lower bound is considerably smaller than in the general case and it does not scale with the minimum action-gap. We show that this result is attainable (up to poly(H) terms) by providing a novel regret upper-bound for an optimistic algorithm. 
Existing asymptotic problem-dependent lower bounds. In ergodic average-reward MDPs, Burnetas and Katehakis (1997); Tewari and Bartlett (2007); Ok et al. (2018) showed that the optimal problem-dependent regret scales roughly as O(P s,a log T \u2206(s,a)), where \u2206(s, a) is the sub-optimality gap (i.e., the advantage) of action a in state s and T is the number of learning steps.1 This bound has the same shape as the optimal problem-dependent regret in contextual bandits (e.g., Lattimore and Szepesv\u00e1ri, 2020). In \ufb01nite-horizon MDPs, Simchowitz and Jamieson (2019) \ufb01rst showed that the sum of the inverse gaps is a loose lower bound for a speci\ufb01c family of optimistic algorithms, which in the worst-case may su\ufb00er from a regret of at least \u2126( S \u2206min log K), where S is the number of states, K is the number of episodes, and \u2206min is the minimum sub-optimality gap. Xu et al. (2021) later re\ufb01ned this result showing that there exists a \u201chard\u201d MDP where any consistent algorithm su\ufb00ers a regret proportional to \u2126( SA \u2206min log K). In such an instance, this bound is exponentially (in S) larger than the sum of inverse gaps and it is proportional to \u2126( Zmul \u2206min log K), where Zmul is the total number of optimal actions in states where the optimal action is not unique. This suggests that the number of non-unique optimal actions may be key to characterize the \u201cworst-case\u201d complexity in \ufb01nite-horizon MDPs. 2 Preliminaries We consider a time-inhomogeneous \ufb01nite-horizon MDP M := (S, A, {ph, qh}h\u2208[H], p0, H) (Puterman, 1994), where S is a \ufb01nite set of S states, A is a \ufb01nite set of A actions, ph : S \u00d7 A \u2192P(S) and qh : S \u00d7 A \u2192P(R) are the transition probabilities and the reward distribution at stage h \u2208[H] := {1, . . . , H}, p0 \u2208P(S) is the initial state distribution, and H is the horizon.2 We denote by rh(s, a) the expected reward after taking action a in state s. A (deterministic) Markov policy \u03c0 = {\u03c0h}h\u2208[H] \u2208\u03a0 is a sequence of mappings \u03c0h : S \u2192A. Let \u03a0 be the set of such policies. Executing a policy \u03c0 on M yields random trajectories (s1, a1, y1, . . . , sH, aH, yH), where s1 \u223cp0, ah = \u03c0h(sh), sh+1 \u223cph(sh, ah), and yh \u223cqh(sh, ah). We denote by P\u03c0 M, E\u03c0 M the corresponding probability and expectation operators, and let \u03c1\u03c0 M,h(s, a) := P\u03c0 M{sh = s, ah = a} and \u03c1\u03c0 M,h(s) := \u03c1\u03c0 M,h(s, \u03c0h(s)) be the state-action and state occupancy measures at stage h. For each s \u2208S and h \u2208[H], we de\ufb01ne the action-value function of a policy \u03c0 in M as Q\u03c0 M,h(s, a) := E\u03c0 M \" H X h\u2032=h rh\u2032(sh\u2032, ah\u2032)|sh = s, ah = a # , while the corresponding value function is V \u03c0 M,h(s) := Q\u03c0 M,h(s, \u03c0h(s)). Let V \u03c0 M,0 := Es1\u223cp0[V \u03c0 M,1(s1)] be the expected return of policy \u03c0 and V \u22c6 M,0 = sup\u03c0 V \u03c0 M,0. We de\ufb01ne the set of return-optimal policies as \u03a0\u22c6(M) := {\u03c0 \u2208\u03a0 | V \u03c0 M,0 = V \u22c6 M,0}. (1) 1More precisely, it scales with a sum of local complexity measures which are related to the sub-optimality gaps (Tewari and Bartlett, 2007). 2P(\u2126) denotes the set of probability measures over a set \u2126. 
2 \fBy standard MDP theory (e.g., Puterman, 1994), there exists a unique optimal action-value function Q\u22c6 M,h that satis\ufb01es the Bellman optimality equations for any h \u2208[H], s \u2208S, a \u2208A, Q\u22c6 M,h(s, a) = rh(s, a) + ph(s, a)TV \u22c6 M,h+1, (2) where V \u22c6 M,h(s) := maxa\u2208A Q\u22c6 M,h(s, a). We de\ufb01ne the set of Bellman-optimal actions at state-stage (s, h) as OM,h(s) := {a \u2208A : Q\u22c6 M,h(s, a) = V \u22c6 M,h(s)}. Then, the set of Bellman-optimal policies is \u03a0\u22c6 O(M) := {\u03c0 \u2208\u03a0 | \u2200s, h : \u03c0h(s) \u2208OM,h(s)}. A Bellman-optimal policy is always return optimal, i.e., \u03a0\u22c6 O(M) \u2286\u03a0\u22c6(M), while it easy to construct examples where the reverse is not true (i.e., a returnoptimal policy is not Bellman optimal). Finally, we introduce the policy gap \u0393M(\u03c0) := V \u22c6 M,0 \u2212V \u03c0 M,0 and the action gap of a \u2208A in state s \u2208S at stage h \u2208[H] as \u2206M,h(s, a) := V \u22c6 M,h(s) \u2212Q\u22c6 M,h(s, a). (3) These two notions of sub-optimality are related by the following equation (proof in App. E): \u0393M(\u03c0) = X s\u2208S X a\u2208A X h\u2208[H] \u03c1\u03c0 M,h(s, a)\u2206M,h(s, a). (4) This relationship further shows that a policy \u03c0 can be return-optimal despite selecting actions with \u2206M,h(s, a) > 0 (hence it is not Bellman-optimal) at states that have zero occupancy measure \u03c1\u03c0 M,h(s, a). We consider the standard online learning protocol for \ufb01nite-horizon MDPs. At each episode k \u2208[K], the learner plays a policy \u03c0k and observes a random trajectory (sk,h, ak,h, yk,h, . . . , sk,H, ak,H, yk,H) \u223cP\u03c0k M. The choice of \u03c0k is made by a learning algorithm A, i.e., a measurable function that maps the observations up to episode k \u22121 to policies. The goal is to minimize the cumulative regret, RegretK(M) := K X k=1 \u0393M(\u03c0k) = K X k=1 \u0010 V \u22c6 M,0 \u2212V \u03c0k M,0 \u0011 . (5) 3 Problem-dependent Lower Bound As customary in information-theoretic problem-dependent lower bounds, we derive our result for any MDP M in a given set M of MDPs with the same state-action space but di\ufb00erent transition probabilities and reward distributions.3 Formally, we derive an asymptotic problem-dependent lower bound on the expected regret of any \u201cprovably-e\ufb03cient\u201d learning algorithm on the set of MDPs M. De\ufb01nition 1 (\u03b1-uniformly good algorithm). Let \u03b1 \u2208(0, 1), then a learning algorithm A is \u03b1-uniformly good on M if, for each K \u2208N>0 and M \u2208M, there exists a constant c(M) such that EA M [RegretK(M)] \u2264 c(M)K\u03b1. Note that existing algorithms with O( \u221a K) worst-case regret (e.g., Azar et al., 2017; Zanette and Brunskill, 2019) are 1/2-uniformly good, while those with logarithmic regret (e.g., Simchowitz and Jamieson, 2019; Xu et al., 2021) are \u03b1-uniformly good for all \u03b1 \u2208(0, 1). For the purpose of deriving asymptotic lower bounds, De\ufb01nition 1 can be replaced by assuming that the expected regret is o(K\u03b1) only for suf\ufb01ciently large K and for any \u03b1 \u2208(0, 1) (Burnetas and Katehakis, 1997; Ok et al., 2018). We make the following assumption on the MDP M for which we derive the lower bound. Assumption 1 (Unique optimal state distribution). There exists \u03c1\u22c6 M,h \u2208P(S) such that, for any optimal policy \u03c0 \u2208\u03a0\u22c6(M) and for any s \u2208S, h \u2208[H], \u03c1\u22c6 M,h(s) = \u03c1\u03c0 M,h(s). 
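As a concrete reference for the quantities introduced in Section 2, the following sketch computes Q⋆, V⋆, and the action gaps of Eq. (3) by backward induction, and evaluates a deterministic Markov policy's gap through the occupancy-measure identity of Eq. (4). The tabular array layout (stage-indexed transition and reward tensors) is an assumption made purely for illustration.

```python
import numpy as np

def optimal_q_and_gaps(P, R):
    """Backward induction in a time-inhomogeneous finite-horizon MDP.

    P[h, s, a, s'] : transition probabilities, R[h, s, a] : mean rewards.
    Returns Q*[h, s, a], V*[h, s] and the action gaps Delta of Eq. (3).
    """
    H, S, A, _ = P.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))
    for h in range(H - 1, -1, -1):        # Bellman optimality equations, Eq. (2)
        Q[h] = R[h] + P[h] @ V[h + 1]
        V[h] = Q[h].max(axis=1)
    Delta = V[:H, :, None] - Q            # action gaps, Eq. (3)
    return Q, V[:H], Delta

def policy_gap(P, p0, pi, Delta):
    """Policy gap as the occupancy-weighted sum of action gaps, Eq. (4)."""
    H, S, _, _ = P.shape
    rho = np.zeros((H, S))                # state occupancy of the deterministic policy pi[h, s]
    rho[0] = p0
    for h in range(H - 1):
        for s in range(S):
            rho[h + 1] += rho[h, s] * P[h, s, pi[h, s]]
    return sum(rho[h, s] * Delta[h, s, pi[h, s]] for h in range(H) for s in range(S))
```

Under this convention, a policy can have zero policy gap even if it takes actions with positive action gap at states with zero occupancy, which is exactly the distinction between return-optimal and Bellman-optimal policies made above.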
3While the focus of this paper is on the standard (unstructured) tabular setting, the set M can be used to encode structure, i.e., prior knowledge about the problem (Ok et al., 2018). 3 \fThis assumption requires all return-optimal policies of M to induce the same distribution over the state space. This is strictly weaker than assuming a unique optimal action at each state (see Lem. 10 in App. E), as commonly done in the contextual bandit setting (Hao et al., 2020; Tirinzoni et al., 2020) and in MDPs (Marjani and Proutiere, 2021). Let O\u22c6 M := {(s, a, h) : s \u2208supp(\u03c1\u22c6 M,h), a \u2208OM,h(s)} be the set of state-action-stage triplets containing all optimal actions in states that are visited by optimal policies of M. We introduce the following set of alternative MDPs to M: \u039b(M) := \u039bwa(M) \u2229\u039bwc(M), where \u039bwa(M) := {M\u2032 \u2208M | \u03a0\u22c6(M) \u2229\u03a0\u22c6(M\u2032) = \u2205} and4 \u039bwc(M) := {M\u2032 \u2208M | \u2200(s, a, h) \u2208O\u22c6 M : KLs,a,h(M, M\u2032) = 0}. The set of alternatives is a key component in the derivation of information-theoretic problem-dependent lower bounds (e.g., Lai and Robbins, 1985). Similar to (Burnetas and Katehakis, 1997; Ok et al., 2018), the set of alternatives \u039b(M) is the intersection of two sets: (1) the set of weak alternatives \u039bwa(M), i.e., MDPs that have no return-optimal policy in common with M; and (2) the set of weakly confusing MDPs \u039bwc(M), i.e., MDPs whose dynamics and rewards are indistinguishable from M on the state-action pairs observed while executing any return-optimal policy for M. Notice that the set \u039bwc(M) di\ufb00ers from the set of confusing MDPs considered in (Burnetas and Katehakis, 1997; Ok et al., 2018). In their case, the zero-KL condition is imposed over all states since the MDP M is assumed ergodic, which implies that any optimal policy visits all the states with positive probability. In our case, since we do not make any ergodicity assumption, optimal policies may not visit some states at some stages. Therefore, even if the kernels of M and M\u2032 di\ufb00er at some optimal action in any such state, the two MDPs remain indistinguishable by playing return-optimal policies. With these notions in mind, we are now ready to state our problem-dependent lower bound. Theorem 1. Let A be any \u03b1-uniformly good learning algorithm on M with \u03b1 \u2208(0, 1). Then, for any M \u2208M that satis\ufb01es Assumption 1, lim inf K\u2192\u221e EA M [RegretK(M)] log(K) \u2265v\u22c6(M), where v\u22c6(M) is the value of the optimization problem inf \u03b7\u2208RSAH \u22650 X s\u2208S X a\u2208A X h\u2208[H] \u03b7h(s, a)\u2206M,h(s, a), s.t. inf M\u2032\u2208\u039b(M) X s\u2208S X a\u2208A X h\u2208[H] \u03b7h(s, a)KLs,a,h(M, M\u2032) \u22651 \u2212\u03b1, X a\u2208A \u03b7h(s, a) = X s\u2032\u2208S X a\u2032\u2208A p(s|s\u2032, a\u2032)\u03b7h\u22121(s\u2032, a\u2032) \u2200s \u2208S, h > 1, X a\u2208A \u03b71(s, a) = 0 \u2200s / \u2208supp(p0). The lower bound is the solution to a constrained optimization problem that de\ufb01nes an optimal exploration strategy \u03b7 \u2208RSAH, where \u03b7h(s, a) is proportional to the number of visits allocated to each state s and action a at stage h. Such optimal exploration strategy must minimize the resulting regret (written as a weighted sum of local sub-optimality gaps), while satisfying three constraints. 
First, the KL constraint, which is common in this type of information-theoretic lower bounds, requires that the exploration strategy allocates su\ufb03cient visits to relevant state-action-stage triplets so as to discriminate M from all its 4The KL divergence between two MDPs is de\ufb01ned as KLs,a,h(M, M\u2032) = KL(ph(s, a), p\u2032 h(s, a)) + KL(qh(s, a), q\u2032 h(s, a)). 4 \falternatives M\u2032 \u2208\u039b(M). The last two constraints, taken as a whole, form what we refer to as the dynamics constraint. This requires the optimal exploration strategy to be realizable according to (i.e., compatible with) the MDP dynamics. As we shall see in our examples later, the dynamics constraint is a crucial component to introduce MDP structure into the optimization problem. Without it, an exploration strategy would be allowed to allocate visits to certain state-action pairs regardless of the probability to reach them (i.e., as if a generative model were available), thus resulting in a non-realizable allocation in most cases and loose lower bounds. 3.1 The policy-based perspective Note that, by de\ufb01nition, we can realize any allocation \u03b7 that satis\ufb01es the dynamics constraint by playing some stochastic policy. Moreover, we can always express the occupancy measure of any stochastic Markov policy as a mixture of deterministic Markov policies (e.g., Altman, 1999, Remark 6.1, page 64). This implies that an allocation \u03b7 satis\ufb01es the dynamics constraint in the optimization problem of Thm. 1 if, and only if, there exists a vector \u03c9 \u2208R|\u03a0| \u22650 of \u201cmixing coe\ufb03cients\u201d such that \u03b7h(s, a) = P \u03c0\u2208\u03a0 \u03c9\u03c0\u03c1\u03c0 h(s, a) for all s, a, h. This allows us to rewrite the optimization problem in a simpler form. Proposition 1. The optimization problem of Thm. 1 can be rewritten in the following equivalent form inf \u03c9\u2208R|\u03a0| \u22650 X \u03c0\u2208\u03a0 \u03c9\u03c0\u0393M(\u03c0), s.t. inf M\u2032\u2208\u039b(M) X \u03c0\u2208\u03a0 \u03c9\u03c0KL\u03c0(M, M\u2032) \u22651 \u2212\u03b1, where KL\u03c0(M, M\u2032) := P h\u2208[H] P s\u2208S P a\u2208A \u03c1\u03c0 M,h(s, a)KLs,a,h(M, M\u2032). While computationally harder than its counterpart in Thm. 1 (we moved from optimizing over SAH variables to |\u03a0| = ASH variables), this policy-based perspective is convenient to interpret and instantiate the lower bound in speci\ufb01c cases. 4 Examples We now illustrate a series of examples that show some interesting properties of our lower bound. 4.1 On the importance of the dynamics constraint to match existing results We consider the MDP M introduced by Xu et al. (2021) (see Fig. 1 with \u03ba = 0) and we de\ufb01ne M as the set of MDPs with exactly the same dynamics as M but arbitrary Gaussian rewards. In this problem \u2206min = \u03b5 > 0. We instantiate our lower bound in this case with and without the dynamics constraints. Corollary 1. Let M be the MDP of Fig. 1 with \u03ba = 0. Let e v(M) be the value of the optimization problem of Thm. 1 without dynamics constraints, then e v(M) = 2(1 \u2212\u03b1)(log2(S + 1) + A \u22122)/\u2206min. On the other hand, the lower bound in Thm. 1 with dynamics constraints yields v\u22c6(M) \u2265(1 \u2212\u03b1)SA/\u2206min. This result shows that ignoring the dynamics constraints leads to an exponentially smaller (and thus looser) bound w.r.t. v\u22c6(M). On the other hand, when computing the lower bound of Thm. 1, we match the lower bound of Xu et al. 
(2021) for this configuration.
Figure 1: Variant of the example in (Xu et al., 2021). The MDP is a binary tree with S = 2^H − 1 states, A = m ≥ 2 actions, and deterministic transitions. The figure shows an instance with H = 3. The agent starts from the root state s_1^1 and descends the tree using only two actions (L and R). In the leaf states, all the m actions are available. The rewards follow a Gaussian distribution with unit variance and mean equal to zero everywhere except for at most two leaf state-action pairs (whose values are ε and κ).
4.2 On the usefulness of the policy view
The policy view of Prop. 1 is particularly convenient to simplify the expression of the lower bound in some specific cases. One illustrative example is the problem considered above, where all MDPs in M share the same dynamics. In this case, the lower bound can be written as
$$\inf_{\omega \in \mathbb{R}^{|\Pi|}_{\ge 0}} \sum_{\pi \in \Pi} \omega_\pi \langle \theta, \phi^\star - \phi^\pi \rangle, \quad \text{s.t.} \quad \forall \pi \notin \Pi^\star(M): \ \|\phi^\pi\|^2_{D_\omega^{-1}} \le \frac{\Gamma_M(\pi)^2}{2(1-\alpha)},$$
where θ ∈ R^{SAH} is a vector containing all mean rewards, i.e., θ_{s,a,h} = r_h(s, a), φ^π ∈ R^{SAH} is the vector containing the occupancy measure ρ^π_M, i.e., φ^π_{s,a,h} = ρ^π_{M,h}(s, a), and D_ω := Σ_{π∈Π} ω_π diag(ρ^π_{M,h}(s, a)) ∈ R^{SAH×SAH} is a diagonal matrix proportional to the number of times each policy is selected. With this notation we have V^π_{M,0} = θ^T φ^π and V^⋆_{M,0} = θ^T φ^⋆ := θ^T φ^{π^⋆} for some optimal policy π^⋆. It is then possible to instantiate this expression for Fig. 1 and obtain the statement of Cor. 1 (see App. C). Interestingly, this formulation bears a strong resemblance to the asymptotic lower bound for the combinatorial semi-bandit setting (e.g., Wagenmaker et al., 2021). In general, the similarity between the two settings comes from the fact that, in MDPs, there exists a combinatorial number (A^{SH}) of policies, which may be treated as arms, whose expected return can be described by only SAH unknown variables (the immediate mean rewards). One difference is that, in combinatorial semi-bandits, the feature vectors usually take values in {0, 1}^d, where d is their dimension (d = SAH in our case). Here, instead, they contain values in [0, 1]^{SAH}, representing the probabilities that the corresponding policy visits each state-action pair. When the MDP is deterministic, the two problems are indeed equivalent. We remark that the learning feedback itself is the same as the one in combinatorial semi-bandits: whenever we execute a policy π in a deterministic MDP, we receive a random reward observation for each state-action-stage triplet associated with an element where φ^π is equal to 1. When the MDP is not deterministic, on the other hand, we receive the observation only with the corresponding probability. We leave it for future work to further study the connections and differences between the two settings.
4.3 On the dependence on the sum of inverse gaps
While the lower bound of Xu et al.
(2021) shows that there exists an MDP where the regret is signi\ufb01cantly larger than the sum of inverse gaps whenever multiple equivalent optimal actions exist, in the following we derive a result that is somewhat complementary: we show that there exists a large class of MDPs where the lower bound scales as the sum of the inverse gaps, even when Zmul > 0. 6 \fProposition 2. Let M be an MDP satisfying Asm. 1 such that \u03c1\u22c6 M is full-support (i.e., \u03c1\u22c6 M,h(s) > 0, \u2200s, h). Then, v\u22c6(M) = (1 \u2212\u03b1) X h,s,a \u2206M,h(s, a) Ks,a,h(M) \u2264 X h,s,a 2(H \u2212h)2 \u2206M,h(s, a), where Ks,a,h(M) := inf \u00af p,\u00af q\u2208\u039bs(M) \b KL(ph(s, a), \u00af p) + KL(qh(s, a), \u00af q) \t and \u039bs(M) := {\u00af p \u2208P(S), \u00af q \u2208 P([0, 1]) : Ey\u223c\u00af q[y] + \u00af pT V \u22c6 M,h+1 > V \u22c6 M,h(s)}. Note that the full-support condition for \u03c1\u22c6 M,h is weaker than ergodicity for average-reward MDPs since it is required only for the optimal policy. For MDPs with this property, the lower bound is obtained by a decoupled exploration strategy similar to the one for ergodic MDPs, where the optimal allocation focuses on exploring sub-optimal actions regardless of how to reach the corresponding state, while the exploration of the state space comes \u201cfor free\u201d from trying to minimize regret w.r.t. the optimal policy itself. Interestingly, this result holds even for Zmul > 0, suggesting that the dependency Zmul \u2206min derived in (Xu et al., 2021) may be relevant only under speci\ufb01c reachability properties (e.g., when the optimal occupancy measure is not full support). 4.4 On the dependence on the minimum gap Let us consider again the MDP of Fig. 1 under the same setting as before except that \u03ba \u22652\u03b5 > 0. In this problem, \u2206min = \u01eb and \u2206max = \u03ba are the minimum and maximum action gap, respectively. Perhaps surprisingly, despite we only added a single positive reward (\u03ba) to the original hard instance of Xu et al. (2021), we now show that the lower bound of Thm. 1 does not scale with the minimum gap at all. Proposition 3. Let M be the MDP of Fig. 1 with \u03ba \u22652\u03b5 > 0, then the lower bound of Thm. 1 yields v\u22c6(M) \u226412(1 \u2212\u03b1) SA \u2206max . On the other hand, the sum of inverse gaps of M is at least (log2(S + 1) + A \u22123)/\u2206min. This result shows that, for given S, A, H, one can always construct an MDP where the lower bound of Thm. 1 is smaller than the sum of inverse gaps by an arbitrarily large factor. The intuition is as follows. In order to learn an optimal policy, any consistent algorithm must \ufb01gure out which among actions L and R is optimal at the root state s1 1. Action L leads to a return of \u01eb, while action R yields a (possibly much larger) return \u03ba. Suppose the agent has estimated all the rewards in the MDP up to an error of \u03ba/2. This is enough for it to \u201cprune\u201d the whole left branch of the tree since its return is certainly smaller than the one in the right branch. This is better illustrated using the policy view: each policy in this MDP has a gap \u0393(\u03c0) \u2265\u03ba/2. Thus, an estimation error below the minimum policy gap su\ufb03ces to discriminate all sub-optimal policies w.r.t. the optimal one. Notably, this means that the left branch need not be explored to gain \u01eb-accurate estimates, which would translate into a much larger O(1/\u2206min) regret. 
In other words, the agent is not required to explore until it learns a Bellman optimal policy (i.e., one that correctly chooses action a1 in state sH 1 ); any return optimal policy su\ufb03ces to minimize regret, and this can be obtained by simply learning to take the right path while playing arbitrary actions at all other states. 5 Policy-Gap-Based Regret Bound for Optimistic Algorithms To con\ufb01rm that the result of Sec. 4.4 is not an artifact of our lower bound, we provide a novel (logarithmic) problem-dependent regret bound for the optimistic algorithm UCBVI (Azar et al., 2017) that scales with the minimum policy gap \u0393min instead of the action gaps \u2206M as in recent works (Simchowitz and Jamieson, 7 \f2019; Xu et al., 2021). We \ufb01rst recall the general template of optimistic algorithms (e.g., Simchowitz and Jamieson, 2019). We de\ufb01ne the optimistic action-value function as Q k h(s, a) := b rk h(s, a) + b pk h(s, a)T V k h+1 + ck h(s, a), where b rk h, b pk h are the empirical mean rewards and transition probabilities, respectively, while ck h(s, a) is a con\ufb01dence bonus to ensure optimism. The corresponding optimistic value function is V k h(s) := maxa Q k h(s, a). At each episode k \u2208[K], the agent plays the policy \u03c0k h(s) := argmaxa Q k h(s, a). We carry out the analysis for an MDP M with unknown rewards and transition probabilities and, for simplicity, deterministic initial state s1. As common, we only assume that rewards lie in [0, 1] almost surely.5 Since our goal is to show the dependence on the policy gaps, we shall choose the simple Cherno\ufb00Hoe\ufb00ding-based con\ufb01dence bonus (Azar et al., 2017), which yields a regret bound that is sub-optimal in the (minimax) dependence on H. The analysis can be conducted analogously (and improved in the dependence on H) with Bernstein-based bonuses following recent works (Zanette and Brunskill, 2019; Simchowitz and Jamieson, 2019). Theorem 2. Let M be any MDP with rewards in [0, 1] and K \u22651, then the expected regret of UCBVI with Cherno\ufb00-Hoe\ufb00ding bonus (ignoring low-order terms in log(K)) is EM[Regret(K)] \u22724H4SA \u0393min log(4SAHK2). This result shows that 1) UCBVI attains the result in Prop. 3 (up to poly(H) factors) where \u0393min = \u2206max, even when dynamics are unknown; 2) Prop. 3 is tight w.r.t. the gaps; 3) it is possible to achieve regret not scaling with \u2206min. 6 Discussion While Thm. 1 provides the \ufb01rst \u201cfully\u201d problem-dependent lower bound for \ufb01nite-horizon MDPs, it opens a number of interesting directions. 1) As all existing problem-dependent lower bounds for this setting, the result is asymptotic in nature. A more re\ufb01ned \ufb01nite-time analysis could be obtained following Garivier et al. (2019). 2) For the case studied in Cor. 1, Xu et al. (2021) provide an algorithm with matching upper bound (up to poly(H) factors), while we provide a matching upper bound (Thm. 2) for Prop. 3. It remains an open question how to design an algorithm to match the bound of Thm. 1. 3) Most of the ingredients in deriving and analyzing Thm. 1 could be adapted to the average-reward case to obtain a lower bound with no ergodicity assumption." 
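As a reference for the optimistic template recalled in Section 5 above, here is a minimal per-episode planning step in the style of UCBVI with a Chernoff-Hoeffding bonus. The bonus constant and the clipping of the optimistic values are plausible illustrative choices; the exact constants used in the paper's analysis may differ.

```python
import numpy as np

def ucbvi_planning(r_hat, p_hat, counts, H, delta):
    """Optimistic backward induction in the style of UCBVI (illustrative sketch).

    r_hat[h, s, a], p_hat[h, s, a, s'] : empirical mean rewards / transitions
    counts[h, s, a]                    : visit counts used in the bonus
    """
    _, S, A = r_hat.shape
    # Chernoff-Hoeffding-style bonus; the scaling (H + 1) and log term are assumptions
    bonus = (H + 1) * np.sqrt(np.log(2.0 / delta) / np.maximum(counts, 1))
    V = np.zeros((H + 1, S))
    policy = np.zeros((H, S), dtype=int)
    for h in range(H - 1, -1, -1):
        Q = r_hat[h] + p_hat[h] @ V[h + 1] + bonus[h]
        Q = np.minimum(Q, H - h)          # optimistic values never exceed the return-to-go
        policy[h] = Q.argmax(axis=1)      # greedy policy w.r.t. the optimistic Q-function
        V[h] = Q.max(axis=1)
    return policy, V
```

At each episode k, the learner would call this routine on the current empirical model and play the returned greedy policy, which is the template whose regret is bounded in terms of the minimum policy gap in Theorem 2.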
+ }, + { + "url": "http://arxiv.org/abs/2010.12247v2", + "title": "An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits", + "abstract": "In the contextual linear bandit setting, algorithms built on the optimism\nprinciple fail to exploit the structure of the problem and have been shown to\nbe asymptotically suboptimal. In this paper, we follow recent approaches of\nderiving asymptotically optimal algorithms from problem-dependent regret lower\nbounds and we introduce a novel algorithm improving over the state-of-the-art\nalong multiple dimensions. We build on a reformulation of the lower bound,\nwhere context distribution and exploration policy are decoupled, and we obtain\nan algorithm robust to unbalanced context distributions. Then, using an\nincremental primal-dual approach to solve the Lagrangian relaxation of the\nlower bound, we obtain a scalable and computationally efficient algorithm.\nFinally, we remove forced exploration and build on confidence intervals of the\noptimization problem to encourage a minimum level of exploration that is better\nadapted to the problem structure. We demonstrate the asymptotic optimality of\nour algorithm, while providing both problem-dependent and worst-case\nfinite-time regret guarantees. Our bounds scale with the logarithm of the\nnumber of arms, thus avoiding the linear dependence common in all related prior\nworks. Notably, we establish minimax optimality for any learning horizon in the\nspecial case of non-contextual linear bandits. Finally, we verify that our\nalgorithm obtains better empirical performance than state-of-the-art baselines.", + "authors": "Andrea Tirinzoni, Matteo Pirotta, Marcello Restelli, Alessandro Lazaric", + "published": "2020-10-23", + "updated": "2020-11-20", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction We study the contextual linear bandit (CLB) setting [e.g., 1], where at each time step t the learner observes a context Xt drawn from a context distribution \u03c1, pulls an arm At, and receives a reward Yt drawn from a distribution whose expected value is a linear combination between d-dimensional features \u03c6(Xt, At) describing context and arm, and an unknown parameter \u03b8\u22c6. The objective of the learner is to maximize the reward over time, that is to minimize the cumulative regret w.r.t. an optimal strategy that selects the best arm in each context. This setting formalizes a wide range of problems such as online recommendation systems, clinical trials, dialogue systems, and many others [2]. Popular algorithmic principles, such as optimism-in-face-of-uncertainty and Thompson sampling [3], have been applied to this setting leading to algorithms such as OFUL [4] and LINTS [5, 6] with strong \ufb01nite-time worst-case regret guarantees. Nonetheless, Lattimore & Szepesvari [7] recently showed that these algorithms are not asymptotically optimal (in a problem-dependent sense) as they fail to adapt to the structure of the problem at hand. In fact, in the CLB setting, the values of \u2217Work done while at Facebook. 34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada. arXiv:2010.12247v2 [cs.LG] 20 Nov 2020 \fdifferent arms are tightly connected through the linear assumption and a possibly suboptimal arm may provide a large amount of information about \u03b8\u22c6and thus the optimal arm. 
Optimistic algorithms naturally discard suboptimal arms and thus may miss the chance to acquire information about \u03b8\u22c6and signi\ufb01cantly reduce the regret. Early attempts to exploit general structures in MAB either adapted UCB-based strategies [8, 9] or focused on different criteria, such as regret to information ratio [10]. While these approaches succeed in improving the \ufb01nite-time performance of optimism-based algorithms, they still do not achieve asymptotic optimality. An alternative approach to exploit the problem structure was introduced in [7] for (non-contextual) linear bandits. Inspired by approaches for regret minimization [11, 12, 13] and best-arm identi\ufb01cation [14] in MAB, Lattimore & Szepesvari [7] proposed to compute an exploration strategy by solving the (estimated) optimization problem characterizing the asymptotic regret lower bound for linear bandits. While the resulting algorithm matches the asymptotic logarithmic lower bound with tight leading constant, it performs rather poorly in practice. Combes et al. [15] followed a similar approach and proposed OSSB, an asymptotically optimal algorithm for bandit problems with general structure (including, e.g., linear, Lipschitz, unimodal). Unfortunately, once instantiated for the linear bandit case, OSSB suffers from poor empirical performance due to the large dependency on the number of arms. Recently, Hao et al. [16] introduced OAM, an asymptotically optimal algorithm for the CLB setting. While OAM effectively exploits the linear structure and outperforms other bandit algorithms, it suffers from major limitations. From an algorithmic perspective, at each exploration step, OAM requires solving the optimization problem of the regret lower bound, which can hardly scale beyond problems with a handful of contexts and arms. Furthermore, OAM implements a forcing exploration strategy that often leads to long periods of linear regret and introduces a linear dependence on the number of arms |A|. Finally, the regret analysis reveals a critical dependence on the inverse of the smallest probability of a context (i.e., minx \u03c1(x)), thus suggesting that OAM may suffer from poor \ufb01nite-time performance in problems with unbalanced context distributions.2 Degenne et al. [17] recently introduced SPL, which signi\ufb01cantly improves over previous algorithms for MAB problems with general structures. Inspired by algorithms for best-arm identi\ufb01cation [18], Degenne et al. reformulate the optimization problem in the lower bound as a saddle-point problem and show how to leverage online learning methods to avoid recomputing the exploration strategy from scratch at each step. Furthermore, SPL removes any form of forced exploration by introducing optimism into the estimated optimization problem. As a result, SPL is computationally ef\ufb01cient and achieves better empirical performance in problems with general structures. Contributions. In this paper, we follow similar steps as in [17] and introduce SOLID, a novel algorithm for the CLB setting. Our main contributions can be summarized as follows. \u2022 We \ufb01rst reformulate the optimization problem associated with the lower bound for contextual linear bandits [15, 19, 16] by introducing an additional constraint to guarantee bounded solutions and by explicitly decoupling the context distribution and the exploration policy. 
While we bound the bias introduced by the constraint, we also illustrate how the resulting exploration policy is better adapted to unbalanced context distributions. \u2022 Leveraging the Lagrangian dual formulation associated with the constrained lower-bound optimization problem, we derive SOLID, an ef\ufb01cient primal-dual learning algorithm that incrementally updates the exploration strategy at each time step. Furthermore, we replace forced exploration with an optimistic version of the optimization problem by speci\ufb01cally leveraging the linear structure of the problem. Finally, SOLID does not require any explicit tracking step and it samples directly from the current exploration strategy. \u2022 We establish the asymptotic optimality of SOLID, while deriving a \ufb01nite-time problemdependent regret bound that scales only with log |A| and without any dependence on minx \u03c1(x). To this purpose, we introduce a new concentration bound for regularized least-squares that scales as O(log t + d log log t), hence removing the d log t dependence of the bound in [4]. Moreover, we establish a e O(( \u221a d + |X|) \u221a dn) worst-case regret bound for any CLB problem with |X| contexts, d features, and horizon n. Notably, this is implies that SOLID is the \ufb01rst algorithm to be simultaneously asymptotically optimal and minimax optimal when |X| \u2264 \u221a d (e.g., in non-contextual linear bandits, when |X| = 1). 2Interestingly, Hao et al. [16] explicitly mention in their conclusions the importance of properly managing the context distribution to achieve satisfactory \ufb01nite-time performance. 2 \f\u2022 We empirically compare to a number of state-of-the-art methods for contextual linear bandits and show how SOLID is more computationally ef\ufb01cient and often has the smallest regret. A thorough comparison between SOLID and related work is reported in App. B. 2 Preliminaries We consider the contextual linear bandit setting. Let X be the set of contexts and A be the set of arms with cardinality |X| < \u221eand |A| < \u221e, respectively. Each context-arm pair is embedded into Rd through a feature map \u03c6 : X \u00d7 A \u2192Rd. For any reward model \u03b8 \u2208Rd, we denote by \u00b5\u03b8(x, a) = \u03c6(x, a)T\u03b8 the expected reward for each context-arm pair. Let a\u22c6 \u03b8(x) := argmaxa\u2208A \u00b5\u03b8(x, a) and \u00b5\u22c6 \u03b8(x) := maxa\u2208A \u00b5\u03b8(x, a) denote the optimal arm and its value for context x and parameter \u03b8. We de\ufb01ne the sub-optimality gap of arm a for context x in model \u03b8 as \u2206\u03b8(x, a) := \u00b5\u22c6 \u03b8(x) \u2212\u00b5\u03b8(x, a). We assume that every time arm a is selected in context x, a random observation Y = \u03c6(x, a)T\u03b8 + \u03be is generated, where \u03be \u223cN(0, \u03c32) is a Gaussian noise.3 Given two parameters \u03b8, \u03b8\u2032 \u2208Rd, we de\ufb01ne dx,a(\u03b8, \u03b8\u2032) := 1 2\u03c32 (\u00b5\u03b8(x, a) \u2212\u00b5\u03b8\u2032(x, a))2, which corresponds to the Kullback-Leibler divergence between the Gaussian reward distributions of the two models in context x and arm a. At each time step t \u2208N, the learner observes a context Xt \u2208X drawn i.i.d. from a context distribution \u03c1, it pulls an arm At \u2208A, and it receives a reward Yt = \u03c6(Xt, At)T\u03b8\u22c6+ \u03bet, where \u03b8\u22c6\u2208Rd is unknown to the learner. 
A bandit strategy \u03c0 := {\u03c0t}t\u22651 chooses the arm At to pull at time t as a measurable function \u03c0t(Ht\u22121, Xt) of the current context Xt and of the past history Ht\u22121 := (X1, Y1, . . . , Xt\u22121, Yt\u22121). The objective is to de\ufb01ne a strategy that minimizes the expected cumulative regret over n steps, E\u03c0 \u03be,\u03c1 \u0002 Rn(\u03b8) \u0003 := E\u03c0 \u03be,\u03c1 [Pn t=1 (\u00b5\u22c6 \u03b8(Xt) \u2212\u00b5\u03b8(Xt, At))] , where E\u03c0 \u03be,\u03c1 denotes the expectation w.r.t. the randomness of contexts, the noise of the rewards, and any randomization in the algorithm. We denote by \u03b8\u22c6the reward model of the bandit problem at hand, and without loss of generality we rely on the following regularity assumptions. Assumption 1. The realizable parameters belong to a compact subset \u0398 of Rd such that \u2225\u03b8\u22252 \u2264B for all \u03b8 \u2208\u0398. The features are bounded, i.e., \u2225\u03c6(x, a)\u22252 \u2264L for all x \u2208X, a \u2208A. The context distribution is supported over the whole context set, i.e., \u03c1(x) \u2265\u03c1min > 0 for all x \u2208X. Finally, w.l.o.g. we assume \u03b8\u22c6has a unique optimal arm in each context [see e.g., 15, 16]. Regularized least-squares estimator. We introduce the regularized least-square estimate of \u03b8\u22c6using t samples as b \u03b8t := V \u22121 t Ut, where V t := Pt s=1 \u03c6(Xs, As)\u03c6(Xs, As)T + \u03bdI, with \u03bd \u2265max{L2, 1} and I the d \u00d7 d identity matrix, and Ut := Pt s=1 \u03c6(Xs, As)Ys. The estimator b \u03b8t satis\ufb01es the following concentration inequality (see App. J for the proof and exact formulation). Theorem 1. Let \u03b4 \u2208(0, 1), n \u22653, and b \u03b8t be a regularized least-square estimator obtained using t \u2208[n] samples collected using an arbitrary bandit strategy \u03c0 := {\u03c0t}t\u22651. Then, P n \u2203t \u2208[n] : \u2225b \u03b8t \u2212\u03b8\u22c6\u2225V t \u2265\u221acn,\u03b4 o \u2264\u03b4, where cn,\u03b4 is of order O(log(1/\u03b4) + d log log n). For the usual choice \u03b4 = 1/n, cn,1/n is of order O(log n + d log log n), which illustrates how the dependency on d is on a lower-order term w.r.t. n (as opposed to the well-known concentration bound derived in [4]). This result is the counterpart of [7, Thm. 8] for the concentration on the reward parameter estimation error instead of the prediction error and we believe it is of independent interest. 3 Lower Bound We recall the asymptotic lower bound for multi-armed bandit problems with structure from [20, 15, 19]. We say that a bandit strategy \u03c0 is uniformly good if E\u03c0 \u03be,\u03c1 \u0002 Rn \u0003 = o(n\u03b1) for any \u03b1 > 0 and any contextual linear bandit problem satisfying Asm. 1. Proposition 1. Let \u03c0 := {\u03c0t}t\u22651 by a uniformly good bandit strategy then, lim inf n\u2192\u221e E\u03c0 \u03be,\u03c1 \u0002 Rn(\u03b8\u22c6) \u0003 log(n) \u2265v\u22c6(\u03b8\u22c6), (1) 3This assumption can be relaxed by considering sub-Gaussian rewards. 3 \fwhere v\u22c6(\u03b8\u22c6) is the value of the optimization problem inf \u03b7(x,a)\u22650 X x\u2208X X a\u2208A \u03b7(x, a)\u2206\u03b8\u22c6(x, a) s.t. 
inf \u03b8\u2032\u2208\u0398alt X x\u2208X X a\u2208A \u03b7(x, a)dx,a(\u03b8\u22c6, \u03b8\u2032) \u22651, (P) where \u0398alt := {\u03b8\u2032 \u2208\u0398 | \u2203x \u2208X, a\u22c6 \u03b8\u22c6(x) \u0338= a\u22c6 \u03b8\u2032(x)} is the set of alternative reward parameters such that the optimal arm changes for at least a context x.4 The variables \u03b7(x, a) can be interpreted as the number of pulls allocated to each context-arm pair so that enough information is obtained to correctly identify the optimal arm in each context while minimizing the regret. Formulating the lower bound in terms of the solution of (P) is not desirable for two main reasons. First, (P) is not a well-posed optimization problem since the inferior may not be attainable, i.e., the optimal solution may allocate an in\ufb01nite number of pulls to some optimal arms. Second, (P) removes any dependency on the context distribution \u03c1. In fact, the optimal solution \u03b7\u22c6 of (P) may prescribe to select a context-arm (x, a) pair a large number of times, despite x having low probability of being sampled from \u03c1. While this has no impact on the asymptotic performance of \u03b7\u22c6(as soon as \u03c1min > 0), building on \u03b7\u22c6to design a learning algorithm may lead to poor \ufb01nite-time performance. In order to mitigate these issues, we propose a variant of the previous lower bound obtained by adding a constraint on the cumulative number of pulls in each context and explicitly decoupling the context distribution \u03c1 and the exploration policy \u03c9(x, a) de\ufb01ning the probability of selecting arm a in context x. Given z \u2208R>0, we de\ufb01ne the optimization problem min \u03c9\u2208\u2126 zE\u03c1 \u0014 X a\u2208A \u03c9(x, a)\u2206\u03b8\u22c6(x, a) \u0015 s.t. inf \u03b8\u2032\u2208\u0398alt E\u03c1 \u0014 X a\u2208A \u03c9(x, a)dx,a(\u03b8\u22c6, \u03b8\u2032) \u0015 \u22651/z (Pz) where \u2126= {\u03c9(x, a) \u22650 | \u2200x \u2208X : P a\u2208A \u03c9(x, a) = 1} is the probability simplex. We denote by \u03c9\u22c6 z,\u03b8\u22c6the optimal solution of (Pz) and u\u22c6(z, \u03b8\u22c6) its associated value (if the problem is unfeasible we set u\u22c6(z, \u03b8\u22c6) = +\u221e). Inspecting (Pz), we notice that z serves as a global constraint on the number of samples. In fact, for any \u03c9 \u2208\u2126, the associated number of samples \u03b7(x, a) allocated to a context-arm pair (x, a) is now z\u03c1(x)\u03c9(x, a). Since \u03c1 is a distribution over X and P a \u03c9(x, a) = 1 in each context, the total number of samples sums to z. As a result, (Pz) admits a minimum and it is more amenable to designing a learning algorithm based on its Lagrangian relaxation. Furthermore, we notice that z can be interpreted as de\ufb01ning a more \u201c\ufb01nite-time\u201d formulation of the lower bound. Finally, we remark that the total number of samples that can be assigned to a context x is indeed constrained to z\u03c1(x). This constraint crucially makes (Pz) more context aware and forces the solution \u03c9 to be more adaptive to the context distribution. In Sect. 4, we leverage these features to design an incremental algorithm whose \ufb01nite-time regret does not depend on \u03c1min, thus improving over previous algorithms [7, 16], as supported by the empirical results in Sect. 6. The following lemma provides a characterization of (Pz) and its relationship with (P) (see App. C for the proof and further discussion). Lemma 1. 
Let z(\u03b8\u22c6) := min {z > 0 : (Pz) is feasible}, z(\u03b8\u22c6) = maxx\u2208X P a\u0338=a\u22c6 \u03b8\u22c6(x) \u03b7\u22c6(x,a) \u03c1(x) and z\u22c6(\u03b8\u22c6) := P x\u2208X P a\u0338=a\u22c6 \u03b8\u22c6(x) \u03b7\u22c6(x, a). Then 1 z(\u03b8\u22c6) = max\u03c9\u2208\u2126inf\u03b8\u2032\u2208\u0398alt E\u03c1 \u0002P a\u2208A \u03c9(x, a)dx,a(\u03b8\u22c6, \u03b8\u2032) \u0003 and there exists a constant c\u0398 > 0 such that, for any z \u2208(z(\u03b8\u22c6), +\u221e), u\u22c6(z, \u03b8\u22c6) \u2264v\u22c6(\u03b8\u22c6) + 2zBLz(\u03b8\u22c6) z \u2212z(\u03b8\u22c6) \u00b7 ( 1 if z < z(\u03b8\u22c6) min n max n c\u0398 \u221a 2z\u22c6(\u03b8\u22c6) \u03c3\u221az , z\u22c6(\u03b8\u22c6) z o , 1 o otherwise The \ufb01rst result characterizes the range of z for which (Pz) is feasible. Interestingly, z(\u03b8\u22c6) < +\u221eis the inverse of the sample complexity of the best-arm identi\ufb01cation problem [21] and the associated solution is the one that maximizes the amount of information gathered about the reward model \u03b8\u22c6. As z increases, \u03c9\u22c6 z,\u03b8\u22c6becomes less aggressive in favoring informative context-arm pairs and more sensitive to the regret minimization objective. The second result quanti\ufb01es the bias w.r.t. the optimal solution of (Pz). For z \u2265z(\u03b8\u22c6), the error decreases approximately at a rate 1/\u221az showing that the solution of (Pz) can be made arbitrarily close to v\u22c6(\u03b8\u22c6). 4The in\ufb01mum over this set can be computed in closed-form when the alternative parameters are allowed to lie in the whole Rd (see App. K.1). When these parameters are forced to have bounded \u21132-norm, the in\ufb01mum has no closed-form expression, though its computation reduces to a simple convex optimization problem (see [21]). 4 \fIn designing our learning algorithm, we build on the Lagrangian relaxation of (Pz). For any \u03c9 \u2208\u2126, let f(\u03c9; \u03b8\u22c6) denote the objective function and g(\u03c9, z; \u03b8\u22c6) denote the KL constraint f(\u03c9; \u03b8\u22c6) = E\u03c1 h X a\u2208A \u03c9(x, a)\u00b5\u03b8\u22c6(x, a) i , g(\u03c9; z, \u03b8\u22c6) = inf \u03b8\u2032\u2208\u0398altE\u03c1 h X a\u2208A \u03c9(x, a)dx,a(\u03b8\u22c6, \u03b8\u2032) i \u22121 z . We introduce the Lagrangian relaxation problem min \u03bb\u22650 max \u03c9\u2208\u2126 n h(\u03c9, \u03bb; z, \u03b8\u22c6) := f(\u03c9; \u03b8\u22c6) + \u03bbg(\u03c9; z, \u03b8\u22c6) o , (P\u03bb) where \u03bb \u2208R\u22650 is a multiplier. Notice that f(\u03c9; \u03b8\u22c6) is not equal to the objective function of (Pz), since we replaced the gap \u2206\u03b8\u22c6by the expected value \u00b5\u03b8\u22c6and we removed the constant multiplicative factor z in the objective function. The associated problem is thus a concave maximization problem. While these changes do not affect the optimality of the solution, they do simplify the algorithmic design. Refer to App. D for details about the Lagrangian formulation. 4 Asymptotically Optimal Linear Primal Dual Algorithm Algorithm 1: SOLID Input: Multiplier \u03bb1, con\ufb01dence values {\u03b2t}t and {\u03b3t}t, maximum multiplier \u03bbmax, normalization factors {zk}k\u22650, phase lengths {pk}k\u22650, step sizes \u03b1\u03bb k, \u03b1\u03c9 k Set \u03c91 \u21901XA |A| , V 0 \u2190\u03bdI, U0 \u21900, e \u03b80 \u21900, S0 \u21900 Phase index: K1 \u21900 for t = 1, . . . 
, n do Receive context Xt \u223c\u03c1 Set Kt+1 \u2190Kt if inf\u03b8\u2032\u2208\u0398t\u22121 \u2225e \u03b8t\u22121 \u2212\u03b8\u2032\u22252 V t\u22121 > \u03b2t\u22121 then // EXPLOITATION STEP At \u2190argmaxa\u2208A \u00b5e \u03b8t\u22121(Xt, a) \u03bbt+1 \u2190\u03bbt, \u03c9t+1 \u2190\u03c9t else // EXPLORATION STEP Sample arm: At \u223c\u03c9t(Xt, \u00b7) Set St \u2190St\u22121 + 1 // UPDATE SOLUTION Compute qt \u2208\u2202ht(\u03c9t, \u03bbt, zKt) (see Eq. 4) Update policy \u03c9t+1(x, a) \u2190 \u03c9t(x,a)e \u03b1\u03c9 Kt qt(x,a) P a\u2032\u2208A \u03c9t(x,a\u2032)e \u03b1\u03c9 Kt qt(x,a\u2032) Update multiplier \u03bbt+1 \u2190min{[\u03bbt \u2212\u03b1\u03bb Ktgt(\u03c9t, zKt)]+, \u03bbmax} // PHASE STOPPING TEST if St \u2212STKt \u22121 = pk then Change phase: Kt+1 \u2190Kt + 1 Reset solution: \u03c9t+1 \u2190\u03c91, \u03bbt+1 \u2190\u03bb1 Pull At and observe outcome Yt Update V t, Ut, b \u03b8t, b \u03c1t using Xt, At, Yt Set e \u03b8t := argmin\u03b8\u2208\u0398\u2229Ct \u2225\u03b8 \u2212b \u03b8t\u2225V t We introduce SOLID (aSymptotic Optimal Linear prImal Dual), which combines a primal-dual approach to incrementally compute the solution of an optimistic estimate of the Lagrangian relaxation (P\u03bb) within a scheme that, depending on the accuracy of the estimate b \u03b8t, separates exploration steps, where arms are pulled according to the exploration policy \u03c9t, and exploitation steps, where the greedy arm is selected. The values of the input parameters for which SOLID enjoys regret guarantees are reported in Sect. 5. In the following, we detail the main ingredients composing the algorithm (see Alg. 1). Estimation. SOLID stores and updates the regularized least-square estimate b \u03b8t using all samples observed over time. To account for the fact that b \u03b8t may have large norm (i.e., \u2225b \u03b8t\u22252 > B and b \u03b8t / \u2208\u0398), SOLID explicitly projects b \u03b8t onto \u0398. Formally, let Ct := {\u03b8 \u2208 Rd : \u2225\u03b8 \u2212b \u03b8t\u22252 V t \u2264\u03b2t} be the con\ufb01dence ellipsoid at time t. Then, SOLID computes e \u03b8t := argmin\u03b8\u2208\u0398\u2229Ct \u2225\u03b8 \u2212b \u03b8t\u22252 V t. This is a simple convex optimization problem, though it has no closed-form expression.5 Note that, on those steps where \u03b8\u22c6/ \u2208Ct, \u0398 \u2229Ct might be empty, in which case we can set e \u03b8t = e \u03b8t\u22121. Then, SOLID uses e \u03b8t instead of b \u03b8t in all steps of the algorithm. SOLID also computes an empirical estimate of the context distribution as b \u03c1t(x) = 1 t Pt s=1 1 {Xs = x}. Accuracy test and tracking. Similar to previous algorithms leveraging asymptotic lower bounds, we build on the generalized likelihood ratio test [e.g., 18] to verify the accuracy of the estimate b \u03b8t. At the beginning of each step t, SOLID \ufb01rst computes inf\u03b8\u2032\u2208\u0398t\u22121 \u2225e \u03b8t\u22121 \u2212\u03b8\u2032\u22252 V t\u22121, where \u0398t\u22121 = {\u03b8\u2032 \u2208\u0398 | \u2203x \u2208X, a\u22c6 e \u03b8t\u22121(x) \u0338= a\u22c6 \u03b8\u2032(x)} is the set of alternative models. This quantity 5The projection is required to carry out the analysis, while we ignore it in our implementation (see App. K.1). 
5 \fmeasures the accuracy of the algorithm, where the in\ufb01mum over alternative models de\ufb01nes the problem \u03b8\u2032 that is closest to e \u03b8t\u22121 and yet different in the optimal arm of at least one context.6 This serves as a worst-case scenario for the true \u03b8\u22c6, since if \u03b8\u2217= \u03b8\u2032 then selecting arms according to e \u03b8t\u22121 would lead to linear regret. If the accuracy exceeds a threshold \u03b2t\u22121, then SOLID performs an exploitation step, where the estimated optimal arm a\u22c6 e \u03b8t\u22121(Xt) is selected in the current context. On the other hand, if the test fails, the algorithm moves to an exploration step, where an arm At is sampled according to the estimated exploration policy \u03c9t(Xt, \u00b7). While this approach is considerably simpler than standard tracking strategies (e.g., selecting the arm with the largest gap between the policy \u03c9t and the number of pulls), in Sect. 5 we show that sampling from \u03c9t achieves the same level of tracking ef\ufb01ciency. Optimistic primal-dual subgradient descent. At each step t, we de\ufb01ne an estimated optimistic version of the Lagrangian relaxation (P\u03bb) as ft(\u03c9) := X x\u2208X b \u03c1t\u22121(x) X a\u2208A \u03c9(x, a) \u0010 \u00b5e \u03b8t\u22121(x, a) + \u221a\u03b3t\u2225\u03c6(x, a)\u2225V \u22121 t\u22121 \u0011 , (2) gt(\u03c9, z) := inf \u03b8\u2032\u2208\u0398t\u22121 X x\u2208X b \u03c1t\u22121(x) X a\u2208A \u03c9(x, a) \u0012 dx,a(e \u03b8t\u22121, \u03b8\u2032) + 2BL \u03c32 \u221a\u03b3t\u2225\u03c6(x, a)\u2225V \u22121 t\u22121 \u0013 \u22121 z , (3) ht(\u03c9, \u03bb, z) := ft(\u03c9) + \u03bbgt(\u03c9, z), (4) where \u03b3t is a suitable parameter de\ufb01ning the size of the con\ufb01dence interval. Notice that we do not use optimism on the context distribution, which is simply replaced by its empirical estimate. Therefore, ht is not necessarily optimistic with respect to the original Lagrangian function h. Nonetheless, we prove in Sect. 5 that this level of optimism is suf\ufb01cient to induce enough exploration to have accurate estimates of \u03b8\u22c6. This is in contrast with the popular forced exploration strategy [e.g. 7, 15, 19, 16], which prescribes a minimum fraction of pulls \u03f5 such that at any step t, any of the arms with less than \u03f5St pulls is selected, where St is the number of exploration rounds so far. While this strategy is suf\ufb01cient to guarantee a minimum level of accuracy for b \u03b8t and to obtain asymptotic regret optimality, in practice it is highly inef\ufb01cient as it requires selecting all arms in each context regardless of their value or amount of information. At each step t, SOLID updates the estimates of the optimal exploration policy \u03c9t and the Lagrangian multiplier \u03bbt. In particular, given the sub-gradient qt of ht(\u03c9t, \u03bbt, zKt), SOLID updates \u03c9t and \u03bbt by performing one step of projected sub-gradient descent with suitable learning rates \u03b1\u03c9 Kt and \u03b1\u03bb Kt. In the update of \u03c9t, we perform the projection onto the simplex \u2126using an entropic metric, while the multiplier is clipped in [0, \u03bbmax]. While this is a rather standard primal-dual approach to solve the Lagrangian relaxation (P\u03bb), the interplay between estimates b \u03b8t, \u03c1t, the optimism used in ht, and the overall regret performance of the algorithm is at the core of the analysis in Sect. 5. 
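The primal-dual update performed at an exploration step can be summarized by the following sketch, which applies the exponentiated-gradient (entropic projection) step to the exploration policy and a clipped subgradient step to the multiplier, as in Algorithm 1. Computing the subgradient q_t and the constraint value g_t requires solving the inner infimum over alternative parameters; both are taken as inputs here, so this is a simplified sketch rather than a full implementation.

```python
import numpy as np

def solid_exploration_update(omega, lam, q, g_value, lr_omega, lr_lam, lam_max):
    """One primal-dual step of Algorithm 1 at an exploration round (sketch).

    omega[x, a] : current exploration policy (each row sums to 1)
    q[x, a]     : subgradient of the optimistic Lagrangian h_t w.r.t. omega
    g_value     : value of the optimistic constraint g_t(omega, z)
    """
    # Ascent on the policy with an entropic (exponentiated-gradient) projection onto the simplex
    new_omega = omega * np.exp(lr_omega * q)
    new_omega /= new_omega.sum(axis=1, keepdims=True)
    # Descent on the multiplier, clipped to [0, lam_max]
    new_lam = min(max(lam - lr_lam * g_value, 0.0), lam_max)
    return new_omega, new_lam
```

The entropic projection keeps each row of ω on the simplex in closed form, which is what makes the per-step cost of this update small compared to re-solving the full lower-bound optimization problem at every exploratory step.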
This approach signi\ufb01cantly reduces the computational complexity compared to [15, 16], which require solving problem P at each exploratory step. In Sect. 6, we show that the incremental nature of SOLID allows it to scale to problems with much larger context-arm spaces. Furthermore, we leverage the convergence rate guarantees of the primal-dual gradient descent to show that the incremental nature of SOLID does not compromise the asymptotic optimality of the algorithm (see Sect. 5). The z parameter. While the primal-dual algorithm is guaranteed to converge to the solution of (Pz) for any \ufb01x z, it may be dif\ufb01cult to properly tune z to control the error w.r.t. (P). SOLID leverages the fact that the error scales as 1/\u221az (Lem. 1 for z suf\ufb01ciently large) and it increases z over time. Given as input two non-decreasing sequences {pk}k and {zk}k, at each phase k, SOLID uses zk in the computation of the subgradient of ht and in the de\ufb01nition of ft and gt. After pk explorative steps, it resets the policy \u03c9t and the multiplier \u03bbt and transitions to phase k +1. Since pk = STk+1\u22121 \u2212STk\u22121 is the number of explorative steps of phase k starting at time Tk, the actual number of steps during k may vary. Notice that at the end of each phase only the optimization variables are reset, while the learning variables (i.e., b \u03b8t, V t, and b \u03c1t) use all the samples collected through phases. 6In practice, it is more ef\ufb01cient to take the in\ufb01mum only over problems with different optimal arm in the last observed context Xt. This is indeed what we do in our experiments and all our theoretical results follow using this alternative de\ufb01nition with only minor changes. 6 \f5 Regret Analysis Before reporting the main theoretical result of the paper, we introduce the following assumption. Assumption 2. The maximum multiplier used by SOLID is such that \u03bbmax \u22652BLz(\u03b8\u22c6). While an assumption on the maximum multiplier is rather standard for the analysis of primal-dual projected subgradient [e.g., 22, 23], we conjecture that it may be actually relaxed in our case by replacing the \ufb01xed \u03bbmax by an increasing sequence as done for {zk}k. Theorem 2. Consider a contextual linear bandit problem with contexts X, arms A, reward parameter \u03b8\u22c6, features bounded by L, zero-mean Gaussian noise with variance \u03c32 and context distribution \u03c1 satisfying Asm. 1. If SOLID is run with con\ufb01dence values \u03b2t\u22121 = cn,1/n and \u03b3t = cn,1/S2 t , where cn,\u03b4 is de\ufb01ned as in Thm. 1, learning rates \u03b1\u03bb k = \u03b1\u03c9 k = 1/\u221apk and increasing sequences zk = z0ek and pk = zke2k, for some z0 \u22651, then it is asymptotically optimal with the same constant as in the lower bound of Prop. 1. Furthermore, for any \ufb01nite n the regret of SOLID is bounded as E\u03c0 \u03be,\u03c1 \u0002 Rn(\u03b8\u22c6) \u0003 \u2264v\u22c6(\u03b8\u22c6)cn,1/n 2\u03c32 + Clog(log log n) 1 2 (log n) 3 4 + Cconst, (5) where Clog = lin\u22650(v\u22c6(\u03b8\u22c6), |X|, L2, B2, \u221a d, 1/\u03c32) and Cconst = v\u22c6(\u03b8\u22c6) B2L2 \u03c32 + lin\u22650(L, B, z0(z(\u03b8\u22c6)/ z0)3, (z(\u03b8\u22c6)/ z0)2).7 The \ufb01rst result shows that SOLID run with an exponential schedule for z is asymptotic optimal, while the second one provides a bound on the \ufb01nite-time regret. We can identify three main components in the \ufb01nite-time regret. 
1) The \ufb01rst term scales with the logarithmic term cn,1/n = O(log n + d log log n) and a leading constant v\u22c6(\u03b8\u22c6), which is optimal as shown in Prop. 1. In most cases, this is the dominant term of the regret. 2) Lower-order terms in o(log n). Notably, a regret of order \u221alog n is due to the incremental nature of SOLID and it is directly inherited from the convergence rate of the primal-dual algorithm we use to optimize (Pz). The larger term (log n)3/4 that we obtain in the \ufb01nal regret is actually due to the schedule of {zk} and {pk}. While it is possible to design a different phase schedule to reduce the exponent towards 1/2, this would negatively impact the constant regret term. 3) The constant regret Cconst is due to the exploitation steps, burn-in phase and the initial value z0. The regret due to z0 takes into account the regime when (Pz) is unfeasible (zk < z(\u03b8\u22c6)) or when zk is too small to assess the rate at which u\u22c6(zk, \u03b8\u22c6) approaches v\u22c6(\u03b8\u22c6) (z < z(\u03b8\u22c6)), see Lem. 1. Notably, the regret due to the initial value z0 vanishes when z0 > z(\u03b8\u22c6). A more aggressive schedule for zk reaching z(\u03b8\u22c6) in few phases would reduce the initial regret at the cost of a larger exponent in the sub-logarithmic terms. The sub-logarithmic terms in the regret have only logarithmic dependency on the number of arms. This is better than existing algorithms based on exploration strategies built from lower bounds. OSSB [15] indeed depends on |A| directly in the main O(log n) regret terms. While the regret analysis of OAM is asymptotic, it is possible to identify several lower-order terms depending linearly on |A|. In fact, OAM as well as OSSB require forced exploration on each context-arm pair, which inevitably translates into regret. In this sense, the dependency on |A| is hard-coded into the algorithm and cannot be improved by a better analysis. SPL depends linearly on |A| in the explore/exploit threshold (the equivalent of our \u03b2t) and in other lower-order terms due to the analysis of the tracking rule. On the other hand, SOLID never requires all arms to be repeatedly pulled and we were able to remove the linear dependence on |A| through a re\ufb01ned analysis of the sampling procedure (see App. E). This is inline with the experimental results where we did not notice any explicit linear dependence on |A|. The constant regret term depends on the context distribution through z(\u03b8\u22c6) (Lem. 1). Nonetheless, this dependency disappears whenever z0 is a fraction z(\u03b8\u22c6). This is in striking contrast with OAM, whose analysis includes several terms depending on the inverse of the context probability \u03c1min. This con\ufb01rms that SOLID is able to better adapt to the distribution generating the contexts. While the phase schedule of Thm. 2 leads to an asymptotically-optimal algorithm and sublinear-regret in \ufb01nite time, it may be possible to \ufb01nd a different schedule having the same asymptotic performance and better \ufb01nite-time guarantees, although this may depend on the horizon n. Refer to App. G.3 for a regret bound highlighting the explicit dependence on the sequences {zk} and {pk}. 7lin(\u00b7) denotes any function with linear or sublinear dependence on the inputs (ignoring logarithmic terms). For example, lin\u22650(x, y2) \u2208{a0 + a1x + a2y + a3y2 + a4xy2 : ai \u22650}. 
7 \f0 0.5 1 1.5 \u00b7105 0 50 100 150 Time Cumulative Regret 0 0.5 1 1.5 \u00b7105 Time 0 0.5 1 1.5 \u00b7105 Time SOLID OAM (\u03f5 = 0) OAM (\u03f5 = 0.01) OAM (\u03f5 = 0.05) LinTS LinUCB Figure 1: Toy problem with 2 contexts and (left) \u03c1(x1) = .5, (center) \u03c1(x1) = .9, (right) \u03c1(x1) = .99. As shown in [16], when the features of the optimal arms span Rd, the asymptotic lower bound vanishes (i.e., v\u22c6(\u03b8\u22c6) = 0). In this case, selecting optimal arms is already informative enough to correctly estimate \u03b8\u22c6and no explicit exploration is needed and SOLID, like OAM, has sub-logarithmic regret. Worst-case analysis. The constant terms in Thm. 2 are due to a naive bound which assumes linear regret in those phases where zk is small (e.g., when the optimization problem is infeasible). While this simpli\ufb01es the analysis for asymptotic optimality, we verify that SOLID always suffers sub-linear regret, regardless of the values of zk. For the following result, we do not require Asm. 2 to hold. Theorem 3 (Worst-case regret bound). Let zk be arbitrary, pk = erk for some constant r \u22651, and the other parameters be the same as in Thm. 2. Then, for any n the regret of SOLID is bounded as E\u03c0 \u03be,\u03c1 \u0002 Rn(\u03b8\u22c6) \u0003 \u226412BL\u03c02C\u03bbmax + 2er \u0000\u03bb2 max + log |A| \u0001 r \u221an + CsqrtC\u03bbmax log(n) \u221a dn, where Csqrt = lin\u22650(|X| + \u221a d, B, L) and C\u03bbmax := \u00001 + \u03bbmaxBL \u03c32 \u0001 . Notably, this bound removes the dependencies on z(\u03b8\u22c6) and z(\u03b8\u22c6), while its derivation is agnostic to the values of zk. Interestingly, we could set \u03bbmax = 0 and the algorithm would completely ignore the KL constraint, thus focusing only on the objective function. This is re\ufb02ected in the worst-case bound since all terms with a dependence on \u03c32 or a quadratic dependence on BL disappear. The key result is that the objective function alone, thanks to optimism, is suf\ufb01cient for proving sub-linear regret but not for proving asymptotic optimality. More precisely, the bound is e O(( \u221a d + |X|) \u221a dn + log |A|\u221an), which matches the minimax optimal rate apart from the dependence on |X| (see [1], Sec. 24.1). We believe the latter could be reduced to p |X| by a re\ufb01ned analysis. It remains an open question how to design an asymptotically optimal algorithm for the contextual case whose regret does not scale with |X|. 6 Numerical Simulations We compare SOLID to LinUCB, LinTS, and OAM. For SOLID, we set \u03b2t = \u03c32(log(t) + d log log(n)) and \u03b3t = \u03c32(log(St) + d log log(n)) (i.e., we remove all numerical constants) and we use the exponential schedule for phases de\ufb01ned in Thm. 2. For OAM, we use the same \u03b2t for the explore/exploit test and we try different values for the forced-exploration parameter \u03f5. LinUCB uses the con\ufb01dence intervals from Thm. 2 in [4] with the log-determinant of the design matrix, and LinTS is as de\ufb01ned in [5] but without the extra-sampling factor \u221a d used to prove its frequentist regret. All plots are the results of 100 runs with 95% Student\u2019s t con\ufb01dence intervals. See App. K for additional details and results on a real dataset. Toy contextual linear bandit with structure. We start with a CLB problem with |X| = 2 and |A|, d = 3. Let xi (ai) be the i-th context (arm). 
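Before writing out the features of this toy problem, here is a short sketch of the confidence values just described for the experiments (with all numerical constants removed, as stated above). The function names are illustrative, and n is assumed larger than e so that log log(n) is defined.

import numpy as np

def beta_t(t, d, n, sigma2):
    # Explore/exploit threshold used by SOLID in the experiments.
    return sigma2 * (np.log(t) + d * np.log(np.log(n)))

def gamma_t(s_t, d, n, sigma2):
    # Optimism scale, where s_t is the number of exploration rounds so far.
    return sigma2 * (np.log(s_t) + d * np.log(np.log(n)))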
We have \u03c6(x1, a1) = [1, 0, 0], \u03c6(x1, a2) = [0, 1, 0], \u03c6(x1, a3) = [1 \u2212\u03be, 2\u03be, 0], \u03c6(x2, a1) = [0, 0.6, 0.8], \u03c6(x2, a2) = [0, 0, 1], \u03c6(x2, a3) = [0, \u03be/10, 1 \u2212 \u03be] and \u03b8\u22c6= [1, 0, 1]. We consider a balanced context distribution \u03c1(x1) = \u03c1(x2) = 0.5. This is a two-context counterpart of the example presented by [7] to show the asymptotic sub-optimality of optimism-based strategies. The intuition is that, for \u03be small, an optimistic strategy pulls a2 in x1 and a1 in x2 only a few times since their gap is quite large, and suffers high regret (inversely proportional to \u03be) to \ufb01gure out which of the remaining arms is optimal. On the other hand, an asymptotically 8 \f0 2 4 \u00b7104 0 200 400 600 800 1,000 Time Cumulative Regret 0 2 4 \u00b7104 Time 0 2 4 \u00b7104 Time 0 2 4 \u00b7104 Time SOLID OAM (\u03f5 = 0.01) LinTS LinUCB Figure 2: Randomly generated bandit problems with d = 8, |X| = 4, and |A| = 4, 8, 16, 32. optimal strategy allocates more pulls to \u201cbad\" arms as they bring information to identify \u03b8\u22c6, which in turns avoids a regret scaling with \u03be. This indeed translates into the empirical performance reported in Fig. 1-(left), where SOLID effectively exploits the structure of the problem and signi\ufb01cantly reduces the regret compared to LinTS and LinUCB. Actually, not only the regret is smaller but the \u201ctrend\u201d is better. In fact, the regret curves of LinUCB and LinTS have a larger slope than SOLID\u2019s, suggesting that the gap may increase further with n, thus con\ufb01rming the theoretical \ufb01nding that the asymptotic performance of SOLID is better. OAM has a similar behavior, but the actual performance is worse than SOLID and it seems to be very sensitive to the forced exploration parameter, where the best performance is obtained for \u03f5 = 0.0, which is not theoretically justi\ufb01ed. We also study the in\ufb02uence of the context distribution. We \ufb01rst notice that solving (P) leads to an optimal exploration strategy \u03b7\u22c6where the only sub-optimal arm with non-zero pulls is a1 in x2 since it yields lower regret and similar information than a2 in x1. This means that the lower bound prescribes a greedy policy in x1, deferring exploration to x2 alone. In practice, tracking this optimal allocation might lead to poor \ufb01nite-time performance when the context distribution is unbalanced towards x1, in which case the algorithm would take time proportional to 1/\u03c1(x2) before performing any meaningful exploration. We verify these intuitions empirically by considering the case of \u03c1(x1) = 0.9 and \u03c1(x1) = 0.99 (middle and right plots in Fig. 1 respectively). SOLID is consistently better than all other algorithms, showing that its performance is not negatively affected by \u03c1min. On the other hand, OAM is more severely affected by the context distribution. In particular, its performance with \u03f5 = 0 signi\ufb01cantly decreases when increasing \u03c1(x1) and the algorithm reduces to an almost greedy strategy, thus suffering linear regret in some problems. In this speci\ufb01c case, forcing exploration leads to slightly better \ufb01nite-time performance since the algorithm pulls the informative arm a2 in x1, which is however not prescribed by the lower bound. Random problems. We evaluate the impact of the number of actions |A| in randomly generated structured problems with d = 8 and |X| = 4. We run each algorithm for n = 50000 steps. 
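Returning to the two-context toy problem above, its features and reward parameter are small enough to write out explicitly. The sketch below is a direct transcription of that instance; the dictionary layout is only an illustrative choice.

import numpy as np

def toy_problem(xi):
    # Features phi(x, a) and parameter theta* of the two-context example.
    phi = {
        ('x1', 'a1'): np.array([1.0, 0.0, 0.0]),
        ('x1', 'a2'): np.array([0.0, 1.0, 0.0]),
        ('x1', 'a3'): np.array([1.0 - xi, 2.0 * xi, 0.0]),
        ('x2', 'a1'): np.array([0.0, 0.6, 0.8]),
        ('x2', 'a2'): np.array([0.0, 0.0, 1.0]),
        ('x2', 'a3'): np.array([0.0, xi / 10.0, 1.0 - xi]),
    }
    theta_star = np.array([1.0, 0.0, 1.0])
    means = {ctx_arm: float(feat @ theta_star) for ctx_arm, feat in phi.items()}
    return phi, theta_star, means

For small xi, the optimal arms are a1 in x1 and a2 in x2, and the only small gap is that of a3 in each context (equal to xi), which is what makes purely optimistic strategies pay a regret inversely proportional to xi.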
For OAM, we set forced-exploration \u03f5 = 0.01 and solve (P) every 100 rounds to speed-up execution as computation becomes prohibitive. The plots in Fig. 2 show the regret over time for |A| = 4, 8, 16, 32. This test con\ufb01rms the advantage of SOLID over the other methods. Interestingly, the regret of SOLID does not seem to signi\ufb01cantly increase as a function of |A|, thus supporting its theoretical analysis. On the other hand, the regret of OAM scales poorly with |A| since forced exploration pulls all arms in a round robin fashion. 7 Conclusion We introduced SOLID, a novel asymptotically-optimal algorithm for contextual linear bandits with \ufb01nite-time regret and computational complexity improving over similar methods and better empirical performance w.r.t. state-of-the-art algorithms in our experiments. The main open question is whether SOLID is minimax optimal for contextual problems with |X| > \u221a d. In future work, our method could be extended to continuous contexts, which would probably require a reformulation of the lower bound and the adoption of parametrized policies. Furthermore, it would be interesting to study \ufb01nite-time lower bounds, especially for problems in which bounded regret is achievable [9, 24, 25]. Finally, we could use algorithmic ideas similar to SOLID to go beyond the realizable linear bandit setting. 9 \fBroader Impact This work is mainly a theoretical contribution. We believe it does not present any foreseeable societal consequence. Funding Transparency Statement Marcello Restelli was partially funded by the Italian MIUR PRIN 2017 Project ALGADIMAR \u201cAlgorithms, Games, and Digital Market\u201d. Acknowledgements The authors would like to thank R\u00e9my Degenne, Han Shao, and Wouter Koolen for kindly sharing the draft of their paper before publication. We also would like to thank Pierre M\u00e9nard for carefully reading the paper and for providing insightful feedback." + }, + { + "url": "http://arxiv.org/abs/2007.00722v1", + "title": "Sequential Transfer in Reinforcement Learning with a Generative Model", + "abstract": "We are interested in how to design reinforcement learning agents that\nprovably reduce the sample complexity for learning new tasks by transferring\nknowledge from previously-solved ones. The availability of solutions to related\nproblems poses a fundamental trade-off: whether to seek policies that are\nexpected to achieve high (yet sub-optimal) performance in the new task\nimmediately or whether to seek information to quickly identify an optimal\nsolution, potentially at the cost of poor initial behavior. In this work, we\nfocus on the second objective when the agent has access to a generative model\nof state-action pairs. First, given a set of solved tasks containing an\napproximation of the target one, we design an algorithm that quickly identifies\nan accurate solution by seeking the state-action pairs that are most\ninformative for this purpose. We derive PAC bounds on its sample complexity\nwhich clearly demonstrate the benefits of using this kind of prior knowledge.\nThen, we show how to learn these approximate tasks sequentially by reducing our\ntransfer setting to a hidden Markov model and employing spectral methods to\nrecover its parameters. 
Finally, we empirically verify our theoretical findings\nin simple simulated domains.", + "authors": "Andrea Tirinzoni, Riccardo Poiani, Marcello Restelli", + "published": "2020-07-01", + "updated": "2020-07-01", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction Knowledge transfer has proven to be a fundamental tool for enabling lifelong reinforcement learning (RL) (Sutton & Barto, 1998), where agents face sequences of related tasks. In this context, upon receiving a new task, the agent aims at reusing knowledge from the previously-solved ones to speed-up the learning process. This has the potential to achieve signi\ufb01cant performance improvements over standard RL from scratch, and several studies have con\ufb01rmed this hypothesis (Taylor & Stone, 2009; Lazaric, 2012). 1Politecnico di Milano, Milan, Italy. Correspondence to: Andrea Tirinzoni . Proceedings of the 37 th International Conference on Machine Learning, Vienna, Austria, PMLR 119, 2020. Copyright 2020 by the author(s). A key question is what and how knowledge should be transferred (Taylor & Stone, 2009). As for the kind of knowledge to be reused, a variety of algorithms have been proposed to transfer experience samples (Lazaric et al., 2008; Taylor et al., 2008; Yin & Pan, 2017; Tirinzoni et al., 2018b; 2019), policies (Fern\u00b4 andez & Veloso, 2006; Mahmud et al., 2013; Rosman et al., 2016; Abel et al., 2018; Yang et al., 2019; Feng et al., 2019), value-functions (Liu & Stone, 2006; Tirinzoni et al., 2018a), features (Konidaris et al., 2012; Barreto et al., 2017; 2018), and more. Regarding how knowledge should be transferred, the answer is more subtle and depends directly on the desired objectives. On the one hand, one could aim at maximizing the jumpstart, i.e., the expected initial performance in the target task. This can be done by transferring initializers, e.g., policies or value-functions that, based on past knowledge, are expected to immediately yield high rewards. Among the theoretical studies, Mann & Choe (2012) and Abel et al. (2018) showed that jumpstart policies can provably reduce the number of samples needed for learning a target task. However, since these initializers are computed before receiving the new task, the agent, though starting from good performance, might still take a long time before converging to near-optimal behavior. An alternative approach is to spend some initial interactions with the target to gather information about the task itself, so as to better decide what to transfer. For instance, the agent could aim at identifying which of the previously-solved problems is most similar to the target. If the similarity is actually large, transferring the past solution would instantaneously lead to near-optimal behavior. This task identi\ufb01cation problem has been studied by Dyagilev et al. (2008); Brunskill & Li (2013); Liu et al. (2016). One downside is that these approaches do not actively seek information to minimize identi\ufb01cation time (or, equivalently, sample complexity), in part because it is non-trivial to \ufb01nd which actions are the most informative given knowledge from related tasks. To the best of our knowledge, it is an open problem how to leverage this prior knowledge to reduce identi\ufb01cation time. Regardless of how knowledge is transferred, a common assumption is that tasks are drawn from some \ufb01xed distribution (Taylor & Stone, 2009), which often hides the sequential nature of the problem. 
In many lifelong learning problems, tasks evolve dynamically and are temporally correlated. Consider an autonomous car driving through traf\ufb01c. arXiv:2007.00722v1 [cs.LG] 1 Jul 2020 \fSequential Transfer in Reinforcement Learning with a Generative Model Based on the traf\ufb01c conditions, both the environment dynamics and the agent\u2019s desiderata could signi\ufb01cantly change. However, this change follows certain temporal patterns, e.g., the traf\ufb01c level might be very high in the early morning, medium in the afternoon, and so on. This setting could thus be modeled as a sequence of related RL problems with temporal dependencies. It is therefore natural to expect that a good transfer agent will exploit the sequential structure among tasks rather than only their static similarities. Our paper aims at advancing the theory of transfer in RL by addressing the following questions related to the abovementioned points: (1) how can the agent quickly identify an accurate solution of the target task when provided with solved related problems? (2) how can the agent exploit the sequential nature of the lifelong learning setup? Taking inspiration from previous works on non-stationary RL (Choi et al., 2000a; Hadoux et al., 2014), we model the sequential transfer setting by assuming that the agent faces a \ufb01nite number of tasks whose evolution is governed by an underlying Markov chain. Each task has potentially different rewards and dynamics, while the state-action space is \ufb01xed and shared. The motivation for this framework is that some real systems often work in a small number of operating modes (e.g., the traf\ufb01c condition in the example above), each with different dynamics. Unlike these works, we assume that the agent is informed when a new task arrives as in the standard transfer setting (Taylor & Stone, 2009). Then, as in previous studies on transfer in RL (Brunskill & Li, 2013; Azar et al., 2013a), we decompose the problem into two parts. First, in Section 3, we assume that the agent is provided with prior knowledge in the form of a set of tasks containing an approximation to the target one. Under the assumption that a generative model of the environment is available, we design an algorithm that actively seeks information in order to identify a near-optimal policy and transfers it to the target task. We derive PAC bounds (Strehl et al., 2009) on its sample complexity which are signi\ufb01cantly tighter than existing results and clearly demonstrate performance improvements over learning from scratch. Then, in Section 4, we show how this prior knowledge can be learned in our sequential setup to allow knowledge transfer. We reduce the problem to learning a hidden Markov model and use spectral methods (Anandkumar et al., 2014) to estimate the task models necessary for running our policy transfer algorithm. We derive \ufb01nite-sample bounds on the error of these estimated models and discuss how to leverage the temporal correlations to further reduce the sample complexity of our methods. Finally, we report numerical simulations in some standard RL benchmarks which con\ufb01rm our theoretical \ufb01ndings. In particular, we show examples where identi\ufb01cation yields faster convergence than jumpstart methods, and examples where the exploitation of the sequential correlations among tasks is superior than neglecting them. 2. 
Preliminaries We model each task \u03b8, as a discounted Markov decision process (MDP) (Puterman, 2014), M\u03b8 := \u27e8S, A, p\u03b8, q\u03b8, \u03b3\u27e9, where S is a \ufb01nite set of S states, A is a \ufb01nite set of A actions, p\u03b8 : S \u00d7 A \u2192P(S) are the transition probabilities, q\u03b8 : S \u00d7 A \u2192P(U) is the reward distribution over space U (e.g., U = R), and \u03b3 \u2208[0, 1) is a discount factor. Here P(\u2126) denotes the set of probability distributions over a set \u2126. We suppose that the sets of states and actions are \ufb01xed and that tasks are uniquely determined by their parameter \u03b8. For this reason, we shall occasionally use \u03b8 to indicate M\u03b8, while alternatively referring to it as task, parameter, or (MDP) model. We denote by r\u03b8(s, a) := Eq\u03b8[Ut|St = s, At = a] and assume, without loss of generality, that rewards take values in [0, 1]. We use V \u03c0 \u03b8 (s) to indicate the value function of a (deterministic) policy \u03c0 : S \u2192A in task \u03b8, i.e., the expected discounted return achieved by \u03c0 when starting from state s in \u03b8, V \u03c0 \u03b8 (s) := E\u03c0 \u03b8 [P\u221e t=0 \u03b3tUt|S0 = s]. The optimal policy \u03c0\u2217 \u03b8 for task \u03b8 maximizes the value function at all states simultaneously, V \u03c0\u2217 \u03b8 \u03b8 (s) \u2265V \u03c0 \u03b8 (s) for all s and \u03c0. We let V \u2217 \u03b8 (s) := V \u03c0\u2217 \u03b8 \u03b8 (s) denote the optimal value function of \u03b8. Given \u03f5 > 0, we say that a policy \u03c0 is \u03f5-optimal for \u03b8 if V \u03c0 \u03b8 (s) \u2265V \u2217 \u03b8 (s) \u2212\u03f5 for all states s. We de\ufb01ne \u03c3r \u03b8(s, a)2 := Varq\u03b8(\u00b7|s,a)[U] as the variance of the reward in s, a for task \u03b8, and \u03c3p \u03b8(s, a; \u03b8\u2032)2 := Varp\u03b8(\u00b7|s,a)[V \u2217 \u03b8\u2032(S\u2032)] as the variance of the optimal value function of \u03b8\u2032 under the transition model of \u03b8. To simplify the exposition, we shall alternatively use the standard vector notation to indicate these quantities. For instance, V \u2217 \u03b8 \u2208RS is the vector of optimal values, p\u03b8(s, a) \u2208RS is the vector of transition probabilities from s, a, r\u03b8 \u2208RSA is the \ufb02attened reward matrix, and so on. To measure the distance between two models \u03b8, \u03b8\u2032, we de\ufb01ne \u2206r s,a(\u03b8, \u03b8\u2032) := |r\u03b8(s, a) \u2212r\u03b8\u2032(s, a)| for the rewards and \u2206p s,a(\u03b8, \u03b8\u2032) = |(p\u03b8(s, a) \u2212p\u03b8\u2032(s, a))TV \u2217 \u03b8 | for the transition probabilities. The latter measures how much the expected return of an agent taking s, a and acting optimally in \u03b8 changes when the \ufb01rst transition only is governed by \u03b8\u2032. See Appendix A for a quick reference of notation. Sequential Transfer Setting We model our problem as a hidden-mode MDP (Choi et al., 2000b). We suppose that the agent faces a \ufb01nite number of k possible tasks, \u0398 = {\u03b81, \u03b82, . . . , \u03b8k}. Tasks arrive in sequence and evolve according to a Markov chain with transition matrix T \u2208 [0, 1]k\u00d7k and an arbitrary initial task distribution. Let \u03b8\u2217 h, for h = 1, . . . , m, be the h-th random task faced by the agent. Then, [T]i,j = P \b \u03b8\u2217 h+1 = \u03b8i|\u03b8\u2217 h = \u03b8j \t . The agent only knows the number of tasks k, while both the MDP models and the task-transition matrix T are unknown. 
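The quantities above are straightforward to compute for tabular tasks. The following sketch (with illustrative array shapes: P of shape S x A x S, R of shape S x A) evaluates the optimal value function by value iteration and the two model distances Delta^r and Delta^p just defined.

import numpy as np

def optimal_value(P, R, gamma, tol=1e-8):
    # Value iteration for a tabular MDP; returns V* of shape (S,).
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)        # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def model_distances(R1, P1, R2, P2, V1_star):
    # Delta^r_{s,a}(theta, theta') and Delta^p_{s,a}(theta, theta'), where
    # V1_star is the optimal value function of the first task.
    delta_r = np.abs(R1 - R2)                      # (S, A)
    delta_p = np.abs((P1 - P2) @ V1_star)          # (S, A)
    return delta_r, delta_p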
The goal is to identify an \u03f5-optimal policy for \u03b8\u2217 h using as few samples as possible, while leveraging knowledge from the previous tasks \u03b8\u2217 1, . . . , \u03b8\u2217 h\u22121 to further reduce the sample complexity. In order to provide deeper insights into how \fSequential Transfer in Reinforcement Learning with a Generative Model to design ef\ufb01cient identi\ufb01cation strategies and facilitate the analysis, we assume that the agent can access a generative model of state-action pairs. Similarly to experimental optimal design (Pukelsheim, 2006), upon receiving each new task \u03b8\u2217 h, the agent can perform at most n experiments for identi\ufb01cation, where each experiment consists in choosing an arbitrary state-action pair s, a and receiving a random next-state S\u2032 \u223cp\u03b8\u2217 h(\u00b7|s, a) and reward R \u223cq\u03b8\u2217 h(\u00b7|s, a). After this identi\ufb01cation phase, the agent has to transfer a policy to the target task and starts interacting with the environment in the standard online fashion. As in prior works (e.g., Brunskill & Li, 2013; Azar et al., 2013a; Liu et al., 2016), we suppose that the agent maintains an approximation to all MDP models, { f M\u03b8}\u03b8\u2208\u0398, and decompose the problem in two main steps.1 (1) First, in Section 3 we present an algorithm that, given the approximate models together with the corresponding approximation errors, actively queries the generative model of the target task by seeking state-action pairs that yield high information about the optimal policy of \u03b8\u2217 h. (2) Then, in Section 4 we show how the agent can build these approximate models from the samples collected while interacting with the sequence of tasks so far. 3. Policy Transfer from Uncertain Models Throughout this section, we shall focus on learning a single model \u03b8\u2217\u2208\u0398 given knowledge from previous tasks. Therefore, to simplify the notation, we shall drop dependency on the speci\ufb01c task sequence (i.e., we drop the task indexes h). Let { f M\u03b8}\u03b8\u2208\u0398 be the approximate MDP models, with e p\u03b8 and e r\u03b8 denoting their transition probabilities and rewards, respectively. We use the intuition that, if these approximations are accurate enough, the problem can be reduced to identifying a suitable policy among the optimal ones of the MDPs { f M\u03b8}\u03b8\u2208\u0398, which necessarily contains an approximation of \u03c0\u2217 \u03b8\u2217. We rely on the following assumption to assess the quality of the approximate models. Assumption 1 (Model uncertainty). There exist known constants \u2206r max, \u2206p max, \u2206\u03c3r max, \u2206\u03c3p max such that max \u03b8\u2208\u0398 \u2225r\u03b8 \u2212e r\u03b8\u2225\u221e\u2264\u2206r max, max \u03b8,\u03b8\u2032\u2208\u0398 \u2225(p\u03b8 \u2212e p\u03b8) T e V \u2217 \u03b8\u2032\u2225\u221e\u2264\u2206p max, max \u03b8\u2208\u0398 \u2225\u03c3r \u03b8 \u2212e \u03c3r \u03b8\u2225\u221e\u2264\u2206\u03c3r max, max \u03b8,\u03b8\u2032\u2208\u0398 \u2225\u03c3p \u03b8(\u03b8\u2032) \u2212e \u03c3p \u03b8(\u03b8\u2032)\u2225\u221e\u2264\u2206\u03c3p max. Assumption 1 ensures that an upper bound to the approximation error of the given models is known. In partic1In the remainder, we overload the notation by using tildes to indicate quantities related to approximate models. 
Algorithm 1 Policy Transfer from Uncertain Models (PTUM) Require: Set of approximate MDPs { f M\u03b8}\u03b8\u2208\u0398, accuracy \u03f5, con\ufb01dence \u03b4, number of samples n, model uncertainty \u2206 Ensure: An \u03f5-optimal policy for M\u03b8\u2217with probability 1 \u2212\u03b4 1: // CHECK ACCURACY CONDITION 2: If \u2206\u2265\u03f5(1\u2212\u03b3) 4(1+\u03b3) \u2192Stop and run (\u03f5, \u03b4)-PAC algorithm 3: // TRANSFER MODE 4: Initialize datasets Dr s,a, Dp s,a \u2190\u2205 5: for t = 1, 2, . . . , n do 6: // STEP 1. BUILD EMPIRICAL MDP MODEL 7: b rt(s, a) \u2190 1 Nt(s,a) P u\u2208Dr s,a u 8: b pt(s\u2032|s, a) \u2190 1 Nt(s,a) P s\u2032\u2032\u2208Dp s,a 1 {s\u2032 = s\u2032\u2032} 9: b \u03c3r t (s, a)2 \u2190 P u\u2208Dr s,a (u\u2212b rt(s,a))2 Nt(s,a)\u22121 10: b \u03c3p t (s, a; \u03b8\u2032)2 \u2190 P s\u2032\u2032\u2208Dp s,a ( e V \u2217 \u03b8\u2032 (s\u2032\u2032)\u2212b pt(s,a)T e V \u2217 \u03b8\u2032 )2 Nt(s,a)\u22121 11: // STEP 2. UPDATE CONFIDENCE SET 12: \u00af \u0398t \u2190 n \u03b8 \u2208\u00af \u0398t\u22121 \f \f (1)-(4) hold for all s, a and \u03b8\u2032 \u2208\u0398 o 13: // STEP 3. CHECK STOPPING CONDITION 14: If there exists \u03b8 \u2208\u00af \u0398t such that for all \u03b8\u2032 \u2208\u00af \u0398t we have e V e \u03c0\u2217 \u03b8 \u03b8\u2032 \u2265e V \u2217 \u03b8\u2032 \u2212\u03f5 + 2\u2206(1+\u03b3) 1\u2212\u03b3 \u2192Stop and return e \u03c0\u2217 \u03b8 15: // STEP 4. QUERY GENERATIVE MODEL 16: Ir t (s, a) \u2190max\u03b8,\u03b8\u2032\u2208\u00af \u0398t Ir s,a(\u03b8, \u03b8\u2032) 17: Ip t (s, a) \u2190max\u03b8,\u03b8\u2032\u2208\u00af \u0398t Ip s,a(\u03b8, \u03b8\u2032) 18: (St, At) \u2190argmaxs,a max {Ir t (s, a), Ip t (s, a)} 19: Obtain St+1 \u223cp\u03b8\u2217(\u00b7|St, At) and Ut+1 \u223cq\u03b8\u2217(\u00b7|St, At) 20: Store Dp s,a = Dp s,a \u222a{St+1} and Dr s,a = Dr s,a \u222a{Ut+1} 21: end for 22: If the algorithm did not stop \u2192Run (\u03f5, \u03b4)-PAC algorithm ular, the maximum error among all components, \u2206:= max{\u2206r max, \u2206p max, \u2206\u03c3r max, \u2206\u03c3p max} is a fundamental parameter for our approach. In Section 4, we shall see how to guarantee this assumption when facing tasks sequentially. 3.1. The PTUM Algorithm We now present Policy Transfer from Uncertain Models (PTUM), whose pseudo-code is provided in Algorithm 1. Given the approximate models { f M\u03b8}\u03b8\u2208\u0398, whose maximum error is bounded by \u2206from Assumption 1, and two values \u03f5, \u03b4 > 0, our approach returns a policy which, with probability at least 1 \u2212\u03b4, is \u03f5-optimal for the target task \u03b8\u2217. We now describe in detail all the steps of Algorithm 1. First (lines 1-2), we check whether positive transfer can occur. The intuition is that, if the approximate models are too uncertain (i.e., \u2206is large), the transfer of a policy might actually lead to poor performance in the target task, i.e., negative transfer occurs. Our algorithm checks whether \u2206 is below a certain threshold (line 2). Otherwise, we run any (\u03f5, \u03b4)-PAC2 algorithm to obtain an \u03f5-optimal policy. Later on, we shall discuss how this algorithm could be 2An algorithm is (\u03f5, \u03b4)-PAC if, with probability 1 \u2212\u03b4, it computes an \u03f5-optimal policy using a polynomial number of samples in the relevant problem-dependent quantities (Strehl et al., 2009). \fSequential Transfer in Reinforcement Learning with a Generative Model chosen. 
Although the condition at line 2 seems restrictive, as \u2206is required to be below a factor of \u03f5, we conjecture this dependency to be nearly-tight (at least in a worst-case sense). In fact, Feng et al. (2019) have recently shown that the sole knowledge of a poorly-approximate model cannot reduce the worst-case sample complexity of any agent seeking an \u03f5-optimal policy. If the condition at line 2 fails, i.e., the models are accurate enough, we say that the algorithm enters the transfer mode. Here, the generative model is queried online until an \u03f5-optimal policy is found. Similarly to existing works on model identi\ufb01cation (Dyagilev et al., 2008; Brunskill & Li, 2013), the algorithm proceeds by elimination. At each time-step t (up to at most n), we keep a set \u00af \u0398t \u2286\u0398 of those models, called active, that are likely to be (close approximations of) the target \u03b8\u2217. This set is created based on the distance between each model and the empirical MDP constructed with the observed samples. Then, the algorithm chooses the next state-action pair St, At to query the generative model so that the samples from St, At are informative to eliminate one of the \u201dwrong\u201d models from the active set. This process is iterated until the algorithm \ufb01nds a policy that is \u03f5-optimal for all active models, in which case the algorithm stops and returns such policy. We now describe these main steps in detail. Step 1. Building the empirical MDP In order to \ufb01nd the set of active models \u00af \u0398t at time t, the algorithm \ufb01rst builds an empirical MDP as a proxy for the true one. Let Nt(s, a) be the number of samples collected from s, a up to (and not including) time t. First, the algorithm estimates, for each s, a, the empirical rewards b rt(s, a) and transition probabilities b pt(s, a) (lines 7-8). Then, it computes the empirical variance of the rewards b \u03c3r t (s, a)2 and of the optimal value functions b \u03c3p t (s, a; \u03b8\u2032)2 for each model \u03b8\u2032 \u2208\u0398 (lines 9-10). This quantities are arbitrarily initialized when Nt(s, a) = 0. Step 2. Building the con\ufb01dence set We de\ufb01ne the con\ufb01dence set \u00af \u0398t as the set of models that are \u201ccompatible\u201d with the empirical MDP in all steps up to t. Formally, a model \u03b8 \u2208\u0398 belongs to the con\ufb01dence set \u00af \u0398t at time t if it was active before (i.e., \u03b8 \u2208\u00af \u0398t\u22121) and the following conditions are satis\ufb01ed for all s \u2208S, a \u2208A, and \u03b8\u2032 \u2208\u0398: |b rt(s, a) \u2212e r\u03b8(s, a)| \u2264Cr t,\u03b4(s, a), (1) |(b pt(s, a) \u2212e p\u03b8(s, a)) T e V \u2217 \u03b8\u2032| \u2264Cp t,\u03b4(s, a, \u03b8\u2032), (2) |b \u03c3r t (s, a) \u2212e \u03c3r \u03b8(s, a)| \u2264C\u03c3r t,\u03b4(s, a), (3) |b \u03c3p t (s, a; \u03b8\u2032) \u2212e \u03c3p \u03b8(s, a; \u03b8\u2032)| \u2264C\u03c3p t,\u03b4(s, a). (4) Intuitively, a model belongs to the con\ufb01dence set if its distance to the empirical MDP does not exceed, in any component, a suitable con\ufb01dence level Ct,\u03b4(s, a). The latter has the form q log c/\u03b4 Nt(s,a) and is obtained from standard applications of Bernstein\u2019s inequality (Boucheron et al., 2003). We refer the reader to Appendix B for the full expression. Alternatively, we say that a model is eliminated from the con\ufb01dence set (i.e., it will never be active again) as soon as it is not compatible with the empirical MDP. 
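Steps 1 and 2 amount to simple per state-action checks. The sketch below tests whether a candidate model survives conditions (1)-(4) at a single (s, a); using one common width for all four conditions is a simplification, since the exact Bernstein constants are deferred to Appendix B, and the dictionary layout of the inputs is purely illustrative.

import numpy as np

def width(n, c, delta):
    # Confidence level of the form sqrt(log(c / delta) / N_t(s, a)).
    return np.sqrt(np.log(c / delta) / max(n, 1))

def model_is_active(emp, approx, V_stars, n, c, delta):
    # emp / approx: dicts with keys 'r', 'p', 'sr', 'sp' holding the empirical
    # and approximate quantities at a fixed (s, a); V_stars: approximate optimal
    # value functions of every model theta' in Theta. Returns False as soon as
    # one of conditions (1)-(4) is violated.
    w = width(n, c, delta)
    if abs(emp['r'] - approx['r']) > w or abs(emp['sr'] - approx['sr']) > w:
        return False
    for j, V in enumerate(V_stars):
        if abs((emp['p'] - approx['p']) @ V) > w:
            return False
        if abs(emp['sp'][j] - approx['sp'][j]) > w:
            return False
    return True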
It is important to note that, with probability at least 1\u2212\u03b4, the target task \u03b8\u2217 is never eliminated from \u00af \u0398t (see Lemma 3 in Appendix B). Step 3. Checking whether to stop After building the con\ufb01dence set, the algorithm checks whether the optimal policy of some active model is (\u03f5 \u22122\u22061+\u03b3 1\u2212\u03b3 )-optimal in all other models in \u00af \u0398t, in which case it stops and returns this policy. As we shall see in our analysis, this ensures that the returned policy is also \u03f5-optimal for \u03b8\u2217. Step 4. Deciding where to query the generative model The \ufb01nal step involves choosing the next state-action pair St, At from which to obtain a sample. This is a key point as the sampling rule is what directly determines the samplecomplexity of the algorithm. As discussed previously, our algorithm eliminates the models from \u00af \u0398t until the stopping condition is veri\ufb01ed. Therefore, a good sampling rule should aim at minimizing the stopping time, i.e., it should aim at eliminating as soon as possible all models that prevent the algorithm from stopping. The design of our strategy is driven by this principle. Given the set of active models \u00af \u0398t, we compute, for each s, a, a score It(s, a), which we refer to as the index of s, a, that is directly related to the information to discriminate between any two active models using s, a only (lines 20-22). Then, we choose the s, a that maximizes the index, which can be interpreted as sampling the state-action pair that allows us to discard an active model in the shortest possible time. We con\ufb01rm this intuition in our analysis later. Formally, our information measure is de\ufb01ned as follows. De\ufb01nition 1 (Information for model discrimination). Let \u03b8, \u03b8\u2032 \u2208\u0398. For e \u2206x s,a := [e \u2206x s,a(\u03b8, \u03b8\u2032) \u22128\u2206]+, with x \u2208 {r, p}, the information for discriminating between \u03b8 and \u03b8\u2032 using reward/transition samples from s, a are, respectively, Ir s,a(\u03b8, \u03b8\u2032) = min \uf8f1 \uf8f2 \uf8f3 e \u2206r s,a e \u03c3r \u03b8(s, a) !2 , e \u2206r s,a \uf8fc \uf8fd \uf8fe, Ip s,a(\u03b8, \u03b8\u2032) = min \uf8f1 \uf8f2 \uf8f3 e \u2206p s,a e \u03c3p \u03b8(s, a; \u03b8) !2 , (1 \u2212\u03b3)e \u2206p s,a \uf8fc \uf8fd \uf8fe. The total information is the maximum of these two, Is,a(\u03b8, \u03b8\u2032) = max \b Ir s,a(\u03b8, \u03b8\u2032), Ip s,a(\u03b8, \u03b8\u2032) \t . The information I is a fundamental tool for our analysis and it can be understood as follows. The terms on the left-hand side are ratios of the squared deviation between the means of the random variables involved and their variance. If these random variables were Gaussian, this would be proportional \fSequential Transfer in Reinforcement Learning with a Generative Model to the Kullback-Leibler divergence between the distributions induced by the two models, which in turn is related to their mutual information. The terms on the right-hand side arise from our choice of Bernstein\u2019s con\ufb01dence intervals but have a minor role in the algorithm and its analysis. 3.2. Sample Complexity Analysis We now analyze the sample complexity of Algorithm 1. Throughout the analysis, we assume that the model uncertainty \u2206is such that Algorithm 1 enters the transfer mode and that the sample budget n is large enough to allow the identi\ufb01cation. 
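A minimal sketch of the index of Definition 1 above, evaluated at a fixed state-action pair for one pair of active models, is given below; the small constant guarding against zero variances is an assumption added only for numerical safety.

def information(delta_r, delta_p, sigma_r, sigma_p, model_err, gamma):
    # delta_r, delta_p: gaps between the two approximate models at (s, a);
    # sigma_r, sigma_p: the corresponding standard deviations; model_err: Delta.
    dr = max(delta_r - 8.0 * model_err, 0.0)
    dp = max(delta_p - 8.0 * model_err, 0.0)
    info_r = min((dr / max(sigma_r, 1e-12)) ** 2, dr)
    info_p = min((dp / max(sigma_p, 1e-12)) ** 2, (1.0 - gamma) * dp)
    return max(info_r, info_p)

The sampling rule (Step 4) then queries the state-action pair maximizing the largest such value over all pairs of active models.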
The opposite case is of minor importance as it reduces to the analysis of the chosen (\u03f5, \u03b4)-PAC algorithm. Theorem 1. Assume \u2206is such that Algorithm 1 enters the transfer mode. Let \u03c4 be the random stopping time and \u03c0\u03c4 be the returned policy. Then, with probability at least 1 \u2212\u03b4, \u03c0\u03c4 is \u03f5-optimal for \u03b8\u2217and the total number of queries to the generative model can be bounded by \u03c4 \u2264128 min{SA, |\u0398|} log(8SAn(|\u0398| + 1)/\u03b4) maxs,a min\u03b8\u2208\u0398\u03f5 Is,a(\u03b8\u2217, \u03b8) , where, for \u03ba\u03f5 := (1\u2212\u03b3)\u03f5 4 \u2212\u2206(1+\u03b3) 2 , the set \u0398\u03f5 \u2286\u0398 is \u0398\u03f5 := \u001a \u03b8 \f \f \f \u2225e r\u03b8 \u2212e r\u03b8\u2217\u2225> \u03ba\u03f5 \u2228\u2225(e p\u03b8 \u2212e p\u03b8\u2217) T e V \u2217 \u03b8\u2217\u2225> \u03ba\u03f5 \u03b3 \u001b . The proof, provided in Appendix B, combines standard techniques used to analyze PAC algorithms (Azar et al., 2013b; Zanette et al., 2019) with recent ideas in \ufb01eld of structured multi-armed bandits (Tirinzoni et al., 2020). At \ufb01rst glance, Theorem 1 looks quite different from the standard sample complexity bounds available in the literature (Strehl et al., 2009). We shall see now that it reveals many interesting properties. First, this result implies that PTUM is (\u03f5, \u03b4)-PAC as the sample complexity is bounded by polynomial functions of all the relevant quantities. Next, we note that, except for logarithmic terms, the sample complexity scales with the minimum between the number of tasks and the number of state-action pairs. As in practice, we expect the former to be much smaller than the latter, we get a signi\ufb01cant gain compared to the no-transfer case, where, even with a generative model, the sample complexity is at least linear in SA. The set \u0398\u03f5 can be understood as the set of all models in \u0398 whose optimal policy cannot be guaranteed as \u03f5-optimal for the target \u03b8\u2217. As Lemma 6 in Appendix B shows, it is suf\ufb01cient to eliminate all models in this set to ensure stopping. Our key result is that the sample complexity of PTUM is proportional to the one of an \u201doracle\u201d strategy that knows in advance the most informative state-action pairs to achieve this elimination. Note, in fact, that the denominator involves the information to discriminate any model in \u0398\u03f5 with \u03b8\u2217, but the latter is not known to the algorithm. The following result provides further insights into the improvements over the no-transfer case. Corollary 1. Let \u0393 be the minimum gap between \u03b8\u2217and any other model in \u0398, \u0393 := min \u03b8\u0338=\u03b8\u2217max n \u2225e r\u03b8 \u2212e r\u03b8\u2217\u2225, \u2225(e p\u03b8 \u2212e p\u03b8\u2217) T e V \u2217 \u03b8\u2217\u2225 o . Then, with probability at least 1 \u2212\u03b4, \u03c4 \u2264e O \u0012min{SA, |\u0398|} log(1/\u03b4) max{\u03932, \u03f52}(1 \u2212\u03b3)4 \u0013 . This result reveals that the sample complexity of Algorithm 1 does not scale with \u03f5, which is typically regarded as the main term in PAC bounds. That is, when \u03f5 is small, the bound scales with the minimum gap \u0393 between the approximate models. 
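The minimum gap of Corollary 1 can be computed directly from the approximate models. In the sketch below the sup-norm is assumed for the unsubscripted norms, consistent with the rest of the section, and the dictionary layout of the model set is illustrative.

import numpy as np

def minimum_gap(models, star_key):
    # models: dict mapping task id -> {'r': (S, A), 'p': (S, A, S), 'V': (S,)}.
    r_s, p_s, v_s = models[star_key]['r'], models[star_key]['p'], models[star_key]['V']
    gaps = []
    for name, m in models.items():
        if name == star_key:
            continue
        gap_r = np.max(np.abs(m['r'] - r_s))
        gap_p = np.max(np.abs((m['p'] - p_s) @ v_s))
        gaps.append(max(gap_r, gap_p))
    return min(gaps)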
Interestingly, the dependence on \u0393 is the same as the one obtained by Brunskill & Li (2013), but in our case it constitutes a worst-case scenario since \u0393 can be regarded as the minimum positive information for model discrimination, while Theorem 1 scales with the maximum one. Moreover, since our sample complexity bound is never worse than the one of Brunskill & Li (2013) and theirs achieves robustness to negative transfer, PTUM directly inherits this property. We note that \u0393 > 0 since, otherwise, two identical models would exist and one could be safely neglected. The key consequence is that one could set \u03f5 = 0 and the algorithm would retrieve an optimal policy. However, this requires the models to be perfectly approximated so as to enter the transfer mode. Finally, we point out that the optimal dependence on the discount factor was proved to be O(1/(1 \u2212\u03b3)3) (Azar et al., 2013b). Here, we get a slightly sub-optimal result since, for simplicity, we naively upper-bounded the variances with their maximum value, but a more involved analysis (e.g., those by Azar et al. (2013b); Sidford et al. (2018)) should lead to optimal dependence. 3.3. Discussion The sampling procedure of PTUM (Step 4) relies on the availability of a generative model to query informative stateaction pairs. We note that the de\ufb01nition of informative state-action pair and all other components of the algorithm are independent on the assumption that a generative model is available. Therefore, one could use ideas similar to PTUM even in the absence of a generative model. For instance, taking inspiration from E3 (Kearns & Singh, 2002), we could build a surrogate \u201cexploration\u201d MDP with high rewards in informative state-action pairs and solve this MDP to obtain a policy that autonomously navigates the true environment to collect information. Alternatively, we could use the information measure Is,a as an exploration bonus (Jian et al., 2019). We conjecture that the analysis of this kind of approaches would follow quite naturally from the one of PTUM under \fSequential Transfer in Reinforcement Learning with a Generative Model the standard assumption of \ufb01nite MDP diameter D < \u221e (Jaksch et al., 2010), for which the resulting bound would have an extra linear scaling in D as in prior works (Brunskill & Li, 2013). PTUM calls an (\u03f5, \u03b4)-PAC algorithm whenever the models are too inaccurate or whenever n queries to the generative model are not suf\ufb01cient to identify a near-optimal policy. This algorithm can be freely chosen among those available in the literature. For instance, we could choose the MaxQInit algorithm of Abel et al. (2018) which uses an optimistic value function to initialize the learning process of a PAC-MDP method (Strehl et al., 2009). In our case, the information about \u03b8\u2217collected through the generative model could be used to compute much tighter upper bounds to the optimal value function than those obtained solely from previous tasks, thus signi\ufb01cantly reducing the overall sample complexity. Alternatively, we could use the Finite-ModelRL algorithm of (Brunskill & Li, 2013) or the Parameter ELimination (PEL) method of (Dyagilev et al., 2008) by passing the set of survived models \u00af \u0398n instead of \u0398, so that the number of remaining eliminations is potentially much smaller than |\u0398|. 4. 
Learning Sequentially-Related Tasks We now show how to estimate the MDP models (i.e., the reward and transition probabilities) for each \u03b8 sequentially, together with the task-transition matrix T. The method we propose allows us to obtain con\ufb01dence bounds over these estimates which can be directly plugged into Algorithm 1 to \ufb01nd an \u03f5-optimal policy for each new task. We start by noticing that our sequential transfer setting can be formulated as a hidden Markov model (HMM) where the target tasks {\u03b8\u2217 h}h\u22651 are the unobserved variables, or hidden states, which evolve according to T, and the samples collected while learning each task are the agent\u2019s observations. Formally, consider the h-th task \u03b8\u2217 h and, with some abuse of notation, let b qh(u|s, a) and b ph(s\u2032|s, a) respectively denote the empirical reward distribution and transition model estimated after solving \u03b8\u2217 h and, without loss of generality, after collecting at least one sample from each state-action pair. In order to simplify the exposition and analysis, we assume, in this section only, that the reward-space is \ufb01nite with U elements.3 It is easy to see that E [b qh(u|s, a)|\u03b8\u2217 h = \u03b8] = q\u03b8(u|s, a) and E [b ph(s\u2032|s, a)|\u03b8\u2217 h = \u03b8] = p\u03b8(s\u2032|s, a). Furthermore, the random variables b qh(u|s, a) and b ph(s\u2032|s, a) are conditionally independent of all other variables given the target task \u03b8\u2217 h. Let oh \u2208Rd, for d = SA(S + U), be a vectorized version of these empirical models (i.e., our observation vector), oh = [vec(b q); vec(b p)]. Let O \u2208Rd\u00d7k be 3The proposed methods can be applied to continuous reward distributions, e.g., Gaussian, with only minor tweaks. Algorithm 2 Sequential Transfer 1: Initialize set of approximate models { f M1 \u03b8}\u03b8\u2208\u0398, model errors \u22061 \u2190\u221e, and initial active set e \u03981 \u2190\u0398 2: for h = 1, 2, . . . do 3: Receive new task \u03b8\u2217 h \u223cT(\u00b7|\u03b8\u2217 h\u22121) 4: Run Algorithm 1 with { f Mh \u03b8}\u03b8\u2208e \u0398h and \u2206h 5: Obtain estimates b qh(u|s, a) and b ph(s\u2032|s, a) 6: Estimate b Oh, b Th using Algorithm 3 in Appendix C 7: Update model estimates { f Mh+1 \u03b8 }\u03b8\u2208\u0398 from b Oh 8: Compute model error \u2206h 9: Predict set of initial models e \u0398h+1 from b Th 10: end for de\ufb01ned as O = [\u00b51, \u00b52, . . . , \u00b5k] with \u00b5j := E [oh|\u03b8\u2217 h = \u03b8j]. Intuitively, O has the (exact) \ufb02attened MDP models as its columns and can be regarded as the matrix of conditionalobservation means for our HMM. Then, the problem reduces to estimating the matrices T and O while facing sequential tasks, which is a standard HMM learning problem. Spectral methods (Anandkumar et al., 2012; 2014) have been widely applied for this purpose. Besides their good empirical performance, these approaches typically come with strong \ufb01nite-sample guarantees, which is a fundamental tool for our study. We also note that spectral methods have already been applied to solve problems related to ours, including transfer in multi-armed bandits (Azar et al., 2013a) and learning POMDPs (Azizzadenesheli et al., 2016; Guo et al., 2016). Here, we apply the tensor decomposition approach of Anandkumar et al. (2014). As their algorithm is quite involved, we include a brief description in Appendix C, while in the following we focus on analyzing the estimated HMM parameters. 
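A short sketch of how the observations fed to the spectral method are formed, followed by the high-level loop of Algorithm 2 written as comments; the helper names in the comments (run_ptum, spectral_decomposition, and so on) are placeholders for the routines the pseudo-code delegates to Algorithm 1 and to Appendix C, not actual library calls.

import numpy as np

def observation_vector(q_hat, p_hat):
    # o_h = [vec(q_hat); vec(p_hat)] with q_hat of shape (S, A, U) and
    # p_hat of shape (S, A, S), so that d = S * A * (S + U).
    return np.concatenate([q_hat.ravel(), p_hat.ravel()])

# High-level loop of Algorithm 2 (placeholder helpers):
#
# models, delta, active = init_models(), np.inf, all_tasks
# observations = []
# for h in range(1, H + 1):
#     policy, q_hat, p_hat = run_ptum(models, active, delta)    # Algorithm 1
#     observations.append(observation_vector(q_hat, p_hat))
#     O_hat, T_hat = spectral_decomposition(observations)       # Algorithm 3 (App. C)
#     models = unflatten_models(O_hat)
#     delta = error_bound(h)                                    # Section 4.1
#     active = predict_next_tasks(T_hat, active)                # Section 4.2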
Our high-level sequential transfer procedure, which combines PTUM with HMM learning, is presented in Algorithm 2. The approach keeps a running estimate of all MDP models { f Mh \u03b8}\u03b8\u2208\u0398 together with a bound on their errors \u2206h. These estimates are used to solve each task via PTUM (line 4) and updated by plugging the resulting observations into the spectral method (lines 5-7). Then, the error bounds of the new estimates are computed as explained in Section 4.1 (line 8). Finally, the estimated task-transition matrix is used to predict which tasks are more likely to occur and the resulting set e \u0398h+1 is used to run PTUM at the next step. 4.1. Error Bounds for the Estimated Models We now derive \ufb01nite-sample bounds on the estimation error of each MDP model. First, we need the following assumption, due to Anandkumar et al. (2012), to ensure that we can recover the HMM parameters from the samples. Assumption 2. The matrix O is full column-rank. Furthermore, the stationary distribution of the underlying Markov \fSequential Transfer in Reinforcement Learning with a Generative Model chain, \u03c9 \u2208P(\u0398), is such that \u03c9(\u03b8) > 0 for all \u03b8 \u2208\u0398. As noted by Anandkumar et al. (2012), this assumption is milder than assuming minimal positive gaps between the different models (as was done, e.g., by Brunskill & Li (2013) and Liu et al. (2016)). We now present the main result of this section, which provides deviation inequalities between true and estimated models that hold uniformly over time. Theorem 2. Let { f Mh \u03b8}\u03b8\u2208\u0398,h\u22651 be the sequence of MDP models estimated by Algorithm 2, with e ph,\u03b8(s\u2032|s, a) and e qh,\u03b8(u|s, a) the transition and reward distributions, e V \u2217 h,\u03b8 the optimal value functions, and e \u03c3r h,\u03b8(s, a), e \u03c3p h,\u03b8(s, a; \u03b8\u2032) the corresponding variances. There exist constants \u03c1, \u03c1r, \u03c1p, \u03c1\u03c3r, \u03c1\u03c3p such that, for \u03b4\u2032 \u2208(0, 1), if h > \u03c1 log 2h2SA(S + U) \u03b4\u2032 , then, with probability at least 1 \u2212\u03b4\u2032, the following hold simultaneously for all h \u22651, s \u2208S, a \u2208A, and \u03b8, \u03b8\u2032 \u2208\u0398: |r\u03b8(s, a) \u2212e rh,\u03b8(s, a)| \u2264\u03c1r r log ch,\u03b4\u2032 h , |(p\u03b8(s, a) \u2212e ph,\u03b8(s, a)) T e V \u2217 h,\u03b8\u2032| \u2264\u03c1p r log ch,\u03b4\u2032 h , |\u03c3r \u03b8(s, a) \u2212e \u03c3r h,\u03b8(s, a)| \u2264\u03c1\u03c3r r log ch,\u03b4\u2032 h , |\u03c3r \u03b8(s, a; \u03b8\u2032) \u2212e \u03c3p h,\u03b8(s, a; \u03b8\u2032)| \u2264\u03c1\u03c3p r log ch,\u03b4\u2032 h , where ch,\u03b4\u2032 := \u03c02h2SA(S + U)/\u03b4\u2032. To favor readability, we collapsed all terms of minor relevance to our purpose into the constants \u03c1. These are functions of the given family of tasks (through maximum/minimum eigenvalues of the covariance matrices introduced previously) and of the underlying Markov chain. Full expressions are reported in Appendix D. These bounds provide us with the desired deviations between the true and estimated models and can be used as input to the PTUM algorithm to learn each task in the sequential setup. It is important to note that, like other applications of spectral methods to RL problems (Azar et al., 2013a; Azizzadenesheli et al., 2016), the constants \u03c1 are not known in practice and should be regarded as a parameter of the algorithm. 
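Since the deviations of Theorem 2 all share the same 1/sqrt(h) form, a single scalar error bound can be fed to Algorithm 1. In the sketch below, rho stands for the largest of the constants (unknown in practice and treated as a parameter, as discussed above), and the second function is the accuracy check at line 2 of Algorithm 1.

import numpy as np

def model_error(h, S, A, U, rho, delta_prime):
    # rho * sqrt(log(c_{h,delta'}) / h) with c_{h,delta'} = pi^2 h^2 S A (S + U) / delta'.
    c = (np.pi ** 2) * (h ** 2) * S * A * (S + U) / delta_prime
    return rho * np.sqrt(np.log(c) / h)

def transfer_mode(model_err, eps, gamma):
    # PTUM enters the transfer mode only if the models are accurate enough.
    return model_err < eps * (1.0 - gamma) / (4.0 * (1.0 + gamma))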
Such a parameter can be interpreted as the con\ufb01dence level in a standard con\ufb01dence interval. Finally, regardless of constants, these deviations decay at the classic rate of O(1/ \u221a h), so that the agent recovers the perfect models of all tasks in the asymptotic regime. This also implies that the agent needs to observe O(1/\u03f52) tasks before PTUM enters the transfer mode, which is theoretically tight in the worst-case (Feng et al., 2019) but quite conservative in practice. 4.2. Exploiting the Inter-Task Dynamics We now show how the agent can exploit the task-transition matrix T to further reduce the sample complexity. Suppose that T and the identity of the current task \u03b8\u2217 h are known. Then, through T(\u00b7|\u03b8\u2217 h), the agent has prior knowledge about the next task and it could, for instance, discard all models whose probability is low enough without even passing them to the PTUM algorithm. Using this insight, we can design an approach to pre-process the initial set of models by replacing T with its estimate b T. Let b Th be the estimated task-transition matrix and \u00af \u0398h be the set of active models returned by Algorithm 1. Then, by design of the algorithm, \u00af \u0398h contains the target task \u03b8\u2217 h with suf\ufb01cient probability and thus we can predict the probability that \u03b8\u2217 h+1 is equal to some \u03b8 as P \u03b8\u2032\u2208\u00af \u0398h b Th(\u03b8, \u03b8\u2032). Our idea is to check whether this probability, plus some con\ufb01dence value, is lower than a certain threshold \u03b7. In such a case, we discard \u03b8 from the initial set of models e \u0398h+1 to be used at the next step. The following theorem ensures that, for a well-chosen threshold value, the target model is never discarded from the initial set at any time with high probability. Theorem 3. Let \u03b4\u2032 \u2208(0, 1) and \u03b4 \u2264 \u03b4\u2032 3m2 be the con\ufb01dence value for Algorithm 1. Suppose that, before each task h, a model \u03b8 is eliminated from the initial active set if: X \u03b8\u2032\u2208\u00af \u0398h b Th(\u03b8, \u03b8\u2032) + \u03b4k + \u03c1T k r log(9kdm2/\u03b4\u2032) h \u2264\u03b7. Then, for \u03b7 = \u03b4\u2032 3km2 , with probability at least 1 \u2212\u03b4\u2032, at any time the true model is never eliminated from the initial set. As before, we collapsed minor terms into the constant \u03c1T . When T is sparse (e.g., only a small number of successor tasks is possible), this technique could signi\ufb01cantly reduce the sample complexity to identify a near-optimal policy as many models are discarded even before starting PTUM. 5. Related Works In recent years, there has been a growing interest in understanding transfer in RL from a theoretical viewpoint. The work by Brunskill & Li (2013) and its follow-up by Liu et al. (2016) are perhaps the most related to ours. The authors consider a multi-task scenario in which the agent faces \ufb01nitely many tasks drawn from a \ufb01xed distribution. Although their methods are designed for task identi\ufb01cation, they do not actively seek information as our algorithm does and the resulting guarantees are therefore weaker. Azar et al. (2013a) present similar ideas in the context of multi-armed bandits. Like our work, they employ spectral methods to learn tasks sequentially, without however exploiting any temporal correlations. Dyagilev et al. (2008) propose a method for task identi\ufb01cation based on sequential hypothesis testing. 
Their algorithm explores the environment by \fSequential Transfer in Reinforcement Learning with a Generative Model 0 20 40 60 80 0 10 20 30 40 Episodes Expected Return PEL PTUM MaxQInit RMAX 0 200 400 600 800 0 0.2 0.4 0.6 0.8 Episodes Expected Return PEL PTUM MaxQInit RMAX Figure 1. Policy transfer from known models. (left) Optimism gains information. (right) High-reward states are poorly informative. 0 20 40 60 80 1 1.2 1.4 Tasks Normalized Sample Complexity Static Sequential 0 20 40 60 80 0.6 0.8 1 1.2 Tasks Normalized Expected Return Static Sequential MaxQInit Figure 2. Sequential transfer experiment. (left) The normalized sample complexity. (right) Performance of the computed policies. running optimistic policies and, like ours, aims at eliminating models from a con\ufb01dence set. Mann & Choe (2012) and Abel et al. (2018) study how to to achieve jumpstart. The idea is to optimistically initialize the value function of the target task and then run any PAC algorithm. The resulting methods provably achieve positive transfer. Mahmud et al. (2013) cluster solved tasks in a few groups to ease policy transfer, which relates to our spectral learning of MDP models. Our setup is also related to other RL settings, including contextual MDPs (Hallak et al., 2015; Jiang et al., 2017; Modi et al., 2018; Sun et al., 2018), where the decision process depends on latent contexts, and hiddenparameter MDPs (Doshi velez & Konidaris, 2013; Killian et al., 2017), which follow a similar idea. Finally, we notice that the PAC-MDP framework has been widely employed to study RL algorithms (Strehl et al., 2009). Many of the above-mentioned works take inspiration from classic PAC methodologies, including the well-known E3 (Kearns & Singh, 2002), R-MAX (Brafman & Tennenholtz, 2002), and Delayed Q-learning (Strehl et al., 2006), and so does ours. 6. Experiments The goal of our experiments is twofold. First, we analyze the performance of PTUM when the task models are known, focusing on the comparison between identi\ufb01cation and jumpstart strategies. Then, we consider the sequential setting and show that our approach progressively transfers knowledge and exploits the temporal task correlations to improve over \u201dstatic\u201d transfer methods. Due to space constraints, here we present only the high-level setup of each experiment and discuss the results. We refer the reader to Appendix E for all the details and further results. All our plots report averages of 100 runs with 99% Student\u2019s t con\ufb01dence intervals. Identi\ufb01cation vs Jumpstart For this experiment, we adopt a standard 12 \u00d7 12 grid-world divided into two parts by a vertical wall (i.e., the 4-rooms domain of Sutton et al. (1999) with only two rooms). The agent starts in the left room and must reach a goal state in the right one, with the two rooms connected by a door. We consider 12 different tasks, whose models are known to the agent, with different goal and door locations. We compare PTUM with three baselines: RMAX (Brafman & Tennenholtz, 2002), which does not perform any transfer, RMAX with MaxQInit (Abel et al., 2018), in which the Q-function is initialized optimistically to achieve a jumpstart, and PEL (Dyagilev et al., 2008), which performs model identi\ufb01cation. Not to give advantage to PTUM, as the other algorithms run episodes online with no generative model, in our plots we count each sample taken by PTUM as one episode and assign it the minimum reward value. Once PTUM returns a policy, we report its online performance. 
The results in Figure 1(left) show that the two identi\ufb01cation methods \ufb01nd an optimal policy in a very small number of episodes. The jumpstart of MaxQInit is delayed since the optimistic Q-function directs the agent towards the wall, but \ufb01nding the door takes many samples. PEL, which is also optimistic, instantly identi\ufb01es the solution since attempts to cross the wall in locations without a door immediately discriminate between tasks. This is in fact an example where the optimism principle leads to both rewards and information. To understand why this is not always the case, we run the same experiment by removing the wall and adding multiple goal locations with different reward values. Some goals have uniformly high values over all tasks (thus they provide small information) while others are more variable (thus with higher information). In Figure 1(right), we see that both PEL and MaxQInit achieve a high jumpstart, as they immediately go towards one of the goals, but converge much more slowly than PTUM, which focuses on the informative ones. Sequential Transfer We consider a variant of the objectworld domain by Levine et al. (2011). Here, we have a 5 \u00d7 5 grid where each cell can contain one of 11 possible items with different values (or no items at all). There are 8 possible tasks, each of which randomizes the items with probabilities inversely proportional to their value. In order to show the bene\ufb01ts of exploiting temporal correlations, we design a sparse task-transition matrix in which each task has only a few possible successors. We run our sequential transfer method (Algorithm 2) and compare it with a variant that ignores the inter-task dynamics and only performs \u201dstatic\u201d transfer (i.e., only the estimated MDP models \fSequential Transfer in Reinforcement Learning with a Generative Model are used). We note that model-learning in this variant reduces to the spectral method proposed by Azar et al. (2013a) for bandits. For the initial phase of the learning process, where the transfer condition is not satis\ufb01ed, we solve each task by sampling state-action pairs uniformly as in Azar et al. (2013b). Figure 2(left) shows the normalized number of samples required by PTUM to solve each task starting from the \ufb01rst step in which the algorithm enters the transfer mode4. We can appreciate that the sample complexities of both variants decay as the number of tasks grows, meaning that the learned models become more and more accurate. As expected, the sequential version outperforms the static one by exploiting the temporal task correlations. It is important to note that the normalized sample complexity of our algorithm goes below 1 (i.e., below the oracle). This is a key point as the sample complexity of oracle PTUM is computed with the set of all models as input, while the sequential approach \ufb01lters this set based on the estimated task-transition matrix. In Figure 2(right), we con\ufb01rm that both variants of our approach return near-optimal policies at all steps (all are \u03f5-optimal for the chosen value of \u03f5). We also report the performance of the policies computed by MaxQInit when the same maximum-sample budget as our approach is given (to be fair, we overestimated it as 5 to 10 times larger than what oracle PTUM needs). 
Although for MaxQInit we used the oracle optimistic initialization (with the true Q-functions), the algorithm is not able to obtain near-optimal performance using the given budget, while PTUM achieves it despite the estimated models. 7. Conclusion We studied two important questions in lifelong RL. First, we addressed the problem of quickly identifying a near-optimal policy to be transferred among the solutions of previouslysolved tasks. The proposed approach sheds light on how to design strategies that actively seek information to reduce identi\ufb01cation time, and our sample complexity bounds con\ufb01rm a signi\ufb01cant positive transfer. Second, we showed that learning sequentially-related tasks is not signi\ufb01cantly harder than the common case where they are statically sampled from a \ufb01xed distribution. By combining these two building blocks, we obtained an algorithm that sequentially learns the task structure, exploits temporal correlations, and quickly identi\ufb01es the solution of each new task. We con\ufb01rmed these properties in simple simulated domains, showing the bene\ufb01ts of identi\ufb01cation over jumpstart strategies and of exploiting temporal task relations. Our work opens up several interesting directions. Our ideas for task identi\ufb01cation could be extended to the online setting, where a generative model is not available. Here, we 4The sample complexity to solve each task is divided by the one of the oracle version of PTUM with known models would probably need a more principled trade-off between information and rewards, which can be modeled by different objectives (such as regret). Furthermore, it would be interesting to study the sequential transfer setting in large continuous MDPs. Here we could take inspiration from recent papers on meta RL that learn latent task representations (Humplik et al., 2019; Lan et al., 2019; Rakelly et al., 2019; Zintgraf et al., 2019) and meta-train policies that seek information for quickly identifying the task parameters." + }, + { + "url": "http://arxiv.org/abs/2005.11593v1", + "title": "A Novel Confidence-Based Algorithm for Structured Bandits", + "abstract": "We study finite-armed stochastic bandits where the rewards of each arm might\nbe correlated to those of other arms. We introduce a novel phased algorithm\nthat exploits the given structure to build confidence sets over the parameters\nof the true bandit problem and rapidly discard all sub-optimal arms. In\nparticular, unlike standard bandit algorithms with no structure, we show that\nthe number of times a suboptimal arm is selected may actually be reduced thanks\nto the information collected by pulling other arms. Furthermore, we show that,\nin some structures, the regret of an anytime extension of our algorithm is\nuniformly bounded over time. For these constant-regret structures, we also\nderive a matching lower bound. Finally, we demonstrate numerically that our\napproach better exploits certain structures than existing methods.", + "authors": "Andrea Tirinzoni, Alessandro Lazaric, Marcello Restelli", + "published": "2020-05-23", + "updated": "2020-05-23", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The widely studied multi-armed bandit (MAB) (Lai and Robbins, 1985; Bubeck and Cesa-Bianchi, 2012) problem is one of the simplest sequential decision-making settings in which a learner faces the exploration-exploitation dilemma. 
At each time t, the learner chooses an arm It from a finite set A and receives a random reward Xt whose unknown distribution depends on the chosen arm. The goal is to maximize the cumulative reward (or, equivalently, to minimize the regret w.r.t. the best arm) over a horizon n, which requires the agent to trade off between exploring arms to understand their uncertain outcomes and exploiting those that have performed best in the past. The classic MAB problem, in which the rewards of the different arms are uncorrelated, is now theoretically well understood. In their seminal paper, Lai and Robbins (1985) provided the first asymptotic problem-dependent lower bound on the regret. Several simple yet near-optimal strategies have then been proposed, such as UCB1 (Auer et al., 2002), Thompson Sampling (TS, Thompson, 1933), and KL-UCB (Garivier and Cappé, 2011). However, the assumption that the arms are uncorrelated might be too general. In many applications, such as recommender systems or health-care, arms exhibit known structural properties that bandit algorithms could exploit to significantly speed up the learning process (in recommender systems, for instance, it is often possible to cluster users in a few types based on their preferences; once the type of user is known, the value of each item is fixed). Several specific structures have been addressed in the literature. Linear bandits are a well-known example, in which the mean reward of each arm is a linear function of some unknown parameter. Several algorithms have been proposed for these settings, such as extensions of UCB (Abbasi-Yadkori et al., 2011) and TS (Agrawal and Goyal, 2013; Abeille and Lazaric, 2017). However, these approaches, mostly based on the optimism in the face of uncertainty (OFU) principle, have been proved not asymptotically optimal (Lattimore and Szepesvari, 2017). Examples of other specific structures include combinatorial bandits (Cesa-Bianchi and Lugosi, 2012), Lipschitz bandits (Magureanu et al., 2014), ranking bandits (Combes et al., 2015), unimodal bandits (Yu and Mannor, 2011), etc. Recently, there has been a growing interest in designing bandit strategies to exploit general structures, where the learner is provided with a subset of all possible bandit problems containing the (unknown) problem she has to face. The structured UCB algorithm, proposed almost-simultaneously by Lattimore and Munos (2014) and Azar et al. (2013), applies the OFU principle to general structures. Atan et al. (2018) proposed a greedy algorithm for the special case where all arms are informative, while Wang et al. (2018) extended these settings to consider correlations only within certain groups of arms and independence among them. Gupta et al. (2018) generalized UCB and TS to exploit the structure and quickly identify sub-optimal arms. One of the interesting findings of these works is that, in some structures, constant regret (i.e., independent of n) is possible. In the remainder, we shall call these strategies confidence-based since they explicitly maintain the uncertainties about the true bandit and use these to trade off exploration/exploitation. 
Although conceptually simple, con\ufb01dence-based strategies are typically hard to design and analyze in a fully structure-aware manner. In fact, in structured problems, pulling an arm provides not only a sample of its mean, but also information about the bandit problem itself through the knowledge of the overall structure. In turn, information about the problem itself potentially allow to re\ufb01ne the estimates of the means of all arms. Combes et al. (2017) made a signi\ufb01cant step in exploiting this interplay between arms and bandit problems in the very de\ufb01nition of the algorithm itself. The authors derived a structure-aware lower bound characterizing the optimal pull counts as the solution to an optimization problem. Their algorithm, OSSB, approximates this solution and achieves asymptotic optimality for any general structure. However, since the lower bound depends on the true (unknown) bandit at hand, this approach requires to force some exploration to guarantee a su\ufb03ciently accurate solution. For this reason, we shall call this kind of strategy forced-exploration. Compared to con\ufb01dence-based ones, it can be intractable in many structures and it remains an open question how well it performs in \ufb01nite time. In this paper, we focus on the widely-applied con\ufb01dence-based strategies for structured bandits. Our contributions are as follows. 1) We propose an algorithm running through phases. At the beginning of each phase, the set of bandit models compatible with the con\ufb01dence intervals computed so far is built and the corresponding optimal arms are repeatedly pulled in a round-robin fashion, until the end of the phase. For this strategy, we prove an upper bound on the expected regret that, compared to existing bounds, better shows the potential bene\ufb01ts of exploiting the structure. The key \ufb01nding is that the number of pulls to a sub-optimal arm i can be signi\ufb01cantly reduced by exploiting the information obtained while pulling other arms, and notably the arm that is most informative for this purpose, i.e., the arm for which the mean of the true bandit di\ufb00ers the most from that of any other bandit in which arm i is optimal. This is in contrast to existing methods, which rely exclusively on the samples obtained from arm i to identify its suboptimality (a property that is true for the unstructured settings). 2) Since our algorithm requires to know the horizon n, we design a practical anytime extension for which, under the same assumptions as in (Lattimore and Munos, 2014), we derive a constant-regret bound with a better scaling in the relevant structuredependent quantities. 3) For certain structures that satisfy the aforementioned assumption, we also derive a matching lower bound that shows the optimality of our algorithm in the constant-regret regime. 4) We report numerical simulations in some simple illustrative structures that con\ufb01rm our theoretical \ufb01ndings. 2 Preliminaries We follow similar notation and notions to formalize MAB with structure as in (Agrawal et al., 1988; Graves and Lai, 1997; Burnetas and Katehakis, 1996; Azar et al., 2013; Lattimore and Munos, 2014; Combes et al., 2017). We denote by \u0398all the collection of all bandit problems \u03b8 with a set of arms A and whose reward distributions {\u03bdi}i\u2208A are bounded in [0, 1]2. We refer to each \u03b8 \u2208\u0398all as a bandit (problem), or model. 
We denote by \u00b5i(\u03b8) the mean reward of arm i in model \u03b8 and let \u00b5\u2217(\u03b8) := maxi\u2208A \u00b5i(\u03b8). For the sake of readability, we assume that the corresponding optimal arm, i\u2217(\u03b8) := argmaxi\u2208A \u00b5i(\u03b8), is unique for all models. The sub-optimality gap of arm i \u2208A is \u2206i(\u03b8) := \u00b5\u2217(\u03b8) \u2212\u00b5i(\u03b8), while the model gap w.r.t. \u03b8\u2032 \u2208\u0398all is \u0393i(\u03b8, \u03b8\u2032) := |\u00b5i(\u03b8) \u2212\u00b5i(\u03b8\u2032)|. It is known that the gaps \u2206characterize the complexity of a bandit problem in the unstructured case. As we shall see, the model gaps \u0393 play the analogous role in structured problems. A structure \u0398 \u2286\u0398all is a subset of possible models. For instance, a linear structure is a set of models whose mean rewards can be written as a linear combination of given features. We denote by A\u2217(\u0398), abbreviated A\u2217when \u0398 is clear from context, the set of arms that are optimal for at least one model in \u0398, while \u0398\u2217 i is the set of models in which arm i is optimal. Let \u03b8\u2217\u2208\u0398all be the true model and \u2126:= {\u0398\u2032 \u2286 \u0398all | \u03b8\u2217\u2208\u0398\u2032}. A (structured) bandit algorithm \u03c0 receives as input a structure \u0398 \u2208\u2126and de\ufb01nes a strategy for choosing the arm It given the history Ht\u22121 = (I1, X1, . . . , It\u22121, Xt\u22121)3. Our performance measure is the expected regret after n steps, R\u03c0 n(\u03b8\u2217, \u0398) := n\u00b5\u2217(\u03b8\u2217) \u2212E\u03c0,\u03b8\u2217 \" n X t=1 \u00b5It(\u03b8\u2217) # . Note that the regret depends on \u0398 through the strategy \u03c0. In the remaining, whenever \u03b8 is dropped from a model-dependent quantity, we implicitly refer to \u03b8\u2217. 2As usual, this assumption can be relaxed to subGaussian noise with no additional complications. 3Whenever \u03c0 receives as input \u0398all, it reduces to the standard MAB case. \fAndrea Tirinzoni, Alessandro Lazaric, Marcello Restelli Structured UCB Structured UCB (SUCB)4 is a natural extension of the OFU principle to general structures and it reduces to UCB whenever the structure \u0398 provided as input is the set of all possible bandit problems (i.e., \u0398all). At each step t, the algorithm builds a con\ufb01dence set \u02dc \u0398t \u2286\u0398 containing all the models compatible with the con\ufb01dence intervals built for each arm and it pulls the optimistic arm It = argmaxi\u2208A sup\u03b8\u2208\u02dc \u0398t \u00b5i(\u03b8). While taking the optimistic arm ensures that \u201cgood\u201d arms are selected, re\ufb01ning the con\ufb01dence set \u02dc \u0398t allows to exploit the structure to possibly discard arms more rapidly. Lattimore and Munos (2014) derived the same upper bound to the regret as the one of UCB without making any assumption on set \u0398. On the other hand, Azar et al. (2013) derived a more structure-aware bound, but only for \ufb01nite \u0398. The next theorem combines the best of these analyses (see proof in App. B). We \ufb01rst introduce two quantities that conveniently characterize the number of samples needed to distinguish between models. For any \u0398\u2032 \u2208\u2126and A\u2032 \u2286A, we de\ufb01ne: \u03a8(\u0398\u2032, A\u2032) := inf \u03b8\u2208\u0398\u2032 max j\u2208A\u2032 \u03932 j(\u03b8, \u03b8\u2217), (1) \u03c8(\u0398\u2032, A\u2032) := arginf \u03b8\u2208\u0398\u2032 max j\u2208A\u2032 \u03932 j(\u03b8, \u03b8\u2217). 
(2) It is known that the number of pulls to an arm i that are su\ufb03cient to distinguish between \u03b8\u2217and any \u03b8 is bounded as O(1/\u03932 i (\u03b8, \u03b8\u2217)) with high-probability (Azar et al., 2013). Then, we can interpret \u03a8(\u0398\u2032, A\u2032) as proportional to the inverse number of pulls required from the most e\ufb00ective arm in A\u2032 to distinguish \u03b8\u2217from the model \u03c8(\u0398\u2032, A\u2032), i.e., the bandit problem in \u0398\u2032 that is most similar to \u03b8\u2217in terms of model gaps. For this reason, we refer to \u03c8(\u0398\u2032, A\u2032) as the hardest model in \u0398\u2032 using arms in A\u2032. Finally, we de\ufb01ne the following sets of optimistic models w.r.t. \u03b8\u2217: \u0398+ := {\u03b8 \u2208\u0398 : \u00b5\u2217(\u03b8) > \u00b5\u2217(\u03b8\u2217)} and \u0398+ i := {\u03b8 \u2208\u0398+ : i\u2217(\u03b8) = i}. Theorem 1. There exist constants c, c\u2032 > 0 such that for any model \u03b8\u2217\u2208\u0398all and any structure \u0398 \u2208\u2126, the expected regret at time n of the SUCB algorithm (Lattimore and Munos, 2014) is upper-bounded as RSUCB n (\u03b8\u2217, \u0398) \u2264 X i\u2208A\u2217\\{i\u2217} c\u2206i(\u03b8\u2217) log n \u03a8(\u0398+ i , {i}) + c\u2032. This result shows that SUCB is able to leverage the knowledge of \u0398 to improve over UCB, which relies only on \u0398all. First, the summation is limited to arms that are optimal in at least one model in \u0398. Second, the number of pulls of a sub-optimal arm i depends 4The algorithm was originally called UCB-S by Lattimore and Munos (2014) and mUCB by Azar et al. (2013). on the model gap \u0393i(\u03b8+ i , \u03b8\u2217) w.r.t. the hardest model \u03b8+ i = \u03c8(\u0398+ i , {i}). This measures the number of pulls necessary to distinguish \u03b8+ i from \u03b8\u2217by pulling i. This gap can be much larger than the sub-optimality gap \u2206i(\u03b8\u2217) which appears in unstructured settings (e.g., UCB), thus signi\ufb01cantly reducing the \ufb01nal regret. While UCB-based algorithms are proved to be optimal (i.e., they match the asymptotic lower bound of Lai and Robbins (1985)), evaluating the optimality of Thm. 1 is less obvious. We need to \ufb01rst introduce a speci\ufb01c type of structures. We say that \u0398 is a worstcase structure if it belongs to the set \u2126wc := \b \u0398 \u2208\u2126| \u2200i \u0338= i\u2217: \u03a8(\u0398+ i , {i}) = \u03a8(\u00af \u0398+ i , {i}) \t , where \u00af \u0398+ i := {\u03b8 \u2208\u0398+ i | maxj\u0338=i \u0393j(\u03b8, \u03b8\u2217) = 0} is the subset of optimistic models that are indistinguishable from \u03b8\u2217except in their optimal arm. Thus, a worstcase structure is such that the hardest optimistic models cannot be distinguished from \u03b8\u2217except in their optimal arm. Note that \u0398all \u2208\u2126wc. An asymptotic lower bound for these structures has already been provided by Burnetas and Katehakis (1996). We state here the version for Gaussian bandits with \ufb01xed variance equal to 1 to facilitate comparison with the upper-bounds. Theorem 2 (Burnetas and Katehakis (1996)). For any \u0398 \u2286\u2126wc and uniformly convergent strategy \u03c0, lim inf n\u2192\u221e R\u03c0 n(\u03b8\u2217, \u0398) log n \u2265 X i\u2208A\u2217\\{i\u2217} \u2206i(\u03b8\u2217) \u03a8(\u0398+ i , {i}). We refer the reader to (Garivier et al., 2018) for a simple proof and the de\ufb01nition of uniformly convergent strategies. 
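As an illustration of how these quantities can be evaluated for a finite structure, the sketch below computes $\Psi(\Theta', A')$ and the corresponding hardest model $\psi(\Theta', A')$ from a matrix of mean rewards. The matrix representation and the function name are choices made here for the example; note that these quantities are defined relative to the true model, so they appear in the bounds rather than inside the learning algorithm, and the sketch takes the true model as an input.

import numpy as np

def hardest_model(mu, true_model, candidate_models, arms):
    # mu[theta, i]: mean reward of arm i in model theta (finite structure)
    # candidate_models: indices forming Theta'; arms: indices forming A'
    # Returns Psi(Theta', A') and the index of the model attaining the infimum.
    best_val, best_theta = float("inf"), None
    for theta in candidate_models:
        # largest squared model gap achievable by pulling some arm in A'
        max_gap_sq = max((mu[true_model, j] - mu[theta, j]) ** 2 for j in arms)
        if max_gap_sq < best_val:
            best_val, best_theta = max_gap_sq, theta
    return best_val, best_theta

For instance, the quantity $\Psi(\Theta^+_i, \{i\})$ appearing in Theorems 1 and 2 is obtained by restricting candidate_models to the optimistic models in which arm i is optimal and arms to the single arm i.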
The immediate consequence of Theorem 2 is that SUCB is asymptotically order-optimal for all worst-case structures. 3 Structured Arm Elimination Our structured arm elimination (SAE) strategy (Algorithm 1) is a phased algorithm inspired by Improved UCB (Auer and Ortner, 2010). In each phase h, the algorithm keeps a confidence set containing the models such that the mean of each arm i does not deviate too much from the empirical one $\hat{\mu}_{i,h-1}$ according to its number of pulls $T_i(h-1)$, both computed at the end of the previous phase. Then, all active arms (i.e., those that are optimal for at least one of the models in the confidence set) are played until a well-chosen pull count is reached. Such count is computed to ensure that all models that are sufficiently distant from the target $\theta^*$ (according to an exponentially-decaying removal threshold $\tilde{\Gamma}_h$) are discarded from the confidence set. Once all the models in which a certain arm $i \in A$ is optimal have been eliminated, i is labeled as inactive and no longer pulled. Algorithm 1 (Structured Arm Elimination, SAE). Require: set of models $\Theta$, horizon n, scalars $\alpha > 0$, $\beta \ge 1$. Initialization: $\tilde{\Theta}_0 \leftarrow \Theta$ (confidence set), $\tilde{A}_0 \leftarrow A^*(\Theta)$ (set of active arms), $\tilde{\Gamma}_0 \leftarrow 1$ (removal threshold). For each phase $h = 0, 1, \dots$: play all active arms in a round-robin fashion until $\lceil (\alpha \log n / \tilde{\Gamma}_h^2)(1 + 1/\beta)^2 \rceil$ pulls are reached for all $i \in \tilde{A}_h$; update the confidence set $\tilde{\Theta}_{h+1} \leftarrow \{\theta \in \Theta \mid \forall i \in A : |\hat{\mu}_{i,h} - \mu_i(\theta)| < \sqrt{\alpha \log n / T_i(h)}\}$ (we implicitly assume this condition to hold for arms that have never been pulled before); update the set of active arms $\tilde{A}_{h+1} = A^*(\tilde{\Theta}_{h+1}) \cap \tilde{A}_h$; decrease the removal threshold $\tilde{\Gamma}_{h+1} \leftarrow \tilde{\Gamma}_h/2$. Algorithm 1 can be applied to any set of models (not only finite ones) as long as we can determine the set of optimal arms at each step. This is an optimization problem that can be solved efficiently for, e.g., linear, piecewise-linear, and convex structures, while it becomes intractable in general. Note that SAE is not an optimistic algorithm since it might pull arms that are never optimistic w.r.t. $\theta^*$. This property is due to the phased nature of the algorithm, such that no optimistic bias in selecting the active arms is used, unlike in SUCB. While in unstructured problems SUCB and SAE reduce to UCB and improved UCB, respectively, and have similar regret guarantees (i.e., each arm is pulled roughly the same amount of times in the two algorithms), in structured problems they may behave very differently, as we shall see in the next examples. 3.1 Examples Figure 1 presents two simple structures in which SUCB and SAE significantly differ. The model set is divided in different regions. Since all bandits in the same region have, for the purpose of our discussion, the same properties, we call $\theta_1$ any model in the first part, $\theta_2$ any model in the second, and so on. Note that the following comments hold for an ideal realization in which certain high-probability events occur. In the structure of Figure 1(left), arm 2 is never optimistic since its mean is always below the value of the optimal arm $\mu_1(\theta_1)$. Therefore, SUCB never pulls it and needs only to discard the optimistic arm 3. 
This, in turn, takes $O(1/\Gamma_3^2(\theta_1, \theta_2))$ pulls of such arm, which can be rather large. Since SAE pulls also arm 2, the large gap $\Gamma_2(\theta_1, \theta_2)$ ($\Gamma_2$ in the figure) allows it to discard arm 3 much sooner. [Figure 1: Two structures in which SUCB and SAE significantly differ. The true model is any in the shaded region. (left) SUCB never pulls an informative arm. (right) SUCB discards an informative arm too early.] From the definition of the algorithm, SAE also needs to discard arm 2. Once again, this can be done quickly due to the large gap $\Gamma_1(\theta_1, \theta_3)$ and the fact that the optimal arm 1 is always pulled. In the structure of Figure 1(right), the optimistic bias makes SUCB pull the arms starting from the one with the highest value, arm 2, downwards to the optimal one, arm 1. Since the gap $\Gamma_2(\theta_1, \theta_3)$ ($\Gamma_2$ in the figure) is larger than $\Gamma_2(\theta_1, \theta_4)$, SUCB implicitly discards $\theta_3$, and so arm 4, before arm 2. Thus, once both these arms have been eliminated, the algorithm takes $O(1/\Gamma_3^2(\theta_1, \theta_2))$ pulls of arm 3 to discard the arm itself. By simultaneously pulling all four arms, SAE discards arm 3 first using the pulls of arm 4 (the one prematurely discarded by SUCB) due to the large gap $\Gamma_4(\theta_1, \theta_2)$ ($\Gamma_4$ in the figure). Finally, the deletion of the remaining two sub-optimal arms occurs with the same number of pulls as SUCB, and it can be verified that the overall regret is much smaller. 3.2 Regret Analysis In order to upper bound the regret of Alg. 1, we need to characterize the arms pulled in each phase, which are specified by the sets of active arms $\{\tilde{A}_h\}_h$. Since these sets are random quantities, we cannot study them directly. Instead, we introduce a deterministic sequence of active arm sets $\{A_h\}_h$ that effectively works as a proxy for $\{\tilde{A}_h\}_h$ and, under certain high-probability events, allows us to define how many samples are needed for arms to be discarded. We now provide intuitions (made formal in the proof of the regret bound) on how such a sequence is built. Clearly, we have $A_0 = \tilde{A}_0 = A^*(\Theta)$ by definition. Since all arms in $A_0$ are pulled in h = 0, and recalling the meaning of $\Psi$ (Equation 1), our well-chosen pull counts are sufficient to prove that all arms i such that $\Psi(\Theta^*_i, A_0) \ge \tilde{\Gamma}_0^2$ are discarded. Let us call the set of these discarded arms $\bar{A}_0$ and apply this reasoning inductively by setting $A_1 = A_0 \setminus \bar{A}_0$. Unfortunately, it is in general not possible to conclude that $A_1 = \tilde{A}_1$ since other arms might be discarded. Therefore, we build an additional set $A_h$ of those arms that are guaranteed to be active in phase h. The main intuition is that, if we can prove that certain arms are still active, we can also show that the algorithm uses their information (i.e., the model-gaps) to discard certain other arms/models faster. Imagine that an oracle provides us with the set $A_h$. Then, for $h \ge 0$ we have $\bar{A}_h := \{ i \in A_h \mid \tilde{\Gamma}_h \le \inf_{\theta \in \Theta^*_i} \max_{j \in A_h \cup \{i\}} \Gamma_j(\theta, \theta^*) \}$, with $A_0 = A^*(\Theta)$ and $A_{h+1} = A_h \setminus \bar{A}_h$ for $h \ge 1$. 
Given these sets, we have A0 := A\u2217(\u0398) and Ah := \u001a i \u2208Ah | \u02dc \u0393h\u22121 > k\u03b2 inf \u03b8\u2208\u0398\u2217 i max j\u2208A\u2217(\u0398) \u0393j(\u03b8, \u03b8\u2217) 2[h\u2212\u00af hj\u22121]+ \u001b for all h \u22651, where k\u03b2 := 1 \u03b2\u22121 q (\u03b2 + 1)2 + 1 log n and \u00af hj := maxh\u2208N+{h | j \u2208Ah} is the last phase in which arm j is active in our deterministic sequence {Ah}h. This is essentially the set of arms for which the number of pulls to the active arms at the previous phase is below the removal threshold by a margin (de\ufb01ned by k\u03b2). Finally, we de\ufb01ne the set of arms that are active in the last phase when i is active as A\u2217 i = A\u00af hi \u222a{i}. The following theorem is the key result of this paper. It shows that the regret incurred by SAE for arm i is inversely proportional to the maximum model-gap (taken over the set of arms that are active when arm i is discarded) w.r.t. the hardest model in \u0398\u2217 i . Theorem 3. Let \u03b2 \u22651, \u03b1 = \u03b22, n \u226564, and c\u03b2 := 4(1 + \u03b22). Then, RSAE n (\u03b8\u2217, \u0398) \u2264 X i\u2208A\u2217\\{i\u2217} c\u03b2\u2206i(\u03b8\u2217) log n \u03a8(\u0398\u2217 i , A\u2217 i ) + 2|A\u2217(\u0398)|. One of the key novelties, and complications, in the proof (reported in App. C) is that, in order to carry out a fully structure-aware analysis, we do not only care about proving that sub-optimal arms are not pulled after certain phases, but also about guaranteeing that some arms are not discarded too early since their pulls might allow to discard other models/arms. The parameter \u03b2 plays an important role for this purpose. In particular, k\u03b2 controls the sets of arms that, with high probability, are guaranteed to be active at certain phases. For example, for large n, setting \u03b2 = 3 yields k\u03b2 \u22432, which in turn implies that Ah is the set of arms such that \u02dc \u0393h > inf\u03b8\u2208\u0398\u2217 i maxj\u2208A\u2217(\u0398) \u0393j(\u03b8,\u03b8\u2217) 2[h\u2212\u00af hj \u22121]+ . This is close to saying that all the arms that are not eliminated in phase h are also active in such phase. 3.3 Discussion First, as a sanity check, we verify that the regret bound of Theorem 3 is never worse than the one of UCB. That is, SAE is never negatively a\ufb00ected by the knowledge of the structure and, whenever applied to unstructured problems, the algorithm is, apart from multiplicative/additive constants, \ufb01nite-time optimal. Proposition 1. The SAE algorithm is always subUCB, in the sense that there exist constants c, c\u2032 > 0 such that its regret satis\ufb01es RSAE n (\u03b8\u2217, \u0398) \u2264 X i\u2208A\\{i\u2217} c log n \u2206i(\u03b8\u2217) + c\u2032. The key property of Thm. 3 is that the regret suffered for discarding a sub-optimal arm i does not necessarily scale with the model gaps of such arm (i.e., \u03a8(\u0398\u2217 i , {i})) but with those of the most e\ufb00ective arm in A\u2217 i . Thus, compared to SUCB, in which the elimination of a model \u03b8 \u2208\u0398\u2217 i requires O(1/\u03932 i (\u03b8, \u03b8\u2217)) pulls of arm i, SAE needs only O(1/ maxj\u2208A\u2217 i \u03932 j(\u03b8, \u03b8\u2217)), which is by de\ufb01nition always smaller. Note that, to be precise, SUCB can potentially eliminate models using the pulls of any arm since the con\ufb01dence sets are built as in SAE. 
However, in general, it is not possible to prove the same regret bound since the optimism induces a speci\ufb01c pull order that might prevent the algorithm from choosing the arm with the largest model gap. Obviously, SAE does not know this arm in advance and, therefore, ensures it is pulled by choosing all active arms. However, the additional regret incurred to achieve this property can make the algorithm, in some cases, worse than SUCB. In fact, a key di\ufb00erence is that SUCB stops playing a sub-optimal arm i when all optimistic models in \u0398+ i are discarded, while SAE needs to eliminate all models in which arm i is optimal (even non-optimistic ones). Therefore, although SAE improves the elimination of all optimistic models, it su\ufb00ers further regret for discarding non-optimistic ones and, in general, the two algorithms are not comparable. A special case are those structures in which the hardest models for each arm i are in the optimistic set, \u03c8(\u0398\u2217 i , A\u2217 i ) \u2208\u0398+ i , in which SAE improves over SUCB. These optimistic structures are de\ufb01ned as: \u2126opt := {\u0398 \u2208\u2126| \u2200i \u0338= i\u2217: \u03a8(\u0398+ i , A\u2217 i ) = \u03a8(\u0398\u2217 i , A\u2217 i )}. Proposition 2. If \u0398 \u2208\u2126opt, SAE is sub-SUCB, in the sense that its regret can be upper bounded by the one of Theorem 1. Since SUCB is order-optimal in \u2126wc and SAE is subSUCB in \u2126opt, Theorem 2 immediately implies that SAE is order optimal in \u2126wc \u2229\u2126opt. Although we are \fA Novel Con\ufb01dence-Based Algorithm for Structured Bandits Algorithm 2 Anytime SAE (ASAE) Require: Set of models \u0398, scalars \u03b1 > 0, \u03b2 \u22651, \u03b7 > 0 1: Initialization: \u02dc n0 \u21902, \u02dc \u0398\u22121 \u2190\u0398 2: Foreach period k = 0, 1, . . . do 3: Initialize con\ufb01dence sets: \u02dc \u0398k 0 \u2190\u02dc \u0398k\u22121, \u02dc Ak 0 \u2190A\u2217(\u02dc \u0398k 0) 4: Run Algorithm 1 with n = \u02dc nk, \u02dc \u03980 = \u02dc \u0398k 0, and \u02dc A0 = \u02dc Ak 0 5: Update horizon: \u02dc nk+1 \u2190\u02dc n1+\u03b7 k 6: End able to guarantee the optimality in less cases, Proposition 2 ensures that SAE improves over SUCB in a wide variety of structures. Unfortunately, we were not able to prove the optimality of our algorithm in any structure besides the worst-case ones. 4 Anytime SAE and Constant Regret Algorithm 1 cannot be applied whenever the horizon n is unknown, as the length of each phase explicitly depends on it. This has the additional drawback of preventing constant regret from being achieved since a log n term naturally appears in the resulting bound. As shown by Lattimore and Munos (2014), there exist structures in which constant regret can be obtained and it would be desirable for our strategy to exploit this fact. We, therefore, propose an anytime extension (Algorithm 2). The idea is once again similar to the one by Auer and Ortner (2010): we split the horizon into di\ufb00erent periods with exponentially increasing length. Therefore, in Algorithm 2, and throughout this section, we overload our notation by adding a superscript k to denote the period of each period-dependent quantity. The key property is that our approach does not reset in each period (as Auer and Ortner (2010) do) but retains the last con\ufb01dence sets. Though this makes the proofs more involved, we shall see that it allows us to guarantee a constant regret. 
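The period schedule just described can be written in a few lines. The sketch below is only an illustration: it assumes a routine run_sae(...) implementing Algorithm 1 that accepts and returns the current confidence set, a signature invented here for the example.

def anytime_sae(structure, run_sae, total_budget, eta=1.0):
    # Run SAE over periods of exponentially increasing length, carrying the
    # confidence set from one period to the next instead of resetting it.
    confidence_set = structure             # start from the whole structure
    n_k, spent = 2, 0                      # initial period length (n_0 = 2)
    while spent < total_budget:
        horizon = min(n_k, total_budget - spent)
        confidence_set = run_sae(structure, horizon, confidence_set)
        spent += horizon
        n_k = int(round(n_k ** (1 + eta)))  # n_{k+1} = n_k^(1 + eta)
    return confidence_set

With eta = 1 the period lengths grow as 2, 4, 16, 256, and so on, so very few periods are needed to cover any horizon of interest.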
One can see the analogy between our non-resetting phased approach and the standard way of handling unknown horizons in online algorithms. In the latter case, we typically replace log n with log t in the con\ufb01dence sets, while here we do the same with log \u02dc nk. Then, after proving that certain high-probability events occur at each time/period, we can carry out the proofs without forcing any reset. Due to the additional complications introduced by the anytime extension (in particular, controlling the sets Ah), we were able to prove only a weaker bound than the one in Theorem 3 which, however, retains the same bene\ufb01ts. The proofs are reported in Appendix D. Theorem 4. Let \u03b7 = 1, \u03b1 = 2, and \u03b2 = 1. Then, RASAE n (\u03b8\u2217, \u0398) \u2264 X i\u2208A\u2217\\{i\u2217} 192\u2206i(\u03b8\u2217) log n \u03a8(\u0398\u2217 i , {i, i\u2217}) + 6|A\u2217(\u0398)|. The new bound has the same form as the one of Algorithm 1, except for the fact that the set of active arms for eliminating each i is reduced to {i, i\u2217} \u2286A\u2217 i . Note, however, that the presence of these two arms is enough to prove Proposition 1 and 2. Remark 1. Algorithm 2 is sub-UCB and, under the same conditions as in Proposition 2, is also sub-SUCB. We now prove a constant-regret bound for Algorithm 2. We need the following assumption from (Lattimore and Munos, 2014), which was proven both necessary and su\ufb03cient to achieve constant regret. Assumption 1 (Informative optimal arm). The structure \u0398 satis\ufb01es \u0393\u2217:= inf \u03b8\u2208\u0398\\\u0398\u2217 i\u2217 \u0393i\u2217(\u03b8, \u03b8\u2217) > 0. In words, when a model is \u0393\u2217-distant (or less) in arm i\u2217 from \u03b8\u2217, its optimal arm is still i\u2217. Therefore, pulling i\u2217eventually discards all sub-optimal arms. This is fundamental to guarantee that, after the algorithm has pulled i\u2217a su\ufb03cient number of times, no sub-optimal arm can become active again due to the increasing period length (hence we choose i\u2217forever). Theorem 5. Let \u03b7 = 1, \u03b1 = 5 2, \u03b2 = 1, \u00af t := 20|A\u2217(\u0398)| log 2 \u03932 \u2217 + 2|A\u2217(\u0398)|, and suppose Assumption 1 holds. Then, RASAE n (\u03b8\u2217, \u0398) \u2264 X i\u2208A\u2217\\{i\u2217} 480\u2206i(\u03b8\u2217) log \u00af t \u03a8(\u0398\u2217 i , {i, i\u2217}) + 9|A\u2217(\u0398)|. This bound improves over the one shown by Lattimore and Munos (2014) for SUCB in its dependence on \u00af t, which can be understood as the time at which the algorithm transitions to the constant regret regime. While Lattimore and Munos (2014) proved \u00af t \u2243O(max{1/\u03932 \u2217, 1/\u22062 min}), here we show that such time does not depend on the minimum gap \u2206min = mini:\u2206i(\u03b8\u2217)>0 \u2206i(\u03b8\u2217). This is intuitive since, by Assumption 1, O(1/\u03932 \u2217) pulls of i\u2217should be enough to identify the optimal arm. Although the analysis of SUCB can be improved by replacing the minimum suboptimality gap with the minimum model gap, it seems that this dependence is tight. As an example, consider a structure in which the optimal arm is very informative (\u0393\u2217\u226b0) but never optimistic. SUCB will never pull it until all optimistic models are discarded, which requires O(1/\u03932 min) steps in the worst case. 
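For a finite structure stored as a matrix of means, Assumption 1 can be checked directly by computing the quantity it introduces; the short sketch below only illustrates the definition of the gap on the optimal arm and is not part of the paper.

import numpy as np

def informative_optimal_arm_gap(mu, true_model):
    # Gamma* = infimum, over models whose optimal arm differs from i*(theta*), of the
    # gap |mu_{i*}(theta) - mu_{i*}(theta*)| on the optimal arm (Assumption 1: Gamma* > 0).
    i_star = int(np.argmax(mu[true_model]))
    gaps = [abs(mu[theta, i_star] - mu[true_model, i_star])
            for theta in range(mu.shape[0])
            if int(np.argmax(mu[theta])) != i_star]
    return min(gaps) if gaps else float("inf")

Assumption 1 holds whenever this value is strictly positive, in which case roughly O(1/Gamma*^2) pulls of the optimal arm suffice to eliminate every model with a different optimal arm.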
Note that, whenever it is applied to structures satisfying Assumption 1, the bound of Theorem 4 does not show constant regret since the proof uses an implicit worst-case argument (i.e., Assumption 1 is assumed false). \fAndrea Tirinzoni, Alessandro Lazaric, Marcello Restelli 5 Constant-Regret Lower Bound We have seen that SUCB and SAE are order-optimal for structures in \u2126wc and \u2126wc\u2229\u2126opt, respectively. One might wonder whether we can still guarantee optimality in some structures where constant regret is achievable (i.e., when Assumption 1 holds). We answer this question a\ufb03rmatively by deriving a \ufb01nite-time lower bound on the expected regret of any \u2019good\u2019 strategy. Note that the problem is non-trivial since, under Assumption 1, one cannot build hard models that di\ufb00er from the true bandit only in the mean of one arm as in the proof of standard lower-bounds (e.g., Burnetas and Katehakis, 1996). Before stating our result, we specify the class of strategies under consideration. We shall use the following de\ufb01nition due to Garivier et al. (2018), which have been adopted to derive \ufb01nite-time lower-bounds. De\ufb01nition 1 (Super-fast convergence). A strategy \u03c0 is super-fast convergent on a set \u0398 if there exists a constant c > 0 such that, for any model \u03b8 \u2208\u0398 and sub-optimal arm i \u2208A, it satis\ufb01es E\u03b8[Ti(n)] \u2264c log n \u2206i(\u03b8)2 . It is easy to see that UCB, SUCB, and SAE are examples of super-fast convergent strategies. Furthermore, we call the class of structures considered in the lower bound worst-case constant regret and de\ufb01ne it as \u2126cr :={\u0398 \u2208\u2126| \u2200\u03b8 \u2208\u0398 \\ \u0398\u2217 i\u2217: \u0393i\u2217(\u03b8, \u03b8\u2217) = \u0393\u2217\u2227\u0393j(\u03b8, \u03b8\u2217) = 0 \u2200j \u0338= i\u2217(\u03b8), i\u2217}. This can be understood as a generalization of the worst-case structure to make Assumption 1 hold. Due to the challenges in deriving the lower bound for large \u0393\u2217, we also need to assume that 0 < \u0393\u2217\u2264 O \u0012q 1 P i\u0338=i\u2217\u2206\u22122 i (\u03b8\u2217) \u0013 , with the precise dependence given in Appendix E. Note that \u0393\u2217is a function of the structure and the dependence was omitted for conciseness. We are now ready to state our result. Theorem 6. Let \u0398 \u2208\u2126cr and n \u2265 1 \u03932 \u2217. Then, for su\ufb03ciently small \u0393\u2217, the expected regret of any superfast convergent strategy \u03c0 can be lower bounded by R\u03c0 n(\u03b8\u2217, \u0398) \u2265 X i\u2208A\u2217\\{i\u2217} \u2206i(\u03b8\u2217) 2\u03a8(\u0398\u2217 i , {i}) log \u22062 4e2c\u03932 \u2217log 1 \u03932 \u2217 , where \u2206:= inf\u03b8\u2032\u2208\u0398\\\u0398\u2217 i\u2217\u2206i\u2217(\u03b8\u2032). The proof, which combines ideas from Garivier et al. (2018) and Degenne et al. (2018), is reported in Appendix E. Note that the lower bound is positive for su\ufb03ciently small \u0393\u2217. Apart from other constants, the dependence on \u0393\u2217matches the upper bound of Theorem 5. However, Theorem 5 seems tighter due to the larger set of arms in \u03a8 at the denominator. This is not surprising since the lower bound considers only structures with well-chosen hard models. It is easy to prove that, when SAE or SUCB are applied to structures in \u2126cr, the two bounds match. Other lower bounds for constant-regret settings have recently been derived. Bubeck et al. 
(2013) showed that, for the classic unstructured problems, it is enough to know µ∗ and a lower bound on the minimum gap to achieve a constant regret. Garivier et al. (2018) refined this result by showing that the knowledge of µ∗ alone actually suffices. Lattimore and Munos (2014) studied several specific structured problems where constant regret is (or is not) possible, providing both lower bounds and algorithms to match them. Finally, we note that the asymptotic lower bound by Combes et al. (2017) is zero when Assumption 1 holds as the regret scaled by log n correctly vanishes as n grows. Their algorithm reduces to a greedy strategy in this setting which is not necessarily finite-time optimal according to Theorem 6. 6 Numerical Simulations We perform two different classes of experiments. In the first one, we consider well-chosen structures that allow us to better understand the behavior of all algorithms. In the second one, we randomize the structures to provide a more general comparison. In all experiments, we run SAE and its anytime version (ASAE), SUCB, and UCB on Bernoulli bandits. We also compared to the WAGP algorithm of Atan et al. (2018), which however incurred linear regret in all our experiments (their assumptions never hold in our structures) and, therefore, is omitted from the plots. We use α = 2 for all algorithms and β = 1 for SAE. Each plotted curve is the average of 100 independent runs with 95% Student's t confidence intervals. Hand-coded Structures We first consider the structure of Figure 1(left). We set n = 10,000 and η = 0.1. The results are shown in Figure 2a. SUCB suffers a large regret for removing models in which arm 3 is optimal. On the other hand, SAE quickly discards these models by pulling arm 2, which, in turn, is eliminated by pulling arm 1. Hence the much lower regret, with the anytime version that performs slightly better. Notice also that Assumption 1 is verified and SAE obtains constant regret. SUCB eventually transitions to constant regret too but needs a longer horizon. Alternatively, we can show an example where SUCB is expected to perform better. We modify the structure of Figure 1(left) to make arm 2 non-informative (i.e., we set its mean to the highest value in the figure for all models) and run the experiment under the same setting. [Figure 2: Expected regret in (a) the structure of Figure 1(left), (b) the same structure with non-informative arm 2, (c) the structure of Figure 1(right), and (d) randomly-generated structures.] Figure 2b shows that, as expected, SAE suffers from some additional regret for discarding the useless arm and performs worse than SUCB. However, it remains sub-UCB as proved in Section 3.3. We now consider the structure of Figure 1(right). We set n = 500,000, η = 0.01, and report the results in Figure 2c. The arm ordering induced by SUCB (from the most optimistic to the optimal one) leads the algorithm to discard arm 4 before even pulling it once. Such arm, however, could be used to quickly discard arm 3, which is what SAE does. 
Notice that the larger regret of SAE with respect to its anytime counterpart is mainly due to the fact that phased procedures update the con\ufb01dence sets much less than online approaches. This drawback is alleviated in the anytime version, which reduces the duration of some of these phases and retains good empirical performance. Randomized Structures We now consider random structures. In each run, we \ufb01rst randomize a set of 100 models with 50 arms by drawing their means from the uniform distribution and we randomly choose the true model among them. Then, we build 50 additional \u2019hard\u2019 models by perturbing a random arm of the true model to become optimal and optimistic, and another random arm to become informative. In particular, the mean of the \ufb01rst random arm is set to \u00b5\u2217(\u03b8\u2217) + 0.2\u03f5, with \u03f5 \u223cU([0, 1]), while the second to 1/10 of the original mean (so that we potentially get a larger model gap). The results are shown in Figure 2d. Most of the regret su\ufb00ered by SUCB is due to the hard instances we introduced. Some of them are likely to be eliminated by informative arms, but this is not always guaranteed by the SUCB strategy. Both versions of SAE, on the other hand, implicitly exploit these informative arms, with the anytime version outperforming all alternatives. Once again, the original version suffers a high initial regret due to the phased procedure. 7 Discussion Similarly to most of related literature, our SAE algorithm con\ufb01rms that simple con\ufb01dence-based strategies can be designed to exploit general structures, though so far they have been proven optimal only for worstcase structures. Although it only pulls potentiallyoptimal arms, SAE is not optimistic. The design of non-optimistic algorithms is a key step towards optimality since it is known that OFU-based strategies are not optimal for general structures (Lattimore and Szepesvari, 2017; Combes et al., 2017; Hao et al., 2019). Our regret bounds fully re\ufb02ect the structureawareness and their derivation might be of independent interest for analyzing other approaches. Although considering phased strategies is one of our key choices to both obtain the desired algorithmic properties and simplify the proofs, we show empirically that SAE does not su\ufb00er from it too much. In particular, it outperforms online strategies in speci\ufb01c structures where informative arms exist that are not always pulled with the OFU principle. The key open question is how to design con\ufb01dencebased strategies that are optimal for general structures. The algorithms discussed in this paper have been proven optimal only for certain worst-case structures, while algorithms like OSSB are asymptotically optimal for general structures but require to force exploration to solve an oracle optimization problem. Whether the optimal pull counts of a lower-bound like the one by Combes et al. (2017) can be attained in con\ufb01dence-based settings and with good \ufb01nite-time performance remains unknown. We believe that recent advances in the context of pure exploration for bandit problems (M\u00b4 enard, 2019; Degenne et al., 2019) might provide useful insights into this problem. Furthermore, a \ufb01nite-time extension of the asymptotic lower bound for general structures, and the corresponding design of \ufb01nite-time optimal algorithms, is a challenging but interesting research direction. 
\fAndrea Tirinzoni, Alessandro Lazaric, Marcello Restelli" + }, + { + "url": "http://arxiv.org/abs/1805.10886v1", + "title": "Importance Weighted Transfer of Samples in Reinforcement Learning", + "abstract": "We consider the transfer of experience samples (i.e., tuples < s, a, s', r >)\nin reinforcement learning (RL), collected from a set of source tasks to improve\nthe learning process in a given target task. Most of the related approaches\nfocus on selecting the most relevant source samples for solving the target\ntask, but then all the transferred samples are used without considering anymore\nthe discrepancies between the task models. In this paper, we propose a\nmodel-based technique that automatically estimates the relevance (importance\nweight) of each source sample for solving the target task. In the proposed\napproach, all the samples are transferred and used by a batch RL algorithm to\nsolve the target task, but their contribution to the learning process is\nproportional to their importance weight. By extending the results for\nimportance weighting provided in supervised learning literature, we develop a\nfinite-sample analysis of the proposed batch RL algorithm. Furthermore, we\nempirically compare the proposed algorithm to state-of-the-art approaches,\nshowing that it achieves better learning performance and is very robust to\nnegative transfer, even when some source tasks are significantly different from\nthe target task.", + "authors": "Andrea Tirinzoni, Andrea Sessa, Matteo Pirotta, Marcello Restelli", + "published": "2018-05-28", + "updated": "2018-05-28", + "primary_cat": "cs.LG", + "cats": [ + "cs.LG", + "stat.ML" + ], + "main_content": "Introduction The goal of transfer in Reinforcement Learning (RL) (Sutton & Barto, 1998) is to speed-up RL algorithms by reusing knowledge obtained from a set of previously learned tasks. The intuition is that the experience made by learning source tasks might be useful for solving a related, but different, target task. Transfer across multiple tasks may be achieved in different ways. The available approaches differ in the type of information transferred (e.g., samples, value func1Politecnico di Milano, Milan, Italy 2SequeL Team, INRIA Lille, France. Correspondence to: Andrea Tirinzoni . Proceedings of the 35 th International Conference on Machine Learning, Stockholm, Sweden, PMLR 80, 2018. Copyright 2018 by the author(s). tions, parameters, policies, etc.) and in the criteria used to establish whether such knowledge could be bene\ufb01cial for solving the target or not. This work focuses on the problem of transferring samples from a set of source MDPs to augment the dataset used to learn the target MDP. To motivate our approach, consider a typical learning scenario where samples are costly to obtain. This is often the case in robotics applications, where the interaction with the real environment could be extremely time-consuming, thus reducing the number of samples available. The typical remedy of adopting a simulator often leads to sub-optimal solutions due to the differences with respect to the real environment. A more effective approach is to transfer the simulated samples to speedup learning in the target task. The transfer of samples has been widely studied in the supervised learning community. In particular, Crammer et al. (2008) formalized the problem from a theoretical perspective and provided generalization bounds for the transfer scenario. 
An interesting result is a trade-off between the number of tasks from which to transfer and the total number of samples. In RL, Taylor et al. (2008) and Lazaric et al. (2008) proposed almost simultaneously methods to transfer single samples. While the former method focused on a model-based approach, the latter one proposed a selective approach to transfer samples into a batch RL algorithm (e.g., Fitted Q-Iteration (Ernst et al., 2005)). Furthermore, Lazaric et al. (2008) considered a model-free approach to compute a similarity measure between tasks, which was used to decide which samples to transfer. More recently, Lazaric & Restelli (2011) analyzed the transfer of samples in batch RL from a theoretical perspective, demonstrating again the trade-off between the total number of samples and the number of tasks from which to transfer. Finally, Laroche & Barlier (2017) proposed a way to transfer all the samples to augment the dataset used by Fitted QIteration. The limitation of this approach resides in the restrictive assumption that all the tasks are assumed to share the same transition dynamics and differ only in the reward function. For a survey on transfer in RL, we refer the reader to (Taylor & Stone, 2009; Lazaric, 2012). One of the main drawbacks of many previous works is that, even after a detailed selection, transferred samples are used arXiv:1805.10886v1 [cs.LG] 28 May 2018 \fImportance Weighted Transfer of Samples in Reinforcement Learning in the target task without accounting for the differences between the original (source) MDP and the target one, thus introducing a bias even in the asymptotic case. In this paper, we present a novel approach to transfer samples into a batch RL algorithm. Unlike other works, we do not assume any particular similarity between tasks besides a shared stateaction space, and we develop a new model-based methodology to automatically select the relevance (importance weight) of each sample. Existing algorithms for transferring across different state-action spaces (e.g., Taylor et al., 2007) can be straightforwardly combined to our method. Our approach transfers all the samples, but their impact in solving the target task is proportional to their importance weight. To compute the importance weight of each sample, we rely on a non-parametric estimate of the MDP structure. In particular, we adopt Gaussian processes (Rasmussen & Williams, 2006) to estimate the reward and state transition models of the source and target tasks from samples. Then, we propose a robust way to compute two sets of importance weights, one for the reward model and one for the transition model. We introduce an approximate value iteration algorithm based on Fitted Q-iteration that uses such weights to account for the distribution shift introduced by the different MDPs, thus implicitly selecting which samples have higher priority based on their likelihood to be generated from the target MDP. We provide a theoretical analysis showing the asymptotic correctness of our approach and an empirical evaluation on two classical RL domains and a real-world task. 2. Preliminaries In this section, we start by introducing our mathematical notation. Then, we recall concepts of Markov decision processes and approximate value iteration. Finally, we formalize the transfer settings considered in this work. Notation. 
For a measurable space \u27e8\u2126, \u03c3\u2126\u27e9, we denote by \u2206(\u2126) the set of probability measures over \u03c3\u2126and by B(\u2126, L) the space of measurable functions over \u2126bounded by 0 < L < \u221e, i.e., \u2200f \u2208B(\u2126, L), \u2200x, |f(x)| \u2264L. Given a probability measure \u00b5, we de\ufb01ne the \u2113p-norm of a measurable function f as \u2225f\u2225p,\u00b5 = \u0000R |f|pd\u00b5 \u00011/p. Let z1:N be a Z-valued sequence (z1, . . . , zN) for some space Z. For DN = z1:N, the empirical norm of a function f : Z \u2192R is \u2225f\u2225p p,DN := 1 N PN i=1 |f(zi)|p. Note that when Zi \u223c\u00b5, we have that E[\u2225f\u2225p p,DN ] = \u2225f\u2225p p,\u00b5. Whenever the subscript p is dropped, we implicitly consider the \u21132-norm. Markov Decision Process. We de\ufb01ne a discounted Markov Decision Process (MDP) as a tuple M = \u27e8S, A, P, R, \u03b3\u27e9, where S is a measurable state space, A is a \ufb01nite set of actions, P : S \u00d7 A \u2192\u2206(S) is the transition probability kernel, R : S \u00d7 A \u2192\u2206(R) is the reward probability kernel, and \u03b3 \u2208[0, 1) is the discount factor. We suppose R(s, a) = E [R(\u00b7|s, a)] is uniformly bounded by rmax. A Markov randomized policy maps states to distributions over actions as \u03c0 : S \u2192\u2206(A). As a consequence of taking an action at in st, the agent receives a reward rt \u223c R(\u00b7|st, at) and the state evolves accordingly to st+1 \u223c P(\u00b7|st, at). We de\ufb01ne the action-value function of a policy \u03c0 as Q\u03c0(s, a) = E [P\u221e t=0 \u03b3trt | M, \u03c0, s0 = s, a0 = a] and the optimal action-value function as Q\u2217(s, a) = sup\u03c0 Q\u03c0(s, a) for all (s, a). Notice that Q is bounded by Qmax := rmax 1\u2212\u03b3 . Then, the optimal policy \u03c0\u2217is a policy that is greedy with respect to Q\u2217, i.e., for all s \u2208S, \u03c0\u2217(s) \u2208arg maxa\u2208A{Q\u2217(s, a)}. The optimal actionvalue function is also the unique \ufb01xed-point of the optimal Bellman operator L\u2217: B(S \u00d7 A, Qmax) \u2192B(S \u00d7 A, Qmax), which is de\ufb01ned by (L\u2217Q)(s, a) := R(s, a) + \u03b3 R S P(ds\u2032|s, a) maxa\u2032 Q(s\u2032, a\u2032) (e.g., Puterman, 1994). Approximate solution. Fitted Q-Iteration (FQI) (Ernst et al., 2005) is a batch RL algorithm that belongs to the family of Approximate Value Iteration (AVI). AVI is a value-based approach that represents Q-functions by a hypothesis space H \u2282B(S \u00d7 A, Qmax) of limited capacity. Starting from an initial action-value function Q0 \u2208H, at each iteration k \u22650, AVI approximates the application of the optimal Bellman operator in H such that Qk+1 \u2248L\u2217Qk. Formally, let DN = {\u27e8si, ai, s\u2032 i, ri\u27e9}N i=1 be a set of transitions such that (si, ai) \u223c\u00b5 and de\ufb01ne the empirical optimal Bellman operator as (b L\u2217Q)(si, ai) := ri +\u03b3 maxa\u2032 Q(s\u2032 i, a\u2032). Then, at each iteration k, FQI computes Qk+1 = arg min h\u2208H \r \r \rh \u2212b L\u2217Qk \r \r \r 2 DN . (1) Transfer settings. We consider a set of tasks, i.e., MDPs, {Mj = \u27e8S, A, Pj, Rj\u27e9, j = 0, . . . , m}, where M0 denotes the target and M1, . . . , Mm the sources. We suppose all tasks share the same state-action space and have potentially different dynamics and reward. Suppose that, for j = 0, . . . 
, m, we have access to a dataset of Nj samples from the j-th MDP, Dj = {\u27e8si, ai, s\u2032 i, ri\u27e9}Nj i=1, where state-action pairs are drawn from a common distribution \u00b5 \u2208\u2206(S \u00d7 A).1 The goal of transfer learning is to use the samples in D1, . . . , Dm to speed up the learning process in the target task M0. 3. Importance Weights for Transfer In this section, we introduce our approach to the transfer of samples. Recall that our goal is to exploit at best samples in 1This assumption can be relaxed at the price of a much more complex theoretical analysis. \fImportance Weighted Transfer of Samples in Reinforcement Learning Algorithm 1 Importance Weighted Fitted Q-Iteration Input: The number of iterations K, a dataset e D+ = Sm j=0 SNj i=0 n s(j) i , a(j) i , s\u2032(j) i , r(j) i , e w(j) r,i , e w(j) p,i o , a hypothesis space H Output: Greedy policy \u03c0K b R \u2190arg infh\u2208H 1 Zr P j,i e w(j) r,i \f \f \fh(s(j) i , a(j) i ) \u2212r(j) i \f \f \f 2 Q0 \u2190b R for k = 0, . . . , K \u22121 do Y (j) i \u2190e L\u2217Qk(s(j) i , a(j) i ), \u2200i, j Qk+1 \u2190arg inf h\u2208H 1 Zp P j,i e w(j) i,p \f \f \fh(s(j) i , a(j) i ) \u2212Y (j) i \f \f \f 2 end for \u03c0K(s) \u2190arg maxa\u2208A{QK(s, a)}, \u2200s \u2208S {D1, . . . , DM} to augment the dataset D0 used by FQI to solve the target task, thus speeding up the learning process. In the rest of the paper we exploit the fact that FQI decomposes the RL problem into a sequence of supervised learning problems. It is easy to notice that the optimization problem (1) is an instance of empirical risk minimization, where Xi = (si, ai) are the input data, Yi = (b L\u2217Qk)(si, ai) are the targets, and L(f(Xi), Yi) = |f(Xi) \u2212Yi|2 is a squared loss. As mentioned in the introduction, we aim to exploit all the available samples to solve the target task. Suppose we adopt a naive approach where we concatenate all the samples, i.e., e D = Sm j=0 Dj = Sm j=0 SNj i=0\u27e8sj i, aj i, s\u2032j i, rj i \u27e9, to solve (1). This approach suffers from sample selection bias (Cortes et al., 2008), i.e., samples are collected from different distributions or domains. In fact, although we assumed state-action pairs to be sampled from a \ufb01xed taskindependent distribution, the target variables Y are distributed according to the MDP they come from. A standard technique used to correct the bias or discrepancy induced by the distribution shift is importance weighting. This technique consists in weighting the loss function to emphasize the error on some samples and decrease it on others, to correct the mismatch between distributions (Cortes et al., 2008). The de\ufb01nition of the importance weight for the point X is w(X) = P(X)/Q(X) where P is the distribution of the target, and Q is the distribution according to which sample X is collected. In our speci\ufb01c case, given an arbitrary sample (X, Y ), its joint distribution under MDP Mj is P((X, Y )|Mj) = P(Y |X, Mj)\u00b5(X). Denote by (X(j) i , Y (j) i ) the i-th sample drawn from MDP Mj, then its importance weight is given by w(X(j) i , Y (j) i ) = P(Y (j) i |X(j) i ,M0) P(Y (j) i |X(j) i ,Mj). By feeding FQI on the full dataset e D with samples weighted by w(j) i (for short), we get an algorithm that automatically selects which samples to exploit, i.e., those that, based on the importance weights, are more likely to be generated from the target MDP. This approach looks appealing but presents several issues. 
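Before turning to these issues, the naive weighting scheme just described can be sketched as follows in Python, under the (unrealistic) assumption that the conditional densities P(Y|X, Mj) are available as callables; the functions target_density and source_density are illustrative placeholders, not part of the method proposed in this paper.

import numpy as np

def naive_importance_weights(samples, target_density, source_density):
    """Compute w = P(Y | X, M0) / P(Y | X, Mj) for samples drawn from a source MDP Mj.

    samples: list of (x, a, y) tuples, where (x, a) is the input and y the target variable
    target_density(x, a, y): conditional density of y under the target MDP M0 (assumed known)
    source_density(x, a, y): conditional density of y under the source MDP Mj (assumed known)
    """
    return np.array([target_density(x, a, y) / source_density(x, a, y)
                     for (x, a, y) in samples])

def weighted_squared_loss(predictions, targets, weights):
    # Importance-weighted squared loss correcting the sample selection bias that arises
    # when the datasets from all tasks are simply concatenated.
    return np.mean(weights * (predictions - targets) ** 2)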
First, the distribution P(Y |X, Mj) is, even in the case where the MDPs are known, very hard to characterize. Second, consider a simple case where we have a source MDP with the same transition dynamics as the target, but with entirely different reward. Then, the importance weights de\ufb01ned above are likely to be very close to zero for any source sample, thus making transfer useless. However, we would like a method able to leverage the fact that transition dynamics do not change, thus transferring only that part of the sample. To overcome the second limitation, we consider the following variation of the FQI algorithm. At the \ufb01rst iteration of FQI, we use all the samples to \ufb01t a model b R \u2248R of the target reward function: b R = arg inf h\u2208H 1 Zr m X j=0 Nj X i=0 w(j) r,i \f \f \fh(X(j) i ) \u2212r(j) i \f \f \f 2 , (2) where H \u2282B(S \u00d7 A, Qmax) is the hypothesis space we consider to represent action-value functions2 and w(j) r,i = R0(r(j) i |X(j) i ) Rj(r(j) i |X(j) i ) . (3) Problem (2) is unbiased if Zr = Pm j=0 Nj, though Zr = P i,j w(j) r,i is frequently used since it provides lower variance. The theoretical analysis is not affected by the choice of Zr, while in the experiments we will use Zr = P i,j w(j) r,i . Then, at each iteration k \u22650, FQI updates the Q-function as: Qk+1 = arg inf h\u2208H 1 Zp m X j=0 Nj X i=0 w(j) p,i \f \f \fh(X(j) i ) \u2212e Y (j) i \f \f \f 2 (4) where e Y (j) i = e L\u2217Qk(X(j) i ) := b R(s(j) i , a(j) i ) + \u03b3 maxa\u2032 Qk(s\u2032(j) i , a\u2032) and Q0 = b R. Intuitively, instead of considering the reward r(j) i in the dataset e D, we use b R(s(j) i , a(j) i ). Since the stochasticity due to the reward samples is now removed, only the transition kernel plays a role, and the importance weights are given by: w(j) p,i = P0(s\u2032(j) i |X(j) i ) Pj(s\u2032(j) i |X(j) i ) . (5) 2Differently from other works (e.g., Farahmand & Precup, 2012; Tosatto et al., 2017), we suppose, for the sake of simplicity, the hypothesis space to be bounded by Qmax. Although this is a strong assumption, it can be relaxed by considering truncated functions. We refer the reader to (Gy\u00a8 or\ufb01et al., 2006) for the theoretical consequences of such relaxation. \fImportance Weighted Transfer of Samples in Reinforcement Learning The resulting algorithm, named Importance Weighted Fitted Q-Iteration (IWFQI), is shown in Algorithm 1. In practice, we have to compute an estimate of w(j) r,i and w(j) p,i since Pj and Rj are unknown quantities. We postpone this topic to Section 5 since several approaches can be exploited. Instead, in the following section, we present a theoretical analysis that is independent of the way the importance weights are estimated. 4. Theoretical Analysis We now study the theoretical properties of our IWFQI algorithm. We analyze the case where we have samples from one source task, but no samples from the target task are available, i.e., m = 1, N0 = 0, and N1 = N. A generalization to the case where target samples or samples from more sources are available is straightforward, and it only complicates our derivation. To ease our notation, we adopt the subscript \u201cT\u201d and \u201cS\u201d to denote the target and the source. Furthermore, we want to emphasize that the results provided in this section are independent from the way the importance weights are estimated. Consider the sequence of action-value functions Q0, Q1, . . . , QK computed by IWFQI. 
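For reference, the procedure generating this sequence, i.e., Algorithm 1 with the weighted fits of Eqs. (2) and (4), can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: it uses an Extra-Trees regressor (as in the experiments of Section 7, here taken from scikit-learn), assumes the importance weights w_r and w_p have already been computed, and assumes state-action pairs are encoded as feature vectors; the greedy policy extraction is omitted.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def iwfqi(X, rewards, X_next_per_action, w_r, w_p, gamma, n_iterations):
    """Importance Weighted Fitted Q-Iteration (sketch).

    X: (L, d) features of the pooled state-action pairs from all tasks
    rewards: (L,) observed reward samples
    X_next_per_action: (L, |A|, d) features of (s'_i, a') for every action a'
    w_r, w_p: (L,) reward and transition importance weights (relative weights suffice,
              since rescaling all weights does not change the weighted least-squares minimizer)
    """
    # Eq. (2): weighted fit of the target reward model R_hat
    reward_model = ExtraTreesRegressor(n_estimators=50)
    reward_model.fit(X, rewards, sample_weight=w_r)

    # Q_0 = R_hat
    q_model = reward_model
    L, n_actions, d = X_next_per_action.shape
    for _ in range(n_iterations):
        # Targets Y_tilde_i = R_hat(s_i, a_i) + gamma * max_a' Q_k(s'_i, a')
        q_next = q_model.predict(X_next_per_action.reshape(L * n_actions, d))
        targets = reward_model.predict(X) + gamma * q_next.reshape(L, n_actions).max(axis=1)
        # Eq. (4): weighted regression with the transition importance weights
        q_model = ExtraTreesRegressor(n_estimators=50)
        q_model.fit(X, targets, sample_weight=w_p)
    return q_model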
At each iteration k, we incur in an error \u03f5k = L\u2217Qk \u2212Qk+1 in approximating the optimal Bellman operator. Our goal is to bound, in terms of such errors, \u2225Q\u2217\u2212Q\u03c0k\u22251,\u03c1, i.e., the expected error under distribution \u03c1 between the performance of the optimal policy and that of the policy \u03c0k greedy w.r.t. Qk. Here \u03c1 is an arbitrary evaluation distribution over S \u00d7 A that the user can freely choose. In practice, it might coincide with the sampling distribution \u00b5. Since IWFQI belongs to the family of AVI algorithms, we can resort to Theorem 3.4 in (Farahmand, 2011). We report here the version with \u21131-norm for the sake of completeness. Theorem 1. (Theorem 3.4 of (Farahmand, 2011)) Let K be a positive integer and Qmax \u2264rmax 1\u2212\u03b3 . Then, for any sequence (Qk)K k=0 \u2282B(S \u00d7 A, Qmax) and the corresponding sequence (\u03f5k)K k=0, where \u03f5k = L\u2217Qk\u2212Qk+1, we have: \u2225Q\u2217\u2212Q\u03c0K\u22251,\u03c1 \u2264 2\u03b3 (1 \u2212\u03b3)2 \" 2\u03b3KQmax + inf b\u2208[0,1] n C 1 2 VI,\u03c1,\u00b5(K; b)E 1 2 (\u03f50, . . . , \u03f5K\u22121; b) o # , where: E(\u03f50, . . . , \u03f5K\u22121; b) = K\u22121 X k=0 \u03b12b k \u2225\u03f5k\u22252 \u00b5. We refer the reader to Chapter 3 of (Farahmand, 2011) for the de\ufb01nitions of the coef\ufb01cients CVI,\u03c1,\u00b5 and \u03b1k. Intuitively, the bound given in Theorem 1 depends on the errors made by IWFQI in approximating the optimal Bellman operator at each iteration. Thus, our problem reduces to bounding such errors. Cortes et al. (2010) already provided a theoretical analysis of importance weighted regression. However, their results are not immediately applicable to our case since they only consider a regression problem where the target variable Y is a deterministic function of the input X. On the other hand, we have the more general regression estimation problem where Y is a random variable, and we want to learn its conditional expectation given X. Thus, we extend Theorem 4 of (Cortes et al., 2010) to provide a bound on the expected \u21132-error \u2225b h \u2212h\u2217\u2225\u00b5 between the hypothesis b h returned by a weighted regressor (with estimated weights e w) and the regression function h\u2217. Following (Cortes et al., 2010), we denote by Pdim(U) the pseudo-dimension of a real-valued function class U. The proof is in the appendix. Theorem 2. Let H \u2282 B(X, Fmax) be a functional space. Suppose we have a dataset of N i.i.d. samples D = {(xi, yi)} distributed according to Q(X, Y ) = q(Y |X)\u00b5(X), while P(X, Y ) = p(Y |X)\u00b5(X) is the target distribution. Assume |Y | \u2264Fmax almost surely. Let w(x, y) = p(y|x) q(y|x), e w(x, y) be any positive function, b h(x) = arg minf\u2208H b ED \u0002 e w(X, Y )|f(X) \u2212Y |2\u0003 , h\u2217(x) = Ep[Y |x], g(x) = Eq[ e w(x, Y )|x] \u22121, and M( e w) = p EQ[ e w(X, Y )2] + q b ED[ e w(X, Y )2], where b ED denotes the empirical expectation on D. Furthermore, assume d = Pdim({|f(x) \u2212y|2 : f \u2208H}) < \u221eand EQ[ e w(X, Y )2] < \u221e. Then, for any \u03b4 > 0, the following holds with probability at least 1 \u22122\u03b4: \u2225b h \u2212h\u2217\u2225\u00b5 \u2264inf f\u2208H \u2225f \u2212h\u2217\u2225\u00b5 + Fmax q \u2225g\u22251,\u00b5 + 213/8Fmax p M( e w) d log 2Ne d + log 4 \u03b4 N ! 3 16 + 2Fmax\u2225e w \u2212w\u2225Q Notice that this result is of practical interest outside of the reinforcement learning \ufb01eld. 
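Written out in LaTeX, the inequality of Theorem 2 (holding with probability at least 1 - 2*delta, with the quantities defined in its statement) reads:

\[
\|\hat{h} - h^*\|_{\mu} \;\le\; \inf_{f \in \mathcal{H}} \|f - h^*\|_{\mu}
\;+\; F_{\max}\sqrt{\|g\|_{1,\mu}}
\;+\; 2^{13/8} F_{\max} \sqrt{M(\tilde{w})}
\left( \frac{d \log \frac{2Ne}{d} + \log \frac{4}{\delta}}{N} \right)^{3/16}
\;+\; 2 F_{\max} \|\tilde{w} - w\|_{Q}.
\]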
Here it is used to bound the errors \u2225\u03f5k\u2225\u00b5 in order to state the following result. Theorem 3. Let H \u2282B(S \u00d7 A, Qmax) be a hypothesis space, \u00b5 a distribution over S \u00d7 A, (Qi)k+1 i=0 a sequence of Q-functions as de\ufb01ned in Equation (4), and L\u2217 the optimal Bellman operator of the target task. Suppose to have a dataset of N i.i.d. samples D drawn from the source task MS according to a joint distribution \u03c6S. Let wp, wr denote the ideal importance weights de\ufb01ned in (5) and (3), and e wr(r|s, a), e wp(s\u2032|s, a) denote arbitrary positive functions with bounded second moments. De\ufb01ne gr(s, a) = ERS[ e wr(r|s, a)|s, a] \u22121, M( e wr) = q E\u03c6R S [ e wr(r|s, a)2]+ q b ED[ e wr(r|s, a)2], where \fImportance Weighted Transfer of Samples in Reinforcement Learning \u03c6R S (r|s, a) = \u00b5(s, a)RS(r|s, a). Similarly, de\ufb01ne gp, M( e wp), and \u03c6P S (s\u2032|s, a) for the transition model. Then, for any \u03b4 > 0, with probability at least 1 \u22124\u03b4: \u2225L\u2217Qk \u2212Qk+1\u2225\u00b5 \u2264Qmax p \u2225gp\u22251,\u00b5 + 2rmax p \u2225gr\u22251,\u00b5 + 2Qmax\u2225e wp \u2212wp\u2225\u03c6P S + 4rmax\u2225e wr \u2212wr\u2225\u03c6R S + inf f\u2208H \u2225f \u2212(L\u2217)k+1Q0\u2225\u00b5 + 2 inf f\u2208H \u2225f \u2212R\u2225\u00b5 + Qmax 2\u221213 8 \u0010p M( e wp) + 2 p M( e wr) \u0011 d log 2Ne d + log 4 \u03b4 N ! 3 16 + k\u22121 X i=0 (\u03b3CAE(\u00b5))i+1\u2225\u03f5k\u2212i\u22121\u2225\u00b5, where CAE is the concentrability coef\ufb01cient of one-step transitions as de\ufb01ned in (Farahmand, 2011, De\ufb01nition 5.2). As expected, four primary sources of error contribute to our bound: (i) the bias due to estimated weights (\ufb01rst four terms), (ii) the approximation error (\ufb01fth and sixth term), (iii) the estimation error (seventh term), (iv) the propagation error (eighth term). Notice that, assuming to have a consistent estimator for the importance weights (an example is given in Section 5), the bias term vanishes as the number of samples N tends to in\ufb01nity. Furthermore, the estimation error decreases with N, thus vanishing as the number of samples increases. Thus, in the asymptotic case our bound shows that the only source of error is due to the limited capacity of the functional space H under consideration, as in most AVI algorithms. Furthermore, we notice that \ufb01tting the reward function and using it instead of the available samples propagates an error term through iterations, i.e., the approximation error inff\u2208H \u2225f \u2212R\u2225\u00b5. If we were able to estimate the importance weights for the typical case where both reward and transition samples are used, we could get rid of such error. However, since the resulting weights somehow depend on the joint densities between P and R, we expect their variance, as measured by M( e w), to be much bigger, thus making the resulting bound even larger. Furthermore, we argue that, when the reward function is simple enough and only a limited number of samples is available, a separate \ufb01t might be bene\ufb01cial even for plain FQI. In fact, the variance of the empirical optimal Bellman operator can be reduced by removing the source of stochasticity due to the reward samples at the cost of propagating a small approximation error through iterations. 
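The comparison between the two empirical operators discussed here can be made concrete with a short Python sketch: the standard operator uses the observed reward samples, while the variant adopted by IWFQI plugs in the fitted reward model. The reward_model argument stands for any regressor exposing a predict method and is an assumption of this example.

import numpy as np

def empirical_bellman_targets(rewards, q_next_all_actions, gamma):
    # Standard empirical operator: (L_hat Q)(s_i, a_i) = r_i + gamma * max_a' Q(s'_i, a')
    return rewards + gamma * q_next_all_actions.max(axis=1)

def fitted_reward_bellman_targets(reward_model, X, q_next_all_actions, gamma):
    # Variant with the fitted reward: (L_tilde Q)(s_i, a_i) = R_hat(s_i, a_i) + gamma * max_a' Q(s'_i, a')
    # The stochasticity of the reward samples is removed, at the cost of propagating
    # the approximation error of R_hat through iterations.
    return reward_model.predict(X) + gamma * q_next_all_actions.max(axis=1)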
The bounds for AVI, (e.g., Munos & Szepesv\u00b4 ari, 2008; Farahmand, 2011; Farahmand & Precup, 2012), can be straightforwardly extended to such case by adopting a procedure similar to the one described in the proof of Theorem 3. Finally, in most practical applications the reward function is actually known and, thus, does not need to be \ufb01tted. In such cases, it is possible to get rid of the corresponding terms in Theorem 3, allowing transfer to occur without errors even when rewards are completely different between tasks. 5. Estimation of Importance Weights In this section, we specify how to compute the importance weights. Since P and R are unknown, we only have access to an estimation of w(j) r,i and w(j) p,i used in (3) and (5), respectively. To obtain an approximation of the unknown densities, we consider Gaussian Processes (GPs) although any distribution matching technique and/or probabilistic model can be used. Gaussian Processes. We use the available samples to \ufb01t two Gaussian processes (GPs) (Rasmussen & Williams, 2006) for each task Mj: one for the transition model Pj and one for the reward model Rj. To motivate our choice, GPs have been successfully adopted to model stochastic dynamical systems with high-dimensional and continuous state-action spaces in many existing works (e.g., Kuss & Rasmussen, 2004; Deisenroth & Rasmussen, 2011; DoshiVelez & Konidaris, 2016; Berkenkamp et al., 2017). For the sake of simplicity, we only show how to compute the importance weights for the reward model. Our procedure straightforwardly generalizes to the transition model. Given a sample \u27e8s, a, r\u27e9from the j-th task, the j-th GP returns a Gaussian distribution over the reward\u2019s mean, i.e., r(s, a) \u223cN(\u00b5GPj(s, a), \u03c32 GPj(s, a)), which, together with the target GP\u2019s prediction, induces a distribution over the importance weights. In practice, the choice of a single importance weight can rely on some statistics of such distribution (e.g., its mean or mode). Perhaps not surprisingly, this is made non-trivial by the fact that explicitly characterizing such distribution is very complicated, and computing empirical statistics requires an expensive repeated sampling from the GPs\u2019 posteriors. Interestingly, the following theorem shows that this is not necessary when the reward model follows a Gaussian law, as the expected weights under their unknown distribution can be computed in closedform. Theorem 4 (Reward Weights in Gaussian Models). Assume each task to have Gaussian reward distribution Rj(\u00b7|s, a) = N \u0010 \u00b5(j) r (s, a), \u03c32 j (s, a) \u0011 with unknown mean. Given the available samples in e D, we build an estimate of the reward distribution such that, for any MDP Mj, r(j)(s, a) \u223cN(\u00b5GPj(s, a), \u03c32 GPj(s, a)). Then, given a sample \u27e8s, a, r\u27e9from the j-th MDP, its importance weight w = N(r|r(0)(s,a),\u03c32 0(s,a)) N(r|r(j)(s,a),\u03c32 j (s,a)) \u223cG, where G is the distribution induced by the GPs\u2019 predictions. Let C = \u03c32 j (s,a) \u03c32 j (s,a)\u2212\u03c32 GPj (s,a) and suppose \u03c32 GPj(s, a) < \u03c32 j (s, a), then EG [w] = C N \u0010 r \f \f\u00b5GP0(s,a),\u03c32 0(s,a)+\u03c32 GP0(s,a) \u0011 N \u0010 r \f \f\u00b5GPj (s,a),\u03c32 j (s,a)\u2212\u03c32 GPj (s,a) \u0011. (6) \fImportance Weighted Transfer of Samples in Reinforcement Learning The proof is in Appendix A. In practice, we estimate the importance weights by taking their expectation as in (6), i.e., e w = EG[w]. 
Intuitively, using the expected weights is more robust than merely taking the ratio of the estimated densities. Furthermore, the estimated weights converge to the true ones when the GP predictions are perfect, i.e., when \u00b5GP (s, a) = \u00b5r(s, a) and \u03c32 GP (s, a) \u21920, both in the source and in the target. This is a signi\ufb01cant advantage over the more common approach of density-ratio estimation (Sugiyama et al., 2012), where a parametric form for the weight function is typically assumed. One drawback is that the expectation diverges when \u03c32 GPj(s, a) > \u03c32 j , that is, when the source GP has a prediction variance that is greater than the intrinsic noise of the model. Notice, however, that this happens very rarely since the source GP is never asked to predict samples it has not seen during training. Furthermore, since in practice the model noise is unknown and has to be estimated, an overestimation is bene\ufb01cial as it introduces a regularization effect (Mohammadi et al., 2016), thus avoiding the problem mentioned above. 6. Related Work In (Taylor et al., 2008), the authors propose a method to transfer samples for model-based RL. Although they assume tasks might have different state-action space, intertask mappings are used to map source samples to the target. However, the proposed method does not account for differences in the transition or reward models, which could lead to signi\ufb01cant negative transfer when the tasks are different. Our approach, on the other hand, can selectively discard samples based on the estimated difference between the MDPs. Lazaric et al. (2008) compute a compliance measure between the target and source tasks and use it to specify from which tasks the transfer is more likely to be bene\ufb01cial. Furthermore, a relevance measure is computed within each task to determine what are the best samples to transfer. These two measures are then combined to transfer samples into FQI. Once again, our approach does not require any explicit condition to decide what to transfer, nor does it require any assumption of similarity between the tasks. Furthermore, the compliance and relevance measures computed in (Lazaric et al., 2008) jointly account for both the reward and transition models, thus discarding samples when either one of the models is very different between the tasks. On the other hand, our approach can retain at least the part of the sample that is similar, at the cost of introducing a small bias. In (Laroche & Barlier, 2017), the authors propose a technique for transferring samples into FQI under the assumption that the transition dynamics do not change between the tasks. Similarly to our method, they learn the reward function at the \ufb01rst iteration and substitute the predicted values to the reward samples in the dataset. This allows them to safely adopt the full set of target and source samples in the remaining FQI iterations, as all tasks share the same transition model and, thus, samples are unbiased. However, we argue that this assumption of shared dynamics indeed limits the applicability of the transfer method to most real-world tasks. In the supervised learning literature, Crammer et al. (2008) analyzed the transfer problem from a theoretical perspective and extended the classical generalization bounds to the case where samples are directly transferred from a set of source tasks. 
The most relevant result in their bounds is a trade-off between the total number of samples transferred and the total number of tasks from which transfer occurs. Increasing the first term decreases the variance, while it is likely to increase the bias due to the differences between the tasks. On the other hand, decreasing the first term also decreases the bias, but it is likely to increase the variance due to the limited number of samples. We observe that such trade-off does not arise in our case. Our method transfers all samples while accounting for the differences between the tasks. The only bias term is due to the errors in the estimation of the task models, which is likely to decrease as the number of samples increases. Another work from the supervised learning literature that is related to our approach is (Garcke & Vanck, 2014). The authors proposed a method to transfer samples from a different dataset and used importance weighting to correct the distribution shift. However, they leveraged ideas from density-ratio estimation (e.g., Sugiyama et al., 2012) and supposed the weight function to have a given parametric form, thus directly estimating it from the data. Conversely, we estimate the densities involved and try to characterize the weight distribution, taking its expectation as our final estimate. 7. Experiments We evaluate IWFQI on three different domains with increasing level of complexity. In all experiments, we compare our method to two existing algorithms for transferring samples into FQI: the relevance-based transfer (RBT) algorithm of (Lazaric et al., 2008) and the shared-dynamics transfer (SDT) algorithm of (Laroche & Barlier, 2017). 7.1. Puddle World Our first experimental domain is a modified version of the puddle world environment presented in (Sutton, 1996). Puddle world is a discrete-action, continuous-state (stochastic) navigation problem (see Appendix C.1 for a complete description). At each time-step, the agent receives a reward of \u22121 plus a penalization proportional to the distance from all puddles. Each action moves the agent by \u03b1 in the corresponding direction.
Figure 1. Puddle world with 20 \u00d7 3 episodes transferred from 3 source tasks in the case of shared dynamics (left) and puddle-based dynamics (right).
Figure 2. Acrobot swing-up with (100 + 50) episodes transferred from 2 source tasks. (left) learning performance. (right) relative number of samples transferred from each source task.
In particular, we consider two versions of the environment: (i) shared dynamics, where \u03b1 = 1 is fixed, and (ii) puddle-based dynamics, where \u03b1 slows down the agent proportionally to the distance from all puddles. We consider three source tasks and one target task, where each task has different puddles in different locations (see Appendix C.1). For each source task, we generate a dataset of 20 episodes from a nearly-optimal policy.
We run IWFQI with weights computed according to Equation (6), where we set the model noise to be ten times the true value. For evaluating our weight estimation procedure, we also run IWFQI with ideal importance weights (computed as the ratio of the true distributions). In each algorithm, FQI is run for 50 iterations with Extra-Trees (Ernst et al., 2005). An \u03f5-greedy policy (\u03f5 = 0.3) is used to collect data in the target task. Shared dynamics. We start by showing the results for \u03b1 = 1 in Figure 1(left). All results are averaged over 20 runs and are reported with 95% con\ufb01dence intervals. As expected, FQI alone is not able to learn the target task in such a small number of episodes. On the other hand, IWFQI has a good jump-start and converges to an optimal policy in only 20 episodes. Interestingly, IWFQI with ideal weights has almost the same performance, thus showing the robustness of our weight estimation procedure. RBT also learns the optimal policy rather quickly. However, the limited number of target and source samples available in this experiment makes it perform signi\ufb01cantly worse in the \ufb01rst episodes. Since in this version of the puddle world the dynamics do not change between tasks, SDT also achieves good performance, converging to a nearly-optimal policy. Puddle-based dynamics. We also show the results for the more challenging version of the environment were puddles both penalize and slow-down the agent (see Figure 1(right)). Notice that, in this case, transition dynamics change between tasks, thus making the transfer more challenging. Similarly, as before, our approach quickly learns the optimal policy and is not affected by the estimated weights. Furthermore, the bene\ufb01ts of over-estimating the model noise can be observed from the small improvement over IWFQI-ID. RBT is also able to learn the optimal policy. However, the consequences of inaccurately computing compliance and relevance are more evident in this case, where the algorithm negatively transfers samples in the \ufb01rst episodes. Finally, SDT still shows an improvement over plain FQI, but it is not able to learn the optimal policy due to the bias introduced by the different dynamics. 7.2. Acrobot Acrobot (Sutton & Barto, 1998) is a classic control problem where the goal is to swing-up a two-link pendulum by applying positive or negative torque to the joint between the two links. Due to its non-linear and complex dynamics, Acrobot represents a very challenging problem, requiring a considerable amount of samples to be solved. In this experiment, we consider a multi-task scenario where robots might have different link lengths (l1, l2) and masses (m1, m2). Our target task is the classic Acrobot swing-up problem, where the robot has lengths (1.0, 1.0) and masses (1.0, 1.0). Furthermore, we consider two source tasks. The \ufb01rst is another swing-up task where the robot has lengths (1.1, 0.7) and masses (0.9, 0.6). The second is a constantspin task, where the goal is to make the \ufb01rst joint rotate at a \ufb01xed constant speed, with lengths (0.95, 0.95) and masses (0.95, 1.0). The exact de\ufb01nition of the tasks\u2019 dynamics and rewards is in Appendix C.2. Notice the intrinsic dif\ufb01culty of transfer: the \ufb01rst source task has the same reward as the target but very different dynamics, and conversely for the second source task. Using nearly-optimal policies, we generate 100 episodes from the \ufb01rst source and 50 episodes from the second. 
We run all algorithms (except SDT since the problem violates the shared-dynamics assumption) for 200 episodes and average over 20 runs. Results are shown in Figure 2(left). We notice that both our approach and RBT achieve a good jump-start and learn faster than plain FQI. However, to better investigate how samples are transferred, we show the transfer ratio from each source task in Figure 2(right). Since RBT transfers rewards and transitions jointly, it decides to compensate the highly biased re\fImportance Weighted Transfer of Samples in Reinforcement Learning ward samples from the constant-spin task by over-sampling the \ufb01rst source task. However, it inevitably introduces bias from the different dynamics. Our approach, on the other hand, correctly transfers almost all reward samples from the swing-up task, while discarding those from the constant-spin task. Due to transition noise over-estimation, IWFQI achieves an interesting adaptive behaviour: during the initial episodes, when few target samples are available, and the GPs are inaccurate, more samples are transferred. This causes a reduction of the variance in the \ufb01rst phases of learning that is much greater than the increase of bias. However, as more target samples are available, the transfer becomes useless, and our approach correctly decides to discard most transition samples, thus minimizing both bias and variance. 7.3. Water Reservoir Control In this experiment, we consider a real-world problem where the goal is to learn how to optimally control a water reservoir system. More speci\ufb01cally, the objective is to learn a per-day water release policy that meets a given demand while keeping the water level below a \ufb02ooding threshold. Castelletti et al. (2010) successfully addressed such problem by adopting batch RL techniques. However, the authors proved that, due to the highly non-linear and noisy environment, an enormous amount of historical data is needed to achieve good performance. Consider now the case where a new water reservoir, for which no historical data is available, needs to be controlled. Since each sample corresponds to one day of release, learning by direct interaction with the environment is not practical and leads to poor control policies during the initial years, when only a little experience has been collected. Although we do not know the new environment, it is reasonable to assume that we have access to operational data from existing reservoirs. Then, our solution is to transfer samples to immediately achieve good performance. However, such reservoirs might be located in very different environments and weight objectives differently, thus making transfer very challenging. We adopt a system model similar to the one proposed in (Castelletti et al., 2010). The state variables are the current water storage st and day t \u2208[1, 365], while there are 8 discrete actions, each corresponding to a particular release decision. The system evolves according to the simple mass balance equation st+1 = st + it \u2212at, where it is the net in\ufb02ow at day t and is modeled as periodic function, with period of one year, plus Gaussian noise. Given the demand d and the \ufb02ooding threshold f, the reward function is a convex combination of the two objectives, R(st, at) = \u2212\u03b1 max{0, st \u2212f} \u2212\u03b2(max{0, d \u2212at})2, where \u03b1, \u03b2 \u22650. 
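A minimal Python sketch of the reservoir model just described might be the following; only the mass balance equation and the shape of the reward are taken from the text, while the specific inflow profile, its noise level, and the parameter values are illustrative assumptions.

import numpy as np

def inflow(day, mean_level=40.0, amplitude=10.0, noise_std=2.0, rng=np.random):
    # Net inflow i_t: a periodic function with a period of one year plus Gaussian noise.
    # The sinusoidal shape and the parameters are purely illustrative.
    return mean_level + amplitude * np.sin(2 * np.pi * day / 365.0) + rng.normal(0.0, noise_std)

def reservoir_step(storage, day, release, flood_threshold, demand, alpha, beta):
    """One day of the water reservoir system.

    Mass balance: s_{t+1} = s_t + i_t - a_t
    Reward: R(s_t, a_t) = -alpha * max(0, s_t - f) - beta * (max(0, d - a_t))^2
    """
    reward = -alpha * max(0.0, storage - flood_threshold) \
             - beta * max(0.0, demand - release) ** 2
    next_storage = storage + inflow(day) - release
    next_day = day % 365 + 1
    return next_storage, next_day, reward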
Different tasks have different in\ufb02ow functions and different reward weights, which model different geographic regions and objectives, respectively. 1 2 3 4 5 6 7 8 9 10 2.5 3 3.5 4 Years Average Cost Optimal Expert RBT IWFQI Figure 3. Water reservoir control. Average cost per day during the \ufb01rst 10 years of learning. IWFQI outperforms the expert and quickly achieves near-optimal performance. We collected 10800 samples, corresponding to 30 years of historical data, from each of 6 source water reservoirs under a hand-coded expert policy. Further details about the tasks are given in Appendix C.3. We compared our approach to FQI and RBT over the \ufb01rst 10 years of learning. An \u03f5-greedy policy (\u03f5 = 0.3) was used to collect batches of 1 year of samples, except for the \ufb01rst batch, for which an expert\u2019s policy was used. Results, averaged over 20 runs, are shown in Figure 3. We notice that IWFQI immediately outperforms the expert\u2019s policy and quickly achieves nearoptimal performance. RBT, on the other hand, has a good jump-start but then seems to worsen its performance. Once again, this is because each source task has at least few samples that can be transferred. However, selecting such samples is very complicated and leads to negative transfer in case of failure. Finally, FQI performs signi\ufb01cantly worse than all alternatives and is, thus, not reported. 8. Conclusions In this paper, we presented Importance Weighted Fitted QIteration, a novel AVI algorithm for transferring samples in batch RL that uses importance weighting to automatically account for the difference in the source and target distributions. IWFQI exploits Gaussian processes to learn transition and reward models that are used to compute the importance weights. The use of two different processes for reward and transition models allows maximizing the information transferred. We theoretically investigated IWFQI showing (i) its asymptotic correctness in general settings, and (ii) how to compute a robust statistical estimate of the weights for Gaussian models. Finally, we empirically proved its effectiveness in common benchmarks and on a real-world water control problem. One of the drawbacks of our method is that it does not fully exploit possible similarities between tasks. Recent approaches (e.g., Doshi-Velez & Konidaris, 2016; Killian et al., 2017) learn models relating a family of tasks to ease the transfer of knowledge. Exploring how such relations can bene\ufb01t our approach (e.g., to improve the weight estimates) is an interesting line for future developments. \fImportance Weighted Transfer of Samples in Reinforcement Learning" + } + ], + "Alessandro Lazaric": [ + { + "url": "http://arxiv.org/abs/1108.6211v2", + "title": "Transfer from Multiple MDPs", + "abstract": "Transfer reinforcement learning (RL) methods leverage on the experience\ncollected on a set of source tasks to speed-up RL algorithms. A simple and\neffective approach is to transfer samples from source tasks and include them\ninto the training set used to solve a given target task. In this paper, we\ninvestigate the theoretical properties of this transfer method and we introduce\nnovel algorithms adapting the transfer process on the basis of the similarity\nbetween source and target tasks. 
Finally, we report illustrative experimental\nresults in a continuous chain problem.", + "authors": "Alessandro Lazaric, Marcello Restelli", + "published": "2011-08-31", + "updated": "2011-09-01", + "primary_cat": "cs.AI", + "cats": [ + "cs.AI", + "cs.LG" + ], + "main_content": "Introduction The objective of transfer in reinforcement learning (RL) [12] is to speed-up RL algorithms by reusing knowledge (e.g., samples, value function, features, parameters) obtained from a set of source tasks. The underlying assumption of transfer methods is that the source tasks (or a suitable combination of these) are somehow similar to the target task, so that the transferred knowledge can be useful in learning its solution. A wide range of scenarios and methods for transfer in RL have been studied in the last decade (see [14, 9] for a thorough survey). In this paper, we focus on the simple transfer approach where trajectory samples are transferred from source MDPs to increase the size of the training set used to solve the target MDP. This approach is particularly suited in problems (e.g., robotics, applications involving human interaction) where it is not possible to interact with the environment long enough to collect samples to solve the task at hand. If samples are available from other sources (e.g., simulators in case of robotic applications), the solution of the target task can bene\ufb01t from a larger training set that includes also some source samples. This approach has been already investigated in the case of transfer between tasks with different state-action spaces in [13], where the source samples are used to build a model of the target task whenever the number of target samples is not large enough. A more sophisticated sample-transfer method is proposed in [8]. The authors introduce an algorithm which estimates the similarity between source and target tasks and selectively transfers from the source tasks which are more likely to provide samples similar to those generated by the target MDP. Although the empirical results are encouraging, the proposed method is based on heuristic measures and no theoretical analysis of its performance is provided. On the other hand, in supervised learning a number of theoretical works investigated the effectiveness of transfer in reducing the sample complexity of the learning process. In domain adaptation, a solution learned on a source task is transferred to a target task and its performance depends on how similar the two tasks are. In [2] and [10] different distance measures are proposed and are shown to be connected to the performance of the transferred solution. The case of transfer of samples from multiple source tasks is studied in [3]. The most interesting \ufb01nding is that the transfer performance bene\ufb01ts from using a larger training set at the cost of an additional error due to the average distance between source and target tasks. This implies the existence of a transfer tradeoff between transferring as many samples as possible and limiting the transfer to sources which are similar to the target task. As a result, the 1 \ftransfer of samples is expected to outperform single-task learning whenever negative transfer (i.e., transfer from source tasks far from the target task) is limited w.r.t. to the advantage of increasing the size of the training set. This also opens the question whether it is possible to design methods able to automatically detect the similarity between tasks and adapt the transfer process accordingly. 
In this paper, we investigate the transfer of samples in RL from a more theoretical perspective w.r.t. previous works. The main contributions of this paper can be summarized as follows: \u2022 Algorithmic contribution. We introduce three sample-transfer algorithms based on \ufb01tted Qiteration [4]. The \ufb01rst algorithm (AST in Section 3) simply transfers all the source samples. We also design two adaptive methods (BAT and BTT in Section 4 and 5) whose objective is to solve the transfer tradeoff by identifying the best combination of source tasks. \u2022 Theoretical contribution. We formalize the setting of transfer of samples and we derive a \ufb01nite-sample analysis of AST which highlights the importance of the average MDP obtained by the combination of the source tasks. We also report the analysis for BAT which shows both the advantage of identifying the best combination of source tasks and the additional cost in terms of auxiliary samples needed to compute the similarity between tasks. \u2022 Empirical contribution. We report results (in Section 6) on a simple chain problem which con\ufb01rm the main theoretical \ufb01ndings and support the idea that sample transfer can signi\ufb01cantly speed-up the learning process and that adaptive methods are able to solve the transfer tradeoff and avoid negative transfer effects. The rest of the paper is organized as follows. In Section 2 we introduce the notation and we de\ufb01ne the transfer problem. Section 3 reports the theoretical analysis of AST. BAT is described in Section 4 along with its theoretical analysis. A more challenging setting is introduced in Section 5 together with BTT. Section 6 reports the experimental results and Section 7 concludes the paper. Finally, in the appendix we report the proofs and some additional experimental analysis. 2 Preliminaries In this section we introduce the notation and the transfer problem considered in the rest of the paper. Notation for MDPs. We de\ufb01ne a discounted Markov decision process (MDP) as a tuple M = \u27e8X, A, R, P, \u03b3\u27e9where the state space X is a bounded closed subset of the Euclidean space, A is a \ufb01nite (|A| < \u221e) action space, the deterministic1 reward function R : X \u00d7 A \u2192R is uniformly bounded by Rmax, the transition kernel P is such that for all x \u2208X and a \u2208A, P(\u00b7|x, a) is a distribution over X, and \u03b3 \u2208(0, 1) is a discount factor. We denote by S(X \u00d7 A) the set of probability measures over X \u00d7 A and by B(X \u00d7 A; Vmax = Rmax 1\u2212\u03b3 ) the space of bounded measurable functions with domain X \u00d7 A and bounded in [\u2212Vmax, Vmax]. We de\ufb01ne the optimal action-value function Q\u2217as the unique \ufb01xed-point of the optimal Bellman operator T : B(X \u00d7 A; Vmax) \u2192 B(X \u00d7 A; Vmax) de\ufb01ned by (T Q)(x, a) = R(x, a) + \u03b3 Z X max a\u2032\u2208A Q(y, a\u2032)P(dy|x, a). Notation for function spaces. For any measure \u00b5 \u2208S(X \u00d7 A) obtained from the combination of a distribution \u03c1 \u2208S(X) and a uniform distribution over the discrete set A, and a measurable function f : X \u00d7 A \u2192R, we de\ufb01ne the L2(\u00b5)-norm of f as ||f||2 \u00b5 = 1 |A| P a\u2208A R X f(x, a)2\u03c1(dx). The supremum norm of f is de\ufb01ned as ||f||\u221e= supx\u2208X |f(x)|. Finally, we de\ufb01ne the standard L2-norm for a vector \u03b1 \u2208Rd as ||\u03b1||2 = Pd i=1 \u03b12 i . We denote by \u03c6(\u00b7, \u00b7) = \u0000\u03d51(\u00b7, \u00b7), . . . 
, \u03d5d(\u00b7, \u00b7) \u0001\u22a4 a feature vector with features \u03d5i : X \u00d7 A \u2192[\u2212C, C], and by F = {f\u03b1(\u00b7, \u00b7) = \u03c6(\u00b7, \u00b7)\u22a4\u03b1} the linear space of action-value functions spanned by the basis functions in \u03c6. Given a set of state-action pairs {(Xl, Al)}L l=1, let \u03a6 = [\u03c6(X1, A1)\u22a4; . . . ; \u03c6(XL, AL)\u22a4] be the corresponding feature matrix. We de\ufb01ne the orthogonal projection operator \u03a0 : B(X \u00d7 A; Vmax) \u2192F as \u03a0Q = arg minf\u2208F ||Q \u2212 f||\u00b5. Finally, by T (Q) we denote the truncation of a function Q in the range [\u2212Vmax, Vmax]. 1The extension to stochastic reward functions is straightforward. 2 \fInput: Linear space F = span{\u03d5i, 1 \u2264i \u2264d}, initial function e Q0 \u2208F for k = 1, 2, . . . do Build the training set {(Xl, Al, Yl, Rl)}L l=1 [according to random tasks design] Build the feature matrix \u03a6 = [\u03c6(X1, A1)\u22a4; . . . ; \u03c6(XL, AL)\u22a4] Compute the vector p \u2208RL with pl = Rl + \u03b3 maxa\u2032\u2208A e Qk\u22121(Yl, a\u2032) Compute the projection \u02c6 \u03b1k = (\u03a6\u22a4\u03a6)\u22121\u03a6\u22a4p and the function b Qk = f\u02c6 \u03b1k Return the truncated function e Qk = T ( b Qk) end for Figure 1: A pseudo-code for All-Sample Transfer (AST) Fitted Q-iteration. Problem setup. We consider the transfer problem in which M tasks {Mm}M m=1 are available and the objective is to learn the solution for the target task M1 transferring samples from the source tasks {Mm}M m=2. We de\ufb01ne an assumption on how the training sets are generated. De\ufb01nition 1. (Random Tasks Design) An input set {(Xl, Al)}L l=1 is built with samples drawn from an arbitrary sampling distribution \u00b5 \u2208S(X \u00d7 A), i.e. (Xl, Al) \u223c\u00b5. For each task m, one transition and reward sample is generated in each of the state-action pairs in the input set, i.e. Y m l \u223cP(\u00b7|Xl, Al), and Rm l = R(Xl, Al). Finally, we de\ufb01ne the random sequence {Ml}L l=1 where the indexes Ml are drawn i.i.d. from a multinomial distribution with parameters (\u03bb1, . . . , \u03bbM). The training set available to the learner is {(Xl, Al, Yl, Rl)}L l=1 where Yl = Yl,Ml and Rl = Rl,Ml. This is an assumption on how the samples are generated but in practice, a single realization of samples and task indexes Ml is available. We consider the case in which \u03bb1 \u226a\u03bbm (m = 2, . . . , M). This condition implies that (on average) the number of target samples is much less than the source samples and it is usually not enough to learn an accurate solution for the target task. We will also consider the pure transfer case in which \u03bb1 = 0 (i.e., no target sample is available). Finally, we notice that Def. 1 implies the existence of a generative model for all the MDPs, since the stateaction pairs are generated according to an arbitrary sampling distribution \u00b5. 3 All-Sample Transfer Algorithm We \ufb01rst consider the case when the source samples are generated beforehand according to Def. 1 and the designer has no access to the source tasks. We study the algorithm called All-Sample Transfer (AST) (Fig. 1) which simply runs FQI with a linear space F on the whole training set {(Xl, Al, Yl, Rl)}L l=1. At each iteration k, given the result of the previous iteration e Qk\u22121 = T ( b Qk\u22121), the algorithm returns b Qk = arg min f\u2208F 1 L L X l=1 \u0012 f(Xl, Al) \u2212(Rl + \u03b3 max a\u2032\u2208A e Qk\u22121(Yl, a\u2032)) \u00132 . 
(1) In the case of linear spaces, the minimization problem is solved in closed form as in Fig. 1. In the following we report a \ufb01nite-sample analysis of the performance of AST. Similar to [11], we \ufb01rst study the prediction error in each iteration and we then propagate it through iterations. 3.1 Single Iteration Finite-Sample Analysis We de\ufb01ne the average MDP M\u03bb as the average of the M MDPs at hand. We de\ufb01ne its reward function R\u03bb and its transition kernel P\u03bb as the weighted average of reward functions and transition kernels of the basic MDPs with weights determined by the proportions \u03bb of the multinomial distribution in the de\ufb01nition of the random tasks design (i.e., R\u03bb = PM m=1 \u03bbmRm, P\u03bb = PM m=1 \u03bbmPm). The resulting average Bellman operator is (T \u03bbQ)(x, a) = \u0010 M X m=1 \u03bbmT mQ \u0011 (x, a) = R(x, a) + \u03b3 Z X max a\u2032 Q(y, a\u2032)P(dy|x, a). (2) 3 \fIn the random tasks design, the average MDP plays a crucial role since the implicit target function of the minimization of the empirical loss in Eq. 1 is indeed T \u03bb e Qk\u22121. At each iteration k, we prove the following performance bound for AST. Theorem 1. Let M be the number of tasks {Mm}M m=1, with M1 the target task. Let the training set {(Xl, Al, Yl, Rl)}L l=1 be generated as in Def. 1, with a proportion vector \u03bb = (\u03bb1, . . . , \u03bbM). Let f\u03b1k \u2217= \u03a0T1 e Qk\u22121 = arg inff\u2208F ||f \u2212T1 e Qk\u22121||\u00b5, then for any 0 < \u03b4 \u22641, b Qk (Eq. 1) satis\ufb01es ||T ( b Qk) \u2212T1 e Qk\u22121||\u00b5 \u22644||f\u03b1k \u2217\u2212T1 e Qk\u22121||\u00b5 + 5 q E\u03bb( e Qk\u22121) + 24(Vmax + C||\u03b1k \u2217||) r 2 L log 9 \u03b4 + 32Vmax s 2 L log \u001227(12Le2)2(d+1) \u03b4 \u0013 . with probability 1 \u2212\u03b4 (w.r.t. samples), where ||\u03d5i||\u221e\u2264C and E\u03bb( e Qk\u22121) = \u2225(T1 \u2212T \u03bb) e Qk\u22121\u22252 \u00b5. Remark 1 (Analysis of the bound). We \ufb01rst notice that the previous bound reduces (up to constants) to the standard bound for FQI when M = 1 (see Section B). The bound is composed by three main terms: (i) approximation error, (ii) estimation error, and (iii) transfer error. The approximation error ||f\u03b1k \u2217\u2212T1 e Qk\u22121||\u00b5 is the smallest error of functions in F in approximating the target function T1 e Qk\u22121 and it is independent from the transfer algorithm. The estimation error (third and fourth terms in the bound) is due to the \ufb01nite random samples used to learn b Qk and it depends on the dimensionality d of the function space and it decreases with the total number of samples L with the fast rate of linear spaces (O(d/L) instead of O( p d/L)). Finally, the transfer error E\u03bb accounts for the difference between source and target tasks. In fact, samples from source tasks different from the target might bias b Qk towards a wrong solution, thus resulting in a poor approximation of the target function T1 e Qk\u22121. It is interesting to notice that the transfer error depends on the difference between the target task and the average MDP M\u03bb obtained by taking a linear combination of the source tasks weighted by the parameters \u03bb. This means that even when each of the source tasks is very different from the target, if there exists a suitable combination which is similar to the target task, then the transfer process is still likely to be effective. 
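As a reference point for the analysis, the closed-form update of Fig. 1 and Eq. (1) can be sketched in Python as follows; this is a minimal sketch in which the feature map phi, the enumeration of the action set, and the use of a pseudo-inverse for numerical stability are assumptions of the example.

import numpy as np

def ast_iteration(phi, samples, alpha_prev, gamma, v_max, actions):
    """One AST/FQI iteration on the linear space F = span{phi_1, ..., phi_d}.

    phi(x, a): feature vector of a state-action pair
    samples: list of (x_l, a_l, y_l, r_l) pooled over all tasks (random tasks design)
    alpha_prev: parameters of the previous (truncated) estimate Q_tilde_{k-1}
    """
    def q_prev(x, a):
        # Truncated previous iterate T(Q_hat_{k-1}), bounded in [-v_max, v_max]
        return np.clip(phi(x, a) @ alpha_prev, -v_max, v_max)

    Phi = np.array([phi(x, a) for (x, a, _, _) in samples])
    # Empirical Bellman targets p_l = r_l + gamma * max_{a'} Q_tilde_{k-1}(y_l, a')
    p = np.array([r + gamma * max(q_prev(y, a_next) for a_next in actions)
                  for (_, _, y, r) in samples])
    # Closed-form projection alpha_k = (Phi^T Phi)^{-1} Phi^T p
    return np.linalg.pinv(Phi.T @ Phi) @ (Phi.T @ p)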
Furthermore, E\u03bb considers the difference in the result of the application of the two Bellman operators to a given function e Qk\u22121. As a result, when the two operators T1 and T \u03bb have the same reward functions, even if the transition distributions are different (e.g., the total variation ||P1(\u00b7|x, a)\u2212P\u03bb(\u00b7|x, a)||TV is large), their corresponding averages of e Qk\u22121 might still be similar (i.e., R maxa\u2032 e Q(y, a\u2032)P1(dy|x, a) similar to R maxa\u2032 e Q(y, a\u2032)P\u03bb(dy|x, a)). Remark 2 (Comparison to single-task learning). Let b Qk s be the solution obtained by solving one iteration of FQI with only samples from the source task, the performance bounds of b Qk and b Qk s can be written as (up to constants and logarithmic factors) \u2225T ( b Qk) \u2212T1 e Qk\u22121\u2225\u00b5 \u2264||f\u03b1k \u2217\u2212T1 e Qk\u22121||\u00b5 + (Vmax + C||\u03b1k \u2217||) r 1 L + Vmax r d L + p E\u03bb , \u2225T ( b Qk s) \u2212T1 e Qk\u22121\u2225\u00b5 \u2264||f\u03b1k \u2217\u2212T1 e Qk\u22121||\u00b5 + (Vmax + C||\u03b1k \u2217||) r 1 N1 + Vmax r d N1 , with N1 = \u03bb1L (on average). Both bounds share exactly the same approximation error. The main difference is that b Qk s uses only N1 samples and, as a result, has a much bigger estimation error than b Qk, which takes advantage of all the L samples transferred from the source tasks. At the same time, b Qk suffers from an additional transfer error which does not exist in the case of b Qk s. Thus, we can conclude that AST is expected to perform better than single-task learning whenever the advantage of using more samples is greater than the bias due to samples coming from tasks different from the target task. This introduces a transfer tradeoff between including many source samples, so as to reduce the estimation error, and \ufb01nding source tasks whose combination leads to a small transfer error. In Section 4 we show how it is possible to de\ufb01ne an adaptive transfer algorithm which selects proportions \u03bb so as to keep the transfer error E\u03bb as small as possible. Finally, in Section 5 we consider a different setting where the maximum number of samples in each source is \ufb01xed. 4 \f3.2 Propagation Finite-Sample Analysis We now study how the previous error is propagated through iterations. Let \u03bd be the evaluation norm (i.e., in general different from the sampling distribution \u00b5). We \ufb01rst report two assumptions. 2 Assumption 1. [11] Given \u00b5, \u03bd, p \u22651, and an arbitrary sequence of policies {\u03c0p}p\u22651, we assume that the future-state distribution \u00b5P1 \u03c01 \u00b7 \u00b7 \u00b7 P1 \u03c0p is absolutely continuous w.r.t. \u03bd. We assume that c(p) = sup\u03c01\u00b7\u00b7\u00b7\u03c0p ||d(\u00b5P1 \u03c01 \u00b7 \u00b7 \u00b7 P1 \u03c0p)/\u03bd||\u221esatis\ufb01es C\u00b5,\u03bd = (1 \u2212\u03b32)2 P p p\u03b3p\u22121c(p) < \u221e. We also need the features \u03d5i to be linearly independent w.r.t. \u00b5. Assumption 2. Let G \u2208Rd\u00d7d be the Gram matrix with [G]ij = R \u03d5i(x, a)\u03d5j(x, a)\u00b5(dx, a). We assume that its smallest eigenvalue \u03c9 is strictly positive (i.e., \u03c9 > 0). Using the two previous assumptions we derive the following performance bound for AST. Theorem 2. Let Assumptions 1 and 2 hold and the setting be as in Theorem 1. 
After K iterations, AST returns an action-value function e QK, whose corresponding greedy policy \u03c0K satis\ufb01es ||Q\u2217\u2212Q\u03c0K||\u03bd \u2264 2\u03b3 (1 \u2212\u03b3)3/2 p C\u00b5,\u03bd \" 4 sup g\u2208F inf f\u2208F ||f \u2212T1g||\u00b5 + 5 sup \u03b1 \u2225(T1 \u2212T \u03bb)T (f\u03b1)\u2225\u00b5 + 56(Vmax + Vmax \u221a\u03c9 ) r 2 L log 9K \u03b4 + 32Vmax s 2 L log \u001227K(12Le2)2(d+1) \u03b4 \u0013 + 2Vmax p C\u00b5,\u03bd \u03b3K # . Remark (Analysis of the bound). The bound reported in the previous theorem displays few differences w.r.t. to the single-iteration bound. The additional term O(\u03b3K) accounts for the error due to the \ufb01nite number of iterations of FQI and it decreases exponentially with base \u03b3. The approximation error is now supg inff ||f \u2212T 1g||\u00b5. This term is referred to as the inherent Bellman error [11] of the space F and it is related to how well the Bellman images of functions in F can be approximated by F itself. It is possible to show that for particular classes of MDPs (e.g., Lipschitz), if a large enough number of carefully designed features is available, then this term is small. In the estimation error, the norm ||\u03b1k \u2217|| is bounded using the linear independency between features (Assumption 2) and the boundedness of the functions e Qk returned at each iteration. The resulting term has an inverse dependency on the smallest eigenvalue \u03c9 which tends to be small whenever the Gram matrix is not well-de\ufb01ned (i.e., the features are almost linearly dependent). The transfer error sup\u03b1 \u2225(T1 \u2212T \u03bb)T (f\u03b1)\u2225\u00b5 characterizes the difference between the target and average Bellman operators through the space F. As a result, even MDPs with signi\ufb01cantly different rewards and transitions might have a small transfer error because of the functions in F. This introduces a tradeoff in the design of F between a \u201clarge\u201d enough space containing functions able to approximate T1Q (i.e., small approximation error) and a small function space where the Q-functions induced by T1 and T \u03bb can be closer (i.e., small transfer error). This term also displays interesting similarities with the notion of discrepancy introduced in [10] in domain adaptation. 4 Best Average Transfer Algorithm As discussed in the previous section, the transfer error E\u03bb plays a crucial role in the comparison with single-task learning. In particular, E\u03bb is related to the proportions \u03bb inducing the average Bellman operator T \u03bb which de\ufb01nes the target function approximated at each iteration. We now consider the case where the designer has direct access to the source tasks (i.e., it is possible to choose how many samples to draw from each source) and can de\ufb01ne an arbitrary proportion \u03bb. In particular, we propose a method that adapts \u03bb at each iteration so as to minimize the transfer error E\u03bb. We consider the case in which L is \ufb01xed as a parameter of the algorithm and \u03bb1 = 0 (i.e., no target samples are used in the learning training set). At each iteration k, we need to estimate the quantity E\u03bb( e Qk\u22121). We assume that for each task additional samples available. Let {(Xs, As, Rs,1, . . . , Rs,M)}S s=1 be an auxiliary training set where (Xs, As) \u223c\u00b5 and Rs,m = 2We refer to [11] for a thorough explanation of the concentrability terms. 
5 \fInput: Space F = span{\u03d5i, 1 \u2264i \u2264d}, initial function e Q0 \u2208F, number of samples L Build the auxiliary set {(Xs, As, Rs,1, . . . , Rs,M}S s=1 and {Y t s,1,. . ., Y t s,M}T t=1 for each s for k = 1, 2, . . . do Compute b \u03bbk = arg min\u03bb\u2208\u039b b E\u03bb( e Qk\u22121) Run one iteration of AST (Fig. 1) using L samples generated according to b \u03bbk end for Figure 2: A pseudo-code for the Best Average Transfer (BAT) algorithm. Rm(Xs, As). In each state-action pair, we generate T next states for each task, that is Y t s,m \u223c Pm(\u00b7|Xs, As) with t = 1, . . . , T . Thus, for any function Q we de\ufb01ne the estimated transfer error as b E\u03bb(Q)= 1 S S X s=1 \" Rs,1\u2212 M X m=2 \u03bbmRs,m + \u03b3 T T X t=1 \u0010 max a\u2032 Q(Y t s,1, a\u2032)\u2212 M X m=2 \u03bbm max a\u2032 Q(Y t s,m, a\u2032) \u0011#2 . (3) At each iteration, the algorithm Best Average Transfer (BAT) (Fig. 2) \ufb01rst computes b \u03bbk = arg min\u03bb\u2208\u039b b E\u03bb( e Qk\u22121), where \u039b is the (M-2)-dimensional simplex, and then runs an iteration of AST with samples generated according to the proportions b \u03bbk. We denote by \u03bbk \u2217the best combination at iteration k, that is \u03bbk \u2217= arg min \u03bb\u2208\u039b E\u03bb( e Qk\u22121) = arg min \u03bb\u2208\u039b E\u00b5 \"\u0010 M X m=2 \u03bbm(T m e Qk\u22121)(x, a) \u2212(T 1 e Qk\u22121)(x, a) \u00112 # . (4) The following performance guarantee can be proved for BAT. Lemma 1. Let {(Xs, As, R1 s, . . . , RM s )}S s=1 be a training set where (Xs, As) iid \u223c\u00b5 and Rm s = Rm(Xs, As) and for each state-action pair and for each task m, T next states Y m s,t \u223cPm(\u00b7|Xs, As) with t = 1, . . . , T are available. For any \ufb01xed bounded function Q \u2208B(X \u00d7 A; Vmax), the b \u03bb returned by minimizing Eq. 3 is such that Eb \u03bb(Q) \u2212E\u03bb\u2217(Q) \u22642Vmax r (M \u22122) log 4S/\u03b4 S + 16V 2 max log 4SM/\u03b4 T (5) with probability 1 \u2212\u03b4. From the previous lemma the approximation performance of BAT at each iteration follows. Theorem 3. Let e Qk\u22121 be the function returned at the previous iteration and b Qk BAT the function returned by the BAT algorithm (Fig. 2). Then for any 0 < \u03b4 \u22641, b Qk BAT satis\ufb01es ||T ( b Qk BAT) \u2212T1 e Qk\u22121||\u00b5 \u22644||f\u03b1k \u2217\u2212T1 e Qk\u22121||\u00b5 + 5 q E\u03bbk \u2217( e Qk\u22121) + 5 p 2Vmax (M \u22122) log 8S/\u03b4 S !1/4 + 20Vmax r log 8SM/\u03b4 T + 24(Vmax + C||\u03b1k \u2217||) r 2 L log 18 \u03b4 + 32Vmax s 2 L log \u001254(12Le2)2(d+1) \u03b4 \u0013 . with probability 1 \u2212\u03b4. Remark 1 (Comparison with AST and single-task learning). The analysis of the bound shows that BAT outperforms AST whenever the advantage in achieving the smallest possible transfer error E\u03bbk \u2217is larger than the additional estimation error due to the auxiliary training set. It is also interesting 6 \fto compare BAT to single-task learning. In fact, BAT performs better than single-task learning whenever the best possible combination of source tasks has a small transfer error and the additional estimation error related to the auxiliary training set is smaller than the estimation error in singletask learning. In particular, this means that O((M/S)1/4) + O((1/T )1/2) should be smaller than O((d/N)1/2) (with N the number of target samples). The number of calls to the generative model for BAT is ST . 
Remark 2 (Iterations). BAT recomputes the proportions λ̂^k at each iteration k. In fact, a combination λ^1 that approximates the reward function R_1 well at the first iteration (i.e., R_1 ≈ R_{λ^1}) does not necessarily have a small transfer error ||(T_1 − T_{λ^1}) Q̃_1||_µ at the second iteration. We further investigate how the best source combination changes through the iterations in the experiments of Section 6.

Remark 3 (Best source combination). The previous theorem shows that BAT achieves the smallest transfer error E_{λ^k_*}(Q̃_{k−1}) at the cost of an additional estimation error which scales with the size of the auxiliary training set as O((M/S)^{1/4}) + O((1/T)^{1/2}). We notice that the first term of the estimation error depends on how well µ is approximated using a finite number S of state-action pairs, and it has a slower rate w.r.t. the other terms. The second term depends on the number of next states T simulated at each state-action pair, which are used to estimate the value of the Bellman operators. As a result, in order to reduce the estimation error, we need to increase both S and the number of next states T at each state-action pair. It is interesting to notice that similar estimation errors appear in FVI [11], where the optimal Bellman operator is approximated by Monte Carlo estimation.

Remark 4 (Training set). The implicit assumption in the definition of the auxiliary training set is that it is possible to generate a series of next states and rewards for all the tasks at the same state-action pairs. If the source training sets are fixed in advance and the designer has no access to the source tasks, then this assumption is not verified and it is not possible to test the similarity between the MDP M and the target task. Nonetheless, if a generative model of the source tasks is available at learning time, the auxiliary training set can be generated before the learning phase actually begins. Furthermore, in the theoretical analysis, BAT does not use the samples in the auxiliary training set at learning time; a trivial improvement is to add the auxiliary samples to the training set.

Remark 5 (Comparison with other transfer methods). In [8], a method to compute the similarity between MDPs is proposed. In particular, the authors introduce the definition of compliance as the average probability of the target samples being generated from a sample-based estimate of the source MDPs. The compliance is then used to determine the proportion of samples to be transferred from each of the source tasks. Although this algorithm shares an objective similar to BAT's, the two methods rely on different notions of similarity.
In particular, the method in [8] tries to identify source tasks which are individually similar to the target task, while the transfer error minimized by BAT considers the average MDP obtained by the transfer process. Furthermore, the notion of compliance tries to measure the overall distance between two MDPs, while E_λ(Q) always measures the distance between the images of a function Q through two different Bellman operators.

Remark 6 (Computational complexity). Finally, we notice that the minimization of Ê_λ is a convex quadratic problem, since the objective function is convex in λ and λ belongs to the (M−2)-dimensional simplex.

5 Best Transfer Trade-off Algorithm

The previous algorithm is proved to successfully estimate the combination of source tasks which best approximates the Bellman operator of the target task. Nonetheless, BAT relies on the implicit assumption that L samples can always be generated from any source task³, and it cannot be applied to the case where the number of source samples is limited. Here we consider the more challenging case where the designer still has access to the source tasks, but only a limited number of samples is available for each of them. In this case, an adaptive transfer algorithm should solve a tradeoff between selecting as many samples as possible, so as to reduce the estimation error, and choosing the proportions of source samples properly, so as to control the transfer error. The solution to this tradeoff may be non-trivial: source tasks similar to the target task but with few samples may be removed in favor of a pool of tasks whose average only roughly approximates the target task but which can provide a larger number of samples.

Here we introduce the Best Transfer Trade-off (BTT) algorithm (see Figure 3). Similarly to BAT, it relies on an auxiliary training set to solve the tradeoff. We denote by N_m the maximum number of samples available for source task m. Let β ∈ [0, 1]^M be a weight vector, where β_m is the fraction of samples from task m used in the transfer process. We denote by E_β (Ê_β) the transfer error (the estimated transfer error) with proportions λ, where λ_m = (β_m N_m) / Σ_{m'} (β_{m'} N_{m'}). At each iteration k, BTT returns the vector β which optimizes the tradeoff between estimation and transfer errors, that is,

\[
\hat{\beta}^k = \arg\min_{\beta \in [0,1]^M} \Bigg( \hat{\mathcal{E}}_{\beta}(\tilde{Q}_{k-1}) + \tau \sqrt{\frac{d}{\sum_{m=1}^{M} \beta_m N_m}} \Bigg), \tag{6}
\]

where τ is a parameter. While the first term accounts for the transfer error induced by β, the second term is the estimation error due to the total number of samples used by the algorithm.

Input: Linear space F = span{φ_i, 1 ≤ i ≤ d}, initial function Q̃_0 ∈ F, maximum number of samples N_m available for each task, transfer parameter τ
Build a training set {(X_s, A_s, R^1_s, ..., R^M_s)}_{s=1}^{S} and the next states {Y^1_{s,t}, ..., Y^M_{s,t}}_{t=1}^{T} for each state-action pair
for k = 1, 2, ... do
    Compute β̂^k = arg min_{β ∈ [0,1]^M} ( Ê_β + τ √(d / Σ_{m=1}^{M} β_m N_m) )
    Run one iteration of AST (Fig. 1) using the samples generated according to β̂^k
end for

Figure 3: A pseudo-code for the Best Transfer Trade-off (BTT) algorithm.

³ If λ_m = 1 for task m, then the algorithm would generate all the L training samples from task m.
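As an illustration of the objective in Eq. 6, the sketch below builds the BTT criterion on top of the same precomputed terms used in the BAT sketch above and minimizes it over the box of weights with an off-the-shelf bounded optimizer. For simplicity, the sketch optimizes over the source tasks only; the helper names, the choice of L-BFGS-B, and the small lower bound that keeps Σ β_m N_m positive are our own illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

def btt_objective(beta, target_term, source_terms, n_max, d, tau):
    # Eq. (6): estimated transfer error of the proportions induced by beta,
    # plus the estimation-error penalty for the total number of samples used.
    weights = beta * n_max                          # beta_m * N_m
    lmbd = weights / weights.sum()                  # lambda_m = beta_m N_m / sum_m' beta_m' N_m'
    residual = target_term - source_terms @ lmbd
    transfer_err = np.mean(residual ** 2)           # hat E_beta(Q)
    estimation_err = tau * np.sqrt(d / weights.sum())
    return transfer_err + estimation_err

def best_tradeoff(target_term, source_terms, n_max, d, tau=0.75):
    n_src = len(n_max)
    beta0 = np.full(n_src, 0.5)
    result = minimize(btt_objective, beta0,
                      args=(target_term, source_terms, np.asarray(n_max, dtype=float), d, tau),
                      method="L-BFGS-B",
                      bounds=[(1e-3, 1.0)] * n_src)  # box constraint on beta
    return result.x

At each iteration, the resulting β̂^k determines how many samples to draw from each source before running the AST step of Fig. 3.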
Unlike AST and BAT, BTT is a heuristic algorithm motivated by the performance bound in Theorem 1, and we do not provide theoretical guarantees on its performance. The main technical difficulty w.r.t. the previous algorithms is that the setting considered here does not match the random task design assumption (see Def. 1), since the number of source samples is constrained by N_m. As a result, given a proportion vector λ, we cannot assume the samples to be drawn at random according to a multinomial distribution with parameters λ. Without this assumption, it is an open question whether a bound similar to those of AST and BAT could be derived. Nonetheless, the experimental results reported in Section 6 show the effectiveness of BTT in solving the transfer tradeoff.

6 Experiments

In this section, we report and discuss preliminary experimental results for the transfer algorithms introduced in the previous sections. The main objective is to illustrate the functioning of the algorithms and to compare their results with the theoretical findings. Thus, we focus on a simple problem and leave more challenging problems for future work.

We consider a continuous extension of the 50-state variant of the chain walk problem proposed in [6]. The state space is described by a continuous state variable x, and two actions are available: one that moves toward the left and one that moves toward the right. With probability p, each action makes a step of length l, affected by a noise η, in the intended direction, while with probability 1 − p it moves in the opposite direction. For the target task M1, the state-transition model is defined by the following parameters: p = 0.9, l = 1, and η uniform in the interval [−0.1, 0.1]. The reward function provides +1 when the system state reaches the regions [−11, −9] and [9, 11], and 0 elsewhere. Furthermore, to evaluate the performance of the transfer algorithms previously described, we consider eight source tasks {M2, ..., M9}, whose state-transition model parameters and reward functions are reported in Tab. 1 and 2.

Table 1: Parameters for the first set of tasks
    task    p     l    η     Reward
    M1      0.9   1    0.1   +1 in [−11, −9] ∪ [9, 11]
    M2      0.9   2    0.1   −5 in [−11, −9] ∪ [9, 11]
    M3      0.9   1    0.1   +5 in [−11, −9] ∪ [9, 11]
    M4      0.9   1    0.1   +1 in [−6, −4] ∪ [4, 6]
    M5      0.9   1    0.1   −1 in [−6, −4] ∪ [4, 6]

Table 2: Parameters for the second set of tasks
    task    p     l    η     Reward
    M1      0.9   1    0.1   +1 in [−11, −9] ∪ [9, 11]
    M6      0.7   1    0.1   +1 in [−11, −9] ∪ [9, 11]
    M7      0.1   1    0.1   +1 in [−11, −9] ∪ [9, 11]
    M8      0.9   1    0.1   −5 in [−11, −9] ∪ [9, 11]
    M9      0.7   1    0.5   +5 in [−11, −9] ∪ [9, 11]

To approximate the Q-functions, we use a linear combination of 20 radial basis functions: for each action, we consider 9 Gaussians with means uniformly spread over the interval [−20, 20] and variance equal to 16, plus a constant feature.
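For concreteness, here is a minimal Python sketch of the continuous chain-walk dynamics (with the target-task parameters) and of the action-wise RBF feature map just described. The function names, the way the uniform noise enters the step, and the convention of granting the reward based on the next state are our own reading of the setup and should be taken as illustrative assumptions; only the numerical parameters come from the text.

import numpy as np

def chain_walk_step(x, action, p=0.9, step_len=1.0, noise=0.1, rng=np.random):
    # One transition of the continuous chain walk (target task M1 parameters).
    # action: 0 = move left, 1 = move right. With probability p the step goes in
    # the intended direction, otherwise in the opposite one; the step length is
    # perturbed by uniform noise in [-noise, noise].
    direction = 1.0 if action == 1 else -1.0
    if rng.random() > p:
        direction = -direction
    next_x = x + direction * (step_len + rng.uniform(-noise, noise))
    reward = 1.0 if (-11.0 <= next_x <= -9.0) or (9.0 <= next_x <= 11.0) else 0.0
    return next_x, reward

# Action-wise RBF features: 9 Gaussians with centers spread over [-20, 20] and
# variance 16, plus a constant feature, replicated for each of the two actions.
CENTERS = np.linspace(-20.0, 20.0, 9)

def features(x, action):
    rbf = np.exp(-(x - CENTERS) ** 2 / (2.0 * 16.0))
    block = np.concatenate([rbf, [1.0]])           # 10 features for one action
    phi = np.zeros(20)
    phi[action * 10:(action + 1) * 10] = block     # one block per action
    return phi

def q_value(x, action, alpha):
    # Linear Q-function Q(x, a) = phi(x, a)^T alpha.
    return features(x, action) @ alpha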
Figure 4: Transfer from M2, M3, M4, M5. Left: comparison between single-task learning, AST with L = 10000, and BAT with L = 1000, 5000, 10000 (average reward per step vs. number of target samples). Right: source-task probabilities λ2, ..., λ5 estimated by the BAT algorithm as a function of the FQI iterations.

The number of iterations of the FQI algorithm was empirically fixed to 13. Samples are collected through a sequence of episodes, each one starting from the state x_0 = 0, with actions chosen uniformly at random. For all the experiments, we average over 100 runs and report standard-deviation error bars.

We first consider the pure transfer problem, where no target samples are used in the learning training set (i.e., λ_1 = 0). The objective is to study the impact of the transfer error due to the use of source samples and the effectiveness of BAT in finding a suitable combination of source tasks. The left plot in Fig. 4 compares the performance of FQI with and without the transfer of samples from the first four tasks listed in Tab. 1. In the case of single-task learning, the number of target samples refers to the samples used at learning time, while for BAT it represents the size S of the auxiliary training set used to estimate the transfer error. Thus, while in single-task learning the performance increases with the number of target samples, in BAT they only make the estimate of E_λ more accurate. The number of source samples added to the auxiliary set for each target sample was empirically fixed to one (T = 1). We first run AST with L = 10000 and λ_2 = λ_3 = λ_4 = λ_5 = 0.25 (which on average corresponds to 2500 samples from each source). As can be noticed by looking at the models in Tab. 1, this combination is very different from the target model, and AST does not learn any good policy. On the other hand, even with a small set of auxiliary target samples, BAT is able to learn good policies. This result is due to the existence of linear combinations of source tasks which closely approximate the target task M1 at each iteration of FQI. An example of the proportion coefficients computed at each iteration of BAT is shown in the right plot of Fig. 4. At the first iteration, FQI produces an approximation of the reward function. Given the first four source tasks, BAT finds a combination (λ ≃ (0.2, 0.4, 0.2, 0.2)) that produces the same reward function as R_1. However, after a few FQI iterations, such a combination is no longer able to accurately approximate the functions T_1 Q̃. In fact, the state-transition model of task M2 is different from all the other ones (the step length is doubled). As a result, the coefficient λ_2 drops to zero, while a new combination of the other source tasks is found. Note that BAT significantly improves over single-task learning, in particular when very few target samples are available.
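For completeness, the following sketch shows how a single FQI iteration with the linear RBF features above can be implemented as a regularized least-squares fit of the Bellman targets; the per-iteration re-estimation of λ̂^k discussed in Remark 2 would simply wrap this step (cf. Fig. 2). The discount factor and the ridge term are illustrative defaults of ours, not values reported in the text, and the sketch reuses the features and q_value helpers defined above.

import numpy as np

def fqi_iteration(samples, alpha_prev, gamma=0.99, ridge=1e-6):
    # One fitted Q-iteration step: fit new weights alpha so that
    # phi(x, a)^T alpha ~= r + gamma * max_a' Q_prev(y, a') on the training set.
    # samples: iterable of tuples (x, a, r, y), already drawn according to the
    # current proportions (source rewards/next states replacing target ones
    # when samples are transferred).
    Phi, targets = [], []
    for x, a, r, y in samples:
        Phi.append(features(x, a))
        next_q = max(q_value(y, 0, alpha_prev), q_value(y, 1, alpha_prev))
        targets.append(r + gamma * next_q)
    Phi = np.asarray(Phi)
    targets = np.asarray(targets)
    # Ridge-regularized least squares keeps the fit stable when the Gram
    # matrix is ill-conditioned (cf. the eigenvalue omega in the AST bound).
    A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
    alpha_new = np.linalg.solve(A, Phi.T @ targets)
    return alpha_new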
Figure 5: Transfer from M6, M7, M8, M9. Left: comparison between single-task learning and BAT with L = 1000, 5000, 10000 (average reward per step vs. number of target samples). Right: comparison between single-task learning, BAT with L = 1000 and L = 10000 in addition to the target samples, and BTT (τ = 0.75) with 5000 and 10000 samples for each source task; to improve readability, the plot is truncated at 5000 target samples.

In the general case, the target task cannot be obtained as a combination of the source tasks, as happens when considering the second set of source tasks (M6, M7, M8, M9). The impact of this situation on the learning performance of BAT is shown in the left plot of Fig. 5. Note that, when only a few target samples are available, the transfer of samples from a combination of the source tasks using the BAT algorithm is still beneficial. On the other hand, the performance attainable by BAT is bounded by the transfer error corresponding to the best source-task combination (which in this case is large). As a result, single-task FQI quickly achieves a better performance.

The results presented so far for the BAT transfer algorithm assume that FQI is trained only with the samples obtained through combinations of source tasks. Since a number of target samples is already available in the auxiliary training set, a trivial improvement is to include them in the training set together with the source samples (selected according to the proportions computed by BAT). As shown in the right plot of Fig. 5, this leads to a significant improvement. From the behavior of BAT it is clear that, with a small set of target samples, it is better to transfer as many samples as possible from the source tasks, while as the number of target samples increases, it is preferable to reduce the number of samples obtained from a combination of source tasks that does not actually match the target task. In fact, for L = 10000, BAT has a much better performance at the beginning, but it is then outperformed by single-task learning. On the other hand, for L = 1000 the initial advantage is small, but the performance remains close to single-task FQI for a large number of target samples. This experiment highlights the tradeoff between the need for samples to reduce the estimation error and the resulting transfer error when the target task cannot be expressed as a combination of source tasks (see Section 5). The BTT algorithm provides a principled way to address this tradeoff and, as shown in the right plot of Fig. 5, it exploits the advantage of transferring source samples when only a few target samples are available, and it reduces the weight of the source tasks (so as to avoid large transfer errors) when the target samples alone are sufficient. It is interesting to notice that increasing the number of samples available for each source task from 5000 to 10000 improves the performance in the first part of the graph, while leaving the final performance unchanged. This is due to the capability of the BTT algorithm to avoid the transfer of source samples when there is no need for them, thus avoiding negative transfer effects.

7 Conclusions

In this paper, we formalized and studied the sample-transfer problem. We first derived a finite-sample analysis of the performance of a simple transfer algorithm which includes all the source samples in the training set used to solve a given target task. To the best of our knowledge, this is the first theoretical result for a transfer algorithm in RL showing the potential benefit of transfer over single-task learning. Then, for the case in which the designer has direct access to the source tasks, we introduced an adaptive algorithm which selects the proportions of source tasks so as to minimize the bias due to the use of source samples.
Finally, we considered a more challenging setting, where the number of samples available in each source task is limited and a tradeoff between the amount of transferred samples and the similarity between source and target tasks must be solved. For this setting, we proposed a principled adaptive algorithm. We also reported a detailed experimental analysis on a simple problem which confirms and supports the theoretical findings. This work opens several directions for future work.

• Transfer with transformations. In many problems, there exist simple transformations of the source-task dynamics and rewards which would increase their similarity w.r.t. the target task, thus making the transfer process more effective. How affine transformations could be used in the adaptive transfer algorithms presented in this paper is an interesting direction for future work. In particular, it is an open question whether the cost (in terms of samples) of finding a suitable transformation would be effectively counterbalanced by transferring more similar samples.

• Transfer between tasks with different state-action spaces. In many real applications, source and target tasks might have a different number of state variables and different actions. Thus, the current work should be extended to the more general case of tasks with different state-action spaces, and it should be integrated with inter-task mapping transfer methods (see [14]).

• Transfer with fixed task design. Definition 1 prescribes the process used to generate the training set used in learning the target task: at each state-action pair, the sample is generated from a source task chosen at random according to a multinomial distribution. When the designer has no access to the source tasks and their samples are generated beforehand, this generative model is not reasonable. A different model (fixed task design) should be defined, where each sample comes from a specific source which is fixed in advance. An interesting direction for future work is to understand how this different generative model affects the performance of the transfer algorithms and whether it is possible to define effective adaptive algorithms for this case.

Acknowledgments

This work was supported by the French National Research Agency through the project EXPLO-RA n° ANR-08-COSI-004, by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council and FEDER through the "contrat de projets état région 2007–2013", and by the PASCAL2 European Network of Excellence. The research leading to these results has also received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement n° 231495." + } + ] + }, + "edge_feat": {} + } +}