
Actor-Critic based Improper Reinforcement Learning

Mohammadi Zaki1 Avinash Mohan2 Aditya Gopalan1 Shie Mannor3

Abstract

We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. Towards this, we propose two algorithms: (1) a Policy Gradient-based approach; and (2) an algorithm that can switch between a simple Actor-Critic (AC) based scheme and a Natural Actor-Critic (NAC) scheme depending on the available information. Both algorithms operate over a class of improper mixtures of the given controllers. For the first case, we derive convergence rate guarantees assuming access to a gradient oracle. For the AC-based approach we provide convergence rate guarantees to a stationary point in the basic AC case and to a global optimum in the NAC case. Numerical results on (i) the standard control theoretic benchmark of stabilizing a cartpole; and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable.

1. Introduction

A natural approach to design effective controllers for large, complex systems is to first approximate the system using a tried-and-true Markov decision process (MDP) model, such as the Linear Quadratic Regulator (LQR) (Dean et al., 2017) or tabular MDPs (Auer et al., 2009), and then compute (near-) optimal policies for the assumed model. Though this

$^{1}$ Department of ECE, IISc, Bangalore, India $^{2}$ Boston University, Massachusetts, USA $^{3}$ Faculty of Electrical Engineering, Technion, Haifa, Israel and NVIDIA Research, Israel. Correspondence to: Mohammadi Zaki mohammadi@iisc.ac.in.

Proceedings of the $39^{th}$ International Conference on Machine Learning, Baltimore, Maryland, USA, PMLR 162, 2022. Copyright 2022 by the author(s).

yields favorable results in principle, it is quite possible that errors in describing or understanding the system – leading to misspecified models – may lead to ‘overfitting’, resulting in subpar controllers in practice. Moreover, in many cases, the stability of the designed controller may be crucial and more desirable than optimizing a fine-grained cost function. From the controller design standpoint, it is often easier, cheaper and more interpretable to specify or hardcode control policies based on domain-specific principles, e.g., anti-lock braking system (ABS) controllers (Radac & Precup, 2018). For these reasons, we investigate in this paper a promising, general-purpose reinforcement learning (RL) approach towards designing controllers given predefined ensembles of basic or atomic controllers, which (a) allows for flexibly combining the given controllers to obtain richer policies than the atomic policies, and, at the same time, (b) can preserve the basic structure of the given class of controllers and confer a high degree of interpretability on the resulting hybrid policy.

Overview of the approach. We consider a situation where we are given 'black-box access' to $M$ controllers (maps from state to action distributions) $\{K_1, \dots, K_M\}$ for an unknown MDP. By this we mean that we can choose to invoke any of the given controllers at any point during the operation of the system. With the understanding that the given family of controllers is 'reasonable,' we frame the problem of learning the best combination of the controllers by trial and error. We first set up an improper policy class of all randomized mixtures of the $M$ given controllers - each such mixture is parameterized by a probability distribution over the $M$ base controllers. Applying an improper policy in this class amounts to selecting independently at each time a base controller according to this distribution and implementing the recommended action as a function of the present state of the system. The learner's goal is to find the best performing mixture policy by iteratively testing from the pool of given controllers and observing the resulting state-action-reward trajectory.

Note that the underlying parameterization in our setting is over a set of given controllers which could be potentially abstract and defined for complex MDPs with continuous

state/action spaces, instead of the (standard) policy gradient (PG) view where the parameterization directly defines the policy in terms of the state-action map. Our problem, therefore, hews more closely to a meta RL framework, in that we operate over a set of controllers that have themselves been designed using some optimization framework to which we are agnostic. This has the advantage of conferring a great deal of generality, since the class of controllers can now be chosen to promote any desirable secondary characteristic such as interpretability, ease of implementation or cost effectiveness.

It is also worth noting that our approach is different from treating each of the base controllers as an 'expert' and applying standard mixture-of-experts algorithms, e.g., Hedge or Exponentiated Gradient (Littlestone & Warmuth, 1994; Auer et al., 1995; Kocák et al., 2014; Neu, 2015). Whereas the latter approach is tailored to converge to the best single controller (under the usual gradient approximation framework) and hence qualifies as a 'proper' learning algorithm, the former optimization problem is in the improper class of mixture policies which not only contains each atomic controller but also allows for a true mixture (i.e., one which puts positive probability on at least two elements) of many atomic controllers to achieve optimality; we exhibit concrete examples where this is indeed possible.

Our Contributions. We make the following contributions in this context:

  • We develop a gradient-based RL algorithm to iteratively tune a softmax parameterization of an improper (mixture) policy defined over the base controllers (Algorithm 1). While this algorithm, Softmax Policy Gradient (or Softmax PG), relies on the availability of value function gradients, we later propose a modification that we call GradEst (see Alg. 6 in appendix) to Softmax PG to rectify this. GradEst uses a combination of rollouts and Simultaneous Perturbation Stochastic Approximation (SPSA) (Borkar, 2008) to estimate the value gradient at the current mixture distribution.
  • We show a convergence rate of $\mathcal{O}(1 / t)$ to the optimal value function for finite state-action MDPs. To do this, we employ a novel non-uniform Łojasiewicz-type inequality (Łojasiewicz, 1963) that lower-bounds the 2-norm of the value gradient in terms of the suboptimality of the current mixture policy's value. Essentially, this helps establish that when the gradient of the value function hits zero, the value function is itself close to the optimum.
  • Policy-gradient methods are well-known to suffer from high variance (Peters & Schaal, 2008; Bhatnagar et al., 2009). To circumvent this issue, we develop an algorithm that can switch between a simple Actor-Critic (AC) based scheme and a Natural Actor-Critic (NAC) scheme depending on the available information. The algorithm, 'ACIL' (Sec. 5), executes on a single sample path, without requiring any

forced resets, as is common in many RL algorithms. We provide convergence rate guarantees to a stationary point in the basic AC case and to a global optimum in the NAC case, under some additional (but standard) assumptions (of uniform ergodicity). The total complexity of AC is measured to attain an $(\varepsilon + \text{critic error})$-accurate stationary point. The total complexity of NAC is measured to attain an $(\varepsilon + \text{critic error} + \text{actor error})$-accurate stationary point. We use linear function approximation to approximate the value function, and our convergence analysis shows exactly how this approximation affects the final complexity bound.

  • We corroborate our theory using extensive simulation studies. For the PG based method we use GradEst in two different settings (a) the well-known CartPole system and (b) a scheduling task in a constrained queueing system. We discuss both these settings in detail in Sec. 2, where we also demonstrate the power of our improper learning approach in finding control policies with provably good performance. In our experiments (see Sec. 6), we eschew access to exact value gradients and instead rely on a combination of rollouts and SPSA to estimate them. For the actor-critic based learner, we demonstrate simulations on various queueing-theoretic settings using the natural actor-critic variant of ACIL. All the results show that our proposed algorithms quickly converge to the correct mixture of available atomic controllers.

Related Work (brief). We provide a quick survey of relevant literature. A detailed survey is deferred to the appendix. Policy gradient. The basic policy gradient method has become a cornerstone of modern RL and given birth to an entire class of highly efficient policy search techniques such as CPI (Kakade & Langford, 2002), TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and MADDPG (Lowe et al., 2020). A growing body of recent work shows promising results about convergence rates for PG algorithms over finite state-action MDPs (Agarwal et al., 2020a; Shani et al., 2020; Bhandari & Russo, 2019; Mei et al., 2020), where the parameterization is over the entire space of state-action pairs, i.e., $\mathbb{R}^{S\times A}$ . These advances, however, are partially offset by negative results such as those in Li et al. (2021), which show that the convergence time is $\Omega \left(|\mathcal{S}|^{2^{1 / (1 - \gamma)}}\right)$ where $\mathcal{S}$ is the state space of the MDP and $\gamma$ the discount factor, even with exact gradient knowledge.

Improper learning. The above works concern proper learning, where the policy search space is usually taken to be the set of all deterministic policies for an MDP. Improper learning, on the other hand, has been studied in statistical learning theory for the IID setting (Daniely et al., 2014; 2013). In this representation independent learning framework, the learning algorithm is not restricted to output a hypothesis from a given set of hypotheses.

Boosting. Agarwal et al. (2020b) attempts to frame and solve policy optimization over an improper class by boosting a given class of controllers. This work, however, is situated in the context of non-stochastic control and assumes perfect knowledge of (i) the memory-boundedness of the MDP, and (ii) the state noise vector in every round, which amounts to essentially knowing the MDP transition dynamics. We work in the stochastic MDP setting and assume no access to the MDP's transition kernel. Further, it is assumed in (Agarwal et al., 2020b) that all the atomic controllers available are stabilizing, which, when working with an unknown MDP, is a very strong assumption to make. We make no such assumptions on our atomic controller class; we show our algorithms can begin with provably unstable controllers and yet succeed in stabilizing the system (Sec. 2.2 and 6).

Options framework. Our work differs from the options framework (Barreto et al., 2017; Sutton et al., 1999) for hierarchical RL in spirit, in that we allow for each controller to be applied in each round rather than waiting for a subtask to complete. The current work deals with finding an optimal mixture of basic controllers to solve a particular task. However, if we allow for a state-dependent choice of controllers, then the methods proposed can be generalized for solving hierarchical RL tasks.

Ensemble policy-based RL. Our current work deals with accessing given (possibly separately trained) controllers as black-boxes and learning to combine them optimally. In contrast, in ensemble RL approaches (Maclin & Opitz, 2011; Xiliang et al., 2018; Wiering & van Hasselt, 2008) the base policies are learnt on the fly (e.g., Q-learning, SARSA) by the agent whereas the combining rule is fixed upfront (e.g., majority voting, rank voting, Boltzmann multiplication, etc.). Moreover, the base policies have access to the new system in Ensemble RL, which gives them a distinct advantage. Our method can serve as a meta-RL adaptation framework with theoretical guarantees which can use such pre-trained models to combine them optimally. To the best of our knowledge, ensemble RL works like (Xiliang et al., 2018; Wiering & van Hasselt, 2008) do not provide theoretical guarantees on the learnt combined policy. Our work on the other hand provides a firm theoretical as well as empirical basis for the methods we propose.

Improper learning with given base controllers. Probably the closest resemblance with our work is that of Banijamali et al. (2019) which aims at finding the best convex combination of a given set of base controllers for a given MDP. They however frame it as a planning problem where the transition kernel $P$ is known to the agent. Furthermore, we treat the base controllers as black-box entities, whereas they exploit their structure to compute the state-occupancy measures.

Actor-critic methods. Actor-critic (AC) methods were first introduced in Konda & Tsitsiklis (2000). Natural actor-critic methods were first introduced in (Peters & Schaal, 2008; Bhatnagar et al., 2009). While many studies are available on the asymptotic convergence of AC and NAC, we use the new techniques proposed by Xu et al. (2020) and Barakat et al. (2021) for showing convergence results.

2. Motivating Examples

We begin with two examples that help illustrate the need for improper learning over a given set of atomic controllers. These examples concretely demonstrate the power of this approach to find (improper) control policies that go well beyond what the atomic set can accomplish, while retaining some of their desirable properties (such as interpretability and simplicity of implementation).

2.1. Ergodic Control of the Cartpole System

Consider the Cartpole system, which has, over the years, become a benchmark for testing control strategies (Khalil, 2015). The system's dynamics, evolving in $\mathbb{R}^4$, can be linearized around an (unstable) equilibrium state vector that we designate the origin $(\mathbf{x} = \mathbf{0})$, yielding a Linear Quadratic Regulator (LQR) problem. The objective now reduces to finding a (potentially randomized) control policy $u \equiv \{u(t), t \geqslant 0\}$ that solves $\inf_{u} J(u) \coloneqq \mathbb{E}_{u}\left[\sum_{t=0}^{\infty} \mathbf{x}^{\intercal}(t) Q \mathbf{x}(t) + R u^{2}(t)\right]$ subject to $\mathbf{x}(t+1) = A_{open} \mathbf{x}(t) + \mathbf{b} u(t)$ at all times $t \geqslant 0$.

Under standard assumptions of controllability and observability, this optimization has a stationary, linear solution $u^{*}(t) = -\mathbf{K}^{\intercal}\mathbf{x}(t)$ (Bertsekas, 2011). Moreover, setting $A := A_{open} - \mathbf{b}\mathbf{K}^{\intercal}$ , it is well known that the dynamics $\mathbf{x}(t + 1) = A\mathbf{x}(t)$ , $t \geqslant 0$ , are stable. The usual design strategy for a given Cartpole involves a combination of system identification, followed by linearization and computing the controller gain $\mathbf{K}$ . This would typically produce a controller with tolerable performance fairly quickly, but would also suffer from nonidealities of parameter estimation.
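For concreteness, the Riccati-based design step can be sketched on a hypothetical scalar plant (the actual cartpole state is 4-dimensional; this 1-D toy instance, including the numbers `a=1.2`, `b=1.0`, is our illustration, not from the paper):

```python
def lqr_gain(a, b, q, r, iters=200):
    """Scalar discrete-time LQR gain via the Riccati value recursion.
    Iterates P <- q + a^2 P - (a b P)^2 / (r + b^2 P), then returns
    K = a b P / (r + b^2 P), so that u(t) = -K x(t)."""
    P = q
    for _ in range(iters):
        P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)
    return a * b * P / (r + b * b * P)

# Hypothetical open-loop-unstable scalar plant x(t+1) = 1.2 x(t) + u(t).
K = lqr_gain(a=1.2, b=1.0, q=1.0, r=1.0)
closed_loop = 1.2 - 1.0 * K   # pole of x(t+1) = (a - bK) x(t)
```

Even though the open-loop pole $a = 1.2$ is unstable, the computed gain places the closed-loop pole strictly inside the unit circle, mirroring the role of $A = A_{open} - \mathbf{b}\mathbf{K}^{\intercal}$ above.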

To alleviate this problem, first consider a generic (ergodic) control policy that builds on this strategy by switching across a menu of controllers $\{K_1,\dots ,K_N\}$ produced as above. That is, at any time $t$, this policy chooses $K_{i}$, $i\in [N]$, w.p. $p_i$, so that the control input at time $t$ is $u(t) = -\mathbf{K}_i^\top \mathbf{x}(t)$ w.p. $p_i$. Let $A(i)\coloneqq A_{open} - \mathbf{b}\mathbf{K}_i^\top$. The resulting controlled dynamics are given by $\mathbf{x}(t + 1) = A(r(t))\mathbf{x}(t)$, $t\geqslant 0$, where $r(t) = i$ w.p. $p_i$, IID across $t$.

This is an example of an ergodic parameter linear system (EPLS) (Bolzern et al., 2008), which is said to be Exponentially Almost Surely Stable (EAS) if the state norm decays at least exponentially fast with time: $\mathbb{P}\left\{\limsup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t)\| \leqslant -\rho \right\} = 1$ for some $\rho >0$. Let the random variable $\lambda (\omega)\coloneqq \limsup_{t\to \infty}\frac{1}{t}\log \| \mathbf{x}(t,\omega)\|$. For our dynamics $\mathbf{x}(t + 1) = A(r(t))\mathbf{x}(t)$, $t\geqslant 0$, it is seen that the Lyapunov exponent $\frac{1}{t}\log \| \mathbf{x}(t)\|$ is at most $\sum_{i = 1}^{N}p_{i}\log \| A(i)\|$ a.s. (see appendix for details).

A good mixture controller can now be designed by choosing $\{p_1,\dots ,p_N\}$ such that $\lambda (\omega) < - \rho$ for some $\rho >0$,

ensuring exponentially almost sure stability (provided $\log \| A(i)\| < 0$ for some $i$). As we show in the sequel, our policy gradient algorithm (SoftMax PG) learns an improper mixture $\{p_1,\dots ,p_N\}$ that (i) can stabilize the system even when a majority of the constituent atomic controllers $\{K_{1},\dots ,K_{N}\}$ are unstable, i.e., converges to a mixture that ensures that the average exponent $\lambda (\omega) < 0$, and (ii) performs better than each of the atomic controllers.
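The EAS criterion can be checked numerically. The following sketch (our own toy example, using hypothetical 1-D closed-loop gains rather than the paper's 4-D matrices) estimates the Lyapunov exponent of a two-controller mixture that picks an unstable controller 40% of the time; since $0.4\log 1.2 + 0.6\log 0.5 < 0$, the mixture is exponentially almost surely stable:

```python
import math
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def lyapunov_exponent(closed_loops, probs, x0, T=2000, seed=0):
    """Estimate (1/T) log ||x(T)|| for the switched system
    x(t+1) = A(r(t)) x(t), with r(t) ~ probs drawn IID across t."""
    rng = random.Random(seed)
    x = list(x0)
    log_norm = 0.0
    for _ in range(T):
        A = rng.choices(closed_loops, weights=probs)[0]
        x = matvec(A, x)
        n = math.sqrt(sum(v * v for v in x))
        log_norm += math.log(n)   # accumulate log-growth of the state norm
        x = [v / n for v in x]    # renormalize to avoid under/overflow
    return log_norm / T

A_unstable = [[1.2]]  # |a| > 1: unstable on its own
A_stable = [[0.5]]    # strongly stable
lam = lyapunov_exponent([A_unstable, A_stable], [0.4, 0.6], [1.0])
```

The estimate `lam` concentrates near $0.4\log 1.2 + 0.6\log 0.5 \approx -0.34 < 0$, so the state norm decays exponentially along almost every sample path despite the unstable constituent.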

2.2. Scheduling in Constrained Queueing Networks

We consider a system that comprises two queues fed by independent, stochastic arrival processes $A_{i}(t)$, $i \in \{1,2\}$, $t \in \mathbb{N}$. The length of queue $i$, measured at the beginning of time slot $t$, is denoted by $Q_{i}(t) \in \mathbb{Z}_{+}$.


Figure 1: $K_{1}$ and $K_{2}$ by themselves can only stabilize $\mathcal{C}_1 \cup \mathcal{C}_2$ (gray rectangles). With improper learning, we enlarge the set of stabilizable arrival rates by the triangle $\Delta ABC$ shown in purple, above.

A common server serves both queues and can drain at most one packet from the system in a time slot. The server, therefore, needs to decide which of the two queues it intends to serve in a given slot (we assume that once the server chooses to serve a packet, service succeeds with probability 1). The server's decision is denoted by the vector $\mathbf{D}(t) \in \mathcal{A} := \{[0,0], [1,0], [0,1]\}$, where a "1" denotes service and a "0" denotes lack thereof. Let $\mathbb{E}[A_{i}(t)] = \lambda_{i}$ and note that the arrival rate vector $\lambda = [\lambda_1, \lambda_2]$ is unknown to the learner. We aim to find a (potentially randomized) policy $\pi$ to minimize the discounted system backlog given by $J_{\pi}(\mathbf{Q}(0)) := \mathbb{E}_{\mathbf{Q}(0)}^{\pi}\left[\sum_{t=0}^{\infty} \gamma^{t} (Q_{1}(t) + Q_{2}(t))\right]$.

Any policy with $J_{\pi}(\cdot) < \infty$ is said to be stabilizing (or, equivalently, a stable policy). It is well known that there exist stabilizing policies iff $\lambda_1 + \lambda_2 < 1$ (Tassiulas & Ephremides, 1992). A policy $\pi_{\mu_1,\mu_2}$ that chooses Queue $i$ w.p. $\mu_i$ in every slot can provably stabilize a system iff $\mu_i > \lambda_i, \forall i \in \{1,2\}$. Now, assume our control set consists of two stationary policies $K_1, K_2$ with $K_1 \equiv \pi_{\varepsilon,1 - \varepsilon}$, $K_2 \equiv \pi_{1 - \varepsilon, \varepsilon}$ and sufficiently small $\varepsilon > 0$. That is, we have $M = 2$ controllers $K_1, K_2$. Clearly, neither of these can, by itself, stabilize a network with $\lambda = [0.49, 0.49]$.

However, an improper mixture of the two that selects $K_{1}$ and $K_{2}$ each with probability $1/2$ can. In fact, as Fig. 1 shows, our improper learning algorithm can stabilize all arrival rates in $\mathcal{C}_1 \cup \mathcal{C}_2 \cup \Delta ABC$ , without prior knowledge of $[\lambda_1, \lambda_2]$ . In other words, our algorithm enlarges the stability region by the triangle $\Delta ABC$ , over and above $\mathcal{C}_1 \cup \mathcal{C}_2$ . We

will return to these examples in Sec. 6, and show, using experiments, (1) how our improper learner converges to the stabilizing mixture of the available policies and (2) if the optimal policy is among the available controllers, how our algorithm can find and converge to it.
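A short simulation makes the enlargement of the stability region concrete. The sketch below is our own illustration (the arrival rates $\lambda = [0.45, 0.45]$ and $\varepsilon = 0.05$ are chosen for the example, not taken from the paper): either atomic scheduler alone lets the starved queue grow without bound, while the 1/2-1/2 improper mixture keeps both queues stable.

```python
import random

def avg_backlog(mix, lam=(0.45, 0.45), eps=0.05, T=20000, seed=1):
    """Time-averaged total backlog under an improper mixture `mix` of
    the two atomic schedulers K1 = pi_{eps, 1-eps}, K2 = pi_{1-eps, eps}."""
    rng = random.Random(seed)
    serve1_prob = [eps, 1 - eps]   # K_m's probability of serving queue 1
    q = [0, 0]
    total = 0.0
    for _ in range(T):
        # sample a base controller from the mixture, then its service choice
        m = 0 if rng.random() < mix[0] else 1
        serve = 0 if rng.random() < serve1_prob[m] else 1
        if q[serve] > 0:
            q[serve] -= 1
        for i in (0, 1):           # Bernoulli(lam_i) arrivals
            if rng.random() < lam[i]:
                q[i] += 1
        total += q[0] + q[1]
    return total / T

avg_k1 = avg_backlog([1.0, 0.0])   # always K1: queue 1 starves, backlog grows
avg_mix = avg_backlog([0.5, 0.5])  # 1/2-1/2 mixture: both queues stable
```

Under $K_1$ alone, queue 1 receives service rate $\varepsilon = 0.05 < 0.45$ and grows linearly, whereas the mixture offers each queue service rate $0.5 > 0.45$, so the time-averaged backlog stays small.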

3. Problem Statement and Notation

A (finite) Markov Decision Process $(S, \mathcal{A}, \mathbb{P}, r, \rho, \gamma)$ is specified by a finite state space $S$, a finite action space $\mathcal{A}$, a transition probability matrix $\mathbb{P}$, where $\mathbb{P}(\tilde{s}|s, a)$ is the probability of transitioning into state $\tilde{s}$ upon taking action $a \in \mathcal{A}$ in state $s$, a single stage reward function $r: S \times \mathcal{A} \to \mathbb{R}$, a starting state distribution $\rho$ over $S$ and a discount factor $\gamma \in (0,1)$. A (stationary) policy or controller $\pi: S \to \mathcal{P}(\mathcal{A})$ specifies a decision-making strategy in which the learner chooses actions $(a_t)$ adaptively based on the current state $(s_t)$, i.e., $a_t \sim \pi(s_t)$. $\pi$ and $\rho$, together with $\mathbb{P}$, induce a probability measure $\mathbb{P}_{\rho}^{\pi}$ on the space of all sample paths of the underlying Markov process and we denote by $\mathbb{E}_{\rho}^{\pi}$ the associated expectation operator. The value function of policy $\pi$ (also called the value of policy $\pi$), denoted by $V^{\pi}$, is the total discounted reward obtained by following $\pi$, i.e., $V^{\pi}(\rho) := \mathbb{E}_{\rho}^{\pi}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)\right]$.

Improper Learning. We assume that the learner is provided with a finite number of (stationary) controllers $\mathcal{C} \coloneqq \{K_1, \dots, K_M\}$ and, as described below, set up a parameterized improper policy class $\mathcal{I}_{soft}(\mathcal{C})$ that depends on $\mathcal{C}$. The aim, therefore, is to identify the best policy for the given MDP within this class, i.e.,

$$\pi^{*} = \underset{\pi \in \mathcal{I}_{soft}(\mathcal{C})}{\operatorname{argmax}}\; V^{\pi}(\rho). \tag{1}$$

We now describe the construction of the class $\mathcal{I}_{soft}(\mathcal{C})$ .

The Softmax Policy Class. We assign a weight $\theta_{m}\in \mathbb{R}$ to each controller $K_{m}\in \mathcal{C}$ and define $\theta \coloneqq [\theta_1,\dots ,\theta_M]$. The improper class $\mathcal{I}_{soft}$ is parameterized by $\theta$ as follows. In each round, the policy $\pi_{\theta}\in \mathcal{I}_{soft}(\mathcal{C})$ chooses a controller drawn from $\mathrm{softmax}(\theta)$, i.e., the probability of choosing Controller $K_{m}$ is given by $\pi_{\theta}(m)\coloneqq e^{\theta_m} / \left(\sum_{m' = 1}^{M}e^{\theta_{m'}}\right)$. Note, therefore, that in every round, our algorithm interacts with the MDP only through the controller sampled in that round. In the rest of the paper, we will deal exclusively with a fixed and given $\mathcal{C}$ and the resultant $\mathcal{I}_{soft}$. Therefore, we overload the notation $\pi_{\theta_t}(a|s)$ for any $a\in \mathcal{A}$ and $s\in S$ to denote the probability with which the algorithm chooses action $a$ in state $s$ at time $t$. For ease of notation, whenever the context is clear, we will also drop the subscript $\theta$, i.e., $\pi_{\theta_t}\equiv \pi_t$. Hence, we have at any time $t\geqslant 0$: $\pi_{\theta_t}(a|s) = \sum_{m = 1}^{M}\pi_{\theta_t}(m)K_m(s,a)$. Since we deal with gradient-based methods in the sequel, we define the value gradient of policy $\pi_{\theta}\in \mathcal{I}_{soft}$ by $\nabla_{\theta}V^{\pi_{\theta}}\equiv \frac{dV^{\pi_{\theta}}}{d\theta}$. We say that $V^{\pi_{\theta}}$ is $\beta$-smooth if $\nabla_{\theta}V^{\pi_{\theta}}$ is $\beta$-Lipschitz (Agarwal et al., 2020a). Finally, for any two integers $a$ and $b$, let

Algorithm 1 SoftMax PG
Input: learning rate $\eta >0$ , initial state distribution $\mu$
Initialize: each $\theta_{m}^{1} = 1$ for all $m\in [M]$; $s_1\sim \mu$
for $t = 1$ to $T$ do
  Choose controller $m_t\sim \pi_t$
  Play action $a_{t}\sim K_{m_{t}}(s_{t},\cdot)$
  Observe $s_{t + 1}\sim \mathbb{P}(\cdot|s_t,a_t)$
  Update: $\theta_{t + 1} = \theta_t + \eta \nabla_{\theta_t}V^{\pi_{\theta_t}}$
end for

$\mathbb{I}_{ab}$ denote the indicator that $a = b$.
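The improper mixture's induced action distribution can be written down directly. The sketch below is our illustration (the two deterministic one-state controllers are made up for the example) of $\pi_{\theta}(a|s) = \sum_{m}\pi_{\theta}(m)K_m(s,a)$:

```python
import math

def softmax(theta):
    """Numerically stable softmax over the controller weights theta."""
    mx = max(theta)
    exps = [math.exp(v - mx) for v in theta]
    z = sum(exps)
    return [e / z for e in exps]

def improper_action_probs(theta, controllers, s):
    """pi_theta(a|s) = sum_m pi_theta(m) K_m(s, a): sample a base
    controller from softmax(theta), then an action from it.
    Indexing: controllers[m][s][a]."""
    mix = softmax(theta)
    n_actions = len(controllers[0][s])
    return [sum(mix[m] * controllers[m][s][a] for m in range(len(controllers)))
            for a in range(n_actions)]

# two hypothetical controllers over 1 state, 2 actions
K = [[[1.0, 0.0]],   # K_1 always plays action 0
     [[0.0, 1.0]]]   # K_2 always plays action 1
probs = improper_action_probs([0.0, 0.0], K, 0)   # uniform mixture
```

With equal weights the mixture plays each action with probability 1/2, a policy neither atomic controller can realize on its own.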

Comparison to the standard PG setting. The problem we define is different from the usual policy gradient setting, where the parameterization completely defines the policy in terms of the state-action mapping. One could follow the methodology of (Mei et al., 2020) by assigning a parameter $\theta_{s,m}$ for every $s\in S$, $m\in [M]$. With some calculation, it can be shown that this is equivalent to the tabular setting with $S$ states and $M$ actions, with the new 'reward' defined by $r(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)r(s,a)$, where $r(s,a)$ is the usual expected reward obtained at state $s$ upon playing action $a\in \mathcal{A}$. Following the approach in (Mei et al., 2020) on this modified setting, it can be shown that the policy converges, for each $s\in S$, so that $\pi_{\theta}(m^{*}(s)\mid s)\to 1$, which is the optimal policy. However, the problem that we address is to select a single mixture (from within $\mathcal{I}_{soft}$, the convex hull of the given $M$ controllers) that guarantees maximum return when that single mixture is played for all time.

4. Improper Learning using Gradients

In this and the following sections, we propose and analyze a policy gradient-based algorithm that provably finds the best, potentially improper, mixture of controllers for the given MDP. While we employ gradient ascent to optimize the mixture weights, the fact that this procedure works at all is far from obvious. We begin by noting that $V^{\pi_{\theta}}$, as described in Section 3, is nonconcave in $\theta$ for both direct and softmax parameterizations, which renders the standard tools of convex optimization inapplicable.

Lemma 4.1. (Non-concavity of Value function) There is an MDP and a set of controllers, for which the maximization problem of the value function (i.e. (1)) is non-concave for both the SoftMax and direct parameterizations, i.e., $\theta \mapsto V^{\pi_{\theta}}$ is non-concave.

The proof follows from a counterexample whose construction we show in the appendix. Our PG algorithm, SoftMax PG, is shown in Algorithm 1. The parameters $\theta \in \mathbb{R}^{M}$ which define the policy are updated by following the gradient of the value function at the current policy parameters.

Convergence Guarantees. The following result shows that

with SoftMax PG, the value function converges to that of the best in-class policy at a rate $\mathcal{O}(1/t)$. Furthermore, the theorem shows an explicit dependence on the number of controllers $M$, in place of the usual $|\mathcal{S}|$. Note that with perfect gradient knowledge the algorithm becomes deterministic; access to exact gradients is a standard assumption in the analysis of PG algorithms (Fazel et al., 2018; Agarwal et al., 2020a; Mei et al., 2020).

Theorem 4.2 (Convergence of Policy Gradient). With $\{\theta_t\}_{t\geqslant 1}$ generated as in Algorithm 1 and using a learning rate $\eta = \frac{(1 - \gamma)^2}{7\gamma^2 + 4\gamma + 5}$, for all $t\geqslant 1$, $V^{*}(\rho) - V^{\pi_{\theta_t}}(\rho) = \mathcal{O}\left(\frac{1}{t}\frac{M\gamma^2}{c_t^2(1 - \gamma)^3}\right)$, where $c_{t}\coloneqq \min_{1\leqslant s\leqslant t}\min_{m:\pi^{*}(m) > 0}\pi_{\theta_s}(m)$.

Remark 4.3. The quantity $c_{t}$ in the statement is the minimum probability that SoftMax PG puts on the controllers to which the best mixture $\pi^{*}$ assigns positive probability mass. Empirical evidence (Sec. 6) leads us to conjecture that $\lim_{t\to \infty}c_t$ is positive, which would imply a convergence rate of $\mathcal{O}(1 / t)$.

Remark 4.4. The proof of the above theorem uses the $\beta$-smoothness of the value function under the softmax parameterization, along with a new non-uniform Łojasiewicz-type inequality (NULI) for our probabilistic mixture class, which lower-bounds the magnitude of the gradient of the value function; we state it below.

Lemma 4.5 (NULI). $\left\| \frac{\partial V^{\pi_{\theta}}(\mu)}{\partial \theta} \right\|_2 \geqslant \frac{1}{\sqrt{M}} \left( \min_{m: \pi^{*}(m) > 0} \pi_{\theta}(m) \right) \times \left\| \frac{d_{\rho}^{\pi^{*}}}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty}^{-1} \times \left[V^{*}(\rho) - V^{\pi_{\theta}}(\rho)\right].$

The proof of Theorem 4.2 then follows by an induction argument over $t \geqslant 1$.

Technical Challenges. We note here that while the basic recipe for the analysis of Theorem 4.2 is similar to (Mei et al., 2020), our setting does not directly inherit the intuition of standard PG (sPG) analysis. (1) With $|\mathcal{S} \times \mathcal{A}| < \infty$, the sPG analysis critically depends on the fact that a deterministic optimal policy exists and shows convergence to it. In contrast, in our setting, $\pi^{*}$ could be a strictly randomized mixture of the base controllers (see Sec. 2). (2) A crucial step in sPG analysis is establishing that the value function $V^{\pi}(s), \forall s \in S$, increases monotonically with time, so that the parameter of the optimal action $\theta_{s,a^{*}} \uparrow \infty$. In the appendix, we supply a simple counterexample showing that monotonicity of the $V$ function is not guaranteed in our setting for every $s \in S$. (3) The value function gradient in sPG has no 'cross contamination' from other states, in the sense that modifying the parameter at one state does not affect the values of the others. This plays a crucial part in simplifying the proof of global convergence to the optimal policy in sPG analysis. Our setting cannot leverage this property since the value function gradient at a given controller possesses contributions from all states.

For the special case of $|S| = 1$, which corresponds to the multi-armed bandit, each controller is a probability distribution over the $A$ arms of the bandit. We call this special case Bandit-over-Bandits. We obtain a convergence rate of $\mathcal{O}\left(M^2 /t\right)$ to the optimum and recover an $M^2\log T$ regret bound when our SoftMax PG algorithm is applied to this special case. We refer to the appendix for details.
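In this one-state special case the exact gradient is available in closed form: with $q_m$ the expected reward of controller $m$ and $V = \sum_m \pi_\theta(m) q_m$, one has $\partial V/\partial \theta_m = \pi_\theta(m)(q_m - V)$. A minimal sketch of SoftMax PG on this case (our toy instance; the three reward values are made up):

```python
import math

def softmax(theta):
    mx = max(theta)
    exps = [math.exp(v - mx) for v in theta]
    z = sum(exps)
    return [e / z for e in exps]

def pg_bandit(q, eta=0.5, T=2000):
    """SoftMax PG for |S| = 1 ('Bandit-over-Bandits'): q[m] is the
    expected reward of base controller m, and the exact gradient is
    dV/dtheta_m = pi(m) * (q[m] - V)."""
    theta = [1.0] * len(q)
    for _ in range(T):
        pi = softmax(theta)
        V = sum(p * qm for p, qm in zip(pi, q))
        theta = [th + eta * p * (qm - V) for th, p, qm in zip(theta, pi, q)]
    return softmax(theta)

# three hypothetical base controllers with expected rewards 0.2, 0.9, 0.5
pi_final = pg_bandit([0.2, 0.9, 0.5])
```

Here the value is linear in the mixture weights, so the optimum sits at the single best controller and the iterates concentrate on it; in general MDPs the optimal mixture can be strictly randomized (Sec. 2.2).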

Discussion on $c_t$. Convergence in Theorem 4.2 depends inversely on $c_t^2$. It follows that in order for SoftMax PG to converge, $c_t$ must either (a) converge to a positive constant, or (b) decay (to 0) slower than $\mathcal{O}\left(1 / \sqrt{t}\right)$. The technical challenges discussed above render an analytical proof of this extremely hard. Hence, while we currently do not show this theoretically, our experiments in Sec. 6 repeatedly confirm that its empirical analog $\bar{c}_t$ (defined formally in Sec. 6) approaches a positive value. Hence, we conjecture that the rate of convergence in Thm 4.2 is $\mathcal{O}(1 / t)$.

5. Actor-Critic based Improper Learning

Softmax PG follows a gradient ascent scheme to solve the optimization problem (1), but is limited by its requirement of the true gradient in every round. To address situations where this might be unavailable, we resort to a Monte Carlo sampling based procedure (see appendix: Alg. 6), which may lead to high variance. In this section, we take an alternative approach and provide a new algorithm based on an actor-critic framework for solving our problem. Actor-critic methods are well known to have lower variance than their Monte Carlo counterparts (Konda & Tsitsiklis, 2000).

We begin by proposing modifications to the standard $Q$-function and advantage function definitions. Recall that we wish to solve the following optimization problem: $\max_{\pi \in \mathcal{I}_{soft}}\mathbb{E}_{s\sim \rho}[V^{\pi}(s)]$, where $\pi$ is some distribution over the $M$ base controllers. Let $\tilde{Q}^{\pi}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi}(s,a)$ and $\tilde{A}^{\pi}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)A^{\pi}(s,a) = \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi}(s,a) - V^{\pi}(s)$, where $Q^{\pi}$ and $A^{\pi}$ are the usual action-value and advantage functions, respectively. We also define the new reward function $\tilde{r}(s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)r(s,a)$ and a new transition kernel $\tilde{P}(s^{\prime}|s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)\mathbb{P}(s^{\prime}|s,a)$. Then, following the distribution $\pi$ over the controllers induces a Markov chain on the state space $S$. Define $\nu_{\pi}(s,m)$ as the state-controller visitation measure induced by the policy $\pi$: $\nu_{\pi}(s,m)\coloneqq (1 - \gamma)\sum_{t\geqslant 0}\gamma^{t}\mathbb{P}^{\pi}(s_{t} = s,m_{t} = m) = d_{\rho}^{\pi}(s)\pi (m)$. With these definitions, we have the following variant of the policy-gradient theorem.
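When the base controllers are given as tables, the lifted quantities $\tilde r$ and $\tilde P$ are direct sums over actions. A minimal sketch (our own, with made-up 2-state/2-action/2-controller data):

```python
def lift_to_controllers(K, r, P):
    """r~(s,m) = sum_a K_m(s,a) r(s,a);
    P~(s'|s,m) = sum_a K_m(s,a) P(s'|s,a).
    Indexing: K[m][s][a], r[s][a], P[s][a][s_next];
    results are indexed r_tilde[s][m], P_tilde[s][m][s_next]."""
    M, S, A = len(K), len(r), len(r[0])
    r_tilde = [[sum(K[m][s][a] * r[s][a] for a in range(A))
                for m in range(M)] for s in range(S)]
    P_tilde = [[[sum(K[m][s][a] * P[s][a][sp] for a in range(A))
                 for sp in range(S)] for m in range(M)] for s in range(S)]
    return r_tilde, P_tilde

# hypothetical controllers: K_1 deterministic, K_2 uniform over actions
K = [[[1.0, 0.0], [0.0, 1.0]],
     [[0.5, 0.5], [0.5, 0.5]]]
r = [[1.0, 0.0], [0.0, 2.0]]
P = [[[1.0, 0.0], [0.0, 1.0]],   # state 0: action 0 stays, action 1 moves
     [[0.5, 0.5], [0.5, 0.5]]]   # state 1: both actions mix uniformly
r_tilde, P_tilde = lift_to_controllers(K, r, P)
```

Each row of $\tilde{P}(\cdot|s,m)$ remains a probability distribution, so the pair $(\tilde r, \tilde P)$ defines a bona fide MDP over states and controller indices.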

Lemma 5.1 (Modified Policy Gradient Theorem). $\nabla_{\theta}V^{\pi_{\theta}}(\rho) = \mathbb{E}_{(s,m)\sim \nu_{\pi_{\theta}}}[\tilde{Q}^{\pi_{\theta}}(s,m)\psi_{\theta}(m)] = \mathbb{E}_{(s,m)\sim \nu_{\pi_{\theta}}}[\tilde{A}^{\pi_{\theta}}(s,m)\psi_{\theta}(m)]$ , where $\psi_{\theta}(m) \coloneqq \nabla_{\theta}\log (\pi_{\theta}(m))$ .
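For the softmax parameterization $\pi_{\theta}(m) \propto e^{\theta_m}$ used here, the score function in Lemma 5.1 has the closed form $\psi_{\theta}(m) = e_m - \pi_{\theta}$, where $e_m$ is the $m$-th standard basis vector. A small sketch (hypothetical helpers, not the paper's code) that can be checked against a numerical gradient:

```python
import numpy as np

# Sketch: score function of the softmax policy over M controllers,
# psi_theta(m) = grad_theta log pi_theta(m) = e_m - pi_theta.
def softmax(theta):
    z = np.exp(theta - theta.max())  # shift for numerical stability
    return z / z.sum()

def score(theta, m):
    psi = -softmax(theta)
    psi[m] += 1.0                    # e_m - pi_theta
    return psi
```

Note that $\psi_\theta$ does not depend on the state, which is what lets the actor update work at the controller level.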

Note the independence of the score function $\psi$ from the state

Algorithm 2 Actor-Critic based Improper RL (ACIL)
Input: feature map $\varphi$ , actor stepsize $\alpha$ , critic stepsize $\beta$ , regularization parameter $\lambda$ , mode 'AC' or 'NAC'
Initialize: $\theta_0 = (1,1,\dots ,1)_{M\times 1}$ , $s_0\sim \rho$
flag $= \mathbb{1}_{\mathrm{NAC}}$ {Selects AC or NAC}
for $t\gets 0$ to $T - 1$ do
  $s_{init} = s_{t - 1,B}$ (when $t = 0$ , $s_{init} = s_0$ )
  $w_{t},s_{t,0}\leftarrow \mathrm{Critic\text{-}TD}(s_{init},\pi_{\theta_t},\varphi ,\beta ,T_c,H)$
  $F_{t}(\theta_{t})\gets 0$
  for $i\gets 0$ to $B - 1$ do
    $m_{t,i}\sim \pi_{\theta_t}$ , $a_{t,i}\sim K_{m_{t,i}}(s_{t,i},\cdot)$ , $s_{t,i + 1}\sim \tilde{P}(\cdot\mid s_{t,i},m_{t,i})$
    $\mathcal{E}_{w_t}(s_{t,i},m_{t,i},s_{t,i + 1}) = \tilde{r}(s_{t,i},m_{t,i}) + (\gamma \varphi (s_{t,i + 1}) - \varphi (s_{t,i}))^{\top}w_{t}$
    $F_{t}(\theta_{t})\leftarrow F_{t}(\theta_{t}) + \frac{1}{B}\psi_{\theta_{t}}(m_{t,i})\psi_{\theta_{t}}(m_{t,i})^{\top}$
  end for
  if flag then
    $G_{t}\coloneqq F_{t}(\theta_{t}) + \lambda I$
    $\theta_{t + 1} = \theta_t + G_{t}^{-1}\frac{\alpha}{B}\sum_{i = 0}^{B - 1}\mathcal{E}_{w_{t}}(s_{t,i},m_{t,i},s_{t,i + 1})\psi_{\theta_{t}}(m_{t,i})$
  else
    $\theta_{t + 1} = \theta_t + \frac{\alpha}{B}\sum_{i = 0}^{B - 1}\mathcal{E}_{w_t}(s_{t,i},m_{t,i},s_{t,i + 1})\psi_{\theta_t}(m_{t,i})$
  end if
  $\pi_{\theta_{t + 1}} = \mathrm{softmax}(\theta_{t + 1})$
end for
Output: $\theta_{\hat{T}}$ with $\hat{T}$ chosen uniformly at random from $\{1,\ldots ,T\}$

$s$ . For the gradient ascent update of the parameters $\theta$ , we need to estimate $\tilde{A}^{\pi_{\theta}}(s,m)$ where $(s,m)$ is drawn according to $\nu_{\pi_{\theta}}(\cdot ,\cdot)$ . We recall how to sample from $\nu_{\pi}$ . Following Konda & Tsitsiklis (2000) and recent works such as Xu et al. (2020) and Barakat et al. (2021), cast into our setting, observe that $\nu_{\pi}$ is the stationary distribution of a Markov chain over the pairs $(s,m)$ with state-to-state transition kernel $\bar{P}(s'\mid s,m)\coloneqq \gamma \tilde{P}(s'\mid s,m) + (1 - \gamma)\rho (s')$ and $m\sim \pi (\cdot)$ .
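Concretely, simulating $\bar{P}$ amounts to following the chain with probability $\gamma$ and restarting from $\rho$ otherwise. A minimal tabular sketch (our illustration, assuming access to the induced kernel $\tilde{P}$ as an array):

```python
import numpy as np

# Sketch: drawing (s, m) pairs whose stationary law is nu_pi by simulating the
# restart kernel P_bar(s'|s, m) = gamma * P_tilde(s'|s, m) + (1-gamma) * rho(s').
def sample_nu(pi, P_tilde, rho, gamma, n_steps, rng):
    S = len(rho)
    s = rng.choice(S, p=rho)
    samples = []
    for _ in range(n_steps):
        m = rng.choice(len(pi), p=pi)            # draw a controller
        samples.append((s, m))
        if rng.random() < gamma:
            s = rng.choice(S, p=P_tilde[s, m])   # continue the chain
        else:
            s = rng.choice(S, p=rho)             # geometric restart
    return samples
```

Because $m$ is drawn afresh from $\pi$ at each step, the controller marginal of the samples is exactly $\pi$, matching the factorization $\nu_\pi(s,m) = d_\rho^\pi(s)\pi(m)$.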

Algorithm Description. We present the algorithm in detail in Algorithm 2, along with a subroutine (Alg 3) that updates the critic's parameters. ACIL is a single-trajectory algorithm, in the sense that it does not require forced resets along the run. We begin with the critic's updates. The critic uses linear function approximation $V_{w}(s) \coloneqq \varphi(s)^{\top} w$ , and uses TD learning to update its parameters $w \in \mathbb{R}^d$ . We assume that $\varphi(\cdot): S \to \mathbb{R}^d$ is a known feature mapping. Let $\Phi$ be the corresponding $|S| \times d$ matrix. We assume that the columns of $\Phi$ are linearly independent. Next, based on the critic's parameters, the actor approximates the $\tilde{A}(s, m)$ function using the TD error: $\mathcal{E}_{w}(s, m, s') = \tilde{r}(s, m) + (\gamma \varphi(s') - \varphi(s))^{\top} w$ .
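The resulting actor step (the AC branch of Alg 2, under our reading) can be sketched as follows; `phi`, `r_tilde`, and the batch format are our own illustrative assumptions, not the paper's code:

```python
import numpy as np

# Sketch of one ACIL actor step (AC branch): the TD error
# E_w(s, m, s') = r_tilde(s, m) + (gamma*phi(s') - phi(s))^T w serves as an
# estimate of A_tilde(s, m), and theta moves along its score-weighted average.
def actor_step(theta, batch, w, phi, r_tilde, gamma, alpha):
    # batch: list of (s, m, s_next) tuples; phi: (S, d) feature matrix
    M = theta.shape[0]
    pi = np.exp(theta - theta.max()); pi /= pi.sum()
    g = np.zeros(M)
    for s, m, s_next in batch:
        td = r_tilde[s, m] + (gamma * phi[s_next] - phi[s]) @ w
        psi = -pi.copy(); psi[m] += 1.0        # softmax score: e_m - pi_theta
        g += td * psi
    return theta + (alpha / len(batch)) * g
```

The NAC branch would additionally precondition `g` by the regularized empirical Fisher matrix $G_t = F_t(\theta_t) + \lambda I$ before the update.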

In order to provide guarantees of the convergence rates

Algorithm 3 Critic-TD Subroutine
Input: $s_{init},\pi ,\varphi ,\beta ,T_c,H$
Initialize: $w_{0}$
for $k\gets 0$ to $T_{c} - 1$ do
  $s_{k,0} = s_{k - 1,H}$ (when $k = 0$ , $s_{k,0} = s_{init}$ )
  for $j\gets 0$ to $H - 1$ do
    $m_{k,j}\sim \pi (\cdot)$ , $a_{k,j}\sim K_{m_{k,j}}(s_{k,j},\cdot)$ , $s_{k,j + 1}\sim \tilde{P}(\cdot\mid s_{k,j},m_{k,j})$
    $\mathcal{E}_{w_k}(s_{k,j},m_{k,j},s_{k,j + 1}) = \tilde{r}(s_{k,j},m_{k,j}) + (\gamma \varphi (s_{k,j + 1}) - \varphi (s_{k,j}))^\top w_k$
  end for
  $w_{k + 1} = w_k + \frac{\beta}{H}\sum_{i = 0}^{H - 1}\mathcal{E}_{w_k}(s_{k,i},m_{k,i},s_{k,i + 1})\varphi (s_{k,i})$
end for
Output: $w_{T_c},s_{T_c - 1,H}$
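The Critic-TD subroutine above can be sketched in a tabular setting as follows; this is our illustration (sampling from arrays `r_tilde`, `P_tilde` stands in for interacting with the environment):

```python
import numpy as np

# Sketch of the Critic-TD subroutine (Alg 3): batched TD(0) updates of the
# linear critic w, returning the final weights and the last visited state.
def critic_td(s_init, pi, phi, beta, T_c, H, r_tilde, P_tilde, gamma, rng):
    d = phi.shape[1]
    w = np.zeros(d)
    s = s_init
    for _ in range(T_c):
        upd = np.zeros(d)
        for _ in range(H):
            m = rng.choice(len(pi), p=pi)                    # draw controller
            s_next = rng.choice(phi.shape[0], p=P_tilde[s, m])
            td = r_tilde[s, m] + (gamma * phi[s_next] - phi[s]) @ w
            upd += td * phi[s]
            s = s_next
        w = w + (beta / H) * upd                             # batched TD step
    return w, s
```

On a one-state chain with reward 1 and $\gamma = 0.5$, the TD fixed point is $w^* = 1/(1-\gamma) = 2$, which the iteration approaches geometrically.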

of Algorithm ACIL, we make the following assumptions, which are standard in the RL literature (Konda & Tsitsiklis, 2000; Bhandari et al., 2018; Xu et al., 2020).

Assumption 5.2 (Uniform Ergodicity). For any $\theta \in \mathbb{R}^M$ , consider the Markov chain induced by the policy $\pi_{\theta}$ following the transition kernel $\bar{P}(\cdot\mid s, m)$ . Let $\xi_{\pi_{\theta}}$ be the stationary distribution of this Markov chain. We assume that there exist constants $\kappa > 0$ and $\xi \in (0,1)$ such that

$$\sup_{s\in \mathcal{S}}\left\|\mathbb{P}(s_{t}\in \cdot \mid s_{0} = s,\pi_{\theta}) - \xi_{\pi_{\theta}}(\cdot)\right\|_{TV}\leqslant \kappa \xi^{t}.$$

Further, let $L_{\pi} \coloneqq \mathbb{E}_{\nu_{\pi}}[\varphi(s)(\gamma \varphi(s') - \varphi(s))^{\top}]$ and $v_{\pi} \coloneqq \mathbb{E}_{\nu_{\pi}}[r(s,m,s')\varphi(s)]$ . The optimal solution to the critic's TD learning is then $w^{*} \coloneqq -L_{\pi}^{-1}v_{\pi}$ .

Assumption 5.3. There exists a positive constant $\Gamma_L$ such that for all $w\in \mathbb{R}^d$ , we have $\langle w - w^{*},L_{\pi}(w - w^{*})\rangle \leqslant -\Gamma_L\| w - w^{*}\| _2^2$ .

Based on the above two assumptions, let $L_{V} \coloneqq \frac{2\sqrt{2}C_{\kappa\xi} + 1}{1 - \gamma}$ , where $C_{\kappa\xi} = \left(1 + \left\lceil \log_{\xi}\frac{1}{\kappa}\right\rceil +\frac{1}{1 - \xi}\right)$ .

Theorem 5.4. Consider the actor-critic improper learning algorithm ACIL (Alg 2). Assume $\sup_{s\in S}\| \varphi (s)\| _2\leqslant 1$ . Under Assumptions 5.2 and 5.3, with step-sizes chosen as $\alpha = \frac{1}{4L_V\sqrt{M}}$ , $\beta = \min \{\mathcal{O}(\Gamma_L),\mathcal{O}(1 / \Gamma_L)\}$ , batch sizes $H = \mathcal{O}\left(\frac{1}{\varepsilon}\right)$ , $B = \mathcal{O}(1 / \varepsilon)$ , $T_c = \mathcal{O}\left(\frac{\sqrt{M}}{\Gamma_L}\log (1 / \varepsilon)\right)$ , and $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^2\varepsilon}\right)$ , we have $\mathbb{E}[\| \nabla_{\theta}V(\theta_{\hat{T}})\| _2^2 ]\leqslant \varepsilon +\mathcal{O}(\Delta_{\mathrm{critic}})$ . Hence, the total sample complexity is $\mathcal{O}\left(M(1 - \gamma)^{-2}\varepsilon^{-2}\log (1 / \varepsilon)\right)$ .

Here, $\Delta_{\text{critic}} \coloneqq \max_{\theta \in \mathbb{R}^M} \mathbb{E}_{\nu_{\pi_\theta}} \left[ \left| V^{\pi_\theta}(s) - V^{w^*_{\pi_\theta}}(s) \right|^2 \right]$ , which equals zero if the value function lies in the linear span of the features.

Next, we provide a global optimality guarantee for the Natural Actor-Critic version of ACIL.

Theorem 5.5. Assume $\sup_{s\in S}\| \varphi (s)\| _2\leqslant 1$ . Under Assumptions 5.2 and 5.3, with step-sizes chosen as $\alpha = \frac{\lambda^2}{2\sqrt{M}L_V(1 + \lambda)}$ , $\beta = \min \{\mathcal{O}(\Gamma_L),\mathcal{O}(1 / \Gamma_L)\}$ , batch-sizes $H = \mathcal{O}\left(\frac{1}{\Gamma_L\varepsilon^2}\right)$ , $B = \mathcal{O}\left(\frac{1}{(1 - \gamma)^2\varepsilon^2}\right)$ , $T_{c} = \mathcal{O}\left(\frac{\sqrt{M}}{\Gamma_{L}}\log (1 / \varepsilon)\right)$ , $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^{2}\varepsilon}\right)$ and $\lambda = \mathcal{O}(\Delta_{\text{critic}})$ , we have $V(\pi^{*}) - \frac{1}{T}\sum_{t = 0}^{T - 1}\mathbb{E}[V(\pi_{\theta_t})]\leqslant \varepsilon + \mathcal{O}\left(\sqrt{\frac{\Delta_{\text{actor}}}{(1 - \gamma)^3}}\right) + \mathcal{O}(\Delta_{\text{critic}})$ . Hence, the total sample complexity is $\mathcal{O}\left(\frac{M}{(1 - \gamma)^4\varepsilon^3}\log \frac{1}{\varepsilon}\right)$ , where $\Delta_{\text{actor}}\coloneqq \max_{\theta \in \mathbb{R}^{M}}\min_{w\in \mathbb{R}^{d}}\mathbb{E}_{\nu_{\pi_{\theta}}}\left[(\psi_{\theta}^{\top}w - \tilde{A}^{\pi_{\theta}}(s,m))^{2}\right]$ and $\Delta_{\text{critic}}$ is the same as before.

6. Numerical Results

6.1. Simulations with Softmax PG

We now discuss the results of implementing Softmax PG (Alg 1) on the cartpole system and on the constrained queueing examples described in Sec. 2. Since neither value functions nor value gradients for these problems are available in closed form, we modify SoftMax PG (Algorithm 1) to make it generally implementable using a combination of (1) rollouts to estimate the value function of the current (improper) policy and (2) simultaneous perturbation stochastic approximation (SPSA) to estimate its value gradient. Specifically, we use the approach in (Flaxman et al., 2005), noting that for a function $V: \mathbb{R}^M \to \mathbb{R}$ , the gradient satisfies $\nabla V(\theta) \approx \frac{M}{\alpha}\,\mathbb{E}\left[(V(\theta + \alpha u) - V(\theta))u\right]$ , where the perturbation parameter $\alpha \in (0,1)$ and $u$ is sampled uniformly at random from the unit sphere. This expression requires evaluating the value function at the point $(\theta + \alpha u)$ . Since the value function may not be explicitly computable, we employ rollouts for its evaluation. The full algorithm, GradEst, can be found in the appendix (Alg. 6).
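The one-point estimator of Flaxman et al. (2005) can be sketched as follows; `V_hat` stands for a rollout-based value estimate supplied by the caller (a hypothetical callback, not the paper's code):

```python
import numpy as np

# Sketch of the one-point gradient estimate used by GradEst:
# grad V(theta) ≈ (M/alpha) * E[(V(theta + alpha*u) - V(theta)) u],
# with u uniform on the unit sphere, averaged over n_samples draws.
def grad_estimate(V_hat, theta, alpha, n_samples, rng):
    M = theta.shape[0]
    g = np.zeros(M)
    v0 = V_hat(theta)                         # baseline reduces variance
    for _ in range(n_samples):
        u = rng.normal(size=M)
        u /= np.linalg.norm(u)                # uniform on the unit sphere
        g += (V_hat(theta + alpha * u) - v0) * u
    return (M / alpha) * g / n_samples
```

For a linear $V$, the estimator is unbiased: since $\mathbb{E}[uu^\top] = I/M$ on the unit sphere, the factor $M$ exactly cancels the averaging over directions.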

Note that all of the simulations shown have been averaged over $\#\mathbb{L} = 20$ trials, and the mean and standard deviations are plotted. We also show empirically that $c_{t}$ in Theorem 4.2 is indeed strictly positive. In the sequel, for every trial $l\in [\#\mathbb{L}]$ , let $\bar{c}_t^l\coloneqq \inf_{1\leqslant s\leqslant t}\min_{m\in \{m'\in [M]:\pi^* (m') > 0\}}\pi_{\theta_s}(m)$ , and $\bar{c}_t\coloneqq \frac{1}{\#\mathbb{L}}\sum_{l = 1}^{\#\mathbb{L}}\bar{c}_t^l$ . Also let $\bar{c}^T\coloneqq \min_{l\in [\#\mathbb{L}]}\min_{1\leqslant t\leqslant T}\bar{c}_t^l$ . That is, the sequences $\{\bar{c}_t^l\}_{t = 1,l = 1}^{T,\#\mathbb{L}}$ record the minimum probability that the algorithm puts, over rounds $1:t$ in trial $l$ , on controllers with $\pi^{*}(\cdot) > 0$ ; $\{\bar{c}_t\}_{t = 1}^T$ is its average across the different trials, and $\bar{c}^T$ is the minimum such probability that the algorithm learns across all rounds $1\leqslant t\leqslant T$ and across trials.
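Computing these statistics from logged mixture weights is a simple running-minimum; a sketch (our illustration, with array shapes as stated in the comments):

```python
import numpy as np

# Sketch: empirical minimum-probability statistics c_bar_t^l, c_bar_t, c_bar^T.
# probs: (L, T, M) array with probs[l, t] = pi_{theta_t}(.) in trial l;
# support: indices m with pi^*(m) > 0 (assumed known for the experiment).
def min_prob_stats(probs, support):
    on_support = probs[:, :, support].min(axis=2)         # (L, T)
    c_bar_tl = np.minimum.accumulate(on_support, axis=1)  # running inf over s<=t
    c_bar_t = c_bar_tl.mean(axis=0)                       # average over trials
    c_bar_T = c_bar_tl[:, -1].min()                       # overall minimum
    return c_bar_tl, c_bar_t, c_bar_T
```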

Simulations for the Cartpole. We study two different settings for the Cartpole example. Let $K_{opt}$ be the optimal controller for the given system, computed via standard procedures (details can be found in (Bertsekas, 2011)). We

Figure 2: Softmax PG algorithm applied to the cartpole control and path graph scheduling tasks. Panels: (a) Cartpole with $\{K_1 = K_{opt}, K_2 = K_{opt} + \Delta\}$ ; (b) Cartpole with $\{K_1 = K_{opt} - \Delta, K_2 = K_{opt} + \Delta\}$ ; (c) Softmax PG applied to a path graph network; (d) 2-queue system with time-varying arrival rates. Each plot shows the learnt probabilities of the base controllers over time, together with the minimum probabilities $\bar{c}_t$ and $\bar{c}^T$ as described in the text.
Figure 3: Natural actor-critic based improper learning algorithm applied to various queueing networks, showing convergence to the best mixture policy. Panels: (a) arrival rate $(\lambda_1, \lambda_2) = (0.4, 0.4)$ ; (b) arrival rate $(\lambda_1, \lambda_2) = (0.35, 0.35)$ ; (c) (estimated) 2-queue system with time-varying arrival rates.

set $M = 2$ and consider two scenarios: (i) the two base controllers are $\mathcal{C} \equiv \{K_{opt}, K_{opt} + \Delta\}$ , where $\Delta$ is a random matrix, each entry of which is drawn IID from $\mathcal{N}(0,0.1)$ ; (ii) $\mathcal{C} \equiv \{K_{opt} - \Delta, K_{opt} + \Delta\}$ . In the first case a corner point of the simplex is optimal; in the second, a strict improper mixture of the available controllers is optimal. As Fig. 2(a) and 2(b) show, our policy gradient algorithm converges to the best controller/mixture in both cases. The details of all the hyperparameters for this setting are provided in the appendix. We note that in the second setting, even though neither controller applied individually stabilizes the system, our Softmax PG algorithm finds and follows an improper mixture of the controllers which stabilizes the given Cartpole.

Constrained Queueing Networks. We present simulation results for the following networks.

(i) Path Graph Networks. The scheduling constraints in the first network we study dictate that Queues $i$ and $i + 1$ cannot be served simultaneously for $i\in [N - 1]$ in any round $t\geqslant 0$ . Such queueing systems are called path graph networks (Mohan et al., 2020). We work with $N = 4$ . Therefore, the sets of queues which can be served simultaneously are $\mathcal{A} = \{\emptyset ,\{1\} ,\{2\} ,\{3\} ,\{4\} ,\{1,3\} ,\{2,4\} ,\{1,4\} \}$ . The constituents of $\mathcal{A}$ are called independent sets in the literature. In each round $t$ , the scheduler selects an independent set and serves the queues therein. Let $Q_{j}(t)$ be the backlog of Queue $j$ at time $t$ . We use the following base controllers: (i) $K_{1}$ : the Max Weight (MW) controller (Tassiulas & Ephremides, 1992), which chooses the set $s_t \coloneqq \operatorname{argmax}_{\underline{\mathbf{S}} \in \mathcal{A}} \sum_{j \in \underline{\mathbf{S}}} Q_j(t)$ , i.e., the set with the largest total backlog; (ii) $K_{2}$ : the Maximum Egress Rate (MER) controller, which chooses the set $s_t \coloneqq \operatorname{argmax}_{\underline{\mathbf{S}} \in \mathcal{A}} \sum_{j \in \underline{\mathbf{S}}} \mathbb{I}\{Q_j(t) > 0\}$ , i.e., the set with the maximum number of non-empty queues. We also choose $K_{3}, K_{4}$ and $K_{5}$ , which serve the sets $\{1, 3\}, \{2, 4\}, \{1, 4\}$ , respectively, with probability 1. We fix the arrival rates to the queues at $(0.495, 0.495, 0.495, 0.495)$ . It is well known that the MER rule is mean-delay optimal in this case (Mohan et al., 2020). In Fig. 2(c), we plot the probability of choosing $K_i, i \in [5]$ , learnt by our algorithm. The probability of choosing MER indeed converges to 1.
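The two scheduling rules used as base controllers on the path graph can be sketched as follows (a toy re-implementation for illustration, not the paper's simulation code):

```python
# Sketch: the MW and MER scheduling rules on the N = 4 path graph.
# Queues are numbered 1..4; INDEP_SETS lists the independent sets of A.
INDEP_SETS = [(), (1,), (2,), (3,), (4,), (1, 3), (2, 4), (1, 4)]

def max_weight(Q):
    # K1: choose the independent set with the largest total backlog
    return max(INDEP_SETS, key=lambda S: sum(Q[j - 1] for j in S))

def max_egress(Q):
    # K2: choose the independent set covering the most non-empty queues
    return max(INDEP_SETS, key=lambda S: sum(Q[j - 1] > 0 for j in S))
```

On ties, Python's `max` returns the first maximizer in list order; any fixed tie-breaking rule would do for the purposes of this illustration.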

(ii) Non-stationary arrival rates. Recall the two-queue example discussed in Sec. 2.2. The scheduler there is now given two base/atomic controllers $\mathcal{C} \coloneqq \{K_1, K_2\}$ , i.e., $M = 2$ . Controller $K_i$ serves Queue $i$ with probability $1$ , $i = 1, 2$ . As can be seen in Fig. 2(d), the arrival rates $\lambda$ to the two queues vary (adversarially) over time during learning. In particular, $\lambda$ varies from $(0.3, 0.6) \to (0.6, 0.3) \to (0.49, 0.49)$ . Our PG algorithm successfully tracks these changes and adapts to the optimal improper stationary policy in each case.

In all the simulations shown above, we note that the empirical trajectories of $\bar{c}_t$ and $\bar{c}^T$ become flat after some initial rounds and are bounded away from zero. This supports our conjecture that $\lim_{t\to \infty}c_t$ in Theorem 4.2 is bounded away from zero, rendering the theorem statement non-vacuous. Note that Alg. 1 performs well in these challenging scenarios, even with estimates of the value function and its gradient.

6.2. Simulations with ACIL

We perform some queueing-theoretic simulations on the natural actor-critic version of ACIL, which we call NACIL in this section. Unlike Softmax PG, ACIL estimates gradients using temporal-difference errors instead of SPSA. We study three settings: (1) the optimal policy is a strict improper combination of the available controllers; (2) it is at a corner point, i.e., one of the available controllers is itself optimal; (3) the arrival rates are time-varying, as in the previous section. Our simulations show that ACIL converges to the correct controller mixture in all three cases.

Recall the example that we discussed in Sec. 2.2. We consider the case with Bernoulli arrivals with rates $\lambda = [\lambda_1, \lambda_2]$ and two given base/atomic controllers $\{K_1, K_2\}$ , where controller $K_i$ serves Queue $i$ with probability $1$ , $i = 1, 2$ . As can be seen in Fig. 3(a), when $\lambda = [0.4, 0.4]$ (equal arrival rates), NACIL converges to an improper mixture policy that serves each queue with probability $[0.5, 0.5]$ . Fig. 3(b) shows a situation where one of the base controllers, namely the "Longest-Queue-First" (LQF) controller, is optimal. NACIL converges correctly to the corner point.

Lastly, Fig. 3(c) shows a setting similar to (ii) in Sec. 6.1 above. Here there is a single transition of $(\lambda_1,\lambda_2)$ from $(0.4,0.3)\rightarrow (0.3,0.4)$ , occurring at $t = \lceil 10^{5} / 3\rceil$ , which is unknown to the learner. We show the probability of choosing controller 1. NACIL tracks the changing arrival rates over time. We provide further simulations with NACIL in the appendix due to space limitations.

Acknowledgment

This work was partially supported by the Israel Science Foundation under contract 2199/20. Mohammadi Zaki was supported by the Aerospace Network Research Consortium (ANRC) Grant on Airplane IOT Data Management.

References

Abbasi-Yadkori, Y. and Szepesvári, C. Regret bounds for the adaptive control of linear quadratic systems. In Proceedings of the 24th Annual Conference on Learning Theory, volume 19 of Proceedings of Machine Learning Research, pp. 1-26, Budapest, Hungary, 09-11 Jun 2011. JMLR Workshop and Conference Proceedings.
Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. Optimality and approximation with policy gradient methods in markov decision processes. In Proceedings of Thirty Third Conference on Learning Theory, pp. 64-66. PMLR, 2020a.
Agarwal, N., Brukhim, N., Hazan, E., and Lu, Z. Boosting for control of dynamical systems. In III, H. D. and Singh, A. (eds.), Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 96-103. PMLR, 13-18 Jul 2020b.
Auer, P., Cesa-Bianchi, N., Freund, Y., and Schapire, R. E. Gambling in a rigged casino: The adversarial multi-armed bandit problem. In Proceedings of IEEE 36th Annual Foundations of Computer Science, pp. 322-331, 1995.
Auer, P., Jaksch, T., and Ortner, R. Near-optimal regret bounds for reinforcement learning. In Advances in Neural Information Processing Systems, volume 21, pp. 89-96. Curran Associates, Inc., 2009.
Banijamali, E., Abbasi-Yadkori, Y., Ghavamzadeh, M., and Vlassis, N. Optimizing over a restricted policy class in mdps. In Chaudhuri, K. and Sugiyama, M. (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 3042-3050. PMLR, 16-18 Apr 2019.
Barakat, A., Bianchi, P., and Lehmann, J. Analysis of a target-based actor-critic algorithm with linear function approximation. CoRR, abs/2106.07472, 2021.
Barreto, A., Dabney, W., Munos, R., Hunt, J. J., Schaul, T., van Hasselt, H. P., and Silver, D. Successor features for transfer in reinforcement learning. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
Bertsekas, D. P. Dynamic programming and optimal control 3rd edition, volume ii. Belmont, MA: Athena Scientific, 2011.
Bhandari, J. and Russo, D. Global optimality guarantees for policy gradient methods. ArXiv, abs/1906.01786, 2019.

Bhandari, J., Russo, D., and Singal, R. A finite time analysis of temporal difference learning with linear function approximation. Oper. Res., 69:950-973, 2018.
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., and Lee, M. Natural actor-critic algorithms. Automatica, 45(11): 2471-2482, 2009. ISSN 0005-1098.
Bolzern, P., Colaneri, P., and De Nicolao, G. Almost sure stability of stochastic linear systems with ergodic parameters. European Journal of Control, 14(2):114-123, 2008.
Borkar, V. S. Stochastic Approximation. Cambridge Books. Cambridge University Press, December 2008.
Cassel, A., Cohen, A., and Koren, T. Logarithmic regret for learning linear quadratic regulators efficiently. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pp. 1328-1337. PMLR, 13-18 Jul 2020.
Chen, X. and Hazan, E. Black-box control for linear dynamical systems. arXiv preprint arXiv:2007.06650, 2020.
Daniely, A., Linial, N., and Shalev-Shwartz, S. More data speeds up training time in learning halfspaces over sparse vectors. In Advances in Neural Information Processing Systems, volume 26, pp. 145-153. Curran Associates, Inc., 2013.
Daniely, A., Linial, N., and Shalev-Shwartz, S. From average case complexity to improper learning complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC '14, pp. 441-448, New York, NY, USA, 2014. Association for Computing Machinery.
Dean, S., Mania, H., Matni, N., Recht, B., and Tu, S. On the Sample Complexity of the Linear Quadratic Regulator. arXiv e-prints, art. arXiv:1710.01688, October 2017.
Denisov, D. and Walton, N. Regret analysis of a markov policy gradient algorithm for multi-arm bandits. ArXiv, abs/2007.10229, 2020.
Durrett, R. Probability: Theory and examples, 2011.
Fazel, M., Ge, R., Kakade, S. M., and Mesbahi, M. Global convergence of policy gradient methods for the linear quadratic regulator, 2018.
Flaxman, A. D., Kalai, A. T., and McMahan, H. B. Online convex optimization in the bandit setting: Gradient descent without a gradient. SODA '05, pp. 385-394, USA, 2005. Society for Industrial and Applied Mathematics.

Gao, B. and Pavel, L. On the properties of the softmax function with application in game theory and reinforcement learning. ArXiv, abs/1704.00805, 2017.
Gopalan, A. and Mannor, S. Thompson Sampling for Learning Parameterized Markov Decision Processes. In Proceedings of The 28th Conference on Learning Theory, volume 40 of Proceedings of Machine Learning Research, pp. 861-898, Paris, France, 03-06 Jul 2015. PMLR.
Ibrahimi, M., Javanmard, A., and Roy, B. Efficient reinforcement learning for high dimensional linear quadratic systems. In Advances in Neural Information Processing Systems, volume 25, pp. 2636-2644. Curran Associates, Inc., 2012.
Kakade, S. and Langford, J. Approximately optimal approximate reinforcement learning. In In Proc. 19th International Conference on Machine Learning, 2002.
Khalil, H. K. Nonlinear Control. Pearson, 2015.
Kocák, T., Neu, G., Valko, M., and Munos, R. Efficient learning by implicit exploration in bandit problems with side observations. In Advances in Neural Information Processing Systems, volume 27, pp. 613-621. Curran Associates, Inc., 2014.
Konda, V. and Tsitsiklis, J. Actor-critic algorithms. In Solla, S., Leen, T., and Müller, K. (eds.), Advances in Neural Information Processing Systems, volume 12. MIT Press, 2000.
Lai, T. and Robbins, H. Asymptotically efficient adaptive allocation rules. Advances in Applied Mathematics, 6(1): 4 - 22, 1985. ISSN 0196-8858.
Li, G., Wei, Y., Chi, Y., Gu, Y., and Chen, Y. Softmax policy gradient methods can take exponential time to converge. arXiv preprint arXiv:2102.11270, 2021.
Littlestone, N. and Warmuth, M. K. The weighted majority algorithm. Inform. Comput., 108(2):212-261, 1994.
Łojasiewicz, S. Les équations aux dérivées partielles (paris, 1962), 1963.
Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., and Mordatch, I. Multi-agent actor-critic for mixed cooperative-competitive environments, 2020.
Maclin, R. and Opitz, D. W. Popular ensemble methods: An empirical study. CoRR, abs/1106.0257, 2011.
Mania, H., Tu, S., and Recht, B. Certainty equivalence is efficient for linear quadratic control. In Advances in Neural Information Processing Systems, volume 32, pp. 10154-10164. Curran Associates, Inc., 2019.

Mei, J., Xiao, C., Szepesvari, C., and Schuurmans, D. On the global convergence rates of softmax policy gradient methods. In Proceedings of the 37th International Conference on Machine Learning, pp. 6820-6829. PMLR, 2020.
Mohan, A., Chattopadhyay, A., and Kumar, A. Hybrid mac protocols for low-delay scheduling. In 2016 IEEE 13th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), pp. 47-55, Los Alamitos, CA, USA, oct 2016. IEEE Computer Society.
Mohan, A., Gopalan, A., and Kumar, A. Throughput optimal decentralized scheduling with single-bit state feedback for a class of queueing systems. ArXiv, abs/2002.08141, 2020.
Neu, G. Explore no more: Improved high-probability regret bounds for non-stochastic bandits. In Advances in Neural Information Processing Systems, volume 28, pp. 3168-3176. Curran Associates, Inc., 2015.
Osband, I., Russo, D., and Van Roy, B. (more) efficient reinforcement learning via posterior sampling. In Advances in Neural Information Processing Systems, volume 26, pp. 3003-3011. Curran Associates, Inc., 2013.
Ouyang, Y., Gagrani, M., Nayyar, A., and Jain, R. Learning unknown markov decision processes: A thompson sampling approach. In NIPS, 2017.
Peters, J. and Schaal, S. Natural actor-critic. Neurocomputing, 71(7):1180-1190, 2008. ISSN 0925-2312. Progress in Modeling, Theory, and Application of Computational Intelligence.
Radac, M.-B. and Precup, R.-E. Data-driven model-free slip control of anti-lock braking systems using reinforcement q-learning. Neurocomput., 275(C):317-329, January 2018.
Rummery, G. A. and Niranjan, M. On-line q-learning using connectionist systems. Technical report, 1994.
Schulman, J., Levine, S., Abbeel, P., Jordan, M., and Moritz, P. Trust region policy optimization. In Bach, F. and Blei, D. (eds.), Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pp. 1889-1897, Lille, France, 07-09 Jul 2015. PMLR.
Schulman, J., Wolski, F., Dhariwal, P., Radford, A., and Klimov, O. Proximal policy optimization algorithms, 2017.
Shani, L., Efroni, Y., and Mannor, S. Adaptive trust region policy optimization: Global convergence and faster rates for regularized mdps. ArXiv, abs/1909.02769, 2020.

Silver, D., Huang, A., Maddison, C. J., Guez, A., Sifre, L., van den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., Dieleman, S., Grewe, D., Nham, J., Kalchbrenner, N., Sutskever, I., Lillicrap, T., Leach, M., Kavukcuoglu, K., Graepel, T., and Hassabis, D. Mastering the game of go with deep neural networks and tree search. Nature, 529:484-503, 2016.
Singh, S., Okun, A., and Jackson, A. Artificial intelligence: Learning to play Go from scratch. 550(7676):336-337, October 2017. doi: 10.1038/550336a.
Sutton, R. S. and Barto, A. G. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.
Sutton, R. S., Precup, D., and Singh, S. Between mdps and semi-mdps: A framework for temporal abstraction in reinforcement learning. Artificial Intelligence, 112(1): 181-211, 1999. ISSN 0004-3702.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, volume 12, pp. 1057-1063. MIT Press, 2000.
Tassiulas, L. and Ephremides, A. Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. IEEE Transactions on Automatic Control, 37(12):1936-1948, 1992. doi: 10.1109/9.182479.
Wiering, M. A. and van Hasselt, H. Ensemble algorithms in reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 38(4):930-936, 2008.
Xiliang, C., Cao, L., Li, C.-x., Xu, Z.-x., and Lai, J. Ensemble network architecture for deep reinforcement learning. Mathematical Problems in Engineering, 2018:1-6, 04 2018.
Xu, T., Wang, Z., and Liang, Y. Improving sample complexity bounds for (natural) actor-critic algorithms. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 4358-4369. Curran Associates, Inc., 2020.

A. Glossary of Symbols

  1. $\mathcal{S}$ : State space
  2. $\mathcal{A}$ : Action space
  3. $S$ : Cardinality of $\mathcal{S}$
  4. $A$ : Cardinality of $\mathcal{A}$
  5. $M$ : Number of controllers
  6. $K_{i}$ : Controller $i$ , $i = 1, \dots, M$ . For a finite state-action space MDP, $K_{i}$ is a matrix of size $S \times A$ , where each row is a probability distribution over the actions.
  7. $\mathcal{C}$ : Given collection of $M$ controllers.
  8. $\mathcal{I}_{soft}(\mathcal{C})$ : Improper policy class setup by the learner.
  9. $\theta \in \mathbb{R}^{M}$ : Parameter vector assigned to the controllers, representing weights, updated each round by the learner.
  10. $\pi(.)$ : Probability of choosing controllers
  11. $\pi(.|s)$ Probability of choosing action given state $s$ . Note that in our setting, given $\pi(.)$ over controllers (see previous item) and the set of controllers, $\pi(.|s)$ is completely defined, i.e., $\pi(a|s) = \sum_{m=1}^{M} \pi(m) K_m(s,a)$ . Hence we use simply $\pi$ to denote the policy followed, whenever the context is clear.
  12. $r(s, a)$ : Immediate (one-step) reward obtained if action $a$ is played in state $s$ .
  13. $\mathsf{P}(s^{\prime}\mid s,a)$ Probability of transitioning to state $s^\prime$ from state $s$ having taken action $a$
  14. $V^{\pi}(\rho) \coloneqq \mathbb{E}_{s_0 \sim \rho} [V^{\pi}(s_0)] = \mathbb{E}_{\rho}^{\pi} \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t)$ : Value function starting from initial distribution $\rho$ over states and following policy $\pi$ .
  15. $Q^{\pi}(s,a)\coloneqq \mathbb{E}\left[r(s,a) + \gamma \sum_{s'\in \mathcal{S}}\mathsf{P}(s'\mid s,a)V^{\pi}(s')\right]$ .
  16. $\tilde{Q}^{\pi}(s,m)\coloneqq \mathbb{E}\left[\sum_{a\in \mathcal{A}}K_m(s,a)\left(r(s,a) + \gamma \sum_{s'\in \mathcal{S}}\mathsf{P}(s'\mid s,a)V^{\pi}(s')\right)\right]$ .
  17. $A^{\pi}(s,a)\coloneqq Q^{\pi}(s,a) - V^{\pi}(s)$
  18. $\tilde{A} (s,m)\coloneqq \tilde{Q}^{\pi}(s,m) - V^{\pi}(s).$
  19. $d_{\nu}^{\pi}(s) \coloneqq (1 - \gamma)\,\mathbb{E}_{s_0 \sim \nu}\left[\sum_{t = 0}^{\infty}\gamma^t\,\mathbb{P}\left[s_t = s \mid s_0, \pi, \mathsf{P}\right]\right]$ : A distribution over the states, called the "discounted state visitation measure".
  20. $c \coloneqq \inf_{t\geqslant 1}\min_{m\in \{m^{\prime}\in [M]:\pi^{*}(m^{\prime}) > 0\}}\pi_{\theta_{t}}(m)$ .
  21. $\left\| \frac{d_{\mu}^{\pi^{*}}}{\mu}\right\|_{\infty} = \max_{s}\frac{d_{\mu}^{\pi^{*}}(s)}{\mu(s)}$ .
  22. $\left\| \frac{1}{\mu}\right\|_{\infty} = \max_s\frac{1}{\mu(s)}$ .

B. Expanded Survey of Related Work

In this section, we provide a detailed survey of related work. It is vital to distinguish the approach investigated in the present paper from the plethora of existing algorithms based on 'proper learning'. Essentially, these algorithms try to find an (approximately) optimal policy for the MDP under investigation. These approaches can broadly be classified into two groups: model-based and model-free.

The former is based on first learning the dynamics of the unknown MDP, followed by planning for the learnt model. Algorithms in this class include Thompson Sampling-based approaches (Osband et al., 2013; Ouyang et al., 2017; Gopalan & Mannor, 2015) and optimism-based approaches such as the UCRL algorithm (Auer et al., 2009), both achieving order-wise optimal $\mathcal{O}(\sqrt{T})$ regret bounds.

A particular class of MDPs which has been studied extensively is the Linear Quadratic Regulator (LQR) which is a continuous state-action MDP with linear state dynamics and quadratic cost (Dean et al., 2017). Let $x_{t} \in \mathbb{R}^{m}$ be the current state and let $u_{t} \in \mathbb{R}^{n}$ be the action applied at time $t$ . The infinite horizon average cost minimization problem for LQR is to find a policy to choose actions ${u_{t}}_{t \geqslant 1}$ so as to minimize

$$\lim_{T \rightarrow \infty} \mathbb{E}\left[ \frac{1}{T} \sum_{t = 1}^{T} x_{t}^{\top} Q x_{t} + u_{t}^{\top} R u_{t} \right]$$

such that $x_{t + 1} = Ax_t + Bu_t + n(t)$ , where $n(t)$ is i.i.d. zero-mean noise. Here the matrices $A$ and $B$ are unknown to the learner. Earlier works such as (Abbasi-Yadkori & Szepesvári, 2011; Ibrahimi et al., 2012) proposed algorithms based on the well-known optimism principle (with confidence ellipsoids around estimates of $A$ and $B$ ), achieving regret bounds of $\mathcal{O}(\sqrt{T})$ .

However, these approaches do not focus on the stability of the closed-loop system. (Dean et al., 2017) describes a robust controller design which seeks to minimize the worst-case performance of the system given the error in the estimation process. They show a sample complexity analysis guaranteeing a convergence rate of $\mathcal{O}(1 / \sqrt{N})$ to the optimal policy for the given LQR, $N$ being the number of rollouts. More recently, certainty equivalence (Mania et al., 2019) was shown to achieve $\mathcal{O}(\sqrt{T})$ regret for LQRs. Further, (Cassel et al., 2020) show that it is possible to achieve $\mathcal{O}(\log T)$ regret if either one of the matrices $A$ or $B$ is known to the learner, and also provide a lower bound showing that $\Omega(\sqrt{T})$ regret is unavoidable when both are unknown.

The model-free approach, on the other hand, bypasses model estimation and directly learns the value function of the unknown MDP. While the most popular among these have historically been Q-learning, TD-learning (Sutton & Barto, 2018) and SARSA (Rummery & Niranjan, 1994), algorithms based on gradient-based policy optimization have been gaining considerable attention of late, following their stunning success at the game of Go, long viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. (Silver et al., 2016) and more recently (Singh et al., 2017) use the policy gradient method combined with a neural network representation to beat human experts. Indeed, the policy gradient method has become a cornerstone of modern RL and has given birth to an entire class of highly efficient policy search algorithms such as TRPO (Schulman et al., 2015), PPO (Schulman et al., 2017), and MADDPG (Lowe et al., 2020).

Despite its excellent empirical performance, not much was known about theoretical guarantees for this approach until recently. There is now a growing body of promising results showing convergence rates for PG algorithms over finite state-action MDPs (Agarwal et al., 2020a; Shani et al., 2020; Bhandari & Russo, 2019; Mei et al., 2020), where the parameterization is over the entire space of state-action pairs, i.e., $\mathbb{R}^{S\times A}$. In particular, (Bhandari & Russo, 2019) show that projected gradient descent does not suffer from spurious local optima on the simplex, and (Agarwal et al., 2020a) show that with softmax parameterization PG converges to the global optimum asymptotically. (Shani et al., 2020) show a $\mathcal{O}(1 / \sqrt{t})$ convergence rate for mirror descent. (Mei et al., 2020) show that with softmax policy gradient, convergence to the global optimum occurs at a rate $\mathcal{O}(1 / t)$, and at rate $\mathcal{O}(e^{-t})$ with entropy regularization.

We end this section noting once again that all of the above works concern proper learning. Improper learning, on the other hand, has been separately studied in statistical learning theory in the IID setting (Daniely et al., 2014; 2013). In this framework, which is also called Representation Independent learning, the learning algorithm is not restricted to output a hypothesis from a given set of hypotheses. To the best of our knowledge, improper learning has not been studied in the RL literature.

To our knowledge, (Agarwal et al., 2020b) is the only existing work that attempts to frame and solve policy optimization over an improper class via boosting a given class of controllers. However, the paper is situated in the rather different context of non-stochastic control and assumes perfect knowledge of (i) the memory-boundedness of the MDP, and (ii) the state noise vector in every round, which amounts to essentially knowing the MDP transition dynamics. We work in the stochastic MDP setting and moreover assume no access to the MDP's transition kernel. Further, (Agarwal et al., 2020b) also assumes that all the atomic controllers available to them are stabilizing which, when working with an unknown MDP, is a very strong assumption to make. We make no such assumptions on our atomic controller class and, as we show in Sec. 2 and Sec. 6, our algorithms even begin with provably unstable controllers and yet succeed in stabilizing the system.

In summary, the problem that we address concerns finding the best among a given class of controllers, none of which need be optimal for the MDP at hand. Moreover, our PG algorithm could very well converge to an improper mixture of these controllers, meaning that the output of our algorithms need not be any of the atomic controllers we are provided with. This setting, to the best of our knowledge, has not been investigated in the RL literature hitherto.

C. Details of Setup and Modelling of the Cartpole


Figure 4: The Cartpole system. The mass of the pendulum is denoted by $m_{p}$ , that of the cart by $m_{K}$ , the force used to drive the cart by $F$ , and the distance of the center of mass of the cart from its starting position by $s$ . $\theta$ denotes the angle the pendulum makes with the normal and its length is denoted by $2l$ . Gravity is denoted by $g$ .

As shown in Fig. 4, it comprises a pendulum whose pivot is mounted on a cart which can be moved in the horizontal direction by applying a force. The objective is to modulate the direction and magnitude of this force $F$ to keep the pendulum from keeling over under the influence of gravity. The state of the system at time $t$ is given by the 4-tuple $\mathbf{x}(t) \coloneqq [s, \dot{s}, \theta, \dot{\theta}]$, with $\mathbf{x}(\cdot) = \mathbf{0}$ corresponding to the pendulum being upright and stationary. One of the strategies used to design control policies for this system is to first approximate the dynamics around $\mathbf{x}(\cdot) = \mathbf{0}$ with a linear, quadratic-cost model and design a linear controller for these approximate dynamics. After time discretization, the objective reduces to finding a (potentially randomized) control policy $u \equiv \{u(t), t \geqslant 0\}$ that solves:

$$\begin{array}{l} \inf_{u} J(\mathbf{x}(0)) = \mathbb{E}_{u} \sum_{t=0}^{\infty} \mathbf{x}^{\intercal}(t) Q \mathbf{x}(t) + R u^{2}(t), \\ \text{s.t. } \mathbf{x}(t+1) = \underbrace{\left(\begin{array}{cccc} 0 & 1 & 0 & 0 \\ 0 & 0 & \frac{g}{l\left(\frac{4}{3} - \frac{m_p}{m_p + m_k}\right)} & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & \frac{g}{l\left(\frac{4}{3} - \frac{m_p}{m_p + m_k}\right)} & 0 \end{array}\right)}_{A_{\mathrm{open}}} \mathbf{x}(t) + \underbrace{\left(\begin{array}{c} 0 \\ \frac{1}{m_p + m_k} \\ 0 \\ \frac{1}{l\left(\frac{4}{3} - \frac{m_p}{m_p + m_k}\right)} \end{array}\right)}_{\mathbf{b}} u(t). \tag{2} \end{array}$$

Under standard assumptions of controllability and observability, this optimization has a stationary, linear solution $u^{*}(t) = -\mathbf{K}^{\top}\mathbf{x}(t)$ (details are available in (Bertsekas, 2011), Chap. 3). Moreover, setting $A := A_{\mathrm{open}} - \mathbf{b}\mathbf{K}^{\top}$, it is well known that the resulting closed-loop dynamics $\mathbf{x}(t+1) = A\mathbf{x}(t)$, $t \geqslant 0$, are stable.
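As a quick sanity check of this construction, the sketch below (not the paper's code; parameter values are taken from Table 1, and the Euler discretization step $\tau$ is an assumption) computes a gain $\mathbf{K}$ by iterating the discrete-time Riccati recursion and verifies that the closed-loop matrix has spectral radius below 1:

```python
import numpy as np

# Cartpole constants from Table 1; tau is an assumed Euler-discretization step.
g, m_p, m_k, l, tau = 9.8, 0.1, 1.0, 1.0, 0.02
den = l * (4.0 / 3.0 - m_p / (m_p + m_k))

# Continuous-time linearization around the upright equilibrium, per eq. (2).
A_open = np.array([[0.0, 1.0, 0.0,     0.0],
                   [0.0, 0.0, g / den, 0.0],
                   [0.0, 0.0, 0.0,     1.0],
                   [0.0, 0.0, g / den, 0.0]])
b = np.array([[0.0], [1.0 / (m_p + m_k)], [0.0], [1.0 / den]])

A_d, b_d = np.eye(4) + tau * A_open, tau * b   # Euler discretization (assumption)
Q, R = np.eye(4), np.array([[0.1]])            # illustrative cost weights

# Value-iterate the discrete-time Riccati equation to (near) convergence.
P = Q.copy()
for _ in range(5000):
    BtP = b_d.T @ P
    P = Q + A_d.T @ P @ A_d - A_d.T @ P @ b_d @ np.linalg.solve(R + BtP @ b_d, BtP @ A_d)

K = np.linalg.solve(R + b_d.T @ P @ b_d, b_d.T @ P @ A_d)  # u*(t) = -K x(t)
A_cl = A_d - b_d @ K                                       # closed loop: A_open - b K^T
rho = max(abs(np.linalg.eigvals(A_cl)))
print(f"closed-loop spectral radius: {rho:.4f}")           # < 1, i.e., stable
```

The open-loop matrix here has an eigenvalue larger than 1 after discretization, so stability of the closed loop is entirely due to the computed gain.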

C.1. Details of simulations settings for the cartpole system

In this section we describe the adjustments we made specifically for the cartpole experiments. First, we scale down the estimated gradient of the value function returned by the GradEst subroutine (Algorithm 6) (in the cartpole simulation only). The scaling that worked for us is $\frac{10}{\|\nabla V^{\pi}(\mu)\|}$.

Next, we provide the values of the constants that were described in Sec. C in Table 1.

| Parameter | Value |
| --- | --- |
| Gravity $g$ | 9.8 |
| Mass of pole $m_p$ | 0.1 |
| Length of pole $l$ | 1 |
| Mass of cart $m_k$ | 1 |
| Total mass $m_t$ | 1.1 |

Table 1: Values of the hyperparameters used for the cartpole simulation

D. Stability for Ergodic Parameter Linear Systems (EPLS)

For simplicity and ease of understanding, we connect our current discussion to the cartpole example discussed in Sec. 2.1. Consider a generic (ergodic) control policy that switches across a menu of controllers $\{K_1,\dots,K_N\}$. That is, at any time $t$, it chooses controller $K_{i}$, $i\in [N]$, w.p. $p_i$, so that the control input at time $t$ is $u(t) = -\mathbf{K}_i^\top \mathbf{x}(t)$ w.p. $p_i$. Let $A(i)\coloneqq A_{\mathrm{open}} - \mathbf{b}\mathbf{K}_i^\top$. The resulting controlled dynamics are given by

$$\begin{array}{l} \mathbf{x}(t+1) = A(r(t))\, \mathbf{x}(t), \\ \mathbf{x}(0) = \mathbf{x}_0, \tag{3} \end{array}$$

where $r(t) = i$ w.p. $p_i$ , IID across time. In the literature, this belongs to a class of systems known as Ergodic Parameter Linear Systems (EPLS) (Bolzern et al., 2008), which are said to be Exponentially Almost Surely Stable (EAS) if there exists $\rho > 0$ such that for any $\mathbf{x}(0)$ ,

$$\mathbb{P}\left\{\omega \in \Omega \;\middle|\; \limsup_{t \rightarrow \infty} \frac{1}{t} \log \|\mathbf{x}(t,\omega)\| \leqslant -\rho\right\} = 1. \tag{4}$$

In other words, w.p. 1, the trajectories of the system decay to the origin exponentially fast. The random variable $\lambda(\omega) \coloneqq \limsup_{t\to\infty}\frac{1}{t}\log \|\mathbf{x}(t,\omega)\|$ in (4) is called the Lyapunov Exponent of the system. For our EPLS,

$$\begin{array}{l} \lambda(\omega) = \limsup_{t\to\infty}\frac{1}{t}\log \|\mathbf{x}(t,\omega)\| = \limsup_{t\to\infty}\frac{1}{t}\log \left\|\prod_{s=1}^{t} A(r(s,\omega))\, \mathbf{x}(0)\right\| \\ \leqslant \lim_{t\to\infty}\frac{1}{t}\log\|\mathbf{x}(0)\| + \limsup_{t\to\infty}\frac{1}{t}\log\left\|\prod_{s=1}^{t} A(r(s,\omega))\right\| \\ \leqslant \limsup_{t\to\infty}\frac{1}{t}\sum_{s=1}^{t}\log\|A(r(s,\omega))\| \stackrel{(*)}{=} \lim_{t\to\infty}\frac{1}{t}\sum_{s=1}^{t}\log\|A(r(s,\omega))\| \\ \stackrel{(\dagger)}{=} \mathbb{E}\log\|A(r)\| = \sum_{i=1}^{N} p_i \log\|A(i)\|, \tag{5} \end{array}$$

where the equalities $(\ast)$ and $(\dagger)$ are due to the ergodic law of large numbers. The control policy can now be designed by choosing ${p_1,\dots ,p_N}$ such that $\lambda (\omega) < -\rho$ for some $\rho >0$ , ensuring exponentially almost sure stability.
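A minimal numerical illustration of this design rule, under assumptions of our own choosing: the two hypothetical closed-loop matrices below are scaled rotations (so $\|A(i)\|$ is just the scale factor), one contracting and one expanding, and a mixture weighted toward the contracting one makes the bound $\sum_i p_i \log\|A(i)\|$ negative. A simulated trajectory's empirical exponent matches the bound:

```python
import numpy as np

rng = np.random.default_rng(0)

def rot(a):  # 2-D rotation, so that ||c * rot(a)|| = |c|
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# Hypothetical closed-loop matrices: A(1) is contracting (norm 0.5),
# A(2) alone is expanding (norm 1.5).
A = [0.5 * rot(0.3), 1.5 * rot(-0.7)]
p = np.array([0.6, 0.4])                       # mixture weights p_1, p_2

# Upper bound on the Lyapunov exponent from eq. (5): sum_i p_i log ||A(i)||.
bound = sum(pi * np.log(np.linalg.norm(Ai, 2)) for pi, Ai in zip(p, A))
print(f"bound on lambda: {bound:.3f}")         # negative => EAS-stable by (4)

# Monte Carlo estimate of lambda = lim (1/t) log ||x(t)||.
T = 20_000
x, loglen = np.array([1.0, 0.0]), 0.0
for r in rng.choice(2, size=T, p=p):
    x = A[r] @ x
    n = np.linalg.norm(x)
    loglen += np.log(n)
    x /= n                                     # renormalize to avoid underflow
lam_hat = loglen / T
print(f"empirical exponent: {lam_hat:.3f}")
```

For scaled rotations the bound is tight, so the empirical exponent concentrates on the bound itself; for general matrices the bound can be loose.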

E. The Constrained Queuing Example

The system, shown in Fig. 5, comprises two queues fed by independent, stochastic arrival processes $A_{i}(t)$, $i \in \{1,2\}$, $t \in \mathbb{N}$. The length of Queue $i$, measured at the beginning of time slot $t$, is denoted by $Q_{i}(t) \in \mathbb{Z}_{+}$. A common server serves both queues and can drain at most one packet from the system in a time slot. The server, therefore, needs to decide which of the two queues it intends to serve in a given slot (we assume that once the server chooses to serve a packet, service succeeds with probability 1). The server's decision is denoted by the vector $\mathbf{D}(t) \in \mathcal{A} := \{[0,0], [1,0], [0,1]\}$, where a "1" denotes service and a "0" denotes lack thereof.


Figure 5: $Q_{i}(t)$ is the length of Queue $i$ ($i \in \{1,2\}$) at the beginning of time slot $t$, $A_{i}(t)$ is its packet arrival process and $\mathbf{D}(t) \in \{[0,0],[1,0],[0,1]\}$.

For simplicity, we assume that the processes $(A_{i}(t))_{t=0}^{\infty}$ are both IID Bernoulli, with $\mathbb{E}A_{i}(t) = \lambda_{i}$. Note that the arrival rate vector $\lambda = [\lambda_1,\lambda_2]$ is unknown to the learner. Defining $(x)^{+} := \max\{0,x\}$, $\forall x\in \mathbb{R}$, the queue length evolution is given by the equations

$$Q_{i}(t+1) = \left(Q_{i}(t) - D_{i}(t)\right)^{+} + A_{i}(t+1), \quad i \in \{1,2\}. \tag{6}$$
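The sketch below simulates (6) under a static mixture of two base controllers ("always serve queue 1" and "always serve queue 2"); the arrival rates and mixture weights are illustrative assumptions, not the paper's experimental values. A mixture with $p_i > \lambda_i$ keeps both queues stable, while either base controller alone lets one queue grow without bound:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = np.array([0.3, 0.4])  # arrival rates (unknown to the learner; assumed here)

def simulate(p_serve1, T=50_000):
    """Simulate eq. (6) when queue 1 is served w.p. p_serve1 in each slot."""
    q = np.zeros(2)
    total = 0.0
    for _ in range(T):
        i = 0 if rng.random() < p_serve1 else 1   # which queue the server picks
        d = np.zeros(2)
        d[i] = 1.0
        a = (rng.random(2) < lam).astype(float)   # Bernoulli(lambda_i) arrivals
        q = np.maximum(q - d, 0.0) + a            # eq. (6)
        total += q.sum()
    return total / T                              # time-averaged total backlog

avg_q1_only = simulate(1.0)     # base controller: always serve queue 1
avg_mixture = simulate(0.45)    # mixture p = (0.45, 0.55) > (0.3, 0.4) = lambda
print("always serve queue 1:", avg_q1_only)
print("mixture p=(0.45,0.55):", avg_mixture)
```

Under the first base controller, queue 2 grows linearly at rate $\lambda_2$, so its time-averaged backlog scales with the horizon; under the mixture, both queues remain stable with a small average backlog.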

F. Non-concavity of the Value function

We show here that the value function $V^{\pi}(\rho)$ is in general non-concave, and hence standard convex optimization techniques for maximization may get stuck in local optima. We note once again that this is different from the non-concavity of $V^{\pi}$ when the parameterization is over the entire state-action space, i.e., $\mathbb{R}^{S\times A}$ .

We show here that for both softmax and direct parameterization, the value function is non-concave, where by "direct" parameterization we mean that the controllers $K_{m}$ are parameterized by weights $\theta_{m} \in \mathbb{R}$ with $\theta_{i} \geqslant 0$, $\forall i \in [M]$, and $\sum_{i=1}^{M} \theta_{i} = 1$. A similar argument holds for softmax parameterization, which we outline in Remark F.2.

Lemma F.1. (Non-concavity of Value function) There is an MDP and a set of controllers for which the maximization of the value function (i.e., (1)) is non-concave under both direct and softmax parameterization, i.e., $\theta \mapsto V^{\pi_{\theta}}$ is non-concave.


Figure 6: An example of an MDP with controllers as defined in (7) having a non-concave value function. The MDP has $S = 5$ states and $A = 2$ actions. States $s_3, s_4$ and $s_5$ are terminal states. The only transition with nonzero reward is $s_2 \rightarrow s_4$ .

Proof. Consider the MDP shown in Figure 6 with 5 states, $s_1,\ldots,s_5$. States $s_3, s_4$ and $s_5$ are terminal states. In the figure we also show the allowed transitions and the rewards obtained from those transitions. Let the action set $\mathcal{A}$ consist of only three actions $\{a_{1},a_{2},a_{3}\} \equiv \{\mathrm{right},\mathrm{up},\mathrm{null}\}$, where 'null' is a dummy action included to accommodate the three terminal states. Let us consider the case when $M = 2$. The two controllers $K_{i}\in \mathbb{R}^{S\times A}$, $i = 1,2$ (where each row is a probability distribution over $\mathcal{A}$) are shown below.

$$K_1 = \left[\begin{array}{lll} 1/4 & 3/4 & 0 \\ 3/4 & 1/4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right], \quad K_2 = \left[\begin{array}{lll} 3/4 & 1/4 & 0 \\ 1/4 & 3/4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right]. \tag{7}$$

Let $\theta^{(1)} = (1,0)^{\mathrm{T}}$ and $\theta^{(2)} = (0,1)^{\mathrm{T}}$. Let us fix the initial state to be $s_1$. Since a nonzero reward is only earned during a $s_2 \to s_4$ transition, we note for any policy $\pi$ that $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$. We also have,

$$(K_1 + K_2)/2 = \left[\begin{array}{ccc} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right].$$

We will show that $\frac{1}{2} V^{\pi_{\theta^{(1)}}} + \frac{1}{2} V^{\pi_{\theta^{(2)}}} > V^{\pi_{\left(\theta^{(1)} + \theta^{(2)}\right)/2}}$.

We observe the following.

$$V^{\pi_{\theta^{(1)}}}(s_1) = V^{K_1}(s_1) = (1/4)\cdot(1/4)\cdot r = r/16.$$

$$V^{\pi_{\theta^{(2)}}}(s_1) = V^{K_2}(s_1) = (3/4)\cdot(3/4)\cdot r = 9r/16,$$

where $V^{K}(s)$ denotes the value obtained by starting from state $s$ and following a controller matrix $K$ for all time. Also, on the other hand we have,

$$V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}(s_1) = V^{(K_1+K_2)/2}(s_1) = (1/2)\cdot(1/2)\cdot r = r/4.$$

Hence we see that,

$$\frac{1}{2} V^{\pi_{\theta^{(1)}}} + \frac{1}{2} V^{\pi_{\theta^{(2)}}} = r/32 + 9r/32 = 10r/32 = 1.25\, r/4 > r/4 = V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}.$$

This shows that $\theta \mapsto V^{\pi_{\theta}}$ is non-concave, which concludes the proof for direct parameterization.

Remark F.2. For softmax parametrization, we choose the same two controllers $K_{1}, K_{2}$ as above. Fix some $\varepsilon \in (0,1)$ and set $\theta^{(1)} = (\log(1-\varepsilon), \log\varepsilon)^{\mathrm{T}}$ and $\theta^{(2)} = (\log\varepsilon, \log(1-\varepsilon))^{\mathrm{T}}$. A similar calculation using the softmax projection, together with the fact that $\pi_{\theta}(a|s) = \sum_{m=1}^{M} \pi_{\theta}(m) K_m(s,a)$, shows that under $\theta^{(1)}$ we follow the matrix $(1-\varepsilon)K_{1} + \varepsilon K_{2}$, which yields a value of $(1/4 + \varepsilon/2)^{2}r$. Under $\theta^{(2)}$ we follow the matrix $\varepsilon K_{1} + (1-\varepsilon)K_{2}$, which yields a value of $(3/4 - \varepsilon/2)^{2}r$. On the other hand, $(\theta^{(1)} + \theta^{(2)})/2$ amounts to playing the matrix $(K_{1} + K_{2})/2$, yielding the value $r/4$, as above. One can easily verify that $(1/4 + \varepsilon/2)^{2}r + (3/4 - \varepsilon/2)^{2}r > 2 \cdot r/4$. This shows the non-concavity of $\theta \mapsto V^{\pi_{\theta}}$ under softmax parameterization.
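The counterexample can be checked numerically; the sketch below evaluates $V(s_1)$ for $K_1$, $K_2$ and their midpoint using the closed form $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$ for this MDP:

```python
import numpy as np

r = 1.0
# Rows s1, s2 of K1 and K2 from eq. (7); columns: right, up, null.
K1 = np.array([[0.25, 0.75, 0.0],
               [0.75, 0.25, 0.0]])
K2 = np.array([[0.75, 0.25, 0.0],
               [0.25, 0.75, 0.0]])

def value_s1(K):
    # Only the s2 -> s4 transition pays r, so V(s1) = pi(right|s1) * pi(up|s2) * r.
    return K[0, 0] * K[1, 1] * r

v1, v2 = value_s1(K1), value_s1(K2)      # r/16 and 9r/16
v_mid = value_s1(0.5 * (K1 + K2))        # r/4
print(v1, v2, v_mid)
assert 0.5 * v1 + 0.5 * v2 > v_mid       # the chord lies above the function
```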

G. Example showing that the value function need not be pointwise (over states) monotone over the improper class

Consider the same MDP as in Sec F, however with different base controllers. Let the initial state be $s_1$ .

The two base controllers $K_{i} \in \mathbb{R}^{S \times A}$, $i = 1,2$ (where each row is a probability distribution over $\mathcal{A}$) are shown below.

$$K_1 = \left[\begin{array}{lll} 1/4 & 3/4 & 0 \\ 1/4 & 3/4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right], \quad K_2 = \left[\begin{array}{lll} 3/4 & 1/4 & 0 \\ 3/4 & 1/4 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right]. \tag{8}$$

Let $\theta^{(1)} = (1,0)^{\mathrm{T}}$ and $\theta^{(2)} = (0,1)^{\mathrm{T}}$ . Let us fix the initial state to be $s_1$ . Since a nonzero reward is only earned during a $s_2 \to s_4$ transition, we note for any policy $\pi$ , that $V^{\pi}(s_1) = \pi(a_1|s_1)\pi(a_2|s_2)r$ and $V^{\pi}(s_2) = \pi(a_2|s_2)r$ . Note here that the optimal policy of this MDP is deterministic with $\pi^*(a_1|s_1) = 1$ and $\pi^*(a_2|s_2) = 1$ . The transitions are all deterministic.

However, notice that the optimal policy (with initial state $s_1$) given $K_1$ and $K_2$ is a strict mixture: for any parameter $\theta = [\theta, 1-\theta]$, $\theta \in [0,1]$, the value of the policy $\pi_\theta$ is

$$V^{\pi_\theta} = \frac{1}{16}(3 - 2\theta)(1 + 2\theta)\, r, \tag{9}$$

which is maximized at $\theta = 1 / 2$ . This means that the optimal non deterministic policy chooses $K_{1}$ and $K_{2}$ with probabilities $(1 / 2, 1 / 2)$ , i.e.,

$$K^{*} = (K_1 + K_2)/2 = \left[\begin{array}{ccc} 1/2 & 1/2 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \\ 0 & 0 & 1 \end{array}\right].$$

We observe the following.

$$V^{\pi_{\theta^{(1)}}}(s_1) = V^{K_1}(s_1) = (1/4)\cdot(3/4)\cdot r = 3r/16.$$

$$V^{\pi_{\theta^{(2)}}}(s_1) = V^{K_2}(s_1) = (3/4)\cdot(1/4)\cdot r = 3r/16.$$

$$V^{\pi_{\theta^{(1)}}}(s_2) = V^{K_1}(s_2) = (3/4)\cdot r = 3r/4.$$

On the other hand we have,

$$V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}(s_1) = V^{K^{*}}(s_1) = (1/2)\cdot(1/2)\cdot r = r/4.$$

$$V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}(s_2) = V^{K^{*}}(s_2) = (1/2)\cdot r = r/2.$$

We see that $V^{K^*}(s_1) > \max\{V^{K_1}(s_1), V^{K_2}(s_1)\}$. However, $V^{K^*}(s_2) < V^{K_1}(s_2)$. This implies that playing according to an improper mixture policy (here, the optimal one given that the initial state is $s_1$) does not necessarily improve the value across all states.
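The calculations in this section can be verified with a few lines of code (a sketch, using the closed forms for this deterministic chain): the mixture value as a function of $\theta$ peaks at $\theta = 1/2$, which improves $V(s_1)$ but lowers $V(s_2)$:

```python
import numpy as np

r = 1.0
# Rows s1, s2 of K1 and K2 from eq. (8); columns: right, up.
K1 = np.array([[0.25, 0.75], [0.25, 0.75]])
K2 = np.array([[0.75, 0.25], [0.75, 0.25]])

def V(K, s):
    # Deterministic chain: V(s1) = pi(right|s1) pi(up|s2) r, V(s2) = pi(up|s2) r.
    return K[0, 0] * K[1, 1] * r if s == 1 else K[1, 1] * r

thetas = np.linspace(0.0, 1.0, 101)
vals = [V(t * K1 + (1 - t) * K2, 1) for t in thetas]
print("argmax over theta:", thetas[int(np.argmax(vals))])   # 0.5

K_star = 0.5 * (K1 + K2)
print(V(K_star, 1), V(K1, 1), V(K2, 1))   # the mixture improves V(s1) ...
print(V(K_star, 2), V(K1, 2))             # ... but lowers V(s2)
```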

H. Proof details for Bandit-over-bandits

In this section we consider the instructive sub-case $S = 1$, which is also called the Multiarmed Bandit. We provide regret bounds for two cases: (1) when the value gradient $\frac{dV^{\pi_{\theta_t}}(\mu)}{d\theta_t}$ (in the gradient update) is available in each round, and (2) when it needs to be estimated.

Note that each controller in this case is a probability distribution over the $A$ arms of the bandit. We consider the scenario where the agent, at each time $t \geqslant 1$, has to choose a probability distribution $K_{m_t}$ from a set of $M$ probability distributions over the actions $\mathcal{A}$. She then plays an action $a_t \sim K_{m_t}$. This differs from standard MABs in that the learner cannot choose actions directly, but instead chooses from a given set of controllers through which actions are played. Note that the $V$ function takes no argument since $S = 1$. Let $\mu \in [0,1]^A$ be the mean vector of the arms $\mathcal{A}$. The value function for any given mixture $\pi \in \mathcal{P}([M])$ is

$$\begin{array}{l} V^{\pi} := \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t} \mid \pi\right] = \sum_{t=0}^{\infty} \gamma^{t}\, \mathbb{E}\left[r_{t} \mid \pi\right] = \sum_{t=0}^{\infty} \gamma^{t} \sum_{a \in \mathcal{A}} \sum_{m=1}^{M} \pi(m) K_{m}(a) \mu_{a} \\ = \frac{1}{1-\gamma} \sum_{m=1}^{M} \pi_{m}\, \mu^{\mathrm{T}} K_{m} = \frac{1}{1-\gamma} \sum_{m=1}^{M} \pi_{m}\, \mathfrak{r}_{m}^{\mu}, \tag{10} \end{array}$$

where the interpretation of $\mathfrak{r}_m^\mu$ is the mean reward obtained if controller $m$ is chosen at any round $t$. Since $V^{\pi}$ is linear in $\pi$, the maximum is attained at one of the base controllers: $\pi^{*}$ puts mass 1 on $m^{*}$, where $m^{*} := \operatorname{argmax}_{m \in [M]} V^{K_{m}}$, and $V^{K_{m}}$ is the value obtained using $K_{m}$ for all time. In the sequel, we assume $\Delta_{i} := \mathfrak{r}_{m^{*}}^{\mu} - \mathfrak{r}_{i}^{\mu} > 0$ for all $i \neq m^{*}$.
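Since (10) is linear in $\pi$, no strict mixture can beat the best base controller in the bandit case; the sketch below (with randomly generated $\mu$ and controllers, purely for illustration) checks this against 1000 random mixtures:

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, A, M = 0.9, 4, 3                    # illustrative sizes
mu = rng.random(A)                         # unknown mean rewards of the arms
K = rng.dirichlet(np.ones(A), size=M)      # M base controllers (rows = distributions)

r_frak = K @ mu                            # r_m^mu = mu^T K_m

def V(pi):                                 # eq. (10): V^pi is linear in pi
    return pi @ r_frak / (1 - gamma)

best_vertex = max(V(np.eye(M)[m]) for m in range(M))
mixtures = rng.dirichlet(np.ones(M), size=1000)
assert all(V(pi) <= best_vertex + 1e-12 for pi in mixtures)
print("optimal value (at a base controller):", best_vertex)
```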

H.1. Proofs for MABs with perfect gradient knowledge

With access to the exact value gradient at each step, we have the following result, when Softmax PG (Algorithm 1) is applied for the bandits-over-bandits case.

Theorem H.1. With $\eta = \frac{2(1-\gamma)}{5}$ and $\theta_m^{(1)} = 1/M$ for all $m\in[M]$, and with access to the true gradient, we have $\forall t\geqslant 1$,

$$V^{\pi^{*}} - V^{\pi_{\theta_t}} \leqslant \frac{5}{1-\gamma}\, \frac{M^2}{t}.$$

Also, defining regret for a time horizon of $T$ rounds as

$$\mathcal{R}(T) := \sum_{t=1}^{T} V^{\pi^{*}} - V^{\pi_{\theta_t}}, \tag{11}$$

we show as a corollary to Thm. H.4 that,

Corollary H.2.

$$\mathcal{R}(T) \leqslant \min\left\{\frac{5M^2}{1-\gamma}\log T,\; \sqrt{\frac{5}{1-\gamma}}\, M\sqrt{T}\right\}.$$

Proof. Recall from eq. (10) that the value function for any given policy $\pi \in \mathcal{P}([M])$, i.e., a distribution over the given $M$ controllers (which are themselves distributions over the actions $\mathcal{A}$), can be simplified as:

$$V^{\pi} = \frac{1}{1-\gamma}\sum_{m=1}^{M} \pi_m\, \mu^{\mathrm{T}} K_m = \frac{1}{1-\gamma}\sum_{m=1}^{M} \pi_m\, \mathfrak{r}_m^{\mu},$$

where $\mu$ here is the (unknown) vector of mean rewards of the arms $\mathcal{A}$. Here, $\mathfrak{r}_m^\mu \coloneqq \mu^{\mathrm{T}}K_m$, $m = 1,\dots,M$, represents the mean reward obtained by choosing to play controller $K_m$, $m \in [M]$. For ease of notation, we will drop the superscript $\mu$ in the proofs of this section. We first show a simplification of the gradient of the value function w.r.t. the parameter $\theta$. Fix an $m' \in [M]$:

$$\frac{\partial}{\partial\theta_{m'}} V^{\pi_\theta} = \frac{1}{1-\gamma}\sum_{m=1}^{M} \frac{\partial \pi_\theta(m)}{\partial\theta_{m'}}\, \mathfrak{r}_m = \frac{1}{1-\gamma}\sum_{m=1}^{M} \pi_\theta(m')\left\{\mathbb{I}_{mm'} - \pi_\theta(m)\right\}\mathfrak{r}_m. \tag{12}$$
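The closed-form gradient in (12), which simplifies to $\frac{1}{1-\gamma}\pi_\theta(m')(\mathfrak{r}(m') - \pi_\theta^{\mathrm{T}}\mathfrak{r})$, is easy to validate against central finite differences; in the sketch below the mean controller rewards $\mathfrak{r}$ are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, M = 0.9, 4
r_frak = rng.random(M)              # placeholder mean controller rewards r_m
theta = rng.standard_normal(M)

def softmax(t):
    e = np.exp(t - t.max())
    return e / e.sum()

def V(t):
    return softmax(t) @ r_frak / (1 - gamma)

# Closed form of eq. (12): dV/dtheta_m = pi(m) (r_m - pi^T r) / (1 - gamma).
pi = softmax(theta)
grad = pi * (r_frak - pi @ r_frak) / (1 - gamma)

# Central finite differences for comparison.
eps = 1e-6
fd = np.array([(V(theta + eps * e) - V(theta - eps * e)) / (2 * eps)
               for e in np.eye(M)])
print("max abs difference:", np.max(np.abs(grad - fd)))
assert np.allclose(grad, fd, atol=1e-6)
```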

Next we show that $V^{\pi}$ is $\beta$-smooth. A function $f:\mathbb{R}^{M}\to\mathbb{R}$ is $\beta$-smooth if, $\forall\, \theta',\theta \in \mathbb{R}^{M}$,

$$\left| f(\theta') - f(\theta) - \left\langle \frac{d}{d\theta} f(\theta),\, \theta' - \theta \right\rangle \right| \leqslant \frac{\beta}{2}\|\theta' - \theta\|_2^2.$$

Let $S \coloneqq \frac{d^2}{d\theta^2} V^{\pi_\theta}$ . This is a matrix of size $M \times M$ . Let $1 \leqslant i, j \leqslant M$ .

$$\begin{array}{l} S_{i,j} = \left(\frac{d}{d\theta}\left(\frac{d}{d\theta} V^{\pi_\theta}\right)\right)_{i,j} \tag{13} \\ = \frac{1}{1-\gamma}\, \frac{d\left(\pi_\theta(i)\left(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right)\right)}{d\theta_j} \tag{14} \\ = \frac{1}{1-\gamma}\left(\frac{d\pi_\theta(i)}{d\theta_j}\left(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right) + \pi_\theta(i)\, \frac{d\left(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right)}{d\theta_j}\right) \tag{15} \\ = \frac{1}{1-\gamma}\left(\mathbb{I}_{ij}\, \pi_\theta(i)\left(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right) - \pi_\theta(i)\pi_\theta(j)\left(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right) - \pi_\theta(i)\pi_\theta(j)\left(\mathfrak{r}(j) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right)\right). \tag{16} \end{array}$$

Next, let $y \in \mathbb{R}^M$ ,

$$\begin{array}{l} \left| y^{\mathrm{T}} S y \right| = \left| \sum_{i=1}^{M}\sum_{j=1}^{M} S_{ij}\, y(i) y(j) \right| \\ = \frac{1}{1-\gamma}\left| \sum_{i=1}^{M}\sum_{j=1}^{M}\left(\mathbb{I}_{ij}\, \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}) - \pi_\theta(i)\pi_\theta(j)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}) - \pi_\theta(i)\pi_\theta(j)(\mathfrak{r}(j) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right) y(i) y(j) \right| \\ = \frac{1}{1-\gamma}\left| \sum_{i=1}^{M} \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i)^2 - 2\sum_{i=1}^{M}\sum_{j=1}^{M} \pi_\theta(i)\pi_\theta(j)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i) y(j) \right| \\ = \frac{1}{1-\gamma}\left| \sum_{i=1}^{M} \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i)^2 - 2\sum_{i=1}^{M} \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i) \sum_{j=1}^{M} \pi_\theta(j)\, y(j) \right| \\ \leqslant \frac{1}{1-\gamma}\left| \sum_{i=1}^{M} \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i)^2 \right| + \frac{2}{1-\gamma}\left| \sum_{i=1}^{M} \pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\, y(i) \sum_{j=1}^{M} \pi_\theta(j)\, y(j) \right| \\ \leqslant \frac{1}{1-\gamma}\left\|\pi_\theta \odot (\mathfrak{r} - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right\|_{\infty} \|y \odot y\|_1 + \frac{2}{1-\gamma}\left\|\pi_\theta \odot (\mathfrak{r} - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right\|_1 \cdot \|y\|_\infty \cdot \|\pi_\theta\|_1 \|y\|_\infty. \end{array}$$

The last inequality uses the assumption that rewards are bounded in $[0,1]$. We observe that,

$$\left\|\pi_\theta \odot (\mathfrak{r} - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right\|_1 = \sum_{i=1}^{M}\left|\pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right| = \sum_{i=1}^{M}\pi_\theta(i)\left|\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right| \leqslant \max_{i=1,\dots,M}\left|\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r}\right| \leqslant 1.$$

Next, for any $i \in [M]$ ,

$$\left|\pi_\theta(i)(\mathfrak{r}(i) - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right| = \left|\pi_\theta(i)\mathfrak{r}(i) - \pi_\theta(i)^2\mathfrak{r}(i) - \sum_{j\neq i}\pi_\theta(i)\pi_\theta(j)\mathfrak{r}(j)\right| \leqslant \pi_\theta(i)(1 - \pi_\theta(i)) + \pi_\theta(i)(1 - \pi_\theta(i)) \leqslant 2\cdot 1/4 = 1/2.$$

Combining the above two inequalities with the facts that $\|\pi_\theta\|_1 = 1$ and $\|y\|_\infty \leqslant \|y\|_2$, we get,

$$\left| y^{\mathrm{T}} S y \right| \leqslant \frac{1}{1-\gamma}\left\|\pi_\theta \odot (\mathfrak{r} - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right\|_\infty \|y\odot y\|_1 + \frac{2}{1-\gamma}\left\|\pi_\theta \odot (\mathfrak{r} - \pi_\theta^{\mathrm{T}}\mathfrak{r})\right\|_1 \cdot \|y\|_\infty \cdot \|\pi_\theta\|_1\|y\|_\infty \leqslant \frac{1}{1-\gamma}(1/2 + 2)\|y\|_2^2.$$

Hence $V^{\pi_{\theta}}$ is $\beta$ -smooth with $\beta = \frac{5}{2(1 - \gamma)}$ .

We establish a lower bound on the norm of the gradient of the value function at every step $t$ as below (inequalities of this type are called Łojasiewicz inequalities (Łojasiewicz, 1963)).

Lemma H.3. [Lower bound on norm of gradient]

$$\left\|\frac{\partial V^{\pi_\theta}}{\partial\theta}\right\|_2 \geqslant \pi_\theta(m^{*})\left(V^{\pi^{*}} - V^{\pi_\theta}\right).$$

Proof of Lemma H.3. Recall the simplification of the gradient of $V^{\pi}$, i.e., eq. (12):

$$\frac{\partial}{\partial\theta_m} V^{\pi_\theta} = \frac{1}{1-\gamma}\sum_{m'=1}^{M} \pi_\theta(m)\left\{\mathbb{I}_{mm'} - \pi_\theta(m')\right\}\mathfrak{r}_{m'} = \frac{1}{1-\gamma}\, \pi(m)\left(\mathfrak{r}(m) - \pi^{\mathrm{T}}\mathfrak{r}\right).$$

Taking norms on both sides,

$$\begin{array}{l} \left\|\frac{\partial}{\partial\theta} V^{\pi_\theta}\right\| = \frac{1}{1-\gamma}\sqrt{\sum_{m=1}^{M}(\pi(m))^2\left(\mathfrak{r}(m) - \pi^{\mathrm{T}}\mathfrak{r}\right)^2} \geqslant \frac{1}{1-\gamma}\sqrt{(\pi(m^{*}))^2\left(\mathfrak{r}(m^{*}) - \pi^{\mathrm{T}}\mathfrak{r}\right)^2} \\ = \frac{1}{1-\gamma}\, \pi(m^{*})\left(\mathfrak{r}(m^{*}) - \pi^{\mathrm{T}}\mathfrak{r}\right) = \frac{1}{1-\gamma}\, \pi(m^{*})\left(\pi^{*} - \pi\right)^{\mathrm{T}}\mathfrak{r} = \pi(m^{*})\left[V^{\pi^{*}} - V^{\pi_\theta}\right], \end{array}$$

where $\pi^{*} = e_{m^{*}}$.
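Lemma H.3 is easy to check numerically, since the gradient component at $m^*$ already equals the right-hand side. A minimal sketch (a throwaway harness of our own; the instance sizes, seed, and $\gamma = 0.9$ are arbitrary choices):

```python
import math
import random

random.seed(1)
GAMMA = 0.9          # illustrative discount factor

violations = 0
for _ in range(5000):
    m_count = random.randint(2, 8)
    r = [random.random() for _ in range(m_count)]
    m_star = max(range(m_count), key=lambda m: r[m])
    theta = [random.uniform(-4.0, 4.0) for _ in range(m_count)]
    mx = max(theta)
    exps = [math.exp(x - mx) for x in theta]
    z = sum(exps)
    pi = [e / z for e in exps]                       # softmax mixture
    avg = sum(p * ri for p, ri in zip(pi, r))        # pi^T r
    # gradient of V w.r.t. theta, eq. (12)
    grad = [p * (ri - avg) / (1.0 - GAMMA) for p, ri in zip(pi, r)]
    grad_norm = math.sqrt(sum(g * g for g in grad))
    # right-hand side: pi(m*) (V^{pi*} - V^{pi_theta})
    rhs = pi[m_star] * (r[m_star] - avg) / (1.0 - GAMMA)
    if grad_norm < rhs - 1e-12:
        violations += 1

print("violations:", violations)
```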

We will now prove Theorem H.4 and corollary H.2. We restate the result here.

Theorem H.4. With $\eta = \frac{2(1 - \gamma)}{5}$, $\theta_m^{(1)} = 1 / M$ for all $m\in [M]$, and access to the true gradient, we have $\forall t\geqslant 1$,

VπVπθt51γM2t. V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \leqslant \frac {5}{1 - \gamma} \frac {M ^ {2}}{t}.

Proof. First, note that since $V^{\pi}$ is smooth we have:

\begin{array}{l} V ^ {\pi_ {\theta_ {t}}} - V ^ {\pi_ {\theta_ {t + 1}}} \leqslant - \left\langle \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}}, \theta_ {t + 1} - \theta_ {t} \right\rangle + \frac {5}{4 (1 - \gamma)} \| \theta_ {t + 1} - \theta_ {t} \| _ {2} ^ {2} \\ = - \eta \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} + \frac {5}{4 (1 - \gamma)} \eta^ {2} \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \\ = \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \left(\frac {5 \eta^ {2}}{4 (1 - \gamma)} - \eta\right) \\ = - \left(\frac {1 - \gamma}{5}\right) \left\| \frac {d}{d \theta_ {t}} V ^ {\pi_ {\theta_ {t}}} \right\| _ {2} ^ {2} \\ \leqslant - \left(\frac {1 - \gamma}{5}\right) \left(\pi_ {\theta_ {t}} \left(m ^ {*}\right)\right) ^ {2} \left[ V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \right] ^ {2} \quad \text {(Lemma H.3)} \\ \leqslant - \left(\frac {1 - \gamma}{5}\right) \Big(\underbrace {\inf _ {1 \leqslant s \leqslant t} \pi_ {\theta_ {s}} (m ^ {*})} _ {=: c _ {t}}\Big) ^ {2} \left[ V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} \right] ^ {2}. \\ \end{array}

The first inequality is by smoothness (with $\beta/2 = \frac{5}{4(1-\gamma)}$), and the first equality follows from the update equation in Algorithm 1.

Next, let $\delta_t \coloneqq V^{\pi^*} - V^{\pi_{\theta_t}}$ . We have,

δt+1δt(1γ)5ct2δt2.(17) \delta_ {t + 1} - \delta_ {t} \leqslant - \frac {(1 - \gamma)}{5} c _ {t} ^ {2} \delta_ {t} ^ {2}. \tag {17}

Claim: $\forall t\geqslant 1,\delta_t\leqslant \frac{5}{c_t^2(1 - \gamma)}\frac{1}{t}.$

We prove the claim by using induction on $t \geqslant 1$ .

Base case. Since $\delta_t \leqslant \frac{1}{1 - \gamma}$ and, as $c_t \leqslant 1$, $\frac{5}{c_t^2(1 - \gamma)}\frac{1}{t} \geqslant \frac{5}{(1 - \gamma)t} \geqslant \frac{1}{1 - \gamma}$ for $t \leqslant 5$, the claim is true for all $t \leqslant 5$.

Induction step: Let $\varphi_t \coloneqq \frac{5}{c_t^2(1 - \gamma)}$ . Fix a $t \geqslant 2$ , assume $\delta_t \leqslant \frac{\varphi_t}{t}$ .

Let $g: \mathbb{R} \to \mathbb{R}$ be the function defined by $g(x) = x - \frac{1}{\varphi_t} x^2$. One can easily verify that $g$ is monotonically increasing on $\left[0, \frac{\varphi_t}{2}\right]$. Next, with equation (17), we have

\begin{array}{l} \delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{\varphi_ {t}} \delta_ {t} ^ {2} \\ = g \left(\delta_ {t}\right) \\ \leqslant g \left(\frac {\varphi_ {t}}{t}\right) \\ = \frac {\varphi_ {t}}{t} - \frac {\varphi_ {t}}{t ^ {2}} \\ = \varphi_ {t} \left(\frac {1}{t} - \frac {1}{t ^ {2}}\right) \\ \leqslant \varphi_ {t} \left(\frac {1}{t + 1}\right). \\ \end{array}
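The induction above can also be checked mechanically by iterating the worst case of eq. (17), i.e., $\delta_{t+1} = \delta_t - \delta_t^2/\varphi$ for a fixed $\varphi$; the constants below are illustrative choices of ours:

```python
# Iterate the worst case of eq. (17): delta_{t+1} = delta_t - delta_t^2 / phi,
# and check the claimed bound delta_t <= phi / t at every step.
phi = 2.0            # illustrative value of phi_t (held fixed here)
delta = 0.9          # delta_1, chosen <= phi / 2 so g stays in its increasing range
ok = True
for t in range(1, 10001):
    if delta > phi / t + 1e-15:
        ok = False
        break
    delta = delta - delta * delta / phi

print("bound held:", ok)
```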

This completes the proof of the claim. We will show that $c_{t} \geqslant 1 / M$ in the next lemma. We first complete the proof of the corollary assuming this.

We fix a $T \geqslant 1$ . Observe that, $\delta_t \leqslant \frac{5}{(1 - \gamma)c_t^2} \frac{1}{t} \leqslant \frac{5}{(1 - \gamma)c_T^2} \frac{1}{t}$ .

t=1TVπVπθt=11γt=1T(ππθt)Tr5logT(1γ)cT2+1. \sum_ {t = 1} ^ {T} V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} = \frac {1}{1 - \gamma} \sum_ {t = 1} ^ {T} (\pi^ {*} - \pi_ {\theta_ {t}}) ^ {\mathrm {T}} \mathfrak {r} \leqslant \frac {5 \log T}{(1 - \gamma) c _ {T} ^ {2}} + 1.

Also we have that,

t=1TVπVπθt=t=1TδtTt=1Tδt2Tt=1T5(1γ)cT2(δtδt+1)1cT5T(1γ). \sum_ {t = 1} ^ {T} V ^ {\pi^ {*}} - V ^ {\pi_ {\theta_ {t}}} = \sum_ {t = 1} ^ {T} \delta_ {t} \leqslant \sqrt {T} \sqrt {\sum_ {t = 1} ^ {T} \delta_ {t} ^ {2}} \leqslant \sqrt {T} \sqrt {\sum_ {t = 1} ^ {T} \frac {5}{(1 - \gamma) c _ {T} ^ {2}} (\delta_ {t} - \delta_ {t + 1})} \leqslant \frac {1}{c _ {T}} \sqrt {\frac {5 T}{(1 - \gamma)}}.

We next show that with $\theta_m^{(1)} = 1 / M, \forall m$ , i.e., uniform initialization, $\inf_{t \geqslant 1} c_t = 1 / M$ , which will then complete the proof of Theorem H.4 and of corollary H.2.

Lemma H.5. We have $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) > 0$ . Furthermore, with uniform initialization of the parameters $\theta_m^{(1)}$ , i.e., $1 / M$ , $\forall m\in [M]$ , we have $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) = \frac{1}{M}$ .

Proof. We will show that there exists $t_0$ such that $\inf_{t \geqslant 1} \pi_{\theta_t}(m^*) = \min_{1 \leqslant t \leqslant t_0} \pi_{\theta_t}(m^*)$, where $t_0 = \min\{t : \pi_{\theta_t}(m^*) \geqslant C\}$. We define the following sets.

S1={θ:dVπθdθmdVπθdθm,mm} \mathcal {S} _ {1} = \left\{\theta : \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {m ^ {*}}} \geqslant \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {m}}, \forall m \neq m ^ {*} \right\}

S2={θ:πθ(m)πθ(m),mm} \mathcal {S} _ {2} = \left\{\theta : \pi_ {\theta} \left(m ^ {*}\right) \geqslant \pi_ {\theta} (m), \forall m \neq m ^ {*} \right\}

S3={θ:πθ(m)C} \mathcal {S} _ {3} = \left\{\theta : \pi_ {\theta} \left(m ^ {*}\right) \geqslant C \right\}

Note that $S_{3}$ depends on the choice of $C$ . Let $C \coloneqq \frac{M - \Delta}{M + \Delta}$ . We claim the following:

Claim 2. (i) $\theta_t \in \mathcal{S}_1 \Rightarrow \theta_{t+1} \in \mathcal{S}_1$; and (ii) $\theta_t \in \mathcal{S}_1 \Rightarrow \pi_{\theta_{t+1}}(m^*) \geqslant \pi_{\theta_t}(m^*)$.

Proof of Claim 2. (i) Fix an $m \neq m^*$. We will show that if $\frac{dV^{\pi_{\theta}}}{d\theta_t(m^*)} \geqslant \frac{dV^{\pi_{\theta}}}{d\theta_t(m)}$, then $\frac{dV^{\pi_{\theta}}}{d\theta_{t+1}(m^*)} \geqslant \frac{dV^{\pi_{\theta}}}{d\theta_{t+1}(m)}$. This will prove the first part.

Case (a): $\pi_{\theta_t}(m^*) \geqslant \pi_{\theta_t}(m)$ . This implies, by the softmax property, that $\theta_t(m^*) \geqslant \theta_t(m)$ . After gradient ascent update step we have:

θt+1(m)=θt(m)+ηdVπθtdθt(m)θt(m)+ηdVπθtdθt(m)=θt+1(m). \begin{array}{l} \theta_ {t + 1} (m ^ {*}) = \theta_ {t} (m ^ {*}) + \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} (m ^ {*})} \\ \geqslant \theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {\theta_ {t}}}}{d \theta_ {t} (m)} \\ = \theta_ {t + 1} (m). \\ \end{array}

This implies that $\pi_{\theta_{t+1}}(m^{*}) \geqslant \pi_{\theta_{t+1}}(m)$. By the expression for the derivative of $V^{\pi_{\theta}}$ w.r.t. $\theta$ (see eq (12)),

\begin{array}{l} \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {t + 1} (m ^ {*})} = \frac {1}{1 - \gamma} \pi_ {\theta_ {t + 1}} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}) \\ \geqslant \frac {1}{1 - \gamma} \pi_ {\theta_ {t + 1}} (m) (\mathfrak {r} (m) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}) \\ = \frac {d V ^ {\pi_ {\theta}}}{d \theta_ {t + 1} (m)}. \\ \end{array}

This implies $\theta_{t + 1}\in \mathcal{S}_1$

Case (b): $\pi_{\theta_t}(m^*) < \pi_{\theta_t}(m)$ . We first note the following equivalence:

\frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} \geqslant \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} \Longleftrightarrow \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m) \geqslant \left(1 - \frac {\pi_ {\theta} (m ^ {*})}{\pi_ {\theta} (m)}\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right).

which can be simplified as:

\left(1 - \frac {\pi_ {\theta} (m ^ {*})}{\pi_ {\theta} (m)}\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right) = \left(1 - \exp \left(\theta_ {t} (m ^ {*}) - \theta_ {t} (m)\right)\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right).

Hence, the condition is equivalent to:

r(m)r(m)(1exp(θt(m)θt(m)))(r(m)πθtTr). \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m) \geqslant \left(1 - \exp \left(\theta_ {t} (m ^ {*}) - \theta_ {t} (m)\right)\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}\right).

By lemma I.10, we have that $V^{\pi_{\theta_{t + 1}}} \geqslant V^{\pi_{\theta_t}} \Rightarrow \pi_{\theta_{t + 1}}^{\mathrm{T}}\mathfrak{r} \geqslant \pi_{\theta_t}^{\mathrm{T}}\mathfrak{r}$ . Hence,

0 < \mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r} \leqslant \mathfrak {r} (m ^ {*}) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}.

Also, we note:

\theta_ {t + 1} \left(m ^ {*}\right) - \theta_ {t + 1} (m) = \theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)} - \theta_ {t} (m) - \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} (m)} \geqslant \theta_ {t} \left(m ^ {*}\right) - \theta_ {t} (m).

This implies $1 - \exp \left(\theta_{t + 1}(m^{*}) - \theta_{t + 1}(m)\right) \leqslant 1 - \exp \left(\theta_{t}(m^{*}) - \theta_{t}(m)\right)$.

Next, we observe that by the assumption $\pi_t(m^*) < \pi_t(m)$ , we have

1exp(θt(m)θt(m))=1πt(m)πt(m)>0. 1 - \exp \left(\theta_ {t} (m ^ {*}) - \theta_ {t} (m)\right) = 1 - \frac {\pi_ {t} (m ^ {*})}{\pi_ {t} (m)} > 0.

Hence we have,

(1exp(θt+1(m)θt+1(m)))(r(m)πθt+1Tr)(1exp(θt(m)θt(m)))(r(m)πθtTr)r(m)r(m). \begin{array}{l} \left(1 - \exp \left(\theta_ {t + 1} \left(m ^ {*}\right) - \theta_ {t + 1} (m)\right)\right) \left(\mathfrak {r} \left(m ^ {*}\right) - \pi_ {\theta_ {t + 1}} ^ {\mathrm {T}} \mathfrak {r}\right) \leqslant \left(1 - \exp \left(\theta_ {t} \left(m ^ {*}\right) - \theta_ {t} (m)\right)\right) \left(\mathfrak {r} \left(m ^ {*}\right) - \pi_ {\theta_ {t}} ^ {\mathrm {T}} \mathfrak {r}\right) \\ \leqslant \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m). \\ \end{array}

Equivalently,

(1πt+1(m)πt+1(m))(r(m)πt+1Tr)r(m)r(m). \left(1 - \frac {\pi_ {t + 1} (m ^ {*})}{\pi_ {t + 1} (m)}\right) (\mathfrak {r} (m ^ {*}) - \pi_ {t + 1} ^ {\mathrm {T}} \mathfrak {r}) \leqslant \mathfrak {r} (m ^ {*}) - \mathfrak {r} (m).

which, combined with the equivalence noted above, shows that $\theta_{t+1} \in \mathcal{S}_1$, finishing the proof of Claim 2(i).

(ii) Let $\theta_t \in S_1$ . We observe that:

πt+1(m)=exp(θt+1(m))m=1Mexp(θt+1(m))=exp(θt(m)+ηdVπtdθt(m))m=1Mexp(θt(m)+ηdVπtdθt(m))exp(θt(m)+ηdVπtdθt(m))m=1Mexp(θt(m)+ηdVπtdθt(m))=exp(θt(m))m=1Mexp(θt(m))=πt(m) \begin{array}{l} \pi_ {t + 1} (m ^ {*}) = \frac {\exp (\theta_ {t + 1} (m ^ {*}))}{\sum_ {m = 1} ^ {M} \exp (\theta_ {t + 1} (m))} \\ = \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} (m)}\right)} \\ \geqslant \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m) + \eta \frac {d V ^ {\pi_ {t}}}{d \theta_ {t} \left(m ^ {*}\right)}\right)} \\ = \frac {\exp \left(\theta_ {t} \left(m ^ {*}\right)\right)}{\sum_ {m = 1} ^ {M} \exp \left(\theta_ {t} (m)\right)} = \pi_ {t} \left(m ^ {*}\right) \\ \end{array}

This completes the proof of Claim 2(ii).

Claim 3. $S_{2}\subset S_{1}$ and $S_{3}\subset S_{1}$

Proof. To show that $S_{2} \subset S_{1}$, let $\theta \in S_{2}$. We have $\pi_{\theta}(m^{*}) \geqslant \pi_{\theta}(m), \forall m \neq m^{*}$. Then,

\begin{array}{l} \frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} = \frac {1}{1 - \gamma} \pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \\ \geqslant \frac {1}{1 - \gamma} \pi_ {\theta} (m) (\mathfrak {r} (m) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) \\ = \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)}. \\ \end{array}

This shows that $\theta \in S_1$ . For showing the second part of the claim, we assume $\theta \in S_3 \cap S_2^c$ , because if $\theta \in S_2$ , we are done. Let $m \neq m^*$ . We have,

dVπθdθ(m)dVπθdθ(m)=11γ(πθ(m)(r(m)πθTr)πθ(m)(r(m)πθTr))=11γ(2πθ(m)(r(m)πθTr)+im,mMπθ(i)(r(i)πθTr))=11γ((2πθ(m)+im,mMπθ(i))(r(m)πθTr)im,mMπθ(i)(r(m)r(i)))11γ((2πθ(m)+im,mMπθ(i))(r(m)πθTr)im,mMπθ(i))11γ((2πθ(m)+im,mMπθ(i))ΔMim,mMπθ(i)). \begin{array}{l} \frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} - \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} = \frac {1}{1 - \gamma} \left(\pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \pi_ {\theta} (m) (\mathfrak {r} (m) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right) \\ = \frac {1}{1 - \gamma} \left(2 \pi_ {\theta} (m ^ {*}) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) + \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (i) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r})\right) \\ = \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i)\right) \left(\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}\right) - \sum_ {i \neq m ^ {*} , m} ^ {M} \pi_ {\theta} (i) (\mathfrak {r} (m ^ {*}) - \mathfrak {r} (i))\right) \\ \geqslant \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) (\mathfrak {r} (m ^ {*}) - \pi_ {\theta} ^ {\mathrm {T}} \mathfrak {r}) - \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) \\ \geqslant \frac {1}{1 - \gamma} \left(\left(2 \pi_ {\theta} (m ^ {*}) + \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right) \frac {\Delta}{M} - \sum_ {i \neq m ^ {*}, m} ^ {M} \pi_ {\theta} (i)\right). \\ \end{array}

Observe that $\sum_{i\neq m^{*},m}^{M}\pi_{\theta}(i) = 1 - \pi (m^{*}) - \pi (m)$. Using this and rearranging we get,

dVπθdθ(m)dVπθdθ(m)11γ(π(m)(1+ΔM)(1ΔM)+π(m)(1ΔM))11γπ(m)(1ΔM)0. \frac {d V ^ {\pi_ {\theta}}}{d \theta (m ^ {*})} - \frac {d V ^ {\pi_ {\theta}}}{d \theta (m)} \geqslant \frac {1}{1 - \gamma} \left(\pi (m ^ {*}) \left(1 + \frac {\Delta}{M}\right) - \left(1 - \frac {\Delta}{M}\right) + \pi (m) \left(1 - \frac {\Delta}{M}\right)\right) \geqslant \frac {1}{1 - \gamma} \pi (m) \left(1 - \frac {\Delta}{M}\right) \geqslant 0.

The last inequality follows because $\theta \in S_3$ and the choice of $C$ . This completes the proof of Claim 3.

Claim 4. There exists a finite $t_0$ , such that $\theta_{t_0} \in S_3$ .

Proof. The proof of this claim relies on the asymptotic convergence result of (Agarwal et al., 2020a). We note that their convergence result holds for our choice of $\eta = \frac{2(1 - \gamma)}{5}$. As noted in (Mei et al., 2020), the choice of $\eta$ is used to justify the gradient ascent lemma I.10. Hence we have $\pi_{\theta_t}(m^*) \to 1$ as $t \to \infty$. Therefore, there exists a finite $t_0$ such that $\pi_{\theta_{t_0}}(m^*) \geqslant C$ and hence $\theta_{t_0} \in S_3$.

This establishes that there exists a $t_0$ such that $\inf_{t\geqslant 1}\pi_{\theta_t}(m^*) = \min_{1\leqslant t\leqslant t_0}\pi_{\theta_t}(m^*)$: once $\theta_{t_0}\in S_3$, by Claim 3 we have $\theta_{t_0}\in S_1$, and then by Claim 2, $\theta_t\in S_1$ for all $t\geqslant t_0$ and $\pi_{\theta_t}(m^*)$ is non-decreasing after $t_0$.

With uniform initialization, $\theta_{1}(m^{*}) = \frac{1}{M} = \theta_{1}(m)$ for all $m \neq m^{*}$. Hence $\pi_{\theta_1}(m^*) \geqslant \pi_{\theta_1}(m)$ for all $m \neq m^{*}$. This implies $\theta_{1} \in S_{2}$, which implies $\theta_{1} \in S_{1}$. As established in Claim 2, $S_{1}$ remains invariant under gradient ascent updates, implying $t_0 = 1$. Hence we have that $\inf_{t \geqslant 1} \pi_{\theta_t}(m^*) = \pi_{\theta_1}(m^*) = 1 / M$, completing the proof of Theorem H.4 and Corollary H.2.
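A short simulation illustrates both Theorem H.4 and Lemma H.5: exact-gradient ascent on the softmax parameters with $\eta = 2(1-\gamma)/5$ and uniform initialization. The reward means and $\gamma$ below are our illustrative choices; the in-loop assertions check the $O(1/t)$ bound and the monotonicity of $\pi_t(m^*)$:

```python
import math

GAMMA = 0.9                          # illustrative discount factor
ETA = 2.0 * (1.0 - GAMMA) / 5.0      # step size from Theorem H.4
R = [0.9, 0.7, 0.5, 0.2]             # illustrative reward means; arm 0 is m*
M = len(R)

def softmax(theta):
    mx = max(theta)
    exps = [math.exp(x - mx) for x in theta]
    z = sum(exps)
    return [e / z for e in exps]

theta = [1.0 / M] * M                # uniform initialization theta_m^{(1)} = 1/M
prev = 0.0
for t in range(1, 5001):
    pi = softmax(theta)
    avg = sum(p * r for p, r in zip(pi, R))
    delta_t = (max(R) - avg) / (1.0 - GAMMA)        # V^{pi*} - V^{pi_theta_t}
    # Theorem H.4: delta_t <= 5 M^2 / ((1 - gamma) t)
    assert delta_t <= 5.0 * M * M / ((1.0 - GAMMA) * t) + 1e-12
    # Lemma H.5 (via Claim 2(ii)): pi_t(m*) is non-decreasing from t = 1
    assert pi[0] >= prev - 1e-12
    prev = pi[0]
    grad = [p * (r - avg) / (1.0 - GAMMA) for p, r in zip(pi, R)]
    theta = [x + ETA * g for x, g in zip(theta, grad)]

final_pi = softmax(theta)
print("final pi(m*):", final_pi[0])
```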

H.2. Proofs for MABs with noisy gradients

When value gradients are unavailable, we follow a direct policy gradient algorithm instead of the softmax parameterization. The full pseudo-code is provided here in Algorithm 4. At each round $t \geqslant 1$, the learning rate $\eta$ is chosen separately for each controller $m$ to be $\alpha \pi_t(m)^2$, for some $\alpha \in (0,1)$, to ensure that the iterates remain inside the simplex. To justify its name as a policy gradient algorithm, observe that in order to minimize regret, we need to solve the following optimization problem:

minπP([M])m=1Mπ(m)(rμ(m)rμ(m)). \min _ {\pi \in \mathcal {P} ([ M ])} \sum_ {m = 1} ^ {M} \pi (m) \left(\mathfrak {r} _ {\mu} \left(m ^ {*}\right) - \mathfrak {r} _ {\mu} (m)\right).

Taking the gradient directly with respect to the parameters $\pi(m)$ yields the update rule of the policy gradient algorithm. The other changes in the update step (eq. (18)) stem from the fact that the true means of the arms are unavailable, and from importance sampling.

We have the following result.

Theorem H.6. With $\alpha$ chosen to be less than $\frac{\Delta_{min}}{\mathfrak{r}_{\mu}(m^*) - \Delta_{min}}$, $(\pi_t)$ is a Markov process with $\pi_t(m^*) \to 1$ as $t \to \infty$, a.s. Further, the regret up to any time $T$ is bounded as

R(T)11γmmΔmαΔmin2logT+C, \mathcal {R} (T) \leqslant \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \frac {\Delta_ {m}}{\alpha \Delta_ {m i n} ^ {2}} \log T + C,

where $C\coloneqq \frac{1}{1 - \gamma}\sum_{t\geqslant 1}\mathbb{P}\left\{\pi_t(m^*)\leqslant \frac{1}{2}\right\} < \infty$.

We make a couple of remarks before providing the full proof of Theorem H.6.

Remark H.7. The "cost" of not knowing the true gradient appears as the dependence on $\Delta_{min}$ in the regret bound, which is absent when the true gradient is available (see Theorem H.4 and Corollary H.2). This dependence on $\Delta_{min}$, as is well known from the work of (Lai & Robbins, 1985), is unavoidable.

Remark H.8. The dependence of $\alpha$ on $\Delta_{min}$ can be removed by a more sophisticated choice of learning rate, at the cost of an extra log $T$ dependence on regret (Denisov & Walton, 2020).

Algorithm 4 Projection-free Policy Gradient (for MABs)
Input: learning rate $\eta \in (0,1)$
Initialize each $\pi_1(m) = \frac{1}{M}$ , for all $m \in [M]$ .
for $t = 1$ to $T$ do
$m_{*}(t) \gets \operatorname{argmax}_{m \in [M]} \pi_{t}(m)$
Choose controller $m_{t} \sim \pi_{t}$ .
Play action $a_{t} \sim K_{m_{t}}$ .
Receive reward $R_{m_t}$ by pulling arm $a_{t}$ .
Update $\forall m \in [M], m \neq m_{*}(t)$ :

\pi_ {t + 1} (m) = \pi_ {t} (m) + \eta \left(\frac {R _ {m} \mathbb {I} _ {m}}{\pi_ {t} (m)} - \frac {R _ {m _ {*} (t)} \mathbb {I} _ {m _ {*} (t)}}{\pi_ {t} \left(m _ {*} (t)\right)}\right) \tag {18}

Set $\pi_{t + 1} (m _ {*} (t)) = 1 - \sum_ {m \neq m _ {*} (t)} \pi_ {t + 1} (m)$.

end for
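A minimal simulation of Algorithm 4 on a Bernoulli bandit (the arm means, $\alpha$, horizon, and seeds below are our illustrative choices) checks that the iterates stay in the simplex and that $\pi_t(m^*)$ drifts towards 1:

```python
import random

def run_alg4(means, alpha, horizon, seed):
    """One run of Algorithm 4 on a Bernoulli bandit with the given arm means."""
    rng = random.Random(seed)
    m_count = len(means)
    pi = [1.0 / m_count] * m_count
    for _ in range(horizon):
        lead = max(range(m_count), key=lambda m: pi[m])     # m_*(t)
        m_t = rng.choices(range(m_count), weights=pi)[0]    # controller ~ pi_t
        reward = 1.0 if rng.random() < means[m_t] else 0.0
        new_pi = list(pi)
        for m in range(m_count):
            if m == lead:
                continue
            # eq. (18), with the per-controller learning rate alpha * pi_t(m)^2
            ind_m = 1.0 if m_t == m else 0.0
            ind_lead = 1.0 if m_t == lead else 0.0
            new_pi[m] = pi[m] + alpha * pi[m] ** 2 * (
                ind_m * reward / pi[m] - ind_lead * reward / pi[lead]
            )
        new_pi[lead] = 1.0 - sum(new_pi[m] for m in range(m_count) if m != lead)
        pi = new_pi
        assert all(p >= 0.0 for p in pi) and abs(sum(pi) - 1.0) < 1e-9
    return pi

MEANS = [0.9, 0.5, 0.3]       # Delta_min = 0.4, so any alpha < 0.4 / 0.5 = 0.8 works
finals = [run_alg4(MEANS, alpha=0.2, horizon=3000, seed=s)[0] for s in range(10)]
avg_final = sum(finals) / len(finals)
print("average final pi(m*):", avg_final)
```

Since the step size $\alpha\pi_t(m)^2$ shrinks with $\pi_t(m)$, no projection is ever needed; the in-loop assertion confirms this empirically.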

Proof. The proof is an extension of that of Theorem 1 of (Denisov & Walton, 2020) to our setting. The proof is divided into three main parts. In the first part we show that the recurrence time of the process $\{\pi_t(m^*)\}_{t \geqslant 1}$ is almost surely finite. Next we bound the expected value of the time taken by the process $\pi_t(m^*)$ to reach 1. Finally we show that, almost surely, $\lim_{t \to \infty} \pi_t(m^*) = 1$; in other words, the process $\{\pi_t(m^*)\}_{t \geqslant 1}$ is transient. We use all these facts to show a regret bound.

Recall $m_{*}(t) \coloneqq \operatorname{argmax}_{m \in [M]} \pi_{t}(m)$. We start by defining the following quantities, which will be useful in the analysis of Algorithm 4.

Let $\tau := \min \left\{t \geqslant 1 : \pi_t(m^*) > \frac{1}{2}\right\}$.

Next, let $S \coloneqq \left\{\pi \in \mathcal{P}([M]): \frac{1 - \alpha}{2} \leqslant \pi(m^*) < \frac{1}{2}\right\}$.

In addition, we define for any $a \in \mathbb{R}$, $\mathcal{S}_a \coloneqq \left\{\pi \in \mathcal{P}([M]) : \frac{1 - \alpha}{a} \leqslant \pi(m^*) < \frac{1}{a}\right\}$. Observe that if $\pi_t(m^*) \geqslant 1/a$ and $\pi_{t+1}(m^*) < 1/a$, then $\pi_{t+1} \in \mathcal{S}_a$. This fact follows from the update step of Algorithm 4 together with the choice of learning rate $\alpha \pi_t(m)^2$ for every $m \neq m^*$.

Lemma H.9. For $\alpha > 0$ such that $\alpha < \frac{\Delta_{min}}{\mathfrak{r}(m^{*}) - \Delta_{min}}$ , we have that

supπSE[τπ1=π]<. \sup _ {\pi \in \mathcal {S}} \mathbb {E} \left[ \tau \mid \pi_ {1} = \pi \right] < \infty .

Proof. The proof here is given for completeness. We first make note of the following useful result: for a sequence of positive real numbers $\{a_n\}_{n\geqslant 1}$ satisfying

a _ {n + 1} \leqslant a _ {n} - b \, a _ {n} ^ {2},

for some $b > 0$ , the following is always true:

a _ {n + 1} \leqslant \frac {a _ {1}}{1 + b \, a _ {1} n}.

This inequality follows by rearranging and observing that $a_{n}$ is a non-increasing sequence. A complete proof can be found in ((Denisov & Walton, 2020), Appendix A.1). Returning to the proof of the lemma, we proceed by showing that the sequence $1 / \pi_t(m^*) - ct$ is a supermartingale for some $c > 0$. Let $\Delta \coloneqq \Delta_{min}$ for ease of notation. Note that if the condition on $\alpha$ holds, then there exists an $\varepsilon > 0$ such that $(1 + \varepsilon)(1 + \alpha) < \mathfrak{r}^{*} / (\mathfrak{r}^{*} - \Delta)$, where $\mathfrak{r}^{*} \coloneqq \mathfrak{r}(m^{*})$. We choose $c$ to be

c := \frac {\alpha \mathfrak {r} ^ {*}}{1 + \alpha} - \alpha (\mathfrak {r} ^ {*} - \Delta) (1 + \varepsilon) > 0.

Next, let $x$ be greater than $M$ and satisfy:

xxαM1+ε. \frac {x}{x - \alpha M} \leqslant 1 + \varepsilon .

Let $\xi_{x} := \min\{t \geqslant 1 : \pi_{t}(m^{*}) > 1 / x\}$. Since for $t = 1,\dots ,\xi_{x} - 1$ we have $m_{*}(t) \neq m^{*}$, it holds that $\pi_{t + 1}(m^{*}) = (1 + \alpha)\pi_{t}(m^{*})$ w.p. $\pi_t(m^{*})\mathfrak{r}^{*}$, and $\pi_{t + 1}(m^{*}) = \pi_{t}(m^{*}) - \alpha \pi_{t}(m^{*})^{2} / \pi_{t}(m_{*}(t))$ w.p. $\pi_t(m_{*}(t))\mathfrak{r}_{*}(t)$, where $\mathfrak{r}_{*}(t) \coloneqq \mathfrak{r}(m_{*}(t))$.

Let $y(t)\coloneqq 1 / \pi_t(m^*)$ , then we observe by a short calculation that,

y (t + 1) = \left\{ \begin{array}{l l} y (t) - \frac {\alpha}{1 + \alpha} y (t), & \text{w.p. } \frac {\mathfrak {r} ^ {*}}{y (t)} \\ y (t) + \alpha \frac {y (t)}{\pi_ {t} (m _ {*} (t)) y (t) - \alpha}, & \text{w.p. } \pi_ {t} (m _ {*} (t)) \, \mathfrak {r} _ {*} (t) \\ y (t), & \text{otherwise.} \end{array} \right.

We see that,

\begin{array}{l} \mathbb {E} \left[ y (t + 1) \mid H (t) \right] - y (t) \\ = \frac {\mathfrak {r} ^ {*}}{y (t)} \left(y (t) - \frac {\alpha}{1 + \alpha} y (t)\right) + \pi_ {t} (m _ {*} (t)) \, \mathfrak {r} _ {*} (t) \left(y (t) + \alpha \frac {y (t)}{\pi_ {t} (m _ {*} (t)) y (t) - \alpha}\right) - y (t) \left(\frac {\mathfrak {r} ^ {*}}{y (t)} + \pi_ {t} (m _ {*} (t)) \, \mathfrak {r} _ {*} (t)\right) \\ \leqslant \alpha (\mathfrak {r} ^ {*} - \Delta) (1 + \varepsilon) - \frac {\alpha \mathfrak {r} ^ {*}}{1 + \alpha} = - c. \\ \end{array}

The inequality holds because $\mathfrak{r}_*(t) \leqslant \mathfrak{r}^{*} - \Delta$ and $\pi_t(m_*(t)) \geqslant 1/M$. By the Optional Stopping Theorem (Durrett, 2011),

- c \, \mathbb {E} [ \xi_ {x} \wedge t ] \geqslant \mathbb {E} [ y (\xi_ {x} \wedge t) ] - \mathbb {E} [ y (1) ] \geqslant - \frac {x}{1 - \alpha}.

The final inequality holds because $\pi_1(m^*)\geqslant \frac{1 - \alpha}{x}$.

Next, applying the monotone convergence theorem gives that $\mathbb{E}\left[\xi_x\right] \leqslant \frac{x}{c(1 - \alpha)}$. Finally, to show the result of Lemma H.9, we refer the reader to (Appendix A.2, (Denisov & Walton, 2020)); the remainder follows from standard Markov chain arguments.

Next we define an embedded Markov chain $\{p(s), s \in \mathbb{Z}_+\}$ as follows. First let $\sigma(k) \coloneqq \min \left\{t \geqslant \tau(k) : \pi_t(m^*) < \frac{1}{2}\right\}$ and $\tau(k) \coloneqq \min \left\{t \geqslant \sigma(k - 1) : \pi_t(m^*) \geqslant \frac{1}{2}\right\}$. Note that within the region $[\tau(k), \sigma(k))$, $\pi_t(m^*) \geqslant 1/2$, and in $[\sigma(k), \tau(k + 1))$, $\pi_t(m^*) < 1/2$. We next analyze the rate at which $\pi_t(m^*)$ approaches 1. Define

p (s) := \pi_ {t _ {s}} \left(m ^ {*}\right) \quad \text {where} \quad t _ {s} = s + \sum_ {i = 0} ^ {k} (\tau (i + 1) - \sigma (i)) \quad \text {for} \quad s \in \left[ \sum_ {i = 0} ^ {k} (\sigma (i) - \tau (i)), \sum_ {i = 0} ^ {k + 1} (\sigma (i) - \tau (i))\right).

Also let,

\sigma_ {s} := \min \left\{t > 0: \pi_ {t + t _ {s}} \left(m ^ {*}\right) < 1 / 2 \right\}

and,

\tau_ {s} := \min \left\{t > \sigma_ {s}: \pi_ {t + t _ {s}} \left(m ^ {*}\right) \geqslant 1 / 2 \right\}

Lemma H.10. The process $\{p(s)\}_{s\geqslant 1}$ is a submartingale. Further, $p(s)\to 1$ as $s\to \infty$. Finally,

\mathbb{E}\left[p(s)\right]\geqslant 1 - \frac{1}{1 + \alpha\frac{\Delta^{2}}{\left(\sum_{m^{\prime}\neq m^{*}}\Delta_{m^{\prime}}\right)}\, s}.

Proof. We first observe that,

p (s + 1) = \left\{ \begin{array}{l l} \pi_ {t _ {s} + 1} (m ^ {*}) & \text{if } \pi_ {t _ {s} + 1} (m ^ {*}) \geqslant 1 / 2 \\ \pi_ {t _ {s} + \tau_ {s}} (m ^ {*}) & \text{if } \pi_ {t _ {s} + 1} (m ^ {*}) < 1 / 2 \end{array} \right.

Since $\pi_{t_s + \tau_s}(m^*)\geqslant 1 / 2$ we have that,

p (s + 1) \geqslant \pi_ {t _ {s} + 1} \left(m ^ {*}\right) \quad \text {and} \quad p (s) = \pi_ {t _ {s}} \left(m ^ {*}\right).

Since at times $t_s, \pi_{t_s}(m^*) > 1/2$ , we know that $m^*$ is the leading arm. Thus by the update step, for all $m \neq m^*$ ,

πts+1(m)=πts(m)+απts(m)2[ImRm(ts)πts(m)ImRm(ts)πts(m)]. \pi_ {t _ {s} + 1} (m) = \pi_ {t _ {s}} (m) + \alpha \pi_ {t _ {s}} (m) ^ {2} \left[ \frac {\mathbb {I} _ {m} R _ {m} (t _ {s})}{\pi_ {t _ {s}} (m)} - \frac {\mathbb {I} _ {m ^ {*}} R _ {m ^ {*}} (t _ {s})}{\pi_ {t _ {s}} (m ^ {*})} \right].

Taking expectations both sides,

E[πts+1(m)H(ts)]πts(m)=απts(m)2(rmrm)=αΔmπts(m)2. \mathbb {E} \left[ \pi_ {t _ {s} + 1} (m) \mid H (t _ {s}) \right] - \pi_ {t _ {s}} (m) = \alpha \pi_ {t _ {s}} (m) ^ {2} (\mathfrak {r} _ {m} - \mathfrak {r} _ {m ^ {*}}) = - \alpha \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2}.

Summing over all $m \neq m^*$ :

E[πts+1(m)H(ts)]+πts(m)=αmmΔmπts(m)2. - \mathbb {E} \left[ \pi_ {t _ {s} + 1} (m ^ {*}) \mid H (t _ {s}) \right] + \pi_ {t _ {s}} (m ^ {*}) = - \alpha \sum_ {m \neq m ^ {*}} \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2}.

By Jensen's inequality,

mmΔmπts(m)2=(mmΔm)mmΔm(mmΔm)πts(m)2(mmΔm)(mmΔmπts(m)(mmΔm))2(mmΔm)Δ2(mmπts(m))2(mmΔm)2=Δ2(1πts(m))2(mmΔm). \begin{array}{l} \sum_ {m \neq m ^ {*}} \Delta_ {m} \pi_ {t _ {s}} (m) ^ {2} = \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \sum_ {m \neq m ^ {*}} \frac {\Delta_ {m}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \pi_ {t _ {s}} (m) ^ {2} \\ \geqslant \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \left(\sum_ {m \neq m ^ {*}} \frac {\Delta_ {m} \pi_ {t _ {s}} (m)}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}\right) ^ {2} \\ \geqslant \left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) \frac {\Delta^ {2} \left(\sum_ {m \neq m ^ {*}} \pi_ {t _ {s}} (m)\right) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) ^ {2}} \\ = \frac {\Delta^ {2} \left(1 - \pi_ {t _ {s}} (m ^ {*})\right) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}. \\ \end{array}

Hence we get,

p(s)E[p(s+1)H(ts)]αΔ2(1p(s))2(mmΔm)E[p(s+1)H(ts)]p(s)+αΔ2(1p(s))2(mmΔm). p (s) - \mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] \leqslant - \alpha \frac {\Delta^ {2} (1 - p (s)) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \Rightarrow \mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] \geqslant p (s) + \alpha \frac {\Delta^ {2} (1 - p (s)) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}.

This implies immediately that $\{p(s)\}_{s\geqslant 1}$ is a submartingale.
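The weighted Jensen step used above can be verified numerically on random instances (a throwaway harness of our own; all constants are arbitrary):

```python
import random

random.seed(2)

violations = 0
for _ in range(5000):
    m_count = random.randint(2, 8)
    r = [random.random() for _ in range(m_count)]
    m_star = max(range(m_count), key=lambda m: r[m])
    gaps = [r[m_star] - r[m] for m in range(m_count)]            # Delta_m
    w = [random.random() + 1e-9 for _ in range(m_count)]
    z = sum(w)
    pi = [v / z for v in w]                                      # a point in the simplex
    others = [m for m in range(m_count) if m != m_star]
    delta = min(gaps[m] for m in others)                         # Delta = Delta_min
    sum_gaps = sum(gaps[m] for m in others)
    if sum_gaps <= 0.0:
        continue                                                 # all arms tied; step is vacuous
    lhs = sum(gaps[m] * pi[m] ** 2 for m in others)
    rhs = delta ** 2 * (1.0 - pi[m_star]) ** 2 / sum_gaps
    if lhs < rhs - 1e-12:
        violations += 1

print("violations:", violations)
```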

Since $\{p(s)\}$ is non-negative and bounded by 1, by the Martingale Convergence Theorem, $\lim_{s\to \infty}p(s)$ exists. We will now show that the limit is 1. Clearly, it is sufficient to show that $\limsup_{s\to\infty} p(s) = 1$. For $a > 2$, let

\varphi_ {a} := \min \left\{s \geqslant 1: p (s) \geqslant \frac {a - 1}{a} \right\}.

As is shown in (Denisov & Walton, 2020), it is sufficient to show that $\varphi_{a} < \infty$ with probability 1, because then one can define a sequence of stopping times for increasing $a$, each finite w.p. 1, which implies that $p(s) \to 1$. By the previous display, we have

E[p(s+1)H(ts)]p(s)αΔ2(mmΔm)a2 \mathbb {E} \left[ p (s + 1) \mid H (t _ {s}) \right] - p (s) \geqslant \alpha \frac {\Delta^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) a ^ {2}}

as long as $p(s) \leqslant \frac{a - 1}{a}$ . Hence by applying Optional Stopping Theorem and rearranging we get,

\mathbb {E} \left[ \varphi_ {a} \right] \leqslant \lim _ {s \rightarrow \infty} \mathbb {E} \left[ \varphi_ {a} \wedge s \right] \leqslant \frac {\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right) a ^ {2}}{\alpha \Delta ^ {2}} \left(1 - \mathbb {E} \left[ p (1) \right]\right) < \infty .

Since $\varphi_{a}$ is a non-negative random variable with finite expectation, $\varphi_{a} < \infty$ a.s. Let $q(s) = 1 - p(s)$. We have:

E[q(s+1)]E[q(s)]αΔ2(q(s))2(mmΔm). \mathbb {E} \left[ q (s + 1) \right] - \mathbb {E} \left[ q (s) \right] \leqslant - \alpha \frac {\Delta^ {2} \left(q (s)\right) ^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)}.

By the useful result stated at the beginning of the proof of Lemma H.9, we get,

\mathbb {E} [ q (s) ] \leqslant \frac {\mathbb {E} [ q (1) ]}{1 + \alpha \frac {\Delta^ {2} \mathbb {E} [ q (1) ]}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \, s} \leqslant \frac {1}{1 + \alpha \frac {\Delta^ {2}}{\left(\sum_ {m ^ {\prime} \neq m ^ {*}} \Delta_ {m ^ {\prime}}\right)} \, s}.

This completes the proof of the lemma.
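The elementary $1/s$ decay invoked in the last step can be checked by iterating the worst case of the recursion, $a_{n+1} = a_n - b\,a_n^2$, against the bound in the form $a_{n+1} \leqslant a_1/(1 + b\,a_1 n)$ (constants below are arbitrary):

```python
# Worst case of the recursion: a_{n+1} = a_n - b * a_n^2 with equality.
b = 0.5
a1 = 0.8             # need b * a1 < 1 so the sequence stays positive
a = a1
ok = True
for n in range(1, 10001):
    a = a - b * a * a
    if a > a1 / (1.0 + b * a1 * n) + 1e-15:    # a_{n+1} <= a_1 / (1 + b a_1 n)
        ok = False
        break

print("1/n decay held:", ok)
```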

Finally, we provide a lemma that ties together the results above. We refer the reader to (Appendix A.5, (Denisov & Walton, 2020)) for the proof of this lemma.

Lemma H.11.

t1P[πt(m)<1/2]<. \sum_ {t \geqslant 1} \mathbb {P} \left[ \pi_ {t} \left(m ^ {*}\right) < 1 / 2 \right] < \infty .

Also, with probability 1, $\pi_t(m^*) \to 1$ , as $t \to \infty$ .

Proof of regret bound: Since $\mathfrak{r}^* - \mathfrak{r}(m) \leqslant 1$ , we have by the definition of regret (see eq 11)

R(T)=E[11γt=1T(m=1Mπ(m)rmπt(m)rm)]. \mathcal {R} (T) = \mathbb {E} \left[ \frac {1}{1 - \gamma} \sum_ {t = 1} ^ {T} \left(\sum_ {m = 1} ^ {M} \pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right].

Here, recalling that $\pi^{*} = e_{m^{*}}$, we have:

R(T)=11γE[t=1T(m=1M(π(m)rmπt(m)rm))]=11γE[m=1M(t=1T(π(m)rmπt(m)rm))]=11γE[t=1T(rm=1Mπt(m)rm)]=11γE[(t=1Trt=1Tm=1Mπt(m)rm)]=11γE[(t=1Tr(1πt(m))t=1Tmmπt(m)rm)]=11γE[(t=1Tmmrπt(m)t=1Tmmπt(m)rm)]=11γmm(rrm)E[t=1Tπt(m)]. \begin{array}{l} \mathcal {R} (T) = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\sum_ {m = 1} ^ {M} (\pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m})\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {m = 1} ^ {M} \left(\sum_ {t = 1} ^ {T} (\pi^ {*} (m) \mathfrak {r} _ {m} - \pi_ {t} (m) \mathfrak {r} _ {m})\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(\mathfrak {r} ^ {*} - \sum_ {m = 1} ^ {M} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \mathfrak {r} ^ {*} - \sum_ {t = 1} ^ {T} \sum_ {m = 1} ^ {M} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \mathfrak {r} ^ {*} (1 - \pi_ {t} (m ^ {*})) - \sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \left(\sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \mathfrak {r} ^ {*} \pi_ {t} (m) - \sum_ {t = 1} ^ {T} \sum_ {m \neq m ^ {*}} \pi_ {t} (m) \mathfrak {r} _ {m}\right) \right] \\ = \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \left(\mathfrak {r} ^ {*} - \mathfrak {r} _ {m}\right) \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right]. \\ \end{array}

Hence we have,

R(T)=11γmm(rrm)E[t=1Tπt(m)]11γmmE[t=1Tπt(m)]=11γE[t=1T(1πt(m))] \begin{array}{l} \mathcal {R} (T) = \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} (\mathfrak {r} ^ {*} - \mathfrak {r} _ {m}) \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right] \\ \leqslant \frac {1}{1 - \gamma} \sum_ {m \neq m ^ {*}} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \pi_ {t} (m) \right] \\ = \frac {1}{1 - \gamma} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \right] \\ \end{array}
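Since the decomposition above is purely algebraic, it can be sanity-checked numerically. Below is a minimal sketch (not from the paper's code): `r` holds synthetic per-controller values $\mathfrak{r}_m \in [0,1]$ and `pi` holds arbitrary mixture weights $\pi_t$.

```python
import numpy as np

rng = np.random.default_rng(0)
M, T, gamma = 5, 50, 0.9

# Synthetic per-controller values r_m in [0, 1] and mixture weights pi_t.
r = rng.random(M)
m_star = int(r.argmax())
r_star = r[m_star]
pi = rng.random((T, M))
pi /= pi.sum(axis=1, keepdims=True)

# LHS: (1/(1-gamma)) * sum_t (r* - <pi_t, r>).
lhs = (r_star - pi @ r).sum() / (1.0 - gamma)

# RHS of the decomposition: (1/(1-gamma)) * sum_{m != m*} (r* - r_m) sum_t pi_t(m).
gaps = r_star - r  # the gap at m* is zero, so summing over all m is safe
rhs = (gaps * pi.sum(axis=0)).sum() / (1.0 - gamma)

# Final bound: since all gaps are at most 1, regret <= (1/(1-gamma)) sum_t (1 - pi_t(m*)).
bound = (1.0 - pi[:, m_star]).sum() / (1.0 - gamma)

assert np.isclose(lhs, rhs)
assert lhs <= bound + 1e-9
```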

We analyze the following term:

E[t=1T(1πt(m))]=E[t=1T(1πt(m))I{πt(m)1/2}]+E[t=1T(1πt(m))I{πt(m)<1/2}]=E[t=1T(1πt(m))I{πt(m)1/2}]+C1. \begin{array}{l} \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \right] = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) \geqslant 1 / 2 \right\} \right] + \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) < 1 / 2 \right\} \right] \\ = \mathbb {E} \left[ \sum_ {t = 1} ^ {T} \left(1 - \pi_ {t} \left(m ^ {*}\right)\right) \mathbb {I} \left\{\pi_ {t} \left(m ^ {*}\right) \geqslant 1 / 2 \right\} \right] + C _ {1}. \\ \end{array}

where, $C_1 \coloneqq \sum_{t=1}^{\infty} \mathbb{P}\left[\pi_t(m^*) < 1/2\right] < \infty$ by Lemma H.11. Next we observe that,

$$\begin{array}{l} \mathbb{E}\left[\sum_{t=1}^{T}\left(1-\pi_{t}(m^{*})\right)\mathbb{I}\left\{\pi_{t}(m^{*})\geqslant 1/2\right\}\right] = \mathbb{E}\left[\sum_{s=1}^{T} q(s)\,\mathbb{I}\left\{\pi_{s}(m^{*})\geqslant 1/2\right\}\right] \leqslant \mathbb{E}\left[\sum_{s=1}^{T} q(s)\right] \\ = \sum_{s=1}^{T}\frac{1}{1+\alpha\frac{\Delta^{2}}{\left(\sum_{m'\neq m^{*}}\Delta_{m'}\right)}\, s} \leqslant \sum_{s=1}^{T}\frac{\left(\sum_{m'\neq m^{*}}\Delta_{m'}\right)}{\alpha\Delta^{2}\, s} \\ \leqslant \frac{\left(\sum_{m'\neq m^{*}}\Delta_{m'}\right)}{\alpha\Delta^{2}}\log T. \end{array}$$

Putting things together, we get,

$$\begin{array}{l} \mathcal{R}(T) \leqslant \frac{1}{1-\gamma}\left(\frac{\left(\sum_{m'\neq m^{*}}\Delta_{m'}\right)}{\alpha\Delta^{2}}\log T + C_{1}\right) \\ = \frac{1}{1-\gamma}\left(\frac{\left(\sum_{m'\neq m^{*}}\Delta_{m'}\right)}{\alpha\Delta^{2}}\log T\right) + C, \end{array}$$

where $C \coloneqq C_1/(1-\gamma)$.

This completes the proof of Theorem H.6.

I. Proofs for MDPs

First we recall the policy gradient theorem.

Theorem I.1 (Policy Gradient Theorem (Sutton et al., 2000)).

θVπθ(μ)=11γsSdμπθ(s)aAπθ(as)θQπθ(s,a). \frac {\partial}{\partial \theta} V ^ {\pi_ {\theta}} (\mu) = \frac {1}{1 - \gamma} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \sum_ {a \in \mathcal {A}} \frac {\partial \pi_ {\theta} (a | s)}{\partial \theta} Q ^ {\pi_ {\theta}} (s, a).

Let $s \in \mathcal{S}$ and $m \in [M]$ . Let $\tilde{Q}^{\pi_{\theta}}(s, m) \coloneqq \sum_{a \in \mathcal{A}} K_m(s, a) Q^{\pi_{\theta}}(s, a)$ . Also let $\tilde{A}(s, m) \coloneqq \tilde{Q}(s, m) - V(s)$ .

Lemma I.2 (Gradient Simplification). The softmax policy gradient with respect to the parameter $\theta \in \mathbb{R}^M$ is $\frac{\partial}{\partial\theta_m} V^{\pi_\theta}(\mu) = \frac{1}{1 - \gamma}\sum_{s\in S}d_{\mu}^{\pi_\theta}(s)\pi_\theta (m)\tilde{A} (s,m)$ , where $\tilde{A} (s,m)\coloneqq \tilde{Q} (s,m) - V(s)$ and $\tilde{Q} (s,m)\coloneqq \sum_{a\in \mathcal{A}}K_m(s,a)Q^{\pi_\theta}(s,a)$ , and $d_{\mu}^{\pi_\theta}(.)$ is the discounted state visitation measure starting with an initial distribution $\mu$ and following policy $\pi_{\theta}$ .

The quantity $\tilde{A}(s, m)$ can be interpreted as the advantage of following controller $m$ at state $s$ and then following the policy $\pi_{\theta}$ thereafter, over following $\pi_{\theta}$ throughout. As mentioned in Section 4, we proceed by proving smoothness of the $V^{\pi}$ function over the space $\mathbb{R}^{M}$ .

Proof. From the Policy Gradient Theorem (Theorem I.1), we have:

$$\begin{array}{l} \frac{\partial}{\partial\theta_{m'}} V^{\pi_\theta}(\mu) = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a\in\mathcal{A}} \frac{\partial \pi_\theta(a|s)}{\partial\theta_{m'}}\, Q^{\pi_\theta}(s,a) \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a\in\mathcal{A}} \frac{\partial}{\partial\theta_{m'}}\left(\sum_{m=1}^{M}\pi_\theta(m) K_m(s,a)\right) Q^{\pi_\theta}(s,a) \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{m=1}^{M}\sum_{a\in\mathcal{A}} \left(\frac{\partial}{\partial\theta_{m'}}\pi_\theta(m)\right) K_m(s,a)\, Q(s,a) \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s) \sum_{a\in\mathcal{A}} \pi_{m'}\left(K_{m'}(s,a) - \sum_{m=1}^{M}\pi_m K_m(s,a)\right) Q(s,a) \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m'} \sum_{a\in\mathcal{A}} \left(K_{m'}(s,a) - \sum_{m=1}^{M}\pi_m K_m(s,a)\right) Q(s,a) \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m'} \left[\sum_{a\in\mathcal{A}} K_{m'}(s,a)\, Q(s,a) - \sum_{a\in\mathcal{A}}\sum_{m=1}^{M}\pi_m K_m(s,a)\, Q(s,a)\right] \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m'} \left[\tilde{Q}(s,m') - V(s)\right] \\ = \frac{1}{1-\gamma} \sum_{s\in\mathcal{S}} d_\mu^{\pi_\theta}(s)\, \pi_{m'}\, \tilde{A}^{\pi_\theta}(s,m'). \end{array}$$
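Lemma I.2 can also be verified numerically against finite differences on a small synthetic MDP. The sketch below is illustrative only; the random instance and all names (`K`, `grad_lemma`, etc.) are ours, not from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, M, gamma = 4, 3, 2, 0.9

# Synthetic MDP and base controllers K_m(s, .): all rows are distributions.
P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)
r = rng.random((S, A))
K = rng.random((M, S, A)); K /= K.sum(axis=-1, keepdims=True)
mu = np.full(S, 1.0 / S)

def softmax(theta):
    w = np.exp(theta - theta.max())
    return w / w.sum()

def value(theta):
    """Exact V^{pi_theta} for the mixture policy pi_theta(a|s) = sum_m pi(m) K_m(s,a)."""
    pi = softmax(theta)
    pol = np.einsum('m,msa->sa', pi, K)
    P_pi = np.einsum('sa,sap->sp', pol, P)
    r_pi = (pol * r).sum(axis=-1)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return pi, P_pi, V

def grad_lemma(theta):
    """Gradient from Lemma I.2: (1/(1-gamma)) sum_s d(s) pi(m) A~(s,m)."""
    pi, P_pi, V = value(theta)
    Q = r + gamma * np.einsum('sap,p->sa', P, V)
    d = (1.0 - gamma) * mu @ np.linalg.inv(np.eye(S) - gamma * P_pi)
    Q_tilde = np.einsum('msa,sa->ms', K, Q)   # Q~(s, m)
    A_tilde = Q_tilde - V[None, :]            # A~(s, m)
    return np.einsum('s,m,ms->m', d, pi, A_tilde) / (1.0 - gamma)

theta = rng.normal(size=M)
g = grad_lemma(theta)

# Central finite differences of V^{pi_theta}(mu).
eps, fd = 1e-6, np.zeros(M)
for m in range(M):
    tp, tm = theta.copy(), theta.copy()
    tp[m] += eps; tm[m] -= eps
    fd[m] = (mu @ value(tp)[2] - mu @ value(tm)[2]) / (2 * eps)

assert np.allclose(g, fd, atol=1e-5)
```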

Lemma I.3. $V^{\pi_{\theta}}(\mu)$ is $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ -smooth.

Proof. The proof uses ideas from (Agarwal et al., 2020a) and (Mei et al., 2020). Let $\theta_{\alpha} = \theta + \alpha u$ , where $u \in \mathbb{R}^{M}$ , $\alpha \in \mathbb{R}$ . For any $s \in S$ ,

aπθα(as)αα=0=aπθα(as)θαα=0,θαα=aπθα(as)θαα=0,u \sum_ {a} \left| \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \alpha} \Big | _ {\alpha = 0} \right| = \sum_ {a} \left| \left\langle \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \theta_ {\alpha}} \Big | _ {\alpha = 0}, \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right\rangle \right| = \sum_ {a} \left| \left\langle \frac {\partial \pi_ {\theta_ {\alpha}} (a | s)}{\partial \theta_ {\alpha}} \Big | _ {\alpha = 0}, u \right\rangle \right|

=am=1Mm=1Mπθm(Immπθm)Km(s,a)u(m)=am=1Mπθm(Km(s,a)u(m)m=1MKm(s,a)u(m))am=1MπθmKm(s,a)u(m)+am=1Mm=1MπθmπθmKm(s,a)u(m)=m=1Mπθmu(m)aKm(s,a)=1+m=1Mm=1Mπθmπθmu(m)aKm(s,a)=1=m=1Mπθmu(m)+m=1Mm=1Mπθmπθmu(m)=2m=1Mπθmu(m)2u2. \begin{array}{l} = \sum_ {a} \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} \left(\mathbb {I} _ {m m ^ {\prime \prime}} - \pi_ {\theta_ {m}}\right) K _ {m} (s, a) u \left(m ^ {\prime \prime}\right) \right| \\ = \sum_ {a} \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} \left(K _ {m ^ {\prime \prime}} (s, a) u (m ^ {\prime \prime}) - \sum_ {m = 1} ^ {M} K _ {m} (s, a) u (m ^ {\prime \prime})\right) \right| \\ \leqslant \sum_ {a} \sum_ {m ^ {\prime \prime} = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} K _ {m ^ {\prime \prime}} (s, a) | u (m ^ {\prime \prime}) | + \sum_ {a} \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} \pi_ {\theta_ {m}} K _ {m} (s, a) | u (m ^ {\prime \prime}) | \\ = \sum_ {m ^ {\prime \prime} = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} | u (m ^ {\prime \prime}) | \underbrace {\sum_ {a} K _ {m ^ {\prime \prime}} (s , a)} _ {= 1} + \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} \pi_ {\theta_ {m}} | u (m ^ {\prime \prime}) | \underbrace {\sum_ {a} K _ {m} (s , a)} _ {= 1} \\ = \sum_ {m ^ {\prime \prime} = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} | u (m ^ {\prime \prime}) | + \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} \pi_ {\theta_ {m}} | u (m ^ {\prime \prime}) | \\ = 2 \sum_ {m ^ {\prime \prime} = 1} ^ {M} \pi_ {\theta_ {m ^ {\prime \prime}}} | u (m ^ {\prime \prime}) | \leqslant 2 \| u \| _ {2}. \\ \end{array}
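The bound $\sum_a |\langle \partial\pi_{\theta}(a|s)/\partial\theta,\, u\rangle| \leqslant 2\|u\|_2$ derived above can be checked on random instances. A small sketch (synthetic `K` and `u`, for a single fixed state; names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
M, A = 4, 3

# Synthetic base-controller rows K_m(s, .) for one fixed state s.
K = rng.random((M, A)); K /= K.sum(axis=1, keepdims=True)
theta = rng.normal(size=M)
u = rng.normal(size=M)

w = np.exp(theta - theta.max()); pi = w / w.sum()
pol = pi @ K  # mixture policy pi_theta(.|s)

# Jacobian of the mixture policy: d pi_theta(a|s) / d theta_m = pi_m (K_m(a) - pol(a)).
J = (K - pol[None, :]) * pi[:, None]   # shape (M, A)

total = np.abs(J.T @ u).sum()          # sum_a |<d pi_theta(a|s)/d theta, u>|
assert total <= 2.0 * np.linalg.norm(u) + 1e-12
```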

Next we bound the second derivative.

$$\sum_{a}\left|\frac{\partial^{2}\pi_{\theta_\alpha}(a\mid s)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right| = \sum_{a}\left|\left\langle \frac{\partial}{\partial\theta_\alpha}\frac{\partial\pi_{\theta_\alpha}(a\mid s)}{\partial\alpha}\Big|_{\alpha=0},\, u\right\rangle\right| = \sum_{a}\left|\left\langle \frac{\partial^{2}\pi_{\theta_\alpha}(a\mid s)}{\partial\theta_\alpha^{2}}\Big|_{\alpha=0}\, u,\, u\right\rangle\right|.$$

Let $H^{a,\theta} \coloneqq \frac{\partial^2 \pi_{\theta}(a|s)}{\partial\theta^2} \in \mathbb{R}^{M\times M}$ . We have,

Hi,ja,θ=θj(m=1Mπθi(Imiπθm)Km(s,a))=θj(πθiKi(s,a)m=1MπθiπθmKm(s,a))=πθj(Iijπθi)Ki(s,a)m=1MKm(s,a)πθiπθmθj=πj(Iijπi)Ki(s,a)m=1MKm(s,a)(πj(Iijπi)πm+πiπj(Imjπm))=πj((Iijπi)Ki(s,a)m=1Mπm(Iijπi)Km(s,a)m=1Mπi(Imjπm)Km(s,a)). \begin{array}{l} H _ {i, j} ^ {a, \theta} = \frac {\partial}{\partial \theta_ {j}} \left(\sum_ {m = 1} ^ {M} \pi_ {\theta_ {i}} \left(\mathbb {I} _ {m i} - \pi_ {\theta_ {m}}\right) K _ {m} (s, a)\right) \\ = \frac {\partial}{\partial \theta_ {j}} \left(\pi_ {\theta_ {i}} K _ {i} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {\theta_ {i}} \pi_ {\theta_ {m}} K _ {m} (s, a)\right) \\ = \pi_ {\theta_ {j}} \left(\mathbb {I} _ {i j} - \pi_ {\theta_ {i}}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} K _ {m} (s, a) \frac {\partial \pi_ {\theta_ {i}} \pi_ {\theta_ {m}}}{\partial \theta_ {j}} \\ = \pi_ {j} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} K _ {m} (s, a) \left(\pi_ {j} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) \pi_ {m} + \pi_ {i} \pi_ {j} \left(\mathbb {I} _ {m j} - \pi_ {m}\right)\right) \\ = \pi_ {j} \left(\left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {i} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {m} \left(\mathbb {I} _ {i j} - \pi_ {i}\right) K _ {m} (s, a) - \sum_ {m = 1} ^ {M} \pi_ {i} \left(\mathbb {I} _ {m j} - \pi_ {m}\right) K _ {m} (s, a)\right). \\ \end{array}

Plugging this into the second derivative, we get,

$$\begin{array}{l} \left| \left\langle \frac{\partial^{2}}{\partial \theta^{2}} \pi_{\theta}(a|s)\, u,\, u \right\rangle \right| = \left| \sum_{j=1}^{M} \sum_{i=1}^{M} H_{i,j}^{a,\theta} u_{i} u_{j} \right| \\ = \left| \sum_{j=1}^{M} \sum_{i=1}^{M} \pi_{j} \left( \left(\mathbb{I}_{ij} - \pi_{i}\right) K_{i}(s,a) - \sum_{m=1}^{M} \pi_{m} \left(\mathbb{I}_{ij} - \pi_{i}\right) K_{m}(s,a) - \sum_{m=1}^{M} \pi_{i} \left(\mathbb{I}_{mj} - \pi_{m}\right) K_{m}(s,a) \right) u_{i} u_{j} \right| \\ = \Big| \sum_{i=1}^{M} \pi_{i} K_{i}(s,a) u_{i}^{2} - \sum_{i=1}^{M} \sum_{j=1}^{M} \pi_{i} \pi_{j} K_{i}(s,a) u_{i} u_{j} - \sum_{i=1}^{M} \sum_{m=1}^{M} \pi_{i} \pi_{m} K_{m}(s,a) u_{i}^{2} \\ \quad + \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{m=1}^{M} \pi_{i} \pi_{j} \pi_{m} K_{m}(s,a) u_{i} u_{j} - \sum_{i=1}^{M} \sum_{j=1}^{M} \pi_{i} \pi_{j} K_{j}(s,a) u_{i} u_{j} + \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{m=1}^{M} \pi_{i} \pi_{j} \pi_{m} K_{m}(s,a) u_{i} u_{j} \Big| \\ = \Big| \sum_{i=1}^{M} \pi_{i} K_{i}(s,a) u_{i}^{2} - 2 \sum_{i=1}^{M} \sum_{j=1}^{M} \pi_{i} \pi_{j} K_{i}(s,a) u_{i} u_{j} - \sum_{i=1}^{M} \sum_{m=1}^{M} \pi_{i} \pi_{m} K_{m}(s,a) u_{i}^{2} + 2 \sum_{i=1}^{M} \sum_{j=1}^{M} \sum_{m=1}^{M} \pi_{i} \pi_{j} \pi_{m} K_{m}(s,a) u_{i} u_{j} \Big| \\ = \left| \sum_{i=1}^{M} \pi_{i} u_{i}^{2} \left( K_{i}(s,a) - \sum_{m=1}^{M} \pi_{m} K_{m}(s,a) \right) - 2 \sum_{i=1}^{M} \pi_{i} u_{i} \sum_{j=1}^{M} \pi_{j} u_{j} \left( K_{i}(s,a) - \sum_{m=1}^{M} \pi_{m} K_{m}(s,a) \right) \right| \\ \leqslant \sum_{i=1}^{M} \pi_{i} u_{i}^{2} \underbrace{\left| K_{i}(s,a) - \sum_{m=1}^{M} \pi_{m} K_{m}(s,a) \right|}_{\leqslant 1} + 2 \sum_{i=1}^{M} \pi_{i} |u_{i}| \sum_{j=1}^{M} \pi_{j} |u_{j}| \underbrace{\left| K_{i}(s,a) - \sum_{m=1}^{M} \pi_{m} K_{m}(s,a) \right|}_{\leqslant 1} \\ \leqslant \|u\|_{2}^{2} + 2 \sum_{i=1}^{M} \pi_{i} |u_{i}| \sum_{j=1}^{M} \pi_{j} |u_{j}| \leqslant 3 \|u\|_{2}^{2}. \end{array}$$

The rest of the proof is similar to (Mei et al., 2020); we include it for completeness. Define $P(\alpha)\in \mathbb{R}^{S\times S}$ , where for all $(s,s^{\prime})$

[P(α)](s,s)=aAπθα(as).P(ss,a). \left[ P (\alpha) \right] _ {(s, s ^ {\prime})} = \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (a \mid s). \mathrm {P} (s ^ {\prime} | s, a).

The derivative w.r.t. $\alpha$ is,

[αP(α)α=0](s,s)=aA[απθα(as)α=0].P(ss,a). \left[ \frac {\partial}{\partial \alpha} P (\alpha) \Big | _ {\alpha = 0} \right] _ {(s, s ^ {\prime})} = \sum_ {a \in \mathcal {A}} \left[ \frac {\partial}{\partial \alpha} \pi_ {\theta_ {\alpha}} (a \mid s) \Big | _ {\alpha = 0} \right]. \mathrm {P} (s ^ {\prime} | s, a).

For any vector $x\in \mathbb{R}^S$

[αP(α)α=0x](s)=sSaA[απθα(as)α=0].P(ss,a).x(s). \left[ \frac {\partial}{\partial \alpha} P (\alpha) \Big | _ {\alpha = 0} x \right] _ {(s)} = \sum_ {s ^ {\prime} \in \mathcal {S}} \sum_ {a \in \mathcal {A}} \left[ \frac {\partial}{\partial \alpha} \pi_ {\theta_ {\alpha}} (a \mid s) \Big | _ {\alpha = 0} \right]. \mathrm {P} (s ^ {\prime} | s, a). x (s ^ {\prime}).

The $l_{\infty}$ norm can be upper-bounded as,

$$\begin{array}{l} \left\| \frac{\partial}{\partial\alpha} P(\alpha)\Big|_{\alpha=0}\, x \right\|_{\infty} = \max_{s\in\mathcal{S}} \left| \sum_{s'\in\mathcal{S}} \sum_{a\in\mathcal{A}} \left[ \frac{\partial}{\partial\alpha}\pi_{\theta_\alpha}(a\mid s)\Big|_{\alpha=0} \right] \mathrm{P}(s'\mid s,a)\, x(s') \right| \\ \leqslant \max_{s\in\mathcal{S}} \sum_{s'\in\mathcal{S}} \sum_{a\in\mathcal{A}} \left| \frac{\partial}{\partial\alpha}\pi_{\theta_\alpha}(a\mid s)\Big|_{\alpha=0} \right| \mathrm{P}(s'\mid s,a)\, \|x\|_{\infty} \\ \leqslant 2\|u\|_{2}\|x\|_{\infty}. \end{array}$$

Now we find the second derivative,

$$\left[\frac{\partial^{2} P(\alpha)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right]_{(s,s')} = \sum_{a\in\mathcal{A}} \left[\frac{\partial^{2}\pi_{\theta_\alpha}(a\mid s)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right] \mathrm{P}(s'\mid s,a)$$

taking the $l_{\infty}$ norm,

$$\begin{array}{l} \left\|\left[\frac{\partial^{2} P(\alpha)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right] x\right\|_{\infty} = \max_{s}\left|\sum_{s'\in\mathcal{S}}\sum_{a\in\mathcal{A}} \left[\frac{\partial^{2}\pi_{\theta_\alpha}(a\mid s)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right]\mathrm{P}(s'\mid s,a)\, x(s')\right| \\ \leqslant \max_{s}\sum_{s'\in\mathcal{S}}\sum_{a\in\mathcal{A}} \left|\frac{\partial^{2}\pi_{\theta_\alpha}(a\mid s)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right|\mathrm{P}(s'\mid s,a)\,\|x\|_{\infty} \leqslant 3\|u\|_{2}\|x\|_{\infty}. \end{array}$$

Next we observe that the value function of $\pi_{\theta_{\alpha}}$ satisfies the Bellman equation:

Vπθα(s)=aAπθα(as)r(s,a)rθα+γaAπθα(as)sSP(ss,a)Vπθα(s). V ^ {\pi_ {\theta_ {\alpha}}} \left(s\right) = \underbrace {\sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (a | s) r (s , a)} _ {r _ {\theta_ {\alpha}}} + \gamma \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (a | s) \sum_ {s ^ {\prime} \in \mathcal {S}} \mathrm {P} \left(s ^ {\prime} | s, a\right) V ^ {\pi_ {\theta_ {\alpha}}} \left(s ^ {\prime}\right).

In matrix form,

Vπθα=rθα+γP(α)Vπθα(IdγP(α))Vπθα=rθαVπθα=(IdγP(α))1rθα. \begin{array}{l} V ^ {\pi_ {\theta_ {\alpha}}} = r _ {\theta_ {\alpha}} + \gamma P (\alpha) V ^ {\pi_ {\theta_ {\alpha}}} \\ \Rightarrow (I d - \gamma P (\alpha)) V ^ {\pi_ {\theta_ {\alpha}}} = r _ {\theta_ {\alpha}} \\ V ^ {\pi_ {\theta_ {\alpha}}} = (I d - \gamma P (\alpha)) ^ {- 1} r _ {\theta_ {\alpha}}. \\ \end{array}

Let $M(\alpha) \coloneqq (Id - \gamma P(\alpha))^{-1} = \sum_{t=0}^{\infty} \gamma^{t}[P(\alpha)]^{t}$ . Also, observe that

1=11γ(IdγP(α))1M(α)1=11γ1.i[M(α)]i,:1=11γ \begin{array}{l} \mathbf {1} = \frac {1}{1 - \gamma} (I d - \gamma P (\alpha)) \mathbf {1} \Longrightarrow M (\alpha) \mathbf {1} = \frac {1}{1 - \gamma} \mathbf {1}. \\ \Rightarrow \forall i \| [ M (\alpha) ] _ {i,:} \| _ {1} = \frac {1}{1 - \gamma} \\ \end{array}

where $[M(\alpha)]_{i,:}$ is the $i^{th}$ row of $M(\alpha)$ . Hence for any vector $x\in \mathbb{R}^S$ , $\| M(\alpha)x\|_{\infty}\leqslant \frac{1}{1 - \gamma}\| x\|_{\infty}$ .
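These properties of $M(\alpha)$ (the Neumann series, the constant row sums, and the resulting $\ell_\infty$ bound) are easy to confirm numerically. A sketch with a synthetic row-stochastic `P` (names ours):

```python
import numpy as np

rng = np.random.default_rng(0)
S, gamma = 5, 0.8

# Synthetic row-stochastic transition matrix playing the role of P(alpha).
P = rng.random((S, S)); P /= P.sum(axis=1, keepdims=True)

M_alpha = np.linalg.inv(np.eye(S) - gamma * P)

# Neumann series sum_t gamma^t P^t (truncated; gamma^200 is negligible).
series = sum(gamma**t * np.linalg.matrix_power(P, t) for t in range(200))
assert np.allclose(M_alpha, series, atol=1e-8)

# Every row of M(alpha) has l1 norm 1/(1-gamma) (entries are nonnegative).
assert np.allclose(M_alpha.sum(axis=1), 1.0 / (1.0 - gamma))

# Hence ||M(alpha) x||_inf <= ||x||_inf / (1-gamma).
x = rng.normal(size=S)
assert np.abs(M_alpha @ x).max() <= np.abs(x).max() / (1.0 - gamma) + 1e-9
```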

By Assumption I.6, we have $\| r_{\theta_{\alpha}} \|_{\infty} = \max_s |r_{\theta_{\alpha}}(s)| \leqslant 1$ . Next we find the derivative of $r_{\theta_{\alpha}}$ w.r.t $\alpha$ .

rθα(s)α=(rθα(s)θα)Tθααm=1Mm=1MaAπθα(m)(Immπθα(m))Km(s,a)r(s,a)u(m)=m=1MaAπθα(m)Km(s,a)r(s,a)u(m)m=1Mm=1MaAπθα(m)πθα(m)Km(s,a)r(s,a)u(m)m=1MaAπθα(m)Km(s,a)r(s,a)m=1Mm=1MaAπθα(m)πθα(m)Km(s,a)r(s,a)uu2. \begin{array}{l} \left| \frac {\partial r _ {\theta_ {\alpha}} (s)}{\partial \alpha} \right| = \left| \left(\frac {\partial r _ {\theta_ {\alpha}} (s)}{\partial \theta_ {\alpha}}\right) ^ {T} \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right| \\ \leqslant \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) (\mathbb {I} _ {m m ^ {\prime \prime}} - \pi_ {\theta_ {\alpha}} (m)) K _ {m} (s, a) r (s, a) u (m ^ {\prime \prime}) \right| \\ = \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) K _ {m ^ {\prime \prime}} (s, a) r (s, a) u (m ^ {\prime \prime}) - \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) \pi_ {\theta_ {\alpha}} (m) K _ {m} (s, a) r (s, a) u (m ^ {\prime \prime}) \right. \\ \leqslant \left| \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) K _ {m ^ {\prime \prime}} (s, a) r (s, a) - \sum_ {m ^ {\prime \prime} = 1} ^ {M} \sum_ {m = 1} ^ {M} \sum_ {a \in \mathcal {A}} \pi_ {\theta_ {\alpha}} (m ^ {\prime \prime}) \pi_ {\theta_ {\alpha}} (m) K _ {m} (s, a) r (s, a) \right| \| u \| _ {\infty} \leqslant \| u \| _ {2}. \\ \end{array}

Similarly, we can calculate the upper-bound on second derivative,

$$\begin{array}{l} \left\| \frac{\partial^{2} r_{\theta_\alpha}}{\partial\alpha^{2}} \right\|_{\infty} = \max_{s} \left| \frac{\partial^{2} r_{\theta_\alpha}(s)}{\partial\alpha^{2}} \right| \\ = \max_{s} \left| \left( \frac{\partial}{\partial\theta_\alpha} \left\{ \frac{\partial r_{\theta_\alpha}(s)}{\partial\alpha} \right\} \right)^{\mathrm{T}} \frac{\partial\theta_\alpha}{\partial\alpha} \right| \\ = \max_{s} \left| \left( \frac{\partial^{2} r_{\theta_\alpha}(s)}{\partial\theta_\alpha^{2}}\, \frac{\partial\theta_\alpha}{\partial\alpha} \right)^{\mathrm{T}} \frac{\partial\theta_\alpha}{\partial\alpha} \right| \leqslant 5/2\, \|u\|_{2}^{2}. \end{array}$$

Next, the derivative of the value function w.r.t $\alpha$ is given by,

Vπθα(s)α=γesTM(α)P(α)αM(α)rθα+esTM(α)rθαα. \frac {\partial V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha} = \gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) r _ {\theta_ {\alpha}} + e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial r _ {\theta_ {\alpha}}}{\partial \alpha}.

And the second derivative,

2Vπθα(s)α2=2γ2esTM(α)P(α)αM(α)P(α)αM(α)rθαT1+γesTM(α)2P(α)α2M(α)rθαT2+2γesTM(α)P(α)αM(α)rθααT3+esTM(α)2rθαα2T4. \begin{array}{l} \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha^ {2}} = \underbrace {2 \gamma^ {2} e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) r _ {\theta_ {\alpha}}} _ {T 1} + \underbrace {\gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial^ {2} P (\alpha)}{\partial \alpha^ {2}} M (\alpha) r _ {\theta_ {\alpha}}} _ {T 2} \\ + \underbrace {2 \gamma e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial P (\alpha)}{\partial \alpha} M (\alpha) \frac {\partial r _ {\theta_ {\alpha}}}{\partial \alpha}} _ {T 3} + \underbrace {e _ {s} ^ {\mathrm {T}} M (\alpha) \frac {\partial^ {2} r _ {\theta_ {\alpha}}}{\partial \alpha^ {2}}} _ {T 4}. \\ \end{array}

We use the bounds derived above to control each of the terms in the display above. The calculations are the same as those for Lemma 7 in (Mei et al., 2020), except for the particular values of the constants. Hence we directly state the resulting bounds and refer to (Mei et al., 2020) for the detailed but elementary calculations.

$$|T1| \leqslant \frac{8\gamma^{2}}{(1-\gamma)^{3}} \|u\|_{2}^{2}$$

$$|T2| \leqslant \frac{3\gamma}{(1-\gamma)^{2}} \|u\|_{2}^{2}$$

$$|T3| \leqslant \frac{4\gamma}{(1-\gamma)^{2}} \|u\|_{2}^{2}$$

T45/2(1γ)u22. \left| T 4 \right| \leqslant \frac {5 / 2}{(1 - \gamma)} \| u \| _ {2} ^ {2}.

Combining the above bounds we get,

$$\left|\frac{\partial^{2} V^{\pi_{\theta_\alpha}}(s)}{\partial\alpha^{2}}\Big|_{\alpha=0}\right| \leqslant \left(\frac{8\gamma^{2}}{(1-\gamma)^{3}} + \frac{3\gamma}{(1-\gamma)^{2}} + \frac{4\gamma}{(1-\gamma)^{2}} + \frac{5/2}{1-\gamma}\right)\|u\|_{2}^{2} = \frac{7\gamma^{2}+4\gamma+5}{2(1-\gamma)^{3}}\|u\|_{2}^{2}.$$
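Combining the four term bounds into the final constant is pure rational arithmetic, so it can be checked exactly. A small sketch with arbitrary test values of $\gamma$:

```python
from fractions import Fraction

# Check, in exact rational arithmetic, that the four term bounds combine to
# (7 gamma^2 + 4 gamma + 5) / (2 (1 - gamma)^3) for several test values of gamma.
for g in (Fraction(1, 3), Fraction(1, 2), Fraction(9, 10)):
    lhs = (8 * g**2 / (1 - g)**3
           + 3 * g / (1 - g)**2
           + 4 * g / (1 - g)**2
           + Fraction(5, 2) / (1 - g))
    rhs = (7 * g**2 + 4 * g + 5) / (2 * (1 - g)**3)
    assert lhs == rhs
```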

Finally, let $y \in \mathbb{R}^M$ and fix a $\theta \in \mathbb{R}^M$ :

yT2Vπθ(s)θ2y=yy2T2Vπθ(s)θ2yy2.y22maxu2=12Vπθ(s)θ2u,u.y22=maxu2=12Vπθα(s)θα2α=0θαα,θαα.y22=maxu2=12Vπθα(s)α2α=0.y22 \begin{array}{l} \left| y ^ {\mathrm {T}} \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} y \right| = \left| \frac {y}{\| y \| _ {2}} ^ {\mathrm {T}} \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} \frac {y}{\| y \| _ {2}} \right|. \| y \| _ {2} ^ {2} \\ \leqslant \max _ {\| u \| _ {2} = 1} \left| \left\langle \frac {\partial^ {2} V ^ {\pi_ {\theta}} (s)}{\partial \theta^ {2}} u, u \right\rangle \right|. \| y \| _ {2} ^ {2} \\ = \max _ {\| u \| _ {2} = 1} \left| \left\langle \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \theta_ {\alpha} ^ {2}} \right| _ {\alpha = 0} \frac {\partial \theta_ {\alpha}}{\partial \alpha}, \frac {\partial \theta_ {\alpha}}{\partial \alpha} \right\rangle \Bigg |. \| y \| _ {2} ^ {2} \\ = \max _ {\| u \| _ {2} = 1} \left| \frac {\partial^ {2} V ^ {\pi_ {\theta_ {\alpha}}} (s)}{\partial \alpha^ {2}} \right| _ {\alpha = 0}. \| y \| _ {2} ^ {2} \\ \end{array}

7γ2+4γ+52(1γ)3y22. \leqslant \frac {7 \gamma^ {2} + 4 \gamma + 5}{2 (1 - \gamma) ^ {3}} \| y \| _ {2} ^ {2}.

Let $\theta_{\xi} \coloneqq \theta + \xi (\theta' - \theta)$ where $\xi \in [0,1]$ . By Taylor's theorem $\forall s, \theta, \theta'$ ,

$$\left| V^{\pi_{\theta'}}(s) - V^{\pi_\theta}(s) - \left\langle \frac{\partial V^{\pi_\theta}(s)}{\partial\theta},\, \theta'-\theta\right\rangle\right| = \frac{1}{2}\left|(\theta'-\theta)^{\mathrm{T}}\,\frac{\partial^{2} V^{\pi_{\theta_\xi}}(s)}{\partial\theta_\xi^{2}}\,(\theta'-\theta)\right| \leqslant \frac{7\gamma^{2}+4\gamma+5}{4(1-\gamma)^{3}}\|\theta'-\theta\|_{2}^{2}.$$

Since $V^{\pi_{\theta}}(s)$ is $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ smooth for every $s$ , $V^{\pi_{\theta}}(\mu)$ is also $\frac{7\gamma^2 + 4\gamma + 5}{2(1 - \gamma)^3}$ smooth.

Lemma I.4 (Value Difference Lemma-1). For any two policies $\pi$ and $\pi'$ , and for any state $s \in \mathcal{S}$ , the following holds:

Vπ(s)Vπ(s)=11γsSdsπ(s)m=1MπmA~(s,m). V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi^ {\prime}} (s ^ {\prime}) \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \tilde {A} (s ^ {\prime}, m).

Proof.

$$\begin{array}{l} V^{\pi'}(s) - V^{\pi}(s) = \sum_{m=1}^{M}\pi'_m \tilde{Q}'(s,m) - \sum_{m=1}^{M}\pi_m\tilde{Q}(s,m) \\ = \sum_{m=1}^{M}\pi'_m\left(\tilde{Q}'(s,m) - \tilde{Q}(s,m)\right) + \sum_{m=1}^{M}\left(\pi'_m - \pi_m\right)\tilde{Q}(s,m) \\ = \sum_{m=1}^{M}(\pi'_m - \pi_m)\tilde{Q}(s,m) + \gamma \underbrace{\sum_{m=1}^{M}\pi'_m\sum_{a\in\mathcal{A}}K_m(s,a)}_{=\sum_{a\in\mathcal{A}}\pi'(a|s)} \sum_{s'\in\mathcal{S}}\mathrm{P}(s'|s,a)\left[V^{\pi'}(s') - V^{\pi}(s')\right] \\ = \frac{1}{1-\gamma}\sum_{s'\in\mathcal{S}} d_s^{\pi'}(s')\sum_{m'=1}^{M}\left(\pi'_{m'} - \pi_{m'}\right)\tilde{Q}\left(s',m'\right) \\ = \frac{1}{1-\gamma}\sum_{s'\in\mathcal{S}} d_s^{\pi'}(s')\sum_{m'=1}^{M}\pi'_{m'}\left(\tilde{Q}(s',m') - V(s')\right) \\ = \frac{1}{1-\gamma}\sum_{s'\in\mathcal{S}} d_s^{\pi'}(s')\sum_{m'=1}^{M}\pi'_{m'}\,\tilde{A}(s',m'). \end{array}$$
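Lemma I.4 can be verified numerically on a random instance. The sketch below (synthetic MDP and mixture weights; variable names ours) compares both sides of the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, M, gamma = 4, 3, 3, 0.9

# Synthetic MDP, base controllers, and two mixture weight vectors pi, pi'.
P = rng.random((S, A, S)); P /= P.sum(axis=-1, keepdims=True)
r = rng.random((S, A))
K = rng.random((M, S, A)); K /= K.sum(axis=-1, keepdims=True)
pi = rng.random(M); pi /= pi.sum()
pip = rng.random(M); pip /= pip.sum()

def solve(weights):
    """State-transition matrix and exact value function of the mixture policy."""
    pol = np.einsum('m,msa->sa', weights, K)
    P_pol = np.einsum('sa,sap->sp', pol, P)
    V = np.linalg.solve(np.eye(S) - gamma * P_pol, (pol * r).sum(-1))
    return P_pol, V

P_pi, V = solve(pi)
P_pip, Vp = solve(pip)

# A~(s, m) with respect to the *unprimed* policy pi.
Q = r + gamma * np.einsum('sap,p->sa', P, V)
A_tilde = np.einsum('msa,sa->ms', K, Q) - V[None, :]

# d_s^{pi'}(s') as rows of (1-gamma)(I - gamma P_{pi'})^{-1}.
D = (1.0 - gamma) * np.linalg.inv(np.eye(S) - gamma * P_pip)

rhs = D @ (pip @ A_tilde) / (1.0 - gamma)
assert np.allclose(Vp - V, rhs)
```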

Lemma I.5 (Value Difference Lemma-2). For any two policies $\pi$ and $\pi'$ and any state $s \in \mathcal{S}$ , the following holds:

Vπ(s)Vπ(s)=11γsSdsπ(s)m=1M(πmπm)Q~π(s,m). V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \frac {1}{1 - \gamma} \sum_ {s ^ {\prime} \in \mathcal {S}} d _ {s} ^ {\pi} (s ^ {\prime}) \sum_ {m = 1} ^ {M} (\pi_ {m} ^ {\prime} - \pi_ {m}) \tilde {Q} ^ {\pi^ {\prime}} (s ^ {\prime}, m).

Proof. We will use $\tilde{Q}$ for $\tilde{Q}^{\pi}$ and $\tilde{Q}'$ for $\tilde{Q}^{\pi'}$ as a shorthand.

Vπ(s)Vπ(s)=m=1MπmQ~(s,m)m=1MπmQ~(s,m)=m=1M(πmπm)Q~(s,m)+m=1Mπm(Q~(s,m)Q~(s,m))=m=1M(πmπm)Q~(s,m)+ \begin{array}{l} V ^ {\pi^ {\prime}} (s) - V ^ {\pi} (s) = \sum_ {m = 1} ^ {M} \pi_ {m} ^ {\prime} \tilde {Q} ^ {\prime} (s, m) - \sum_ {m = 1} ^ {M} \pi_ {m} \tilde {Q} (s, m) \\ = \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {\prime} - \pi_ {m}\right) \tilde {Q} ^ {\prime} (s, m) + \sum_ {m = 1} ^ {M} \pi_ {m} \left(\tilde {Q} ^ {\prime} (s, m) - \tilde {Q} (s, m)\right) \\ = \sum_ {m = 1} ^ {M} \left(\pi_ {m} ^ {\prime} - \pi_ {m}\right) \tilde {Q} ^ {\prime} (s, m) + \\ \end{array}

$$\begin{array}{l} \gamma\sum_{m=1}^{M}\pi_m\left(\sum_{a\in\mathcal{A}}K_m(s,a)\sum_{s'\in\mathcal{S}}\mathrm{P}\left(s'\mid s,a\right)V'\left(s'\right) - \sum_{a\in\mathcal{A}}K_m(s,a)\sum_{s'\in\mathcal{S}}\mathrm{P}\left(s'\mid s,a\right)V\left(s'\right)\right) \\ = \sum_{m=1}^{M}\left(\pi'_m - \pi_m\right)\tilde{Q}'(s,m) + \gamma\sum_{a\in\mathcal{A}}\pi(a|s)\sum_{s'\in\mathcal{S}}\mathrm{P}\left(s'|s,a\right)\left[V'(s') - V\left(s'\right)\right] \\ = \frac{1}{1-\gamma}\sum_{s'\in\mathcal{S}}d_s^{\pi}(s')\sum_{m=1}^{M}(\pi'_m - \pi_m)\tilde{Q}'(s',m). \end{array}$$

Assumption I.6. The reward satisfies $r(s,a)\in [0,1]$ for all pairs $(s,a)\in \mathcal{S}\times \mathcal{A}$ .

Assumption I.7. Let $\pi^{*} \coloneqq \operatorname{argmax}_{\pi \in \mathcal{P}_{M}} V^{\pi}(s_{0})$ . We make the following assumption.

$$\mathbb{E}_{m \sim \pi^{*}}\left[\tilde{Q}^{\pi_{\theta}}(s, m)\right] - V^{\pi_{\theta}}(s) \geqslant 0, \quad \forall s \in \mathcal{S}, \ \forall \pi_{\theta} \in \Pi.$$

Let the best controller be a point in the $M$ -simplex, i.e., $K^{*} \coloneqq \sum_{m=1}^{M} \pi_{m}^{*} K_{m}$ .

Lemma I.8 (NUL1). $\left\| \frac{\partial}{\partial \theta} V^{\pi_{\theta}}(\mu) \right\|_2 \geqslant \frac{1}{\sqrt{M}} \left( \min_{m: \pi_{m}^{*} > 0} \pi_{\theta_m} \right) \times \left\| \frac{d_{\rho}^{\pi^{*}}}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty}^{-1} \times \left[V^{*}(\rho) - V^{\pi_{\theta}}(\rho)\right]$ .

Proof.

θVπθ(μ)2=(m=1M(Vπθ(μ)θm)2)1/21Mm=1MVπθ(μ)θm(C a u c h y - S c h w a r z)=1Mm=1M11γsSdμπθ(s)πmA~(s,m)L e m m a I . 21Mm=1Mπmπm1γsSdμπθ(s)A~(s,m)(minm:πθm>0πθm)1Mm=1Mπm1γsSdμπθ(s)A~(s,m)(minm:πθm>0πθm)1Mm=1Mπm1γsSdμπθ(s)A~(s,m) \begin{array}{l} \left\| \frac {\partial}{\partial \theta} V ^ {\pi_ {\theta}} (\mu) \right\| _ {2} = \left(\sum_ {m = 1} ^ {M} \left(\frac {\partial V ^ {\pi_ {\theta}} (\mu)}{\partial \theta_ {m}}\right) ^ {2}\right) ^ {1 / 2} \\ \geqslant \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \left| \frac {\partial V ^ {\pi_ {\theta}} (\mu)}{\partial \theta_ {m}} \right| \quad \text {(C a u c h y - S c h w a r z)} \\ = \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {1}{1 - \gamma} \left| \sum_ {s \in S} d _ {\mu} ^ {\pi_ {\theta}} (s) \pi_ {m} \tilde {A} (s, m) \right| \quad \text {L e m m a I . 2} \\ \geqslant \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*} \pi_ {m}}{1 - \gamma} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \geqslant \left(\min _ {m: \pi_ {\theta_ {m}} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \left| \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \geqslant \left(\min _ {m: \pi_ {\theta_ {m}} ^ {*} > 0} \pi_ {\theta_ {m}}\right) \frac {1}{\sqrt {M}} \left| \sum_ {m = 1} ^ {M} \frac {\pi_ {m} ^ {*}}{1 - \gamma} \sum_ {s \in \mathcal {S}} d _ {\mu} ^ {\pi_ {\theta}} (s) \tilde {A} (s, m) \right| \\ \end{array}

\begin{array}{l} = \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta}(m)\right) \left| \frac{1}{\sqrt{M}} \sum_{s \in \mathcal{S}} d_{\mu}^{\pi_{\theta}}(s) \sum_{m=1}^{M} \frac{\pi_{m}^{*}}{1-\gamma} \tilde{A}(s,m) \right| \\ = \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta}(m)\right) \frac{1}{\sqrt{M}} \sum_{s \in \mathcal{S}} d_{\mu}^{\pi_{\theta}}(s) \sum_{m=1}^{M} \frac{\pi_{m}^{*}}{1-\gamma} \tilde{A}(s,m) \quad \text{(Assumption I.7)} \\ \geqslant \frac{1}{\sqrt{M}} \frac{1}{1-\gamma} \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta}(m)\right) \left\| \frac{d_{\rho}^{\pi^{*}}}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty}^{-1} \sum_{s \in \mathcal{S}} d_{\rho}^{\pi^{*}}(s) \sum_{m=1}^{M} \pi_{m}^{*} \tilde{A}(s,m) \\ = \frac{1}{\sqrt{M}} \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta}(m)\right) \left\| \frac{d_{\rho}^{\pi^{*}}}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty}^{-1} [V^{*}(\rho) - V^{\pi_{\theta}}(\rho)] \quad \text{(Lemma I.4)}. \\ \end{array}

I.1. Proof of Theorem 4.2

Lemma 1.9 (Modified Policy Gradient Theorem). $\nabla_{\theta}V^{\pi_{\theta}}(\rho) = \mathbb{E}_{(s,m)\sim \nu_{\pi_\theta}}[\tilde{Q}^{\pi_\theta}(s,m)\psi_\theta (m)] = \mathbb{E}_{(s,m)\sim \nu_{\pi_\theta}}[\tilde{A}^{\pi_\theta}(s,m)\psi_\theta (m)]$, where $\psi_{\theta}(m)\coloneqq \nabla_{\theta}\log (\pi_{\theta}(m))$.

Let $\beta := \frac{7\gamma^2 + 4\gamma + 5}{(1 - \gamma)^2}$ . We have that,

\begin{array}{l} V^{*}(\rho) - V^{\pi_{\theta}}(\rho) = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} d_{\rho}^{\pi_{\theta}}(s) \sum_{m=1}^{M} \left(\pi_{m}^{*} - \pi_{m}\right) \tilde{Q}^{\pi^{*}}(s,m) \quad \text{(Lemma I.5)} \\ = \frac{1}{1-\gamma} \sum_{s \in \mathcal{S}} \frac{d_{\rho}^{\pi_{\theta}}(s)}{d_{\mu}^{\pi_{\theta}}(s)} d_{\mu}^{\pi_{\theta}}(s) \sum_{m=1}^{M} (\pi_{m}^{*} - \pi_{m}) \tilde{Q}^{\pi^{*}}(s,m) \\ \leqslant \frac{1}{1-\gamma} \left\| \frac{1}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty} \sum_{s \in \mathcal{S}} \sum_{m=1}^{M} \left(\pi_{m}^{*} - \pi_{m}\right) \tilde{Q}^{\pi^{*}}(s,m) \\ \leqslant \frac{1}{(1-\gamma)^{2}} \left\| \frac{1}{\mu} \right\|_{\infty} \sum_{s \in \mathcal{S}} \sum_{m=1}^{M} \left(\pi_{m}^{*} - \pi_{m}\right) \tilde{Q}^{\pi^{*}}(s,m) \\ = \frac{1}{(1-\gamma)} \left\| \frac{1}{\mu} \right\|_{\infty} \left[ V^{*}(\mu) - V^{\pi_{\theta}}(\mu) \right] \quad \text{(Lemma I.5)}. \\ \end{array}

Let $\delta_t \coloneqq V^*(\mu) - V^{\pi_{\theta_t}}(\mu)$ .

\begin{array}{l} \delta_{t+1} - \delta_{t} = V^{\pi_{\theta_{t}}}(\mu) - V^{\pi_{\theta_{t+1}}}(\mu) \quad \text{(Lemma I.3)} \\ \leqslant -\frac{1}{2\beta} \left\| \frac{\partial}{\partial \theta} V^{\pi_{\theta_{t}}}(\mu) \right\|_{2}^{2} \quad \text{(Lemma I.10)} \\ \leqslant -\frac{1}{2\beta} \frac{1}{M} \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta_{t}}(m)\right)^{2} \left\| \frac{d_{\rho}^{\pi^{*}}}{d_{\mu}^{\pi_{\theta}}} \right\|_{\infty}^{-2} \delta_{t}^{2} \quad \text{(Lemma 4.5)} \\ \leqslant -\frac{1}{2\beta} (1-\gamma)^{2} \frac{1}{M} \left(\min_{m: \pi_{m}^{*} > 0} \pi_{\theta_{t}}(m)\right)^{2} \left\| \frac{d_{\rho}^{\pi^{*}}}{\mu} \right\|_{\infty}^{-2} \delta_{t}^{2} \\ \leqslant -\frac{1}{2\beta} (1-\gamma)^{2} \frac{1}{M} \left(\min_{1 \leqslant s \leqslant t} \min_{m: \pi_{m}^{*} > 0} \pi_{\theta_{s}}(m)\right)^{2} \left\| \frac{d_{\rho}^{\pi^{*}}}{\mu} \right\|_{\infty}^{-2} \delta_{t}^{2} \\ = -\frac{1}{2\beta} \frac{1}{M} (1-\gamma)^{2} \left\| \frac{d_{\mu}^{\pi^{*}}}{\mu} \right\|_{\infty}^{-2} c_{t}^{2} \delta_{t}^{2}, \\ \end{array}

where $c_{t} \coloneqq \min_{1 \leqslant s \leqslant t} \min_{m: \pi_{m}^{*} > 0} \pi_{\theta_{s}}(m)$ . Hence we have that,

δt+1δt12β(1γ)2Mdμπμ2ct2δt2.(19) \delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{2 \beta} \frac {(1 - \gamma) ^ {2}}{M} \left\| \frac {d _ {\mu} ^ {\pi^ {*}}}{\mu} \right\| _ {\infty} ^ {- 2} c _ {t} ^ {2} \delta_ {t} ^ {2}. \tag {19}

The rest of the proof follows from an induction argument over $t \geqslant 1$.

Base case: Since $\delta_t \leqslant \frac{1}{1 - \gamma}$, and $c_t \in (0, 1)$, the result holds for all $t \leqslant \frac{2\beta M}{(1 - \gamma)} \left\| \frac{d_\mu^{\pi^*}}{\mu} \right\|_\infty^2$.

For ease of notation, let $\varphi_t \coloneqq \frac{2\beta M}{c_t^2(1 - \gamma)^2} \left\| \frac{d_\mu^{\pi^*}}{\mu} \right\|_\infty^2$. We need to show that $\delta_t \leqslant \frac{\varphi_t}{t}$ for all $t \geqslant 1$.

Induction step: Fix a $t \geqslant 2$ , assume $\delta_t \leqslant \frac{\varphi_t}{t}$ .

Let $g: \mathbb{R} \to \mathbb{R}$ be the function defined as $g(x) = x - \frac{1}{\varphi_t} x^2$. One can easily verify that $g$ is monotonically increasing on $\left[0, \frac{\varphi_t}{2}\right]$; note that $\delta_t \leqslant \frac{\varphi_t}{t} \leqslant \frac{\varphi_t}{2}$ for $t \geqslant 2$. Combining this with equation 19, we have

δt+1δt1φtδt2=g(δt)g(φtt)φttφtt2=φt(1t1t2)φt(1t+1)φt+1(1t+1). \begin{array}{l} \delta_ {t + 1} \leqslant \delta_ {t} - \frac {1}{\varphi_ {t}} \delta_ {t} ^ {2} \\ = g \left(\delta_ {t}\right) \\ \leqslant g \left(\frac {\varphi_ {t}}{t}\right) \\ \leqslant \frac {\varphi_ {t}}{t} - \frac {\varphi_ {t}}{t ^ {2}} \\ = \varphi_ {t} \left(\frac {1}{t} - \frac {1}{t ^ {2}}\right) \\ \leqslant \varphi_ {t} \left(\frac {1}{t + 1}\right) \\ \leqslant \varphi_ {t + 1} \left(\frac {1}{t + 1}\right). \\ \end{array}

where the last step follows from the fact that $c_{t + 1} \leqslant c_t$ (a minimum over a larger set cannot increase), so that $\varphi_{t+1} \geqslant \varphi_t$. This completes the proof.
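As a quick numerical sanity check (not part of the proof), one can iterate the worst case of recursion (19) with a constant parameter standing in for $\varphi_t$ and confirm the $O(1/t)$ decay of $\delta_t$ established by the induction; the constants below are illustrative.

```python
# Iterate the worst case of recursion (19), delta_{t+1} = delta_t - delta_t^2 / phi,
# with a constant phi standing in for varphi_t, and verify the induction
# conclusion delta_t <= phi / t at every step. Constants are illustrative.
phi = 50.0    # plays the role of varphi_t (assumed constant here)
delta = 1.0   # delta_1 <= phi, so the base case holds
for t in range(1, 10_000):
    assert delta <= phi / t, f"bound fails at t={t}"
    delta = delta - (delta ** 2) / phi  # equality: worst case of (19)
```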

Lemma I.10. Let $f: \mathbb{R}^M \to \mathbb{R}$ be $\beta$-smooth. Then a gradient ascent step with learning rate $\frac{1}{\beta}$, i.e., $x' = x + \frac{1}{\beta}\nabla f(x)$, guarantees for all $x \in \mathbb{R}^M$:

f(x)f(x)12βdf(x)dx22. f (x) - f (x ^ {\prime}) \leqslant - \frac {1}{2 \beta} \left\| \frac {d f (x)}{d x} \right\| _ {2} ^ {2}.

Proof.

\begin{array}{l} f(x) - f(x') \leqslant -\left\langle \frac{\partial f(x)}{\partial x}, x' - x \right\rangle + \frac{\beta}{2} \|x' - x\|_{2}^{2} \\ = -\frac{1}{\beta} \left\| \frac{df(x)}{dx} \right\|_{2}^{2} + \frac{\beta}{2} \cdot \frac{1}{\beta^{2}} \left\| \frac{df(x)}{dx} \right\|_{2}^{2} \\ = -\frac{1}{2\beta} \left\| \frac{df(x)}{dx} \right\|_{2}^{2}. \\ \end{array}
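The lemma can be probed numerically on a concave quadratic, for which the smoothness constant $\beta$ is the largest eigenvalue of the (negated) Hessian; the matrix, dimension, and seed below are arbitrary illustrations.

```python
import numpy as np

# Check Lemma I.10 on f(x) = -0.5 x^T H x with H positive definite:
# f is beta-smooth with beta = largest eigenvalue of H, and one gradient
# ascent step x' = x + (1/beta) grad f(x) must satisfy
# f(x) - f(x') <= -(1/(2*beta)) * ||grad f(x)||_2^2.
rng = np.random.default_rng(0)
M = 5
A = rng.standard_normal((M, M))
H = A @ A.T + np.eye(M)               # symmetric positive definite
beta = np.linalg.eigvalsh(H).max()    # smoothness constant

def f(x): return -0.5 * x @ H @ x
def grad(x): return -H @ x

for _ in range(100):
    x = rng.standard_normal(M)
    x_next = x + grad(x) / beta
    lhs = f(x) - f(x_next)
    rhs = -np.dot(grad(x), grad(x)) / (2 * beta)
    assert lhs <= rhs + 1e-12
```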

J. Proofs for (Natural) Actor-critic based improper learning

We will begin with some useful lemmas.

Lemma J.1. For any $\theta, \theta' \in \mathbb{R}^M$, we have $\| \psi_{\theta}(m) - \psi_{\theta'}(m) \|_2 \leqslant \| \theta - \theta' \|_2$.

Proof. Recall, $\psi_{\theta}(m) \coloneqq \nabla_{\theta} \log \pi_{\theta}(m)$ . Fix $m' \in [M]$ ,

logπθ(m)θm=log(eθmj=1Meθj)θm=θm(θmlog(j=1Meθj))=1{m=m}eθmj=1Meθj=1{m=m}πθ(m). \begin{array}{l} \frac {\partial \log \pi_ {\theta} (m)}{\partial \theta_ {m ^ {\prime}}} = \frac {\partial \log \left(\frac {e ^ {\theta_ {m}}}{\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}}\right)}{\partial \theta_ {m ^ {\prime}}} \\ = \frac {\partial}{\partial \theta_ {m ^ {\prime}}} \left(\theta_ {m} - \log \left(\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}\right)\right) \\ = \mathbb {1} \{m ^ {\prime} = m \} - \frac {e ^ {\theta_ {m ^ {\prime}}}}{\sum_ {j = 1} ^ {M} e ^ {\theta_ {j}}} \\ = \mathbb {1} \{m ^ {\prime} = m \} - \pi_ {\theta} (m ^ {\prime}). \\ \end{array}

\begin{array}{l} \left\| \psi_{\theta}(m) - \psi_{\theta'}(m) \right\|_{2} = \left\| \nabla_{\theta} \log \pi_{\theta}(m) - \nabla_{\theta'} \log \pi_{\theta'}(m) \right\|_{2} \\ = \left\| \pi_{\theta}(\cdot) - \pi_{\theta'}(\cdot) \right\|_{2} \\ \leqslant^{(*)} \| \theta - \theta' \|_{2}. \\ \end{array}

Here $(^{*})$ follows from the fact that the softmax function is 1-Lipschitz (Gao & Pavel, 2017).

Lemma J.2. For all $m \in [M]$ and $\theta \in \mathbb{R}^M$, $\| \psi_{\theta}(m)\|_2 \leqslant \sqrt{2}$.

Proof. The claim follows by noticing that $\| \psi_{\theta}(m)\|_2 = \| \nabla_\theta \log \pi_\theta(m)\|_2 = \| e_m - \pi_\theta(\cdot)\|_2$, where $e_m$ is the $m$-th standard basis vector; hence $\| \psi_{\theta}(m)\|_2^2 = (1 - \pi_\theta(m))^2 + \sum_{m' \neq m} \pi_\theta(m')^2 \leqslant 2$, since the 2-norm of a probability vector is bounded by 1.
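Both bounds are easy to verify numerically for the softmax parameterization, using the closed form $\psi_\theta(m) = e_m - \pi_\theta(\cdot)$ computed in Lemma J.1; the dimension and seed below are arbitrary.

```python
import numpy as np

# Check Lemmas J.1 and J.2 for the softmax parameterization:
# psi_theta(m) = grad_theta log pi_theta(m) = e_m - pi_theta(.) is
# 1-Lipschitz in theta and has 2-norm at most sqrt(2).
rng = np.random.default_rng(1)
M = 6

def softmax(theta):
    z = np.exp(theta - theta.max())   # shift for numerical stability
    return z / z.sum()

def psi(theta, m):
    e_m = np.zeros(M)
    e_m[m] = 1.0
    return e_m - softmax(theta)

for _ in range(1000):
    theta, theta2 = rng.standard_normal(M), rng.standard_normal(M)
    m = rng.integers(M)
    # Lemma J.2: ||psi_theta(m)||_2 <= sqrt(2)
    assert np.linalg.norm(psi(theta, m)) <= np.sqrt(2) + 1e-12
    # Lemma J.1: ||psi_theta(m) - psi_theta'(m)||_2 <= ||theta - theta'||_2
    assert (np.linalg.norm(psi(theta, m) - psi(theta2, m))
            <= np.linalg.norm(theta - theta2) + 1e-12)
```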

Lemma J.3. For all $\theta, \theta' \in \mathbb{R}^M$, $\| \pi_\theta(\cdot) - \pi_{\theta'}(\cdot)\|_{TV} \leqslant \frac{\sqrt{M}}{2}\|\theta - \theta'\|_2$.

Proof.

πθ(.)πθ(.)TV=12πθ(.)πθ(.)1M2πθ(.)πθ(.)2. \begin{array}{l} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {T V} = \frac {1}{2} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {1} \\ \leqslant \frac {\sqrt {M}}{2} \left\| \pi_ {\theta} (.) - \pi_ {\theta^ {\prime}} (.) \right\| _ {2}. \\ \end{array}

The inequality follows from the standard relation between the 1-norm and the 2-norm; the final bound in the lemma then follows from the 1-Lipschitzness of the softmax map (see the proof of Lemma J.1).
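A quick numerical check of Lemma J.3 for softmax policies (dimension and seed below are arbitrary):

```python
import numpy as np

# Check Lemma J.3: the total-variation distance between two softmax
# policies is at most (sqrt(M)/2) * ||theta - theta'||_2.
rng = np.random.default_rng(2)
M = 8

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

for _ in range(1000):
    theta, theta2 = rng.standard_normal(M), rng.standard_normal(M)
    tv = 0.5 * np.abs(softmax(theta) - softmax(theta2)).sum()
    assert tv <= 0.5 * np.sqrt(M) * np.linalg.norm(theta - theta2) + 1e-12
```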

Proposition J.4. For any $\theta, \theta' \in \mathbb{R}^M$ ,

\left\| \nabla V(\theta) - \nabla V(\theta') \right\|_2 \leqslant \sqrt{M} L_{V} \| \theta - \theta' \|_{2}

where $L_{V} = \frac{2\sqrt{2}C_{\kappa\xi} + 1}{1 - \gamma}$ , and $C_{\kappa \xi} = \left(1 + \left\lceil \log_{\xi}\frac{1}{\kappa}\right\rceil +\frac{1}{1 - \xi}\right)$ .

Proof. We follow the same steps as in Proposition 1 of (Xu et al., 2020), along with Lemmas J.1, J.2, and J.3, and the fact that the maximum reward is bounded by 1.

We will now restate a useful result from (Xu et al., 2020), about the convergence of the critic parameter $w_{t}$ to the equilibrium point $w^{*}$ of the underlying ODE, applied to our setting.

Proposition J.5. Suppose Assumptions 5.2 and 5.3 hold, and let $\beta \leqslant \min \left\{\frac{\Gamma_L}{16}, \frac{8}{\Gamma_L}\right\}$ and $H \geqslant \left(\frac{4}{\Gamma_L} + 2\alpha\right)\left[\frac{1536[1 + (\kappa - 1)\xi]}{(1 - \xi)\Gamma_L}\right]$. Then we have

E[wTcw22](1ΓL16α)Tcw0w22+(4ΓL+2α)1536(1+Rw2)[1+(κ1)ξ](1ξ)H. \mathbb {E} \left[ \| w _ {T _ {c}} - w ^ {*} \| _ {2} ^ {2} \right] \leqslant \left(1 - \frac {\Gamma_ {L}}{1 6} \alpha\right) ^ {T _ {c}} \| w _ {0} - w ^ {*} \| _ {2} ^ {2} + \left(\frac {4}{\Gamma_ {L}} + 2 \alpha\right) \frac {1 5 3 6 (1 + R _ {w} ^ {2}) [ 1 + (\kappa - 1) \xi ]}{(1 - \xi) H}.

If we further let $T_{c} \geqslant \frac{16}{\Gamma_{L}\alpha} \log \frac{2\|w_{0} - w^{*}\|_{2}^{2}}{\varepsilon}$ and $H \geqslant \left(\frac{4}{\Gamma_{L}} + 2\alpha\right) \frac{3072(R_{w}^{2} + 1)[1 + (\kappa - 1)\xi]}{(1 - \xi)\Gamma_{L}\varepsilon}$, then we have $\mathbb{E}\left[\|w_{T_c}-w^*\|_2^2\right] \leqslant \varepsilon$ with total sample complexity given by $T_{c}H = \mathcal{O}\left(\frac{1}{\alpha\varepsilon} \log \frac{1}{\varepsilon}\right)$.

Proof. The proof follows along similar lines as Thm. 4 in Xu et al. (2020), using $\left\| \varphi(s)(\gamma \varphi(s') - \varphi(s))^{\top} \right\|_{F} \leqslant (1 + \gamma) \leqslant 2$ and assuming $\| \varphi(s) \|_2 \leqslant 1$ for all $s, s' \in S$.

J.1. Actor-critic based improper learning

Proof of Theorem 5.4. Let $v_{t}(w) \coloneqq \frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}_{w}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i})$, $A_{w}(s,m) \coloneqq \mathbb{E}_{\bar{P}}[\mathcal{E}_w(s,m,s')|(s,m)]$ and $g(w,\theta) \coloneqq \mathbb{E}_{\nu_{\theta}}[A_{w}(s,m)\psi_{\theta}(m)]$ for all $\theta \in \mathbb{R}^{M}, w \in \mathbb{R}^{d}, s \in S, m \in [M]$. Using Prop. J.4 we get,

V(θt+1)V(θt)+θV(θt),θt+1θtMLV2θt+1θt22=V(θt)+αθV(θt),vt(wt)θV(θt)+θV(θt)MLVα22vt(wt)22=V(θt)+αθV(θt)22+αθV(θt),vt(wt)θV(θt)MLVα22vt(wt)22V(θt)+(12αMLVα2)θV(θt)22(12α+MLVα2)vt(wt)θV(θt)22 \begin{array}{l} V (\theta_ {t + 1}) \geqslant V (\theta_ {t}) + \langle \nabla_ {\theta} V (\theta_ {t}), \theta_ {t + 1} - \theta_ {t} \rangle - \frac {\sqrt {M} L _ {V}}{2} \| \theta_ {t + 1} - \theta_ {t} \| _ {2} ^ {2} \\ = V (\theta_ {t}) + \alpha \left\langle \nabla_ {\theta} V (\theta_ {t}), v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) + \nabla_ {\theta} V (\theta_ {t}) \right\rangle - \frac {\sqrt {M} L _ {V} \alpha^ {2}}{2} \| v _ {t} (w _ {t}) \| _ {2} ^ {2} \\ = V \left(\theta_ {t}\right) + \alpha \| \nabla_ {\theta} V \left(\theta_ {t}\right) \| _ {2} ^ {2} \\ + \alpha \left\langle \nabla_ {\theta} V (\theta_ {t}), v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) \right\rangle - \frac {\sqrt {M} L _ {V} \alpha^ {2}}{2} \left\| v _ {t} (w _ {t}) \right\| _ {2} ^ {2} \\ \geqslant V \left(\theta_ {t}\right) + \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \left\| \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} - \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \left\| v _ {t} \left(w _ {t}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} \\ \end{array}

Taking expectations and rearranging, we have

(12αMLVα2)E[θV(θt)22Ft]E[V(θt+1)Ft]V(θt)+(12α+MLVα2)E[vt(wt)θV(θt)22Ft]. \begin{array}{l} \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \mid \mathcal {F} _ {t} \right] \\ \leqslant \mathbb {E} \left[ V (\theta_ {t + 1}) | \mathcal {F} _ {t} \right] - V (\theta_ {t}) + \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| v _ {t} (w _ {t}) - \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} | \mathcal {F} _ {t} \right]. \\ \end{array}

Next we upper-bound $\mathbb{E}\left[\| v_t(w_t) - \nabla_\theta V(\theta_t)\|_2^2 \mid \mathcal{F}_t\right]$.

vt(wt)θV(θt)223vt(wt)vt(wθt)22+3vt(wθt)g(wθt)22+3g(wθt)θV(θt)22. \begin{array}{l} \left\| v _ {t} \left(w _ {t}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2} \\ \leqslant 3 \left\| v _ {t} \left(w _ {t}\right) - v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} + 3 \left\| v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) - g \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} + 3 \left\| g \left(w _ {\theta_ {t}} ^ {*}\right) - \nabla_ {\theta} V \left(\theta_ {t}\right) \right\| _ {2} ^ {2}. \\ \end{array}

vt(wt)vt(wθt)22=1Bi=0B1[Ewt(st,i,mt,i,st,i+1)Ewθt(st,i,mt,i,st,i+1)]ψ(mt,i)221Bi=0B1[Ewt(st,i,mt,i,st,i+1)Ewθt(st,i,mt,i,st,i+1)]ψ(mt,i)222Bi=0B1[Ewt(st,i,mt,i,st,i+1)Ewθt(st,i,mt,i,st,i+1)]22=2Bi=0B1(γφ(st,i+1)φ(st,i))(wtwθt)22 \begin{array}{l} \left\| v _ {t} \left(w _ {t}\right) - v _ {t} \left(w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} = \left\| \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \left[ \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right] \psi \left(m _ {t, i}\right) \right\| _ {2} ^ {2} \\ \leqslant \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \left\| \left[ \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right] \psi \left(m _ {t, i}\right) \right\| _ {2} ^ {2} \\ \leqslant \frac {2}{B} \sum_ {i = 0} ^ {B - 1} \left\| \left[ \mathcal {E} _ {w _ {t}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) - \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} \left(s _ {t, i}, m _ {t, i}, s _ {t, i + 1}\right) \right] \right\| _ {2} ^ {2} \\ = \frac {2}{B} \sum_ {i = 0} ^ {B - 1} \left\| \left(\gamma \varphi \left(s _ {t, i + 1}\right) - \varphi \left(s _ {t, i}\right)\right) ^ {\top} \left(w _ {t} - w _ {\theta_ {t}} ^ {*}\right) \right\| _ {2} ^ {2} \\ \end{array}

8Bi=0B1(wtwθt)22=8(wtwθt)22. \leqslant \frac {8}{B} \sum_ {i = 0} ^ {B - 1} \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} = 8 \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2}.
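The constants in the chain above (features with $\|\varphi(s)\|_2 \leqslant 1$, scores with $\|\psi\|_2 \leqslant \sqrt{2}$ from Lemma J.2, and $1+\gamma \leqslant 2$) can be spot-checked numerically; all sampled quantities below are synthetic stand-ins, not the algorithm's actual iterates.

```python
import numpy as np

# Spot-check the per-sample bound used above: with ||phi(s)||_2 <= 1,
# ||psi||_2 <= sqrt(2), and gamma in (0,1), each summand satisfies
# ||[(gamma*phi' - phi)^T (w - w*)] * psi||_2^2 <= 8 * ||w - w*||_2^2.
rng = np.random.default_rng(3)
d, M, gamma = 10, 5, 0.9

def unit_ball(n):
    # a random vector with 2-norm at most 1
    v = rng.standard_normal(n)
    return v / max(1.0, np.linalg.norm(v))

for _ in range(1000):
    phi, phi_next = unit_ball(d), unit_ball(d)
    dw = rng.standard_normal(d)          # stands in for w_t - w*_{theta_t}
    psi = rng.standard_normal(M)
    psi *= np.sqrt(2) / max(np.sqrt(2), np.linalg.norm(psi))  # ||psi|| <= sqrt(2)
    scalar = (gamma * phi_next - phi) @ dw
    assert np.linalg.norm(scalar * psi) ** 2 <= 8 * (dw @ dw) + 1e-9
```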

Next we have,

\begin{array}{l} \left\| g(w_{\theta_{t}}^{*}) - \nabla_{\theta} V(\theta_{t}) \right\|_{2}^{2} = \left\| \mathbb{E}_{\nu_{\theta_{t}}}[A_{w_{\theta_{t}}^{*}}(s,m)\psi_{\theta_{t}}(m)] - \mathbb{E}_{\nu_{\theta_{t}}}[A_{\pi_{\theta_{t}}}(s,m)\psi_{\theta_{t}}(m)] \right\|_{2}^{2} \\ \leqslant 2\, \mathbb{E}_{\nu_{\theta_{t}}} \left| A_{w_{\theta_{t}}^{*}}(s,m) - A_{\pi_{\theta_{t}}}(s,m) \right|^{2} \\ = 2\, \mathbb{E}_{\nu_{\theta_{t}}} \left[ \left| \gamma \mathbb{E}\left[ V_{w_{\theta_{t}}^{*}}(s') - V^{\pi_{\theta_{t}}}(s') \mid s,m \right] + V^{\pi_{\theta_{t}}}(s) - V_{w_{\theta_{t}}^{*}}(s) \right|^{2} \right] \\ \leqslant 8\Delta_{critic}. \\ \end{array}

Finally, we bound the last term $\left\| v_{t}(w_{\theta_{t}}^{*}) - g(w_{\theta_{t}}^{*}) \right\|_{2}^{2}$. Using Assumption 5.2 we have,

vt(wθt)g(wθt)22E[1Bi=0B1Ewθt(st,i,mt,i,st,i+1)ψθt(mt,i)Eνθt[Awθt(s,m)ψθt(m)]22Ft]. \left\| v _ {t} (w _ {\theta_ {t}} ^ {*}) - g (w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \leqslant \mathbb {E} \left[ \left\| \frac {1}{B} \sum_ {i = 0} ^ {B - 1} \mathcal {E} _ {w _ {\theta_ {t}} ^ {*}} (s _ {t, i}, m _ {t, i}, s _ {t, i + 1}) \psi_ {\theta_ {t}} (m _ {t, i}) - \mathbb {E} _ {\nu_ {\theta_ {t}}} [ A _ {w _ {\theta_ {t}} ^ {*}} (s, m) \psi_ {\theta_ {t}} (m) ] \right\| _ {2} ^ {2} | \mathcal {F} _ {t} \right].

We now proceed in a similar manner as in (Xu et al., 2020) (Eqs. 24 to 26), and using Lemma J.2 we have

E[vt(wθt)g(wθt)22Ft]32(1+Rw)2[1+(κ1)ξ]B(1ξ). \mathbb {E} \left[ \left\| v _ {t} (w _ {\theta_ {t}} ^ {*}) - g (w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} | \mathcal {F} _ {t} \right] \leqslant \frac {3 2 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)}.

Putting things back we have,

\mathbb{E} \left[ \left\| v_{t}(w_{t}) - \nabla_{\theta} V(\theta_{t}) \right\|_{2}^{2} \mid \mathcal{F}_{t} \right] \leqslant \frac{96(1 + R_{w})^{2}[1 + (\kappa - 1)\xi]}{B(1 - \xi)} + 24\, \mathbb{E} \left[ \left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2} \right] + 24\Delta_{critic}.

Hence we get,

(12αMLVα2)E[θV(θt)22]E[V(θt+1)]E[V(θt)]+(12α+MLVα2)(96(1+Rw)2[1+(κ1)ξ]B(1ξ)+24E[(wtwθt)22]+24Δcritic). \begin{array}{l} \left(\frac {1}{2} \alpha - \sqrt {M} L _ {V} \alpha^ {2}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \\ \leqslant \mathbb {E} \left[ V \left(\theta_ {t + 1}\right) \right] - \mathbb {E} \left[ V \left(\theta_ {t}\right) \right] \\ + \left(\frac {1}{2} \alpha + \sqrt {M} L _ {V} \alpha^ {2}\right) \left(\frac {9 6 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 2 4 \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 2 4 \Delta_ {c r i t i c}\right). \\ \end{array}

We put $\alpha = \frac{1}{4L_V\sqrt{M}}$ above to get,

(116LVM)E[θV(θt)22]E[V(θt+1)]E[V(θt)]+(14LVM)(96(1+Rw)2[1+(κ1)ξ]B(1ξ)+24E[(wtwθt)22]+24Δcritic). \begin{array}{l} \left(\frac {1}{1 6 L _ {V} \sqrt {M}}\right) \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \leqslant \mathbb {E} \left[ V \left(\theta_ {t + 1}\right) \right] - \mathbb {E} \left[ V \left(\theta_ {t}\right) \right] \\ + \left(\frac {1}{4 L _ {V} \sqrt {M}}\right) \left(\frac {9 6 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 2 4 \mathbb {E} \left[ \big \| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \big \| _ {2} ^ {2} \right] + 2 4 \Delta_ {c r i t i c}\right). \\ \end{array}

which simplifies to

E[θV(θt)22]16LVM(E[V(θt+1)]E[V(θt)])+384(1+Rw)2[1+(κ1)ξ]B(1ξ)+96E[(wtwθt)22]+96Δcritic. \begin{array}{l} \mathbb {E} \left[ \left\| \nabla_ {\theta} V (\theta_ {t}) \right\| _ {2} ^ {2} \right] \\ \leqslant 1 6 L _ {V} \sqrt {M} \left(\mathbb {E} \left[ V (\theta_ {t + 1}) \right] - \mathbb {E} \left[ V (\theta_ {t}) \right]\right) + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c}. \\ \end{array}

Taking the summation over $t = 0,1,2,\ldots ,T - 1$ and dividing by $T$ gives

E[θV(θT^)22]=1Tt=0T1E[θV(θt)22]16LVM(E[V(θT)]E[V(θ0)])T+384(1+Rw)2[1+(κ1)ξ]B(1ξ)+961Tt=0T1E[(wtwθt)22]+96Δcritic16LVM(1γ)T+384(1+Rw)2[1+(κ1)ξ]B(1ξ)+961Tt=0T1E[(wtwθt)22]+96Δcritic \begin{array}{l} \mathbb {E} \left[ \left\| \nabla_ {\theta} V (\theta_ {\widehat {T}}) \right\| _ {2} ^ {2} \right] \\ = \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} \right] \\ \leqslant \frac {1 6 L _ {V} \sqrt {M} \left(\mathbb {E} \left[ V (\theta_ {T}) \right] - \mathbb {E} \left[ V (\theta_ {0}) \right]\right)}{T} + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c} \\ \leqslant \frac {1 6 L _ {V} \sqrt {M}}{(1 - \gamma) T} + \frac {3 8 4 (1 + R _ {w}) ^ {2} [ 1 + (\kappa - 1) \xi ]}{B (1 - \xi)} + 9 6 \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \left\| (w _ {t} - w _ {\theta_ {t}} ^ {*}) \right\| _ {2} ^ {2} \right] + 9 6 \Delta_ {c r i t i c} \\ \end{array}

We now let $B \geqslant \frac{1152(1 + R_w)^2[1 + (\kappa - 1)\xi]}{(1 - \xi)\varepsilon}$, $\mathbb{E}\left[\left\| w_t - w_{\theta_t}^*\right\|_2^2\right] \leqslant \frac{\varepsilon}{288}$ and $T \geqslant \frac{48L_V\sqrt{M}}{(1 - \gamma)\varepsilon}$, so that each of the three leading terms is at most $\frac{\varepsilon}{3}$; then we have

\mathbb{E} \left[ \left\| \nabla_{\theta} V(\theta_{\widehat{T}}) \right\|_{2}^{2} \right] \leqslant \varepsilon + \mathcal{O}(\Delta_{critic}).

This leads to a final sample complexity of $(B + HT_{c})T = \left(\frac{1}{\varepsilon} +\frac{\sqrt{M}}{\varepsilon}\log \frac{1}{\varepsilon}\right)\left(\frac{\sqrt{M}}{(1 - \gamma)^{2}\varepsilon}\right) = \mathcal{O}\left(\frac{M}{(1 - \gamma)^{2}\varepsilon^{2}}\log \frac{1}{\varepsilon}\right)$.

J.2. Natural-actor-critic based improper learning

J.2.1. PROOF OF THEOREM 5.5

Proof. We first show that the natural actor-critic improper learner converges to a stationary point. We then show convergence to the global optimum, which is the part of the analysis that differs from (Xu et al., 2020).

Let $v_{t}(w) \coloneqq \frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}_{w}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i})$, $A_{w}(s,m) \coloneqq \mathbb{E}_{\bar{P}}[\mathcal{E}_w(s,m,s')|s,m]$ and $g(w,\theta) \coloneqq \mathbb{E}_{\nu_{\theta}}[A_{w}(s,m)\psi_{\theta}(m)]$ for $w \in \mathbb{R}^d$ and $\theta \in \mathbb{R}^M$. Also let $u_{t}(w) \coloneqq [F_{t}(\theta_{t}) + \lambda I]^{-1}\left[\frac{1}{B} \sum_{i=0}^{B-1} \mathcal{E}_{w}(s_{t,i}, m_{t,i}, s_{t,i+1}) \psi_{\theta_{t}}(m_{t,i})\right] = [F_{t}(\theta_{t}) + \lambda I]^{-1}v_{t}(w)$.

Recall Prop J.4. We have

Lemma J.6. Assume $\sup_{s\in S}\| \varphi (s)\|_2 \leqslant 1$. Under Assumptions 5.2 and 5.3, with step-size chosen as $\alpha = \frac{\lambda^2}{2\sqrt{M}L_V(1 + \lambda)}$, we have

E[θV(θT^)22]=1Tt=0T1E[θV(θt)22]16MLV(1+λ)2λ2E[V(θT)]V(θ0)T+108λ2[2(1+λ)2+λ2]t=0T1E[wtwθt22]T+[2(1+λ)2+λ2](32λ4(1γ)2+432(1+2Rw)2λ2)1+(κ1)ξ(1ξ)B+216λ2[2(1+λ)2+λ2]Δcritic. \begin{array}{l} \mathbb {E} [ \| \nabla_ {\theta} V (\theta_ {\hat {T}}) \| _ {2} ^ {2} ] = \frac {1}{T} \sum_ {t = 0} ^ {T - 1} \mathbb {E} [ \| \nabla_ {\theta} V (\theta_ {t}) \| _ {2} ^ {2} ] \\ \leqslant \frac {1 6 \sqrt {M} L _ {V} (1 + \lambda) ^ {2}}{\lambda^ {2}} \frac {\mathbb {E} [ V (\theta_ {T}) ] - V (\theta_ {0})}{T} + \frac {1 0 8}{\lambda^ {2}} [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \frac {\sum_ {t = 0} ^ {T - 1} \mathbb {E} \left[ \| w _ {t} - w _ {\theta_ {t}} ^ {*} \| _ {2} ^ {2} \right]}{T} \\ + [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \left(\frac {3 2}{\lambda^ {4} (1 - \gamma) ^ {2}} + \frac {4 3 2 (1 + 2 R _ {w}) ^ {2}}{\lambda^ {2}}\right) \frac {1 + (\kappa - 1) \xi}{(1 - \xi) B} + \frac {2 1 6}{\lambda^ {2}} [ 2 (1 + \lambda) ^ {2} + \lambda^ {2} ] \Delta_ {c r i t i c}. \\ \end{array}

Proof. The proof is similar to the first part of the proof of Thm. 6 in (Xu et al., 2020) and to that of Thm. 5.4, along with using Prop. J.4 and Lemmas J.1, J.2 and J.3.

We now move to proving the global optimality of the natural actor-critic based improper learner. Let $KL(\cdot, \cdot)$ be the KL-divergence between two distributions. We denote $\mathsf{D}(\theta) \coloneqq \mathsf{KL}(\pi^*, \pi_\theta)$, $u_{\theta_t}^\lambda \coloneqq (F(\theta_t) + \lambda I)^{-1}\nabla_\theta V(\theta_t)$ and $u_{\theta_t}^\dagger \coloneqq F(\theta_t)^\dagger \nabla_\theta V(\theta_t)$. We see that

\begin{array}{l} \mathrm{D}(\theta_{t}) - \mathrm{D}(\theta_{t+1}) \\ = \sum_{m=1}^{M} \pi^{*}(m) \left[ \log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_{t}}(m) \right] \\ \stackrel{(i)}{=} \sum_{s \in \mathcal{S}} d_{\rho}^{\pi^{*}}(s) \sum_{m=1}^{M} \pi^{*}(m) \left[ \log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_{t}}(m) \right] \\ = \mathbb{E}_{\nu_{\pi^{*}}}\left[ \log \pi_{\theta_{t+1}}(m) - \log \pi_{\theta_{t}}(m) \right] \\ \stackrel{(ii)}{\geqslant} \mathbb{E}_{\nu_{\pi^{*}}}\left[ \nabla_{\theta} \log(\pi_{\theta_{t}}(m)) \right]^{\top} (\theta_{t+1} - \theta_{t}) - \frac{\|\theta_{t+1} - \theta_{t}\|_{2}^{2}}{2} \\ = \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (\theta_{t+1} - \theta_{t}) - \frac{\|\theta_{t+1} - \theta_{t}\|_{2}^{2}}{2} \\ = \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} u_{t}(w_{t}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ = \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} u_{\theta_{t}}^{\lambda} + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ = \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} u_{\theta_{t}}^{\dagger} + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ = \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ A_{\pi_{\theta_{t}}}(s,m) \right] + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m) \right] + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) \\ + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ \stackrel{(iii)}{=} \alpha(1-\gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m) \right] + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) \\ + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ \geqslant \alpha(1-\gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \alpha \sqrt{\mathbb{E}_{\nu_{\pi^{*}}}\left[ [\psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m)]^{2} \right]} + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) \\ + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ \stackrel{(iv)}{\geqslant} \alpha(1-\gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \sqrt{\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{t}}}} \right\|_{\infty}}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_{t}}}}\left[ [\psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m)]^{2} \right]} + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) \\ + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ \stackrel{(v)}{\geqslant} \alpha(1-\gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \sqrt{\frac{1}{1-\gamma}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_{t}}}}\left[ [\psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m)]^{2} \right]} \\ + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{\theta_{t}}^{\lambda} - u_{\theta_{t}}^{\dagger}) + \alpha \mathbb{E}_{\nu_{\pi^{*}}}\left[ \psi_{\theta_{t}}(m) \right]^{\top} (u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda}) - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2} \\ \stackrel{(vi)}{\geqslant} \alpha(1-\gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \sqrt{\frac{1}{1-\gamma}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}}\, \alpha \sqrt{\mathbb{E}_{\nu_{\pi_{\theta_{t}}}}\left[ [\psi_{\theta_{t}}(m)^{\top} u_{\theta_{t}}^{\dagger} - A_{\pi_{\theta_{t}}}(s,m)]^{2} \right]} - \alpha C_{soft}\lambda \\ - 2\alpha \left\| u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda} \right\|_{2} - \frac{\alpha^{2}\|u_{t}(w_{t})\|_{2}^{2}}{2}. \\ \end{array}

where (i) is by taking an extra expectation without changing the inner summand, (ii) follows by Lemma J.1 and Lemma 5 in (Xu et al., 2020), (iii) follows by the value difference lemma (Lemma I.4), (iv) follows by defining $\left\| \frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_t}}} \right\|_{\infty} := \max_{s,m} \frac{\nu_{\pi^*}(s,m)}{\nu_{\pi_{\theta_t}}(s,m)}$, (v) follows because $\nu_{\pi_{\theta_t}}(s,m) \geqslant (1 - \gamma)\nu_{\pi_{\theta_0}}(s,m)$, and (vi) follows by Lemma 6 in (Xu et al., 2020) and Lemma J.2.

Next, we denote $\Delta_{actor} := \max_{\theta \in \mathbb{R}^M} \min_{w \in \mathbb{R}^d} \mathbb{E}_{\nu_{\pi_\theta}} \left[ [\psi_\theta^\top w - A_{\pi_\theta}(s, m)]^2 \right]$ as the actor error.

\begin{array}{l} \mathrm{D}\left(\theta_{t}\right) - \mathrm{D}\left(\theta_{t+1}\right) \\ \geqslant (1 - \gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \sqrt{\frac{1}{1 - \gamma}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}}\, \alpha \sqrt{\Delta_{actor}} - \alpha C_{soft}\lambda \\ - 2\alpha \left\| u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda} \right\|_{2} - \frac{\alpha^{2} \left\| u_{t}(w_{t}) \right\|_{2}^{2}}{2} \\ \geqslant (1 - \gamma)(V(\pi^{*}) - V(\pi_{\theta_{t}})) - \sqrt{\frac{1}{1 - \gamma}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}}\, \alpha \sqrt{\Delta_{actor}} - \alpha C_{soft}\lambda \\ - 2\alpha \left\| u_{t}\left(w_{t}\right) - u_{\theta_{t}}^{\lambda} \right\|_{2} - \frac{\alpha^{2} \left\| u_{t}\left(w_{t}\right) - u^{\lambda}\left(\theta_{t}\right) \right\|_{2}^{2}}{2} - \frac{\alpha^{2}}{\lambda^{2}} \left\| \nabla_{\theta} V\left(\theta_{t}\right) \right\|_{2}^{2}. \\ \end{array}

Rearranging, dividing by $(1 - \gamma)\alpha$, and taking expectations on both sides, we get

\begin{array}{l} V\left(\pi^{*}\right) - \mathbb{E}\left[V\left(\pi_{\theta_{t}}\right)\right] \\ \leqslant \frac{\mathbb{E}[\mathrm{D}(\theta_{t})] - \mathbb{E}[\mathrm{D}(\theta_{t+1})]}{(1 - \gamma)\alpha} + \frac{2\sqrt{\mathbb{E}[\| u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda} \|_{2}^{2}]}}{1 - \gamma} + \frac{\alpha \mathbb{E}\left[\| u_{t}(w_{t}) - u^{\lambda}(\theta_{t}) \|_{2}^{2}\right]}{2(1 - \gamma)} \\ + \frac{\alpha}{\lambda^{2}(1 - \gamma)} \mathbb{E}\left[\| \nabla_{\theta} V(\theta_{t}) \|_{2}^{2}\right] + \sqrt{\frac{1}{(1 - \gamma)^{3}}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1 - \gamma}. \\ \end{array}

Next we use the same argument as in eq (33) and Lemma 2 in Xu et al. (2020) to bound the second term.

\mathbb{E}\left[\left\| u_{t}(w_{t}) - u_{\theta_{t}}^{\lambda} \right\|_{2}^{2}\right] \leqslant \frac{C}{B} + \frac{108\,\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]}{\lambda^{2}} + \frac{216\,\Delta_{critic}}{\lambda^{2}}.

where $C \coloneqq \frac{18}{\lambda^2} \frac{24(1 + 2R_w)^2[1 + (\kappa - 1)\xi]}{B(1 - \xi)} + \frac{4}{\lambda^4(1 - \gamma)^2} \cdot \frac{8[1 + (\kappa - 1)\xi]}{(1 - \xi)B}$ . Using this in the bound and using $\sqrt{a + b} \leqslant \sqrt{a} + \sqrt{b}$ for positive $a, b$ above, we have,

\begin{array}{l} V\left(\pi^{*}\right) - \mathbb{E}\left[V\left(\pi_{\theta_{t}}\right)\right] \\ \leqslant \frac{\mathbb{E}[\mathrm{D}(\theta_{t})] - \mathbb{E}[\mathrm{D}(\theta_{t+1})]}{(1 - \gamma)\alpha} + \frac{2}{1 - \gamma}\left(\sqrt{\frac{C}{B}} + 11\sqrt{\frac{\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]}{\lambda^{2}}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^{2}}}\right) \\ + \frac{\alpha}{2(1 - \gamma)}\left(\frac{C}{B} + 108\,\frac{\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]}{\lambda^{2}} + 216\,\frac{\Delta_{critic}}{\lambda^{2}}\right) \\ + \frac{\alpha}{\lambda^{2}(1 - \gamma)} \mathbb{E}\left[\| \nabla_{\theta} V(\theta_{t}) \|_{2}^{2}\right] + \sqrt{\frac{1}{(1 - \gamma)^{3}}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1 - \gamma}. \\ \end{array}

Summing over all $t = 0,1,\ldots ,T - 1$ and then dividing by $T$ we get,

\begin{array}{l} V(\pi^{*}) - \frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}\left[V\left(\pi_{\theta_{t}}\right)\right] \\ \leqslant \frac{\mathrm{D}(\theta_{0}) - \mathbb{E}[\mathrm{D}(\theta_{T})]}{(1 - \gamma)\alpha T} + \frac{2}{(1 - \gamma)}\left(\sqrt{\frac{C}{B}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^{2}}}\right) + \frac{22}{(1 - \gamma)T}\sum_{t=0}^{T-1}\sqrt{\frac{\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]}{\lambda^{2}}} \\ + \frac{\alpha}{2(1 - \gamma)}\left(\frac{C}{B} + 216\,\frac{\Delta_{critic}}{\lambda^{2}}\right) + \frac{54\,\alpha}{(1 - \gamma)T}\sum_{t=0}^{T-1}\frac{\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]}{\lambda^{2}} \\ + \frac{\alpha}{\lambda^{2}(1 - \gamma)T}\sum_{t=0}^{T-1}\mathbb{E}\left[\| \nabla_{\theta} V(\theta_{t}) \|_{2}^{2}\right] + \sqrt{\frac{1}{(1 - \gamma)^{3}}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}} \sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1 - \gamma}. \\ \end{array}

We now substitute the value of the step size $\alpha \leqslant \frac{\lambda^2}{2\sqrt{M}L_V(1 + \lambda)}$ to get

\begin{array}{l} V\left(\pi^{*}\right) - \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[V\left(\pi_{\theta_{t}}\right)\right] \\ \leqslant C_{1}\frac{\sqrt{M}}{T} + \frac{C_{2}}{\sqrt{B}} + C_{3}\sqrt{\Delta_{critic}} + \frac{C_{4}}{T}\sum_{t=0}^{T-1}\sqrt{\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right]} \\ + \frac{C_{5}}{B} + C_{6}\sqrt{\Delta_{critic}} + \frac{C_{7}}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\| w_{t} - w_{\theta_{t}}^{*} \right\|_{2}^{2}\right] + \frac{C_{8}}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[\left\| \nabla_{\theta} V(\theta_{t}) \right\|_{2}^{2}\right] \\ + \sqrt{\frac{1}{(1 - \gamma)^{3}}\left\| \frac{\nu_{\pi^{*}}}{\nu_{\pi_{\theta_{0}}}} \right\|_{\infty}} \sqrt{\Delta_{actor}} + C_{9}\lambda. \\ \end{array}

Letting $T = \mathcal{O}\left(\frac{\sqrt{M}}{(1 - \gamma)^2\varepsilon}\right)$ and $B = \mathcal{O}\left(\frac{1}{(1 - \gamma)^2\varepsilon^2}\right)$, we have $\mathbb{E}\left[\| \nabla_{\theta}V(\theta_t)\|_2^2\right] \leqslant \varepsilon^2$ and

V\left(\pi^{*}\right) - \frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\left[V\left(\pi_{\theta_{t}}\right)\right] \leqslant \varepsilon + \mathcal{O}\left(\sqrt{\frac{\Delta_{actor}}{(1 - \gamma)^{3}}}\right) + \mathcal{O}\left(\Delta_{critic}\right) + \mathcal{O}(\lambda).

This leads to a total sample complexity of

(B + HT_{c})\,T = \mathcal{O}\left(\left(\frac{1}{(1 - \gamma)^{2}\varepsilon^{2}} + \frac{\sqrt{M}}{\varepsilon^{2}}\log\frac{1}{\varepsilon}\right)\frac{\sqrt{M}}{(1 - \gamma)^{2}\varepsilon}\right) = \mathcal{O}\left(\frac{M}{(1 - \gamma)^{4}\varepsilon^{3}}\log\frac{1}{\varepsilon}\right).

K. Simulation Details

In this section we describe the details of the experiments in Sec. 6. Recall that, since neither value functions nor value gradients are available in closed form, we modify SoftMax PG (Algorithm 1) to make it generally implementable, using a combination of (1) rollouts to estimate the value function of the current (improper) policy and (2) a stochastic-approximation-based approach to estimate its value gradient.

The Softmax PG with Gradient Estimation (SPGE) procedure (Algorithm 5) and its gradient estimation subroutine GradEst (Algorithm 6) are shown below.


Figure 7: A chain MDP with 10 states.

Algorithm 5 Softmax PG with Gradient Estimation (SPGE)

1: Input: learning rate $\eta > 0$ , perturbation parameter $\alpha > 0$ , Initial state distribution $\mu$
2: Initialize each $\theta_{m}^{1} = 1$ , for all $m \in [M]$ , $s_{1} \sim \mu$
3: for $t = 1$ to $T$ do
4: Choose controller $m_t \sim \pi_t$ .
5: Play action $a_{t} \sim K_{m_{t}}(s_{t},:)$ .
6: Observe $s_{t+1} \sim \mathsf{P}(.|s_t, a_t)$ .
7: $\nabla_{\theta^t} \widehat{V^{\pi_{\theta_t}}}(\mu) = \operatorname{GradEst}(\theta_t, \alpha, \mu)$
8: Update: $\theta^{t + 1} = \theta^t + \eta \cdot \nabla_{\theta^t} \widehat{V^{\pi_{\theta_t}}}(\mu)$.
9: end for

Algorithm 6 GradEst (subroutine for SPGE)

1: Input: Policy parameters $\theta$ , parameter $\alpha > 0$ , Initial state distribution $\mu$ .
2: for $i = 1$ to #runs do
3: $u^i\sim Unif(\mathbb{S}^{M - 1}).$
4: $\theta_{\alpha} = \theta + \alpha .u^{i}$
5: $\pi_{\alpha} = \mathrm{softmax}(\theta_{\alpha})$
6: for $l = 1$ to #rollouts do
7: Generate a trajectory $(s_0, a_0, r_0, s_1, a_1, r_1, \ldots, s_{\mathsf{lt}}, a_{\mathsf{lt}}, r_{\mathsf{lt}})$ using the policy $\pi_{\alpha}$, with $s_0 \sim \mu$.
8: $\mathrm{reward}(l) = \sum_{j=0}^{\mathsf{lt}} \gamma^j r_j$
9: end for
10: $\mathrm{mr}(\mathrm{i}) = \mathrm{mean}(\mathrm{reward})$
11: end for
12: GradValue $= \frac{M}{\alpha} \cdot \frac{1}{\#\text{runs}} \sum_{i=1}^{\#\text{runs}} \mathrm{mr}(i)\, u^{i}$.
13: Return: GradValue.
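For concreteness, the zeroth-order estimator in GradEst can be sketched in a few lines of Python. This is an illustrative sketch, not the exact experiment code: the Monte-Carlo rollout return is replaced by a known toy objective `f` so that the estimate can be checked against the true gradient, and the names `grad_est` and `f` are ours.

```python
import numpy as np

def grad_est(f, theta, alpha=0.1, n_runs=2000, seed=0):
    """One-point spherical-smoothing gradient estimate, in the spirit of GradEst:
    average f(theta + alpha * u) * u * (M / alpha) over u ~ Unif(S^{M-1})."""
    rng = np.random.default_rng(seed)
    M = theta.shape[0]
    g = np.zeros(M)
    for _ in range(n_runs):
        u = rng.normal(size=M)
        u /= np.linalg.norm(u)          # uniform random direction on the sphere
        g += f(theta + alpha * u) * u   # in SPGE, f(.) is the mean rollout return
    return g * (M / alpha) / n_runs

# Sanity check on a linear stand-in objective, whose gradient is exactly c:
c = np.array([1.0, 2.0, 3.0])
g = grad_est(lambda th: th @ c, np.zeros(3))
```

For a linear objective the estimator is unbiased, since $\mathbb{E}[uu^\top] = I/M$ for $u$ uniform on the sphere, so the average concentrates around $c$.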

Next we report some additional simulations performed under different environments.

K.1. State-Dependent Controllers - Chain MDP

We consider a linear chain MDP as shown in Figure 7. As evident from the figure, $|S| = 10$ and the learner has only two actions available, $\mathcal{A} = \{\text{left}, \text{right}\}$; hence the name 'chain'. The numbers on the arrows represent the reward obtained with the transition. The initial state is $s_1$ and $s_{10}$ is the terminal state. We define 2 base controllers, $K_1$ and $K_2$, as follows.

K_{1}(\mathsf{left} \mid s_{j}) = \left\{ \begin{array}{ll} 1, & j \in [9] \backslash \{5\} \\ 0.1, & j = 5 \\ 0, & j = 10. \end{array} \right.

K_{2}(\mathsf{left} \mid s_{j}) = \left\{ \begin{array}{ll} 1, & j \in [9] \backslash \{6\} \\ 0.1, & j = 6 \\ 0, & j = 10. \end{array} \right.

and obviously $K_{i}(\mathsf{right} \mid s_{j}) = 1 - K_{i}(\mathsf{left} \mid s_{j})$ for $i = 1,2$. An improper mixture of the two controllers, i.e., $(K_{1} + K_{2})/2$, is optimal in this case. We show that our policy gradient indeed converges to the 'correct' combination; see Figure 8. We provide here an elementary calculation supporting our claim that the mixture $K_{\mathrm{mix}} \coloneqq (K_1 + K_2)/2$ is indeed better than applying $K_{1}$ or $K_{2}$ for all time. We first analyze the value function of $K_{i}$, $i = 1,2$ (the two are the same due to the symmetry of the problem and the probability values described).

V^{K_{i}}(s_{1}) = \mathbb{E}\left[\sum_{t \geqslant 0} \gamma^{t} r_{t}(a_{t}, s_{t})\right]


Figure 8: Softmax PG algorithm applied to the linear chain MDP with various randomly chosen initial distributions. The plot shows the probability of choosing controller $K_{1}$, averaged over trials.

\begin{array}{l} = 0.1 \times \gamma^{9} + 0.1 \times 0.9 \times 0.1 \times \gamma^{11} + 0.1 \times 0.9 \times 0.1 \times 0.9 \times 0.1 \times \gamma^{13} + \dots \\ = 0.1 \times \gamma^{9}\left(1 + \left(0.1 \times 0.9\,\gamma^{2}\right) + \left(0.1 \times 0.9\,\gamma^{2}\right)^{2} + \dots\right) = \frac{0.1 \times \gamma^{9}}{1 - 0.1 \times 0.9 \times \gamma^{2}}. \\ \end{array}

We next analyze the value when the mixture controller $K_{\mathrm{mix}}$ is applied to the above MDP. The analysis is a little more intricate. We make use of the following key observations, which are elementary but crucial.

  1. Let Paths be the set of all state sequences that start from $s_1$, terminate at $s_{10}$, and can be generated under the policy $K_{\mathrm{mix}}$. Observe that

V^{K_{\mathrm{mix}}}\left(s_{1}\right) = \sum_{\underline{p} \in \text{Paths}} \gamma^{\operatorname{length}(\underline{p})}\, \mathbb{P}[\underline{p}] \cdot 1. \tag{20}

Recall that the reward obtained from the transition $s_9 \rightarrow s_{10}$ is 1.

  2. The number of distinct paths with exactly $n$ loops is $2^{n}$.
  3. The discounted probability contribution of each such path with $n$ loops is:

\begin{array}{l} = \underbrace{(0.55 \times 0.45) \times (0.55 \times 0.45) \times \dots (0.55 \times 0.45)}_{n\ \text{times}} \times 0.55 \times 0.55 \times \gamma^{9 + 2n} \\ = (0.55)^{2} \times \gamma^{9}\,(0.55 \times 0.45 \times \gamma^{2})^{n}. \\ \end{array}

  4. Finally, we put everything together to get:

\begin{array}{l} V^{K_{\mathrm{mix}}}(s_{1}) = \sum_{n=0}^{\infty} 2^{n} \times (0.55)^{2} \times \gamma^{9} \times (0.55 \times 0.45 \times \gamma^{2})^{n} \\ = \frac{(0.55)^{2} \times \gamma^{9}}{1 - 2 \times 0.55 \times 0.45 \times \gamma^{2}} > V^{K_{i}}(s_{1}). \\ \end{array}

This shows that a mixture performs better than the constituent controllers. The plot in Fig. 8 shows that the Softmax PG algorithm (even with estimated gradients and value functions) correctly converges to the $(0.5, 0.5)$ mixture.
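The closed forms above are easy to verify numerically. A quick sanity check, using the discount $\gamma = 0.9$ from our experiments (the variable names below are ours):

```python
gamma = 0.9  # discount factor used in the experiments

# Closed forms derived above for the 10-state chain
V_base = 0.1 * gamma**9 / (1 - 0.1 * 0.9 * gamma**2)           # V^{K_i}(s_1)
V_mix = 0.55**2 * gamma**9 / (1 - 2 * 0.55 * 0.45 * gamma**2)  # V^{K_mix}(s_1)

# The series in the last display, truncated; it converges since
# the ratio 2 * 0.55 * 0.45 * gamma^2 < 1
V_mix_series = sum(
    2**n * 0.55**2 * gamma**9 * (0.55 * 0.45 * gamma**2) ** n for n in range(300)
)

assert abs(V_mix_series - V_mix) < 1e-12
assert V_mix > V_base  # the improper mixture strictly dominates each base controller
```

The check confirms both that the geometric series matches the closed form and that $V^{K_{\mathrm{mix}}}(s_1) > V^{K_i}(s_1)$.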


(a) Arrival rate: $(\lambda_1, \lambda_2)$ = (0.49, 0.49)


(b) Arrival rate: $(\lambda_1, \lambda_2)$ = (0.49, 0.49)


(c) Arrival rate: $(\lambda_1, \lambda_2) = (0.3, 0.4)$


(d) (Estimated) Value functions for case with the two base policies and Longest Queue First ("LQF")
Figure 9: Softmax policy gradient algorithm applied to the queueing system; the plots show convergence to the best mixture policy.


(e) Case with 3 experts: Always Queue 1, Always Queue 2 and LQF.

K.2. Stationary Bernoulli Queues

We study two different settings: (1) one where the optimal policy is a strictly improper combination of the available controllers, and (2) one where it lies at a corner point, i.e., one of the available controllers is itself optimal. Our simulations show that in both cases, PG converges to the correct controller distribution.

Recall the example that we discussed in Sec. 2.2. We consider the case with Bernoulli arrivals with rates $\lambda = [\lambda_1, \lambda_2]$ and are given two base/atomic controllers $\{K_1, K_2\}$, where controller $K_i$ serves Queue $i$ with probability $1$, $i = 1, 2$. As can be seen in Fig. 9(b), when $\lambda = [0.49, 0.49]$ (equal arrival rates), GradEst converges to an improper mixture policy that serves the queues with probabilities $[0.5, 0.5]$. Note that this strategy also stabilizes the system, whereas both base controllers lead to instability (the queue length of the unserved queue grows without bound). Figure 9(c) shows that with unequal arrival rates too, GradEst quickly converges to the best policy.
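The stabilization claim can be illustrated with a short simulation. The sketch below is our own toy discrete-time model (unit service per slot, seeded randomness), not the exact simulator used for Fig. 9:

```python
import numpy as np

rng = np.random.default_rng(1)
T, lam = 2000, np.array([0.49, 0.49])

q_fixed = np.zeros(2)  # base controller K_1: always serve queue 1
q_mix = np.zeros(2)    # improper mixture: serve each queue w.p. 1/2

for _ in range(T):
    arrivals = (rng.random(2) < lam).astype(float)
    q_fixed += arrivals
    q_mix += arrivals
    q_fixed[0] = max(q_fixed[0] - 1.0, 0.0)  # K_1 serves queue 1 every slot
    i = rng.integers(2)                      # mixture picks a queue uniformly
    q_mix[i] = max(q_mix[i] - 1.0, 0.0)

# Queue 2 is never served under K_1, so it grows linearly (about lam_2 * T),
# while the mixture keeps both queues stable.
```

Under $K_1$ the unserved queue accumulates roughly $\lambda_2 T$ packets, while under the $(0.5, 0.5)$ mixture both queues remain bounded, mirroring the behavior in Fig. 9(b).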

Fig. 9(d) shows the evolution of the value function of GradEst (in blue) compared with those of the base controllers (red) and the Longest Queue First (LQF) policy (black), which, as the name suggests, always serves the longest queue in the system. LQF, like any policy that serves a nonempty queue whenever there is one, is known to be optimal in the sense of delay minimization for this system (Mohan et al., 2016).

Finally, Fig. 9(e) shows the result of the second experimental setting with three base controllers, one of which is delay optimal. The first two are $K_{1}, K_{2}$ as before and the third controller, $K_{3}$ , is LQF. Notice that $K_{1}, K_{2}$ are both queue length-agnostic, meaning they could attempt to serve empty queues as well. LQF, on the other hand, always and only serves nonempty queues. Hence, in this case the optimal policy is attained at one of the corner points, i.e., $[0,0,1]$ . The plot shows the PG algorithm converging to the correct point on the simplex.

Here, we justify the values of the two policies that each always serve one fixed queue, plotted as straight lines in Figure 9(d). Let us find the value of the policy which always serves queue 1; the calculation for the other expert (serving queue 2 only) is similar. Let $q_{i}(t)$ denote the length of queue $i$ at time $t$. We note that since the expert (policy) always recommends


(a) A basic path-graph interference system with $N = 4$ communication links.


(b) The associated conflict (interference) graph is a path-graph.
Figure 10: An example of a path graph network. The interference constraints are such that physically adjacent queues cannot be served simultaneously.

to serve one fixed queue (here, queue 1), the expected cost suffered in any round $t$ is $c_{t} = \mathbb{E}[q_{1}(t) + q_{2}(t)] = 0 + t\,\lambda_{2}$. Let us start with empty queues at $t = 0$.

\begin{array}{l} V^{Expert1}(\underline{\mathbf{0}}) = \mathbb{E}\left[\sum_{t=0}^{T} \gamma^{t} c_{t} \mid Expert1\right] \\ = \sum_{t=0}^{T} \gamma^{t} \cdot t \cdot \lambda_{2} \\ \leqslant \lambda_{2} \cdot \frac{\gamma}{(1 - \gamma)^{2}}. \\ \end{array}

With the values $\gamma = 0.9$ and $\lambda_{2} = 0.49$, we get $V^{Expert1}(\underline{0}) \leqslant 44.1$, which is in good agreement with the bound shown in the figure.
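The bound above follows from the identity $\sum_{t \geqslant 0} t\,\gamma^{t} = \gamma/(1-\gamma)^{2}$, and is quick to check numerically with the values from the figure (the variable names are ours):

```python
gamma, lam2 = 0.9, 0.49

# Truncated discounted series for the expected cost of always serving queue 1;
# 5000 terms is effectively T -> infinity since gamma^t vanishes
series = sum(gamma**t * t * lam2 for t in range(5000))

bound = lam2 * gamma / (1 - gamma) ** 2  # = 0.49 * 0.9 / 0.01 = 44.1

assert series <= bound + 1e-9
assert bound - series < 1e-6  # the bound is tight as T -> infinity
```

The bound evaluates to exactly 44.1, consistent with the straight line plotted in Fig. 9(d).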

K.3. Details of Path (Interference) Graph Networks

Consider a system of parallel transmitter-receiver pairs as shown in Figure 10(a). Due to the physical arrangement of the Tx-Rx pairs, no two adjacent systems can be served simultaneously because of interference. This type of communication system is commonly referred to as a path graph network (Mohan et al., 2020). Figure 10(b) shows the corresponding conflict graph. Each Tx-Rx pair can be thought of as a queue, and an edge between two queues indicates that they cannot be served simultaneously. The sets of queues which can be served simultaneously are called independent sets in the queueing theory literature. In the figure above, the independent sets are $\emptyset, \{1\}, \{2\}, \{3\}, \{4\}, \{1, 3\}, \{2, 4\}, \{1, 4\}$.
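The independent sets listed above can be checked mechanically with a small enumeration over the conflict graph of Fig. 10(b); the helper `is_independent` below is our own, using only the standard library:

```python
from itertools import combinations

edges = {(1, 2), (2, 3), (3, 4)}  # path-graph conflicts: adjacent links interfere

def is_independent(S):
    # S is independent iff no pair of its elements forms an edge
    return all((a, b) not in edges for a in S for b in S if a < b)

ind_sets = [
    set(S)
    for r in range(5)
    for S in combinations(range(1, 5), r)
    if is_independent(S)
]
# Exactly the 8 sets listed above: {}, {1}, {2}, {3}, {4}, {1,3}, {1,4}, {2,4}
```

For a path graph on $N$ vertices the number of independent sets is the Fibonacci number $F_{N+2}$, which for $N = 4$ gives 8, matching the enumeration.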

Finally, in Table 2, we report the mean delay values of the 5 base controllers used in our simulation of Fig. 2(c), Sec. 6. We see that controller $K_{2}$, which was chosen to be MER, indeed has the lowest associated cost, and, as shown in Fig. 2(c), our Softmax PG algorithm (with estimated value functions and gradients) converges to it.

Table 2: Mean Packet Delay Values of Path Graph Network Simulation.

| Controller | Mean delay (# time slots) over 200 trials | Standard deviation |
| --- | --- | --- |
| $K_1$ (MW) | 22.11 | 0.63 |
| $K_2$ (MER) | 20.96 | 0.65 |
| $K_3$ ({1,3}) | 80.10 | 0.92 |
| $K_4$ ({2,4}) | 80.22 | 0.90 |
| $K_5$ ({1,4}) | 80.13 | 0.91 |

K.4. Cartpole Experiments

We investigate further the example in our simulation in which the two constituent controllers are $K_{opt} + \Delta$ and $K_{opt} - \Delta$. We use OpenAI Gym to simulate this situation. Figure 2(b) showed that our Softmax PG algorithm (with estimated values and gradients) converged to an improper mixture of the two controllers, $\approx (0.53, 0.47)$. Let $K_{conv}$ be the (randomized) controller which chooses $K_{1}$ with probability 0.53 and $K_{2}$ with probability 0.47. Recall from Sec. 2.1 that this control law converts the linearized cartpole into an Ergodic Parameter Linear System (EPLS). In Table 3 we report the average number of rounds the pendulum stays upright when each controller is applied for all time, over trajectories of length 500 rounds. The third column displays an interesting feature of our algorithm: over 100 trials, the base controllers fail to stabilize the pendulum in a relatively large number of trials, whereas $K_{conv}$ succeeds most of the time.

Table 3: A table showing the number of rounds the constituent controllers manage to keep the cartpole upright.

| Controller | Mean number of rounds before the pendulum falls ∧ 500 | # Trials out of 100 in which the pendulum falls before 500 rounds |
| --- | --- | --- |
| $K_1$ ($K_{opt} + \Delta$) | 403 | 38 |
| $K_2$ ($K_{opt} - \Delta$) | 355 | 46 |
| $K_{conv}$ | 465 | 8 |

We mention here that if one follows $K^{*}$, the optimal controller matrix obtained by solving the standard Discrete-time Algebraic Riccati Equation (DARE) (Bertsekas, 2011), the pole does not fall over 100 trials. However, as indicated in Sec. 1, constructing the optimal controller for this system from scratch requires sample complexity exponential in the state dimension (Chen & Hazan, 2020). On the other hand, $K_{\mathrm{conv}}$ performs very close to the optimum while being sample efficient.
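For completeness, computing a gain like $K^{*}$ from the DARE amounts to iterating the Riccati recursion. The sketch below does this with NumPy on a hypothetical discrete-time double integrator standing in for the linearized cartpole; the matrices `A`, `B`, `Q`, `R` are illustrative choices of ours, not the paper's system:

```python
import numpy as np

# Hypothetical discrete-time system (double integrator, dt = 0.02),
# a stand-in for the linearized cartpole dynamics
dt = 0.02
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Q, R = np.eye(2), np.array([[1.0]])

# Solve the DARE P = A'PA - A'PB (R + B'PB)^{-1} B'PA + Q by fixed-point iteration
P = Q.copy()
for _ in range(5000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # current gain
    P = A.T @ P @ (A - B @ K) + Q

K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain
rho = max(abs(np.linalg.eigvals(A - B @ K_star)))       # closed-loop spectral radius
```

Since the pair $(A, B)$ is controllable and $Q \succ 0$, the iteration converges to the stabilizing solution, and the closed-loop matrix $A - BK^{*}$ has spectral radius strictly below 1.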

Choice of hyperparameters. In the simulations, we set the learning rate to $10^{-4}$, $\#\text{runs} = 10$, $\#\text{rollouts} = 10$, $\mathsf{lt} = 30$, discount factor $\gamma = 0.9$, and $\alpha = 1/\sqrt{\#\text{runs}}$. All the simulations were run for 20 trials and the results shown are averaged over them. We capped the queue sizes at 1000.

K.5. Some extra simulations for natural-actor-critic based improper learner NACIL

  • First we show a queueing example with 2 queues to be served and two base controllers similar to those discussed in Sec. 2. However, here the two queues have different arrival rates $(\lambda_1, \lambda_2) \equiv (0.4, 0.3)$, i.e., the arrival rates are unequal. We plot in Fig. 11 the probability of choosing the two different controllers. We see that NACIL converges to the "correct" mixture of the base controllers.
  • Next, we show a simulation in the setting of Sec. K.1, the Chain MDP. Recall that this setting has two base controllers $K_{1}$ and $K_{2}$, but a $(1/2, 1/2)$ mixture of the two was shown (analytically) to perform better than either controller individually. As the plot in Fig. 12 shows, NACIL identifies the correct combination and follows it.

Choice of hyperparameters. For the queueing-theoretic simulations of Algorithm 2 (ACIL), we choose $\alpha = 10^{-4}$ and $\beta = 10^{-3}$. We choose the identity mapping $\varphi(s) \equiv s$, where $s$ is the current state of the system: an $N$-length vector whose $i^{th}$ entry is the $i^{th}$ queue length. $\lambda$ was chosen to be 0.1. The other parameters are chosen as $B = 50$, $H = 30$ and $T_{c} = 20$. We use a buffer of size 1000 to keep the states bounded, i.e., if a queue exceeds size 1000, further arrivals are ignored and the queue length is not increased. This is used to keep $\| \varphi(s) \|_2$ bounded across time.

L. Additional Comments

  • Comment on the 'simple' experimental settings. The motivating examples may seem "simple", and trainable from scratch, relative to the state of the art in RL. However, our main point is that there are situations where, for example, one may have trained controllers for a range of environments in simulation, while the real-life environment differs from the simulated ones. We demonstrate that exploiting such basic pre-learnt controllers via our approach can produce a better (meta) controller for a new, unseen environment, instead of learning a controller for the new environment from scratch.


Figure 11: NACIL alg applied to the a queuing system with two queues, having arrival rates $(\lambda_1,\lambda_2)\equiv (0.4,0.3)$ . Plot shows probability of choosing controllers $K_{1}$ and $K_{2}$ averaged over 20 trials


Figure 12: NACIL alg applied to the linear Chain MDP with various randomly chosen initial distribution. Plot shows probability of choosing controller $K_{1}$ averaged over 20 trials

  • On characterizing the performance of the optimal mixture policy. As correctly noted by the reviewer, the inverted pendulum experiment showed that the optimal mixture policy can vastly outperform the component controllers. Currently, however, we do not provide theoretical guarantees on this gap, since it depends on the structure of the policy space and the underlying MDP, which is very challenging to characterize. We hope to explore this in future work.

M. Discussion

We have considered the problem of taking a menu of baseline controllers and combining them via improper probabilistic mixtures to form a superior controller. In many relevant MDP learning settings, we saw that this is indeed possible, and the policy gradient and actor-critic based analyses indicate that this approach may be widely applicable. This work opens up a plethora of avenues. One can consider a richer class of mixtures that looks at the current state and mixes accordingly. For example, an attention model can be used to choose which controller to use, or other state-dependent models can be relevant. Another example is to artificially force switching across controllers to occur less frequently than every round. This can help create momentum and allow the controlled process to 'mix' better when using complex controllers.

A few caveats are in order regarding the potential societal impact and consequences of this work. As such, this paper offers a way of combining or 'blending' a given class of decision-making entities in the hope of producing a 'better' one. In this process, the definitions of what constitutes 'optimal' or 'expected' behavior from a policy are likely to be subjective, and may encode biases and attitudes of the system designer(s). More importantly, it is possible that the base policy class (or some elements of it) have undesirable properties to begin with (e.g., bias or insensitivity), which could get amplified in the improper learning process as an unintended outcome. We sound ample caution to practitioners who contemplate adopting this method.

Finally, in the present setting, the base controllers are fixed. It would be interesting to consider adding adaptive, or 'learning' controllers as well as the fixed ones. Including the base controllers can provide baseline performance below which the performance of the learning controllers would not drop.