diff --git "a/2207.09090/paper_text/intro_method.md" "b/2207.09090/paper_text/intro_method.md"
new file mode 100644
--- /dev/null
+++ "b/2207.09090/paper_text/intro_method.md"
@@ -0,0 +1,1038 @@
+# Method
+
+1. $\cS$: State space
+
+2. $\cA:$ Action space
+
+3. $S:$ Cardinality of $\cS$
+
+4. $A:$ Cardinality of $\cA$
+
+5. $M:$ Number of controllers
+
+6. $K_i:$ Controller $i$, $i=1,\cdots,M$. For a finite state-action space MDP, $K_i$ is a matrix of size $S\times A$, where each row is a probability distribution over the actions.
+
+7. $\cC:$ Given collection of $M$ controllers.
+
+8. $\cI_{soft}(\cC):$ Improper policy class set up by the learner.
+
+9. $\theta\in \Real^M:$ Parameter vector assigned to the controllers, representing their weights, updated each round by the learner.
+
+10. $\pi(.):$ Probability of choosing controllers
+
+11. $\pi(.\given s):$ Probability of choosing an action given state $s$. Note that in our setting, given $\pi(.)$ over controllers (see previous item) and the set of controllers, $\pi(.\given s)$ is completely defined, i.e., $\pi(a\given s)=\sum\limits_{m=1}^{M}\pi(m)K_m(s,a)$. Hence we simply use $\pi$ to denote the policy followed, whenever the context is clear.
+
+12. $r(s,a):$ Immediate (one-step) reward obtained if action $a$ is played in state $s$.
+
+13. $\tP(s'\given s,a):$ Probability of transitioning to state $s'$ from state $s$ having taken action $a$.
+
+14. $V^{\pi}(\rho):=\mathbb{E}_{s_0\sim \rho}\left[V^\pi(s_0) \right] = \mathbb{E}^{\pi}_{\rho}\sum_{t=0}^{\infty}\gamma^tr(s_t,a_t)$ Value function starting with initial distribution $\rho$ over states, and following policy $\pi$.
+
+15. $Q^\pi(s,a):= \expect{r(s,a) + \gamma\sum\limits_{s'\in \cS}\tP(s'\given s,a)V^\pi(s')}$.
+
+16. ${\tilde{Q}}^\pi(s,m):= \expect{\sum\limits_{a\in \cA}K_m(s,a)\left(r(s,a) + \gamma\sum\limits_{s'\in \cS}\tP(s'\given s,a)V^\pi(s')\right)}$.
+
+17. $A^\pi(s,a):=Q^\pi(s,a)-V^\pi(s)$
+
+18. $\tilde{A}^\pi(s,m):=\tilde{Q}^\pi(s,m)-V^\pi(s)$.
+
+19. $d_\nu^\pi(s):=\mathbb{E}_{s_0\sim \nu}\left[(1-\gamma)\sum\limits_{t=0}^\infty \gamma^t\prob{s_t=s\given s_0,\pi,\tP} \right]$. Denotes a distribution over the states, called the "discounted state visitation measure".
+
+20. $c := \inf_{t\geq 1}\min\limits_{m\in \{m'\in[M]:\pi^*(m') >0\}}\pi_{\theta_t}(m)$.
+
+21. $\norm{\frac{d_\mu^{\pi^*}}{\mu}}_\infty = \max_s \frac{d_\mu^{\pi^*}(s)}{\mu(s)}$.
+
+22. $\norm{\frac{1}{\mu}}_\infty = \max_s \frac{1}{\mu(s)}$.
+
+
+
+The Cartpole system. The mass of the pendulum is denoted by $m_p$, that of the cart by $m_k$, the force used to drive the cart by $F$, and the distance of the center of mass of the cart from its starting position by $s$. $\theta$ denotes the angle the pendulum makes with the normal, the length of the pendulum is $2l$, and gravity is denoted by $g$.
+
+
+As shown in Fig. [1](#fig:inverted-pendulum){reference-type="ref" reference="fig:inverted-pendulum"}, it comprises a pendulum whose pivot is mounted on a cart which can be moved in the horizontal direction by applying a force. The objective is to modulate the direction and magnitude of this force $F$ to keep the pendulum from keeling over under the influence of gravity. The state of the system at time $t,$ is given by the 4-tuple $\mathbf{x}(t):=[s,\Dot{s},\theta,\Dot{\theta}]$, with $\mathbf{x}(\cdot)=\mathbf{0}$ corresponding to the pendulum being upright and stationary. One strategy used to design control policies for this system is to first approximate the dynamics around $\mathbf{x}(\cdot)=\mathbf{0}$ with a linear, quadratic cost model, and then design a linear controller for these approximate dynamics. After time discretization, the objective reduces to finding a (potentially randomized) control policy $u\equiv \{u(t),t\geq0\}$ that solves:
+
+$$\begin{eqnarray}
+\label{eqn:cartpole-LQR-approx-WITHOUT-Details}
+\inf_{u} J(\mathbf{x}(0)) &=& \mathbb{E}_{u}\sum_{t=0}^\infty \mathbf{x}^\intercal(t) Q \mathbf{x}(t) + R u^2(t), \nonumber \\
+%s.t. \mathbf{x}(t+1) &=& A_{open}\mathbf{x}(t) + \mathbf{b} u(t)
+%\nonumber\\
+s.t.~{\mathbf{x}}(t+1) &=& \underbrace{\begin{pmatrix}
+0 & 1 & 0 & 0\\
+0 & 0 & \frac{g}{l\left(\frac{4}{3}-\frac{m_p}{m_p+m_k}\right)} & 0 \\
+0 & 0 & 0 & 1 \\
+0 & 0 & \frac{g}{l\left(\frac{4}{3}-\frac{m_p}{m_p+m_k}\right)} & 0 \\
+\end{pmatrix}}_{\color{blue}A_{open}} \mathbf{x}(t)
++ \underbrace{\begin{pmatrix}
+0 \\
+\frac{1}{m_p+m_k}\\
+0 \\
+\frac{1}{l\left(\frac{4}{3}-\frac{m_p}{m_p+m_k}\right)} \\
+\end{pmatrix}}_{\color{blue}\mathbf{b}} u(t).
+\end{eqnarray}$$ Under standard assumptions of controllability and observability, this optimization has a stationary, linear solution $u^*(t) = -\mathbf{K}^\intercal\mathbf{x}(t)$ (details are available in [@bertsekas11dynamic Chap. 3]). Moreover, setting $A:=A_{open}-\mathbf{b}\mathbf{K}^\intercal$, it is well known that the dynamics ${\mathbf{x}}(t+1) = A \mathbf{x}(t),~t\geq0,$ are stable.
+
+In this section we describe the adjustments we made specifically for the cartpole experiments. First, we scale down the estimated gradient of the value function returned by the GradEst subroutine (Algorithm [\[alg:gradEst\]](#alg:gradEst){reference-type="ref" reference="alg:gradEst"}) (in the cartpole simulation only). The scaling that worked for us is $\frac{10}{\norm{\widehat{{\nabla}V^{\pi}(\mu)}}}$.
+
+Next, we provide the values of the constants that were described in Sec. [3](#appendix:Details of Setup and Modelling of the Inverted Pendulum){reference-type="ref" reference="appendix:Details of Setup and Modelling of the Inverted Pendulum"} in Table [1](#table:hyperparameters of inverted pend){reference-type="ref" reference="table:hyperparameters of inverted pend"}.
+
+::: {#table:hyperparameters of inverted pend}
+ Parameter Value
+ -------------------- -------
+ Gravity $g$ 9.8
+ Mass of pole $m_p$ 0.1
+ Length of pole $l$ 1
+ Mass of cart $m_k$ 1
+ Total mass $m_t$ 1.1
+
+ : Values of the hyperparameters used for the cartpole simulation
+:::
+
+For simplicity and ease of understanding, we connect our current discussion to the cartpole example discussed in Sec. [\[sec:motivating-cartpole-example\]](#sec:motivating-cartpole-example){reference-type="ref" reference="sec:motivating-cartpole-example"}. Consider a generic (ergodic) control policy that switches across a menu of controllers $\{K_1,\cdots,K_N\}$. That is, at any time $t$, it chooses controller $K_i,~i\in[N],$ w.p. $p_i$, so that the control input at time $t$ is $u(t)=-\mathbf{K}_i^\intercal\mathbf{x}(t)$ w.p. $p_i.$ Let $A(i) := A_{open}-\mathbf{b}\mathbf{K}_i^\intercal$. The resulting controlled dynamics are given by $$\begin{eqnarray}
+\label{eqn:EPLS-definition}
+ {\mathbf{x}}(t+1) &=& A(r(t)) \mathbf{x}(t)\nonumber\\
+ \mathbf{x}(0) &=& \mathbf{0},
+\end{eqnarray}$$ where $r(t)=i$ w.p. $p_i,$ IID across time. In the literature, this belongs to a class of systems known as *Ergodic Parameter Linear Systems* (EPLS) [@bolzern-etal08almost-sure-stability-ergodic-linear], which are said to be *Exponentially Almost Surely Stable* (EAS) if there exists $\rho>0$ such that for any $\mathbf{x}(0),$ $$\begin{eqnarray}
+\label{eqn:exp-almost-sure-definition}
+ \mathbb{P}\left\lbrace\omega\in\Omega\bigg|\limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\mathbf{x}(t,\omega)}}\leq-\rho\right\rbrace = 1.
+\end{eqnarray}$$ In other words, w.p. 1, the trajectories of the system decay to the origin exponentially fast. The random variable $\lambda(\omega):=\limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\mathbf{x}(t,\omega)}}$ in [\[eqn:exp-almost-sure-definition\]](#eqn:exp-almost-sure-definition){reference-type="eqref" reference="eqn:exp-almost-sure-definition"} is called the *Lyapunov Exponent* of the system. For our EPLS, $$\begin{eqnarray}
+ \lambda(\omega) &=& \limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\mathbf{x}(t,\omega)}}= \limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\prod_{s=1}^tA(r(s,\omega))\mathbf{x}(0)}}\nonumber\\
+ % &=& \limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\prod_{s=1}^tA(r(s,\omega))\mathbf{x}(0)}}\nonumber\\
+ &\leq& \limsup_{t\rightarrow\infty}\cancelto{0}{\frac{1}{t}\log{\norm{\mathbf{x}(0)}}} + \limsup_{t\rightarrow\infty}\frac{1}{t}\log{\norm{\prod_{s=1}^tA(r(s,\omega))}} \nonumber\\
+ &\leq& \limsup_{t\rightarrow\infty}\frac{1}{t}\sum_{s=1}^t\log{\norm{A(r(s,\omega))}} \stackrel{(\ast)}{=} \lim_{t\rightarrow\infty}\frac{1}{t}\sum_{s=1}^t\log{\norm{A(r(s,\omega))}}\nonumber\\
+ % &\stackrel{(\ast)}{=}& \lim_{t\rightarrow\infty}\frac{1}{t}\sum_{s=1}^t\log{\norm{A(r(s,\omega))}} \nonumber\\
+ &\stackrel{(\dagger)}{=}& \mathbb{E}\log{\norm{A(r)}}= \sum_{i=1}^N p_i \log{\norm{A(i)}},\label{eqn:lyapExponentLQR}%\nonumber\\
+ % &=& \sum_{i=1}^N p_i \log{\norm{A(i)}},\label{eqn:lyapExponentLQR}
+\end{eqnarray}$$ where the equalities $(\ast)$ and $(\dagger)$ are due to the ergodic law of large numbers. The control policy can now be designed by choosing $\{p_1,\cdots,p_N\}$ such that $\lambda(\omega)<-\rho$ for some $\rho>0$, ensuring exponentially almost sure stability.
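A quick numerical illustration of this bound; the two $2\times2$ matrices below are illustrative scaled rotations (not derived from any cartpole model), with switching probabilities $p=(0.8,0.2)$ also chosen for illustration. The running $\log$-norm is accumulated with renormalization to avoid underflow.

```python
import numpy as np

def rot(phi, scale):
    # Scaled 2-D rotation; its spectral norm is exactly |scale|.
    return scale * np.array([[np.cos(phi), -np.sin(phi)],
                             [np.sin(phi),  np.cos(phi)]])

A = [rot(0.7, 0.5), rot(-0.3, 1.2)]   # A(1) contracting, A(2) expanding
p = np.array([0.8, 0.2])              # switching probabilities

# Upper bound from the display above: sum_i p_i log ||A(i)||.
bound = sum(pi * np.log(np.linalg.norm(Ai, 2)) for pi, Ai in zip(p, A))

rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
log_growth, T = 0.0, 20000
for _ in range(T):
    x = A[rng.choice(2, p=p)] @ x
    n = np.linalg.norm(x)
    log_growth += np.log(n)   # accumulate, then renormalize to avoid underflow
    x /= n
lam_hat = log_growth / T      # empirical Lyapunov exponent
print(bound, lam_hat)
```

Because the factors here are scaled rotations, $\norm{\mathbf{x}(t)}$ shrinks or grows by exactly $\norm{A(i)}$ at each step, so the empirical exponent essentially attains the bound; for general matrices the bound is only an upper estimate.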
+
+The system, shown in Fig. [2](#subfig:the-two-queues){reference-type="ref" reference="subfig:the-two-queues"}, comprises two queues fed by independent, stochastic arrival processes $A_i(t),i\in\{1,2\},t\in\mathbb{N}.$ The length of Queue $i$, measured at the beginning of time slot $t,$ is denoted by $Q_i(t)\in\mathbb{Z}_+$. A common server serves both queues and can drain at most one packet from the system in a time slot[^1]. The server, therefore, needs to decide which of the two queues it intends to serve in a given slot (we assume that once the server chooses to serve a packet, service succeeds with probability 1). The server's decision is denoted by the vector $\mathbf{D}(t)\in\mathcal{A}:=\left\lbrace[0,0],[1,0],[0,1]\right\rbrace,$ where a "$1$" denotes service and a "$0$" denotes lack thereof.
+
+
+
+$Q_i(t)$ is the length of Queue $i$ ($i \in \{1, 2\}$) at the beginning of time slot $t$, $A_i(t)$ is its packet arrival process and $\mathbf{D}(t) \in \left\lbrace[0, 0], [1, 0], [0, 1]\right\rbrace$.
+
+
+For simplicity, we assume that the processes $\left(A_i(t)\right)_{t=0}^\infty$ are both IID Bernoulli, with $\mathbb{E}A_i(t)=\lambda_i.$ Note that the arrival rate $\boldsymbol{\lambda}=[\lambda_1,\lambda_2]$ is unknown to the learner. Defining $(x)^+:=\max\{0,x\},~\forall~x\in\mathbb{R},$ queue length evolution is given by the equations $$\begin{equation}
+ Q_i(t+1) = \left(Q_i(t)-D_i(t)\right)^+ + A_i(t+1),~i\in\{1,2\}.
+\end{equation}$$
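These dynamics are easy to simulate. Below is a sketch assuming illustrative rates $\boldsymbol{\lambda}=(0.3,0.4)$ and a longest-queue-first service rule; the rule is only for illustration and is not the learning algorithm studied in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([0.3, 0.4])     # arrival rates (unknown to the learner)
Q = np.zeros(2, dtype=int)     # [Q_1(t), Q_2(t)]
T, total = 50_000, 0

for _ in range(T):
    D = np.zeros(2, dtype=int)
    if Q.max() > 0:            # illustrative rule: serve the longer queue
        D[np.argmax(Q)] = 1
    A = (rng.random(2) < lam).astype(int)   # Bernoulli(lambda_i) arrivals
    Q = np.maximum(Q - D, 0) + A            # Q_i(t+1) = (Q_i - D_i)^+ + A_i
    total += Q.sum()

mean_total = total / T
print(mean_total)
```

With total load $\lambda_1+\lambda_2=0.7<1$, the system is stable under this rule and the long-run mean total queue length stays small.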
+
+We show here that the value function $V^{\pi}(\rho)$ is in general non-concave, and hence standard convex optimization techniques for maximization may get stuck in local optima. We note once again that this is *different* from the non-concavity of $V^\pi$ when the parameterization is over the entire state-action space, i.e., $\Real^{S\times A}$.
+
+We show here that the value function is non-concave for both direct and SoftMax parameterization, where by "direct" parameterization we mean that the controllers $K_m$ are parameterized by weights $\theta_m\in \Real$, with $\theta_i\geq 0,~\forall i\in[M]$ and $\sum\limits_{i=1}^M\theta_i=1$. We give the argument for direct parameterization first, and outline the analogous argument for softmax parameterization in Note [2](#remark:nonconcavity for softmax){reference-type="ref" reference="remark:nonconcavity for softmax"}.
+
+::: {#lemma:nonconcavity of V .lemma}
+**Lemma 1**. *(Non-concavity of Value function)There is an MDP and a set of controllers, for which the maximization problem of the value function (i.e. [\[eq:main optimization problem\]](#eq:main optimization problem){reference-type="eqref" reference="eq:main optimization problem"}) is non-concave for SoftMax parameterization, i.e., $\theta\mapsto V^{\pi_{\theta}}$ is non-concave.*
+:::
+
+:::: proof
+*Proof.*
+
+![An example of an MDP with controllers as defined in [\[eqn:controllers-for-non-concavity-counterexample\]](#eqn:controllers-for-non-concavity-counterexample){reference-type="eqref" reference="eqn:controllers-for-non-concavity-counterexample"} having a non-concave value function. The MDP has $S=5$ states and $A=3$ actions (including a dummy null action). States $s_3,s_4\text{ and }s_5$ are terminal states. The only transition with nonzero reward is $s_2\rightarrow s_4.$](figures/Nonconcave.png){#fig:example showing non-concavity of the value function}
+
+Consider the MDP shown in Figure [3](#fig:example showing non-concavity of the value function){reference-type="ref" reference="fig:example showing non-concavity of the value function"} with 5 states, $s_1,\ldots,s_5$. States $s_3,s_4$ and $s_5$ are terminal states. The figure also shows the allowed transitions and the rewards obtained from those transitions. Let the action set $\cA$ consist of three actions $\{a_1,a_2,a_3\} \equiv \{\tt{right}, \tt{up}, \tt{null}\}$, where 'null' is a dummy action included to accommodate the three terminal states. Let us consider the case when $M=2$. The two controllers $K_i\in \Real^{S\times A}$, $i=1,2$ (where each row is a probability distribution over $\cA$) are shown below. $$\begin{equation}
+ K_1 = \begin{bmatrix}1/4 & 3/4 & 0\\ 3/4 & 1/4 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}, K_2 = \begin{bmatrix}3/4 & 1/4 & 0\\ 1/4 & 3/4 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}.
+ \label{eqn:controllers-for-non-concavity-counterexample}
+\end{equation}$$ Let $\theta^{(1)} = (1,0)\transpose$ and $\theta^{(2)} = (0,1)\transpose$. Let us fix the initial state to be $s_1$. Since a nonzero reward is only earned during a $s_2\rightarrow s_4$ transition, we note for any policy $\pi$ that $V^\pi(s_1)=\pi(a_1|s_1)\pi(a_2|s_2)r$. We also have, $$(K_1 + K_2)/2 = \begin{bmatrix}1/2 & 1/2 & 0\\ 1/2 & 1/2 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}.$$
+
+We will show that $\frac{1}{2}V^{\pi_{\theta^{(1)}}}+\frac{1}{2}V^{\pi_{\theta^{(2)}}} > V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}$.\
+We observe the following. $$\begin{align*}
+    V^{\pi_{\theta^{(1)}}}(s_1) &= V^{K_1}(s_1) = (1/4)\cdot(1/4)\cdot r=r/16.\\
+    V^{\pi_{\theta^{(2)}}}(s_1) &= V^{K_2}(s_1) = (3/4)\cdot(3/4)\cdot r=9r/16.
+\end{align*}$$ where $V^{K}(s)$ denotes the value obtained by starting from state $s$ and following a controller matrix $K$ for all time.
+
+On the other hand, we have $$V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}} = V^{\left(K_1+K_2\right)/2}(s_1) = (1/2)\cdot(1/2)\cdot r=r/4.$$ Hence we see that $$\frac{1}{2}V^{\pi_{\theta^{(1)}}}+\frac{1}{2}V^{\pi_{\theta^{(2)}}} = r/32+9r/32 = 10r/32 = 5r/16 > r/4 = V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}.$$ This shows that $\theta\mapsto V^{\pi_{\theta}}$ is non-concave, which concludes the proof for direct parameterization.
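The counterexample can be verified mechanically; a sketch with $r=1$, using the closed form $V^{K}(s_1)=K(s_1,a_1)\,K(s_2,a_2)\,r$ derived above:

```python
import numpy as np

r = 1.0
K1 = np.array([[0.25, 0.75, 0.0], [0.75, 0.25, 0.0],
               [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
K2 = np.array([[0.75, 0.25, 0.0], [0.25, 0.75, 0.0],
               [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])

def value_s1(K):
    # Only the path s1 --a1--> s2 --a2--> s4 earns reward r.
    return K[0, 0] * K[1, 1] * r

v1, v2 = value_s1(K1), value_s1(K2)       # r/16 and 9r/16
v_mid = value_s1((K1 + K2) / 2)           # r/4
print(v1, v2, v_mid)
```

The chord $\frac{1}{2}(v_1+v_2)$ lies strictly above the midpoint value $v_{\rm mid}$, which is exactly the non-concavity claimed.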
+
+::: {#remark:nonconcavity for softmax .remark}
+*Remark 2*. For softmax parameterization, we choose the same 2 controllers $K_1,K_2$ as above. Fix some $\epsilon\in (0,1)$, $\epsilon\neq 1/2$, and set $\theta^{(1)}= \left(\log(1-\epsilon), \log\epsilon \right)\transpose$ and $\theta^{(2)}= \left(\log\epsilon, \log(1-\epsilon) \right)\transpose$. A similar calculation using the softmax projection, and the fact that $\pi_\theta(a|s) = \sum\limits_{m=1}^M\pi_\theta(m)K_m(s,a)$, shows that under $\theta^{(1)}$ we follow the matrix $(1-\epsilon)K_1+\epsilon K_2$, which yields a value of $\left(1/4+\epsilon/2\right)^2r$. Under $\theta^{(2)}$ we follow the matrix $\epsilon K_1+(1-\epsilon) K_2$, which yields a value of $\left(3/4-\epsilon/2\right)^2r$. On the other hand, $(\theta^{(1)}+\theta^{(2)})/2$ amounts to playing the matrix $(K_1+K_2)/2$, yielding a value of $r/4$, as above. One can easily verify that $\left(1/4+\epsilon/2\right)^2r + \left(3/4-\epsilon/2\right)^2r > r/2$. This shows the non-concavity of $\theta\mapsto V^{\pi_{\theta}}$ under softmax parameterization.
+:::
+
+ ◻
+::::
+
+Consider the same MDP as in Sec. [6](#appendix:nonconcavity of V){reference-type="ref" reference="appendix:nonconcavity of V"}, but with different base controllers. Let the initial state be $s_1$.
+
+The two base controllers $K_i\in \Real^{S\times A}$, $i=1,2$ (where each row is a probability distribution over $\cA$) are shown below. $$\begin{equation}
+ K_1 = \begin{bmatrix}1/4 & 3/4 & 0\\ 1/4 & 3/4 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}, K_2 = \begin{bmatrix}3/4 & 1/4 & 0\\ 3/4 & 1/4 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}.
+ \label{eqn:controllers-for-non-concavity-counterexample1}
+\end{equation}$$ Let $\theta^{(1)} = (1,0)\transpose$ and $\theta^{(2)} = (0,1)\transpose$. Let us fix the initial state to be $s_1$. Since a nonzero reward is only earned during a $s_2\rightarrow s_4$ transition, we note for any policy $\pi$, that $V^\pi(s_1)=\pi(a_1|s_1)\pi(a_2|s_2)r$ and $V^\pi(s_2)=\pi(a_2|s_2)r$. Note here that the optimal policy of this MDP is *deterministic* with $\pi^*(a_1|s_1)=1$ and $\pi^*(a_2|s_2)=1$. The transitions are all deterministic.
+
+However, notice that the optimal policy (with initial state $s_1$) given $K_1$ and $K_2$ is a *strict mixture*, because, given any $\boldsymbol{\theta}=[\theta, 1-\theta],~\theta\in[0,1],$ the value of the policy $\pi_{\boldsymbol{\theta}}$ is $$\begin{equation}
+    v^{\pi_{\boldsymbol{\theta}}}=\frac{1}{16}(3-2\theta)(1+2\theta)r,
+\end{equation}$$ which is maximized at $\theta=1/2.$ This means that the optimal *non-deterministic* policy chooses $K_1$ and $K_2$ with probabilities $(1/2,1/2)$, i.e., $$K^*=(K_1 + K_2)/2 = \begin{bmatrix}1/2 & 1/2 & 0\\ 1/2 & 1/2 & 0\\ 0 & 0 & 1\\ 0 & 0 & 1\\ 0 & 0 & 1 \end{bmatrix}.$$\
+We observe the following. $$\begin{align*}
+    V^{\pi_{\theta^{(1)}}}(s_1) &= V^{K_1}(s_1) = (1/4)\cdot(3/4)\cdot r=3r/16.\\
+    V^{\pi_{\theta^{(2)}}}(s_1) &= V^{K_2}(s_1) = (3/4)\cdot(1/4)\cdot r=3r/16.\\
+    V^{\pi_{\theta^{(1)}}}(s_2) &= V^{K_1}(s_2) = (3/4)\cdot r=3r/4.\\
+ % V^{\pi_{\theta^{(2)}}}(s_2) &= V^{K_2}(s_2) = (1/4).r=r/4.
+\end{align*}$$
+
+On the other hand we have, $$\begin{align*}
+V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}(s_1) = V^{K^*}(s_1) = (1/2)\cdot(1/2)\cdot r=r/4.\\
+V^{\pi_{\left(\theta^{(1)}+\theta^{(2)}\right)/2}}(s_2) = V^{K^*}(s_2) = (1/2)\cdot r=r/2.\\
+\end{align*}$$ We see that $V^{K^*}(s_1)> \max\{V^{K_1}(s_1), V^{K_2}(s_1)\}$. However, $V^{K^*}(s_2)< V^{K_1}(s_2)$. This implies that playing according to an improved mixture policy (here the optimal given the initial state is $s_1$) does not necessarily improve the value across *all* states.
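The strict-mixture optimum can also be checked numerically; a sketch (with $r=1$) sweeping the mixture weight $\theta$ for the controllers above:

```python
import numpy as np

r = 1.0
K1 = np.array([[0.25, 0.75, 0.0], [0.25, 0.75, 0.0],
               [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
K2 = np.array([[0.75, 0.25, 0.0], [0.75, 0.25, 0.0],
               [0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])

def value_s1(theta):
    K = theta * K1 + (1.0 - theta) * K2   # mixture controller
    return K[0, 0] * K[1, 1] * r          # pi(a1|s1) * pi(a2|s2) * r

thetas = np.linspace(0.0, 1.0, 1001)
vals = np.array([value_s1(t) for t in thetas])
best = thetas[np.argmax(vals)]
print(best, vals.max())
```

The sweep finds the maximizer at $\theta=1/2$ with value $r/4$, strictly above the $3r/16$ achieved by either base controller alone.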
+
+In this section we consider the instructive sub-case $S=1$, which is the Multi-armed Bandit (MAB) setting. We provide regret bounds for two cases: (1) when the value gradient $\frac{dV^{\pi_{\theta_t}}(\mu)}{d\theta_t}$ (in the gradient update) is available in each round, and (2) when it needs to be estimated.
+
+Note that each controller in this case is a probability distribution over the $A$ arms of the bandit. We consider the scenario where the agent, at each time $t\geq 1$, has to choose a probability distribution $K_{m_t}$ from a set of $M$ probability distributions over actions $\cA$. She then plays an action $a_t\sim K_{m_t}$. This is different from standard MABs because the learner cannot choose the actions directly; instead, she chooses from a *given* set of controllers to play actions. Note that the $V$ function takes no state argument since $S=1$. Let $\mu\in [0,1]^A$ be the mean vector of the arms $\cA$. The value function for any given mixture $\pi\in \cP([M])$ is $$\begin{align}
+\label{eq:bandits-value function}
+ V^\pi &\bydef \expect{\sum\limits_{t=0}^\infty \gamma^tr_t\given \pi } = \sum\limits_{t=0}^\infty \gamma^t\expect{r_t\given \pi}\nonumber\\
+    &=\sum\limits_{t=0}^\infty \gamma^t\sum\limits_{a\in \cA}\sum\limits_{m=1}^{M}\pi(m)K_m(a)\mu_a \nonumber\\
+ &= \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\pi_m \mu\transpose K_m = \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\pi_m \mathfrak{r}^\mu_m.
+\end{align}$$ where the interpretation of $\kr_m^\mu$ is that it is the mean reward one obtains if controller $m$ is chosen at any round $t$. Since $V^{\pi}$ is linear in $\pi$, the maximum is attained at one of the base controllers: $\pi^*$ puts mass 1 on $m^*$, where $m^*:= \argmax\limits_{m\in [M]} V^{K_m},$ and $V^{K_m}$ is the value obtained using $K_m$ for all time. In the sequel, we assume $\Delta_i\bydef \mathfrak{r}^\mu_{m^*}-\mathfrak{r}^\mu_i>0$ for all $i\neq m^*$.
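A compact numerical sketch of this computation, with an illustrative mean vector $\mu$ and two illustrative controllers (both are assumptions for the example); it confirms that $V^\pi$ is linear in $\pi$, so no strict mixture can beat the best base controller:

```python
import numpy as np

gamma = 0.9
mu = np.array([0.9, 0.5, 0.2])          # illustrative arm means
K = np.array([[0.8, 0.1, 0.1],          # two controllers over A = 3 arms
              [0.1, 0.1, 0.8]])
kr = K @ mu                             # mean reward of each controller

def V(pi):
    # V^pi = (1 / (1 - gamma)) * sum_m pi_m * kr_m
    return float(pi @ kr) / (1.0 - gamma)

pi_a, pi_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
pi_mid = 0.5 * (pi_a + pi_b)
print(V(pi_a), V(pi_b), V(pi_mid))
```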
+
+With access to the exact value gradient at each step, we have the following result, when Softmax PG (Algorithm [\[alg:mainPolicyGradMDP\]](#alg:mainPolicyGradMDP){reference-type="ref" reference="alg:mainPolicyGradMDP"}) is applied for the bandits-over-bandits case.
+
+::: restatable
+theoremconvergencebandits[]{#thm:convergence for bandit label="thm:convergence for bandit"} With $\eta=\frac{2(1-\gamma)}{5}$ and with $\theta^{(1)}_m=1/M$ for all $m\in [M]$, and with access to the true gradient, we have $\forall t\geq 1$, $$V^{\pi^*}-V^{\pi_{\theta_t}} \leq \frac{5}{1-\gamma} \frac{M^2}{t}.$$
+:::
+
+Also, defining regret for a time horizon of $T$ rounds as $$\begin{equation}
+\label{eq:regret definition MAB}
+ \mathcal{R}(T):= \sum\limits_{t=1}^T V^{\pi^*} - V^{\pi_{\theta_t}},
+\end{equation}$$ we show as a corollary to Thm. [\[thm:convergence for bandit\]](#thm:convergence for bandit){reference-type="ref" reference="thm:convergence for bandit"} that,
+
+::: {#cor:regret true gradient MAB .corollary}
+**Corollary 3**. *$$\mathcal{R}(T)\leq \min\left\{ \frac{5M^2}{1-\gamma}{\color{blue}\log T}, \sqrt{\frac{5}{1-\gamma}}M{\color{blue}\sqrt{T}} \right\}.$$*
+:::
+
+:::::::::::: proof
+*Proof.* Recall from eq ([\[eq:bandits-value function\]](#eq:bandits-value function){reference-type="ref" reference="eq:bandits-value function"}) that the value function for any given policy $\pi \in \cP([M])$, that is, a distribution over the given $M$ controllers (which are themselves distributions over actions $\cA$), can be simplified as: $$V^\pi = \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\pi_m \mu\transpose K_m = \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\pi_m \mathfrak{r}^\mu_m$$ where $\mu$ here is the (unknown) vector of mean rewards of the arms $\cA$. Here, $\kr^\mu_m:= \mu\transpose K_m$, $m=1,\cdots, M$, represents the mean reward obtained by choosing to play controller $K_m, m\in [M]$. For ease of notation, we will drop the superscript $\mu$ in the proofs of this section. We first derive a simplification of the gradient of the value function w.r.t. the parameter $\theta$. Fix an $m'\in [M]$, $$\begin{equation}
+\label{eq:grad simplification for MAB}
+    \frac{\partial}{\partial \theta_{m'}} V^{\pi_\theta} = \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\frac{\partial \pi_\theta(m)}{\partial \theta_{m'}} \mathfrak{r}_m = \frac{1}{1-\gamma} \sum\limits_{m=1}^{M}\pi_\theta(m) \left\{\ind_{mm'}-\pi_\theta(m') \right\} \kr_m = \frac{1}{1-\gamma}\pi_\theta(m')\left(\kr(m')-\pi_\theta\transpose\kr\right).
+\end{equation}$$ Next we show that $V^\pi$ is $\beta$-smooth. A function $f:\Real^M\to \Real$ is $\beta$-smooth if, $\forall~\theta',\theta \in \Real^M$, $$\abs{f(\theta') - f(\theta) -\innprod{\frac{d}{d\theta}f(\theta), \theta'-\theta}} \leq \frac{\beta}{2} \norm{\theta'-\theta}_2^2.$$
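The gradient simplification above can be sanity-checked against finite differences; a sketch with illustrative $\kr$ values:

```python
import numpy as np

gamma = 0.9
kr = np.array([0.79, 0.30, 0.55])   # illustrative controller mean rewards

def softmax(theta):
    z = np.exp(theta - theta.max())
    return z / z.sum()

def V(theta):
    return float(softmax(theta) @ kr) / (1.0 - gamma)

def grad_analytic(theta):
    pi = softmax(theta)
    # (d/d theta_m) V = (1/(1-gamma)) * pi(m) * (kr(m) - pi^T kr)
    return pi * (kr - pi @ kr) / (1.0 - gamma)

theta = np.array([0.3, -0.2, 0.1])
h = 1e-5
grad_fd = np.array([(V(theta + h * e) - V(theta - h * e)) / (2 * h)
                    for e in np.eye(3)])
err = np.abs(grad_analytic(theta) - grad_fd).max()
print(err)
```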
+
+Let $S:=\frac{d^2}{d\theta^2} V^{\pi_{\theta}}$. This is a matrix of size $M\times M$. Let $1\leq i,j\leq M$. $$\begin{align}
+ S_{i,j}&=\left(\frac{d}{d\theta}\left(\frac{d}{d\theta}V^{\pi_{\theta}} \right)\right)_{i,j}\\
+ &=\frac{1}{1-\gamma} \frac{d(\pi_\theta(i)(\kr(i)-\pi_\theta\transpose \kr))}{d\theta_j}\\
+ &=\frac{1}{1-\gamma} \left(\frac{d\pi_\theta(i)}{d\theta_j}(\kr(i)-\pi_\theta\transpose \kr) + \pi_\theta(i)\frac{d(\kr(i)-\pi_\theta\transpose \kr)}{d\theta_j} \right)\\
+    &=\frac{1}{1-\gamma} \left(\ind_{ij}\,\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr) - \pi_\theta(i)\pi_\theta(j)(\kr(i)-\pi_\theta\transpose\kr) - \pi_\theta(i)\pi_\theta(j)(\kr(j)-\pi_\theta\transpose\kr) \right).
+\end{align}$$ Next, let $y\in \Real^M$, $$\begin{align*}
+ \abs{y\transpose Sy} &= \abs{\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}S_{ij}y(i)y(j)}\\
+    &=\frac{1}{1-\gamma} \abs{\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\left(\ind_{ij}\,\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr) - \pi_\theta(i)\pi_\theta(j)(\kr(i)-\pi_\theta\transpose\kr) - \pi_\theta(i)\pi_\theta(j)(\kr(j)-\pi_\theta\transpose\kr) \right) y(i)y(j) }\\
+ &=\frac{1}{1-\gamma} \abs{\sum\limits_{i=1}^{M}\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr)y(i)^2 - 2\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\pi_\theta(i)\pi_\theta(j)(\kr(i)-\pi_\theta\transpose\kr) y(i)y(j) }\\
+ &= \frac{1}{1-\gamma} \abs{ \sum\limits_{i=1}^{M}\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr)y(i)^2 -2 \sum\limits_{i=1}^{M}\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr) y(i)\sum\limits_{j=1}^{M}\pi_\theta(j)y(j) }\\
+ &\leq \frac{1}{1-\gamma} \abs{ \sum\limits_{i=1}^{M}\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr)y(i)^2} +\frac{2}{1-\gamma}\abs{ \sum\limits_{i=1}^{M}\pi_\theta(i)(\kr(i)-\pi_\theta\transpose\kr) y(i)\sum\limits_{j=1}^{M}\pi_\theta(j)y(j) } \\
+    &\leq \frac{1}{1-\gamma} \norm{\pi_\theta\odot (\kr-\pi_\theta\transpose\kr)}_\infty \norm{y\odot y }_1 + \frac{2}{1-\gamma} \norm{\pi_\theta\odot (\kr-\pi_\theta\transpose\kr)}_1\cdot\norm{y}_\infty\cdot \norm{\pi_\theta}_1\norm{y}_\infty.
+\end{align*}$$ The last inequality follows from Hölder's inequality (rewards are bounded in $[0,1]$). We observe that, $$\begin{align*}
+    \norm{\pi_\theta\odot (\kr-\pi_\theta\transpose\kr)}_1 &=\sum\limits_{i=1}^{M}\abs{\pi_\theta(i)(\kr(i) -\pi_\theta\transpose\kr) }\\
+    &= \sum\limits_{i=1}^{M}\pi_\theta(i)\abs{\kr(i) -\pi_\theta\transpose\kr }\\
+    &\leq\max\limits_{i=1,\ldots, M} \abs{\kr(i) -\pi_\theta\transpose\kr }\leq 1.
+\end{align*}$$ Next, for any $i\in [M]$, $$\begin{align*}
+    \abs{\pi_\theta(i) (\kr(i)-\pi_\theta\transpose\kr) } &=\abs{\pi_\theta(i)\kr(i) -\pi_\theta(i)^2\kr(i)-\sum\limits_{j\neq i}\pi_\theta(i)\pi_\theta(j)\kr(j) }\\
+    &\leq\pi_\theta(i)(1-\pi_\theta(i)) + \pi_\theta(i) (1-\pi_\theta(i)) \leq 2\cdot 1/4 =1/2.
+\end{align*}$$ Combining the above two inequalities with the fact that $\norm{\pi_\theta}_1=1$ and $\norm{y}_\infty \leq \norm{y}_2$, we get, $$\abs{y\transpose Sy} \leq \frac{1}{1-\gamma} \norm{\pi_\theta\odot (\kr-\pi_\theta\transpose\kr)}_\infty \norm{y\odot y }_1 + \frac{2}{1-\gamma} \norm{\pi_\theta\odot (\kr-\pi_\theta\transpose\kr)}_1\cdot\norm{y}_\infty\cdot \norm{\pi_\theta}_1\norm{y}_\infty \leq \frac{1}{1-\gamma}(1/2+2)\norm{y}_2^2.$$ Hence $V^{\pi_\theta}$ is $\beta$-smooth with $\beta = \frac{5}{2(1-\gamma)}$.
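This smoothness bound can be probed numerically; a sketch (random $\theta$ and $\kr\in[0,1]^M$ are illustrative choices) that forms the Hessian by finite differences of the analytic gradient and compares its operator norm to $\beta=\frac{5}{2(1-\gamma)}$:

```python
import numpy as np

gamma = 0.5
beta = 5.0 / (2.0 * (1.0 - gamma))   # claimed smoothness constant (= 5 here)
rng = np.random.default_rng(1)
M = 4
kr = rng.random(M)                   # controller rewards in [0, 1]
theta = rng.normal(size=M)

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

def grad(t):
    pi = softmax(t)
    return pi * (kr - pi @ kr) / (1.0 - gamma)

# Hessian of V via central differences of the analytic gradient.
h = 1e-5
H = np.array([(grad(theta + h * e) - grad(theta - h * e)) / (2 * h)
              for e in np.eye(M)])
H = 0.5 * (H + H.T)                  # symmetrize away numerical noise
op_norm = np.abs(np.linalg.eigvalsh(H)).max()
print(op_norm, beta)
```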
+
+We establish a lower bound on the norm of the gradient of the value function at every step $t$ as below (inequalities of this type are called Łojasiewicz inequalities [@lojasiewicz1963equations]).
+
+::: {#lemma:nonuniform lojaseiwicz MAB .lemma}
+**Lemma 4**. *\[Lower bound on norm of gradient\] $$\norm{\frac{\partial V^{\pi_\theta}}{\partial \theta}}_2 \geq \pi_{\theta}(m^*) \left(V^{\pi^*}-V^{\pi_{\theta}}\right).$$*
+:::
+
+::: proof
+*Proof.* Recall the simplification of the gradient of $V^\pi$, i.e., eq ([\[eq:grad simplification for MAB\]](#eq:grad simplification for MAB){reference-type="ref" reference="eq:grad simplification for MAB"}): $$\begin{align*}
+    \frac{\partial}{\partial \theta_{m}} V^{\pi_\theta} &= \frac{1}{1-\gamma} \sum\limits_{m'=1}^M \pi_\theta(m') \left\{\ind_{m'm}-\pi_\theta(m) \right\} \kr_{m'}\\
+    &=\frac{1}{1-\gamma} \pi(m)\left(\kr(m)-\pi\transpose \kr \right).
+\end{align*}$$ Taking norms on both sides, $$\begin{align*}
+ \norm{\frac{\partial}{\partial \theta} V^{\pi_\theta}} &=\frac{1}{1-\gamma} \sqrt{\sum\limits_{m=1}^{M}(\pi(m))^2\left(\kr(m)-\pi\transpose \kr \right)^2 }\\
+ &\geq \frac{1}{1-\gamma} \sqrt{ (\pi(m^*))^2\left(\kr(m^*)-\pi\transpose \kr \right)^2 }\\
+ &=\frac{1}{1-\gamma} (\pi(m^*))\left(\kr(m^*)-\pi\transpose \kr \right) \\
+ &=\frac{1}{1-\gamma} (\pi(m^*))\left(\pi^*-\pi\right)\transpose \kr \\
+ &=(\pi(m^*)) \left[ V^{\pi^*}-V^{\pi_\theta}\right].
+\end{align*}$$ where $\pi^*=e_{m^*}$. ◻
+:::
+
+We will now prove Theorem [\[thm:convergence for bandit\]](#thm:convergence for bandit){reference-type="ref" reference="thm:convergence for bandit"} and corollary [3](#cor:regret true gradient MAB){reference-type="ref" reference="cor:regret true gradient MAB"}. We restate the result here.
+
+::: restatable
+theoremconvergencebandits[]{#thm:convergence for bandit label="thm:convergence for bandit"} With $\eta=\frac{2(1-\gamma)}{5}$ and with $\theta^{(1)}_m=1/M$ for all $m\in [M]$, and with access to the true gradient, we have $\forall t\geq 1$, $$V^{\pi^*}-V^{\pi_{\theta_t}} \leq \frac{5}{1-\gamma} \frac{M^2}{t}.$$
+:::
+
+:::::::: proof
+*Proof.* First, note that since $V^\pi$ is smooth we have: $$\begin{align*}
+ V^{\pi_{\theta_t}}- V^{\pi_{\theta_{t+1}}} &\leq -\innprod{\frac{d}{d\theta_t}V^{\pi_{\theta_t}}, \theta_{t+1}-\theta_t} +\frac{5}{2(1-\gamma)} \norm{\theta_{t+1}-\theta_t}_2^2\\
+ &=-\eta\norm{ \frac{d}{d\theta_t}V^{\pi_{\theta_t}}}_2^2 + \frac{5}{4(1-\gamma)}\eta^2 \norm{ \frac{d}{d\theta_t}V^{\pi_{\theta_t}}}_2^2\\
+ &=\norm{ \frac{d}{d\theta_t}V^{\pi_{\theta_t}}}_2^2\left(\frac{5\eta^2}{4(1-\gamma)} -\eta \right)\\
+ &= -\left(\frac{1-\gamma}{5}\right) \norm{ \frac{d}{d\theta_t}V^{\pi_{\theta_t}}}_2^2.\\
+    &\leq -\left(\frac{1-\gamma}{5}\right) (\pi_{\theta_t}(m^*))^2 \left[ V^{\pi^*}-V^{\pi_{\theta_t}}\right]^2 \qquad \text{Lemma \ref{lemma:nonuniform lojaseiwicz MAB}}\\
+    &\leq -\left(\frac{1-\gamma}{5}\right) (\underbrace{\inf\limits_{1\leq s\leq t}\pi_{\theta_s}(m^*)}_{=:c_t})^2 \left[ V^{\pi^*}-V^{\pi_{\theta_t}}\right]^2.
+\end{align*}$$ The first inequality is by smoothness; the first equality is by the update equation in algorithm [\[alg:mainPolicyGradMDP\]](#alg:mainPolicyGradMDP){reference-type="ref" reference="alg:mainPolicyGradMDP"}.
+
+Next, let $\delta_t:= V^{\pi^*}-V^{\pi_{\theta_t}}$. We have, $$\begin{equation}
+\label{eq:induction hyp MAB}
+ \delta_{t+1}-\delta_t \leq -\frac{(1-\gamma)}{5}c_t^2 \delta_t^2.
+\end{equation}$$ **Claim:** $\forall t\geq1, \delta_t\leq \frac{5}{c_t^2(1-\gamma)} \frac{1}{t}.$\
+We prove the claim by using induction on $t\geq 1$.\
+[Base case.]{.underline} Since $\delta_t\leq \frac{1}{1-\gamma}$ and $c_t\leq 1$, we have $\frac{5}{c_t^2(1-\gamma)}\frac{1}{t}\geq \frac{1}{1-\gamma}\geq \delta_t$ for all $t\leq 5$, so the claim is true for all $t\leq 5$.\
+[Induction step:]{.underline} Let $\phi_t:=\frac{5}{c_t^2(1-\gamma)}$. Fix a $t\geq 5$, assume $\delta_t \leq \frac{\phi_t}{t}$.
+
+Let $g:\Real\to \Real$ be a function defined as $g(x) = x-\frac{1}{\phi_t}x^2$. One can verify easily that $g$ is monotonically increasing in $\left[ 0, \frac{\phi_t}{2}\right]$. Next, with equation [\[eq:induction hyp MAB\]](#eq:induction hyp MAB){reference-type="ref" reference="eq:induction hyp MAB"}, we have $$\begin{align*}
+ \delta_{t+1} &\leq \delta_t -\frac{1}{\phi_t} \delta_t^2\\
+ &= g(\delta_t)\\
+ &\leq g(\frac{\phi_t}{ t})\\
+ &\leq \frac{\phi_t}{t} - \frac{\phi_t}{t^2}\\
+ &= \phi_t\left(\frac{1}{t}-\frac{1}{t^2} \right)\\
+ &\leq \phi_t\left(\frac{1}{t+1}\right).
+\end{align*}$$ This completes the proof of the claim. We will show that $c_t\geq 1/M$ in the next lemma. We first complete the proof of the corollary assuming this.
+
+We fix a $T\geq 1$. Observe that, $\delta_t\leq \frac{5}{(1-\gamma)c_t^2}\frac{1}{t}\leq \frac{5}{(1-\gamma)c_T^2}\frac{1}{t}$. $$\sum\limits_{t=1}^T V^{\pi^*}-V^{\pi_{\theta_t}} = \frac{1}{1-\gamma} \sum\limits_{t=1}^T (\pi^*-\pi_{\theta_t})\transpose \kr\leq \frac{5\log T}{(1-\gamma)c_T^2}+1.$$ Also we have that, $$\sum\limits_{t=1}^T V^{\pi^*}-V^{\pi_{\theta_t}} = \sum\limits_{t=1}^T \delta_t \leq \sqrt{T} \sqrt{\sum\limits_{t=1}^T\delta_t^2}\leq \sqrt{T} \sqrt{\sum\limits_{t=1}^T \frac{5}{(1-\gamma)c_T^2}(\delta_t-\delta_{t+1}) }\leq \frac{1}{c_T}\sqrt{\frac{5T}{(1-\gamma)}}.$$ We next show that with $\theta_m^{(1)}=1/M,\forall m$, i.e., uniform initialization, $\inf_{t\geq 1}c_t=1/M$, which will then complete the proof of Theorem [\[thm:convergence for bandit\]](#thm:convergence for bandit){reference-type="ref" reference="thm:convergence for bandit"} and of corollary [3](#cor:regret true gradient MAB){reference-type="ref" reference="cor:regret true gradient MAB"}.
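The $O(1/t)$ rate can also be observed empirically; a sketch of Softmax PG with exact gradients, $\eta=2(1-\gamma)/5$, uniform initialization, and an illustrative instance (the $\kr$ values are assumptions):

```python
import numpy as np

gamma = 0.9
kr = np.array([0.9, 0.5, 0.2])   # illustrative controller mean rewards
M = kr.size
eta = 2.0 * (1.0 - gamma) / 5.0
V_star = kr.max() / (1.0 - gamma)

def softmax(t):
    z = np.exp(t - t.max())
    return z / z.sum()

theta = np.zeros(M)              # uniform initialization: pi(m) = 1/M
T = 5000
deltas = np.empty(T)
for t in range(T):
    pi = softmax(theta)
    deltas[t] = V_star - (pi @ kr) / (1.0 - gamma)
    theta = theta + eta * pi * (kr - pi @ kr) / (1.0 - gamma)  # exact gradient

bound = 5.0 * M**2 / ((1.0 - gamma) * np.arange(1, T + 1))     # theorem bound
print(deltas[-1], bound[-1])
```

The suboptimality gap $\delta_t$ decreases monotonically and stays below the $\frac{5M^2}{(1-\gamma)t}$ envelope of the theorem.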
+
+::: {#lemma:c bounded from zero :MAB .lemma}
+**Lemma 5**. *We have $\inf_{t\geq 1} \pi_{\theta_t}(m^*) >0$. Furthermore, with uniform initialization of the parameters $\theta_m^{(1)}$, i.e., $1/M, \forall m\in [M]$, we have $\inf_{t\geq 1} \pi_{\theta_t}(m^*) = \frac{1}{M}$.*
+:::
+
+:::::: proof
+*Proof.* We will show that there exists $t_0$ such that $\inf_{t\geq 1} \pi_{\theta_t}(m^*) = \min\limits_{1\leq t\leq t_0}\pi_{\theta_t}(m^*)$, where $t_0=\min\left\{t: \pi_{\theta_t}(m^*)\geq C \right\}$. We define the following sets. $$\begin{align*}
+ \cS_1 &= \left\{\theta: \frac{dV^{\pi_\theta}}{d\theta_{m^*}} \geq \frac{dV^{\pi_\theta}}{d\theta_{m}}, \forall m\neq m^* \right\}\\
+ \cS_2 &= \left\{\theta: \pi_\theta(m^*) \geq \pi_\theta(m), \forall m\neq m^* \right\} \\
+ \cS_3 &= \left\{\theta: \pi_\theta(m^*) \geq C \right\}
+\end{align*}$$ Note that $\cS_3$ depends on the choice of $C$. Let $C:=\frac{M-\Delta}{M+\Delta}$. We claim the following:\
+**Claim 2.** $(i) \theta_t\in \cS_1\implies \theta_{t+1}\in \cS_1$ and $(ii) \theta_t\in \cS_1 \implies \pi_{\theta_{t+1}}(m^*) \geq \pi_{\theta_{t}}(m^*)$.
+
+::: proof
+*Proof of Claim 2.* $(i)$ Fix an $m\neq m^*$. We will show that if $\frac{dV^{\pi_{\theta_t}}}{d\theta_t(m^*)} \geq \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m)}$, then $\frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m^*)} \geq \frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m)}$. This will prove the first part.\
+[Case (a): $\pi_{\theta_t}(m^*)\geq \pi_{\theta_t}(m)$]{.underline}. This implies, by the softmax property, that $\theta_t(m^*) \geq \theta_t(m)$. After gradient ascent update step we have: $$\begin{align*}
+ \theta_{t+1}(m^*) &= \theta_{t}(m^*) +\eta \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m^*)}\\
+ &\geq \theta_t(m) + \eta \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m)}\\
+ &= \theta_{t+1}(m).
+\end{align*}$$ This in turn implies that $\pi_{\theta_{t+1}}(m^*) \geq \pi_{\theta_{t+1}}(m)$. By the expression for the derivative of $V^{\pi_{\theta}}$ w.r.t. $\theta$ (see eq ([\[eq:grad simplification for MAB\]](#eq:grad simplification for MAB){reference-type="ref" reference="eq:grad simplification for MAB"})), $$\begin{align*}
+ \frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m^*)} &= \frac{1}{1-\gamma} \pi_{\theta_{t+1}}(m^*)(\kr(m^*)-\pi_{\theta_{t+1}}\transpose \kr)\\
+ &\geq \frac{1}{1-\gamma} \pi_{\theta_{t+1}}(m)(\kr(m)-\pi_{\theta_{t+1}}\transpose \kr)\\
+ &= \frac{dV^{\pi_{\theta_{t+1}}}}{d\theta_{t+1}(m)},
+\end{align*}$$ where the inequality uses $\pi_{\theta_{t+1}}(m^*)\geq \pi_{\theta_{t+1}}(m)$ together with $\kr(m^*)-\pi_{\theta_{t+1}}\transpose \kr\geq \max\left\{0,\kr(m)-\pi_{\theta_{t+1}}\transpose \kr\right\}$. This implies $\theta_{t+1}\in \cS_1$.\
+[Case (b): $\pi_{\theta_t}(m^*)< \pi_{\theta_t}(m)$]{.underline}. We first note the following equivalence: $$\frac{dV^{\pi_{\theta}}}{d\theta(m^*)}\geq \frac{dV^{\pi_{\theta}}}{d\theta(m)} \longleftrightarrow \kr(m^*)-\kr(m)\geq \left(1-\frac{\pi_\theta(m^*)}{\pi_\theta(m)}\right)(\kr(m^*)-\pi_\theta\transpose\kr),$$ which follows by dividing both sides of the derivative inequality by $\pi_\theta(m)>0$ and rearranging. Since $\frac{\pi_\theta(m^*)}{\pi_\theta(m)}=\exp\left(\theta(m^*)-\theta(m)\right)$ under the softmax parameterization, at time $t$ the condition reads: $$\kr(m^*)-\kr(m)\geq \left(1-\exp\left(\theta_t(m^*)-\theta_t(m) \right)\right)\left(\kr(m^*)-\pi_{\theta_{t}}\transpose\kr\right).$$ By lemma [16](#lemma:gradient ascent lemma){reference-type="ref" reference="lemma:gradient ascent lemma"}, we have that $V^{\pi_{\theta_{t+1}}}\geq V^{\pi_{\theta_{t}}}\implies \pi_{\theta_{t+1}}\transpose \kr \geq \pi_{\theta_{t}}\transpose \kr$. Hence, $$0<\kr(m^*)-\pi_{\theta_{t+1}}\transpose\kr \leq \kr(m^*)-\pi_{\theta_{t}}\transpose\kr.$$ Also, we note: $$\theta_{t+1}(m^*) -\theta_{t+1}(m) = \theta_t(m^*) + \eta \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m^*)} - \theta_t(m) -\eta \frac{dV^{\pi_{\theta_t}}}{d\theta_t(m)} \geq \theta_t(m^*)-\theta_t(m).$$ This implies $1-\exp\left(\theta_{t+1}(m^*) -\theta_{t+1}(m) \right)\leq 1-\exp\left(\theta_t(m^*)-\theta_t(m) \right)$.
+
+Next, we observe that by the assumption $\pi_t(m^*)< \pi_t(m)$, we have $$1-\exp\left(\theta_t(m^*)-\theta_t(m) \right) =1-\frac{\pi_t(m^*)}{\pi_t(m)}>0.$$ Hence we have, $$\begin{align*}
+ \left(1-\exp\left(\theta_{t+1}(m^*)-\theta_{t+1}(m) \right)\right)\left(\kr(m^*)-\pi_{\theta_{t+1}}\transpose\kr\right) &\leq \left(1-\exp\left(\theta_t(m^*)-\theta_t(m) \right)\right)\left(\kr(m^*)-\pi_{\theta_{t}}\transpose\kr\right)\\
+ &\leq \kr(m^*)-\kr(m).
+\end{align*}$$ Equivalently, $$\left(1-\frac{\pi_{t+1}(m^*)}{\pi_{t+1}(m)} \right)(\kr(m^*)-\pi_{t+1}\transpose\kr )\leq \kr(m^*)-\kr(m),$$ which, by the equivalence noted above, finishes the proof of Claim 2(i).\
+(ii) Let $\theta_t\in \cS_1$. We observe that: $$\begin{align*}
+ \pi_{t+1}(m^*) &= \frac{\exp(\theta_{t+1}(m^*))}{\sum\limits_{m=1}^{M}\exp(\theta_{t+1}(m))}\\
+ &=\frac{\exp(\theta_{t}(m^*)+\eta \frac{dV^{\pi_t}}{d\theta_t(m^*)})}{\sum\limits_{m=1}^{M}\exp(\theta_{t}(m)+\eta \frac{dV^{\pi_t}}{d\theta_t(m)})}\\
+ &\geq \frac{\exp(\theta_{t}(m^*)+\eta \frac{dV^{\pi_t}}{d\theta_t(m^*)})}{\sum\limits_{m=1}^{M}\exp(\theta_{t}(m)+\eta \frac{dV^{\pi_t}}{d\theta_t(m^*)})}\\
+ &=\frac{\exp(\theta_{t}(m^*))}{\sum\limits_{m=1}^{M}\exp(\theta_{t}(m))} = \pi_t(m^*)
+\end{align*}$$ where the inequality holds because $\theta_t\in \cS_1$ gives $\frac{dV^{\pi_t}}{d\theta_t(m)}\leq \frac{dV^{\pi_t}}{d\theta_t(m^*)}$ for every $m$, so each term of the denominator can only increase. This completes the proof of Claim 2(ii). ◻
+:::
+
+**Claim 3.** $\cS_2\subset \cS_1$ and $\cS_3\subset \cS_1$.
+
+::: proof
+*Proof.* To show that $\cS_2\subset \cS_1$, let $\theta\in \cS_2$. We have $\pi_\theta(m^*) \geq \pi_\theta(m), \forall m\neq m^*$. Then $$\begin{align*}
+ \frac{dV^{\pi_\theta}}{d\theta(m^*)} &=\frac{1}{1-\gamma} \pi_\theta(m^*)(\kr(m^*)-\pi_\theta\transpose \kr)\\
+ &>\frac{1}{1-\gamma} \pi_\theta(m)(\kr(m)-\pi_\theta\transpose \kr)\\
+ &=\frac{dV^{\pi_\theta}}{d\theta(m)}.
+\end{align*}$$ This shows that $\theta\in \cS_1$. For showing the second part of the claim, we assume $\theta\in \cS_3\cap \cS_2^c$, because if $\theta\in \cS_2$, we are done. Let $m\neq m^*$. We have, $$\begin{align*}
+ \frac{dV^{\pi_\theta}}{d\theta(m^*)} - \frac{dV^{\pi_\theta}}{d\theta(m)} &= \frac{1}{1-\gamma} \left(\pi_\theta(m^*)(\kr(m^*)-\pi_\theta\transpose \kr) -\pi_\theta(m)(\kr(m)-\pi_\theta\transpose \kr) \right)\\
+ &= \frac{1}{1-\gamma} \left(2\pi_\theta(m^*)(\kr(m^*)-\pi_\theta\transpose \kr) + \sum\limits_{i\neq m^*,m}^M \pi_\theta(i)(\kr(i)-\pi_\theta\transpose \kr) \right)\\
+ &= \frac{1}{1-\gamma} \left( \left(2\pi_\theta(m^*) + \sum\limits_{i\neq m^*,m}^M \pi_\theta(i)\right)(\kr(m^*)-\pi_\theta\transpose \kr) - \sum\limits_{i\neq m^*,m}^M \pi_\theta(i)(\kr(m^*)-\kr(i)) \right)\\
+ &\geq \frac{1}{1-\gamma} \left( \left(2\pi_\theta(m^*) + \sum\limits_{i\neq m^*,m}^M \pi_\theta(i)\right)(\kr(m^*)-\pi_\theta\transpose \kr) - \sum\limits_{i\neq m^*,m}^M \pi_\theta(i) \right)\\
+ &\geq \frac{1}{1-\gamma} \left( \left(2\pi_\theta(m^*) + \sum\limits_{i\neq m^*,m}^M \pi_\theta(i)\right)\frac{\Delta}{M}- \sum\limits_{i\neq m^*,m}^M \pi_\theta(i) \right).
+\end{align*}$$ Observe that, $\sum\limits_{i\neq m^*,m}^M \pi_\theta(i) = 1-\pi(m^*)-\pi(m)$. Using this and rearranging we get, $$\frac{dV^{\pi_\theta}}{d\theta(m^*)} - \frac{dV^{\pi_\theta}}{d\theta(m)} \geq \frac{1}{1-\gamma}\left(\pi(m^*)\left(1+\frac{\Delta}{M}\right) - \left(1-\frac{\Delta}{M}\right) + \pi(m) \left(1-\frac{\Delta}{M}\right)\right) \geq \frac{1}{1-\gamma}\pi(m) \left(1-\frac{\Delta}{M}\right)\geq 0.$$ The second inequality follows from $\theta\in \cS_3$ and the choice of $C$: $\pi(m^*)\geq C=\frac{M-\Delta}{M+\Delta}$ implies $\pi(m^*)\left(1+\frac{\Delta}{M}\right)\geq 1-\frac{\Delta}{M}$. This completes the proof of Claim 3. ◻
+:::
+
+**Claim 4.** There exists a finite $t_0$, such that $\theta_{t_0}\in \cS_3$.
+
+::: proof
+*Proof.* The proof of this claim relies on the asymptotic convergence result of [@Agarwal2020]. We note that their convergence result holds for our choice of $\eta=\frac{2(1-\gamma)}{5}$. As noted in [@Mei2020], this choice of $\eta$ is used to justify the gradient ascent lemma [16](#lemma:gradient ascent lemma){reference-type="ref" reference="lemma:gradient ascent lemma"}. Hence we have $\pi_{\theta_t}(m^*){\to} 1$ as ${t\to \infty}$. Therefore, there exists a finite $t_0$ such that $\pi_{\theta_{t_0}}(m^*)\geq C$ and hence $\theta_{t_0}\in \cS_3$. ◻
+:::
+
+This completes the proof that there exists a $t_0$ such that $\inf\limits_{t\geq 1}\pi_{\theta_t}(m^*) = \inf\limits_{1\leq t\leq t_0}\pi_{\theta_t}(m^*)$: once $\theta_t\in \cS_3$, by Claim 3 we have $\theta_t\in \cS_1$; further, by Claim 2, $\theta_t\in \cS_1$ for all $t\geq t_0$ and $\pi_{\theta_t}(m^*)$ is non-decreasing after $t_0$. ◻
+::::::
+
+With uniform initialization, $\theta_1(m^*)=\frac{1}{M}\geq \theta_1(m)$ for all $m\neq m^*$, and hence $\pi_{\theta_1}(m^*)\geq \pi_{\theta_1}(m)$ for all $m\neq m^*$. This implies $\theta_1\in \cS_2$, which implies $\theta_1\in \cS_1$ by Claim 3. As established in Claim 2, $\cS_1$ remains invariant under the gradient ascent updates and $\pi_{\theta_t}(m^*)$ is non-decreasing on it, so $t_0=1$. Hence we have that $\inf\limits_{t\geq 1}\pi_{\theta_t}(m^*) = \pi_{\theta_1}(m^*)=1/M$, completing the proof of Theorem [\[thm:convergence for bandit\]](#thm:convergence for bandit){reference-type="ref" reference="thm:convergence for bandit"} and corollary [3](#cor:regret true gradient MAB){reference-type="ref" reference="cor:regret true gradient MAB"}. ◻
+::::::::
+
+::::::::::::
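To illustrate the result above numerically, here is a small simulation of the exact-gradient softmax updates in the bandit reduction, where $V^{\pi_\theta}=\pi_\theta\transpose\kr/(1-\gamma)$ and $\frac{dV^{\pi_\theta}}{d\theta_m}=\frac{1}{1-\gamma}\pi_\theta(m)(\kr(m)-\pi_\theta\transpose\kr)$. The reward vector, discount factor, and horizon are illustrative choices, not values from the paper:

```python
import numpy as np

# Exact-gradient softmax policy gradient in the bandit reduction.
# Arm means r, gamma, and the horizon are illustrative choices.
r = np.array([0.9, 0.5, 0.2])          # m^* = 0
gamma = 0.9
eta = 2 * (1 - gamma) / 5              # the step size used in the analysis
M = len(r)
theta = np.full(M, 1.0 / M)            # uniform initialization theta_m^{(1)} = 1/M

trajectory = []
for _ in range(20_000):
    pi = np.exp(theta - theta.max())
    pi /= pi.sum()
    trajectory.append(pi[0])
    theta = theta + eta * pi * (r - pi @ r) / (1 - gamma)   # true gradient step

traj = np.array(trajectory)
assert traj[0] == 1.0 / M                 # the infimum is attained at t = 1
assert np.all(np.diff(traj) >= -1e-12)    # pi_t(m^*) is non-decreasing (Claim 2(ii))
assert traj[-1] > 0.9                     # pi_t(m^*) -> 1
```

The trajectory starts at $1/M$ and only moves up, matching $\inf_{t\geq 1}\pi_{\theta_t}(m^*)=1/M$ under uniform initialization.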
+
+When value gradients are unavailable, we follow a direct policy gradient algorithm instead of the softmax parameterization. The full pseudo-code is provided here in Algorithm [\[alg:ProjectionFreePolicyGradient\]](#alg:ProjectionFreePolicyGradient){reference-type="ref" reference="alg:ProjectionFreePolicyGradient"}. At each round $t\geq 1$, the learning rate $\eta$ is chosen separately for each controller $m$ to be $\alpha \pi_t(m)^2$, for some $\alpha \in (0,1)$, to ensure that the iterates remain inside the simplex. To justify its name as a policy gradient algorithm, observe that in order to minimize regret, we need to solve the following optimization problem: $$\min\limits_{\pi\in \cP([M])} \sum\limits_{m=1}^{M}\pi(m)(\kr_\mu(m^*)- \kr_\mu(m)).$$ A direct gradient step with respect to the parameters $\pi(m)$ gives the update rule of the algorithm. The other changes in the update step (eq [\[algstep:update noisy gradien\--repeated\]](#algstep:update noisy gradien--repeated){reference-type="ref" reference="algstep:update noisy gradien--repeated"}) stem from the fact that the true means of the arms are unavailable, and from the use of importance sampling.
+
+:::: algorithm
+::: algorithmic
+learning rate $\eta\in (0,1)$. Initialize $\pi_1(m)=\frac{1}{M}$ for all $m\in [M]$. At each round $t$: set $m_*(t)\leftarrow \argmax\limits_{m\in [M]} \pi_t(m)$; choose controller $m_t\sim \pi_t$; play action $a_t \sim K_{m_t}$; receive reward $R_{m_t}$ by pulling arm $a_t$; update, $\forall m\in [M], m\neq m_*(t):$ $$\begin{equation}
+\label{algstep:update noisy gradien--repeated}
+ \pi_{t+1}(m) = \pi_t(m) + \eta\left(\frac{R_{m}\ind_{m}}{\pi_t(m)} - \frac{R_{m_*(t)}\ind_{m_*(t)}}{\pi_t(m_*(t))}\right)
+\end{equation}$$ Set $\pi_{t+1}(m_*(t)) = 1- \sum\limits_{m\neq m_*(t)}\pi_{t+1}(m)$.
+:::
+::::
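A direct implementation sketch of the update rule above on Bernoulli arms follows. The arm means, $\alpha$, and horizon are illustrative choices; the run checks the simplex invariance that the per-controller step size $\alpha\pi_t(m)^2$ is designed to guarantee for $\alpha\in(0,1)$:

```python
import numpy as np

# Projection-free policy gradient on Bernoulli arms (illustrative instance).
rng = np.random.default_rng(0)
means = np.array([0.9, 0.5, 0.2])      # kr_mu(m); m^* = 0, Delta_min = 0.4
M = len(means)
alpha = 0.2                            # < Delta_min / (kr(m^*) - Delta_min) = 0.8
pi = np.full(M, 1.0 / M)

for _ in range(50_000):
    lead = int(np.argmax(pi))                    # m_*(t)
    m_t = rng.choice(M, p=pi)                    # controller played
    R = float(rng.random() < means[m_t])         # Bernoulli reward of the pulled arm
    for m in range(M):
        if m == lead:
            continue
        # importance-sampled gradient estimate with indicators of the pulled arm
        g = (R / pi[m] if m_t == m else 0.0) - (R / pi[lead] if m_t == lead else 0.0)
        pi[m] = pi[m] + alpha * pi[m] ** 2 * g   # eta = alpha * pi_t(m)^2
    pi[lead] = 1.0 - (pi.sum() - pi[lead])       # reset the leading coordinate
    assert np.all(pi > 0) and abs(pi.sum() - 1.0) < 1e-9   # stays inside the simplex
```

In runs of this sketch the leading coordinate concentrates on $m^*$, in line with $\pi_t(m^*)\to 1$ a.s.; the assertions only verify the deterministic simplex invariance.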
+
+We have the following result.
+
+::: restatable
+theoremregretnoisyMAB[]{#thm:regret noisy gradient label="thm:regret noisy gradient"} With $\alpha$ chosen to be less than $\frac{\Delta_{min}}{\mathfrak{r}^\mu_{m^*}-\Delta_{min}}$, $\left(\pi_t\right)_{t\geq 1}$ is a Markov process with $\pi_t(m^*)\to 1$ as $t\to \infty$, a.s. Further, the regret up to any time $T$ is bounded as $$\cR(T) \leq \frac{1}{1-\gamma}\sum\limits_{m\neq m^*} \frac{\Delta_m}{\alpha\Delta_{min}^2} \log T + C,$$
+:::
+
+where $C\bydef \frac{1}{1-\gamma}\sum\limits_{t\geq 1}\mathbb{P}\left\{\pi_{t}(m^*(t)) \leq \frac{1}{2}\right\}<\infty$.
+
+We make a couple of remarks before providing the full proof of Theorem [\[thm:regret noisy gradient\]](#thm:regret noisy gradient){reference-type="ref" reference="thm:regret noisy gradient"}.
+
+::: {#remark:relation between true and noisy grad .remark}
+*Remark 6*. The "cost\" of not knowing the true gradient appears as the dependence on $\Delta_{min}$ in the regret bound, which is absent when the true gradient is available (see Theorem [\[thm:convergence for bandit\]](#thm:convergence for bandit){reference-type="ref" reference="thm:convergence for bandit"} and Corollary [3](#cor:regret true gradient MAB){reference-type="ref" reference="cor:regret true gradient MAB"}). This dependence on $\Delta_{min}$, as is well known from the work of [@LAI19854], is unavoidable.
+:::
+
+::: {#remark:dependence on delta .remark}
+*Remark 7*. The dependence of $\alpha$ on $\Delta_{min}$ can be removed by a more sophisticated choice of learning rate, at the cost of an extra $\log T$ dependence on regret [@Denisov2020RegretAO].
+:::
+
+:::::::: proof
+*Proof.* The proof is an extension of that of Theorem 1 of [@Denisov2020RegretAO] to our setting, and is divided into three main parts. In the first part we show that the recurrence time of the process $\{\pi_t(m^*)\}_{t\geq 1}$ is almost surely finite. Next we bound the expected time taken by the process $\pi_t(m^*)$ to approach 1. Finally we show that, almost surely, $\pi_t(m^*)\to 1$ as $t\to\infty$; in other words, the process $\{\pi_t(m^*)\}_{t\geq 1}$ is transient. We combine these facts to obtain the regret bound.\
+Recall $m_*(t):=\argmax\limits_{m\in [M]} \pi_t(m)$. We start by defining the following quantity which will be useful for the analysis of algorithm [\[alg:ProjectionFreePolicyGradient\]](#alg:ProjectionFreePolicyGradient){reference-type="ref" reference="alg:ProjectionFreePolicyGradient"}.
+
+Let $\tau\bydef \min \left\{t\geq 1: \pi_t(m^*) > \frac{1}{2} \right\}$.
+
+Next, let $\cS:=\left\{\pi\in \cP([M]): \frac{1-\alpha}{2} \leq \pi(m^*) < \frac{1}{2}\right\}$.
+
+In addition, we define for any $a \in \Real$, $\cS_a:=\left\{\pi\in \cP([M]): \frac{1-\alpha}{a} \leq \pi(m^*) < \frac{1}{a}\right\}$. Observe that if $\pi_t(m^*)\geq 1/a$ and $\pi_{t+1}(m^*) < 1/a$, then $\pi_{t+1}\in \cS_a$. This fact follows from the update step of algorithm [\[alg:ProjectionFreePolicyGradient\]](#alg:ProjectionFreePolicyGradient){reference-type="ref" reference="alg:ProjectionFreePolicyGradient"} with $\eta=\alpha\pi_t(m)^2$ for every $m\neq m^*$, since in one round $\pi_t(m^*)$ can shrink by at most a factor of $(1-\alpha)$.
+
+::: {#lemma:NoisyMAB:finite rec time .lemma}
+**Lemma 8**. *For $\alpha>0$ such that $\alpha<\frac{\Delta_{min}}{\kr(m^*)-\Delta_{min}}$, we have that $$\sup\limits_{\pi\in \cS}\expect{\tau\given \pi_1=\pi}<\infty.$$*
+:::
+
+::: proof
+*Proof.* We include the proof here for completeness. We first make note of the following useful result: for a sequence of positive real numbers $\{a_n \}_{n\geq 1}$ satisfying $$a_{n+1}\leq a_n - b\,a_n^2,$$ for some $b>0$, the following is always true: $$\begin{equation*}
+\label{eq:useful derivative like inequality MAB}
+ a_{n+1}\leq \frac{a_1}{1+a_1 b n}.
+\end{equation*}$$ This inequality follows by rearranging and observing that $(a_n)$ is a non-increasing sequence. A complete proof can be found, e.g., in ([@Denisov2020RegretAO], Appendix A.1). Returning to the proof of the lemma, we proceed by showing that the sequence $1/{\pi_t(m^*)}-ct$ is a supermartingale for some $c>0$. Let $\Delta:=\Delta_{min}$ for ease of notation. Note that if the condition on $\alpha$ holds then there exists an $\epsilon>0$ such that $(1+\epsilon)(1+\alpha)< \kr^*/(\kr^*-\Delta)$, where $\kr^* :=\kr(m^*)$. We choose $c$ to be $$c:= \frac{\alpha\kr^*}{1+\alpha} - \alpha(\kr^*-\Delta)(1+\epsilon) >0.$$ Next, let $x>M$ satisfy: $$\frac{x}{x-\alpha M}\leq 1+\epsilon.$$ Let $\xi_x:=\min\{t\geq 1: \pi_t(m^*) > 1/x \}$. Since for $t=1,\ldots, \xi_x-1$, $m_*(t)\neq m^*$, we have $\pi_{t+1}(m^*) = (1+\alpha)\pi_t(m^*)$ w.p. $\pi_t(m^*)\kr^*$ and $\pi_{t+1}(m^*) = \pi_t(m^*) - \alpha\pi_t(m^*)^2/\pi_t(m_*(t))$ w.p. $\pi_t(m_*)\kr_*(t)$, where $\kr_*(t):=\kr(m_*(t))$.
+
+Let $y(t):=1/{\pi_t(m^*)}$, then we observe by a short calculation that, $$\begin{align*}
+y(t+1)&=
+ \begin{cases}
+ y(t) - \frac{\alpha}{1+\alpha}y(t), & w.p. \frac{\kr^*}{y(t)}\\
+ y(t) + \alpha \frac{y(t)}{\pi_t(m_*(t))y(t)-\alpha}. & w.p. \pi_t(m_*)\kr_*(t)\\
+ y(t) & otherwise.
+ \end{cases}
+\end{align*}$$ We see that, $$\begin{align*}
+&\expect{y(t+1)\given H(t)}-y(t)\\
+&= \frac{\kr^*}{y(t)}. (y(t) - \frac{\alpha}{1+\alpha}y(t)) + \pi_t(m_*)\kr_*(t).(y(t) + \alpha \frac{y(t)}{\pi_t(m_*(t))y(t)-\alpha}) - y(t)(\frac{\kr^*}{y(t)}+ \pi_t(m_*)\kr_*(t)) \\
+&\leq \alpha(\kr^*-\Delta)(1+\epsilon)-\frac{\alpha \kr^*}{1+\alpha}=-c.
+\end{align*}$$ The inequality holds because $\kr_*(t)\leq \kr^*-\Delta$, $\pi_t(m_*)\geq 1/M$, and $y(t)\geq x$ for $t<\xi_x$, so that $\frac{\pi_t(m_*)y(t)}{\pi_t(m_*)y(t)-\alpha}\leq \frac{x}{x-\alpha M}\leq 1+\epsilon$. By the Optional Stopping Theorem [@Durrett11probability:theory], $$-c\expect{\xi_x\land t }\geq \expect{y(\xi_x \land t)}-\expect{y(1)} \geq -\frac{x}{1-\alpha}.$$ The final inequality holds because $\pi_1(m^*)\geq \frac{1-\alpha}{x}$.
+
+Next, applying the monotone convergence theorem gives that $\expect{\xi_x}\leq \frac{x}{c(1-\alpha)}$. Finally, to obtain the result of lemma [8](#lemma:NoisyMAB:finite rec time){reference-type="ref" reference="lemma:NoisyMAB:finite rec time"}, we refer the reader to (Appendix A.2, [@Denisov2020RegretAO]); the remaining steps follow from standard Markov chain arguments. ◻
+:::
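The drift calculation in the proof above can be checked numerically on a grid of admissible states. In the sketch below, all constants ($\kr^*$, $\Delta$, $\alpha$, $M$, $\epsilon$, $x$) are our own illustrative choices satisfying the stated conditions:

```python
import numpy as np

# Check E[y(t+1) | H(t)] - y(t) <= -c over admissible (y, pi(m_*), kr_*) values.
kr_star, Delta, M = 0.9, 0.3, 3
alpha = 0.3                     # < Delta / (kr* - Delta) = 0.5
eps = 0.1                       # (1 + eps)(1 + alpha) = 1.43 < kr* / (kr* - Delta) = 1.5
x = 10.0                        # x > M and x / (x - alpha * M) = 10 / 9.1 <= 1 + eps
c = alpha * kr_star / (1 + alpha) - alpha * (kr_star - Delta) * (1 + eps)
assert c > 0

for y in np.linspace(x, 1000.0, 200):              # y = 1/pi_t(m^*) >= x before xi_x
    for p_lead in np.linspace(1.0 / M, 1.0, 50):   # pi_t(m_*) >= 1/M
        for kr_lead in np.linspace(0.01, kr_star - Delta, 20):  # kr_*(t) <= kr* - Delta
            drift = (-alpha / (1 + alpha) * kr_star
                     + p_lead * kr_lead * alpha * y / (p_lead * y - alpha))
            assert drift <= -c + 1e-12, (y, p_lead, kr_lead)
```

The worst case over the grid is attained at the corner $y=x$, $\pi_t(m_*)=1/M$, $\kr_*(t)=\kr^*-\Delta$, which is exactly where the proof's inequality is tightest.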
+
+Next we define an embedded Markov chain $\{p(s), s\in \mathbb{Z}_+ \}$ as follows. For $k\geq 1$, let $\tau(k):= \min \left\{t\geq \sigma(k-1): \pi_t (m^*) \geq \frac{1}{2} \right\}$ and $\sigma(k):= \min \left\{t\geq \tau(k): \pi_t(m^*) <\frac{1}{2} \right\}$, with the convention $\sigma(0):=1$. Note that within the region $[\tau(k), \sigma(k))$, $\pi_t(m^*)\geq 1/2$ and in $[\sigma(k), \tau(k+1))$, $\pi_t(m^*)< 1/2$. We next analyze the rate at which $\pi_t(m^*)$ approaches 1. Define $$\begin{align*}
+ p(s):= \pi_{t_s}(m^*) & \text{ where } & t_s=s+ \sum\limits_{i=0}^k(\tau(i+1)-\sigma(i))\\
+ & \text{ for } & s\in \left[\sum\limits_{i=0}^k(\sigma(i)-\tau(i)),\sum\limits_{i=0}^{k+1}(\sigma(i)-\tau(i)) \right)
+\end{align*}$$ Also let, $$\sigma_s := \min\left\{t>0: \pi_{t+t_s}(m^*)>1/2 \right\}$$ and, $$\tau_s:=\min\left\{t>\sigma_s: \pi_{t+t_s}(m^*)\leq 1/2 \right\}$$
+
+::: {#lemma:noisy MAB expected bound on process .lemma}
+**Lemma 9**. *The process $\{p(s)\}_{s\geq 1}$, is a submartingale. Further, $p(s)\to 1$, as $s\to \infty$. Finally, $$\expect{p(s)} \geq 1- \frac{1}{1+ \alpha\frac{\Delta^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)} s}.$$*
+:::
+
+::: proof
+*Proof.* We first observe that, $$\begin{align*}
+ p(s+1) &=
+ \begin{cases}
+ \pi_{t_s+1}(m^*) & if\, \pi_{t_s+1}(m^*) \geq 1/2\\
+ \pi_{t_s+\tau_s}(m^*) & if\, \pi_{t_s+1}(m^*) < 1/2
+ \end{cases}
+\end{align*}$$ Since $\pi_{t_s+\tau_s}(m^*)\geq 1/2$, we have that $$p(s+1)\geq \pi_{t_s+1}(m^*) \text{ and } p(s)= \pi_{t_s}(m^*).$$ Since at times $t_s$, $\pi_{t_s}(m^*)>1/2$, we know that $m^*$ is the leading arm. Thus by the update step, for all $m\neq m^*$, $$\pi_{t_s+1}(m) = \pi_{t_s}(m) +\alpha \pi_{t_s}(m)^2\left[\frac{\ind_mR_m(t_s)}{\pi_{t_s}(m)} - \frac{\ind_{m^*}R_{m^*}(t_s)}{\pi_{t_s}(m^*)} \right].$$ Taking expectations on both sides, $$\expect{\pi_{t_s+1}(m)\given H(t_s)} - {\pi_{t_s}(m)} = \alpha\pi_{t_s}(m)^2(\kr_m-\kr_{m^*}) = -\alpha \Delta_m\pi_{t_s}(m)^2.$$ Summing over all $m\neq m^*$: $$-\expect{\pi_{t_s+1}(m^*)\given H(t_s)} + {\pi_{t_s}(m^*)} = -\alpha\sum\limits_{m\neq m^*}\Delta_m\pi_{t_s}(m)^2.$$ By Jensen's inequality, $$\begin{align*}
+\sum\limits_{m\neq m^*}\Delta_m\pi_{t_s}(m)^2 &= \left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right) \sum\limits_{m\neq m^*}\frac{\Delta_m}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}\pi_{t_s}(m)^2 \\
+&\geq \left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right) \left(\sum\limits_{m\neq m^*}\frac{\Delta_m\pi_{t_s}(m)}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)} \right)^2\\
+&\geq \left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right) \frac{\Delta^2\left(\sum\limits_{m\neq m^*}\pi_{t_s}(m)\right)^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)^2}\\
+&= \frac{\Delta^2\left(1-\pi_{t_s}(m^*)\right)^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}.
+\end{align*}$$ Hence we get, $$p(s) - \expect{p(s+1)\given H(t_s)} \leq -\alpha \frac{\Delta^2\left(1-p(s)\right)^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)} \implies \expect{p(s+1)\given H(t_s)}\geq p(s) + \alpha \frac{\Delta^2\left(1-p(s)\right)^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}.$$ This implies immediately that $\{p(s)\}_{s\geq 1}$ is a submartingale.
+
+Since $\{p(s)\}$ is non-negative and bounded by 1, by the Martingale Convergence Theorem, $\lim_{s\to \infty} p(s)$ exists. We will now show that the limit is 1. Clearly, it is sufficient to show that $\limsup\limits_{s\to \infty} p(s)=1$. For $a>2$, let $$\phi_a:= \min\left\{s\geq 1: p(s)\geq \frac{a-1}{a} \right\}.$$ As is shown in [@Denisov2020RegretAO], it is sufficient to show $\phi_a<\infty$ with probability 1, because then one can define a sequence of stopping times for increasing $a$, each finite w.p. 1, which implies that $p(s)\to 1$. By the previous display, we have $$\expect{p(s+1)\given H(t_s)}- p(s) \geq \alpha \frac{\Delta^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)a^2}$$ as long as $p(s)\leq \frac{a-1}{a}$. Hence by applying the Optional Stopping Theorem and rearranging we get, $$\expect{\phi_a}\leq \lim_{s\to \infty} \expect{\phi_a \land s} \leq \frac{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)a^2}{\alpha \Delta^2}(1-\expect{p(1)}) <\infty.$$ Since $\phi_a$ is a non-negative random variable with finite expectation, $\phi_a<\infty$ a.s. Let $q(s)=1-p(s)$. Taking expectations in the drift bound and applying Jensen's inequality, we have: $$\expect{q(s+1)} -\expect{q(s)} \leq -\alpha \frac{\Delta^2\left(\expect{q(s)}\right)^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}.$$ By the useful result [\[eq:useful derivative like inequality MAB\]](#eq:useful derivative like inequality MAB){reference-type="ref" reference="eq:useful derivative like inequality MAB"}, we get, $$\expect{q(s)} \leq \frac{\expect{q(1)}}{1+ \alpha\frac{\Delta^2\expect{q(1)}}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}s } \leq \frac{1}{1+ \alpha\frac{\Delta^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)} s}.$$ This completes the proof of the lemma. ◻
+:::
+
+Finally, we state a lemma tying together the results above; we refer the reader to (Appendix A.5, [@Denisov2020RegretAO]) for its proof.
+
+::: {#lemma:technical lemm noisy MAB .lemma}
+**Lemma 10**. *$$\sum\limits_{t\geq 1}\prob{\pi_t(m^*)<1/2} <\infty.$$ Also, with probability 1, $\pi_t(m^*)\to 1$, as $t\to \infty$.*
+:::
+
+[Proof of regret bound:]{.underline} Since $\kr^*-\kr(m)\leq 1$, we have by the definition of regret (see eq [\[eq:regret definition MAB\]](#eq:regret definition MAB){reference-type="ref" reference="eq:regret definition MAB"}) $$\cR(T) = \expect{\frac{1}{1-\gamma}\sum\limits_{t=1}^T\left(\sum\limits_{m=1}^{M}\pi^*(m)\kr_m - \pi_t(m)\kr_m \right)}.$$ Recalling that $\pi^*=e_{m^*}$, we have: $$\begin{align*}
+\cR(T) &= \frac{1}{1-\gamma}\expect{\sum\limits_{t=1}^T\left(\sum\limits_{m=1}^{M}(\pi^*(m)\kr_m - \pi_t(m)\kr_m) \right)}\\
+&=\frac{1}{1-\gamma}\expect{\sum\limits_{m=1}^M\left(\sum\limits_{t=1}^T (\pi^*(m)\kr_m - \pi_t(m)\kr_m) \right)}\\
+&=\frac{1}{1-\gamma}\expect{\sum\limits_{t=1}^T\left(\kr^* -\sum\limits_{m=1}^{M}\pi_t(m)\kr_m \right)}\\
+&=\frac{1}{1-\gamma}\expect{\left(\sum\limits_{t=1}^T\kr^* -\sum\limits_{t=1}^T\sum\limits_{m=1}^{M}\pi_t(m)\kr_m \right)}\\
+&=\frac{1}{1-\gamma}\expect{\left(\sum\limits_{t=1}^T\kr^*(1-\pi_t(m^*)) -\sum\limits_{t=1}^T\sum\limits_{m\neq m^*} \pi_t(m)\kr_m \right)}\\
+&=\frac{1}{1-\gamma}\expect{\left(\sum\limits_{t=1}^T\sum\limits_{m\neq m^*}\kr^*\pi_t(m) -\sum\limits_{t=1}^T\sum\limits_{m\neq m^*} \pi_t(m)\kr_m \right)}\\
+&= \frac{1}{1-\gamma} \sum\limits_{m\neq m^*} (\kr^*-\kr_m)\expect{\sum\limits_{t=1}^T \pi_t(m)}.
+\end{align*}$$ Hence we have, $$\begin{align*}
+\cR(T) &= \frac{1}{1-\gamma} \sum\limits_{m\neq m^*} (\kr^*-\kr_m)\expect{\sum\limits_{t=1}^T \pi_t(m)}\\
+&\leq \frac{1}{1-\gamma} \sum\limits_{m\neq m^*}\expect{\sum\limits_{t=1}^T \pi_t(m)}\\
+&=\frac{1}{1-\gamma} \expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))}\\
+\end{align*}$$ We analyze the following term: $$\expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))} =\expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))\ind\{\pi_t(m^*)\geq 1/2 \}}+\expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))\ind\{\pi_t(m^*)<1/2 \}}$$ $$\leq \expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))\ind\{\pi_t(m^*)\geq 1/2 \}} +C_1,$$ where $C_1:=\sum\limits_{t=1}^\infty \prob{\pi_t(m^*)<1/2}<\infty$ by Lemma [10](#lemma:technical lemm noisy MAB){reference-type="ref" reference="lemma:technical lemm noisy MAB"}. Next, since the times $t$ with $\pi_t(m^*)\geq 1/2$ are exactly the times $t_s$ at which the embedded process $p(s)$ is defined, and there are at most $T$ of them, $$\expect{\sum\limits_{t=1}^T (1-\pi_t(m^*))\ind\{\pi_t(m^*)\geq 1/2 \}} \leq \expect{\sum\limits_{s=1}^T q(s)} \leq \sum\limits_{s=1}^T \frac{1}{1+ \alpha\frac{\Delta^2}{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)} s} \leq \sum\limits_{s=1}^T \frac{{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}}{ \alpha{\Delta^2} s} \leq \frac{{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}}{ \alpha{\Delta^2} }\log T.$$
+
+Putting things together, we get, $$\begin{align*}
+\cR(T) &\leq \frac{1}{1-\gamma} \left( \frac{{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}}{ \alpha{\Delta^2} }\log T + C_1 \right)\\
+&=\frac{1}{1-\gamma} \left( \frac{{\left(\sum\limits_{m'\neq m^*}\Delta_{m'} \right)}}{ \alpha{\Delta^2} }\log T \right) + C.
+\end{align*}$$ This completes the proof of Theorem [\[thm:regret noisy gradient\]](#thm:regret noisy gradient){reference-type="ref" reference="thm:regret noisy gradient"}. ◻
+::::::::
+
+First we recall the policy gradient theorem.
+
+::: {#thm:policy gradient theorem .theorem}
+**Theorem 11** (Policy Gradient Theorem [@Sutton2000]). *$$\frac{\partial}{\partial \theta}V^{\pi_\theta} (\mu) = \frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s) \sum\limits_{a\in \cA}\frac{\partial \pi_\theta(a|s)}{\partial \theta}Q^{\pi_\theta}(s,a).$$*
+:::
+
+Let $s\in \cS$ and $m\in [M]$. Let $\tilde{Q}^{\pi_\theta}(s,m)\bydef \sum\limits_{a\in \cA}K_m(s,a)Q^{\pi_\theta}(s,a)$. Also let $\tilde{A}^{\pi_\theta}(s,m) \bydef \tilde{Q}^{\pi_\theta}(s,m)-V^{\pi_\theta}(s)$.
+
+::: restatable
+lemmagradientofV[]{#lemma:Gradient simplification label="lemma:Gradient simplification"} The softmax policy gradient with respect to the parameter $\theta\in \Real^M$ is $\frac{\partial}{\partial \theta_m}V^{\pi_{\theta}}(\mu) = \frac{1}{1-\gamma}\sum\limits_{s\in \cS}d_\mu^{\pi_\theta}(s) \pi_\theta(m)\tilde{A}(s,m)$, where $\tilde{A}(s,m):=\tilde{Q}(s,m)-V(s)$ and $\tilde{Q}(s,m):= \sum\limits_{a\in \cA}K_m(s,a)Q^{\pi_\theta}(s,a)$, and $d_\mu^{\pi_\theta}(.)$ is the *discounted state visitation measure* starting with an initial distribution $\mu$ and following policy $\pi_\theta$.
+:::
+
+The quantity $\tilde{A}(s,m)$ is interpreted as the advantage of following controller $m$ at state $s$ and then following the policy $\pi_\theta$ thereafter, versus following $\pi_\theta$ throughout. As mentioned in section [\[sec:PG theory\]](#sec:PG theory){reference-type="ref" reference="sec:PG theory"}, we proceed by proving smoothness of the $V^{\pi}$ function over the space $\Real^M$.
+
+::: proof
+*Proof.* From the policy gradient theorem [11](#thm:policy gradient theorem){reference-type="ref" reference="thm:policy gradient theorem"}, we have: $$\begin{align*}
+ \frac{\partial}{\partial \theta_{m'}} V^{\pi_\theta}(\mu) &= \frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s) \sum\limits_{a\in \cA}\frac{\partial \pi_{\theta}(a|s)}{\partial \theta_{m'}}Q^{\pi_\theta}(s,a)\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s) \sum\limits_{a\in \cA}\frac{\partial }{\partial {\theta_{m'}}}\left(\sum\limits_{m=1}^{M}\pi_\theta(m)K_m(s,a)\right)Q^{\pi_\theta}(s,a)\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s) \sum\limits_{m=1}^{M}\sum\limits_{a\in \cA}\left(\frac{\partial }{\partial {\theta_{m'}}} \pi_\theta(m)\right)K_m(s,a)Q(s,a)\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s) \sum\limits_{a\in \cA}\pi_{m'}\left(K_{m'}(s,a) - \sum\limits_{m=1}^{M}\pi_mK_m(s,a) \right)Q(s,a)\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\pi_{m'} \sum\limits_{a\in \cA}\left(K_{m'}(s,a) - \sum\limits_{m=1}^{M}\pi_mK_m(s,a) \right)Q(s,a)\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\pi_{m'} \left[\sum\limits_{a\in \cA}K_{m'}(s,a)Q(s,a) -\sum\limits_{a\in \cA}\sum\limits_{m=1}^{M}\pi_mK_m(s,a)Q(s,a) \right]\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\pi_{m'} \left[\tilde{Q}(s,m') -V(s) \right]\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\pi_{m'} \tilde{A}^{\pi_\theta}(s,m').
+\end{align*}$$ ◻
+:::
+
+::: restatable
+lemmasmoothnesslemmaMDP[]{#lemma:smoothness of V label="lemma:smoothness of V"} ${V}^{\pi_{\theta}}\left(\mu\right)$ is $\frac{7\gamma^2+4\gamma+5}{2\left(1-\gamma\right)^2}$-smooth.
+:::
+
+::: proof
+*Proof.* The proof uses ideas from [@Agarwal2020] and [@Mei2020]. Let $\theta_{\alpha} = \theta+\alpha u$, where $u\in \Real^M$, $\alpha\in \Real$. For any $s\in \cS$, $$\begin{align*}
+\allowdisplaybreaks
+ \sum\limits_a \abs{\frac{\partial\pi_{\theta_{\alpha}}(a| s)}{\partial \alpha}\Big|_{\alpha=0}} &= \sum\limits_a \abs{\innprod{\frac{\partial\pi_{\theta_{\alpha}}(a| s)}{\partial \theta_{\alpha}}\Big|_{\alpha=0}, \frac{\partial\theta_\alpha}{\partial\alpha}}} = \sum\limits_a \abs{\innprod{\frac{\partial\pi_{\theta_{\alpha}}(a| s)}{\partial \theta_{\alpha}}\Big|_{\alpha=0}, u}} \\
+ &=\sum\limits_a\abs{\sum\limits_{m''=1}^M\sum\limits_{m=1}^M\pi_{\theta_{m''}}\left(\ind_{mm''}-\pi_{\theta_m}\right)K_m(s,a)u(m'') }\\
+ &= \sum\limits_a\abs{\sum\limits_{m''=1}^M\pi_{\theta_{m''}}\left(K_{m''}(s,a)u(m'')-\sum\limits_{m=1}^M \pi_{\theta_m}K_m(s,a)u(m'')\right) }\\
+ &\leq \sum\limits_a\sum\limits_{m''=1}^M\pi_{\theta_{m''}}K_{m''}(s,a)\abs{u(m'')} + \sum\limits_a\sum\limits_{m''=1}^M\sum\limits_{m=1}^M\pi_{\theta_{m''}}\pi_{\theta_{m}}K_{m}(s,a)\abs{u(m'')} \\
+ &= \sum\limits_{m''=1}^M\pi_{\theta_{m''}}\abs{u(m'')}\underbrace{\sum\limits_a K_{m''}(s,a)}_{=1} + \sum\limits_{m''=1}^M\sum\limits_{m=1}^M\pi_{\theta_{m''}}\pi_{\theta_{m}}\abs{u(m'')}\underbrace{\sum\limits_a K_{m}(s,a) }_{=1}\\
+ &=\sum\limits_{m''=1}^M\pi_{\theta_{m''}}\abs{u(m'')}+ \sum\limits_{m''=1}^M\sum\limits_{m=1}^M\pi_{\theta_{m''}}\pi_{\theta_{m}}\abs{u(m'')}\\
+ &=2\sum\limits_{m''=1}^M \pi_{\theta_{m''}}\abs{u(m'')} \leq 2\norm{u}_2.
+\end{align*}$$ Next we bound the second derivative. $$\sum\limits_a \abs{\frac{\partial^2 \pi_{\theta_{\alpha}}(a\given s)}{\partial\alpha^2}\Big|_{\alpha=0}} = \sum\limits_a \abs{\innprod{\frac{\partial}{\partial\theta_\alpha}\frac{\partial \pi_{\theta_{\alpha}}(a\given s)}{\partial\alpha}\Big|_{\alpha=0},u}}=\sum\limits_a \abs{\innprod{\frac{\partial^2 \pi_{\theta_{\alpha}}(a\given s)}{\partial\theta_\alpha^2}\Big|_{\alpha=0}u,u}}.$$ Let $H^{a,\theta}\bydef \frac{\partial^2 \pi_{\theta}(a\given s)}{\partial \theta^2} \in \Real^{M\times M}$. We have, $$\begin{align*}
+\allowdisplaybreaks
+ H^{a,\theta}_{i,j} &= \frac{\partial}{\partial \theta_j}\left(\sum\limits_{m=1}^{M} \pi_{\theta_i}\left(\ind_{mi}-\pi_{\theta_m}\right)K_m(s,a) \right)\\
+ &= \frac{\partial}{\partial \theta_j}\left(\pi_{\theta_i}K_i(s,a)-\sum\limits_{m=1}^{M} \pi_{\theta_i}\pi_{\theta_m}K_m(s,a) \right)\\
+ &=\pi_{\theta_j}(\ind_{ij}-\pi_{\theta_i})K_i(s,a) -\sum\limits_{m=1}^M K_m(s,a) \frac{\partial \pi_{\theta_i}\pi_{\theta_m}}{\partial\theta_j}\\
+ &= \pi_j(\ind_{ij}-\pi_i)K_i(s,a) - \sum\limits_{m=1}^M K_m(s,a)\left(\pi_j(\ind_{ij}-\pi_i)\pi_m + \pi_i\pi_j(\ind_{mj}-\pi_m)\right)\\
+ &= \pi_j\left( (\ind_{ij}-\pi_i)K_i(s,a) - \sum\limits_{m=1}^M \pi_m(\ind_{ij}-\pi_i)K_m(s,a) -\sum\limits_{m=1}^M\pi_i(\ind_{mj}-\pi_m)K_m(s,a) \right).
+\end{align*}$$
+
+Plugging this into the second derivative, we get, $$\begin{align*}
+\allowdisplaybreaks
+\begin{split}
+ & \abs{\innprod{\frac{\partial^2}{\partial\theta^2}\pi_\theta(a|s)u,u}}\\
+ &= \abs{\sum\limits_{j=1}^M\sum\limits_{i=1}^MH_{i,j}^{a,\theta}u_iu_j}\\
+ &=\abs{\sum\limits_{j=1}^M\sum\limits_{i=1}^M \pi_j\left( (\ind_{ij}-\pi_i)K_i(s,a) - \sum\limits_{m=1}^M \pi_m(\ind_{ij}-\pi_i)K_m(s,a) -\sum\limits_{m=1}^M\pi_i(\ind_{mj}-\pi_m)K_m(s,a) \right)u_iu_j }\\
+ &=\Bigg|\sum\limits_{i=1}^M \pi_iK_i(s,a)u_i^2 - \sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\pi_i\pi_jK_i(s,a)u_iu_j - \sum\limits_{i=1}^{M}\sum\limits_{m=1}^{M}\pi_i\pi_mK_m(s,a)u_i^2\\ &\qquad +\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\sum\limits_{m=1}^{M}\pi_i\pi_j\pi_m K_m(s,a)u_iu_j - \sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\pi_i\pi_j K_j(s,a)u_iu_j \\
+ &\qquad+\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\sum\limits_{m=1}^{M}\pi_i\pi_j\pi_m K_m(s,a)u_iu_j \Bigg|\\
+ &= \Bigg| \sum\limits_{i=1}^{M}\pi_iK_i(s,a) u_i^2 -2\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\pi_i\pi_jK_i(s,a)u_iu_j\\
+ &\qquad - \sum\limits_{i=1}^{M}\sum\limits_{m=1}^{M}\pi_i\pi_mK_m(s,a) u_i^2 + 2\sum\limits_{i=1}^{M}\sum\limits_{j=1}^{M}\sum\limits_{m=1}^{M}\pi_i\pi_j\pi_mK_m(s,a)u_iu_j\Bigg|\\
+ &= \Bigg|\sum\limits_{i=1}^{M}\pi_iu_i^2\left(K_i(s,a) -\sum\limits_{m=1}^{M}\pi_mK_m(s,a) \right) -2\sum\limits_{i=1}^{M}\pi_iu_i\sum\limits_{j=1}^{M}\pi_ju_j\left( K_i(s,a) -\sum\limits_{m=1}^{M}\pi_mK_m(s,a) \right)\Bigg|\\
+ &\leq \sum\limits_{i=1}^{M}\pi_iu_i^2\underbrace{\abs{K_i(s,a) -\sum\limits_{m=1}^{M}\pi_mK_m(s,a)}}_{\leq 1} +2 \sum\limits_{i=1}^{M}\pi_i\abs{u_i} \sum\limits_{j=1}^{M}\pi_j\abs{u_j} \underbrace{\abs{K_i(s,a) -\sum\limits_{m=1}^{M}\pi_mK_m(s,a)}}_{\leq 1}\\
+ &\leq \norm{u}_2^2 +2 \sum\limits_{i=1}^{M}\pi_i\abs{u_i}\sum\limits_{j=1}^{M}\pi_j\abs{u_j}\leq 3\norm{u}^2_2.
+ \end{split}
+\end{align*}$$
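This closed form, and the resulting bound $\abs{\innprod{H^{a,\theta}u,u}}\leq 3\norm{u}_2^2$, can be sanity-checked numerically. A minimal sketch (assuming NumPy; the controllers $K_m$, parameter $\theta$, and direction $u$ are random stand-ins, and the closed-form Hessian is cross-checked against central finite differences of $\pi_\theta(a\given s)=\sum_m \pi_m K_m(s,a)$):

```python
import numpy as np

rng = np.random.default_rng(1)
M_ctrl, A = 4, 3
# Each controller K_m is a distribution over actions for the fixed state s.
K = rng.random((M_ctrl, A)); K /= K.sum(axis=1, keepdims=True)

theta = rng.standard_normal(M_ctrl)
pi = np.exp(theta - theta.max()); pi /= pi.sum()
a = 0  # a fixed action
I = np.eye(M_ctrl)

# Closed-form Hessian H^{a,theta}_{ij} from the display above.
H = np.zeros((M_ctrl, M_ctrl))
for i in range(M_ctrl):
    for j in range(M_ctrl):
        H[i, j] = pi[j] * ((I[i, j] - pi[i]) * K[i, a]
                           - sum(pi[m] * (I[i, j] - pi[i]) * K[m, a] for m in range(M_ctrl))
                           - sum(pi[i] * (I[m, j] - pi[m]) * K[m, a] for m in range(M_ctrl)))

# Cross-check against a finite-difference Hessian of pi_theta(a|s) = sum_m pi_m K_m(s,a).
def pi_a(th):
    p = np.exp(th - th.max()); p /= p.sum()
    return p @ K[:, a]

eps = 1e-4
H_fd = np.zeros_like(H)
for i in range(M_ctrl):
    for j in range(M_ctrl):
        ei, ej = eps * I[i], eps * I[j]
        H_fd[i, j] = (pi_a(theta+ei+ej) - pi_a(theta+ei-ej)
                      - pi_a(theta-ei+ej) + pi_a(theta-ei-ej)) / (4 * eps**2)
assert np.allclose(H, H_fd, atol=1e-5)

# The bound |<H u, u>| <= 3 ||u||_2^2.
u = rng.standard_normal(M_ctrl)
assert abs(u @ H @ u) <= 3 * (u @ u)
```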
+
+The rest of the proof is similar to [@Mei2020] and we include this for completeness. Define $P(\alpha)\in \Real^{S\times S}$, where $\forall (s,s'),$ $$\left[P(\alpha) \right]_{(s,s')} = \sum\limits_{a\in \cA}\pi_{\theta_\alpha} (a\given s).\tP(s'|s,a).$$ The derivative w.r.t. $\alpha$ is, $$\left[\frac{\partial}{\partial \alpha} P(\alpha)\Big|_{\alpha=0} \right]_{(s,s') } = \sum\limits_{a\in \cA}\left[\frac{\partial}{\partial \alpha}\pi_{\theta_\alpha} (a\given s)\Big|_{\alpha=0}\right].\tP(s'|s,a).$$ For any vector $x\in \Real^S$, $$\left[\frac{\partial}{\partial \alpha} P(\alpha)\Big|_{\alpha=0}x \right]_{(s) } = \sum\limits_{s'\in \cS}\sum\limits_{a\in \cA}\left[\frac{\partial}{\partial \alpha}\pi_{\theta_\alpha} (a\given s)\Big|_{\alpha=0}\right].\tP(s'|s,a). x(s').$$
+
+The $l_\infty$ norm can be upper-bounded as, $$\begin{align*}
+ \norm{\frac{\partial}{\partial \alpha} P(\alpha)\Big|_{\alpha=0}x}_\infty &= \max\limits_{s\in \cS}\abs{ \sum\limits_{s'\in \cS}\sum\limits_{a\in \cA}\left[\frac{\partial}{\partial \alpha}\pi_{\theta_\alpha} (a\given s)\Big|_{\alpha=0}\right].\tP(s'|s,a). x(s') }\\
+ &\leq \max\limits_{s\in \cS} \sum\limits_{s'\in \cS}\sum\limits_{a\in \cA}\abs{\frac{\partial}{\partial \alpha}\pi_{\theta_\alpha} (a\given s)\Big|_{\alpha=0}}.\tP(s'|s,a). \norm{x}_\infty\\
+ &\leq 2\norm{u}_2\norm{x}_\infty.
+\end{align*}$$ Now we find the second derivative, $$\begin{align*}
+ \left[\frac{\partial^2P(\alpha)}{\partial\alpha^2} \Big|_{\alpha=0}\right]_{(s,s')} = \sum\limits_{a\in \cA}\left[\frac{\partial^2\pi_{\theta_\alpha}(a|s)}{\partial \alpha^2}\Big|_{\alpha=0} \right]\tP(s'|s,a)
+\end{align*}$$ taking the $l_\infty$ norm, $$\begin{align*}
+\norm{\left[\frac{\partial^2P(\alpha)}{\partial\alpha^2} \Big|_{\alpha=0}\right]x}_{\infty} &= \max_s \abs{\sum\limits_{s'\in \cS}\sum\limits_{a\in \cA}\left[\frac{\partial^2\pi_{\theta_\alpha}(a|s)}{\partial \alpha^2}\Big|_{\alpha=0} \right]\tP(s'|s,a)x(s')}\\
+&\leq \max_s \sum\limits_{s'\in \cS}\sum\limits_{a\in \cA}\abs{\frac{\partial^2\pi_{\theta_\alpha}(a|s)}{\partial \alpha^2}\Big|_{\alpha=0}}\tP(s'|s,a)\norm{x}_\infty \leq 3\norm{u}_2^2\norm{x}_\infty.
+\end{align*}$$ Next we observe that the value function of $\pi_{\theta_\alpha}$ satisfies $$V^{\pi_{\theta_\alpha}}(s) = \underbrace{\sum\limits_{a\in \cA}\pi_{\theta_{\alpha}}(a|s)r(s,a)}_{r_{\theta_\alpha}} + \gamma \sum\limits_{a\in \cA}\pi_{\theta_{\alpha}}(a|s) \sum\limits_{s'\in \cS} \tP(s'|s,a)V^{\pi_{\theta_{\alpha}}}(s').$$ In matrix form, $$\begin{align*}
+ V^{\pi_{\theta_{\alpha}}} = r_{\theta_{\alpha}} + \gamma P(\alpha)V^{\pi_{\theta_{\alpha}}}\\
+ \implies \left(Id-\gamma P(\alpha) \right)V^{\pi_{\theta_{\alpha}}} = r_{\theta_{\alpha}}\\
+ \implies V^{\pi_{\theta_{\alpha}}} = \left(Id-\gamma P(\alpha) \right)^{-1}r_{\theta_{\alpha}}.
+\end{align*}$$ Let $M(\alpha):=\left(Id-\gamma P(\alpha) \right)^{-1}= \sum\limits_{t=0}^\infty \gamma^t [P(\alpha)]^t$. Also, observe that $$\mathbf{1} = \frac{1}{1-\gamma} \left(Id-\gamma P(\alpha) \right) \mathbf{1}\implies M(\alpha)\mathbf{1}=\frac{1}{1-\gamma}\mathbf{1}.$$ $$\implies \forall i \norm{[M(\alpha)]_{i,:}}_1 = \frac{1}{1-\gamma}$$ where $[M(\alpha)]_{i,:}$ is the $i^{th}$ row of $M(\alpha)$. Hence for any vector $x\in \Real^S$, $\norm{M(\alpha)x}_\infty\leq \frac{1}{1-\gamma} \norm{x}_\infty.$
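The identity $M(\alpha)\mathbf{1}=\frac{1}{1-\gamma}\mathbf{1}$ and the resulting $l_\infty$ contraction are easy to check numerically; a minimal sketch (assuming NumPy, with a random row-stochastic matrix standing in for $P(\alpha)$):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, S = 0.9, 5
# Random row-stochastic matrix standing in for P(alpha).
P = rng.random((S, S)); P /= P.sum(axis=1, keepdims=True)

M_alpha = np.linalg.inv(np.eye(S) - gamma * P)   # M(alpha) = (Id - gamma P(alpha))^{-1}
assert np.allclose(M_alpha.sum(axis=1), 1.0 / (1.0 - gamma))  # every row sums to 1/(1-gamma)

# Consequently ||M(alpha) x||_inf <= ||x||_inf / (1-gamma) (entries of M(alpha) are nonnegative).
x = rng.standard_normal(S)
assert np.abs(M_alpha @ x).max() <= np.abs(x).max() / (1.0 - gamma) + 1e-9
```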
+
+By assumption [14](#assumption:bounded reward){reference-type="ref" reference="assumption:bounded reward"}, we have $\norm{r_{\theta_\alpha}}_{\infty} = \max_s \abs{r_{\theta_\alpha}(s)} \leq 1$. Next we find the derivative of $r_{\theta_\alpha}$ w.r.t $\alpha$. $$\begin{align*}
+ \abs{\frac{\partial r_{\theta_\alpha}(s)}{\partial \alpha}} &= \abs{\left(\frac{\partial r_{\theta_\alpha}(s)}{\partial \theta_\alpha}\right)\transpose \frac{\partial \theta_\alpha}{\partial \alpha}}\\
+ &\leq \abs{\sum\limits_{m''=1}^M\sum\limits_{m=1}^{M}\sum\limits_{a\in \cA}\pi_{\theta_\alpha}(m'') (\ind_{mm''}-\pi_{\theta_\alpha}(m))K_m(s,a) r(s,a)u(m'') }\\
+ &=\abs{\sum\limits_{m''=1}^M \sum\limits_{a\in \cA}\pi_{\theta_\alpha}(m'')K_{m''}(s,a) r(s,a)u(m'') - \sum\limits_{m''=1}^M \sum\limits_{m=1}^{M}\sum\limits_{a\in \cA}\pi_{\theta_\alpha}(m'')\pi_{\theta_\alpha}(m)K_m(s,a) r(s,a)u(m'')}\\
+ &\leq \abs{\sum\limits_{m''=1}^M \sum\limits_{a\in \cA}\pi_{\theta_\alpha}(m'')K_{m''}(s,a) r(s,a) - \sum\limits_{m''=1}^M \sum\limits_{m=1}^{M}\sum\limits_{a\in \cA}\pi_{\theta_\alpha}(m'')\pi_{\theta_\alpha}(m)K_m(s,a) r(s,a)}\norm{u}_\infty \leq \norm{u}_2.\\
+\end{align*}$$ Similarly, we can calculate the upper-bound on second derivative, $$\begin{align*}
+ \norm{\frac{\partial^2 r_{\theta_\alpha}}{\partial \alpha^2}}_\infty &= \max_s \abs{ \frac{\partial^2 r_{\theta_\alpha}(s)}{\partial \alpha^2} }\\
+ &=\max_s \abs{ \frac{\partial}{\partial \alpha} \left\{ \left(\frac{\partial r_{\theta_\alpha}(s)}{\partial \theta_\alpha}\right)\transpose \frac{\partial \theta_\alpha}{\partial \alpha}\right\} }\\
+ &= \max_s \abs{\left( \frac{\partial^2 r_{\theta_\alpha}(s)}{\partial \theta_\alpha^2} \frac{\partial \theta_\alpha}{\partial \alpha} \right)\transpose \frac{\partial \theta_\alpha}{\partial \alpha} }\\
+ &\leq 5/2 \norm{u}_2^2.
+\end{align*}$$ Next, the derivative of the value function w.r.t $\alpha$ is given by, $$\frac{\partial V^{\pi_{\theta_\alpha}}(s)}{\partial \alpha} = \gamma e_s\transpose M(\alpha) \frac{\partial P(\alpha)}{\partial\alpha}M(\alpha)r_{\theta_\alpha} + e_s\transpose M(\alpha) \frac{\partial r_{\theta_\alpha}}{\partial \alpha}.$$ And the second derivative, $$\begin{align*}
+\begin{split}
+ \frac{\partial^2 V^{\pi_{\theta_\alpha}}(s)}{\partial \alpha^2} &= \underbrace{2\gamma^2e_s\transpose M(\alpha)\frac{\partial P(\alpha)}{\partial\alpha} M(\alpha)\frac{\partial P(\alpha)}{\partial\alpha}M(\alpha)r_{\theta_\alpha}}_{T1} + \underbrace{\gamma e_s\transpose M(\alpha) \frac{\partial^2 P(\alpha)}{\partial\alpha^2} M(\alpha)r_{\theta_\alpha}}_{T2}\\& + \underbrace{2\gamma e_s\transpose M(\alpha)\frac{\partial P(\alpha)}{\partial\alpha} M(\alpha)\frac{\partial r_{\theta_\alpha}}{\partial \alpha}}_{T3}+ \underbrace{e_s\transpose M(\alpha)\frac{\partial^2 r_{\theta_\alpha}}{\partial\alpha^2}}_{T4}.
+\end{split}
+\end{align*}$$ We use the bounds derived above to bound each of the terms in the above display. The calculations are the same as those for Lemma 7 in [@Mei2020], except for the particular values of the bounds. Hence we directly state the final bounds that we obtain and refer to [@Mei2020] for the detailed but elementary calculations. $$\begin{align*}
+ \abs{T1} &\leq \frac{8\gamma^2}{(1-\gamma)^3} \norm{u}_2^2\\
+ \abs{T2} &\leq \frac{3\gamma}{(1-\gamma)^2} \norm{u}_2^2\\
+ \abs{T3} &\leq \frac{4\gamma}{(1-\gamma)^2} \norm{u}_2^2\\
+ \abs{T4} &\leq \frac{5/2}{(1-\gamma)} \norm{u}_2^2.
+\end{align*}$$ Combining the above bounds we get, $$\abs{\frac{\partial^2 V^{\pi_{\theta_\alpha}}(s)}{\partial \alpha^2}\Big|_{\alpha=0}}\leq \left(\frac{8\gamma^2}{(1-\gamma)^3} + \frac{3\gamma}{(1-\gamma)^2} + \frac{4\gamma}{(1-\gamma)^2} + \frac{5/2}{(1-\gamma)} \right)\norm{u}_2^2$$ $$=\frac{7\gamma^2+4\gamma+5}{2(1-\gamma)^3}\norm{u}_2^2.$$ Finally, let $y\in \Real^M$ and fix a $\theta \in \Real^M$: $$\begin{align*}
+ \abs{y\transpose\frac{\partial^2 V^{\pi_{\theta}}(s)}{\partial\theta^2}y} &=\abs{\frac{y}{\norm{y}_2}\transpose\frac{\partial^2 V^{\pi_{\theta}}(s)}{\partial\theta^2}\frac{y}{\norm{y}_2}}.\norm{y}_2^2\\
+ &\leq \max\limits_{\norm{u}_2=1}\abs{\innprod{\frac{\partial^2 V^{\pi_{\theta}}(s)}{\partial\theta^2}u,u }}.\norm{y}_2^2\\
+ &= \max\limits_{\norm{u}_2=1}\abs{\innprod{\frac{\partial^2 V^{\pi_{\theta_\alpha}}(s)}{\partial\theta_\alpha^2}\Big|_{\alpha=0}\frac{\partial\theta_\alpha}{\partial \alpha},\frac{\partial\theta_\alpha}{\partial \alpha} }}.\norm{y}_2^2\\
+ &= \max\limits_{\norm{u}_2=1}\abs{\frac{\partial^2V^{\pi_{\theta_\alpha}}(s)}{\partial\alpha^2}\Big|_{\alpha=0}}.\norm{y}_2^2\\
+ &\leq \frac{7\gamma^2+4\gamma+5}{2(1-\gamma)^3}\norm{y}_2^2.
+\end{align*}$$ Let $\theta_\xi:= \theta + \xi(\theta'-\theta)$ where $\xi\in [0,1]$. By Taylor's theorem $\forall s,\theta,\theta'$, $$\abs{V^{\pi_{\theta'}}(s) -V^{\pi_{\theta}}(s) -\innprod{\frac{\partial V^{\pi_{\theta}}(s)}{\partial \theta}, \theta'-\theta } } = \frac{1}{2}.\abs{(\theta'-\theta)\transpose \frac{\partial^2V^{\pi_{\theta_\xi}}(s) }{\partial \theta_\xi^2}(\theta'-\theta) }$$ $$\leq \frac{7\gamma^2+4\gamma+5}{4(1-\gamma)^3}\norm{\theta'-\theta}_2^2.$$ Since $V^{\pi_\theta}(s)$ is $\frac{7\gamma^2+4\gamma+5}{2(1-\gamma)^3}$ smooth for every $s$, $V^{\pi_\theta}(\mu)$ is also $\frac{7\gamma^2+4\gamma+5}{2(1-\gamma)^3}-$smooth. ◻
+:::
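The smoothness constant can be sanity-checked on a small random instance: the sketch below (assuming NumPy; the MDP, controllers, and initial distribution $\mu$ are random stand-ins) computes a central finite-difference Hessian of $\theta\mapsto V^{\pi_\theta}(\mu)$ and checks its spectral norm against $\frac{7\gamma^2+4\gamma+5}{2(1-\gamma)^3}$:

```python
import numpy as np

rng = np.random.default_rng(6)
S, A, M_ctrl, gamma = 3, 2, 3, 0.8
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)       # transition kernel
r = rng.random((S, A))                                             # rewards in [0, 1]
K = rng.random((M_ctrl, S, A)); K /= K.sum(axis=2, keepdims=True)  # controllers
mu = np.ones(S) / S

def V(theta):
    """V^{pi_theta}(mu) for the softmax mixture of controllers."""
    w = np.exp(theta - theta.max()); w /= w.sum()
    pol = np.einsum('m,msa->sa', w, K)             # pi_theta(a|s)
    P_pi = np.einsum('sa,sat->st', pol, P)         # state transitions under pi_theta
    r_pi = (pol * r).sum(axis=1)
    return mu @ np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

theta = rng.standard_normal(M_ctrl)
eps, I = 1e-4, np.eye(M_ctrl)
H = np.zeros((M_ctrl, M_ctrl))
for i in range(M_ctrl):
    for j in range(M_ctrl):
        H[i, j] = (V(theta + eps*I[i] + eps*I[j]) - V(theta + eps*I[i] - eps*I[j])
                   - V(theta - eps*I[i] + eps*I[j]) + V(theta - eps*I[i] - eps*I[j])) / (4 * eps**2)

beta_smooth = (7*gamma**2 + 4*gamma + 5) / (2 * (1 - gamma)**3)
assert np.abs(np.linalg.eigvalsh((H + H.T) / 2)).max() <= beta_smooth
```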
+
+::: {#lemma:value diffence lemma .lemma}
+**Lemma 12** (Value Difference Lemma-1). *For any two policies $\pi$ and $\pi'$, and for any state $s\in \cS$, the following is true. $$V^{\pi'}(s)-V^{\pi}(s) = \frac{1}{1-\gamma} \sum\limits_{s'\in \cS}d_s^{\pi'}(s')\sum\limits_{m=1}^{M}\pi'_m \tilde{A}(s',m).$$*
+:::
+
+::: proof
+*Proof.* We use $\tilde{Q}$ for $\tilde{Q}^{\pi}$ and $\tilde{Q}'$ for $\tilde{Q}^{\pi'}$ as a shorthand. $$\begin{align*}
+\allowdisplaybreaks
+ V^{\pi'}(s) - V^{\pi}(s) &= \sum\limits_{m=1}^{M}\pi_m'\tilde{Q}'(s,m) - \sum\limits_{m=1}^{M}\pi_m \tilde{Q}(s,m)\\
+ &=\sum\limits_{m=1}^{M}\pi_m'\left(\tilde{Q}'(s,m)-\tilde{Q}(s,m) \right) +\sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}(s,m)\\
+ &=\sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}(s,m) +\gamma\sum\limits_{a\in \cA}\underbrace{\sum\limits_{m=1}^{M}\pi_m' K_m(s,a)}_{=\pi'(a\given s)}\sum\limits_{s'\in \cS} \tP(s'|s,a)\left[V^{\pi'}(s')-V^{\pi}(s') \right]\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s'\in \cS} d_s^{\pi'}(s') \sum\limits_{m'=1}^M (\pi'_{m'}-\pi_{m'})\tilde{Q}(s',m')\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s'\in \cS} d_s^{\pi'}(s') \sum\limits_{m'=1}^M \pi'_{m'}(\tilde{Q}(s',m')-V(s'))\\
+ &=\frac{1}{1-\gamma}\sum\limits_{s'\in \cS} d_s^{\pi'}(s') \sum\limits_{m'=1}^M \pi'_{m'}\tilde{A}(s',m').
+\end{align*}$$ ◻
+:::
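Lemma 12 can be verified exactly on a small tabular instance, since every quantity in it admits a closed form; a minimal sketch (assuming NumPy; the MDP, controllers, and the two mixture weight vectors are random stand-ins):

```python
import numpy as np

rng = np.random.default_rng(5)
S, A, M_ctrl, gamma = 4, 3, 3, 0.9
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))
K = rng.random((M_ctrl, S, A)); K /= K.sum(axis=2, keepdims=True)

def value(pi_w):
    """Return (V^pi, state-transition matrix under pi) for mixture weights pi_w."""
    pol = np.einsum('m,msa->sa', pi_w, K)
    P_pi = np.einsum('sa,sat->st', pol, P)
    r_pi = (pol * r).sum(axis=1)
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi), P_pi

pi_w = np.array([0.2, 0.3, 0.5])     # the policy pi
pi_w2 = np.array([0.6, 0.1, 0.3])    # the policy pi'
V1, _ = value(pi_w)
V2, P2 = value(pi_w2)

# Qtilde^pi(s,m), Atilde^pi(s,m), and the visitation measure d_s^{pi'}(.).
Qt = np.einsum('msa,sa->sm', K, r) + gamma * np.einsum('msa,sat,t->sm', K, P, V1)
At = Qt - V1[:, None]
D = (1 - gamma) * np.linalg.inv(np.eye(S) - gamma * P2)  # row s holds d_s^{pi'}(.)

# Lemma 12, checked for every starting state simultaneously.
assert np.allclose(V2 - V1, D @ (At @ pi_w2) / (1 - gamma))
```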
+
+::: {#lemma:value diffrence lemma-2 .lemma}
+**Lemma 13** (Value Difference Lemma-2). *For any two policies $\pi$ and $\pi'$ and any state $s\in \cS$, the following is true. $$V^{\pi'}(s)-V^{\pi}(s) = \frac{1}{1-\gamma} \sum\limits_{s'\in \cS} d_s^{\pi}(s')\sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}^{\pi'}(s',m).$$*
+:::
+
+::: proof
+*Proof.* We will use $\tilde{Q}$ for $\tilde{Q}^{\pi}$ and $\tilde{Q}'$ for $\tilde{Q}^{\pi'}$ as a shorthand. $$\begin{align*}
+\begin{split}
+ V^{\pi'}(s)-V^{\pi}(s) &= \sum\limits_{m=1}^{M}\pi_m'\tilde{Q}'(s,m) - \sum\limits_{m=1}^{M}\pi_m\tilde{Q}(s,m)\\
+ &=\sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}'(s,m) + \sum\limits_{m=1}^{M}\pi_m(\tilde{Q}'(s,m)-\tilde{Q}(s,m))\\
+ &= \sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}'(s,m) + \\ & \gamma \sum\limits_{m=1}^{M}\pi_m\left(\sum\limits_{a\in \cA} K_m(s,a)\sum\limits_{s'\in \cS} \tP(s'|s,a)V'(s') - \sum\limits_{a\in \cA} K_m(s,a)\sum\limits_{s'\in \cS} \tP(s'|s,a)V(s') \right)\\
+ &=\sum\limits_{m=1}^{M}(\pi_m'-\pi_m)\tilde{Q}'(s,m) + \gamma\sum\limits_{a\in \cA} \pi(a|s) \sum\limits_{s'\in \cS} \tP(s'|s,a) \left[V'(s')-V(s') \right]\\
+ &=\frac{1}{1-\gamma} \sum\limits_{s'\in \cS} d_s^{\pi}(s') \sum\limits_{m=1}^{M}(\pi'_m-\pi_m) \tilde{Q}'(s',m).
+\end{split}
+\end{align*}$$ ◻
+:::
+
+::: {#assumption:bounded reward .assumption}
+**Assumption 14**. The reward $r(s,a)\in [0,1]$, for all pairs $(s,a)\in \cS\times \cA$.
+:::
+
+::: {#assumption:positivity of advantage .assumption}
+**Assumption 15**. Let $\pi^*\bydef \argmax\limits_{\pi\in \cP_M} V^\pi(s_0)$. We make the following assumption. $$\mathbb{E}_{m\sim \pi^*} \left[\tilde{Q}^{\pi_\theta}(s,m) \right] -V^{\pi_\theta}(s) \geq 0, \quad\forall s\in \cS,\ \forall \pi_\theta \in \Pi.$$
+:::
+
+Let the best controller be a point in the $M$-simplex, i.e., $K^* \bydef \sum\limits_{m=1}^{M}\pi^*_mK_m$.
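In code, the improper policy induced by a weight vector over controllers is exactly this convex combination of controller matrices; a minimal sketch (assuming NumPy, with random controllers as stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
S, A, M_ctrl = 4, 3, 5
# M controllers, each a row-stochastic S x A matrix.
K = rng.random((M_ctrl, S, A)); K /= K.sum(axis=2, keepdims=True)

theta = rng.standard_normal(M_ctrl)
pi = np.exp(theta - theta.max()); pi /= pi.sum()   # softmax weights over controllers

# The improper policy is the convex combination sum_m pi_m K_m, a point in the M-simplex.
K_mix = np.einsum('m,msa->sa', pi, K)
assert K_mix.shape == (S, A)
assert np.allclose(K_mix.sum(axis=1), 1.0)          # each row is still an action distribution
```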
+
+::: proof
+*Proof.* $$\begin{align*}
+ \norm{\frac{\partial}{\partial\theta}V^{\pi_\theta}(\mu)}_2 &=\left(\sum\limits_{m=1}^{M}\left(\frac{\partial V^{\pi_\theta}(\mu)}{\partial\theta_m}\right)^2 \right)^{1/2}\\
+ &\geq \frac{1}{\sqrt{M}} \sum\limits_{m=1}^{M}\abs{\frac{\partial V^{\pi_\theta}(\mu)}{\partial\theta_m}} \text{ \quad\quad (Cauchy-Schwarz)}\\
+ &=\frac{1}{\sqrt{M}} \sum\limits_{m=1}^{M}\frac{1}{1-\gamma} \abs{\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\pi_m\tilde{A}(s,m)} \text{\quad \quad Lemma \ref{lemma:Gradient simplification}}\\
+ &\geq\frac{1}{\sqrt{M}} \sum\limits_{m=1}^{M}\frac{\pi_m^*\pi_m}{1-\gamma} \abs{\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\tilde{A}(s,m)}\\
+ &\geq \left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \frac{1}{\sqrt{M}} \sum\limits_{m=1}^{M}\frac{\pi_m^*}{1-\gamma} \abs{\sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\tilde{A}(s,m)}\\
+ &\geq \left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \frac{1}{\sqrt{M}} \abs{\sum\limits_{m=1}^{M}\frac{\pi_m^*}{1-\gamma} \sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s)\tilde{A}(s,m)}\\
+ &= \left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \abs{\frac{1}{\sqrt{M}} \sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s){\sum\limits_{m=1}^{M}\frac{\pi_m^*}{1-\gamma} \tilde{A}(s,m)}}\\
+ &= \left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \frac{1}{\sqrt{M}} \sum\limits_{s\in \cS} d_\mu^{\pi_\theta}(s){\sum\limits_{m=1}^{M}\frac{\pi_m^*}{1-\gamma} \tilde{A}(s,m)} \text{\quad \quad Assumption \ref{assumption:positivity of advantage}}\\
+ &\geq \frac{1}{\sqrt{M}}\frac{1}{1-\gamma}\left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \norm{\frac{d_{\rho}^{\pi^*}}{d_{\mu}^{\pi_\theta}}}_{\infty}^{-1} \sum\limits_{s\in \cS} d_{\rho}^{\pi^*}(s) \sum\limits_{m=1}^{M}\pi_m^*\tilde{A}(s,m)\\
+ &=\frac{1}{\sqrt{M}}\left(\min\limits_{m:\pi^*_m>0} \pi_m \right) \norm{\frac{d_{\rho}^{\pi^*}}{d_{\mu}^{\pi_\theta}}}_{\infty}^{-1} \left[V^*(\rho) -V^{\pi_\theta}(\rho) \right] \text{\quad \quad Lemma \ref{lemma:value diffence lemma}}.
+\end{align*}$$ ◻
+:::
+
+Let $\beta:=\frac{7\gamma^2+4\gamma+5}{\left(1-\gamma\right)^2}$. We have that, $$\begin{align*}
+ V^*(\rho) -V^{\pi_\theta}(\rho) &= \frac{1}{1-\gamma} \sum\limits_{s\in \cS} d_\rho^{\pi_\theta}(s) \sum\limits_{m=1}^{M}(\pi^*_m-\pi_m)\tilde{Q}^{\pi^*}(s,m) \text{$\qquad$ (Lemma \ref{lemma:value diffrence lemma-2})}\\
+ &= \frac{1}{1-\gamma} \sum\limits_{s\in \cS} \frac{d_\rho^{\pi_\theta}(s)}{d_\mu^{\pi_\theta}(s)}d_\mu^{\pi_\theta}(s) \sum\limits_{m=1}^{M}(\pi^*_m-\pi_m)\tilde{Q}^{\pi^*}(s,m) \\
+ &\leq \frac{1}{1-\gamma} \norm{\frac{1}{d_\mu^{\pi_\theta}}}_{\infty} \sum\limits_{s\in \cS} \sum\limits_{m=1}^{M}(\pi^*_m-\pi_m)\tilde{Q}^{\pi^*}(s,m) \\
+ &\leq \frac{1}{(1-\gamma)^2} \norm{\frac{1}{\mu}}_\infty \sum\limits_{s\in \cS} \sum\limits_{m=1}^{M}(\pi^*_m-\pi_m)\tilde{Q}^{\pi^*}(s,m) \\
+ &=\frac{1}{(1-\gamma)} \norm{\frac{1}{\mu}}_\infty\left[V^*(\mu) -V^{\pi_\theta}(\mu) \right] \text{$\qquad$ (Lemma \ref{lemma:value diffrence lemma-2})}.
+\end{align*}$$ Let $\delta_t\bydef V^*(\mu) -V^{\pi_{\theta_t}}(\mu)$. $$\begin{align*}
+ \delta_{t+1}-\delta_t &= V^{\pi_{\theta_{t}}}(\mu)-V^{\pi_{\theta_{t+1}}}(\mu)\\
+ &\leq -\frac{1}{2\beta} \norm{\frac{\partial}{\partial\theta}V^{\pi_{\theta_{t}}}(\mu)}^2_2 \text{\qquad (Lemmas \ref{lemma:smoothness of V} and \ref{lemma:gradient ascent lemma})}\\
+ &\leq -\frac{1}{2\beta} \frac{1}{{M}}\left(\min\limits_{m:\pi^*_m>0} \pi_{\theta_t}(m) \right)^2 \norm{\frac{d_{\mu}^{\pi^*}}{d_{\mu}^{\pi_{\theta_t}}}}_{\infty}^{-2} \delta_t^2 \text{\qquad (Lemma \ref{lemma:nonuniform lojaseiwicz inequality}, with $\rho=\mu$)}\\
+ &\leq -\frac{1}{2\beta} \left(1-\gamma\right)^2 \frac{1}{{M}}\left(\min\limits_{m:\pi^*_m>0} \pi_{\theta_t}(m) \right)^2 \norm{\frac{d_{\mu}^{\pi^*}}{\mu}}_{\infty}^{-2} \delta_t^2 \text{\qquad (since $d_\mu^{\pi_{\theta_t}}(s)\geq (1-\gamma)\mu(s)$)}\\
+ &\leq -\frac{1}{2\beta} \left(1-\gamma\right)^2 \frac{1}{{M}}\left(\min\limits_{1\leq s\leq t}\min\limits_{m:\pi^*_m>0} \pi_{\theta_s}(m) \right)^2 \norm{\frac{d_{\mu}^{\pi^*}}{\mu}}_{\infty}^{-2} \delta_t^2\\
+ &= -\frac{1}{2\beta} \frac{1}{M}\left(1-\gamma\right)^2 \norm{\frac{d_\mu^{\pi^*}}{\mu}}_{\infty}^{-2}c_t^2 \delta_t^2,\\
+\end{align*}$$ where $c_t\bydef \min\limits_{1\leq s\leq t} \min\limits_{m:\pi^*_m>0}\pi_{\theta_s}(m)$. Hence we have that, $$\begin{equation}
+\label{eq:induction step}
+\delta_{t+1} \leq \delta_t - \frac{1}{2\beta} \frac{\left(1-\gamma\right)^2}{M} \norm{\frac{d_\mu^{\pi^*}}{\mu}}_{\infty}^{-2}c_t^2 \delta_t^2.
+\end{equation}$$ The rest of the proof follows from an induction argument over $t\geq1$.
+
+[Base case:]{.underline} Since $\delta_t\leq \frac{1}{1-\gamma}$, and $c_t \in (0,1)$, the result holds for all $t\leq \frac{2\beta M}{(1-\gamma)}\norm{\frac{d_\mu^{\pi^*}}{\mu}}_{\infty}^2.$
+
+For ease of notation, let $\phi_t\bydef \frac{2\beta M}{c_t^2(1-\gamma)^2}\norm{\frac{d_\mu^{\pi^*}}{\mu}}_{\infty}^2$. We need to show that $\delta_t\leq \frac{\phi_t}{ t}$, for all $t\geq 1$.
+
+[Induction step:]{.underline} Fix a $t\geq 2$, assume $\delta_t \leq \frac{\phi_t}{t}$.
+
+Let $g:\Real\to \Real$ be a function defined as $g(x) = x-\frac{1}{\phi_t}x^2$. One can verify easily that $g$ is monotonically increasing in $\left[ 0, \frac{\phi_t}{2}\right]$. Next with equation [\[eq:induction step\]](#eq:induction step){reference-type="ref" reference="eq:induction step"}, we have $$\begin{align*}
+ \delta_{t+1} &\leq \delta_t -\frac{1}{\phi_t} \delta_t^2\\
+ &= g(\delta_t)\\
+ &\leq g\left(\frac{\phi_t}{ t}\right)\\
+ &= \frac{\phi_t}{t} - \frac{\phi_t}{t^2}\\
+ &= \phi_t\left(\frac{1}{t}-\frac{1}{t^2} \right)\\
+ &\leq \phi_t\left(\frac{1}{t+1}\right)\\
+ &\leq \phi_{t+1}\left(\frac{1}{t+1}\right).
+\end{align*}$$ where the last step follows from the fact that $c_{t+1}\leq c_t$ (an infimum over a larger set does not increase), and hence $\phi_t\leq \phi_{t+1}$. This completes the proof.
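The induction can also be checked by iterating the worst case of the recursion in equation [eq:induction step] directly; a minimal sketch (assuming for simplicity that $\phi_t$ is held constant in $t$, as happens once $c_t$ has stopped decreasing):

```python
# Iterate delta_{t+1} = delta_t - delta_t^2/phi (the worst case of the recursion)
# and confirm the O(1/t) bound delta_t <= phi/t established by the induction.
phi = 50.0            # stand-in for phi_t, held constant for this sketch
delta = phi / 2.0     # any delta_1 in [0, phi/2] works (g is increasing there)
for t in range(1, 10_001):
    assert delta <= phi / t + 1e-12
    delta = delta - delta**2 / phi
```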
+
+::: {#lemma:gradient ascent lemma .lemma}
+**Lemma 16**. *Let $f:\Real^M\to \Real$ be $\beta-$smooth. Then a gradient ascent step with learning rate $\frac{1}{\beta}$, i.e., $x' = x + \frac{1}{\beta}\frac{df(x)}{dx}$, guarantees for all $x\in \Real^M$: $$f(x)-f(x') \leq -\frac{1}{2\beta}\norm{\frac{df(x)}{dx}}_2^2.$$*
+:::
+
+::: proof
+*Proof.* $$\begin{align*}
+ f(x) -f(x') &\leq -\innprod{\frac{\partial f(x)}{\partial x}, x'-x} + \frac{\beta}{2}.\norm{x'-x}_2^2\\
+ &= -\frac{1}{\beta} \norm{\frac{df(x)}{dx}}_2^2 + \frac{\beta}{2}\frac{1}{\beta^2}\norm{\frac{df(x)}{dx}}_2^2\\
+ &= -\frac{1}{2\beta}\norm{\frac{df(x)}{dx}}_2^2.
+\end{align*}$$ ◻
+:::
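Lemma 16 is easy to check on a concave quadratic, whose smoothness constant is the top eigenvalue of its curvature matrix; a minimal sketch (assuming NumPy; the quadratic is a random stand-in):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 6
C = rng.standard_normal((d, d)); C = C @ C.T     # PSD curvature matrix
beta = float(np.linalg.eigvalsh(C).max())        # f below is beta-smooth
b = rng.standard_normal(d)

f = lambda x: -0.5 * x @ C @ x + b @ x           # concave objective to ascend
grad = lambda x: -C @ x + b

x = rng.standard_normal(d)
x_next = x + grad(x) / beta                      # one ascent step with rate 1/beta
g = grad(x)
# The guarantee of Lemma 16: the ascent step improves f by at least ||grad||^2/(2*beta).
assert f(x) - f(x_next) <= -(g @ g) / (2 * beta) + 1e-9
```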
+
+We will begin with some useful lemmas.
+
+::: {#lemma:value of Lpsi .lemma}
+**Lemma 17**. *For any $\theta,\theta'\in \Real^M$, we have $\norm{\psi_\theta(m)-\psi_{\theta'}(m)}_2\leq \norm{\theta-\theta'}_2$.*
+:::
+
+::: proof
+*Proof.* Recall, $\psi_{\theta}(m):= \nabla_\theta \log \pi_\theta(m)$. Fix $m'\in [M]$, $$\begin{align*}
+ \frac{\partial\log\pi_\theta(m)}{\partial \theta_{m'}} &= \frac{\partial\log\left( \frac{e^{\theta_m}}{\sum\limits_{j=1}^M e^{\theta_j}} \right)}{\partial \theta_{m'}}\\
+ &= \frac{\partial}{\partial\theta_{m'}} \left(\theta_m-\log\left(\sum\limits_{j=1}^M e^{\theta_j} \right) \right)\\
+ &= \mathbbm{1}\{m'=m\} - \frac{e^{\theta_{m'}}}{\sum\limits_{j=1}^M e^{\theta_j} } \\
+ &= \mathbbm{1}\{m'=m\} - \pi_{\theta}(m').
+\end{align*}$$ $$\begin{align*}
+ \norm{\psi_\theta(m)-\psi_{\theta'}(m)}_2 &=\norm{\nabla_\theta \log \pi_{\theta}(m)-\nabla_{\theta'} \log \pi_{\theta'}(m)}_2\\
+ &=\norm{\pi_\theta(.) - \pi_{\theta'}(.) }_2\\
+ &\leq^{(*)} \norm{\theta-\theta'}_2.
+\end{align*}$$ Here (\*) follows from the fact that the softmax function is 1-Lipschitz [@softmax]. ◻
+:::
+
+::: {#lemma:value of Cpsi .lemma}
+**Lemma 18**. *For all $m\in [M]$ and $\theta\in \Real^M$, $\norm{\psi_\theta(m)}_2 \leq \sqrt{2}$.*
+:::
+
+::: proof
+*Proof.* Since $\psi_\theta(m)=\nabla_\theta \log \pi_\theta(m) = e_m-\pi_\theta(.)$, we have $\norm{\psi_\theta(m)}_2^2 = (1-\pi_\theta(m))^2 + \sum\limits_{m'\neq m}\pi_\theta(m')^2 \leq 2(1-\pi_\theta(m))^2 \leq 2$, where the middle inequality uses that the 2-norm of a probability vector is bounded by its 1-norm. ◻
+:::
+
+::: {#lemma:value of Cpi .lemma}
+**Lemma 19**. *For all $\theta,\theta'\in \Real^M$, $\norm{\pi_\theta(.)-\pi_{\theta'}(.)}_{TV}\leq \frac{\sqrt{M}}{2}\norm{\theta-\theta'}_2$.*
+:::
+
+::: proof
+*Proof.* $$\begin{align*}
+ \norm{\pi_\theta(.)-\pi_{\theta'}(.)}_{TV} &= \frac{1}{2}\norm{\pi_\theta(.)-\pi_{\theta'}(.)}_{1}\\
+ &\leq \frac{\sqrt{M}}{2}\norm{\pi_\theta(.)-\pi_{\theta'}(.)}_{2}.
+\end{align*}$$ The inequality follows from the relation between the 1-norm and the 2-norm in $\Real^M$. ◻
+:::
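Lemmas 17, 18 and 19 are all statements about the softmax map and can be checked directly; a minimal sketch (assuming NumPy; the parameter vectors are random stand-ins and the index $m$ is arbitrary):

```python
import numpy as np

def softmax(th):
    e = np.exp(th - th.max())
    return e / e.sum()

rng = np.random.default_rng(3)
Mn = 8
theta1, theta2 = rng.standard_normal(Mn), rng.standard_normal(Mn)
m = 2  # arbitrary controller index

# psi_theta(m) = grad_theta log pi_theta(m) = e_m - pi_theta (computed in Lemma 17).
psi = lambda th: np.eye(Mn)[m] - softmax(th)

# Lemma 17: psi is 1-Lipschitz in theta.
assert np.linalg.norm(psi(theta1) - psi(theta2)) <= np.linalg.norm(theta1 - theta2)
# Lemma 18: ||psi_theta(m)||_2 <= sqrt(2).
assert np.linalg.norm(psi(theta1)) <= np.sqrt(2)
# Lemma 19: TV distance <= sqrt(M)/2 * ||theta - theta'||_2.
tv = 0.5 * np.abs(softmax(theta1) - softmax(theta2)).sum()
assert tv <= np.sqrt(Mn) / 2 * np.linalg.norm(theta1 - theta2)
```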
+
+::: {#prop:restatement of Prop 1 in AC paper .proposition}
+**Proposition 20**. *For any $\theta,\theta'\in \Real^M$, $$\norm{\nabla V(\theta) - \nabla V(\theta')}_2\leq \sqrt{M}L_V \norm{\theta-\theta'}_2$$ where $L_V=\frac{2\sqrt{2}C_{\kappa \xi} +1}{1-\gamma}$, and $C_{\kappa \xi}=\left( 1+\left\lceil{\log_\xi\frac{1}{\kappa}}\right\rceil +\frac{1}{1-\xi} \right)$.*
+:::
+
+::: proof
+*Proof.* We follow the same steps as in Proposition 1 in [@Improving-AC-NAC], along with Lemmas [19](#lemma:value of Cpi){reference-type="ref" reference="lemma:value of Cpi"}, [18](#lemma:value of Cpsi){reference-type="ref" reference="lemma:value of Cpsi"}, [17](#lemma:value of Lpsi){reference-type="ref" reference="lemma:value of Lpsi"}, and the fact that the maximum reward is bounded by 1. ◻
+:::
+
+We will now restate a useful result from [@Improving-AC-NAC], about the convergence of the critic parameter $w_t$ to the equilibrium point $w^*$ of the underlying ODE, applied to our setting.
+
+::: {#prop: restatement of Lemma 2 in ac paper .proposition}
+**Proposition 21**. *Suppose Assumptions [\[assumption:bound on w\]](#assumption:bound on w){reference-type="ref" reference="assumption:bound on w"} and [\[assumption:ergodicity\]](#assumption:ergodicity){reference-type="ref" reference="assumption:ergodicity"} hold. If $\beta\leq \min\left\{\frac{\Gamma_L}{16},\frac{8}{\Gamma_L} \right\}$ and $H\geq \left( \frac{4}{\Gamma_L}+2\alpha \right)\left[\frac{1536[1+(\kappa-1)\xi]}{(1-\xi)\Gamma_L} \right]$, then we have $$\expect{\norm{w_{T_c}-w^*}_2^2}\leq \left(1-\frac{\Gamma_L}{16}\alpha \right)^{T_c}\norm{w_0-w^*}_2^2 + \left(\frac{4}{\Gamma_L}+2\alpha \right)\frac{1536(1+R_w^2)[1+(\kappa-1)\xi]}{(1-\xi)H}.$$*
+
+*If we further let $T_c\geq \frac{16}{\Gamma_L\alpha}\log \frac{2\norm{w_0-w^*}^2_2}{\epsilon}$ and $H\geq \left(\frac{4}{\Gamma_L}+2\alpha \right)\frac{3072(R_w^2+1)[1+(\kappa-1)\xi]}{(1-\xi)\Gamma_L\epsilon}$, then we have $\expect{\norm{w_{T_c}-w^*}_2^2}\leq \epsilon$ with total sample complexity given by $T_cH=\mathcal{O}\left(\frac{1}{\alpha\epsilon}\log\frac{1}{\epsilon} \right)$.*
+:::
+
+::: proof
+*Proof.* The proof follows along similar lines as Thm. 4 in @Improving-AC-NAC, using $\norm{\phi(s)(\gamma\phi(s')-\phi(s))^\top}_F\leq (1+\gamma)\leq 2$ and assuming $\norm{\phi(s)}_2\leq 1$ for all $s\in \cS$. ◻
+:::
+
+::: proof
+*Proof of Theorem [\[thm:AC main theorem\]](#thm:AC main theorem){reference-type="ref" reference="thm:AC main theorem"}.* Let $v_t(w):=\frac{1}{B}\sum\limits_{i=0}^{B-1}\cE_w(s_{t,i},m_{t,i},s_{t,i+1})\psi_{\theta_t}(m_{t,i})$ and $A_w(s,m):=\mathbb{E}_{\bar{P}}\left[\cE_w(s,m,s')|(s,m) \right]$ and $g(w,\theta):=\mathbb{E}_{\nu_\theta}[A_w(s,m)\psi_\theta(m)]$ for all $\theta\in \Real^M,w\in\Real^d,s\in \cS, m\in [M]$. Using Prop [20](#prop:restatement of Prop 1 in AC paper){reference-type="ref" reference="prop:restatement of Prop 1 in AC paper"} we get, $$\begin{align*}
+\begin{split}
+ V(\theta_{t+1}) &\geq V(\theta_t) + \innprod{\nabla_{\theta}V(\theta_t), \theta_{t+1}-\theta_t} - \frac{\sqrt{M}L_V}{2}\norm{\theta_{t+1}-\theta_t}_2^2\\
+ &=V(\theta_t) + \alpha\innprod{\nabla_{\theta}V(\theta_t), v_t(w_t)-\nabla_{\theta}V(\theta_t)+\nabla_{\theta}V(\theta_t)} - \frac{\sqrt{M}L_V\alpha^2}{2}\norm{v_t(w_t)}_2^2\\
+ &=V(\theta_t) +\alpha\norm{\nabla_{\theta}V(\theta_t)}_2^2 \\ &+\alpha\innprod{\nabla_{\theta}V(\theta_t), v_t(w_t)-\nabla_{\theta}V(\theta_t)}- \frac{\sqrt{M}L_V\alpha^2}{2}\norm{v_t(w_t)}_2^2\\
+ &\geq V(\theta_t) +\left(\frac{1}{2}\alpha- {\sqrt{M}L_V\alpha^2} \right)\norm{\nabla_{\theta}V(\theta_t)}_2^2 -\left(\frac{1}{2}\alpha+ {\sqrt{M}L_V\alpha^2} \right)\norm{v_t(w_t)-\nabla_{\theta}V(\theta_t)}_2^2
+ \end{split}
+\end{align*}$$ Taking expectations and rearranging, we have $$\begin{align*}
+ &\left(\frac{1}{2}\alpha- {\sqrt{M}L_V\alpha^2} \right)\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2|\cF_t}\\
+ &\leq \expect{V(\theta_{t+1})|\cF_t} -V(\theta_t) + \left(\frac{1}{2}\alpha+ {\sqrt{M}L_V\alpha^2} \right)\expect{\norm{v_t(w_t)-\nabla_{\theta}V(\theta_t)}_2^2|\cF_t}.
+\end{align*}$$ Next we will upper-bound $\expect{\norm{v_t(w_t)-\nabla_{\theta}V(\theta_t)}_2^2|\cF_t}$. $$\begin{align*}
+ & {\norm{v_t(w_t)-\nabla_{\theta}V(\theta_t)}_2^2}\\
+ &\leq 3\norm{v_t(w_t) -v_t(w^*_{\theta_t}) }_2^2 + 3\norm{v_t(w^*_{\theta_t})-g(w^*_{\theta_t})}_2^2 + 3\norm{g(w^*_{\theta_t})-\nabla_{\theta}V(\theta_t)}_2^2.
+\end{align*}$$ $$\begin{align*}
+ \norm{v_t(w_t) -v_t(w^*_{\theta_t}) }_2^2 &= \norm{\frac{1}{B} \sum\limits_{i=0}^{B-1} [\cE_{w_t}(s_{t,i}, m_{t,i}, s_{t, i+1})-\cE_{w^*_{\theta_t}}(s_{t,i}, m_{t,i}, s_{t, i+1})]\psi(m_{t,i})}_2^2 \\
+ &\leq \frac{1}{B}\sum\limits_{i=0}^{B-1}\norm{[\cE_{w_t}(s_{t,i}, m_{t,i}, s_{t, i+1})-\cE_{w^*_{\theta_t}}(s_{t,i}, m_{t,i}, s_{t, i+1})]\psi(m_{t,i})}_2^2\\
+ &\leq \frac{2}{B}\sum\limits_{i=0}^{B-1}\norm{[\cE_{w_t}(s_{t,i}, m_{t,i}, s_{t, i+1})-\cE_{w^*_{\theta_t}}(s_{t,i}, m_{t,i}, s_{t, i+1})]}_2^2\\
+ &=\frac{2}{B}\sum\limits_{i=0}^{B-1} \norm{(\gamma \phi(s_{t,i+1})-\phi(s_{t,i}))^\top(w_t-w^*_{\theta_t})}_2^2\\
+ &\leq \frac{8}{B}\sum\limits_{i=0}^{B-1} \norm{(w_t-w^*_{\theta_t})}_2^2=8 \norm{(w_t-w^*_{\theta_t})}_2^2.
+\end{align*}$$ Next we have, $$\begin{align*}
+ \norm{g(w^*_{\theta_t})-\nabla_{\theta}V(\theta_t)}_2^2 &=\norm{\mathbb{E}_{\nu_{\theta_t}}[A_{w^*_{\theta_t}}(s,m)\psi_{\theta_t}(m)]-\mathbb{E}_{\nu_{\theta_t}}[A_{\pi_{\theta_t}}(s,m)\psi_{\theta_t}(m)] }_2^2\\
+ &\leq 2 \mathbb{E}_{\nu_{\theta_t}}\norm{A_{w^*_{\theta_t}}(s,m)-A_{\pi_{\theta_t}}(s,m)}_2^2\\
+ &=2\mathbb{E}_{\nu_{\theta_t}}\left[ | \gamma\expect{V_{w^*_{\theta_t}}(s')-V_{\pi_{{\theta_t}}}(s')|s,m} +V_{\pi_{{\theta_t}}}(s)-V_{w^*_{\theta_t}}(s) |^2 \right]\\
+ &\leq 8 \Delta_{critic}.
+\end{align*}$$
+
+Finally, we bound the last term $\norm{v_t(w^*_{\theta_t})-g(w^*_{\theta_t})}_2^2$ using Assumption [\[assumption:ergodicity\]](#assumption:ergodicity){reference-type="ref" reference="assumption:ergodicity"}: $$\begin{align*}
+ \norm{v_t(w^*_{\theta_t})-g(w^*_{\theta_t})}_2^2 &\leq \expect{\norm{\frac{1}{B}\sum\limits_{i=0}^{B-1}\cE_{w^*_{\theta_t}}(s_{t,i}, m_{t,i}, s_{t,i+1})\psi_{\theta_t}(m_{t,i}) - \mathbb{E}_{\nu_{\theta_t}}[A_{w^*_{\theta_t}}(s,m)\psi_{\theta_t}(m)]}_2^2 |\cF_t}.
+\end{align*}$$ We now proceed in a similar manner as in [@Improving-AC-NAC] (Eqs. 24--26), and using Lemma [18](#lemma:value of Cpsi){reference-type="ref" reference="lemma:value of Cpsi"}, we have $$\begin{align*}
+ \expect{\norm{v_t(w^*_{\theta_t})-g(w^*_{\theta_t})}_2^2|\cF_t} &\leq \frac{32(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}.
+\end{align*}$$ Putting things back we have, $$\begin{align*}
+ \expect{{\norm{v_t(w_t)-\nabla_{\theta}V(\theta_t)}_2^2}\Big|\cF_t } &\leq \frac{96(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)} + {24} \expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 24 \Delta_{critic}.
+\end{align*}$$
+
+Hence we get, $$\begin{align*}
+\begin{split}
+ &\left(\frac{1}{2}\alpha- {\sqrt{M}L_V\alpha^2} \right)\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2}\\&\leq \expect{V(\theta_{t+1})} -\expect{V(\theta_t)} \\&+ \left(\frac{1}{2}\alpha+ {\sqrt{M}L_V\alpha^2} \right)\left(\frac{96(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}
+ + {24} \expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 24 \Delta_{critic}\right).
+ \end{split}
+\end{align*}$$ We put $\alpha=\frac{1}{4L_V\sqrt{M}}$ above to get, $$\begin{align*}
+ \begin{split}
+ \left(\frac{1}{16L_V\sqrt{M}} \right)\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2}&\leq \expect{V(\theta_{t+1})} -\expect{V(\theta_t)} \\&+ \left(\frac{1}{4L_V\sqrt{M}} \right)\left(\frac{96(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}
+ + {24} \expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 24 \Delta_{critic}\right).
+ \end{split}
+\end{align*}$$ which simplifies to $$\begin{align*}
+ \begin{split}
+ &\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2}\\&\leq 16L_V\sqrt{M}\left(\expect{V(\theta_{t+1})} -\expect{V(\theta_t)}\right) + \frac{384(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}
+ + {96} \expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 96 \Delta_{critic}.
+ \end{split}
+\end{align*}$$ Taking summation over $t=0,1,2,\ldots, T-1$ and dividing by $T$, $$\begin{align*}
+\begin{split}
+ &\expect{\norm{\nabla_{\theta}V(\theta_{\hat{T}})}_2^2}\\&=\frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2}\\
+ &\leq \frac{16L_V\sqrt{M}\left(\expect{V(\theta_{T})} -\expect{V(\theta_0)}\right)}{T} + \frac{384(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}
+ + {96} \frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 96 \Delta_{critic}\\
+ &\leq \frac{16L_V\sqrt{M}}{(1-\gamma)T} + \frac{384(1+R_w)^2[1+(\kappa-1)\xi]}{B(1-\xi)}
+ + {96} \frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{\norm{(w_t-w^*_{\theta_t})}_2^2} + 96 \Delta_{critic}
+ \end{split}
+\end{align*}$$ We now let $B\geq \frac{1152(1+R_w)^2[1+(\kappa-1)\xi]}{(1-\xi)\epsilon}$, $\expect{\norm{(w_t-w^*_{\theta_t})}_2^2} \leq \frac{\epsilon}{288}$ and $T\geq \frac{48L_V\sqrt{M}}{(1-\gamma)\epsilon}$; then we have $$\expect{\norm{\nabla_{\theta}V(\theta_{\hat{T}})}_2^2} \leq \epsilon + \cO(\Delta_{critic}).$$ This leads to a final sample complexity of $(B+HT_c)T=\left(\frac{1}{\epsilon}+\frac{\sqrt{M}}{\epsilon}\log\frac{1}{\epsilon} \right)\left(\frac{\sqrt{M}}{(1-\gamma)^2\epsilon} \right)=\cO\left(\frac{M}{(1-\gamma)^2\epsilon^2}\log\frac{1}{\epsilon}\right)$. ◻
+:::
+
+::::: proof
+*Proof.* We first show that the natural actor-critic improper learner converges to a stationary point. We then show convergence to the global optimum, which is the part that differs from [@Improving-AC-NAC].
+
+Let $v_t(w):=\frac{1}{B}\sum_{i=0}^{B-1} \cE_w(s_{t,i},m_{t,i}, s_{t,i+1})\psi_{\theta_t}(m_{t,i})$, $A_w(s,m):=\mathbb{E}_{\tilde{P}}[\cE(s,m,s')|s,m]$ and $g(w,\theta):=\mathbb{E}_{\nu_\theta}[A_w(s,m)\psi_\theta(m)]$ for $w\in \Real^d$ and $\theta\in \Real^M$. Also let $u_t(w):=[F_t(\theta_t)+\lambda I]^{-1}\left[\frac{1}{B}\sum_{i=0}^{B-1}\cE_w(s_{t,i},m_{t,i},s_{t,i+1})\psi_{\theta_t}(m_{t,i})\right] = [F_t(\theta_t)+\lambda I]^{-1}v_t(w)$.
+
+Recall Prop [20](#prop:restatement of Prop 1 in AC paper){reference-type="ref" reference="prop:restatement of Prop 1 in AC paper"}. We then have the following.
+
+::: {#lemma:NAC stationary pt convergence .lemma}
+**Lemma 22**. *Assume $\sup_{s\in \cS}\norm{\phi(s)}_2 \leq 1$. Under Assumptions [\[assumption:ergodicity\]](#assumption:ergodicity){reference-type="ref" reference="assumption:ergodicity"} and [\[assumption:bound on w\]](#assumption:bound on w){reference-type="ref" reference="assumption:bound on w"} with step-sizes chosen as $\alpha=\left(\frac{\lambda^2}{2\sqrt{M}L_V(1+\lambda)}\right)$, we have $$\begin{align*}
+\begin{split}
+ \mathbb{E}[\norm{\nabla_\theta V(\theta_{\hat{T}})}_2^2]
+ &=\frac{1}{T}\sum_{t=0}^{T-1} \mathbb{E}[\norm{\nabla_\theta V(\theta_t)}_2^2] \\
+ &\leq \frac{16\sqrt{M}L_V(1+\lambda)^2}{\lambda^2}\frac{\expect{V(\theta_T)}-V(\theta_0)}{T} + \frac{108}{\lambda^2}[2(1+\lambda)^2+\lambda^2]\frac{\sum_{t=0}^{T-1}\expect{\norm{w_t-w_{\theta_t}^*}_2^2}}{T} \\
+ &+ [2(1+\lambda)^2 +\lambda^2]\left( \frac{32}{\lambda^4(1-\gamma)^2} + \frac{432(1+2R_w)^2}{\lambda^2} \right) \frac{1+(\kappa-1)\xi}{(1-\xi)B} +\frac{216}{\lambda^2}[2(1+\lambda)^2+\lambda^2]\Delta_{critic}.
+\end{split}
+\end{align*}$$*
+:::
+
+::: proof
+*Proof.* The proof is similar to the first part of the proof of Thm 6 in [@Improving-AC-NAC] and to that of Thm [\[thm:AC main theorem\]](#thm:AC main theorem){reference-type="ref" reference="thm:AC main theorem"}, along with using Prop [20](#prop:restatement of Prop 1 in AC paper){reference-type="ref" reference="prop:restatement of Prop 1 in AC paper"} and Lemmas [19](#lemma:value of Cpi){reference-type="ref" reference="lemma:value of Cpi"}, [18](#lemma:value of Cpsi){reference-type="ref" reference="lemma:value of Cpsi"} and [17](#lemma:value of Lpsi){reference-type="ref" reference="lemma:value of Lpsi"}. ◻
+:::
+
+We now move to proving the global optimality of the natural actor-critic based improper learner. Let $KL(\cdot,\cdot)$ denote the KL-divergence between two distributions. We denote $\tt{D}(\theta):=KL(\pi^*, \pi_\theta)$, $u_{\theta_t}^\lambda:=(F(\theta_t)+\lambda I)^{-1}\nabla_\theta V(\theta_t)$ and $u^\dagger_{\theta_t}:=F(\theta_t)^\dagger \nabla_\theta V(\theta_t)$. We see that $$\begin{align*}
+\allowdisplaybreaks
+\begin{split}
+\allowdisplaybreaks
+ &\tt{D}({\theta_t}) -\tt{D}({\theta_{t+1}}) \\
+ &= \sum\limits_{m=1}^M \pi^*(m) \left[\log\pi_{\theta_{t+1}}(m) - \log\pi_{\theta_{t}}(m)\right]\\
+ &\stackrel{(i)}{=} \sum\limits_{s\in \cS}d_\rho^{\pi^*}(s)\sum\limits_{m=1}^M \pi^*(m) \left[\log\pi_{\theta_{t+1}}(m) - \log\pi_{\theta_{t}}(m)\right]\\
+ &= \mathbb{E}_{\nu_{\pi^*}} [\log\pi_{\theta_{t+1}}(m) - \log\pi_{\theta_{t}}(m)]\\
+ &\stackrel{(ii)}{\geq} \mathbb{E}_{\nu_{\pi^*}}\left[\nabla_\theta\log(\pi_{\theta_t}(m)) \right]^\top(\theta_{t+1}-\theta_t) -\frac{\norm{\theta_{t+1}-\theta_t}_2^2}{2}\\
+ &=\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top(\theta_{t+1}-\theta_t) -\frac{\norm{\theta_{t+1}-\theta_t}_2^2}{2}\\
+ &=\alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top u_t(w_t) -\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &=\alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top u^\lambda_{\theta_t} + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &=\alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top u^\dagger_{\theta_t} + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &=\alpha\mathbb{E}_{\nu_{\pi^*}}[A_{\pi_{\theta_t}}(s,m)] + \alpha\mathbb{E}_{\nu_{\pi^*}}[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)] + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) \\&+ \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\stackrel{(iii)}{=} (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) + \alpha\mathbb{E}_{\nu_{\pi^*}}[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)] + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) \\&+ \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\geq (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \alpha\sqrt{\mathbb{E}_{\nu_{\pi^*}}[[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)]^2]} + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) \\&+ \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\stackrel{(iv)}{\geq} (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \sqrt{\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_t}}}}_{\infty}}\alpha\sqrt{\mathbb{E}_{\nu_{\pi^*}}[[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)]^2]} + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) \\&+ \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\stackrel{(v)}{\geq} (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \sqrt{\frac{1}{1-\gamma}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\alpha\sqrt{\mathbb{E}_{\nu_{\pi^*}}[[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)]^2]} \\&+\alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u^\lambda_{\theta_t}-u^\dagger_{\theta_t}) + \alpha\mathbb{E}_{\nu_{\pi^*}}\left[\psi_{\theta_t}(m) \right]^\top (u_t(w_t)-u^\lambda_{\theta_t})-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\stackrel{(vi)}{\geq} (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \sqrt{\frac{1}{1-\gamma}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\alpha\sqrt{\mathbb{E}_{\nu_{\pi^*}}[[\psi_{\theta_t}(m)^\top u^\dagger_{\theta_t} - A_{\pi_{\theta_t}}(s,m)]^2]} - \alpha C_{soft}\lambda \\&- 2\alpha\norm{ u_t(w_t)-u^\lambda_{\theta_t}}_2-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}.
+ \end{split}
+\end{align*}$$ where (i) is by taking an extra expectation without changing the inner summand, (ii) follows by Lemma [17](#lemma:value of Lpsi){reference-type="ref" reference="lemma:value of Lpsi"} and Lemma 5 in [@Improving-AC-NAC], (iii) follows by the value difference lemma (Lemma [12](#lemma:value diffence lemma){reference-type="ref" reference="lemma:value diffence lemma"}), (iv) follows by defining $\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_t}}}}_{\infty}:=\max_{s,m}\frac{\nu_{\pi^*}(s,m)}{\nu_{\pi_{\theta_t}}(s,m)}$, (v) follows because $\nu_{\pi_{\theta_t}}(s,m) \geq (1-\gamma)\nu_{\pi_{\theta_0}}(s,m)$, and (vi) follows by Lemma 6 in [@Improving-AC-NAC] and Lemma [18](#lemma:value of Cpsi){reference-type="ref" reference="lemma:value of Cpsi"}.
+
+Next, we denote $\Delta_{actor}:=\max_{\theta\in \Real^M}\min_{w\in \Real^d}\mathbb{E}_{\nu_{\pi_\theta}}[[\psi_\theta^\top w - A_{\pi_\theta}(s,m)]^2]$, the actor error. Continuing from the display above, $$\begin{align*}
+\allowdisplaybreaks
+\begin{split}
+ &\tt{D}({\theta_t}) -\tt{D}({\theta_{t+1}}) \\
+ &\geq(1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \sqrt{\frac{1}{1-\gamma}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\alpha\sqrt{\Delta_{actor}} - \alpha C_{soft}\lambda \\&- 2\alpha\norm{ u_t(w_t)-u^\lambda_{\theta_t}}_2-\frac{\alpha^2\norm{u_t(w_t)}_2^2}{2}\\
+ &\geq (1-\gamma) (V({\pi^*})-V(\pi_{\theta_t})) - \sqrt{\frac{1}{1-\gamma}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\alpha\sqrt{\Delta_{actor}} - \alpha C_{soft}\lambda \\&- 2\alpha\norm{ u_t(w_t)-u^\lambda_{\theta_t}}_2-\frac{\alpha^2\norm{u_t(w_t)-u^\lambda_{\theta_t}}_2^2}{2}-\frac{\alpha^2}{\lambda^2}\norm{\nabla_{\theta}V(\theta_t)}_2^2.\\
+\end{split}
+\end{align*}$$ Rearranging and dividing by $(1-\gamma)\alpha$, and taking expectation both sides we get $$\begin{align*}
+\begin{split}
+ & V({\pi^*})-\expect{V(\pi_{\theta_t})}\\
+ &\leq \frac{\expect{\tt{D}(\theta_t)}-\expect{\tt{D}(\theta_{t+1})}}{(1-\gamma)\alpha} +\frac{2\sqrt{\mathbb{E}[\norm{ u_t(w_t)-u^\lambda_{\theta_t}}_2^2]}}{1-\gamma} + \frac{\alpha\expect{\norm{u_t(w_t)-u^\lambda_{\theta_t}}_2^2}}{2(1-\gamma)} \\&+ \frac{\alpha}{\lambda^2(1-\gamma)}\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2} + \sqrt{\frac{1}{(1-\gamma)^3}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}.\\
+ \end{split}
+\end{align*}$$ Next we use the same argument as in Eq. (33) and Lemma 2 in [@Improving-AC-NAC] to bound the second term. $$\begin{align*}
+ \expect{\norm{u_t(w_t)-u^\lambda_{\theta_t}}_2^2} &\leq \frac{C}{B}+ \frac{108\expect{\norm{w_t-w^*_{\theta_t}}_2^2}}{\lambda^2} + \frac{216\Delta_{critic}}{\lambda^2}.
+\end{align*}$$ where $C:=\frac{18}{\lambda^2}\cdot\frac{24(1+2R_w)^2[1+(\kappa-1)\xi]}{1-\xi} + \frac{4}{\lambda^4(1-\gamma)^2}\cdot\frac{8[1+(\kappa-1)\xi]}{1-\xi}.$ Using this in the bound above and using $\sqrt{a+b}\leq \sqrt{a}+\sqrt{b}$ for positive $a,b$, we have, $$\begin{align*}
+ \begin{split}
+ & V({\pi^*})-\expect{V(\pi_{\theta_t})}\\
+ &\leq \frac{\expect{\tt{D}(\theta_t)}-\expect{\tt{D}(\theta_{t+1})}}{(1-\gamma)\alpha} +\frac{2}{1-\gamma}\left( \sqrt{\frac{C}{B}}+ 11\sqrt{\frac{\expect{\norm{w_t-w^*_{\theta_t}}_2^2}}{\lambda^2}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^2}} \right) \\&+ \frac{\alpha}{2(1-\gamma)}
+ \left({\frac{C}{B}}+ {108\frac{\expect{\norm{w_t-w^*_{\theta_t}}_2^2}}{\lambda^2}} + {216\frac{\Delta_{critic}}{\lambda^2}} \right)
+ \\&+ \frac{\alpha}{\lambda^2(1-\gamma)}\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2} + \sqrt{\frac{1}{(1-\gamma)^3}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}.
+ \end{split}
+\end{align*}$$ Summing over all $t=0,1,\ldots,T-1$ and then dividing by $T$ we get, $$\begin{align*}
+ \begin{split}
+ &V({\pi^*})-\frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{V(\pi_{\theta_t})}\\
+ & \leq \frac{{\tt{D}(\theta_0)}-\expect{\tt{D}(\theta_{T})}}{(1-\gamma)\alpha T} +\frac{2}{(1-\gamma)}\left( \sqrt{\frac{C}{B}} + 15\sqrt{\frac{\Delta_{critic}}{\lambda^2}} \right) +\frac{22}{(1-\gamma)T}\sum_{t=0}^{T-1}\sqrt{\frac{\expect{\norm{w_t-w^*_{\theta_t}}_2^2}}{\lambda^2}} \\&+ \frac{\alpha}{2(1-\gamma)}
+ \left({\frac{C}{B}} + {216\frac{\Delta_{critic}}{\lambda^2}} \right) + \frac{54\alpha}{(1-\gamma)T}\sum\limits_{t=0}^{T-1}{\frac{\expect{\norm{w_t-w^*_{\theta_t}}_2^2}}{\lambda^2}}
+ \\&+ \frac{\alpha}{\lambda^2(1-\gamma)T}\sum\limits_{t=0}^{T-1}\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2} + \sqrt{\frac{1}{(1-\gamma)^3}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\sqrt{\Delta_{actor}} + \frac{C_{soft}\lambda}{1-\gamma}.
+ \end{split}
+\end{align*}$$ Substituting the step-size $\alpha \leq\frac{\lambda^2}{2\sqrt{M}L_V(1+\lambda)}$, we get, $$\begin{align*}
+ \begin{split}
+ &V({\pi^*})-\frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{V(\pi_{\theta_t})}\\
+ &\leq C_1\frac{\sqrt{M}}{T} + \frac{C_2}{\sqrt{B}} + C_3\sqrt{\Delta_{critic}} + \frac{C_4}{T}\sum_{t=0}^{T-1}\sqrt{\expect{\norm{w_t-w^*_{\theta_t}}_2^2}} \\
+ &+ \frac{C_5}{B} + C_6\sqrt{\Delta_{critic}} + \frac{C_7}{T}\sum\limits_{t=0}^{T-1}\expect{\norm{w_t-w^*_{\theta_t}}_2^2} + \frac{C_8}{T}\sum\limits_{t=0}^{T-1}\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2} \\
+ &+ \sqrt{\frac{1}{(1-\gamma)^3}\norm{\frac{\nu_{\pi^*}}{\nu_{\pi_{\theta_0}}}}_{\infty}}\sqrt{\Delta_{actor}} + C_9\lambda.
+ \end{split}
+\end{align*}$$ Letting $T=\mathcal{O}\left( \frac{\sqrt{M}}{(1-\gamma)^2\epsilon} \right)$ and $B=\cO\left( \frac{1}{(1-\gamma)^2\epsilon^2} \right)$, we have $\expect{\norm{\nabla_{\theta}V(\theta_t)}_2^2}\leq \epsilon^2$ and $$V({\pi^*})-\frac{1}{T}\sum\limits_{t=0}^{T-1}\expect{V(\pi_{\theta_t})} \leq \epsilon + \cO\left(\sqrt{\frac{\Delta_{actor}}{(1-\gamma)^3} }\right) + \cO(\Delta_{critic}) + \cO(\lambda).$$ This leads to the total sample complexity $$(B+HT_c)T = \cO\left(\left( \frac{1}{(1-\gamma)^2\epsilon^2} + \frac{\sqrt{M}}{\epsilon^2}\log\frac{1}{\epsilon} \right)\frac{\sqrt{M}}{(1-\gamma)^2\epsilon}\right) = \cO\left(\frac{M}{(1-\gamma)^4\epsilon^3} \log\frac{1}{\epsilon}\right).$$ ◻
+:::::
+
+In this section we describe the details of the experiments in Sec. [\[sec:simulation-results\]](#sec:simulation-results){reference-type="ref" reference="sec:simulation-results"}. Recall that since neither value functions nor value gradients are available in closed form, we modify SoftMax PG (Algorithm [\[alg:mainPolicyGradMDP\]](#alg:mainPolicyGradMDP){reference-type="ref" reference="alg:mainPolicyGradMDP"}) to make it generally implementable using a combination of (1) rollouts to estimate the value function of the current (improper) policy and (2) a stochastic-approximation-based approach to estimate its value gradient.
+
+The Softmax PG with Gradient Estimation or **SPGE** (Algorithm [\[alg:mainPolicyGradMDPgradEst\]](#alg:mainPolicyGradMDPgradEst){reference-type="ref" reference="alg:mainPolicyGradMDPgradEst"}), and the gradient estimation algorithm [\[alg:gradEst\]](#alg:gradEst){reference-type="ref" reference="alg:gradEst"}, *GradEst*, are shown below.
+
+::::: minipage
+:::: algorithm
+::: algorithmic
+learning rate $\eta>0$, perturbation parameter $\alpha>0$, Initial state distribution $\mu$ Initialize each $\theta^1_m=1$, for all $m\in [M]$, $s_1\sim \mu$ Choose controller $m_t\sim \pi_t$. Play action $a_t \sim K_{m_t}(s_{t},:)$. Observe $s_{t+1}\sim \tP(.|s_t,a_t)$. ${\widehat{\nabla_{\theta^t} V^{\pi_{\theta_t}}(\mu)}}= \text{\tt GradEst}({\theta_t},\alpha,\mu)$ Update:\
+$\theta^{t+1} = \theta^{t}+ \eta. \widehat{\nabla_{\theta^t} V^{\pi_{\theta_t}}(\mu)}$.
+:::
+::::
+:::::
+
+::::: minipage
+:::: algorithm
+::: algorithmic
+Policy parameters $\theta$, parameter $\alpha>0$, Initial state distribution $\mu$. $u^i\sim Unif(\mathbb{S}^{M-1}).$ $\theta_\alpha = \theta+\alpha.u^i$ $\pi_\alpha = \softmax(\theta_\alpha)$ Generate trajectory $(s_0,a_0,r_0,s_1,a_1,r_1,\ldots, s_{\tt{lt}},a_{\tt{lt}},r_{\tt{lt}})$ using the policy $\pi_\alpha$ with $s_0 \sim \mu$. $\tt{reward}^l=\sum\limits_{j=0}^{\tt{lt}}\gamma^jr_j$ $\tt{mr}(i) = \tt{mean}(\tt{reward})$ $\tt{GradValue} = \frac{1}{\#\tt{runs}}\sum\limits_{i=1}^{\#\tt{runs}}{\tt{mr}}(i).u^i.\frac{M}{\alpha}.$ Return $\tt{GradValue}$.
+:::
+::::
+:::::
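+
+In code, the heart of *GradEst* is a one-point zeroth-order gradient estimator: perturb $\theta$ along a random direction on the unit sphere, roll out the perturbed softmax policy, and average. A minimal Python sketch follows; the callable `rollout_value` is our own placeholder for the rollout-based estimate of the perturbed policy's value and is not part of the algorithm listing above.
+
+```python
+import numpy as np
+
+def grad_est(theta, alpha, rollout_value, n_runs=10, rng=None):
+    """One-point zeroth-order estimate of the value gradient (sketch of GradEst).
+
+    `rollout_value(theta)` is a caller-supplied stub standing in for the mean
+    discounted reward of rollouts under softmax(theta).
+    """
+    rng = np.random.default_rng(rng)
+    M = theta.shape[0]
+    grad = np.zeros(M)
+    for _ in range(n_runs):
+        u = rng.normal(size=M)
+        u /= np.linalg.norm(u)                 # u ~ Unif(S^{M-1})
+        mr = rollout_value(theta + alpha * u)  # mean reward at perturbed point
+        grad += mr * u * (M / alpha)           # one-point gradient estimate
+    return grad / n_runs
+```
+
+For smooth value functions this estimator is, up to smoothing bias, unbiased for the gradient, at the cost of high variance, which is why the listing averages over $\#\tt{runs}$ perturbations.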
+
+Next we report some extra simulations that we performed in different environments.
+
+We consider a linear chain MDP as shown in Figure [4](#fig:linearChainMDP){reference-type="ref" reference="fig:linearChainMDP"}. As evident from the figure, $\abs{\cS}=10$ and the learner has only two actions available, $\cA = \{\tt{left}, \tt{right}\}$; hence the name 'chain'. The numbers on the arrows represent the reward obtained with the transition. The initial state is $s_1$ and $s_{10}$ is the terminal state. Let us define two base controllers, $K_1$ and $K_2$, as follows. $$\begin{align*}
+ K_1(\tt{left}\given s_j) &=\begin{cases}
+ 1, & j\in[9]\backslash \{5\}\\
+ 0.1, & j = 5\\
+ 0, & j=10.
+ \end{cases}
+\end{align*}$$ $$\begin{align*}
+ K_2(\tt{left}\given s_j) &=\begin{cases}
+ 1, & j\in[9]\backslash \{6\}\\
+ 0.1, & j = 6\\
+ 0, & j=10.
+ \end{cases}
+\end{align*}$$ and obviously $K_i(\tt{right}|s_j) = 1-K_i(\tt{left}|s_j)$ for $i=1,2$. An improper mixture of the two controllers, i.e., $(K_1+K_2)/2$, is optimal in this case. We show that our policy gradient indeed converges to the 'correct' combination; see Figure [5](#fig:Chainfigure){reference-type="ref" reference="fig:Chainfigure"}.
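+
+For concreteness, the two base controllers and their mixture can be written as $S\times A$ row-stochastic matrices; a minimal sketch (indexing is $0$-based, so row $j$ corresponds to state $s_{j+1}$):
+
+```python
+import numpy as np
+
+n_states = 10
+
+# Row j holds P(left | s_{j+1}); the 'right' probability is the complement.
+K1_left = np.ones(n_states); K1_left[4] = 0.1; K1_left[9] = 0.0  # K_1: row s_5 is 0.1
+K2_left = np.ones(n_states); K2_left[5] = 0.1; K2_left[9] = 0.0  # K_2: row s_6 is 0.1
+
+def controller(left_probs):
+    # Stack into an S x A matrix with columns (left, right); rows are distributions.
+    return np.column_stack([left_probs, 1.0 - left_probs])
+
+K1, K2 = controller(K1_left), controller(K2_left)
+K_mix = 0.5 * (K1 + K2)  # the improper mixture (K_1 + K_2) / 2
+```
+
+Note that $K_{\tt{mix}}$ splits probability $0.55/0.45$ between the two actions at states $s_5$ and $s_6$; these are exactly the factors that appear in the value calculation for $K_{\tt{mix}}$ below.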
+
+{#fig:linearChainMDP}
+
+
+Figure: Softmax PG algorithm applied to the linear chain MDP with various randomly chosen initial distributions. The plot shows the probability of choosing controller $K_1$, averaged over $\#\tt{trials}$ trials.
+
+
+Here we provide an elementary calculation supporting our claim that the mixture $K_{\tt{mix}}:=(K_1+K_2)/2$ is indeed better than applying $K_1$ or $K_2$ for all time. We first analyze the value function of $K_i, i=1,2$ (the two values are the same due to the *symmetry* of the problem and the probability values described). $$\begin{align*}
+ V^{K_i}(s_1) &= \expect{\sum\limits_{t\geq0} \gamma^tr_t(a_t,s_t)}\\
+ &=0.1\times \gamma^9+0.1\times 0.9\times 0.1\times \gamma^{11} + 0.1\times 0.9\times 0.1\times0.9\times 0.1\times \gamma^{13} \ldots\\
+ &=0.1\times \gamma^9 \left(1+\left(0.1\times 0.9 \gamma^2\right)+ \left(0.1\times 0.9 \gamma^2\right)^2+\ldots \right)=\frac{0.1\times \gamma^9}{1-0.1\times0.9\times \gamma^2}.
+\end{align*}$$ We next analyze the value when the true mixture controller, i.e., $K_{\tt{mix}}$, is applied to the above MDP. The analysis is a little more intricate than the above. We make use of the following key observations, which are elementary but crucial.
+
+1. Let $\tt{Paths}$ be the set of all state sequences that start at $s_1$, terminate at $s_{10}$, and can be generated under the policy $K_{\tt{mix}}$. Observe that $$\begin{equation}
+    V^{K_{\tt{mix}}}(s_1) = \sum\limits_{\underline{p}\in \tt{Paths}}\gamma^{{\tt{length}}(\underline{p})}\prob{\underline{p}}\cdot 1.
+    \end{equation}$$ Recall that the reward obtained from the transition $s_9\to s_{10}$ is 1.
+
+2. The number of distinct paths with exactly $n$ loops is $2^n$.
+
+3. The probability of each such distinct path with $n$ loops, weighted by its discount factor $\gamma^{9+2n}$, is: $$\begin{align*}
+ &=\underbrace{(0.55\times 0.45)\times (0.55\times 0.45)\times \ldots (0.55\times 0.45)}_{n \,\tt{ times}}\times 0.55\times 0.55\times \gamma^{9+2n}\\
+ &=\left(0.55\right)^2\times \gamma^9\left(0.55\times 0.45\times \gamma^2 \right)^n.
+ \end{align*}$$
+
+4. Finally, we put everything together to get: $$\begin{align*}
+ V^{K_{\tt{mix}}}(s_1) = &\sum\limits_{n=0}^{\infty} 2^n\times \left(0.55\right)^2 \times \gamma^9\times \left(0.55\times 0.45\times \gamma^2 \right)^n\\
+ &=\frac{\left(0.55\right)^2\times \gamma^9}{1-2\times 0.55\times0.45\times \gamma^2} > V^{K_i}(s_1).
+ \end{align*}$$
+
+This shows that a mixture performs better than the constituent controllers. The plot in Fig. [5](#fig:Chainfigure){reference-type="ref" reference="fig:Chainfigure"} shows that the Softmax PG algorithm (even with estimated gradients and value functions) correctly converges to the $(0.5,0.5)$ mixture.
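+
+The two closed-form values derived above are easy to compare numerically; a quick check with $\gamma=0.9$ (the discount factor used in our simulations):
+
+```python
+gamma = 0.9
+
+# Closed forms derived above for the linear chain MDP.
+V_base = 0.1 * gamma**9 / (1 - 0.1 * 0.9 * gamma**2)            # V^{K_i}(s_1), i = 1, 2
+V_mix = 0.55**2 * gamma**9 / (1 - 2 * 0.55 * 0.45 * gamma**2)   # V^{K_mix}(s_1)
+
+ratio = V_mix / V_base  # how much better the mixture is
+```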
+
+
+Figure: The Softmax policy gradient algorithm converges to the best mixture policy.
+
+
+We study two different settings: (1) the optimal policy is a strict improper combination of the available controllers, and (2) the optimal policy lies at a corner point, i.e., one of the available controllers is itself optimal. Our simulations show that in both cases, PG converges to the correct controller distribution.
+
+Recall the example that we discussed in Sec. [\[sec:motivating-queueing-example\]](#sec:motivating-queueing-example){reference-type="ref" reference="sec:motivating-queueing-example"}. We consider the case with Bernoulli arrivals with rates $\boldsymbol{\lambda}=[\lambda_1,\lambda_2]$, and we are given two base/atomic controllers $\{K_1,K_2\}$, where controller $K_i$ serves Queue $i$ with probability $1$, $i=1,2$. As can be seen in Fig. [\[subfig:equal arrival rate\]](#subfig:equal arrival rate){reference-type="ref" reference="subfig:equal arrival rate"}, when $\boldsymbol{\lambda}=[0.49,0.49]$ (equal arrival rates), GradEst converges to an improper mixture policy that serves each queue with probability $0.5$. Note that this strategy also stabilizes the system, whereas both base controllers lead to instability (the queue length of the unserved queue grows without bound). Figure [\[subfig:unequal arrival rate\]](#subfig:unequal arrival rate){reference-type="ref" reference="subfig:unequal arrival rate"} shows that, with unequal arrival rates too, GradEst quickly converges to the best policy.
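+
+The stability claim is easy to reproduce in a toy simulation. The sketch below is our own simplified model (one packet served per slot, Bernoulli($0.49$) arrivals, no queue cap), not the GradEst experiment itself; it compares the time-averaged total queue length under base controller $K_1$ against the $[0.5,0.5]$ mixture:
+
+```python
+import numpy as np
+
+def avg_queue_length(serve_probs, lam=(0.49, 0.49), horizon=20000, seed=0):
+    """Time-averaged total queue length when queue i is served w.p. serve_probs[i]."""
+    rng = np.random.default_rng(seed)
+    q = np.zeros(2)
+    total = 0.0
+    for _ in range(horizon):
+        q += rng.random(2) < np.asarray(lam)  # Bernoulli arrivals
+        i = rng.choice(2, p=serve_probs)      # pick a queue to serve this slot
+        q[i] = max(q[i] - 1.0, 0.0)           # serve one packet (queue may be empty)
+        total += q.sum()
+    return total / horizon
+
+only_K1 = avg_queue_length([1.0, 0.0])  # queue 2 is never served: unstable
+mixture = avg_queue_length([0.5, 0.5])  # each queue served at rate 0.5 > 0.49: stable
+```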
+
+Fig. [\[subfig:Value function comparisom\]](#subfig:Value function comparisom){reference-type="ref" reference="subfig:Value function comparisom"} shows the evolution of the value function of GradEst (in blue) compared with those of the base controllers (red) and the *Longest Queue First* policy (LQF) which, as the name suggests, always serves the longest queue in the system (black). LQF, like any policy that always serves a nonempty queue in the system whenever there is one[^2], is known to be optimal in the sense of delay minimization for this system [@LQF2016]. See Sec. [11](#sec:simulation-details){reference-type="ref" reference="sec:simulation-details"} in the Appendix for more details about this experiment.
+
+Finally, Fig. [\[subfig:3experts\]](#subfig:3experts){reference-type="ref" reference="subfig:3experts"} shows the result of the second experimental setting with three base controllers, one of which is delay optimal. The first two are $K_1,K_2$ as before and the third controller, $K_3$, is LQF. Notice that $K_1,K_2$ are both queue length-agnostic, meaning they could attempt to serve empty queues as well. LQF, on the other hand, always and only serves nonempty queues. Hence, in this case the optimal policy is attained at one of the corner points, i.e., $[0,0,1]$. The plot shows the PG algorithm converging to the correct point on the simplex.
+
+Here, we justify the values of the two policies that each always serve one fixed queue, plotted as straight lines in Figure [\[subfig:Value function comparisom\]](#subfig:Value function comparisom){reference-type="ref" reference="subfig:Value function comparisom"}. Let us find the value of the policy which always serves Queue 1; the calculation for the other expert (serving Queue 2 only) is similar. Let $q_i(t)$ denote the length of Queue $i$ at time $t$. Since the expert (policy) always serves only one of the queues, the expected *cost* suffered in round $t$ is $c_t=q_1(t)+q_2(t) = 0 + t\lambda_2$. Let us start with empty queues at $t=0$. $$\begin{align*}
+ V^{Expert 1}(\mathbf{\underline{0}}) &= \expect{\sum\limits_{t=0}^T \gamma^tc_t\given Expert 1}\\
+ &= \sum\limits_{t=0}^T \gamma^t. t. \lambda_2\\
+ &\leq\lambda_2.\frac{\gamma}{(1-\gamma)^2}.
+\end{align*}$$ With the values $\gamma=0.9$ and $\lambda_2=0.49$, we get $V^{Expert 1}(\mathbf{\underline{0}})\leq 44.1$, which is in good agreement with the bound shown in the figure.
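+
+This bound is elementary to check numerically, using $\sum_{t\geq 0} t\gamma^t = \gamma/(1-\gamma)^2$:
+
+```python
+gamma, lam2 = 0.9, 0.49
+
+# Truncated series vs. the closed-form bound lam2 * gamma / (1 - gamma)^2.
+series = sum(gamma**t * t * lam2 for t in range(10_000))
+bound = lam2 * gamma / (1 - gamma) ** 2  # = 44.1 for these values
+```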
+
+
+Figure: An example of a path graph network. The interference constraints are such that physically adjacent queues cannot be served simultaneously.
+
+
+Consider a system of parallel transmitter-receiver pairs as shown in Figure [\[subfig:PGN-TxRx\]](#subfig:PGN-TxRx){reference-type="ref" reference="subfig:PGN-TxRx"}. Due to the physical arrangement of the Tx-Rx pairs, no two adjacent systems can be served simultaneously because of interference. This type of communication system is commonly referred to as a *path graph network* [@Mohan2020ThroughputOD]. Figure [\[subfig:PGN-conflict graph\]](#subfig:PGN-conflict graph){reference-type="ref" reference="subfig:PGN-conflict graph"} shows the corresponding *conflict graph*. Each Tx-Rx pair can be thought of as a queue, and an edge between two queues indicates that they cannot be served simultaneously. The sets of queues that *can* be served simultaneously are called *independent sets* in the queuing theory literature. In the figure above, the independent sets are $\{\emptyset,\{1\}, \{2\}, \{3\}, \{4\}, \{1,3\}, \{2,4\}, \{1,4\} \}$.
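+
+The list of independent sets above can be recovered mechanically by brute-force search over the conflict graph; a small sketch:
+
+```python
+from itertools import combinations
+
+# Conflict graph of the 4-queue path network: physically adjacent queues interfere.
+queues = [1, 2, 3, 4]
+conflicts = {(1, 2), (2, 3), (3, 4)}
+
+def independent_sets(nodes, edges):
+    """Enumerate all subsets of nodes containing no conflicting pair."""
+    sets_found = []
+    for r in range(len(nodes) + 1):
+        for subset in combinations(nodes, r):
+            pairs = combinations(subset, 2)
+            if not any((a, b) in edges or (b, a) in edges for a, b in pairs):
+                sets_found.append(set(subset))
+    return sets_found
+
+ind_sets = independent_sets(queues, conflicts)  # 8 sets, matching the list above
+```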
+
+Finally, in Table [2](#table:meandelay){reference-type="ref" reference="table:meandelay"}, we report the mean delay values of the 5 base controllers we used in the simulation of Fig. [\[subfig:BQ5\]](#subfig:BQ5){reference-type="ref" reference="subfig:BQ5"}, Sec. [\[sec:simulation-results\]](#sec:simulation-results){reference-type="ref" reference="sec:simulation-results"}. We see that controller $K_2$, which was chosen to be MER, indeed has the lowest mean delay, and, as shown in Fig. [\[subfig:BQ5\]](#subfig:BQ5){reference-type="ref" reference="subfig:BQ5"}, our Softmax PG algorithm (with estimated value functions and gradients) converges to it.
+
+::: {#table:meandelay}
+ Controller Mean delay (# time slots) over 200 trials Standard deviation
+ -------------------------- ------------------------------------------- --------------------
+ $K_1(MW)$ 22.11 0.63
+ ${\color{blue}K_2(MER)}$ **20.96** 0.65
+ $K_3(\{1,3\})$ 80.10 0.92
+ $K_4(\{2,4\})$ 80.22 0.90
+ $K_5(\{1,4\})$ 80.13 0.91
+
+ : Mean Packet Delay Values of Path Graph Network Simulation.
+:::
+
+
+We investigate further the example in our simulation in which the two constituent controllers are $K_{opt}+\Delta$ and $K_{opt}-\Delta$. We use OpenAI Gym to simulate this situation. In Figure [\[subfig:cartpole\--symm\]](#subfig:cartpole--symm){reference-type="ref" reference="subfig:cartpole--symm"}, it was shown that our Softmax PG algorithm (with estimated values and gradients) converges to an improper mixture of the two controllers, i.e., $\approx(0.53,0.47)$. Let $K_{\tt{conv}}$ be the (randomized) controller which chooses $K_1$ with probability 0.53, and $K_2$ with probability 0.47. Recall from Sec. [\[sec:motivating-cartpole-example\]](#sec:motivating-cartpole-example){reference-type="ref" reference="sec:motivating-cartpole-example"} that this control law converts the linearized cartpole into an Ergodic Parameter Linear System (EPLS). In Table [3](#table:meanfalls){reference-type="ref" reference="table:meanfalls"} we report the average number of rounds the pendulum stays upright when each controller is applied for all time, over trajectories of length 500 rounds. The third column displays an interesting feature of our algorithm: over 100 trials, the base controllers fail to stabilize the pendulum in a relatively large number of trials, whereas $K_{\tt{conv}}$ succeeds most of the time.
+
+::: {#table:meanfalls}
+  Controller                      Mean \# rounds before the pendulum falls ($\land$ 500)   \# trials (out of 100) in which the pendulum falls before 500 rounds
+  ------------------------------- -------------------------------------------------------- ---------------------------------------------------------------------
+  $K_1(K_{opt}+\Delta)$           403                                                      38
+  ${K_2(K_{opt}-\Delta)}$         355                                                      46
+  ${\color{blue}K_{\tt{conv}}}$   465                                                      **8**
+
+  : A table showing the number of rounds the constituent controllers manage to keep the cartpole upright, over 100 trials.
+:::
+
+
+We mention here that if one follows $K^*$, the optimal controller matrix obtained by solving the standard Discrete-time Algebraic Riccati Equation (DARE) [@bertsekas11dynamic], the pole does not fall over 100 trials. However, as indicated in Sec. [\[sec:Introduction\]](#sec:Introduction){reference-type="ref" reference="sec:Introduction"}, constructing the optimal controller for this system from scratch requires sample complexity exponential in the state dimension [@chen-hazan20black-box-control-linear-dynamical]. On the other hand, $K_{\tt{conv}}$ performs very close to the optimum while being sample efficient.
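+
+For reference, computing a DARE-based gain $K^*$ takes only a few lines once the system matrices are known. The sketch below uses a toy discrete-time double integrator as a stand-in, since the linearized cartpole matrices are not reproduced here:
+
+```python
+import numpy as np
+from scipy.linalg import solve_discrete_are
+
+# Toy stand-in dynamics x_{t+1} = A x_t + B u_t (double integrator, dt = 0.1).
+A = np.array([[1.0, 0.1], [0.0, 1.0]])
+B = np.array([[0.0], [0.1]])
+Q, R = np.eye(2), np.eye(1)  # state and input costs
+
+P = solve_discrete_are(A, B, Q, R)                       # solve the DARE
+K_star = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal LQR gain
+
+# The closed loop x_{t+1} = (A - B K_star) x_t should be stable.
+rho = max(abs(np.linalg.eigvals(A - B @ K_star)))
+```
+
+The spectral-radius check confirms the closed loop is asymptotically stable; the same recipe applies to the linearized cartpole with its own $(A,B,Q,R)$.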
+
+**Choice of hyperparameters.** In the simulations, we set the learning rate to $10^{-4}$, $\#\tt{runs}=10,\#\tt{rollouts}=10, \tt{lt}= 30$, discount factor $\gamma=0.9$ and $\alpha=1/\sqrt{\#\tt{runs}}$. All the simulations were run for 20 trials and the results shown are averaged over them. We capped the queue sizes at 1000.
+
+- First, we consider a queuing system with 2 queues to be served and two base controllers, similar to those discussed in Sec. [\[sec:motivating-examples\]](#sec:motivating-examples){reference-type="ref" reference="sec:motivating-examples"}. However, here the two queues have different arrival rates, $(\lambda_1,\lambda_2)\equiv (0.4,0.3)$. We plot in Fig. [8](#fig:NAC unequal arrivals){reference-type="ref" reference="fig:NAC unequal arrivals"} the probability of choosing each of the two controllers. We see that ACIL converges to the "correct\" mixture of the base controllers.
+
+- Next, we show a simulation on the setting in Sec. [11.1](#subsec:State Dependent controllers -- Chain MDP){reference-type="ref" reference="subsec:State Dependent controllers -- Chain MDP"}, which we called a Chain MDP. We recall that this setting consists of two base controllers $K_1$ and $K_2$; however, a $(1/2,1/2)$ mixture of the two controllers was shown (analytically) to perform better than either individual one. As the plot in Fig. [9](#fig:NAC chain MDP){reference-type="ref" reference="fig:NAC chain MDP"} shows, NACIL identifies the correct combination and follows it.
+
+**Choice of hyperparameters.** For the queuing-theoretic simulations of Algorithm [\[alg:actor-critic improper RL alg\]](#alg:actor-critic improper RL alg){reference-type="ref" reference="alg:actor-critic improper RL alg"} ACIL, we choose $\alpha=10^{-4}$, $\beta=10^{-3}$. We choose the identity feature mapping $\phi(s)\equiv s$, where $s$ is the current state of the system, an $N$-length vector whose $i^{th}$ entry is the length of the $i^{th}$ queue. $\lambda$ was chosen to be $0.1$. The other parameters are chosen as $B=50$, $H=30$ and $T_c=20$. We use a buffer of size 1000 to keep the states bounded, i.e., if a queue exceeds size 1000, further arrivals are ignored and the queue length is not increased. This keeps $\norm{\phi(s)}_2$ bounded across time.
+
+
+Figure: NACIL algorithm applied to a queuing system with two queues, having arrival rates $(\lambda_1, \lambda_2) \equiv (0.4, 0.3)$. The plot shows the probability of choosing controllers $K_1$ and $K_2$, averaged over 20 trials.
+
+
+
+Figure: NACIL algorithm applied to the linear chain MDP with various randomly chosen initial distributions. The plot shows the probability of choosing controller $K_1$, averaged over 20 trials.
+
+
+- *Comment on the 'simple' experimental settings.* The motivating examples may seem "simple\" and trainable from scratch, given the progress in the field of RL. However, our main point is that there are situations where, for example, one may have trained controllers for a range of environments *in simulation*, while the real-life environment differs from the simulated ones. We demonstrate that exploiting such basic pre-learnt controllers via our approach can help generate a better (meta) controller for a new, unseen environment, instead of learning a controller for the new environment from scratch.
+
+- *On characterizing the performance of the optimal mixture policy.* As correctly noticed by the reviewer, the inverted pendulum experiment showed that the optimal mixture policy can vastly outperform the component controllers. Currently, however, we do not provide theoretical guarantees for this gap, since it depends on the structure of the policy space and of the underlying MDP, which makes the analysis very challenging. We hope to explore this in future work.