# Accelerated Policy Gradient for $s$-Rectangular Robust MDPs with Large State Spaces
Ziyi Chen<sup>1</sup> Heng Huang<sup>1</sup>
# Abstract
Robust Markov decision process (robust MDP) is an important machine learning framework for learning a reliable policy that is robust to environmental perturbations. Despite the empirical success and popularity of policy gradient methods, existing policy gradient methods require at least $\mathcal{O}(\epsilon^{-4})$ iteration complexity to converge to the global optimal solution of $s$-rectangular robust MDPs with $\epsilon$-accuracy, and are limited to the deterministic setting with access to exact gradients and to small state spaces, both of which are impractical in many applications. In this work, we propose an accelerated policy gradient algorithm with iteration complexity $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ in the deterministic setting using entropy regularization. Furthermore, we extend this algorithm to the stochastic setting with access to only stochastic gradients and to large state spaces, achieving sample complexity $\mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$. Our algorithms are also the first scalable policy gradient methods for entropy-regularized robust MDPs, an important but underexplored machine learning framework.
# 1. Introduction
Reinforcement Learning (RL), modeled by Markov decision processes (MDPs), is a broadly used machine learning framework where an agent learns and makes decisions by interacting with a dynamic environment. RL has many applications including robotics (Kober et al., 2013; Peng et al., 2018), energy flow control (Perera and Kamalaruban, 2021), production scheduling (Wang and Usher, 2005), flight control (Abbeel et al., 2006), etc. An RL system is usually trained in a simulated environment to avoid deployment costs (Zhou
$^{1}$ Department of Computer Science, University of Maryland College Park. Correspondence to: Ziyi Chen <zc286@umd.edu>, Heng Huang <heng@umd.edu>.
Proceedings of the $41^{st}$ International Conference on Machine Learning, Vienna, Austria. PMLR 235, 2024. Copyright 2024 by the author(s).
et al., 2023). However, the simulated environment usually differs from the real-world environment, which may degrade the performance of the trained RL system in the real world (Peng et al., 2018; Zhou et al., 2023). To make RL robust to this simulation-to-reality gap, robust Markov decision process (robust MDP) (Iyengar, 2005; Nilim and El Ghaoui, 2005; Wiesemann et al., 2013) has been proposed, which aims to find the optimal robust policy that optimizes the performance under the worst possible environment from a certain ambiguity set.
Robust MDP problem with a general ambiguity set is proved to be NP-hard (Wiesemann et al., 2013). To make it computationally tractable, various structural conditions on ambiguity set have been used including $(s,a)$ -rectangularity (Nilim and El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013; Wang and Zou, 2022; Li et al., 2023c; Zhou et al., 2023) and $s$ -rectangularity (Wiesemann et al., 2013; Ho et al., 2021; Wang et al., 2023; Kumar et al., 2023a;c). This work focuses on the more general $s$ -rectangularity which allows the nature to select an adversarial environment before observing the learning agent's action (Wang et al., 2023) and yields less conservative policies (Kumar et al., 2023c).
Various methods have been adopted to solve robust MDP, including value-iteration (Iyengar, 2005; Nilim and El Ghaoui, 2005; Wiesemann et al., 2013; Grand-Clément and Kroer, 2021; Kumar et al., 2023b), policy-iteration (Iyengar, 2005; Badrinath and Kalathil, 2021; Ho et al., 2021; Kumar et al., 2022) and policy gradient (Wang and Zou, 2022; Li et al., 2023c; Wang et al., 2023; Zhou et al., 2023; Kumar et al., 2023c; Li et al., 2023b; Guha and Lee, 2023). Among these methods, policy gradient has gained significant attention due to its simple implementation (Silver et al., 2014), excellent real-world performance (Silver et al., 2014; Xu et al., 2014; Wang et al., 2023) and scalability to large state and action spaces (Silver et al., 2014; Wang et al., 2023). Moreover, policy gradient methods also have provable global convergence guarantee on non-robust MDP (Agarwal et al., 2021; Bhandari and Russo, 2021; Xiao, 2022). However, the global convergence of policy gradient methods for robust MDP is much harder to obtain since the robust value function is not differentiable. For $(s,a)$ -rectangular case, Wang
Table 1: Comparison of policy gradient works for $s$ -rectangular robust MDPs. The measures include the number of updates on policy $\pi$ as well as transition kernel $p$ and the complexity to achieve $\epsilon$ -optimal robust policy defined in Definition 1. The complexity denotes iteration complexity (the total number of updates on $\pi$ and $p$ ) in deterministic setting with access to exact gradients, and sample complexity (total number of required samples) in stochastic setting with access to only stochastic gradients. See Appendix A for more explanation of this table.
| Works | #$\pi$ updates | #$p$ updates | Complexity | Stochastic | Large space |
|---|---|---|---|:---:|:---:|
| (Wang et al., 2023) | $\mathcal{O}(\epsilon^{-4})$ | $\mathcal{O}(\epsilon^{-4}+\epsilon^{4}\gamma^{-\mathcal{O}(\epsilon^{-4})})$ | $\mathcal{O}(\epsilon^{-4}+\epsilon^{4}\gamma^{-\mathcal{O}(\epsilon^{-4})})$ | × | × |
| (Li et al., 2023b) | $\mathcal{O}(\epsilon^{-4})$ | $\mathcal{O}(\epsilon^{-6})$ | $\mathcal{O}(\epsilon^{-6})$ | × | × |
| (Kumar et al., 2023c) | $\mathcal{O}(\epsilon^{-1})$ | – | – | × | × |
| (Guha and Lee, 2023) | $\mathcal{O}(\epsilon^{-4})$ | $\mathcal{O}(\epsilon^{-4})$ | $\mathcal{O}(\epsilon^{-4})$ | × | × |
| Our Algorithm 1 | $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ | $\mathcal{O}(\epsilon^{-3})$ | $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ | × | × |
| Our Algorithm 2 | $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ | $\mathcal{O}(\epsilon^{-3})$ | $\mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ | ✓ | × |
| Our Algorithm 3 | $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ | $\mathcal{O}(\epsilon^{-3})$ | $\mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ | ✓ | ✓ |
and Zou (2022); Li et al. (2023c); Zhou et al. (2023) tackle this challenge by evaluating and using the uniquely-defined robust Q function.
For the more general $s$-rectangular case, this robust Q function is not well-defined (Li and Lan, 2023), which makes global convergence even more challenging. Among the existing policy gradient methods (Wang et al., 2023; Li et al., 2023b; Kumar et al., 2023c; Guha and Lee, 2023) for the $s$-rectangular case, the state-of-the-art iteration complexity (defined as the total number of updates on both the transition kernel and the policy) is $\mathcal{O}(\epsilon^{-4})$ (Guha and Lee, 2023), as shown in Table 1. Moreover, Kumar et al. (2023c) assumes oracle access to the sub-gradient of the robust optimal return and that this return is Lipschitz-smooth, which does not hold in many cases. Hence, we are motivated to ask:
Q1: Can we propose a policy gradient algorithm with lower iteration complexity to achieve global optimal solution of a generic $s$ -rectangular robust MDP?
Moreover, existing policy gradient algorithms are analyzed in deterministic setting with access to exact gradients and need to obtain policy or transition kernel over all states and actions, which are impractical in many applications where only stochastic samples are available and the state space is very large. Though Wang et al. (2023); Li et al. (2023b); Guha and Lee (2023) mention transition kernel parameterization to mitigate this issue, their global convergence results still involve enumeration over all states and actions and thus do not apply to large state space. As a result, we want to ask:
Q2: Can we extend policy gradient algorithms to stochastic setting and large state space for $s$ -rectangular case?
# 1.1. Our Contributions
We answer affirmatively to these questions by proposing an accelerated policy gradient algorithm (Algorithm 1) for $s$ -rectangular robust MDPs in the deterministic setting with access to exact gradients, and extending this algorithm to stochastic setting with access to only stochastic gradients (Algorithm 2) and then to large state space (Algorithm 3). We summarize the advantages of our algorithms and their global convergence results as follows and also in Table 1.
Acceleration: To accelerate existing policy gradient algorithms which directly optimize the non-differentiable objective function (Li et al., 2023b; Guha and Lee, 2023; Kumar et al., 2023c), we apply entropy regularization to the policy which provides a smooth approximation to the non-differentiable objective function. The approximation error can be arbitrarily small by using a sufficiently small regularization coefficient. Moreover, the entropy regularization not only ensures exponential convergence of our inner policy update, but also yields the Lipschitz-smoothness and gradient dominance of the objective function that guarantees efficient global convergence of the outer transition update. Hence, we obtain $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ iteration complexity, faster than the state of the art $\mathcal{O}(\epsilon^{-4})$ (Guha and Lee, 2023).
We are the first to obtain the above Lipschitz-smoothness of the entropy regularized objective (Proposition 2), in which we adopt two novel techniques to tackle entropy regularizer. First, as the log-policy $\ln \pi(a|s)$ may approach $-\infty$ , we use the Lipschitz property of $\pi(a|s)\ln \pi(a|s)$ with respect to $\ln \pi(a|s)$ . Second, the optimal log-policy $\ln \pi_p$ given the transition kernel $p$ involves a certain Q function $Q_p$ as the unique fixed point of a Bellman operator $T_p$ . Hence, we also need to obtain the Lipschitz property of $T_p$ and accordingly obtain a recursive bound for the Lipschitz property of $Q_p$ .
Stochastic Setting: We extend Algorithm 1 to the practical stochastic setting, by applying temporal difference (TD) method and sample-average to approximate the Q function and transition gradient respectively. In this way, we obtain
the first stochastic policy gradient algorithm (Algorithm 2) with provable global convergence and sample complexity $\mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ for $s$ -rectangular robust MDP.
In this sample complexity analysis, the entropy regularized cost involves $\tau \ln \pi(a|s)$ where $\ln \pi(a|s)$ might approach $-\infty$ . To bound $|\tau \ln \pi(a|s)|$ , we prove that the optimal log-policy $\ln \pi_p(a|s) \geq \mathcal{O}(-1/\tau)$ for any $p$ . As $\ln \pi \to \ln \pi_p$ exponentially fast with policy optimization, we prove that $\ln \pi(a|s) \geq \mathcal{O}(-1/\tau)$ for any $\pi$ involved in the algorithm and thus $|\tau \ln \pi(a|s)| \leq \mathcal{O}(1)$ .
Large State Space: We further extend Algorithm 2 to large state space, by using linear transition kernel parameterization and linear Q function approximation to reduce state enumeration. We prove that linear kernel parameterization preserves Lipschitz property as well as gradient dominance, which avoids parameterization error in the global convergence result. The obtained Algorithm 3 retains the sample complexity $\mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ to converge to the global optimal solution up to a function approximation error term $\zeta >0$ .
Entropy Regularized Robust MDP: All our algorithms are also the first policy gradient methods to solve entropy regularized robust MDP (Mankowitz et al., 2019; Mai and Jaillet, 2021; Eysenbach and Levine, 2021), an important but under-explored learning framework. Entropy regularized robust MDP is important because it combines both the advantage of robustness, and the advantage of entropy regularization in encouraging exploration and prohibiting early convergence to sub-optimal policies (Mankowitz et al., 2019; Mai and Jaillet, 2021), which is suitable for application to inverse reinforcement learning (IRL) (Mai and Jaillet, 2021).
# 1.2. Related Works
Robust Policy Evaluation: While this work focuses on the policy optimization problem, i.e., finding the optimal policy, Li and Lan (2023); Kumar et al. (2023a) focused on the robust policy evaluation problem of $s$-rectangular robust MDP, i.e., evaluating the value of a policy under the worst-case environment. Li and Lan (2023) considered the nature's choice of transition kernel as a policy and proposed policy gradient methods which achieve linear global convergence in the deterministic setting and $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity in the stochastic setting. Kumar et al. (2023a) studied a robust MDP where both the transition kernel and the reward are uncertain and range in $L_{p}$-ball constrained $s$-rectangular ambiguity sets, which has a closed-form optimal solution and yields linear convergence in the deterministic case.
Policy Gradient for Non-robust MDP: Policy-gradient-based algorithms including policy gradient (Sutton et al., 1999), natural policy gradient (Kakade, 2001), actor-critic (Konda and Tsitsiklis, 1999) and natural actor-critic (Bhatnagar et al., 2009) are also very popular for policy optimization in non-robust MDP. Agarwal et al. (2021) provided sub-linear global convergence rates and complexity results of policy gradient methods for various policy parameterizations including tabular, softmax, log-linear and neural policies. Bhandari and Russo (2021); Xiao (2022) accelerated the global convergence to a linear rate for tabular policies by relating policy gradient methods to policy-iteration.
Policy Gradient for $(s,a)$-rectangular Robust MDP: Wang and Zou (2022) presented a smoothed robust policy gradient algorithm for robust MDP with a specific R-contamination ambiguity set, which achieves $\mathcal{O}(\epsilon^{-3})$ iteration complexity in the deterministic setting and $\mathcal{O}(\epsilon^{-7})$ sample complexity in the stochastic setting. Li et al. (2023c) introduced a robust policy mirror descent algorithm for robust MDP with a more general $(s,a)$-rectangular ambiguity set, which obtains linear global convergence in the deterministic setting and $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity in the stochastic setting. Zhou et al. (2023) proposed a robust stochastic natural actor-critic algorithm with linear function approximation and two specific $(s,a)$-rectangular ambiguity sets, which applies to robust MDPs with large state spaces and also achieves $\widetilde{\mathcal{O}}(\epsilon^{-2})$ sample complexity.
Entropy Regularized Robust MDP: Mankowitz et al. (2019) extended the Maximum A-Posteriori Policy Optimization algorithm (Abdolmaleki et al., 2018) to entropy regularized robust MDP. Mai and Jaillet (2021) proposed a value-iteration algorithm for entropy regularized robust MDP with provable worst-case complexity. Eysenbach and Levine (2021) proved that entropy regularized MDP provides a lower bound of the robust MDP objective.
# 2. Problem Settings
# 2.1. Robust MDP
A vanilla MDP is characterized by a tuple $(\mathcal{S},\mathcal{A},p,c,\gamma ,\rho)$, where $\mathcal{S}$ and $\mathcal{A}$ are finite state and action spaces respectively; $\gamma \in (0,1)$ is the discount factor; $p$ is the state transition kernel where $p(\cdot |s,a)\in \Delta^{\mathcal{S}}$ is a distribution on $\mathcal{S}$ for any state $s\in \mathcal{S}$ and action $a\in \mathcal{A}$; $c:\mathcal{S}\times \mathcal{A}\times \mathcal{S}\to [0,1]$ is the cost function; and $\rho \in \Delta^{\mathcal{S}}$ is the distribution of the environment's initial state $s_0$. At time $t$, an agent observes the environment's current state $s_t$ and takes a random action $a_{t}\sim \pi (\cdot |s_t)$ based on the agent's stationary policy $\pi \in \Pi \coloneqq (\Delta^{\mathcal{A}})^{\mathcal{S}}$. Then the environment transitions to the next state $s_{t + 1}\sim p(\cdot |s_t,a_t)$ and gives a cost $c_{t}\coloneqq c(s_{t},a_{t},s_{t + 1})$ to the agent. We define the following value function, which characterizes the long-term expected cost under the policy $\pi$:
$$
J _ {\rho} (\pi , p) := \mathbb {E} _ {\pi , p} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} c _ {t} \mid s _ {0} \sim \rho \right]. \tag {1}
$$
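For concreteness, the discounted return (1) can be estimated by truncated Monte-Carlo rollouts. The sketch below is illustrative only: it assumes a small tabular MDP represented by numpy arrays `pi` (policy), `p` (transition kernel), `c` (cost) and `rho` (initial distribution); these names and shapes are our own conventions, not the paper's code.

```python
import numpy as np

def estimate_J(pi, p, c, rho, gamma=0.9, n_rollouts=2000, horizon=200, seed=0):
    """Monte-Carlo estimate of J_rho(pi, p) = E[sum_t gamma^t c_t], eq. (1)."""
    rng = np.random.default_rng(seed)
    S, A = pi.shape
    total = 0.0
    for _ in range(n_rollouts):
        s = rng.choice(S, p=rho)          # s_0 ~ rho
        ret, disc = 0.0, 1.0
        for _ in range(horizon):          # truncate the infinite discounted sum
            a = rng.choice(A, p=pi[s])    # a_t ~ pi(.|s_t)
            s_next = rng.choice(S, p=p[s, a])
            ret += disc * c[s, a, s_next]
            disc *= gamma
            s = s_next
        total += ret
    return total / n_rollouts
```

Since costs lie in $[0,1]$, the truncation error of a horizon-$H$ rollout is at most $\gamma^H/(1-\gamma)$, so a logarithmic horizon suffices for any target accuracy.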
The aim of vanilla MDP is to find the optimal policy $\pi$ that minimizes the expected cost $J_{\rho}(\pi, p)$ for a given transition
kernel $p$ . However, in practice, $p$ is usually unknown and thus has to be estimated from data. The estimation error often degrades the performance after deployment. To make the performance robust to this estimation error, robust MDP (Iyengar, 2005; Nilim and El Ghaoui, 2005; Wiesemann et al., 2013) has been proposed where the transition kernel $p$ ranges in a certain ambiguity set $\mathcal{P}$ which can be selected to contain the true transition kernel. The aim of robust MDP is to find the optimal robust policy that minimizes the robust value function $\Phi_{\rho}(\pi) \coloneqq \max_{p \in \mathcal{P}} J_{\rho}(\pi, p)$ under the worst-case transition kernel $p \in \mathcal{P}$ , as formulated by the following minimax optimization problem.
$$
\min _ {\pi \in \Pi} \max _ {p \in \mathcal {P}} J _ {\rho} (\pi , p). \tag {2}
$$
Definition 1. A policy $\pi$ is called an $\epsilon$ -optimal robust policy if $\Phi_{\rho}(\pi) \leq \min_{\pi' \in \Pi} \Phi_{\rho}(\pi') + \epsilon$ for a certain precision $\epsilon \geq 0$ .
The robust MDP problem (2) is in general NP-hard (Wiesemann et al., 2013). To make it tractable, $\mathcal{P}$ is often assumed to be $(s,a)$-rectangular (Nilim and El Ghaoui, 2005; Iyengar, 2005; Wiesemann et al., 2013; Wang and Zou, 2022; Li et al., 2023c; Zhou et al., 2023) or $s$-rectangular (Wiesemann et al., 2013; Ho et al., 2021; Wang et al., 2023; Kumar et al., 2023a;c). An $(s,a)$-rectangular $\mathcal{P}$ is defined as a Cartesian product of sets $\mathcal{P}_{s,a} \subset \Delta^{\mathcal{S}}$ for all $(s,a)$, i.e.,
$$
\mathcal {P} = \left\{p \in \left(\Delta^ {\mathcal {S}}\right) ^ {\mathcal {S} \times \mathcal {A}}: p (\cdot | s, a) \in \mathcal {P} _ {s, a}, \forall s \in \mathcal {S}, a \in \mathcal {A} \right\}.
$$
An $s$-rectangular $\mathcal{P}$ is defined as a Cartesian product of sets $\mathcal{P}_s \subset (\Delta^{\mathcal{S}})^{\mathcal{A}}$ for all $s$, i.e.,
$$
\mathcal {P} = \{p \in (\Delta^ {\mathcal {S}}) ^ {\mathcal {S} \times \mathcal {A}}: p (\cdot | s, \cdot) \in \mathcal {P} _ {s}, \forall s \in \mathcal {S} \}.
$$
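As an illustration of the two rectangularity conditions, the sketch below checks membership in $L_1$-ball ambiguity sets centered at a nominal kernel `p0`. The $L_1$-ball form and the radius budget are illustrative assumptions, not the paper's sets: the point is only that the $(s,a)$-rectangular check constrains each row $p(\cdot|s,a)$ separately, while the $s$-rectangular check couples all actions at a state through one joint budget.

```python
import numpy as np

def in_sa_rectangular(p, p0, radius):
    """(s,a)-rectangular: ||p(.|s,a) - p0(.|s,a)||_1 <= radius for EVERY (s,a)."""
    dev = np.abs(p - p0).sum(axis=2)       # shape (S, A): per-(s,a) L1 deviation
    return bool((dev <= radius).all())

def in_s_rectangular(p, p0, radius):
    """s-rectangular: sum over actions of ||p(.|s,a) - p0(.|s,a)||_1 <= radius
    for EVERY state s, i.e., one joint budget per state."""
    dev = np.abs(p - p0).sum(axis=(1, 2))  # shape (S,): joint budget over actions
    return bool((dev <= radius).all())
```

Any product of per-$(s,a)$ sets is in particular a product of per-state sets, which is why $s$-rectangularity is the more general condition.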
We adopt the following assumption throughout this work.
Assumption 1. $\mathcal{P}$ is $s$ -rectangular, compact and convex.
# 2.2. Entropy Regularized Robust MDP
Entropy regularized robust MDP (Mankowitz et al., 2019; Mai and Jaillet, 2021; Eysenbach and Levine, 2021) is also an important but underexplored learning framework. Its objective is shown below.
$$
\min _ {\pi \in \Pi} \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi , p) := \mathbb {E} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} c _ {\tau , \pi , t} \mid s _ {0} \sim \rho \right], \tag {3}
$$
where $c_{\tau, \pi, t} := c_t + \tau \ln \pi(a_t | s_t)$ is the entropy-regularized cost with regularization coefficient $\tau \in [0,1]$. The objective (3) augments the cost in the value function (1) with an entropy regularizer. Hence, entropy regularized robust MDP combines the advantage of policy robustness with the advantage of entropy regularization in encouraging exploration and prohibiting early convergence
to sub-optimal policies (Mankowitz et al., 2019; Mai and Jaillet, 2021), which is suitable for application to inverse reinforcement learning (IRL) (Mai and Jaillet, 2021).
Define the following $V$ and $Q$ functions (Cayci et al., 2022):
$$
V _ {\tau} (\pi , p; s) := \mathbb {E} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} c _ {\tau , \pi , t} \mid s _ {0} = s \right], \tag {4}
$$
$$
Q _ {\tau} (\pi , p; s, a) := \mathbb {E} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} c _ {\tau , \pi , t} \mid s _ {0} = s, a _ {0} = a \right]. \tag {5}
$$
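For a fixed $(\pi, p)$, the tabular $V_\tau$ of eq. (4) can be computed exactly by solving the linear regularized Bellman evaluation equation $V = r_\pi + \gamma P_\pi V$, where $r_\pi(s) = \sum_a \pi(a|s)\big[\tau\ln\pi(a|s) + \sum_{s'} p(s'|s,a)c(s,a,s')\big]$. The sketch below is our own tabular illustration (array shapes and names are assumptions, not the paper's code):

```python
import numpy as np

def regularized_V(pi, p, c, gamma, tau):
    """Solve V_tau(pi, p; .) of eq. (4) exactly in the tabular case via the
    linear Bellman evaluation equation (I - gamma * P_pi) V = r_pi."""
    S, A = pi.shape
    # expected one-step regularized cost under pi, including tau * ln(pi)
    r = np.einsum('sa,sat,sat->s', pi, p, c) \
        + tau * np.einsum('sa,sa->s', pi, np.log(pi))
    P_pi = np.einsum('sa,sat->st', pi, p)   # state-to-state kernel under pi
    return np.linalg.solve(np.eye(S) - gamma * P_pi, r)
```

This exact solve is only feasible for small state spaces; the stochastic and large-state-space settings of Algorithms 2 and 3 replace it with sample-based estimates.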
A good solution to the entropy regularized problem (3) can be an approximate Nash equilibrium defined as follows.
Definition 2. A policy-transition pair $(\pi, p) \in \Pi \times \mathcal{P}$ is called an $(\epsilon, \tau)$ -Nash equilibrium if it satisfies
$$
J _ {\rho , \tau} (\pi , p) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} (\pi^ {\prime}, p) \leq \epsilon ,
$$
$$
\max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p) \leq \epsilon .
$$
Proposition 1. Under Assumption 1, for any $\epsilon \geq 0$ and $\tau > 0$ , $(\epsilon, \tau)$ -Nash equilibrium exists. If $(\pi, p) \in \Pi \times \mathcal{P}$ is an $(\epsilon, \tau)$ -Nash equilibrium, then $\pi$ is a $\left(2\epsilon + \frac{\tau \ln |\mathcal{A}|}{1 - \gamma}\right)$ -optimal robust policy to the optimization problem (2).
Proposition 1 indicates that by letting $\tau = \mathcal{O}(\epsilon)$ , the $(\epsilon, \tau)$ -Nash equilibrium also solves the robust MDP problem (2) with precision $\mathcal{O}(\epsilon)$ . Hence, we can solve robust MDP by solving entropy-regularized robust MDP. In the next section, we will provide the first policy gradient algorithm for entropy-regularized robust MDP, which also solves $s$ -rectangular robust MDP with lower iteration complexity than the existing policy gradient algorithms (Wang et al., 2023; Li et al., 2023b; Kumar et al., 2023c; Guha and Lee, 2023) that directly aim at the robust MDP objective (2).
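To make the reduction concrete, here is the arithmetic behind the choice $\tau = \mathcal{O}(\epsilon)$ (a direct substitution into Proposition 1, with the split of $\epsilon$ into thirds matching the choice of $\tau$ used later in Corollary 1):

$$
\tau = \frac{\epsilon(1-\gamma)}{3\ln|\mathcal{A}|}
\;\Longrightarrow\;
\frac{\tau\ln|\mathcal{A}|}{1-\gamma} = \frac{\epsilon}{3},
\qquad
2\cdot\frac{\epsilon}{3} + \frac{\tau\ln|\mathcal{A}|}{1-\gamma} = \epsilon ,
$$

so an $(\epsilon/3, \tau)$-Nash equilibrium of (3) yields an $\epsilon$-optimal robust policy for (2).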
# 3. Accelerated Robust Policy Gradient Algorithm
In this section, we provide a robust policy gradient algorithm (Algorithm 1) for both entropy-regularized robust MDP and robust MDP, and obtain convergence results in the deterministic setting. Then we extend Algorithm 1 to the stochastic setting and obtain a sample complexity result.
# 3.1. Accelerated Robust Policy Gradient Algorithm
A major challenge in solving robust MDPs is that the objective $\Phi_{\rho}(\pi)$ defined in eq. (2) is non-differentiable, since the optimal transition kernel $\arg \max_p J_{\rho}(\pi, p)$ is non-unique. In contrast, the entropy-regularized robust MDP (3) is equivalent to its dual form below (Mai and Jaillet, 2021) (see Lemma 2 for the proof of equivalence):
$$
\max _ {p \in \mathcal {P}} \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p), \tag {6}
$$
for which the optimal policy $\pi_p \coloneqq \arg \min_{\pi} J_{\rho, \tau}(\pi, p)$ is unique given the transition kernel $p$ for any $\tau > 0$ (Cen et al., 2022). Hence, by Danskin's Theorem (Bernhard and Rapaport, 1995), $F_{\rho, \tau}(p) \coloneqq \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p)$ is differentiable with $\nabla F_{\rho, \tau}(p) = \nabla_2 J_{\rho, \tau}(\pi_p, p)$ (see Lemma 7 and its proof in Appendix C)<sup>1</sup>. Furthermore, we will show that $F_{\rho, \tau}(p)$ is Lipschitz smooth in Section 3.2.
As a result, a natural idea is to apply projected gradient ascent $p_{t + 1} = \mathrm{proj}_{\mathcal{P}}\bigl (p_t + \alpha_t\nabla F_{\rho ,\tau}(p_t)\bigr)$ to the Lipschitz smooth objective (6) where $\nabla F_{\rho ,\tau}(p_t) = \nabla_2J_{\rho ,\tau}(\pi_{p_t},p_t)$ . The unique optimal policy $\pi_{p_t}\coloneqq \arg \min_{\pi \in \Pi}J_{\rho ,\tau}(\pi ,p_t)$ can be efficiently approximated using the following natural policy gradient (NPG) step (Cen et al., 2022):
$$
\pi_ {t, k + 1} (\cdot | s) \propto \pi_ {t, k} (\cdot | s) \exp \left[ - \frac {\eta \widehat {Q} _ {t , k} (s , \cdot)}{1 - \gamma} \right], \tag {7}
$$
where $\eta > 0$ is the stepsize and $\widehat{Q}_{t,k}$ approximates the Q function $Q_{\tau}(\pi_{t,k},p_t)\coloneqq Q_{\tau}(\pi_{t,k},p_t;\cdot ,\cdot)\in \mathbb{R}^{|\mathcal{S}||\mathcal{A}|}$ defined by eq. (5). After $T^{\prime}$ NPG steps (7), $\pi_t\coloneqq \pi_{t,T'}$ converges to $\pi_{p_t}$ exponentially fast in $T^{\prime}$ (Cen et al., 2022) (also see Lemma 9). Hence, we can replace $\pi_{p_t}$ with $\pi_t$ when computing $\nabla F_{\rho ,\tau}(p_t) = \nabla_2J_{\rho ,\tau}(\pi_{p_t},p_t)$, which yields the following projected gradient ascent rule:
$$
p _ {t + 1} = \operatorname {p r o j} _ {\mathcal {P}} \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right), \tag {8}
$$
where $\beta > 0$ is the stepsize and $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) \approx \nabla_p J_{\rho, \tau}(\pi_t, p_t)$ is the estimated gradient. $\nabla_p J_{\rho, \tau}(\pi_t, p_t) \in \mathbb{R}^{|\mathcal{S}|^2 |\mathcal{A}|}$ is the exact gradient and its $(s, a, s')$ -th entry is given below:
$$
\nabla_{p} J_{\rho ,\tau}(\pi ,p)(s,a,s^{\prime}) = \frac{d_{\rho}^{\pi ,p}(s)\,\pi (a|s)}{1-\gamma}\left[ c(s,a,s^{\prime}) + \tau \ln \pi (a|s) + \gamma V_{\tau}(\pi ,p;s^{\prime}) \right], \tag {9}
$$
where the occupancy measure $d_{\rho}^{\pi ,p}(s)$ is defined as:
$$
d _ {\rho} ^ {\pi , p} (s) = (1 - \gamma) \sum_ {t = 0} ^ {\infty} \gamma^ {t} \mathbb {P} _ {\pi , p} \left(s _ {t} = s \mid s _ {0} \sim \rho\right). \tag {10}
$$
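Both update rules above are simple to implement once $\widehat{Q}$, the occupancy measure $d_{\rho}^{\pi,p}$ and the value function $V_\tau$ are available. Below is a tabular sketch of the NPG step (7) and the gradient entries (9); the array-based signatures (`pi` of shape $(|\mathcal{S}|, |\mathcal{A}|)$, `d` and `V` precomputed by any evaluation routine) are our own illustrative conventions, not the paper's code.

```python
import numpy as np

def npg_step(pi, Q_hat, eta, gamma):
    """One NPG update, eq. (7): pi'(.|s) proportional to
    pi(.|s) * exp(-eta * Q_hat(s,.) / (1 - gamma))."""
    logits = np.log(pi) - eta * Q_hat / (1.0 - gamma)
    logits -= logits.max(axis=1, keepdims=True)   # stabilize before exponentiating
    new_pi = np.exp(logits)
    return new_pi / new_pi.sum(axis=1, keepdims=True)

def grad_p(pi, d, V, c, gamma, tau):
    """Entries of nabla_p J_{rho,tau}(pi, p) from eq. (9), as an (S, A, S') array,
    given the occupancy measure d (eq. (10)) and value function V (eq. (4))."""
    # weight d(s) * pi(a|s) / (1 - gamma), broadcast against the bracketed term
    w = (d[:, None] * pi / (1.0 - gamma))[:, :, None]
    return w * (c + tau * np.log(pi)[:, :, None] + gamma * V[None, None, :])
```

With $\eta = (1-\gamma)/\tau$ as in Theorem 1, iterating `npg_step` drives $\pi$ toward the unique regularized optimum $\pi_p$ at a linear rate (Cen et al., 2022).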
Our accelerated robust policy gradient algorithm is shown in Algorithm 1, which updates the policy $\pi_t$ in the inner
# Algorithm 1 Accelerated Robust Policy Gradient
1: Inputs: $\tau, T, T'$ , $\eta$ , $\beta$ , $\epsilon_1$ , $\epsilon_2$ .
2: Initialize: $p_0$ .
3: for transition update steps $t = 0,1,\ldots ,T - 1$ do
4: Let $\pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}|$ .
5: for policy update steps $k = 0,1,\ldots ,T^{\prime} - 1$ do
6: Obtain $\widehat{Q}_{t,k} \approx Q_{\tau}(\pi_{t,k}, p_t)$ such that $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k}, p_t) \|_{\infty} \leq \epsilon_1$ .
7: Obtain $\pi_{t,k+1}$ using the NPG step (7).
8: end for
9: Let $\pi_t \coloneqq \pi_{t,T'}$ .
10: Obtain approximate gradient $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)$ such that $\| \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) - \nabla_p J_{\rho, \tau}(\pi_t, p_t) \| \leq \epsilon_2$.
11: Obtain $p_{t + 1}$ by projected gradient ascent step (8).
12: end for
13: Output: $\pi_{\widetilde{T}}, p_{\widetilde{T}}$ where $\widetilde{T} \in \operatorname*{argmin}_{0 \leq t \leq T-1} \|p_{t+1} - p_t\|$ .
loop and the transition kernel $p_{t+1}$ in the outer loop. Lines 6 and 10 assume access to $\widehat{Q}_{t,k} \approx Q_{\tau}(\pi_{t,k}, p_t)$ and $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) \approx \nabla_p J_{\rho, \tau}(\pi_t, p_t)$ with arbitrary predefined precisions $\epsilon_1, \epsilon_2 \geq 0$ respectively. This covers both the exact case where exact Q functions and gradients can be easily computed (i.e., $\epsilon_1 = \epsilon_2 = 0$) and the inexact case where exact computation is intractable or requires much more computation than inexact estimation. We will show how to obtain these inexact estimates in Section 3.3.
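Putting the pieces together, Algorithm 1 is a double loop. The skeleton below mirrors lines 3–13, with the inner policy update and the oracles of lines 6 and 10 passed in as callables; `policy_update`, `grad_oracle` and `project` are placeholder names of our own, and the concrete oracle implementations are left abstract on purpose.

```python
import numpy as np

def accelerated_rpg(p0, policy_update, grad_oracle, project,
                    T, T_inner, beta, n_actions):
    """Skeleton of Algorithm 1 (lines 3-13): inner NPG loop on pi,
    outer projected gradient ascent on the transition kernel p."""
    p = p0
    history = []                                    # records (p_t, p_{t+1}, pi_t)
    for t in range(T):
        pi = np.full((p.shape[0], n_actions), 1.0 / n_actions)  # line 4: uniform init
        for k in range(T_inner):                    # lines 5-8
            pi = policy_update(pi, p)               # NPG step (7) using Q estimate (line 6)
        g = grad_oracle(pi, p)                      # line 10: approximate gradient
        p_next = project(p + beta * g)              # line 11: projected ascent, eq. (8)
        history.append((p, p_next, pi))
        p = p_next
    # line 13: output the iterate T~ minimizing ||p_{t+1} - p_t||
    p_sel, _, pi_sel = min(history, key=lambda h: np.linalg.norm(h[1] - h[0]))
    return pi_sel, p_sel
```

The output rule reflects the analysis in Section 3.2: the smallest step $\|p_{t+1} - p_t\|$ corresponds to the smallest projected gradient, which in turn certifies near-optimality via gradient dominance.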
# 3.2. Iteration Complexity of Algorithm 1
We first present two favorable geometric properties of the regularized problem (6) that yield faster global convergence of Algorithm 1 than existing policy gradient works. Throughout this work, $\|\cdot\|_p$ ($p \in [1,\infty]$) denotes the $\ell_p$-norm and $\|\cdot\| = \|\cdot\|_2$ denotes the 2-norm by default.
Proposition 2. Under Assumption 1, $F_{\rho ,\tau}(p)$ is Lipschitz smooth with parameter $\ell_F\coloneqq \frac{8|\mathcal{S}||\mathcal{A}|(1 + \gamma\tau\ln|\mathcal{A}|)^2}{\tau(1 - \gamma)^5}$ , i.e., for any $p,p^{\prime}\in \mathcal{P}$
$$
\left\| \nabla F _ {\rho , \tau} \left(p ^ {\prime}\right) - \nabla F _ {\rho , \tau} (p) \right\| \leq \ell_ {F} \| p ^ {\prime} - p \|. \tag {11}
$$
Technical Novelty: The Lipschitz property of $\nabla F_{\rho,\tau}(p)$ for entropy-regularized robust MDP has not been obtained in the existing literature to our knowledge. We use two novel techniques to tackle the entropy regularizer. First, we need to prove the Lipschitz continuity of $J_{\rho,\tau}$ and $\nabla_p J_{\rho,\tau}$ (see Lemma 6). $J_{\rho,\tau}$ contains the regularized cost $c(s,a,s') + \tau \ln \pi(a|s)$, which goes to $-\infty$ as $\pi(a|s) \to 0^+$. To resolve this, we control $\ln \pi(a|s)$ by multiplying it with $\pi(a|s)$ and use $|\pi'(a|s) \ln \pi'(a|s) - \pi(a|s) \ln \pi(a|s)| \leq |\ln \pi'(a|s) - \ln \pi(a|s)|$ (see Lemma 16), which yields the Lipschitz properties of $J_{\rho,\tau}$ and $\nabla_p J_{\rho,\tau}$ with respect to $\ln \pi$. Second, as $\nabla F_{\rho,\tau}(p) = \nabla_2 J_{\rho,\tau}(\pi_p,p)$, we also need the Lipschitz property of $\ln \pi_p$ with respect to $p$. This is not straightforward, since $\ln \pi_p$ is implicitly defined by $Q_p := \widetilde{Q}_{\tau}(\pi_p,p)$ ($\widetilde{Q}_{\tau}$ is defined in eq. (30)), the unique fixed point of a Bellman operator $T_p$ defined in eq. (72). As a result, we also need the Lipschitz property of $T_p$, which yields the recursive bound below for any $p,p' \in \mathcal{P}$:
$$
\begin{array}{l} \| Q _ {p ^ {\prime}} - Q _ {p} \| _ {\infty} = \| T _ {p ^ {\prime}} Q _ {p ^ {\prime}} - T _ {p} Q _ {p} \| _ {\infty} \\ \leq \left\| T _ {p ^ {\prime}} Q _ {p ^ {\prime}} - T _ {p} Q _ {p ^ {\prime}} \right\| _ {\infty} + \left\| T _ {p} Q _ {p ^ {\prime}} - T _ {p} Q _ {p} \right\| _ {\infty} \\ \leq c \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| Q _ {p ^ {\prime}} - Q _ {p} \| _ {\infty}, \\ \end{array}
$$
where $c > 0$ is a constant. This implies that $\| Q_{p'} - Q_p\|_{\infty} \leq \frac{c}{1 - \gamma} \max_{s,a} \| p'(\cdot | s, a) - p(\cdot | s, a)\|_1$ and thus the Lipschitz continuity of $\ln \pi_p$ .
Proposition 2 guarantees that Algorithm 1, which can be seen as approximate gradient ascent on $F_{\rho,\tau}(p)$ , converges to a stationary point of $F_{\rho,\tau}(p)$ . Such a stationary point also provides a global optimal solution as shown in the following gradient dominance property. Throughout, we define $D_{\mathcal{P}} \coloneqq \sup_{p, \widetilde{p} \in \mathcal{P}} \| \widetilde{p} - p \|$ as the diameter of $\mathcal{P}$ and $D \coloneqq \sup_{\pi \in \Pi, p \in \mathcal{P}} \| d_{\rho}^{\pi,p} / \rho \|_{\infty} < \infty$ as the distribution mismatch coefficient which has also been used in (Agarwal et al., 2021; Leonardos et al., 2022; Wang et al., 2023).
Proposition 3 (Gradient dominance). Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ and $p \in \mathcal{P}$ ,
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p) \\ \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p). \tag {12} \\ \end{array}
$$
Based on the two properties, we obtain the following convergence rates of Algorithm 1.
Theorem 1. Implement Algorithm 1 with $\beta \leq \frac{1}{2\ell_F}$ , $\eta = \frac{1 - \gamma}{\tau}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ satisfies the following rates under Assumption 1.
$$
\begin{array}{l} J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right), (13) \\ \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \\ \leq \mathcal {O} \left[ \left(1 + \tau \epsilon_ {2}\right) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right]. (14) \\ \end{array}
$$
Proof Sketch of Theorem 1. The rate (13) is straightforward since $\pi_{\widetilde{T}} \coloneqq \pi_{\widetilde{T}, T'} \to \pi_{p_{\widetilde{T}}} \text{ exponentially fast as } T' \to \infty$ . The rate (14) relies on Propositions 2 and 3 which apply to different functions $F_{\rho, \tau}$ and $J_{\rho, \tau}$ respectively. We tackle this challenge by using the connection $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) \approx \nabla F_{\rho, \tau}(p_t)$ between the two functions, which implies that
# Algorithm 2 Accelerated Stochastic Robust Policy Gradient
1: Inputs: $\tau, T, T', T_1, \alpha, \eta, \beta, N, H$ .
2: Initialize: $p_0$ .
3: for transition update steps $t = 0,1,\ldots ,T - 1$ do
4: Initialize $\pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}|$ .
5: for policy update steps $k = 0, 1, \ldots, T' - 1$ do
6: For $\pi = \pi_{t,k}$ and $p = p_t$ , perform the TD update rule (15) for $T_{1}$ iterations.
7: Assign $\widehat{Q}_{t,k} \gets \overline{q}_{T_1} \coloneqq \frac{1}{T_1} \sum_{n=1}^{T_1} q_n$ .
8: Obtain $\pi_{t,k+1}$ by the NPG update step (7).
9: end for
10: Let $\pi_t \coloneqq \pi_{t,T'}$ .
11: Obtain $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)$ using eq. (16).
12: Obtain $p_{t+1}$ by the projected gradient ascent step (8).
13: end for
14: Output: $\pi_{\widetilde{T}}, p_{\widetilde{T}}$ where $\widetilde{T} \in \operatorname*{argmin}_{0 \leq t \leq T-1} \|p_{t+1} - p_t\|$ .
Algorithm 1 is essentially a projected gradient ascent algorithm on the $\ell_F$ -smooth objective $\max_p F_{\rho, \tau}(p)$ . Hence, we can apply the standard convergence analysis to the projected gradient $G_t = (p_{t+1} - p_t) / \beta$ and obtain the convergence rate of $\| G_{\widetilde{T}}\|$ . As $G_t$ is defined by the projection step (8) involving $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)$ , we can apply properties about projection which implies that $\max_{p' \in \mathcal{P}} (p' - p_{\widetilde{T}})^{\top} \widehat{\nabla}_p J_{\rho, \tau}(\pi_{\widetilde{T}}, p_{\widetilde{T}}) \leq \mathcal{O}(\| G_{\widetilde{T}} \|)$ . This bound along with Proposition 3 implies the convergence rate (14).
Under the deterministic setting, where we can access the exact $Q$ function $Q_{\tau}$ and gradient $\nabla_p J_{\rho,\tau}$ , we have $\epsilon_1 = \epsilon_2 = 0$ in Algorithm 1. In this case, we obtain the following iteration complexity result based on Theorem 1.
Corollary 1 (Iteration Complexity of Algorithm 1). Implement Algorithm 1 under the deterministic setting $(\epsilon_{1} = \epsilon_{2} = 0)$ . For any $\epsilon > 0$ , select hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|}, 1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T' = \mathcal{O}[\ln (\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon, \tau)$ -Nash equilibrium under Assumption 1. This requires $T = \mathcal{O}(\epsilon^{-3})$ transition kernel updates, $TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ policy updates, and a total iteration complexity of $T + TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ .
Comparison with Existing Works: With the aforementioned amenable geometric properties afforded by the entropy regularization, our iteration complexity $\mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ improves on the state of the art $\mathcal{O}(\epsilon^{-4})$ (Guha and Lee, 2023) among existing policy gradient methods for $s$ -rectangular robust MDPs under the deterministic setting. In addition, our Algorithm 1 is also the first policy gradient algorithm with global convergence guarantees for both the robust MDP and the entropy-regularized robust MDP.
# 3.3. Stochastic Estimation and Sample Complexity
We now extend Algorithm 1 to the stochastic setting, where the $Q$ function $Q_{\tau}(\pi ,p)\coloneqq Q_{\tau}(\pi ,p;\cdot ,\cdot)\in \mathbb{R}^{|\mathcal{S}|\times |\mathcal{A}|}$ and the gradient $\nabla_pJ_{\rho ,\tau}(\pi ,p)$ for fixed $\pi$ and $p$ can only be estimated from stochastic samples, and obtain a sample complexity result.
We estimate $Q_{\tau}(\pi, p)$ via the following temporal difference (TD) update rule (Bhandari et al., 2018; Xu et al., 2021; Li et al., 2023a; Samsonov et al., 2023).
$$
\begin{array}{l} q_{n+1}(s_n, a_n) = q_n(s_n, a_n) + \alpha \big[ c(s_n, a_n, s_n') + \tau \ln \pi(a_n \mid s_n) \\ \quad + \gamma q_n(s_n', a_n') - q_n(s_n, a_n) \big]; \quad n = 0, 1, \dots, T_1 - 1, \tag{15} \end{array}
$$
where $s_n \sim \mu_{\pi, p}$ (the stationary state distribution under $\pi, p$ ), $a_n \sim \pi(\cdot | s_n)$ , $s'_n \sim p(\cdot | s_n, a_n)$ , $a'_n \sim \pi(\cdot | s'_n)$ , and $c(s_n, a_n, s'_n) + \tau \ln \pi(a_n | s_n)$ can be seen as the regularized cost. We use $\overline{q}_{T_1} \coloneqq \frac{1}{T_1} \sum_{n=1}^{T_1} q_n$ as the output, which provably converges to $Q_{\tau}(\pi, p)$ (Li et al., 2023a).
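As an illustration, the TD rule (15) can be sketched on a toy tabular MDP; the kernel, costs, and uniform policy below are arbitrary assumptions for the sketch, not objects from the paper. The averaged iterate is compared against the exact regularized $Q$ function obtained from the Bellman fixed point.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative assumptions).
rng = np.random.default_rng(1)
nS, nA, gamma, tau, alpha, T1 = 2, 2, 0.8, 0.1, 0.05, 30_000

p = rng.random((nS, nA, nS)); p /= p.sum(-1, keepdims=True)
c = rng.random((nS, nA, nS))
pi = np.full((nS, nA), 0.5)                      # uniform policy

q, q_sum, s = np.zeros((nS, nA)), np.zeros((nS, nA)), 0
for n in range(T1):
    a = rng.choice(nA, p=pi[s])
    s2 = rng.choice(nS, p=p[s, a])
    a2 = rng.choice(nA, p=pi[s2])
    # Regularized TD target of eq. (15): cost + tau*ln(pi) + discounted bootstrap.
    target = c[s, a, s2] + tau * np.log(pi[s, a]) + gamma * q[s2, a2]
    q[s, a] += alpha * (target - q[s, a])
    q_sum += q
    s = s2
q_bar = q_sum / T1                               # averaged iterate, the TD output

# Exact regularized Q for this policy via the linear Bellman fixed point.
c_bar = (p * c).sum(-1) + tau * np.log(pi)       # expected regularized cost
P_pi = (p[..., None] * pi[None, None]).reshape(nS * nA, nS * nA)
q_star = np.linalg.solve(np.eye(nS * nA) - gamma * P_pi, c_bar.ravel())
assert np.max(np.abs(q_bar.ravel() - q_star)) < 1.0
```

The averaged iterate $\overline{q}_{T_1}$ lands close to the fixed point of the regularized Bellman operator, matching the role it plays in line 7 of Algorithm 2.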
A stochastic estimation of $\nabla_{p}J_{\rho ,\tau}(\pi ,p)$ defined in eq. (9) can be obtained as follows.
$$
\begin{array}{l} \widehat{\nabla}_p J_{\rho,\tau}(\pi, p)(s, a, s') = \frac{1}{N(1-\gamma)} \sum_{i=1}^{N} \pi(a|s)\, \mathbb{1}\{s_{i,H_i} = s\} \\ \quad \cdot \big[ c(s, a, s') + \tau \ln \pi(a|s) + \gamma \sum_{a'} \pi(a' \mid s')\, q_{T_1}(s', a') \big], \tag{16} \end{array}
$$
where $\mathbb{1}\{\cdot\}$ is the indicator function, $H_{i}$ is drawn from a geometric distribution truncated at level $H$ , i.e., $\mathbb{P}(H_i = h)\propto \gamma^h$ for $h = 0,1,\dots ,H - 1$ , and the $i$ -th trajectory $\{s_{i,h},a_{i,h}\}_{h = 0}^{H_i}$ is generated via $s_{i,0}\sim \rho$ , $a_{i,h}\sim \pi (\cdot |s_{i,h})$ , $s_{i,h + 1}\sim p(\cdot |s_{i,h},a_{i,h})$ , so that the distribution of $s_{i,H_i}$ is $\mathcal{O}(\gamma^H)$ -close to $d_{\rho}^{\pi ,p}$ .
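The horizon sampling and indicator structure of the estimator (16) can be sketched as follows; the toy MDP and the stand-in $q$ -table below are assumptions for illustration, with the TD output replaced by random values.

```python
import numpy as np

# Toy MDP and a stand-in for the TD output q_{T1} (illustrative assumptions).
rng = np.random.default_rng(2)
nS, nA, gamma, tau, N, H = 3, 2, 0.9, 0.1, 500, 30

p = rng.random((nS, nA, nS)); p /= p.sum(-1, keepdims=True)
c = rng.random((nS, nA, nS))
pi = rng.random((nS, nA)); pi /= pi.sum(-1, keepdims=True)
q = rng.random((nS, nA))                        # stand-in for the TD output
rho = np.full(nS, 1.0 / nS)                     # initial state distribution

geo = gamma ** np.arange(H); geo /= geo.sum()   # truncated geometric weights
grad = np.zeros((nS, nA, nS))
for i in range(N):
    Hi = rng.choice(H, p=geo)                   # P(H_i = h) proportional to gamma^h
    s = rng.choice(nS, p=rho)
    for _ in range(Hi):                         # roll out to step H_i
        a = rng.choice(nA, p=pi[s])
        s = rng.choice(nS, p=p[s, a])
    # The indicator 1{s_{i,H_i} = s} selects only the visited state's entries.
    for a in range(nA):
        for s2 in range(nS):
            soft_v = (pi[s2] * q[s2]).sum()     # sum_{a'} pi(a'|s') q(s',a')
            grad[s, a, s2] += pi[s, a] * (c[s, a, s2] + tau * np.log(pi[s, a])
                                          + gamma * soft_v)
grad /= N * (1 - gamma)
assert grad.shape == (nS, nA, nS) and np.isfinite(grad).all()
```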
By estimating $Q_{\tau}(\pi, p)$ and $\nabla_p J_{\rho, \tau}(\pi, p)$ using the TD rule (15) and the stochastic gradient (16) respectively, we obtain a stochastic implementation of Algorithm 1 in Algorithm 2, which is, to our knowledge, the first stochastic policy gradient method for $s$ -rectangular robust MDPs. Due to the entropy regularization, a major challenge in obtaining the sample complexity of Algorithm 2 is to bound the regularized cost $c(s, a, s') + \tau \ln \pi(a|s)$ involved in the estimators (15) and (16), since $\pi(a|s)$ may approach 0. To tackle this, we prove that any policy $\pi$ obtained by the NPG step (7) satisfies $|\tau \ln \pi(a|s)| \leq \mathcal{O}(1)$ , as stated in the following lemma.
Lemma 1. The NPG step (7) with $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k},p_t)\|_{\infty}\leq \epsilon_1$ , stepsize $\eta = \frac{1 - \gamma}{\tau}$ , and initial policy $\pi_{t,0}(a|s)\equiv 1 / |\mathcal{A}|$ always guarantees
$$
0 \geq \ln \pi_ {t, k} (a | s) \geq - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {17}
$$
Proof Sketch of Lemma 1: The full proof is contained in the proof of Lemma 9 in Appendix D. Since $\ln \pi_{t,k} \to \ln \pi_t^* \coloneqq \ln \pi_{p_t}$ exponentially fast as $k \to \infty$ , we only need to lower bound $\ln \pi_t^*$ , which follows from the analytical expression of $\ln \pi_t^*$ (see Lemma 4 in Appendix B).
With this bounded cost, it is well known that the TD rule (15) achieves $\| \overline{q}_{T_1} - Q_\tau (\pi ,p)\|_\infty \leq \epsilon_1$ with $T_{1} = \mathcal{O}(\epsilon_{1}^{-2}\ln \epsilon_{1}^{-1})$ iterations (Li et al., 2023a), and the stochastic gradient (16) achieves $\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_t,p_t) - \nabla_pJ_{\rho ,\tau}(\pi_t,p_t)\| \leq \epsilon_2$ with $\mathcal{O}(\epsilon_2^{-2}\ln \epsilon_2^{-1})$ stochastic samples (see Lemma 14). These results, combined with the convergence rates in Theorem 1, yield the following sample complexity result for Algorithm 2.
Corollary 2 (Sample Complexity of Algorithm 2). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 2 with hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|},1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ , $N = \mathcal{O}(\epsilon^{-2})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . Then the output $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon ,\tau)$ -Nash equilibrium with probability at least $1 - \delta$ under Assumption 1. Furthermore, the sample complexity is $T(T^{\prime}T_{1} + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
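Putting the pieces together, the double-loop structure of Algorithm 2 can be sketched as follows. The two estimators are stubbed with placeholders (their sampled versions are eqs. (15) and (16)), and the projection onto $\mathcal{P}$ is replaced by a simple clip-and-renormalize stand-in; the toy MDP sizes are assumptions.

```python
import numpy as np

# Structural sketch of Algorithm 2 on a tiny tabular MDP (toy assumptions).
rng = np.random.default_rng(0)
nS, nA, gamma, tau = 3, 2, 0.9, 0.1
T, T_prime, beta, eta = 4, 5, 0.2, (1 - gamma) / tau

p_bar = rng.random((nS, nA, nS))
p_bar /= p_bar.sum(-1, keepdims=True)           # nominal transition kernel

def q_estimate(pi, p):
    """Stub for lines 6-7: TD evaluation (15) of the regularized Q function."""
    return rng.normal(size=(nS, nA))

def grad_estimate(pi, p):
    """Stub for line 11: stochastic gradient (16) w.r.t. the kernel p."""
    return rng.normal(scale=0.1, size=p.shape)

def proj_P(p):
    """Illustrative stand-in for the projection onto the ambiguity set P."""
    p = np.clip(p, 1e-6, None)
    return p / p.sum(-1, keepdims=True)

p_t, gaps = p_bar.copy(), []
for t in range(T):                              # outer loop: kernel updates
    pi = np.full((nS, nA), 1.0 / nA)            # line 4: uniform initialization
    u = np.zeros((nS, nA))
    for k in range(T_prime):                    # inner loop: NPG policy updates
        u += eta * q_estimate(pi, p_t)          # accumulate as in eq. (23)
        logits = -u / (1 - gamma)
        pi = np.exp(logits - logits.max(-1, keepdims=True))
        pi /= pi.sum(-1, keepdims=True)
    p_next = proj_P(p_t + beta * grad_estimate(pi, p_t))   # line 12: ascent
    gaps.append(np.linalg.norm(p_next - p_t))
    p_t = p_next

t_tilde = int(np.argmin(gaps))                  # line 14: smallest kernel move
assert np.allclose(pi.sum(-1), 1.0) and 0 <= t_tilde < T
```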
# 4. Extension to Large State Space
Algorithms 1-2 and the existing policy gradient algorithms for $s$ -rectangular robust MDPs need to compute $\pi(a|s)$ , $p(s'|s, a)$ , or $Q(s, a)$ for all states $s, s' \in \mathcal{S}$ and actions $a \in \mathcal{A}$ . This is intractable in many practical applications where the state space is very large. We introduce two key techniques to avoid such state enumeration, namely transition kernel parameterization and Q function approximation. Based on these techniques, we extend Algorithm 1 to large state spaces and obtain a sample complexity result.
# 4.1. Transition Kernel Parameterization
To reduce the dimensionality $|\mathcal{S}|^2 |\mathcal{A}|$ of the transition kernel $p$ , we adopt a linear transition kernel parameterization, which has also been used in linear mixture MDPs (Ayoub et al., 2020; Zhou et al., 2021; Zhang et al., 2023) and robust MDPs (Li et al., 2023b). The linear kernel parameterization can be written as $p_{\xi}(s'|s,a) = \psi (s,a,s')^\top \xi$ with fixed and known features $\psi (s,a,s')\in \mathbb{R}^{d_p}$ and an unknown parameter $\xi \in \Xi \subset \mathbb{R}^{d_p}$ with $d_{p}\ll |\mathcal{S}|^{2}|\mathcal{A}|$ . The ambiguity set $\Xi$ can be defined as a neighborhood of a nominal parameter $\bar{\xi}$ , which can be estimated from data. For convenience, the parameterization can be rewritten in matrix form as $p_{\xi} = \Psi \xi$ , where the $(s,a,s')$ -th row of the feature matrix $\Psi \in \mathbb{R}^{|\mathcal{S}|^2 |\mathcal{A}|\times d_p}$ is $\psi (s,a,s')^\top$ .
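For instance, when the columns of $\Psi$ are themselves valid base kernels (a linear-mixture design, one common instantiation rather than a requirement of the parameterization above), any $\xi$ in the simplex yields a valid transition kernel:

```python
import numpy as np

# Linear mixture instantiation of p_xi = Psi @ xi (illustrative assumption):
# each column of Psi flattens one valid base kernel.
rng = np.random.default_rng(3)
nS, nA, d_p = 4, 2, 3

base = rng.random((d_p, nS, nA, nS))
base /= base.sum(-1, keepdims=True)            # each base[j] is a valid kernel
Psi = base.reshape(d_p, -1).T                  # Psi has shape |S|^2|A| x d_p

xi = np.array([0.5, 0.3, 0.2])                 # mixture weights (e.g. nominal)
p_xi = (Psi @ xi).reshape(nS, nA, nS)          # p_xi(s'|s,a) = psi(s,a,s')^T xi

# Row sums are convex combinations of valid rows, hence still 1.
assert np.allclose(p_xi.sum(-1), 1.0) and (p_xi >= 0).all()
```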
The gradient for $p_{\xi} \coloneqq \Psi \xi$ has the following expression<sup>4</sup>.
$$
\nabla_{\xi} J_{\rho,\tau}(\pi, p_{\xi}) = \frac{1}{1-\gamma} \mathbb{E}_{s \sim d_{\rho}^{\pi, p_{\xi}},\, a \sim \pi(\cdot|s),\, s' \sim p_{\xi}(\cdot|s,a)} \left[ \frac{\psi(s, a, s')}{p_{\xi}(s' \mid s, a)} \big[ c(s, a, s') + \tau \ln \pi(a \mid s) + \gamma V_{\tau}(\pi, p_{\xi}; s') \big] \right]. \tag{18}
$$
Similar to the stochastic gradient (16), a stochastic sample-based estimation of the above gradient is shown below.
$$
\begin{array}{l} \widehat{\nabla}_{\xi} J_{\rho,\tau}(\pi, p_{\xi}) = \frac{1}{N(1-\gamma)} \sum_{i=1}^{N} \frac{\psi(s_{i,H_i}, a_{i,H_i}, s_{i,H_i+1})}{p_{\xi}(s_{i,H_i+1} \mid s_{i,H_i}, a_{i,H_i})} \\ \quad \cdot \big[ c(s_{i,H_i}, a_{i,H_i}, s_{i,H_i+1}) + \tau \ln \pi(a_{i,H_i} \mid s_{i,H_i}) \\ \quad + \gamma \phi(s_{i,H_i+1}, a_{i,H_i+1})^{\top} \bar{w}_{T_1} \big], \tag{19} \end{array}
$$
where $H_{i}$ is generated by $\mathbb{P}(H_i = h)\propto \gamma^h$ $(h = 0,1,\ldots ,H - 1)$ , and then the trajectory $\{s_{i,h},a_{i,h}\}_{h = 0}^{H_i + 1}$ is generated via $s_{i,0}\sim \rho$ , $a_{i,h}\sim \pi (\cdot |s_{i,h})$ , $s_{i,h + 1}\sim p_{\xi}(\cdot |s_{i,h},a_{i,h})$ . In a large state space, computing $\widehat{\nabla}_{\xi}J_{\rho ,\tau}(\pi ,p_{\xi})\in \mathbb{R}^{d_p}$ with $d_p\ll |\mathcal{S}|^2 |\mathcal{A}|$ is far less intensive than computing $\widehat{\nabla}_pJ_{\rho ,\tau}(\pi ,p)(s,a,s')$ for every $(s,a,s')$ .
With the stochastic gradient (19), we can apply projected stochastic gradient ascent to $\xi$ as follows.
$$
\xi_{t+1} = \operatorname{proj}_{\Xi}\left(\xi_t + \beta \widehat{\nabla}_{\xi} J_{\rho,\tau}\left(\pi_t, p_{\xi_t}\right)\right), \tag{20}
$$
where $\pi_t \approx \pi_{p_{\xi_t}}$ can be obtained by the policy optimization procedure in the next subsection.
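As a minimal sketch, if $\Xi$ is taken to be a Euclidean ball of radius $r$ around the nominal parameter $\bar{\xi}$ (one possible choice of the neighborhood; the specific geometry is an assumption here), the step (20) becomes a closed-form rescaling:

```python
import numpy as np

def proj_ball(xi, xi_bar, r):
    """Project xi onto the ball {xi : ||xi - xi_bar|| <= r} (closed form)."""
    d = xi - xi_bar
    n = np.linalg.norm(d)
    return xi if n <= r else xi_bar + (r / n) * d

xi_bar = np.array([0.5, 0.3, 0.2])              # nominal parameter
xi_t = xi_bar.copy()
beta = 0.1
grad = np.array([1.0, -2.0, 0.5])               # placeholder for estimator (19)
xi_next = proj_ball(xi_t + beta * grad, xi_bar, r=0.05)   # step (20)
assert np.linalg.norm(xi_next - xi_bar) <= 0.05 + 1e-12
```

For other ambiguity sets (e.g. box or simplex constraints) the same ascent step applies with the corresponding projection operator.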
# 4.2. Q Function Approximation
To avoid direct evaluation of $Q_{\tau}(\pi, p; s, a)$ for every $s, a$ , we adopt the popular linear $Q$ function approximation $Q_{\tau}(\pi, p; s, a) \approx \phi(s, a)^{\top} w$ with fixed and known features $\phi(s, a) \in \mathbb{R}^d$ and parameter $w \in \mathbb{R}^d$ (Huh and Lee, 2018; Zou et al., 2019; Wang et al., 2021; Zhou et al., 2023).
The parameter $w$ can be estimated by the following TD algorithm (Huh and Lee, 2018; Li et al., 2023a).
$$
\begin{array}{l} w_{n+1} = w_n + \alpha \phi(s_n, a_n) \big[ c(s_n, a_n, s_n') + \tau \ln \pi(a_n \mid s_n) \\ \quad + \gamma \phi(s_n', a_n')^{\top} w_n - \phi(s_n, a_n)^{\top} w_n \big]; \quad n = 0, 1, \dots, T_1 - 1, \tag{21} \end{array}
$$
where $s_n \sim \mu_{\pi, p}$ (the stationary distribution), $a_n \sim \pi(\cdot | s_n)$ , $s'_n \sim p(\cdot | s_n, a_n)$ , $a'_n \sim \pi(\cdot | s'_n)$ . We take $\overline{w}_{T_1} := \frac{1}{T_1} \sum_{n=1}^{T_1} w_n$ as the output, which provably converges to the optimal parameter (Li et al., 2023a).
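A minimal sketch of the linear TD rule (21) on a toy MDP (the features, kernel, and costs are illustrative assumptions): only the $d$ -dimensional parameter $w$ is updated, never a full $|\mathcal{S}|\times|\mathcal{A}|$ table.

```python
import numpy as np

# Toy MDP with linear Q features (illustrative assumptions).
rng = np.random.default_rng(4)
nS, nA, d, gamma, tau, alpha, T1 = 5, 2, 3, 0.9, 0.1, 0.05, 2000

phi = rng.random((nS, nA, d))                   # fixed known features phi(s,a)
p = rng.random((nS, nA, nS)); p /= p.sum(-1, keepdims=True)
c = rng.random((nS, nA, nS))
pi = np.full((nS, nA), 0.5)                     # uniform policy

w, w_sum, s = np.zeros(d), np.zeros(d), 0
for n in range(T1):
    a = rng.choice(nA, p=pi[s])
    s2 = rng.choice(nS, p=p[s, a])
    a2 = rng.choice(nA, p=pi[s2])
    # Regularized TD error with linear value estimates, as in eq. (21).
    delta = (c[s, a, s2] + tau * np.log(pi[s, a])
             + gamma * phi[s2, a2] @ w - phi[s, a] @ w)
    w += alpha * phi[s, a] * delta
    w_sum += w
    s = s2
w_bar = w_sum / T1                              # averaged iterate, the TD output
q_hat = phi @ w_bar                             # Q estimate phi(s,a)^T w_bar
assert q_hat.shape == (nS, nA) and np.isfinite(w_bar).all()
```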
Suppose we obtain $\widehat{Q}_{t,k}(s,a) = \phi (s,a)^{\top}w_{t,k}$ via the above TD algorithm. Then the NPG step (7) can be rewritten as follows.
$$
\pi_ {t, k} (\cdot | s) \propto \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} u _ {t , k}}{1 - \gamma} \right], \tag {22}
$$
$$
\text{where} \quad u_{t,k+1} = u_{t,k} + \eta w_{t,k}. \tag{23}
$$
In this way, the policy $\pi_{t,k}$ is implicitly parameterized by $u_{t,k}$ . Instead of computing $\pi_{t,k}(a|s)$ for every $s,a$ , we only need to compute $\pi_{t,k}(\cdot |s)$ above to obtain action samples at the sampled states $s$ , which significantly reduces the computation for a large state space.
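A sketch of the implicit parameterization (22)-(23): only the queried state's action distribution is ever materialized, even when $|\mathcal{S}|$ is large (the toy features and stand-in TD outputs below are assumptions).

```python
import numpy as np

# Implicit policy via eq. (22): pi(.|s) is a softmax of -phi(s,.)^T u / (1-gamma).
rng = np.random.default_rng(5)
nS, nA, d, gamma, eta = 100, 4, 3, 0.9, 0.5

phi = rng.random((nS, nA, d))                   # fixed known features

def policy_at(s, u):
    """Materialize pi(.|s) only for the queried state s, as in eq. (22)."""
    logits = -(phi[s] @ u) / (1 - gamma)
    z = np.exp(logits - logits.max())           # numerically stable softmax
    return z / z.sum()

u = np.zeros(d)
for k in range(3):                              # a few NPG steps, eq. (23)
    w_k = rng.normal(size=d)                    # stand-in for the TD output
    u = u + eta * w_k                           # u_{t,k+1} = u_{t,k} + eta w_{t,k}

probs = policy_at(7, u)                         # action distribution at state 7
a = rng.choice(nA, p=probs)                     # sample an action
assert probs.shape == (nA,) and abs(probs.sum() - 1.0) < 1e-9 and 0 <= a < nA
```

No $|\mathcal{S}|\times|\mathcal{A}|$ array is ever formed; the cost per query is $\mathcal{O}(|\mathcal{A}|d)$ .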
Based on the above discussions, we extend Algorithm 2 to Algorithm 3, the first stochastic policy gradient algorithm for $s$ -rectangular robust MDP with large state space, by changing the outer update of $p_{t+1}$ to eq. (20) under linear kernel parameterization, and changing the inner update rule of $\pi_t$ to eq. (22) under linear Q function approximation.
# 4.3. Sample Complexity of Algorithm 3
The global convergence of Algorithm 3 is largely guaranteed by the linear kernel parameterization, which preserves the Lipschitz smoothness and gradient dominance properties. To elaborate, $\nabla_{\xi}F_{\rho ,\tau}(p_{\xi}) = \Psi^{\top}\nabla_p F_{\rho ,\tau}(p_{\xi})$ , so $F_{\rho,\tau}(p_{\xi})$ is $\ell_F\| \Psi \|$ -smooth in $\xi$ based on Proposition 2. Similar to Proposition 3, we have the following gradient dominance property.
Proposition 4. Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ and $\xi \in \Xi$ .
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \leq \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} \left(\xi^ {\prime} - \xi\right) ^ {\top} \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}). \tag {24} \\ \end{array}
$$
Define $\zeta := \sup_{\pi \in \Pi, p \in \mathcal{P}, s \in \mathcal{S}, a \in \mathcal{A}, \tau \in [0,1]} |\phi(s, a)^\top w_{\pi, p}^* - Q_\tau(\pi, p; s, a)|^2$ as the linear Q function approximation error where $w_{\pi, p}^*$ is the optimal critic parameter (Xu et al., 2020; Chen et al., 2022). Then we obtain the sample complexity of Algorithm 3 as follows.
Theorem 2 (Sample Complexity of Algorithm 3). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 3 with hyperparameters $\tau = \min \left[\mathcal{O}(\sqrt{\zeta} +\epsilon),1\right]$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\zeta + \epsilon^{2})^{-1}]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F\|\Psi\|}$ , $N = \mathcal{O}(\epsilon^{-4})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . Then, under Assumption 1 and the assumption that $\inf_{s,a,s'}p_{\xi}(s'|s,a) > p_{\mathrm{min}}$ for a constant $p_{\mathrm{min}} > 0$ , $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $(\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon),\tau)$ -Nash equilibrium and an $\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon)$ -optimal robust policy with probability at least $1 - \delta$ . The required sample complexity is $T(T'T_1 + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
The sample complexity above is the same as that in Corollary 2 for small state spaces. The major difference is that the convergence error $\epsilon > 0$ becomes $\mathcal{O}(\sqrt{\zeta} + \zeta + \epsilon)$ due to the linear Q function approximation error term $\zeta$ . The linear kernel parameterization does not introduce additional error terms, since we have proved that it preserves the amenable geometric properties of Lipschitz smoothness and gradient dominance.
Algorithm 3 Accelerated Stochastic Robust Policy Gradient for Large State Space
1: Inputs: $\tau, T, T', T_1, \alpha, \eta, \beta, N, H$ .
2: Initialize: $\xi_0$ .
3: for transition update steps $t = 0, 1, \ldots, T - 1$ do
4: Initialize $u_{t,0} = 0$ .
5: for policy update steps $k = 0, 1, \ldots, T' - 1$ do
6: For $\pi = \pi_{t,k}$ defined by eq. (22) and $p = p_{\xi_t}$ , perform the TD update rule (21) for $T_1$ iterations.
7: Assign $w_{t,k} \gets \overline{w}_{T_1} \coloneqq \frac{1}{T_1} \sum_{n=1}^{T_1} w_n$ .
8: Obtain $u_{t,k+1}$ by eq. (23).
9: end for
10: For $\pi_t \coloneqq \pi_{t,T'}$ defined by eq. (22), obtain the stochastic gradient $\widehat{\nabla}_{\xi} J_{\rho,\tau}(\pi_t, p_{\xi_t})$ using eq. (19).
11: Obtain $\xi_{t+1}$ by the projected gradient ascent step (20).
12: end for
13: Output: $\pi_{\widetilde{T}}, \xi_{\widetilde{T}}$ where $\widetilde{T} \in \operatorname*{argmin}_{0 \leq t \leq T-1} \|\xi_{t+1} - \xi_t\|$ .
# 5. Conclusion
This work proposes a policy gradient algorithm with faster global convergence than existing policy gradient algorithms on $s$ -rectangular robust MDPs, by solving an entropy-regularized robust MDP. We further extend this algorithm to the stochastic setting and large state spaces, and obtain the first sample complexity results for policy gradient on $s$ -rectangular robust MDPs. Moreover, our algorithms are also the first policy gradient methods that can solve the entropy-regularized robust MDP problem, which is an important but underexplored area. Since $F_{\rho,\tau}(p)$ is Lipschitz smooth with parameter $\ell_F = \mathcal{O}(\tau^{-1})$ (see Proposition 2) while $\tau = \mathcal{O}(\epsilon)$ is required for $\epsilon$ -accuracy (see Proposition 1), our algorithm requires a small stepsize $\beta = \mathcal{O}(\epsilon)$ , and thus the iteration and sample complexities are not minimax optimal. An interesting future direction is to further accelerate our algorithms using techniques such as Nesterov's acceleration and variance reduction.
# Impact Statement
This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
# Acknowledgements
This work was partially supported by NSF IIS 2347592, 2347604, 2348159, 2348169, DBI 2405416, CCF 2348306, CNS 2347617.
# References
Abbeel, P., Coates, A., Quigley, M., and Ng, A. Y. (2006). An application of reinforcement learning to aerobatic helicopter flight. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1-8.
Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. (2018). Maximum a posteriori policy optimisation. In Proceedings of the International Conference on Learning Representations (ICLR).
Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. (2021). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431-4506.
Altman, E. (2004). Constrained Markov decision processes. CRC press. https://www-sop.inria.fr/members/Eitan.Altman/PAPERS/h.pdf.
Archibald, T., McKinnon, K., and Thomas, L. (1995). On the generation of markov decision processes. Journal of the Operational Research Society, 46(3):354-361.
Ayoub, A., Jia, Z., Szepesvari, C., Wang, M., and Yang, L. (2020). Model-based reinforcement learning with value-targeted regression. In Proceedings of the International Conference on Machine Learning (ICML), pages 463-474.
Badrinath, K. P. and Kalathil, D. (2021). Robust reinforcement learning using least squares policy iteration with provable performance guarantees. In International Conference on Machine Learning, pages 511-520.
Bernhard, P. and Rapaport, A. (1995). On a theorem of danskin with an application to a theorem of von neumann-sion. Nonlinear Analysis: Theory, Methods & Applications, 24(8):1163-1181.
Bhandari, J. and Russo, D. (2021). On the linear convergence of policy gradient methods for finite mdps. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 2386-2394.
Bhandari, J., Russo, D., and Singal, R. (2018). A finite time analysis of temporal difference learning with linear function approximation. In Proceedings of the Conference on learning theory (COLT), pages 1691-1692.
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., and Lee, M. (2009). Natural actor-critic algorithms. Automatica, 45(11):2471-2482.
Cayci, S., He, N., and Srikant, R. (2022). Finite-time analysis of entropy-regularized neural natural actor-critic algorithm. ArXiv:2206.00833.
Cen, S., Cheng, C., Chen, Y., Wei, Y., and Chi, Y. (2022). Fast global convergence of natural policy gradient methods with entropy regularization. *Operations Research*, 70(4):2563-2578.
Chen, Z., Zhou, Y., Chen, R.-R., and Zou, S. (2022). Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis. In Proceedings of the International Conference on Machine Learning (ICML), pages 3794-3834.
Eysenbach, B. and Levine, S. (2021). Maximum entropy rl (provably) solves some robust rl problems. In Proceedings of the International Conference on Learning Representations (ICLR).
Grand-Clément, J. and Kroer, C. (2021). Scalable first-order methods for robust mdps. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12086-12094.
Guha, E. K. and Lee, J. D. (2023). Solving robust mdps through no-regret dynamics. ArXiv:2305.19035.
Ho, C. P., Petrik, M., and Wiesemann, W. (2021). Partial policy iteration for $\ell_1$-robust markov decision processes. Journal of Machine Learning Research, 22(275):1-46.
Huh, J. and Lee, D. D. (2018). Efficient sampling with q-learning to guide rapidly exploring random trees. IEEE Robotics and Automation Letters, 3(4):3868-3875.
Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2):257-280.
Kakade, S. (2001). A natural policy gradient. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1531-1538.
Kober, J., Bagnell, J. A., and Peters, J. (2013). Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274.
Konda, V. R. and Tsitsiklis, J. N. (1999). Actor-critic algorithms. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1008-1014.
Kumar, N., Derman, E., Geist, M., Levy, K., and Mannor, S. (2023a). Policy gradient for rectangular robust markov decision processes. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips).
Kumar, N., Levy, K., Wang, K., and Mannor, S. (2022). Efficient policy iteration for robust markov decision processes via regularization. ArXiv:2205.14327.
Kumar, N., Levy, K., Wang, K., and Mannor, S. (2023b). An efficient solution to s-rectangular robust markov decision processes. *ArXiv:2301.13642*.
Kumar, N., Usmanova, I., Levy, K. Y., and Mannor, S. (2023c). Towards faster global convergence of robust policy gradient methods. In Sixteenth European Workshop on Reinforcement Learning.
Leonardos, S., Overman, W., Panageas, I., and Piliouras, G. (2022). Global convergence of multi-agent policy gradient in markov potential games. In *ICLR 2022 Workshop on Gamification and Multiagent Solutions*.
Li, G., Wu, W., Chi, Y., Ma, C., Rinaldo, A., and Wei, Y. (2023a). Sharp high-probability sample complexities for policy evaluation with linear function approximation. *ArXiv:2305.19001*.
Li, M., Sutter, T., and Kuhn, D. (2023b). Policy gradient algorithms for robust mdps with non-rectangular uncertainty sets. ArXiv:2305.19004.
Li, Y. and Lan, G. (2023). First-order policy optimization for robust policy evaluation. ArXiv:2307.15890.
Li, Y., Lan, G., and Zhao, T. (2023c). First-order policy optimization for robust markov decision process. ArXiv:2209.10579.
Mai, T. and Jaillet, P. (2021). Robust entropy-regularized markov decision processes. ArXiv:2112.15364.
Mankowitz, D. J., Levine, N., Jeong, R., Abdelmaleki, A., Springenberg, J. T., Shi, Y., Kay, J., Hester, T., Mann, T., and Riedmiller, M. (2019). Robust reinforcement learning for continuous control with model misspecification. In Proceedings of the International Conference on Learning Representations (ICLR).
Nilim, A. and El Ghaoui, L. (2005). Robust control of markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798.
Peng, X. B., Andrychowicz, M., Zaremba, W., and Abbeel, P. (2018). Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pages 3803-3810. IEEE.
Perera, A. and Kamalaruban, P. (2021). Applications of reinforcement learning in energy systems. Renewable and Sustainable Energy Reviews, 137:110618.
Samsonov, S., Tiapkin, D., Naumov, A., and Moulines, E. (2023). Finite-sample analysis of the temporal difference learning. *ArXiv:2310.14286*.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In Proceedings of the International conference on machine learning (ICML), pages 387-395.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (1999). Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1057-1063.
Wang, Q., Ho, C. P., and Petrik, M. (2023). Policy gradient in robust mdps with global convergence guarantee. In Proceedings of the International Conference on Machine Learning (ICML), volume 202, pages 35763-35797.
Wang, T., Zhou, D., and Gu, Q. (2021). Provably efficient reinforcement learning with linear function approximation under adaptivity constraints. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips).
Wang, Y. and Zou, S. (2022). Policy gradient method for robust reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 23484-23526.
Wang, Y.-C. and Usher, J. M. (2005). Application of reinforcement learning for agent-based production scheduling. Engineering applications of artificial intelligence, 18(1):73-82.
Wiesemann, W., Kuhn, D., and Rustem, B. (2013). Robust markov decision processes. Mathematics of Operations Research, 38(1):153-183.
Xiao, L. (2022). On the convergence rates of policy gradient methods. The Journal of Machine Learning Research, 23(1):12887-12922.
Xu, T., Liang, Y., and Lan, G. (2021). Crpo: A new approach for safe reinforcement learning with convergence guarantee. In Proceedings of the International Conference on Machine Learning (ICML), pages 11480-11491.
Xu, T., Wang, Z., and Liang, Y. (2020). Improving sample complexity bounds for (natural) actor-critic algorithms. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 4358-4369.
Xu, X., Zuo, L., and Huang, Z. (2014). Reinforcement learning algorithms with function approximation: Recent advances and applications. Information Sciences, 261:1-31.
Zhang, J., Zhang, W., and Gu, Q. (2023). Optimal horizon-free reward-free exploration for linear mixture MDPs. In Proceedings of the International Conference on Machine Learning (ICML), volume 202, pages 41902-41930.
Zhou, D., Gu, Q., and Szepesvari, C. (2021). Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. In Proceedings of the Conference on Learning Theory (COLT), pages 4532-4576.
Zhou, R., Liu, T., Cheng, M., Kalathil, D., Kumar, P., and Tian, C. (2023). Natural actor-critic for robust reinforcement learning with function approximation. In Proceedings of the Thirty-seventh Conference on Neural Information Processing Systems (Neurips).
Zou, S., Xu, T., and Liang, Y. (2019). Finite-sample analysis for sarsa with linear function approximation. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 8668-8678.
# Appendix
# Table of Contents
A Existing Complexity Results of Robust Policy Gradient
A.1 Complexity Results of (Wang et al., 2023)
A.2 Complexity Results of (Li et al., 2023b)
A.3 Extension to Non-rectangularity in (Li et al., 2023b) Also Applies to Our Work
A.4 Complexity Results of (Kumar et al., 2023c)
A.5 Complexity Results of (Guha and Lee, 2023)
A.6 Why Does (Guha and Lee, 2023) Require $s$-rectangularity
B Basic Properties of Entropy Regularized Robust MDP
C Lipschitz Properties
D Convergence Results of the Policy Updates
E Stochastic Approximation Errors
F Supporting Lemmas
G Proof of Proposition 1
H Proof of Proposition 2
I Proof of Proposition 3
J Proof of Proposition 4
K Proof of Theorem 1
L Proof of Corollary 1
M Proof of Corollary 2
N Proof of Theorem 2
O Experiments
O.1 Experiments on Small State Space under Deterministic Setting
O.2 Experiments on Large State Space
# A. Existing Complexity Results of Robust Policy Gradient
In this section, we explain the existing complexity results listed in Table 1. We also discuss the claim of (Li et al., 2023b; Guha and Lee, 2023) that their complexity results do not require a rectangularity assumption.
# A.1. Complexity Results of (Wang et al., 2023)
(Wang et al., 2023) proposes a double-loop robust policy gradient (DRPG) algorithm for robust MDPs with $s$ -rectangular ambiguity sets, which applies projected gradient descent steps to update the policy $\pi$ in the outer loop and projected gradient ascent steps to update the transition kernel $p$ in the inner loop. Based on Theorem 3.3 of (Wang et al., 2023), to obtain an $\epsilon$ -optimal robust policy, DRPG requires $T = \mathcal{O}(\epsilon^{-4})$ outer iterations of the policy updates, and the $t$ -th outer iteration requires a transition kernel $p_t$ such that $J_{\rho}(\pi_t,p_t)\geq \max_{p\in \mathcal{P}}J_{\rho}(\pi_t,p) - \epsilon_t$ , where the precision $\epsilon_t > 0$ satisfies $\epsilon_{t + 1}\leq \gamma \epsilon_t$ and $\epsilon_0\leq \sqrt{T}$ . Such an $\epsilon_{t}$ -accurate $p_t$ further requires $T_{t} = \mathcal{O}(\epsilon_{t}^{-2})$ iterations of the transition kernel updates based on Theorem 4.4 of (Wang et al., 2023). $T_{t}$ grows exponentially in $t$ since $T_{t}\geq \mathcal{O}((\gamma^{t}\epsilon_{0})^{-2})\geq \mathcal{O}(T^{-1}\gamma^{-2t})$ . Hence, the required number of updates is
$$
\begin{array}{l} \max \left(T, \sum_{t=0}^{T-1} T_t\right) \geq \max \left(T, T^{-1} \sum_{t=0}^{T-1} \mathcal{O}\left(\gamma^{-2t}\right)\right) \\ = \max \left(T, \mathcal{O}\left(T^{-1} \gamma^{-2T}\right)\right) \\ = \max \left(\mathcal{O}\left(\epsilon^{-4}\right), \mathcal{O}\left(\epsilon^{4} \gamma^{-\mathcal{O}(\epsilon^{-4})}\right)\right) \\ = \mathcal{O}\left(\epsilon^{-4} + \epsilon^{4} \gamma^{-\mathcal{O}(\epsilon^{-4})}\right). \end{array}
$$
Therefore, the iteration complexity (the total number of updates of both the transition kernel and the policy) is $T + \mathcal{O}(\epsilon^{-4} + \epsilon^4\gamma^{-\mathcal{O}(\epsilon^{-4})}) = \mathcal{O}(\epsilon^{-4} + \epsilon^4\gamma^{-\mathcal{O}(\epsilon^{-4})})$ , which increases exponentially as $\epsilon \to 0^+$ .
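To see the exponential blow-up concretely, the following numeric sketch (the function name and constants are our own, not from (Wang et al., 2023)) tallies inner-loop iteration counts $T_t \propto \epsilon_t^{-2}$ when the inner precision shrinks geometrically, $\epsilon_t = \gamma^t \epsilon_0$:

```python
import math

# Hypothetical illustration: count DRPG's inner transition-kernel updates
# T_t = O(eps_t^{-2}) when the inner precision is eps_t = gamma^t * eps_0.
def drpg_inner_updates(gamma: float, eps0: float, T: int) -> list:
    """Return [T_0, ..., T_{T-1}] with T_t proportional to eps_t^{-2}."""
    return [math.ceil((gamma ** t * eps0) ** -2) for t in range(T)]

gamma, eps0 = 0.9, 1.0
updates = drpg_inner_updates(gamma, eps0, T=50)
# T_t grows like gamma^{-2t}: each outer step multiplies the inner cost
# by roughly 1/gamma^2, so the last inner loop dwarfs the first.
print(updates[0], updates[-1])
print(sum(updates))  # total inner work is dominated by the gamma^{-2T} term
```

The geometric growth of `updates` mirrors the $\gamma^{-\mathcal{O}(\epsilon^{-4})}$ term in the iteration complexity above.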
# A.2. Complexity Results of (Li et al., 2023b)
(Li et al., 2023b) proposes an actor-critic algorithm (see their Algorithm 4.1) with a double-loop structure similar to that of the DRPG algorithm (Wang et al., 2023). To obtain an $\epsilon$ -optimal robust policy, this actor-critic algorithm requires $\mathcal{O}(\epsilon^{-4})$ outer policy updates and $\mathcal{O}(\epsilon^{-2})$ inner transition kernel updates per outer iteration, based on Theorems 4.5 and 3.8 of (Li et al., 2023b) respectively. Therefore, the total number of transition kernel updates is $\mathcal{O}(\epsilon^{-4})\cdot \mathcal{O}(\epsilon^{-2}) = \mathcal{O}(\epsilon^{-6})$ , so the iteration complexity is $\mathcal{O}(\epsilon^{-4}) + \mathcal{O}(\epsilon^{-6}) = \mathcal{O}(\epsilon^{-6})$ .
# A.3. Extension to Non-rectangularity in (Li et al., 2023b) Also Applies to Our Work
(Li et al., 2023b) extends their complexity results to non-rectangular ambiguity set $\mathcal{P}$ by defining the following degree of non-rectangularity.
$$
\delta_ {\mathcal {P}} := \max _ {p ^ {\prime} \in \mathcal {P}} \left[ \max _ {p _ {s} \in \mathcal {P} _ {s}} \langle \nabla_ {p} J _ {\rho} (\pi , p ^ {\prime}), p _ {s} \rangle - \max _ {p \in \mathcal {P}} \langle \nabla_ {p} J _ {\rho} (\pi , p ^ {\prime}), p \rangle \right] \tag {25}
$$
where $\mathcal{P}_s$ denotes the smallest $s$ -rectangular ambiguity set containing $\mathcal{P}$ .
Then the convergence proof for the inner transition kernel updates in (Li et al., 2023b) (see their Theorem 3.8) uses the following gradient dominance property.
$$
\max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - J _ {\rho} (\pi , p) \leq \left\| \frac {d _ {\rho} ^ {\pi , p _ {s} ^ {*}}}{d _ {\rho} ^ {\pi , p}} \right\| _ {\infty} \max _ {p _ {s} \in \mathcal {P} _ {s}} \langle \nabla_ {p} J _ {\rho} (\pi , p), p _ {s} - p \rangle \stackrel {(i)} {\leq} \frac {D}{1 - \gamma} \left[ \delta_ {\mathcal {P}} + \max _ {p ^ {\prime} \in \mathcal {P}} \langle \nabla_ {p} J _ {\rho} (\pi , p), p ^ {\prime} - p \rangle \right], \tag {26}
$$
where $p_s^* \in \arg \max_{p' \in \mathcal{P}_s} J_\rho(\pi, p')$ denotes the optimal transition kernel and (i) uses $D := \sup_{\pi \in \Pi, p \in \mathcal{P}} \|d_\rho^{\pi, p} / \rho\|_\infty < \infty$ and $d_\rho^{\pi, p}(s) \geq (1 - \gamma)\rho(s)$ . Compared with the gradient dominance property (Proposition 3) used in our convergence proof for the $s$ -rectangular case, the above gradient dominance property involves the degree of non-rectangularity $\delta_{\mathcal{P}} > 0$ defined in eq. (25). Hence, we can also extend our convergence result to non-rectangular robust MDPs in the same way by replacing Proposition 3 with the above gradient dominance property (26), where the objective function $J_\rho$ should be changed to $J_{\rho,\tau}$ to fit our entropy-regularized case.
# A.4. Complexity Results of (Kumar et al., 2023c)
(Kumar et al., 2023c) aims to solve the following robust MDP problem,
$$
\max _ {\pi} \min _ {(P, R) \in \mathcal {U}} \rho_ {P, R} ^ {\pi} := \mathbb {E} _ {\pi , P} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R (s _ {t}, a _ {t}) \Big | s _ {0} \sim \rho \right],
$$
where $\mathcal{U}$ denotes the ambiguity set and $\rho_{P,R}^{\pi}$ denotes the value function under policy $\pi$ , transition kernel $P$ and reward function $R$ . This work assumes oracle access to the worst-case transition kernel and reward $(P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi}) \in \arg \inf_{(P,R) \in \mathcal{U}} \rho_{P,R}^{\pi}$ and assumes that the robust value function $\rho_{\mathcal{U}}^{\pi} \coloneqq \min_{(P,R) \in \mathcal{U}} \rho_{P,R}^{\pi}$ has a Lipschitz-continuous gradient $\nabla_{\pi} \rho_{\mathcal{U}}^{\pi} = \nabla_{\pi} \rho_{P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi}}^{\pi}$ . Under these assumptions, which are impractical in many applications, (Kumar et al., 2023c) proves that it takes $\mathcal{O}(\epsilon^{-1})$ iterations of the following projected gradient ascent step to obtain an $\epsilon$ -optimal robust policy.
$$
\pi_ {k + 1} := \operatorname {p r o j} _ {\Pi} \left(\pi_ {k} + \eta \nabla_ {\pi} \rho_ {\mathcal {U}} ^ {\pi_ {k}}\right).
$$
However, (Kumar et al., 2023c) does not discuss how to obtain the worst-case transition kernel and reward $(P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi})$ , so the total iteration complexity, defined as the number of updates of all the variables ($\pi$ , $P$ and $R$ ), is unknown.
# A.5. Complexity Results of (Guha and Lee, 2023)
(Guha and Lee, 2023) proposes a gradient-based no-regret RL algorithm, which runs for $T$ time steps. In each time step, both the policy and the transition kernels are updated using $T_{\mathcal{O}}$ projected gradient descent steps. The convergence rate is $\mathcal{O}(T^{-1/2} + T_{\mathcal{O}}^{-1/2})$ based on Theorem 7.2 of (Guha and Lee, 2023). Hence, to obtain an $\epsilon$ -optimal robust policy, it requires $T, T_{\mathcal{O}} = \mathcal{O}(\epsilon^{-2})$ , which means both the policy and the transition kernels are updated $TT_{\mathcal{O}} = \mathcal{O}(\epsilon^{-4})$ times, so the iteration complexity is also $\mathcal{O}(\epsilon^{-4})$ .
# A.6. Why Does (Guha and Lee, 2023) Require $s$ -Rectangularity?
The gradient-based no-regret RL algorithm (Guha and Lee, 2023) is claimed to converge globally without a rectangularity condition. However, its global convergence relies on the following gradient dominance condition (see their Lemma 6.5), which requires $s$ -rectangularity, as elaborated below.
$$
V _ {W} (\mu) - V _ {W ^ {*}} (\mu) \leq \frac {- 1}{1 - \gamma} \left\| \frac {d _ {\mu} ^ {W}}{\mu} \right\| _ {\infty} \min _ {\bar {W} \in \mathcal {W}} \left[ \left(\bar {W} - W\right) ^ {\top} \nabla_ {W} V _ {W} (\mu) \right], \tag {27}
$$
where $W, \mu, \mathcal{W}, V_W(\mu), d_\mu^W, \left\| \frac{d_\mu^W}{\mu} \right\|_\infty$ correspond to our transition kernel $p$ , initial state distribution $\rho$ , ambiguity set $\mathcal{P}$ , objective function $J_{\rho}(\pi, p)$ (with fixed policy $\pi$ ), occupancy measure $d_{\rho}^{\pi, p}$ and constant $D := \sup_{\pi \in \Pi, p \in \mathcal{P}} \| d_{\rho}^{\pi, p} / \rho \|_\infty < \infty$ respectively.
Their proof of the above gradient dominance property (27) makes the following mistake at the beginning of page 16.
$$
\sum_ {s ^ {\prime}, a, s} \left[ \gamma^ {t} d _ {\mu} ^ {W} (s) \pi (a \mid s) \min _ {s ^ {\prime}} \left(A ^ {W} \left(s ^ {\prime}, a, s\right)\right) \right] = \min _ {\bar {W} \in \mathcal {W}} \sum_ {s ^ {\prime}, a, s} \left[ \gamma^ {t} d _ {\mu} ^ {W} (s) \pi (a \mid s) \mathbb {P} _ {\bar {W}} \left(s ^ {\prime}, a, s\right) \left(A ^ {W} \left(s ^ {\prime}, a, s\right)\right) \right] \tag {28}
$$
where $\mathbb{P}_{\overline{W}} = \bar{W}$ and $A^{W}(s^{\prime},a,s) := \gamma V_{W}(s^{\prime}) + r(s,a) - V_{W}(s)$ ((Guha and Lee, 2023) uses a reward function $r$ instead of our cost $c$ ). The above equality uses the claim that the right side is minimized when $\mathbb{P}_{\overline{W}}(s^{\prime},a,s) = 1$ for $s^{\prime} \in \arg \min_{s^{\prime}} A^{W}(s^{\prime},a,s)$ . However, this is not true since $A^{W}(s^{\prime},a,s) < 0$ is possible. Furthermore, even if $\inf_{s,a,s^{\prime}} A^{W}(s^{\prime},a,s) \geq 0$ , such a deterministic choice of $\mathbb{P}_{\overline{W}}$ does not necessarily satisfy the constraint that $\bar{W} \in \mathcal{W}$ .
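The constraint issue can be seen in a toy numeric example (all numbers below, including the names `A`, `nominal`, `radius` and `in_W`, are our own illustration, not from (Guha and Lee, 2023)): the unconstrained minimizer of a linear function over the simplex is a deterministic kernel, but that vertex can lie outside a constrained ambiguity set:

```python
# Toy example: minimize <P, A> over two-point distributions P subject to an
# ambiguity set W, here an L1 ball around a nominal kernel (our own choice).
A = [-1.0, 2.0]          # A^W(s', a, s) for two candidate next states s'
nominal = [0.5, 0.5]     # nominal transition probabilities
radius = 0.2             # L1 radius defining the ambiguity set W

def in_W(P):
    return sum(abs(p - q) for p, q in zip(P, nominal)) <= radius + 1e-12

# Unconstrained minimizer over the simplex: all mass on argmin_{s'} A(s').
P_det = [1.0, 0.0]
print(in_W(P_det))       # False: the deterministic choice leaves W

# Brute-force minimum over a grid of W instead.
best = min(
    sum(p * a for p, a in zip([x, 1 - x], A))
    for x in (i / 1000 for i in range(1001))
    if in_W([x, 1 - x])
)
print(best)              # attained at P = [0.6, 0.4], strictly above min(A) = -1
```

So replacing the constrained minimum by the deterministic one, as in eq. (28), changes the value of the problem in general.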
The correct proof of the above gradient dominance condition (27) is shown in the proof of Lemma 4.3 in (Wang et al., 2023). At the end of that proof, an inequality is used that requires the $s$ -rectangularity condition.
# B. Basic Properties of Entropy Regularized Robust MDP
We quote the following perfect duality result for entropy-regularized robust MDPs from Theorem 3.2 of (Mai and Jaillet, 2021).
Lemma 2. Under Assumption 1, the following perfect duality holds for $J_{\rho,\tau}(\pi,p)$ , the objective function of entropy regularized robust MDP defined in eq. (3).
$$
\min _ {\pi \in \Pi} \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi , p) = \max _ {p \in \mathcal {P}} \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p) \tag {29}
$$
Proof. Assumption 1 says $\mathcal{P}$ is $s$ -rectangular, compact and convex. Hence, each $\mathcal{P}_s$ is compact and convex, which means all the conditions of Theorem 3.2 of (Mai and Jaillet, 2021) hold, and thus its conclusion of perfect duality follows.
To facilitate further discussion, we follow (Cen et al., 2022) and define a variant of $Q_{\tau}$ (defined in eq. (5)) as follows.
$$
\widetilde {Q} _ {\tau} (\pi , p; s, a) := \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ c (s, a, s ^ {\prime}) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) ] \tag {30}
$$
It can be directly seen that $\widetilde{Q}_{\tau}(\pi, p; s, a) = Q_{\tau}(\pi, p; s, a) - \tau \ln \pi(a|s)$ .
Lemma 3. If $c(s, a, s') \in [0,1]$ , then the functions $J_{\rho,\tau}, V_{\tau}, F_{\rho,\tau}$ and $\widetilde{Q}_{\tau}$ defined by eqs. (3), (4), (6) and (30) respectively have the following ranges for any $\pi \in \Pi$ , $p \in \mathcal{P}$ , $s \in S$ and $a \in \mathcal{A}$ .
$$
J _ {\rho , \tau} (\pi , p), V _ {\tau} (\pi , p; s), F _ {\rho , \tau} (p) \in \left[ - \frac {\tau \ln | \mathcal {A} |}{1 - \gamma}, \frac {1}{1 - \gamma} \right] \tag {31}
$$
$$
\widetilde {Q} _ {\tau} (\pi , p; s, a) \in \left[ - \frac {\gamma \tau \ln | \mathcal {A} |}{1 - \gamma}, \frac {1}{1 - \gamma} \right] \tag {32}
$$
Proof. We rewrite the function $V_{\tau}$ as follows.
$$
\begin{array}{l} V _ {\tau} (\pi , p; s) \stackrel {(i)} {=} \mathbb {E} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \left[ c _ {t} + \tau \ln \pi \left(a _ {t} \mid s _ {t}\right) \right] \Big | s _ {0} = s \right] \\ \stackrel {(i i)} {=} \mathbb {E} \Big [ \sum_ {t = 0} ^ {\infty} \gamma^ {t} \Big (c _ {t} + \tau \sum_ {a} \pi (a | s _ {t}) \ln \pi (a | s _ {t}) \Big) \Big | s _ {0} = s \Big ], \\ \end{array}
$$
where (i) uses eq. (4) and (ii) uses $a_{t} \sim \pi(\cdot|s_{t})$ conditioned on $s_{t}$ . Since $c(s, a, s') \in [0,1]$ and the negative entropy $\sum_{a} \pi(a|s_{t}) \ln \pi(a|s_{t}) \in [-\ln |\mathcal{A}|, 0]$ , the range (31) holds for the function $V_{\tau}$ , and thus also holds for the functions $J_{\rho,\tau}(\pi, p) = \mathbb{E}_{s \sim \rho} V_{\tau}(\pi, p; s)$ and $F_{\rho,\tau}(p) = \min_{\pi \in \Pi} J_{\rho,\tau}(\pi, p)$ .
Then the range (32) can be proved as follows.
$$
\begin{array}{l} \widetilde {Q} _ {\tau} (\pi , p; s, a) \stackrel {(i)} {=} \mathbb {E} _ {s ^ {\prime} \sim p (\cdot | s, a)} [ c (s, a, s ^ {\prime}) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) ] \\ \stackrel {(i i)} {\in} \left[ 0 + \gamma \Big (- \frac {\tau \ln | \mathcal {A} |}{1 - \gamma} \Big), 1 + \frac {\gamma}{1 - \gamma} \right] = \Big [ - \frac {\gamma \tau \ln | \mathcal {A} |}{1 - \gamma}, \frac {1}{1 - \gamma} \Big ], \\ \end{array}
$$
where (i) uses eq. (30) and (ii) uses $c(s,a,s') \in [0,1]$ and the range (31).
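The ranges (31) and (32) can be sanity-checked numerically. The sketch below (a random toy MDP of our own construction, not from the paper) runs fixed-policy soft Bellman iteration with the entropy penalty $+\tau\ln\pi$ and verifies both bounds:

```python
import math, random

# Numerical sanity check of the ranges (31)-(32) on a random toy MDP
# (our own construction): costs in [0, 1], fixed stochastic policy pi.
random.seed(0)
S, A, gamma, tau = 4, 3, 0.9, 0.5

def rand_dist(n):
    w = [random.random() for _ in range(n)]
    z = sum(w)
    return [x / z for x in w]

p = [[rand_dist(S) for _ in range(A)] for _ in range(S)]   # p[s][a][s']
c = [[[random.random() for _ in range(S)] for _ in range(A)] for _ in range(S)]
pi = [rand_dist(A) for _ in range(S)]

V = [0.0] * S
for _ in range(500):  # gamma-contraction, so 500 sweeps is ample
    V = [sum(pi[s][a] * (tau * math.log(pi[s][a])
             + sum(p[s][a][t] * (c[s][a][t] + gamma * V[t]) for t in range(S)))
             for a in range(A)) for s in range(S)]

lo, hi = -tau * math.log(A) / (1 - gamma), 1 / (1 - gamma)
assert all(lo <= v <= hi for v in V)                       # range (31)
Q_tilde = [[sum(p[s][a][t] * (c[s][a][t] + gamma * V[t]) for t in range(S))
            for a in range(A)] for s in range(S)]
assert all(gamma * lo <= q <= hi for row in Q_tilde for q in row)  # range (32)
print("ranges (31) and (32) hold on this toy MDP")
```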
Lemma 4. For any $p \in \mathcal{P}$ , the optimal policy $\pi_p \coloneqq \arg \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p)$ is unique and has the following lower bound.
$$
\ln \pi_ {p} (a | s) \geq - \frac {\ln | \mathcal {A} | + 1 / \tau}{1 - \gamma}; \forall s \in \mathcal {S}, a \in \mathcal {A}. \tag {33}
$$
Proof. Based on (Cen et al., 2022), $\pi_p\coloneqq \arg \min_{\pi \in \Pi}J_{\rho ,\tau}(\pi ,p)$ is unique with the following expression.
$$
\pi_ {p} (a | s) = \frac {\exp \left[ - \widetilde {Q} _ {\tau} \left(\pi_ {p} , p ; s , a\right) / \tau \right]}{\sum_ {a ^ {\prime}} \exp \left[ - \widetilde {Q} _ {\tau} \left(\pi_ {p} , p ; s , a ^ {\prime}\right) / \tau \right]}, \tag {34}
$$
where $\widetilde{Q}_{\tau}$ is defined by eq. (30). Therefore, eq. (33) can be proved as follows.
$$
\ln \pi_ {p} (a | s) \stackrel {(i)} {=} \ln \left(\frac {\exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a) / \tau ]}{\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau ]}\right) \stackrel {(i i)} {\geq} \ln \left(\frac {\exp [ - 1 / (\tau (1 - \gamma)) ]}{| \mathcal {A} | \exp [ \gamma \ln | \mathcal {A} | / (1 - \gamma) ]}\right) = - \frac {\ln | \mathcal {A} | + 1 / \tau}{1 - \gamma} \tag {35}
$$
where (i) uses eq. (34) and (ii) uses eq. (32).
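The bound (33) holds for any $\widetilde{Q}_{\tau}$ -values within the range (32); a minimal numeric check (the toy values below are our own choice, not a real robust MDP):

```python
import math, random

# Check the lower bound (33) for the softmax policy (34) at one state,
# using toy Q-tilde values drawn from within the range (32).
random.seed(1)
A, gamma, tau = 5, 0.9, 0.5
lo, hi = -gamma * tau * math.log(A) / (1 - gamma), 1 / (1 - gamma)
Q = [random.uniform(lo, hi) for _ in range(A)]

z = [math.exp(-q / tau) for q in Q]
pi = [x / sum(z) for x in z]          # the softmax policy of eq. (34)

bound = -(math.log(A) + 1 / tau) / (1 - gamma)
assert all(math.log(x) >= bound for x in pi)
print(min(math.log(x) for x in pi), ">=", bound)
```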
# C. Lipschitz Properties
Lemma 5. The occupancy measure $d_{\rho}^{\pi, p}(s) \coloneqq (1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t} \mathbb{P}_{\pi, p}(s_{t} = s | s_{0} \sim \rho)$ satisfies the following Lipschitz properties for any $\pi, \pi' \in \Pi$ and $p, p' \in \mathcal{P}$ .
$$
\left\| d _ {\rho} ^ {\pi^ {\prime}, p} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \leq \frac {\gamma}{1 - \gamma} \max _ {s} \left\| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \right\| _ {1} \tag {36}
$$
$$
\left\| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \leq \frac {\gamma}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \tag {37}
$$
Proof. The occupancy measure $d_{\rho}^{\pi ,p}$ satisfies the following equation based on Theorem 3.2 of (Altman, 2004).
$$
d _ {\rho} ^ {\pi , p} \left(s ^ {\prime}\right) = (1 - \gamma) \rho \left(s ^ {\prime}\right) + \gamma \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right); \forall \pi , p, s ^ {\prime}. \tag {38}
$$
Therefore, for any $\pi, \pi' \in \Pi$ and $p \in \mathcal{P}$ , we have
$$
\begin{array}{l} \left\| d _ {\rho} ^ {\pi^ {\prime}, p} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \\ = \sum_ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi^ {\prime}, p} \left(s ^ {\prime}\right) - d _ {\rho} ^ {\pi , p} \left(s ^ {\prime}\right) \right| \\ \stackrel {(i)} {=} \gamma \sum_ {s ^ {\prime}} \Big | \sum_ {s, a} d _ {\rho} ^ {\pi^ {\prime}, p} (s) \pi^ {\prime} (a | s) p \left(s ^ {\prime} | s, a\right) - \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} | s, a\right) \Big | \\ = \gamma \sum_ {s ^ {\prime}} \Big | \sum_ {s, a} p \left(s ^ {\prime} \mid s, a\right) \Big (\left[ d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \pi^ {\prime} (a \mid s) + d _ {\rho} ^ {\pi , p} (s) \left[ \pi^ {\prime} (a \mid s) - \pi (a \mid s) \right] \Big) \Big | \\ \leq \gamma \sum_ {s} \left| d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) \right| + \gamma \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \left| \pi^ {\prime} (a | s) - \pi (a | s) \right| \\ \leq \gamma \| d _ {\rho} ^ {\pi^ {\prime}, p} - d _ {\rho} ^ {\pi , p} \| _ {1} + \gamma \max _ {s} \| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \| _ {1}, \\ \end{array}
$$
where (i) uses eq. (38). Then eq. (36) can be proved by rearranging the above inequality.
Next, we will prove eq. (37). For any $\pi \in \Pi$ and $p,p^{\prime}\in \mathcal{P}$ , we have
$$
\begin{array}{l} \left\| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \\ = \sum_ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi , p ^ {\prime}} \left(s ^ {\prime}\right) - d _ {\rho} ^ {\pi , p} \left(s ^ {\prime}\right) \right| \\ \stackrel {(i)} {=} \gamma \sum_ {s ^ {\prime}} \left| \sum_ {s, a} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) \right| \\ = \gamma \sum_ {s ^ {\prime}} \left| \sum_ {s, a} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) \left[ p ^ {\prime} \left(s ^ {\prime} | s, a\right) - p \left(s ^ {\prime} | s, a\right) \right] + \sum_ {s, a} \left[ d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \pi (a | s) p \left(s ^ {\prime} | s, a\right) \right| \\ \leq \gamma \sum_ {s, a, s ^ {\prime}} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) \left| p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - p \left(s ^ {\prime} \mid s, a\right) \right| + \gamma \sum_ {s, a, s ^ {\prime}} \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) \left| d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) \right| \\ \leq \gamma \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \| _ {1}, \\ \end{array}
$$
where (i) uses eq. (38). Then eq. (37) can be proved by rearranging the above inequality.
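The bound (36) can be checked numerically by computing occupancy measures from the fixed point of eq. (38). The sketch below (a random toy MDP of our own construction) does this by fixed-point iteration, which converges since the map in eq. (38) is a $\gamma$ -contraction in $\ell_1$ :

```python
import random

# Numerical check of eq. (36): compute occupancy measures via the
# fixed-point equation (38) on a random toy MDP (our own construction),
# then compare the L1 distance under two policies with the bound.
random.seed(2)
S, A, gamma = 5, 3, 0.8

def rand_dist(n):
    w = [random.random() for _ in range(n)]
    z = sum(w)
    return [x / z for x in w]

p = [[rand_dist(S) for _ in range(A)] for _ in range(S)]   # p[s][a][s']
rho = rand_dist(S)

def occupancy(pi, iters=300):
    d = list(rho)
    for _ in range(iters):  # iterate eq. (38); a gamma-contraction
        d = [(1 - gamma) * rho[t] + gamma *
             sum(d[s] * pi[s][a] * p[s][a][t]
                 for s in range(S) for a in range(A))
             for t in range(S)]
    return d

pi1 = [rand_dist(A) for _ in range(S)]
pi2 = [rand_dist(A) for _ in range(S)]
lhs = sum(abs(x - y) for x, y in zip(occupancy(pi1), occupancy(pi2)))
rhs = gamma / (1 - gamma) * max(
    sum(abs(a - b) for a, b in zip(pi1[s], pi2[s])) for s in range(S))
assert lhs <= rhs + 1e-8   # Lipschitz bound (36)
print(f"{lhs:.4f} <= {rhs:.4f}")
```

The check for eq. (37) is analogous, perturbing `p` instead of the policy.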
Lemma 6. The function $J_{\rho, \tau}(\pi, p)$ defined by eq. (3) has the following Lipschitz properties for any $\pi, \pi' \in \Pi, p, p' \in \mathcal{P}$ .
$$
\left| J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - J _ {\rho , \tau} (\pi , p) \right| \leq L _ {\pi} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \tag {39}
$$
$$
\left| J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \right| \leq L _ {p} \| p ^ {\prime} - p \| \tag {40}
$$
$$
\left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \right\| \leq \ell_ {\pi} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \tag {41}
$$
$$
\left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - \nabla_ {p} J _ {\rho , \tau} \left(\pi , p\right) \right\| \leq \ell_ {p} \| p ^ {\prime} - p \|, \tag {42}
$$
where $L_{\pi} \coloneqq \frac{\sqrt{|\mathcal{A}|}(2 - \gamma + \gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ , $L_{p} \coloneqq \frac{\sqrt{|\mathcal{S}|}(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ , $\ell_{\pi} \coloneqq \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ and $\ell_{p} \coloneqq \frac{2\gamma|\mathcal{S}|(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ .
Proof. First, we prove eq. (39).
$$
\begin{array}{l} \left| J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - J _ {\rho , \tau} (\pi , p) \right| \\ = \frac {1}{1 - \gamma} \Big | \sum_ {s, a, s ^ {\prime}} \Big (d _ {\rho} ^ {\pi^ {\prime}, p} (s) \pi^ {\prime} (a | s) p \left(s ^ {\prime} \mid s, a\right) [ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) ] - d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) ] \Big) \Big | \\ \end{array}
$$
$$
\begin{array}{l} \leq \frac {1}{1 - \gamma} \sum_ {s, a, s ^ {\prime}} \left| d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) \right| \pi^ {\prime} (a | s) p \left(s ^ {\prime} | s, a\right) \big [ | c (s, a, s ^ {\prime}) | + \tau | \ln \pi^ {\prime} (a | s) | \big ] \\ \quad + \frac {1}{1 - \gamma} \sum_ {s, a, s ^ {\prime}} d _ {\rho} ^ {\pi , p} (s) p (s ^ {\prime} | s, a) \big (| \pi^ {\prime} (a | s) - \pi (a | s) | \, | c (s, a, s ^ {\prime}) | + \tau | \pi^ {\prime} (a | s) \ln \pi^ {\prime} (a | s) - \pi (a | s) \ln \pi (a | s) | \big) \\ \stackrel {(i)} {\leq} \frac {1}{1 - \gamma} \sum_ {s} | d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) | - \frac {\tau}{1 - \gamma} \sum_ {s} | d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) | \sum_ {a} \pi^ {\prime} (a | s) \ln \pi^ {\prime} (a | s) \\ \quad + \frac {1 + \tau}{1 - \gamma} \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) | \ln \pi^ {\prime} (a | s) - \ln \pi (a | s) | \\ \stackrel {(i i)} {\leq} \frac {\gamma (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \max _ {s} \| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \| _ {1} + \frac {2}{1 - \gamma} \sum_ {s} d _ {\rho} ^ {\pi , p} (s) \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} \\ \stackrel {(i i i)} {\leq} \frac {2 - \gamma + \gamma \tau \ln | \mathcal {A} |}{(1 - \gamma) ^ {2}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} \\ \leq \frac {\sqrt {| \mathcal {A} |} (2 - \gamma + \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \|, \\ \end{array}
$$
where (i) uses $c(s,a,s') \in [0,1]$ and Lemma 16, (ii) uses eq. (36), $\tau \in [0,1]$ and $-\sum_{a} \pi'(a|s) \ln \pi'(a|s) \in [0, \ln |\mathcal{A}|]$ , and (iii) uses eq. (67).
Then eq. (40) can be proved as follows.
$$
\begin{array}{l} \left| J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \right| \\ \stackrel {(i)} {=} \left| \int_ {0} ^ {1} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {p} J _ {\rho , \tau} \left(\pi , p _ {u}\right) d u \right| \\ \stackrel {(i i)} {\leq} \frac {1}{1 - \gamma} \Big | \int_ {0} ^ {1} \sum_ {s, a, s ^ {\prime}} [ p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \big [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p _ {u}; s ^ {\prime}) \big ] d u \Big | \\ \stackrel {(i i i)} {\leq} \frac {1}{1 - \gamma} \int_ {0} ^ {1} \sum_ {s, a, s ^ {\prime}} | p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) | d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \left[ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi (a | s) \right] d u \\ \leq \frac {1}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \int_ {0} ^ {1} \sum_ {s, a} d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \left[ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi (a | s) \right] d u \\ \stackrel {(i v)} {\leq} \frac {1}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \left[ \frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} + \tau \ln | \mathcal {A} | \right] \\ \leq \frac {\sqrt {| \mathcal {S} |} (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \| p ^ {\prime} - p \|, \\ \end{array}
$$
where (i) denotes $p_u \coloneqq up' + (1 - u)p$ for $u \in [0,1]$ , (ii) uses eq. (9), (iii) uses $c(s,a,s') \in [0,1]$ and eq. (31) which imply that $|c(s,a,s') + \tau \ln \pi(a|s) + \gamma V_{\tau}(\pi,p_u;s')| \leq \frac{\max(1,\gamma\tau\ln|\mathcal{A}|)}{1 - \gamma} - \tau \ln \pi(a|s)$ , and (iv) uses $-\sum_{a}\pi'(a|s)\ln \pi'(a|s) \in [0,\ln |\mathcal{A}|]$ .
Then eq. (41) can be proved as follows.
$$
\begin{array}{l} \left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \right\| \\ \leq \sqrt {| \mathcal {S} |} \sum_ {s, a} \max _ {s ^ {\prime}} \left| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) (s, a, s ^ {\prime}) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) (s, a, s ^ {\prime}) \right| \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i)} {=} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \max _ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi^ {\prime}, p} (s) \pi^ {\prime} (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) ] \right. \\ \left. - d _ {\rho} ^ {\pi , p} (s) \pi (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) ] \right| \\ \end{array}
$$
$$
\begin{array}{l} = \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \max _ {s ^ {\prime}} \left| \left[ d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \pi^ {\prime} (a | s) \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) \right] \right. \\ + d _ {\rho} ^ {\pi , p} (s) [ \pi^ {\prime} (a | s) - \pi (a | s) ] [ c (s, a, s ^ {\prime}) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) ] + \gamma d _ {\rho} ^ {\pi , p} (s) \pi (a | s) [ V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) - V _ {\tau} (\pi , p; s ^ {\prime}) ] \\ + \tau d _ {\rho} ^ {\pi , p} (s) [ \pi^ {\prime} (a | s) \ln \pi^ {\prime} (a | s) - \pi (a | s) \ln \pi (a | s) ] \Big | \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \left[ | d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) | \pi^ {\prime} (a | s) \left(\frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi^ {\prime} (a | s)\right) \right. \\ + \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} d _ {\rho} ^ {\pi , p} (s) | \ln \pi^ {\prime} (a | s) - \ln \pi (a | s) | + \gamma L _ {\pi} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \\ \left. + \tau d _ {\rho} ^ {\pi , p} (s) | \ln \pi^ {\prime} (a | s) - \ln \pi (a | s) | \right] \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i i)} {\leq} \frac {\gamma \sqrt {| \mathcal {S} |}}{(1 - \gamma) ^ {2}} \max _ {s} \| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \| _ {1} \Big (\frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \tau \ln | \mathcal {A} | \Big) \\ + \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \Big [ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} + \frac {\sqrt {| \mathcal {A} |} (2 \gamma - \gamma^ {2} + \gamma^ {2} \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} \\ \left. + \tau \max _ {s} \left\| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \right\| _ {1} \right] \\ \end{array}
$$
$$
\stackrel {(i v)} {\leq} \frac {\sqrt {| \mathcal {S} | | \mathcal {A} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \|, \tag {43}
$$
where (i) uses eq. (9), (ii) uses $c(s,a,s') \in [0,1]$ , eq. (31), Lemma 16 and $|V_{\tau}(\pi',p;s') - V_{\tau}(\pi,p;s')| \leq L_{\pi}\max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|$ (eq. (39) when $\rho(s) = I\{s = s'\}$ ), (iii) uses eq. (36), $L_{\pi} \coloneqq \frac{\sqrt{|\mathcal{A}|}(2 - \gamma + \gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ and $-\sum_{a}\pi'(a|s)\ln\pi'(a|s) \in [0,\ln|\mathcal{A}|]$ , and (iv) uses $\gamma, \tau \in [0,1]$ and $\max_s\|\pi'(\cdot|s) - \pi(\cdot|s)\|_1 \leq \max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|_1 \leq \sqrt{|\mathcal{A}|}\max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|$ (the first $\leq$ comes from eq. (67)).
Then, eq. (42) can be proved as follows.
$$
\begin{array}{l} \left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \right\| \\ \leq \sqrt {| \mathcal {S} |} \sum_ {s, a} \max _ {s ^ {\prime}} \left| \nabla_ {p} J _ {\rho , \tau} (\pi , p ^ {\prime}) (s, a, s ^ {\prime}) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) (s, a, s ^ {\prime}) \right| \\ \stackrel {(i)} {=} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \max _ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p ^ {\prime}; s ^ {\prime}) ] - d _ {\rho} ^ {\pi , p} (s) \pi (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) ] \right| \\ \leq \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \pi (a | s) \max _ {s ^ {\prime}} \Big | \gamma d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \left[ V _ {\tau} \left(\pi , p ^ {\prime}; s ^ {\prime}\right) - V _ {\tau} \left(\pi , p; s ^ {\prime}\right) \right] + \left[ d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) \right] \Big | \\ \stackrel {(i i)} {\leq} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \pi (a | s) \Big [ \gamma L _ {p} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \| p ^ {\prime} - p \| + \Big (\frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi (a | s) \Big) | d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) | \Big ] \\ \stackrel {(i i i)} {\leq} \frac {\gamma L _ {p} \sqrt {| \mathcal {S} |}}{1 - \gamma} \| p ^ {\prime} - p \| + \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \Big (\frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} + \tau \ln | \mathcal {A} | \Big) \frac {\gamma}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \\ \stackrel {(i v)} {\leq} \frac {\gamma | \mathcal {S} | (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \| p ^ {\prime} - p \| + \frac {\gamma | \mathcal {S} | (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \| p ^ {\prime} - p \| \\ = \frac {2 \gamma | \mathcal {S} | (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \| p ^ {\prime} - p \|, \\ \end{array}
$$
where (i) uses eq. (9), (ii) uses $c(s,a,s') \in [0,1]$ , eq. (31) and $|V_{\tau}(\pi ,p';s') - V_{\tau}(\pi ,p;s')|\leq L_p\| p' - p\|$ (eq. (40) when $\rho(s) = I\{s = s'\}$ ), (iii) uses $-\sum_{a} \pi(a|s) \ln \pi(a|s) \in [0, \ln |\mathcal{A}|]$ and eq. (37), and (iv) uses $L_p := \frac{\sqrt{|\mathcal{S}|}(1 + \tau \ln |\mathcal{A}|)}{(1 - \gamma)^2}$ .
Lemma 7. $F_{\rho ,\tau}(p)\coloneqq \min_{\pi \in \Pi}J_{\rho ,\tau}(\pi ,p)$ is differentiable with $\nabla F_{\rho ,\tau}(p) = \nabla_p J_{\rho ,\tau}(\pi_p,p)$ .
Proof. Note that $p \in \mathcal{P}$ where $\mathcal{P}$ is a subset of the Banach space $(\Delta^{\mathcal{S}})^{\mathcal{S} \times \mathcal{A}}$ . Also, we have proved in Lemma 6 that $J_{\rho, \tau}(\pi, p)$ is differentiable and $\nabla_p J_{\rho, \tau}(\pi, p)$ is Lipschitz continuous. Hence, the conditions of Danskin's theorem (Bernhard and Rapaport, 1995) hold, and applying it yields this lemma.
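The Danskin argument can be illustrated on a toy objective (the quadratic $f$ below is our own example, not the robust MDP objective): the derivative of $F(y) = \min_x f(x,y)$ equals the partial derivative of $f$ in $y$ evaluated at the minimizer $x^*(y)$ :

```python
# Finite-difference illustration of Danskin's theorem on a toy function
# (our own choice): F(y) = min_x f(x, y) satisfies F'(y) = d/dy f(x*(y), y).
def f(x, y):
    return (x - y) ** 2 + 0.5 * x ** 2 + y  # strongly convex in x

def argmin_x(y):
    # d/dx f = 2(x - y) + x = 0  =>  x*(y) = 2y/3
    return 2.0 * y / 3.0

def F(y):
    return f(argmin_x(y), y)

y0, h = 1.3, 1e-6
fd = (F(y0 + h) - F(y0 - h)) / (2 * h)       # numerical F'(y0)
x_star = argmin_x(y0)
danskin = -2 * (x_star - y0) + 1             # partial_y f at (x*(y0), y0)
print(fd, danskin)
assert abs(fd - danskin) < 1e-4
```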
Lemma 8. The stochastic gradient $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)$ obtained from Algorithm 1 has the following estimation error
$$
\left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) - \nabla F _ {\rho , \tau} \left(p _ {t}\right) \right\| \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2} \tag {44}
$$
Proof. Based on Lemma 7, $\nabla F_{\rho,\tau}(p_t) = \nabla_p J_{\rho,\tau}(\pi_t^*, p_t)$ where $\pi_t^* := \arg \min_{\pi \in \Pi} J_{\rho,\tau}(\pi, p_t)$ . Hence, eq. (44) can be proved as follows.
$$
\begin{array}{l} \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t} ^ {*}, p _ {t}) \| \leq \| \nabla_ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t} ^ {*}, p _ {t}) \| + \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) \| \\ \stackrel {(i)} {\leq} \ell_ {\pi} \max _ {s} \| \ln \pi_ {t} ^ {*} (\cdot | s) - \ln \pi_ {t} (\cdot | s) \| + \epsilon_ {2} \\ \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}, \\ \end{array}
$$
where (i) uses eq. (41).
# D. Convergence Results of the Policy Updates
Lemma 9 (Convergence of policy updates for small state spaces). For the policy optimization problem $\min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p_t)$ , perform the NPG step (7) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ . Suppose the $Q$ function is approximated with error $\epsilon_1$ , i.e., $\| \widehat{Q}_{t,k'} - Q_{\tau}(\pi_{t,k'}, p_t) \|_{\infty} \leq \epsilon_1$ for all $k' = 0, 1, \ldots, k-1$ . Then the policy $\pi_{t,k}$ satisfies the following properties.
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {t, k}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {k} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {45}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t, k} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \right\| _ {\infty} \leq \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}, \tag {46}
$$
$$
\ln \pi_ {t, k} \geq \ln \pi_ {\min } := - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {47}
$$
Proof. Note that the NPG step (7) can be rewritten into the following form, as used in (Cen et al., 2022).
$$
\pi_ {t, k + 1} (\cdot | s) \propto \pi_ {t, k} (\cdot | s) ^ {1 - \frac {\eta \tau}{1 - \gamma}} \exp \left[ - \frac {\eta \left(\widehat {Q} _ {t , k} (s , \cdot) - \tau \ln \pi_ {t , k} (\cdot | s)\right)}{1 - \gamma} \right],
$$
where $\widehat{Q}_{t,k}(s,a) - \tau \ln \pi_{t,k}(a|s)\approx \widetilde{Q}_{\tau}(\pi_{t,k},p_t;s,a) = Q_{\tau}(\pi_{t,k},p_t;s,a) - \tau \ln \pi_{t,k}(a|s)$ with $\sup_{s,a}\left|\left[\widehat{Q}_{t,k}(s,a) - \tau \ln \pi_{t,k}(a|s)\right] - \widetilde{Q}_{\tau}(\pi_{t,k},p_t;s,a)\right|\leq \epsilon_1$ . Therefore, based on Theorem 2 of (Cen et al., 2022), we obtain the convergence rates (45) and (46) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ as follows.
$$
\| \widetilde {Q} _ {\tau} (\pi_ {t} ^ {*}, p _ {t}) - \widetilde {Q} _ {\tau} (\pi_ {t, k}, p _ {t}) \| _ {\infty} \leq C _ {1} \gamma^ {k} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}} \stackrel {(i)} {\leq} \frac {\gamma^ {k} (1 + \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}},
$$
$$
\| \pi_ {t} ^ {*} - \pi_ {t, k} \| _ {\infty} \stackrel {(i i)} {\leq} \| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \| _ {\infty} \leq 2 C _ {1} \tau^ {- 1} \gamma^ {k - 1} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}} \stackrel {(i i i)} {\leq} \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}},
$$
where (i) and (iii) use $C_1 \coloneqq \| \widetilde{Q}_{\tau}(\pi_t^*, p_t) - \widetilde{Q}_{\tau}(\pi_{t,0}, p_t) \|_{\infty} \leq \frac{1 + \gamma \tau \ln |\mathcal{A}|}{1 - \gamma}$ ( $\leq$ is based on eq. (32)), and (ii) uses the following inequality for any $u, v \in (0, 1]$ .
$$
| \ln u - \ln v | = \ln \max (u, v) - \ln \min (u, v) = \int_ {\min (u, v)} ^ {\max (u, v)} \frac {d s}{s} \geq \max (u, v) - \min (u, v) = | u - v |. \tag {48}
$$
Note that $\ln \pi_t^* = \ln \pi_{p_t} \geq -\frac{\ln |\mathcal{A}| + 1 / \tau}{1 - \gamma}$ based on eq. (33). This along with eq. (46) implies that for any $k \geq 1$,
$$
\ln \pi_ {t, k} \geq - \frac {\ln | \mathcal {A} | + 1 / \tau}{1 - \gamma} - \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}} \geq - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}.
$$
The above bound also holds for $k = 0$ as $\pi_{t,0}(a|s) = 1 / |\mathcal{A}|$ . This proves eq. (47).
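As a numerical sanity check of Lemma 9 in the exact case ($\epsilon_1 = 0$), the sketch below runs the NPG step (7) with the full stepsize $\eta = \frac{1-\gamma}{\tau}$, which then reduces to $\pi_{t,k+1}(\cdot|s) \propto \exp[-\widetilde{Q}_{\tau}(\pi_{t,k}, p_t; s, \cdot)/\tau]$, on a small randomly generated MDP; the instance sizes, iteration counts, and tolerances are illustrative assumptions rather than quantities from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, tau = 4, 3, 0.9, 0.5           # toy sizes (assumptions)
c = rng.random((nS, nA))                      # expected cost per (s, a), in [0, 1]
P = rng.random((nS, nA, nS))
P /= P.sum(-1, keepdims=True)                 # transition kernel p_t

def softmin_bellman(Q):
    # Bellman operator (72): soft-min over actions with temperature tau
    V = -tau * np.log(np.exp(-Q / tau).sum(-1))
    return c + gamma * P @ V

Qstar = np.zeros((nS, nA))                    # iterate to the fixed point Q~_tau(pi_t^*, p_t)
for _ in range(2000):
    Qstar = softmin_bellman(Qstar)

def eval_Qtilde(pi, iters=800):
    # exact policy evaluation of Q~_tau(pi, p_t) = Q_tau(pi, p_t) - tau * ln(pi)
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        V = (pi * (Q + tau * np.log(pi))).sum(-1)
        Q = c + gamma * P @ V
    return Q

pi = np.full((nS, nA), 1.0 / nA)              # uniform initialization pi_{t,0}
errs = []
for k in range(150):
    Qt = eval_Qtilde(pi)
    errs.append(np.abs(Qt - Qstar).max())
    logits = -Qt / tau                        # NPG step (7) with eta = (1 - gamma) / tau
    pi = np.exp(logits - logits.max(-1, keepdims=True))
    pi /= pi.sum(-1, keepdims=True)
```

Consistent with eq. (45) at $\epsilon_1 = 0$, the recorded errors `errs` decay geometrically toward zero.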
Lemma 10 (Convergence of policy updates for large space). For the policy optimization problem $\min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p_{\xi_t})$, perform the NPG steps (22)-(23) with stepsize $\eta = \frac{1 - \gamma}{\tau}$. Suppose the $Q$ function is approximated with error $\epsilon_1$ , i.e., $\sup_{s, a} |\phi(s, a)^\top w_{t,k'} - Q_\tau(\pi_{t,k'}, p_t; s, a)| \leq \epsilon_1$ for all $k' = 0, 1, \ldots, k-1$ . Then the policy $\pi_{t,k}$ satisfies the following properties.
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {t, k}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {k} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {49}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t, k} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \right\| _ {\infty} \leq \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}, \tag {50}
$$
$$
\ln \pi_ {t, k} \geq \ln \pi_ {\min } := - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {51}
$$
Proof. It suffices to prove that the NPG steps (22)-(23) are equivalent to the NPG step (7) with $\widehat{Q}_{t,k}(s,a) = \phi(s,a)^{\top}w_{t,k}$, so that we can directly apply Lemma 9.
The NPG steps (22)-(23) imply the NPG step (7) as follows.
$$
\begin{array}{l} \pi_ {t, k + 1} (\cdot | s) \propto \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} u _ {t , k + 1}}{1 - \gamma} \right] \\ \propto \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} (u _ {t , k} + w _ {t , k})}{1 - \gamma} \right] \\ = \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} u _ {t , k}}{1 - \gamma} \right] \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} w _ {t , k}}{1 - \gamma} \right] \\ \propto \pi_ {t, k} (\cdot | s) \exp \left[ - \frac {\widehat {Q} _ {t , k} (s , \cdot)}{1 - \gamma} \right]. \\ \end{array}
$$
Conversely, by iterating the NPG step (7) over $k = 0,1,\ldots ,K - 1$ , we obtain that
$$
\pi_ {t, K} (a | s) \propto \exp \left[ - \frac {1}{1 - \gamma} \sum_ {k = 0} ^ {K - 1} \widehat {Q} _ {t, k} (s, a) \right] = \exp \left[ - \frac {1}{1 - \gamma} \phi (s, a) ^ {\top} \sum_ {k = 0} ^ {K - 1} w _ {t, k} \right].
$$
Denote $u_{t,K} \coloneqq \sum_{k=0}^{K-1} w_{t,k}$, which satisfies eq. (23). Then the above update rule becomes eq. (22).
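The two directions of this equivalence can be checked numerically for one fixed state. The sketch below (feature dimension, action count, and random critic weights are made-up illustrative values) verifies that accumulating the weights as in eq. (23) and applying eq. (22) once reproduces $K$ multiplicative steps of (7):

```python
import numpy as np

rng = np.random.default_rng(1)
d, nA, gamma, K = 5, 3, 0.5, 4                 # toy sizes (assumptions)
phi = rng.random((nA, d))                      # rows are phi(s, a) for one fixed state s
ws = rng.standard_normal((K, d))               # critic weights w_{t,0}, ..., w_{t,K-1}

# path A: K multiplicative NPG steps (7), pi <- pi * exp(-phi^T w_k / (1 - gamma))
pi = np.full(nA, 1.0 / nA)
for w in ws:
    pi = pi * np.exp(-(phi @ w) / (1 - gamma))
    pi /= pi.sum()

# path B: accumulate u_{t,K} = sum_k w_{t,k} as in eq. (23), then apply eq. (22) once
u = ws.sum(0)
z = -(phi @ u) / (1 - gamma)
pi_u = np.exp(z - z.max())
pi_u /= pi_u.sum()
```

Both paths produce the same policy because the per-step normalizations cancel after renormalizing once at the end.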
# E. Stochastic Approximation Errors
In this section, we analyze the approximation errors of estimating the $Q$ function and the transition gradient in both Algorithm 2 (for small state space) and Algorithm 3 (for large state space).
Lemma 11 (Approximation error of $Q_{\tau}$ for large space). Fix $p \in \mathcal{P}$ and $\pi \in \Pi$. Suppose that the regularized cost $c(s, a, s') + \tau \ln \pi(a|s)$ is bounded and that $\sup_{s,a} \| \phi(s,a) \| \leq 1$. For any $\delta_1 \in (0,1)$ and $\epsilon_1 > 2\zeta$, where $\zeta := \sup_{\pi \in \Pi, p \in \mathcal{P}, s \in \mathcal{S}, a \in \mathcal{A}, \tau \in [0,1]} |\phi(s,a)^\top w_{\pi,p}^* - Q_\tau(\pi, p; s, a)|$ denotes the linear $Q$ function approximation error, update $w_n \in \mathbb{R}^d$ by applying the TD rule (21) with $T_1 \geq \mathcal{O}(\epsilon_1^{-2})$ iterations and stepsize $\alpha = \mathcal{O}[\ln^{-1}(\epsilon_1^{-1})]$. Then $\overline{w}_{T_1} := \frac{1}{T_1} \sum_{n=1}^{T_1} w_n$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\phi(s,a)^\top \overline{w}_{T_1} - Q_\tau(\pi, p; s, a)| \leq \epsilon_1$ with probability at least $1 - \delta_1$.
Proof. The optimal parameter $w_{\pi, p}^{*}$ for the estimation $Q_{\tau}(\pi, p; s, a) \approx \phi(s, a)^{\top} w$ has the following expression (Li et al., 2023a):
$$
w _ {\pi , p} ^ {*} := \mathbb {E} _ {\pi , p} \left[ \phi (s, a) \left(\phi (s, a) - \gamma \phi \left(s ^ {\prime}, a ^ {\prime}\right)\right) ^ {\top} \right] ^ {- 1} \mathbb {E} _ {\pi , p} \left[ \phi (s, a) \left(c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s)\right) \right], \tag {52}
$$
where the expectation $\mathbb{E}_{\pi,p}$ is taken over $s \sim \mu_{\pi,p}, a \sim \pi(\cdot|s), s' \sim p(\cdot|s,a), a' \sim \pi(\cdot|s')$ where $\mu_{\pi,p}$ is the stationary distribution under policy $\pi$ and transition kernel $p$ .
Based on Theorem 1 of (Li et al., 2023a), $\| \overline{w}_{T_1} - w_{\pi, p}^* \| \leq \mathcal{O}(T_1^{-1/2}) \leq \epsilon_1 / 2$ for $T_1 = \mathcal{O}(\epsilon_1^{-2})$. Therefore,
$$
\begin{array}{l} \max _ {s, a} \left| \phi (s, a) ^ {\top} \bar {w} _ {T _ {1}} - Q _ {\tau} (\pi , p; s, a) \right| \\ \leq \max _ {s, a} \left[ | \phi (s, a) ^ {\top} (\overline {{w}} _ {T _ {1}} - w _ {\pi , p} ^ {*}) | + | \phi (s, a) ^ {\top} w _ {\pi , p} ^ {*} - Q _ {\tau} (\pi , p; s, a) | \right] \\ \stackrel {(i)} {\leq} \left\| \bar {w} _ {T _ {1}} - w _ {\pi , p} ^ {*} \right\| + \zeta \leq \frac {\epsilon_ {1}}{2} + \zeta \stackrel {(i i)} {\leq} \epsilon_ {1}, \\ \end{array}
$$
where (i) uses $\zeta := \sup_{\pi \in \Pi, p \in \mathcal{P}, s \in \mathcal{S}, a \in \mathcal{A}, \tau \in [0,1]} |\phi(s, a)^{\top} w_{\pi, p}^{*} - Q_{\tau}(\pi, p; s, a)|$ together with the assumption that $\sup_{s, a} \| \phi(s, a) \| \leq 1$, and (ii) uses $\epsilon_1 > 2\zeta$.
Lemma 12 (Approximation error of $Q_{\tau}$ for small space). Fix $\pi \in \Pi$ and $p \in \mathcal{P}$. Suppose that the regularized cost $c(s, a, s') + \tau \ln \pi(a|s)$ is bounded. For any $\delta_1 \in (0, 1)$ and $\epsilon_1 > 0$, update $q_n$ by applying the TD rule (15) with $T_1 \geq \mathcal{O}(\epsilon_1^{-2})$ iterations and stepsize $\alpha = \mathcal{O}[\ln^{-1}(\epsilon_1^{-1})]$. Then $\overline{q}_{T_1} := \frac{1}{T_1} \sum_{n=1}^{T_1} q_n$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\overline{q}_{T_1}(s, a) - Q_{\tau}(\pi, p; s, a)| \leq \epsilon_1$ with probability at least $1 - \delta_1$.
Proof. In Lemma 11, let $\phi(s,a) \in \{0,1\}^d$ ( $d = |\mathcal{S}||\mathcal{A}|$ ) be a one-hot vector with the $(s,a)$ -th entry being 1. Then this Lemma becomes a special case of Lemma 11 in the following aspects:
(1) The TD rule (21) becomes the TD rule (15) with $q_{n} = w_{n}$ .
(2) $\overline{q}_{T_1} = \overline{w}_{T_1}$ .
(3) $Q_{\tau}(\pi, p) = w_{\pi, p}^{*}$ and thus $\zeta$ becomes 0.
(4) The condition of Lemma 11 that $\sup_{s,a}\| \phi (s,a)\| \leq 1$ is satisfied.
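To illustrate the tabular special case covered by Lemma 12, the following sketch runs a TD rule of the form used in (15) (a single Markovian sample path, a constant stepsize, and iterate averaging) on a toy MDP and compares the averaged iterate to the regularized $Q_{\tau}$ computed by dynamic programming. All sizes, the stepsize, and the loose tolerance are illustrative assumptions, not the constants of the lemma.

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA, gamma, tau, alpha = 3, 2, 0.8, 0.3, 0.03   # toy sizes (assumptions)
c = rng.random((nS, nA, nS))                        # cost c(s, a, s') in [0, 1]
P = rng.random((nS, nA, nS)); P /= P.sum(-1, keepdims=True)
pi = rng.random((nS, nA)); pi /= pi.sum(-1, keepdims=True)

# ground-truth Q_tau by dynamic programming on the regularized cost c + tau * ln(pi)
Q = np.zeros((nS, nA))
for _ in range(1000):
    V = (pi * Q).sum(-1)
    Q = tau * np.log(pi) + (P * (c + gamma * V)).sum(-1)

# TD rule on one Markovian trajectory, with averaging of the iterates
q = np.zeros((nS, nA)); qbar = np.zeros((nS, nA)); T1 = 80000
s = 0; a = rng.choice(nA, p=pi[s])
for _ in range(T1):
    s2 = rng.choice(nS, p=P[s, a]); a2 = rng.choice(nA, p=pi[s2])
    target = c[s, a, s2] + tau * np.log(pi[s, a]) + gamma * q[s2, a2]
    q[s, a] += alpha * (target - q[s, a])
    qbar += q / T1
    s, a = s2, a2
```

The averaged table `qbar` should land within a small constant of the true $Q_{\tau}$, mirroring the $\epsilon_1$ guarantee of the lemma.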
Lemma 13 (Approximation error of $\nabla_{\xi}J_{\rho,\tau}(\pi,p_{\xi})$ for large space). Fix $\pi \in \Pi$ and $\xi \in \Xi$. Use the linear parameterization $p_{\xi} = \Psi\xi$ and assume it satisfies $\inf_{s,a,s'} p_{\xi}(s'|s,a) > p_{\min}$ for a constant $p_{\min} > 0$. Suppose that the regularized cost $c(s,a,s') + \tau \ln \pi(a|s)$ is bounded, and that the $Q$ function estimation $Q_{\tau}(\pi,p_{\xi};s,a) \approx \phi(s,a)^{\top}\overline{w}_{T_1}$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\phi(s,a)^{\top}\overline{w}_{T_1} - Q_{\tau}(\pi,p_{\xi};s,a)| \leq \epsilon_1$ for $\epsilon_1 > 2\zeta$. Then for any $\delta_2 \in (0,1)$ and $\epsilon_2 \geq \frac{3\gamma\|\Psi\|\epsilon_1}{p_{\min}(1-\gamma)}$, the stochastic transition gradient (19) with $N \geq \mathcal{O}(\epsilon_2^{-2})$ and $H \geq \mathcal{O}[\ln(\epsilon_2^{-1})]$ has approximation error $\|\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi,p_{\xi}) - \nabla_{\xi}J_{\rho,\tau}(\pi,p_{\xi})\| \leq \epsilon_2$ with probability at least $1 - \delta_2$, which requires $NH = \mathcal{O}[\epsilon_2^{-2}\ln(\epsilon_2^{-1})]$ samples.
Proof. The stochastic gradient (19) can be rewritten as $\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi,p_{\xi}) = \frac{1}{N}\sum_{i=1}^{N}g(s_{i,H_i},a_{i,H_i},s_{i,H_i+1},a_{i,H_i+1})$ with
$$
g (s, a, s ^ {\prime}, a ^ {\prime}) := \frac {\psi (s , a , s ^ {\prime})}{(1 - \gamma) p _ {\xi} (s ^ {\prime} | s , a)} [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \phi (s ^ {\prime}, a ^ {\prime}) ^ {\top} \bar {w} _ {T _ {1}} ] \tag {53}
$$
Since $\mathbb{P}(H_i = h) \propto \gamma^h$ for $h = 0, 1, \ldots, H-1$, we have $s_{i,H_i} \sim d_{\rho,H}^{\pi,p_{\xi}}$, where $d_{\rho,H}^{\pi,p}(s) \coloneqq \frac{1 - \gamma}{1 - \gamma^H}\sum_{h = 0}^{H - 1}\gamma^h\mathbb{P}_{\pi,p}(s_h = s|s_0 \sim \rho)$. Therefore,
$$
\mathbb {E} g \left(s _ {i, H _ {i}}, a _ {i, H _ {i}}, s _ {i, H _ {i} + 1}, a _ {i, H _ {i} + 1}\right) = \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}), \tag {54}
$$
where the expectation $\mathbb{E}_{d_{\rho ,H}^{\pi ,p_{\xi}}}$ is taken over $s\sim d_{\rho ,H}^{\pi ,p_{\xi}},a\sim \pi (\cdot |s),s'\sim p_{\xi}(\cdot |s,a),a'\sim \pi (\cdot |s')$.
Note that
$$
\begin{array}{l} \| g (s, a, s ^ {\prime}, a ^ {\prime}) \| \\ \stackrel {(i)} {\leq} \frac {\| \psi (s , a , s ^ {\prime}) \|}{(1 - \gamma) p _ {\min }} \left[ | c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) | + \gamma | \phi (s ^ {\prime}, a ^ {\prime}) ^ {\top} \bar {w} _ {T _ {1}} - Q _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) | + \gamma | \widetilde {Q} _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) | + \gamma | \tau \ln \pi (a ^ {\prime} | s ^ {\prime}) | \right] \\ \stackrel {(i i)} {\leq} \frac {\| \Psi \|}{(1 - \gamma) p _ {\min}} \left[ \mathcal {O} (1) + \gamma \left(\epsilon_ {1} + \frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma}\right) \right] \\ \leq \mathcal {O} (1), \tag {55} \\ \end{array}
$$
where (i) uses $\widetilde{Q}_{\tau}(\pi, p; s, a) = Q_{\tau}(\pi, p; s, a) - \tau \ln \pi(a|s)$ (based on eqs. (5) and (30)) and the assumption that $p_{\xi}(s'|s, a) \geq p_{\min}$ , and (ii) uses eq. (32), $\sup_{s \in S, a \in A} |\phi(s, a)^{\top} \overline{w}_{T_1} - Q_{\tau}(\pi, p_{\xi}; s, a)| \leq \epsilon_1$ , $|c(s, a, s') + \tau \ln \pi(a|s)| \leq \mathcal{O}(1)$ and $|\tau \ln \pi(a'|s')| \leq \mathcal{O}(1)$ (since $|c(s, a, s')| \leq \mathcal{O}(1)$ ).
Therefore, applying Hoeffding's inequality to the i.i.d. variables $\{g(s_{i,H_i},a_{i,H_i},s_{i,H_i + 1},a_{i,H_i + 1})\}_{i = 1}^N$ with bound (55), the following bound holds with probability at least $1 - \delta_2$ .
$$
\left\| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} \left(\pi , p _ {\xi}\right) - \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}) \right\| \leq \mathcal {O} \left[ \frac {1}{\sqrt {N}} \ln \left(\frac {2}{\delta_ {2}}\right) \right] \stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {56}
$$
where (i) holds for $N = \mathcal{O}(\epsilon_2^{-2})$ . Note that the transition gradient (18) can be rewritten as follows.
$$
\nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) = \frac {1}{1 - \gamma} \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} \left[ \frac {\psi (s , a , s ^ {\prime})}{p _ {\xi} \left(s ^ {\prime} \mid s , a\right)} [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a \mid s) + \gamma Q _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) ] \right], \tag {57}
$$
where the expectation $\mathbb{E}_{d_{\rho}^{\pi ,p_{\xi}}}$ is taken over $s\sim d_{\rho}^{\pi ,p_{\xi}},a\sim \pi (\cdot |s),s'\sim p_{\xi}(\cdot |s,a),a'\sim \pi (\cdot |s')$ and we used $\mathbb{E}_{a'\sim \pi (\cdot |s')}[Q_{\tau}(\pi ,p_{\xi};s',a')|s'] = V_{\tau}(\pi ,p_{\xi};s')$ based on eqs. (4) and (5).
$$
\begin{array}{l} \left\| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) - \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) \right\| \\ \leq \| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) - \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}) \| + \| \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}) - \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}) \| \\ + \left\| \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}) - \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) \right\| \\ \end{array}
$$
$$
\stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) \sum_ {s} | d _ {\rho , H} ^ {\pi , p} (s) - d _ {\rho} ^ {\pi , p} (s) | + \frac {\gamma}{1 - \gamma} \left\| \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} \left[ \frac {\psi (s , a , s ^ {\prime})}{p _ {\xi} (s ^ {\prime} | s , a)} \big (\phi (s ^ {\prime}, a ^ {\prime}) ^ {\top} \overline {{w}} _ {T _ {1}} - Q _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) \big) \right] \right\|
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \sum_ {s} \Big | \sum_ {t = 0} ^ {H - 1} \gamma^ {t} \Big (\frac {1}{1 - \gamma^ {H}} - 1 \Big) \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) - \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) \Big | + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ \leq \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \sum_ {s} \left[ \sum_ {t = 0} ^ {H - 1} \frac {\gamma^ {H + t}}{1 - \gamma^ {H}} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) + \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) \right] + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ = \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \left[ \sum_ {t = 0} ^ {H - 1} \frac {\gamma^ {H + t}}{1 - \gamma^ {H}} + \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \right] + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ \leq \frac {\epsilon_ {2}}{3} + \mathcal {O} (\gamma^ {H}) + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \stackrel {(i i i)} {\leq} \epsilon_ {2}, \\ \end{array}
$$
where (i) uses eqs. (53), (55), (56) and (57), (ii) uses $d_{\rho}^{\pi,p}(s) \coloneqq (1 - \gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}_{\pi,p}(s_{t} = s|s_{0} \sim \rho)$ defined in eq. (10), $d_{\rho,H}^{\pi,p}(s) \coloneqq \frac{1 - \gamma}{1 - \gamma^{H}}\sum_{t=0}^{H-1}\gamma^{t}\mathbb{P}_{\pi,p}(s_{t} = s|s_{0} \sim \rho)$ , $\inf_{s,a,s'}p_{\xi}(s'|s,a) > p_{\min}$ and $\sup_{s \in \mathcal{S}, a \in \mathcal{A}}|\phi(s,a)^{\top}\overline{w}_{T_1} - Q_{\tau}(\pi,p_{\xi};s,a)| \leq \epsilon_1$ , and (iii) uses $H = \mathcal{O}[\ln(\epsilon_2^{-1})]$ and $\epsilon_1 \leq \frac{p_{\min}\epsilon_2(1 - \gamma)}{3\gamma\|\Psi\|}$ .
Lemma 14 (Approximation error of $\nabla_{p}J_{\rho,\tau}(\pi,p)$ for small space). Fix $\pi \in \Pi$ and $p \in \mathcal{P}$ . Suppose that the $Q$ function estimation $\overline{q}_{T_1} \approx Q_\tau(\pi,p)$ satisfies $\| \overline{q}_{T_1} - Q_\tau(\pi,p) \|_\infty \leq \epsilon_1$ . Then for any $\delta_2 \in (0,1)$ and $\epsilon_2 \geq \frac{3\gamma\epsilon_1\sqrt{|S|}}{1-\gamma}$ , the stochastic transition gradient (16) with $N \geq \mathcal{O}(\epsilon_2^{-2})$ and $H \geq \mathcal{O}[\ln (\epsilon_2^{-1})]$ has approximation error $\| \widehat{\nabla}_p J_{\rho,\tau}(\pi,p) - \nabla_p J_{\rho,\tau}(\pi,p) \| \leq \epsilon_2$ with probability at least $1-\delta_2$ , which requires $NH = \mathcal{O}(\epsilon_2^{-2}\ln \epsilon_2^{-1})$ samples.
Proof. The proof logic is the same as that of Lemma 13. We rewrite the stochastic gradient (16) as $\widehat{\nabla}_p J_{\rho, \tau}(\pi, p)(s, a, s') = \frac{1}{N} \sum_{i=1}^{N} g(s_{i, H_i}; s, a, s')$ where
$$
g (\widetilde {s}; s, a, s ^ {\prime}) := \frac {\pi (a | s) \mathbb {1} \left\{\widetilde {s} = s \right\}}{1 - \gamma} \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \bar {q} _ {T _ {1}} \left(s ^ {\prime}, a ^ {\prime}\right) \right]. \tag {58}
$$
This function $g$ has the following bound.
$$
\begin{array}{l} \left\| g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \leq \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} | g (\widetilde {s} ; s , a , s ^ {\prime}) | ^ {2}} \\ = \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \sqrt {\sum_ {s ^ {\prime}} \left| c (s , a , s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi (a ^ {\prime} | s ^ {\prime}) [ \bar {q} _ {T _ {1}} (s ^ {\prime} , a ^ {\prime}) - Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) + Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) ] \right| ^ {2}} \\ \stackrel {(i)} {\leq} \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \sqrt {\sum_ {s ^ {\prime}} \left[ 1 - \tau \ln \pi (a | s) + \gamma \epsilon_ {1} + \gamma | V _ {\tau} (\pi , p ; s ^ {\prime}) | \right] ^ {2}} \\ \stackrel {(i i)} {\leq} \sqrt {| \mathcal {S} |} \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \Big [ 1 - \tau \ln \pi (a | s) + \gamma \epsilon_ {1} + \frac {\gamma + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \Big ] \\ \stackrel {(i i i)} {\leq} \frac {\sqrt {| S |}}{1 - \gamma} \left[ 1 + \tau \ln | \mathcal {A} | + \gamma \epsilon_ {1} + \frac {\gamma + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \right] = \mathcal {O} (1), \tag {59} \\ \end{array}
$$
where (i) uses $|c(s, a, s')| \leq 1$ , $|\tau \ln \pi(a|s)| = -\tau \ln \pi(a|s)$ , $\| \overline{q}_{T_1} - Q_{\tau}(\pi, p) \|_{\infty} \leq \epsilon_1$ and $V_{\tau}(\pi, p; s') = \sum_{a'} \pi(a'|s') Q_{\tau}(\pi, p; s', a')$ (based on eqs. (4) and (5)), (ii) uses eq. (31), and (iii) uses $-\sum_{a} \pi(a|s) \ln \pi(a|s) \in [0, \ln |\mathcal{A}|]$ . Hence, applying Hoeffding's inequality to $\widehat{\nabla}_p J_{\rho, \tau}(\pi, p)(s, a, s') = \frac{1}{N} \sum_{i=1}^{N} g(s_{i, H_i}; s, a, s')$ , the following bound holds with probability at least $1 - \delta_2$ .
$$
\left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi , p) - \mathbb {E} _ {\widetilde {s} \sim d _ {\rho , H} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \leq \mathcal {O} \left[ \frac {1}{\sqrt {N}} \ln \left(\frac {2}{\delta_ {2}}\right) \right] \stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {60}
$$
where $d_{\rho ,H}^{\pi ,p}(s)\coloneqq \frac{1 - \gamma}{1 - \gamma^H}\sum_{t = 0}^{H - 1}\gamma^t\mathbb{P}_{\pi ,p}(s_t = s|s_0\sim \rho)$ is the distribution of $s_{i,H_i}$ , and (i) holds for $N = \mathcal{O}(\epsilon_2^{-2})$ . Moreover,
$$
\left\| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho , H} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) - \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \stackrel {(i)} {\leq} \mathcal {O} (1) \sum_ {s} | d _ {\rho , H} ^ {\pi , p} (s) - d _ {\rho} ^ {\pi , p} (s) | \stackrel {(i i)} {\leq} \mathcal {O} (\gamma^ {H}) \stackrel {(i i i)} {\leq} \frac {\epsilon_ {2}}{3} \tag {61}
$$
where (i) uses eq. (59), (ii) follows the proof of Lemma 13, and (iii) uses $H = \mathcal{O}[\ln (\epsilon_2^{-1})]$ .
Note that the transition gradient (9) can be rewritten as
$$
\nabla_ {p} J _ {\rho , \tau} (\pi , p) (s, a, s ^ {\prime}) = \frac {d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) Q _ {\tau} (\pi , p; s ^ {\prime}, a ^ {\prime}) \right] \tag {62}
$$
Therefore,
$$
\begin{array}{l} \| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \| \leq \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s} ; s , a , s ^ {\prime}) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) (s , a , s ^ {\prime}) \right| ^ {2}} \\ \stackrel {(i)} {\leq} \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \frac {\gamma d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \sum_ {a ^ {\prime}} \pi (a ^ {\prime} | s ^ {\prime}) [ \overline {{q}} _ {T _ {1}} (s ^ {\prime} , a ^ {\prime}) - Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) ] \right| ^ {2}} \\ \stackrel {(i i)} {\leq} \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \frac {\gamma \epsilon_ {1} d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \right| ^ {2}} \\ = \sqrt {| \mathcal {S} |} \sum_ {s, a} \frac {\gamma \epsilon_ {1} d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \\ \end{array}
$$
$$
= \frac {\gamma \epsilon_ {1} \sqrt {| S |}}{1 - \gamma} \stackrel {(i i i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {63}
$$
where (i) uses eqs. (58) and (62), (ii) uses $\| \overline{q}_{T_1} - Q_\tau (\pi ,p)\|_\infty \leq \epsilon_1$ , and (iii) uses $\epsilon_{2}\geq \frac{3\gamma\epsilon_{1}\sqrt{|S|}}{1 - \gamma}$ .
As a result, we conclude that $\| \widehat{\nabla}_p J_{\rho ,\tau}(\pi ,p) - \nabla_p J_{\rho ,\tau}(\pi ,p)\| \leq \epsilon_2$ by applying the triangle inequality to eqs. (60), (61) and (63).
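Both Lemma 13 and Lemma 14 pay an $\mathcal{O}(\gamma^H)$ bias for drawing the time index $H_i$ from the truncated geometric distribution $\mathbb{P}(H_i = h) \propto \gamma^h$. The deterministic sketch below (a toy state chain of assumed size, standing in for the chain induced by a fixed policy) computes the truncated visitation distribution $d_{\rho,H}^{\pi,p}$ and the discounted visitation distribution $d_{\rho}^{\pi,p}$ exactly, and checks their total-variation gap against the $2\gamma^H$ bound implied by the proof of Lemma 13:

```python
import numpy as np

rng = np.random.default_rng(3)
nS, gamma, H = 5, 0.9, 30                      # toy sizes (assumptions)
P = rng.random((nS, nS)); P /= P.sum(-1, keepdims=True)  # state chain under a fixed policy
rho = np.full(nS, 1.0 / nS)

# exact discounted visitation d_rho(s) = (1 - gamma) * sum_t gamma^t P(s_t = s)
d = np.zeros(nS); mu = rho.copy()
for t in range(3000):
    d += (1 - gamma) * gamma**t * mu
    mu = mu @ P

# truncated version d_{rho,H}(s) = (1 - gamma) / (1 - gamma^H) * sum_{t < H} gamma^t P(s_t = s)
dH = np.zeros(nS); mu = rho.copy()
for t in range(H):
    dH += (1 - gamma) / (1 - gamma**H) * gamma**t * mu
    mu = mu @ P

tv = np.abs(dH - d).sum()                      # total-variation-style gap, at most 2 * gamma^H
```

The bound $2\gamma^H$ follows by the same triangle-inequality step used between (i) and (ii) in the proof of Lemma 13.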
# F. Supporting Lemmas
Lemma 15. Suppose $\mathcal{P}$ is a convex set. For any $p' \in \mathcal{P}$ , the variable $p_{t+1} = \operatorname{proj}_{\mathcal{P}}(p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t))$ generated from Algorithm 1 satisfies
$$
\langle p ^ {\prime} - p _ {t + 1}, p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \rangle \leq 0. \tag {64}
$$
Similarly, if $\Xi$ is a convex set, then for any $\xi' \in \Xi$, the variable $\xi_{t+1} = \text{proj}_{\Xi}\bigl(\xi_t + \beta \widehat{\nabla}_\xi J_\rho(\pi_t, p_{\xi_t})\bigr)$ generated from Algorithm 3 satisfies
$$
\left\langle \xi^ {\prime} - \xi_ {t + 1}, \xi_ {t} + \beta \widehat {\nabla} _ {\xi} J _ {\rho} \left(\pi_ {t}, p _ {\xi_ {t}}\right) - \xi_ {t + 1} \right\rangle \leq 0. \tag {65}
$$
Proof. We will only prove eq. (64) since eq. (65) can be proved in a similar way.
Define the function $f(u) \coloneqq \| p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t) - [u p' + (1 - u) p_{t+1}] \|^2$ .
Note that $p', p_{t+1} \in \mathcal{P}$ . Hence, for any $u \in [0,1]$ , $up' + (1 - u)p_{t+1} \in \mathcal{P}$ . Since $p_{t+1} = \mathrm{proj}_{\mathcal{P}}\big(p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t)\big)$ , we have
$$
\begin{array}{l} f (u) = \| p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} (\pi_ {t}, p _ {t}) - [ u p ^ {\prime} + (1 - u) p _ {t + 1} ] \| ^ {2} \\ \geq \left\| p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \right\| ^ {2} = f (0). \\ \end{array}
$$
Therefore,
$$
f ^ {\prime} (0) = - 2 \left\langle p ^ {\prime} - p _ {t + 1}, p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \right\rangle \geq 0, \tag {66}
$$
which proves eq. (64).
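Eq. (64) is the standard variational characterization of a Euclidean projection. A quick numerical check, using the box $[0,1]^d$ as a stand-in convex set (the actual set $\mathcal{P}$ and the dimension here are illustrative assumptions), confirms that the inner product is never positive:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 6
proj = lambda y: np.clip(y, 0.0, 1.0)    # Euclidean projection onto the box [0,1]^d

viol = -np.inf
for _ in range(200):
    y = 3 * rng.standard_normal(d)       # plays the role of p_t + beta * gradient
    x = proj(y)                          # plays the role of p_{t+1}
    for _ in range(20):
        z = rng.random(d)                # an arbitrary point p' in the set
        viol = max(viol, np.dot(z - x, y - x))
# by eq. (64), viol should never exceed zero
```

For the box, the inequality even holds coordinatewise, which is why clipping suffices as the projection.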
Lemma 16. Any $x, y \in (0,1]$ satisfy the following inequalities.
$$
| x - y | \leq | \ln x - \ln y | \tag {67}
$$
$$
\left| x \ln x - y \ln y \right| \leq \left| \ln x - \ln y \right| \tag {68}
$$
Proof. Denote $a_1 = \ln x \leq 0$ , $a_2 = \ln y \leq 0$ , $g(a) \coloneqq e^a$ and $h(a) \coloneqq ae^a$ . Then this lemma can be proved as follows
$$
| x - y | = | g (a _ {1}) - g (a _ {2}) | \leq | a _ {1} - a _ {2} | \sup _ {a \leq 0} | g ^ {\prime} (a) | = | a _ {1} - a _ {2} | \sup _ {a \leq 0} e ^ {a} = | \ln x - \ln y |
$$
$$
| x \ln x - y \ln y | = | h (a _ {1}) - h (a _ {2}) | \leq | a _ {1} - a _ {2} | \sup _ {a \leq 0} | h ^ {\prime} (a) | \overset {(i)} {=} | \ln x - \ln y |,
$$
where (i) uses $\sup_{a\leq 0} |h'(a)| = 1$, which will be proved next.
Note that $h^\prime (a) = e^a (a + 1)$ and $h''(a) = e^a (a + 2)$ . Hence, $h^\prime (a)$ is monotonically decreasing in $(-\infty , - 2]$ and increasing in $[-2,0]$ . Since $\lim_{a\to -\infty}h'(a) = 0$ , $h^{\prime}(-2) = -e^{-2}$ and $h^{\prime}(0) = 1$ , we have $\sup_{a\leq 0}|h^{\prime}(a)| = 1$ .
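The two inequalities of Lemma 16 can be spot-checked numerically on random pairs in $(0,1]$ (the sample size and the small lower cutoff are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.uniform(1e-6, 1.0, 100000)
y = rng.uniform(1e-6, 1.0, 100000)

gap = np.abs(np.log(x) - np.log(y))          # common right-hand side |ln x - ln y|
lhs67 = np.abs(x - y)                         # left-hand side of eq. (67)
lhs68 = np.abs(x * np.log(x) - y * np.log(y)) # left-hand side of eq. (68)
```

Both left-hand sides stay below `gap` on every sampled pair, as the lemma guarantees.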
Lemma 17. The diameter of $\mathcal{P}$, defined as $D_{\mathcal{P}}\coloneqq \sup_{p,\widetilde{p}\in \mathcal{P}}\| \widetilde{p} -p\|$, ranges in $[0,\sqrt{2|\mathcal{S}||\mathcal{A}|} ]$.
Proof. For any $p, \widetilde{p} \in \mathcal{P}$ ,
$$
\begin{array}{l} 0 \leq \| \widetilde {p} - p \| ^ {2} \\ = \sum_ {s, a, s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] ^ {2} \\ \leq | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] ^ {2} \\ = | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) ^ {2} + p (s ^ {\prime} | s, a) ^ {2} - 2 p (s ^ {\prime} | s, a) \widetilde {p} (s ^ {\prime} | s, a) ] \\ \stackrel {(i)} {\leq} | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) + p (s ^ {\prime} | s, a) ] = 2 | \mathcal {S} | | \mathcal {A} |, \\ \end{array}
$$
where (i) uses $p(s'|s,a), \widetilde{p}(s'|s,a) \in [0,1]$ .
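A quick numerical check of Lemma 17 on random transition kernels (the sizes and number of trials are arbitrary toy choices):

```python
import numpy as np

rng = np.random.default_rng(6)
nS, nA = 5, 3

def rand_kernel():
    # random kernel: each row p(.|s, a) is a probability distribution over s'
    p = rng.random((nS, nA, nS))
    return p / p.sum(-1, keepdims=True)

bound = np.sqrt(2 * nS * nA)                 # sqrt(2 |S| |A|) from Lemma 17
diam = max(np.linalg.norm(rand_kernel() - rand_kernel()) for _ in range(300))
```

Every sampled pairwise distance stays below the bound, as the lemma requires.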
# G. Proof of Proposition 1
Proposition 1. Under Assumption 1, for any $\epsilon \geq 0$ and $\tau > 0$ , $(\epsilon, \tau)$ -Nash equilibrium exists. If $(\pi, p) \in \Pi \times \mathcal{P}$ is an $(\epsilon, \tau)$ -Nash equilibrium, then $\pi$ is a $\left(2\epsilon + \frac{\tau \ln |\mathcal{A}|}{1 - \gamma}\right)$ -optimal robust policy to the optimization problem (2).
Proof. Proof of $(\epsilon, \tau)$-Nash equilibrium existence:
Fix any $\tau > 0$. Based on (Cen et al., 2022), for any $p \in \mathcal{P}$, there exists a unique optimal policy $\pi_p := \arg \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p)$. Then based on Danskin's theorem (Bernhard and Rapaport, 1995), $F_{\rho, \tau}(p) := J_{\rho, \tau}(\pi_p, p)$ is differentiable with $\nabla F_{\rho, \tau}(p) = \nabla_2 J_{\rho, \tau}(\pi_p, p)$. Such a differentiable function $F_{\rho, \tau}$ attains its maximum on the compact set $\mathcal{P}$, so there exists $p^* \in \arg \max_{p \in \mathcal{P}} F_{\rho, \tau}(p)$.
Note that $J_{\rho, \tau}(\pi_{p^*}, p^*) = \min_{\pi' \in \Pi} J_{\rho, \tau}(\pi', p^*)$ based on the definition of $\pi_p$. Then it suffices to prove that $J_{\rho, \tau}(\pi_{p^*}, p^*) = \max_{p' \in \mathcal{P}} J_{\rho, \tau}(\pi_{p^*}, p')$, which along with $J_{\rho, \tau}(\pi_{p^*}, p^*) = \min_{\pi' \in \Pi} J_{\rho, \tau}(\pi', p^*)$ implies that $(\pi_{p^*}, p^*)$ is a $(0, \tau)$-Nash equilibrium and thus also an $(\epsilon, \tau)$-Nash equilibrium for any $\epsilon \geq 0$.
Note that the proof of Proposition 3 does not rely on the existence of $(\epsilon, \tau)$-Nash equilibrium, so we can apply Proposition 3 and obtain that
$$
0 \leq \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {\prime}\right) - J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {*}\right) \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left(p ^ {\prime} - p ^ {*}\right) ^ {\top} \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {*}\right) \stackrel {(i)} {\leq} 0, \tag {69}
$$
where (i) uses the first-order optimality condition $\max_{p'\in \mathcal{P}}(p' - p^*)^{\top}\nabla F_{\rho ,\tau}(p^{*}) \leq 0$, which holds since $\nabla F_{\rho ,\tau}(p^{*}) = \nabla_{2}J_{\rho ,\tau}(\pi_{p^{*}},p^{*})$ and $p^*\in \arg \max_{p\in \mathcal{P}}F_{\rho ,\tau}(p)$. Hence, $J_{\rho ,\tau}(\pi_{p^*},p^*) = \max_{p'\in \mathcal{P}}J_{\rho ,\tau}(\pi_{p^*},p')$.
Proof of optimal robust policy: Note that $(\pi, p)$ satisfies the following $(\epsilon, \tau)$-Nash equilibrium conditions:
$$
J _ {\rho , \tau} (\pi , p) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) \leq \epsilon , \quad \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \leq \epsilon . \tag {70}
$$
Therefore,
$$
\begin{array}{l} \Phi_ {\rho} (\pi) - \Phi_ {\rho} (\pi^ {*}) \\ = \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} \max _ {p ^ {\prime \prime} \in \mathcal {P}} J _ {\rho} (\pi^ {\prime}, p ^ {\prime \prime}) \\ \leq \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho} (\pi^ {\prime}, p) \\ \stackrel {(i)} {\leq} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} (\pi^ {\prime}, p) + \frac {\tau \ln | \mathcal {A} |}{1 - \gamma} \\ \stackrel {(i i)} {\leq} 2 \epsilon + \frac {\tau \ln | \mathcal {A} |}{1 - \gamma} \tag {71} \\ \end{array}
$$
where (i) uses $J_{\rho}(\pi, p) = J_{\rho, \tau}(\pi, p) + \tau \mathcal{H}_{\rho, p}(\pi)$ with entropy regularizer $\mathcal{H}_{\rho, p}(\pi) \coloneqq -\mathbb{E}_{\pi, p}[\sum_{t=0}^{\infty} \gamma^{t} \ln \pi (a_{t} | s_{t}) | s_{0} \sim \rho] \in [0, \frac{\ln |\mathcal{A}|}{1 - \gamma}]$, and (ii) uses the conditions (70). The above inequality means $\pi$ is a $(2\epsilon + \frac{\tau \ln |\mathcal{A}|}{1 - \gamma})$-optimal robust policy by Definition 1.
# H. Proof of Proposition 2
Proposition 2. Under Assumption 1, $F_{\rho ,\tau}(p)$ is Lipschitz smooth with parameter $\ell_F\coloneqq \frac{8|\mathcal{S}||\mathcal{A}|(1 + \gamma\tau\ln|\mathcal{A}|)^2}{\tau(1 - \gamma)^5}$, i.e., for any $p,p^{\prime}\in \mathcal{P}$,
$$
\left\| \nabla F _ {\rho , \tau} \left(p ^ {\prime}\right) - \nabla F _ {\rho , \tau} (p) \right\| \leq \ell_ {F} \| p ^ {\prime} - p \|. \tag {11}
$$
Proof. Based on Lemma 2 of (Cen et al., 2022), $\widetilde{Q}_{\tau}(\pi_p,p)$ defined by eq. (30) is the unique fixed point of the following Bellman operator $T_{p}$ .
$$
T _ {p} Q (s, a) := \min _ {\pi \in \Pi} \sum_ {s ^ {\prime}} p \left(s ^ {\prime} \mid s, a\right) \left(c (s, a, s ^ {\prime}) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \left[ Q \left(s ^ {\prime}, a ^ {\prime}\right) + \tau \ln \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \right]\right) \tag {72}
$$
The Bellman operator $T_{p}$ above has the following two properties.
1. Based on Lemma 2 of (Cen et al., 2022), $T_{p}$ is a $\gamma$ -contraction under $\ell_{\infty}$ -norm, i.e.,
$$
\left\| T _ {p} Q ^ {\prime} - T _ {p} Q \right\| _ {\infty} \leq \gamma \left\| Q ^ {\prime} - Q \right\| _ {\infty}; \forall p \in \mathcal {P}, Q, Q ^ {\prime} \in \mathbb {R} ^ {| \mathcal {S} | | \mathcal {A} |}. \tag {73}
$$
2. For any $p, p' \in \mathcal{P}$ , $\pi \in \Pi$ , $s \in S$ , $a \in \mathcal{A}$ and $Q \in \mathbb{R}^{|S||\mathcal{A}|}$ , we have
$$
\begin{array}{l} \left| \sum_ {s ^ {\prime}} \left[ p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - p \left(s ^ {\prime} \mid s, a\right) \right] \left(c (s, a, s ^ {\prime}) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) [ Q \left(s ^ {\prime}, a ^ {\prime}\right) + \tau \ln \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) ]\right) \right| \\ \stackrel {(i)} {\leq} [ 1 + \gamma (\| Q \| _ {\infty} + \tau \ln | \mathcal {A} |) ] \sum_ {s ^ {\prime}} | p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) |, \\ \end{array}
$$
where (i) uses $c(s, a, s') \in [0, 1]$ and $\sum_{a'} \pi(a'|s') \ln \pi(a'|s') \in [-\ln |\mathcal{A}|, 0]$ . Hence,
$$
\left\| T _ {p ^ {\prime}} Q - T _ {p} Q \right\| _ {\infty} \leq \left[ 1 + \gamma \left(\| Q \| _ {\infty} + \tau \ln | \mathcal {A} |\right) \right] \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1}. \tag {74}
$$
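The two properties above can be sanity-checked numerically. The sketch below (an illustration, not part of the proof) instantiates $T_p$ on a small random MDP with arbitrary sizes and seed, using the closed-form inner minimum $\min_{\pi}\sum_{a'}\pi(a')[q(a') + \tau \ln \pi(a')] = -\tau \ln \sum_{a'} e^{-q(a')/\tau}$ of the entropy-regularized Bellman operator in eq. (72), and verifies eqs. (73) and (74) on random inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, tau = 4, 3, 0.9, 0.5

# Random transition kernel p(s'|s,a) and cost c(s,a,s') in [0,1].
p = rng.random((nS, nA, nS)); p /= p.sum(axis=2, keepdims=True)
c = rng.random((nS, nA, nS))

def soft_min(q):
    # min_pi sum_a pi(a)[q(a) + tau*ln pi(a)] = -tau*ln sum_a exp(-q(a)/tau),
    # computed with a max-shift for numerical stability.
    m = q.min()
    return m - tau * np.log(np.sum(np.exp(-(q - m) / tau)))

def bellman(p, Q):
    # (T_p Q)(s,a) = sum_{s'} p(s'|s,a) [ c(s,a,s') + gamma * softmin_{a'} Q(s',.) ]
    v = np.array([soft_min(Q[s]) for s in range(nS)])
    return np.einsum('ijk,ijk->ij', p, c) + gamma * p @ v

# Property 1: gamma-contraction in the infinity norm, eq. (73).
Q1, Q2 = rng.normal(size=(nS, nA)), rng.normal(size=(nS, nA))
lhs = np.abs(bellman(p, Q1) - bellman(p, Q2)).max()
assert lhs <= gamma * np.abs(Q1 - Q2).max() + 1e-9

# Property 2: sensitivity to the kernel, eq. (74).
p2 = rng.random((nS, nA, nS)); p2 /= p2.sum(axis=2, keepdims=True)
diff = np.abs(bellman(p2, Q1) - bellman(p, Q1)).max()
bound = (1 + gamma * (np.abs(Q1).max() + tau * np.log(nA))) \
        * np.abs(p2 - p).sum(axis=2).max()
assert diff <= bound + 1e-9
```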
Based on the above two properties, for any $p,p^{\prime}\in \mathcal{P}$ , we have
$$
\begin{array}{l} \left\| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \right\| _ {\infty} \\ = \| T _ {p ^ {\prime}} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \leq \| T _ {p ^ {\prime}} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) \| _ {\infty} + \| T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i)} {\leq} \left[ 1 + \gamma (\| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) \| _ {\infty} + \tau \ln | \mathcal {A} |) \right] \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i i)} {\leq} \left(1 + \frac {\gamma \max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \gamma \tau \ln | \mathcal {A} |\right) \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \leq \frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \| _ {\infty}, \\ \end{array}
$$
where (i) uses eqs. (73)-(74) and (ii) uses eq. (32). Rearranging the above inequality yields that
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \right\| _ {\infty} \leq \frac {1 + \gamma \tau \ln | \mathcal {A} |}{(1 - \gamma) ^ {2}} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1}. \tag {75}
$$
Therefore,
$$
\begin{array}{l} | \ln \pi_ {p ^ {\prime}} (a | s) - \ln \pi_ {p} (a | s) | \\ \stackrel {(i)} {=} \frac {1}{\tau} | \widetilde {Q} _ {\tau} (\pi_ {p}, p; s, a) - \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}; s, a) | + \left| \ln \frac {\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau ]}{\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}} , p ^ {\prime} ; s , a ^ {\prime}) / \tau ]} \right| \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {1}{\tau} \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} + \left| \ln \frac {\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau ]}{\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau - \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}} , p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p} , p) \| _ {\infty} / \tau ]} \right| \\ = \frac {2}{\tau} \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i i i)} {\leq} \frac {2 + 2 \gamma \tau \ln | \mathcal {A} |}{\tau (1 - \gamma) ^ {2}} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \\ \leq \frac {2 \sqrt {| \mathcal {S} |} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {2}} \| p ^ {\prime} - p \|, \tag {76} \\ \end{array}
$$
where (i) uses eq. (34), (ii) uses $\widetilde{Q}_{\tau}(\pi_{p'}, p'; s, a') \leq \widetilde{Q}_{\tau}(\pi_p, p; s, a') + \| \widetilde{Q}_{\tau}(\pi_{p'}, p') - \widetilde{Q}_{\tau}(\pi_p, p) \|_{\infty}$ and (iii) uses eq. (75). Therefore, eq. (11) can be proved as follows.
$$
\begin{array}{l} \| \nabla F _ {\rho , \tau} (p ^ {\prime}) - \nabla F _ {\rho , \tau} (p) \| \stackrel {(i)} {=} \| \nabla_ {2} J _ {\rho , \tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \nabla_ {2} J _ {\rho , \tau} (\pi_ {p}, p) \| \\ \leq \left\| \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p ^ {\prime}\right) \right\| + \left\| \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p ^ {\prime}\right) - \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p\right) \right\| \\ \stackrel {(i i)} {\leq} \ell_ {\pi} \max _ {s} \| \ln \pi_ {p ^ {\prime}} (\cdot | s) - \ln \pi_ {p} (\cdot | s) \| + \ell_ {p} \| p ^ {\prime} - p \| \\ \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \max _ {s, a} | \ln \pi_ {p ^ {\prime}} (a | s) - \ln \pi_ {p} (a | s) | + \ell_ {p} \| p ^ {\prime} - p \| \\ \stackrel {(i i i)} {\leq} \left(\frac {| \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \frac {2 \sqrt {| \mathcal {S} |} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {2}} + \frac {2 \gamma | \mathcal {S} | (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}}\right) \| p ^ {\prime} - p \| \\ \leq \frac {8 | \mathcal {S} | | \mathcal {A} | (1 + \gamma \tau \ln | \mathcal {A} |) ^ {2}}{\tau (1 - \gamma) ^ {5}} \| p ^ {\prime} - p \|, \\ \end{array}
$$
where (i) uses $\nabla F_{\rho,\tau}(p) = \nabla_2 J_{\rho,\tau}(\pi_p, p)$ based on Danskin's theorem (Bernhard and Rapaport, 1995), (ii) uses eqs. (41) and (42), and (iii) uses $\ell_\pi := \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ , $\ell_p := \frac{2\gamma|\mathcal{S}|(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ and eq. (76).
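The fixed-point perturbation bound (75) can also be checked numerically. The sketch below (an illustration under arbitrary small sizes, not part of the proof) computes $\widetilde{Q}_{\tau}(\pi_p, p)$ and $\widetilde{Q}_{\tau}(\pi_{p'}, p')$ by fixed-point iteration of the operator in eq. (72), for a kernel $p$ and a small perturbation $p'$, and compares their gap with the right-hand side of eq. (75).

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma, tau = 4, 3, 0.9, 0.5
c = rng.random((nS, nA, nS))  # costs in [0, 1]

def soft_min(q):
    # Closed form of min_pi sum_a pi(a)[q(a) + tau*ln pi(a)].
    m = q.min()
    return m - tau * np.log(np.sum(np.exp(-(q - m) / tau)))

def fixed_point(p, iters=2000):
    # Iterate T_p from Q = 0; gamma-contraction gives geometric convergence.
    Q = np.zeros((nS, nA))
    for _ in range(iters):
        v = np.array([soft_min(Q[s]) for s in range(nS)])
        Q = np.einsum('ijk,ijk->ij', p, c) + gamma * p @ v
    return Q

p = rng.random((nS, nA, nS)); p /= p.sum(axis=2, keepdims=True)
# Small perturbation of p, renormalized so each row stays a distribution.
p2 = p + 0.01 * rng.random((nS, nA, nS))
p2 /= p2.sum(axis=2, keepdims=True)

Q, Q2 = fixed_point(p), fixed_point(p2)
l1 = np.abs(p2 - p).sum(axis=2).max()  # max_{s,a} ||p'(.|s,a) - p(.|s,a)||_1
bound = (1 + gamma * tau * np.log(nA)) / (1 - gamma) ** 2 * l1  # eq. (75)
assert np.abs(Q2 - Q).max() <= bound + 1e-9
```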
# I. Proof of Proposition 3
Proposition 3 (Gradient dominance). Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ and $p \in \mathcal{P}$ ,
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p) \\ \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p). \tag {12} \\ \end{array}
$$
Proof. Based on Lemma 4.3 of (Wang et al., 2023), the gradient dominance property (12) holds for $J_{\rho}$ , i.e.,
$$
\max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - J _ {\rho} (\pi , p) \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} (p ^ {\prime} - p) ^ {\top} \nabla_ {p} J _ {\rho} (\pi , p).
$$
Note that for any fixed policy $\pi$, the function $J_{\rho}(\pi, p) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} c_{t} \mid s_{0} \sim \rho\right]$ becomes $J_{\rho, \tau}(\pi, p) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} [c_{t} + \tau \ln \pi(a_{t} | s_{t})] \mid s_{0} \sim \rho\right]$ after replacing the cost $c_{t} = c(s_{t}, a_{t}, s_{t+1})$ with $c_{t} + \tau \ln \pi(a_{t} | s_{t})$. Therefore, the gradient dominance property (12) also holds for $J_{\rho, \tau}$.
If $p \in \mathcal{P}$ satisfies $\| \nabla_p F_{\rho, \tau}(p) \| \leq \frac{\epsilon(1 - \gamma)}{DD_{\mathcal{P}}}$ , then we prove below that $(\pi_p, p)$ is an $(\epsilon, \tau)$ -Nash equilibrium.
$$
J _ {\rho , \tau} (\pi_ {p}, p) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} (\pi^ {\prime}, p) = 0 \leq \epsilon ,
$$
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {p}, p ^ {\prime}) - J _ {\rho , \tau} (\pi_ {p}, p) \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} (p ^ {\prime} - p) ^ {\top} \nabla_ {2} J _ {\rho , \tau} (\pi_ {p}, p) \\ \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \| p ^ {\prime} - p \| \| \nabla F _ {\rho , \tau} (p) \| \leq \frac {D D _ {\mathcal {P}}}{1 - \gamma} \frac {\epsilon (1 - \gamma)}{D D _ {\mathcal {P}}} = \epsilon . \\ \end{array}
$$
# J. Proof of Proposition 4
Proposition 4. Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ , $\xi \in \Xi$ .
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \leq \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} \left(\xi^ {\prime} - \xi\right) ^ {\top} \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}). \tag {24} \\ \end{array}
$$
Proof. Proposition 4 can be proved as follows.
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p _ {\xi}) \stackrel {(i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} (p ^ {\prime} - p _ {\xi}) ^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \stackrel {(i i)} {=} \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} (p _ {\xi^ {\prime}} - p _ {\xi}) ^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \stackrel {(i i i)} {=} \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} (\xi^ {\prime} - \xi) ^ {\top} \Psi^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \stackrel {(i v)} {=} \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} (\xi^ {\prime} - \xi) ^ {\top} \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}), \tag {77} \\ \end{array}
$$
where (i) uses Proposition 3, (ii) uses $\mathcal{P} \coloneqq \{p_{\xi} : \xi \in \Xi\}$ and (iii)-(iv) use $p_{\xi} = \Psi \xi$ .
# K. Proof of Theorem 1
Theorem 1. Implement Algorithm 1 with $\beta \leq \frac{1}{2\ell_F}$ , $\eta = \frac{1 - \gamma}{\tau}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ satisfies the following rates under Assumption 1.
$$
J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right), \tag {13}
$$
$$
\begin{array}{l} \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \\ \leq \mathcal {O} \left[ \left(1 + \tau \epsilon_ {2}\right) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right]. \tag {14} \\ \end{array}
$$
Proof. Based on Lemma 9, the output $\pi_t \coloneqq \pi_{t,T'}$ of the NPG step (7) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ has the following convergence rates.
$$
\left\| Q _ {\tau} \left(\pi_ {t}, p _ {t}\right) - Q _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {T ^ {\prime} + 1} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {78}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t} \right\| _ {\infty} \leq \frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {79}
$$
Hence, the convergence rate (13) can be proved as follows.
$$
\begin{array}{l} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) = J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - J _ {\rho , \tau} (\pi_ {\widetilde {T}} ^ {*}, p _ {\widetilde {T}}) \\ \leq \mathbb {E} _ {s \sim \rho} [ V _ {\tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}; s) - V _ {\tau} (\pi_ {\widetilde {T}} ^ {*}, p _ {\widetilde {T}}; s) ] \\ \stackrel {(i)} {=} \mathbb {E} _ {s \sim \rho} \sum_ {a} \left[ \pi_ {\widetilde {T}} (a | s) \left[ Q _ {\tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}; s, a\right) - \tau \ln \pi_ {\widetilde {T}} (a | s) \right] - \pi_ {\widetilde {T}} ^ {*} (a | s) \left[ Q _ {\tau} \left(\pi_ {\widetilde {T}} ^ {*}, p _ {\widetilde {T}}; s, a\right) - \tau \ln \pi_ {\widetilde {T}} ^ {*} (a | s) \right] \right] \\ = \mathbb {E} _ {s \sim \rho} \sum_ {a} \left[ [ \pi_ {\widetilde {T}} (a | s) - \pi_ {\widetilde {T}} ^ {*} (a | s) ] [ Q _ {\tau} (\pi_ {\widetilde {T}} ^ {*}, p _ {\widetilde {T}}; s, a) - \tau \ln \pi_ {\widetilde {T}} ^ {*} (a | s) ] \right. \\ \left. + \pi_ {\widetilde {T}} (a | s) [ Q _ {\tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}; s, a) - Q _ {\tau} (\pi_ {\widetilde {T}} ^ {*}, p _ {\widetilde {T}}; s, a) - \tau \ln \pi_ {\widetilde {T}} (a | s) + \tau \ln \pi_ {\widetilde {T}} ^ {*} (a | s) ] \right] \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \mathbb {E} _ {s \sim \rho} \sum_ {a} \left[ \left(\frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}\right) \left(\frac {2 + \tau \ln | \mathcal {A} |}{1 - \gamma}\right) \right. \\ \left. + \pi_ {\widetilde {T}} (a | s) \left(\frac {\gamma^ {T ^ {\prime} + 1} (1 + \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}} + \tau \left(\frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}\right)\right) \right] \\ \leq \frac {3 | \mathcal {A} |}{1 - \gamma} \left(\frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}\right) + \frac {1 2 \gamma^ {T ^ {\prime}}}{1 - \gamma} + \frac {6 \epsilon_ {1}}{(1 - \gamma) ^ {2}} \\ = \frac {2 + 3 \tau \ln | \mathcal {A} |}{\tau (1 - \gamma) ^ {2}} \Big (2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |) + \frac {4 \epsilon_ {1}}{1 - \gamma} \Big) \leq \mathcal {O} \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} \Big), \\ \end{array}
$$
where (i) uses $V_{\tau}(\pi, p; s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q_{\tau}(\pi, p; s, a) - \tau \ln \pi(a|s)]$ based on eqs. (4) and (5), (ii) uses eqs. (32), (33), (78) and (79).
Next, we will prove the convergence rate (14). Note that
$$
\begin{array}{l} \left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) \right\| = \frac {1}{\beta} \left\| \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) - p _ {t} \right\| \\ \stackrel {(i)} {\geq} \frac {1}{\beta} \left\| \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) - \operatorname {p r o j} _ {\mathcal {P}} \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) \right\| \\ \stackrel {(i i)} {=} \| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) - G _ {t} \|, \\ \end{array}
$$
where (i) uses $p_t \in \mathcal{P}$ together with the fact that $\operatorname{proj}_{\mathcal{P}}(x)$ is the closest point in $\mathcal{P}$ to $x$, and (ii) denotes $G_t := \frac{1}{\beta} \left( \mathrm{proj}_{\mathcal{P}}[p_t + \beta \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)] - p_t \right)$. Squaring both sides of the above inequality and expanding $\|\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) - G_t\|^2$ implies that
$$
G _ {t} ^ {\top} \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) \geq \frac {1}{2} \| G _ {t} \| ^ {2}. \tag {80}
$$
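The gradient mapping inequality (80) can be verified numerically for the projected update. The sketch below (an illustration, not part of the proof) takes $\mathcal{P}$ to be a product of probability simplices, one per state-action pair as in the transition-kernel setting, uses the standard sort-based Euclidean projection onto a simplex, and draws an arbitrary random vector in place of the gradient estimate $\widehat{\nabla}_p J_{\rho,\tau}(\pi_t, p_t)$.

```python
import numpy as np

rng = np.random.default_rng(2)

def proj_simplex(v):
    # Euclidean projection of v onto the probability simplex (sort-based).
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

nS, nA, beta = 4, 3, 0.1
# p_t: one simplex row per (s,a); g_hat: a stand-in gradient estimate.
p_t = rng.random((nS * nA, nS)); p_t /= p_t.sum(axis=1, keepdims=True)
g_hat = rng.normal(size=(nS * nA, nS))

step = p_t + beta * g_hat
proj = np.apply_along_axis(proj_simplex, 1, step)  # proj_P acts row-wise
G = (proj - p_t) / beta

# Since ||g_hat|| >= ||g_hat - G|| (p_t lies in P), expanding the squares
# gives <G, g_hat> >= ||G||^2 / 2, i.e. eq. (80).
assert np.vdot(G, g_hat) >= 0.5 * np.vdot(G, G) - 1e-9
```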
Since $F_{\rho,\tau}(p) \coloneqq \min_{\pi \in \Pi} J_{\rho,\tau}(\pi,p)$ is $\ell_F$ -smooth as shown in Proposition 2, we have
$$
\begin{array}{l} F _ {\rho , \tau} (p _ {t + 1}) - F _ {\rho , \tau} (p _ {t}) \geq \nabla F _ {\rho , \tau} (p _ {t}) ^ {\top} (p _ {t + 1} - p _ {t}) - \frac {\ell_ {F}}{2} \| p _ {t + 1} - p _ {t} \| ^ {2} \\ \stackrel {(i)} {=} \beta G _ {t} ^ {\top} [ \nabla F _ {\rho , \tau} (p _ {t}) - \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) ] + \beta G _ {t} ^ {\top} \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \frac {\ell_ {F} \beta^ {2}}{2} \| G _ {t} \| ^ {2} \\ \stackrel {(i i)} {\geq} - \beta \| G _ {t} \| \left(\ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}\right) + \frac {\beta}{2} \| G _ {t} \| ^ {2} - \frac {\beta}{4} \| G _ {t} \| ^ {2} \\ \stackrel {(i i i)} {\geq} \frac {\beta}{8} \| G _ {t} \| ^ {2} - 2 \beta \left(\ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}\right) ^ {2}, \tag {81} \\ \end{array}
$$
where (i) uses $p_{t+1} - p_t = \beta G_t$ , (ii) uses $\beta \leq \frac{1}{2\ell_F}$ and eqs. (44) and (80), and (iii) uses $c\|G_t\| \leq 2c^2 + \frac{\|G_t\|^2}{8}$ for $c := \ell_\pi \sqrt{|\mathcal{A}|} \| \ln \pi_t - \ln \pi_t^* \|_\infty + \epsilon_2$ . Averaging the above inequality over $t = 0, 1, \ldots, T-1$ , we obtain that
$$
\begin{array}{l} \| G _ {\widetilde {T}} \| = \min _ {0 \leq t \leq T - 1} \| G _ {t} \| \leq \sqrt {\frac {1}{T} \sum_ {t = 0} ^ {T - 1} \| G _ {t} \| ^ {2}} \\ \leq \sqrt {\frac {1 6}{T} \sum_ {t = 0} ^ {T - 1} \left(\ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}\right) ^ {2} + \frac {8 \left[ F _ {\rho , \tau} \left(p _ {T}\right) - F _ {\rho , \tau} \left(p _ {0}\right) \right]}{T \beta}} \\ \stackrel {(i)} {\leq} \sqrt {1 6 \left[ \frac {| \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \left(\frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}\right) + \epsilon_ {2} \right] ^ {2} + \frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \\ \leq \frac {4 | \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {4}} \left(2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |) + \frac {4 \epsilon_ {1}}{1 - \gamma}\right) + 4 \epsilon_ {2} + \sqrt {\frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \tag {82} \\ \end{array}
$$
where (i) uses eqs. (31) and (79) and $\ell_{\pi} := \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln{|\mathcal{A}|})}{(1 - \gamma)^{3}}$.
Then, the convergence rate (14) can be proved as follows.
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}})
$$
$$
\stackrel {(i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \langle p ^ {\prime} - p _ {\widetilde {T}}, \nabla_ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle
$$
$$
\leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \langle p ^ {\prime} - p _ {\widetilde {T}}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle + \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \| p ^ {\prime} - p _ {\widetilde {T}} \| \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \|
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left[ \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle + \langle p _ {\widetilde {T} + 1} - p _ {\widetilde {T}}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle \right] + \frac {D D _ {\mathcal {P}} \epsilon_ {2}}{1 - \gamma} \\ \stackrel {(i i i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \Big [ \frac {1}{\beta} \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, p _ {\widetilde {T}} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - p _ {\widetilde {T} + 1} \rangle - \frac {1}{\beta} \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, p _ {\widetilde {T}} - p _ {\widetilde {T} + 1} \rangle + \| \beta G _ {\widetilde {T}} \| (L _ {p} + \epsilon_ {2}) \Big ] \\ + \frac {D D _ {\mathcal {P}} \epsilon_ {2}}{1 - \gamma} \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i v)} {\leq} \frac {D}{1 - \gamma} \Big [ 0 + \frac {1}{\beta} \max _ {p ^ {\prime} \in \mathcal {P}} \| p ^ {\prime} - p _ {\widetilde {T} + 1} \| \| \beta G _ {\widetilde {T}} \| + \beta (L _ {p} + \epsilon_ {2}) \| G _ {\widetilde {T}} \| + D _ {\mathcal {P}} \epsilon_ {2} \Big ] \\ \stackrel {(v)} {\leq} \frac {D}{1 - \gamma} \Big [ [ D _ {\mathcal {P}} + \beta (L _ {p} + \epsilon_ {2}) ] \Big (\frac {4 | \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {4}} \Big (2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |) + \frac {4 \epsilon_ {1}}{1 - \gamma} \Big) + 4 \epsilon_ {2} + \sqrt {\frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \Big) + D _ {\mathcal {P}} \epsilon_ {2} \Big ] \\ \end{array}
$$
$$
\stackrel {(v i)} {\leq} \mathcal {O} \left[ (1 + \tau \epsilon_ {2}) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right],
$$
where (i) uses Proposition 3, (ii) uses $\| p' - p_{\widetilde{T}}\| \leq D_{\mathcal{P}}$ for $p',p_{\widetilde{T}}\in \mathcal{P}$ , where $D_{\mathcal{P}}\coloneqq \sup_{p,\widetilde{p}\in \mathcal{P}}\| \widetilde{p} -p\|$ denotes the diameter of $\mathcal{P}$ ( $D_{\mathcal{P}}\leq \sqrt{2|\mathcal{S}||\mathcal{A}|}$ as shown in Lemma 17), and $\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}}) - \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq \epsilon_2$ , (iii) uses $p_{\widetilde{T} +1} - p_{\widetilde{T}} = \beta G_{\widetilde{T}}$ and $\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq \| \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| +\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}}) - \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq L_p + \epsilon_2$ (the second $\leq$ uses eq. (40)), (iv) uses $p_{\widetilde{T} +1} - p_{\widetilde{T}} = \beta G_{\widetilde{T}}$ and eq. (64), (v) uses eq. (82) and $\| p^{\prime} - p_{\widetilde{T} +1}\| \leq D_{\mathcal{P}}$ , and (vi) uses $\beta \leq \frac{1}{2\ell_F} = \frac{\tau(1 - \gamma)^5}{16|\mathcal{S}||\mathcal{A}|(1 + \gamma\tau\ln|\mathcal{A}|)^2} = \mathcal{O}(\tau)$ and $L_{p} = \frac{\sqrt{|\mathcal{S}|}(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^{2}} = \mathcal{O}(1)$ .
# L. Proof of Corollary 1
Corollary 1 (Iteration Complexity of Algorithm 1). Implement Algorithm 1 in the deterministic setting $(\epsilon_{1} = \epsilon_{2} = 0)$ . For any $\epsilon > 0$ , select hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|}, 1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T' = \mathcal{O}[\ln (\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon, \tau)$ -Nash equilibrium under Assumption 1. This requires $T = \mathcal{O}(\epsilon^{-3})$ transition kernel updates, $TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ policy updates and iteration complexity $T + TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters for Algorithm 1, which satisfy the conditions of Theorem 1.
$$
\epsilon_ {1} = \epsilon_ {2} = 0 \tag {83}
$$
$$
\tau = \min \left(\frac {\epsilon (1 - \gamma)}{3 \ln | \mathcal {A} |}, 1\right) \tag {84}
$$
$$
\beta = \frac {1}{2 \ell_ {F}} = \frac {\tau (1 - \gamma) ^ {5}}{1 6 | \mathcal {S} | | \mathcal {A} | (1 + \gamma \tau \ln | \mathcal {A} |) ^ {2}} \tag {85}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {86}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {87}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln (\tau^ {- 1} \epsilon^ {- 1}) ]}{\ln (\gamma^ {- 1})} = \mathcal {O} [ \ln (\epsilon^ {- 1}) ]. \tag {88}
$$
Therefore, the convergence rates (13) and (14) along with the above hyperparameter choices imply that
$$
J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right) \leq \frac {\epsilon}{3}, \tag {89}
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p\right) - J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) \leq \mathcal {O} \left[ \left(1 + \tau \epsilon_ {2}\right) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right] \leq \frac {\epsilon}{3}, \tag {90}
$$
which means $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is an $(\epsilon /3,\tau)$ -Nash equilibrium and thus an $\epsilon$ -optimal robust policy by Proposition 1.
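For concreteness, the hyperparameter choices (84)-(86) can be computed directly. The sketch below uses illustrative values of $\epsilon$, $\gamma$, $|\mathcal{S}|$, $|\mathcal{A}|$ (assumptions, not values from the paper) and checks that the chosen $\tau$ keeps the regularization bias $\frac{\tau \ln|\mathcal{A}|}{1-\gamma}$ at most $\epsilon/3$, which is what Proposition 1 needs to convert an $(\epsilon/3, \tau)$-Nash equilibrium into an $\epsilon$-optimal robust policy. The constants hidden in $T = \mathcal{O}(\epsilon^{-3})$ and $T' = \mathcal{O}[\ln(\epsilon^{-1})]$ are not instantiated.

```python
import math

# Illustrative problem sizes (assumptions for this sketch only).
eps, gamma, S, A = 0.1, 0.9, 10, 4

tau = min(eps * (1 - gamma) / (3 * math.log(A)), 1.0)       # eq. (84)
ell_F = 8 * S * A * (1 + gamma * tau * math.log(A)) ** 2 \
        / (tau * (1 - gamma) ** 5)                           # Proposition 2
beta = 1 / (2 * ell_F)                                       # eq. (85)
eta = (1 - gamma) / tau                                      # eq. (86)

# The choice of tau bounds the regularization bias by eps/3.
bias = tau * math.log(A) / (1 - gamma)
assert 0 < tau <= 1
assert bias <= eps / 3 + 1e-12
```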
# M. Proof of Corollary 2
Corollary 2 (Sample Complexity of Algorithm 2). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 2 with hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|},1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ , $N = \mathcal{O}(\epsilon^{-2})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . Then the output $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon ,\tau)$ -Nash equilibrium with probability at least $1 - \delta$ under Assumption 1. Furthermore, the sample complexity is $T(T^{\prime}T_{1} + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters.
$$
\tau = \min \left(\frac {\epsilon (1 - \gamma)}{3 \ln | \mathcal {A} |}, 1\right) = \mathcal {O} (\epsilon) \tag {91}
$$
$$
\epsilon_ {1} = \mathcal {O} (\tau \epsilon) = \mathcal {O} \left(\epsilon^ {2}\right) \tag {92}
$$
$$
\epsilon_ {2} = \mathcal {O} (\epsilon) \tag {93}
$$
$$
\beta = \frac {1}{2 \ell_ {F}} = \mathcal {O} (\tau) = \mathcal {O} (\epsilon) \tag {94}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {95}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {96}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln \left(\tau^ {- 1} \epsilon^ {- 1}\right) ]}{\ln \left(\gamma^ {- 1}\right)} = \mathcal {O} [ \ln \left(\epsilon^ {- 1}\right) ] \tag {97}
$$
$$
T _ {1} = \mathcal {O} \left(\epsilon_ {1} ^ {- 2}\right) = \mathcal {O} \left(\epsilon^ {- 4}\right) \tag {98}
$$
$$
\alpha = \mathcal {O} \left[ \ln^ {- 1} \left(\epsilon_ {1} ^ {- 1}\right) \right] = \mathcal {O} \left[ \ln^ {- 1} \left(\epsilon^ {- 1}\right) \right] \tag {99}
$$
$$
\delta_ {1} = \frac {\delta}{2 T T ^ {\prime}} \tag {100}
$$
$$
N = \mathcal {O} \left(\epsilon_ {2} ^ {- 2}\right) = \mathcal {O} \left(\epsilon^ {- 2}\right), \tag {101}
$$
$$
H = \mathcal {O} \left[ \ln \left(\epsilon_ {2} ^ {- 1}\right) \right] = \mathcal {O} \left[ \ln \left(\epsilon^ {- 1}\right) \right], \tag {102}
$$
$$
\delta_ {2} = \frac {\delta}{2 T}, \tag {103}
$$
Based on the conditions of Lemmas 9 and 12, for all $t = 0, 1, \dots, T - 1$ and $k = 0, 1, \dots, T' - 1$ , eq. (47) of Lemma 9 and the conclusion of Lemma 12 below hold with probability at least $1 - TT'\delta_1 = 1 - \delta / 2$ .
$$
\| \widehat {Q} _ {t, k ^ {\prime}} - Q _ {\tau} (\pi_ {t, k ^ {\prime}}, p _ {t}) \| _ {\infty} \leq \epsilon_ {1}; \forall k ^ {\prime} = 0, 1, \ldots , k - 1 \Rightarrow \inf _ {s, a} \pi_ {t, k} (a | s) \geq \pi_ {\min},
$$
$$
\inf _ {s, a} \ln \pi_ {t, k} (a | s) \geq \ln \pi_ {\min} = - \mathcal {O} (\tau^ {- 1}) \Rightarrow | c (s, a, s ^ {\prime}) + \tau \ln \pi_ {t, k} (a | s) | \leq \mathcal {O} (1) \Rightarrow \| \widehat {Q} _ {t, k} - Q _ {\tau} (\pi_ {t, k}, p _ {t}) \| _ {\infty} \leq \epsilon_ {1}.
$$
Note that $\pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}| \geq \pi_{\min}$ . Hence, by induction over $k$ , the above statements imply that $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k}, p_t) \|_{\infty} \leq \epsilon_1$ and $\inf_{s,a} \pi_{t,k}(a|s) \geq \pi_{\min}$ for all $t = 0, 1, \ldots, T-1$ and $k = 0, 1, \ldots, T' - 1$ .
Note that $\epsilon_{1} = \mathcal{O}(\epsilon^{2})$ and $\epsilon_{2} = \mathcal{O}(\epsilon)$ for sufficiently small $\epsilon > 0$ can satisfy the condition of Lemma 14 that $\epsilon_{2} \geq \frac{3\gamma\epsilon_{1}\sqrt{|S|}}{1 - \gamma}$ . Hence, based on Lemma 14, the stochastic transition gradients $\widehat{\nabla}_{p}J_{\rho,\tau}(\pi_{t},p_{t})$ obtained by eq. (16) for all $t = 0,1,\ldots,T-1$ satisfy $\|\widehat{\nabla}_{p}J_{\rho,\tau}(\pi_{t},p_{t}) - \nabla_{p}J_{\rho,\tau}(\pi_{t},p_{t})\| \leq \epsilon_{2}$ with probability at least $1-T\delta_{2}=1-\delta/2$ .
Hence, we proved that $\| \widehat{\nabla}_p J_{\rho ,\tau}(\pi_t,p_t) - \nabla_p J_{\rho ,\tau}(\pi_t,p_t)\| \leq \epsilon_2$ and $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k},p_t)\|_{\infty}\leq \epsilon_1$ hold for all $t = 0,\ldots ,T - 1$ and $k = 0,\dots,T^{\prime} - 1$ with probability at least $1 - \delta$ . Therefore, Algorithm 2 with the above hyperparameter choices can be seen as a special case of Algorithm 1, so the convergence rates (13) and (14) in Theorem 1 hold which imply
$$
J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right) \leq \frac {\epsilon}{3},
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \leq \mathcal {O} \left[ (1 + \tau \epsilon_ {2}) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right] \leq \frac {\epsilon}{3}.
$$
Therefore, with probability at least $1 - \delta$ , $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is an $(\epsilon / 3, \tau)$ -Nash equilibrium and thus an $\epsilon$ -optimal robust policy by Proposition 1. The required total sample complexity is $T(T' T_1 + NH) = \mathcal{O}(\epsilon^{-7} \ln \epsilon^{-1})$ .
# N. Proof of Theorem 2
Theorem 2 (Sample Complexity of Algorithm 3). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 3 with hyperparameters $\tau = \min \left[\mathcal{O}(\sqrt{\zeta} +\epsilon),1\right]$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\zeta +\epsilon^{2})^{-1}]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F\|\Psi\|}$ , $N = \mathcal{O}(\epsilon^{-4})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . Then under Assumption 1 and the assumption that $\inf_{s,a,s'}p_{\xi}(s'|s,a) > p_{\mathrm{min}}$ for a constant $p_{\mathrm{min}} > 0$ , $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $(\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon),\tau)$ -Nash equilibrium and an $\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon)$ -optimal robust policy with probability at least $1 - \delta$ . The required sample complexity is $T(T^{\prime}T_{1} + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters for Algorithm 3.
$$
\epsilon_ {1} = 2 \zeta + \epsilon^ {2} \tag {104}
$$
$$
\epsilon_ {2} = \frac {3 \gamma \epsilon_ {1} \sqrt {| S |}}{1 - \gamma} = \frac {3 \gamma \sqrt {| S |} \left(2 \zeta + \epsilon^ {2}\right)}{1 - \gamma} \tag {105}
$$
$$
\delta_ {1} = \frac {\delta}{2 T T ^ {\prime}} \tag {106}
$$
$$
\delta_ {2} = \frac {\delta}{2 T} \tag {107}
$$
$$
\tau = \min \left[ \mathcal {O} (\sqrt {\zeta} + \epsilon), 1 \right] \tag {108}
$$
$$
\beta = \frac {1}{2 \ell_ {F} \| \Psi \|} = \mathcal {O} (\tau) \geq \mathcal {O} (\epsilon) \tag {109}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {110}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {111}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln (\epsilon^ {- 2}) ]}{\ln (\gamma^ {- 1})} = \mathcal {O} [ \ln (\epsilon^ {- 1}) ] \tag {112}
$$
$$
T _ {1} = \mathcal {O} \left(\epsilon^ {- 4}\right) \geq \mathcal {O} \left(\epsilon_ {1} ^ {- 2}\right) \tag {113}
$$
$$
\alpha = \mathcal {O} \left(\ln^ {- 1} \epsilon_ {1} ^ {- 1}\right) = \mathcal {O} \left[ \ln^ {- 1} \left(\zeta + \epsilon^ {2}\right) ^ {- 1} \right] \tag {114}
$$
$$
N = \mathcal {O} \left(\epsilon^ {- 4}\right) \geq \mathcal {O} \left(\epsilon_ {2} ^ {- 2}\right) \tag {115}
$$
$$
H = \mathcal {O} \left[ \ln \left(\epsilon^ {- 1}\right) \right] \geq \mathcal {O} \left[ \ln \left(\epsilon_ {2} ^ {- 1}\right) \right] \tag {116}
$$
Based on the conditions of Lemmas 10 and 11, select the hyperparameters of the TD update rule (21) accordingly. Then for all $t = 0,1,\dots ,T - 1$ and $k = 0,1,\ldots ,T^{\prime} - 1$ , eq. (51) of Lemma 10 and the conclusion of Lemma 11 below hold with probability at least $1 - TT^{\prime}\delta_{1} = 1 - \delta /2$ .
$$
\sup _ {s, a} | \phi (s, a) ^ {\top} w _ {t, k ^ {\prime}} - Q _ {\tau} (\pi_ {t, k ^ {\prime}}, p _ {t}; s, a) | \leq \epsilon_ {1}, \ \forall k ^ {\prime} = 0, 1, \ldots , k - 1 \ \Rightarrow \ \inf _ {s, a} \ln \pi_ {t, k} (a | s) \geq \ln \pi_ {\min},
$$
$$
\inf_{s,a}\ln \pi_{t,k}(a|s)\geq \ln \pi_{\min} = -\mathcal{O}(\tau^{-1})\Rightarrow |c(s,a,s^{\prime}) + \tau \ln \pi_{t,k}(a|s)|\leq \mathcal{O}(1)\Rightarrow \sup_{s,a}| \phi (s,a)^{\top}w_{t,k} - Q_{\tau}(\pi_{t,k},p_{t};s,a)|\leq \epsilon_{1}.
$$
Note that $u_{t,0} = 0 \Rightarrow \pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}| \geq \pi_{\min}$ based on eq. (22). Hence, by induction over $k$ , the above statements imply that $\sup_{s,a} |\phi(s,a)^\top w_{t,k} - Q_\tau(\pi_{t,k}, p_t; s, a)| \leq \epsilon_1$ and $\inf_{s,a} \pi_{t,k}(a|s) \geq \pi_{\min}$ for all $t = 0, 1, \ldots, T-1$ and $k = 0, 1, \ldots, T' - 1$ .
Then based on Lemma 13, the stochastic transition gradients $\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t})$ obtained by eq. (19) for all $t = 0,1,\ldots,T-1$ satisfy $\|\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t}) - \nabla_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t})\| \leq \epsilon_2$ with probability at least $1-T\delta_2 = 1-\delta/2$ .
Hence, we have proved that $\| \widehat{\nabla}_{\xi}J_{\rho ,\tau}(\pi_t,p_{\xi_t}) - \nabla_{\xi}J_{\rho ,\tau}(\pi_t,p_{\xi_t})\| \leq \epsilon_2$ and $\sup_{s,a}|\phi (s,a)^{\top}w_{t,k} - Q_{\tau}(\pi_{t,k},p_t;s,a)|\leq \epsilon_1$ hold for all $t = 0,1,\ldots ,T - 1$ and $k = 0,1,\dots,T^{\prime} - 1$ with probability at least $1 - \delta$ . Therefore, the convergence rates in Theorem 1 also hold for Algorithm 3 with probability at least $1 - \delta$ . The proof logic is the same as that of Theorem 1; the major difference is that we replace the transition kernel $p\in \mathcal{P}$ with its corresponding parameter $\xi$ . Note that the proof of Theorem 1 uses the gradient dominance property (Proposition 12) of $\nabla_pJ_{\rho ,\tau}(\pi ,p)$ to obtain global convergence, and $\nabla_{\xi}F_{\rho ,\tau}(p_{\xi})$ satisfies a gradient dominance property (Proposition 24) of the same form, so we can use the latter here. In addition, since $p_\xi = \Psi \xi$ , we have $\nabla_{\xi}J_{\rho ,\tau}(\pi ,p_{\xi}) = \Psi^{\top}\nabla_{p}J_{\rho ,\tau}(\pi ,p)$ and $\nabla_{\xi}F_{\rho ,\tau}(p_{\xi}) = \Psi^{\top}\nabla_p F_{\rho ,\tau}(p)$ , so the Lipschitz constants $L_{p}$ , $\ell_p$ and $\ell_F$ change to $L_{p}\| \Psi \|$ , $\ell_p\| \Psi \|$ and $\ell_F\| \Psi \|$ respectively, which does not change the order of the convergence rate since $\| \Psi \| = \mathcal{O}(1)$ .
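The chain-rule identity $\nabla_{\xi}J_{\rho,\tau}(\pi, p_{\xi}) = \Psi^{\top}\nabla_{p}J_{\rho,\tau}(\pi, p)$ holds for any smooth objective composed with a linear parameterization. A minimal numerical sanity check of this fact, using a toy quadratic stand-in for $J_{\rho,\tau}$ (the matrices `Psi`, `A` and vector `b` below are illustrative, not quantities from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_p, n_xi = 8, 3                         # dims of kernel vector p and parameter xi
Psi = rng.standard_normal((n_p, n_xi))   # linear parameterization p_xi = Psi @ xi
A = rng.standard_normal((n_p, n_p))
A = A @ A.T                              # symmetric matrix for a toy smooth objective
b = rng.standard_normal(n_p)

def J(p):                                # toy stand-in for J_{rho,tau}(pi, p)
    return 0.5 * p @ A @ p + b @ p

def grad_p(p):                           # exact gradient w.r.t. p
    return A @ p + b

xi = rng.standard_normal(n_xi)
p = Psi @ xi

# chain rule: grad_xi J(p_xi) = Psi^T grad_p J(p)
g_chain = Psi.T @ grad_p(p)

# central finite-difference gradient w.r.t. xi for comparison
eps = 1e-6
g_fd = np.array([(J(Psi @ (xi + eps * e)) - J(Psi @ (xi - eps * e))) / (2 * eps)
                 for e in np.eye(n_xi)])

assert np.allclose(g_chain, g_fd, atol=1e-5)
```

The same composition argument is why the Lipschitz constants simply pick up a factor of $\|\Psi\|$.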
Substituting the hyperparameters (104)-(116) into the convergence rates in Theorem 1, we obtain that
$$
J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} \Big) \leq \mathcal {O} \Big (\frac {\epsilon^ {2} + 2 \zeta + \epsilon^ {2}}{\min [ \mathcal {O} (\sqrt {\zeta} + \epsilon) , 1 ]} \Big) \leq \mathcal {O} (\zeta + \sqrt {\zeta} + \epsilon),
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \leq \mathcal {O} (1 + \tau \epsilon_ {2}) \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}} \Big) \leq \mathcal {O} (\zeta + \sqrt {\zeta} + \epsilon).
$$
Therefore, with probability at least $1 - \delta$ , $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is an $\left(\mathcal{O}(\sqrt{\zeta} + \zeta + \epsilon), \tau\right)$ -Nash equilibrium and thus an $\mathcal{O}(\sqrt{\zeta} + \zeta + \epsilon)$ -optimal robust policy by Proposition 1. The required total sample complexity is
$$
T (T ^ {\prime} T _ {1} + N H) = \mathcal {O} \big [ \epsilon^ {- 3} \big (\epsilon^ {- 4} \ln (\epsilon^ {- 1}) + \epsilon^ {- 4} \ln (\epsilon^ {- 1}) \big) \big ] = \mathcal {O} \big (\epsilon^ {- 7} \ln (\epsilon^ {- 1}) \big).
$$
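The sample-complexity arithmetic above can be sanity-checked numerically. The sketch below sets all hidden constants in the $\mathcal{O}(\cdot)$ orders to 1 (an illustrative assumption), under which $T'T_{1}$ and $NH$ contribute equally and the total is exactly $2\epsilon^{-7}\ln(\epsilon^{-1})$:

```python
import math

def total_samples(eps):
    # hyperparameter orders from Theorem 2, with all constants set to 1
    T = eps ** -3                # outer iterations
    T_prime = math.log(1.0 / eps)  # inner policy updates per outer iteration
    T1 = eps ** -4               # TD iterations per policy update
    N = eps ** -4                # trajectories per gradient estimate
    H = math.log(1.0 / eps)      # trajectory horizon
    return T * (T_prime * T1 + N * H)

for eps in [1e-1, 1e-2, 1e-3]:
    ratio = total_samples(eps) / (eps ** -7 * math.log(1.0 / eps))
    assert abs(ratio - 2.0) < 1e-9   # total = 2 * eps^{-7} ln(eps^{-1})
```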
# O. Experiments
The experiments are implemented in Python 3.9 on a MacBook Pro laptop with 500 GB of storage and an 8-core CPU (16 GB of memory). The code can be downloaded from https://github.com/changy12/ICML2024-Accelerated-Policy-Gradient-for-s-rectangular-Robust-MDPs-with-Large-State-Spaces.
# O.1. Experiments on Small State Space under Deterministic Setting
We compare our Algorithm 1 with the existing double-loop robust policy gradient (DRPG) algorithm (Wang et al., 2023) and the actor-critic algorithm (Li et al., 2023b) under the deterministic setting (i.e., when exact values of some quantities are available, including gradients, Q functions, V functions, etc.) on the Garnet problem (Archibald et al., 1995; Wang and Zou, 2022) with state space $\mathcal{S} = \{0,1,2,3,4\}$ of 5 states and action space $\mathcal{A} = \{0,1,2\}$ of 3 actions. The agent incurs cost 0 if it takes action 0 at state 0 or action 1 at any other state, and incurs cost 1 otherwise. We use the $s$ -rectangular $L_{2}$ -norm ambiguity set $\mathcal{P} := \{p \in (\Delta^{\mathcal{S}})^{\mathcal{S} \times \mathcal{A}} : \| p(s,\cdot,\cdot) - p_0(s,\cdot,\cdot)\| \leq 0.03, \forall s \in \mathcal{S}\}$ where $p_0(s,a,s') \equiv 0.2$ is the nominal transition kernel. The initial state distribution $\rho$ is uniform with $\rho(s) \equiv \frac{1}{5}$ . The discount factor is $\gamma = 0.95$ .
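This small Garnet setup can be sketched in a few lines; the cost function and a per-state $L_2$ projection onto the ambiguity set $\mathcal{P}$ are shown below. `project_ambiguity` is a hypothetical helper for illustration only: for brevity it ignores the simplex constraint on $p(s,a,\cdot)$ that a full implementation must also enforce.

```python
import numpy as np

S, A = 5, 3                       # |S| = 5 states, |A| = 3 actions
gamma = 0.95
rho = np.full(S, 1.0 / S)         # uniform initial state distribution
p0 = np.full((S, A, S), 0.2)      # nominal kernel: uniform over next states

def cost(s, a):
    # cost 0 for action 0 at state 0, or action 1 at other states; 1 otherwise
    return 0.0 if (s == 0 and a == 0) or (s != 0 and a == 1) else 1.0

def project_ambiguity(p, radius=0.03):
    """Per-state L2 projection onto {p : ||p(s,.,.) - p0(s,.,.)|| <= radius}.
    Illustrative sketch: the simplex constraint is not re-enforced here."""
    p = p.copy()
    for s in range(S):
        d = p[s] - p0[s]
        nrm = np.linalg.norm(d)   # Frobenius norm of the (A x S) slack
        if nrm > radius:
            p[s] = p0[s] + d * (radius / nrm)
    return p

C = np.array([[cost(s, a) for a in range(A)] for s in range(S)])
assert C.sum() == S * A - S       # exactly one zero-cost action per state
```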
We implement an exact version of Algorithm 1 (i.e., $\epsilon_{1} = \epsilon_{2} = 0$ ) using $T_{p} = 5$ outer transition kernel updates with stepsize $\beta = 0.001$ , and $T^{\prime} = 1$ inner policy update with stepsize $\eta = \frac{1 - \gamma}{\tau} = 50$ per outer update. For DRPG algorithm, we use $T = 5$ outer policy updates (Algorithm 1 of (Wang et al., 2023)) with stepsize $\alpha_{t} = 10$ and $T_{k} = 1$ inner transition kernel update (Algorithm 2 of (Wang et al., 2023)) with stepsize $\beta_{t} = 0.001$ per outer update. For actor-critic algorithm (Algorithm 4.1 of (Li et al., 2023b)), we use $K = 5$ outer iterations, where the actor step (policy update) uses stepsize $\eta = 500$ , and the critic step (transition kernel update) uses only 1 iteration of Algorithm 3.2 of (Li et al., 2023b) with $\alpha_{m} = 1$ as well as $P_{\epsilon}$ obtained by exactly solving the direction-finding subproblem in eq. (3.4) of (Li et al., 2023b). We plot learning curves of the objective function $\Phi_{\rho}(\pi_t)\coloneqq \max_{p\in \mathcal{P}}J_{\rho}(\pi_t,p)$ at each $t$ -th outer iteration on the left of Figure 1. The x-axis is iteration complexity defined as the total number of policy updates and transition kernel updates up to each iteration $t$ . Figure 1 shows
![](images/f5ebc1409d3ded78a05c2671352772eec51f6de91a490423cbf199a17142485b.jpg)
Figure 1: Experimental Results on Small State Space (Left) and Large State Space (Right).
![](images/063caab304557da855611e650aa9ff2a90aa4c3afe71242f86598bfe3c890697.jpg)
that our Algorithm 1 converges faster to the optimal robust value $\min_{\pi \in \Pi} \Phi_{\rho}(\pi) = 0$ than DRPG algorithm (Wang et al., 2023) and actor-critic algorithm (Li et al., 2023b).
# O.2. Experiments on Large State Space
We test Algorithm 3 on the Garnet problem (Archibald et al., 1995; Wang and Zou, 2022) with spaces $S = \{0,1,\dots ,49\}$ of 50 states and $\mathcal{A} = \{0,1,2\}$ of 3 actions. The agent gets cost 0 if it takes action 0 at state 0 or action 1 at other states, and gets cost 1 otherwise. We use transition kernel parameterization $p_{\xi}(s^{\prime}|s,a) = \psi (a,s^{\prime})\xi (s)$ with parameter $\xi (s)\in \mathbb{R}^{d_p}$ $(d_{p} = 10)$ and randomly generated feature vectors $\psi (a,s^{\prime})\in \mathbb{R}^{d_p}$ . This parameterization is both $s$ -rectangular and a special case of the linear kernel parameterization $p_{\widetilde{\xi}}(s^{\prime}|s,a) = \widetilde{\psi} (s,a,s^{\prime})\widetilde{\xi}$ introduced in Section 4.1 with parameter $\widetilde{\xi} = [\xi (1),\xi (2),\ldots ,\xi (|\mathcal{S}|)]\in \mathbb{R}^{d_p|S|}$ and the following feature vector
$$
\widetilde {\psi} (s, a, s ^ {\prime}) = [ \underbrace {0 , \ldots , 0} _ {(s - 1) d _ {p} \text { elements}}, \underbrace {\psi (a , s ^ {\prime})} _ {d _ {p} \text { elements}}, \underbrace {0 , \ldots , 0} _ {(| S | - s) d _ {p} \text { elements}} ] \in \mathbb {R} ^ {d _ {p} | S |}.
$$
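The block structure of $\widetilde{\psi}$ can be checked in a few lines of NumPy. The 1-indexing of states follows the display above, and the random features are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, d_p = 50, 3, 10
psi = rng.uniform(1, 2, size=(A, S, d_p))   # psi(a, s') in R^{d_p}

def psi_tilde(s, a, s_next):
    # states are 1-indexed here, matching the block layout in the text:
    # zeros, then psi(a, s') in the s-th block of d_p coordinates, then zeros
    v = np.zeros(d_p * S)
    v[(s - 1) * d_p : s * d_p] = psi[a, s_next - 1]
    return v

xi = rng.uniform(size=(S, d_p))             # xi(s) in R^{d_p}
xi_tilde = xi.reshape(-1)                   # concatenation [xi(1), ..., xi(|S|)]

s, a, s_next = 7, 2, 31
# block structure: psi_tilde(s,a,s')^T xi_tilde == psi(a,s')^T xi(s)
assert np.isclose(psi_tilde(s, a, s_next) @ xi_tilde,
                  psi[a, s_next - 1] @ xi[s - 1])
```

This confirms that the $s$-rectangular parameterization $p_{\xi}(s'|s,a) = \psi(a,s')\xi(s)$ is recovered as a special case of the linear kernel parameterization.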
We first generate $\psi_{\mathrm{pre}}^{(j)}(a,s') \in \mathbb{R}$ from uniform distribution $U(1,2)$ for all the entries $j = 1,\dots,d_p$ and for all $a,s'$ . Then we obtain $\psi(a,s') = [\psi^{(1)}(a,s'),\ldots,\psi^{(d_p)}(a,s')] \in \mathbb{R}^{d_p}$ by normalization as follows.
$$
\psi^ {(j)} (a, s ^ {\prime}) = \frac {\psi_ {\mathrm {p r e}} ^ {(j)} (a , s ^ {\prime})}{\sum_ {s ^ {\prime \prime}} \psi_ {\mathrm {p r e}} ^ {(j)} (a , s ^ {\prime \prime})}.
$$
In this way, $p_{\xi}(s'|s, a) = \psi(a, s') \xi(s)$ is a distribution over $s' \in S$ for any $\xi(s) \in \mathbb{R}_+^{d_p}$ satisfying $\|\xi(s)\|_1 = 1$ . We use the $s$ -rectangular $L_2$ -norm ambiguity set $\Xi := \{[\xi(1), \xi(2), \ldots, \xi(|S|)] \in \mathbb{R}^{d_p|S|} : \|\xi(s) - \xi_0(s)\| \leq 0.03, \forall s \in S\}$ where $\xi_0(s) = [0.1, 0.1, \ldots, 0.1] \in \mathbb{R}^{d_p}$ is the nominal kernel parameter. We also adopt the linear Q function approximation $Q_{\tau}(\pi, p; s, a) \approx \phi(s, a)^{\top} w$ with parameter $w \in \mathbb{R}^d$ as well as feature vectors $\phi(s, a) \in \mathbb{R}^d$ generated entrywise from the uniform distribution $U(0, 1)$ . The initial state distribution $\rho$ is uniform with $\rho(s) \equiv \frac{1}{50}, \forall s \in S$ . The discount factor is $\gamma = 0.95$ .
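The normalization step can be reproduced in a short NumPy sketch, which also confirms that $p_{\xi}(\cdot|s,a)$ is a valid distribution whenever $\xi(s)$ lies on the simplex (the random seed and array shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, d_p = 50, 3, 10

# entries psi_pre^{(j)}(a, s') drawn from U(1, 2)
psi_pre = rng.uniform(1, 2, size=(A, S, d_p))
# normalize each coordinate j over next states s'' so each column sums to 1
psi = psi_pre / psi_pre.sum(axis=1, keepdims=True)

# any xi(s) on the simplex then yields a valid conditional distribution
xi_s = np.full(d_p, 1.0 / d_p)         # e.g. the nominal parameter xi_0(s)
for a in range(A):
    p = psi[a] @ xi_s                  # p_xi(s'|s,a) for all s'
    assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```

Each $p_{\xi}(\cdot|s,a)$ is then a convex combination of the $d_p$ normalized columns, hence itself a probability distribution.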
In the above robust MDP setting with varying $d \in \{5, 20, 50, 100, 130, 140, 150\}$ , we implement Algorithm 3 with $\tau = 0.1$ , $T = 10$ , $T' = 20$ , $T_1 = 10^5$ , $\eta = 1$ , $\beta = 0.001$ , $\alpha = 0.001$ , $H = 500$ , $N = 10^4$ . The learning curves of the objective function $\Phi_{\rho}(\pi_t) := \max_{p \in \mathcal{P}} J_{\rho}(\pi_t, p)$ for each $d$ are plotted on the right of Figure 1, which shows that our Algorithm 3 converges to the optimal robust value $\min_{\pi \in \Pi} \Phi_{\rho}(\pi) = 0$ when $d$ is sufficiently large, and that the convergence gap grows as $d$ decreases, due to larger transition kernel parameterization error. Hence, a proper value of $d$ is important to trade off performance against the amount of computation.