1: Inputs: $\tau, T, T', T_1, \alpha, \eta, \beta, N, H$.
2: Initialize $\xi_0$.
3: **for** transition update steps $t = 0, 1, \ldots, T - 1$ **do**
4: &emsp;Initialize $u_{t,0} = 0$.
5: &emsp;**for** policy update steps $k = 0, 1, \ldots, T' - 1$ **do**
6: &emsp;&emsp;For $\pi = \pi_{t,k}$ defined by eq. (22) and $p = p_{\xi_t}$, perform the TD update rule (21) for $T_1$ iterations.
7: &emsp;&emsp;Assign $w_{t,k} \leftarrow \bar{w}_{T_1} := \frac{1}{T_1}\sum_{n=1}^{T_1} w_n$.
8: &emsp;&emsp;Obtain $u_{t,k+1}$ by eq. (23).
9: &emsp;**end for**
10: &emsp;For $\pi_t := \pi_{t,T'}$ defined by eq. (22), obtain the stochastic gradient $\nabla_{\xi} J_{\rho,\tau}(\pi_t, p_{\xi_t})$ using eq. (19).
11: &emsp;Obtain $\xi_{t+1}$ by the projected gradient ascent step (20).
12: **end for**
13: Output: $\pi_{\hat{T}}, \xi_{\hat{T}}$, where $\hat{T} \in \arg\min_{0 \le t \le T-1} \|\xi_{t+1} - \xi_t\|$.
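To make the double-loop structure above concrete, here is a minimal runnable sketch on a toy entropy regularized robust problem. The inner policy loop is replaced by its closed-form softmin solution (standing in for the TD/policy updates of lines 4–9), the outer loop runs projected gradient ascent on a one-parameter transition kernel $p = (x, 1-x)$, and the output rule picks the iterate with the smallest update norm as in line 13. The $2 \times 2$ cost matrix, step size, and parametrization are illustrative assumptions, not the paper's setup.

```python
import math

# Toy instance (illustrative assumption): two actions, two next states,
# cost C[a][s'] in [0, 1], kernel parametrized as p = (x, 1 - x).
C = [[1.0, 0.0], [0.0, 1.0]]
tau, eta, T = 0.5, 0.5, 200  # entropy weight, outer step size, outer iterations

def inner_policy(x):
    """Closed-form entropy regularized optimal policy (softmin of action costs).
    Stands in for the inner policy-update loop (lines 5-9)."""
    q = [C[a][0] * x + C[a][1] * (1.0 - x) for a in range(2)]
    w = [math.exp(-qa / tau) for qa in q]
    z = sum(w)
    return [wi / z for wi in w], q

x = 0.1           # initial kernel parameter (xi_0)
steps = []        # ||xi_{t+1} - xi_t|| for the output rule (line 13)
iterates = []
for t in range(T):
    pi, q = inner_policy(x)
    # Outer ascent on F(x) = min_pi J(pi, p_x); by Danskin's theorem,
    # dF/dx = sum_a pi(a) * dq_a/dx at the inner minimizer.
    grad = sum(pi[a] * (C[a][0] - C[a][1]) for a in range(2))
    x_new = min(1.0, max(0.0, x + eta * grad))  # projection onto [0, 1]
    steps.append(abs(x_new - x))
    iterates.append(x_new)
    x = x_new

# Output rule: return the iterate with the smallest update norm.
t_hat = min(range(T), key=lambda t: steps[t])
x_out = iterates[t_hat]
print(x_out)  # worst-case kernel; by symmetry of C it approaches 0.5
```

Note the ascent direction only needs the inner minimizer, mirroring how the algorithm reuses $\pi_t$ when forming $\nabla_{\xi} J_{\rho,\tau}(\pi_t, p_{\xi_t})$.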
# 5. Conclusion
This work proposes a policy gradient algorithm with faster global convergence than existing policy gradient algorithms on $s$-rectangular robust MDPs, obtained by solving an entropy regularized robust MDP. We further extend this algorithm to the stochastic setting and to large state spaces, and obtain the first sample complexity results for policy gradient methods on $s$-rectangular robust MDPs. Moreover, our algorithms are also the first policy gradient methods that can solve the entropy regularized robust MDP problem, an important but underexplored area. Since $F_{\rho,\tau}(p)$ is Lipschitz smooth with parameter $\ell_F = \mathcal{O}(\tau^{-1})$ (see Proposition 2) while $\tau = \mathcal{O}(\epsilon)$ is required for $\epsilon$-accuracy (see Proposition 1), our algorithm requires a small stepsize $\beta = \mathcal{O}(\epsilon)$, and thus its iteration and sample complexities are not minimax optimal. An interesting future direction is to further accelerate our algorithms using techniques such as Nesterov's acceleration and variance reduction.
# Impact Statement
This paper presents work whose goal is to advance the field of machine learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here.
# Acknowledgements
This work was partially supported by NSF IIS 2347592, 2347604, 2348159, 2348169, DBI 2405416, CCF 2348306, CNS 2347617.
# References
Abbeel, P., Coates, A., Quigley, M., and Ng, A. Y. (2006). An application of reinforcement learning to aerobatic helicopter flight. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1-8.
Abdolmaleki, A., Springenberg, J. T., Tassa, Y., Munos, R., Heess, N., and Riedmiller, M. (2018). Maximum a posteriori policy optimisation. In Proceedings of the International Conference on Learning Representations (ICLR).
Agarwal, A., Kakade, S. M., Lee, J. D., and Mahajan, G. (2021). On the theory of policy gradient methods: Optimality, approximation, and distribution shift. The Journal of Machine Learning Research, 22(1):4431-4506.
Altman, E. (2004). Constrained Markov decision processes. CRC press. https://www-sop.inria.fr/members/Eitan.Altman/PAPERS/h.pdf.
Archibald, T., McKinnon, K., and Thomas, L. (1995). On the generation of markov decision processes. Journal of the Operational Research Society, 46(3):354-361.
Ayoub, A., Jia, Z., Szepesvari, C., Wang, M., and Yang, L. (2020). Model-based reinforcement learning with value-targeted regression. In Proceedings of the International Conference on Machine Learning (ICML), pages 463-474.
Badrinath, K. P. and Kalathil, D. (2021). Robust reinforcement learning using least squares policy iteration with provable performance guarantees. In International Conference on Machine Learning, pages 511-520.
Bernhard, P. and Rapaport, A. (1995). On a theorem of danskin with an application to a theorem of von neumann-sion. Nonlinear Analysis: Theory, Methods & Applications, 24(8):1163-1181.
Bhandari, J. and Russo, D. (2021). On the linear convergence of policy gradient methods for finite mdps. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS), pages 2386-2394.
Bhandari, J., Russo, D., and Singal, R. (2018). A finite time analysis of temporal difference learning with linear function approximation. In Proceedings of the Conference on learning theory (COLT), pages 1691-1692.
Bhatnagar, S., Sutton, R. S., Ghavamzadeh, M., and Lee, M. (2009). Natural actor-critic algorithms. Automatica, 45(11):2471-2482.
Cayci, S., He, N., and Srikant, R. (2022). Finite-time analysis of entropy-regularized neural natural actor-critic algorithm. ArXiv:2206.00833.
Cen, S., Cheng, C., Chen, Y., Wei, Y., and Chi, Y. (2022). Fast global convergence of natural policy gradient methods with entropy regularization. *Operations Research*, 70(4):2563-2578.
Chen, Z., Zhou, Y., Chen, R.-R., and Zou, S. (2022). Sample and communication-efficient decentralized actor-critic algorithms with finite-time analysis. In Proceedings of the International Conference on Machine Learning (ICML), pages 3794-3834.
Eysenbach, B. and Levine, S. (2021). Maximum entropy rl (provably) solves some robust rl problems. In Proceedings of the International Conference on Learning Representations (ICLR).
Grand-Clément, J. and Kroer, C. (2021). Scalable first-order methods for robust mdps. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 12086-12094.
Guha, E. K. and Lee, J. D. (2023). Solving robust mdps through no-regret dynamics. ArXiv:2305.19035.
Ho, C. P., Petrik, M., and Wiesemann, W. (2021). Partial policy iteration for $\ell_1$-robust Markov decision processes. Journal of Machine Learning Research, 22(275):1-46.
Huh, J. and Lee, D. D. (2018). Efficient sampling with q-learning to guide rapidly exploring random trees. IEEE Robotics and Automation Letters, 3(4):3868-3875.
Iyengar, G. N. (2005). Robust dynamic programming. Mathematics of Operations Research, 30(2):257-280.
Kakade, S. (2001). A natural policy gradient. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1531-1538.
Kober, J., Bagnell, J. A., and Peters, J. (2013). Reinforcement learning in robotics: A survey. The International Journal of Robotics Research, 32(11):1238-1274.
Konda, V. R. and Tsitsiklis, J. N. (1999). Actor-critic algorithms. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1008-1014.
Kumar, N., Derman, E., Geist, M., Levy, K., and Mannor, S. (2023a). Policy gradient for rectangular robust markov decision processes. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips).
Kumar, N., Levy, K., Wang, K., and Mannor, S. (2022). Efficient policy iteration for robust markov decision processes via regularization. ArXiv:2205.14327.
Kumar, N., Levy, K., Wang, K., and Mannor, S. (2023b). An efficient solution to s-rectangular robust markov decision processes. *ArXiv:2301.13642*.
Kumar, N., Usmanova, I., Levy, K. Y., and Mannor, S. (2023c). Towards faster global convergence of robust policy gradient methods. In Sixteenth European Workshop on Reinforcement Learning.
Leonardos, S., Overman, W., Panageas, I., and Piliouras, G. (2022). Global convergence of multi-agent policy gradient in markov potential games. In *ICLR 2022 Workshop on Gamification and Multiagent Solutions*.
Li, G., Wu, W., Chi, Y., Ma, C., Rinaldo, A., and Wei, Y. (2023a). Sharp high-probability sample complexities for policy evaluation with linear function approximation. *ArXiv:2305.19001*.
Li, M., Sutter, T., and Kuhn, D. (2023b). Policy gradient algorithms for robust mdps with non-rectangular uncertainty sets. ArXiv:2305.19004.
Li, Y. and Lan, G. (2023). First-order policy optimization for robust policy evaluation. ArXiv:2307.15890.
Li, Y., Lan, G., and Zhao, T. (2023c). First-order policy optimization for robust markov decision process. ArXiv:2209.10579.
Mai, T. and Jaillet, P. (2021). Robust entropy-regularized markov decision processes. ArXiv:2112.15364.
Mankowitz, D. J., Levine, N., Jeong, R., Abdolmaleki, A., Springenberg, J. T., Shi, Y., Kay, J., Hester, T., Mann, T., and Riedmiller, M. (2019). Robust reinforcement learning for continuous control with model misspecification. In Proceedings of the International Conference on Learning Representations (ICLR).
Nilim, A. and El Ghaoui, L. (2005). Robust control of markov decision processes with uncertain transition matrices. Operations Research, 53(5):780-798.
Peng, X. B., Andrychowicz, M., Zaremba, W., and Abbeel, P. (2018). Sim-to-real transfer of robotic control with dynamics randomization. In 2018 IEEE international conference on robotics and automation (ICRA), pages 3803-3810. IEEE.
Perera, A. and Kamalaruban, P. (2021). Applications of reinforcement learning in energy systems. Renewable and Sustainable Energy Reviews, 137:110618.
Samsonov, S., Tiapkin, D., Naumov, A., and Moulines, E. (2023). Finite-sample analysis of the temporal difference learning. *ArXiv:2310.14286*.
Silver, D., Lever, G., Heess, N., Degris, T., Wierstra, D., and Riedmiller, M. (2014). Deterministic policy gradient algorithms. In Proceedings of the International conference on machine learning (ICML), pages 387-395.
Sutton, R. S., McAllester, D., Singh, S., and Mansour, Y. (1999). Policy gradient methods for reinforcement learning with function approximation. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 1057-1063.
Wang, Q., Ho, C. P., and Petrik, M. (2023). Policy gradient in robust mdps with global convergence guarantee. In Proceedings of the International Conference on Machine Learning (ICML), volume 202, pages 35763-35797.
Wang, T., Zhou, D., and Gu, Q. (2021). Provably efficient reinforcement learning with linear function approximation under adaptivity constraints. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips).
Wang, Y. and Zou, S. (2022). Policy gradient method for robust reinforcement learning. In Proceedings of the International Conference on Machine Learning (ICML), pages 23484-23526.
Wang, Y.-C. and Usher, J. M. (2005). Application of reinforcement learning for agent-based production scheduling. Engineering applications of artificial intelligence, 18(1):73-82.
Wiesemann, W., Kuhn, D., and Rustem, B. (2013). Robust markov decision processes. Mathematics of Operations Research, 38(1):153-183.
Xiao, L. (2022). On the convergence rates of policy gradient methods. The Journal of Machine Learning Research, 23(1):12887-12922.
Xu, T., Liang, Y., and Lan, G. (2021). Crpo: A new approach for safe reinforcement learning with convergence guarantee. In Proceedings of the International Conference on Machine Learning (ICML), pages 11480-11491.
Xu, T., Wang, Z., and Liang, Y. (2020). Improving sample complexity bounds for (natural) actor-critic algorithms. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 4358-4369.
Xu, X., Zuo, L., and Huang, Z. (2014). Reinforcement learning algorithms with function approximation: Recent advances and applications. Information Sciences, 261:1-31.
Zhang, J., Zhang, W., and Gu, Q. (2023). Optimal horizon-free reward-free exploration for linear mixture MDPs. In Proceedings of the International Conference on Machine Learning (ICML), volume 202, pages 41902-41930.
Zhou, D., Gu, Q., and Szepesvari, C. (2021). Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. In Proceedings of the Conference on Learning Theory (COLT), pages 4532-4576.
Zhou, R., Liu, T., Cheng, M., Kalathil, D., Kumar, P., and Tian, C. (2023). Natural actor-critic for robust reinforcement learning with function approximation. In Proceedings of the Thirty-seventh Conference on Neural Information Processing Systems (Neurips).
Zou, S., Xu, T., and Liang, Y. (2019). Finite-sample analysis for sarsa with linear function approximation. In Proceedings of the International Conference on Neural Information Processing Systems (Neurips), pages 8668-8678.
# Appendix
# Table of Contents
A Existing Complexity Results of Robust Policy Gradient
A.1 Complexity Results of (Wang et al., 2023)
A.2 Complexity Results of (Li et al., 2023b)
A.3 Extension to Non-rectangularity in (Li et al., 2023b) Also Applies to Our Work
A.4 Complexity Results of (Kumar et al., 2023c)
A.5 Complexity Results of (Guha and Lee, 2023)
A.6 Why Does (Guha and Lee, 2023) Require $s$-rectangularity
B Basic Properties of Entropy Regularized Robust MDP
C Lipschitz Properties
D Convergence Results of the Policy Updates
E Stochastic Approximation Errors
F Supporting Lemmas
G Proof of Proposition 1
H Proof of Proposition 2
I Proof of Proposition 3
J Proof of Proposition 4
K Proof of Theorem 1
L Proof of Corollary 1
M Proof of Corollary 2
N Proof of Theorem 2
O Experiments
O.1 Experiments on Small State Space under Deterministic Setting
O.2 Experiments on Large State Space
# A. Existing Complexity Results of Robust Policy Gradient
In this section, we explain the existing complexity results listed in Table 1. We also discuss the claim of (Li et al., 2023b; Guha and Lee, 2023) that their complexity results do not require a rectangularity assumption.
# A.1. Complexity Results of (Wang et al., 2023)
(Wang et al., 2023) proposes a double-loop robust policy gradient (DRPG) algorithm for robust MDPs with $s$-rectangular ambiguity sets, which applies projected gradient descent steps to update the policy $\pi$ in the outer loop and projected gradient ascent steps to update the transition kernel $p$ in the inner loop. Based on Theorem 3.3 of (Wang et al., 2023), to obtain an $\epsilon$-optimal robust policy, DRPG requires $T = \mathcal{O}(\epsilon^{-4})$ outer iterations of the policy updates, and the $t$-th outer iteration requires a transition kernel $p_t$ such that $J_{\rho}(\pi_t,p_t)\geq \max_{p\in \mathcal{P}}J_{\rho}(\pi_t,p) - \epsilon_t$, where the precision $\epsilon_t > 0$ satisfies $\epsilon_{t + 1}\leq \gamma \epsilon_t$ and $\epsilon_0\leq \sqrt{T}$. Such an $\epsilon_{t}$-accurate $p_t$ in turn requires $T_{t} = \mathcal{O}(\epsilon_{t}^{-2})$ iterations of the transition kernel updates based on Theorem 4.4 of (Wang et al., 2023), and $T_t$ grows exponentially since $T_{t}\geq \mathcal{O}(\gamma^{t}\epsilon_{0})^{-2}\geq \mathcal{O}(T^{-1}\gamma^{-2t})$. Hence, the total number of updates is at least
$$
\begin{aligned}
\max\Big(T, \sum_{t=0}^{T-1} T_t\Big) &\geq \max\Big(T,\; T^{-1}\sum_{t=0}^{T-1}\mathcal{O}\big(\gamma^{-2t}\big)\Big) \\
&= \max\Big(T,\; \mathcal{O}\big(T^{-1}\gamma^{-2T}\big)\Big) \\
&= \max\Big(\mathcal{O}\big(\epsilon^{-4}\big),\; \mathcal{O}\big(\epsilon^{4}\gamma^{-\mathcal{O}(\epsilon^{-4})}\big)\Big) \\
&= \mathcal{O}\big(\epsilon^{-4} + \epsilon^{4}\gamma^{-\mathcal{O}(\epsilon^{-4})}\big).
\end{aligned}
$$
Therefore, the iteration complexity (the total number of updates of both the transition kernel and the policy) is $T + \mathcal{O}(\epsilon^{-4} + \epsilon^4\gamma^{-\mathcal{O}(\epsilon^{-4})}) = \mathcal{O}(\epsilon^{-4} + \epsilon^4\gamma^{-\mathcal{O}(\epsilon^{-4})})$, which increases exponentially as $\epsilon \rightarrow 0^{+}$.
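The exponential blow-up comes entirely from the geometric sum $\sum_{t=0}^{T-1}\gamma^{-2t} = \frac{\gamma^{-2T}-1}{\gamma^{-2}-1}$, which is dominated by its last term $\gamma^{-2(T-1)}$. A quick numeric sanity check of this step, with an illustrative value $\gamma = 0.9$ (not from the paper):

```python
# Check the geometric-sum step: sum_{t=0}^{T-1} gamma^{-2t} equals
# (gamma^{-2T} - 1) / (gamma^{-2} - 1) and grows exponentially in T.
gamma = 0.9  # illustrative discount factor

def kernel_updates(T):
    return sum(gamma ** (-2 * t) for t in range(T))

def closed_form(T):
    return (gamma ** (-2 * T) - 1) / (gamma ** (-2) - 1)

for T in (10, 25, 50):
    assert abs(kernel_updates(T) - closed_form(T)) / closed_form(T) < 1e-9

# Going from T to 2T multiplies the total by roughly gamma^{-2T}:
# here 0.9^{-50} is about 194, so the sum explodes exponentially.
ratio = kernel_updates(50) / kernel_updates(25)
print(ratio)
```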
# A.2. Complexity Results of (Li et al., 2023b)
(Li et al., 2023b) proposes an actor-critic algorithm (see their Algorithm 4.1) with a double-loop structure similar to that of the DRPG algorithm (Wang et al., 2023). To achieve an $\epsilon$-optimal robust policy, this actor-critic algorithm requires $\mathcal{O}(\epsilon^{-4})$ outer policy updates and $\mathcal{O}(\epsilon^{-2})$ inner transition kernel updates per outer iteration, based on Theorems 4.5 and 3.8 of (Li et al., 2023b) respectively. Therefore, the total number of transition kernel updates is $\mathcal{O}(\epsilon^{-4})\cdot\mathcal{O}(\epsilon^{-2}) = \mathcal{O}(\epsilon^{-6})$, so the iteration complexity is $\mathcal{O}(\epsilon^{-4}) + \mathcal{O}(\epsilon^{-6}) = \mathcal{O}(\epsilon^{-6})$.
# A.3. Extension to Non-rectangularity in (Li et al., 2023b) Also Applies to Our Work
(Li et al., 2023b) extends their complexity results to non-rectangular ambiguity set $\mathcal{P}$ by defining the following degree of non-rectangularity.
$$
\delta_ {\mathcal {P}} := \max _ {p ^ {\prime} \in \mathcal {P}} \left[ \max _ {p _ {s} \in \mathcal {P} _ {s}} \langle \nabla_ {p} J _ {\rho} (\pi , p ^ {\prime}), p _ {s} \rangle - \max _ {p \in \mathcal {P}} \langle \nabla_ {p} J _ {\rho} (\pi , p ^ {\prime}), p \rangle \right] \tag {25}
$$
where $\mathcal{P}_s$ denotes the smallest $s$ -rectangular ambiguity set containing $\mathcal{P}$ .
Then the proof of the convergence for the inner transition kernel updates in (Li et al., 2023b) (see their Theorem 3.8) uses the following gradient dominance property.
$$
\max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - J _ {\rho} (\pi , p) \leq \left\| \frac {d _ {\rho} ^ {\pi , p _ {s} ^ {*}}}{d _ {\rho} ^ {\pi , p}} \right\| _ {\infty} \max _ {p _ {s} \in \mathcal {P} _ {s}} \langle \nabla_ {p} J _ {\rho} (\pi , p), p _ {s} - p \rangle \stackrel {(i)} {\leq} \frac {D}{1 - \gamma} \left[ \delta_ {\mathcal {P}} + \max _ {p ^ {\prime} \in \mathcal {P}} \langle \nabla_ {p} J _ {\rho} (\pi , p), p ^ {\prime} - p \rangle \right], \tag {26}
$$
where $p_s^* \in \arg \max_{p' \in \mathcal{P}_s} J_\rho(\pi, p')$ denotes the optimal transition kernel and (i) uses $D := \sup_{\pi \in \Pi, p \in \mathcal{P}} \|d_\rho^{\pi, p} / \rho\|_\infty < \infty$ and $d_\rho^{\pi, p}(s) \geq (1 - \gamma)\rho(s)$ . Compared with the gradient dominance property (Proposition 3) used in our convergence proof for $s$ -rectangular case, the above gradient dominance property involves the degree of non-rectangularity $\delta_{\mathcal{P}} > 0$ defined in eq. (25). Hence, we can also extend our convergence result to non-rectangular robust MDPs in the same way by replacing Proposition 3 with the above gradient dominance property (26), where the objective function $J_\rho$ should be changed to $J_{\rho,\tau}$ to fit our entropy regularized case.
# A.4. Complexity Results of (Kumar et al., 2023c)
(Kumar et al., 2023c) aims to solve the following robust MDP problem,
$$
\max _ {\pi} \min _ {(P, R) \in \mathcal {U}} \rho_ {P, R} ^ {\pi} := \mathbb {E} _ {\pi , p} \left[ \sum_ {t = 0} ^ {\infty} \gamma^ {t} R (s _ {t}, a _ {t}) \Big | s _ {0} \sim \rho \right],
$$
where $\mathcal{U}$ denotes the ambiguity set and $\rho_{P,R}^{\pi}$ denotes the value function under policy $\pi$, transition kernel $P$ and reward function $R$. This work assumes oracle access to the worst-case transition kernel and reward $(P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi}) \in \arg \inf_{(P,R) \in \mathcal{U}} \rho_{P,R}^{\pi}$ and assumes that the robust value function $\rho_{\mathcal{U}}^{\pi} \coloneqq \min_{(P,R) \in \mathcal{U}} \rho_{P,R}^{\pi}$ has a Lipschitz-continuous gradient $\nabla_{\pi} \rho_{\mathcal{U}}^{\pi} = \nabla_{\pi} \rho_{P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi}}^{\pi}$. Under these assumptions, which are impractical in many applications, (Kumar et al., 2023c) proved that $\mathcal{O}(\epsilon^{-1})$ iterations of the following projected gradient ascent steps suffice to obtain an $\epsilon$-optimal robust policy.
$$
\pi_{k+1} := \operatorname{proj}_{\Pi}\left(\pi_{k} + \eta \nabla_{\pi} \rho_{\mathcal{U}}^{\pi_{k}}\right).
$$
However, (Kumar et al., 2023c) does not discuss how to obtain the worst-case transition kernel and reward $(P_{\mathcal{U}}^{\pi}, R_{\mathcal{U}}^{\pi})$, so the total iteration complexity, counting the updates of all the variables ($\pi$, $P$ and $R$), is unknown.
# A.5. Complexity Results of (Guha and Lee, 2023)
(Guha and Lee, 2023) proposes a gradient-based no-regret RL algorithm that runs for $T$ time steps. In each time step, both the policy and the transition kernels are updated using $T_{\mathcal{O}}$ projected gradient descent steps. The convergence rate is $\mathcal{O}(T^{-1/2} + T_{\mathcal{O}}^{-1/2})$ based on Theorem 7.2 of (Guha and Lee, 2023). Hence, obtaining an $\epsilon$-optimal robust policy requires $T, T_{\mathcal{O}} = \mathcal{O}(\epsilon^{-2})$, which means both the policy and the transition kernels are updated $TT_{\mathcal{O}} = \mathcal{O}(\epsilon^{-4})$ times, so the iteration complexity is also $\mathcal{O}(\epsilon^{-4})$.
# A.6. Why Does (Guha and Lee, 2023) Require $s$ -rectangularity
The gradient-based no-regret RL algorithm (Guha and Lee, 2023) claims to globally converge without rectangularity condition. However, their global convergence relies on the following gradient-dominance condition (see their Lemma 6.5), which requires $s$ -rectangularity as will be elaborated soon.
$$
V _ {W} (\mu) - V _ {W ^ {*}} (\mu) \leq \frac {- 1}{1 - \gamma} \left\| \frac {d _ {\mu} ^ {W}}{\mu} \right\| _ {\infty} \min _ {\bar {W} \in \mathcal {W}} \left[ \left(\bar {W} - W\right) ^ {\top} \nabla_ {W} V _ {W} (\mu) \right], \tag {27}
$$
where $W, \mu, \mathcal{W}, V_W(\mu), d_\mu^W, \left\| \frac{d_\mu^W}{\mu} \right\|_\infty$ correspond to our transition kernel $p$ , initial state distribution $\rho$ , ambiguity set $\mathcal{P}$ , objective function $J_{\rho}(\pi, p)$ (with fixed policy $\pi$ ), occupancy measure $d_{\rho}^{\pi, p}$ and constant $D := \sup_{\pi \in \Pi, p \in \mathcal{P}} \| d_{\rho}^{\pi, p} / \rho \|_\infty < \infty$ respectively.
Their proof of the above gradient dominance property (27) made the following mistake at the beginning of page 16.
$$
\sum_ {s ^ {\prime}, a, s} \left[ \gamma^ {t} d _ {\mu} ^ {W} (s) \pi (a \mid s) \min _ {s ^ {\prime}} \left(A ^ {W} \left(s ^ {\prime}, a, s\right)\right) \right] = \min _ {\bar {W} \in \mathcal {W}} \sum_ {s ^ {\prime}, a, s} \left[ \gamma^ {t} d _ {\mu} ^ {W} (s) \pi (a \mid s) \mathbb {P} _ {\bar {W}} \left(s ^ {\prime}, a, s\right) \left(A ^ {W} \left(s ^ {\prime}, a, s\right)\right) \right] \tag {28}
$$
where $\mathbb{P}_{\bar{W}} = \bar{W}$ and $A^{W}(s^{\prime},a,s) := \gamma V_{W}(s^{\prime}) + r(s,a) - V_{W}(s)$ ((Guha and Lee, 2023) uses a reward function $r$ instead of our cost $c$). The above equality uses the fact that the right side is minimized when $\mathbb{P}_{\bar{W}}(s^{\prime},a,s) = 1$ for $s^{\prime} \in \arg \min_{s^{\prime}} A^{W}(s^{\prime},a,s)$. However, this is not true since $A^{W}(s^{\prime},a,s) < 0$ is possible. Furthermore, even if $\inf_{s,a,s^{\prime}} A^{W}(s^{\prime},a,s) \geq 0$, such a deterministic choice of $\mathbb{P}_{\bar{W}}$ does not necessarily satisfy the constraint that $\bar{W} \in \mathcal{W}$.
The correct proof of the above gradient dominance condition (27) is shown in the proof of Lemma 4.3 in (Wang et al., 2023). At the end of their proof, they use an inequality that requires $s$ -rectangularity condition.
# B. Basic Properties of Entropy Regularized Robust MDP
We quote the following perfect duality result for entropy regularized robust MDPs from Theorem 3.2 of (Mai and Jaillet, 2021).
Lemma 2. Under Assumption 1, the following perfect duality holds for $J_{\rho,\tau}(\pi,p)$ , the objective function of entropy regularized robust MDP defined in eq. (3).
$$
\min _ {\pi \in \Pi} \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi , p) = \max _ {p \in \mathcal {P}} \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p) \tag {29}
$$
Proof. Assumption 1 states that $\mathcal{P}$ is $s$-rectangular, compact and convex. Hence, each $\mathcal{P}_s$ is compact and convex, so all the conditions of Theorem 3.2 of (Mai and Jaillet, 2021) hold, and its conclusion of perfect duality follows.
To facilitate further discussion, we follow (Cen et al., 2022) and define a variant of $Q_{\tau}$ (defined in eq. (5)) as follows.
$$
\widetilde{Q}_{\tau}(\pi, p; s, a) := \mathbb{E}_{s^{\prime} \sim p(\cdot | s, a)} \left[ c(s, a, s^{\prime}) + \gamma V_{\tau}(\pi, p; s^{\prime}) \right] \tag {30}
$$
It can be directly seen that $\widetilde{Q}_{\tau}(\pi, p; s, a) = Q_{\tau}(\pi, p; s, a) - \tau \ln \pi(a|s)$ .
Lemma 3. If $c(s, a, s') \in [0,1]$, then the functions $J_{\rho,\tau}, V_{\tau}, F_{\rho,\tau}$ and $\widetilde{Q}_{\tau}$ defined by eqs. (3), (4), (6) and (30) respectively have the following ranges for any $\pi \in \Pi$, $p \in \mathcal{P}$, $s \in \mathcal{S}$ and $a \in \mathcal{A}$.
$$
J_{\rho, \tau}(\pi, p), V_{\tau}(\pi, p; s), F_{\rho, \tau}(p) \in \left[ -\frac{\tau \ln |\mathcal{A}|}{1 - \gamma}, \frac{1}{1 - \gamma} \right] \tag {31}
$$
$$
\widetilde {Q} _ {\tau} (\pi , p; s, a) \in \left[ - \frac {\gamma \tau \ln | \mathcal {A} |}{1 - \gamma}, \frac {1}{1 - \gamma} \right] \tag {32}
$$
Proof. We rewrite the function $V_{\tau}$ as follows.
$$
\begin{aligned}
V_{\tau}(\pi, p; s) &\stackrel{(i)}{=} \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^{t} \big[ c_{t} + \tau \ln \pi(a_{t} \mid s_{t}) \big] \,\Big|\, s_{0} = s \Big] \\
&\stackrel{(ii)}{=} \mathbb{E}\Big[ \sum_{t=0}^{\infty} \gamma^{t} \Big( c_{t} + \tau \sum_{a} \pi(a | s_{t}) \ln \pi(a | s_{t}) \Big) \,\Big|\, s_{0} = s \Big],
\end{aligned}
$$
where (i) uses eq. (4) and (ii) uses $a_{t} \sim \pi(\cdot|s_{t})$ conditioned on $s_{t}$ . Since $c(s, a, s') \in [0,1]$ and the negative entropy $\sum_{a} \pi(a|s_{t}) \ln \pi(a|s_{t}) \in [-\ln |\mathcal{A}|, 0]$ , the range (31) holds for the function $V_{\tau}$ , and thus also holds for the functions $J_{\rho,\tau}(\pi, p) = \mathbb{E}_{s \sim \rho} V_{\tau}(\pi, p; s)$ and $F_{\rho,\tau}(p) = \min_{\pi \in \Pi} J_{\rho,\tau}(\pi, p)$ .
Then the range (32) can be proved as follows.
$$
\begin{aligned}
\widetilde{Q}_{\tau}(\pi, p; s, a) &\stackrel{(i)}{=} \mathbb{E}_{s^{\prime} \sim p(\cdot | s, a)} \big[ c(s, a, s^{\prime}) + \gamma V_{\tau}(\pi, p; s^{\prime}) \big] \\
&\stackrel{(ii)}{\in} \Big[ 0 + \gamma \Big( -\frac{\tau \ln |\mathcal{A}|}{1 - \gamma} \Big),\; 1 + \frac{\gamma}{1 - \gamma} \Big] = \Big[ -\frac{\gamma \tau \ln |\mathcal{A}|}{1 - \gamma}, \frac{1}{1 - \gamma} \Big],
\end{aligned}
$$
where (i) uses eq. (30) and (ii) uses $c(s,a,s') \in [0,1]$ and the range (31).
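As a sanity check of the range (31), one can evaluate $V_\tau$ directly on a single-state MDP, where the expression above collapses to a geometric series: $V_\tau = \frac{1}{1-\gamma}\big(\sum_a \pi(a)\bar{c}(a) + \tau \sum_a \pi(a)\ln\pi(a)\big)$. A minimal numeric sketch, where the discount, entropy weight, $|\mathcal{A}|$, costs and policies are all illustrative:

```python
import math
import random

random.seed(0)
gamma, tau, A = 0.9, 0.3, 4  # illustrative discount, entropy weight, |A|
lower = -tau * math.log(A) / (1 - gamma)
upper = 1 / (1 - gamma)

ok = True
for _ in range(1000):
    c = [random.random() for _ in range(A)]          # costs in [0, 1]
    raw = [random.random() + 1e-6 for _ in range(A)]
    pi = [r / sum(raw) for r in raw]                 # random policy
    # Single-state MDP: per-step cost plus tau * negative entropy,
    # discounted forever, so V = per_step / (1 - gamma).
    per_step = sum(pi[a] * c[a] for a in range(A)) + tau * sum(
        pi[a] * math.log(pi[a]) for a in range(A))
    V = per_step / (1 - gamma)
    ok = ok and (lower - 1e-12 <= V <= upper + 1e-12)
print(ok)
```

The bound holds because the per-step term lies in $[-\tau\ln|\mathcal{A}|, 1]$, exactly as in the proof above.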
Lemma 4. For any $p \in \mathcal{P}$ , the optimal policy $\pi_p \coloneqq \arg \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p)$ is unique and has the following lower bound.
$$
\ln \pi_ {p} (a | s) \geq - \frac {\ln | \mathcal {A} | + 1 / \tau}{1 - \gamma}; \forall s \in \mathcal {S}, a \in \mathcal {A}. \tag {33}
$$
Proof. Based on (Cen et al., 2022), $\pi_p\coloneqq \arg \min_{\pi \in \Pi}J_{\rho ,\tau}(\pi ,p)$ is unique with the following expression.
$$
\pi_ {p} (a | s) = \frac {\exp \left[ - \widetilde {Q} _ {\tau} \left(\pi_ {p} , p ; s , a\right) / \tau \right]}{\sum_ {a ^ {\prime}} \exp \left[ - \widetilde {Q} _ {\tau} \left(\pi_ {p} , p ; s , a ^ {\prime}\right) / \tau \right]}, \tag {34}
$$
where $\widetilde{Q}_{\tau}$ is defined by eq. (30). Therefore, eq. (33) can be proved as follows.
$$
\ln \pi_{p}(a | s) \stackrel{(i)}{=} \ln \left( \frac{\exp[-\widetilde{Q}_{\tau}(\pi_{p}, p; s, a) / \tau]}{\sum_{a^{\prime}} \exp[-\widetilde{Q}_{\tau}(\pi_{p}, p; s, a^{\prime}) / \tau]} \right) \stackrel{(ii)}{\geq} \ln \left( \frac{\exp[-1 / (\tau (1 - \gamma))]}{|\mathcal{A}| \exp[\gamma \ln |\mathcal{A}| / (1 - \gamma)]} \right) = -\frac{\ln |\mathcal{A}| + 1 / \tau}{1 - \gamma}, \tag {35}
$$
where (i) uses eq. (34) and (ii) uses eq. (32).
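The bound (33) is easy to verify numerically: plug extreme $\widetilde{Q}_\tau$ values from the range (32) into the softmax expression (34) and compare $\ln \pi_p(a|s)$ against $-(\ln|\mathcal{A}| + 1/\tau)/(1-\gamma)$. A minimal sketch, where the values of $\gamma$, $\tau$ and $|\mathcal{A}|$ are illustrative:

```python
import math

gamma, tau, A = 0.9, 0.2, 4  # illustrative constants

# Adversarial Q-values at the extremes of the range (32): one action at the
# upper end 1/(1-gamma), the rest at the lower end -gamma*tau*ln|A|/(1-gamma).
q_hi = 1 / (1 - gamma)
q_lo = -gamma * tau * math.log(A) / (1 - gamma)
Q = [q_hi] + [q_lo] * (A - 1)

# Softmax policy of eq. (34), computed in log space for stability.
logits = [-q / tau for q in Q]
m = max(logits)
log_z = m + math.log(sum(math.exp(l - m) for l in logits))
log_pi = [l - log_z for l in logits]

bound = -(math.log(A) + 1 / tau) / (1 - gamma)  # right side of eq. (33)
print(min(log_pi), bound)
```

Even for this worst-case spread of Q-values, the smallest log-probability stays (slightly) above the bound, matching the tightness of step (ii) in eq. (35).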
# C. Lipschitz Properties
Lemma 5. The occupancy measure $d_{\rho}^{\pi, p}(s) \coloneqq (1 - \gamma) \sum_{t=0}^{\infty} \gamma^{t} \mathbb{P}_{\pi, p}(s_{t} = s | s_{0} \sim \rho)$ satisfies the following Lipschitz properties for any $\pi, \pi' \in \Pi$ and $p, p' \in \mathcal{P}$ .
$$
\left\| d _ {\rho} ^ {\pi^ {\prime}, p} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \leq \frac {\gamma}{1 - \gamma} \max _ {s} \left\| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \right\| _ {1} \tag {36}
$$
$$
\left\| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \leq \frac {\gamma}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \tag {37}
$$
Proof. The occupancy measure $d_{\rho}^{\pi ,p}$ satisfies the following equation based on Theorem 3.2 of (Altman, 2004).
$$
d _ {\rho} ^ {\pi , p} \left(s ^ {\prime}\right) = (1 - \gamma) \rho \left(s ^ {\prime}\right) + \gamma \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right); \forall \pi , p, s ^ {\prime}. \tag {38}
$$
Therefore, for any $\pi, \pi' \in \Pi$ and $p \in \mathcal{P}$ , we have
$$
\begin{aligned}
&\left\| d_{\rho}^{\pi^{\prime}, p} - d_{\rho}^{\pi, p} \right\|_{1} = \sum_{s^{\prime}} \left| d_{\rho}^{\pi^{\prime}, p}(s^{\prime}) - d_{\rho}^{\pi, p}(s^{\prime}) \right| \\
&\stackrel{(i)}{=} \gamma \sum_{s^{\prime}} \Big| \sum_{s, a} d_{\rho}^{\pi^{\prime}, p}(s) \pi^{\prime}(a | s) p(s^{\prime} | s, a) - \sum_{s, a} d_{\rho}^{\pi, p}(s) \pi(a | s) p(s^{\prime} | s, a) \Big| \\
&= \gamma \sum_{s^{\prime}} \Big| \sum_{s, a} \big[ d_{\rho}^{\pi^{\prime}, p}(s) - d_{\rho}^{\pi, p}(s) \big] \pi^{\prime}(a | s) p(s^{\prime} | s, a) + \sum_{s, a} d_{\rho}^{\pi, p}(s) \big[ \pi^{\prime}(a | s) - \pi(a | s) \big] p(s^{\prime} | s, a) \Big| \\
&\leq \gamma \sum_{s} \left| d_{\rho}^{\pi^{\prime}, p}(s) - d_{\rho}^{\pi, p}(s) \right| + \gamma \sum_{s, a} d_{\rho}^{\pi, p}(s) \left| \pi^{\prime}(a | s) - \pi(a | s) \right| \\
&\leq \gamma \| d_{\rho}^{\pi^{\prime}, p} - d_{\rho}^{\pi, p} \|_{1} + \gamma \max_{s} \| \pi^{\prime}(\cdot | s) - \pi(\cdot | s) \|_{1},
\end{aligned}
$$
where (i) uses eq. (38). Then eq. (36) can be proved by rearranging the above inequality.
Next, we will prove eq. (37). For any $\pi \in \Pi$ and $p,p^{\prime}\in \mathcal{P}$ , we have
$$
\begin{array}{l} \left\| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \right\| _ {1} \\ = \sum_ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi , p ^ {\prime}} \left(s ^ {\prime}\right) - d _ {\rho} ^ {\pi , p} \left(s ^ {\prime}\right) \right| \\ \stackrel {(i)} {=} \gamma \sum_ {s ^ {\prime}} \left| \sum_ {s, a} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - \sum_ {s, a} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) \right| \\ = \gamma \sum_ {s ^ {\prime}} \left| \sum_ {s, a} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) \left[ p ^ {\prime} \left(s ^ {\prime} | s, a\right) - p \left(s ^ {\prime} | s, a\right) \right] + \sum_ {s, a} \left[ d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \pi (a | s) p \left(s ^ {\prime} | s, a\right) \right| \\ \leq \gamma \sum_ {s, a, s ^ {\prime}} d _ {\rho} ^ {\pi , p ^ {\prime}} (s) \pi (a | s) \left| p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - p \left(s ^ {\prime} \mid s, a\right) \right| + \gamma \sum_ {s, a, s ^ {\prime}} \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) \left| d _ {\rho} ^ {\pi , p ^ {\prime}} (s) - d _ {\rho} ^ {\pi , p} (s) \right| \\ \leq \gamma \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| d _ {\rho} ^ {\pi , p ^ {\prime}} - d _ {\rho} ^ {\pi , p} \| _ {1}, \\ \end{array}
$$
where (i) uses eq. (38). Then eq. (37) can be proved by rearranging the above inequality.
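Both bounds are easy to check numerically by solving the fixed-point equation (38) on a small random MDP. A minimal sketch of eq. (36), where the 2-state, 2-action instance and the seed are illustrative assumptions:

```python
import random

random.seed(1)
S, A, gamma = 2, 2, 0.9  # illustrative toy MDP sizes and discount

def rand_dist(n):
    w = [random.random() + 1e-6 for _ in range(n)]
    z = sum(w)
    return [x / z for x in w]

rho = rand_dist(S)
p = [[rand_dist(S) for _ in range(A)] for _ in range(S)]   # p[s][a][s']
pi1 = [rand_dist(A) for _ in range(S)]                     # pi[s][a]
pi2 = [rand_dist(A) for _ in range(S)]

def occupancy(pi, iters=2000):
    """Iterate the fixed-point equation (38) for d_rho^{pi,p}; the map is a
    gamma-contraction, so the iterates converge to the occupancy measure."""
    d = rho[:]
    for _ in range(iters):
        d = [
            (1 - gamma) * rho[s2]
            + gamma * sum(d[s] * pi[s][a] * p[s][a][s2]
                          for s in range(S) for a in range(A))
            for s2 in range(S)
        ]
    return d

d1, d2 = occupancy(pi1), occupancy(pi2)
lhs = sum(abs(a - b) for a, b in zip(d1, d2))
rhs = gamma / (1 - gamma) * max(
    sum(abs(pi1[s][a] - pi2[s][a]) for a in range(A)) for s in range(S))
print(lhs, rhs)  # eq. (36): lhs <= rhs
```

Replacing the two policies by two transition kernels and reusing the same fixed-point iteration checks eq. (37) in the same way.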
Lemma 6. The function $J_{\rho, \tau}(\pi, p)$ defined by eq. (3) has the following Lipschitz properties for any $\pi, \pi' \in \Pi, p, p' \in \mathcal{P}$ .
$$
\left| J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - J _ {\rho , \tau} (\pi , p) \right| \leq L _ {\pi} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \tag {39}
$$
$$
\left| J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \right| \leq L _ {p} \| p ^ {\prime} - p \| \tag {40}
$$
$$
\left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \right\| \leq \ell_ {\pi} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \tag {41}
$$
$$
\left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - \nabla_ {p} J _ {\rho , \tau} \left(\pi , p\right) \right\| \leq \ell_ {p} \| p ^ {\prime} - p \|, \tag {42}
$$
where $L_{\pi} \coloneqq \frac{\sqrt{|\mathcal{A}|}(2 - \gamma + \gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ , $L_{p} \coloneqq \frac{\sqrt{|\mathcal{S}|}(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ , $\ell_{\pi} \coloneqq \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ and $\ell_{p} \coloneqq \frac{2\gamma|\mathcal{S}|(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ .
Proof. First, we prove eq. (39).
$$
\begin{array}{l} \left| J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - J _ {\rho , \tau} (\pi , p) \right| \\ = \frac {1}{1 - \gamma} \Big | \sum_ {s, a, s ^ {\prime}} \left(d _ {\rho} ^ {\pi^ {\prime}, p} (s) \pi^ {\prime} (a | s) p \left(s ^ {\prime} \mid s, a\right) [ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) ] - d _ {\rho} ^ {\pi , p} (s) \pi (a | s) p \left(s ^ {\prime} \mid s, a\right) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) ]\right) \\ \end{array}
$$
$$
\begin{array}{l} \leq \frac {1}{1 - \gamma} \sum_ {s, a, s'} \left| d_{\rho}^{\pi', p}(s) - d_{\rho}^{\pi, p}(s) \right| \pi'(a|s) p(s'|s,a) \left[ \left| c(s,a,s') \right| + \tau \left| \ln \pi'(a|s) \right| \right] \\ + \frac {1}{1 - \gamma} \sum_{s,a,s'} d_{\rho}^{\pi,p}(s) p(s'|s,a) \big( |\pi'(a|s) - \pi(a|s)| \, |c(s,a,s')| + \tau |\pi'(a|s)\ln\pi'(a|s) - \pi(a|s)\ln\pi(a|s)| \big) \\ \stackrel{(i)}{\leq} \frac{1}{1-\gamma}\sum_s |d_{\rho}^{\pi',p}(s) - d_{\rho}^{\pi,p}(s)| - \frac{\tau}{1-\gamma}\sum_s |d_{\rho}^{\pi',p}(s) - d_{\rho}^{\pi,p}(s)| \sum_a \pi'(a|s)\ln\pi'(a|s) \\ + \frac{1+\tau}{1-\gamma}\sum_{s,a} d_{\rho}^{\pi,p}(s) |\ln\pi'(a|s) - \ln\pi(a|s)| \\ \stackrel{(ii)}{\leq} \frac{\gamma(1+\tau\ln|\mathcal{A}|)}{(1-\gamma)^2}\max_s \|\pi'(\cdot|s) - \pi(\cdot|s)\|_1 + \frac{2}{1-\gamma}\sum_s d_{\rho}^{\pi,p}(s) \|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|_1 \\ \stackrel{(iii)}{\leq} \frac{2-\gamma+\gamma\tau\ln|\mathcal{A}|}{(1-\gamma)^2}\max_s \|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|_1 \\ \leq \frac{\sqrt{|\mathcal{A}|}(2-\gamma+\gamma\tau\ln|\mathcal{A}|)}{(1-\gamma)^2}\max_s \|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|, \\ \end{array}
$$
where (i) uses $c(s,a,s') \in [0,1]$ and Lemma 16, (ii) uses eq. (36), $\tau \in [0,1]$ and $-\sum_{a} \pi'(a|s) \ln \pi'(a|s) \in [0, \ln |\mathcal{A}|]$ , and (iii) uses eq. (67).
Then eq. (40) can be proved as follows.
$$
\begin{array}{l} \left| J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \right| \\ \stackrel {(i)} {=} \left| \int_ {0} ^ {1} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {p} J _ {\rho , \tau} \left(\pi , p _ {u}\right) d u \right| \\ \stackrel {(i i)} {\leq} \frac {1}{1 - \gamma} \Big | \int_ {0} ^ {1} \sum_ {s, a, s ^ {\prime}} [ p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \big [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p _ {u}; s ^ {\prime}) \big ] d u \Big | \\ \stackrel {(i i i)} {\leq} \frac {1}{1 - \gamma} \int_ {0} ^ {1} \sum_ {s, a, s ^ {\prime}} | p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) | d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \left[ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi (a | s) \right] d u \\ \leq \frac {1}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \int_ {0} ^ {1} \sum_ {s, a} d _ {\rho} ^ {\pi , p _ {u}} (s) \pi (a | s) \left[ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi (a | s) \right] d u \\ \stackrel {(i v)} {\leq} \frac {1}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \left[ \frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} + \tau \ln | \mathcal {A} | \right] \\ \leq \frac {\sqrt {| \mathcal {S} |} (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \| p ^ {\prime} - p \|, \\ \end{array}
$$
where (i) uses the shorthand $p_u \coloneqq up' + (1 - u)p$ for $u \in [0,1]$ and the fundamental theorem of calculus, (ii) uses eq. (9), (iii) uses $c(s,a,s') \in [0,1]$ and eq. (31), which imply that $|c(s,a,s') + \tau \ln \pi(a|s) + \gamma V_{\tau}(\pi,p_u;s')| \leq \frac{\max(1,\gamma\tau\ln|\mathcal{A}|)}{1 - \gamma} - \tau \ln \pi(a|s)$ , and (iv) uses $-\sum_{a}\pi(a|s)\ln \pi(a|s) \in [0,\ln |\mathcal{A}|]$ .
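The Lipschitz bound (40) can likewise be checked numerically on a small random instance (an illustrative sketch, not the paper's code; $J_{\rho,\tau}$ is evaluated exactly from the discounted visitation distribution, and $\|p'-p\|$ is the Euclidean norm of the flattened kernel):

```python
import numpy as np

rng = np.random.default_rng(1)
S, A, gamma, tau = 4, 3, 0.8, 0.5

def random_kernel():
    p = rng.random((S, A, S))
    return p / p.sum(axis=2, keepdims=True)

def J(pi, p, rho, c):
    # J = 1/(1-gamma) * E_{d,pi,p}[c + tau*ln pi], with d the discounted visitation
    P_pi = np.einsum('sa,sat->st', pi, p)
    d = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
    cost = np.einsum('sat,sat->sa', p, c) + tau * np.log(pi)
    return np.einsum('s,sa,sa->', d, pi, cost) / (1 - gamma)

pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)
rho = np.full(S, 1.0 / S)
c = rng.random((S, A, S))   # costs in [0, 1]
p, p2 = random_kernel(), random_kernel()

L_p = np.sqrt(S) * (1 + tau * np.log(A)) / (1 - gamma) ** 2
lhs = abs(J(pi, p2, rho, c) - J(pi, p, rho, c))
rhs = L_p * np.linalg.norm(p2 - p)
assert lhs <= rhs
```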
Then eq. (41) can be proved as follows.
$$
\begin{array}{l} \left\| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \right\| \\ \leq \sqrt {| \mathcal {S} |} \sum_ {s, a} \max _ {s ^ {\prime}} \left| \nabla_ {p} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) (s, a, s ^ {\prime}) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) (s, a, s ^ {\prime}) \right| \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i)} {=} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \max _ {s ^ {\prime}} \left| d _ {\rho} ^ {\pi^ {\prime}, p} (s) \pi^ {\prime} (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) ] \right. \\ \left. - d _ {\rho} ^ {\pi , p} (s) \pi (a | s) [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma V _ {\tau} (\pi , p; s ^ {\prime}) ] \right| \\ \end{array}
$$
$$
\begin{array}{l} = \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \max _ {s ^ {\prime}} \left| \left[ d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) \right] \pi^ {\prime} (a | s) \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi^ {\prime} (a | s) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) \right] \right. \\ + d _ {\rho} ^ {\pi , p} (s) [ \pi^ {\prime} (a | s) - \pi (a | s) ] [ c (s, a, s ^ {\prime}) + \gamma V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) ] + \gamma d _ {\rho} ^ {\pi , p} (s) \pi (a | s) [ V _ {\tau} (\pi^ {\prime}, p; s ^ {\prime}) - V _ {\tau} (\pi , p; s ^ {\prime}) ] \\ + \tau d _ {\rho} ^ {\pi , p} (s) [ \pi^ {\prime} (a | s) \ln \pi^ {\prime} (a | s) - \pi (a | s) \ln \pi (a | s) ] \Big | \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \sum_ {s, a} \left[ | d _ {\rho} ^ {\pi^ {\prime}, p} (s) - d _ {\rho} ^ {\pi , p} (s) | \pi^ {\prime} (a | s) \left(\frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} - \tau \ln \pi^ {\prime} (a | s)\right) \right. \\ + \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} d _ {\rho} ^ {\pi , p} (s) | \ln \pi^ {\prime} (a | s) - \ln \pi (a | s) | + \gamma L _ {\pi} d _ {\rho} ^ {\pi , p} (s) \pi (a | s) \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| \\ \left. + \tau d _ {\rho} ^ {\pi , p} (s) | \ln \pi^ {\prime} (a | s) - \ln \pi (a | s) | \right] \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i i)} {\leq} \frac {\gamma \sqrt {| \mathcal {S} |}}{(1 - \gamma) ^ {2}} \max _ {s} \| \pi^ {\prime} (\cdot | s) - \pi (\cdot | s) \| _ {1} \Big (\frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \tau \ln | \mathcal {A} | \Big) \\ + \frac {\sqrt {| \mathcal {S} |}}{1 - \gamma} \Big [ \frac {\max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} + \frac {\sqrt {| \mathcal {A} |} (2 \gamma - \gamma^ {2} + \gamma^ {2} \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {2}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \| _ {1} \\ \left. + \tau \max _ {s} \left\| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \right\| _ {1} \right] \\ \end{array}
$$
$$
\stackrel {(i v)} {\leq} \frac {\sqrt {| \mathcal {S} | | \mathcal {A} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \max _ {s} \| \ln \pi^ {\prime} (\cdot | s) - \ln \pi (\cdot | s) \|, \tag {43}
$$
where (i) uses eq. (9), (ii) uses $c(s,a,s') \in [0,1]$ , eq. (31), Lemma 16 and $|V_{\tau}(\pi',p;s') - V_{\tau}(\pi,p;s')| \leq L_{\pi}\max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|$ (eq. (39) when $\rho(s) = I\{s = s'\}$ ), (iii) uses eq. (36), $L_{\pi} \coloneqq \frac{\sqrt{|\mathcal{A}|}(2 - \gamma + \gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^2}$ and $-\sum_{a}\pi'(a|s)\ln\pi'(a|s) \in [0,\ln|\mathcal{A}|]$ , and (iv) uses $\gamma, \tau \in [0,1]$ , $\max_s\|\pi'(\cdot|s) - \pi(\cdot|s)\|_1 \leq \max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|_1 \leq \sqrt{|\mathcal{A}|}\max_s\|\ln\pi'(\cdot|s) - \ln\pi(\cdot|s)\|$ (the first $\leq$ comes from eq. (67)).
Then, eq. (42) can be proved as follows.
$$
\begin{array}{l} \left\| \nabla_p J_{\rho,\tau}(\pi, p') - \nabla_p J_{\rho,\tau}(\pi, p) \right\| \\ \leq \sqrt{|\mathcal{S}|} \sum_{s,a} \max_{s'} \left| \nabla_p J_{\rho,\tau}(\pi, p')(s,a,s') - \nabla_p J_{\rho,\tau}(\pi, p)(s,a,s') \right| \\ \stackrel{(i)}{=} \frac{\sqrt{|\mathcal{S}|}}{1-\gamma} \sum_{s,a} \max_{s'} \left| d_\rho^{\pi,p'}(s) \pi(a|s) [c(s,a,s') + \tau\ln\pi(a|s) + \gamma V_\tau(\pi,p';s')] \right. \\ \left. - d_\rho^{\pi,p}(s)\pi(a|s)[c(s,a,s') + \tau\ln\pi(a|s) + \gamma V_\tau(\pi,p;s')] \right| \\ \leq \frac{\sqrt{|\mathcal{S}|}}{1-\gamma} \sum_{s,a} \pi(a|s) \max_{s'} \left| \gamma d_\rho^{\pi,p'}(s) [V_\tau(\pi,p';s') - V_\tau(\pi,p;s')] \right. \\ \left. + [d_\rho^{\pi,p'}(s) - d_\rho^{\pi,p}(s)] [c(s,a,s') + \tau\ln\pi(a|s) + \gamma V_\tau(\pi,p;s')] \right| \\ \stackrel{(ii)}{\leq} \frac{\sqrt{|\mathcal{S}|}}{1-\gamma} \sum_{s,a}\pi(a|s)\Big[ \gamma L_p d_\rho^{\pi,p'}(s)\|p'-p\| + \Big(\frac{\max(1,\gamma\tau\ln|\mathcal{A}|)}{1-\gamma} - \tau\ln\pi(a|s)\Big) |d_\rho^{\pi,p'}(s) - d_\rho^{\pi,p}(s)| \Big] \\ \stackrel{(iii)}{\leq} \frac{\gamma L_p \sqrt{|\mathcal{S}|}}{1-\gamma}\|p'-p\| + \frac{\sqrt{|\mathcal{S}|}}{1-\gamma}\Big(\frac{1+\gamma\tau\ln|\mathcal{A}|}{1-\gamma} + \tau\ln|\mathcal{A}|\Big)\frac{\gamma}{1-\gamma}\max_{s,a}\|p'(\cdot|s,a) - p(\cdot|s,a)\|_1 \\ \stackrel{(iv)}{\leq} \frac{\gamma|\mathcal{S}|(1+\tau\ln|\mathcal{A}|)}{(1-\gamma)^3}\|p'-p\| + \frac{\gamma|\mathcal{S}|}{(1-\gamma)^3}(1+\tau\ln|\mathcal{A}|)\|p'-p\| \\ \leq \frac{2\gamma|\mathcal{S}|(1+\tau\ln|\mathcal{A}|)}{(1-\gamma)^3}\|p'-p\|, \\ \end{array}
$$
where (i) uses eq. (9), (ii) uses $c(s,a,s') \in [0,1]$ , eq. (31) and $|V_{\tau}(\pi ,p';s') - V_{\tau}(\pi ,p;s')|\leq L_p\| p' - p\|$ (eq. (40) when $\rho(s) = I\{s = s'\}$ ), (iii) uses $-\sum_{a} \pi(a|s) \ln \pi(a|s) \in [0, \ln |\mathcal{A}|]$ and eq. (37), and (iv) uses $L_p \coloneqq \frac{\sqrt{|\mathcal{S}|}(1 + \tau \ln |\mathcal{A}|)}{(1 - \gamma)^2}$ .
Lemma 7. $F_{\rho ,\tau}(p)\coloneqq \min_{\pi \in \Pi}J_{\rho ,\tau}(\pi ,p)$ is differentiable with $\nabla F_{\rho ,\tau}(p) = \nabla_p J_{\rho ,\tau}(\pi_p,p)$ , where $\pi_p \coloneqq \arg\min_{\pi \in \Pi} J_{\rho,\tau}(\pi, p)$ .
Proof. Note that $p \in \mathcal{P}$ where $\mathcal{P}$ is a subset of the Banach space $(\Delta^{\mathcal{S}})^{\mathcal{S} \times \mathcal{A}}$ . Also, we have proved in Lemma 6 that $J_{\rho, \tau}(\pi, p)$ is differentiable and $\nabla_p J_{\rho, \tau}(\pi, p)$ is Lipschitz continuous. Hence, the conditions of Danskin's theorem (Bernhard and Rapaport, 1995) hold, and applying it yields this lemma.
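Lemma 7 admits a quick finite-difference check on a small instance: solve the inner minimization by soft value iteration, evaluate $\nabla_p J_{\rho,\tau}(\pi_p, p)$ via the gradient formula of eq. (9), and compare against a central difference of $F_{\rho,\tau}$ along a feasible direction. This is an illustrative sketch; the MDP, solver iteration counts and tolerances below are assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
S, A, gamma, tau = 4, 2, 0.8, 0.5
c = rng.random((S, A, S))
rho = np.full(S, 1.0 / S)

def soft_solve(p, iters=500):
    # soft value iteration for min_pi J_{rho,tau}(pi, p); returns (pi_p, V_tau)
    V = np.zeros(S)
    for _ in range(iters):
        Q = np.einsum('sat,sat->sa', p, c + gamma * V)    # Q~(s,a) = E[c + gamma*V]
        V = -tau * np.log(np.exp(-Q / tau).sum(axis=1))   # soft minimum over actions
    pi = np.exp(-(Q - Q.min(axis=1, keepdims=True)) / tau)
    return pi / pi.sum(axis=1, keepdims=True), V

def visitation(pi, p):
    P_pi = np.einsum('sa,sat->st', pi, p)
    return (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)

def F(p):
    pi, _ = soft_solve(p)
    d = visitation(pi, p)
    cost = np.einsum('sat,sat->sa', p, c) + tau * np.log(pi)
    return np.einsum('s,sa,sa->', d, pi, cost) / (1 - gamma)

p = rng.random((S, A, S)) + 0.5
p /= p.sum(axis=2, keepdims=True)
pi_p, V = soft_solve(p)
# gradient formula: grad_p J(s,a,s') = d(s)*pi(a|s)*[c + tau*ln pi + gamma*V(s')]/(1-gamma)
grad = visitation(pi_p, p)[:, None, None] * pi_p[:, :, None] \
       * (c + tau * np.log(pi_p)[:, :, None] + gamma * V[None, None, :]) / (1 - gamma)

delta = rng.standard_normal((S, A, S))
delta -= delta.mean(axis=2, keepdims=True)   # keep each row of p + eps*delta summing to 1
eps = 1e-5
fd = (F(p + eps * delta) - F(p - eps * delta)) / (2 * eps)
g_dot = (grad * delta).sum()
assert abs(fd - g_dot) < 1e-4 * max(1.0, abs(g_dot))
```

The agreement of the directional finite difference with $\langle \nabla_p J(\pi_p, p), \Delta\rangle$ , with no extra term from the dependence of $\pi_p$ on $p$ , is exactly the envelope property that Danskin's theorem asserts.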
Lemma 8. The stochastic gradient $\widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)$ obtained from Algorithm 1 satisfies the following estimation error bound.
$$
\left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) - \nabla F _ {\rho , \tau} \left(p _ {t}\right) \right\| \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2} \tag {44}
$$
Proof. Based on Lemma 7, $\nabla F_{\rho,\tau}(p_t) = \nabla_p J_{\rho,\tau}(\pi_t^*, p_t)$ where $\pi_t^* := \arg \min_{\pi \in \Pi} J_{\rho,\tau}(\pi, p_t)$ . Hence, eq. (44) can be proved as follows.
$$
\begin{array}{l} \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t} ^ {*}, p _ {t}) \| \leq \| \nabla_ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t} ^ {*}, p _ {t}) \| + \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {t}, p _ {t}) \| \\ \stackrel {(i)} {\leq} \ell_ {\pi} \max _ {s} \| \ln \pi_ {t} ^ {*} (\cdot | s) - \ln \pi_ {t} (\cdot | s) \| + \epsilon_ {2} \\ \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}, \\ \end{array}
$$
where (i) uses eq. (41).

# D. Convergence Results of the Policy Updates
Lemma 9 (Convergence of policy updates for small space). For the policy optimization problem $\min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p_t)$ , perform the NPG step (7) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ . Suppose the $Q$ function is approximated with error $\epsilon_1$ , i.e., $\| \widehat{Q}_{t,k'} - Q_{\tau}(\pi_{t,k'}, p_t) \|_{\infty} \leq \epsilon_1$ for all $k' = 0, 1, \ldots, k-1$ . Then the policy $\pi_{t,k}$ satisfies the following properties.
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {t, k}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {k} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {45}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t, k} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \right\| _ {\infty} \leq \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}, \tag {46}
$$
$$
\ln \pi_ {t, k} \geq \ln \pi_ {\min } := - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {47}
$$
Proof. Note that the NPG step (7) can be rewritten into the following form, as used in (Cen et al., 2022).
$$
\pi_ {t, k + 1} (\cdot | s) \propto \pi_ {t, k} (\cdot | s) ^ {1 - \frac {\eta \tau}{1 - \gamma}} \exp \left[ - \frac {\eta \left(\widehat {Q} _ {t , k} (s , \cdot) - \tau \ln \pi_ {t , k} (\cdot | s)\right)}{1 - \gamma} \right],
$$
where $\widehat{Q}_{t,k}(s,a) - \tau \ln \pi_{t,k}(a|s)\approx \widetilde{Q}_{\tau}(\pi_{t,k},p_t;s,a) = Q_{\tau}(\pi_{t,k},p_t;s,a) - \tau \ln \pi_{t,k}(a|s)$ with $\sup_{s,a}\left|\left[\widehat{Q}_{t,k}(s,a) - \tau \ln \pi_{t,k}(a|s)\right] - \widetilde{Q}_{\tau}(\pi_{t,k},p_t;s,a)\right|\leq \epsilon_1$ . Therefore, based on Theorem 2 of (Cen et al., 2022), we obtain the convergence rates (45) and (46) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ as follows.
$$
\| \widetilde {Q} _ {\tau} (\pi_ {t} ^ {*}, p _ {t}) - \widetilde {Q} _ {\tau} (\pi_ {t, k}, p _ {t}) \| _ {\infty} \leq C _ {1} \gamma^ {k} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}} \stackrel {(i)} {\leq} \frac {\gamma^ {k} (1 + \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}},
$$
$$
\| \pi_ {t} ^ {*} - \pi_ {t, k} \| _ {\infty} \stackrel {(i i)} {\leq} \| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \| _ {\infty} \leq 2 C _ {1} \tau^ {- 1} \gamma^ {k - 1} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}} \stackrel {(i i i)} {\leq} \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}},
$$
where (i) and (iii) use $C_1 \coloneqq \| \widetilde{Q}_{\tau}(\pi_t^*, p_t) - \widetilde{Q}_{\tau}(\pi_{t,0}, p_t) \|_{\infty} \leq \frac{1 + \gamma \tau \ln |\mathcal{A}|}{1 - \gamma}$ (the inequality follows from eq. (32)), and (ii) uses the following inequality for any $u, v \in (0, 1]$ .
$$
| \ln u - \ln v | = \ln \max (u, v) - \ln \min (u, v) = \int_ {\min (u, v)} ^ {\max (u, v)} \frac {d s}{s} \geq \max (u, v) - \min (u, v) = | u - v |. \tag {48}
$$
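Eq. (48) can be confirmed numerically (an illustrative one-off check, not needed for the proof):

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(1e-6, 1.0, size=1000)
v = rng.uniform(1e-6, 1.0, size=1000)
# |ln u - ln v| = integral of 1/s over [min(u,v), max(u,v)], and 1/s >= 1 on (0, 1]
assert np.all(np.abs(np.log(u) - np.log(v)) >= np.abs(u - v))
```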
Note that $\ln \pi_t^* = \ln \pi_{p_t} \geq -\frac{\ln |\mathcal{A}| + 1 / \tau}{1 - \gamma}$ based on eq. (33). This along with eq. (46) implies that for any $k \geq 1$
$$
\ln \pi_ {t, k} \geq - \frac {\ln | \mathcal {A} | + 1 / \tau}{1 - \gamma} - \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}} \geq - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}.
$$
The above bound also holds for $k = 0$ as $\pi_{t,0}(a|s) = 1 / |\mathcal{A}|$ . This proves eq. (47).
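With exact $Q$ estimates ( $\epsilon_1 = 0$ ), the NPG update of Lemma 9 with $\eta = \frac{1-\gamma}{\tau}$ reduces to $\pi_{t,k+1}(\cdot|s) \propto \exp(-\widetilde{Q}_{\tau}(\pi_{t,k},p_t;s,\cdot)/\tau)$ , and the $\gamma$ -rate contraction in eq. (45) can be observed numerically. The following is an illustrative sketch on a random MDP (sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
S, A, gamma, tau = 4, 3, 0.8, 0.5
c = rng.random((S, A, S))
p = rng.random((S, A, S)); p /= p.sum(axis=2, keepdims=True)

def soft_Q_tilde(pi):
    # Q~(s,a) = E_{s'}[c + gamma*V(s')], V(s) = sum_a pi(a|s)(Q~(s,a) + tau*ln pi(a|s))
    r_pi = np.einsum('sa,sat,sat->s', pi, p, c) + tau * (pi * np.log(pi)).sum(axis=1)
    P_pi = np.einsum('sa,sat->st', pi, p)
    V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
    return np.einsum('sat,sat->sa', p, c + gamma * V)

def npg_step(pi):
    # exact NPG with eta = (1-gamma)/tau: pi_{k+1}(.|s) proportional to exp(-Q~_k(s,.)/tau)
    Q = soft_Q_tilde(pi)
    new = np.exp(-(Q - Q.min(axis=1, keepdims=True)) / tau)
    return new / new.sum(axis=1, keepdims=True)

pi_star = np.full((S, A), 1.0 / A)
for _ in range(300):                      # run to numerical convergence
    pi_star = npg_step(pi_star)
Q_star = soft_Q_tilde(pi_star)

pi = np.full((S, A), 1.0 / A)
errs = []
for k in range(20):
    errs.append(np.abs(soft_Q_tilde(pi) - Q_star).max())
    pi = npg_step(pi)

# eq. (45) with epsilon_1 = 0 predicts contraction at rate gamma
assert errs[10] <= errs[0] * gamma ** 10 + 1e-6
```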
Lemma 10 (Convergence of policy updates for large space). For the policy optimization problem $\min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p_{\xi_t})$ , perform the NPG steps (22)-(23) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ . Suppose the $Q$ function is approximated with error $\epsilon_1$ , i.e., $\sup_{s, a} |\phi(s, a)^\top w_{t,k'} - Q_\tau(\pi_{t,k'}, p_t; s, a)| \leq \epsilon_1$ for all $k' = 0, 1, \ldots, k-1$ . Then the policy $\pi_{t,k}$ satisfies the following properties.
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {t, k}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {k} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {49}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t, k} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t, k} \right\| _ {\infty} \leq \frac {2 \gamma^ {k - 1} (1 / \tau + \gamma \ln | \mathcal {A} |)}{1 - \gamma} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}, \tag {50}
$$
$$
\ln \pi_ {t, k} \geq \ln \pi_ {\min } := - \frac {3 \ln | \mathcal {A} | + 3 / \tau}{1 - \gamma} - \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {51}
$$
Proof. It suffices to prove that the NPG steps (22)-(23) are equivalent to the NPG step (7) with $\widehat{Q}_{t,k}(s,a) = \phi(s,a)^{\top}w_{t,k}$ , so that we can directly apply Lemma 9.
The NPG steps (22)-(23) imply the NPG step (7) as follows.
$$
\begin{array}{l} \pi_ {t, k + 1} (\cdot | s) \propto \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} u _ {t , k + 1}}{1 - \gamma} \right] \\ \propto \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} (u _ {t , k} + w _ {t , k})}{1 - \gamma} \right] \\ = \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} u _ {t , k}}{1 - \gamma} \right] \exp \left[ - \frac {\phi (s , \cdot) ^ {\top} w _ {t , k}}{1 - \gamma} \right] \\ \propto \pi_ {t, k} (\cdot | s) \exp \left[ - \frac {\widehat {Q} _ {t , k} (s , \cdot)}{1 - \gamma} \right]. \\ \end{array}
$$
Conversely, by iterating the NPG step (7) over $k = 0,1,\ldots ,K - 1$ , we obtain that
$$
\pi_ {t, K} (a | s) \propto \exp \left[ - \frac {1}{1 - \gamma} \sum_ {k = 0} ^ {K - 1} \widehat {Q} _ {t, k} (s, a) \right] = \exp \left[ - \frac {1}{1 - \gamma} \phi (s, a) ^ {\top} \sum_ {k = 0} ^ {K - 1} w _ {t, k} \right].
$$
Denote $u_{t,K} \coloneqq \sum_{k=0}^{K-1} w_{t,k}$ , which satisfies eq. (23). Then the above update rule becomes eq. (22).
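The equivalence argument can be sketched numerically: with $u_{t,0} = 0$ and $u_{t,k+1} = u_{t,k} + w_{t,k}$ , the softmax policy $\propto \exp(-\phi^\top u_{t,K}/(1-\gamma))$ coincides with the product of the multiplicative NPG factors. The features and the $w_{t,k}$ below are random stand-ins (assumptions for illustration, not TD outputs):

```python
import numpy as np

rng = np.random.default_rng(5)
d, S, A, gamma, K = 6, 3, 2, 0.9, 4
phi = rng.random((S, A, d))
ws = [0.1 * rng.standard_normal(d) for _ in range(K)]   # stand-ins for the w_{t,k}

def softmax_pol(u):
    logits = -np.einsum('sad,d->sa', phi, u) / (1 - gamma)
    z = np.exp(logits - logits.max(axis=1, keepdims=True))
    return z / z.sum(axis=1, keepdims=True)

# accumulate u_{t,k+1} = u_{t,k} + w_{t,k} starting from u_{t,0} = 0
u = np.zeros(d)
for w in ws:
    u += w
pi_from_u = softmax_pol(u)   # eq. (22)-style policy

# multiplicative NPG step: pi <- pi * exp(-phi^T w_k/(1-gamma)), renormalized
pi = np.full((S, A), 1.0 / A)
for w in ws:
    pi = pi * np.exp(-np.einsum('sad,d->sa', phi, w) / (1 - gamma))
    pi /= pi.sum(axis=1, keepdims=True)

assert np.allclose(pi, pi_from_u)
```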
# E. Stochastic Approximation Errors
In this section, we analyze the approximation errors of estimating the $Q$ function and the transition gradient in both Algorithm 2 (for small state space) and Algorithm 3 (for large state space).
Lemma 11 (Approximation error of $Q_{\tau}$ for large space). Fix $p \in \mathcal{P}$ and $\pi \in \Pi$ . Suppose that the regularized cost $c(s, a, s') + \tau \ln \pi(a|s)$ is bounded and that $\sup_{s,a} \| \phi(s,a) \| \leq 1$ . For any $\delta_1 \in (0,1)$ and $\epsilon_1 > 2\zeta$ where $\zeta := \sup_{\pi \in \Pi, p \in \mathcal{P}, s \in \mathcal{S}, a \in \mathcal{A}, \tau \in [0,1]} |\phi(s,a)^\top w_{\pi,p}^* - Q_\tau(\pi, p; s, a)|$ denotes the linear $Q$ function approximation error, update $w_n \in \mathbb{R}^d$ by applying the TD rule (21) with $T_1 \geq \mathcal{O}(\epsilon_1^{-2})$ iterations and stepsize $\alpha = \mathcal{O}[\ln^{-1}(\epsilon_1^{-1})]$ . Then $\overline{w}_{T_1} := \frac{1}{T_1} \sum_{n=1}^{T_1} w_n$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\phi(s,a)^\top \overline{w}_{T_1} - Q_\tau(\pi, p; s, a)| \leq \epsilon_1$ with probability at least $1 - \delta_1$ .
Proof. The optimal parameter $w_{\pi, p}^{*}$ for estimating $Q_{\tau}(\pi, p; s, a) \approx \phi(s, a)^{\top} w$ has the following expression. (Li et al., 2023a)
$$
w _ {\pi , p} ^ {*} := \mathbb {E} _ {\pi , p} \left[ \phi (s, a) \left(\phi (s, a) - \gamma \phi \left(s ^ {\prime}, a ^ {\prime}\right)\right) ^ {\top} \right] ^ {- 1} \mathbb {E} \left[ \phi (s, a) \left(c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s)\right) \right], \tag {52}
$$
where the expectation $\mathbb{E}_{\pi,p}$ is taken over $s \sim \mu_{\pi,p}, a \sim \pi(\cdot|s), s' \sim p(\cdot|s,a), a' \sim \pi(\cdot|s')$ where $\mu_{\pi,p}$ is the stationary distribution under policy $\pi$ and transition kernel $p$ .
Based on Theorem 1 of (Li et al., 2023a), $\| \overline{w}_{T_1} - w_{\pi, p}^* \| \leq \mathcal{O}(T_1^{-1/2}) \leq \epsilon_1 / 2$ for $T_1 = \mathcal{O}(\epsilon_1^{-2})$ . Therefore,
$$
\begin{array}{l} \max _ {s, a} \left| \phi (s, a) ^ {\top} \bar {w} _ {T _ {1}} - Q _ {\tau} (\pi , p; s, a) \right| \\ \leq \max _ {s, a} \left[ | \phi (s, a) ^ {\top} (\overline {{w}} _ {T _ {1}} - w _ {\pi , p} ^ {*}) | + | \phi (s, a) ^ {\top} w _ {\pi , p} ^ {*} - Q _ {\tau} (\pi , p; s, a) | \right] \\ \stackrel {(i)} {\leq} \left\| \bar {w} _ {T _ {1}} - w _ {\pi , p} ^ {*} \right\| + \zeta \leq \frac {\epsilon_ {1}}{2} + \zeta \stackrel {(i i)} {\leq} \epsilon_ {1}. \\ \end{array}
$$
where (i) uses $\zeta := \sup_{\pi \in \Pi, p \in \mathcal{P}, s \in \mathcal{S}, a \in \mathcal{A}, \tau \in [0,1]} |\phi(s, a)^{\top} w_{\pi, p}^{*} - Q_{\tau}(\pi, p; s, a)|$ and the assumption that $\sup_{s, a} \| \phi(s, a) \| \leq 1$ , and (ii) uses $\epsilon_1 > 2\zeta$ .
Lemma 12 (Approximation error of $Q_{\tau}$ for small space). Fix $\pi \in \Pi$ and $p \in \mathcal{P}$ . Suppose that the regularized cost $c(s, a, s') + \tau \ln \pi(a|s)$ is bounded. For any $\delta_1 \in (0, 1)$ and $\epsilon_1 > 0$ , update $q_n$ by applying the TD rule (15) with $T_1 \geq \mathcal{O}(\epsilon_1^{-2})$ iterations and stepsize $\alpha = \mathcal{O}[\ln^{-1}(\epsilon_1^{-1})]$ . Then $\overline{q}_{T_1} := \frac{1}{T_1} \sum_{n=1}^{T_1} q_n$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\overline{q}_{T_1}(s, a) - Q_{\tau}(\pi, p; s, a)| \leq \epsilon_1$ with probability at least $1 - \delta_1$ .
Proof. In Lemma 11, let $\phi(s,a) \in \{0,1\}^d$ ( $d = |\mathcal{S}||\mathcal{A}|$ ) be a one-hot vector with the $(s,a)$ -th entry being 1. Then this lemma becomes a special case of Lemma 11 in the following aspects:
(1) The TD rule (21) becomes the TD rule (15) with $q_{n} = w_{n}$ .
(2) $\overline{q}_{T_1} = \overline{w}_{T_1}$ .
(3) $Q_{\tau}(\pi, p) = w_{\pi, p}^{*}$ and thus $\zeta$ becomes 0.
(4) The condition of Lemma 11 that $\sup_{s,a}\| \phi (s,a)\| \leq 1$ is satisfied.
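A minimal tabular instance of Lemma 12 can be simulated directly: run SARSA-style TD with the regularized cost $c + \tau\ln\pi$ along one long trajectory and average the iterates. The run length, stepsize and tolerance below are demonstration choices, not the constants of the lemma.

```python
import numpy as np

rng = np.random.default_rng(6)
S, A, gamma, tau = 3, 2, 0.8, 0.5
c = rng.random((S, A, S))
p = rng.random((S, A, S)); p /= p.sum(axis=2, keepdims=True)
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)

# exact Q_tau: Q(s,a) = tau*ln pi(a|s) + E_{s'}[c + gamma*sum_{a'} pi(a'|s') Q(s',a')]
M = np.eye(S * A) - gamma * np.einsum('sat,tb->satb', p, pi).reshape(S * A, S * A)
b = (tau * np.log(pi) + np.einsum('sat,sat->sa', p, c)).reshape(-1)
Q_exact = np.linalg.solve(M, b).reshape(S, A)

# averaged TD along one trajectory; only the second half of iterates is averaged
q = np.zeros((S, A)); q_bar = np.zeros((S, A))
alpha, T1 = 0.05, 200_000
s = 0; a = rng.choice(A, p=pi[s])
for n in range(T1):
    s2 = rng.choice(S, p=p[s, a]); a2 = rng.choice(A, p=pi[s2])
    td_target = c[s, a, s2] + tau * np.log(pi[s, a]) + gamma * q[s2, a2]
    q[s, a] += alpha * (td_target - q[s, a])
    if n >= T1 // 2:
        q_bar += q / (T1 - T1 // 2)
    s, a = s2, a2

assert np.abs(q_bar - Q_exact).max() < 0.3
```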
Lemma 13 (Approximation error of $\nabla_{\xi}J_{\rho,\tau}(\pi,p_{\xi})$ for large space). Fix $\pi \in \Pi$ and $p_{\xi} \in \mathcal{P}$ . Use the linear parameterization $p_{\xi} = \Psi\xi$ and assume it satisfies $\inf_{s,a,s'} p_{\xi}(s'|s,a) > p_{\min}$ for a constant $p_{\min} > 0$ . Suppose that the regularized cost $c(s,a,s') + \tau \ln \pi(a|s)$ is bounded, and that the $Q$ function estimate $Q_{\tau}(\pi,p_{\xi};s,a) \approx \phi(s,a)^{\top}\overline{w}_{T_1}$ satisfies $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\phi(s,a)^{\top}\overline{w}_{T_1} - Q_{\tau}(\pi,p_{\xi};s,a)| \leq \epsilon_1$ for $\epsilon_1 > 2\zeta$ . Then for any $\delta_2 \in (0,1)$ and $\epsilon_2 \geq \frac{3\gamma\|\Psi\|\epsilon_1}{(1-\gamma)p_{\min}}$ , the stochastic transition gradient (19) with $N \geq \mathcal{O}(\epsilon_2^{-2})$ and $H \geq \mathcal{O}[\ln(\epsilon_2^{-1})]$ has approximation error $\|\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi,p_{\xi}) - \nabla_{\xi}J_{\rho,\tau}(\pi,p_{\xi})\| \leq \epsilon_2$ with probability at least $1 - \delta_2$ , which requires $NH = \mathcal{O}[\epsilon_2^{-2}\ln(\epsilon_2^{-1})]$ samples.
Proof. The stochastic gradient (19) can be rewritten as $\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi,p_{\xi}) = \frac{1}{N}\sum_{i=1}^{N}g(s_{i,H_i},a_{i,H_i},s_{i,H_i+1},a_{i,H_i+1})$ with
$$
g (s, a, s ^ {\prime}, a ^ {\prime}) := \frac {\psi (s , a , s ^ {\prime})}{(1 - \gamma) p _ {\xi} (s ^ {\prime} | s , a)} [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \phi (s ^ {\prime}, a ^ {\prime}) ^ {\top} \bar {w} _ {T _ {1}} ] \tag {53}
$$
Since $\mathbb{P}(H_i = h) \propto \gamma^h (h = 0,1,\ldots ,H - 1)$ , $s_{i,H_i} \sim d_{\rho ,H}^{\pi ,p}(s) \coloneqq \frac{1 - \gamma}{1 - \gamma^H}\sum_{h = 0}^{H - 1}\gamma^h\mathbb{P}_{\pi ,p}(s_h = s|s_0 \sim \rho)$ . Therefore,
$$
\mathbb {E} g \left(s _ {i, H _ {i}}, a _ {i, H _ {i}}, s _ {i, H _ {i} + 1}, a _ {i, H _ {i} + 1}\right) = \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s ^ {\prime}, a ^ {\prime}), \tag {54}
$$
where the expectation $\mathbb{E}_{d_{\rho ,H}^{\pi ,p_{\xi}}}$ is taken over $s\sim d_{\rho ,H}^{\pi ,p_{\xi}},a\sim \pi (\cdot |s),s'\sim p_{\xi}(\cdot |s,a),a'\sim \pi (\cdot |s')$ .
Note that
$$
\begin{array}{l} \left\| g (s, a, s', a') \right\| \\ \stackrel {(i)} {\leq} \frac {\| \psi (s, a, s') \|}{(1 - \gamma) p_{\min}} \left[ | c (s, a, s') + \tau \ln \pi (a | s) | + \gamma | \phi (s', a')^{\top} \bar{w}_{T_1} - Q_{\tau} (\pi, p; s', a') | + \gamma | \widetilde{Q}_{\tau} (\pi, p; s', a') | + \gamma | \tau \ln \pi (a' | s') | \right] \\ \stackrel {(i i)} {\leq} \frac {\| \Psi \|}{(1 - \gamma) p_{\min}} \left[ \mathcal{O} (1) + \gamma \left(\epsilon_1 + \frac {1 + \gamma \tau \ln | \mathcal{A} |}{1 - \gamma}\right) \right] \\ \leq \mathcal{O} (1), \tag {55} \\ \end{array}
$$
where (i) uses $\widetilde{Q}_{\tau}(\pi, p; s', a') = Q_{\tau}(\pi, p; s', a') - \tau \ln \pi(a'|s')$ (based on eqs. (5) and (30)) and the assumption that $p_{\xi}(s'|s, a) \geq p_{\min}$ , and (ii) uses eq. (32), $\sup_{s \in \mathcal{S}, a \in \mathcal{A}} |\phi(s, a)^{\top} \overline{w}_{T_1} - Q_{\tau}(\pi, p_{\xi}; s, a)| \leq \epsilon_1$ , and the assumed boundedness of the regularized cost, which implies $|c(s, a, s') + \tau \ln \pi(a|s)| \leq \mathcal{O}(1)$ and $|\tau \ln \pi(a'|s')| \leq \mathcal{O}(1)$ .
Therefore, applying Hoeffding's inequality to the i.i.d. variables $\{g(s_{i,H_i},a_{i,H_i},s_{i,H_i + 1},a_{i,H_i + 1})\}_{i = 1}^N$ with bound (55), the following bound holds with probability at least $1 - \delta_2$ .
$$
\left\| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} \left(\pi , p _ {\xi}\right) - \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s', a') \right\| \leq \mathcal {O} \left[ \frac {1}{\sqrt {N}} \ln \left(\frac {2}{\delta_ {2}}\right) \right] \stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {56}
$$
where (i) holds for $N = \mathcal{O}(\epsilon_2^{-2})$ . Note that the transition gradient (18) can be rewritten as follows.
$$
\nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) = \frac {1}{1 - \gamma} \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} \left[ \frac {\psi (s , a , s ^ {\prime})}{p _ {\xi} \left(s ^ {\prime} \mid s , a\right)} [ c (s, a, s ^ {\prime}) + \tau \ln \pi (a \mid s) + \gamma Q _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) ] \right], \tag {57}
$$
where the expectation $\mathbb{E}_{d_{\rho}^{\pi ,p_{\xi}}}$ is taken over $s\sim d_{\rho}^{\pi ,p_{\xi}},a\sim \pi (\cdot |s),s'\sim p_{\xi}(\cdot |s,a),a'\sim \pi (\cdot |s')$ and we used $\mathbb{E}_{a'\sim \pi (\cdot |s')}[Q_{\tau}(\pi ,p_{\xi};s',a')|s'] = V_{\tau}(\pi ,p_{\xi};s')$ based on eqs. (4) and (5).
$$
\begin{array}{l} \left\| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) - \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) \right\| \\ \leq \| \widehat {\nabla} _ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) - \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s', a') \| + \| \mathbb {E} _ {d _ {\rho , H} ^ {\pi , p _ {\xi}}} g (s, a, s', a') - \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} g (s, a, s', a') \| \\ + \left\| \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} g (s, a, s', a') - \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}) \right\| \\ \end{array}
$$
$$
\stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) \sum_ {s} | d _ {\rho , H} ^ {\pi , p} (s) - d _ {\rho} ^ {\pi , p} (s) | + \frac {\gamma}{1 - \gamma} \left\| \mathbb {E} _ {d _ {\rho} ^ {\pi , p _ {\xi}}} \left[ \frac {\psi (s , a , s ^ {\prime})}{p _ {\xi} (s ^ {\prime} | s , a)} \big (\phi (s ^ {\prime}, a ^ {\prime}) ^ {\top} \overline {{w}} _ {T _ {1}} - Q _ {\tau} (\pi , p _ {\xi}; s ^ {\prime}, a ^ {\prime}) \big) \right] \right\|
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \sum_ {s} \Big | \sum_ {t = 0} ^ {H - 1} \gamma^ {t} \Big (\frac {1}{1 - \gamma^ {H}} - 1 \Big) \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) - \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) \Big | + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ \leq \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \sum_ {s} \left[ \sum_ {t = 0} ^ {H - 1} \frac {\gamma^ {H + t}}{1 - \gamma^ {H}} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) + \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \mathbb {P} _ {\pi , p} (s _ {t} = s | s _ {0} \sim \rho) \right] + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ = \frac {\epsilon_ {2}}{3} + \mathcal {O} (1) (1 - \gamma) \left[ \sum_ {t = 0} ^ {H - 1} \frac {\gamma^ {H + t}}{1 - \gamma^ {H}} + \sum_ {t = H} ^ {+ \infty} \gamma^ {t} \right] + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \\ \leq \frac {\epsilon_ {2}}{3} + \mathcal {O} (\gamma^ {H}) + \frac {\gamma \| \Psi \| \epsilon_ {1}}{p _ {\min} (1 - \gamma)} \stackrel {(i i i)} {\leq} \epsilon_ {2}, \\ \end{array}
$$
where (i) uses eqs. (53), (55), (56) and (57), (ii) uses $d_{\rho}^{\pi,p}(s) \coloneqq (1 - \gamma)\sum_{t=0}^{\infty}\gamma^{t}\mathbb{P}_{\pi,p}(s_{t} = s|s_{0} \sim \rho)$ defined in eq. (10), $d_{\rho,H}^{\pi,p}(s) \coloneqq \frac{1 - \gamma}{1 - \gamma^{H}}\sum_{t=0}^{H-1}\gamma^{t}\mathbb{P}_{\pi,p}(s_{t} = s|s_{0} \sim \rho)$ , $\inf_{s,a,s'}p_{\xi}(s'|s,a) > p_{\min}$ and $\sup_{s \in \mathcal{S}, a \in \mathcal{A}}|\phi(s,a)^{\top}\overline{w}_{T_1} - Q_{\tau}(\pi,p_{\xi};s,a)| \leq \epsilon_1$ , and (iii) uses $H = \mathcal{O}[\ln(\epsilon_2^{-1})]$ and $\epsilon_1 \leq \frac{p_{\min}\epsilon_2(1 - \gamma)}{3\gamma\|\Psi\|}$ .
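The geometric-horizon sampling scheme underlying this proof can be sketched numerically: drawing $H_i$ with $\mathbb{P}(H_i = h) \propto \gamma^h$ ( $h < H$ ) and rolling out $H_i$ steps yields states distributed as $d_{\rho,H}$ , whose $\ell_1$ distance to $d_\rho$ decays as $\mathcal{O}(\gamma^H)$ (the $2\gamma^H$ constant below follows the computation above; the MDP and sample counts are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(7)
S, A, gamma, H = 4, 2, 0.8, 30
p = rng.random((S, A, S)); p /= p.sum(axis=2, keepdims=True)
pi = rng.random((S, A)); pi /= pi.sum(axis=1, keepdims=True)
rho = np.full(S, 1.0 / S)
P_pi = np.einsum('sa,sat->st', pi, p)

# exact d_rho and its H-truncated version d_{rho,H}
d_rho = (1 - gamma) * np.linalg.solve(np.eye(S) - gamma * P_pi.T, rho)
probs_t = rho.copy(); d_H = np.zeros(S)
for h in range(H):
    d_H += gamma ** h * probs_t
    probs_t = P_pi.T @ probs_t
d_H *= (1 - gamma) / (1 - gamma ** H)

# Monte Carlo: draw H_i with P(H_i = h) proportional to gamma^h, keep the final state
w = gamma ** np.arange(H); w /= w.sum()
counts = np.zeros(S)
for _ in range(20_000):
    h = rng.choice(H, p=w); s = rng.choice(S, p=rho)
    for _ in range(h):
        a = rng.choice(A, p=pi[s]); s = rng.choice(S, p=p[s, a])
    counts[s] += 1

assert np.abs(counts / counts.sum() - d_H).sum() < 0.05     # samples follow d_{rho,H}
assert np.abs(d_H - d_rho).sum() <= 2 * gamma ** H + 1e-12  # O(gamma^H) truncation error
```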
Lemma 14 (Approximation error of $\nabla_{p}J_{\rho,\tau}(\pi,p)$ for small space). Fix $\pi \in \Pi$ and $p \in \mathcal{P}$ . Suppose that the $Q$ function estimate $\overline{q}_{T_1} \approx Q_\tau(\pi,p)$ satisfies $\| \overline{q}_{T_1} - Q_\tau(\pi,p) \|_\infty \leq \epsilon_1$ . Then for any $\delta_2 \in (0,1)$ and $\epsilon_2 \geq \frac{3\gamma\epsilon_1\sqrt{|\mathcal{S}|}}{1-\gamma}$ , the stochastic transition gradient (16) with $N \geq \mathcal{O}(\epsilon_2^{-2})$ and $H \geq \mathcal{O}[\ln (\epsilon_2^{-1})]$ has approximation error $\| \widehat{\nabla}_p J_{\rho,\tau}(\pi,p) - \nabla_p J_{\rho,\tau}(\pi,p) \| \leq \epsilon_2$ with probability at least $1-\delta_2$ , which requires $NH = \mathcal{O}(\epsilon_2^{-2}\ln \epsilon_2^{-1})$ samples.
Proof. The proof logic is the same as that of Lemma 13. We rewrite the stochastic gradient (16) as $\widehat{\nabla}_p J_{\rho, \tau}(\pi, p)(s, a, s') = \frac{1}{N} \sum_{i=1}^{N} g(s_{i, H_i}; s, a, s')$ where
$$
g (\widetilde {s}; s, a, s ^ {\prime}) := \frac {\pi (a | s) \mathbb {1} \left\{\widetilde {s} = s \right\}}{1 - \gamma} \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \bar {q} _ {T _ {1}} \left(s ^ {\prime}, a ^ {\prime}\right) \right]. \tag {58}
$$
This function $g$ has the following bound.
$$
\begin{array}{l} \left\| g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \leq \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} | g (\widetilde {s} ; s , a , s ^ {\prime}) | ^ {2}} \\ = \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \sqrt {\sum_ {s ^ {\prime}} \left| c (s , a , s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi (a ^ {\prime} | s ^ {\prime}) [ \bar {q} _ {T _ {1}} (s ^ {\prime} , a ^ {\prime}) - Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) + Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) ] \right| ^ {2}} \\ \stackrel {(i)} {\leq} \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \sqrt {\sum_ {s ^ {\prime}} \left[ 1 - \tau \ln \pi (a | s) + \gamma \epsilon_ {1} + \gamma | V _ {\tau} (\pi , p ; s ^ {\prime}) | \right] ^ {2}} \\ \stackrel {(i i)} {\leq} \sqrt {| \mathcal {S} |} \sum_ {s, a} \frac {\pi (a | s) \mathbb {1} \{\widetilde {s} = s \}}{1 - \gamma} \Big [ 1 - \tau \ln \pi (a | s) + \gamma \epsilon_ {1} + \frac {\gamma + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \Big ] \\ \stackrel {(i i i)} {\leq} \frac {\sqrt {| S |}}{1 - \gamma} \left[ 1 + \tau \ln | \mathcal {A} | + \gamma \epsilon_ {1} + \frac {\gamma + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \right] = \mathcal {O} (1), \tag {59} \\ \end{array}
$$
where (i) uses $|c(s, a, s')| \leq 1$ , $|\tau \ln \pi(a|s)| = -\tau \ln \pi(a|s)$ , $\| \overline{q}_{T_1} - Q_{\tau}(\pi, p) \|_{\infty} \leq \epsilon_1$ and $V_{\tau}(\pi, p; s') = \sum_{a'} \pi(a'|s') Q_{\tau}(\pi, p; s', a')$ (based on eqs. (4) and (5)), (ii) uses eq. (31), and (iii) uses $-\sum_{a} \pi(a|s) \ln \pi(a|s) \in [0, \ln |\mathcal{A}|]$ . Hence, applying Hoeffding's inequality to $\widehat{\nabla}_p J_{\rho, \tau}(\pi, p)(s, a, s') = \frac{1}{N} \sum_{i=1}^{N} g(s_{i, H_i}; s, a, s')$ , the following bound holds with probability at least $1 - \delta_2$ .
$$
\left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi , p) - \mathbb {E} _ {\widetilde {s} \sim d _ {\rho , H} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \leq \mathcal {O} \left[ \frac {1}{\sqrt {N}} \ln \left(\frac {2}{\delta_ {2}}\right) \right] \stackrel {(i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {60}
$$
where $d_{\rho ,H}^{\pi ,p}(s)\coloneqq \frac{1 - \gamma}{1 - \gamma^H}\sum_{t = 0}^{H - 1}\gamma^t\mathbb{P}_{\pi ,p}(s_t = s|s_0\sim \rho)$ is the distribution of $s_{i,H_i}$ , and (i) holds for $N = \mathcal{O}(\epsilon_2^{-2})$ . Moreover,
$$
\left\| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho , H} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) - \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) \right\| \stackrel {(i)} {\leq} \mathcal {O} (1) \sum_ {s} | d _ {\rho , H} ^ {\pi , p} (s) - d _ {\rho} ^ {\pi , p} (s) | \stackrel {(i i)} {\leq} \mathcal {O} (\gamma^ {H}) \stackrel {(i i i)} {\leq} \frac {\epsilon_ {2}}{3} \tag {61}
$$
where (i) uses eq. (59), (ii) follows the proof of Lemma 13, and (iii) uses $H = \mathcal{O}[\ln (\epsilon_2^{-1})]$ .
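As a numerical sanity check of step (ii) (ours, not part of the formal proof; the chain, sizes, and seed are arbitrary), the implied bound $\sum_s |d_{\rho,H}^{\pi,p}(s) - d_\rho^{\pi,p}(s)| \leq 2\gamma^H$ can be verified on a small random Markov chain:

```python
import numpy as np

# Sanity check (ours, not the paper's code): on a random Markov chain, the gap
# between the H-step truncated visitation distribution d_rho_H and the
# discounted visitation distribution d_rho is at most 2 * gamma^H.
rng = np.random.default_rng(0)
S, gamma, H = 5, 0.9, 30
P = rng.random((S, S))
P /= P.sum(axis=1, keepdims=True)        # row-stochastic transition matrix
rho = rng.random(S)
rho /= rho.sum()                         # initial state distribution

# occupancy probabilities P(s_t = s | s_0 ~ rho) for t = 0, ..., T_max - 1
T_max = 2000
probs, mu = [], rho.copy()
for _ in range(T_max):
    probs.append(mu.copy())
    mu = mu @ P
probs = np.array(probs)

w_inf = (1 - gamma) * gamma ** np.arange(T_max)
d_inf = w_inf @ probs                    # d_rho, truncation error ~ gamma^T_max
w_H = (1 - gamma) / (1 - gamma ** H) * gamma ** np.arange(H)
d_H = w_H @ probs[:H]                    # d_rho_H
gap = np.abs(d_H - d_inf).sum()
assert gap <= 2 * gamma ** H + 1e-8
```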
Note that the transition gradient (9) can be rewritten as
$$
\nabla_ {p} J _ {\rho , \tau} (\pi , p) (s, a, s ^ {\prime}) = \frac {d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \left[ c (s, a, s ^ {\prime}) + \tau \ln \pi (a | s) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) Q _ {\tau} (\pi , p; s ^ {\prime}, a ^ {\prime}) \right] \tag {62}
$$
Therefore,
$$
\begin{array}{l} \| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s}; \cdot , \cdot , \cdot) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) \| \leq \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \mathbb {E} _ {\widetilde {s} \sim d _ {\rho} ^ {\pi , p}} g (\widetilde {s} ; s , a , s ^ {\prime}) - \nabla_ {p} J _ {\rho , \tau} (\pi , p) (s , a , s ^ {\prime}) \right| ^ {2}} \\ \stackrel {(i)} {\leq} \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \frac {\gamma d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \sum_ {a ^ {\prime}} \pi (a ^ {\prime} | s ^ {\prime}) [ \overline {{q}} _ {T _ {1}} (s ^ {\prime} , a ^ {\prime}) - Q _ {\tau} (\pi , p ; s ^ {\prime} , a ^ {\prime}) ] \right| ^ {2}} \\ \stackrel {(i i)} {\leq} \sum_ {s, a} \sqrt {\sum_ {s ^ {\prime}} \left| \frac {\gamma \epsilon_ {1} d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \right| ^ {2}} \\ = \sqrt {| \mathcal {S} |} \sum_ {s, a} \frac {\gamma \epsilon_ {1} d _ {\rho} ^ {\pi , p} (s) \pi (a | s)}{1 - \gamma} \\ \end{array}
$$
$$
= \frac {\gamma \epsilon_ {1} \sqrt {| S |}}{1 - \gamma} \stackrel {(i i i)} {\leq} \frac {\epsilon_ {2}}{3}, \tag {63}
$$
where (i) uses eqs. (58) and (62), (ii) uses $\| \overline{q}_{T_1} - Q_\tau (\pi ,p)\|_\infty \leq \epsilon_1$ , and (iii) uses $\epsilon_{2}\geq \frac{3\gamma\epsilon_{1}\sqrt{|S|}}{1 - \gamma}$ .
As a result, we conclude that $\| \widehat{\nabla}_p J_{\rho ,\tau}(\pi ,p) - \nabla_p J_{\rho ,\tau}(\pi ,p)\| \leq \epsilon_2$ by applying the triangle inequality to eqs. (60), (61) and (63).
# F. Supporting Lemmas
Lemma 15. Suppose $\mathcal{P}$ is a convex set. For any $p' \in \mathcal{P}$ , the variable $p_{t+1} = \operatorname{proj}_{\mathcal{P}}(p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t))$ generated from Algorithm 1 satisfies
$$
\langle p ^ {\prime} - p _ {t + 1}, p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \rangle \leq 0. \tag {64}
$$
Similarly, if $\Xi$ is a convex set, then for any $\xi' \in \Xi$ , the variable $\xi_{t+1} = \text{proj}_{\Xi}\bigl(\xi_t + \beta \widehat{\nabla}_\xi J_\rho(\pi_t, p_{\xi_t})\bigr)$ generated from Algorithm 3 satisfies
$$
\left\langle \xi^ {\prime} - \xi_ {t + 1}, \xi_ {t} + \beta \widehat {\nabla} _ {\xi} J _ {\rho} \left(\pi_ {t}, p _ {\xi_ {t}}\right) - \xi_ {t + 1} \right\rangle \leq 0. \tag {65}
$$
Proof. We will only prove eq. (64) since eq. (65) can be proved in a similar way.
Define the function $f(u) \coloneqq \| p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t) - [u p' + (1 - u) p_{t+1}] \|^2$ .
Note that $p', p_{t+1} \in \mathcal{P}$ . Hence, for any $u \in [0,1]$ , $up' + (1 - u)p_{t+1} \in \mathcal{P}$ . Since $p_{t+1} = \mathrm{proj}_{\mathcal{P}}\big(p_t + \beta \widehat{\nabla}_p J_\rho(\pi_t, p_t)\big)$ , we have
$$
\begin{array}{l} f (u) = \| p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} (\pi_ {t}, p _ {t}) - [ u p ^ {\prime} + (1 - u) p _ {t + 1} ] \| ^ {2} \\ \geq \left\| p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \right\| ^ {2} = f (0). \\ \end{array}
$$
Therefore,
$$
f ^ {\prime} (0) = - 2 \left\langle p ^ {\prime} - p _ {t + 1}, p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho} \left(\pi_ {t}, p _ {t}\right) - p _ {t + 1} \right\rangle \geq 0, \tag {66}
$$
which proves eq. (64).
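The variational inequality (64) can be spot-checked numerically (our illustration; as a concrete convex set we take the box $[0,1]^n$ rather than the paper's ambiguity set $\mathcal{P}$, since its Euclidean projection is coordinate-wise clipping):

```python
import numpy as np

# Sanity check of the projection variational inequality (64) on the box [0,1]^n.
rng = np.random.default_rng(1)
n, beta = 8, 0.5
p_t = rng.random(n)                      # current iterate, inside the box
grad = rng.normal(size=n)                # stand-in for the stochastic gradient
y = p_t + beta * grad                    # pre-projection ascent point
p_next = np.clip(y, 0.0, 1.0)            # Euclidean projection onto [0, 1]^n

# <p' - p_{t+1}, y - p_{t+1}> must be <= 0 for every p' in the box
worst = max(float(np.dot(p_prime - p_next, y - p_next))
            for p_prime in rng.random((1000, n)))
assert worst <= 1e-10
```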
Lemma 16. Any $x, y \in (0,1]$ satisfy the following inequalities.
$$
| x - y | \leq | \ln x - \ln y | \tag {67}
$$
$$
\left| x \ln x - y \ln y \right| \leq \left| \ln x - \ln y \right| \tag {68}
$$
Proof. Denote $a_1 = \ln x \leq 0$ , $a_2 = \ln y \leq 0$ , $g(a) \coloneqq e^a$ and $h(a) \coloneqq ae^a$ . Then this lemma can be proved as follows.
$$
| x - y | = | g (a _ {1}) - g (a _ {2}) | \leq | a _ {1} - a _ {2} | \sup _ {a \leq 0} | g ^ {\prime} (a) | = | a _ {1} - a _ {2} | \sup _ {a \leq 0} e ^ {a} = | \ln x - \ln y |
$$
$$
| x \ln x - y \ln y | = | h (a _ {1}) - h (a _ {2}) | \leq | a _ {1} - a _ {2} | \sup _ {a \leq 0} | h ^ {\prime} (a) | \overset {(i)} {=} | \ln x - \ln y |,
$$
where (i) uses $\sup_{a \leq 0} |h'(a)| = 1$ , which will be proved next.
Note that $h^\prime (a) = e^a (a + 1)$ and $h''(a) = e^a (a + 2)$ . Hence, $h^\prime (a)$ is monotonically decreasing in $(-\infty , - 2]$ and increasing in $[-2,0]$ . Since $\lim_{a\to -\infty}h'(a) = 0$ , $h^{\prime}(-2) = -e^{-2}$ and $h^{\prime}(0) = 1$ , we have $\sup_{a\leq 0}|h^{\prime}(a)| = 1$ .
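As a quick numerical spot-check of eqs. (67) and (68) (ours, not part of the proof; sample count and seed are arbitrary):

```python
import math
import random

# Spot-check eqs. (67)-(68): for x, y in (0, 1],
# |x - y| <= |ln x - ln y| and |x ln x - y ln y| <= |ln x - ln y|.
random.seed(0)
max_violation = 0.0
for _ in range(10_000):
    x = random.uniform(1e-6, 1.0)
    y = random.uniform(1e-6, 1.0)
    log_gap = abs(math.log(x) - math.log(y))
    v67 = abs(x - y) - log_gap                               # eq. (67): <= 0
    v68 = abs(x * math.log(x) - y * math.log(y)) - log_gap   # eq. (68): <= 0
    max_violation = max(max_violation, v67, v68)
assert max_violation <= 1e-12
```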
Lemma 17. The diameter of $\mathcal{P}$ , defined as $D_{\mathcal{P}}\coloneqq \sup_{p,\widetilde{p}\in \mathcal{P}}\| \widetilde{p} -p\|$ , ranges in $[0,\sqrt{2|\mathcal{S}||\mathcal{A}|}]$ .
Proof. For any $p, \widetilde{p} \in \mathcal{P}$ ,
$$
\begin{array}{l} 0 \leq \| \widetilde {p} - p \| ^ {2} \\ = \sum_ {s, a, s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] ^ {2} \\ \leq | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) ] ^ {2} \\ = | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) ^ {2} + p (s ^ {\prime} | s, a) ^ {2} - 2 p (s ^ {\prime} | s, a) \widetilde {p} (s ^ {\prime} | s, a) ] \\ \stackrel {(i)} {\leq} | \mathcal {S} | | \mathcal {A} | \max _ {s, a} \sum_ {s ^ {\prime}} [ \widetilde {p} (s ^ {\prime} | s, a) + p (s ^ {\prime} | s, a) ] = 2 | \mathcal {S} | | \mathcal {A} |, \\ \end{array}
$$
where (i) uses $p(s'|s,a), \widetilde{p}(s'|s,a) \in [0,1]$ .
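The diameter bound of Lemma 17 is easy to confirm numerically (our illustration; sizes and seed are arbitrary), sampling random transition kernels $p(\cdot|s,a)$:

```python
import numpy as np

# Check D_P <= sqrt(2 |S| |A|) on pairs of random transition kernels.
rng = np.random.default_rng(2)
S, A = 6, 4

def random_kernel():
    # each p(. | s, a) is a probability distribution over next states
    p = rng.random((S, A, S))
    return p / p.sum(axis=-1, keepdims=True)

dists = [float(np.linalg.norm(random_kernel() - random_kernel()))
         for _ in range(200)]
assert max(dists) <= (2 * S * A) ** 0.5
```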
# G. Proof of Proposition 1
Proposition 1. Under Assumption 1, for any $\epsilon \geq 0$ and $\tau > 0$ , $(\epsilon, \tau)$ -Nash equilibrium exists. If $(\pi, p) \in \Pi \times \mathcal{P}$ is an $(\epsilon, \tau)$ -Nash equilibrium, then $\pi$ is a $\left(2\epsilon + \frac{\tau \ln |\mathcal{A}|}{1 - \gamma}\right)$ -optimal robust policy to the optimization problem (2).
Proof. Proof of $(\epsilon, \tau)$ -Nash equilibrium existence:
Fix any $\tau > 0$ . Based on (Cen et al., 2022), for any $p \in \mathcal{P}$ , there exists a unique optimal policy $\pi_p := \arg \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p)$ . Then based on the Danskin's Theorem (Bernhard and Rapaport, 1995), $F_{\rho, \tau}(p) := J_{\rho, \tau}(\pi_p, p)$ is differentiable with $\nabla F_{\rho, \tau}(p) := \nabla_2 J_{\rho, \tau}(\pi_p, p)$ . Such a differentiable function $F_{\rho, \tau}(p)$ has minimum in the compact set $\mathcal{P}$ , so there exists $p^* \in \arg \min_{p \in \mathcal{P}} F_{\rho, \tau}(p)$ .
Note that $J_{\rho, \tau}(\pi_{p^*}, p^*) = \min_{\pi' \in \Pi} J_{\rho, \tau}(\pi', p^*)$ based on the definition of $\pi_p$ . Then it suffices to prove that $J_{\rho, \tau}(\pi_{p^*}, p^*) = \max_{p' \in \mathcal{P}} J_{\rho, \tau}(\pi_{p^*}, p')$ , which along with $J_{\rho, \tau}(\pi_{p^*}, p^*) = \min_{\pi' \in \Pi} J_{\rho, \tau}(\pi', p^*)$ implies that $(\pi', p^*)$ is a $(0, \tau)$ -Nash equilibrium and thus also an $(\epsilon, \tau)$ -Nash equilibrium for any $\epsilon \geq 0$ .
Note that the proof of Proposition 3 does not rely on the existence of $(\epsilon, \tau)$ -Nash equilibrium, so we can apply Proposition 3 and obtain that
$$
0 \leq \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {\prime}\right) - J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {*}\right) \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p ^ {*}}, p ^ {*}\right) \stackrel {(i)} {=} 0, \tag {69}
$$
where (i) uses $\nabla_{2}J_{\rho ,\tau}(\pi_{p^{*}},p^{*}) = \nabla F_{\rho ,\tau}(p^{*}) = 0$ since $p^*\in \arg \min_{p\in \mathcal{P}}F_{\rho ,\tau}(p)$ . Hence, $J_{\rho ,\tau}(\pi_{p^*},p^*) = \max_{p'\in \mathcal{P}}J_{\rho ,\tau}(\pi_{p^*},p')$ .
Proof of optimal robust policy: Note that $(\pi, p)$ satisfy the following $(\epsilon, \tau)$ -Nash equilibrium conditions
$$
J _ {\rho , \tau} (\pi , p) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} \left(\pi^ {\prime}, p\right) \leq \epsilon , \quad \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} \left(\pi , p ^ {\prime}\right) - J _ {\rho , \tau} (\pi , p) \leq \epsilon . \tag {70}
$$
Therefore,
$$
\begin{array}{l} \Phi_ {\rho} (\pi) - \Phi_ {\rho} (\pi^ {*}) \\ = \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} \max _ {p ^ {\prime \prime} \in \mathcal {P}} J _ {\rho} (\pi^ {\prime}, p ^ {\prime \prime}) \\ \leq \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho} (\pi^ {\prime}, p) \\ \stackrel {(i)} {\leq} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} (\pi^ {\prime}, p) + \frac {\tau \ln | \mathcal {A} |}{1 - \gamma} \\ \stackrel {(i i)} {\leq} 2 \epsilon + \frac {\tau \ln | \mathcal {A} |}{1 - \gamma} \tag {71} \\ \end{array}
$$
where (i) uses $J_{\rho}(\pi, p) = J_{\rho, \tau}(\pi, p) + \tau \mathcal{H}_{\rho, p}(\pi)$ with entropy regularizer $\mathcal{H}_{\rho, p}(\pi) \coloneqq -\mathbb{E}_{\pi, p}[\sum_{t=0}^{\infty} \gamma^{t} \ln \pi (a_{t} | s_{t}) | s_{0} \sim \rho] \in [0, \frac{\ln |\mathcal{A}|}{1 - \gamma}]$ , and (ii) uses the conditions (70). The above inequality means that $\pi$ is a $(2\epsilon + \frac{\tau \ln |\mathcal{A}|}{1 - \gamma})$ -optimal robust policy by Definition 1.
# H. Proof of Proposition 2
Proposition 2. Under Assumption 1, $F_{\rho ,\tau}(p)$ is Lipschitz smooth with parameter $\ell_F\coloneqq \frac{8|\mathcal{S}||\mathcal{A}|(1 + \gamma\tau\ln|\mathcal{A}|)^2}{\tau(1 - \gamma)^5}$ , i.e., for any $p, p' \in \mathcal{P}$ ,
$$
\left\| \nabla F _ {\rho , \tau} \left(p ^ {\prime}\right) - \nabla F _ {\rho , \tau} (p) \right\| \leq \ell_ {F} \| p ^ {\prime} - p \|. \tag {11}
$$
Proof. Based on Lemma 2 of (Cen et al., 2022), $\widetilde{Q}_{\tau}(\pi_p,p)$ defined by eq. (30) is the unique fixed point of the following Bellman operator $T_{p}$ .
$$
T _ {p} Q (s, a) := \min _ {\pi \in \Pi} \sum_ {s ^ {\prime}} p \left(s ^ {\prime} \mid s, a\right) \left(c (s, a, s ^ {\prime}) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \left[ Q \left(s ^ {\prime}, a ^ {\prime}\right) + \tau \ln \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) \right]\right) \tag {72}
$$
The Bellman operator $T_{p}$ above has the following two properties.
1. Based on Lemma 2 of (Cen et al., 2022), $T_{p}$ is a $\gamma$ -contraction under $\ell_{\infty}$ -norm, i.e.,
$$
\left\| T _ {p} Q ^ {\prime} - T _ {p} Q \right\| _ {\infty} \leq \gamma \left\| Q ^ {\prime} - Q \right\| _ {\infty}; \forall p \in \mathcal {P}, Q, Q ^ {\prime} \in \mathbb {R} ^ {| \mathcal {S} | | \mathcal {A} |}. \tag {73}
$$
2. For any $p, p' \in \mathcal{P}$ , $\pi \in \Pi$ , $s \in S$ , $a \in \mathcal{A}$ and $Q \in \mathbb{R}^{|S||\mathcal{A}|}$ , we have
$$
\begin{array}{l} \left| \sum_ {s ^ {\prime}} \left[ p ^ {\prime} \left(s ^ {\prime} \mid s, a\right) - p \left(s ^ {\prime} \mid s, a\right) \right] \left(c (s, a, s ^ {\prime}) + \gamma \sum_ {a ^ {\prime}} \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) [ Q \left(s ^ {\prime}, a ^ {\prime}\right) + \tau \ln \pi \left(a ^ {\prime} \mid s ^ {\prime}\right) ]\right) \right| \\ \stackrel {(i)} {\leq} [ 1 + \gamma (\| Q \| _ {\infty} + \tau \ln | \mathcal {A} |) ] \sum_ {s ^ {\prime}} | p ^ {\prime} (s ^ {\prime} | s, a) - p (s ^ {\prime} | s, a) |, \\ \end{array}
$$
where (i) uses $c(s, a, s') \in [0, 1]$ and $\sum_{a'} \pi(a'|s') \ln \pi(a'|s') \in [-\ln |\mathcal{A}|, 0]$ . Hence,
$$
\left\| T _ {p ^ {\prime}} Q - T _ {p} Q \right\| _ {\infty} \leq \left[ 1 + \gamma \left(\| Q \| _ {\infty} + \tau \ln | \mathcal {A} |\right) \right] \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1}. \tag {74}
$$
Based on the above two properties, for any $p,p^{\prime}\in \mathcal{P}$ , we have
$$
\begin{array}{l} \left\| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \right\| _ {\infty} \\ = \| T _ {p ^ {\prime}} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \leq \| T _ {p ^ {\prime}} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) \| _ {\infty} + \| T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - T _ {p} \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i)} {\leq} \left[ 1 + \gamma (\| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) \| _ {\infty} + \tau \ln | \mathcal {A} |) \right] \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i i)} {\leq} \left(1 + \frac {\gamma \max (1 , \gamma \tau \ln | \mathcal {A} |)}{1 - \gamma} + \gamma \tau \ln | \mathcal {A} |\right) \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \leq \frac {1 + \gamma \tau \ln | \mathcal {A} |}{1 - \gamma} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} + \gamma \| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \| _ {\infty}, \\ \end{array}
$$
where (i) uses eqs. (73)-(74) and (ii) uses eq. (32). Rearranging the above inequality yields that
$$
\left\| \widetilde {Q} _ {\tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \widetilde {Q} _ {\tau} \left(\pi_ {p}, p\right) \right\| _ {\infty} \leq \frac {1 + \gamma \tau \ln | \mathcal {A} |}{(1 - \gamma) ^ {2}} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1}. \tag {75}
$$
Therefore,
$$
\begin{array}{l} | \ln \pi_{p'}(a | s) - \ln \pi_p(a | s) | \\ \stackrel{(i)}{\leq} \frac{1}{\tau} | \widetilde{Q}_\tau(\pi_p, p; s, a) - \widetilde{Q}_\tau(\pi_{p'}, p'; s, a) | + \left| \ln \frac{\sum_{a'} \exp[-\widetilde{Q}_\tau(\pi_p, p; s, a')/\tau]}{\sum_{a'} \exp[-\widetilde{Q}_\tau(\pi_{p'}, p'; s, a')/\tau]} \right| \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {1}{\tau} \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} + \left| \ln \frac {\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau ]}{\sum_ {a ^ {\prime}} \exp [ - \widetilde {Q} _ {\tau} (\pi_ {p} , p ; s , a ^ {\prime}) / \tau - \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}} , p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p} , p) \| _ {\infty} / \tau ]} \right| \\ = \frac {2}{\tau} \| \widetilde {Q} _ {\tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \widetilde {Q} _ {\tau} (\pi_ {p}, p) \| _ {\infty} \\ \stackrel {(i i i)} {\leq} \frac {2 + 2 \gamma \tau \ln | \mathcal {A} |}{\tau (1 - \gamma) ^ {2}} \max _ {s, a} \| p ^ {\prime} (\cdot | s, a) - p (\cdot | s, a) \| _ {1} \\ \leq \frac {2 \sqrt {| \mathcal {S} |} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {2}} \| p ^ {\prime} - p \|, \tag {76} \\ \end{array}
$$
where (i) uses eq. (34), (ii) uses $\widetilde{Q}_{\tau}(\pi_{p'}, p'; s, a') \leq \widetilde{Q}_{\tau}(\pi_p, p; s, a') + \| \widetilde{Q}_{\tau}(\pi_{p'}, p') - \widetilde{Q}_{\tau}(\pi_p, p) \|_{\infty}$ and (iii) uses eq. (75). Therefore, eq. (11) can be proved as follows.
$$
\begin{array}{l} \| \nabla F _ {\rho , \tau} (p ^ {\prime}) - \nabla F _ {\rho , \tau} (p) \| \stackrel {(i)} {=} \| \nabla_ {2} J _ {\rho , \tau} (\pi_ {p ^ {\prime}}, p ^ {\prime}) - \nabla_ {2} J _ {\rho , \tau} (\pi_ {p}, p) \| \\ \leq \left\| \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p ^ {\prime}}, p ^ {\prime}\right) - \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p ^ {\prime}\right) \right\| + \left\| \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p ^ {\prime}\right) - \nabla_ {2} J _ {\rho , \tau} \left(\pi_ {p}, p\right) \right\| \\ \stackrel {(i i)} {\leq} \ell_ {\pi} \max _ {s} \| \ln \pi_ {p ^ {\prime}} (\cdot | s) - \ln \pi_ {p} (\cdot | s) \| + \ell_ {p} \| p ^ {\prime} - p \| \\ \leq \ell_ {\pi} \sqrt {| \mathcal {A} |} \max _ {s, a} | \ln \pi_ {p ^ {\prime}} (a | s) - \ln \pi_ {p} (a | s) | + \ell_ {p} \| p ^ {\prime} - p \| \\ \stackrel {(i i i)} {\leq} \left(\frac {| \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \frac {2 \sqrt {| \mathcal {S} |} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {2}} + \frac {2 \gamma | \mathcal {S} | (1 + \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}}\right) \| p ^ {\prime} - p \| \\ \leq \frac {8 | \mathcal {S} | | \mathcal {A} | (1 + \gamma \tau \ln | \mathcal {A} |) ^ {2}}{\tau (1 - \gamma) ^ {5}} \| p ^ {\prime} - p \|, \\ \end{array}
$$
where (i) uses $\nabla F_{\rho,\tau}(p) = \nabla_2 J_{\rho,\tau}(\pi_p, p)$ based on Danskin's theorem (Bernhard and Rapaport, 1995), (ii) uses eqs. (41) and (42), and (iii) uses $\ell_\pi := \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ , $\ell_p := \frac{2\gamma|\mathcal{S}|(1 + \tau\ln|\mathcal{A}|)}{(1 - \gamma)^3}$ and eq. (76).
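The softmax stability step behind eq. (76) admits a quick numerical spot-check (ours, not part of the proof; temperature, sizes, and seed are arbitrary): for policies $\pi \propto \exp(-Q/\tau)$, log-probabilities move by at most $(2/\tau)\|Q' - Q\|_\infty$.

```python
import numpy as np

# Spot-check: || ln pi' - ln pi ||_inf <= (2 / tau) * || Q' - Q ||_inf
# for softmax policies pi propto exp(-Q / tau).
rng = np.random.default_rng(3)
tau, nA = 0.1, 5

def log_pi(q):
    # numerically stable log-probabilities of the softmax of -q / tau
    z = -q / tau
    m = z.max()
    return z - (m + np.log(np.exp(z - m).sum()))

worst_ratio = 0.0
for _ in range(1000):
    q1 = rng.normal(size=nA)
    q2 = q1 + rng.normal(scale=0.05, size=nA)
    gap = np.abs(log_pi(q1) - log_pi(q2)).max()
    worst_ratio = max(worst_ratio, gap / np.abs(q1 - q2).max())
assert worst_ratio <= 2 / tau + 1e-9
```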
# I. Proof of Proposition 3
Proposition 3 (Gradient dominance). Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ and $p \in \mathcal{P}$ ,
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p) \\ \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left(p ^ {\prime} - p\right) ^ {\top} \nabla_ {p} J _ {\rho , \tau} (\pi , p). \tag {12} \\ \end{array}
$$
Proof. Based on Lemma 4.3 of (Wang et al., 2023), the gradient dominance property (12) holds for $J_{\rho}$ , i.e.,
$$
\max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho} (\pi , p ^ {\prime}) - J _ {\rho} (\pi , p) \leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} (p ^ {\prime} - p) ^ {\top} \nabla_ {p} J _ {\rho} (\pi , p).
$$
Note that for any fixed policy $\pi$ , the function $J_{\rho}(\pi, p) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} c_{t} \mid s_{0} \sim \rho\right]$ becomes $J_{\rho, \tau}(\pi, p) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t} [c_{t} + \tau \ln \pi(a_{t} | s_{t})] \mid s_{0} \sim \rho\right]$ after replacing the cost $c_{t} = c(s_{t}, a_{t}, s_{t+1})$ with $c_{t} + \tau \ln \pi(a_{t} | s_{t})$ . Therefore, the gradient dominance property (12) also holds for $J_{\rho, \tau}$ .
If $p \in \mathcal{P}$ satisfies $\| \nabla_p F_{\rho, \tau}(p) \| \leq \frac{\epsilon(1 - \gamma)}{DD_{\mathcal{P}}}$ , then we prove below that $(\pi_p, p)$ is an $(\epsilon, \tau)$ -Nash equilibrium.
$$
J _ {\rho , \tau} (\pi_ {p}, p) - \min _ {\pi^ {\prime} \in \Pi} J _ {\rho , \tau} (\pi^ {\prime}, p) = 0 \leq \epsilon ,
$$
$$
\begin{array}{l} \max_{p' \in \mathcal{P}} J_{\rho, \tau}(\pi_p, p') - J_{\rho, \tau}(\pi_p, p) \leq \frac{D}{1 - \gamma} \max_{p' \in \mathcal{P}} (p' - p)^\top \nabla_2 J_{\rho, \tau}(\pi_p, p) \\ \leq \frac{D}{1 - \gamma} \max_{p' \in \mathcal{P}} \| p' - p \| \| \nabla F_{\rho, \tau}(p) \| \leq \frac{D D_{\mathcal{P}}}{1 - \gamma} \frac{\epsilon (1 - \gamma)}{D D_{\mathcal{P}}} = \epsilon. \\ \end{array}
$$

# J. Proof of Proposition 4
Proposition 4. Under Assumption 1, the function $J_{\rho,\tau}$ satisfies the following gradient dominance property for any $\pi \in \Pi$ , $\xi \in \Xi$ .
$$
\begin{array}{l} \max _ {p ^ {\prime} \in \mathcal {P}} J _ {\rho , \tau} (\pi , p ^ {\prime}) - J _ {\rho , \tau} (\pi , p _ {\xi}) \\ \leq \frac {D}{1 - \gamma} \max _ {\xi^ {\prime} \in \Xi} \left(\xi^ {\prime} - \xi\right) ^ {\top} \nabla_ {\xi} J _ {\rho , \tau} (\pi , p _ {\xi}). \tag {24} \\ \end{array}
$$
Proof. Proposition 4 can be proved as follows.
$$
\begin{array}{l} \max_{p' \in \mathcal{P}} J_{\rho, \tau}(\pi, p') - J_{\rho, \tau}(\pi, p_\xi) \stackrel{(i)}{\leq} \frac{D}{1 - \gamma} \max_{p' \in \mathcal{P}} (p' - p_\xi)^\top \nabla_p J_{\rho, \tau}(\pi, p_\xi) \\ \stackrel{(ii)}{=} \frac{D}{1 - \gamma} \max_{\xi' \in \Xi} (p_{\xi'} - p_\xi)^\top \nabla_p J_{\rho, \tau}(\pi, p_\xi) \\ \stackrel{(iii)}{=} \frac{D}{1 - \gamma} \max_{\xi' \in \Xi} (\xi' - \xi)^\top \Psi^\top \nabla_p J_{\rho, \tau}(\pi, p_\xi) \\ \stackrel{(iv)}{=} \frac{D}{1 - \gamma} \max_{\xi' \in \Xi} (\xi' - \xi)^\top \nabla_\xi J_{\rho, \tau}(\pi, p_\xi), \tag{77} \\ \end{array}
$$
where (i) uses Proposition 3, (ii) uses $\mathcal{P} \coloneqq \{p_{\xi} : \xi \in \Xi\}$ and (iii)-(iv) use $p_{\xi} = \Psi \xi$ .
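Steps (iii)-(iv) of eq. (77) are just the chain rule for the linear parametrization $p_\xi = \Psi \xi$; a quick finite-difference check on a toy linear objective (ours; all sizes, the matrix, and the seed are arbitrary) confirms that gradients with respect to $\xi$ are $\Psi^\top$ times gradients with respect to $p$:

```python
import numpy as np

# For f(xi) = W^T (Psi @ xi), the gradient w.r.t. xi is Psi^T W (chain rule
# for the linear parametrization p_xi = Psi @ xi; W stands in for nabla_p J).
rng = np.random.default_rng(5)
m, d = 12, 4
Psi = rng.normal(size=(m, d))
W = rng.normal(size=m)

f = lambda xi: W @ (Psi @ xi)       # toy linear objective in p = Psi @ xi
xi = rng.normal(size=d)
eps = 1e-6
num_grad = np.array([(f(xi + eps * e) - f(xi - eps * e)) / (2 * eps)
                     for e in np.eye(d)])   # central finite differences
assert np.allclose(num_grad, Psi.T @ W, atol=1e-6)
```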

# K. Proof of Theorem 1
Theorem 1. Implement Algorithm 1 with $\beta \leq \frac{1}{2\ell_F}$ , $\eta = \frac{1 - \gamma}{\tau}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ satisfies the following rates under Assumption 1.
$$
J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right), \tag {13}
$$
$$
\begin{array}{l} \max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \\ \leq \mathcal {O} \left[ \left(1 + \tau \epsilon_ {2}\right) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right]. \tag {14} \\ \end{array}
$$
Proof. Based on Lemma 9, the output $\pi_t \coloneqq \pi_{t,T'}$ of the NPG step (7) with stepsize $\eta = \frac{1 - \gamma}{\tau}$ has the following convergence rates.
$$
\left\| Q _ {\tau} \left(\pi_ {t}, p _ {t}\right) - Q _ {\tau} \left(\pi_ {t} ^ {*}, p _ {t}\right) \right\| _ {\infty} \leq \frac {\gamma^ {T ^ {\prime} + 1} \left(1 + \gamma \tau \ln | \mathcal {A} |\right)}{1 - \gamma} + \frac {2 \gamma \epsilon_ {1}}{(1 - \gamma) ^ {2}}, \tag {78}
$$
$$
\left\| \pi_ {t} ^ {*} - \pi_ {t} \right\| _ {\infty} \leq \left\| \ln \pi_ {t} ^ {*} - \ln \pi_ {t} \right\| _ {\infty} \leq \frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}. \tag {79}
$$
Hence, the convergence rate (13) can be proved as follows.
$$
\begin{array}{l} J_{\rho, \tau}(\pi_{\widetilde{T}}, p_{\widetilde{T}}) - \min_{\pi \in \Pi} J_{\rho, \tau}(\pi, p_{\widetilde{T}}) = J_{\rho, \tau}(\pi_{\widetilde{T}}, p_{\widetilde{T}}) - J_{\rho, \tau}(\pi_{\widetilde{T}}^*, p_{\widetilde{T}}) \\ \leq \mathbb{E}_{s \sim \rho} [ V_\tau(\pi_{\widetilde{T}}, p_{\widetilde{T}}; s) - V_\tau(\pi_{\widetilde{T}}^*, p_{\widetilde{T}}; s) ] \\ \stackrel{(i)}{=} \mathbb{E}_{s \sim \rho} \sum_a \Big[ \pi_{\widetilde{T}}(a | s) [ Q_\tau(\pi_{\widetilde{T}}, p_{\widetilde{T}}; s, a) - \tau \ln \pi_{\widetilde{T}}(a | s) ] - \pi_{\widetilde{T}}^*(a | s) [ Q_\tau(\pi_{\widetilde{T}}^*, p_{\widetilde{T}}; s, a) - \tau \ln \pi_{\widetilde{T}}^*(a | s) ] \Big] \\ = \mathbb{E}_{s \sim \rho} \sum_a \Big[ [ \pi_{\widetilde{T}}(a | s) - \pi_{\widetilde{T}}^*(a | s) ] [ Q_\tau(\pi_{\widetilde{T}}^*, p_{\widetilde{T}}; s, a) - \tau \ln \pi_{\widetilde{T}}^*(a | s) ] \\ \quad + \pi_{\widetilde{T}}(a | s) [ Q_\tau(\pi_{\widetilde{T}}, p_{\widetilde{T}}; s, a) - Q_\tau(\pi_{\widetilde{T}}^*, p_{\widetilde{T}}; s, a) - \tau \ln \pi_{\widetilde{T}}(a | s) + \tau \ln \pi_{\widetilde{T}}^*(a | s) ] \Big] \\ \end{array}
$$
$$
\begin{array}{l} \stackrel{(ii)}{\leq} \mathbb{E}_{s \sim \rho} \sum_a \Big[ \Big( \frac{2\gamma^{T'}(1 + \gamma\tau\ln|\mathcal{A}|)}{\tau(1 - \gamma)} + \frac{4\epsilon_1}{\tau(1 - \gamma)^2} \Big) \Big( \frac{2 + \tau\ln|\mathcal{A}|}{1 - \gamma} \Big) \\ \quad + \pi_{\widetilde{T}}(a | s) \Big( \frac{\gamma^{T' + 1}(1 + \gamma\tau\ln|\mathcal{A}|)}{1 - \gamma} + \frac{2\gamma\epsilon_1}{(1 - \gamma)^2} + \tau \Big( \frac{2\gamma^{T'}(1 + \gamma\tau\ln|\mathcal{A}|)}{\tau(1 - \gamma)} + \frac{4\epsilon_1}{\tau(1 - \gamma)^2} \Big) \Big) \Big] \\ \leq \frac{3|\mathcal{A}|}{1 - \gamma} \Big( \frac{2\gamma^{T'}(1 + \gamma\tau\ln|\mathcal{A}|)}{\tau(1 - \gamma)} + \frac{4\epsilon_1}{\tau(1 - \gamma)^2} \Big) + \frac{12\gamma^{T'}}{1 - \gamma} + \frac{6\epsilon_1}{(1 - \gamma)^2} \\ = \frac{2 + 3\tau\ln|\mathcal{A}|}{\tau(1 - \gamma)^2} \Big( 2\gamma^{T'}(1 + \gamma\tau\ln|\mathcal{A}|) + \frac{4\epsilon_1}{1 - \gamma} \Big) \leq \mathcal{O}\Big( \frac{\gamma^{T'} + \epsilon_1}{\tau} \Big), \\ \end{array}
$$
where (i) uses $V_{\tau}(\pi, p; s) = \mathbb{E}_{a \sim \pi(\cdot|s)}[Q_{\tau}(\pi, p; s, a) - \tau \ln \pi(a|s)]$ based on eqs. (4) and (5), (ii) uses eqs. (32), (33), (78) and (79).
Next, we will prove the convergence rate (14). Note that
$$
\begin{array}{l} \left\| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) \right\| = \frac {1}{\beta} \left\| \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) - p _ {t} \right\| \\ \stackrel {(i)} {\geq} \frac {1}{\beta} \left\| \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) - \operatorname {p r o j} _ {\mathcal {P}} \left(p _ {t} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right)\right) \right\| \\ \stackrel {(i i)} {=} \| \widehat {\nabla} _ {p} J _ {\rho , \tau} \left(\pi_ {t}, p _ {t}\right) - G _ {t} \|, \\ \end{array}
$$
where (i) uses $p_t \in \mathcal{P}$ and (ii) denotes $G_t := \frac{1}{\beta} \left( \mathrm{proj}_{\mathcal{P}}[p_t + \beta \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t)] - p_t \right)$ . The above inequality implies that
$$
G_t^\top \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) \geq \frac{1}{2} \| G_t \|^2. \tag{80}
$$
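Eq. (80) can be spot-checked numerically (our illustration, not part of the proof; the box $[0,1]^n$ stands in for a generic convex feasible set, with clipping as its Euclidean projection):

```python
import numpy as np

# Spot-check eq. (80): with G_t = (proj(p_t + beta * g) - p_t) / beta,
# the projected direction satisfies G_t . g >= ||G_t||^2 / 2.
rng = np.random.default_rng(4)
n, beta = 10, 0.3
worst_slack = np.inf
for _ in range(1000):
    p_t = rng.random(n)                  # feasible point in the box [0, 1]^n
    g = rng.normal(size=n)               # stand-in for the stochastic gradient
    G = (np.clip(p_t + beta * g, 0.0, 1.0) - p_t) / beta
    worst_slack = min(worst_slack, float(G @ g - 0.5 * (G @ G)))
assert worst_slack >= -1e-10
```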
Since $F_{\rho,\tau}(p) \coloneqq \min_{\pi \in \Pi} J_{\rho,\tau}(\pi,p)$ is $\ell_F$ -smooth as shown in Proposition 2, we have
$$
\begin{array}{l} F_{\rho, \tau}(p_{t+1}) - F_{\rho, \tau}(p_t) \geq \nabla F_{\rho, \tau}(p_t)^\top (p_{t+1} - p_t) - \frac{\ell_F}{2} \| p_{t+1} - p_t \|^2 \\ \stackrel{(i)}{=} \beta G_t^\top [ \nabla F_{\rho, \tau}(p_t) - \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) ] + \beta G_t^\top \widehat{\nabla}_p J_{\rho, \tau}(\pi_t, p_t) - \frac{\ell_F \beta^2}{2} \| G_t \|^2 \\ \stackrel{(ii)}{\geq} -\beta \| G_t \| \left( \ell_\pi \sqrt{|\mathcal{A}|} \| \ln \pi_t - \ln \pi_t^* \|_\infty + \epsilon_2 \right) + \frac{\beta}{2} \| G_t \|^2 - \frac{\beta}{4} \| G_t \|^2 \\ \stackrel{(iii)}{\geq} \frac{\beta}{8} \| G_t \|^2 - 2\beta \left( \ell_\pi \sqrt{|\mathcal{A}|} \| \ln \pi_t - \ln \pi_t^* \|_\infty + \epsilon_2 \right)^2, \tag{81} \\ \end{array}
$$
where (i) uses $p_{t+1} - p_t = \beta G_t$ , (ii) uses $\beta \leq \frac{1}{2\ell_F}$ and eqs. (44) and (80), and (iii) uses $c\|G_t\| \leq 2c^2 + \frac{\|G_t\|^2}{8}$ for $c := \ell_\pi \sqrt{|\mathcal{A}|} \| \ln \pi_t - \ln \pi_t^* \|_\infty + \epsilon_2$ . Averaging the above inequality over $t = 0, 1, \ldots, T-1$ , we obtain that
$$
\begin{array}{l} \| G _ {\widetilde {T}} \| = \min _ {0 \leq t \leq T - 1} \| G _ {t} \| \leq \sqrt {\frac {1}{T} \sum_ {t = 0} ^ {T - 1} \| G _ {t} \| ^ {2}} \\ \leq \sqrt {\frac {1 6}{T} \sum_ {t = 0} ^ {T - 1} \left(\ell_ {\pi} \sqrt {| \mathcal {A} |} \| \ln \pi_ {t} - \ln \pi_ {t} ^ {*} \| _ {\infty} + \epsilon_ {2}\right) ^ {2} + \frac {8 \left[ F _ {\rho , \tau} \left(p _ {T}\right) - F _ {\rho , \tau} \left(p _ {0}\right) \right]}{T \beta}} \\ \stackrel {(i)} {\leq} \sqrt {1 6 \left[ \frac {| \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{(1 - \gamma) ^ {3}} \left(\frac {2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma)} + \frac {4 \epsilon_ {1}}{\tau (1 - \gamma) ^ {2}}\right) + \epsilon_ {2} \right] ^ {2} + \frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \\ \leq \frac {4 | \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {4}} \left(2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |) + \frac {4 \epsilon_ {1}}{1 - \gamma}\right) + 4 \epsilon_ {2} + \sqrt {\frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \tag {82} \\ \end{array}
$$
where (i) uses eq. (31), eq. (79), and the definition $\ell_{\pi} := \frac{\sqrt{|\mathcal{S}||\mathcal{A}|}(2 + 3\gamma\tau\ln{|\mathcal{A}|})}{(1 - \gamma)^{3}}$.
Then, the convergence rate (14) can be proved as follows.
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}})
$$
$$
\stackrel {(i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \langle p ^ {\prime} - p _ {\widetilde {T}}, \nabla_ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle
$$
$$
\leq \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \langle p ^ {\prime} - p _ {\widetilde {T}}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle + \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \| p ^ {\prime} - p _ {\widetilde {T}} \| \| \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \nabla_ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \|
$$
$$
\begin{array}{l} \stackrel {(i i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \left[ \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle + \langle p _ {\widetilde {T} + 1} - p _ {\widetilde {T}}, \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \rangle \right] + \frac {D D _ {\mathcal {P}} \epsilon_ {2}}{1 - \gamma} \\ \stackrel {(i i i)} {\leq} \frac {D}{1 - \gamma} \max _ {p ^ {\prime} \in \mathcal {P}} \Big [ \frac {1}{\beta} \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, p _ {\widetilde {T}} + \beta \widehat {\nabla} _ {p} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - p _ {\widetilde {T} + 1} \rangle - \frac {1}{\beta} \langle p ^ {\prime} - p _ {\widetilde {T} + 1}, p _ {\widetilde {T}} - p _ {\widetilde {T} + 1} \rangle + \| \beta G _ {\widetilde {T}} \| (L _ {p} + \epsilon_ {2}) \Big ] \\ + \frac {D D _ {\mathcal {P}} \epsilon_ {2}}{1 - \gamma} \\ \end{array}
$$
$$
\begin{array}{l} \stackrel {(i v)} {\leq} \frac {D}{1 - \gamma} \Big [ 0 + \frac {1}{\beta} \max _ {p ^ {\prime} \in \mathcal {P}} \| p ^ {\prime} - p _ {\widetilde {T} + 1} \| \| \beta G _ {\widetilde {T}} \| + \beta (L _ {p} + \epsilon_ {2}) \| G _ {\widetilde {T}} \| + D _ {\mathcal {P}} \epsilon_ {2} \Big ] \\ \stackrel {(v)} {\leq} \frac {D}{1 - \gamma} \Big [ [ D _ {\mathcal {P}} + \beta (L _ {p} + \epsilon_ {2}) ] \Big (\frac {4 | \mathcal {A} | \sqrt {| \mathcal {S} |} (2 + 3 \gamma \tau \ln | \mathcal {A} |)}{\tau (1 - \gamma) ^ {4}} \Big (2 \gamma^ {T ^ {\prime}} (1 + \gamma \tau \ln | \mathcal {A} |) + \frac {4 \epsilon_ {1}}{1 - \gamma} \Big) + 4 \epsilon_ {2} + \sqrt {\frac {8 (1 + \tau \ln | \mathcal {A} |)}{T \beta (1 - \gamma)}} \Big) \\ \left. + D _ {\mathcal {P}} \epsilon_ {2} \right] \\ \end{array}
$$
$$
\stackrel {(v i)} {\leq} \mathcal {O} \left[ (1 + \tau \epsilon_ {2}) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \right],
$$
where (i) uses Proposition 3, (ii) uses $\| p' - p_{\widetilde{T}}\| \leq D_{\mathcal{P}}$ for $p',p_{\widetilde{T}}\in \mathcal{P}$, where $D_{\mathcal{P}}\coloneqq \sup_{p,\widetilde{p}\in \mathcal{P}}\| \widetilde{p} -p\|$ denotes the diameter of $\mathcal{P}$ ($D_{\mathcal{P}}\leq \sqrt{2|\mathcal{S}||\mathcal{A}|}$ as shown in Lemma 17), and $\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}}) - \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq \epsilon_2$, (iii) uses $p_{\widetilde{T} +1} - p_{\widetilde{T}} = \beta G_{\widetilde{T}}$ and $\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq \| \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| +\| \widehat{\nabla}_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}}) - \nabla_pJ_{\rho ,\tau}(\pi_{\widetilde{T}},p_{\widetilde{T}})\| \leq L_p + \epsilon_2$ (the second $\leq$ uses eq. (40)), (iv) uses $p_{\widetilde{T} +1} - p_{\widetilde{T}} = \beta G_{\widetilde{T}}$ and eq. (64), (v) uses eq. (82) and $\| p^{\prime} - p_{\widetilde{T} +1}\| \leq D_{\mathcal{P}}$, and (vi) uses $\beta \leq \frac{1}{2\ell_F} = \frac{\tau(1 - \gamma)^5}{16|\mathcal{S}||\mathcal{A}|(1 + \gamma\tau\ln|\mathcal{A}|)^2} = \mathcal{O}(\tau)$ and $L_{p} = \frac{\sqrt{|\mathcal{S}|(1 + \tau\ln|\mathcal{A}|)}}{(1 - \gamma)^{2}} = \mathcal{O}(1)$ .
# L. Proof of Corollary 1
Corollary 1 (Iteration Complexity of Algorithm 1). Implement Algorithm 1 under the deterministic setting $(\epsilon_{1} = \epsilon_{2} = 0)$ . For any $\epsilon > 0$ , select hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|}, 1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T' = \mathcal{O}[\ln (\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ . Then the output $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon, \tau)$ -Nash equilibrium under Assumption 1. This requires $T = \mathcal{O}(\epsilon^{-3})$ transition kernel updates, $TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ policy updates, and an iteration complexity of $T + TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters for Algorithm 1, which satisfy the conditions of Theorem 1.
$$
\epsilon_ {1} = \epsilon_ {2} = 0 \tag {83}
$$
$$
\tau = \min \left(\frac {\epsilon (1 - \gamma)}{3 \ln | \mathcal {A} |}, 1\right) \tag {84}
$$
$$
\beta = \frac {1}{2 \ell_ {F}} = \frac {\tau (1 - \gamma) ^ {5}}{1 6 | \mathcal {S} | | \mathcal {A} | (1 + \gamma \tau \ln | \mathcal {A} |) ^ {2}} \tag {85}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {86}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {87}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln (\tau^ {- 1} \epsilon^ {- 1}) ]}{\ln (\gamma^ {- 1})} = \mathcal {O} [ \ln (\epsilon^ {- 1}) ]. \tag {88}
$$
Therefore, the convergence rates (13) and (14) along with the above hyperparameter choices imply that
$$
J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right) \leq \frac {\epsilon}{3}, \tag {89}
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p\right) - J _ {\rho , \tau} \left(\pi_ {\widetilde {T}}, p _ {\widetilde {T}}\right) \leq \mathcal {O} \left(1 + \tau \epsilon_ {2}\right) \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}}\right) \leq \frac {\epsilon}{3}, \tag {90}
$$
which means $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is an $(\epsilon /3,\tau)$ -Nash equilibrium and thus an $\epsilon$ -optimal robust policy by Proposition 1.
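The hyperparameter choices (83)–(88) can be computed mechanically. The sketch below is an illustrative helper (not from the paper): the function name `corollary1_hyperparams` and the unit constants `c_T`, `c_Tp` standing in for the unspecified $\mathcal{O}(\cdot)$ factors are our own assumptions.

```python
import math

def corollary1_hyperparams(eps, gamma, S, A, c_T=1.0, c_Tp=1.0):
    """Illustrative hyperparameters for Algorithm 1 per eqs. (83)-(88).

    c_T and c_Tp are placeholder constants; the paper only specifies
    the orders T = O(eps^-3) and T' = O(ln(1/eps)).
    """
    # eq. (84): regularization weight tau = min(eps(1-gamma)/(3 ln|A|), 1)
    tau = min(eps * (1 - gamma) / (3 * math.log(A)), 1.0)
    # eq. (85): transition stepsize beta = 1/(2 ell_F)
    beta = tau * (1 - gamma) ** 5 / (16 * S * A * (1 + gamma * tau * math.log(A)) ** 2)
    # eq. (86): policy stepsize eta = (1-gamma)/tau
    eta = (1 - gamma) / tau
    # eq. (87): number of transition updates
    T = math.ceil(c_T * eps ** -3)
    # eq. (88): inner policy updates, O(ln(1/(tau*eps)))/ln(1/gamma)
    T_prime = math.ceil(c_Tp * math.log(1 / (tau * eps)) / math.log(1 / gamma))
    return {"tau": tau, "beta": beta, "eta": eta, "T": T, "T_prime": T_prime}
```

With these choices the total iteration complexity is $T + TT' = \mathcal{O}(\epsilon^{-3}\ln \epsilon^{-1})$, dominated by the $TT'$ policy updates.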
# M. Proof of Corollary 2
Corollary 2 (Sample Complexity of Algorithm 2). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 2 with hyperparameters $\tau = \min \left(\frac{\epsilon(1 - \gamma)}{3\ln|\mathcal{A}|},1\right)$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\epsilon^{-1})]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F}$ , $N = \mathcal{O}(\epsilon^{-2})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . The output $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $\epsilon$ -optimal robust policy and an $(\epsilon ,\tau)$ -Nash equilibrium with probability at least $1 - \delta$ under Assumption 1. Furthermore, the sample complexity is $T(T^{\prime}T_{1} + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters.
$$
\tau = \min \left(\frac {\epsilon (1 - \gamma)}{3 \ln | \mathcal {A} |}, 1\right) = \mathcal {O} (\epsilon) \tag {91}
$$
$$
\epsilon_ {1} = \mathcal {O} (\tau \epsilon) = \mathcal {O} \left(\epsilon^ {2}\right) \tag {92}
$$
$$
\epsilon_ {2} = \mathcal {O} (\epsilon) \tag {93}
$$
$$
\beta = \frac {1}{2 \ell_ {F}} = \mathcal {O} (\tau) = \mathcal {O} (\epsilon) \tag {94}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {95}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {96}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln \left(\tau^ {- 1} \epsilon^ {- 1}\right) ]}{\ln \left(\gamma^ {- 1}\right)} = \mathcal {O} [ \ln \left(\epsilon^ {- 1}\right) ] \tag {97}
$$
$$
T _ {1} = \mathcal {O} \left(\epsilon_ {1} ^ {- 2}\right) = \mathcal {O} \left(\epsilon^ {- 4}\right) \tag {98}
$$
$$
\alpha = \mathcal {O} \left[ \ln^ {- 1} \left(\epsilon_ {1} ^ {- 1}\right) \right] = \mathcal {O} \left[ \ln^ {- 1} \left(\epsilon^ {- 1}\right) \right] \tag {99}
$$
$$
\delta_ {1} = \frac {\delta}{2 T T ^ {\prime}} \tag {100}
$$
$$
N = \mathcal {O} \left(\epsilon_ {2} ^ {- 2}\right) = \mathcal {O} \left(\epsilon^ {- 2}\right), \tag {101}
$$
$$
H = \mathcal {O} \left[ \ln \left(\epsilon_ {2} ^ {- 1}\right) \right] = \mathcal {O} \left[ \ln \left(\epsilon^ {- 1}\right) \right], \tag {102}
$$
$$
\delta_ {2} = \frac {\delta}{2 T}. \tag {103}
$$
Based on the conditions of Lemmas 9 and 12, for all $t = 0, 1, \dots, T - 1$ and $k = 0, 1, \dots, T' - 1$ , eq. (47) of Lemma 9 and the conclusion of Lemma 12 below hold with probability at least $1 - TT'\delta_1 = 1 - \delta / 2$ .
$$
\| \widehat {Q} _ {t, k ^ {\prime}} - Q _ {\tau} (\pi_ {t, k ^ {\prime}}, p _ {t}) \| _ {\infty} \leq \epsilon_ {1}; \forall k ^ {\prime} = 0, 1, \ldots , k - 1 \Rightarrow \inf _ {s, a} \pi_ {t, k} (a | s) \geq \pi_ {\min},
$$
$$
\inf _ {s, a} \ln \pi_ {t, k} (a | s) \geq \ln \pi_ {\min} = - \mathcal {O} (\tau^ {- 1}) \Rightarrow | c (s, a, s ^ {\prime}) + \tau \ln \pi_ {t, k} (a | s) | \leq \mathcal {O} (1) \Rightarrow \| \widehat {Q} _ {t, k} - Q _ {\tau} (\pi_ {t, k}, p _ {t}) \| _ {\infty} \leq \epsilon_ {1}.
$$
Note that $\pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}| \geq \pi_{\min}$ . Hence, by induction over $k$ , the above statements imply that $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k}, p_t) \|_{\infty} \leq \epsilon_1$ and $\inf_{s,a} \pi_{t,k}(a|s) \geq \pi_{\min}$ for all $t = 0, 1, \ldots, T-1$ and $k = 0, 1, \ldots, T' - 1$ .
Note that $\epsilon_{1} = \mathcal{O}(\epsilon^{2})$ and $\epsilon_{2} = \mathcal{O}(\epsilon)$ for sufficiently small $\epsilon > 0$ can satisfy the condition of Lemma 14 that $\epsilon_{2} \geq \frac{3\gamma\epsilon_{1}\sqrt{|S|}}{1 - \gamma}$ . Hence, based on Lemma 14, the stochastic transition gradients $\widehat{\nabla}_{p}J_{\rho,\tau}(\pi_{t},p_{t})$ obtained by eq. (16) for all $t = 0,1,\ldots,T-1$ satisfy $\|\widehat{\nabla}_{p}J_{\rho,\tau}(\pi_{t},p_{t}) - \nabla_{p}J_{\rho,\tau}(\pi_{t},p_{t})\| \leq \epsilon_{2}$ with probability at least $1-T\delta_{2}=1-\delta/2$ .
Hence, we proved that $\| \widehat{\nabla}_p J_{\rho ,\tau}(\pi_t,p_t) - \nabla_p J_{\rho ,\tau}(\pi_t,p_t)\| \leq \epsilon_2$ and $\| \widehat{Q}_{t,k} - Q_{\tau}(\pi_{t,k},p_t)\|_{\infty}\leq \epsilon_1$ hold for all $t = 0,\ldots ,T - 1$ and $k = 0,\dots,T^{\prime} - 1$ with probability at least $1 - \delta$ . Therefore, Algorithm 2 with the above hyperparameter choices can be seen as a special case of Algorithm 1, so the convergence rates (13) and (14) in Theorem 1 hold which imply
$$
J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \left(\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau}\right) \leq \frac {\epsilon}{3},
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \leq \mathcal {O} (1 + \tau \epsilon_ {2}) \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}} \Big) \leq \frac {\epsilon}{3}.
$$
Therefore, with probability at least $1 - \delta$ , $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is an $(\epsilon / 3, \tau)$ -Nash equilibrium and thus an $\epsilon$ -optimal robust policy by Proposition 1. The required total sample complexity is $T(T' T_1 + NH) = \mathcal{O}(\epsilon^{-7} \ln \epsilon^{-1})$ .
# N. Proof of Theorem 2
Theorem 2 (Sample Complexity of Algorithm 3). For any $\epsilon >0$ and $\delta \in (0,1)$ , implement Algorithm 3 with hyperparameters $\tau = \min \left[\mathcal{O}(\sqrt{\zeta} +\epsilon),1\right]$ , $T = \mathcal{O}(\epsilon^{-3})$ , $T^{\prime} = \mathcal{O}[\ln (\epsilon^{-1})]$ , $T_{1} = \mathcal{O}(\epsilon^{-4})$ , $\alpha = \mathcal{O}[\ln^{-1}(\zeta +\epsilon^{2})^{-1}]$ , $\eta = \frac{1 - \gamma}{\tau}$ , $\beta = \frac{1}{2\ell_F\|\Psi\|}$ , $N = \mathcal{O}(\epsilon^{-4})$ , $H = \mathcal{O}[\ln (\epsilon^{-1})]$ . Then, under Assumption 1 and the assumption that $\inf_{s,a,s'}p_{\xi}(s'|s,a) > p_{\mathrm{min}}$ for a constant $p_{\mathrm{min}} > 0$ , $(\pi_{\widetilde{T}},p_{\widetilde{T}})$ is both an $(\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon),\tau)$ -Nash equilibrium and an $\mathcal{O}(\sqrt{\zeta} +\zeta +\epsilon)$ -optimal robust policy with probability at least $1 - \delta$ . The required sample complexity is $T(T^{\prime}T_{1} + NH) = \mathcal{O}(\epsilon^{-7}\ln \epsilon^{-1})$ .
Proof. Select the following hyperparameters for Algorithm 3.
$$
\epsilon_ {1} = 2 \zeta + \epsilon^ {2} \tag {104}
$$
$$
\epsilon_ {2} = \frac {3 \gamma \epsilon_ {1} \sqrt {| S |}}{1 - \gamma} = \frac {3 \gamma \sqrt {| S |} \left(2 \zeta + \epsilon^ {2}\right)}{1 - \gamma} \tag {105}
$$
$$
\delta_ {1} = \frac {\delta}{2 T T ^ {\prime}} \tag {106}
$$
$$
\delta_ {2} = \frac {\delta}{2 T} \tag {107}
$$
$$
\tau = \min \left[ \mathcal {O} (\sqrt {\zeta} + \epsilon), 1 \right] \tag {108}
$$
$$
\beta = \frac {1}{2 \ell_ {F} \| \Psi \|} = \mathcal {O} (\tau) \geq \mathcal {O} (\epsilon) \tag {109}
$$
$$
\eta = \frac {1 - \gamma}{\tau} \tag {110}
$$
$$
T = \mathcal {O} \left(\epsilon^ {- 3}\right) \tag {111}
$$
$$
T ^ {\prime} = \frac {\mathcal {O} [ \ln (\epsilon^ {- 2}) ]}{\ln (\gamma^ {- 1})} = \mathcal {O} [ \ln (\epsilon^ {- 1}) ] \tag {112}
$$
$$
T _ {1} = \mathcal {O} \left(\epsilon^ {- 4}\right) \geq \mathcal {O} \left(\epsilon_ {1} ^ {- 2}\right) \tag {113}
$$
$$
\alpha = \mathcal {O} \left(\ln^ {- 1} \epsilon_ {1} ^ {- 1}\right) = \mathcal {O} \left[ \ln^ {- 1} \left(\zeta + \epsilon^ {2}\right) ^ {- 1} \right] \tag {114}
$$
$$
N = \mathcal {O} \left(\epsilon^ {- 4}\right) \geq \mathcal {O} \left(\epsilon_ {2} ^ {- 2}\right) \tag {115}
$$
$$
H = \mathcal {O} \left[ \ln \left(\epsilon^ {- 1}\right) \right] \geq \mathcal {O} \left[ \ln \left(\epsilon_ {2} ^ {- 1}\right) \right] \tag {116}
$$
Based on the conditions of Lemmas 10 and 11, select the above hyperparameters for the TD update rule (21). Then for all $t = 0,1,\dots ,T - 1$ and $k = 0,1,\ldots ,T^{\prime} - 1$ , eq. (51) of Lemma 10 and the conclusion of Lemma 11 below hold with probability at least $1 - TT^{\prime}\delta_{1} = 1 - \delta /2$ .
$$
\sup _ {s, a} | \phi (s, a) ^ {\top} w _ {t, k ^ {\prime}} - Q _ {\tau} (\pi_ {t, k ^ {\prime}}, p _ {t}; s, a) | \leq \epsilon_ {1}; \forall k ^ {\prime} = 0, 1, \ldots , k - 1 \Rightarrow \inf _ {s, a} \ln \pi_ {t, k} (a | s) \geq \ln \pi_ {\min},
$$
$$
\inf_{s,a}\ln \pi_{t,k}(a|s)\geq \ln \pi_{\min} = -\mathcal{O}(\tau^{-1})\Rightarrow |c(s,a,s^{\prime}) + \tau \ln \pi_{t,k}(a|s)|\leq \mathcal{O}(1)\Rightarrow \sup_{s,a}| \phi (s,a)^{\top}w_{t,k} - Q_{\tau}(\pi_{t,k},p_{t};s,a)|\leq \epsilon_{1}.
$$
Note that $u_{t,0} = 0 \Rightarrow \pi_{t,0}(a|s) \equiv 1 / |\mathcal{A}| \geq \pi_{\min}$ based on eq. (22). Hence, by induction over $k$ , the above statements imply that $\sup_{s,a} |\phi(s,a)^\top w_{t,k} - Q_\tau(\pi_{t,k}, p_t; s, a)| \leq \epsilon_1$ and $\inf_{s,a} \pi_{t,k}(a|s) \geq \pi_{\min}$ for all $t = 0, 1, \ldots, T-1$ and $k = 0, 1, \ldots, T' - 1$ .
Then based on Lemma 13, the stochastic transition gradients $\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t})$ obtained by eq. (19) for all $t = 0,1,\ldots,T-1$ satisfy $\|\widehat{\nabla}_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t}) - \nabla_{\xi}J_{\rho,\tau}(\pi_t,p_{\xi_t})\| \leq \epsilon_2$ with probability at least $1-T\delta_2 = 1-\delta/2$ .
Hence, we have proved that $\| \widehat{\nabla}_{\xi}J_{\rho ,\tau}(\pi_t,p_{\xi_t}) - \nabla_{\xi}J_{\rho ,\tau}(\pi_t,p_{\xi_t})\| \leq \epsilon_2$ and $\sup_{s,a}|\phi (s,a)^{\top}w_{t,k} - Q_{\tau}(\pi_{t,k},p_t;s,a)|\leq \epsilon_1$ hold for all $t = 0,1,\ldots ,T - 1$ and $k = 0,1,\dots,T^{\prime} - 1$ with probability at least $1 - \delta$ . Therefore, we can prove that the convergence rates in Theorem 1 also hold for Algorithm 3 with probability at least $1 - \delta$ . The proof logic is the same as that of Theorem 1; the major difference is that we replace the transition kernel $p\in \mathcal{P}$ with its corresponding parameter $\xi$ . Note that the proof of Theorem 1 uses the gradient dominance property (Proposition 12) of $\nabla_pJ_{\rho ,\tau}(\pi ,p)$ to obtain global convergence, and $\nabla_{\xi}F_{\rho ,\tau}(p_{\xi})$ satisfies a gradient dominance property (Proposition 24) of the same form, so we can use the latter here. In addition, since $p_\xi = \Psi \xi$ , we have $\nabla_{\xi}J_{\rho ,\tau}(\pi ,p_{\xi}) = \Psi^{\top}\nabla_{p}J_{\rho ,\tau}(\pi ,p)$ and $\nabla_{\xi}F_{\rho ,\tau}(p_{\xi}) = \Psi^{\top}\nabla F_{\rho ,\tau}(p)$ , so the Lipschitz constants $L_{p}$ , $\ell_p$ , and $\ell_F$ become $L_{p}\| \Psi \|$ , $\ell_p\| \Psi \|$ , and $\ell_F\| \Psi \|$ , respectively, which does not change the order of the convergence rate since $\| \Psi \| = \mathcal{O}(1)$ .
Substituting the hyperparameters (104)-(116) into the convergence rates in Theorem 1, we obtain that
$$
J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) - \min _ {\pi \in \Pi} J _ {\rho , \tau} (\pi , p _ {\widetilde {T}}) \leq \mathcal {O} \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} \Big) \leq \mathcal {O} \Big (\frac {\epsilon^ {2} + 2 \zeta + \epsilon^ {2}}{\min [ \mathcal {O} (\sqrt {\zeta} + \epsilon) , 1 ]} \Big) \leq \mathcal {O} (\zeta + \sqrt {\zeta} + \epsilon),
$$
$$
\max _ {p \in \mathcal {P}} J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p) - J _ {\rho , \tau} (\pi_ {\widetilde {T}}, p _ {\widetilde {T}}) \leq \mathcal {O} (1 + \tau \epsilon_ {2}) \Big (\frac {\gamma^ {T ^ {\prime}} + \epsilon_ {1}}{\tau} + \epsilon_ {2} + \frac {1}{\sqrt {T \beta}} \Big) \leq \mathcal {O} (\zeta + \sqrt {\zeta} + \epsilon).
$$
Therefore, with probability at least $1 - \delta$ , $(\pi_{\widetilde{T}}, p_{\widetilde{T}})$ is an $\left(\mathcal{O}(\sqrt{\zeta} + \zeta + \epsilon), \tau\right)$ -Nash equilibrium and thus an $\mathcal{O}(\sqrt{\zeta} + \zeta + \epsilon)$ -optimal robust policy by Proposition 1. The required total sample complexity is
$$
T (T ^ {\prime} T _ {1} + N H) = \mathcal {O} \big [ \epsilon^ {- 3} \big (\epsilon^ {- 4} \ln (\epsilon^ {- 1}) + \epsilon^ {- 4} \ln (\epsilon^ {- 1}) \big) \big ] = \mathcal {O} \big (\epsilon^ {- 7} \ln (\epsilon^ {- 1}) \big).
$$
# O. Experiments
The experiments are implemented in Python 3.9 on a MacBook Pro laptop with an 8-core CPU, 16 GB of memory, and 500 GB of storage. The code can be downloaded from https://github.com/changy12/ICML2024-Accelerated-Policy-Gradient-for-s-rectangular-Robust-MDPs-with-Large-State-Spaces.
# O.1. Experiments on Small State Space under Deterministic Setting
We compare our Algorithm 1 with the existing double-loop robust policy gradient (DRPG) algorithm (Wang et al., 2023) and the actor-critic algorithm (Li et al., 2023b) under the deterministic setting (i.e., when exact values of some quantities are available, including gradients, Q functions, V functions, etc.) on the Garnet problem (Archibald et al., 1995; Wang and Zou, 2022) with a state space $\mathcal{S} = \{0,1,2,3,4\}$ of 5 states and an action space $\mathcal{A} = \{0,1,2\}$ of 3 actions. The agent incurs cost 0 if it takes action 0 at state 0 or action 1 at any other state, and incurs cost 1 otherwise. We use the $s$ -rectangular $L_{2}$ -norm ambiguity set $\mathcal{P} := \{p \in (\Delta^{\mathcal{S}})^{\mathcal{S} \times \mathcal{A}} : \| p(s,\cdot,\cdot) - p_0(s,\cdot,\cdot)\| \leq 0.03, \forall s \in \mathcal{S}\}$ where $p_0(s,a,s') \equiv 0.2$ is the nominal transition kernel. The initial state distribution $\rho$ is uniform with $\rho(s) \equiv \frac{1}{5}$ . The discount factor is $\gamma = 0.95$ .
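The Garnet cost structure and nominal kernel described above can be written compactly. The snippet below is a minimal sketch with our own names (`cost`, `p0`), not the released code.

```python
import numpy as np

N_STATES, N_ACTIONS = 5, 3

def cost(s, a):
    """Cost 0 for action 0 at state 0, or action 1 at any other state; cost 1 otherwise."""
    optimal_action = 0 if s == 0 else 1
    return 0.0 if a == optimal_action else 1.0

# Nominal kernel p0(s, a, s') = 0.2: uniform over the 5 next states.
p0 = np.full((N_STATES, N_ACTIONS, N_STATES), 0.2)
assert np.allclose(p0.sum(axis=-1), 1.0)  # each p0(s, a, .) is a distribution
```

The ambiguity set then perturbs each slice $p(s,\cdot,\cdot)$ within an $L_2$-ball of radius 0.03 around `p0`.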
We implement an exact version of Algorithm 1 (i.e., $\epsilon_{1} = \epsilon_{2} = 0$ ) using $T_{p} = 5$ outer transition kernel updates with stepsize $\beta = 0.001$ , and $T^{\prime} = 1$ inner policy update with stepsize $\eta = \frac{1 - \gamma}{\tau} = 50$ per outer update. For the DRPG algorithm, we use $T = 5$ outer policy updates (Algorithm 1 of (Wang et al., 2023)) with stepsize $\alpha_{t} = 10$ and $T_{k} = 1$ inner transition kernel update (Algorithm 2 of (Wang et al., 2023)) with stepsize $\beta_{t} = 0.001$ per outer update. For the actor-critic algorithm (Algorithm 4.1 of (Li et al., 2023b)), we use $K = 5$ outer iterations, where the actor step (policy update) uses stepsize $\eta = 500$ , and the critic step (transition kernel update) uses a single iteration of Algorithm 3.2 of (Li et al., 2023b) with $\alpha_{m} = 1$ as well as $P_{\epsilon}$ obtained by exactly solving the direction-finding subproblem in eq. (3.4) of (Li et al., 2023b). We plot learning curves of the objective function $\Phi_{\rho}(\pi_t)\coloneqq \max_{p\in \mathcal{P}}J_{\rho}(\pi_t,p)$ at each outer iteration $t$ on the left of Figure 1. The x-axis is the iteration complexity, defined as the total number of policy updates and transition kernel updates up to iteration $t$ . Figure 1 shows

Figure 1: Experimental Results on Small State Space (Left) and Large State Space (Right).

that our Algorithm 1 converges faster to the optimal robust value $\min_{\pi \in \Pi} \Phi_{\rho}(\pi) = 0$ than the DRPG algorithm (Wang et al., 2023) and the actor-critic algorithm (Li et al., 2023b).
# O.2. Experiments on Large State Space
We test Algorithm 3 on the Garnet problem (Archibald et al., 1995; Wang and Zou, 2022) with a state space $\mathcal{S} = \{0,1,\dots ,49\}$ of 50 states and an action space $\mathcal{A} = \{0,1,2\}$ of 3 actions. The agent incurs cost 0 if it takes action 0 at state 0 or action 1 at any other state, and incurs cost 1 otherwise. We use the transition kernel parameterization $p_{\xi}(s^{\prime}|s,a) = \psi (a,s^{\prime})^{\top}\xi (s)$ with parameter $\xi (s)\in \mathbb{R}^{d_p}$ $(d_{p} = 10)$ and randomly generated feature vectors $\psi (a,s^{\prime})\in \mathbb{R}^{d_p}$ . This parameterization is both $s$ -rectangular and a special case of the linear kernel parameterization $p_{\widetilde{\xi}}(s^{\prime}|s,a) = \widetilde{\psi} (s,a,s^{\prime})^{\top}\widetilde{\xi}$ introduced in Section 4.1, with parameter $\widetilde{\xi} = [\xi (1),\xi (2),\ldots ,\xi (|\mathcal{S}|)]\in \mathbb{R}^{d_p|\mathcal{S}|}$ and the following feature vector
$$
\widetilde {\psi} (s, a, s ^ {\prime}) = [ \underbrace {0, \ldots, 0} _ {(s - 1) d _ {p} \text{ elements}}, \underbrace {\psi (a, s ^ {\prime})} _ {d _ {p} \text{ elements}}, \underbrace {0, \ldots, 0} _ {(| \mathcal{S} | - s) d _ {p} \text{ elements}} ] \in \mathbb {R} ^ {d _ {p} | \mathcal{S} |}.
$$
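As a sanity check, the block-sparse embedding above can be constructed explicitly. The sketch below is our own illustration (the function name `embed` is hypothetical), using the 1-indexed state convention of the display.

```python
import numpy as np

D_P, N_STATES = 10, 50

def embed(psi_sa, s):
    """Place psi(a, s') in R^{d_p} into block s of a d_p * |S| vector (s is 1-indexed).

    With xi_tilde the concatenation of xi(1), ..., xi(|S|), this gives
    embed(psi_sa, s) @ xi_tilde == psi_sa @ xi(s).
    """
    tilde = np.zeros(D_P * N_STATES)
    tilde[(s - 1) * D_P : s * D_P] = psi_sa
    return tilde
```

The inner product of this embedded vector with the stacked parameter $\widetilde{\xi}$ recovers $\psi(a,s')^{\top}\xi(s)$, so the $s$-rectangular kernel is indeed a special case of the linear parameterization.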
We first generate $\psi_{\mathrm{pre}}^{(j)}(a,s') \in \mathbb{R}$ from the uniform distribution $U(1,2)$ for all entries $j = 1,\dots,d_p$ and all $a,s'$ . Then we obtain $\psi(a,s') = [\psi^{(1)}(a,s'),\ldots,\psi^{(d_p)}(a,s')] \in \mathbb{R}^{d_p}$ by the following normalization.
$$
\psi^ {(j)} (a, s ^ {\prime}) = \frac {\psi_ {\mathrm {p r e}} ^ {(j)} (a , s ^ {\prime})}{\sum_ {s ^ {\prime \prime}} \psi_ {\mathrm {p r e}} ^ {(j)} (a , s ^ {\prime \prime})}.
$$
In this way, $p_{\xi}(s'|s, a) = \psi(a, s')^{\top} \xi(s)$ is a distribution over $s' \in \mathcal{S}$ for any $\xi(s) \in \mathbb{R}_+^{d_p}$ satisfying $\|\xi(s)\|_1 = 1$ . We use the $s$ -rectangular $L_2$ -norm ambiguity set $\Xi := \{[\xi(1), \xi(2), \ldots, \xi(|\mathcal{S}|)] \in \mathbb{R}^{d_p|\mathcal{S}|} : \|\xi(s) - \xi_0(s)\| \leq 0.03, \forall s \in \mathcal{S}\}$ where $\xi_0(s) = [0.1, 0.1, \ldots, 0.1] \in \mathbb{R}^{d_p}$ is the nominal kernel parameter. We also adopt the linear Q-function approximation $Q_{\tau}(\pi, p; s, a) \approx \phi(s, a)^{\top} w$ with parameter $w \in \mathbb{R}^d$ and feature vectors $\phi(s, a) \in \mathbb{R}^d$ generated entrywise from the uniform distribution $U(0, 1)$ . The initial state distribution $\rho$ is uniform with $\rho(s) \equiv \frac{1}{50}, \forall s \in \mathcal{S}$ . The discount factor is $\gamma = 0.95$ .
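The normalization and the resulting simplex property of $p_\xi$ can be verified numerically. The sketch below follows the stated $U(1,2)$ sampling with our own variable names and a fixed seed for reproducibility.

```python
import numpy as np

rng = np.random.default_rng(0)
D_P, N_ACTIONS, N_STATES = 10, 3, 50

# psi_pre^{(j)}(a, s') ~ U(1, 2); normalize each (a, j) slice over s''.
psi_pre = rng.uniform(1.0, 2.0, size=(N_ACTIONS, N_STATES, D_P))
psi = psi_pre / psi_pre.sum(axis=1, keepdims=True)

# Any xi(s) on the simplex yields a valid next-state distribution:
# sum_{s'} psi(a, s')^T xi = sum_j xi_j * sum_{s'} psi^{(j)}(a, s') = sum_j xi_j = 1.
xi = np.full(D_P, 0.1)   # nominal parameter xi_0(s)
p = psi @ xi             # p[a, s'] = psi(a, s')^T xi(s)
assert np.allclose(p.sum(axis=1), 1.0)
assert (p > 0).all()
```

This is why the nominal $\xi_0(s) = [0.1, \ldots, 0.1]$ (which lies on the simplex) defines a valid nominal kernel, and small $L_2$ perturbations within $\Xi$ stay close to it.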
In the above robust MDP setting with varying $d \in \{5, 20, 50, 100, 130, 140, 150\}$ , we implement Algorithm 3 with $\tau = 0.1$ , $T = 10$ , $T' = 20$ , $T_1 = 10^5$ , $\eta = 1$ , $\beta = 0.001$ , $\alpha = 0.001$ , $N = 10^4$ , $H = 500$ . The learning curves of the objective function $\Phi_{\rho}(\pi_t) := \max_{p \in \mathcal{P}} J_{\rho}(\pi_t, p)$ for each $d$ are plotted on the right of Figure 1, which shows that our Algorithm 3 converges to the optimal robust value $\min_{\pi \in \Pi} \Phi_{\rho}(\pi) = 0$ for sufficiently large $d$ , while the convergence gap grows as $d$ decreases, due to larger transition kernel parameterization error. Hence, a proper value of $d$ is important to trade off performance against computation.