{
"url": "http://arxiv.org/abs/2404.16767v1",
"title": "REBEL: Reinforcement Learning via Regressing Relative Rewards",
"abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.",
"authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.CL",
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.",
"main_content": "Introduction The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, tj@cs.cornell.edu \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 \fImage Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. 
Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically out-performs techniques like PPO 2 \fand DPO (Rafailov et al., 2023) in language generation and has a faster convergence with a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold: 1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least square regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences. 2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm \u2013 applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable. 3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in the agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models). 4. We evaluate REBEL both on language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance for image generation tasks. On the TL;DR summarization task, we show REBEL scales well by finetuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO. In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above. 2 REBEL: REgression to RElative REward Based RL We first outline the notation used throughout the paper. 2.1 Notation We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of 3 \factions. 
Furthermore, the entire sequence of actions can be considered as a single "arm" in a bandit problem with an exponentially large action space. We denote by (x, y) a prompt/response pair with x \in \mathcal{X} as a prompt and y \in \mathcal{Y} as a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function r(x, y) from which we can query for reward signals (the exact form of r does not need to be known). Querying r at (x, y) returns a scalar r(x, y) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by \pi : \mathcal{X} \to \Delta(\mathcal{Y}) a policy (e.g. an LLM) that maps from a prompt x to a distribution over the response space \mathcal{Y}. We use \rho to denote the distribution over prompts (i.e. initial states / contexts) x. Throughout the paper, we use \pi_\theta(y|x) to denote a parameterized policy with parameter \theta (e.g., a neural network policy). At times we interchangeably use \pi_t and \pi_{\theta_t} when it is clear from the context. We emphasize that while we focus on the bandit formulation for notational simplicity, the algorithms proposed here can be applied to any deterministic MDP where x is the initial state and the trajectory y consists of the sequence of actions. At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem:
\pi_{t+1} = \arg\max_\pi \; \mathbb{E}_{x, y \sim \pi(\cdot|x)}\, r(x, y) - \frac{1}{\eta}\, \mathbb{E}_x\, \mathrm{KL}\big(\pi(\cdot|x) \,\|\, \pi_t(\cdot|x)\big).   (1)
Intuitively, this can be thought of as asking the optimizer to fine-tune the policy \pi_{t+1} according to r while staying close to some baseline policy \pi_t.
2.2 Deriving REBEL: REgression to RElative REward Based RL
From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Grünwald and Dawid (2004)):
\forall x, y: \quad \pi_{t+1}(y|x) = \frac{\pi_t(y|x) \exp(\eta r(x, y))}{Z(x)}; \qquad Z(x) = \sum_{y} \pi_t(y|x) \exp(\eta r(x, y)).   (2)
As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the "DPO Trick":
\forall x, y: \quad r(x, y) = \frac{1}{\eta} \left( \ln Z(x) + \ln \frac{\pi_{t+1}(y|x)}{\pi_t(y|x)} \right).   (3)
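To make the closed-form update in Eq. 2 concrete, the following is a minimal numerical sketch (ours, not from the paper) of the tabular case: the next policy is simply the current policy exponentially tilted by the reward and renormalized. The toy values for pi_t, the rewards, and eta are purely illustrative.

import numpy as np

def soft_policy_iteration_step(pi_t, rewards, eta):
    # Eq. 2 for a single prompt x: pi_{t+1}(y|x) is proportional to pi_t(y|x) * exp(eta * r(x, y)).
    logits = np.log(pi_t) + eta * rewards   # log pi_t(y|x) + eta * r(x, y)
    logits -= logits.max()                  # subtract max for numerical stability
    pi_next = np.exp(logits)
    return pi_next / pi_next.sum()          # normalization by Z(x)

# Higher-reward responses gain probability mass as eta grows.
pi_t = np.array([0.5, 0.3, 0.2])
rewards = np.array([1.0, 0.0, -1.0])
print(soft_policy_iteration_step(pi_t, rewards, eta=1.0))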
As soon as \mathcal{X} and \mathcal{Y} become large, we can no longer guarantee that Eq. 3 holds exactly at all (x, y) and therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem:
\left( r(x, y) - \frac{1}{\eta} \left( \ln Z(x) + \ln \frac{\pi_{t+1}(y|x)}{\pi_t(y|x)} \right) \right)^2.   (4)
Algorithm 1 REgression to RElative REward Based RL (REBEL)
1: Input: Reward r, policy class \Pi = \{\pi_\theta\}, base distribution \mu, learning rate \eta
2: Initialize policy \pi_{\theta_0}.
3: for t = 0 to T - 1 do
4:   // Base distribution \mu can either be an offline dataset or \pi_t.
5:   Collect dataset \mathcal{D}_t = \{x, y, y'\} where x \sim \rho, y \sim \pi_t(\cdot|x), y' \sim \mu(\cdot|x)
6:   Solve the square loss regression problem:
     \theta_{t+1} = \arg\min_\theta \sum_{(x, y, y') \in \mathcal{D}_t} \left( \frac{1}{\eta} \left( \ln \frac{\pi_\theta(y|x)}{\pi_{\theta_t}(y|x)} - \ln \frac{\pi_\theta(y'|x)}{\pi_{\theta_t}(y'|x)} \right) - \big(r(x, y) - r(x, y')\big) \right)^2   (9)
7: end for
Unfortunately, the loss in Eq. 4 includes the partition function Z(x), which can be challenging to approximate over large input / output domains. However, observe that Z(x) only depends on x and not on y. Thus, if we have access to paired samples, i.e. (x, y) and (x, y'), we can instead regress the difference in rewards to eliminate this term from our objective:
\left( \big(r(x, y) - r(x, y')\big) - \frac{1}{\eta} \left( \ln \frac{\pi_{t+1}(y|x)}{\pi_t(y|x)} - \ln \frac{\pi_{t+1}(y'|x)}{\pi_t(y'|x)} \right) \right)^2.   (5)
Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset \mathcal{D}_t = \{x, y, y'\} with x \sim \rho, y \sim \pi_t(\cdot|x), y' \sim \mu(\cdot|x), where \mu is some base distribution. The base distribution \mu can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or \pi_t itself. Thus, the choice of base distribution \mu determines whether REBEL is hybrid or fully online. 
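As an illustration of how lightweight this objective is, here is a minimal sketch (ours, not the authors' released implementation) of the per-batch regression loss in Eq. 9; the sequence-level log-probabilities under the current and the frozen previous policy, the rewards, and eta are assumed to be computed by the caller.

import torch

def rebel_loss(logp_y, logp_yp, logp_y_old, logp_yp_old, reward_y, reward_yp, eta):
    # logp_y: log pi_theta(y|x) summed over tokens; logp_y_old: log pi_{theta_t}(y|x) from the frozen previous policy; likewise for y'.
    # Predictor of Eq. 8: (1/eta) * (log-ratio of y minus log-ratio of y').
    pred = ((logp_y - logp_y_old) - (logp_yp - logp_yp_old)) / eta
    # Target of Eq. 7: the relative reward r(x, y) - r(x, y').
    target = reward_y - reward_yp
    return ((pred - target) ** 2).mean()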
Putting it all together, we arrive at our core REBEL objective: \u2211\ufe01 (\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032)\u2208D\ud835\udc61 \u0012 (\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u22121 \ud835\udf02 \u0012 ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0013 \u2212ln \u0012 \ud835\udf0b\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013\u0013\u00132 . (6) To recap, given a pair of completions \ud835\udc66, \ud835\udc66\u2032 to a prompt \ud835\udc65, REBEL attempt to fit the relative reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (7) by optimizing over a class of predictors of the form 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 . (8) Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them. 3 Understanding REBEL as an Adaptive Policy Gradient We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques. 5 \f3.1 Adaptive Gradient Algorithms for Policy Optimization In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them. Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can used the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/\ud835\udc47convergence rate, i.e., after \ud835\udc47iterations, it must find a policy \u02c6 \ud835\udf0b, such that E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\u2605(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65,\ud835\udc66\u223c\u02c6 \ud835\udf0b(.|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42(1/\ud835\udc47). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the Y space. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, we can show that the KL divergence between two policies, i.e., KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)), is also bounded, ensuring \ud835\udf0b\ud835\udc61+1 stay close to \ud835\udf0b\ud835\udc61. 
One can also show monotonic policy improvement, i.e., E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61+1\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2265E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61\ud835\udc5f(\ud835\udc65, \ud835\udc66). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure. Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all \ud835\udc65and \ud835\udc66. Thus, we need to use a function to approximate \ud835\udf0b, which makes it impossible to exactly implement Eq. 2. Let us use \ud835\udf0b\ud835\udf03to denote a parameterized policy with parameter \ud835\udf03(e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e. E\ud835\udc65KL(\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)) \u2248(\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65) \u0002 \u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4\u0003 | {z } Fisher Information Matrix \ud835\udc39\ud835\udc61 (\ud835\udf03\u2212\ud835\udf03\ud835\udc61). The NPG update can be derived by plugging in this approximation to Eq. 1, further approximating the E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) by its first order Taylor expansion around \ud835\udf03\ud835\udc61, and finding the root of the resulting quadratic form: \ud835\udf03\ud835\udc61+1 = \ud835\udf03\ud835\udc61+ \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0010 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u0011 (10) where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)\u2207ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space \u0398, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/\ud835\udc47rate in a dimension-free manner, provably faster than the standard PG under the same setup. 
Empirically, the 6 \fsuperior convergence speed of NPG compared to that of PG was observed in its original exploration (Kakade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG, unfortunately, does not scale to modern generative models due to the need for computing the Fisher matrix inverse either explicitly or implicitly via the Hessian-vector matrix product trick. Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) proposes Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the action probability deviation from \ud835\udf0b\ud835\udf03\ud835\udc61+1 to \ud835\udf0b\ud835\udf03\ud835\udc61, i.e. \ud835\udf03\ud835\udc61+1 := argmax \ud835\udf03 E\ud835\udc65,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65)clip \u0012 \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) ; 1 \u2212\ud835\udf16, 1 + \ud835\udf16 \u0013 \ud835\udc5f(\ud835\udc65, \ud835\udc66). (11) Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy\u2019s parameters \ud835\udf03, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to be zero at samples (\ud835\udc65, \ud835\udc66) where \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) is much larger or smaller than \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65), it cannot actually guarantee \ud835\udf0b\ud835\udf03\ud835\udc61+1 staying close to \ud835\udf0b\ud835\udf03\ud835\udc61, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive \u2013 it treats all (\ud835\udc65, \ud835\udc66) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio \ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65) at different (\ud835\udc65, \ud835\udc66), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples. A REBEL With a Cause. Our algorithm REBEL addresses the limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity) from above. First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it doesn\u2019t have unjustified heuristics and thus enjoys strong convergence and regret guarantees just like NPG. 3.2 Connections between REBEL and MD / NPG We now sketch a series of connections between REBEL and the methods outlined above. Exact REBEL is Mirror Descent. 
First, to build intuition, we interpret our algorithm\u2019s behavior under the assumption that the least square regression optimization returns the exact Bayes Optimal solution (i.e., our learned predictor achieves zero prediction error everywhere): \u2200\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032 : 1 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 = \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032) (12) Conditioned on Eq. 12 being true, a few lines of algebraic manipulation reveals that there must exist a function \ud835\udc50(\ud835\udc65) which is independent of \ud835\udc66, such that: \u2200\ud835\udc65, \ud835\udc66: 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) = \ud835\udc5f(\ud835\udc65, \ud835\udc66) + \ud835\udc50(\ud835\udc65). 7 \fTaking an exp on both sides and re-arrange terms, we get: \u2200\ud835\udc65, \ud835\udc66: \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \u221d\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) exp (\ud835\udf02\ud835\udc5f(\ud835\udc65, \ud835\udc66)) . In other words, under the strong assumption that least square regression returns a point-wise accurate estimator (i.e., Eq. 12), we see the REBEL recovers the exact MD update, which gives it (a) a fast 1/\ud835\udc47convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max\ud835\udc65KL(\ud835\udf0b\ud835\udc61+1(\u00b7|\ud835\udc65)||\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)) is bounded as long as max\ud835\udc65,\ud835\udc66|\ud835\udc5f(\ud835\udc65, \ud835\udc66)| is bounded, and (c) monotonic policy improvement via the NPG standard analysis (Agarwal et al., 2021a). NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least square problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor 1 \ud835\udf02ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) by its first order Taylor expansion at \ud835\udf03\ud835\udc61: 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 \u22481 \ud835\udf02\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u22a4(\ud835\udf03\u2212\ud835\udf03\ud835\udc61), where \u2248indicates that we ignore higher order terms in the expansion. If we \ud835\udeff:= \ud835\udf03\u2212\ud835\udf03\ud835\udc61and replace 1 \ud835\udf02 \u0000ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001 by its above first order approximation in Eq. 
9, we arrive at the following quadratic form: min \ud835\udeffE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udf03\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)\u0001\u22a4\ud835\udeff\u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 . (13) Further simplifying notation, we denote the uniform mixture of \ud835\udf0b\ud835\udc61 and \ud835\udf07 as \ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) := (\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65) + \ud835\udf07(\u00b7|\ud835\udc65))/2 and the Fisher information matrix \ud835\udc39\ud835\udc61averaged under said mixture as: \ud835\udc39\ud835\udc61= E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65) h \u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u0000\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)\u0001\u22a4i . Solving the above least square regression to obtain a minimum norm solution, we have the following claim. Claim 1. The minimum norm minimizer \ud835\udeff\u2605of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update: \ud835\udeff\u2605:= \ud835\udf02\ud835\udc39\u2020 \ud835\udc61 \u0000E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc5a\ud835\udc56\ud835\udc65(\u00b7|\ud835\udc65)\u2207\ud835\udf03ln \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65)[\ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66)]\u0001 , where \ud835\udc39\u2020 \ud835\udc61is pseudo-inverse of \ud835\udc39\ud835\udc61, and the advantage is defined as \ud835\udc34\ud835\udf0b\ud835\udc61(\ud835\udc65, \ud835\udc66) := \ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212 E\ud835\udc66\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66). The proof of this claim is deferred to Appendix A. Observe that in REBEL, we never explicitly compute the advantage \ud835\udc34\ud835\udf0b\ud835\udc61. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional \ud835\udc44-function based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)) which indicates that predicting reward difference has an implicit variance reduction effect, as by definition, an advantage function includes a value function baseline. 1 1Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations under \ud835\udf0b\ud835\udc61. Our formulation is more general: when set \ud835\udf07= \ud835\udf0b\ud835\udc61, a Gauss-Newton step will recover the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond on-policy (e.g., Agarwal et al. (2021a, 2020)). 8 \f3.3 Extending REBEL to General Preferences In the above discussion, we assume we are given access to a ground-truth reward function. However, in the generative model fine-tuning applications of RL, we often need to learn from human preferences, rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function. 
In particular, intransitive preferences which are well-known to result from aggregation of different sub-populations or users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970) cannot be accurately captured by a single reward model. To see this, note that if we have \ud835\udc4e\u227b\ud835\udc4f, \ud835\udc4f\u227b\ud835\udc50, and \ud835\udc50\u227b\ud835\udc4e, it is impossible to have a reward model that simultaneously sets \u02c6 \ud835\udc5f(\ud835\udc4e) > \u02c6 \ud835\udc5f(\ud835\udc4f), \u02c6 \ud835\udc5f(\ud835\udc4f) > \u02c6 \ud835\udc5f(\ud835\udc50), and \u02c6 \ud835\udc5f(\ud835\udc50) > \u02c6 \ud835\udc5f(\ud835\udc4e). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dud\u00edk et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings, we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024). 3.3.1 A Game-Theoretic Perspective on Learning from Preferences More specifically, for any tuple (\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032), we assume we have access to P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65): the probability that \ud835\udc66is preferred to \ud835\udc66\u2032. We then define our preference model \ud835\udc59as \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u225c2 \u00b7 P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65) \u22121. (14) Observe that \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) \u2208[\u22121, 1] is skew-symmetric, i.e., \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66) = 0, \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) + \ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66) = 0 for all \ud835\udc65\u2208X, \ud835\udc66, \ud835\udc66\u2032 \u2208Y. If the learner can only receive a binary feedback \ud835\udc5c\u2208{0, 1} indicating the preference between \ud835\udc66and \ud835\udc66\u2032, we assume \ud835\udc5cis sampled from a Bernoulli distribution with mean P(\ud835\udc66\u227b\ud835\udc66\u2032|\ud835\udc65), where \ud835\udc5c= 1 means that \ud835\udc66is preferred over \ud835\udc66\u2032 and 0 otherwise. Given access to such a preference model, a solution concept to the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dud\u00edk et al., 2015) is that of a minimax winner (MW) \ud835\udf0bMW: the Nash Equilibrium strategy of the symmetric two-player zero-sum game with \ud835\udc59as a payoff function. In particular, due to the skew-symmetric property of \ud835\udc59, Swamy et al. (2024) proved that there exists a policy \ud835\udf0bMW such that max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] = min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0bMW (\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] . 
This implies that (\ud835\udf0bMW, \ud835\udf0bMW) is a Nash Equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). As is standard in game solving, our objective is to obtain an \ud835\udf16-approximate MW b \ud835\udf0bmeasured by the duality gap (DG): DG(b \ud835\udf0b) := max \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223cb \ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2212min \ud835\udf0bE\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223cb \ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] \u2264\ud835\udf16. In the following discussion, we will use \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b) to denote E\ud835\udc66\u2032\u223c\ud835\udf0b(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] and \ud835\udc59(\ud835\udf0b, \ud835\udf0b\u2032) to denote E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf0b\u2032(\u00b7|\ud835\udc65) [\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032)] for notational convenience. 9 \f3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that rather than performing adversarial training, we are able to perform a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at leas two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We will now describe how we can adapt REBEL to this mode of feedback. Assuming that we can query the preference oracle \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) at will, we can modify the least square objective Eq. (9) to \ud835\udf03\ud835\udc61+1 := argmin \ud835\udf03 \u2211\ufe01 \ud835\udc65,\ud835\udc66,\ud835\udc66\u2032,\ud835\udc66\u2032\u2032\u2208D\ud835\udc61 \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032)) \u00132 where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032\u2032 \u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). 
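In implementation terms, the only change relative to the reward-based objective is the regression target. A small sketch of that target (ours; preference_prob is a hypothetical learned preference model returning P(y beats y_ref | x), and y_ref plays the role of the shared on-policy sample y'' drawn from pi_t(·|x)):

def spo_relative_reward(x, y, y_prime, y_ref, preference_prob):
    # l(x, y, y_ref) = 2 * P(y beats y_ref | x) - 1, as in Eq. 14.
    l_y = 2.0 * preference_prob(x, y, y_ref) - 1.0
    l_yp = 2.0 * preference_prob(x, y_prime, y_ref) - 1.0
    # Regression target that replaces r(x, y) - r(x, y') in Eq. 9.
    return l_y - l_yp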
When the exact value of \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is unavailable but only a binary preference feedback \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2208{0, 1} sampling from Bernoulli with mean \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032) is available, we can just replace \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) by \ud835\udc5c\ud835\udc66,\ud835\udc66\u2032 \u2212\ud835\udc5c\ud835\udc66\u2032,\ud835\udc66\u2032\u2032. It is easy to see that the Bayes optimal of the above least square regression problem is equal to: E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) \u2212E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61). Swamy et al. (2024) define an iteration-dependent reward \ud835\udc5f\ud835\udc61(\ud835\udc65, \ud835\udc66) := E\ud835\udc66\u2032\u2032\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65)\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032\u2032) = \ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration \ud835\udc61. Swamy et al. (2024) shows that running the exact MD (Eq. 2) with this iteration-dependent reward function \ud835\udc5f\ud835\udc61leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error. 4 Theoretical Analysis In the previous section, we interpret REBEL as the exact MD and show its convergence by assuming that least square regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: As long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds on the regressors we find. Assumption 1 (Regression generalization bounds). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212\ud835\udc5f(\ud835\udc65, \ud835\udc66\u2032)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. 
10 \fIntuitively, this assumption is saying that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small \ud835\udf16. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). Thus, we believe it is reasonable to believe this assumption holds in practice \u2013 see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (v.s. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error \ud835\udf16captures the possible errors from the learning process for \ud835\udf03\ud835\udc61+1 and it could depend on the complexity of the policy class and the number of samples used in the dataset D\ud835\udc61. For instance, when the the function ln \ud835\udf0b\u2212ln \ud835\udf0b\u2032 induced by the log-difference of two policies (\ud835\udf0b, \ud835\udf0b\u2032) are rich enough (e.g., policies are deep neural networks) to capture the reward difference, then \ud835\udf16in this assumption converges to zero as we increase the number of training data. Note that while \ud835\udf16can be small, it does not imply that the learned predictor will have a small prediction error in a point-wise manner \u2013 it almost certainly will not. Example 1. One simple example is when \ud835\udf0b(\ud835\udc66|\ud835\udc65) \u221dexp(\ud835\udf03\u22a4\ud835\udf19(\ud835\udc65, \ud835\udc66)) for some features \ud835\udf19(\ud835\udc65, \ud835\udc66). In this case, ln(\ud835\udf0b(\ud835\udc66|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66|\ud835\udc65)) \u2212ln(\ud835\udf0b(\ud835\udc66\u2032|\ud835\udc65)/\ud835\udf0b\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65)) = (\ud835\udf03\u2212\ud835\udf03\ud835\udc61)\u22a4(\ud835\udf19(\ud835\udc65, \ud835\udc66) \u2212\ud835\udf19(\ud835\udc65, \ud835\udc66\u2032)), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward \ud835\udc5f(\ud835\udc65, \ud835\udc66) is also linear in feature \ud835\udf19(\ud835\udc65, \ud835\udc66), then Eq. 9 is a well-specified linear regression problem, and \ud835\udf16typically scales in the rate of \ud835\udc42(\ud835\udc51/|D\ud835\udc61|) with \ud835\udc51being the dimension of feature \ud835\udf19. We can extend the above example to the case where \ud835\udf19is the feature corresponding to some kernel, e.g., RBF kernel or even Neural Tangent Kernel, which allows us to capture the case where \ud835\udf0bis a softmax wide neural network with the least square regression problem solved by gradient flow. The error \ud835\udf16again scales poly(\ud835\udc51/|D\ud835\udc61|), where \ud835\udc51is the effective dimension of the corresponding kernel. We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how the training data distribution is covering a comparator policy. Data Coverage. Recall that the base distribution \ud835\udf07can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy). 
Given a test policy \ud835\udf0b, we denote by \ud835\udc36\ud835\udf07\u2192\ud835\udf0bthe concentrability coefficient, i.e. \ud835\udc36\ud835\udf07\u2192\ud835\udf0b= max \ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) . (15) We say \ud835\udf07covers \ud835\udf0bif \ud835\udc36\ud835\udf07\u2192\ud835\udf0b< +\u221e. Our goal is to bound the regret between our learned policies and an arbitrary comparator \ud835\udf0b\u2217(e.g. the optimal policy if it is covered by \ud835\udf07) using \ud835\udf16and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm. Theorem 1. Under Assumption 1, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, among the learned policies \ud835\udf0b1, . . . , \ud835\udf0b\ud835\udc47, there must exist a policy \u02c6 \ud835\udf0b, such that: \u2200\ud835\udf0b\u2217: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\u2217(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2212E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\u02c6 \ud835\udf0b(\u00b7|\ud835\udc65)\ud835\udc5f(\ud835\udc65, \ud835\udc66) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16. ! . 11 \fHere the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217,\ud835\udc47. The above theorem shows a reduction from RL to supervised learning \u2014 as long as supervised learning works (i.e., \ud835\udf16is small), then REBEL can compete against any policy \ud835\udf0b\u2217that is covered by the base data distribution \ud835\udf07. In the regret bound, the 1/ \u221a \ud835\udc47comes from Mirror Descent style update, and \ud835\udc36\ud835\udf07\u2192\ud835\udf0b\u2217\ud835\udf16captures the cost of distribution shift: we train our regressors under distribution \ud835\udf0b\ud835\udc61and \ud835\udf07, but we want the learned regressor to predict well under \ud835\udf0b\u2217. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate 1/ \u221a \ud835\udc47, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound \u2014 being able compete against any policy that is covered by training distributions \u2013 is the strongest type of agnostic learning results known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work, we use the simplest and most intuitive definition of coverage \u2013 the density ratio-based definition in Eq. 15 \u2013 extension to more general ones such as transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the \ud835\udc42notation to Appendix B. 4.1 Extension to General Preferences Extending the above analysis to the general preference case is straightforward except that it requires a stronger coverage condition. This is because we want to find a Nash Equilibrium, which requires a comparison between the learned policy against all the other policies. 
Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023) and Cui and Du (2022b) have shown that the standard single policy coverage condition used in single-player optimization is provably not sufficient. In particular, they propose using a notion of unilateral concentrability for efficient learning, which can be defined as \ud835\udc36uni,\ud835\udf07:= max \ud835\udf0b,\ud835\udc65,\ud835\udc66,\ud835\udc66\u2032\u2032 \ud835\udf0bMW(\ud835\udc66|\ud835\udc65)\ud835\udf0b(\ud835\udc66\u2032\u2032|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65)\ud835\udf07(\ud835\udc66\u2032\u2032|\ud835\udc65) , in the general preference setting. Notably, the above unilateral concentrability coefficient \ud835\udc36uni,\ud835\udf07is equivalent to \ud835\udc36\ud835\udf07:= max\ud835\udf0b,\ud835\udc65,\ud835\udc66 \ud835\udf0b(\ud835\udc66|\ud835\udc65) \ud835\udf07(\ud835\udc66|\ud835\udc65) since \ud835\udc36\ud835\udf07\u2264\ud835\udc36uni,\ud835\udf07\u2264\ud835\udc362 \ud835\udf07. Therefore in the following discussion, we will use \ud835\udc36\ud835\udf07as the coverage condition. In addition, we also assume the generalization error of the regression problem is small, Assumption 2 (Regression generalization bounds for general preference). Over \ud835\udc47iterations, assume that for all \ud835\udc61, we have: E\ud835\udc65\u223c\ud835\udf0c,\ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65),\ud835\udc66\u2032\u223c\ud835\udf07(\u00b7|\ud835\udc65) \u00121 \ud835\udf02 \u0012 ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66|\ud835\udc65) \u2212ln \ud835\udf0b\ud835\udf03\ud835\udc61+1(\ud835\udc66\u2032|\ud835\udc65) \ud835\udf0b\ud835\udf03\ud835\udc61(\ud835\udc66\u2032|\ud835\udc65) \u0013 \u2212(\ud835\udc59(\ud835\udc65, \ud835\udc66, \ud835\udf0b\ud835\udc61) \u2212\ud835\udc59(\ud835\udc65, \ud835\udc66\u2032, \ud835\udf0b\ud835\udc61)) \u00132 \u2264\ud835\udf16, for some \ud835\udf16. Under the above coverage condition and generalization bound, we can show that REBEL is able to learn an approximate Minimax Winner: 12 \fTheorem 2. With assumption 2, after \ud835\udc47many iterations, with a proper learning rate \ud835\udf02, the policy b \ud835\udf0b= Unif({\ud835\udf0b\ud835\udc61}\ud835\udc47 \ud835\udc61=1) satisfies that: DG(b \ud835\udf0b) \u2264\ud835\udc42 \u221a\ufe02 1 \ud835\udc47+ \u221a\ufe01 \ud835\udc36\ud835\udf07\ud835\udf16. ! . Here the \ud835\udc42-notation hides problem-dependent constants that are independent of \ud835\udf16, \ud835\udc36\ud835\udf07,\ud835\udc47. We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay by moving to the more general preference setting and leaving the investigation of the necessarily coverage condition for future work. 5 Experiments The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset D\ud835\udc61= {\ud835\udc65, \ud835\udc66, \ud835\udc66\u2032}, where \ud835\udc65\u223c\ud835\udf0c, \ud835\udc66\u223c\ud835\udf0b\ud835\udc61(\u00b7|\ud835\udc65), \ud835\udc66\u2032 \u223c\ud835\udf07(\u00b7|\ud835\udc65). Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017). 
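Schematically, one REBEL iteration can be written as below (our own sketch, not the released training code; sample_prompts, generate, sequence_logprob, and reward_fn are hypothetical helpers supplied by the caller, and the learning rate is a placeholder):

import copy
import torch

def rebel_iteration(policy, sample_prompts, generate, sequence_logprob, reward_fn, eta, num_steps, lr=1e-6):
    # Freeze a copy of pi_t; `policy` is then optimized toward pi_{t+1}.
    old_policy = copy.deepcopy(policy).eval()
    optimizer = torch.optim.AdamW(policy.parameters(), lr=lr)
    for _ in range(num_steps):
        x = sample_prompts()                                   # x ~ rho
        y = generate(old_policy, x)                            # y ~ pi_t(.|x)
        y_prime = generate(old_policy, x)                      # y' ~ mu (here mu = pi_t)
        with torch.no_grad():
            logp_y_old = sequence_logprob(old_policy, x, y)
            logp_yp_old = sequence_logprob(old_policy, x, y_prime)
            target = reward_fn(x, y) - reward_fn(x, y_prime)   # relative reward
        pred = ((sequence_logprob(policy, x, y) - logp_y_old)
                - (sequence_logprob(policy, x, y_prime) - logp_yp_old)) / eta
        loss = ((pred - target) ** 2).mean()                   # squared-error objective of Eq. 9
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return policy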
We choose \mu = \pi_t such that both y and y' are generated by the current policy. We empirically assess REBEL's performance on both natural language generation and text-guided image generation.
5.1 Natural Language Generation
Baselines: We compare REBEL with baseline RL algorithms, PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) and its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving average baseline of the reward. We include two variants of RLOO with two (k = 2) and four (k = 4) generations per prompt.
Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020) [2] to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure all references in the dataset can be generated. Additional dataset details are in Appendix D.1.
Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each model is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023) [3] model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B [4] and 6.9B [5] models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and use the existing reward models with 2.8B [6] and 6.9B [7] parameters.
Footnotes: [2] Dataset available at https://github.com/openai/summarize-from-feedback. [3] HuggingFace Model Card: EleutherAI/pythia-1.4b-deduped. [4] HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr. [5] HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr. [6] HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__reward__tldr. [7] HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__reward__tldr.
Model size | Algorithm | Winrate (↑) | RM Score (↑) | KL(π || π_ref) (↓)
1.4B | SFT | 24.5% | -0.52 | -
1.4B | DPO | 43.8% | 0.11 | 30.9
1.4B | PPO | 51.6% | 1.73 | 29.1
1.4B | REBEL | 55.3% | 1.87 | 32.4
2.8B | SFT | 28.4% | -0.40 | -
2.8B | DPO | 53.5% | 2.41 | 66.5
2.8B | PPO | 67.2% | 2.37 | 27.4
2.8B | REBEL | 70.3% | 2.44 | 29.2
Table 1: Results on TL;DR Summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM Score is computed using the reward model with the respective size and the winrate is evaluated by GPT4. The models are trained with low-rank adapters. The best-performing method for each size and metric is highlighted in bold and the second best is underlined. We note that REBEL outperforms all baselines here in terms of the winrate.
6.9B | SFT | DPO | REINFORCE | PPO | RLOO (k = 2) | RLOO (k = 4) | REBEL
Winrate (↑) | 44.6% | 68.2% | 70.7%* | 77.6%‡ | 74.2%* | 77.9%* | 78.0%
* directly obtained from Ahmadian et al. (2024); ‡ directly obtained from Huang et al. (2024)
Table 2: Results on TL;DR Summarization on 6.9B models. We perform full-parameter training for all models. The best-performing method is highlighted in bold and the second best is underlined. 
For both REBEL and baseline methods using 1.4B and 2.8B parameters, we trained the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model, respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2.

Evaluation: We evaluate each method by its balance between reward model score and KL-divergence with respect to the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL objective. To evaluate the quality of the generations, we compute the winrate (Rafailov et al., 2023) against human references using GPT4 [8] (OpenAI, 2023). The winrate is computed from a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT4 as well as an example response is shown in Appendix D.3.

[8] Specific API checkpoint used throughout this section: gpt-4-0613

Figure 2: Plot of Reward vs KL-Divergence for 2.8B REBEL and PPO. We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the ellipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000-step mark into 10 equally sized bins and average the corresponding RM scores in each bin.

5.1.1 Quality Analysis

Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence (KL(π||π_ref)) using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E. We also perform full-parameter training for 6.9B models, and the winrates are shown in Table 2. We can observe that REBEL still outperforms all of the baselines, while REBEL, PPO, and RLOO (k = 4) have comparable performances (but we will soon show in the next section that REBEL is more tractable in computation and memory than PPO and RLOO with k = 4). An ablation analysis on the parameter η is in Appendix F.

The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the ellipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of training (the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training. For each algorithm, we group every generation from the test set by its KL value into 10 equally sized bins and calculate the average of the corresponding RM scores for each bin; a sketch of this procedure is given below.
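The per-generation KL estimate and the binning described above could be computed roughly as follows. Here policy_logprobs and ref_logprobs are assumed to be summed token log-probabilities of each test generation under the trained and reference (SFT) policies, and rm_scores the corresponding reward-model scores; this is a simplified sketch rather than the exact evaluation code.

```python
import torch

def kl_binned_rm_scores(policy_logprobs, ref_logprobs, rm_scores, num_bins=10):
    # Monte-Carlo estimate of the sequence-level KL(pi || pi_ref) for each sampled
    # generation y ~ pi(.|x): log pi(y|x) - log pi_ref(y|x).
    kl = policy_logprobs - ref_logprobs

    # Sort generations by KL and split into equally sized bins (deciles of the KL distribution).
    order = torch.argsort(kl)
    bins = torch.chunk(order, num_bins)

    # Average KL and RM score within each bin, as in the right panel of Figure 2.
    return [(kl[idx].mean().item(), rm_scores[idx].mean().item()) for idx in bins]
```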
We can see that REBEL achieves higher RM scores for generations with small divergence, while requiring larger divergence for the generations with the highest scores.

5.1.2 Runtime & Memory Analysis

Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL. The runtime includes both time for generation and policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have similar winrates to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO (k = 4).

We analyze the runtime and peak memory usage for 2.8B models using PPO, DPO, RLOO, and REBEL. The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order based on winrates. To the right of the dashed line, PPO, RLOO (k = 4), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match that of REBEL, as discussed in Section 5.1.1. RLOO (k = 2) has similar runtime and memory usage to REBEL since we set μ = π_t, making REBEL also generate twice per prompt; however, RLOO (k = 2) performs worse than REBEL. Compared to PPO and RLOO (k = 4), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it must update two networks: the policy network and the value network. RLOO (k = 4) requires generating 4 responses per prompt, which makes it slower and less memory efficient. Compared to the two baselines that achieve winrates similar to REBEL (PPO and RLOO with k = 4), we see that REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn a value network or compute advantage estimates. A minimal sketch of how these per-batch runtime and peak-memory measurements can be collected is given below.
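The per-batch timing and peak-memory numbers reported in Figure 3 can be gathered with standard PyTorch utilities; the snippet below is an illustrative sketch (the paper does not show its profiling code), where generate_batch and policy_update stand in for the per-batch generation and optimization routines.

```python
import time
import torch

def profile_step(generate_batch, policy_update, batch, device=0):
    # Reset the peak-memory counter so the measurement covers only this batch.
    torch.cuda.reset_peak_memory_stats(device)
    torch.cuda.synchronize(device)

    start = time.perf_counter()
    completions = generate_batch(batch)   # sample completions for each prompt
    policy_update(batch, completions)     # one policy (and, for PPO, value) update
    torch.cuda.synchronize(device)

    elapsed = time.perf_counter() - start                        # seconds per batch
    peak_gb = torch.cuda.max_memory_allocated(device) / 1024**3  # peak memory in GB
    return elapsed, peak_gb
```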
Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor. We report inter-quartile means (IQM) with 95% confidence intervals (CIs) across three seeds for both REBEL and PPO. The CIs were calculated with percentile bootstrap with stratified sampling over three random seeds.

5.2 Image Generation

We also consider the setting of image generation, where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images that garner a higher reward. Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024).

Baselines: We compare REBEL to a clipped, policy-gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) that aims to optimize aesthetic quality by obtaining high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus we simply refer to this baseline as PPO in this section.

Dataset: We use 45 common animals as generation prompts, similar to Black et al. (2023); Oertell et al. (2024) [9].

Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model [10], a finetune of Stable Diffusion (Rombach et al., 2021).

Evaluation: We evaluate PPO and REBEL on their reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. Images that tend to have the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals for our reported results across three random seeds.

[9] Dataset available at https://github.com/Owen-Oertell/rlcm
[10] Huggingface model card: SimianLuo/LCM_Dreamshaper_v7

Figure 5: Generated images using PPO and REBEL at an intermediate checkpoint. We note that at the same number of epochs, REBEL observes a higher reward under the reward model. This can further be seen in the more diverse backgrounds of images generated by REBEL with less training time.

5.3 Quality Analysis

Figure 4 shows that REBEL optimizes the consistency model faster at the beginning of training but eventually achieves similar performance to that of PPO. For our experiments, we tuned both batch size and learning rate for our algorithms, testing batch sizes of [4, 8, 16] per GPU and learning rates [1e-4, 3e-4, 6e-4, 1e-3]. Note that the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that eventually both PPO and REBEL start to generate good-looking images but ignore the text prompt entirely. However, from the perspective of purely optimizing the reward function, this behavior is not surprising, since the objective does not encourage maintaining consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lush backgrounds when compared to PPO's generations. Please see Appendix E.2 for more examples of generated images.

6 Related Work

Policy Gradients. Policy gradient (PG) methods (Nemirovskij and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024).
Broadly speaking, we can taxonomize PG methods into two families. The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still outperforms a REINFORCE baseline. The second family is adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy; these include NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive, and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice.

Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either reward (Peters and Schaal, 2007) or value (Peng et al., 2019) targets, which are then used to extract a policy, has previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two-stage procedure of prior methods into a single regression step.

Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in “aligning” language models, i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF), involves first fitting a reward model (i.e. a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions. It includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup.
However, recent work finds their performance to be worse than that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings. This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or from a preference model fit on human data (Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. (2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques – we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL comes with several key differentiating factors from the prior work. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works that regularize to the initial policy π_0 during updates, we perform conservative updates by regularizing π_{t+1} to π_t. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement, as the current policy can simply bounce around a ball centered at π_0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy's probabilities into our regression problem, we are able to prove stronger guarantees for REBEL.

7 Summary and Future Work

In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL only requires that we can drive down the training error on a least squares problem. This makes it strikingly simple to implement and scale. In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run, across both language modeling and guided image generation tasks. There are several open questions raised by our work. The first is whether using a loss function other than the square loss (e.g. log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g. first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e. non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary – we conjecture it is.
Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al. (2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024).",
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.06356v1",
"title": "Policy-Guided Diffusion",
"abstract": "In many real-world settings, agents must learn from an offline dataset\ngathered by some prior behavior policy. Such a setting naturally leads to\ndistribution shift between the behavior policy and the target policy being\ntrained - requiring policy conservatism to avoid instability and overestimation\nbias. Autoregressive world models offer a different solution to this by\ngenerating synthetic, on-policy experience. However, in practice, model\nrollouts must be severely truncated to avoid compounding error. As an\nalternative, we propose policy-guided diffusion. Our method uses diffusion\nmodels to generate entire trajectories under the behavior distribution,\napplying guidance from the target policy to move synthetic experience further\non-policy. We show that policy-guided diffusion models a regularized form of\nthe target distribution that balances action likelihood under both the target\nand behavior policies, leading to plausible trajectories with high target\npolicy probability, while retaining a lower dynamics error than an offline\nworld model baseline. Using synthetic experience from policy-guided diffusion\nas a drop-in substitute for real data, we demonstrate significant improvements\nin performance across a range of standard offline reinforcement learning\nalgorithms and environments. Our approach provides an effective alternative to\nautoregressive offline world models, opening the door to the controllable\ngeneration of synthetic training data.",
"authors": "Matthew Thomas Jackson, Michael Tryfan Matthews, Cong Lu, Benjamin Ellis, Shimon Whiteson, Jakob Foerster",
"published": "2024-04-09",
"updated": "2024-04-09",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"cs.RO"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "In many real-world settings, agents must learn from an offline dataset\ngathered by some prior behavior policy. Such a setting naturally leads to\ndistribution shift between the behavior policy and the target policy being\ntrained - requiring policy conservatism to avoid instability and overestimation\nbias. Autoregressive world models offer a different solution to this by\ngenerating synthetic, on-policy experience. However, in practice, model\nrollouts must be severely truncated to avoid compounding error. As an\nalternative, we propose policy-guided diffusion. Our method uses diffusion\nmodels to generate entire trajectories under the behavior distribution,\napplying guidance from the target policy to move synthetic experience further\non-policy. We show that policy-guided diffusion models a regularized form of\nthe target distribution that balances action likelihood under both the target\nand behavior policies, leading to plausible trajectories with high target\npolicy probability, while retaining a lower dynamics error than an offline\nworld model baseline. Using synthetic experience from policy-guided diffusion\nas a drop-in substitute for real data, we demonstrate significant improvements\nin performance across a range of standard offline reinforcement learning\nalgorithms and environments. Our approach provides an effective alternative to\nautoregressive offline world models, opening the door to the controllable\ngeneration of synthetic training data.",
"main_content": "Introduction A key obstacle to the real-world adoption of reinforcement learning (RL, Sutton & Barto, 2018) is its notorious sample inefficiency, preventing agents from being trained on environments with expensive or slow online data collection. A closely related challenge arises in environments where exploration, required by standard RL methods, is inherently dangerous, limiting their applicability. Yet many such settings come with an abundance of pre-collected or offline experience, gathered under one or more behavior policies (Yu et al., 2020). These settings enable the application of offline RL (Levine et al., 2020), where a policy is optimized from an offline dataset without access to the environment. However, the distribution shift between the target policy (i.e., the policy being optimized) and the collected data poses many challenges (Kumar et al., 2020; Kostrikov et al., 2021; Fujimoto et al., 2019). In particular, distribution shift between the target and behavior policies leads to an out-of-sample issue: since the goal of offline RL is to exceed the performance of the behavior policy, the distribution of state-action pairs sampled by the target policy necessarily differs from that of the behavior policy, and its samples are therefore underrepresented (or unavailable) in the offline dataset. However, the maximizing nature of RL classically leads to overestimation bias when generalizing to rarely seen state-action pairs, resulting in an overly optimistic target policy. As a solution, most previous *Correspondence to jackson@robots.ox.ac.uk. \u2020Equal supervision. 1 arXiv:2404.06356v1 [cs.LG] 9 Apr 2024 \fFigure 1: Offline reinforcement learning with policy-guided diffusion. Offline data from a behavior policy is first used to train a trajectory diffusion model. Synthetic experience is then generated with diffusion, guided by the target policy in order to move trajectories further on-policy. An agent is then trained for multiple steps on the synthetic dataset, before it is regenerated. model-free work has proposed severe regularization of the target policy\u2014such as penalizing value estimates in uncertain states (Kumar et al., 2020; An et al., 2021) or regularizing it towards the behavior policy (Fujimoto & Gu, 2021)\u2014sacrificing target policy performance for stability. In this paper, we focus on an alternative class of methods: generating synthetic experience to both augment the offline dataset and lessen the out-of-sample issue. Prior methods in this area use a model-based approach (Yu et al., 2020; Kidambi et al., 2020; Ball et al., 2021, see Section 3.1), in which a single-step world model is learned from the offline dataset, which the target policy interacts with to generate synthetic on-policy training experience. While this allows the target policy to sample synthetic trajectories under its own action distribution, compounding model error usually forces these methods to severely truncate model rollouts to a handful of interactions. Thus, there are two options which trade off coverage and bias. The first is to roll out from the initial state, which is unbiased but lacks coverage. The second is to roll out from states randomly sampled from the data set, which increases coverage but introduces bias. Neither option fully addresses the difference in observed states between the behavior and target policy when deployed, nor the out-of-sample issue mentioned above. 
Instead, we propose policy-guided diffusion (PGD, Figure 1), which avoids compounding error by modeling entire trajectories (Section 3.2) rather than single-step transitions. To achieve this, we train a diffusion model on the offline dataset, from which we can sample synthetic trajectories under the behavior policy. However, while this addresses data sparsity, these trajectories are still off-distribution from our target policy. Therefore, inspired by classifier-guided diffusion (Dhariwal & Nichol, 2021), we apply guidance from the target policy to shift the sampling distribution towards that of the target policy. At each diffusion step, our guidance term directly increases the likelihood of sampled synthetic actions under the target policy, while the diffusion model updates the entire trajectory towards those in the dataset. This yields a regularized target distribution that we name the behavior-regularized target distribution, ensuring actions do not diverge too far from the behavior policy, limiting generalization error. As a result, PGD does not suffer from compounding error, while also generating synthetic trajectories that are more representative of the target policy. We illustrate this point in Figure 2. Our approach results in consistent improvements in offline RL performance for agents trained on policy-guided synthetic data, compared to those trained on unguided synthetic or real data. We evaluate using the standard TD3+BC (Fujimoto & Gu, 2021) and IQL (Kostrikov et al., 2021) algorithms across a variety of D4RL (Fu et al., 2020) datasets. Notably, we see a statistically significant 11.2% improvement in performance for the TD3+BC algorithm aggregated across MuJoCo (Todorov et al., 2012) locomotion tasks compared to training on the real data, with no algorithmic changes. Our results also extend to even larger improvements for the challenging Maze2d navigation environments. Furthermore, we analyze synthetic trajectories generated by PGD and show that PGD achieves lower dynamics error than PETS (Chua et al., 2018)\u2014a prior offline model-based method\u2014while matching the target policy likelihood of PETS. Together, our experiments illustrate the potential of PGD as an effective drop-in replacement for real data\u2014across agents, environments, and behavior policies. 2 \f2 Background 2.1 Offline Reinforcement Learning Formulation We adopt the standard reinforcement learning formulation, in which an agent acts in a Markov Decision Process (MDP, Sutton & Barto, 2018). An MDP is defined as a tuple M = \u27e8S, A, p0, T, R, H\u27e9, where s \u2208S and a \u2208A are the state and action spaces, p0(s0) is a probability distribution over the initial state, T(st+1|st, at) is a conditional probability distribution defining the transition dynamics, R : S \u00d7 A \u2212 \u2192R is the reward function, \u03b3 is the discount factor, and H is the horizon. In RL, we learn a policy \u03c0(a|s) that defines a conditional probability distribution over actions for each state, inducing a distribution over trajectories \u03c4 := (s0, a1, r1, s1, . . . , sH) given by p\u03c0,M(\u03c4) = p0(s0) H\u22121 Y t=0 \u03c0(at|st) \u00b7 T(st+1|st, at), (1) omitting the reward function throughout our work for conciseness. Our goal is to learn a policy that maximizes the expected return, defined as Ep\u03c0,M [V (\u03c4)] where V (\u03c4) := PH t=0 rt is the return of a trajectory. 
The offline RL setting (Levine et al., 2020) extends this, preventing the agent from interacting with the environment and instead presenting it with a dataset of trajectories \u03c4 \u2208Doff gathered by some unknown behavior policy \u03c0off, with which to optimize a target policy \u03c0target. Out-of-Sample Generalization The core challenge of offline RL emerges from the distribution shift between the behavior distribution p\u03c0off,M(\u03c4) and the target distribution p\u03c0target,M(\u03c4), which are otherwise denoted poff(\u03c4) and ptarget(\u03c4) for conciseness. Optimization of \u03c0target on Doff can lead to catastrophic value overestimation at unobserved actions, a problem termed the out-of-sample issue (Kostrikov et al., 2021). As such, model-free offline algorithms typically regularize the policy towards the behavior distribution, either explicitly (Fujimoto & Gu, 2021; Kumar et al., 2020) or implicitly (Kostrikov et al., 2021). Alternatively, prior work proposes learning a single-step world model M from Doff (Yu et al., 2020; Kidambi et al., 2020; Lu et al., 2022). By rolling out the target policy using M, we generate trajectories \u03c4 \u223cptarget(\u03c4), with the aim of avoiding distribution shift. However, in practice, this technique only pushes the generalization issue into the world model. In particular, RL policies are prone to exploiting errors in the world model, which can compound over the course of an episode. When combined with typical maximizing operations used in off-policy RL, this results in value overestimation bias (Sims et al., 2024). 2.2 Diffusion Models Definition To generate synthetic data, we consider diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020), a class of generative model that allows one to sample from a distribution p(x) by iteratively reversing a forward noising process. Karras et al. (2022) present an ODE formulation of diffusion which, given a noise schedule \u03c3(i) indexed by i, mutates data according to dx = \u2212\u02d9 \u03c3(i)\u03c3(i)\u2207x log p (x; \u03c3(i)) di, (2) where \u02d9 \u03c3 = d\u03c3 di and \u2207x log p (x; \u03c3(i)) is the score function (Hyv\u00e4rinen & Dayan, 2005), which points towards areas of high data density. Intuitively, infinitesimal forward or backward steps of this ODE respectively nudge a sample away from or towards the data. To generate a sample, we start with pure noise at the highest noise level \u03c3max, and iteratively denoise in discrete timesteps under Equation 2. Classifier Guidance Our method is designed to augment the data-generating process towards on-policy trajectories from the target distribution ptarget(\u03c4), rather than the behavior distribution poff(\u03c4). To achieve this, we take inspiration from classifier guidance (Dhariwal & Nichol, 2021), 3 \fFigure 2: Trajectories from an illustrative 2D environment, in which the start location is indicated by \u2022 and the goals for the behavior and target policies are indicated by \u00d7 and \u00d7. Left: Rollouts from the target policy in the real environment. Right: Offline datasets gathered by the behavior policy suffer from distribution shift and limited sample size. Truncated world models (Yu et al., 2020; Kidambi et al., 2020) previously used in offline model-based reinforcement learning offer a partial solution to this problem but suffer from bias due to short rollouts. Meanwhile, unguided diffusion (Lu et al., 2023) can increase the sample size, but maintains the original distribution shift. 
In contrast, policy-guided diffusion samples from a regularized target distribution, generating entire trajectories with low transition error but higher likelihood under the target distribution. which leverages a differentiable classifier to augment the score function of a pre-trained diffusion model towards a class-conditional distribution p(x|y). Concretely, this adds a classifier gradient to the score function, giving \u2207x log p\u03bb (x|y; \u03c3(i)) = \u2207x log p (x; \u03c3(i)) + \u03bb\u2207x log p\u03b8 (y|x; \u03c3(i)) , (3) where \u2207x log p\u03b8 (y|x; \u03c3(i)) is the gradient of the classifier and \u03bb is the guidance weight. 3 On-Policy Sampling from Offline Data Generating synthetic agent experience is a promising approach to solving out-of-sample generalization in offline RL. By generating experience that is unseen in the dataset, the policy may be directly optimized on OOD samples, thereby moving the generalization problem from the policy to the generative model. Some prior work has suggested learning a model from the offline dataset (Lu et al., 2023), thereby sampling synthetic experience from the behavior distribution. While this improves sample coverage, the approach retains many of the original challenges of offline RL. As with the behavior policy, the synthetic trajectories may be suboptimal, meaning we still require conservative off-policy RL techniques to train the agent. Instead, we seek to extend this approach by making our generative model sample from the target distribution. This reduces the need for conservatism and generates synthetic trajectories with increasing performance as the agent improves over training. Practically, the effectiveness of this approach depends on how we parameterize each of the terms of the trajectory distribution (Equation 1). In this section, we consider two parameterizations: autoregressive and direct. 3.1 Autoregressive Generation \u2014 Model T , Sample p(s0) The autoregressive\u2014or model-based\u2014approach to generating on-policy data is to use the offline dataset to train a one-step transition model T (st+1|st, at; \u03b8). To generate unbiased sample trajectories from the target distribution, we first sample an initial state (i.e., one that starts an episode) from the offline dataset s0 \u223cDoff. Next, we roll out our agent in the learned model by iteratively sampling actions from the target policy and approximating environment transitions with the learned dynamics model. However, compounding error from the transition model usually requires agent rollouts to be much shorter than the environment horizon\u2014such that the agent takes k \u226aH steps.\u2021 Consequently, any states more than k steps away from any initial state cannot be generated in this manner, limiting the applicability of this approach. \u2021Typically k \u22645 (Janner et al., 2019; Yu et al., 2020). 4 \fAs an approximation, autoregressive methods typically sample initial states from any timestep st \u223cDoff in the offline dataset. Given a truncated rollout length k, this may be seen as approximating the sub-trajectory distribution\u2014i.e., the trajectory from time t to t + k\u2014given by ptarget(\u03c4t:t+k; \u03b8) = ptarget(st) \u00b7 ptarget(\u03c4t:t+k|st; \u03b8), (4) by instead modeling F(\u03c4t:t+k; \u03b8) = poff(st) \u00b7 ptarget(\u03c4t:t+k|st; \u03b8). 
(5) Here, we denote the stationary state distributions of the target and behavior policies at time t by ptarget(st) and poff(st) respectively, and define the conditional sub-trajectory distribution as ptarget(\u03c4t:t+k|st; \u03b8) := k\u22121 Y j=0 \u03c0target(at+j|st+j) \u00b7 T (st+j+1|st+j, at+j; \u03b8). (6) When generating trajectories from this distribution, the difference between ptarget(st) and poff(st) biases the start of rollouts towards states visited by the behavior policy. Furthermore, we still require k to be small to avoid compounding error. In combination, sampling from the offline dataset \u201canchors\u201d synthetic rollouts to states in the offline dataset, while truncated rollouts prevent synthetic trajectories from moving far from this anchor. Therefore, the practical application of autoregressive generation leads to a strong bias towards the behavior distribution and fails to address the out-of-sample problem. 3.2 Direct Generation \u2014 Model poff(\u03c4) As an alternative to autoregressive generation, we can parameterize the target distribution by directly modeling the behavior distribution, as follows: ptarget(\u03c4) = p(s0) H\u22121 Y t=0 \u03c0target(at|st) \u00b7 T (st+1|st, at) = p(s0) H\u22121 Y t=0 \u03c0target(at|st) \u03c0off(at|st) \u00b7 \u03c0off(at|st) \u00b7 T (st+1|st, at) = poff(\u03c4) H\u22121 Y t=0 wat,st \u2248poff(\u03c4; \u03b8) H\u22121 Y t=0 wat,st = ptarget(\u03c4; \u03b8) (7) where wa,s := \u03c0target(a|s) \u03c0off(a|s) denotes the importance sampling weight for (a, s) (Precup, 2000). This directly parameterizes the behavior distribution poff(\u03c4; \u03b8)\u2014which may be learned by modeling entire trajectories on the offline dataset\u2014and adjusts their likelihoods by the relative probabilities of actions wat,st under the target and behavior policies. By jointly modeling the initial state distribution, transition function, and behavior policy, such a parameterization is not required to enforce the Markov property. As a result, it can directly generate entire trajectories, thereby avoiding the compounding model error suffered by autoregressive methods when iteratively generating transitions. However, computing wat,st requires access to the behavior policy \u03c0off(a|s), which is not assumed in offline RL. Prior work has explored modeling the behavior policy from the offline dataset and using this to compute importance sampling corrections. However, products of many importance weights can lead to problems with high variance (Precup et al., 2000; Levine et al., 2020). 5 \f4 Policy-Guided Diffusion In this work, we propose a method following the direct generation approach outlined in Section 3.2, named policy-guided diffusion (PGD, Algorithm 1). Following the success of diffusion models at generating trajectories (Janner et al., 2022; Lu et al., 2023), we first train a trajectory-level diffusion model on the offline dataset to model the behavior distribution. Then, inspired by classifier-guided diffusion (Section 2.2), we guide the diffusion process using the target policy to move closer to the target distribution. Specifically, during the denoising process, we compute the gradient of the action distribution for each action under the target policy, using it to augment the diffusion process towards high-probability actions. In doing so, we approximate a regularized target distribution that equally weights action likelihoods under the behavior and target policies. 
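To make the guided denoising step concrete, the following is a minimal, illustrative PyTorch-style sketch of a single policy-guided Euler update in the spirit described above (Algorithm 1 of the paper additionally injects noise, schedules the guidance weight, and applies a second-order correction). The denoiser, the policy's log_prob interface, and the dictionary trajectory format are assumptions made for the sketch, not the authors' exact implementation.

```python
import torch

def guided_euler_step(denoiser, policy, traj, sigma, sigma_next, lam):
    # traj: dict with noised 'states' and 'actions' tensors for an entire trajectory.
    denoised = denoiser(traj, sigma)                         # estimate of the clean trajectory
    d = {k: (traj[k] - denoised[k]) / sigma for k in traj}   # denoising (score-based) direction

    # Gradient of the target-policy log-likelihood w.r.t. the denoised actions.
    actions = denoised['actions'].detach().requires_grad_(True)
    logp = policy.log_prob(actions, denoised['states'].detach()).sum()
    grad = torch.autograd.grad(logp, actions)[0]
    grad = grad / grad.norm()                                # normalize guidance to a unit vector

    # Nudge the noised actions towards high target-policy probability, then take the Euler step.
    guided = dict(traj)
    guided['actions'] = traj['actions'] + lam * grad
    return {k: guided[k] + (sigma_next - sigma) * d[k] for k in guided}
```

As in the method described above, guidance here is applied only to the action components; the full procedure additionally anneals lam with a cosine schedule (Section 4.2).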
In this section, we derive PGD as an approximation of the behavior-regularized target distribution (Section 4.1), then describe practical details for controlling and stabilizing policy guidance (Section 4.2). We provide a summary of PGD against alternative sources of training data in Table 1. Algorithm 1 Trajectory sampling via policy-guided diffusion \u2014 based on Karras et al. (2022). 1: Parameters: Noise schedule \u03c3n, guidance schedule \u03bbn, noise factor \u03b3n, noise level Snoise, diffusion steps Ndiffusion 2: Required: Denoiser model D\u03b8, target policy \u03c0\u03d5 3: sample \u03c40 \u223cN(0, \u03c32 0I) \u25b7Sample random noise trajectory 4: for n = 0 to Ndiffusion \u22121 do 5: sample \u03f5n \u223cN(0, S2 noiseI) \u25b7Temporarily increase noise level 6: \u02c6 \u03c3n \u2190\u03c3n + \u03b3n\u03c3n 7: \u02c6 \u03c4n \u2190\u03c4n + p \u02c6 \u03c32 n \u2212\u03c32 n\u03f5n 8: \u00af \u03c4n \u2190D\u03b8(\u02c6 \u03c4n; \u02c6 \u03c3n) \u25b7Estimate denoised trajectory 9: dn \u2190(\u02c6 \u03c4n \u2212\u00af \u03c4n) /\u02c6 \u03c3n \u25b7Evaluate \u2202\u03c4 \u2202\u03c3 at \u02c6 \u03c3n 10: gn \u2190\u2207\u00af \u03c4 actions n \u03c0\u03d5(\u00af \u03c4 actions n |\u00af \u03c4 states n ) \u25b7Compute denoised action gradient 11: \u02c6 \u03c4 actions n \u2190\u02c6 \u03c4 actions n + \u03bbn(gn/\u2225gn\u22252) \u25b7Apply policy guidance to noised actions 12: \u03c4n+1 \u2190\u02c6 \u03c4n + (\u03c3n+1 \u2212\u02c6 \u03c3n)dn \u25b7Apply Euler step 13: if \u03c3n+1 \u0338= 0 then 14: d\u2032 n \u2190(\u03c4n+1 \u2212D\u03b8(\u03c4n+1; \u03c3n+1)) /\u03c3n+1 \u25b7Apply 2nd order correction 15: \u02c6 \u03c4n+1 \u2190\u02c6 \u03c4n + (\u03c3n+1 \u2212\u02c6 \u03c3n) \u0000 1 2dn + 1 2d\u2032 n \u0001 16: end if 17: end for 18: return \u03c4N 4.1 Behavior-Regularized Target Distribution Policy Guidance Derivation To sample a trajectory via diffusion, we require a noise-conditioned score function \u2207\u02c6 \u03c4 log p(\u02c6 \u03c4; \u03c3) for a noised trajectory \u02c6 \u03c4 := (\u02c6 s0, \u02c6 a1, \u02c6 r1, \u02c6 s1, . . . , \u02c6 sH) under a distribution p(\u03c4) at a noise level \u03c3. Given an offline dataset Doff, it is straightforward to learn this function under the behavior distribution, \u2207\u02c6 \u03c4 log poff(\u02c6 \u03c4; \u03c3), by training a denoiser model to reconstruct noised trajectories from Doff. However, there is no apparent method to directly model the noise-conditioned score function \u2207\u02c6 \u03c4 log ptarget(\u02c6 \u03c4; \u03c3) for the target distribution (see Appendix B for further discussion), meaning we require an approximation. To achieve this, we consider the score function of a noise-free trajectory \u03c4 under the target distribution, based on the formulation from Equation 7, \u2207\u03c4 log ptarget(\u03c4) = \u2207\u03c4 log poff(\u03c4) + H\u22121 X t=0 (\u2207\u03c4 log \u03c0target (at|st) \u2212\u2207\u03c4 log \u03c0off (at|st)) . (8) 6 \fIn the limit of noise \u03c3 \u21920, the noise-conditioned score function \u2207\u02c6 \u03c4 log ptarget(\u02c6 \u03c4; \u03c3) clearly approaches \u2207\u03c4 log ptarget(\u03c4). Therefore, we may approximate this function by \u2207\u02c6 \u03c4 log ptarget(\u02c6 \u03c4; \u03c3) \u2248\u2207\u02c6 \u03c4 log poff(\u02c6 \u03c4; \u03c3) + H\u22121 X t=0 (\u2207\u02c6 \u03c4 log \u03c0target (\u02c6 at|\u02c6 st) \u2212\u2207\u02c6 \u03c4 log \u03c0off (\u02c6 at|\u02c6 st)) , (9) for \u03c3 \u22480. 
Whilst iteratively denoising under this function (Section 2.2) does not model ptarget(\u03c4) exactly, the score function approaches \u2207\u02c6 \u03c4 log ptarget(\u02c6 \u03c4; \u03c3) towards the end of the denoising process, which we believe provides an effective approximation. Excluding Behavior Policy Guidance As discussed, we may directly model the first term of Equation 9 by training a denoiser model. Furthermore, we may directly compute target policy guidance \u2207\u02c6 \u03c4 log \u03c0target (\u02c6 at|\u02c6 st)\u2014the second term of this approximation\u2014as we assume access to a (differentiable) target policy in the offline RL setting. However, we generally do not have access to the behavior policy, preventing us from computing \u2207\u02c6 \u03c4 log \u03c0off (\u02c6 at|\u02c6 st). Due to this, we exclude behavior policy guidance from our approximation, resulting in the score function \u2207\u02c6 \u03c4 log poff(\u02c6 \u03c4; \u03c3) + PH\u22121 t=0 \u2207\u02c6 \u03c4 log \u03c0target (\u02c6 at|\u02c6 st). As \u03c3 \u21920, this approaches the score function for a proxy distribution of the form F(\u03c4; \u03c0target) \u221dpoff(\u03c4) H\u22121 Y t=0 \u03c0target(at|st) = poff(\u03c4) \u00b7 qtarget(\u03c4) = ptarget(\u03c4) \u00b7 qoff(\u03c4), (10) where qtarget(\u03c4) := QH\u22121 t=0 \u03c0target(at|st) denotes the product of action probabilities under the target policy and qoff(\u03c4) denotes the same quantity under the behavior policy. Therefore, we hypothesize that excluding behavior policy guidance is an effective form of regularization, as it biases trajectories towards the support of the offline data, thereby limiting model error and the out-of-sample problem. We refer to F(\u03c4; \u03c0target) as the behavior-regularized target distribution due to it balancing action likelihoods under the behavior and target policies, and provide further discussion in Appendix C. Finally, as a promising avenue for future work, we note that the behavior policy may be modeled by applying behavior cloning to Doff, allowing for the inclusion of behavior policy guidance in the offline RL setting. Excluding State Guidance Target policy guidance \u2207\u02c6 \u03c4 log \u03c0target (\u02c6 at|\u02c6 st) has non-zero gradients for the state and action at timestep t. In practice, the action component \u2207\u02c6 at log \u03c0target (\u02c6 at|\u02c6 st) typically has an efficient, closed-form solution, with \u03c0target (\u02c6 at|\u02c6 st) commonly being Gaussian for continuous action spaces. In contrast, for neural network policies, the state component \u2207\u02c6 st log \u03c0target (\u02c6 at|\u02c6 st) requires backpropagating gradients through the policy network, which is both expensive to compute and can lead to high variance on noisy, out-of-distribution states. Due to this, we apply policy guidance to only the noised action, yielding our policy-guided score function sPGD(\u02c6 \u03c4; \u03c3) = \u2207\u02c6 \u03c4 log poff(\u02c6 \u03c4; \u03c3) | {z } Behavior score function + \u2207\u02c6 a log qtarget(\u02c6 \u03c4) | {z } Target policy guidance , (11) where (abusing notation) \u2207\u02c6 a denotes the gradient \u2207\u02c6 \u03c4 of \u02c6 \u03c4, with non-action components set to 0. 4.2 Improving Policy Guidance Controlling Guidance Strength A standard technique from classifier-guided diffusion is the use of guidance coefficients (Dhariwal & Nichol, 2021). 
These augment the guided score function by introducing a controllable coefficient on the guidance term. Applied to the PGD score function (Equation 11), this has the form sPGD(\u02c6 \u03c4; \u03c3, \u03bb) = \u2207\u02c6 \u03c4 log poff(\u02c6 \u03c4; \u03c3) + \u03bb\u2207\u02c6 a log qtarget(\u02c6 \u03c4), (12) 7 \fFigure 3: Left: Trajectory probability distribution for an example behavior distribution poff(\u03c4) and target policy likelihood qtarget(\u03c4). Right: Corresponding PGD sampling distribution (Equation 13) computed over a range of policy-guidance coefficients \u03bb. By increasing \u03bb, we transform from the sampling distribution towards the regions of high target policy likelihood, making PGD an effective mechanism for controlling the level of regularization towards the behavior distribution. where \u03bb denotes the guidance coefficient. As \u03c3 \u21920, this transforms the sampling distribution to F(\u03c4|\u03c0target; \u03bb) \u221dpoff(\u03c4) \u00b7 qtarget(\u03c4)\u03bb. (13) Intuitively, \u03bb interpolates the actions in the sampling distribution between the behavior and target distributions. By tuning \u03bb, we can therefore control the strength of guidance towards the target policy, avoiding high dynamics error when the target policy is far from the behavior policy. We visualize this effect in Figure 3 and analyze its impact on target policy likelihood in Figure 5. Following Ma et al. (2023), we also apply a cosine guidance schedule to the guidance coefficient, \u03bbn = \u03bb \u00b7 (\u03c3n + \u03b2\u03c3N \u00b7 sin(\u03c0 \u00b7 n/N)), (14) where \u03b2 is the cosine weight, which is set to 0.3 in all experiments. By decreasing the strength of guidance in later steps, we find that this schedule stabilizes guidance and reduces dynamics error. Stabilizing Guided Diffusion When under distribution shift, RL policies are known to suffer from poor generalization to unseen states (Kirk et al., 2023). This makes policy guidance challenging, since the policy must operate on noised states, and compute action gradients from noised actions. Similar issues have been studied in classifier-guided diffusion (Ma et al., 2023), where the classifier gradient can be unstable when exposed to out-of-distribution inputs. Bansal et al. (2023) alleviate this issue by applying guidance to the denoised sample estimated by the denoiser model, rather than the original noised sample, in addition to normalizing the guidance gradient to a unit vector. By applying these techniques to policy guidance, we lessen the need for the target policy to generalize to noisy states, which we find decreases dynamics error. Table 1: Overview of training experience sources in offline RL\u2014for each, we consider the sampling distribution, expected error in transition dynamics, likelihood of actions under the target policy, and state space coverage beyond the behavior distribution. Policy-guided diffusion provides an effective trade-off between each error, likelihood, and coverage. 
Data source Distribution Error (\u2193) Likelihood (\u2191) Coverage (\u2191) Offline dataset poff(\u03c4) \u2014 Low Low Episodic world model ptarget(\u03c4) High \u2014 High Truncated world model Equation 5 Low \u2014 Low Unguided diffusion poff(\u03c4) Low Low High Policy-guided diffusion Equation 10 Low High High 8 \f5 Results Through our experiments, we first demonstrate that agents trained with synthetic experience from PGD outperform those trained on unguided synthetic data or directly on the offline dataset (Section 5.2). We show that this effect is consistent across agents (TD3+BC and IQL), environments (HalfCheetah, Walker2d, Hopper, and Maze), behavior policies (random, mixed, and medium), and modes of data generation (continuous and periodic). Following this, we demonstrate that tuning the guidance coefficient enables PGD to sample trajectories with high action likelihood across a range of target policies. Finally, we verify that PGD retains low dynamics error despite sampling high-likelihood actions from the policy (Section 5.3). 5.1 Experimental Setup We evaluate PGD on the MuJoCo and Maze2d continuous control datasets from D4RL (Fu et al., 2020; Todorov et al., 2012). For MuJoCo, we consider the HalfCheetah, Walker2d, and Hopper environments with random (randomly initialized behavior policy), medium (suboptimal behavior policy), and medium-replay (or \u201cmixed\u201d, the replay buffer from medium policy training) datasets. For Maze2d we consider the original (sparse reward) instances of the umaze, medium and large layouts. We train 4 trajectory diffusion models on each dataset, for which we detail hyperparameters in Appendix A. In Section 5.3, we conduct analysis of PGD against MOPO-style PETS (Chua et al., 2018) models, an autoregressive world model composed of an ensemble of probabilistic models, for which we use model weights from OfflineRL-Kit (Sun, 2023). To demonstrate synthetic experience from PGD as a drop-in substitute for the real dataset, we transfer the original hyperparameters for IQL (Kostrikov et al., 2021) and TD3+BC (Fujimoto & Gu, 2021)\u2014 as tuned on the real datasets\u2014without any further tuning. Policy guidance requires a stochastic target policy, in order to compute the gradient of the action distribution. Since TD3+BC trains a deterministic policy, we perform guidance by modeling the action distribution as a unit Gaussian centered on the deterministic action. We implement all agents and diffusion models from scratch in Jax (Bradbury et al., 2018), which may be found at https://github.com/EmptyJackson/policy-guided-diffusion. 5.2 Offline Reinforcement Learning For each D4RL dataset, we train two popular model-free offline algorithms, TD3+BC (Fujimoto & Gu, 2021) and IQL (Kostrikov et al., 2021) on synthetic experience generated by trajectory diffusion models with and without policy guidance, as well as on the real dataset. We first consider periodic generation of synthetic data, in which the synthetic dataset is regenerated after extended periods of agent training, such that the agent is near convergence on the synthetic dataset at the point it is regenerated with the current policy. Each epoch, we generate a dataset of 214 synthetic trajectories of length 16. Following the notation of Algorithm 2, we set the number of epochs to Nepochs = 4 with Npolicy = 250,000 train steps per epoch, meaning the agent is trained to close to convergence before the dataset is regenerated. 
This can be viewed as solving a sequence of offline RL tasks with synthetic datasets, in which the behavior policy is the target policy from the previous generation. Using periodic generation, performance improves significantly across benchmarks for both IQL and TD3+BC (Table 2). In MuJoCo, the most consistent improvement is on mixed datasets, with 4 out of 6 experiments achieving significant performance improvement. This is to be expected, as these datasets contain experience from a mixture of behavior policy levels. In this case, the diffusion model is likely to be able to represent a wide variety of policies, and on-policy guidance would naturally produce higher return trajectories as the target policy improves. In order to demonstrate the flexibility of PGD, we also evaluate PGD in a continuous generation setting, using a data generation rate closer to that of traditional model-based methods. For this, we set Nepochs = 100 and Npolicy = 10,000, then lower the sample size to match the overall number of synthetic trajectories generated by periodic generation across training. Due to the decrease in sample 9 \fTable 2: Final return of IQL and TD3+BC agents trained on real, unguided (\u03bb = 0) synthetic and policy-guided (\u03bb = 1) synthetic data\u2014mean and standard error over 4 seeds (diffusion models and agents) is presented, with significant improvements (p < 0.05) shaded. IQL TD3+BC Dataset Unguided Guided Dataset Unguided Guided Random HalfCheetah 9.1 \u00b1 2.2 2.6 \u00b1 0.1 6.5 \u00b1 1.7 11.2 \u00b1 0.8 11.0 \u00b1 0.4 21.1 \u00b1 0.9 Walker2d 4.3 \u00b1 0.5 2.7 \u00b1 0.7 5.3 \u00b1 0.3 0.5 \u00b1 0.3 1.1 \u00b1 1.2 \u22120.3 \u00b1 0.1 Hopper 7.4 \u00b1 0.4 5.2 \u00b1 0.9 4.9 \u00b1 1.0 7.4 \u00b1 0.6 4.2 \u00b1 1.4 5.5 \u00b1 2.1 Mixed HalfCheetah 44.2 \u00b1 0.2 43.6 \u00b1 0.2 43.6 \u00b1 0.2 44.7 \u00b1 0.1 43.1 \u00b1 0.2 46.1 \u00b1 0.3 Walker2d 81.3 \u00b1 2.0 85.2 \u00b1 0.3 84.9 \u00b1 1.4 82.7 \u00b1 1.3 70.7 \u00b1 10.1 84.0 \u00b1 1.0 Hopper 82.9 \u00b1 3.5 97.4 \u00b1 2.7 100.5 \u00b1 0.5 58.6 \u00b1 11.2 52.1 \u00b1 1.8 91.9 \u00b1 4.3 Medium HalfCheetah 48.4 \u00b1 0.1 45.4 \u00b1 0.1 45.1 \u00b1 0.1 48.6 \u00b1 0.1 45.3 \u00b1 0.2 47.6 \u00b1 0.3 Walker2d 81.7 \u00b1 1.4 82.1 \u00b1 0.9 77.8 \u00b1 3.6 84.8 \u00b1 0.1 85.2 \u00b1 0.2 86.3 \u00b1 0.3 Hopper 63.6 \u00b1 0.8 59.7 \u00b1 2.0 62.8 \u00b1 1.2 62.4 \u00b1 0.9 57.4 \u00b1 0.4 63.1 \u00b1 0.6 Total 46.9 \u00b1 0.4 47.0 \u00b1 0.4 47.9 \u00b1 0.3 44.5 \u00b1 1.1 41.1 \u00b1 1.1 49.5 \u00b1 0.9 Maze2d UMaze 42.6 \u00b1 0.4 42.9 \u00b1 1.8 43.8 \u00b1 3.5 50.0 \u00b1 2.4 33.8 \u00b1 3.0 76.2 \u00b1 17.4 Medium 38.5 \u00b1 1.9 33.4 \u00b1 3.2 60.0 \u00b1 13.9 32.1 \u00b1 6.8 24.0 \u00b1 4.0 89.6 \u00b1 19.9 Large 50.9 \u00b1 5.8 23.4 \u00b1 8.0 45.3 \u00b1 14.8 137.2 \u00b1 20.2 93.3 \u00b1 31.0 131.1 \u00b1 37.5 Total 44.0 \u00b1 2.2 33.2 \u00b1 1.8 49.7 \u00b1 9.5 73.1 \u00b1 6.7 50.4 \u00b1 11.1 99.0 \u00b1 14.5 size, we maintain each generated dataset across epochs in a replay buffer, with each dataset being removed after 10 epochs. We see similar improvements in performance against real and unguided synthetic data under this approach, with PGD outperforming real data on 2 out of 3 environments and datasets (Figure 4). Periodic generation outperforms continuous generation across environments and behavior policies, which we attribute to training stability, especially when performing guidance early in training. 
Regardless, both approaches consistently outperform training on real and unguided synthetic data, demonstrating the potential of PGD as a drop-in extension to replay and model-based RL methods. 5.3 Synthetic Trajectory Analysis We now analyze the quality of trajectories produced by PGD against those from unguided diffusion and autoregressive world model (PETS) rollouts. In principle, we seek to evaluate the divergence of these sampling distributions from the true target distribution. However, this is not tractable to compute directly, so we instead investigate two proxy objectives: 1. Trajectory Likelihood: mean log-likelihood of actions under the target policy; and 2. Dynamics Error: mean squared error between states in the synthetic trajectory and real environment, when rolled out with the same initial state and action sequence. In our experiments, we consider trajectory diffusion and MOPO-style PETS (Chua et al., 2018) models trained on representative datasets from the D4RL (Fu et al., 2020) benchmark that were featured in the previous section. Specifically, we consider the models trained on halfcheetah-medium, before sampling trajectories with IQL target policies trained on the halfcheetah-random, -medium, and -expert. This enables us to test the robustness of these models to target policies far from the behavior policy, both in performance and policy entropy. 10 \f34 36 38 Policy-Guided (periodic) Policy-Guided (continuous) Unguided (periodic) Unguided (continuous) Dataset Half Cheetah 32 40 48 56 Walker 40 45 50 55 Hopper Normalized Return (a) Environments 6.0 7.5 9.0 Policy-Guided (periodic) Policy-Guided (continuous) Unguided (periodic) Unguided (continuous) Dataset Random 45 60 75 Mixed 60 62 64 66 Medium Normalized Return (b) Behavior policies Figure 4: Aggregate MuJoCo performance after training on unguided or policy-guided synthetic data under continuous and periodic dataset generation, as well as on the real dataset. For each setting, mean return over TD3+BC and IQL agents is marked, with standard error over 4 seeds (diffusion models and agents) highlighted. Policy Guidance Increases Trajectory Likelihood In Figure 5, we present the trajectory likelihood of synthetic trajectories over varying degrees of guidance. Unsurprisingly, unguided diffusion generates low probability trajectories for all target policies, due to it directly modeling the behavior distribution. However, as we increase the guidance coefficient \u03bb, trajectory likelihood increases monotonically under each target policy. Furthermore, this effect is robust across target policies, giving the ability to sample high-probability trajectories with OOD target policies. The value of \u03bb required to achieve the same action likelihood as direct action sampling (PETS) varies with the target policy. Since this threshold increases with target policy performance, we hypothesize that it increases with target policy entropy. Based on this, a promising avenue for future work is automatically tuning \u03bb for hyperparameter-free guidance. 10 1 100 101 10 5 Action log-probability Random policy 10 1 100 101 Policy guidance coefficient ( ) 50 0 Medium policy 10 1 100 101 150 100 50 0 Expert policy MOPO Unguided PGD Figure 5: Action probability of synthetic trajectories generated by diffusion and PETS models trained on halfcheetah-medium. Target policies are trained on halfcheetah-random, halfcheetah-medium, and halfcheetah-expert datasets, demonstrating robustness to OOD actions. 
Standard error over 4 diffusion model seeds is shaded (but negligible), with mean computed over 2048 synthetic trajectories. Policy Guided Diffusion Achieves Lower Error Than Autoregressive Models In Figure 6, we present the dynamics error of synthetic trajectories over 16 rollout steps. For a fair comparison, we fix the guidance coefficient of PGD to \u03bb = 1.0, since this was sufficient to match the trajectory likelihood of PETS (Figure 5). Over all target policies, PGD achieves significantly lower error than PETS. Furthermore, PGD has similar levels of error across target policies, while PETS suffers from 11 \f0 4 8 12 16 0.0 2.5 5.0 7.5 Step MSE Random policy 0 4 8 12 16 Environment step Medium policy 0 4 8 12 16 Expert policy MOPO Unguided PGD Figure 6: Dynamics mean squared error of synthetic trajectories generated by diffusion and PETS models trained on halfcheetah-medium. Standard error over 4 diffusion model seeds and 3 PETS seeds (via OfflineRL-Kit) is shaded, with each generating 2048 synthetic trajectories for analysis. significantly higher error on OOD (random and expert) target policies. This highlights the robustness of PGD to target policy, a critical feature for generating high-likelihood training data throughout tabula rasa policy training. 6 Related Work Model-based Offline Reinforcement Learning Model-based methods in offline RL (Yu et al., 2020; Kidambi et al., 2020; Rigter et al., 2022; Lu et al., 2022) are designed to augment the offline buffer with additional on-policy samples in order to mitigate distribution shift. This is typically done by rolling out a policy in a learned world model (Janner et al., 2019) and applying a suitable pessimism term in order to account for dynamics model errors. While these methods share the same overall motivation as our paper, the empirical realization is quite different. In particular, forward dynamics models are liable to compounding errors over long horizons, resulting in model exploitation, whereas our trajectories are generated in a single step. Model-free Offline Reinforcement Learning Model-free methods in offline RL typically tackle the out-of-sample issue by applying conservatism to the value function or by constraining the policy to remain close to the data. For example, CQL (Kumar et al., 2020) and EDAC (An et al., 2021) both aim to minimize the values of out-of-distribution actions. Meanwhile, BCQ (Fujimoto et al., 2019) ensures that actions used in value targets are in-distribution with the behavioral policy using constrained optimization. We take the opposite approach in this paper: by enabling our diffusion model to generate on-policy samples without diverging from the behavior distribution, we reduce the need for conservatism. Diffusion in Reinforcement Learning Diffusion models are a flexible method for data augmentation in reinforcement learning. SynthER (Lu et al., 2023) uses unguided diffusion models to upsample offline or online RL datasets, which are then used by model-free off-policy algorithms. While this improves performance, SynthER uses unguided diffusion to model the behavior distribution, resulting in the same issue of distributional shift. Similarly, MTDiff (He et al., 2023) considers unguided data generation in multitask settings. Diffusion models have also been used to train world models. Zhang et al. (2023) train a world model for sensor observations by first tokenizing using VQ-VAE and then predicting future observations via discrete diffusion. Alonso et al. 
(2023) also train a world model using diffusion and demonstrate it can more accurately predict future observations. However, neither of these approaches models the whole trajectory, so both suffer from compounding error, and neither applies policy guidance. Parallel to this work, Rigter et al. (2023) use guidance from a policy to augment a diffusion world model for online RL. By contrast, we focus on the offline RL setting, provide a theoretical derivation and motivation for the trajectory distribution modeled by policy guidance, and demonstrate improvements in downstream policy performance. Diffusion models are also used elsewhere in reinforcement learning. For example, Diffuser (Janner et al., 2022) and Decision Diffuser (Ajay et al., 2023) use trajectory diffusion models for planning and to bias planned trajectories towards high return. By contrast, we use on-policy guidance and train on the generated data. Diffusion models have also been used as an expressive policy class (Wang et al., 2023) for Q-learning, showing improvement over MLPs."
},
{
"url": "http://arxiv.org/abs/2305.15703v3",
"title": "The Benefits of Being Distributional: Small-Loss Bounds for Reinforcement Learning",
"abstract": "While distributional reinforcement learning (DistRL) has been empirically\neffective, the question of when and why it is better than vanilla,\nnon-distributional RL has remained unanswered. This paper explains the benefits\nof DistRL through the lens of small-loss bounds, which are instance-dependent\nbounds that scale with optimal achievable cost. Particularly, our bounds\nconverge much faster than those from non-distributional approaches if the\noptimal cost is small. As warmup, we propose a distributional contextual bandit\n(DistCB) algorithm, which we show enjoys small-loss regret bounds and\nempirically outperforms the state-of-the-art on three real-world tasks. In\nonline RL, we propose a DistRL algorithm that constructs confidence sets using\nmaximum likelihood estimation. We prove that our algorithm enjoys novel\nsmall-loss PAC bounds in low-rank MDPs. As part of our analysis, we introduce\nthe $\\ell_1$ distributional eluder dimension which may be of independent\ninterest. Then, in offline RL, we show that pessimistic DistRL enjoys\nsmall-loss PAC bounds that are novel to the offline setting and are more robust\nto bad single-policy coverage.",
"authors": "Kaiwen Wang, Kevin Zhou, Runzhe Wu, Nathan Kallus, Wen Sun",
"published": "2023-05-25",
"updated": "2023-09-23",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI",
"math.OC",
"math.ST",
"stat.ML",
"stat.TH"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "While distributional reinforcement learning (DistRL) has been empirically\neffective, the question of when and why it is better than vanilla,\nnon-distributional RL has remained unanswered. This paper explains the benefits\nof DistRL through the lens of small-loss bounds, which are instance-dependent\nbounds that scale with optimal achievable cost. Particularly, our bounds\nconverge much faster than those from non-distributional approaches if the\noptimal cost is small. As warmup, we propose a distributional contextual bandit\n(DistCB) algorithm, which we show enjoys small-loss regret bounds and\nempirically outperforms the state-of-the-art on three real-world tasks. In\nonline RL, we propose a DistRL algorithm that constructs confidence sets using\nmaximum likelihood estimation. We prove that our algorithm enjoys novel\nsmall-loss PAC bounds in low-rank MDPs. As part of our analysis, we introduce\nthe $\\ell_1$ distributional eluder dimension which may be of independent\ninterest. Then, in offline RL, we show that pessimistic DistRL enjoys\nsmall-loss PAC bounds that are novel to the offline setting and are more robust\nto bad single-policy coverage.",
"main_content": "Introduction The goal of reinforcement learning (RL) is to learn a policy that minimizes/maximizes the mean loss/return (i.e., cumulative costs/rewards) along its trajectory. Classical approaches, such as Qlearning [Mnih et al., 2015] and policy gradients [Kakade, 2001], often learn Q-functions via least square regression, which represent the mean loss-to-go and act greedily with respect to these estimates. By Bellman\u2019s equation, Q-functions suf\ufb01ce for optimal decision-making and indeed these approaches have vanishing regret bounds, suggesting we only need to learn means well [Sutton and Barto, 2018]. Since the seminal work of Bellemare et al. [2017], however, numerous developments showed that learning the whole loss distribution can actually yield state-of-the-art performance in stratospheric balloon navigation [Bellemare et al., 2020], robotic grasping [Bodnar et al., 2020], algorithm discovery [Fawzi et al., 2022] and game playing benchmarks [Hessel et al., 2018, Dabney et al., 2018a, Barth-Maron et al., 2018]. In both online [Yang et al., 2019] and of\ufb02ine RL [Ma et al., 2021], distributional RL (DistRL) algorithms often perform better and use fewer samples in challenging tasks when compared to standard approaches that directly estimate the mean. Despite learning the whole loss distribution, DistRL algorithms use only the mean of the learned distribution for decision making, not extracting any additional information such as higher moments. In other words, DistRL is simply employing a different and seemingly roundabout way of learning the mean: \ufb01rst, learn the loss-to-go distribution via distributional Bellman equations, and then, compute the mean of the learned distribution. Lyle et al. [2019] provided some empirical explanations of the bene\ufb01ts of this two-step approach, showing that learning the distribution, e.g., its moments or quantiles, is an auxiliary task that leads to better representation learning. However, the theoretical 37th Conference on Neural Information Processing Systems (NeurIPS 2023). \fquestion remains: does DistRL, i.e., learning the distribution and then computing the mean, yield provably stronger \ufb01nite-sample guarantees and if so stronger how and when? In this paper, we provide the \ufb01rst mathematical basis for the bene\ufb01ts of DistRL via the lens of smallloss bounds, which are instance-dependent bounds that depend on the minimum achievable cost in the problem [Agarwal et al., 2017].1 For example in linear MDPs, typical worst-case regret bounds scale on the order of poly(d, H) \u221a K, where d is the feature dimension, H is the horizon, and K is the number of episodes [Jin et al., 2020b]. In contrast, small-loss bounds will scale on the order of poly(d, H) \u221a K \u00b7 V \u22c6+ poly(d, H) log(K), where V \u22c6= min\u03c0 V \u03c0 is the optimal expected cumulative cost for the problem. We assume cumulative costs are normalized in [0, 1] without loss of generality. As V \u22c6becomes negligible (approaches 0), the \ufb01rst term vanishes and the small-loss bound yields a faster convergence rate of O(poly(d, H) log(K)), compared to the O(poly(d, H) \u221a K) rate in standard uniform bounds. Since we always have V \u22c6\u22641, small-loss bounds simply match the standard uniform bounds in the worst case. As warm-up, we show that maximum likelihood estimation (MLE), i.e., maximizing log-likelihood, can be used to obtain small-loss regret bounds for contextual bandits (CB), i.e., the one-step RL setting. 
Then, we turn to the online RL setting, and propose an optimistic DistRL algorithm that optimizes over con\ufb01dence sets constructed via MLE applied to the distributional Bellman equations. We prove our algorithm attains the \ufb01rst small-loss PAC bounds in low-rank MDPs [Agarwal et al., 2020]. Our proof uses a novel regret decomposition with triangular discrimination and also introduces the \u21131 distributional eluder dimension, which generalizes the \u21132 distributional eluder dimension of Jin et al. [2021a] and may be of independent interest. Furthermore, we design an of\ufb02ine distributional RL algorithm using the principle of pessimism, and show our algorithm obtains the \ufb01rst small-loss bounds in of\ufb02ine RL. Our of\ufb02ine small-loss bound holds under the weak single-policy coverage. Notably, our result has a novel robustness property that allows our algorithm to strongly compete with policies that either are well-covered or have small-loss, while prior approaches solely depended on the former. Finally, we \ufb01nd that our distributional CB algorithm empirically outperforms existing approaches in three challenging CB tasks. Our key contributions are as follows: 1. As warm-up, we propose a distributional CB algorithm and prove that it obtains a small-loss regret bound (Section 4). We empirically demonstrate it outperforms state-of-the-art CB algorithms in three challenging benchmark tasks (Section 7). 2. We propose a distributional online RL algorithm that enjoys small-loss bounds in settings with low \u21131 distributional eluder dimension, which we show can always capture low-rank MDPs. The \u21131 distributional eluder dimension may be of independent interest (Section 5). 3. We propose a distributional of\ufb02ine RL algorithm and prove that it obtains the \ufb01rst small-loss bounds in the of\ufb02ine setting. Our small-loss guarantee exhibits a novel robustness to bad coverage, which implies strong improvement over more policies than existing results in the literature (Section 6). In sum, we show that DistRL can yield small-loss bounds in both online and of\ufb02ine RL, which provide a concrete theoretical justi\ufb01cation for the bene\ufb01ts of distribution learning in decision making. 2 Related Works Theory of Distributional RL Rowland et al. [2018, 2023] proved asymptotic convergence guarantees of popular distributional RL algorithms such as C51 [Bellemare et al., 2017] and QR-DQN [Dabney et al., 2018b]. However, these asymptotic results do not explain the bene\ufb01ts of distributional RL over standard approaches, since they do not imply stronger \ufb01nite-sample guarantees than those obtainable with non-distributional algorithms. In contrast, our work shows that distributional RL yields adaptive \ufb01nite-sample bounds that converge faster when the optimal cost of the problem is small. Wu et al. [2023] recently derived \ufb01nite-sample bounds for distributional off-policy evaluation with MLE, while our of\ufb02ine RL section focuses on off-policy optimization. 1\u201cFirst-order\u201d generally refers to bounds that scale with the optimal value, either the maximum reward or the minimum cost. To highlight that we are minimizing cost, we call our bounds \u201csmall-loss\u201d. 2 \fFirst-order bounds in bandits When maximizing rewards, \ufb01rst-order \u201csmall-return\u201d bounds can be easily derived from EXP4 [Auer et al., 2002], since receiving the worst reward 0 with probability (w.p.) \u03b4 contributes at most R\u22c6\u03b4 to the regret2. 
When minimizing costs, receiving the worst loss 1 w.p. \u03b4 may induce large regret relative to L\u22c6if L\u22c6is small. To illustrate, if R\u22c6= 0 then all policies are optimal, so no learning is needed and the small-return bound is vacuous. Yet if L\u22c6= 0, suboptimal policies may have a large gap from L\u22c6, so small-loss bounds in this regime are meaningful. Small-loss bounds are achievable in multi-arm bandits [Foster et al., 2016], semi-bandits [Neu, 2015, Lykouris et al., 2022], and CBs [Allen-Zhu et al., 2018, Foster and Krishnamurthy, 2021]. First-order bounds in RL Jin et al. [2020a], Wagenmaker et al. [2022] obtained small-return regret for tabular and linear MDPs via concentration bounds that scale with the variance. The idea is that the return\u2019s variance is bounded by some multiple of the expected value, which is bounded by V \u22c6in the reward-maximizing setting, i.e., Var(P h rh | \u03c0k) \u2264c \u00b7 V \u03c0k \u2264c \u00b7 V \u22c6. However, the last inequality fails in the loss-minimizing setting, so the variance approach does not easily yield smallloss bounds. Small-loss regret for tabular MDPs was resolved by Lee et al. [2020, Theorem 4.1] using online mirror descent with the log-barrier on the occupancy measure. Moreover, Kakade et al. [2020, Theorem 3.8] obtains small-loss regret for linear-quadratic regulators (LQRs), but their Assumption 3 posits that the coef\ufb01cient of variation for the cumulative costs is bounded, which is false in general even in tabular MDPs. To the best of our knowledge, there are no known \ufb01rst-order bounds for low-rank MDPs or in of\ufb02ine RL. Risk-sensitive RL A well-motivated use-case of DistRL is risk-sensitive RL, where the goal is to learn risk-sensitive policies that optimize some risk measure, e.g., Conditional Value-at-Risk (CVaR), of the loss [Dabney et al., 2018b]. Orthogonal to risk-sensitive RL, this work focuses on the bene\ufb01ts of DistRL for standard risk-neutral RL. Our insights may lead to \ufb01rst-order bounds for risk-sensitive RL, which we leave as future work. 3 Preliminaries As warmup, we begin with the contextual bandit problem with an arbitrary context space X, \ufb01nite action space A with size A and conditional cost distributions C : X \u00d7 A \u2192\u2206([0, 1]). Throughout, we \ufb01x some dominating measure \u03bb on [0, 1] (e.g., Lebesgue for continuous or counting for discrete) and let \u2206([0, 1]) be all distributions on [0, 1] that are absolutely continuous with respect to \u03bb. We identify such a distribution with its density with respect to \u03bb, and we also write C(y | x, a) for (C(x, a))(y). Let K denote the number of episodes. At each episode k \u2208[K], the learner observes a context xk \u2208X, samples an action ak \u2208A, and then receives a cost ct \u223cC(xt, at), which we assume to be normalized, i.e., ct \u2208[0, 1]. The goal is to design a learner that attains low regret with high probability, where regret is de\ufb01ned as RegretCB(K) = PK k=1 \u00af C(xk, ak) \u2212\u00af C(xk, \u03c0\u22c6(xk)), where \u00af f = R yf(y)d\u03bb(y) for any f \u2208\u2206([0, 1]) and \u03c0\u22c6(xk) = arg mina\u2208A \u00af C(xk, a). The focus of this paper is reinforcement learning (RL) under the Markov Decision Process (MDP) model, with observation space X, \ufb01nite action space A with size A, horizon H, transition kernels Ph : X \u00d7A \u2192\u2206(X) and cost distributions Ch : X \u00d7A \u2192\u2206([0, 1]) at each step h \u2208[H]. 
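As a concrete illustration of the contextual bandit protocol and regret definition above, here is a small self-contained Python sketch (illustrative, not from the original paper) that runs a uniformly random learner and accumulates RegretCB(K); the linear-sigmoid mean-cost model and its parameters W are purely hypothetical.

import numpy as np

rng = np.random.default_rng(0)
K, A, d = 1000, 5, 3                        # episodes, number of actions, context dimension
W = rng.uniform(size=(A, d))                # hypothetical parameters of the mean-cost model

def mean_cost(x):
    # Mean cost of each action for context x, squashed into [0, 1].
    return 1.0 / (1.0 + np.exp(-(W @ x)))

regret = 0.0
for k in range(K):
    x = rng.normal(size=d)                  # observe context x_k
    a = rng.integers(A)                     # learner's action (here: uniform at random)
    c = rng.binomial(1, mean_cost(x)[a])    # realized cost c_k with E[c_k] = C-bar(x_k, a_k)
    regret += mean_cost(x)[a] - mean_cost(x).min()   # C-bar(x_k, a_k) - C-bar(x_k, pi*(x_k))
print(f"cumulative regret after {K} rounds: {regret:.1f}")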
We start with the Online RL setting, which proceeds over K episodes as follows: at each episode k \u2208[K], the learner plays a policy \u03c0k \u2208[X \u2192\u2206(A)]H; we start from a \ufb01xed initial state x1; then for each h = 1, 2, . . . , H, the policy samples an action ah \u223c\u03c0k h(xh), receives a cost ch \u223cCh(xh, ah), and transitions to the next state xh+1 \u223cPh(xh, ah). Our goal is to compete with the optimal policy that minimizes expected the loss, i.e., \u03c0\u22c6\u2208arg min\u03c0\u2208\u03a0 V \u03c0 where V \u03c0 = E\u03c0 hPH h=1 ch i . Regret bounds aim to control the learner\u2019s regret with high probability, where regret is de\ufb01ned as, RegretRL(K) = PK k=1 V \u03c0k \u2212V \u22c6. If the algorithm returns a single policy b \u03c0, it is desirable to obtain a Probably Approximately Correct (PAC) bound on the sub-optimality of b \u03c0, i.e., V b \u03c0 \u2212V \u22c6. 2Assume rewards/losses in [0, 1] and R\u22c6/L\u22c6is the maximum/minimum expected reward/loss. 3 \fThe third setting we study is Of\ufb02ine RL, where instead of needing to actively explore and collect data ourselves, we are given H datasets D1, D2, . . . , DH to learn a good policy b \u03c0. Each Dh contains N i.i.d. samples (xh,i, ah,i, ch,i, x\u2032 h,i) from the process (xh,i, ah,i) \u223c\u03bdh, ch,i \u223cCh(xh,i, ah,i), x\u2032 h,i \u223c Ph(xh,i, ah,i), where \u03bdh \u2208\u2206(X \u00d7 A) is arbitrary, e.g., the visitations of many policies from the current production system. The goal is to design an of\ufb02ine procedure with a PAC guarantee for b \u03c0, which should improve over the data generating process. Distributional RL For a policy \u03c0 and h \u2208[H], let Z\u03c0 h(xh, ah) \u2208\u2206([0, 1]) denote the distribution of the loss-to-go PH t=h ct conditioned on rolling in \u03c0 from xh, ah. The expectation of the above is Q\u03c0 h(xh, ah) = \u00af Z\u03c0 h(xh, ah) and V \u03c0 h (xh) = Eah\u223c\u03c0h(xh)[Q\u03c0 h(xh, ah)]. We use Z\u22c6 h, Q\u22c6 h, V \u22c6 h to denote these quantities with \u03c0\u22c6. Recall the regular Bellman operator acts on a function f : X \u00d7 A \u2192[0, 1] as follows: T \u03c0 h f(x, a) = \u00af Ch(x, a) + Ex\u2032\u223cPh(x,a),a\u2032\u223c\u03c0(x\u2032)[f(x\u2032, a\u2032)]. Analogously, the distributional Bellman operator [Morimura et al., 2012, Bellemare et al., 2017] acts on a conditional distribution d : X \u00d7 A \u2192\u2206([0, 1]) as follows: T \u03c0,D h d(x, a) D = Ch(x, a) + d(x\u2032, a\u2032), where x\u2032 \u223cPh(x, a), a\u2032 \u223c\u03c0(x\u2032) and D = denotes equality of distributions. Another way to think about the distributional Bellman operator is that a sample z \u223cT \u03c0,D h d(x, a) is generated as follow: z := c + y, where c \u223cCh(x, a), x\u2032 \u223cPh(x, a), a\u2032 \u223c\u03c0(x\u2032), y \u223cd(x\u2032, a\u2032). We will also use the Bellman optimality operator T \u22c6 h and its distributional variant T \u22c6,D h , de\ufb01ned as follows: T \u22c6 h f(x, a) = \u00af Ch(x, a) + Ex\u2032\u223cPh(x,a)[mina\u2208A f(x\u2032, a\u2032)] and T \u22c6,D h d(x, a) D = Ch(x, a) + d(x\u2032, a\u2032) where x\u2032 \u223cPh(x, a), a\u2032 = arg mina \u00af d(x\u2032, a). Please see Table 2 for an index of notations. 4 Warm up: Small-Loss Regret for Distributional Contextual Bandits In this section, we propose an ef\ufb01cient reduction from CB to online maximum likelihood estimation (MLE), which is the standard tool for distribution learning that we will use throughout the paper. 
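The sampling view of the distributional Bellman operator given above (z = c + y with c ~ C_h(x, a), x' ~ P_h(x, a), a' ~ pi(x'), y ~ d(x', a')) translates directly into code. The minimal sketch below is illustrative rather than taken from the paper; the toy tabular kernel P, the cost draw, the stand-in policy, and d_sampler are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)
S, A = 4, 2                                    # toy tabular state / action space sizes
P = rng.dirichlet(np.ones(S), size=(S, A))     # hypothetical transition kernel P_h(. | x, a)

def sample_cost(x, a):
    return rng.uniform(0.0, 0.25)              # hypothetical cost draw c ~ C_h(x, a)

def sample_bellman_target(d_sampler, policy, x, a):
    # One sample z from the distributional Bellman backup T^{pi,D}_h d at (x, a):
    # z = c + y with c ~ C_h(x, a), x' ~ P_h(x, a), a' ~ pi(x'), y ~ d(x', a').
    c = sample_cost(x, a)
    x_next = rng.choice(S, p=P[x, a])
    a_next = policy(x_next)
    y = d_sampler(x_next, a_next)
    return c + y

policy = lambda x: rng.integers(A)             # stand-in policy pi(x)
d_sampler = lambda x, a: float(np.clip(rng.normal(0.5, 0.1), 0.0, 1.0))  # stand-in loss-to-go dist.
print([sample_bellman_target(d_sampler, policy, x=0, a=1) for _ in range(5)])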
In our CB algorithm, we balance exploration and exploitation with the reweighted inverse gap weighting (ReIGW) of Foster and Krishnamurthy [2021], which de\ufb01nes a distribution over actions given predictions b f \u2208RA and a parameter \u03b3 \u2208R++: setting b = arg mina\u2208A b f(a) as the best action with respect to the predictions, the weight for any other action a \u0338= b is, ReIGW\u03b3( b f, \u03b3)[a] := b f(b) A b f(b) + \u03b3( b f(a) \u2212b f(b)) , (1) and the rest of the weight is allocated to b: ReIGW\u03b3( b f, \u03b3)[b] = 1 \u2212P a\u0338=b ReIGW\u03b3( b f, \u03b3)[a]. Algorithm 1 Distributional CB (DISTCB) 1: Input: number of episodes K, failure probability \u03b4, ReIGW learning rate \u03b3. 2: Initialize any cost distribution f (1). 3: for episode k = 1, 2, . . ., K do 4: Observe context xk. 5: Sample action ak \u223cpk = ReIGW( \u00af f (k)(xk, \u00b7), \u03b3) from Eq. (1). 6: Observe cost ck \u223cC(xk, ak) and update online MLE oracle with ((xk, ak), ck). 7: end for We propose Distributional Contextual Bandit (DISTCB) in Algorithm 1, a two-step procedure for each episode k \u2208[K]. Upon seeing context xk, DISTCB \ufb01rst samples an action ak from ReIGW generated by means of our estimated cost distributions for each action, i.e., b f(a) = \u00af f (k)(xk, a), \u2200a \u2208 A (Line 5). Then, DISTCB updates f (k)(\u00b7 | xk, ak) by maximizing the log-likelihood to estimate the conditional cost distribution C(\u00b7 | xk, ak) (Line 6). Formally, this second step is achieved via an online MLE oracle with a realizable distribution class FCB \u2282X \u00d7 A \u2192\u2206([0, 1]); let Regretlog(K) be some upper bound on the log-likelihood regret for all possibly adaptive sequences {xk, ak, ck}k\u2208[K], PK k=1 log C(ck | xk, ak) \u2212log f (k)(ck | xk, ak) \u2264Regretlog(K). 4 \fUnder realizability, C \u2208FCB, we expect Regretlog(K) \u2208O(log(K)). For instance, if FCB is \ufb01nite, exponentially weighted average forecaster guarantees Regretlog(K) \u2264log |FCB| [Cesa-Bianchi and Lugosi, 2006, Chapter 9]. We now state our main result for DISTCB. Theorem 4.1. For any \u03b4 \u2208(0, 1), w.p. at least 1 \u2212\u03b4, running DISTCB with \u03b3 = 10A \u2228 r 40A(C\u22c6+log(1/\u03b4)) 112(Regretlog(K)+log(1/\u03b4)) has regret scaling with C\u22c6= PK k=1 mina\u2208A \u00af C(xk, a), RegretDISTCB(K) \u2264232 q AC\u22c6Regretlog(K) log(1/\u03b4) + 2300A \u0000Regretlog(K) + log(1/\u03b4) \u0001 . The dominant term scales with the optimal sum of costs \u221a C\u22c6which shows that DISTCB obtains small-loss regret. DISTCB is also computationally ef\ufb01cient since each episode simply requires computing the ReIGW. FastCB is the only other computationally ef\ufb01cient CB algorithm with small-loss regret [Foster and Krishnamurthy, 2021, Theorem 1]. Our bound matches that of FastCB in terms of dependence on A, C\u22c6and log(1/\u03b4). Our key difference with FastCB is the online supervised learning oracle: in DISTCB, we aim to learn the conditional cost distribution by maximizing loglikelihood, while FastCB aims to perform regression with the binary cross-entropy loss. In Section 7, we \ufb01nd that DISTCB empirically outperforms SquareCB and FastCB in three challenging CB tasks, which reinforces the practical bene\ufb01ts of distribution learning in CB setting. 
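Since the action distribution in Eq. (1) is fully specified, a direct implementation is short; the sketch below (illustrative, not the authors' code) computes the ReIGW weights from a vector of predicted mean costs and shows how a larger gamma concentrates mass on the greedy action.

import numpy as np

def reigw(f_hat, gamma):
    # Reweighted inverse gap weighting (Eq. 1): given predicted mean costs
    # f_hat (one entry per action) and learning rate gamma, return a
    # probability distribution over actions.
    f_hat = np.asarray(f_hat, dtype=float)
    A = len(f_hat)
    b = int(np.argmin(f_hat))                      # greedy action w.r.t. the predictions
    p = np.zeros(A)
    for a in range(A):
        if a != b:
            p[a] = f_hat[b] / (A * f_hat[b] + gamma * (f_hat[a] - f_hat[b]))
    p[b] = 1.0 - p.sum()                           # remaining mass goes to the greedy action
    return p

print(reigw([0.2, 0.5, 0.9], gamma=10.0))          # more exploration
print(reigw([0.2, 0.5, 0.9], gamma=100.0))         # mass concentrates on the greedy action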
4.1 Proof Sketch First, apply the per-round inequality for ReIGW [Foster and Krishnamurthy, 2021, Theorem 4] to get, RegretDistCB(K) \u2272 K X k=1 Eak\u223cpk \" A \u03b3 \u00af C(sk, ak) + \u03b3 \u0000 \u00af f (k)(sk, ak) \u2212\u00af C(sk, ak) \u00012 \u00af f (k)(sk, ak) + \u00af C(sk, ak) | {z } \u22c6 # . For any distributions f, g \u2208\u2206([0, 1]), their triangular discrimination3 is de\ufb01ned as D\u25b3(f \u2225g) := R (f(y)\u2212g(y))2 f(y)+g(y) d\u03bb(y). The key insight is that \u22c6can be bounded by the triangular discrimination of f (k)(sk, ak) and C(sk, ak): by Cauchy-Schwartz and y2 \u2264y for y \u2208[0, 1], we have \u00af f \u2212\u00af g = R y(f(y) \u2212g(y))d\u03bb(y) \u2264 qR y(f(y) + g(y))d\u03bb(y) qR (f(y)\u2212g(y))2 f(y)+g(y) d\u03bb(y), and hence, \f \f \u00af f \u2212\u00af g \f \f \u2264 q\u0000 \u00af f + \u00af g \u0001 D\u25b3(f \u2225g). (\u25b31) So, Eq. (\u25b31) implies that \u22c6is bounded by D\u25b3(f (k)(sk, ak) \u2225C(sk, ak)). Since D\u25b3is equivalent (up to universal constants) to the squared Hellinger distance, Foster et al. [2021, Lemma A.14] implies the above can be bounded by the online MLE regret, so w.p. at least 1 \u2212\u03b4, we have RegretDistCB(K) \u2272PK k=1 A \u03b3 \u0000 \u00af C(sk, ak) + log(1/\u03b4) \u0001 + \u03b3 \u0000Regretlog(K) + log(1/\u03b4) \u0001 . From here, we just need to rearrange terms and set the correct \u03b3. Appendix C contains the full proof. 5 Small-Loss Bounds for Online Distributional RL We now extend our insights to the online RL setting and propose a DistRL perspective on GOLF [Jin et al., 2021a]. While GOLF constructs con\ufb01dence sets of near-minimizers of the squared Bellman error loss, we propose to construct these con\ufb01dence sets using near-maximizers of the loglikelihood loss to approximate MLE. To leverage function approximation for learning conditional distributions, we use a generic function class F \u2286(X \u00d7 A \u2192\u2206([0, 1]))H where each element f \u2208F is a tuple f = (f1, . . . , fH) such that each fh is a candidate estimator for Z\u22c6 h, the distribution of loss-to-go PH t=h ct under \u03c0\u22c6. For notation, fH+1(x, a) = \u03b40 denotes the dirac at zero for all x, a. We now present our Optimistic Distributional Con\ufb01dence set Optimization (O-DISCO) algorithm in Algorithm 2, consisting of three key steps per episode. At episode k \u2208[K], O-DISCO \ufb01rst 3Triangular discrimination is also known as Vincze-Le Cam divergence [Vincze, 1981, Le Cam, 2012]. 5 \fAlgorithm 2 Optimistic Distributional Con\ufb01dence set Optimization (O-DISCO) 1: Input: number of episodes K, distribution class F, threshold \u03b2. 2: Initialize Dh,0 \u2190\u2205for all h \u2208[H], and set F0 = F. 3: for episode k = 1, 2, . . ., K do 4: Set optimistic estimate f (k) = arg minf\u2208Fk\u22121 mina \u00af f1(x1, a). 5: Set \u03c0k h(x) = arg mina \u00af f (k) h (x, a). 6: Roll out \u03c0k and obtain a trajectory x1,k, a1,k, c1,k, . . . , xH,k, aH,k, cH,k. For each h \u2208[H], augment the dataset Dh,k = Dh,k\u22121 \u222a{(xh,k, ah,k, ch,k, xh+1,k)}. 7: For all (h, f) \u2208[H] \u00d7 F, sample yf h,i \u223cfh+1(x\u2032 h,i, a\u2032) and a\u2032 = arg mina \u00af fh+1(x\u2032 h,i, a), where (xh,i, ah,i, ch,i, x\u2032 h,i) is the i-th datapoint of Dh,k. Then, set zf h,i = ch,i + yf h,i and de\ufb01ne the con\ufb01dence set Fk = ( f \u2208F : k X i=1 log fh(zf h,i | xh,i, ah,i) \u2265max g\u2208Fh k X i=1 log g(zf h,i | xh,i, ah,i) \u22127\u03b2, \u2200h \u2208[H] ) . 
8: end for 9: Output: \u00af \u03c0 = unif(\u03c01:K). identi\ufb01es the f (k) with the minimal expected value at h = 1 over the previous con\ufb01dence set Fk\u22121 (Line 4). This step induces global optimism. Then, O-DISCO collects data for this episode by rolling in with the greedy policy \u03c0k with respect to the mean of f (k) (Line 6). Finally, ODISCO constructs a con\ufb01dence set Fk by including a function f if it exceeds a threshold on the loglikelihood objective using data zf h,i \u223cT \u22c6,D h fh+1(xh,i, ah,i) for all steps h simultaneously (Line 7). This step is called local \ufb01tting, as each f \u2208Fk has the property that fh is close-in-distribution to T \u22c6,D h fh+1 for all h. We highlight that O-DISCO only learns the distribution for estimating the mean, i.e., Lines 4 and 6 only use the mean \u00af f. This seemingly roundabout way of estimating the mean is exactly how distributional RL algorithms such as C51 differ from the classic DQN. To ensure that MLE succeeds for the Temporal-Difference (TD) style con\ufb01dence sets, we need the following distributional Bellman Completeness (BC) condition introduced in Wu et al. [2023]. Assumption 5.1 (Bellman Completeness). For all \u03c0, h \u2208[H], fh+1 \u2208Fh+1 = \u21d2T \u03c0,D h fh+1 \u2208 Fh. 5.1 The \u21131 Distributional Eluder Dimension We now introduce the \u21131 distributional eluder dimension. Let S be an abstract input space, let \u03a8 be a set of functions mapping S \u2192R and let D be a set of distributions on S. De\ufb01nition 5.2 (\u2113p-distributional eluder dimension). For any function class \u03a8 \u2286S \u2192R, distribution class D \u2286\u2206(S) and \u03b5 > 0, the \u2113p-distributional eluder dimension (denoted by DEp(\u03a8, D, \u03b5)) is the length L of the longest sequence d(1), d(2), . . . , d(L) \u2286D such that there exists \u03b5\u2032 \u2265\u03b5, such that for all t \u2208[L], we have that there exists f \u2208\u03a8 such that |Ed(t)f| > \u03b5 and also Pt\u22121 i=1|Ed(i)f|p \u2264\u03b5p. When p = 2, this is exactly the \u21132 distributional eluder of Jin et al. [2021a, De\ufb01nition 7]. We\u2019re particularly interested in the p = 1 case, which can be used with MLE\u2019s generalization bounds. The following is a key pigeonhole principle for the \u21131 distributional eluder dimension. Theorem 5.3. Let C := supd\u2208D,f\u2208\u03a8|Edf| be the envelope. Fix any K \u2208N and sequences f (1), . . . , f (K) \u2286\u03a8, d(1), . . . , d(K) \u2286D. Let \u03b2 be a constant such that for all k \u2208[K], we have, Pk\u22121 i=1 \f \fEd(i)f (k)\f \f \u2264\u03b2. Then, for all k \u2208[K], we have k X t=1 \f \f \fEd(t)f (t)\f \f \f \u2264 inf 0<\u03b5\u22641{DE1(\u03a8, D, \u03b5)(2C + \u03b2 log(C/\u03b5)) + k\u03b5}. As we\u2019ll see later, Theorem 5.3 is the key tool that transfers triangular discrimination guarantees on the training distribution to any new test distribution. Another key property is that the \u21131 dimension generalizes the original \u21132 dimension of Jin et al. [2021a]. 6 \fLemma 5.4. For any \u03a8, D and \u03b5 > 0, we have DE1(\u03a8, D, \u03b5) \u2264DE2(\u03a8, D, \u03b5). Finally, we note that our distributional eluder dimension generalize the regular \u21131 eluder from Liu et al. [2022], which can be seen by taking D to be dirac distributions. 5.2 Small-Loss Bounds for O-DISCO We will soon prove small-loss regret bounds with the \u201cQ-type\u201d dimension, where \u201cQ-type\u201d refers to the fact that S = X \u00d7 A. 
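The log-likelihood thresholding used to build the confidence set F_k can be illustrated with a small sketch over a finite candidate class. The sketch is ours and not from the paper: the Bernoulli candidates and the observed targets are hypothetical, and the full algorithm applies this test per step h with bootstrapped targets z^f_{h,i}; only the "keep everything within 7*beta of the best log-likelihood" rule is carried over.

import numpy as np

def mle_confidence_set(candidates, loglik, data, beta):
    # Keep every candidate whose total log-likelihood on `data` is within
    # 7*beta of the best candidate's, mirroring the confidence-set threshold.
    scores = np.array([sum(loglik(f, z) for z in data) for f in candidates])
    return [f for f, s in zip(candidates, scores) if s >= scores.max() - 7 * beta]

def bernoulli_loglik(p, z):
    # Toy likelihood: candidate `p` is the mean of a cost in {0, 1}.
    return np.log(p if z == 1 else 1.0 - p)

data = [1, 0, 1, 1, 0, 1]                       # hypothetical observed targets z
candidates = [0.2, 0.4, 0.6, 0.8]
print(mle_confidence_set(candidates, bernoulli_loglik, data, beta=0.1))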
While low-rank MDPs are not captured by the \u201cQ-type\u201d dimension, they are captured by the \u201cV-type\u201d dimension where S = X [Jin et al., 2021a, Du et al., 2021]. For PAC bounds with the V-type dimension, we need to slightly modify the data collection process in Line 6 with uniform action exploration (UAE). Instead of executing \u03c0k for a single trajectory, partially roll-out \u03c0k for H times where for each h \u2208[H], we collect xh,k \u223cd\u03c0k h , take a random action ah,k \u223cunif(A), observe ch,k \u223cCh(xh,k, ah,k), x\u2032 h,k \u223cPh(xh,k, ah,k) and augment the dataset Dh,k = Dh,k\u22121 \u222a{(xh,k, ah,k, ch,k, x\u2032 h,k)}. The modi\ufb01ed algorithm is detailed in Appendix B. We lastly need to de\ufb01ne the function and distribution classes measured by the distributional eluder dimension. The Q-type classes are Dh = {(x, a) 7\u2192d\u03c0 h(x, a) : \u03c0 \u2208\u03a0} and \u03a8h = \b (x, a) 7\u2192D\u25b3(f(x, a) \u2225T \u22c6,Df(x, a)) : f \u2208F \t . Similarly, the V-type classes are Dh,v = {x 7\u2192d\u03c0 h(x) : \u03c0 \u2208\u03a0} and \u03a6h,v = \b x 7\u2192Ea\u223cUnif(A)[D\u25b3(f(x, a) \u2225T \u22c6,Df(x, a))] : f \u2208F \t . Finally, de\ufb01ne DE1(\u03b5) = maxh DE1(\u03a8h, Dh, \u03b5) and DE1,v(\u03b5) = maxh DE1(\u03a8h,v, Dh,v, \u03b5). Theorem 5.5. Suppose DistBC holds (Assumption 5.1). For any \u03b4 \u2208(0, 1), w.p. at least 1 \u2212\u03b4, running O-DISCO with \u03b2 = log(HK|F|/\u03b4) guarantees the following regret bound, RegretO-DISCO(K) \u2264160H p KV \u22c6DE1(1/K) log(K)\u03b2 + 18000H2 DE1(1/K) log(K)\u03b2. If UAE = TRUE (Algorithm 4), then the learned mixture policy \u00af \u03c0 is guaranteed to satisfy, V \u00af \u03c0 \u2212V \u22c6\u2264160H r AV \u22c6DE1,v(1/K) log(K)\u03b2 K + 18000H2A DE1,v(1/K) log(K)\u03b2 K . Compared to prior bounds for GOLF [Jin et al., 2021a], the leading \u221a K terms in our bounds enjoy the same sharp dependence in H, K and the eluder dimension. Our bounds further enjoy one key improvement: the leading terms are multiplied with the instance-dependent optimal cost V \u22c6, giving our bounds the small-loss property. For example, if V \u22c6\u2264O(1/ \u221a K), then our regret bound converges at a fast O(H2 DE1(1/K) log(K)\u03b2) rate. While there are existing \ufb01rst-order bounds in online RL, our bound signi\ufb01cantly improves on their generality. For example, Zanette and Brunskill [2019], Jin et al. [2020a], Wagenmaker et al. [2022] used Bernstein bonuses that scale with the conditional variance and showed that careful analysis can lead to \u201csmall-return\u201d bounds in tabular and linear MDPs. However, \u201csmall-return\u201d bounds do not imply \u201csmall-loss\u201d bounds and \u201csmall-loss\u201d bounds are often harder to obtain4. While it is possible that surgical analysis with variance bonuses can lead to small-loss bounds in tabular and linear MDPs, this approach may not scale to settings with non-linear function approximation such as low-rank MDPs. On Bellman Completeness Exponential error ampli\ufb01cation can occur in online and of\ufb02ine RL under only realizability of Q functions [Wang et al., 2021a,b,c, Foster et al., 2022]. With only realizability, basic algorithms such as TD and Fitted-Q-Evaluation (FQE) can diverge or converge to bad \ufb01xed point solutions [Tsitsiklis and Van Roy, 1996, Munos and Szepesv\u00e1ri, 2008, Kolter, 2011]. 
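For concreteness, here is a minimal sketch (illustrative, on a made-up tabular MDP) of the uniform-action-exploration data collection described above: roll in with pi^k to step h, take a uniformly random action, and record the transition. The kernel P, the cost draw, and the stand-in policy are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
S, A, H = 4, 3, 5
P = rng.dirichlet(np.ones(S), size=(S, A))       # hypothetical transition kernel

def uae_collect(pi, h):
    # Roll in with pi for h-1 steps, then take a uniformly random action and
    # record the tuple (x_h, a_h, c_h, x_{h+1}).
    x = 0                                        # fixed initial state x_1
    for t in range(h - 1):
        x = rng.choice(S, p=P[x, pi(x, t)])
    a = rng.integers(A)                          # uniform exploratory action
    c = rng.uniform(0, 1.0 / H)                  # hypothetical cost draw
    x_next = rng.choice(S, p=P[x, a])
    return (x, a, c, x_next)

pi = lambda x, t: 0                              # stand-in greedy policy
datasets = {h: [uae_collect(pi, h)] for h in range(1, H + 1)}
print(datasets[3])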
As a result, BC has risen as a de facto suf\ufb01cient condition for sample ef\ufb01cient RL [Chang et al., 2022, Xie et al., 2021, Zanette et al., 2021]. Finally, we highlight that our method can be easily extended to hold under generalized completeness, i.e., there exist function classes Gh such that fh+1 \u2208Fh+1 = \u21d2 T \u03c0,D h fh+1 \u2208Gh [as in Jin et al., 2021a, Assumption 14]. Simply replace maxg\u2208Fh in the con\ufb01dence set construction with maxg\u2208Gh. While adding functions to F may break BC (as BC is not monotonic), we can always augment G to satisfy generalized completeness. Computational complexity When taken as is, OLIVE [Jiang et al., 2017], GOLF, and our algorithms are version space methods that suffer from a computational drawback: optimizing over the 4In Appendix J, we show a slight modi\ufb01cation of our approach also yields \u201csmall-return\u201d bounds. 7 \fcon\ufb01dence set is NP-hard [Dann et al., 2018]. However, the con\ufb01dence set is purely for deep exploration via optimism and can be replaced by other computationally ef\ufb01cient exploration strategies. For example, \u03b5-greedy suf\ufb01ces in problems that don\u2019t require deep and strategic exploration, i.e., a large myopic exploration gap [Dann et al., 2022]. With \u03b5-greedy, a replay buffer, and discretization, our algorithm essentially recovers C51 [Bellemare et al., 2017]. We leave developing and analyzing computationally ef\ufb01cient algorithms based on our insights as promising future work. 5.3 Instantiation with Low-Rank MDPs The low-rank MDP [Agarwal et al., 2020] is a standard abstraction for non-linear function approximation used in theory [Uehara et al., 2021] and practice [Zhang et al., 2022, Chang et al., 2022]. De\ufb01nition 5.6 (Low-rank MDP). A transition model Ph : X \u00d7 A \u2192\u2206(X) has rank d if there exist unknown features \u03c6\u22c6 h : X \u00d7 A \u2192Rd, \u00b5\u22c6 h : X \u2192Rd such that Ph(x\u2032 | x, a) = \u03c6\u22c6 h(x, a)\u22a4\u00b5\u22c6 h(x\u2032) for all x, a, x\u2032. Also, assume maxx,a \u2225\u03c6\u22c6 h(x, a)\u22252 \u22641 and \u2225 R gd\u00b5\u22c6 h\u22252 \u2264\u2225g\u2225\u221e \u221a d for all functions g : X \u2192R. The MDP is called low-rank if Ph is low-rank for all h \u2208[H]. We now specialize Theorem 5.5 to low-rank MDPs with three key steps. First, we bound the Vtype eluder dimension by DE1,v(\u03b5) \u2264O(d log(d/\u03b5)), which is a known result that we reproduce in Theorem G.4. The next step requires access to a realizable \u03a6 class, i.e., for all h \u2208[H], \u03c6\u22c6 h \u2208 \u03a6, which is a standard assumption for low-rank MDPs [Agarwal et al., 2020, Uehara et al., 2021, Mhammedi et al., 2023]. Given the realizable \u03a6, we can construct a specialized F for the low-rank MDP: Flin = Flin 1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 Flin H \u00d7 Flin H+1 where Flin H+1 = {\u03b40} and for all h \u2208[H], Flin h = \u001a f(z | x, a) = \u03c6(x, a), w(z) \u000b : \u03c6 \u2208\u03a6, w : [0, 1] \u2192Rd, (2) s.t. max z \u2225w(z)\u22252 \u2264\u03b1 \u221a d and max x,a,z \u03c6(x, a), w(z) \u000b \u2264\u03b1 \u001b , where \u03b1 := maxh,\u03c0,z,x,a Z\u03c0 h(z | x, a) is the largest mass for the cost-to-go distributions. In Appendix D, we show that Flin satis\ufb01es DistBC. Further, if costs are discretized into a uniform grid of M points, its bracketing entropy is bounded by e O(dM + log |\u03a6|). 
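A rough picture of the linear distribution class in Eq. (2) is given by the sketch below (illustrative, not from the paper): each of M discretized cost atoms gets a weight vector w(z), and f(z | x, a) is its inner product with a feature map phi(x, a). For simplicity we renormalize the atom probabilities explicitly, whereas the class above instead constrains w so that the inner products already form a valid density; phi and W here are hypothetical.

import numpy as np

rng = np.random.default_rng(0)
d, M = 8, 51                                   # feature dimension, number of cost atoms

def phi(x, a):
    # Hypothetical nonnegative feature map with ||phi(x, a)||_2 <= 1.
    v = np.abs(np.cos(np.arange(d) * (x + 2 * a + 1)))
    return v / np.linalg.norm(v)

W = rng.uniform(size=(M, d))                   # one hypothetical weight vector w(z) per atom

def f_lin(x, a):
    # f(z | x, a) = <phi(x, a), w(z)> over M atoms; renormalized here purely
    # for illustration so the atom probabilities sum to one.
    logits = W @ phi(x, a)
    return logits / logits.sum()

atoms = np.linspace(0.0, 1.0, M)               # discretized support of the loss-to-go
p = f_lin(x=0.3, a=1)
print(p.sum(), float(atoms @ p))               # total mass 1 and the implied mean cost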
Discretization is necessary to bound the statistical complexity of Flin and is common in practice, e.g., C51 and Rainbow both set M = 51 which works well in Atari games [Bellemare et al., 2017, Hessel et al., 2018]. Theorem 5.7. Suppose the MDP is low-rank. For any \u03b4 \u2208(0, 1), w.p. at least 1 \u2212\u03b4, running O-DISCO with UAE=TRUE and with Flin as described above learns a policy \u00af \u03c0 such that, V \u00af \u03c0 \u2212V \u22c6\u2208e O H r AdV \u22c6(dM + log(|\u03a6|/\u03b4)) K + AdH2(dM + log(|\u03a6|/\u03b4)) K ! . Proof. As described above, we have DE1(1/K) \u2264O(d log(dK)) and \u03b2 = log(HK/\u03b4) + dM + log |\u03a6|. Since DistBC is satis\ufb01ed by Flin, plugging into Theorem 5.5 gives the result. This is the \ufb01rst small-loss bound for low-rank MDPs, and for online RL with non-linear function approximation in general. Again when V \u22c6\u2264e O(1/K), O-DISCO has a fast e O(1/K) convergence rate which improves over all prior results that converge at a slow e \u2126(1/ \u221a K) rate [Uehara et al., 2021]. 5.4 Proof Sketch of Theorem 5.5 By DistBC (Assumption 5.1), we can deduce two facts about the construction of Fk: (i) Z\u22c6\u2208 Fk, and (ii) elements of Fk almost satisfy the distributional Bellman equation, i.e., for all h \u2208[H], we have Pk i=1 E\u03c0i[\u03b4h,k(xh, ah)] \u2264O(\u03b2) where \u03b4h,k(xh, ah) = D\u25b3(f (k) h (xh, ah) \u2225 T \u22c6,D h f (k) h+1(xh, ah)). Next, we derive a corollary of Eq. (\u25b31): \f \f \u00af f \u2212\u00af g \f \f \u2264 q 4\u00af g + D\u25b3(f \u2225g) \u00b7 q D\u25b3(f \u2225g). (\u25b32) 8 \fTo see why this is true, apply AM-GM to Eq. (\u25b31) to get 2( \u00af f \u2212\u00af g) \u2264\u00af f + \u00af g + D\u25b3(f \u2225g), which simpli\ufb01es to \u00af f \u22643\u00af g + D\u25b3(f \u2225g). Plugging this back into Eq. (\u25b31) yields Eq. (\u25b32). Then, by iterating Eq. (\u25b32) and AM-GM, we derive a self-bounding lemma: for any f, \u03c0, h, we have \u00af fh(xh, ah) \u2272Q\u03c0 h(xh, ah) + H PH t=h E\u03c0,xh,ah[D\u25b3(ft(xt, at) \u2225T \u03c0,D h ft+1(xt, at))] (Lemma H.3). Since T \u03c0k h \u00af f (k) h+1(x, a) = T \u03c0k,D h f (k) h+1(x, a) and T \u03c0k,D h f (k) h+1 = T \u22c6,D h f (k) h+1, we have V \u03c0k \u2212V \u22c6\u2264V \u03c0k \u2212\u00af f (k) 1 (x1, \u03c0k 1(x1)) (optimism from fact (i)) = PH h=1 E\u03c0k h T \u03c0k h \u00af f (k) h+1(xh, ah) \u2212\u00af f (k) h (xh, ah) i (performance difference) \u22642 PH h=1 q E\u03c0k[ \u00af f (k) h (xh, ah) + \u03b4h,k(xh, ah)] p E\u03c0k[\u03b4h,k(xh, ah)] (Eq. (\u25b32)) \u2272 q V \u03c0kw + H PH h=1 E\u03c0k[\u03b4h,k(xh, ah)] p HE\u03c0k[\u03b4h,k(xh, ah)]. (Lemma H.3) The implicit inequality V \u03c0k \u2212V \u22c6\u2272 q V \u22c6+ H PH h=1 E\u03c0k[\u03b4h,k(xh, ah)] p HE\u03c0k[\u03b4h,k(xh, ah)] can then be obtained by AM-GM and rearranging. The \ufb01nal step is to sum over k and bound PK k=1 E\u03c0k[\u03b4h,k(xh, ah)] via the eluder dimension\u2019s pigeonhole principle (Theorem 5.3 applied with fact (ii)). Please see Appendix H for the full proof. 6 Small-Loss Bounds for Of\ufb02ine Distributional RL We now propose Pessimistic Distributional Con\ufb01dence set Optimization (P-DISCO; Algorithm 3), which adapts the distributional con\ufb01dence set technique from the previous section to the of\ufb02ine setting by leveraging pessimism instead of optimism. Notably, P-DISCO is a simple two-step algorithm that achieves the \ufb01rst small-loss PAC bounds in of\ufb02ine RL. 
First, construct a distributional con\ufb01dence set for each policy \u03c0 based on a similar log-likelihood thresholding procedure as in ODISCO, where the difference is we now use data sampled from T \u03c0,D h fh+1 instead of T \u22c6,D h fh+1. Next, output the policy with the most pessimistic mean amongst all the con\ufb01dence sets. Algorithm 3 Pessimistic Distributional Con\ufb01dence set Optimization (P-DISCO) 1: Input: datasets D1, . . . , DH, distribution function class F, threshold \u03b2, policy class \u03a0. 2: For all (h, f, \u03c0) \u2208 [H] \u00d7 F \u00d7 \u03a0, sample yf,\u03c0 h,i \u223c fh+1(x\u2032 h,i, \u03c0h+1(x\u2032 h,i)), where (xh,i, ah,i, ch,i, x\u2032 h,i) is the i-th datapoint of Dh. Then, set zf,\u03c0 h,i = ch,i + yf,\u03c0 h,i and de\ufb01ne the con\ufb01dence set, F\u03c0 = ( f \u2208F : N X i=1 log fh(zf,\u03c0 h,i | xh,i, ah,i) \u2265max g\u2208Fh N X i=1 log g(zf,\u03c0 h,i | xh,i, ah,i) \u22127\u03b2, \u2200h \u2208[H] ) . 3: For each \u03c0 \u2208\u03a0, de\ufb01ne the pessimistic estimate f \u03c0 = arg maxf\u2208F\u03c0 Ea\u223c\u03c0(x1) \u0002 \u00af f1(x1, a) \u0003 . 4: Output: b \u03c0 = arg max\u03c0\u2208\u03a0 Ea\u223c\u03c0(x1) \u0002 \u00af f \u03c0 1 (x1, \u03c0) \u0003 . In of\ufb02ine RL, many works made strong all-policy coverage assumptions [Antos et al., 2008, Chen and Jiang, 2019]. Recent advancements [Kidambi et al., 2020, Xie et al., 2021, Uehara and Sun, 2022, Rashidinejad et al., 2021, Jin et al., 2021b] have pursued best effort guarantees that aim to compete with any covered policy e \u03c0, with sub-optimality of the learned b \u03c0 degrading gracefully as coverage worsens. The coverage is measured by the single-policy concentrability Ce \u03c0 = maxh \r \rdde \u03c0 h/d\u03bdh \r \r \u221e. We adopt this framework and obtain the \ufb01rst small-loss PAC bound in of\ufb02ine RL. Theorem 6.1 (Small-Loss PAC bound for P-DISCO). Assume Assumption 5.1. For any \u03b4 \u2208(0, 1), w.p. at least 1 \u2212\u03b4, running P-DISCO with \u03b2 = log(H|\u03a0||F|/\u03b4) learns a policy b \u03c0 that enjoys the following PAC bound with respect to any comparator policy e \u03c0 \u2208\u03a0: V b \u03c0 \u2212V e \u03c0 \u22649H r Ce \u03c0V e \u03c0\u03b2 N + 30H2Ce \u03c0\u03b2 N . To the best of our knowledge, this is the \ufb01rst small-loss bound for of\ufb02ine RL, which we highlight illustrates a novel robustness property against bad coverage. Namely, the dominant term not only 9 \fscales with the coverage coef\ufb01cient Ce \u03c0 but also the comparator policy\u2019s value V e \u03c0. In particular, P-DISCO can strongly compete with a comparator policy e \u03c0 if one of the following is true: (i) \u03bd has good coverage over e \u03c0, so the O(1/ \u221a N) term is manageable; or (ii) e \u03c0 has small-loss, in which case we may even obtain a fast O(1/N) rate. Thus, P-DISCO has two chances at strongly competing with e \u03c0, while conventional of\ufb02ine RL methods solely rely on (i) to be true. 7 Distributional CB Experiments Algorithm: SquareCB FastCB DistCB (Ours) King County Housing [Vanschoren et al., 2013] All episodes .756 (.0007) .734 (.0007) .726 (.0003) Last 100 ep. .725 (.0012) .719 (.0013) .708 (.0019) Prudential Life Insurance [Montoya et al., 2015] All episodes .456 (.0082) .491 (.0029) .411 (.0038) Last 100 ep. .481 (.0185) .474 (.0111) .388 (.0086) CIFAR-100 [Krizhevsky, 2009] All episodes .872 (.0010) .856 (.0016) .838 (.0021) Last 100 ep. 
.828 (.0024) .793 (.0031) .775 (.0027) Table 1: Avg cost over all episodes and last 100 episodes (lower is better). We report \u2018mean (sem)\u2019 over 10 seeds. We now compare DISTCB with SquareCB [Foster and Rakhlin, 2020] and the state-of-the-art CB method FastCB [Foster and Krishnamurthy, 2021], which respectively minimize the squared loss and log loss for estimating the conditional mean. The key question we investigate here is whether learning the conditional mean via distribution learning with MLE will demonstrate empirical bene\ufb01t over the nondistributional approaches. We consider three challenging tasks that are all derived from real-world datasets and we brie\ufb02y describe the construction below. King County Housing This dataset consists of home features and prices, which we normalize to be in [0, 1]. The action space is 100 evenly spaced prices between 0.01 and 1.0. If the learner overpredicts the true price, the cost is 1.0. Else, the cost is 1.0 minus predicted price. Prudential Life Insurance This dataset contains customer features and an integer risk level in [8], which is our action space. If the model overpredicts the risk level, the cost is 1.0. Otherwise, the cost is .1 \u00d7 (y \u2212\u02c6 y) where y is the actual risk level, and \u02c6 y is the predicted risk level. CIFAR-100 This popular image dataset contains 100 classes, which correspond to our actions, and each class is in one of 20 superclasses. We assign cost as follows: 0.0 for predicting the correct class, 0.5 for the wrong class but correct superclass, and 1.0 for a fully incorrect prediction. Results Across tasks, DISTCB achieves lower average cost over all episodes (i.e., normalized regret) and over the last 100 episodes (i.e., most updated policies\u2019 performance) compared to SquareCB. This indicates the empirical bene\ufb01t of the distributional approach over the conventional approach based on least square regression, matching the theoretical bene\ufb01t demonstrated here. Perhaps surprisingly, DISTCB also consistently outperforms FastCB. Both methods obtain \ufb01rst-order bounds with the same dependencies on A and C\u22c6, which suggests that DISTCB\u2019s empirical improvement over FastCB cannot be fully explained by existing theory. The only difference between DISTCB and FastCB is that the former integrates online MLE while the latter directly estimates the mean by minimizing the log loss (binary cross-entropy). An even more \ufb01negrained understanding of the bene\ufb01ts of distribution learning may therefore be helpful in explaining this improvement. Appendix K contains all experiment details. Reproducible code is available at https://github.com/kevinzhou497/distcb. 8"
},
{
"url": "http://arxiv.org/abs/2301.11426v1",
"title": "Model-based Offline Reinforcement Learning with Local Misspecification",
"abstract": "We present a model-based offline reinforcement learning policy performance\nlower bound that explicitly captures dynamics model misspecification and\ndistribution mismatch and we propose an empirical algorithm for optimal offline\npolicy selection. Theoretically, we prove a novel safe policy improvement\ntheorem by establishing pessimism approximations to the value function. Our key\ninsight is to jointly consider selecting over dynamics models and policies: as\nlong as a dynamics model can accurately represent the dynamics of the\nstate-action pairs visited by a given policy, it is possible to approximate the\nvalue of that particular policy. We analyze our lower bound in the LQR setting\nand also show competitive performance to previous lower bounds on policy\nselection across a set of D4RL tasks.",
"authors": "Kefan Dong, Yannis Flet-Berliac, Allen Nie, Emma Brunskill",
"published": "2023-01-26",
"updated": "2023-01-26",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "We present a model-based offline reinforcement learning policy performance\nlower bound that explicitly captures dynamics model misspecification and\ndistribution mismatch and we propose an empirical algorithm for optimal offline\npolicy selection. Theoretically, we prove a novel safe policy improvement\ntheorem by establishing pessimism approximations to the value function. Our key\ninsight is to jointly consider selecting over dynamics models and policies: as\nlong as a dynamics model can accurately represent the dynamics of the\nstate-action pairs visited by a given policy, it is possible to approximate the\nvalue of that particular policy. We analyze our lower bound in the LQR setting\nand also show competitive performance to previous lower bounds on policy\nselection across a set of D4RL tasks.",
"main_content": "Introduction Of\ufb02ine reinforcement learning (RL) could leverage historical decisions made and their outcomes to improve datadriven decision-making in areas like marketing (Thomas et al. 2017), robotics (Quillen et al. 2018; Yu et al. 2020, 2021; Swazinna, Udluft, and Runkler 2020; Singh et al. 2020), recommendation systems (Swaminathan and Joachims 2015), etc. Of\ufb02ine RL is particularly useful when it is possible to deploy context-speci\ufb01c decision policies, but it is costly or infeasible to do online reinforcement learning. Prior work on of\ufb02ine RL for large state and/or action spaces has primarily focused on one of two extreme settings. One line of work makes minimal assumptions on the underlying stochastic process, requiring only no confounding, and leverages importance-sampling estimators of potential policies (e.g., Thomas, Theocharous, and Ghavamzadeh (2015); Thomas et al. (2019)). Unfortunately, such estimators have a variance that scales exponentially with the horizon (Liu et al. 2018b) and are often ill-suited to long horizon problems1. An alternative, which is the majority of work in of\ufb02ine RL, is to make a number of assumptions on the domain, *These authors contributed equally. Copyright \u00a9 2022, Association for the Advancement of Arti\ufb01cial Intelligence (www.aaai.org). All rights reserved. 1Marginalized importance sampling (MIS) methods (Liu et al. 2018a; Xie, Ma, and Wang 2019; Yin and Wang 2020; Liu, Bacon, and Brunskill 2020) help address this but rely on the system being Markov in the underlying state space behavior data generation process and the expressiveness of the function classes employed. The work in this space typically assumes the domain satis\ufb01es the Markov assumption, which has been recently shown in the off-policy evaluation setting to enable provably more ef\ufb01cient policy value estimation (Kallus and Uehara 2020). Historically, most work (e.g., Munos (2003); Farahmand, Munos, and Szepesv\u00b4 ari (2010); Xie and Jiang (2020); Chen and Jiang (2019)) assumes the batch data set has coverage on any state-action pairs that could be visited under any possible policy. More recent work relaxes this strong requirement using a pessimism under uncertainty approach that is model-based (Yu et al. 2020, 2021; Kidambi et al. 2020), model-free (Liu et al. 2020) or uses policy search (Curi, Berkenkamp, and Krause 2020; van Hasselt, Hessel, and Aslanides 2019). Such work still relies on realizability/lack of misspeci\ufb01cation assumptions. For model-free approaches, a common assumption is that the value function class can represent all policies. Liu et al. (2020) assume that the value function class is closed under (modi\ufb01ed) Bellman backups. A recent exception is Xie and Jiang (2020), which only requires the optimal Qfunction to be representable by the value function class. However, their sample complexity scales non-optimally (Xie and Jiang 2020, Theorem 2), and they also make strong assumptions on the data coverage \u2013 essentially the dataset must visit all states with suf\ufb01cient probability. Model-based approaches such as Malik et al. (2019); Yu et al. (2020) assume the dynamics class has no misspeci\ufb01cation. These two lines of work hint at possibilities in the middle: can we leverage the sample-ef\ufb01cient bene\ufb01ts of Markov structure and allow for minimal assumptions on the datagathering process and potential model misspeci\ufb01cation? 
This can be viewed as one step towards more best-in-class results for of\ufb02ine RL. Such results are relatively rare in RL, which tends to focus on obtaining optimal or near-optimal policies for the underlying domain. Yet in many important applications, it may be much more practical to hope to identify a strong policy within a particular policy class. Our insight is that the algorithm may be able to leverage misspeci\ufb01ed models and still leverage the Markov assumption for increased data ef\ufb01ciency. In particular, we take a model-based of\ufb02ine RL approach to leverage dynamics models that can accurately \ufb01t the space of state-action pairs visited under a particular policy (local small misspeci\ufb01ca\ftion), rather than being a good model of the entire possible state-action space (global small misspeci\ufb01cation). Our work is most closely related to the recently proposed Minimax Model Learning (MML) algorithm (Voloshin, Jiang, and Yue 2021): MML optimizes for the model that minimizes a value-aware error which upper bounds the difference of policy value in learned and real models. If the considered model class includes the true model, this can work very well, but when the models are misspeci\ufb01ed, this can become overly conservative since it optimizes with respect to a worst-case potential state-action distribution shift. The key feature of our algorithm is to jointly optimize policy and dynamics. Prior model-based of\ufb02ine RL algorithms typically estimate dynamics \ufb01rst, and then optimize a policy w.r.t. the learned dynamics (Yu et al. 2020, 2021; Voloshin, Jiang, and Yue 2021). But when the dynamics model class is misspeci\ufb01ed, there may not exist a unique \u201cgood dynamics\u201d that can approximate the value of every policy. As a result, the learned policy may have a good estimated value under the learned dynamics, but a poor performance in the real environment, or the learned policy may be overly conservative due to the misestimated dynamics. Our paper makes the following contributions. First, we provide a \ufb01nite sample bound that assumes a Markov model, leverages the pessimism principle to work with many datagathering distributions, accounts for estimation error in the behavior policy and, most importantly, directly accounts for dynamics and value function model misspeci\ufb01cation (see Lemma 3). We prove the misspeci\ufb01cation error of our method is much tighter than other approaches because we only look at the models\u2019 ability to represent visited stateaction pairs for a particular policy. In that sense, we say our algorithm relies on small local model dynamics misspeci\ufb01cation. In Theorem 6, we show that when the dynamics model class does not satisfy realizability, decoupling the learning of policy and dynamics is suboptimal. This motivates our algorithm which jointly optimizes the policy and model dynamics across a \ufb01nite set. Because of the tighter pessimistic estimation, we can prove a novel safe policy improvement theorem (see Theorem 4) for of\ufb02ine policy optimization (OPO). While our primary contribution is theoretical, our proposed method for policy selection improves over the state-of-the-art MML Voloshin, Jiang, and Yue (2021) in a simple linear Gaussian setting, and has solid performance on policy selection on a set of D4RL benchmarks. Related Works There is an extensive and growing body of research on of\ufb02ine RL and we focus here on methods that also assume a Markov domain. 
Many papers focus on model-free methods (e.g., Fujimoto et al. (2018); Kumar et al. (2019, 2020)). Nachum et al. (2019) and their follow-ups (Zhang et al. 2019; Zhang, Liu, and Whiteson 2020) learn a distribution correction term, on top of which they perform evaluation or policy optimization tasks. Uehara, Huang, and Jiang (2020); Jiang and Huang (2020) study the duality between learning Q-functions and learning importance weights. Liu et al. (2020) explicitly consider the distribution shift in offline RL and propose conservative Bellman equations. Another line of research uses model-based methods (Kidambi et al. 2020; Yu et al. 2020, 2021; Matsushima et al. 2020; Swazinna, Udluft, and Runkler 2020; Fu and Levine 2021; Farahmand, Barreto, and Nikovski 2017). Gelada et al. (2019); Delgrange, Nowe, and Pérez (2022); Voloshin, Jiang, and Yue (2021) learn the dynamics using different loss functions. Yu et al. (2020) build an uncertainty quantification on top of the learned dynamics and select a policy that optimizes the lower confidence bound. Argenson and Dulac-Arnold (2020); Zhan, Zhu, and Xu (2021) focus on policy optimization instead of model learning. In Table 1, we compare our error bounds with existing results. Our statistical error (introduced by the finite dataset) is comparable with VAML (Farahmand, Barreto, and Nikovski 2017), MBS-PI (Liu et al. 2020) and MML (Voloshin, Jiang, and Yue 2021). In addition, we consider misspecification errors and safe policy improvement (SPI).

Table 1: Comparison of error bounds with prior works.
Algorithm | Statistical Error | Misspecification | SPI
VAML | Õ(p/√n) [2] | ✓ (global) | ✗
MBS-PI | Õ(Vmax ζ / ((1−γ)² √n)) | ✓ (global) | ✓
MML | R_n [3] | ✓ (global) | ✗
Ours | Õ((Vmax/(1−γ)) √(ζ/n)) | ✓ (local) | ✓
[2] VAML only considers linear function approximation, and p is the dimension of the feature vector. [3] R_n is the Rademacher complexity; for a finite hypothesis class, the best-known upper bound is of the same order as ours.

Problem Setup
A Markov Decision Process (MDP) is defined by a tuple ⟨T, r, S, A, γ⟩. S and A denote the state and action spaces. T : S × A → ∆(S) is the transition and r : S × A → R+ is the reward. γ ∈ [0, 1) is the discount factor. For a policy π : S → ∆(A), the value function is defined as V^π_T(s) = E_{s_0 = s, a_t ∼ π(s_t), s_{t+1} ∼ T(s_t, a_t)}[Σ_{t=0}^∞ γ^t r(s_t, a_t)]. Let Rmax ≜ max_{s,a} r(s, a) be the maximal reward and Vmax ≜ Rmax/(1 − γ). Without loss of generality, we assume that the initial state is fixed as s_0. We use η(T, π) ≜ V^π_T(s_0) to denote the expected value of policy π. Let ρ^π_T(s, a) ≜ (1 − γ) Σ_{t=0}^∞ γ^t Pr^π_T(s_t = s, a_t = a | s_0) be the normalized state-action distribution when we execute policy π in a domain with dynamics model T. For simplicity, in this paper we assume the reward function is known. An offline RL algorithm takes a dataset D = {(s_i, a_i, s'_i)}_{i=1}^n as input, where n is the size of the dataset. Each (s_i, a_i, s'_i) tuple is drawn independently from a behavior distribution µ. We assume that µ is consistent with the MDP in the sense that µ(· | s, a) = T(s, a) for all (s, a). For simplicity, we use Ê to denote the empirical distribution over the dataset D. In this paper, we assume that the
algorithm has access to an estimated behavior distribution µ̂ such that TV(µ, µ̂) is small. This estimation can be easily obtained using a separate dataset (e.g., Liu et al. (2020)). The algorithm can access three (finite) function classes G, T, Π. G is a class of value functions, T a class of dynamics and Π a class of policies. We assume that g(s, a) ∈ [0, Vmax] for all g ∈ G. We use T⋆ to denote the ground-truth dynamics. Note that T⋆ may not be in T. Our goal is to return a policy π ∈ Π that maximizes η(T⋆, π).

Main Results
A standard model-based RL algorithm learns the dynamics models first, and then uses the learned dynamics to estimate the value of a policy, or optimize it. In this approach, it is crucial to link the estimation error of the dynamics to the estimation error of the value. Therefore, as a starting point, we invoke the simulation lemma.

Lemma 1 (Simulation Lemma (Yu et al. 2020; Kakade and Langford 2002)). Consider two MDPs with dynamics T, T⋆, and the same reward function. Then,
η(T, π) − η(T⋆, π) = γ/(1 − γ) · E_{(s,a) ∼ ρ^π_T}[ E_{s' ∼ T(s,a)}[V^π_{T⋆}(s')] − E_{s' ∼ T⋆(s,a)}[V^π_{T⋆}(s')] ].   (1)

For a fixed ground-truth dynamics T⋆, we define G^π_T(s, a) = E_{s' ∼ T(s,a)}[V^π_{T⋆}(s')] − E_{s' ∼ T⋆(s,a)}[V^π_{T⋆}(s')]. The simulation lemma states that the dynamics will provide an accurate estimate of the policy value if E_{s' ∼ T(s,a)}[V^π_{T⋆}(s')] matches E_{s' ∼ T⋆(s,a)}[V^π_{T⋆}(s')]. In other words, to obtain a good estimate of a policy value, it is sufficient to minimize the model error G^π_T(s, a). Since the value function V^π_{T⋆} is unknown, Yu et al. (2020) upper bound the model error by introducing a class of test functions G : S → R. When V^π_{T⋆} ∈ G, we have
|G^π_T(s, a)| ≤ sup_{g ∈ G} | E_{s' ∼ T(s,a)}[g(s')] − E_{s' ∼ T⋆(s,a)}[g(s')] |.
In an offline dataset D, typically we can only observe one sample from T⋆(s, a) per state-action pair. Hence the algorithm cannot compute this upper bound exactly. In addition, the distribution of the dataset D is also different from the one required by the simulation lemma, ρ^π_T. To address these issues, we explicitly introduce a density ratio w : S × A → R+. For a test function g ∈ G and a dynamics model T, let f^g_T(s, a) ≜ E_{s' ∼ T(s,a)}[g(s')]. Recall that Ê denotes the empirical expectation over dataset D. Then our model loss is defined as
ℓ_w(T, g) = | Ê[ w(s, a)( f^g_T(s, a) − g(s') ) ] |.   (2)
Distribution mismatch. We aim to upper bound the policy evaluation error by the loss function even if there are state-action pairs with small probability mass under the behavior distribution µ (i.e., the offline dataset does not have perfect coverage). Following Liu et al. (2020), we treat the unknown state-action pairs pessimistically. Let ζ be a fixed cutoff threshold. Recall that µ̂ is an estimation of the behavior distribution. For a policy π and dynamics T, we define
w^{π,T}(s, a) ≜ I[ ρ^π_T(s, a)/µ̂(s, a) ≤ ζ ] · ρ^π_T(s, a)/µ̂(s, a)
as the truncated density ratio.
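To make these quantities concrete, here is a minimal Python sketch (our own illustration, not code from the paper) of the truncated density ratio w^{π,T} and the empirical model loss ℓ_w(T, g) of Eq. (2), assuming the state-action space has been discretized so that ρ^π_T and µ̂ are available as arrays; the names rho_pi_T, mu_hat, f_g_T, and g_next are hypothetical.

```python
import numpy as np

def truncated_density_ratio(rho_pi_T, mu_hat, zeta):
    """w^{pi,T}(s,a) = I[rho/mu_hat <= zeta] * rho/mu_hat, elementwise over a discretized (s, a) grid."""
    ratio = np.divide(rho_pi_T, mu_hat, out=np.zeros_like(rho_pi_T), where=mu_hat > 0)
    return np.where(ratio <= zeta, ratio, 0.0)

def empirical_model_loss(w, f_g_T, g_next, idx):
    """ell_w(T, g) = | E_hat[ w(s,a) * (E_{s'~T(s,a)}[g(s')] - g(s')) ] |  (Eq. (2)).

    w, f_g_T : arrays indexed by discretized (s, a) cells;
    g_next   : g evaluated at the observed next state s' for each sample in D;
    idx      : the (s, a) cell index of each sample in D.
    """
    residual = w[idx] * (f_g_T[idx] - g_next)
    return np.abs(residual.mean())
```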
For a fixed policy π, when w = w^{π,T},
| E_{(s,a) ∼ ρ^π_T}[ G^π_T(s, a) ] |
  ≤ | E_{(s,a) ∼ ρ^π_T}[ I[ ρ^π_T(s,a)/µ̂(s,a) ≤ ζ ] G^π_T(s, a) ] | + | E_{(s,a) ∼ ρ^π_T}[ I[ ρ^π_T(s,a)/µ̂(s,a) > ζ ] G^π_T(s, a) ] |
  ≤ | E_{(s,a) ∼ µ̂}[ w(s, a) G^π_T(s, a) ] | + Vmax | E_{(s,a) ∼ ρ^π_T}[ I[ ρ^π_T(s,a)/µ̂(s,a) > ζ ] ] |
  ≤ | E_{(s,a) ∼ µ}[ w(s, a) G^π_T(s, a) ] | + ζ Vmax TV(µ̂, µ) + Vmax | E_{(s,a) ∼ ρ^π_T}[ I[ ρ^π_T(s,a)/µ̂(s,a) > ζ ] ] |.
Hence, ignoring statistical error due to the finite dataset, we can upper bound the estimation error |η(T⋆, π) − η(T, π)| by
γ/(1 − γ) · ( sup_{g ∈ G} | ℓ_{w^{π,T}}(g, T) | + ζ Vmax TV(µ̂, µ) + Vmax E_{(s,a) ∼ ρ^π_T}[ I[ ρ^π_T(s,a)/µ̂(s,a) > ζ ] ] ).   (3)
Intuitively, the first term measures the error caused by imperfect dynamics T, the second term captures the estimation error of the behavior distribution, and the last term comes from truncating the density ratios.

Pessimistic Policy Optimization with Model Misspecification
In this section, we explicitly consider misspecifications of the function classes used for representing the value function and dynamics models (G and T, respectively). Most prior theoretical work on model-based RL make strong assumptions on the realizability of the dynamics model class. For example, in the offline setting, Voloshin, Jiang, and Yue (2021) focus on exact realizability of the dynamics model (that is, T⋆ ∈ T). In the online setting, Jin et al. (2020) provide bounds where there is a linear regret term due to global model misspecification. Their bounds require a T ∈ T such that TV(T(s, a), T⋆(s, a)) ≤ ε for all (s, a), even if the state-action pair (s, a) is only visited under some poorly performing policies. We now show that offline RL tasks can need much weaker realizability assumptions on the dynamics model class. Our key observation is that for a given dynamics T and policy π, computing the density ratio w^{π,T} is statistically efficient. Note that to compute w^{π,T} we do not need any samples from the true dynamics: instead, we only need to be able to estimate the state-action density under a dynamics model T for policy π. This allows us to explicitly utilize the density ratio to get a relaxed realizability assumption.

Definition 2. The local value function error for a particular dynamics model T and policy π is defined as
ε_V(T, π) ≜ inf_{g ∈ G} | E_{(s,a) ∼ µ}[ w^{π,T}(s, a)( E_{s' ∼ T(s,a)}[(g − V^π_{T⋆})(s')] + E_{s' ∼ T⋆(s,a)}[(g − V^π_{T⋆})(s')] ) ] |.
The term ε_V measures the local misspecification of the value function class – that is, the error between the true value of the policy V^π_{T⋆} and the best fitting value function in the class G – only on the state-action pairs that policy π visits under a particular potential dynamics model T. In contrast, previous results (Jin et al. 2020; Nachum et al.
2019; Voloshin, Jiang, and Yue 2021) take the global maximum error over all (reachable) (s, a), which can be much larger than the local misspeci\ufb01cation error \u03f5V (T, \u03c0). With this local misspeci\ufb01cation error, we can establish a pessimistic estimation of the true reward. Let E be a high probability event under which the loss function \u2113w\u03c0,T (T, g) is close to its expectation (randomness comes from the dataset D). In the Appendix, we de\ufb01ne this event formally and prove that Pr(E) \u22651 \u2212\u03b4. The following lemma gives a lower bound on the true reward. Proofs, when omitted, are in the Appendix. Lemma 3. Let \u03b9 = log(2|G||T ||\u03a0|/\u03b4). For any dynamics model T and policy \u03c0, we de\ufb01ne lb(T, \u03c0) = \u03b7(T, \u03c0) \u2212 1 1 \u2212\u03b3 \u0010 sup g\u2208G \u2113w\u03c0,T (g, T) + VmaxE(s,a)\u223c\u03c1\u03c0 T h I h\u03c1\u03c0 T (s, a) \u02c6 \u00b5(s, a) > \u03b6 ii\u0011 . (4) Then, under the event E, we have \u03b7(T \u22c6, \u03c0) \u2265lb(T, \u03c0) \u2212 1 1 \u2212\u03b3 \u0010 \u03f5V (T, \u03c0) \u22122Vmax p \u03b6\u03b9/n \u2212\u03b6VmaxTV (\u02c6 \u00b5, \u00b5) \u0011 . (5) We use this to de\ufb01ne our of\ufb02ine policy selection Alg. 1. Algorithm 1: Model-based Of\ufb02ine RL with Local Misspeci\ufb01cation Error Require: estimated behavior distribution \u02c6 \u00b5, truncation threshold \u03b6. for \u03c0 \u2208\u03a0, T \u2208T do Compute w\u03c0,T (s, a) = I h \u03c1\u03c0 T (s,a) \u02c6 \u00b5(s,a) \u2264\u03b6 i \u03c1\u03c0 T (s,a) \u02c6 \u00b5(s,a) . Compute lb(T, \u03c0) by Eq. (4). end \u03c0 \u2190argmax\u03c0\u2208\u03a0 maxT \u2208T lb(T, \u03c0). In contrast to existing of\ufb02ine model-based algorithms (Yu et al. 2020; Voloshin, Jiang, and Yue 2021), our algorithm optimizes the dynamics and policy jointly. For a given dynamics model, policy pair, our Alg. 1 computes the truncated density ratio w\u03c0,T which does not require collecting new samples and then uses this to compute a lower bound lb(T, \u03c0) (Eq. (4)). Finally, it outputs a policy that maximizes the lower bound. We will shortly show this joint optimization can lead to better of\ufb02ine learning. Parameter \u03b6 controls the truncation of the stationary importance weights. Increasing \u03b6 decreases the last term in the lower bound objective lb(T, \u03c0), but it may also increase the variance given the \ufb01nite dataset size. Note that by setting \u03b6 = log(n) and letting n \u2192\u221e(i.e., with in\ufb01nite data), the last term in Eq. (4) and the statistical error converge to zero. Safe Policy Improvement We now derive a novel safe policy improvement result, up to the error terms given below. Intuitively this guarantees that the policy returned by Alg. 1 will improve over the behavior policy when possible, which is an attractive property in many applied settings. Note that recent work (Voloshin, Jiang, and Yue 2021; Yu et al. 2020) on model-based of\ufb02ine RL does not provide this guarantee when the dynamics model class is misspeci\ufb01ed. For a \ufb01xed policy \u03c0, de\ufb01ne \u03f5\u03c1(\u03c0) \u225cinfT \u2208T E(s,a)\u223c\u03c1\u03c0 T \u22c6[TV (T(s, a), T \u22c6(s, a))], (6) \u03f5\u00b5(\u03c0) \u225cE(s,a)\u223c\u03c1\u03c0 T \u22c6 h I h\u03c1\u03c0 T \u22c6(s, a) \u02c6 \u00b5(s, a) > \u03b6/2 ii . (7) The term \u03f5\u03c1 measures the local misspeci\ufb01cation error of the dynamics model class in being able to represent the dynamics for state-action pairs encountered for policy \u03c0. 
\u03f5\u00b5 represents that overlap of the dataset for an alternate policy \u03c0: such a quantity is common in much of of\ufb02ine RL. In the following theorem, we prove that the true value of the policy computed by Alg. 1 is lower bounded by that of the optimal policy in the function class with some error terms. Theorem 4. Consider a \ufb01xed parameter \u03b6. Let \u02c6 \u03c0 be the policy computed by Alg. 1 and \u02c6 T = argmaxT lb(T, \u02c6 \u03c0). Let \u03b9 = log(2|G||T ||\u03a0|/\u03b4). Then, with probability at least 1\u2212\u03b4, we have \u03b7(T \u22c6, \u02c6 \u03c0) \u2265sup \u03c0 \u001a \u03b7(T \u22c6, \u03c0) \u22126Vmax\u03f5\u03c1(\u03c0) (1 \u2212\u03b3)2 \u2212Vmax\u03f5\u00b5(\u03c0) 1 \u2212\u03b3 \u001b \u2212\u03f5V ( \u02c6 T, \u02c6 \u03c0) 1 \u2212\u03b3 \u22124Vmax 1 \u2212\u03b3 r \u03b6\u03b9 n \u22122\u03b6VmaxTV (\u02c6 \u00b5, \u00b5) 1 \u2212\u03b3 . (8) To prove Theorem 4, we prove the tightness of lb(T, \u03c0) \u2014 the lower bound maxT lb(T, \u03c0) is at least as high as the true value of the policy with some errors. Consequently, maximizing the lower bound also maximizes the true value of the policy. Formally speaking, we have the following Lemma. Lemma 5. For any policy \u03c0 \u2208\u03a0, under the event E we have max T \u2208T lb(T, \u03c0) \u2265\u03b7(T \u22c6, \u03c0) \u22126Vmax\u03f5\u03c1(\u03c0)/(1 \u2212\u03b3)2 \u2212 1 1 \u2212\u03b3 \u0010 Vmax\u03f5\u00b5(\u03c0) \u22122Vmax p \u03b6\u03b9/n \u2212\u03b6VmaxTV (\u02c6 \u00b5, \u00b5) \u0011 . In the sequel, we present a proof sketch for Lemma 5. In this proof sketch, we hide 1/(1 \u2212\u03b3) factors in the bigO notation. For a \ufb01xed policy \u03c0, let \u02c6 T be the minimizer of Eq. (6). We prove Lemma 5 by analyzing the terms in the de\ufb01nition of lb( \u02c6 T, \u03c0) (Eq. (4)) separately. i. Following the de\ufb01nition of Eq. (6), we can show that \u2225\u03c1\u03c0 \u02c6 T \u2212\u03c1\u03c0 T \u22c6\u22251 \u2264 O(\u03f5\u03c1(\u03c0)). Consequently we get \u03b7( \u02c6 T, \u03c0) \u2265\u03b7(T \u22c6, \u03c0) \u2212O(\u03f5\u03c1(\u03c0)). \fii. Recall that 0 \u2264g(s, a) \u2264Vmax for all g \u2208G. Then for any (s, a) we have supg\u2208G |Es\u2032\u223c\u02c6 T (s,a)g(s\u2032) \u2212 Es\u2032\u223cT \u22c6(s,a)g(s\u2032)]| \u2264VmaxTV( \u02c6 T(s, a), T \u22c6(s, a)). Combining the de\ufb01nition of \u2113w(g, T), Eq. (6) and statistical error we get supg\u2208G \u2113w\u03c0,T (g, T) \u2264e O(\u03f5\u03c1(\u03c0) + 1/\u221an + VmaxTV (\u02c6 \u00b5, \u00b5)) under event E. iii. For the last term regarding distribution mismatch, we combine Eq. (7) and Lemma 8. We can upper bound this term by O(\u03f5\u03c1(\u03c0) + \u03f5\u00b5(\u03c0)). iv. The \ufb01nal term arises due to the potential estimation error in the behavior policy distribution. Theorem 4 follows directly from combining Lemma 3 and Lemma 5. Note that Theorem 4 accounts for estimation error in the behavior policy, misspeci\ufb01cation in the dynamics model class, and misspeci\ufb01cation in the value function class, the latter two in a more local, tighter form than prior work. Illustrative Example To build intuition of where our approach may yield bene\ufb01ts, we provide an illustrative example where Alg. 1 has better performance than existing approaches: an MDP whose state space is partitioned into several parts. The model class is restricted so that every model can only be accurate on one part of the state space. 
When each deterministic policy only visits one part of the state space, the local misspeci\ufb01cation error is small \u2014 for each policy, there exists a dynamics model in the set which can accurately estimate the distribution of states and actions visited under that policy. In contrast, if the dynamics are learned to \ufb01t the whole state space, the estimation error will be large. More precisely, for a \ufb01xed parameter d, consider a MDP where S = {s0, \u00b7 \u00b7 \u00b7 , sd} \u222a{sg, sb}. There are d actions denoted by a1, \u00b7 \u00b7 \u00b7 , ad. The true dynamics are deterministic and given by T \u22c6(s0, ai) = si, T \u22c6(si, aj) = \u001asg, if I [i = j] , sb, if I [i \u0338= j] , (9) T \u22c6(sg, ai) = sg, T \u22c6(sb, ai) = sb, \u2200i \u2208[d]. (10) And the reward is r(s, ai) = I [s = sg] , \u2200i \u2208[d]. The transition function class T is parameterized by \u03b8 \u2208 Rd. For a \ufb01xed \u03b8, the transition for states s1, . . . , sd is T\u03b8(si, aj) = \u001asg, w.p. 1 2 \u00001 + e\u22a4 j \u03b8 \u0001 , sb, w.p. 1 2 \u00001 \u2212e\u22a4 j \u03b8 \u0001 , (11) where ej is the j-th standard basis of Rd. The transitions for states s0, sg, sb is identical to the true dynamics T \u22c6. But the transition model T\u03b8 in the function class must use the same parameter \u03b8 to approximate the dynamics in states s1, \u00b7 \u00b7 \u00b7 , sd, which makes it misspeci\ufb01ed. Decoupling learning the dynamics model and policy is suboptimal. Most prior algorithms \ufb01rst learn a dynamics model and then do planning with that model. However, note here that the optimal action induced by MDP planning given a particular T\u03b8 is suboptimal (assuming a uniformly random tie-breaking). This is because, for any given \u03b8, that dynamics model will estimate the dynamics of states s1, \u00b7 \u00b7 \u00b7 , sd as being identical, with identical resulting value functions. Note this is suboptimality will occur in this example even if the dataset is large and covers the state\u2013action pairs visited by any possible policy (\u03f5\u00b5(\u03c0) = 0), the value function class is tabular and can represent any value function \u03f5V = 0, the behavior policy is known or the resulting estimation error is small (TV (\u02c6 \u00b5, \u00b5) = 0, and \u03b6 = 0). In such a case, Theorem 4 guarantees that with high probability, our algorithm will learn the optimal policy because there exist couplings of the dynamics models and optimal policies such that the local misspeci\ufb01cation error \u03f5\u03c1 = 0. This demonstrates that prior algorithms (including MML (Voloshin, Jiang, and Yue 2021)) that decouple the learning of dynamics and policy can be suboptimal. We now state this more formally: Theorem 6. Consider any (possibly stochastic) algorithm that outputs an estimated dynamics T\u03b8 \u2208T . Let \u03c0\u03b8 be the greedy policy w.r.t. T\u03b8 (with ties breaking uniformly at random). Then max \u03c0 \u03b7(T \u22c6, \u03c0) \u2212\u03b7(T \u22c6, \u03c0\u03b8) \u2265(A \u22121)\u03b32 A(1 \u2212\u03b3) . (12) As a side point, we also show that the off-policy estimation error in Voloshin, Jiang, and Yue (2021) is large when the dynamics model class is misspeci\ufb01ed in Proposition 7. We defer this result to the Appendix. Experiments While our primary contribution is theoretical, we now investigate how our method can be used for of\ufb02ine model-based policy selection with dynamics model misspeci\ufb01cation. 
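Before turning to the experimental setups, the policy-selection rule of Alg. 1 instantiated in these experiments can be summarized in a short Python sketch (ours); estimate_eta, sup_model_loss, and truncation_mass are hypothetical helpers standing in for the three terms of Eq. (4).

```python
def select_policy(policies, dynamics_models, estimate_eta, sup_model_loss,
                  truncation_mass, v_max, gamma):
    """Sketch of Alg. 1: score every (policy, dynamics) pair by the lower bound lb(T, pi) of Eq. (4)."""
    best_pi, best_lb = None, float("-inf")
    for pi in policies:
        for T in dynamics_models:
            # lb(T, pi) = eta(T, pi) - (sup_g ell_{w^{pi,T}}(g, T) + Vmax * P_rho[rho/mu_hat > zeta]) / (1 - gamma)
            lb = estimate_eta(T, pi) - (
                sup_model_loss(T, pi) + v_max * truncation_mass(T, pi)
            ) / (1.0 - gamma)
            if lb > best_lb:
                best_pi, best_lb = pi, lb
    return best_pi
```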
We first empirically evaluate our method on the Linear-Quadratic Regulator (LQR), a commonly used environment in optimal control theory (Bertsekas et al. 2000), in order to assess: Can Algorithm 1 return the optimal policy when we have both model and distribution mismatch? We also evaluate our approach using D4RL (Fu et al. 2020), a standard offline RL benchmark for continuous control tasks. Here we consider: Given policy and dynamics pairs obtained using state-of-the-art offline model-based RL methods with ensemble dynamics, does Alg. 1 allow picking the best policy, outperforming previous methods?

Linear-Quadratic Regulator (LQR)
LQR is defined by the linear transition dynamics s_{t+1} = A s_t + B a_t + η, where s_t ∈ R^n and a_t ∈ R^m are the state and action at time step t, respectively, and η ∼ N(0, σ²I) is random noise. LQR has a quadratic reward function R(s, a) = −(sᵀQs + aᵀRa) with Q ∈ R^{n×n} and R ∈ R^{m×m} being positive semi-definite matrices, Q, R ⪰ 0. The optimal controller maximizing the sum of future rewards Σ_{t=1}^H −(s_tᵀQs_t + a_tᵀRa_t) until the end of horizon H has the form a_t = −K s_t (K ∈ R^{m×n}) (Bertsekas et al. 2000). The value function is also quadratic, V(s) = sᵀUs + q for some constant q and positive semi-definite matrix U ⪰ 0. In the experiment, the state space is [−1, 1].

Misspecified transition classes. Consider a 1D version of LQR with A(x) = (1 + x/10), B(x) = (−0.5 − x/10), Q = 1, R = 1 and noise η ∼ N(0, 0.05). Our true dynamics are given by x∗ = 6, and the corresponding optimal policy has K = −1.1.

[Figure 1] Left: Visualization of the true policy value η(T⋆, π); our algorithm picks the optimal policy, whereas MML picks a suboptimal policy. Middle: Visualization of the negative lower bounds lb(T, π) for different policies and models (indexed by the values of (v, u)), decomposed into model loss and distribution shift. Right: Interquartile mean (IQM) normalized scores on D4RL of two model-based lower bounds (MML and MBLB) and a recent model-based policy learning algorithm (MOPO).

Function classes used by Alg. 1 are finite and computed as follows: (i) the value function class G contains the value functions of 1D LQR with parameters x ∈ {2, 4, 10} and K ∈ {−1.1, −0.9, −0.7}; (ii) the transition class T is misspecified. We use the following transition class T_u ∈ T parametrized by u:
T_u: s_{t+1} = A(x∗) s_t − B(x∗) a_t if s_t ∈ [u, u + 1], and s_{t+1} = s_t otherwise,
with u ∈ {−0.75, −0.5, −0.25, 0, 0.25}. In other words, the capacity of the transition class is limited: each function can only model the true dynamics on a part of the state space; (iii) the policy class is given by π_v parameterized by v, with π_v(s) = −1.1(s − v) + N(0, 0.01) and v ∈ {−0.6, −0.4, −0.2, 0, 0.2, 0.4, 0.6}. Intuitively, π_v tries to push the state toward s = v. Since the state and action spaces are one-dimensional, we can compute the density ratio w^{π,T} efficiently by discretization.
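As one way to realize this discretization, the following Python sketch (ours; the constants mirror the setup above, while the rollout scheme, horizon, discount factor, and clipping to the state space are our assumptions) rolls out π_v under the misspecified model T_u and histograms visits to approximate the discounted state occupancy, from which the density ratio against µ̂ can be formed cell by cell.

```python
import numpy as np

A_STAR, B_STAR = 1.6, -1.1  # A(x*) = 1 + x*/10 and B(x*) = -0.5 - x*/10 with x* = 6

def step_Tu(s, a, u):
    """Misspecified dynamics T_u: accurate only on the slice s in [u, u+1], identity elsewhere."""
    return A_STAR * s - B_STAR * a if u <= s <= u + 1.0 else s

def policy_v(s, v, rng):
    """pi_v(s) = -1.1 (s - v) + N(0, 0.01); variance 0.01 corresponds to std 0.1."""
    return -1.1 * (s - v) + rng.normal(0.0, 0.1)

def occupancy_histogram(u, v, gamma=0.95, horizon=200, n_traj=2000, bins=50, seed=0):
    """Discounted state-occupancy estimate under (pi_v, T_u) by discretizing [-1, 1] into bins."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(-1.0, 1.0, bins + 1)
    hist = np.zeros(bins)
    for _ in range(n_traj):
        s = 0.0
        for t in range(horizon):
            a = policy_v(s, v, rng)
            cell = np.clip(np.searchsorted(edges, s) - 1, 0, bins - 1)
            hist[cell] += (1 - gamma) * gamma ** t
            s = np.clip(step_Tu(s, a, u), -1.0, 1.0)  # clipping to the state space is an assumption
    return hist / n_traj
```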
The implementation details are deferred to Appendix. Baseline. We compare our algorithm to minimizing MML loss as described in the OPO algorithm of Voloshin, Jiang, and Yue (2021, Algorithm 2). MML strictly outperformed VAML (Farahmand, Barreto, and Nikovski 2017) as shown in the experiments of (Voloshin, Jiang, and Yue 2021); hence, we only compare to MML in our experiments. Results. Figure 1 (Left) shows the return of different policies under the true environment. Our method picks the optimal policy for the true model, whereas MML picks the wrong policy. In Figure 1 (Middle), we also visualize different terms in the de\ufb01nition of lb(T, \u03c0) (Eq. (5)). Note that the model loss for different policy is different (model loss for (v, u) = (0, 0) is signi\ufb01cantly larger than (0.0.\u22120.25), even if the dynamics are the same). This is because the model loss is evaluated with a different density ratio. This highlights the main bene\ufb01t of our method over the baseline. Since the model class is misspeci\ufb01ed, maximizing over the weight function w in the MML loss results in an unrealistically large loss value for some models. However, if the chosen policy does not visit the part of the state space with a large error, there is no need to incur a high penalty. D4RL D4RL (Fu et al. 2020) is an of\ufb02ine RL standardized benchmark designed and commonly used to evaluate the progress of of\ufb02ine RL algorithms. This benchmark is standard for evaluating of\ufb02ine policy learning algorithms. Here, we use a state-of-the-art policy learning algorithm MOPO (Yu et al. 2020) to propose a set of policy-transition model tuples \u2013 for N policy hyperparameters and K transition models, we can get M \u00d7 K tuples: {(\u03c01, T1), (\u03c01, T2), ..., (\u03c0N, TK)}. The MOPO algorithm learns an ensemble of transition models and randomly chooses one to sample trajectories during each episode of training. Instead, we choose one transition model to generate trajectories for the policy throughout the entire training. In our experiment, we choose M = 1 and K = 5, and train each tuple for 5 random seeds on Hopper and HalfCheetah tasks (see Appendix). We then compute the model-based lower bound for each (\u03c0i, Tj), and select the optimal policy that has the highest lower bound. We learn the dynamics using 300k iterations and we train each policy using 100k gradient iterations steps with SAC (Haarnoja et al. 2018) as the policy gradient algorithm, imitating MOPO (Yu et al. 2020) policy gradient update. MML. Voloshin, Jiang, and Yue (2021) recommended two practical implementations for computing MML lower bounds. The implementation parametrizes w(s, a)V (s\u2032) jointly via a new function h(s, a, s\u2032). We refer readers to Prop 3.5 from Voloshin, Jiang, and Yue (2021) for a detailed explanation. We describe how we parametrize this function as follows: \u2022 Linear: Voloshin, Jiang, and Yue (2021) showed that if T, V, \u00b5 are all from the linear function classes, then a model T that minimizes MML loss is both unique and identi\ufb01able. This provides a linear parametrization of h(s, a, s\u2032) = \u03c8(s, a, s\u2032)T \u03b8, where \u03c8 is a basis function. We choose \u03c8 to be either a squared basis function or a polynomial basis function with degree 2. 
\u2022 Kernel: Using a radial basis function (RBF) kernel over S × A × S and computing K((s, a, s'), (s̃, ã, s̃')), Voloshin, Jiang, and Yue (2021) showed that there exists a closed-form solution to compute the maxima of the MML loss (RKHS). Here there is no need for any gradient update; we only sample s' from T.

Table 2: We report the mean and (standard deviation) of the selected policy's simulator-environment performance across 5 random seeds. MML and MBLB are used as model-selection procedures where they select the best policy for each seed. Our method chooses the most near-optimal policy across the datasets.
Dataset Type | Env | MOPO | MML (Squared) | MML (Polynomial) | MML (RKHS) | MBLB (Linear) | MBLB (Quad)
medium | hopper | 175.4 (95.3) | 379.4 (466.4) | 375.6 (459.5) | 375.0 (459.9) | 591.7 (523.1) | 808.5 (502.7)
med-expert | hopper | 183.8 (94.4) | 160.9 (131.5) | 116.5 (148.4) | 61.4 (35.0) | 261.1 (157.9) | 242.5 (134.0)
expert | hopper | 80.4 (63.4) | 93.8 (87.9) | 61.6 (61.9) | 70.0 (56.2) | 118.2 (61.6) | 121.0 (72.5)
medium | halfcheetah | 599.8 (668.4) | 1967.6 (1707.5) | 2625.1 (937.2) | 3858.2 (1231.1) | 3290.4 (1753.1) | 2484.2 (1526.8)
med-expert | halfcheetah | -486.6 (48.1) | -188.5 (137.2) | -77.0 (252.5) | -343.2 (225.2) | 207.4 (509.5) | 192.8 (432.0)

MBLB (Ours). For a continuous control task, we compute our model-based lower bound (MBLB) as follows.
Compute η(T, π). Although it is reasonable to directly use a value function V^π_T trained during policy learning to compute η(T, π), Paine et al. (2020) and Kumar et al. (2021) point out that this value function often severely over-estimates the actual discounted return. Therefore, we estimate the expected value of policy π using the generalized advantage estimator (GAE) (Schulman et al. 2016). For a sequence of transitions {s_t, a_t, r(s_t, a_t), s_{t+1}}_{t ∈ [0, N]}, it is defined as
A_t = Σ_{t'=t}^{t+N} (γλ)^{t'−t} ( r(s_{t'}, a_{t'}) + γ V_φ(s_{t'+1}) − V_φ(s_{t'}) ),
with λ a fixed hyperparameter and V_φ the value function estimator at the previous optimization iteration. Then, to estimate the value function, we solve the non-linear regression problem minimize_φ Σ_{t'=t}^{t+N} (V_φ(s_{t'}) − V̂_{t'})², where V̂_{t'} = A_{t'} + V_φ(s_{t'}). We also provide a comparison to using the standard TD-1 Fitted Q Evaluation (FQE) (Le, Voloshin, and Yue 2019) instead in Table A1 in the Appendix. We find that using GAE provides better policy evaluation estimates (a short code sketch of this computation is given at the end of this subsection).
Behavior density modeling. We use a state-of-the-art normalizing flow probability model to estimate the density of state-action pairs (Papamakarios et al. 2021). For ρ^π_T, we sample 10,000 trajectories from T, π and estimate the corresponding density; for the behavior distribution µ, we use the given dataset D. We empirically decide the number of training epochs that gives the model the best fit.
Compute sup_{g ∈ G} |ℓ_{w^{π,T}}(g, T)|. We parametrize g either as a linear function of the state, g(s) = mᵀs, or as a quadratic function of the state, g(s) = sᵀMs + b. We use gradient ascent on ℓ_{w^{π,T}}(g, T) to maximize this objective.
Results. We report the results in Table 2. There is general overlap across seeds in the performance of the various methods, but our approach has the best average performance or is within the standard deviation of the best.
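As referenced above, here is a short Python sketch (ours) of the GAE computation used for η(T, π), written as the standard backward recursion, which is equivalent to the truncated sum A_t given earlier; the array inputs and the default γ, λ values are assumptions.

```python
import numpy as np

def gae(rewards, values, next_values, gamma=0.99, lam=0.95):
    """A_t = sum_{t' >= t} (gamma * lam)^(t' - t) * delta_{t'},
    with delta_t = r_t + gamma * V_phi(s_{t+1}) - V_phi(s_t), via a backward recursion.

    Returns the advantages and the regression targets V_hat_t = A_t + V_phi(s_t)."""
    deltas = rewards + gamma * next_values - values
    advantages = np.zeros_like(deltas)
    running = 0.0
    for t in reversed(range(len(deltas))):
        running = deltas[t] + gamma * lam * running
        advantages[t] = running
    return advantages, advantages + values
```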
We also show that for different choices of how we parameterize the w(s, a)V(s') distribution (MML) and how we choose the family of test functions g (MBLB), we select different final policies. Overall, however, MBLB picks better-performing final policies with its two parametrizations, while MML selects lower-performing policies with its three parametrizations. We find that our approach of selecting among the set of policies computed from each of the models used by MOPO consistently outperforms the policy produced by MOPO in the considered tasks. To summarize these results, we report the interquartile mean (IQM) scores of each method in Figure 1 (Right). IQM is an outlier-robust metric proposed by Agarwal et al. (2021) to compare deep RL algorithms. We create the plot by sampling with replacement over all runs on all datasets 50,000 times. Though there is significant overlap, our method generally outperforms policies learned from MOPO."
},
{
"url": "http://arxiv.org/abs/2302.02392v2",
"title": "Offline Minimax Soft-Q-learning Under Realizability and Partial Coverage",
"abstract": "In offline reinforcement learning (RL) we have no opportunity to explore so\nwe must make assumptions that the data is sufficient to guide picking a good\npolicy, taking the form of assuming some coverage, realizability, Bellman\ncompleteness, and/or hard margin (gap). In this work we propose value-based\nalgorithms for offline RL with PAC guarantees under just partial coverage,\nspecifically, coverage of just a single comparator policy, and realizability of\nsoft (entropy-regularized) Q-function of the single policy and a related\nfunction defined as a saddle point of certain minimax optimization problem.\nThis offers refined and generally more lax conditions for offline RL. We\nfurther show an analogous result for vanilla Q-functions under a soft margin\ncondition. To attain these guarantees, we leverage novel minimax learning\nalgorithms to accurately estimate soft or vanilla Q-functions with\n$L^2$-convergence guarantees. Our algorithms' loss functions arise from casting\nthe estimation problems as nonlinear convex optimization problems and\nLagrangifying.",
"authors": "Masatoshi Uehara, Nathan Kallus, Jason D. Lee, Wen Sun",
"published": "2023-02-05",
"updated": "2023-11-13",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "In offline reinforcement learning (RL) we have no opportunity to explore so\nwe must make assumptions that the data is sufficient to guide picking a good\npolicy, taking the form of assuming some coverage, realizability, Bellman\ncompleteness, and/or hard margin (gap). In this work we propose value-based\nalgorithms for offline RL with PAC guarantees under just partial coverage,\nspecifically, coverage of just a single comparator policy, and realizability of\nsoft (entropy-regularized) Q-function of the single policy and a related\nfunction defined as a saddle point of certain minimax optimization problem.\nThis offers refined and generally more lax conditions for offline RL. We\nfurther show an analogous result for vanilla Q-functions under a soft margin\ncondition. To attain these guarantees, we leverage novel minimax learning\nalgorithms to accurately estimate soft or vanilla Q-functions with\n$L^2$-convergence guarantees. Our algorithms' loss functions arise from casting\nthe estimation problems as nonlinear convex optimization problems and\nLagrangifying.",
"main_content": "Introduction In of\ufb02ine Reinforcement Learning (RL), we must learn exclusively from of\ufb02ine data and are unable to actively interact with the environment (Levine et al., 2020). Of\ufb02ine RL has garnered considerable interest in a range of applications where experimentation may be prohibitively costly or risky. *uehara.masatoshi@gene.com \u2020This work is done when the author was at Cornell University. \u2021kallus@cornell.edu \u00a7jasonlee@princeton.edu \u00b6ws455@cornell.edu 1 \fTable 1: Summary of partial-coverage-type guarantees with model-free general function approximation. Here, w\u22c6:= d\u03c0\u22c6,\u00b50/Pb where d\u03c0\u22c6,\u00b50 is the occupancy distribution under the optimal policy \u03c0\u22c6starting from \u00b50 and Pb is the distribution over the of\ufb02ine data. A function \u02dc w\u22c6 \u03b1 is a regularized marginal density ratio that satis\ufb01es \u02dc w\u22c6 0 = w\u22c6. Functions q\u22c6, q\u22c6 \u03b1, q\u03c0 are the optimal Q\u22c6-function, the soft Q-function, and the Q-function under a policy \u03c0, respectively. Functions v\u22c6 \u03b1, l\u22c6 \u03b1 are Lagrange multipliers of speci\ufb01c minimax optimization problems. The operator B\u03c0 is a Bellman operator under a policy \u03c0. Function classes W, Q, L, V consist of functions that map states (and actions) to real numbers. Note the guarantees provided by Jiang and Huang (2020); Xie et al. (2021) are more general than the below in that the output policy can compete with any policy in the policy class \u03a0. For simplicity, we set the comparator policy to be the optimal policy \u03c0\u22c6in this table. Note that other studies (Ozdaglar et al., 2023; Rashidinejad et al., 2022; Zhu et al., 2023) proposing model-free general function approximation under partial coverage rely on the completeness-type assumption as in (Xie et al., 2021) or realizability for any \u03c0 as in Jiang and Huang (2020). Primary Assumptions Jiang and Huang (2020) w\u22c6\u2208W, q\u03c0 \u2208Q \u2200\u03c0 \u2208\u03a0 Xie et al. (2021) q\u03c0 \u2208Q, B\u03c0Q \u2282Q \u2200\u03c0 \u2208\u03a0 Zhan et al. (2022) \u02dc w\u22c6 \u03b1 \u2208W, v\u22c6 \u03b1 \u2208V MSQP q\u22c6 \u03b1 \u2208Q, l\u22c6 \u03b1 \u2208L Chen and Jiang (2022) Hard margin, w\u22c6\u2208W, q\u22c6\u2208Q MQP Soft margin, q\u22c6\u2208Q, l\u22c6\u2208L Of\ufb02ine RL is generally based on two types of assumptions: suf\ufb01cient coverage in the of\ufb02ine data and suf\ufb01cient function approximation. For instance, classical Fitted-Q-iteration (Antos et al., 2008; Chen and Jiang, 2019) requires (a) full coverage in the of\ufb02ine data, max(s,a) d\u03c0,\u00b50(s, a)/P\u03c0b(s, a) < \u221efor any policy \u03c0 where P\u03c0b(s, a) is the of\ufb02ine data\u2019s distribution on the states and actions and d\u03c0,\u00b50(s, a) is the state-action occupancy distribution under a policy \u03c0 and initial-state distribution \u00b50(s); (b) realizability of the Q\u2217-function in a hypothesis class; and (c) Bellman completeness, i.e., the Bellman operator applied to any function in the hypothesis class remains in the class. Full coverage (a) and Bellman completeness (c) can be particularly stringent because of\ufb02ine data is often insuf\ufb01ciently exploratory and Bellman completeness signi\ufb01cantly restricts transition dynamics. 
To overcome these challenges, we here propose algorithms with guarantees under re2 \falizability of single functions and re\ufb01ned partial coverage of single policies, and without Bellman completeness. We tackle this by introducing two novel value-based algorithms. The \ufb01rst algorithm, MSQP (mimimax soft-Q-learning with penalization), comprises of two steps: learning soft Q-functions (a.k.a., entropy-regularized Q-functions, as de\ufb01ned in Fox et al., 2015; Schulman et al., 2017) from of\ufb02ine data, and using the softmax policies of the learned soft Q-functions. The second algorithm, MQP (mimimax Q-learning with penalization), consists of two steps: learning standard Q-functions from of\ufb02ine data and employing the greedy policy of the learned Q function on the of\ufb02ine data. Using the above-mentioned two algorithms, we attain PAC guarantees under partial coverage and realizability, yet without Bellman completeness. In particular, in MSQP using soft Q-functions, we ensure strong performance under the realizability of q\u22c6 \u03b1, l\u22c6 \u03b1 and the (density-ratio-based) partial coverage max(s,a) d\u03c0\u22c6 \u03b1,\u00b50(s, a)/Pb(s, a) < \u221e. Here q\u22c6 \u03b1 is a soft Q-function, l\u22c6 \u03b1 is a function that possesses a certain dual relation to q\u22c6 \u03b1, \u03c0\u22c6 \u03b1 is the soft-max optimal policy, and \u03b1 is the temperature parameter for the entropy-regularization. Notably, max(s,a) d\u03c0\u22c6 \u03b1,\u00b50(s, a)/Pb(s, a) < \u221eis signi\ufb01cantly less stringent than the uniform coverage in that the coverage is only imposed against a policy \u03c0\u22c6 \u03b1. In MQP using Q-functions, we similarly ensure strong performance under a soft margin, the realizability of q\u22c6, l\u22c6, and the partial coverage max(s,a) d\u03c0\u22c6,\u00b50(s, a)/Pb(s, a) < \u221e. Here q\u22c6is the vanilla Q-function and l\u22c6is a function that possesses a certain dual relation to q\u22c6, and \u03c0\u22c6is the usual optimal policy. Note the soft margin is introduced to allow realizability on standard Q-functions rather than soft Q-functions. However, the conditions max(s,a) d\u03c0\u22c6 \u03b1,\u00b50(s, a)/Pb(s, a) < \u221e or max(s,a) d\u03c0\u22c6,\u00b50(s, a)/Pb(s, a) < \u221emay still be strong as these marginal density ratios may not exist in large-scale MDPs. For example, this condition is easily violated when the initial distribution \u00b50 is not covered by Pb (i.e., maxs \u00b50(s)/Pb(s) = \u221ewhere Pb(s) := P a Pb(s, a)). Therefore, as an additional innovation, in our algorithms we can further relax these density-ratio-based partial coverage conditions. Speci\ufb01cally, we can demonstrate results under a re\ufb01ned partial coverage, which is adaptive to Q-function classes, even when the initial distribution \u00b50 is not covered by Pb.1 The primary challenge lies in the design of loss functions for effectively learning soft Q-functions and vanilla Q-functions from of\ufb02ine data without Bellman completness. To tackle this, we devise new minimax loss functions with certain regularization terms to achieve favorable L2-convergence rates on the of\ufb02ine data (i.e., in terms of E(s,a)\u223cPb[{\u02c6 q\u03b1 \u2212 q}2(s, a)] given an estimator \u02c6 q). This result serves as the key building block for obtaining re\ufb01ned partial coverage under realizability and is of independent interest in its own right. 
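As a purely schematic illustration of the second step of MSQP described above (turning a learned soft Q-function into a softmax policy), here are a few lines of Python for a finite or discretized action space; q_hat, log_pi_b, and alpha are assumed inputs, and the minimax estimation of q_hat itself is the subject of the loss functions introduced below.

```python
import numpy as np

def softmax_policy(q_hat, log_pi_b, alpha):
    """pi_hat_alpha(a | s) proportional to pi_b(a | s) * exp(q_hat(s, a) / alpha),
    i.e. the softmax of q_hat / alpha + log pi_b over actions.

    q_hat, log_pi_b: arrays of shape (n_states, n_actions)."""
    logits = q_hat / alpha + log_pi_b
    logits = logits - logits.max(axis=1, keepdims=True)  # subtract max for numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)
```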
Existing results are often constrained to speci\ufb01c models, such as linear models (Shi et al., 2022a), or they require Bellman completeness (Antos et al., 2008; Chen and Qi, 2022; Chen and Jiang, 2019). In contrast, our guarantee is applicable to any function approximation method, without the need for Bellman completeness. To the best of our knowl1Note \u00b50 and Pb could be generally different even in the contextual bandit setting. This important setting is often considered in the literature on external validity/transportability in causal inference, as results of randomized clinical trials cannot be directly transported because covariate distributions in of\ufb02ine data and target data are different (Cole and Stuart, 2010; Pearl and Bareinboim, 2014; Dahabreh et al., 2019). 3 \fedge, this is the \ufb01rst guarantee of its kind. Our work exhibits marked improvements over two closely related studies (Zhan et al., 2022; Chen and Jiang, 2022). Similar to our work, they propose algorithms that operate under the realizability of speci\ufb01c functions and partial coverage, yet without Bellman completeness. Zhan et al. (2022) ensures a PAC guarantee under (a\u2019) partial coverage in the of\ufb02ine data max(s,a) d\u02dc \u03c0\u22c6 \u03b1,\u00b50(s, a)/Pb(s, a) < \u221ewhere \u02dc \u03c0\u22c6 \u03b1 is a speci\ufb01c near-optimal policy under the regularization, which differs from the soft optimal policy, and (b\u2019) realizability of d\u02dc \u03c0\u22c6 \u03b1,\u00b50/Pb and the regularized value function. However, unlike MSQP, it is unclear how to re\ufb01ne the abovementioned coverage, i.e., the guarantee could be vacuous when the initial distribution is not covered by of\ufb02ine data. A similar guarantee, but without regularization, is obtained under the additional hard margin (a.k.a., gap) condition in Chen and Jiang (2022). Our soft margin is a strict relaxation of the hard margin, which is important because, unlike the soft margin, the hard margin generally does not hold in continuous state spaces and involves very large constants in discrete state spaces. Lastly, although Chen and Jiang (2022); Zhan et al. (2022) use completely different algorithms and attain guarantees for regularized value-functions and non-regularized value functions, respectively, our guarantee can afford guarantees for regularized and non-regularized value-functions in a uni\ufb01ed manner since MQP can be seen as a limit of MSQP when \u03b1 goes to 0. Our contributions are summarized below and in Table 1. 1. We establish that the optimal policy can be learned under partial coverage and realizability of the optimal soft Q-function and its dual. Notably, we abstain from the use of possibly stronger conditions in of\ufb02ine RL, such as full coverage, Bellman completeness, and uniform realizability over the policy class (such as q\u03c0 \u2208Q for any \u03c0 as in Jiang and Huang, 2020). In particular, while a similar guarantee is provided in Zhan et al. (2022), our partial coverage guarantee has an advantage in that we are able to potentially accommodate scenarios where the initial distribution is not covered by Pb. This is feasible because our algorithm is value-based in nature, which allows us to leverage the structure of the Q-function classes and re\ufb01ne the coverage condition. 2. We demonstrate that the optimal policy can be learned under partial coverage, realizability of the Q-function and its dual, and a soft margin. 
While a similar guarantee is obtained in Chen and Jiang (2022), our guarantee has the advantage that the soft margin is signi\ufb01cantly less stringent than the hard margin required therein. 1.1 Related Works We summarize related works as follows. Further related works is discussed in Section A. Of\ufb02ine RL under partial coverage. There is a growing number of results under partial coverage following the principle of pessimism in of\ufb02ine RL (Yu et al., 2020; Kidambi et al., 2020). In comparison to works that focus on tabular (Rashidinejad et al., 2021; Li et al., 4 \f2022; Shi et al., 2022b; Yin and Wang, 2021) or linear models (Jin et al., 2020; Chang et al., 2021; Zhang et al., 2022; Nguyen-Tang et al., 2022; Bai et al., 2022), our emphasis is on general function approximation (Jiang and Huang, 2020; Uehara and Sun, 2021; Xie et al., 2021; Zhan et al., 2022; Zhu et al., 2023; Rashidinejad et al., 2022; Zanette and Wainwright, 2022; Ozdaglar et al., 2023). Among them, we speci\ufb01cally focus on model-free methods. The representative work is summarized in Table 1. Soft (entropy-regularized) Q-functions. Soft Q-functions are utilized in various contexts in RL (Geist et al., 2019; Neu et al., 2017). They have been shown to improve performance in online RL settings, as demonstrated in Soft Q-Learning (Fox et al., 2015; Schulman et al., 2017) and Soft Actor Critic (Haarnoja et al., 2018). In the \ufb01eld of imitation learning, they play a crucial role in Maximum Entropy IRL (Ziebart et al., 2008, 2010). Furthermore, within the realm of of\ufb02ine RL, these soft Q-functions are utilized to make the learned policy and behavior policy suf\ufb01ciently similar (Wu et al., 2019; Fakoor et al., 2021). However, to the best of the authors\u2019 knowledge, none of these proposals in the context of of\ufb02ine RL have provided sample complexity results under partial coverage. Lagrangian view of of\ufb02ine RL. In the realm of of\ufb02ine policy evaluation (OPE), Nachum and Dai (2020); Yang et al. (2020); Huang and Jiang (2022) have formulated the problem as a constrained linear optimization problem. Notably, within the context of policy optimization, Zhan et al. (2022) have proposed estimators for regularized density ratios with L2-convergence guarantees, which is a crucial step in obtaining a near-optimal policy. Our work is similarly motivated, but with a key distinction: our target functions are the soft Q-function and Q-function, rather than the regularized density ratio, which presents additional analytical challenges due to the nonlinear constraint. 2 Preliminaries We consider an in\ufb01nite-horizon discounted MDP M = \u27e8S, A, P, r, \u03b3, \u00b50\u27e9where S is the state space, A is the \ufb01nite action space, \u03b3 \u2208[0, 1) is the discount factor, reward r is a random variable following Pr(\u00b7 | s, a) on [Rmin, Rmax] (Rmin \u22650), \u00b50 is the initial distribution. A policy \u03c0 : S \u2192\u2206(A) is a map from the state to the distribution over actions. We denote the discounted state-action occupancy distribution under a policy \u03c0 starting from an initial distribution \u00b50 by d\u03c0,\u00b50(s, a). With slight abuse of notation, we denote d\u03c0,\u00b50(s) = P a d\u03c0,\u00b50(s, a). We de\ufb01ne the value under \u03c0 as J(\u03c0) := E\u03c0[P\u221e t=0 \u03b3t\u02dc r(st, at)] where the expectation is taken under \u03c0. 
We denote the optimal policy arg max\u03c0 J(\u03c0) by \u03c0\u22c6, and its Q-function E\u03c0\u22c6[P t \u03b3t\u02dc r(st, at) | s0 = s, a0 = a] by q\u22c6(s, a). In of\ufb02ine RL, using of\ufb02ine data D = {(si, ai, ri, s\u2032 i) : i = 1, . . . , n}, we search for the policy \u03c0\u22c6that maximizes the policy value. We suppose each (si, ai, ri, s\u2032 i) is sampled i.i.d. from si \u223cPb, ai \u223c\u03c0b(\u00b7 | s), ri \u223cPr(\u00b7 | si, ai), s\u2032 i \u223cP(\u00b7 | si, ai). We denote the sample average of f by En[f(s, a, r, s\u2032)] = 1 n Pn i=1 f(si, ai, ri, s\u2032 i), and the expectation of f with 5 \frespect to the of\ufb02ine data distribution by E[f(s, a, r, s\u2032)] (without any scripts). The policy \u03c0b used to collect data is typically referred to as a behavior policy. With slight abuse of notation, we denote Pb(s, a) = Pb(s)\u03c0b(a | s). Notation. We denote the support of Pb(\u00b7) by (S \u00d7 A)b, and the L\u221e-norm on (S \u00d7 A)b by \u2225\u00b7 \u2225\u221e,b. The L\u221e-norm on (S \u00d7 A) is denoted by \u2225\u00b7 \u2225\u221e. We de\ufb01ne w\u03c0(s, a) = d\u03c0,\u00b50(s, a)/Pb(s, a) (if it exists). We de\ufb01ne sofmax(h) = exp(h(s,a)) P a exp(h(s,a)) and \u2225h\u22252 = E(s,a)\u223cPb[h2(s, a)]1/2 for h : S \u00d7 A \u2192R. We denote universal constants by c1, c2, . . . . We use the convention a/0 = \u221ewhen a \u0338= 0 and 0/0 = 0. 3 Algorithms In this section, we present two algorithms. The \ufb01rst algorithm aims to estimate the soft optimal policy by \ufb01rst estimating a soft Q-function. The second algorithm estimates the optimal policy after estimating the Q-function. 3.1 Minimax Soft-Q-learning with Penalization Our ultimate aim is to mimic the optimal policy \u03c0\u22c6. As a \ufb01rst step, we begin by \ufb01nding a policy that maximizes the following regularized objective: arg max\u03c0 J\u03b1(\u03c0) where for \u03b1 > 0 we de\ufb01ne J\u03b1(\u03c0) = (1 \u2212\u03b3)\u22121E(s,a)\u223cd\u03c0,\u00b50,r\u223cPr(\u00b7|s,a)[r \u2212\u03b1 log{\u03c0(a | s)/\u03c0b(a | s)}] | {z } KL penalty (between \u03c0 and \u03c0b) This objective function is used in a variety of contexts in RL as mentioned in Section 1.1. The optimal policy that maximizes J\u03b1(\u03c0) with respect to \u03c0 is \u03c0\u22c6 \u03b1 = softmax(q\u22c6 \u03b1/\u03b1 + log \u03c0b), (1) where q\u22c6 \u03b1 : S \u00d7 A \u2192R is the soft Q-function uniquely characterized by the soft Bellman equation: \u2200(s, a); Es\u2032\u223cP (\u00b7|s,a)[\u03b3\u2126\u03b1,\u03c0b(q\u22c6 \u03b1)(s\u2032) + r \u2212q\u22c6 \u03b1(s, a) | s, a] = 0, where \u2126\u03b1,\u03c0b : [S\u00d7A \u2192R] \u2192[S \u2192R] has \u2126\u03b1,\u03c0b(q)(s) = \u03b1 log P a{exp(q(s, a\u2032)/\u03b1)\u03c0b(a\u2032 | s)}. As opposed to the standard objective function with \u03b1 = 0, the KL penalty term serves as a regularization term that renders \u03c0\u22c6 \u03b1 suf\ufb01ciently proximate to \u03c0b. As \u03b1 approaches \u221e, the optimal policy \u03c0\u22c6 \u03b1 approaches \u03c0b. On the other hand, when \u03b1 = 0, \u03c0\u22c6 \u03b1 is \u03c0\u22c6. Thus, in order to compete with \u03c0\u22c6, it is necessary to keep \u03b1 suf\ufb01ciently small. We elaborate on this selection procedure in Section 5. The natural method for of\ufb02ine RL using this formulation involves learning q\u22c6 \u03b1 from the of\ufb02ine data and plugging it into (1). The question that remains is how to accurately learn q\u22c6 \u03b1 from the of\ufb02ine data. 
We consider the following optimization problem: arg minq\u2208Q\u2032 0.5E(s,a)\u223cPb[q2(s, a)] (2) 6 \fAlgorithm 1 MSQP (Minimax Soft-Q-learning with Penalization) 1: Require: Parameter \u03b1 \u2208R+, Models Q, L \u2282[S \u00d7 A \u2192R+]. 2: Estimate q\u22c6 \u03b1 as follows: \u02c6 q\u03b1 \u2208arg min q\u2208Q max l\u2208L En[q2(s, a)/2 + {\u03b3\u2126\u03b1,\u03c0b(q)(s\u2032) + r \u2212q(s, a)}l(s, a)]. (5) 3: Estimate the soft optimal policy: \u02c6 \u03c0\u03b1 = softmax(\u02c6 q\u03b1/\u03b1 + log \u03c0b). where Q\u2032 consists of all functions q : S \u00d7 A \u2192R satisfying Es\u2032\u223cP (\u00b7|s,a)[\u03b3\u2126\u03b1,\u03c0b(q)(s\u2032) + r \u2212q(s, a) | s, a] = 0 \u2200(s, a) \u2208(S \u00d7 A)b. (3) Here, because of the constraint (3), the solution is q\u22c6 \u03b1. Furthermore, we use q2(s, a) in (2) because this choice relaxes the equality in (3) to an inequality \u22640 as we will demonstrate in Section B. Consequently, the entire optimization problem outlined in (2) and (3) transforms into a convex optimization problem. Then, using the method of Lagrange multipliers, (2) is transformed into minq maxl L\u03b1(q, l), L\u03b1(q, l) := E [q2(s, a)/2 + {\u03b3\u2126\u03b1,\u03c0b(q)(s\u2032) + r \u2212q(s, a)}l(s, a)] . (4) Being motivated by the above formulation, our MSQP algorithm, speci\ufb01ed in Algorithm 1, approximates this formulation by replacing expectations with sample averages and restricting optimization to function classes with bounded complexity. Remark 1 (Computation). Although minimax optimization is generally dif\ufb01cult to solve, it is computationally feasible when we choose RKHS or linear function classes for L. In this case, we can solve the inner maximization problem analytically in closed form, as the objective function is linear in l. As a result, the minimax optimization problem reduces to empirical risk minimization. 3.2 Minimax Q\u22c6-learning with Penalization Next, we examine a policy learning algorithm utilizing Q\u22c6-functions. To learn Q\u22c6, our objective function is derived from the constrained optimization problem: arg minq\u2208Q\u22c6\u2032 0.5E(s,a)\u223cPb[q2(s, a)] (6) where Q\u22c6\u2032 consists of all functions q : S \u00d7 A \u2192R satisfying \u2200(s, a) \u2208(S \u00d7 A)b; Es\u2032\u223cP (\u00b7|s,a)[\u03b3 maxa\u2032\u2208A q(s\u2032, a\u2032) + r \u2212q(s, a) | s, a] = 0. Next, again using the method of Lagrange multipliers, (6) is transformed into minq maxl L0(q, l), L0(q, l) := E[q2(s, a)/2 + {\u03b3 maxa\u2032 q(s\u2032, a\u2032) + r \u2212q(s, a)}l(s, a)]. (7) 7 \fAlgorithm 2 MQP (Minimax Q\u22c6-learning with Penalization) 1: Require: Models Q, L \u2282[S \u00d7 A \u2192R+]. 2: Estimate q\u22c6as follows: \u02c6 q0 \u2208arg min q\u2208Q max l\u2208L En[q2(s, a)/2 + {\u03b3 max a\u2032 q(s\u2032, a\u2032) + r \u2212q(s, a)}l(s, a)]. (8) 3: Estimate the optimal policy: \u02c6 \u03c00(a | s) = arg maxa:\u03c0b(a|s)>0 \u02c6 q0(s, a). Note L0 is the limit of L\u03b1 as \u03b1 \u21920. Our MQP algorithm, speci\ufb01ed in Algorithm 2, similarly approximates this formulation by replacing expectations with sample averages and restricting optimization to function classes with bounded complexity. Our \ufb01nal policy is greedy with respect to the learned Qfunction but restricting to the support of the of\ufb02ine data in order to avoid exploiting regions not covered by the of\ufb02ine data. Remark 2 (Prominent differences). 
There exist several other minimax estimators for Q\u22c6 including BRM (Antos et al., 2008) and MABO (Xie and Jiang, 2020). Although these ensure convergence guarantees in terms of Bellman residual errors, they do not ensure the guarantee in terms of L2-errors, which is our focus. Our minimax objective function differs signi\ufb01cantly from that of the aforementioned approaches, and its unique design plays a pivotal role in enabling L2-rates. 4 L2-convergence Rates for Soft Q-functions and Q\u22c6-functions To analyze our Q-estimators we \ufb01rst establish conditions that ensure q\u22c6 \u03b1 = arg minq\u2208Q maxl\u2208L L\u03b1(q, l) on the support (S \u00d7 A)b. Building on this, we prove L2-convergence rates for \u02c6 q\u03b1 and \u02c6 q0. These L2-convergence guarantees are subsequently translated into performance guarantees of the policies we output in Section 5. 4.1 Identi\ufb01cation of Soft Q-functions Consider an L2-space H where the inner product is de\ufb01ne as \u27e8h1, h2\u27e9= E(s,a)\u223cPb[h1(s, a)h2(s, a)]. Then we de\ufb01ne two operators and a key function:2 P \u22c6 \u03b1 : H \u220bf 7\u2192Es\u2032\u223cP (s,a),a\u2032\u223c\u03c0\u22c6 \u03b1[f(s\u2032, a\u2032) | (s, a) = \u00b7] \u2208H, {P \u22c6 \u03b1}\u22a4: H \u220bf 7\u2192 R P(\u00b7 | s, a)\u03c0\u22c6 \u03b1(\u00b7 | \u00b7)f(s, a)d(s, a) \u2208H, l\u22c6 \u03b1(s, a) := ((I\u2212\u03b3{P \u22c6 \u03b1}\u22a4)\u22121(Pb(s,a)q\u22c6 \u03b1(s,a)) Pb(s,a) (s, a) \u2208(S \u00d7 A)b, 0 (s, a) \u0338= (S \u00d7 A)b. 2We use the notation \u00b7\u22a4because P \u22c6 \u03b1 \u22a4is interpreted as the adjoint operator in the non-weighted L2-space. 8 \fThese satisfy a key adjoint property, which we leverage to show (q\u22c6 \u03b1, l\u22c6 \u03b1) is a saddle point of L\u03b1(q, l). Lemma 1. \u2200q \u2208H, we have \u27e8l\u22c6 \u03b1, (I \u2212\u03b3P \u22c6 \u03b1)q\u27e9H = \u27e8q\u22c6 \u03b1, q\u27e9H. Our \ufb01rst assumption ensures that l\u22c6 \u03b1 exists. Assumption 1. Suppose \u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221e< \u221e. Note the in\ufb01nity norm \u2225\u00b7 \u2225\u221eis over S \u00d7 A. Proposition 1. Under Assumption 1, we have \u2225l\u22c6 \u03b1\u2225\u221e< \u221e. Proposition 1 is immediate noting that (I\u2212\u03b3{P \u22c6 \u03b1}\u22a4)\u22121(Pb(\u00b7)q\u22c6 \u03b1(\u00b7)) = P\u221e t=0 \u03b3t({P \u22c6 \u03b1}\u22a4)t(Pbq\u22c6 \u03b1) and recalling the discounted occupancy measure under \u03c0\u22c6 \u03b1 with initial distribution \u00b50 is written as d\u03c0\u22c6 \u03b1,\u00b50 = (1\u2212\u03b3)(I \u2212\u03b3{P \u22c6 \u03b1}\u22a4)\u22121(\u00b50). Hence, \u2225l\u22c6 \u03b1\u2225\u221e\u2264(1\u2212\u03b3)\u22121Rmax\u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221e. Note that \u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221ecrucially differs with the standard density-ratio-based concentrability coef\ufb01cient \u2225d\u03c0\u2217 \u03b1,\u00b50/Pb\u2225\u221ein of\ufb02ine RL. Unlike \u2225d\u03c0\u2217 \u03b1,Pb/Pb\u2225\u221e, the value of \u2225d\u03c0\u2217 \u03b1,\u00b50/Pb\u2225\u221e can be in\ufb01nite when the initial distribution \u00b50 is not covered by of\ufb02ine data Pb as the practical motivating example is explained in the footnote in Section 1 and Example 2. Our next assumption ensures q\u22c6 \u03b1 \u22650, which also guarantees that l\u22c6 \u03b1 \u22650. Assumption 2. Suppose \u03b1 log \u2225\u03c0\u2217 \u03b1/\u03c0b\u2225\u221e\u2264Rmin. 
Assumption 2 can be satis\ufb01ed by rescaling reward (i.e., rescaling Rmin) as long as \u2225\u03c0\u2217 \u03b1/\u03c0b\u2225\u221eis \ufb01nite. Hence, it is very mild. Putting Lemma 1 together with our assumptions we have the following. Lemma 2. Suppose Assumptions 1 and 2 hold. Then, (q\u22c6 \u03b1, l\u22c6 \u03b1) is a saddle point of L\u03b1(q, l) over q \u2208H, l \u2208H, i.e., L\u03b1(q, l\u22c6 \u03b1) \u2265L\u03b1(q\u22c6 \u03b1, l\u22c6 \u03b1) \u2265L\u03b1(q\u22c6 \u03b1, l) \u2200q \u2208H, \u2200l \u2208H. Recall that a point (\u02dc q, \u02dc l) is a saddle point if and only if the strong duality holds, and \u02dc q \u2208arg minq\u2208H supl\u2208H L\u03b1(q, l), \u02dc l \u2208arg maxl\u2208H infq\u2208H L\u03b1(q, l) using the general characterization (Bertsekas, 2009). Hence, Lemma 2 ensures q\u22c6 \u03b1 \u2208arg minq\u2208H maxl\u2208H L\u03b1(q, l). Next, we consider the constrained optimization problem when we use function classes Q \u2282H, L \u2282H. As long as the saddle point is included in (Q, L), we can prove that q\u22c6 \u03b1 is a unique minimaxer. Lemma 3. Suppose Assumptions 1 and 2 hold, q\u22c6 \u03b1 \u2208Q, and l\u22c6 \u03b1 \u2208L. Then, we have that q\u22c6 \u03b1 = arg minq\u2208Q supl\u2208L L\u03b1(q, l) on the support (S \u00d7 A)b. This establishes that realizability (q\u22c6 \u03b1 \u2208Q, l\u22c6 \u03b1 \u2208L) is suf\ufb01cient to identify q\u22c6 \u03b1 on the of\ufb02ine data distribution. At a high level, q\u22c6 \u03b1 \u2208arg minq\u2208Q supl\u2208L L\u03b1(q, l) is established through the invariance of saddle points, i.e., saddle points over original sets remain saddle points over restricted sets. Its uniqueness is veri\ufb01ed by the strong convexity in q of L\u03b1(q, l) induced by E(s,a)\u223cPb[q2(s, a)]. 9 \f4.2 L2-convergence Rate for Soft Q-estimators Based on the population-level results in Section 4.1, we give a \ufb01nite-sample error analysis of \u02c6 q\u03b1 Assumption 3 (Realizability of soft Q-function). Suppose q\u22c6 \u03b1 \u2208Q and \u2225q\u2225\u221e\u2264BQ \u2200q \u2208 Q. Assumption 4 (Realizability of Lagrange multiplier). Suppose l\u22c6 \u03b1 \u2208L and \u2225l\u2225\u221e\u2264BL \u2200l \u2208 L. It is natural to set BQ = (1 \u2212\u03b3)\u22121Rmax and BL = (1 \u2212\u03b3)\u22121Rmax\u2225d\u03c0\u2217 \u03b1,Pb/Pb\u2225\u221e, but letting these be arbitrary offers further \ufb02exibility to our results. Theorem 1 (L2-convergence of soft Q-estimators). Suppose Assumptions 1, 2, 3, and 4 hold. Then, with probability 1 \u2212\u03b4, the L2-error \u2225\u02c6 q\u03b1 \u2212q\u22c6 \u03b1\u22252 is upper-bounded by c \u0000B2 Q + BQBL{\u03b1 + ln(|A|)} \u00011/2 (ln(|Q||L|/\u03b4)/n)1/4 . Our result is signi\ufb01cant as it relies on realizability-type conditions rather than Bellman closedness. Since the majority of existing works focus on non-regularized Q-functions, we postpone the comparison to these existing works to the next section. Note when Q and L are in\ufb01nite, we can easily replace |Q|, |L| with their L\u221e-covering numbers following Uehara et al. (2021). Details are given in the appendix. 4.3 L2-convergence Rate for Q\u22c6-functions Next, we give analogous \ufb01nite-sample error analysis of \u02c6 q0 leveraging the same reasoning. Assumption 5 (Realizability of Q\u22c6-functions). Suppose q\u22c6\u2208Q and \u2225q\u2225\u221e\u2264BQ \u2200q \u2208Q. 
Next, we de\ufb01ne the Lagrange multiplier: {P \u22c6}\u22a4: H \u220bf 7\u2192 Z P(\u00b7 | s, a)\u03c0\u22c6(\u00b7 | \u00b7)f(s, a)d\u00b5(s, a) \u2208H, l\u22c6:= {(I \u2212\u03b3{P \u22c6}\u22a4)\u22121(q\u22c6P\u03c0b)}/P\u03c0b. While l\u22c6involves the density ratio, this is always well-de\ufb01ned as long as \u2225d\u03c0\u22c6,Pb/Pb\u2225\u221e< \u221e. Then, it can be similarly established that (q\u22c6, l\u22c6) is a saddle point of L0(q, l) over q \u2208H, l \u2208H as we show in Lemma 2. We lastly require its realizability. Assumption 6 (Realizability of Lagrange multiplier). Suppose \u2225d\u03c0\u22c6,Pb/Pb\u2225\u221e< \u221eand l\u22c6\u2208L. Further suppose \u2225l\u2225\u221e\u2264BL \u2200l \u2208L. Theorem 2 (L2-convergence of Q-estimators). Suppose Assumptions 5 and 6 hold. Then, with probability 1 \u2212\u03b4, the L2-error \u2225\u02c6 q0 \u2212q\u22c6\u22252 is upper-bounded by c \u0000B2 Q + BLBQ \u00011/2 (ln(|Q||L|/\u03b4)/n)1/4. 10 \fTo the best of our knowledge, this is the \ufb01rst guarantee on L2 errors for learning q\u2217using general function approximation without relying on Bellman completeness. This is highly nontrivial, and we have carefully crafted our algorithm to obtain this guarantee. Existing results are often speci\ufb01c to particular models, such as linear models (Shi et al., 2022a), or they require Bellman completeness (Chen and Jiang, 2019; Chen and Qi, 2022), or they are limited to of\ufb02ine policy evaluation scenarios (Huang and Jiang, 2022) (i.e., cases involving linear Bellman operators, but nonlinear Bellman operators). Actually, it seems that even under the assumption of Bellman completeness, obtaining an L2 guarantee without strong coverage assumptions remains unclear. A detailed comparison among these different approaches is presented in Section A. 5 Finite Sample Guarantee of MSQP In this section, we present our primary sample complexity guarantee for our MSQP algorithm under the assumptions of realizability of q\u22c6 \u03b1 and l\u22c6 \u03b1 and partial coverage. We \ufb01rst show the learned policy \u02c6 \u03c0\u03b1 can compete with \u03c0\u22c6 \u03b1. Finally we show \u02c6 \u03c0\u03b1 can compete with \u03c0\u22c6 by selecting \u03b1 properly. We \ufb01rst introduce the \ufb02attened behavior policy \u03c0\u22c4 b, which is uniform on the support of \u03c0b. We use it as a technical device to de\ufb01ne a model-free concentrability coef\ufb01cient following Xie et al. (2021). De\ufb01nition 1 (Model-free concentrability coef\ufb01cient). De\ufb01ne CQ,d\u03c0\u22c6 \u03b1,\u00b50 := sup q\u2208Q Es\u223cd\u03c0\u22c6 \u03b1,\u00b50,a\u223c\u03c0\u22c4 b (a|s)[\u2225q(s, a) \u2212q\u22c6 \u03b1(s, a)\u22252 2] E(s,a)\u223cPb[\u2225q(s, a) \u2212q\u22c6 \u03b1(s, a)\u22252 2] where \u03c0\u22c4 b(\u00b7 | s) = ( 0 \u03c0b(\u00b7 | s) = 0 1/|{a \u2208A | \u03c0b(a | s) > 0}| \u03c0b(\u00b7 | s) > 0 is the \ufb02attened behavior policy. Clearly, CQ,d\u03c0\u22c6 \u03b1,\u00b50 is smaller than density-ratio-based concentrability coef\ufb01cient, in other words, CQ,d\u03c0\u22c6 \u03b1,\u00b50 \u2264max (s,a) d\u03c0\u22c6 \u03b1,\u00b50(s)\u03c0\u22c4 b(a | s) Pb(s)\u03c0b(a | s) . Here, we always have \u2225\u03c0\u22c4 b/\u03c0b\u2225< \u221eeven if \u03c0b(a | s) is 0 for some (s, a). In the special case where \u03c0b(a | s) \u22651/C\u2032 for any (s, a), we have CQ,d\u03c0\u22c6 \u03b1,\u00b50 \u2264C\u2032\u2225d\u03c0\u22c6 \u03b1,\u00b50/Pb\u2225\u221e. 
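As a concrete reading of Definition 1 below, the following sketch builds the flattened behavior policy and forms a plug-in estimate of the model-free concentrability coefficient for a small finite class Q. All distributions, the comparator occupancy, and the stand-in for q*_alpha are synthetic placeholders rather than quantities from the paper.

```python
import numpy as np

# Sketch of Definition 1: the flattened behavior policy (uniform over the
# support of pi_b) and a plug-in estimate of the model-free concentrability
# coefficient for a small finite function class Q. Inputs are synthetic.

rng = np.random.default_rng(1)
nS, nA = 5, 3

pi_b = rng.dirichlet(np.ones(nA), size=nS)
pi_b[0, 2] = 0.0                                      # make one action unsupported
pi_b /= pi_b.sum(axis=1, keepdims=True)

def flatten_policy(pi_b):
    """Uniform over the support of pi_b, zero elsewhere."""
    support = (pi_b > 0).astype(float)
    return support / support.sum(axis=1, keepdims=True)

pi_flat = flatten_policy(pi_b)

P_b_states = rng.dirichlet(np.ones(nS))               # offline state distribution
P_b = P_b_states[:, None] * pi_b                      # offline (s, a) distribution
d_star = rng.dirichlet(np.ones(nS))                   # comparator occupancy over states

q_star = rng.normal(size=(nS, nA))                    # stand-in for q*_alpha
Q_class = [q_star + rng.normal(scale=sc, size=(nS, nA)) for sc in (0.1, 0.5, 1.0)]

def concentrability(Q_class, q_star, d_star, pi_flat, P_b):
    best = 0.0
    for q in Q_class:
        err2 = (q - q_star) ** 2
        num = np.sum(d_star[:, None] * pi_flat * err2)   # E_{s~d*, a~pi_flat}[...]
        den = np.sum(P_b * err2)                         # E_{(s,a)~P_b}[...]
        best = max(best, num / den)
    return best

print("C_{Q,d*} estimate:", concentrability(Q_class, q_star, d_star, pi_flat, P_b))
```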
The coef\ufb01cient CQ,d\u03c0\u22c6 \u03b1,\u00b50 is is a re\ufb01ned concentrability coef\ufb01cient, which adapts to a function class Q. For example, in linear MDPs, it reduces to a relative condition number as follows. Similar properties are obtained in related works (Xie et al., 2021; Uehara and Sun, 2021). Example 1 (Linear MDPs). A linear MDP is one such that, for a known feature vector \u03c6 : S \u00d7 A \u2192Rd, the true density satis\ufb01es P(s\u2032 | s, a) = \u27e8\u00b5\u22c6(s\u2032), \u03c6(s, a)\u27e9for some \u00b5\u22c6: S \u2192Rd and the reward function satis\ufb01es E[r | s, a] = \u27e8\u03b8r, \u03c6(s, a)\u27e9for some \u03b8r \u2208Rd. 11 \fIn linear MDPs, q\u22c6 \u03b1 is clearly linear in \u03c6(s, a). Hence, the natural function class is Q = {\u27e8\u03b8, \u03c6(s, a)\u27e9| \u2225\u03b8\u2225\u2264B} for a certain B \u2208R+. Then, we have CQ,d\u03c0\u22c6 \u03b1,\u00b50 = sup x\u0338=0 x\u22a4Es\u223cd\u03c0\u22c6 \u03b1,\u00b50,a\u223c\u03c0\u22c4 b (a|s)[\u03c6(s, a)\u03c6(s, a)\u22a4]x x\u22a4E(s,a)\u223cPb[\u03c6(s, a)\u03c6(s, a)\u22a4]x . We are now prepared to present our main result, which states that given the realizability of the soft Q-function q\u22c6 \u03b1 and Lagrange multiplier l\u22c6 \u03b1, it is possible to compete with \u03c0\u22c6 \u03b1 under the coverage condition CQ,d\u03c0\u22c6 \u03b1,\u00b50 < \u221e, \u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221e< \u221e. Theorem 3 (\u02c6 \u03c0\u03b1 can compete with \u03c0\u22c6 \u03b1). Fix \u03b1 > 0. Suppose Assumptions 1, 2, 3, and 4 hold. With probability 1 \u2212\u03b4, the regret J(\u03c0\u22c6 \u03b1) \u2212J(\u02c6 \u03c0\u03b1) is upper-bounded by n\u22121/4Poly \u0012 |A|, BQ, BL, CQ,d\u03c0\u22c6 \u03b1,\u00b50, ln \u0012|Q||L| \u03b4 \u0013 , 1 1 \u2212\u03b3 , Rmax \u0013 The proof mainly consists of two steps: (1) obtaining L2-errors of \u02c6 q\u03b1 as previously demonstrated in Theorem 1, (2) translating this error into the error of \u02c6 \u03c0\u03b1. In the second step, the Lipshitz continuity of the softmax function plays a crucial role. If there is no regularization (\u03b1 = 0) and the greedy policy of q\u22c6 0 is utilized, the second step does not proceed (without any further additional assumptions). Our ultimate goal is to compete with \u03c0\u22c6. Theorem 3 serves as the primary foundation for this goal. The remaining task is to analyze the approximation error J(\u03c0\u22c6) \u2212J(\u03c0\u22c6 \u03b1). Fortunately, this term can be controlled through \u03b1 and the density ratio between \u03c0\u22c6and \u03c0b. Then, by properly controlling \u03b1, we can obtain the following sample complexity result. Theorem 4 (PAC guarantee of \u02c6 \u03c0\u03b1). Fix any \u01eb > 0. Suppose Assumptions 1, 2, 3, and 4 hold for \u03b1 = c/n1/8 and \u2225\u03c0\u22c6 0/\u03c0b\u2225\u221e\u2264C0, CQ,d\u03c0\u22c6 \u03b1,\u00b50 < \u221e. Then, if n is at least \u01eb\u22128Poly(|A|, BQ, BL, CQ,d\u03c0\u22c6 \u03b1,\u00b50, ln(|Q||L|/\u03b4), (1 \u2212\u03b3)\u22121, ln(C0), Rmax), with probability at least 1 \u2212\u03b4, we can ensure J(\u03c0\u22c6) \u2212J(\u02c6 \u03c0\u03b1) \u2264\u01eb. In summary, the realiazability of q\u22c6 \u03b1, l\u22c6 \u03b1, per-step coverage \u2225\u03c0\u22c6 0/\u03c0b\u2225\u221e< \u221eand partial coverage CQ,d\u03c0\u22c6 \u03b1,\u00b50 < \u221e, \u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221e< \u221eare suf\ufb01cient to compete with \u03c0\u22c6. This is a novel and attractive result. 
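For the linear-MDP case in Example 1 above, the relative condition number is the largest generalized eigenvalue of the pair of feature covariance matrices and can be computed directly; the features below are synthetic stand-ins for phi(s, a).

```python
import numpy as np
from scipy.linalg import eigh

# The relative condition number from Example 1 equals the largest generalized
# eigenvalue of (Sigma_target, Sigma_offline). Features are synthetic.

rng = np.random.default_rng(2)
d = 8
phi_target = rng.normal(size=(5000, d))                              # phi(s,a), s ~ d*, a ~ flattened pi_b
phi_offline = rng.normal(size=(5000, d)) @ np.diag(rng.uniform(0.5, 2.0, d))

Sigma_target = phi_target.T @ phi_target / len(phi_target)
Sigma_offline = phi_offline.T @ phi_offline / len(phi_offline)

# sup_x (x' Sigma_target x) / (x' Sigma_offline x) = lambda_max of the generalized problem
relative_condition_number = eigh(Sigma_target, Sigma_offline, eigvals_only=True)[-1]
print("relative condition number:", relative_condition_number)
```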
Firstly, if we solely use the na\u00a8 \u0131ve FQI or Bellman residual minimization, existing PAC results require the global coverage \u2225d\u03c0,\u00b50/Pb\u2225\u221e< \u221efor any possible policy \u03c0 (Munos and Szepesv\u00b4 ari, 2008; Antos et al., 2008). Our result only requires coverage under a single policy \u03c0\u22c6 \u03b1 (near-optimal policy). Secondly, we only require the realizability of two functions, and we do not necessitate realizability-type conditions for all policies in the policy class or Bellman completeness, unlike existing works with partial coverage (Xie et al., 2021; Jiang and Huang, 2020). The most similar result is Zhan et al. (2022). However, our guarantee possesses a certain advantage over their guarantee as follows. They demonstrate the realizability of certain 12 \ffunctions \u02dc w\u22c6 \u03b1, v\u22c6 \u03b1 and partial coverage \u2225d\u02dc \u03c0\u03b1,\u00b50/Pb\u2225< \u221eare suf\ufb01cient conditions in of\ufb02ine RL, where \u02dc w\u22c6 \u03b1 = d\u02dc \u03c0\u03b1,\u00b50/Pb (\u02dc \u03c0\u03b1 is a certain regularized optimal policy, but fundamentally distinct from \u03c0\u22c6 \u03b1) and v\u22c6 \u03b1 is a near-optimal regularized value function parameterized by \u03b1. Here, we have \u02dc w\u22c6 0 = w\u22c6, v\u22c6 0 = v\u22c6. Our guarantee has a similar \ufb02avor in the sense that it roughly illustrates realizability and partial coverage are suf\ufb01cient conditions. However, the meanings of realizability and partial coverage are signi\ufb01cantly different. In particular, by employing our algorithm, we can ensure PAC guarantees under the boundedness of the re\ufb01ned concentrability coef\ufb01cient CQ,d\u03c0\u22c6 \u03b1,\u00b50 < \u221e(and \u2225d\u03c0\u22c6 \u03b1,Pb/Pb\u2225\u221ethrough BL). As a result, the L\u221e-norm of the density-ratio-based concentrability coef\ufb01cient \u2225d\u03c0\u22c6 \u03b1,\u00b50/Pb\u2225\u221ecan even be in\ufb01nite. More speci\ufb01cally, we can permit situations where maxs \u00b50(s)/Pb(s) = \u221e as we will see the practical example soon. Conversely, Zhan et al. (2022) excludes this possibility since the algorithm explicitly estimates the density ratio \u02dc w\u22c6 \u03b1. Example 2 (Contextual bandit under external validity). We consider the contextual bandit setting where we want to optimize J(\u03c0) = Es\u223c\u00b50,a\u223c\u03c0(s),r\u223cPr(s,a)[r] using of\ufb02ine data s \u223c Pb, a \u223c\u03c0b(s), r \u223cP(s, a). This is the simplest RL setting with \u03b3 = 0. Here, note \u00b50 could be different from Pb. This case often happens in practice as discussed in the literature on causal inference related to external validity (Pearl and Bareinboim, 2014; Dahabreh et al., 2019; Uehara et al., 2020), which refers to the shift between the target population and the of\ufb02ine data. Here, our PAC guarantee does not require that \u00b50(s) is covered by Pb(s) in terms of the density ratio as long as the relative condition number is upper-bounded when we use linear models. On the other hand, Zhan et al. (2022) excludes this possibility. Despite the aforementioned advantage of our approach, unfortunately, our sample complexity of O(1/\u01eb8) is slower compared to that of O(1/\u01eb6) in Zhan et al. (2022). In the following, we demonstrate that MQP, which is a special version of MSQP when \u03b1 \u21920, can achieve a faster rate of O(1/\u01eb2). 
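A tiny numerical illustration of Example 2 above (hypothetical numbers, not from the paper): the target context distribution mu_0 places mass on a state that the offline distribution P_b never visits, so the density ratio is infinite, yet with linear features the relative condition number stays finite because the missing state's feature direction is covered offline.

```python
import numpy as np
from scipy.linalg import eigh

# Toy version of Example 2 (gamma = 0 contextual bandit): state 2 has zero
# offline probability, so mu_0/P_b is infinite, but its feature vector lies in
# the span covered by the offline data, so the relative condition number is finite.

phi = np.array([[1.0, 0.0],     # state 0
                [0.0, 1.0],     # state 1
                [1.0, 1.0]])    # state 2: never observed offline

P_b  = np.array([0.5, 0.5, 0.0])   # offline context distribution
mu_0 = np.array([0.2, 0.2, 0.6])   # deployment-time context distribution

ratio = np.divide(mu_0, P_b, out=np.full_like(mu_0, np.inf), where=P_b > 0)
print("max_s mu_0(s)/P_b(s) =", ratio.max())          # infinite

Sigma_b  = (P_b[:, None, None]  * np.einsum('si,sj->sij', phi, phi)).sum(0)
Sigma_mu = (mu_0[:, None, None] * np.einsum('si,sj->sij', phi, phi)).sum(0)
print("relative condition number =", eigh(Sigma_mu, Sigma_b, eigvals_only=True)[-1])  # finite
```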
6 Finite Sample Guarantee of MQP In this section, building upon the convergence result of \u02c6 q0, we demonstrate the \ufb01nite sample guarantee of our MQP algorithm under partial coverage. We \ufb01rst introduce the soft margin. Assumption 7 (Soft margin). For any a\u2032 \u2208A, there exists t0 \u2208R+, \u03b2 \u2208(0, \u221e] such that Ps\u223cd\u03c0\u22c6,\u00b50(0 < |q\u22c6(s, \u03c0\u22c6(s)) \u2212q\u22c6(s, a\u2032)| < t) \u2264(t/t0)\u03b2 for any t > 0. Here, we use the convention x\u221e= 0 if 0 < x < 1 and x\u221e= \u221eif x > 1. In the extreme case, if there exists a gap in q\u22c6(also known as a hard margin) so that the best action is always better than the second-best by some lower bounded amount, then the soft margin is satis\ufb01ed with \u03b2 = \u221e. Thus, the soft margin is more general than the gap condition used in Simchowitz and Jamieson (2019); Wu et al. (2022). Crucially, a gap 13 \fgenerally does not exist in continuous state spaces unless Q-functions are discontinuous or one action is always option, or a gap involves a large t0 constant in discrete state spaces with bad dependence on the number of states. In contrast, a soft margin with some \u03b2 > 0 generally holds (see, e.g., lemma 4 in Hu et al., 2021). The soft margin is widely used in the literature on classi\ufb01cation, decision making, and RL (Audibert and Tsybakov, 2007; Perchet and Rigollet, 2013; Luedtke and Chambaz, 2020; Hu et al., 2021, 2022). Theorem 5 (PAC guarantee of \u02c6 \u03c00). Suppose Assumptions 5, 6, and 7 hold and \u2225\u03c0\u22c6/\u03c0b\u2225\u221e\u2264 C0. Fix any \u01eb > 0. Then, if n is at least { |A| \u01eb } 4+2\u03b2 \u03b2 Poly \u0010 t\u22121 0 , |A|, BQ, BL, CQ,d\u03c0\u22c6,\u00b50, ln \u0010 |Q||L| \u03b4 \u0011 , (1 \u2212\u03b3)\u22121, ln(C0), Rmax \u0011 with probability at least 1 \u2212\u03b4, we can ensure J(\u03c0\u22c6) \u2212J(\u02c6 \u03c00) \u2264\u01eb. The proof mainly consists of two steps: (1) obtaining L2-errors of \u02c6 q0 as demonstrated in Theorem 2, (2) translating this error into the error of \u02c6 \u03c00. In the second step, the soft margin plays a crucial role. These theorems indicate that the realizability of the Q-function q\u22c6and Lagrange multiplier l\u22c6, and the soft margin are suf\ufb01cient for the PAC guarantee under partial coverage CQ,d\u03c0\u22c6,\u00b50 < \u221e, \u2225d\u03c0\u22c6,Pb/Pb\u2225\u221e< \u221e. Our algorithm is agnostic to \u03b2 and operates under any value of \u03b2. In particular, when there is a gap (\u03b2 = \u221e), we can achieve sample complexity of O(1/\u01eb2) 3. In comparison to Theorem 3, although we additionally use the soft margin, the realizability in Theorem 5 is more appealing since it is imposed on the standard Q-function q\u22c6. The closest guarantee to our work can be found in Chen and Jiang (2022), which demonstrates that the existence of the gap in q\u22c6, the realizability of q\u22c6, w\u22c6(:= d\u03c0\u22c6,\u00b50/P\u03c0b), and partial coverage \u2225w\u22c6\u2225\u221e< \u221eare suf\ufb01cient conditions. A similar comparison is made in Ozdaglar et al. (2023). In comparison to their work, we use the soft margin, which is signi\ufb01cantly less stringent. 7"
},
{
"url": "http://arxiv.org/abs/2305.16209v2",
"title": "C-MCTS: Safe Planning with Monte Carlo Tree Search",
"abstract": "The Constrained Markov Decision Process (CMDP) formulation allows to solve\nsafety-critical decision making tasks that are subject to constraints. While\nCMDPs have been extensively studied in the Reinforcement Learning literature,\nlittle attention has been given to sampling-based planning algorithms such as\nMCTS for solving them. Previous approaches perform conservatively with respect\nto costs as they avoid constraint violations by using Monte Carlo cost\nestimates that suffer from high variance. We propose Constrained MCTS (C-MCTS),\nwhich estimates cost using a safety critic that is trained with Temporal\nDifference learning in an offline phase prior to agent deployment. The critic\nlimits exploration by pruning unsafe trajectories within MCTS during\ndeployment. C-MCTS satisfies cost constraints but operates closer to the\nconstraint boundary, achieving higher rewards than previous work. As a nice\nbyproduct, the planner is more efficient w.r.t. planning steps. Most\nimportantly, under model mismatch between the planner and the real world,\nC-MCTS is less susceptible to cost violations than previous work.",
"authors": "Dinesh Parthasarathy, Georgios Kontes, Axel Plinge, Christopher Mutschler",
"published": "2023-05-25",
"updated": "2023-09-29",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "The Constrained Markov Decision Process (CMDP) formulation allows to solve\nsafety-critical decision making tasks that are subject to constraints. While\nCMDPs have been extensively studied in the Reinforcement Learning literature,\nlittle attention has been given to sampling-based planning algorithms such as\nMCTS for solving them. Previous approaches perform conservatively with respect\nto costs as they avoid constraint violations by using Monte Carlo cost\nestimates that suffer from high variance. We propose Constrained MCTS (C-MCTS),\nwhich estimates cost using a safety critic that is trained with Temporal\nDifference learning in an offline phase prior to agent deployment. The critic\nlimits exploration by pruning unsafe trajectories within MCTS during\ndeployment. C-MCTS satisfies cost constraints but operates closer to the\nconstraint boundary, achieving higher rewards than previous work. As a nice\nbyproduct, the planner is more efficient w.r.t. planning steps. Most\nimportantly, under model mismatch between the planner and the real world,\nC-MCTS is less susceptible to cost violations than previous work.",
"main_content": "INTRODUCTION Monte Carlo Tree Search (MCTS) is a decision-making algorithm that employs Monte Carlo methods across the decision space, evaluates their outcome with respect to a given reward/objective, and constructs a search tree focusing on the most promising sequences of decisions (Browne et al., 2012; \u00b4 Swiechowski et al., 2022). The success of MCTS can be attributed to the asymmetry of the trees constructed, which ensures better exploration of promising parts of the search space. Also, the possibility of using neural networks as heuristics to guide the search tree has helped tackle complex and high-dimensional problems with large state and action spaces (Schrittwieser et al., 2020a). In spite of its successful application in several diverse domains, the standard, single-objective MCTS algorithm is unsuitable for a large class of real-world problems that apart from optimizing an objective function, also require a set of constraints to be fulfilled. These types of problems are usually modeled as Constrained Markov Decision Processes (CMDPs) (Altman, 1999) and specialized algorithms are used to solve the underlying constrained optimization problem. Typical examples of such algorithms include approaches that rely on an expert knowledge base to create a safe action set (Hoel et al., 2020; Mohammadhasani et al., 2021; Mirchevska et al., 2018), Lagrangian relaxation methods that update primal and dual variables incrementally online and learn safe policies (Ding et al., 2020; Paternain et al., 2019), approaches that learn separate reward and cost/constraint signals to train a safe-aware policy both in Markov Decision Process (MDP) (Bharadhwaj et al., 2020; Srinivasan et al., 2020; Yang et al., 2022) and Robust Markov Decision Process (RMDP) environments (Tamar et al., 2014; Mankowitz et al., 2020), as well as methods that utilize uncertainty-aware estimators like Gaussian Processes to balance the exploration/exploitation risk (Wachi et al., 2018; Hewing et al., 2019). Finally, a notably different way is to model problems with constraints using temporal logic specifications (Demri & Gastin, 2012), and incorporating them as soft constraints to solve a CMDP (Guo & Zavlanos, 2018; Kalagarla et al., 2022). We propose a novel MCTS-based approach for solving Constrained Markov Decision Process (CMDP) problems called Constrained MCTS (C-MCTS),1see Fig. 1. We utilize a high-fidelity 1Source code available here: https://github.com/mutschcr/C-MCTS 1 arXiv:2305.16209v2 [cs.LG] 29 Sep 2023 \fSolve MDP: Collect training samples from a high-fidelity simulator. Evaluate Policy: Compute discounted costs of current episode. No, reset environment Generated 'n' transitions? No, collect more\u00a0 training samples Costs are optimal i.e.\u00a0 just within\u00a0the \u00a0constraints?. Training: (Re)-train safety critic. Solve CMDP: Update safety critic and deploy CMCTS on a high-fidelity simulator.\u00a0 No, collect more\u00a0 training samples Is it safe? End Training Update MDP solver: Use latest safety critic to guide action selection. Initialize: Set Lagrange multiplier to 0. Start Training Data Gathering Model Training Safety Evaluation Update Lagrange multiplier Yes, add collected samples\u00a0to the training set.\u00a0 Yes, deploy CMCTS to the target environment. Figure 1: Simplified flow of training phase in C-MCTS. simulator to collect different sets of trajectories under different safety constraint satisfaction levels. 
Utilizing a simulator has several benefits, since violating cost-constraints has no real-world implications. Also, we can construct scenarios with high safety implications that have rare occurrences in the real world. The samples collected are used to train a safety critic offline, which is used during deployment within MCTS to make cost predictions and avoid tree expansion to unsafe states. The proposed method manages to construct a deeper search tree with fewer planning iterations compared to the state of the art, while discovering solutions that operate safely closer to the cost-constraint, thus leading to higher rewards. The rest of the paper is structured as follows. Sec. 2 formalizes the problem and discusses previous work. Sec. 3 provides details on C-MCTS. Sec. 4 shows experimental results. Sec. 5 concludes. 2 CONSTRAINED MARKOV DECISION PROCESSES 2.1 METHODS WITH LAGRANGE MULTIPLIERS We start with the standard CMDP definition: a CMDP can be defined by the tuple \u27e8S, A, P, R, C, \u02c6 c, \u03b3, s0\u27e9where S is the set of states s, A is the set of actions a, P defines the probability of transitioning from s \u2208S to s\u2032 \u2208S for action a \u2208A executed at s, R is a reward function that returns a one-step reward for a given action a at a state s, C is the set of M cost functions, \u02c6 c \u2208[0, 1] is a set of thresholds for these costs functions, \u03b3 \u2208[0, 1) is the discount factor, and s0 \u2208S is the initial state distribution. The optimal policy \u03c0\u2217in such a framework is a policy that maximizes the expected discounted cumulative reward V \u03c0 R (s0), while satisfying all the constraints on the expected discounted cumulative cost V \u03c0 C (s0), as follows: max \u03c0\u2208\u03a0 V \u03c0 R (s0) = E\u03c0 \" \u221e X t=0 \u03b3tR(st, at)|s0 # s.t. V \u03c0 C (s0) = E\u03c0 \" \u221e X t=0 \u03b3tC(st, at)|s0 # \u2264\u02c6 c. (1) Note that depending on the context we will use the definitions of (1) or the notion of the state-action expected discounted cumulative reward/cost (also known as the state-action value function), defined (for cost) as follows: Q\u03c0 C(s, \u03c0(s)) = E\u03c0 \" \u221e X t=0 \u03b3tC(st, at)|s0 = s # \u225cV \u03c0 C (s|s0 = s). (2) Similar to assumptions of previous work (see e.g., Tessler et al. (2018) and the robust constraint objective (Eq. 2) from Mankowitz et al. (2020)), we prioritize the constraint satisfaction part of (1). Definition 1. (Tessler et al., 2018) A feasible solution of the constrained optimization problem defined in (1) is a solution that satisfies V \u03c0 C (s0) \u2264\u02c6 c. One approach to address the problem in (1) is using the Lagrange multiplier technique (see Bertsekas (2014)), which transforms the posed problem into an unconstrained one: min \u03bb\u22650 max \u03c0\u2208\u03a0 L(\u03bb, \u03c0) = min \u03bb\u22650 max \u03c0\u2208\u03a0 [V \u03c0 R (s0) \u2212\u03bb (V \u03c0 C (s0) \u2212\u02c6 c)] . (3) For solving the Lagrangian, we can define the following: Definition 2. (Tessler et al., 2018) The penalized reward function is defined as r(\u03bb, s, a) = r(s, a) \u2212 \u03bb c(s, a). The penalized expected discounted cumulative reward function is defined as V \u03c0 R (\u03bb, s) = V \u03c0 R (s) \u2212\u03bbV \u03c0 C (s). 2 \fMany approaches (e.g., Tessler et al. (2018); Ding et al. 
(2020)) parameterize the policy with parameters \u03b8 (e.g., a neural network policy) and directly apply consecutive steps of policy optimization (e.g., using actor-critic algorithms) and adaptations of the value of \u03bb, until the solution converges to a policy that respects the constraints. Others (e.g., Srinivasan et al. (2020); Yang et al. (2022)), learn a \u201csafety\u201d critic separately and try to maximize the expected reward without violating the constraints. 2.2 MONTE CARLO TREE SEARCH FOR CMDPS MCTS is a decision-making algorithm that can search large combinatorial spaces represented by trees. The search tree consists of nodes representing each state uniquely, and edges representing actions that connect these nodes. The algorithm is used iteratively to explore the state space and build statistical evidence about different decision paths. Based on the gathered statistics, an optimal set of actions is taken such that the expected cumulative reward is maximized. Each iteration of MCTS consists of four phases: (i) selection, (ii) expansion, (iii) simulation, and (iv) backpropagation. MCTS for discrete-action CMDPs has only been little explored. To our knowledge, apart from the seminal work of Lee et al. (2018), previous work extended MCTS to multi-objective variants (Hayes et al., 2023) that attempt to construct local (Chen & Liu, 2019) or global (Wang & Sebag, 2012) Pareto fronts and determine the Pareto-optimal solution. These approaches report good results at the expense of higher computational costs, due to the need to compute a set of Pareto-optimal solutions. Lee et al. (2018) proposed Cost-Constrained Monte Carlo Planning (CC-MCP), an MCTS algorithm to solve CMDPs. CC-MCP uses a Lagrange formulation and updates the Lagrange multiplier while constructing the search tree based on accumulated cost statistics. The CMDP problem is formulated as an Linear Program (LP), and then the dual formulation is solved: min \u03bb\u22650 \u0002 V \u2217 R(\u03bb, s) + \u03bbT \u02c6 c \u0003 (4) Here, V \u2217 R(\u03bb, s) is the optimal penalized expected discounted cumulative reward function, and \u02c6 c are the cost constraints. As the objective function in Eq. 4 is piecewise-linear and convex over \u03bb (Lee et al., 2018), \u03bb can be updated using the gradient information V \u2217 C \u2212\u02c6 c, where V \u2217 C are the costs incurred for an optimal policy with a fixed \u03bb. Hence, the CMDP can be solved by iterating the following three steps: (i) Solve MDP with a penalized reward function (see Definition 2), (ii) evaluate V \u2217 C for this policy, and (iii) update \u03bb using the gradient information. Steps (i) and (ii) can also be interleaved at a finer granularity, and this is the idea behind CC-MCP, where \u03bb is updated at every MCTS iteration based on the Monte Carlo cost estimate \u02c6 VC at the root node of the search tree. 3 METHOD CC-MCP has two major shortcomings: (1) it requires a large number of planning iterations for the tuning of the Lagrange multiplier (as it is tuned online and thus explores both unsafe and safe trajectories in the search tree), and (2) the performance is sub-optimal w.r.t costs, i.e., the agent acts conservatively. Moreover, the algorithm also relies on the planning model to calculate cost estimates, making it error-prone to use approximate planning models for fast planning at deployment. In our approach, the training phase (see Fig. 
1) consists of approximating a safety critic that is utilized by the MCTS policy during the deployment phase (without a Lagrange multiplier) for pruning unsafe trajectories/sub-trees. We define two simulators: a low-fidelity simulator, which is not accurate, but its low complexity allows for utilization in the online planning/rollout phase of the MCTS algorithm; and a high-fidelity one, utilized for data collection and evaluation of the safety critic training. 3.1 SAFETY CRITIC TRAINING Instead of tuning the Lagrange multiplier online, C-MCTS varies this parameter in a pre-training phase (e.g., in simulation) to obtain different sets of trajectories with different safety levels (\u201cdata gathering\u201d in Fig. 1). Here, following the standard training process in Lagrangian relaxation/augmentation settings (Bertsekas, 2014), training iterates between calculating a new value \u03bbk in each k-th iteration of the data gathering loop and solving the k-th MDP (using MCTS) with the penalized reward function r(s, a) \u2212\u03bbk c(s, a). The new value for \u03bbk in each iteration is calculated as \u03bbk = \u03bbk\u22121 + \u03b10 k \u0010 V k,\u2217 C \u2212\u02c6 c \u0011 . The data gathering loop is terminated when \u02c6 c \u2212\u03f5 \u2264V k,\u2217 C \u2264\u02c6 c. Here, \u03b10 and \u03f5 are tunable hyper-parameters. Proposition 1. This iterative optimization process converges to the optimal \u03bb\u2217. 3 \fProof. Previous work (Kocsis & Szepesv\u00e1ri, 2006; Silver et al., 2016) shows that the MCTS policy converges to the optimal policy as the number of simulations increases, meaning that in each iteration k we are guaranteed to find the optimal solution in the k-th MDP. Based on this, and on the fact that \u03bb is updated following the gradient direction of V k,\u2217 C \u2212\u02c6 c, convergence to the optimal \u03bb\u2217is achieved (Lee et al., 2018; Tessler et al., 2018; Mankowitz et al., 2020). 3.1.1 TRAINING DATA COLLECTION At this point, we could utilize the available trajectory data collected from applying the optimal policy on the MDP defined by each \u03bbk to train a safety critic and then deploy it in the environment. For the training, we use State\u2013action\u2013reward\u2013state\u2013action (SARSA)(0) algorithm (Sutton & Barto, 2018) (a Temporal Difference (TD) Learning-like method (Sutton, 1988)). Of course, there is always the risk that the resulting critic (thus also the MCTS policy that utilizes it) does not generalize well far from the collected training data (Ross et al., 2011). Ideally, the training data covers the entire state-action space, but with a higher focus on boundary regions where selecting a specific action has a high effect on expected future performance (Rexakis & Lagoudakis, 2012; Kumar et al., 2022). Definition 3. A state s is said to be cost-non-critical if \u2200a \u2208A, min a\u2032 Q\u03c0 c (s, a\u2032) \u2264Q\u03c0 c (s, a) \u2264\u02c6 c or \u02c6 c \u2264min a\u2032 Q\u03c0 c (s, a\u2032) \u2264Q\u03c0 c (s, a) (5) In other words, in cost-non-critical states, selecting any action under the applied policy \u03c0 does not lead (in expectation) to a change in the constraint/threshold violation (positive or negative).2 Even though having more training data from cost-critical states is desirable, these do not frequently occur in trajectories generated by any policy \u03c0 (see also the discussion in Kumar et al. (2022)). 
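The following is a minimal Python skeleton of the data-gathering loop just described (cf. Fig. 1). Here solve_mdp_with_mcts and evaluate_discounted_cost are hypothetical placeholders for the MCTS planner on the penalized MDP and for the cost evaluation on the high-fidelity simulator; the default step size and tolerance are illustrative values, not prescribed ones.

```python
from typing import Callable, List, Tuple

def gather_training_data(
    solve_mdp_with_mcts: Callable[[float], object],            # plans with reward r - lam * c
    evaluate_discounted_cost: Callable[[object], Tuple[float, List[tuple]]],
    c_hat: float,            # cost threshold \hat{c}
    alpha_0: float = 4.0,    # initial step size for the lambda update (placeholder default)
    eps: float = 0.1,        # termination tolerance below the threshold (placeholder default)
    max_iters: int = 50,
) -> List[tuple]:
    """Sketch of the data-gathering phase: iterate lambda, keep data from every iterate."""
    lam, dataset = 0.0, []
    for k in range(1, max_iters + 1):
        policy = solve_mdp_with_mcts(lam)                        # (i) solve penalized MDP
        v_cost, transitions = evaluate_discounted_cost(policy)   # (ii) evaluate V_C^{k,*}
        dataset.extend(transitions)                              # keep samples from every lambda_k
        if c_hat - eps <= v_cost <= c_hat:                       # just within the constraint
            break
        # (iii) gradient step lambda_k = lambda_{k-1} + alpha_0/k (V_C - c_hat),
        # projected to lambda >= 0 as usual for a dual variable.
        lam = max(0.0, lam + (alpha_0 / k) * (v_cost - c_hat))
    return dataset
```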
In our case though, as the value of \u03bb is iteratively adapted in the \u201cdata gathering\u201d phase shown in Fig. 1, state-action pairs around the constraint-switching boundary are collected. The use of all available data (generated by different policies \u03c0k as a result of all values of \u03bbk) for the safety critic training (\u201cmodel training\u201d phase in Fig. 1), thus ensures that a large collection of state-action pairs from both critical and non-critical states are available.3 This safety critic is in turn re-used in the MCTS planner of a new \u03bb-tuning cycle, until a robust safety critic leading to a feasible solution is produced (as evaluated in the last phase shown in Fig. 1). Assumption 1. SARSA(0) estimates with sufficient accuracy the true state-action value function for any given policy \u03c0. There have been various formal results on the convergence properties of SARSA-like algorithms both for tabular (Singh et al., 2000) and linear function (Zhang et al., 2023) representations, as well as successful implementations using neural networks as approximators (Elfwing et al., 2018). It is therefore safe to assume that given representative training samples, Assumption 1 holds. Proposition 2. Let B = {(s, a)|s \u2208S and a \u2208A} be the set of all state-action pairs for a given MDP. Let Bp \u2286B, the set of state-action pairs for which the trained safety critic would over-estimate the expected discounted cumulative cost and Bn \u2286B, the state-action pairs for which the trained safety critic would under-estimate it. Then, Bp \u222aBn = B and Bp \u2229Bn = \u2205. Proof. What Proposition 2 indicates is that the trained safety critic will either under-estimate or over-estimate the expected cost of every state-action pair defined in the underlying MDP of the highfidelity simulator. If this was not true, this would imply that the safety critic provides the perfect prediction at least for some state-action pairs. This is not only numerically infeasible, but it is also due to the utilization of the low-fidelity simulator in the MCTS planner, which potentially predicts sequences of safe or unsafe next states that are different compared to the actual ones. Corollary 1. The overall training process of the safety critic, illustrated in Fig. 1, converges to a feasible solution of the constrained optimization problem defined in (1). 2Note that a similar discussion, under the concept of \u03f5-reducible datasets (or parts of datasets), also exists in safe/constrained offline reinforcement learning approaches (Liu et al., 2023). 3With this data mixture we train the safety critic using (s, a) samples that have different cost-targets (due to different \u03bb\u2019s), some of them overor under-estimating the \u201ctrue\u201d cost. We could e.g. give higher weight to data from trajectories where the value of \u03bb was close to \u03bb\u2217, but we observed that using an ensemble of safety critics (see Sec. 3.1.2) combined with using the latest safety critic in each \u201cdata gathering\u201d outer loop, leads to \u201ccorrect\u201d cost data being predominant and thus to a robust final safety critic, possibly at the cost of collecting more data. 4 \fProof sketch. As discussed before, the inner training loop will always converge to the optimal solution for the k\u2212th MDP. In case the safety critic over-estimates the expected cost, it prunes the corresponding branch in the MCTS tree. This leads to a safe, but potentially conservative behavior. 
If the safety critic under-estimates the expected cost, the respective branch can be traversed and a non-safe trajectory is performed at the high-fidelity simulator. Since these data are used in subsequent safety critic training iterations, the new versions of the safety critic will no longer under-estimate the cost, progressively for all the (s, a) \u2208Bn pairs (as defined in Proposition 2) that are visited in the high-fidelity simulator and there will be no constraint violations, i.e., we will have a feasible solution. 3.1.2 ROBUSTNESS TO MODEL MISMATCH Even if the safety critic has been evaluated as safe during training, there is a chance that subpar performance is observed at deployment as environment dynamics are likely different between the training (source) and the deployment (target) domain. Since MCTS explores the state space exhaustively during the online planning phase, some state-action pairs encountered during planning are likely to be out-of-distribution, i.e., differ compared to the data/trajectories encountered during training. More formally, we have two main sources of inaccurate safety critic predictions: the aleatoric and the epistemic uncertainty. The former is inherent in the training data (e.g., due to the stochastic nature of the transition model) and the latter is due to the lack of training data (e.g., it could appear as extrapolation error) \u2013 see for example (Chua et al., 2018) for a more formal discussion. To mitigate the effect of both uncertainty sources, instead of training a single safety critic, we train an ensemble. The individual members of the ensemble have the form of neural networks and approximate the state-action cost function. We denote this ensemble safety critic as \u02c6 Q\u2217 sc(s, a). The trainable parameters of each member of the ensemble are optimized to minimize the mean-squared Bellman error which uses a low variance one-step TD-target. The aggregated ensemble output (\u02c6 \u00b5, \u02c6 \u03c3) provides a mean and a standard deviation computed from the individual member\u2019s outputs, which we then use within MCTS. Hence, the safety critic output with an ensemble standard deviation greater than a set threshold \u02c6 \u03c3 > \u03c3max can be used to identify and ignore those samples and predictions. 3.2 DEPLOYMENT The trained safety critic is used during the expansion phase in MCTS, see Alg. 1 \u2013 the other phases (selection, simulation, backpropagation) are identical to vanilla MCTS. At the expansion phase, we try to expand the search tree from the leaf node along different branches corresponding to different actions. First, based on the safety critic\u2019s output we filter out predictions that we cannot Algorithm 1: C-MCTS | Using a learned safety critic in MCTS. 1 Nroot : Root node representing the current state, s0. 2 Nleaf : Selected leaf node with state st. 3 P : Traversed path from the root node to the leaf node (s0, a0, s1, a1, ..., at\u22121, st). 4 repeat // SELECTION 5 P, Nleaf \u2190SELECT(Nroot) // selection using UCT algorithm. // EXPANSION 6 i. Get safety critic outputs (\u02c6 \u00b5, \u02c6 \u03c3) for all actions at \u2208A from Nleaf. 7 ii. Identify feasible actions i.e. Afeasible = {at : \u02c6 \u03c3at \u2264\u03c3max}. 8 iii. Calculate the cost estimate \u02c6 Q\u2217 sc(st, at) for actions at \u2208Afeasible. 9 iv. Define: Cpath = c(s0, a0) + \u03b3 \u00b7 c(s1, a1) + ... + \u03b3t\u22121 \u00b7 c(st\u22121, at\u22121) 10 v. Identify unsafe actions i.e. 
Aunsafe = {at \u2208Afeasible : Cpath + \u03b3t \u00b7 \u02c6 Q\u2217 sc(st, at) > \u02c6 c}. 11 vi. Expand tree for branches with safe actions, at \u2208A \\ Aunsafe. // SIMULATION 12 \u02c6 VR \u2190ROLLOUT(Nleaf) // Get Monte Carlo reward estimate. // BACKPROPAGATION 13 BACKUP( \u02c6 VR, P) // Update tree statistics. 14 until maximum number of planning iterations is reached 5 \ftrust (corresponding to high ensemble variance) and create a reduced action set (lines 6-7). The safety of each action from this set is evaluated based on the safety critic\u2019s output predicting expected cumulative costs from the leaf. This is summed up with the one-step costs stored in the tree from the root node to the leaf node. If this total cost estimate is greater than the cost constraints (\u02c6 c), then we prune the corresponding branches, while other branches are expanded (lines 8-11). These steps, when repeated over multiple planning iterations create a search tree exploring a safe search space. As exploration is mostly limited to a safe search space, C-MCTS manages to construct a deeper search tree with fewer planning iterations compared to CC-MCP. We have systematically observed in our results that C-MCTS operates safely closer to the cost-constraint and collects higher rewards with fewer planning iterations as compared to CC-MCP. 4 EVALUATION We evaluate our method by comparing its performance with our baseline CC-MCP (Lee et al., 2018) on Rocksample and Safe Gridworld environments (see Sec. A.1). We also present insights on the planning efficiency of the proposed algorithm, as well as its sensitivity to different design options, such as the length of planning iterations during training and the values of the ensemble threshold during deployment. We also provide insights on its robustness to model mismatch. A detailed description of the environments can be found in A.1. Following relevant recent discussions in the community (Togelius & Yannakakis, 2023), we opted for environments that are computationally manageable \u2013 especially since our method necessitates a computationally-expensive MCTS planning sub-routine at each time-step of the environment (both in training and deployment) \u2013 which in turn enables reproducibility (Henderson et al., 2018) and wider impact of our work. At the same time, the selected environments clearly \u201cemulate\u201d the complexity that stems from domains with large state/action spaces (Schrittwieser et al., 2020b; Afsar et al., 2022). We measure the performance of the agent on different sizes and complexities of Rocksample environments (Sec. A.1.1), with C-MCTS, CC-MCP, and vanilla MCTS (for penalized reward function with known \u03bb\u2217). C-MCTS obtains higher rewards than CC-MCP (see Fig. 2, top row). The reward for C-MCTS increases with the number of planning iterations. Also, the agent operates consistently below the cost-constraint (see Fig. 2, middle row), close to the safety boundary while CC-MCP acts conservatively w.r.t costs and performs sub-optimally w.r.t. rewards. Also, costs incurred in each episode vary greatly with different environment initializations. This is mitigated with C-MCTS since cost estimates with TD learning have a lower variance than Monte Carlo cost estimates. Hence, the total number of cost violations is lower for C-MCTS compared to the other methods, in spite of operating closest to the safety constraint (see Fig. 2, bottom row). Vanilla MCTS obtains higher rewards than CC-MCP. 
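To make the expansion phase concrete, the sketch below mirrors lines 6-11 of Algorithm 1 under assumed interfaces: the ensemble safety critic is a list of callables returning per-member estimates of Q_sc(s, a), and node and state handling are abstracted away. It is an illustrative reading of the pruning rule, not the released implementation.

```python
import numpy as np
from typing import Callable, List, Sequence

def safe_expansion_actions(
    critic_ensemble: Sequence[Callable[[object, int], float]],  # one Q_sc(s, a) per member
    leaf_state: object,
    actions: Sequence[int],
    path_costs: Sequence[float],   # one-step costs c(s_0,a_0), ..., c(s_{t-1},a_{t-1})
    gamma: float,
    c_hat: float,
    sigma_max: float,
) -> List[int]:
    """Return the actions whose branches are expanded from the leaf node."""
    t = len(path_costs)
    # Discounted cost accumulated from the root to the leaf (C_path in Alg. 1, line 9).
    c_path = sum((gamma ** i) * c for i, c in enumerate(path_costs))

    expanded = []
    for a in actions:
        preds = np.array([q(leaf_state, a) for q in critic_ensemble])
        mu, sigma = preds.mean(), preds.std()
        if sigma > sigma_max:
            # Members disagree (likely out-of-distribution): ignore the critic,
            # so the action is not marked unsafe and the branch is expanded.
            expanded.append(a)
        elif c_path + (gamma ** t) * mu <= c_hat:
            # Predicted total discounted cost stays within the constraint.
            expanded.append(a)
    return expanded
```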
Vanilla MCTS benefits from λ∗ being known in advance and, unlike CC-MCP, does not require tuning it online. MCTS operates close to the cost-constraint but has a high number of cost violations. C-MCTS, when compared to vanilla MCTS, is safer and obtains equally high rewards or in some cases even acts better (e.g., Rocksample(11, 11)). Planning efficiency. We compare the planning efficiency of the three methods for the same set of experiments addressed previously. The comparison is based on the depth of the search tree, given a specific computational budget (i.e., a fixed number of planning iterations). This comparison is qualitative and is used to evaluate the effectiveness of different planning algorithms. Fig. 3 shows that C-MCTS performs a narrower search for the same number of planning iterations. The peak tree depth, when averaged over 100 episodes, is the highest for C-MCTS. In C-MCTS the exploration space is restricted using the safety critic, and this helps in efficient planning. In Rocksample(11, 11) the peak tree depth of CC-MCP is high in spite of its suboptimal performance. This is probably because the Lagrange multiplier in CC-MCP gets stuck in a local maximum and is unable to find a globally optimal solution. 4.1 ROBUSTNESS TO MODEL MISMATCH MCTS planner model. Online planners often resort to approximate models of the real world for fast planning and to account for real-time decision requirements. Those model imperfections can lead to safety violations, while a high-fidelity model during deployment is infeasible due to computational constraints. We resolve this dilemma by learning safety constraints before deployment from a simulator that has a higher fidelity compared to the planning model. The benefit of such an approach is shown using a synthetically constructed Safe Gridworld scenario (see Sec. A.1.2). Figure 2: Comparing performance of C-MCTS, MCTS, and CC-MCP on different configurations of Rocksample environments evaluated on 100 episodes (average discounted cumulative reward, average discounted cumulative cost against the threshold ĉ = 1, and percentage of episodes with cost violations, each plotted over 2^8 to 2^12 planning iterations). Figure 3: Maximum depth of the search tree for C-MCTS, MCTS and CC-MCP on different Rocksample configurations averaged over 100 episodes. In this setup, we use a planning simulator that models the environment dynamics approximately, and a training simulator (for the safety critic) that captures the dynamics more accurately. In the planning simulator, all the transition dynamics are accurately modeled, except the blue squares with winds (Fig. 6 right). The transitions there are determined by the action selection alone, and the stochasticity due to wind effects is not considered. The training simulator models the transitions in these regions more accurately, but with some errors. The agent in the blue squares moves down with a probability of 0.25, as compared to the real-world configuration where we model the probability of this transition with 0.3.
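A toy sketch of the model mismatch just described (grid layout and action encoding are assumed purely for illustration): the low-fidelity planning model ignores the wind, the training simulator pushes the agent down with probability 0.25, and the "real world" does so with probability 0.3.

```python
import numpy as np

MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step_planner(pos, action, windy_cells):
    # Low-fidelity planning model: the wind is ignored entirely.
    dr, dc = MOVES[action]
    return (pos[0] + dr, pos[1] + dc)

def step_windy(pos, action, windy_cells, p_down, rng):
    # Stochastic model: in a windy cell the agent may be blown one cell down.
    if pos in windy_cells and rng.random() < p_down:
        return (pos[0] + 1, pos[1])
    dr, dc = MOVES[action]
    return (pos[0] + dr, pos[1] + dc)

rng = np.random.default_rng(0)
windy = {(3, 2), (3, 3)}          # hypothetical windy cells
pos = (3, 2)
print("planner:", step_planner(pos, "right", windy))
print("training sim (p=0.25):", step_windy(pos, "right", windy, 0.25, rng))
print("real world (p=0.30):", step_windy(pos, "right", windy, 0.30, rng))
```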
For comparison, C-MCTS was trained and evaluated for 2^9 planning iterations, while CC-MCP was evaluated for 2^20 planning iterations. The latter was set to a higher planning budget to allow CC-MCP to converge. Figure 4: State visitations aggregated over 100 episodes, with (a) C-MCTS incurring 0% cost violations and (b) CC-MCP incurring 11% cost violations. The length of the arrows is proportional to the number of action selections. Values of -1 and -2 denote unsafe cells and the goal cell, respectively. Fig. 4 shows the number of state visitations of C-MCTS (left) and CC-MCP (right). The agent using CC-MCP takes both of the possible paths (going to the top and going to the right), avoiding the unsafe region (in pink) to reach the goal state, which would be optimal in the absence of the windy squares, but here it leads to cost violations due to inaccurate cost estimates (the variance could also play a minor role, but we designed the setup to focus on the dynamics mismatch between the planner and the actual environment, which is much more prevalent here). C-MCTS on the other hand only traverses through the two right-most columns to avoid the unsafe region, since the safety critic, being trained on the high-fidelity simulator, identifies the path from the top as unsafe, which leads to zero cost violations. Accuracy of the training simulator. We study the performance of C-MCTS when trained on imperfect simulators. On the Rocksample environment, the sensor characteristics measuring the quality of the rock are defined by the constant d0 (see Sec. A.1.1). We overestimate the sensor accuracy in our training simulator by choosing a simulator value d0^sim with error Δd0 and observe the safety of the agent in the real world when trained on simulators with different values of Δd0. Fig. 5 (right column) shows the results. The values of Δd0 set to 10 and 40 correspond to a maximum prediction error of 11.7% and 32.5%, respectively. When Δd0 = 40 the agent operates at a greater distance from the cost-constraint. The reason for cost violations is that the safety critic has been trained to place too much trust in the sensor measurements due to the simulation-to-reality gap. With a smaller gap (Δd0 = 10) the agent performs more safely. 4.2 EFFECT OF HYPER-PARAMETER SELECTION ON SAFE BEHAVIOR We vary hyper-parameters in the training and deployment of the safety critic and identify key parameters that impact the safety of the agent. The experiments were conducted on a Rocksample(7, 8) environment, and the results were averaged over 100 runs. We observed that the performance of C-MCTS is sensitive to the length of the planning horizon during training and the ensemble threshold used during deployment. To study the effect of these parameters we optimized the other algorithmic parameters, i.e., α0 (initial step size to update λ) and ϵ (termination criterion for the training loop), with a grid search. For each of the experiments we then selected a reasonably performing configuration (i.e., Sec. 4.2: α0=4, ϵ=0.1; Sec. 4.2: α0=8, ϵ=0.3; Sec. 4.1: α0=1, ϵ=0.1) and ablated the respective hyperparameters, i.e., planning horizon in training, σ, and d0. Length of planning horizon during training. We conduct experiments with different sets of hyperparameters to study the effect of using different planning iterations during the safety critic training, and evaluate the performance of the agent during deployment. From Fig. 5 (left column) we observe that the safety critic trained with a longer planning horizon operates closer to the safety boundary. This is because the trained safety critic learns to predict costs for a near-optimal policy and hence discerns the safety boundary more accurately. On the other hand, the safety critic trained with a smaller planning horizon learns cost estimates from a sub-optimal policy, leading to cost violations during deployment. Figure 5: Comparing safety for different training/deployment strategies, i.e., using different planning horizons during training (left, 128 vs. 1024 planning iterations), deploying with different ensemble thresholds (middle, σmax = 0.1 vs. σmax = 0.5), and collecting training samples from simulators of different accuracies (right, Δd0 = 40 vs. Δd0 = 10); each column reports the average discounted cumulative cost and the percentage of episodes with cost violations over 2^8 to 2^12 planning iterations. Ensemble threshold during deployment. We study the effect of using different standard deviation thresholds (σmax = 0.1 and σmax = 0.5) in the neural network ensemble during deployment. Fig. 5 (middle column) shows that the cost incurred exceeds the cost-constraint if σmax = 0.1, but the agent performs safely within the cost-constraint with far fewer cost violations if σmax = 0.5. We prune unsafe branches during planning only when the predictions of the individual members of the ensemble align with each other. Setting σmax = 0.1 is a tight bound, resulting in most of the predictions of the safety critic being ignored. Using a higher threshold with σmax = 0.5 ensures that only large mismatches between the predictions of the individual members (corresponding to out-of-distribution inputs) are ignored, and the rest are used during planning. This results in the agent performing safely within the cost-constraint, but not too conservatively."
}
]
}