| { |
| "url": "http://arxiv.org/abs/2404.16767v1", |
| "title": "REBEL: Reinforcement Learning via Regressing Relative Rewards", |
| "abstract": "While originally developed for continuous control problems, Proximal Policy\nOptimization (PPO) has emerged as the work-horse of a variety of reinforcement\nlearning (RL) applications including the fine-tuning of generative models.\nUnfortunately, PPO requires multiple heuristics to enable stable convergence\n(e.g. value networks, clipping) and is notorious for its sensitivity to the\nprecise implementation of these components. In response, we take a step back\nand ask what a minimalist RL algorithm for the era of generative models would\nlook like. We propose REBEL, an algorithm that cleanly reduces the problem of\npolicy optimization to regressing the relative rewards via a direct policy\nparameterization between two completions to a prompt, enabling strikingly\nlightweight implementation. In theory, we prove that fundamental RL algorithms\nlike Natural Policy Gradient can be seen as variants of REBEL, which allows us\nto match the strongest known theoretical guarantees in terms of convergence and\nsample complexity in the RL literature. REBEL can also cleanly incorporate\noffline data and handle the intransitive preferences we frequently see in\npractice. Empirically, we find that REBEL provides a unified approach to\nlanguage modeling and image generation with stronger or similar performance as\nPPO and DPO, all while being simpler to implement and more computationally\ntractable than PPO.", |
| "authors": "Zhaolin Gao, Jonathan D. Chang, Wenhao Zhan, Owen Oertell, Gokul Swamy, Kiant\u00e9 Brantley, Thorsten Joachims, J. Andrew Bagnell, Jason D. Lee, Wen Sun", |
| "published": "2024-04-25", |
| "updated": "2024-04-25", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.CL", |
| "cs.CV" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "REBEL: Reinforcement Learning via Regressing Relative Rewards", |
| "main_content": "Introduction The generality of the reinforcement learning (RL) paradigm is striking: from continuous control problems (Kalashnikov et al., 2018) to, recently, the fine-tuning of generative models (Stiennon et al., 2022; Ouyang et al., 2022), RL has enabled concrete progress across a variety of decision-making tasks. Specifically, when it comes to fine-tuning generative models, Proximal Policy Optimization (PPO, Schulman et al. (2017)) has emerged as the de-facto RL algorithm of choice, from language models (LLMs) (Ziegler et al., 2020; Stiennon et al., 2022; Ouyang et al., 2022; Touvron et al., 2023) to image generative models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024). If we take a step back however, it is odd that we are using an algorithm designed for optimizing two-layer networks for continuous control tasks from scratch for fine-tuning the billions of parameters \u2217{zg292, jdc396, ojo2, kdb82, ws455}@cornell.edu, tj@cs.cornell.edu \u2020{wenhao.zhan, jasonlee}@princeton.edu \u2021{gswamy,bagnell2}@andrew.cmu.edu 1 arXiv:2404.16767v1 [cs.LG] 25 Apr 2024 \fImage Generation Language Modeling ( ) RLHF reinforcement learning regression REBEL ( ) ( ) x y x y Figure 1: We present REBEL: a simple and scalable RL algorithm that performs policy optimization via iteratively regressing the difference in rewards directly in terms of the policy. This allows us to eliminate much of the complexity (e.g. value functions, clipping) of algorithms like PPO (Schulman et al., 2017). We apply REBEL to problems in both image generation and language modeling and find that despite its conceptual and implementation-level simplicity, REBEL is able to match or sometimes outperform the performance of PPO while out-performing purely offline techniques like DPO (Rafailov et al., 2023). of modern-day generative models. In the continuous control setting, the randomly initialized neural networks and the possible stochasticity in the dynamics necessitate variance reduction through a learned value function as a baseline (Schulman et al., 2015b), while clipping updates is important to limit distribution shift from iteration to iteration (Kakade and Langford, 2002). This means that when applied to generative model fine-tuning, we need to store four models in memory simultaneously (the policy, the reference policy, the critic, and the reward model), each with billions of parameters. Furthermore, we often add a KL regularization to the base model for fine-tuning, making explicit clipping unnecessary nor advisable, as pointed out by Ahmadian et al. (2024). Even outside of the generative modeling context, PPO is notorious for the wide range of performances measured, with differences being attributed to seemingly inconsequential implementation details (Henderson et al., 2019; Engstrom et al., 2020). This begs the question: Are there simpler algorithms that scale to modern RL applications? Our answer is REBEL: an algorithm that reduces the problem of reinforcement learning to solving a sequence of squared loss regression problems on iteratively collected datasets. The regression problems directly use policies to predict the difference in rewards. This allows us to eliminate the complexity of value functions, avoid heuristics like clipping, and scale easily to problems in both language modeling and image generation. Our key insight is that regressing relative rewards via policies directly on a sequence of iteratively collected datasets implicitly enables policy improvement. 
Rather than being a heuristic, REBEL comes with strong guarantees in theory and can be seen as a strict generalization of classical techniques (e.g., NPG) in reinforcement learning. Furthermore, REBEL cleanly incorporates offline datasets when available, can be extended to robustly handle intransitive preferences (Swamy et al., 2024), and empirically outperforms techniques like PPO and DPO (Rafailov et al., 2023) in language generation while converging faster to a similar asymptotic performance in image generation. More explicitly, our key contributions are four-fold:

1. We propose REBEL, a simple and scalable RL algorithm. REBEL finds a near-optimal policy by solving a sequence of least squares regression problems on iteratively collected datasets. Each regression problem involves using a policy-parameterized regressor to predict the difference in rewards across trajectories sampled from the dataset. This dataset can be generated in a purely on-policy fashion or can incorporate offline data, enabling hybrid training. Furthermore, REBEL can be easily extended to handle intransitive preferences.

2. We connect REBEL to classical RL methods. We show that REBEL is a generalization of the foundational Natural Policy Gradient (NPG, Kakade (2001)) algorithm – applying the Gauss-Newton algorithm to the sequence of regression problems that REBEL solves recovers NPG. However, by instead applying simpler first-order optimization techniques, we are able to avoid computing the Fisher Information Matrix and enjoy a variance reduction effect. Thus, REBEL can be understood as a generalization of NPG while being much more scalable.

3. We analyze the convergence properties of REBEL. We prove via a direct reduction-based analysis that as long as we can solve the regression problem well at each iteration, we will be able to compete with any policy covered by the iteratively collected datasets (matching the strongest known results in agnostic RL). These problems involve predicting the difference in rewards between trajectories in our dataset. We expect this problem to be well-solved in practice because our class of regressors is isomorphic to a class of policies that is highly expressive for the applications we consider (i.e. flexible Transformer models).

4. We evaluate REBEL on both language modeling and image generation tasks. We find that the on-policy version of REBEL outperforms PPO and DPO on language modeling and has similar performance on image generation tasks. On the TL;DR summarization task, we show REBEL scales well by fine-tuning a 6.9B parameter model. For text-guided image generation, REBEL optimizes a consistency model that converges to a similar performance as PPO.

In short, REBEL is a simple and scalable algorithm that enjoys strong theoretical guarantees and empirical performance. We believe it is a suitable answer to the question raised above.

2 REBEL: REgression to RElative REward Based RL

We first outline the notation used throughout the paper.

2.1 Notation

We consider the Contextual Bandit formulation (Langford and Zhang, 2007) of RL, which has been used to formalize the generation process of models like LLMs (Rafailov et al., 2023; Ramamurthy et al., 2022; Chang et al., 2023) and Diffusion Models (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) due to the determinism of the transitions. More explicitly, in the deterministic transition setting, explicit states are not required as they can be equivalently represented by a sequence of actions.
Furthermore, the entire sequence of actions can be considered as a single "arm" in a bandit problem with an exponentially large action space. We denote by (x, y) a prompt/response pair, with x ∈ X a prompt and y ∈ Y a response (e.g., a sequence of tokens, or in general a sequence of actions). We assume access to a reward function r(x, y) from which we can query for reward signals (the exact form of r does not need to be known). Querying r at (x, y) returns a scalar r(x, y) measuring the quality of the response. Such a reward function could be a pre-defined metric (e.g., Rouge score against human responses) or it could be learned from offline human demonstration or preference data (e.g., the RLHF paradigm (Christiano et al., 2017; Ziegler et al., 2020)), as explored in our experiments. Denote by π: X → Δ(Y) a policy (e.g. an LLM) that maps from a prompt x to a distribution over the response space Y. We use ρ to denote the distribution over prompts (i.e. initial states / contexts) x. Throughout the paper, we use π_θ(y|x) to denote a parameterized policy with parameter θ (e.g., a neural network policy). At times we interchangeably use π_t and π_{θ_t} when it is clear from the context. We emphasize that while we focus on the bandit formulation for notational simplicity, the algorithms proposed here can be applied to any deterministic MDP where x is the initial state and the trajectory y consists of the sequence of actions.

At each iteration of all algorithms, our goal will be to solve the following KL-constrained RL problem:

π_{t+1} = argmax_π E_{x, y∼π(·|x)} r(x, y) − (1/η) E_x KL( π(·|x) || π_t(·|x) ).   (1)

Intuitively, this can be thought of as asking the optimizer to fine-tune the policy π_{t+1} according to r while staying close to some baseline policy π_t.

2.2 Deriving REBEL: REgression to RElative REward Based RL

From Ziebart et al. (2008), we know that there exists a closed-form solution to the above minimum relative entropy problem (Eq. 1, Grünwald and Dawid (2004)):

∀x, y:  π_{t+1}(y|x) = π_t(y|x) exp(η r(x, y)) / Z(x),   where Z(x) = Σ_y π_t(y|x) exp(η r(x, y)).   (2)

As first pointed out by Rafailov et al. (2023), observe that we can invert Eq. 2 and write the reward as a function of the policy, i.e. the "DPO trick":

∀x, y:  r(x, y) = (1/η) ( ln Z(x) + ln( π_{t+1}(y|x) / π_t(y|x) ) ).   (3)
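As a quick numerical sanity check (ours, not from the paper), one can verify on a small discrete response space that the exponential reweighting of Eq. 2 indeed maximizes the KL-regularized objective of Eq. 1; the snippet below assumes a single fixed prompt and exact enumeration:

import numpy as np

rng = np.random.default_rng(0)
K, eta = 5, 2.0
pi_t = rng.dirichlet(np.ones(K))          # current policy pi_t(.|x) over K responses
r = rng.normal(size=K)                    # rewards r(x, y) for each response y

def objective(pi):
    # E_{y~pi} r(x, y) - (1/eta) KL(pi || pi_t): the per-prompt objective of Eq. 1
    return np.sum(pi * r) - np.sum(pi * np.log(pi / pi_t)) / eta

pi_next = pi_t * np.exp(eta * r)
pi_next /= pi_next.sum()                  # closed form of Eq. 2

best_random = max(objective(rng.dirichlet(np.ones(K))) for _ in range(10_000))
assert objective(pi_next) >= best_random  # no randomly drawn policy does better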
As soon as X and Y become large, we can no longer guarantee that Eq. 3 holds exactly at all (x, y), and we therefore need to turn our attention to choosing a policy such that Eq. 3 is approximately true. We propose using a simple square loss objective between the two sides of Eq. 3 to measure the goodness of a policy, i.e. reducing RL to a regression problem:

( r(x, y) − (1/η) ( ln Z(x) + ln( π_{t+1}(y|x) / π_t(y|x) ) ) )².   (4)

Algorithm 1  REgression to RElative REward Based RL (REBEL)
1: Input: reward r, policy class Π = {π_θ}, base distribution μ, learning rate η
2: Initialize policy π_{θ_0}.
3: for t = 0 to T−1 do
4:   // The base distribution μ can either be an offline dataset or π_t.
5:   Collect dataset D_t = {(x, y, y')} where x ∼ ρ, y ∼ π_t(·|x), y' ∼ μ(·|x)
6:   Solve the square loss regression problem:
     θ_{t+1} = argmin_θ Σ_{(x,y,y')∈D_t} ( (1/η) ( ln( π_θ(y|x) / π_{θ_t}(y|x) ) − ln( π_θ(y'|x) / π_{θ_t}(y'|x) ) ) − ( r(x, y) − r(x, y') ) )²   (9)
7: end for

Unfortunately, this loss function includes the partition function Z(x), which can be challenging to approximate over large input / output domains. However, observe that Z(x) only depends on x and not y. Thus, if we have access to paired samples, i.e. (x, y) and (x, y'), we can instead regress the difference in rewards to eliminate this term from our objective:

( ( r(x, y) − r(x, y') ) − (1/η) ( ln( π_{t+1}(y|x) / π_t(y|x) ) − ln( π_{t+1}(y'|x) / π_t(y'|x) ) ) )².   (5)

Of course, we need to evaluate this loss function on some distribution of samples. In particular, we propose using an on-policy dataset D_t = {(x, y, y')} with x ∼ ρ, y ∼ π_t(·|x), y' ∼ μ(·|x), where μ is some base distribution. The base distribution μ can either be a fixed offline dataset (e.g. the instruction fine-tuning dataset) or π_t itself. Thus, the choice of base distribution μ determines whether REBEL is hybrid or fully online.
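In code, the regression step in Eq. 9 amounts to an ordinary squared loss on sequence-level log-probability ratios. The following is a minimal PyTorch-style sketch of ours (not the authors' released implementation), where each log-probability is the sum of token log-probabilities of a full completion:

import torch

def rebel_loss(logp_new_y, logp_old_y, logp_new_yp, logp_old_yp, r_y, r_yp, eta):
    # Predictor of Eq. 8: (1/eta) * difference of log-ratios between y and y'.
    # logp_new_* carry gradients (current parameters theta); logp_old_* and the
    # rewards are treated as constants.
    pred = ((logp_new_y - logp_old_y) - (logp_new_yp - logp_old_yp)) / eta
    # Regress onto the observed reward difference r(x, y) - r(x, y') (Eq. 9).
    target = r_y - r_yp
    return ((pred - target) ** 2).mean()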
Putting it all together, we arrive at our core REBEL objective:

Σ_{(x,y,y')∈D_t} ( ( r(x, y) − r(x, y') ) − (1/η) ( ln( π_{t+1}(y|x) / π_t(y|x) ) − ln( π_{t+1}(y'|x) / π_t(y'|x) ) ) )².   (6)

To recap, given a pair of completions y, y' to a prompt x, REBEL attempts to fit the relative reward

r(x, y) − r(x, y')   (7)

by optimizing over a class of predictors of the form

(1/η) ( ln( π_θ(y|x) / π_{θ_t}(y|x) ) − ln( π_θ(y'|x) / π_{θ_t}(y'|x) ) ).   (8)

Critically, observe that if we were able to perfectly solve this regression problem, we would indeed recover the optimal solution to the KL-constrained RL problem we outlined in Eq. 1. While the above update might seem somewhat arbitrary at first glance, it has deep connections to prior work in the literature that illuminate its strengths over past techniques. We now discuss some of them.

3 Understanding REBEL as an Adaptive Policy Gradient

We begin by recapping the foundational algorithms for policy optimization before situating REBEL within this space of techniques.

3.1 Adaptive Gradient Algorithms for Policy Optimization

In this section, we give a brief overview of three adaptive gradient algorithms: Mirror Descent (MD), Natural Policy Gradient (NPG), and Proximal Policy Optimization (PPO). We discuss why they are preferable to their non-adaptive counterparts (Gradient Descent (GD) and Policy Gradient (PG)) and the connections between them.

Mirror Descent. If X and Y are small discrete spaces (i.e. we are in the tabular setting), we can use the closed-form expression for the minimum relative entropy problem (Eq. 2). This is equivalent to the classic Mirror Descent (MD) algorithm with KL as the Bregman divergence. This update procedure is also sometimes known as soft policy iteration (Ziebart et al., 2008). Note that it does not even involve a parameterized policy and is therefore manifestly covariant. MD ensures a 1/T convergence rate, i.e., after T iterations it must find a policy π̂ such that E_{x, y∼π*(·|x)} r(x, y) − E_{x, y∼π̂(·|x)} r(x, y) ≤ O(1/T). In particular, the convergence is almost dimension-free: the convergence rate scales logarithmically with respect to the size of the space Y. Note that gradient ascent will not enjoy such a dimension-free rate when optimizing over the simplex. When sup_{x,y} |r(x, y)| is bounded, we can show that the KL divergence between consecutive policies, i.e., KL( π_{t+1}(·|x) || π_t(·|x) ), is also bounded, ensuring π_{t+1} stays close to π_t.
One can also show monotonic policy improvement, i.e., E_{x, y∼π_{t+1}} r(x, y) ≥ E_{x, y∼π_t} r(x, y). Foreshadowing a key point we will soon expound upon, both NPG and PPO can be considered approximations of this idealized tabular policy update procedure.

Natural Policy Gradient. When Y and X are large, we cannot simply enumerate all x and y. Thus, we need to use a function to approximate π, which makes it impossible to exactly implement Eq. 2. Let us use π_θ to denote a parameterized policy with parameter θ (e.g. the weights of a transformer). The Natural Policy Gradient (NPG, Kakade (2001)) approximates the KL in Equation 1 via its second-order Taylor expansion, whose Hessian is known as the Fisher Information Matrix (FIM, Bagnell and Schneider (2003)), i.e.

E_x KL( π_θ(·|x) || π_{θ_t}(·|x) ) ≈ (θ − θ_t)^⊤ F_t (θ − θ_t),   F_t := E_{x, y∼π_{θ_t}(·|x)} [ ∇ln π_{θ_t}(y|x) ∇ln π_{θ_t}(y|x)^⊤ ],

where F_t is the Fisher Information Matrix. The NPG update can be derived by plugging this approximation into Eq. 1, further approximating E_{x, y∼π_θ(·|x)} r(x, y) by its first-order Taylor expansion around θ_t, and finding the root of the resulting quadratic form:

θ_{t+1} = θ_t + η F_t^† ( E_{x, y∼π_{θ_t}(·|x)} ∇ln π_{θ_t}(y|x) r(x, y) )   (10)

where F_t^† is the pseudo-inverse of F_t, and E_{x, y∼π_{θ_t}(·|x)} ∇ln π_{θ_t}(y|x) r(x, y) is the standard policy gradient (i.e. REINFORCE (Williams, 1992)). As mentioned above, this update procedure can be understood as performing gradient updates in the local geometry induced by the Fisher information matrix, which ensures that we are taking small steps in policy space rather than in parameter space. Conversely, unlike regular gradient descent methods (i.e., PG), NPG allows us to make large changes in the parameter space Θ, as long as the resulting two policies are close to each other in terms of KL divergence. This property allows NPG to make more aggressive and adaptive updates in the parameter space of the policy as well as be invariant to linear transformations of the parameters. Theoretically, Agarwal et al. (2021a) show that NPG with softmax parameterization converges at the 1/T rate in a dimension-free manner, provably faster than the standard PG under the same setup.
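To make Eq. 10 concrete, here is a toy numpy sketch of ours for a single prompt with a tabular softmax policy over K responses, where the policy gradient and the Fisher matrix can be computed exactly:

import numpy as np

K, eta = 4, 1.0
rng = np.random.default_rng(0)
r = rng.normal(size=K)                     # reward of each response
theta = np.zeros(K)                        # softmax policy parameters

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

for _ in range(10):
    pi = softmax(theta)
    grad_log = np.eye(K) - pi              # row y holds grad_theta log pi(y)
    g = (pi[:, None] * grad_log * r[:, None]).sum(axis=0)                    # policy gradient
    F = (pi[:, None, None] * grad_log[:, :, None] * grad_log[:, None, :]).sum(axis=0)
    theta = theta + eta * np.linalg.pinv(F) @ g                              # NPG step (Eq. 10)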
Empirically, the superior convergence speed of NPG compared to that of PG was observed in its original exploration (Kadade, 2001; Bagnell and Schneider, 2003), as well as in follow-up work like TRPO (Schulman et al., 2015a). Critically, while elegant in theory, NPG unfortunately does not scale to modern generative models due to the need for computing the Fisher matrix inverse, either explicitly or implicitly via the Hessian-vector product trick.

Proximal Policy Optimization. To address the scalability of NPG, Schulman et al. (2017) propose Proximal Policy Optimization (PPO). Rather than explicitly computing the KL divergence between policies or approximating it via a Taylor expansion, PPO takes a more direct route and uses clipped updates with the hope of controlling the deviation of the action probabilities of π_{θ_{t+1}} from those of π_{θ_t}, i.e.

θ_{t+1} := argmax_θ E_{x, y∼π_{θ_t}(·|x)} clip( π_θ(y|x) / π_{θ_t}(y|x); 1−ε, 1+ε ) r(x, y).   (11)

Prima facie, this update follows the underlying intuition of NPG: allow big and adaptive changes in the policy's parameters θ, as long as the corresponding action probabilities do not change too much. This perhaps explains the superiority of PPO over vanilla REINFORCE in domains like continuous control. Unfortunately, under closer scrutiny, it becomes apparent that PPO-style clipped updates neither guarantee closeness to the prior policy nor have NPG-style adaptivity. While the clipping operator can set the gradient to zero at samples (x, y) where π_{θ_{t+1}}(y|x) is much larger or smaller than π_{θ_t}(y|x), it cannot actually guarantee that π_{θ_{t+1}} stays close to π_{θ_t}, a phenomenon empirically observed in prior work (Hsu et al., 2020). Furthermore, hard clipping is not adaptive: it treats all (x, y) equally and clips whenever the ratio is outside of a fixed range. In contrast, constraining the KL divergence to the prior policy allows one to vary the ratio π(y|x)/π_t(y|x) at different (x, y), as long as the total KL divergence across the state space is small. Lastly, clipping reduces the effective size of a batch of training examples and thus wastes training samples.

A REBEL With a Cause. Our algorithm REBEL addresses the above limitations of NPG (scalability) and PPO (lack of conservativity or adaptivity). First, unlike NPG, it does not rely on the Fisher information matrix at all and can easily scale to modern LLM applications, yet (as we will discuss below) can be interpreted as a generalization of NPG. Second, in contrast to PPO, it does not have unjustified heuristics and thus enjoys strong convergence and regret guarantees, just like NPG.

3.2 Connections between REBEL and MD / NPG

We now sketch a series of connections between REBEL and the methods outlined above.

Exact REBEL is Mirror Descent.
First, to build intuition, we interpret our algorithm's behavior under the assumption that the least squares regression optimization returns the exact Bayes optimal solution (i.e., our learned predictor achieves zero prediction error everywhere):

∀x, y, y':  (1/η) ( ln( π_{θ_{t+1}}(y|x) / π_{θ_t}(y|x) ) − ln( π_{θ_{t+1}}(y'|x) / π_{θ_t}(y'|x) ) ) = r(x, y) − r(x, y').   (12)

Conditioned on Eq. 12 being true, a few lines of algebraic manipulation reveal that there must exist a function c(x), independent of y, such that:

∀x, y:  (1/η) ln( π_{θ_{t+1}}(y|x) / π_{θ_t}(y|x) ) = r(x, y) + c(x).

Taking an exp on both sides and rearranging terms, we get:

∀x, y:  π_{θ_{t+1}}(y|x) ∝ π_{θ_t}(y|x) exp( η r(x, y) ).

In other words, under the strong assumption that least squares regression returns a point-wise accurate estimator (i.e., Eq. 12), we see that REBEL recovers the exact MD update, which gives it (a) a fast 1/T convergence rate (Shani et al., 2020; Agarwal et al., 2021a), (b) conservativity, i.e., max_x KL( π_{t+1}(·|x) || π_t(·|x) ) is bounded as long as max_{x,y} |r(x, y)| is bounded, and (c) monotonic policy improvement via the standard NPG analysis (Agarwal et al., 2021a).

NPG is Approximate REBEL with Gauss-Newton Updates. We provide another interpretation of REBEL by showing that NPG (Eq. 10) can be understood as a special case of REBEL where the least squares problem in Eq. 9 is approximately solved via a single iteration of the Gauss-Newton algorithm. As for any application of Gauss-Newton, we start by approximating our predictor (1/η) ln( π_θ(y|x) / π_{θ_t}(y|x) ) by its first-order Taylor expansion at θ_t:

(1/η) ( ln π_θ(y|x) − ln π_{θ_t}(y|x) ) ≈ (1/η) ∇_θ ln π_{θ_t}(y|x)^⊤ (θ − θ_t),

where ≈ indicates that we ignore higher-order terms in the expansion. If we define δ := θ − θ_t and replace (1/η) ( ln π_θ(y|x) − ln π_{θ_t}(y|x) ) by its above first-order approximation in Eq.
9, we arrive at the following quadratic form:

min_δ E_{x∼ρ, y∼π_{θ_t}(·|x), y'∼μ(·|x)} ( (1/η) ( ∇_θ ln π_{θ_t}(y|x) − ∇_θ ln π_{θ_t}(y'|x) )^⊤ δ − ( r(x, y) − r(x, y') ) )².   (13)

Further simplifying notation, we denote the uniform mixture of π_t and μ as π_mix(·|x) := ( π_t(·|x) + μ(·|x) ) / 2 and the Fisher information matrix F_t averaged under said mixture as:

F_t = E_{x∼ρ, y∼π_mix(·|x)} [ ∇_θ ln π_{θ_t}(y|x) ( ∇_θ ln π_{θ_t}(y|x) )^⊤ ].

Solving the above least squares regression to obtain a minimum-norm solution, we have the following claim.

Claim 1. The minimum-norm minimizer δ* of the least squares problem in Eq. 13 recovers an advantage-based variant of the NPG update:

δ* := η F_t^† ( E_{x∼ρ, y∼π_mix(·|x)} ∇_θ ln π_{θ_t}(y|x) [ A^{π_t}(x, y) ] ),

where F_t^† is the pseudo-inverse of F_t and the advantage is defined as A^{π_t}(x, y) := r(x, y) − E_{y'∼π_t(·|x)} r(x, y').

The proof of this claim is deferred to Appendix A. Observe that in REBEL, we never explicitly compute the advantage A^{π_t}. However, applying Gauss-Newton to our objective leads to an advantage-based NPG (rather than the traditional Q-function-based NPG, e.g., Q-NPG from Agarwal et al. (2021a, 2019)), which indicates that predicting the reward difference has an implicit variance reduction effect since, by definition, an advantage function includes a value function baseline.¹

¹Note that the original form of NPG is on-policy (Kakade, 2001; Sutton et al., 1999), i.e., the expectations are under π_t. Our formulation is more general: when we set μ = π_t, a Gauss-Newton step recovers the original on-policy form of NPG from Kakade (2001); Sutton et al. (1999). More recent works have extended NPG beyond on-policy (e.g., Agarwal et al. (2021a, 2020)).

3.3 Extending REBEL to General Preferences

In the above discussion, we assume we are given access to a ground-truth reward function. However, in the generative model fine-tuning applications of RL, we often need to learn from human preferences rather than rewards. This shift introduces a complication: not all preferences can be rationalized by an underlying utility function.
In particular, intransitive preferences, which are well known to result from aggregation across different sub-populations or from users evaluating different pairs of items on the basis of different features (May, 1954; Tversky, 1969; Gardner, 1970), cannot be accurately captured by a single reward model. To see this, note that if we have a ≻ b, b ≻ c, and c ≻ a, it is impossible to have a reward model that simultaneously sets r̂(a) > r̂(b), r̂(b) > r̂(c), and r̂(c) > r̂(a). As we increase the space of possible choices to that of all possible prompt completions, the probability of such intransitivities sharply increases (Dudík et al., 2015), as reflected in the high levels of annotator disagreement in LLM fine-tuning datasets (Touvron et al., 2023). Thus, rather than assuming access to a reward model, in such settings we assume access to a preference model (Munos et al., 2023; Swamy et al., 2024; Rosset et al., 2024; Ye et al., 2024).

3.3.1 A Game-Theoretic Perspective on Learning from Preferences

More specifically, for any tuple (x, y, y'), we assume we have access to P(y ≻ y' | x): the probability that y is preferred to y'. We then define our preference model l as

l(x, y, y') ≜ 2 · P(y ≻ y' | x) − 1.   (14)

Observe that l(x, y, y') ∈ [−1, 1] is skew-symmetric, i.e., l(x, y, y) = 0 and l(x, y, y') + l(x, y', y) = 0 for all x ∈ X and y, y' ∈ Y. If the learner can only receive binary feedback o ∈ {0, 1} indicating the preference between y and y', we assume o is sampled from a Bernoulli distribution with mean P(y ≻ y' | x), where o = 1 means that y is preferred over y' and o = 0 otherwise.

Given access to such a preference model, a solution concept for the preference aggregation problem with deep roots in the social choice theory literature (Kreweras, 1965; Fishburn, 1984; Kramer, 1973; Simpson, 1969) and the dueling bandit literature (Yue et al., 2012; Dudík et al., 2015) is that of a minimax winner (MW) π_MW: the Nash equilibrium strategy of the symmetric two-player zero-sum game with l as the payoff function. In particular, due to the skew-symmetric property of l, Swamy et al. (2024) proved that there exists a policy π_MW such that

max_π E_{x∼ρ, y∼π(·|x), y'∼π_MW(·|x)} [ l(x, y, y') ] = min_π E_{x∼ρ, y∼π_MW(·|x), y'∼π(·|x)} [ l(x, y, y') ].
This implies that (π_MW, π_MW) is a Nash equilibrium (Wang et al., 2023b; Munos et al., 2023; Swamy et al., 2024; Ye et al., 2024). As is standard in game solving, our objective is to obtain an ε-approximate MW π̂, measured by the duality gap (DG):

DG(π̂) := max_π E_{x∼ρ, y∼π(·|x), y'∼π̂(·|x)} [ l(x, y, y') ] − min_π E_{x∼ρ, y∼π̂(·|x), y'∼π(·|x)} [ l(x, y, y') ] ≤ ε.

In the following discussion, we will use l(x, y, π) to denote E_{y'∼π(·|x)} [ l(x, y, y') ] and l(π, π') to denote E_{x∼ρ, y∼π(·|x), y'∼π'(·|x)} [ l(x, y, y') ] for notational convenience.

3.3.2 Self-Play Preference Optimization (SPO) with REBEL as Base Learner

We can straightforwardly extend REBEL to the general preference setting via an instantiation of the Self-Play Preference Optimization (SPO) reduction of Swamy et al. (2024). In short, Swamy et al. (2024) prove that, rather than performing adversarial training, we are able to perform a simple and stable self-play procedure while retaining strong theoretical guarantees. Practically, this corresponds to sampling at least two completions from the current policy, querying a learned preference / supervisor model on each pair, and using the win rate for each completion as its reward. We will now describe how we can adapt REBEL to this mode of feedback. Assuming that we can query the preference oracle l(x, y, y') at will, we can modify the least squares objective in Eq. 9 to

θ_{t+1} := argmin_θ Σ_{(x, y, y', y'')∈D_t} ( (1/η) ( ln( π_θ(y|x) / π_{θ_t}(y|x) ) − ln( π_θ(y'|x) / π_{θ_t}(y'|x) ) ) − ( l(x, y, y'') − l(x, y', y'') ) )²,

where x ∼ ρ, y ∼ π_t(·|x), y'' ∼ π_t(·|x), y' ∼ μ(·|x).
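For concreteness, the targets in this objective can be estimated by Monte Carlo over completions drawn from the current policy. Below is a small sketch of ours (not the paper's code); preference_prob(x, a, b) is a hypothetical oracle returning P(a ≻ b | x):

def winrate_rewards(x, completions, preference_prob):
    # For each completion y, estimate r_t(x, y) = E_{y''~pi_t}[ l(x, y, y'') ]
    # by averaging l(x, y, y'') = 2 * P(y > y'' | x) - 1 over the other samples.
    rewards = []
    for i, y in enumerate(completions):
        opponents = [c for j, c in enumerate(completions) if j != i]
        ls = [2.0 * preference_prob(x, y, o) - 1.0 for o in opponents]
        rewards.append(sum(ls) / len(ls))
    return rewards  # differences of these estimates serve as regression targets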
When the exact value of l(x, y, y') is unavailable but only binary preference feedback o_{y,y'} ∈ {0, 1}, sampled from a Bernoulli distribution with mean P(y ≻ y' | x), is available, we can simply replace l(x, y, y'') − l(x, y', y'') by o_{y,y''} − o_{y',y''}. It is easy to see that the Bayes optimal solution of the above least squares regression problem is equal to:

E_{y''∼π_t(·|x)} l(x, y, y'') − E_{y''∼π_t(·|x)} l(x, y', y'') = l(x, y, π_t) − l(x, y', π_t).

Swamy et al. (2024) define an iteration-dependent reward r_t(x, y) := E_{y''∼π_t(·|x)} l(x, y, y'') = l(x, y, π_t). Thus, the above regression problem can be understood as an extension of REBEL to the setting where the reward function changes at each iteration t. Swamy et al. (2024) show that running the exact MD update (Eq. 2) with this iteration-dependent reward function r_t leads to fast convergence to an approximate Minimax Winner, a property that we will use to provide the regret bound of REBEL in the general preference setting while accounting for nonzero mean squared error.

4 Theoretical Analysis

In the previous section, we interpreted REBEL as exact MD and showed its convergence by assuming that least squares regression always returns a predictor that is accurate everywhere. While such an explanation is simple and has also been used in prior work, point-wise out-of-distribution generalization is an extremely strong condition and is significantly beyond what a standard supervised learning method can promise. In this section, we significantly relax this condition via a reduction-based analysis: as long as we can solve the regression problems well in an in-distribution manner, REBEL can compete against any policy covered by the training data distributions. Formally, we assume the following generalization condition holds for the regressors we find.

Assumption 1 (Regression generalization bounds). Over T iterations, assume that for all t we have:

E_{x∼ρ, y∼π_t(·|x), y'∼μ(·|x)} ( (1/η) ( ln( π_{θ_{t+1}}(y|x) / π_{θ_t}(y|x) ) − ln( π_{θ_{t+1}}(y'|x) / π_{θ_t}(y'|x) ) ) − ( r(x, y) − r(x, y') ) )² ≤ ε,

for some ε.
Intuitively, this assumption says that there is a function in our class of regressors that is able to accurately fit the difference of rewards. Recall that our class of regressors is isomorphic to our policy class. Therefore, as long as our class of policies is expressive, we would expect this assumption to hold with small ε. For all domains we consider, our policy class is a flexible set of generative models (e.g. Transformer-based LLMs or diffusion models). Thus, we believe it is reasonable to expect this assumption to hold in practice; see Figure 6 in Appendix G for empirical evidence of this point and Example 1 for more discussion. More formally, the above assumption bounds the standard in-distribution generalization error (vs. the point-wise guarantee in Eq. 12) of a well-defined supervised learning problem: least squares regression. The generalization error ε captures the possible errors from the learning process for θ_{t+1}, and it could depend on the complexity of the policy class and the number of samples in the dataset D_t. For instance, when the functions ln π − ln π' induced by the log-difference of two policies (π, π') are rich enough (e.g., the policies are deep neural networks) to capture the reward difference, then ε in this assumption converges to zero as we increase the amount of training data. Note that while ε can be small, this does not imply that the learned predictor will have a small prediction error in a point-wise manner; it almost certainly will not.

Example 1. One simple example is when π(y|x) ∝ exp( θ^⊤ φ(x, y) ) for some features φ(x, y). In this case, ln( π(y|x) / π_t(y|x) ) − ln( π(y'|x) / π_t(y'|x) ) = (θ − θ_t)^⊤ ( φ(x, y) − φ(x, y') ), which means that our regression problem in Eq. 9 is a classic linear regression problem. When the reward r(x, y) is also linear in the feature φ(x, y), then Eq. 9 is a well-specified linear regression problem, and ε typically scales at the rate O(d/|D_t|), with d being the dimension of the feature φ. We can extend the above example to the case where φ is the feature map corresponding to some kernel, e.g., an RBF kernel or even the Neural Tangent Kernel, which allows us to capture the case where π is a wide softmax neural network with the least squares regression problem solved by gradient flow. The error ε again scales as poly(d/|D_t|), where d is the effective dimension of the corresponding kernel.

We now define the concentrability coefficient (Kakade and Langford, 2002) that quantifies how well the training data distribution covers a comparator policy.

Data Coverage. Recall that the base distribution μ can be some behavior policy, which in RLHF can be a human labeler, a supervised fine-tuned policy (SFT), or just the current learned policy (i.e., on-policy).
Given a test policy π, we denote by C_{μ→π} the concentrability coefficient, i.e.

C_{μ→π} = max_{x, y} π(y|x) / μ(y|x).   (15)

We say μ covers π if C_{μ→π} < +∞. Our goal is to bound the regret between our learned policies and an arbitrary comparator π* (e.g. the optimal policy, if it is covered by μ) using ε and the concentrability coefficient defined in Eq. 15. The following theorem formally states the regret bound of our algorithm.

Theorem 1. Under Assumption 1, after T iterations, with a proper learning rate η, among the learned policies π_1, ..., π_T there must exist a policy π̂ such that:

∀π*:  E_{x∼ρ, y∼π*(·|x)} r(x, y) − E_{x∼ρ, y∼π̂(·|x)} r(x, y) ≤ O( √(1/T) + √(C_{μ→π*} ε) ).

Here the O-notation hides problem-dependent constants that are independent of ε, C_{μ→π*}, and T.

The above theorem shows a reduction from RL to supervised learning: as long as supervised learning works (i.e., ε is small), REBEL can compete against any policy π* that is covered by the base data distribution μ. In the regret bound, the √(1/T) term comes from the Mirror Descent style update, and C_{μ→π*} ε captures the cost of distribution shift: we train our regressors under the distributions π_t and μ, but we want the learned regressor to predict well under π*. Similar to the NPG analysis from Agarwal et al. (2021a), we now have a slower convergence rate of 1/√T, which is due to the fact that we have approximation error from learning. Such an agnostic regret bound – being able to compete against any policy that is covered by the training distributions – is the strongest type of agnostic learning result known in the RL literature, matching the best of what has appeared in prior policy optimization work including PSDP (Bagnell et al., 2003), CPI (Kakade and Langford, 2002), NPG (Agarwal et al., 2021a), and PC-PG (Agarwal et al., 2020). While in this work we use the simplest and most intuitive definition of coverage, the density-ratio-based definition in Eq. 15, extension to more general ones such as the transfer error (Agarwal et al., 2020, 2021a) or concentrability coefficients that incorporate the function class (e.g., Song et al. (2023b)) is straightforward. We defer the proof of the above theorem and the detailed constants that we omitted in the O-notation to Appendix B.

4.1 Extension to General Preferences

Extending the above analysis to the general preference case is straightforward, except that it requires a stronger coverage condition. This is because we want to find a Nash equilibrium, which requires comparing the learned policy against all other policies.
Results from the Markov Game literature (Cui and Du, 2022b; Zhong et al., 2022; Cui and Du, 2022a; Xiong et al., 2023), in particular Cui and Du (2022b), have shown that the standard single-policy coverage condition used in single-player optimization is provably not sufficient. In particular, they propose using a notion of unilateral concentrability for efficient learning, which in the general preference setting can be defined as

C_{uni,μ} := max_{π, x, y, y''} π_MW(y|x) π(y''|x) / ( μ(y|x) μ(y''|x) ).

Notably, the above unilateral concentrability coefficient C_{uni,μ} is equivalent to C_μ := max_{π, x, y} π(y|x) / μ(y|x), since C_μ ≤ C_{uni,μ} ≤ C_μ². Therefore, in the following discussion we will use C_μ as the coverage condition. In addition, we also assume the generalization error of the regression problem is small.

Assumption 2 (Regression generalization bounds for general preferences). Over T iterations, assume that for all t we have:

E_{x∼ρ, y∼π_t(·|x), y'∼μ(·|x)} ( (1/η) ( ln( π_{θ_{t+1}}(y|x) / π_{θ_t}(y|x) ) − ln( π_{θ_{t+1}}(y'|x) / π_{θ_t}(y'|x) ) ) − ( l(x, y, π_t) − l(x, y', π_t) ) )² ≤ ε,

for some ε.

Under the above coverage condition and generalization bound, we can show that REBEL is able to learn an approximate Minimax Winner:

Theorem 2. Under Assumption 2, after T iterations, with a proper learning rate η, the policy π̂ = Unif({π_t}_{t=1}^T) satisfies:

DG(π̂) ≤ O( √(1/T) + √(C_μ ε) ).

Here the O-notation hides problem-dependent constants that are independent of ε, C_μ, and T.

We defer the proof to Appendix C. Note that the coverage condition here is much stronger than the single-policy coverage condition in the RL setting. We conjecture that this is the cost one has to pay for moving to the more general preference setting and leave the investigation of the necessary coverage condition to future work.

5 Experiments

The implementation of REBEL follows Algorithm 1. In each iteration, REBEL collects a dataset D_t = {(x, y, y')}, where x ∼ ρ, y ∼ π_t(·|x), y' ∼ μ(·|x). Subsequently, REBEL optimizes the least squares regression problem in Eq. 9 through gradient descent with AdamW (Loshchilov and Hutter, 2017).
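A minimal sketch of ours (not the authors' released code) of one such iteration is given below; sample_prompts, generate, reward_fn, and sequence_logprob are hypothetical helpers, with sequence_logprob returning the summed token log-probabilities of each completion:

import torch

def rebel_iteration(policy, old_policy, optimizer, eta,
                    sample_prompts, generate, reward_fn, sequence_logprob):
    prompts = sample_prompts()
    y = generate(old_policy, prompts)          # y  ~ pi_t(.|x)
    y_prime = generate(old_policy, prompts)    # y' ~ mu, here mu = pi_t
    with torch.no_grad():
        r_y, r_yp = reward_fn(prompts, y), reward_fn(prompts, y_prime)
        old_y = sequence_logprob(old_policy, prompts, y)
        old_yp = sequence_logprob(old_policy, prompts, y_prime)
    new_y = sequence_logprob(policy, prompts, y)
    new_yp = sequence_logprob(policy, prompts, y_prime)
    pred = ((new_y - old_y) - (new_yp - old_yp)) / eta    # predictor of Eq. 8
    loss = ((pred - (r_y - r_yp)) ** 2).mean()            # regression of Eq. 9
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# e.g. optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-6)  # learning rate illustrative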
We choose μ = π_t such that both y and y' are generated by the current policy. We empirically assess REBEL's performance on both natural language generation and text-guided image generation.

5.1 Natural Language Generation

Baselines: We compare REBEL with baseline RL algorithms: PPO (Schulman et al., 2017), Direct Preference Optimization (DPO) (Rafailov et al., 2023), and REINFORCE (Williams, 1992) and its multi-sample extension, REINFORCE Leave-One-Out (RLOO) (Kool et al., 2019). The REINFORCE method is implemented with a moving average baseline of the reward. We include two variants of RLOO with two (k = 2) and four (k = 4) generations per prompt.

Dataset: We use the TL;DR summarization dataset (Stiennon et al., 2020)² to train the model to generate summaries of Reddit posts based on human preference data. The dataset comprises human reference summaries and preference data. Following prior work (Stiennon et al., 2020; Rafailov et al., 2023; Ahmadian et al., 2024), we train the DPO baseline on the preference dataset, while conducting online RL (PPO, RLOO, REBEL) on the human reference dataset. We set the maximum context length to 512 and the maximum generation length to 53 to ensure all references in the dataset can be generated. Additional dataset details are in Appendix D.1.

Models: We include results with three different model sizes: 1.4B, 2.8B, and 6.9B. Each model is trained with a supervised fine-tuned (SFT) model and/or a reward model (RM) of the same size. For SFT models, we train a Pythia 1.4B (Biderman et al., 2023)³ model for 1 epoch over the dataset with human references as labels, and use the existing fine-tuned 2.8B⁴ and 6.9B⁵ models. For reward models, we train a Pythia 1.4B parameter model for 1 epoch over the preference dataset and use the existing reward models with 2.8B⁶ and 6.9B⁷ parameters.

²Dataset available at https://github.com/openai/summarize-from-feedback
³HuggingFace Model Card: EleutherAI/pythia-1.4b-deduped
⁴HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__sft__tldr
⁵HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__sft__tldr

Model size | Algorithm | Winrate (↑) | RM Score (↑) | KL(π||π_ref) (↓)
1.4B | SFT   | 24.5% | -0.52 | –
1.4B | DPO   | 43.8% |  0.11 | 30.9
1.4B | PPO   | 51.6% |  1.73 | 29.1
1.4B | REBEL | 55.3% |  1.87 | 32.4
2.8B | SFT   | 28.4% | -0.40 | –
2.8B | DPO   | 53.5% |  2.41 | 66.5
2.8B | PPO   | 67.2% |  2.37 | 27.4
2.8B | REBEL | 70.3% |  2.44 | 29.2

Table 1: Results on TL;DR Summarization for SFT, PPO, DPO, and REBEL using three metrics. The RM Score is computed using the reward model of the respective size and the winrate is evaluated by GPT4. The models are trained with low-rank adapters. The best-performing method for each size and metric is highlighted in bold and the second best is underlined. We note that REBEL outperforms all baselines here in terms of the winrate.

6.9B        | SFT   | DPO   | REINFORCE | PPO    | RLOO (k = 2) | RLOO (k = 4) | REBEL
Winrate (↑) | 44.6% | 68.2% | 70.7%*    | 77.6%‡ | 74.2%*       | 77.9%*       | 78.0%

*directly obtained from Ahmadian et al. (2024)
‡directly obtained from Huang et al. (2024)

Table 2: Results on TL;DR Summarization on 6.9B models. We perform full-parameter training for all models. The best-performing method is highlighted in bold and the second best is underlined.
For both REBEL and baseline methods using 1.4B and 2.8B parameters, we trained the policy and/or the critic using low-rank adapters (LoRA) (Hu et al., 2022) on top of our SFT and/or reward model respectively. For the 6.9B models, we perform full-parameter training. More details about the hyperparameters are described in Appendix D.2. Evaluation: We evaluate each method by its balance between reward model score and KLdivergence with the reference policy, testing the effectiveness of the algorithm in optimizing the regularized RL object. To evaluate the quality of the generation, we compute the winrate (Rafailov et al., 2023) against human references using GPT48 (OpenAI, 2023). The winrate is computed from a randomly sampled subset (10%) of the test set with a total of 600 samples. The prompt used to query GPT4 as well as an example response is shown in Appendix D.3. 14 \f1.6 1.8 2.0 2.2 2.4 2.6 RM Score ( ) 15 20 25 30 35 KL ( || ref) ( ) 2.0 2.1 2.2 2.3 2.4 2.5 2.6 2.7 RM Score ( ) 0 10 20 30 40 50 60 REBEL PPO Figure 2: Plot of Reward vs KL-Divergence for 2.8B REBEL and PPO. We evaluate the models across the entire test set every 100 steps for 2,000 steps. Left: each point represents the average reward score and KL-divergence for a specific time step; the eclipse represents the confidence interval with 2 standard deviations. Right: we divide the KL distribution at the 2,000-step into 10 bins with equal size and average the corresponding RM scores in each bin. 5.1.1 Quality Analysis Table 1 presents a comparison between REBEL and SFT, PPO, and DPO for 1.4B and 2.8B models trained with LoRA. We calculate the KL-divergence (KL(\ud835\udf0b||\ud835\udf0b\ud835\udc5f\ud835\udc52\ud835\udc53)) using the SFT policy of the corresponding size as the reference for all models. Notably, REBEL outperforms all the baselines on RM score across all model sizes with a slightly larger KL than PPO. In addition, REBEL achieves the highest winrate under GPT4 when evaluated against human references, indicating the benefit of regressing the relative rewards. Example generations of 2.8B REBEL are included in Appendix E. We also perform full-parameter training for 6.9B models and the winrates are shown in Table 2. We can observe that REBEL still outperforms all of the baselines while REBEL, PPO, and RLOO (\ud835\udc58= 4) have comparable performances (but we will soon show in the next section that REBEL is more tractable in computation and memory than PPO and RLOO with \ud835\udc58= 4). An ablation analysis on parameter \ud835\udf02is in Appendix F. The trade-off between the reward model score and KL-divergence is shown in Figure 2. We evaluate the 2.8B REBEL and PPO every 400 gradient updates during training for 8,000 updates. The sample complexity of each update is held constant across both algorithms for fair comparison. For the left plot, each point represents the average divergence and score over the entire test set, and the eclipse represents the confidence interval with 2 standard deviations. As observed previously, PPO exhibits lower divergence, whereas REBEL shows higher divergence but is capable of achieving larger RM scores. Notably, towards the end of the training (going to the right part of the plot), REBEL and PPO have similar KL and RM scores. For the right plot in Figure 2, we analyze a single checkpoint for each algorithm at the end of training. 
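Here and throughout, KL(π||π_ref) is a sequence-level divergence from the SFT reference; the paper does not spell out the estimator, but a standard Monte Carlo estimate from the policy's own generations can be sketched as follows (shapes and names are assumptions):

```python
def sequence_kl_estimate(logp_policy_tokens, logp_ref_tokens, mask):
    """Monte Carlo estimate of KL(pi || pi_ref) from sampled generations.

    logp_policy_tokens, logp_ref_tokens : (batch, seq_len) log-probabilities
        of the *sampled* tokens under the trained policy and the frozen SFT
        reference.  mask is 1 for generated tokens, 0 for padding.
    Returns per-generation log-ratios; their mean is an unbiased estimate of
    the sequence-level KL when the samples come from the policy itself.
    """
    per_generation = ((logp_policy_tokens - logp_ref_tokens) * mask).sum(-1)
    return per_generation
```

Per-generation values of this kind are what the following binned comparison groups by.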
For each algorithm, we group every generation from the test set by its KL distribution into 10 equally sized bins and calculate the average of the corresponding RM 6HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-2.8b-deduped__reward__tldr 7HuggingFace Model Card: vwxyzjn/EleutherAI_pythia-6.9b-deduped__reward__tldr 8Specific API checkpoint used throughout this section: gpt-4-0613 15 \fDPO REINFORCE RLOO (k = 2) PPO RLOO (k = 4) REBEL Method 0 20 40 60 80 100 120 Time (s) Generation Policy Update DPO REINFORCE RLOO (k = 2) PPO RLOO (k = 4) REBEL Method 0 5 10 15 20 25 30 35 40 Peak Memory Usage (GB) Figure 3: Plot of runtime and memory usage for DPO, REINFORCE, RLOO, PPO, and REBEL. The runtime includes both time for generation and policy update for each batch. Runtime and memory usage are measured on A6000 GPUs. Baselines on the left-hand side of the dashed line have lower winrates. Methods on the right-hand side of the dashed line have similar winrates to REBEL, but REBEL is noticeably more computationally tractable and memory efficient than PPO and RLOO (\ud835\udc58= 4). score for each bin. We can see that REBEL achieves higher RM scores for generations with small divergence while requiring larger divergence for generations with the highest scores. 5.1.2 Runtime & Memory Analysis We analyze the runtime and peak memory usage for 2.8B models using PPO, DPO, RLOO, and REBEL. The runtime includes both the generation time and the time required for policy updates. Both runtime and peak memory usage are measured on A6000 GPUs using the same hyperparameters detailed in Appendix D.2. The methods in the plots are arranged in ascending order based on winrates. To the right of the dashed line, PPO, RLOO (\ud835\udc58= 4), and REBEL have the highest winrates, which are comparable among them. While DPO and REINFORCE require less time and memory, their performance does not match up to REBEL, as discussed in Section 5.1.1. RLOO (\ud835\udc58= 2) has similar runtime and memory usage as REBEL since we set \ud835\udf07= \ud835\udf0b\ud835\udc61, making REBEL also generate twice per prompt. However, RLOO (\ud835\udc58= 2) has worse performance than REBEL. Compared to PPO and RLOO (\ud835\udc58= 4), REBEL demonstrates shorter runtimes and lower peak memory usage. PPO is slow and requires more memory because it needs to update both two networks: policy network and value network. RLOO (\ud835\udc58= 4) requires generating 4 responses per prompt which makes it slow and less memory efficient. Compared to the two baselines PPO and RLOO (\ud835\udc58= 4) that achieve similar winrates as REBEL, we see that REBEL is more computationally tractable. REBEL is also noticeably simpler to implement than PPO since it does not learn value networks or compute the advantage estimation. 16 \f0 10000 20000 30000 40000 50000 60000 Reward Queries 6.0 6.5 7.0 7.5 8.0 8.5 9.0 LAION Aesthetic Score REBEL PPO Figure 4: Learning curves as a function of reward queries to the LAION aesthetic predictor. We report inter-quartile means (IQM) with 95% confidence intervals (CIs) across three seeds for both REBEL and PPO. The CIs were calculated with percentile bootstrap with stratified sampling over three random seeds. 5.2 Image Generation We also consider the setting of image generation, where, given a consistency model (Song et al., 2023a) and a target reward function, we seek to train the consistency model to output images which garner a higher reward. 
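Results in this section are aggregated as inter-quartile means (IQM) with 95% percentile-bootstrap confidence intervals over random seeds, following Agarwal et al. (2021b) (see Evaluation below). The sketch here is a simplified stand-in for that aggregation, not the rliable implementation:

```python
import numpy as np

def iqm(scores):
    """Inter-quartile mean: mean of the middle 50% of the scores."""
    scores = np.sort(np.asarray(scores).ravel())
    n = len(scores)
    return scores[n // 4 : n - n // 4].mean()

def iqm_with_ci(per_seed_scores, n_boot=2000, alpha=0.05, seed=0):
    """IQM with a percentile-bootstrap CI, resampling within each seed
    (stratified).  per_seed_scores: list of 1-D arrays, one per seed."""
    rng = np.random.default_rng(seed)
    point = iqm(np.concatenate(per_seed_scores))
    boots = []
    for _ in range(n_boot):
        resampled = [rng.choice(s, size=len(s), replace=True)
                     for s in per_seed_scores]
        boots.append(iqm(np.concatenate(resampled)))
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```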
Specifically, we compare REBEL and PPO under the RLCM framework (Oertell et al., 2024). Baselines: We compare REBEL to a clipped, policy gradient objective (Black et al., 2023; Fan et al., 2024; Oertell et al., 2024) with the aim to optimize aesthetic quality to obtain high reward from the LAION aesthetic score predictor (Schuhmann, 2022). This baseline does not use critics or GAE for advantage estimates. However, the clipping objective is clearly motivated by PPO, and thus, we simply name this baseline as PPO in this section. Dataset: We use 45 common animals as generation prompts similar to Black et al. (2023); Oertell et al. (2024)9. Models: We use the latent consistency model (Luo et al., 2023) distillation of the Dreamshaper v7 model10, a finetune of stable diffusion (Rombach et al., 2021). Evaluation: We evaluate PPO and REBEL on its reward under the LAION aesthetic reward model for an equal number of reward queries/samples generated and an equal number of gradient updates. The aesthetic predictor is trained to predict human-labeled scores of images on a scale of 1 to 10. Images that tend to have the highest reward are artwork. Following the recommendations of Agarwal et al. (2021b), we report the inter-quartile mean with 95% confidence intervals for our reported results across three random seeds. 9Dataset available at https://github.com/Owen-Oertell/rlcm 10Huggingface model card: SimianLuo/LCM_Dreamshaper_v7 17 \fREBEL PPO 7.29 7.38 7.37 7.27 7.14 6.85 6.17 6.00 6.29 7.06 Figure 5: Generated images using PPO and REBEL during an intermediate checkpoint. We note that at the same number of epochs, REBEL observes a higher reward under the reward model. This can further be seen by the more diverse background of images generated from REBEL with less training time. 5.3 Quality Analysis Figure 4 shows REBEL optimizes the consistency model faster during the beginning of training but eventually achieves similar performance to that of PPO. For our experiments, we tuned both batch size and learning rate for our algorithms, testing batch sizes of [4, 8, 16] per gpu and learning rates [1e \u22124, 3e \u22124, 6e \u22124, 1e \u22123]. Note, the main difference in implementation between PPO and REBEL is the replacement of the clipped PPO objective with our regression objective. Qualitatively, we observe that eventually, both PPO and REBEL start to generate good-looking images but ignore the text prompt entirely. However, from just optimizing the reward function perspective, this behavior is not surprising since the objective does not encourage the maintenance of the consistency between the text prompt and the generated image. To maximize LAION-predicted aesthetic quality, both REBEL and PPO transform a model that produces plain images into one that produces artistic drawings. We found across multiple seeds that REBEL produced lush backgrounds when compared to PPO\u2019s generations. Please see Appendix E.2 for more examples of generated images. 6 Related Work Policy Gradients. Policy gradient (PG) methods (Nemirovsk\u0133 and Yudin, 1983; Williams, 1992; Sutton et al., 1999; Konda and Tsitsiklis, 1999; Kakade, 2001; Schulman et al., 2017) are a prominent class of RL algorithms due to their direct, gradient-based policy optimization, robustness to model mis-specification (Agarwal et al., 2020), and scalability to modern AI applications from fine-tuning LLMs (Stiennon et al., 2022) to optimizing text-to-image generators (Oertell et al., 2024). 
18 \fBroadly speaking, we can taxonomize PG methods into two families. The first family is based on REINFORCE (Williams, 1992) and often includes variance reduction techniques (Kool et al., 2019; Richter et al., 2020; Zhu et al., 2023). While prior work by Ahmadian et al. (2024) has shown that REINFORCE-based approaches can outperform more complex RL algorithms like PPO on LLM fine-tuning tasks like TL;DR, we find that a properly optimized version of PPO still out-performs a REINFORCE baseline. The second family is adaptive PG techniques that precondition the policy gradient (usually with the inverse of the Fisher Information Matrix) to ensure it is covariant to re-parameterizations of the policy, which include NPG (Kakade, 2001; Bagnell and Schneider, 2003) and its practical approximations like TRPO (Schulman et al., 2015a) and PPO (Schulman et al., 2017). Intuitively, the preconditioning ensures that we make small changes in terms of action distributions, rather than in terms of the actual policy parameters, leading to faster and more stable convergence. Unfortunately, computing and then inverting the Fisher Information Matrix is computationally intensive and therefore we often resort to approximations in practice, as done in TRPO. However, these approximations are still difficult to apply to large-scale generative models, necessitating even coarser approximations like PPO. In contrast, REBEL does not need any such approximations to be implemented at scale, giving us a much closer connection between theory and practice. Reward Regression. The heart of REBEL is a novel reduction from RL to iterative squared loss regression. While using regression to fit either the reward (Peters and Schaal, 2007) or the value (Peng et al., 2019) targets which are then used to extract a policy have previously been explored, our method instead takes a page from DPO (Rafailov et al., 2023) to implicitly parameterize the reward regressor in terms of the policy. This collapses the two stage procedure of prior methods into a single regression step. Preference Fine-Tuning (PFT) of Generative Models. RL has attracted renewed interest due to its central role in \u201caligning\u201d language models \u2013 i.e., adapting their distribution of prompt completions towards the set of responses preferred by human raters. One family of techniques for PFT, often referred to as Reinforcement Learning from Human Feedback (RLHF) involves first fitting a reward model (i.e. a classifier) to the human preference data and then using this model to provide reward values to a downstream RL algorithm (often PPO) (Christiano et al., 2017; Ziegler et al., 2020). LLMs fine-tuned by this procedure include GPT-N (OpenAI, 2023), Claude-N (Anthropic, 2024), and Llama-N (Meta, 2024). Similar approaches have proved beneficial for tasks like summarization (Stiennon et al., 2022), question answering (Nakano et al., 2022), text-to-image generation (Lee et al., 2023), and instruction following (Ouyang et al., 2022). Another family of techniques for PFT essentially treats the problem as supervised learning and uses a variety of ranking loss functions. It includes DPO (Rafailov et al., 2023), IPO (Azar et al., 2023), and KTO (Ethayarajh et al., 2023). These techniques are simpler to implement as they remove components like an explicit reward model, value network, and on-policy training from the standard RLHF setup. 
However, recent work finds their performance to be lesser than that of on-policy methods (Lambert et al., 2024; Tajwar et al., 2024), which agrees with our findings. This is perhaps caused by their lack of interaction during training, leading to the well-known covariate shift/compounding error issue (Ross et al., 2011; Swamy et al., 2021) and the associated lower levels of performance. The third family of PFT techniques combines elements from the previous two: it involves running an offline algorithm iteratively, collecting on-policy preference feedback from either a supervisor model (Rosset et al., 2024; Xiong et al., 2024; Guo et al., 2024) or from a preference model fit on human data 19 \f(Calandriello et al., 2024). All of these approaches can be considered instantiations of the general SPO reduction proposed by Swamy et al. (2024), which itself can be thought of as a preference-based variant of DAgger (Ross et al., 2011). Recent work by Tajwar et al. (2024) confirms the empirical strength of these techniques. Our approach fits best into this family of techniques \u2013 we also iteratively update our model by solving a sequence of supervised learning problems over on-policy datasets. However, REBEL comes with several key differentiating factors from the prior work. First, we can run REBEL with datasets consisting of a mixture of on-policy and off-policy data with strong guarantees, enabling hybrid training, as previously explored in the RL (Song et al., 2023b; Ball et al., 2023; Zhou et al., 2023) and inverse RL (Ren et al., 2024) literature. Second, unlike all of the aforementioned works that regularize to the initial policy \ud835\udf0b0 during updates, we perform conservative updates by regularizing \ud835\udf0b\ud835\udc61+1 to \ud835\udf0b\ud835\udc61. Thus, for the prior work, it is difficult to prove convergence or monotonic improvement as the current policy can just bounce around a ball centered at \ud835\udf0b0, a well-known issue in the theory of approximate policy iteration (Kakade and Langford, 2002; Munos, 2003). In contrast, by incorporating the prior policy\u2019s probabilities into our regression problem, we are able to prove stronger guarantees for REBEL. 7 Summary and Future Work In summary, we propose REBEL, an RL algorithm that reduces the problem of RL to solving a sequence of relative reward regression problems on iteratively collected datasets. In contrast to policy gradient approaches that require additional networks and heuristics like clipping to ensure optimization stability, REBEL requires that we can drive down training error on a least squares problem. This makes it strikingly simple to implement and scale. In theory, REBEL matches the best guarantees we have for RL algorithms in the agnostic setting, while in practice, REBEL is able to match and sometimes outperform methods that are far more complex to implement or expensive to run across both language modeling and guided image generation tasks. There are several open questions raised by our work. The first is whether using a loss function other than square loss (e.g. log loss or cross-entropy) could lead to better performance in practice (Farebrother et al., 2024) or tighter bounds (e.g. first-order / gap-dependent) in theory (Foster and Krishnamurthy, 2021; Wang et al., 2023a, 2024). The second is whether, in the general (i.e. non-utility-based) preference setting, the coverage condition assumed in our analysis is necessary \u2013 we conjecture it is. 
Relatedly, it would be interesting to explore whether using preference (rather than reward) models to provide supervision for REBEL replicates the performance improvements reported by Swamy et al. (2024); Munos et al. (2023). Third, while we focus primarily on the bandit setting in the preceding sections, it would be interesting to consider the more general RL setting and explore how offline datasets can be used to improve the efficiency of policy optimization via techniques like resets (Bagnell et al., 2003; Ross and Bagnell, 2014; Swamy et al., 2023; Chang et al., 2023, 2024).",
| "additional_info": [ |
| { |
| "url": "http://arxiv.org/abs/2302.14604v2", |
| "title": "IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas", |
| "abstract": "Achieving and maintaining cooperation between agents to accomplish a common\nobjective is one of the central goals of Multi-Agent Reinforcement Learning\n(MARL). Nevertheless in many real-world scenarios, separately trained and\nspecialized agents are deployed into a shared environment, or the environment\nrequires multiple objectives to be achieved by different coexisting parties.\nThese variations among specialties and objectives are likely to cause mixed\nmotives that eventually result in a social dilemma where all the parties are at\na loss. In order to resolve this issue, we propose the Incentive Q-Flow\n(IQ-Flow) algorithm, which modifies the system's reward setup with an incentive\nregulator agent such that the cooperative policy also corresponds to the\nself-interested policy for the agents. Unlike the existing methods that learn\nto incentivize self-interested agents, IQ-Flow does not make any assumptions\nabout agents' policies or learning algorithms, which enables the generalization\nof the developed framework to a wider array of applications. IQ-Flow performs\nan offline evaluation of the optimality of the learned policies using the data\nprovided by other agents to determine cooperative and self-interested policies.\nNext, IQ-Flow uses meta-gradient learning to estimate how policy evaluation\nchanges according to given incentives and modifies the incentive such that the\ngreedy policy for cooperative objective and self-interested objective yield the\nsame actions. We present the operational characteristics of IQ-Flow in Iterated\nMatrix Games. We demonstrate that IQ-Flow outperforms the state-of-the-art\nincentive design algorithm in Escape Room and 2-Player Cleanup environments. We\nfurther demonstrate that the pretrained IQ-Flow mechanism significantly\noutperforms the performance of the shared reward setup in the 2-Player Cleanup\nenvironment.", |
| "authors": "Bengisu Guresti, Abdullah Vanlioglu, Nazim Kemal Ure", |
| "published": "2023-02-28", |
| "updated": "2023-03-04", |
| "primary_cat": "cs.MA", |
| "cats": [ |
| "cs.MA", |
| "cs.AI", |
| "cs.GT", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "IQ-Flow: Mechanism Design for Inducing Cooperative Behavior to Self-Interested Agents in Sequential Social Dilemmas", |
| "main_content": "INTRODUCTION Social Dilemmas [16] emerge when self-interested parties have conflicting objectives. Greed or fear of being exploited drives the agents towards defecting, which results in worse outcomes for the whole group in comparison to outcomes that would come out of cooperation [11, 12]. This problem has many applications in computer science, economics and social sciences; hence, it is wellstudied under Game Theory using Matrix Game Social Dilemmas (MGSD) and their iterated extension Repeated Matrix Games [12]. Although MGSDs are useful for modelling social dilemmas in real world scenarios, they omit significant characteristics of real world social dilemmas, which are addressed by Sequential Social Dilemmas (SSD) due to their temporally extended structure [12]. Since cooperation and defection are defined for policies in SSD rather than elementary actions [12], how to induce cooperative behavior to agents in an SSD while the agents are concurrently learning is an open research question. Centralized training methods [5, 17, 18] are popular approaches in Multi Agent Reinforcement Learning (MARL) when cooperation is necessary. However, the centralized approaches involve a shared objective to optimize agents\u2019 policies and assume full control over agents\u2019 internal parameters and learning. Nevertheless, as the use of artificial intelligence becomes common and agents that are separately trained for different objectives are deployed in a shared environment [22], it will not be realistic to either expect no conflicting objectives or assume full control over agents\u2019 internal parameters and learning. Since it is not possible to guarantee the type, tasks and number of the deployed agents in an unrestricted environment, agents need to be able to continually learn and adapt to the environment while cooperating with each other. Therefore, in this work, we focus on independently learning self-interested agents in an SSD where the agents receive adaptive incentives in order to promote cooperation. There are different configurations for how agents can be incentivized during learning. Agents can give each other adaptive incentives to shape each other\u2019s behavior for their own benefit [4, 13], or there can be a central institution to provide the incentives to shape the agents\u2019 behavior for the welfare of the whole community [1, 22]. In this work, we adopt the latter approach and provide a mechanism that provides incentives to all of the agents in the system in order to prevent any undesirable outcome, such as tragedy of the commons due to defecting. While it might seem like a trivial problem to learn incentives for the mechanism, since providing the average reward to all agents would certainly remove arXiv:2302.14604v2 [cs.MA] 4 Mar 2023 \fAAMAS \u201923, May 29 \u2013 June 2, 2023, London, United Kingdom Bengisu Guresti, Abdullah Vanlioglu, and Nazim Kemal Ure the existing dilemma, it is shown to yield suboptimal results [15, 22]. Therefore, it is important to design a mechanism that promotes cooperation without incurring performance losses. Furthermore, promoting cooperation to artificial self-interested agents is not the only direction for mechanism design research. Mechanism design can also be used to model human incentives and solve human dilemmas such as determining tax rate for a higher social welfare [22]. In this work, we propose Incentive Q-Flow (IQ-Flow) algorithm to design incentive mechanisms for increasing social welfare and promoting cooperation. 
IQ-Flow aims to make the cooperative policy correspond to the self-interested policy of the agents by changing system\u2019s reward setup. IQ-Flow collects the experience obtained from agents into a replay buffer and trains critic networks to learn state-action values (Q-Values) for agents\u2019 self-interests and the group\u2019s collective interest. IQ-Flow parameterizes incentive function using meta-parameters and performs meta-gradient learning as in [3, 20\u201322] to update the incentive network. In order to learn incentive meta-parameters, IQ-Flow trains the critic using Offline Implicit Q-Learning [10] with the train set for multiple steps and obtains updated parameters, performs policy evaluation with the validation set, and updates the meta-parameters in the direction that makes the actions of the collaborative policy the greedy choice for self-interested agents\u2019 Q-Values. Our algorithm is distinguished from the existing incentive design methods by grounding itself on reward system shaping rather than opponent shaping. Using Offline Reinforcement Learning (Offline RL) with Implicit Q-Learning makes it possible to get a proximate estimate of Q-Values for as greedy as possible policies with selfinterested and collective interest objectives only using experience collected by external agents. This approach enables IQ-Flow to modify the reward system with the incentive function by getting a close enough estimate of how changing the incentive affects the expected future return brought about by the reward system. Using an offline method such as Implicit Q-Learning instead of standard Deep Q-Learning is justified by the fact that incentivizer critic has an indirect effect on recipient agents\u2019 policies and can only affect collecting experience indirectly. Furthermore, using Implicit Q-Learning also makes extending IQ-Flow to fully offline training simpler for future work. As opposed to opponent shaping based algorithms, IQ-Flow does not possess or make assumptions on any of the agents\u2019 internal parameters, learning algorithms, or hyperparameters which makes it independent from the agents in the environment except for the collected experience. Another key difference of IQ-Flow from existing work is that it does not require cost regularization to train an incentive mechanism in SSDs. Nevertheless, we find that including cost regularization improves IQ-Flow\u2019s performance as well. Finally, it should be noted that IQ-Flow does not learn a multi-agent policy or perform value factorization to determine the actions of a cooperative policy. This is due to the fact that the algorithm only needs to know the cooperative or selfish action of a specific agent when the actions of other agents are provided. 
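A high-level sketch of this training loop follows; every callable argument is a placeholder for a component described above, not the authors' API:

```python
def iq_flow_outer_loop(rollout_fn, agent_update_fn, critic_update_fn,
                       simulate_updates_fn, meta_update_fn, sample_fn,
                       num_episodes, k_inner_steps):
    """High-level sketch of the mechanism's training loop.
      rollout_fn()                  -> transitions for one episode, with
                                       incentives from the current eta
      agent_update_fn(transitions)  -> agents update with their own learners
      critic_update_fn(buffer)      -> off-policy IQL update of the critics
      simulate_updates_fn(batch, K) -> K simulated critic steps, kept
                                       differentiable w.r.t. eta
      meta_update_fn(theta_hat, batch) -> one meta-gradient step on eta
      sample_fn(buffer)             -> (train_batch, validation_batch)
    """
    buffer = []
    for _ in range(num_episodes):
        transitions = rollout_fn()
        buffer.extend(transitions)
        agent_update_fn(transitions)          # agents' private learning rules
        critic_update_fn(buffer)              # offline IQL on the replay buffer
        train_batch, val_batch = sample_fn(buffer)
        theta_hat = simulate_updates_fn(train_batch, k_inner_steps)
        meta_update_fn(theta_hat, val_batch)  # update incentive parameters eta
    return buffer
```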
Our contributions can be summarized as below: \u2022 Proposing reward system shaping instead of opponent shaping for incentive design; thus, instead of pushing agents towards a Nash-Equilibrium with cooperative outcomes, modifying the reward system such that rational agents are stuck in Nash-Equilibrium with cooperative outcomes \u2022 Extending incentive design framework to learn mechanisms off-policy using offline RL and replay buffer; thus, applying offline RL and replay buffer with meta-gradient learning for MARL for the first time to the best of our knowledge \u2022 Removing the requirement of accessing or making assumptions on agents\u2019 internal learning state for incentive design \u2022 Removing the requirement of cost regularization for incentive design in SSDs We illustrate how IQ-Flow operates for Iterated Matrix Games in Iterated Prisoner\u2019s Dilemma, Iterated Chicken Game and Iterated Stag Hunt. We further evaluate the performance of our algorithm in the common benchmarks Escape Room [21] and SSD-Cleanup [7, 8, 19] with 2 Players. We demonstrate that it outperforms the state-of-theart incentive design algorithm ID and perform ablation studies for IQ-Flow. We further demonstrate that the pretrained mechanism, learned by IQ-Flow, leads to significantly better learning performance than using a shared reward setup. We provide the code for our implementation and experiments at https://github.com/dataand-decision-lab/IQ-Flow.git. 2 RELATED WORK Centralized training methods in MARL such as COMA [5], VDN [18], and QMIX [17] are successful at optimizing all agents\u2019 policies or factorize value functions to achieve a common objective. However, SSD problems can not be approached as fully cooperative problems due to the nature of the problem emerging from coexisting mixed motives and diverse objectives. Hence, decentralized training methods have been developed along with opponent shaping and incentivization practices [4, 8, 13, 21] in order to model and resolve social dilemma problems. Opponent shaping was proposed by [4] to provide independent learners with the ability to shape each other\u2019s behavior in the face of a mixed motive. LOLA [4] agents can access the policy parameters of their opponents and actively learn in the direction that improves their own returns by considering how their opponent\u2019s future policy is expected to change. The disadvantage of the LOLA is that it can adopt arrogant behavior, as claimed by [13] and fixed with a new algorithm named SOS. SOS [13] algorithm is similar to LOLA in adopting opponent shaping, but offers a more robust algorithm by removing the arrogant behavior and inheriting the guarantees of LookAhead [23] on avoiding strict saddles in all differentiable games. Incentivization practices can be exemplified by Social Influence [8], AMD [1], LIO [21] and ID [22]. Social Influence [8] rewards the agent action that has the most impact on others\u2019 behavior as an intrinsic reward. In LIO [21] an agent learns to use incentive reward that affects the learning update of opponents\u2019 policies and changes the objectives of the recipient agents in the direction that improves incentivizer agents\u2019 objectives by using meta-gradient learning. AMD [1] uses a central planner agent that learns how to set an incentive reward according to agents expected policy \fIQ-Flow AAMAS \u201923, May 29 \u2013 June 2, 2023, London, United Kingdom update in the next step. 
[24] presents a two-dimensional grid world dynamic economic environment Gather-Trade-Build game, where agents collect resources, earn coins by building houses with these materials, and trade resources; moreover, there is a central taxplanner agent who learns to improve the trade-off between income equality and productivity by setting taxes that correspond to a payoff from the agent\u2019s income. [22] use same environment and propose meta-gradient approach to train Incentive Designer (ID), the central planning analogue of LIO, as an incentive mechanism. Mechanism design can also be used to model human incentives and solve human dilemmas such as determining tax rate for a higher social welfare [22] using the simulation environment AI Economist, proposed in [24]. This is a good illustration of how it can be used as a recommendation system for solving social problems in the future. Because we adopt the approach of directly incentivizing the agents using an extra additive reward and the economic simulation environment from Zheng et al. [24] requires an indirect approach such as determining the tax policy, we do not address the taxation problem in AI Economist in this work and leave it to future work. Incentivization practices that use meta-gradient learning to shape opponent behavior, such as LIO and ID are the approaches closest to our learning algorithm. However, while in LIO and ID, the metagradient based incentive mechanism performs on-policy learning [21, 22], IQ-Flow\u2019s incentive mechanism learns in an off-policy manner with a replay buffer. A prior work that uses off-policy learning with a replay buffer for the first time in meta-gradient learning is MetaL [3]. Unlike the opponent shaping based methods, IQ-Flow does not need access or modelling of other agents\u2019 parameters. Instead of focusing on how the behavior of agents change, IQ-Flow focuses on rendering cooperative actions in the Nash-Equilibrium for the possible states. Since non-cooperation would incur a loss for all agents, IQ-Flow tasks the agents\u2019 to optimize their returns and choose cooperation; IQ-Flow does not keep track of how agents\u2019 behavior policies change. This is due to training the incentivizer critic by offline Implicit Q-Learning, which is the key difference from LIO and ID that use online incentivizer training. 3 BACKGROUND In this work, we assume a Partially Observable MDP (POMDP) where \ud835\udc41agents learn independently. S denotes the global state of the environment, \ud835\udc4e\ud835\udc56\u2208\ud835\udc4edenote action of i\u2019th agent in joint action \ud835\udc4e, and \ud835\udc56\u2212denotes all agent indices except \ud835\udc56with index set denoted as I = {0, 1, ..., \ud835\udc41\u22121} . Observation space of agent \ud835\udc56 is O\ud835\udc56= {\ud835\udc5c\ud835\udc56|\ud835\udc60\u2208S,\ud835\udc5c\ud835\udc56= \ud835\udc42(\ud835\udc60,\ud835\udc56)} with the observation function \ud835\udc42: S\u00d7I \u2192R\ud835\udc51that maps the observations to the d-dimensional space. State, observation, action and reward at time step \ud835\udc58are denoted as \ud835\udc60\ud835\udc58, \ud835\udc5c\ud835\udc58, \ud835\udc4e\ud835\udc58, \ud835\udc5f\ud835\udc58respectively along with time horizon \ud835\udc47and discount factor \ud835\udefe. We have the transition function of the environment T : S \u00d7A\ud835\udc41\u2192P(\ud835\udc46) with P denoting the probability distribution over S and batch length \ud835\udc59\ud835\udc35. 
Joint reward provided by the environment is \ud835\udc45\ud835\udc52\ud835\udc5b\ud835\udc63: S \u00d7 A\ud835\udc41\u2192R\ud835\udc41where each agent receives a specific reward \ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63: S \u00d7 A\ud835\udc41\u2192R. The incentive reward that can be given to an agent is constrained according to the environment as U \u2282R. Thus, the joint incentives provided by the mechanism and parametrized by \ud835\udf02is \ud835\udc45\ud835\udc56\ud835\udc5b\ud835\udc50,\ud835\udf02: S \u00d7 A\ud835\udc41\u2192U\ud835\udc41\u2282R\ud835\udc41where each agent receives a specific incentive \ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50,\ud835\udf02: S \u00d7 A\ud835\udc41\u2192U \u2282R. We define the total reward an agent receives which directs that agent\u2019s behavior policy as \ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc51= \ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63+ \ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50,\ud835\udf02. We further define the sum of the rewards that environment provides to all agents as \ud835\udc45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d= \u00cd\ud835\udc41\u22121 \ud835\udc56\ud835\udc51=0 \ud835\udc45\ud835\udc56\ud835\udc51 \ud835\udc52\ud835\udc5b\ud835\udc63. It should be noted that \ud835\udc45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5dis defined for all agents with the same value. We define three different policies that are necessary for our problem case and solution method. \u2022 \ud835\udf45\ud835\udc56 \ud835\udc4f\u2208\ud835\udf45\ud835\udc4f: i\u2019th agent\u2019s behavior policy which is optimized to maximise \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60) := E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc51,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60 i \u2022 \ud835\udf45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d\u2208\ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d: i\u2019th agent\u2019s cooperative policy which is optimized to maximise \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d(\ud835\udc60) := E\ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60 i \u2022 \ud835\udf45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63\u2208\ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63: i\u2019th agent\u2019s environment policy which is optimized to maximise \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60) := E\ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63 h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60 i We further denote the different objectives that are necessary for our problem case and solution method as follows: \u2022 Action-values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc4faccounting for the individual total reward \ud835\udc45\ud835\udc56 
\ud835\udc56\ud835\udc5b\ud835\udc51 \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60,\ud835\udc4e) = E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc51,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60,\ud835\udc4e\ud835\udc58= \ud835\udc4e i \u2022 Action-values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5daccounting for the cooperative reward \ud835\udc45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d(\ud835\udc60,\ud835\udc4e) = E\ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60,\ud835\udc4e\ud835\udc58= \ud835\udc4e i \u2022 Action-values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63accounting for the individual environment reward \ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63 \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60,\ud835\udc4e) = E\ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63 h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60,\ud835\udc4e\ud835\udc58= \ud835\udc4e i \u2022 Values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc4faccounting for the individual environment reward \ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63 \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60) = E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60 i \u2022 Action-values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc4faccounting for the individual environment reward \ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63 \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60,\ud835\udc4e) = E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60,\ud835\udc4e\ud835\udc58= \ud835\udc4e i \u2022 Values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc4faccounting for the individual incentive reward \ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc50(\ud835\udc60) = E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60 i \u2022 Action-values of \ud835\udc56\u2019th agent under \ud835\udf45\ud835\udc4faccounting for the individual incentive reward \ud835\udc45\ud835\udc56 
\ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc50(\ud835\udc60,\ud835\udc4e) = E\ud835\udf45\ud835\udc4f h\u00cd\ud835\udc47\u22121 \ud835\udc61=\ud835\udc58\ud835\udefe\ud835\udc61\u2212\ud835\udc58\ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50,\ud835\udc61|\ud835\udc60\ud835\udc58= \ud835\udc60,\ud835\udc4e\ud835\udc58= \ud835\udc4e i Table 1: Matrix Game payoff table C D C R, R S, T D T, S P, P Social Dilemma conditions. According to preliminary work in social dilemmas [12, 14], a Matrix Game such as Table 1 is a Social Dilemma if it satisfies the following conditions: (1) \ud835\udc45> \ud835\udc43 \fAAMAS \u201923, May 29 \u2013 June 2, 2023, London, United Kingdom Bengisu Guresti, Abdullah Vanlioglu, and Nazim Kemal Ure (2) \ud835\udc45> \ud835\udc46 (3) 2\ud835\udc45> \ud835\udc47+ \ud835\udc46 (4) \ud835\udc47> \ud835\udc45or \ud835\udc43> \ud835\udc46 In this canonical Matrix Game in Table 1 actions C and D represent cooperate and defect actions as convention dictates [14]. We adopt the definitions proposed by [14] and we denote R, P, T and S respectively as reward from mutual cooperation, punishment from mutual defection, temptation reward from defecting while the other player cooperates and sucker reward from cooperating while the other player defects. Offline Implicit Q-Learning. Offline Implicit Q-Learning is performed to learn critics as proposed by [10] for dataset D, value parameters \ud835\udf13, critic parameters \ud835\udf03, target critic parameters \u00af \ud835\udf03, state \ud835\udc60, action \ud835\udc4e, next state \ud835\udc60 \u2032, discount \ud835\udefe, expectile \ud835\udf0f\ud835\udc52\ud835\udc65\ud835\udc5d\u2208(0, 1) with the following loss equations: \ud835\udc3f\ud835\udf0f\ud835\udc52\ud835\udc65\ud835\udc5d 2 (\ud835\udc62) = \f \f\ud835\udf0f\ud835\udc52\ud835\udc65\ud835\udc5d\u22121(\ud835\udc62< 0) \f \f\ud835\udc622 \ud835\udc3f\ud835\udc49(\ud835\udf13) = E(\ud835\udc60,\ud835\udc4e)\u223cD h \ud835\udc3f\ud835\udf0f\ud835\udc52\ud835\udc65\ud835\udc5d 2 (\ud835\udc44\u00af \ud835\udf03(\ud835\udc60,\ud835\udc4e) \u2212\ud835\udc49\ud835\udf13(\ud835\udc60)) i \ud835\udc3f\ud835\udc44(\ud835\udf03) = E(\ud835\udc60,\ud835\udc4e,\ud835\udc60\u2032)\u223cD h (\ud835\udc5f(\ud835\udc60,\ud835\udc4e) + \ud835\udefe\ud835\udc49\ud835\udf13(\ud835\udc60 \u2032) \u2212\ud835\udc44\ud835\udf03(\ud835\udc60,\ud835\udc4e))2i [10] (1) We extend offline Implicit Q-Learning to our multi-agent case in order to approximate \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51, \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, and \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63. We give the corresponding losses in Appendix A.1. We denote the training batch with B\ud835\udc47and validation batch B\ud835\udc49. 4 INCENTIVE Q-FLOW IQ-Flow bases itself on reversing the fourth social dilemma condition and make \ud835\udc47< \ud835\udc45and \ud835\udc43< \ud835\udc46in Table 1. When \ud835\udc45> \ud835\udc47and \ud835\udc46> \ud835\udc43, choosing \ud835\udc36over \ud835\udc37becomes the greedy policy automatically without regard to the opponents\u2019 policy. Thus, IQ-Flow aims to make the action of the cooperative policy the greedy choice for the incentivized behavior policy using meta-gradients as we defined in background in section 3. 
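For reference, the Implicit Q-Learning losses of Eq. 1, which IQ-Flow applies to each of its critics, translate directly into code; the following is a minimal sketch with assumed tensor arguments:

```python
import torch

def expectile_loss(u, tau):
    """L2^tau(u) = |tau - 1(u < 0)| * u^2, as in Eq. 1."""
    weight = torch.abs(tau - (u < 0).float())
    return weight * u ** 2

def iql_losses(q_target, v_pred, q_pred, reward, v_next, gamma, tau):
    """Value and critic losses of Implicit Q-Learning (Eq. 1).

    q_target : Q_{theta_bar}(s, a) from the target critic (no grad).
    v_pred   : V_psi(s) from the value network being trained.
    q_pred   : Q_theta(s, a) from the critic being trained.
    reward   : r(s, a); v_next : V_psi(s'), treated here as a constant.
    """
    loss_v = expectile_loss(q_target - v_pred, tau).mean()
    loss_q = ((reward + gamma * v_next - q_pred) ** 2).mean()
    return loss_v, loss_q
```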
The necessity of using meta-gradients for estimating how QValues change according to \ud835\udf02comes from the fact that it is not possible to directly estimate the long term value change as a result of a change of incentives. Let the optimal actions of the cooperative policy and incentivized behavior policy be defined respectively as: \ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d= argmax \ud835\udc4e\ud835\udc56\ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d(\ud835\udc60,\ud835\udc4e\ud835\udc56\u2212, .) \ud835\udc4e\ud835\udc56 \ud835\udc4f= argmax \ud835\udc4e\ud835\udc56\ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60,\ud835\udc4e\ud835\udc56\u2212, .) (2) Let the optimal actions for the self-interested policy of agents under standard environment conditions with no extra incentives be defined as: \ud835\udc4e\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63= argmax \ud835\udc4e\ud835\udc56\ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60,\ud835\udc4e\ud835\udc56\u2212, .) (3) In order to determine \ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udc4e\ud835\udc56 \ud835\udc4f, and \ud835\udc4e\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63, IQ-Flow needs to estimate \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51, and \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63. IQ-Flow approximates \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51, and \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63respectively by \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d \u0000\ud835\udf03\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d \u0001, \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51), and \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udf03\ud835\udc52\ud835\udc5b\ud835\udc63). An important point is that since incentive function is dynamic, \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) and \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc51) need to be updated with the \ud835\udc5f\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50inferred from the last \ud835\udf02. IQ-Flow updates the critic parameters \ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc51and \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51, respectively for \ud835\udc49\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60,\ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc51) and \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60,\ud835\udc4e,\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51), with Implicit Q-Learning extended to MARL with the equations in Appendix A.1. 
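Given these critics, the greedy actions of Eqs. 2-3 and the resulting dilemma indicator can be read off directly; the sketch below uses tabular tensors as stand-ins for the critic networks:

```python
import torch

def greedy_actions_and_dilemma(q_coop, q_ind, q_env):
    """Greedy actions of Eqs. 2-3 and the dilemma indicator used later by
    the meta-loss.  Each q_* has shape (batch, n_actions) and holds agent
    i's Q-values with the other agents' actions a^{i-} held fixed."""
    a_coop = q_coop.argmax(dim=-1)   # cooperative policy's action
    a_b = q_ind.argmax(dim=-1)       # incentivized behavior policy's action
    a_env = q_env.argmax(dim=-1)     # self-interested action, no incentives
    # A state contributes to the meta-loss only while a dilemma remains:
    dilemma = a_b != a_coop
    return a_coop, a_b, a_env, dilemma
```

States where a^i_b differs from a^i_coop are exactly the ones the meta-loss below operates on.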
In order to update \ud835\udf02, we first update our predefined critics with learning rate \ud835\udefd\ud835\udc56\ud835\udc5b\ud835\udc51for \ud835\udc3esteps. This update can be given as following for the Stochastic Gradient Descent (SGD) optimizer: \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51\u2190\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51+ \ud835\udefd\ud835\udc56\ud835\udc5b\ud835\udc51\u2207\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51 1 \ud835\udc59\ud835\udc35\ud835\udc41 \ud835\udc59\ud835\udc35\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 \u0010 \ud835\udc5f\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63(\ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc58) + \ud835\udc5f\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50(\ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc58,\ud835\udf02) + \ud835\udefe\ud835\udc49\ud835\udc56 \ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60 \u2032 \ud835\udc58) \u2212\ud835\udc44\ud835\udc56 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc58) \u00112 (4) Since we want to update \ud835\udf02in the direction that flows Q-Values from actions of defective policies to actions of cooperative policies, we regard the \ud835\udc4e\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5das target labels in a classification problem and use a modified version of cross-entropy loss. The necessity of the modification in the cross-entropy loss is because we only want the gradient flow as long as there is a dilemma in the system so that there is no unnecessary and excessive incentivization. We identify an action that causes a dilemma as \ud835\udc4e\ud835\udc56 \ud835\udc4f\u2260\ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d. Therefore we further mask our meta-loss for the case when there is no estimated dilemma. In order to get a probabilistic view of Q-Values and use them in the cross-entropy loss, we pass them through a softmax layer. Finally our meta-loss can be defined as follows: \ud835\udc3f\ud835\udc5a \ud835\udf02( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) := \u2212 1 \ud835\udc59\ud835\udc35\ud835\udc41 \ud835\udc59\ud835\udc35\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 |\ud835\udc34|\u22121 \u2211\ufe01 \u02dc \ud835\udc4e=0 1 \u0010 \u02dc \ud835\udc4e= \ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc58 \u0011 \u00d7 \u0010 1 \u22121(\ud835\udc4e\ud835\udc56 \ud835\udc4f,\ud835\udc58= \ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc58) \u0011 log \u0010 \ud835\udf0e \u0010 \ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51 \u0010 \ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc56,\ud835\udc4e\ud835\udc56\u2212 \ud835\udc58, \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51 \u0011\u0011\u0011 \f \f \f\ud835\udc4e\ud835\udc56= \u02dc \ud835\udc4e \ud835\udf0e(\ud835\udc67\ud835\udc56) = \ud835\udc52\ud835\udc67\ud835\udc56 \u00cd \ud835\udc57\ud835\udc52\ud835\udc67\ud835\udc57 (5) Since we do not want to give an unnecessary incentive if there is no dilemma in the original case without extra incentives, we use another mask which determines if \ud835\udc4e\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63= \ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d. 
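The core meta-loss of Eq. 5, before the cost-regularization terms that follow, can be sketched as a masked cross-entropy over softmaxed Q-values (shapes are assumptions; in IQ-Flow the Q-values come from the simulated critic parameters so that gradients reach η):

```python
import torch
import torch.nn.functional as F

def iq_flow_meta_loss(q_ind_hat, a_coop, a_b):
    """Masked cross-entropy meta-loss in the spirit of Eq. 5.

    q_ind_hat : (batch, n_agents, n_actions) Q-values under the simulated
                critic parameters theta_hat_ind (differentiable w.r.t. eta).
    a_coop    : (batch, n_agents) greedy actions of the cooperative critic.
    a_b       : (batch, n_agents) greedy actions of the incentivized critic.
    Both action tensors are int64 indices.
    """
    log_probs = F.log_softmax(q_ind_hat, dim=-1)            # log of sigma(Q)
    ce = -log_probs.gather(-1, a_coop.unsqueeze(-1)).squeeze(-1)
    mask = (a_b != a_coop).float()    # only states with a remaining dilemma
    return (ce * mask).sum() / (a_coop.shape[0] * a_coop.shape[1])
```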
Therefore we add a cost regularization term to the meta loss with cost coefficient \ud835\udc501. \ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc611 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) := 1 \ud835\udc59\ud835\udc35\ud835\udc41 \ud835\udc59\ud835\udc35\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 |\ud835\udc34|\u22121 \u2211\ufe01 \ud835\udc4e\ud835\udc50\ud835\udc61=0 1(\ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc58= \ud835\udc4e\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc58) \u00d7 \f \f \f\ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc56,\ud835\udc4e\ud835\udc56\u2212 \ud835\udc58\u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51))) \f \f \f \f \f \f\ud835\udc4e\ud835\udc56=\ud835\udc4e\ud835\udc50\ud835\udc61 (6) If the incentives become too high prematurely, they can have a destructive effect, especially if they are the wrong incentives. Therefore we add another cost regularization term to the meta loss with cost coefficient \ud835\udc502. Although our experiments show that these cost regularization terms are not required to get a successful performance, especially in simple problems, we find that including them leads to higher performance. \fIQ-Flow AAMAS \u201923, May 29 \u2013 June 2, 2023, London, United Kingdom \ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc612 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) := 1 \ud835\udc59\ud835\udc35\ud835\udc41 \ud835\udc59\ud835\udc35\u22121 \u2211\ufe01 \ud835\udc58=0 \ud835\udc41\u22121 \u2211\ufe01 \ud835\udc56=0 |\ud835\udc34|\u22121 \u2211\ufe01 \ud835\udc4e\ud835\udc50\ud835\udc61=0 \u0010 1 \u22121(\ud835\udc4e\ud835\udc56 \ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d,\ud835\udc58= \ud835\udc4e\ud835\udc56 \ud835\udc52\ud835\udc5b\ud835\udc63,\ud835\udc58) \u0011 \u00d7 \f \f \f\ud835\udc44\ud835\udc56 \ud835\udf45\ud835\udc4f,\ud835\udc56\ud835\udc5b\ud835\udc51(\ud835\udc60\ud835\udc58,\ud835\udc4e\ud835\udc56,\ud835\udc4e\ud835\udc56\u2212 \ud835\udc58\u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51))) \f \f \f \f \f \f\ud835\udc4e\ud835\udc56=\ud835\udc4e\ud835\udc50\ud835\udc61 (7) Our final incentive loss for \ud835\udf02is given below as \ud835\udc3f\ud835\udc45\ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51): \ud835\udc3f\ud835\udc45\ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) = \ud835\udc3f\ud835\udc5a \ud835\udf02( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) + \ud835\udc501\ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc611 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) + \ud835\udc502\ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc612 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) (8) If we use \ud835\udefcas learning rate for \ud835\udf02, set number of critic update steps \ud835\udc3eas 1, and assume SGD for optimizer, the update becomes: \u02c6 \ud835\udf02\u2190\ud835\udf02+ \ud835\udefc\u2207\ud835\udf02\ud835\udc3f\ud835\udc45\ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) \u2207\ud835\udf02\ud835\udc3f\ud835\udc45\ud835\udc56\ud835\udc5b\ud835\udc50 \ud835\udf02 ( \u02c6 
\ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) = \ud835\udf15\ud835\udc3f\ud835\udc5a \ud835\udf02( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) + \ud835\udc501\ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc611 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) + \ud835\udc502\ud835\udc3f\ud835\udc50\ud835\udc5c\ud835\udc60\ud835\udc612 \ud835\udf02 ( \u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51) \ud835\udf15\u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51 \ud835\udf15\u02c6 \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51 \ud835\udf15\ud835\udf02 (9) The diagram for how \ud835\udf02meta-parameter is updated is given below in Figure 1: Figure 1: Meta-update diagram for incentive parameter \ud835\udf02 The pseudocode of the algorithm is given below in Algorithm 1. Algorithm 1 Incentive Q-Flow procedure Train IQ-Flow Mechanism(\ud835\udf190, \ud835\udf191,...,\ud835\udf19\ud835\udc41\u22121, args) \u22b2 Input: policy of all agents, hyperparameters Initialize \ud835\udf02, \ud835\udf03\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udf03\ud835\udc52\ud835\udc5b\ud835\udc63, \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51, \ud835\udf13\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udf13\ud835\udc52\ud835\udc5b\ud835\udc63, \ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc51 \ud835\udc5b\ud835\udc62\ud835\udc5a_\ud835\udc52\ud835\udc5d\ud835\udc56\ud835\udc60\ud835\udc5c\ud835\udc51\ud835\udc52\u21900 for number of episodes to train do Run agents with policies \ud835\udf190, \ud835\udf191,...,\ud835\udf19\ud835\udc41\u22121 for an episode with incentives given by \ud835\udf02 \ud835\udc5b\ud835\udc62\ud835\udc5a_\ud835\udc52\ud835\udc5d\ud835\udc56\ud835\udc60\ud835\udc5c\ud835\udc51\ud835\udc52\u2190\ud835\udc5b\ud835\udc62\ud835\udc5a_\ud835\udc52\ud835\udc5d\ud835\udc56\ud835\udc60\ud835\udc5c\ud835\udc51\ud835\udc52+ 1 Add the transitions from episode to replay buffer of IQFlow Update agent policies \ud835\udf190, \ud835\udf191,...,\ud835\udf19\ud835\udc41\u22121 according to their private learning rules Update \ud835\udf03\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udf03\ud835\udc52\ud835\udc5b\ud835\udc63, \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51, \ud835\udf13\ud835\udc50\ud835\udc5c\ud835\udc5c\ud835\udc5d, \ud835\udf13\ud835\udc52\ud835\udc5b\ud835\udc63, \ud835\udf13\ud835\udc56\ud835\udc5b\ud835\udc50using equations in 10 sample B\ud835\udc47and B\ud835\udc49for meta-update simulate mechanism critic update for \ud835\udc3etimes using B\ud835\udc47, \ud835\udf03\ud835\udc56\ud835\udc5b\ud835\udc51 Update \ud835\udf02using B\ud835\udc49(with equations 5 or 9) end for end procedure 5 EXPERIMENTS 5.1 Iterated Matrix Games We demonstrate how IQ-Flow operates on the iterated extension of the three canonical Matrix Games, which are Prisoner\u2019s Dilemma, Chicken Game, and Stag Hunt. The payoff matrices for these games are given in Table 2, Table 3, and Table 4. We extend the implementation used by LOLA [4] and use the policy gradient agents for the independent learners as used by LIO [21] and ID [22]. The incentive reward is set as \ud835\udc45\ud835\udc56 \ud835\udc56\ud835\udc5b\ud835\udc50\u2208(0, 2) to provide only sufficient incentivization and number of iterations is set as 20 for all experiments. 
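As a quick sanity check, the single-step payoff matrices of these three games (Tables 2-4 below) can be verified against the social dilemma conditions listed in Section 3:

```python
def is_social_dilemma(R, S, T, P):
    """The four Matrix Game social-dilemma conditions from Section 3:
    R > P, R > S, 2R > T + S, and (T > R or P > S)."""
    return R > P and R > S and 2 * R > T + S and (T > R or P > S)

# Row-player payoffs (R, S, T, P) read off Tables 2-4 below.
games = {
    "Prisoner's Dilemma": (3, 0, 4, 1),
    "Chicken Game":       (3, 1, 4, 0),
    "Stag Hunt":          (4, 0, 3, 1),
}
for name, (R, S, T, P) in games.items():
    print(name, is_social_dilemma(R, S, T, P))   # all three print True
```

All three games satisfy the conditions; IQ-Flow's aim, per Section 4, is to add incentives that reverse the fourth condition so that T < R and P < S hold for the incentivized Q-values.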
Since the experimental purpose here is illustration rather than comparison, hyperparameter tuning was not performed to optimize learning performance, and cost regularization was not added to the meta-objective. We demonstrate how IQ-Flow changes the payoff matrices of the games in Figure 2 and Appendix C. The first column in Figure 2 shows the original payoffs; the second column shows the total payoffs modified by IQ-Flow, where each row corresponds to the mechanism state after 30, 210, 390, 570, and 750 training episodes, respectively. The first rows of the figures in Appendix C show the original payoffs, while the other rows correspond to the state (the initial state, and the states following the previous joint actions CC, CD, DC, and DD). The columns show the total payoff output of the mechanism after 30, 210, 390, 570, and 750 training episodes, respectively. We depict how IQ-Flow changes the estimated Q-values of the games in Figure 3 and Appendix D. The first columns in Figure 3 and in the figures of Appendix D show the Q-values without the mechanism incentives; the other columns show the Q-values with the mechanism incentives, where each row corresponds to the mechanism state after 30, 210, 390, 570, and 750 training episodes, respectively. These outputs are given for the initial state.

Table 2: Prisoner's Dilemma
| PD | C2 | D2 |
| C1 | (3, 3) | (0, 4) |
| D1 | (4, 0) | (1, 1) |

Table 3: Chicken Game
| Chicken | C2 | D2 |
| C1 | (3, 3) | (1, 4) |
| D1 | (4, 1) | (0, 0) |

Table 4: Stag Hunt
| Stag Hunt | C2 | D2 |
| C1 | (4, 4) | (0, 3) |
| D1 | (3, 0) | (1, 1) |

Figure 2: IPD player 1 payoff matrices.

In addition to the detailed payoff and Q-value charts, we provide a plot for the iterated matrix games showing how the inequalities turn from $T > R$ and/or $P > S$ to $T < R$ and $P < S$ in Figure 4 and Appendix E as training progresses. We highlight that $R$, $T$, $P$, and $S$ here denote the corresponding estimated Q-values for all states, not the single-step payoffs.

Figure 3: IPD player 1 estimated Q-value matrices (left: without incentives, right: with incentives).

Figure 4: IPD $R - T$ and $S - P$ plot for Q-values.

Consequently, our results clearly demonstrate that IQ-Flow is capable of removing the social dilemma from the Iterated Prisoner's Dilemma, the Chicken Game, and Stag Hunt, since we obtain $T < R$ and $P < S$ in the end for all of the cases, both in single-step payoffs and in estimated future returns.

5.2 Escape Room
Escape Room is a small $N$-player Markov game proposed by [21]. The game contains three states: the initial, lever, and door states; agents spawn in the initial state and aim to reach the door, which is the terminal state [21]. However, $M$ agents must cooperate by pulling the lever at the same time to let the others out through the door; an agent who exits through the door receives an individual reward of +10, while pulling the lever costs -1 [21]. Therefore, in order to increase the total return, some of the agents must give up their own interest and act cooperatively.
We extend the implementation used by LIO [21], benefit from [9], and use policy gradient agents as the independent learners, as done by LIO [21] and ID [22]. We use the same experimental setup as ID [22] and evaluate IQ-Flow's performance, along with an ablation study given in Appendix B.

Figure 5: ER(5, 2) results. Figure 6: ER(10, 5) results.

We give the results of the Escape Room (5, 2) experiment, as in [22], in Figure 5. The baseline without incentivization, denoted PG, performs poorly as expected. ID reaches the optimal total return of the environment, which is 28. IQ-Flow performs best, reaching 28 faster and with better performance early in training. The results of the Escape Room (10, 5) experiment are given in Figure 6. The baseline without incentivization, denoted PG, again performs poorly as expected. Although Yang et al. [22] show that ID reaches the optimal return of 45, we could not replicate those results with our implementation and obtained ID performance similar to PG. IQ-Flow reaches the optimal return of 45 faster than ID in both our implementation and the results reported in [22]. Since the ablation study in Appendix B does not yield distinctive results, we focus the ablation analysis on the 2-Player Cleanup environment.

5.3 SSD Environment: 2-Player Cleanup
Cleanup [7] is a grid-world social dilemma environment whose objective is to collect apples from a field, each giving a +1 reward. The respawn time of the apples depends on the amount of waste, which increases over time, and if the amount of waste exceeds a threshold, no apples can spawn [7]. Therefore, agents need to clean the waste using their cleaning-beam skill for apples to keep spawning, even though staying in the apple field yields more individual reward. We use decentralized independent actor-critic learners and the same environment setup with 2 agents, which we call the 2-Player Cleanup environment, as used by LIO [21] and ID [22] for the 7 × 7 map.

Figure 7: 7 × 7 experiment result.

It can be seen from Figure 7 that IQ-Flow performs best, reaching the return upper bound identified by the shared-reward agents' performance, as in LIO [21]. Decentralized actor-critic agents perform poorly as expected, while decentralized actor-critic agents with a shared centralized reward set the return upper bound. Although ID performs close to the return upper bound in both our implementation and the results provided by Yang et al. [22], it fails to reach it. It should also be noted that while IQ-Flow performs best and reaches the upper bound on good runs, it has high variance towards the end of training when trained naively. This variance occurs because performance degrades when the actor-critic agents' policies become too disconnected from the mechanism. Therefore, to obtain stable training, we reset the actor-critic agents in the environment every 1000 episodes. Since the actor-critic agents start learning from scratch after each reset, we sample evaluation results every 500 episodes to filter out the pseudo-loss in performance caused by learning from scratch and to make a fair comparison with the other algorithms.

Figure 8: Ablation results.

The ablation results for 2-Player Cleanup are given in Figure 8. IQ-Flow denotes the standard algorithm with both cost regularization terms, cost 1 and cost 2.
IQ-Flow C denotes the case where cost coefficient 1 is set to 0, and IQ-Flow C2 denotes the case with no cost regularization. The results demonstrate that having cost regularization with both coefficients greater than 0 indeed improves learning performance. The incentive rewards provided by ID and IQ-Flow in the 2-Player Cleanup environment are given in Figure 9.

Figure 9: Incentive rewards given by IQ-Flow and ID.

The incentive rewards given by IQ-Flow and ID to agent 1 and agent 2 are presented in Figure 9. The incentive rewards given to agent 1 (A1, the cleaner) are close to each other for IQ-Flow and ID, but the incentive rewards given to agent 2 (A2, the harvester) are dissimilar. While ID learns to give an unnecessary incentive to the harvester agent, IQ-Flow learns not to give any unnecessary incentive to it. We attribute this to IQ-Flow's capacity to infer when there is a dilemma and when there is not.

Figure 10: Comparison between the pretrained IQ-Flow mechanism and the shared-reward setup.

Finally, we demonstrate in Figure 10 how a reward system supported by a pretrained IQ-Flow incentive mechanism performs in comparison to a shared-reward system. Although the shared-reward case with actor-critic agents sets the return upper bound for the 2-Player Cleanup environment, the incentivized case with a pretrained and frozen IQ-Flow mechanism and actor-critic agents yields much faster learning and higher performance.

6 CONCLUSION
In conclusion, we presented a new algorithm, IQ-Flow, for designing incentivizers that remove a social dilemma from an environment without any need for opponent modelling or access to internal agent parameters. IQ-Flow is fully decentralized and uses the offline RL method Implicit Q-Learning to evaluate policies that are not available in the experienced data. We demonstrated how IQ-Flow modifies the payoff matrices and estimated Q-values of the iterated matrix games for both players, and that it outperforms ID on the existing sequential social dilemma benchmarks. We also demonstrated how much more efficient the reward setup produced by IQ-Flow is than the shared-reward case. A promising direction for future work is to learn incentive designers with IQ-Flow from offline data with fully offline training, so that we can obtain a method for removing dilemmas from real-world settings that we cannot simulate.

ACKNOWLEDGMENTS
We thank Tolga Ok for the valuable discussions throughout this research. Bengisu Guresti thanks the DeepMind scholarship program for the support during her studies. This work is supported by the Scientific Research Project Unit (BAP) of Istanbul Technical University, Project Number: MOA-2019-42321."
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.02017v1", |
| "title": "Value-Aided Conditional Supervised Learning for Offline RL", |
| "abstract": "Offline reinforcement learning (RL) has seen notable advancements through\nreturn-conditioned supervised learning (RCSL) and value-based methods, yet each\napproach comes with its own set of practical challenges. Addressing these, we\npropose Value-Aided Conditional Supervised Learning (VCS), a method that\neffectively synergizes the stability of RCSL with the stitching ability of\nvalue-based methods. Based on the Neural Tangent Kernel analysis to discern\ninstances where value function may not lead to stable stitching, VCS injects\nthe value aid into the RCSL's loss function dynamically according to the\ntrajectory return. Our empirical studies reveal that VCS not only significantly\noutperforms both RCSL and value-based methods but also consistently achieves,\nor often surpasses, the highest trajectory returns across diverse offline RL\nbenchmarks. This breakthrough in VCS paves new paths in offline RL, pushing the\nlimits of what can be achieved and fostering further innovations.", |
| "authors": "Jeonghye Kim, Suyoung Lee, Woojun Kim, Youngchul Sung", |
| "published": "2024-02-03", |
| "updated": "2024-02-03", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "Value-Aided Conditional Supervised Learning for Offline RL", |
| "main_content": "Introduction Offline reinforcement learning (RL) serves as a critical framework for learning decision-making skills from fixed datasets, especially in situations where direct, online interactions are impractical or unfeasible. This framework is especially pertinent in domains such as robotics, autonomous driving, and healthcare, where the costs and risks associated with real-time experimentation are high. There are two main approaches in offline RL: return-conditioned supervised learning (RCSL) (Chen et al., 2021; Emmons et al., 2022; Kim et al., 2024) and value-based methods (Kumar et al., 2020; Fujimoto & Gu, 2021; Kostrikov et al., 2021). However, each of these methods, while effective in its own right, exhibits distinct limitations. RCSL, inspired by recent breakthroughs in supervised learn1KAIST, Daejeon, Republic of Korea 2Carnegie Mellon University, Pittsburgh, Pennsylvania, United States. Correspondence to: Youngchul Sung <ycsung@kaist.ac.kr>. Preprint. Offline Dataset Optimal Sub-optimal RCSL VCS Policy Training Value-Aid RCSL Value-Aid Loss Function Loss Function Figure 1. Conceptual idea of VCS: Follow RCSL when learning from optimal trajectories where it predicts actions confidently but the value function may stitch incorrectly. Conversely, refer to the value function when learning from sub-optimal trajectories where RCSL is less certain but the value function is likely accurate. ing (SL) \u2014 notably the advancements in algorithms and architectures (Vaswani et al., 2017; Brown et al., 2020; Dosovitskiy et al., 2020; Liu et al., 2021) \u2014 adapts these methodologies to the RL domain by framing RL problems as sequence modeling challenges. This approach benefits from the inherent stability and scalability of SL, yet it is significantly constrained by the lack of the \u2018stitching ability\u2019 (Fu et al., 2020), which limits its efficacy to the best trajectories within the dataset. Conversely, value-based offline RL methods possess the ability to stitch together multiple sub-optimal trajectories, dissecting and reassembling them into an optimal trajectory through the use of dynamic programming. However, they encounter considerable obstacles, predominantly due to the accumulation of value function approximation errors. The function approximation error, which is difficult to rectify without online interactions, frequently leads to distribution shift and thus sub-optimal performance. Despite their advanced stitching capability, value-based methods often struggle to surpass the maximum return of the dataset. Recognizing these challenges, we seek to synergize the stable learning framework of RCSL with the dynamic stitching capacity of value-based methods. Prior works suggest that conditioning on return-to-go (RTG), which is the sum of future rewards, limits the stitching ability of RCSL, and proposes the replacement of RTG conditioning with pre-trained value conditioning (Yamagata et al., 2023; Gao et al., 2023). However, this approach has faced limitations, often resulting in increased uncertainty and only marginal performance improvements. 
The primary challenge lies in the fact that interpolation of the two approaches through conditioning, without managing the conditions for stable value-guided 1 arXiv:2402.02017v1 [cs.LG] 3 Feb 2024 \fValue-Aided Conditional Supervised Learning for Offline RL Value-Based Methods RCSL Methods VCS (Ours) Combined RCSL-Value Methods Max Trajectory Return (DT/DC/RvS) (TD3+BC/IQL/CQL) (QDT/EDT/CGDT) *Combination of best methods in each group Figure 2. Mean normalized return in D4RL MuJoCo medium, medium-replay, medium-expert and Antmaze. The performance of RCSL, value-based method, and the combined methods are evaluated based on the maximum scores within their respective groups for each dataset, as detailed in Section 6.2. VCS outperforms all other groups\u2019 combinations of maximum scores and, notably surpasses the maximum return of the dataset. Further details are provided in Appendix J. stitching, could result in a sub-optimal algorithm. The resulting algorithm may be inferior to either method alone, advantageous only for specific tasks, or one that requires extensive hyperparameter adjustments such as the mixing ratio. This has led us to the motivating question: \u201cWhat is the optimal combination in which RCSL and valuebased methods compensate for their weaknesses and reinforce their strengths?\u201d To address the question, we introduce VCS (Value-Aided Conditional Supervised Learning), a novel approach that effectively interweaves the stability of RCSL with the stitching prowess of value-based methods. VCS enriches RCSL\u2019s loss function with dynamic value function guidance, leveraging trajectory returns as a key indicator for the dynamic level of value aid as illustrated in Fig. 1. We justify our dynamic adjustment with a thorough analysis of the value-based methods\u2019 over-generalization via Neural Tangent Kernel (NTK). This dynamic interplay allows RCSL to gain enhanced support from value assistance in challenging scenarios with sub-optimal trajectories with precise value-guided stitching. The effectiveness of VCS is empirically substantiated across various standard offline RL benchmarks, demonstrating significant advancements over existing state-of-the-art (SOTA) methods including both RCSL and value-based methods. Moreover, only VCS reaches or surpasses the maximal dataset trajectory return across diverse MuJoCo datasets, under varying degrees of sub-optimality, as illustrated in Fig. 2. This significant achievement underscores the novelty and practical effectiveness of our approach in addressing the complex challenges of offline RL. 2. Preliminaries 2.1. Offline Reinforcement Learning We consider a Markov Decision Process (MDP) (Bellman, 1957), described as a tuple M = (S,A,P,\u03c10,r,\u03b3). S is the state space, and A is the action space. P \u2236 S \u00d7 A \u21a6\u2206(S) is the transition dynamics, \u03c10 \u2208\u2206(S) is the initial state distribution,r \u2236S \u00d7 A \u21a6R is the reward function, and \u03b3 \u2208[0,1) is a discount factor. The goal of offline RL is to learn a policy \u03c0(\u22c5\u2223s) that maximizes the expected cumulative discounted reward, Eat\u223c\u03c0(\u22c5\u2223st),st+1\u223cP(\u22c5\u2223st,at) [\u2211\u221e t=0 \u03b3tr(st,at)], from a static dataset D = {\u03c4 (i)}D i=1 with a set of trajectories \u03c4 (i). Each trajectory \u03c4 (i) consists of transitions with a time horizon T previously collected from an unknown behavior policy \u03b2. 2.2. 
Value-Based Offline Reinforcement Learning
Offline RL effectively employs off-policy RL techniques, which permit a divergence between the behavior policy $\beta$ used for data acquisition and the target policy $\pi$ under optimization. Off-policy methods predominantly utilize temporal-difference (TD) bootstrapping for training purposes. In actor-critic off-policy approaches, both the action-value function $\hat{Q}_{\theta}$ and the policy $\hat{\pi}$ undergo iterative updates. This process may cause a shift in the action distribution, leading $\hat{\pi}$ to explore actions that significantly deviate from those in the training dataset. Such deviations can inadvertently result in an overestimation error during the value training phase, due to the inability of offline RL to adjust incorrect policies and values through environmental interactions. Unlike actor-critic methods, in-sample learning methods use only in-sample actions to learn the optimal Q-function, thereby preventing the possibility of querying values of OOD actions during training (Peng et al., 2019; Nair et al., 2020; Kostrikov et al., 2021; Xu et al., 2022). Implicit Q-Learning (IQL) (Kostrikov et al., 2021) is a representative method of in-sample learning. It utilizes expectile regression, defined as $L^2_{\eta}(u) = |\eta - \mathbb{1}(u < 0)|\,u^2$ where $\eta \in [0.5, 1)$, to formulate the asymmetrical loss function for the state-value network $V_{\psi}$. Through this loss, $V_{\psi}$ can approximate the implicit maximum of the TD target, $\max_a \hat{Q}_{\theta}(s,a)$. Formally, for a parameterized critic $Q_{\theta}(s,a)$ with target critic $\hat{Q}_{\theta}(s,a)$, the value objective is given by

$$L_V(\psi) = \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[L^2_{\eta}\big(\hat{Q}_{\theta}(s,a) - V_{\psi}(s)\big)\Big]. \quad (1)$$

Intuitively, this objective suggests placing more emphasis when $\hat{Q}_{\theta}(s,a)$ is greater than $V_{\psi}(s)$. Subsequently, the critic network $Q_{\theta}$ is updated by treating the learned $V_{\psi}(s')$ as $\max_{a' \in \mathcal{D}(s')} \hat{Q}_{\theta}(s',a')$, where $\mathcal{D}(s')$ denotes the in-sample actions for the given state $s'$, i.e., $(s',a') \in \mathcal{D}$:

$$L_Q(\theta) = \mathbb{E}_{(s,a,s')\sim\mathcal{D}}\Big[\big(r(s,a) + \gamma V_{\psi}(s') - Q_{\theta}(s,a)\big)^2\Big]. \quad (2)$$

In VCS, we use IQL to pretrain the Q-function used to aid RCSL for its relatively stable in-sample learning. Throughout our manuscript, we use the term "value" to specifically denote the state-action value $Q$, unless otherwise specified.

Neural Tangent Kernel. The Neural Tangent Kernel (NTK) (Jacot et al., 2018) provides insightful analysis of the function approximation errors, especially those related to generalization. The NTK, denoted as $k_{\theta}(\bar{s},\bar{a},s,a)$, is defined as the inner product of two gradient vectors, $\nabla_{\theta}Q_{\theta}(\bar{s},\bar{a})$ and $\nabla_{\theta}Q_{\theta}(s,a)$, i.e.,

$$k_{\theta}(\bar{s},\bar{a},s,a) := \nabla_{\theta}Q_{\theta}(\bar{s},\bar{a})^{\top}\nabla_{\theta}Q_{\theta}(s,a). \quad (3)$$

The NTK offers a valuable perspective on the impact of parameter updates in function approximation, particularly in gradient descent scenarios. It essentially measures the degree of influence a parameter update for one state-action pair $(s,a)$ exerts on another pair $(\bar{s},\bar{a})$. A high value of $k_{\theta}(\bar{s},\bar{a},s,a)$ implies that a single update in the value function for the pair $(s,a)$ could lead to substantial changes for the pair $(\bar{s},\bar{a})$.
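Referring back to the IQL objectives in Eqs. (1)-(2), the snippet below is a minimal sketch of the expectile value loss and the TD critic loss. The network interfaces and batch fields are illustrative assumptions rather than the paper's implementation.

```python
import torch

def expectile_loss(u, eta=0.7):
    # L2_eta(u) = |eta - 1(u < 0)| * u^2, the asymmetric loss used in Eq. (1).
    weight = torch.abs(eta - (u < 0).float())
    return (weight * u ** 2).mean()

def iql_losses(q_target_net, q_net, v_net, batch, gamma=0.99, eta=0.7):
    """Sketch of Eqs. (1)-(2): V regresses an expectile of the target Q, and
    Q regresses the one-step TD target built from V at the next state."""
    s, a, r, s_next = batch["s"], batch["a"], batch["r"], batch["s_next"]

    # Eq. (1): value objective, with the target critic held fixed.
    with torch.no_grad():
        q_sa = q_target_net(s, a)
    v_loss = expectile_loss(q_sa - v_net(s), eta)

    # Eq. (2): critic objective, using only in-sample transitions.
    with torch.no_grad():
        td_target = r + gamma * v_net(s_next)
    q_loss = ((td_target - q_net(s, a)) ** 2).mean()
    return v_loss, q_loss
```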
We guide the reader to Appendix B and Achiam et al. (2019) for a deeper understanding of the NTK. To assess this effect across the entire state-action space, the NTK Gram matrix and the mean row-ratio were defined.

Definition 2.1 (NTK Gram matrix $K_{\theta}$ (Achiam et al., 2019)). The NTK Gram matrix $K_{\theta}$ is the square matrix of size $|S||A| \times |S||A|$ whose element $K_{\theta}(i,j)$ is the NTK value $k_{\theta}(s_j,a_j,s_i,a_i)$ for $i,j \in \{1,2,\dots,|S||A|\}$.

Definition 2.2 (Mean row-ratio (MRR) of the NTK Gram matrix (Achiam et al., 2019)).

$$\mathrm{MRR}(K_{\theta}) := \frac{1}{|S||A|}\sum_{(s,a)\in S\times A}\; \frac{1}{|S||A|-1}\sum_{\substack{(\bar{s},\bar{a})\in S\times A \\ (\bar{s},\bar{a})\neq(s,a)}} \frac{|k_{\theta}(\bar{s},\bar{a},s,a)|}{\|\nabla_{\theta}Q_{\theta}(s,a)\|_2^2}. \quad (4)$$

Note that the MRR computes the ratio of the off-diagonal terms to the diagonal term of the NTK Gram matrix $K_{\theta}$, summed over all pairs, since $k_{\theta}(s,a,s,a) = \nabla_{\theta}Q_{\theta}(s,a)^{\top}\nabla_{\theta}Q_{\theta}(s,a) = \|\nabla_{\theta}Q_{\theta}(s,a)\|_2^2$. Thus, the MRR assesses the aggregated normalized gradient coherence across the entire state-action space. A high MRR means that, on average, the gradients for different state-action pairs are highly coherent, implying that the Q-function exhibits a tendency towards aggressive generalization under function approximation. The impact of a high MRR is double-sided: it may accelerate learning in some contexts, but it can also exacerbate the issues associated with the deadly triad, leading to Q-value explosion and poor performance if not properly managed.

2.3. Return-Conditioned Supervised Learning (RCSL)
RCSL is an emerging approach to addressing challenges in offline RL. It focuses on learning the action distribution conditioned on the return-to-go (RTG), defined as the cumulative sum of future rewards $\hat{R}_t = \sum_{t'=t}^{T} r_{t'}$, through supervised learning (SL). Owing to the stability of SL, RCSL is capable of learning decision making by extracting and mimicking useful information from the dataset. In particular, Decision Transformer (DT) (Chen et al., 2021) applies the Transformer architecture (Vaswani et al., 2017) to reframe RL as a sequence modeling problem. It constructs input sequences to the Transformer from sub-trajectories, each spanning $K$ timesteps and comprising RTGs, states, and actions: $\tau_{t-K+1:t} = (\hat{R}_{t-K+1}, s_{t-K+1}, a_{t-K+1}, \dots, \hat{R}_{t-1}, s_{t-1}, a_{t-1}, \hat{R}_t, s_t)$. The model is then trained to predict the action $a_t$ based on $\tau_{t-K+1:t}$. Recently, Kim et al. (2024) proposed Decision ConvFormer (DC) to simplify the attention module of DT and better model the local dependency in the dataset, yielding performance gains over DT with reduced complexity. Alternatively, RvS (Emmons et al., 2022) adopts a two-layer MLP model for action prediction, using inputs formed by concatenating the state with specific conditioning variables. RvS comprises two distinct algorithms: RvS-R, which leverages the average return over future timesteps, and RvS-G, which uses future states for conditioning. These methods have shown effective planning capabilities, but their overall performance does not exceed that of value-based methods.

3. Potential Advantage and Risk of Incorporating the Value Function into RCSL
3.1. Advantage: Implanting Stitching Ability
RCSL tends to yield better performance than value-based offline methods by mimicking the actions in datasets containing high-return trajectories (Levine et al., 2020; Mediratta et al., 2023). However, this approach is less effective with datasets comprised mostly of sub-optimal trajectories. In such cases, the agent needs to discern and exclude ineffective actions through return conditioning. Nevertheless, RCSL faces an inherent limitation: it cannot effectively combine transitions from multiple sub-optimal trajectories to yield a better trajectory.

Figure 3. An example demonstrating the limit of RCSL. The dataset consists of two trajectories, with a time limit of T = 3 and a discount factor $\gamma = 1$. The black dashed arrow represents the optimal policy, yielding a maximum return of 7.

We present a motivating example that demonstrates the aforementioned limitation of RCSL, as illustrated in Fig. 3. Suppose that the dataset is composed of two sub-optimal trajectories. At the initial state $s_1$, the agent has two options: the ↑ action connected to trajectory 2 (the orange trajectory) with an RTG of 5, and the → action connected to trajectory 1 (the purple trajectory) with an RTG of 6. RCSL makes the agent choose the → action with the higher RTG and follow the path of trajectory 1, which is not optimal. This example demonstrates that RCSL alone is insufficient for the agent to learn to assemble the beneficial parts of sub-trajectories. In contrast, a value-based method can develop the stitching ability. Consider the example in Fig. 3 again. We can compute the Q-values for the actions ↑ and → at state $s_1$ with dynamic programming:

$$Q(s_1, \uparrow) = 3 + \max\big(Q(s_2, \rightarrow),\, Q(s_2, \searrow)\big) = 7,$$
$$Q(s_1, \rightarrow) = 1 + \max\big(Q(s_3, \rightarrow),\, Q(s_3, \nwarrow)\big) = 6.$$

With these Q-values, the agent will select the ↑ action at $s_1$ and then the → action at $s_2$. Consequently, with the value function, the agent can select an optimal action that yields the maximum return of 7. Therefore, integrating RCSL with value-based methods, if executed correctly, can be beneficial for developing the stitching ability required for optimal decision-making.

3.2. Risk: Over-Generalization of the Q-Function with Limited Action Diversity
Despite the potential advantage of using Q-values, incorporating a Q-function, denoted $Q_{\theta}$, in their estimation can introduce additional inaccuracies. This issue arises because the values of irrelevant state-action pairs change when $\theta$ is updated at a specific state-action pair. Such over-generalization makes $Q_{\theta}$ noise-sensitive, potentially assigning high values to incorrect actions and causing state distribution shifts during testing, as in Fig. 6. Consequently, it becomes challenging to integrate the value guidance into the loss function of RCSL explicitly, where the policy may query the values of OOD actions.
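Returning to the stitching discussion of Section 3.1, the toy script below illustrates the same phenomenon on a dataset of our own making (it is not a reconstruction of Fig. 3): tabular Q-iteration over the pooled transitions recovers a return higher than either logged trajectory, whereas picking the trajectory with the higher return, as pure return conditioning would, does not.

```python
from collections import defaultdict

# Two illustrative logged trajectories that share state "s2"; "T" marks termination.
traj_1 = [("s1", "right", 1, "s2"), ("s2", "right", 5, "T")]   # logged return 6
traj_2 = [("s1", "up", 3, "s2"), ("s2", "left", 2, "T")]       # logged return 5
dataset = traj_1 + traj_2

def best_next_value(Q, dataset, state):
    vals = [Q[(s, a)] for (s, a, _, _) in dataset if s == state]
    return max(vals) if vals else 0.0    # terminal or unseen states have value 0

# Tabular Q-iteration (gamma = 1) over the pooled in-sample transitions.
Q = defaultdict(float)
for _ in range(5):                       # a few sweeps suffice for this tiny horizon
    for s, a, r, s_next in dataset:
        Q[(s, a)] = r + best_next_value(Q, dataset, s_next)

best_logged_return = max(sum(r for _, _, r, _ in t) for t in (traj_1, traj_2))
stitched_return = best_next_value(Q, dataset, "s1")
print(best_logged_return, stitched_return)   # 6 vs 8: stitching beats trajectory imitation
```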
Figure 4. Estimated $Q_{\theta}(s, \bar{a})$ for one-dimensional $\bar{a} \in A$ and the normalized NTK $k_{\theta}(s, \bar{a}, s, a_{ref}) / \|\nabla_{\theta}Q_{\theta}(s, a_{ref})\|_2^2$ for two datasets, expert and medium (panels: Inverted Double Pendulum Expert / Medium, with in-sample and OOD action regions marked). In these figures, we fix the state $s$ and the reference action $a_{ref} = 0.0$ (marked with a star), and sweep over all actions $\bar{a} \in A$. Refer to Appendix D.1 for details.

We analyze how $Q_{\theta}(s, \cdot)$ varies over the action space in the Gym Inverted Double Pendulum environment (Brockman et al., 2016), trained on expert- and medium-quality datasets with Implicit Q-Learning (IQL). The details of the analysis are in Appendix D.1. As depicted in the upper row of Fig. 4, the expert dataset shows a concentrated action distribution, while the medium dataset has a broader spread. The concentration in the expert dataset, particularly around action 0, results in a vast OOD action region, and the resulting over-generalization causes $Q_{\theta}(s, \cdot)$ to assign nearly constant values across the action space. This trend is also observable in more complex environments in Appendix D.2. For a deeper understanding of the over-generalization in the Q-function, we analyze the gradient similarity, reflected by the Neural Tangent Kernel (NTK), between the reference action $a_{ref}$ and different actions $\bar{a}$, given the fixed state $s$. In Fig. 4, $Q_{\theta}$ trained with the expert dataset shows uniform NTK values across actions, indicating that the gradient at one action affects all others equally. In contrast, $Q_{\theta}$ trained with the medium dataset shows NTK values that are higher near the reference action and decrease with action distance, reflecting a more precise generalization. This analysis reveals that the expert dataset induces more aggressive generalization, which can be detrimental to the accuracy of Q-learning. Consequently, to prevent incorrect stitching and ensure an effective integration of RCSL with value-based methods, it is essential to assess the degree of over-generalization in the Q-function and adjust the level of value assistance appropriately.

4. Value-Aided Conditional Supervised Learning
In this section, we introduce Value-Aided Conditional Supervised Learning (VCS), a novel approach designed to enhance RCSL by integrating the dynamic stitching ability of value-based methods. Our goal is to enable the policy to refer to the value function in scenarios where RCSL faces challenges, while avoiding adherence to potentially erroneous guidance, as discussed in Section 3.2.

Figure 5. OMRR in MuJoCo datasets, with error bars showing standard deviations over 3 random network initializations. The dataset names are abbreviated as follows: expert as 'e', medium as 'm', medium-replay as 'm-r'. Details on the calculation and analysis of the OMRR values are in Appendix E.3.
The core components of VCS that accomplish our goal are summarized as follows.

- Adjusting the degree of value aid using trajectory returns: We propose a metric designed to assess the Q-function's over-generalization in offline RL. This metric indicates that trajectory returns can effectively guide the adaptive extent of value assistance.
- Integrating the value aid into the loss function of RCSL: We enhance RCSL by incorporating a pretrained Q-function into its loss function. Through this, the agent can predict actions that yield higher Q-values within the bounds of RCSL, depending on the degree of value aid.

The following subsections detail the theoretical underpinnings and practical implementation of VCS.

4.1. Controlling Value Aid Based on Trajectory Returns
To ensure an effective integration of stitching ability into RCSL via the Q-function, while preventing reliance on over-generalized Q-values for OOD actions, we propose a metric specially designed for offline RL settings. This metric is aimed at assessing the degree of generalization to OOD actions. While the mean row-ratio (MRR) of Eq. (4) is a meaningful metric for analyzing the Q-function's over-generalization, directly computing the NTK for all state-action pairs is infeasible in offline RL. We therefore adopt the MRR and modify it for the offline RL context.

Definition 4.1 (Offline NTK Gram matrix $K^{\mathcal{D}}_{\theta}$). We define the offline NTK Gram matrix $K^{\mathcal{D}}_{\theta}$ as a $|\mathcal{D}| \times |A|$ matrix, where the element $K^{\mathcal{D}}_{\theta}(i,j)$ is defined as $k_{\theta}(s_i, a_j, s_i, a_i)$ for $i \in \{1,2,\dots,|\mathcal{D}|\}$ and $j \in \{1,2,\dots,|A|\}$.

$K^{\mathcal{D}}_{\theta}$ is essentially a sub-matrix of the full Gram matrix $K_{\theta}$, focusing on gradient coherence between state-action pairs within the dataset $\mathcal{D}$ and all possible actions in $A$. Then, we define the offline MRR to fit this context as follows:

Definition 4.2 (Offline mean row-ratio (OMRR)).

$$\mathrm{OMRR}(K^{\mathcal{D}}_{\theta}) := \frac{1}{|\mathcal{D}|}\sum_{(s,a)\in\mathcal{D}}\; \frac{1}{|A|-1}\sum_{\substack{\bar{a}\in A \\ \bar{a}\neq a}} \frac{|k_{\theta}(s,\bar{a},s,a)|}{\|\nabla_{\theta}Q_{\theta}(s,a)\|_2^2}. \quad (5)$$

The OMRR of the offline NTK is tailored to quantify the extent of generalization to all actions when training on the offline dataset $\mathcal{D}$. When the OMRR is high, the chance of over-generalization to OOD actions is high. However, computing the true OMRR over the in-sample transitions and all contrastive actions $\bar{a} \in A$ is impractical during training for high-dimensional and continuous action spaces. In this context, we provide one of our key results, which enables the practical implementation of adaptive value aid in RCSL:

Theorem 4.3. For two offline RL datasets $\mathcal{D}_H$ and $\mathcal{D}_L$ such that the returns of the trajectories in $\mathcal{D}_H$ are always greater than those in $\mathcal{D}_L$, i.e.,

$$\forall \tau_H \in \mathcal{D}_H \text{ and } \forall \tau_L \in \mathcal{D}_L, \quad R(\tau_H) > R(\tau_L), \quad (6)$$

where $R(\tau)$ denotes the return of trajectory $\tau$, we have

$$\mathrm{OMRR}(K^{\mathcal{D}_H}_{\theta}) > \mathrm{OMRR}(K^{\mathcal{D}_L}_{\theta}) \quad (7)$$

for a Q-function trained with in-sample data (e.g., IQL) under Assumptions C.2, C.3, and C.4.

Proof. We present the assumptions and the proof in Appendix C.

Theorem 4.3 states that datasets with high-return trajectories are likely to induce stronger generalization in the Q-value estimates of OOD actions. Theorem 4.3 is validated by the approximated OMRR values on MuJoCo datasets with various levels of optimality, as seen in Fig. 5.
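A minimal sketch of how the OMRR of Eq. (5) could be approximated in practice is given below: per-sample gradients of the Q-network are compared between each in-sample action and a small set of sampled contrastive actions, as a Monte-Carlo stand-in for the sum over all of $A$. The network interface and sampling details are our own illustrative assumptions, not the paper's exact procedure (which is detailed in its Appendix E.3).

```python
import torch

def q_gradient(q_net, s, a):
    # Flattened gradient of Q(s, a) w.r.t. the network parameters, for a single (s, a) pair.
    grads = torch.autograd.grad(q_net(s, a).squeeze(), list(q_net.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def estimate_omrr(q_net, states, actions, action_sampler, n_contrastive=8):
    """Monte-Carlo estimate of OMRR (Eq. 5): average normalized NTK between each
    in-sample action and sampled contrastive actions at the same state."""
    ratios = []
    for s, a in zip(states, actions):
        g_ref = q_gradient(q_net, s, a)
        diag = g_ref.dot(g_ref).clamp_min(1e-8)          # ||grad Q(s, a)||^2
        for a_bar in action_sampler(n_contrastive):      # stand-in for "all a_bar != a"
            g_bar = q_gradient(q_net, s, a_bar)
            ratios.append((g_bar.dot(g_ref).abs() / diag).item())
    return sum(ratios) / len(ratios)
```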
4.2. Objective Function and Implementation
Inspired by our analysis in Section 4.1, we delineate a practical algorithm for dynamically integrating RCSL with a learned value function based on the trajectory return. Specifically, we add to the RCSL loss function an additional weighted loss term that maximizes the pretrained Q-function. Our methodology unfolds in two phases: we first pretrain the Q-function, followed by a policy-training phase in which RCSL is dynamically integrated with these pretrained values, controlled by the trajectory returns.

Value Function Pretraining. In this work, we pretrain the Q-function used for the value aid with IQL, denoted $Q^{IQL}_{\theta}$, because of its relatively stable in-sample learning. An ablation study on the use of another Q-learning method is available in Appendix I.1. We keep the parameters $\theta$ fixed after the pretraining phase and move on to VCS policy training.

VCS Policy Training. To equip RCSL with stitching ability, we modify the standard RCSL loss function by adding a term that maximizes the Q-value, which we call the value aid. Additionally, we adapt the weight of the Q-value aid, referred to as the VCS weight, to harness its benefit and avoid its detriment. This is done by setting the weight to be low for high-return trajectories. The rationale is that the OMRR, which assesses Q-value over-generalization, is high for trajectories with high returns, as stated in Theorem 4.3. Concretely, we set the VCS weight $w(R(\tau))$ for a trajectory $\tau$ to be a continuous, monotonically decreasing function of the return $R(\tau)$ of $\tau$, such that

$$\forall \tau_1, \tau_2, \quad R(\tau_1) < R(\tau_2) \;\Rightarrow\; w(R(\tau_1)) \geq w(R(\tau_2)),$$

where the continuity is imposed so that the impact changes gradually. Among various such choices, we find that simple choices such as linear decay are enough to produce good results, i.e., $w(R(\tau)) = \lambda \cdot (R^{*} - R(\tau))$ with some $\lambda > 0$, where $R^{*}$ represents the optimal return of the task. $R^{*}$ can practically be obtained from an expert dataset or from the maximum return in the dataset.
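Before the formal objective in Eq. (8) below, the following is a minimal sketch of how this return-dependent weight could modulate a value-aid term added to the usual RCSL action-regression loss. The policy and critic interfaces and the batch fields are our own illustrative assumptions, not the released VCS code.

```python
import torch

def vcs_weight(traj_return, r_star, lam=1.0):
    # Linear-decay VCS weight: w(R(tau)) = lam * (R* - R(tau)), clipped at 0 for safety.
    return lam * torch.clamp(r_star - traj_return, min=0.0)

def vcs_loss(policy, q_iql, batch, r_star, lam=1.0):
    """Sketch of the value-aided objective: an RCSL squared action error minus a
    weighted value-aid term, with the weight set per trajectory from its return."""
    sub_traj, target_actions, traj_return, states = (
        batch["sub_traj"], batch["actions"], batch["traj_return"], batch["states"]
    )
    pred_actions = policy(sub_traj)                              # RCSL-style action prediction
    rcsl_term = ((pred_actions - target_actions) ** 2).sum(-1).mean()

    # Value aid: push the predicted actions toward higher values under the frozen Q^IQL.
    with torch.no_grad():
        w = vcs_weight(traj_return, r_star, lam).mean()
    value_aid = q_iql(states, pred_actions).mean()               # gradient flows through pred_actions
    return rcsl_term - w * value_aid
```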
The final loss function of VCS for the policy $\pi_{\phi}$ is then given by

$$L^{VCS}_{\pi}(\phi) = \mathbb{E}_{\tau\sim\mathcal{D}}\Bigg[\frac{1}{K}\sum_{j=0}^{K-1}\Big(\underbrace{\big\|a_{t+j} - \pi_{\phi}(\tau_{t:t+j})\big\|_2^2}_{\text{RCSL}} \;-\; \underbrace{w\big(R(\tau)\big)}_{\text{VCS weight}} \cdot \underbrace{Q^{IQL}_{\theta}\big(s_{t+j}, \pi_{\phi}(\tau_{t:t+j})\big)}_{\text{Value Aid}}\Big)\Bigg], \quad (8)$$

where the input sub-trajectory of context length $K$ starting from time $t$ is given by

$$\tau_{t:t+K-1} = \big(\hat{R}_t, s_t, a_t, \dots, \hat{R}_{t+K-1}, s_{t+K-1}\big) \subset \tau, \quad (9)$$

and $R(\tau)$ is the return of the entire trajectory $\tau$ containing the sub-trajectory $\tau_{t:t+K-1}$. Note that $R(\tau)$ differs from the RTG $\hat{R}_t$, which is the sum of future rewards after timestep $t$ and decreases as $t$ grows, thereby failing to accurately represent the trajectory's optimality. We describe the details of the VCS weight $w(R(\tau))$ and the policy update with this loss function in Appendix H.2 and present the full algorithm's pseudocode in Appendix A. Our new objective function enables different learning strategies depending on the quality of the trajectory that contains the sub-trajectory. When it belongs to an optimal trajectory, action selection follows RCSL. When it belongs to a sub-optimal trajectory $\tau$ with $R(\tau) < R^{*}$, on the other hand, the Q-value aid term kicks in, and its impact becomes stronger as $R(\tau)$ becomes lower. Note that as $R(\tau)$ decreases, in-sample action diversity increases, the OMRR decreases, and the Q-function estimate becomes more accurate, as desired.

Base Architecture. For implementing $\pi_{\phi}$, a general RCSL policy can be used. When $K = 1$, considering only the current time step to estimate the action, we simply consider an MLP network.
When K \u22652, we consider a history-based policy network (Chen et al., 2021; Kim et al., 2024). Conditioning. We consider two conditioning approaches as proposed by Emmons et al. (2022): one for tasks maximizing returns and the other for tasks aiming at reaching specific goals. For return-maximizing tasks, we employ RTG conditioning, and our algorithm is named VCS-R. For goal-reaching tasks, we use subgoal conditioning, and our algorithm is named VCS-G. For subgoal selection, we randomly select a state that the agent will visit in the future. The ablations on conditioning are in Appendix I.2. 5. Related Work Prompting RCSL Using Learned Value. Recent studies have recognized the limitations of RCSL in stitching abilities (Kumar et al., 2022; Brandfonbrener et al., 2022; Zhou et al., 2024). Our work contributes to the ongoing efforts to imbue RCSL with this capability. Notably, Q-learning Decision Transformer (QDT) (Yamagata et al., 2023) and Advantage Conditioned Transformer (ACT) (Gao et al., 2023) have proposed the integration of the value function into RCSL. These methods, rooted in the Transformer architecture (Vaswani et al., 2017), suggest modifying the RTG prompt to include value information. Our approach, VCS, parallels these efforts by leveraging a learned value function for action guidance and trajectory stitching. However, unlike these methods which substitute RTG prompting with the value function to incorporate value implicitly, VCS explicitly augments its loss function with the value information. Incorporating RCSL with Stitching Ability. In a distinct vein, recently proposed Critic-Guided Decision Transformer (CGDT) (Wang et al., 2023) identifies the gap between target RTG and expected returns of actions as key to RCSL\u2019s limited stitching. To mitigate this, it adjusts DT\u2019s output with the critic network\u2019s Monte-Carlo (MC) return predictions and target RTG. In contrast, our VCS method employs values, learned via dynamic programming, to guide actions directly to maximize the predicted value, explicitly incorporating enhanced stitching ability. Another approach, Elastic Decision Transformer (EDT) (Wu et al., 2023), advocates for using variable context lengths in DT during inference. It suggests a longer historical context for decision-making in optimal trajectories and a shorter one in sub-optimal trajectories to increase the likelihood of identifying an optimal path. Similarly, VCS adapts its strategy based on trajectory optimality. However, VCS sets itself apart by modifying its learning approach during training, rather than at the inference stage. This is achieved by leveraging the complementary strengths of the value function and RCSL. 6 \fValue-Aided Conditional Supervised Learning for Offline RL Table 1. The performance of VCS and baselines in the MuJoCo domain. The \u2018Max Return\u2019 column indicates the normalized maximum trajectory return for each dataset. The dataset names are abbreviated as follows: medium to \u2018m\u2019, medium-replay to \u2018m-r\u2019, medium-expert to \u2018m-e\u2019. The boldface numbers denote the maximum score or comparable one among the algorithms. 
Columns are grouped as value-based methods (TD3+BC, IQL, CQL), RCSL methods (DT, DC, RvS-R), combined RCSL-value methods (QDT, EDT, CGDT), ours (VCS-R), and the maximum trajectory return; a dash indicates no reported score.

| Dataset | TD3+BC | IQL | CQL | DT | DC | RvS-R | QDT | EDT | CGDT | VCS-R (Ours) | Max Return |
| halfcheetah-m | 48.3 | 47.4 | 44.0 | 42.6 | 43.0 | 41.6 | 42.3 | 42.5 | 43.0 | 59.0 | 45.0 |
| hopper-m | 59.3 | 66.3 | 58.5 | 67.6 | 92.5 | 60.2 | 66.5 | 63.5 | 96.9 | 96.4 | 99.6 |
| walker2d-m | 83.7 | 78.3 | 72.5 | 74.0 | 79.2 | 71.7 | 67.1 | 72.8 | 79.1 | 88.5 | 92.0 |
| halfcheetah-m-r | 44.6 | 44.2 | 45.5 | 36.6 | 41.3 | 38.0 | 35.6 | 37.8 | 40.4 | 54.1 | 42.4 |
| hopper-m-r | 60.9 | 94.7 | 95.0 | 82.7 | 94.2 | 73.5 | 52.1 | 89.0 | 93.4 | 100.4 | 98.7 |
| walker2d-m-r | 81.8 | 73.9 | 77.2 | 66.6 | 76.6 | 60.6 | 58.2 | 74.8 | 78.1 | 92.7 | 90.0 |
| halfcheetah-m-e | 90.7 | 86.7 | 91.6 | 86.8 | 93.0 | 92.2 | - | - | 93.6 | 93.3 | 92.9 |
| hopper-m-e | 98.0 | 91.5 | 105.4 | 107.6 | 110.4 | 101.7 | - | - | 107.6 | 110.2 | 116.1 |
| walker2d-m-e | 110.1 | 109.6 | 108.8 | 108.1 | 109.6 | 106.0 | - | - | 109.3 | 116.6 | 109.1 |
| average | 75.3 | 77.0 | 77.6 | 74.7 | 82.2 | 71.7 | - | - | 82.4 | 90.1 | 87.3 |

Table 2. The performance of VCS and baselines in the Antmaze domain. The dataset names are abbreviated as follows: umaze to 'u', umaze-diverse to 'u-d', medium-play to 'm-p', medium-diverse to 'm-d', large-play to 'l-p', and large-diverse to 'l-d'. The boldface numbers denote the maximum score or a comparable one among the algorithms. Columns are grouped as value-based methods (TD3+BC, IQL, CQL), RCSL methods (DT, DC, RvS-R, RvS-G), and ours (VCS-R, VCS-G).

| Dataset | TD3+BC | IQL | CQL | DT | DC | RvS-R | RvS-G | VCS-R (Ours) | VCS-G (Ours) |
| antmaze-u | 78.6 | 87.5 | 74.0 | 65.6 | 85.0 | 64.4 | 65.4 | 86.7 | 93.6 |
| antmaze-u-d | 71.4 | 62.2 | 84.0 | 51.2 | 78.5 | 70.1 | 60.9 | 71.5 | 69.7 |
| antmaze-m-p | 10.6 | 71.2 | 61.2 | 4.3 | 33.2 | 4.5 | 58.1 | 78.8 | 83.2 |
| antmaze-m-d | 3.0 | 70.0 | 53.7 | 1.2 | 27.5 | 7.7 | 67.3 | 75.8 | 78.5 |
| antmaze-l-p | 0.2 | 39.6 | 15.8 | 0.0 | 4.8 | 3.5 | 32.4 | 44.1 | 64.1 |
| antmaze-l-d | 0.0 | 47.5 | 14.9 | 0.5 | 12.3 | 3.7 | 36.9 | 55.2 | 66.3 |
| average | 27.3 | 63.0 | 50.6 | 20.5 | 40.2 | 25.6 | 53.5 | 68.7 | 75.9 |

6. Experiments

6.1. Experimental Setup
Baseline Methods. The purpose of our experiments is to demonstrate that our method effectively combines the strengths of both RCSL and value-based approaches. To this end, we conduct a comprehensive benchmark against nine representative baselines that are state-of-the-art in each category. For the value-based category, we assess three methods: TD3+BC (Fujimoto & Gu, 2021), IQL (Kostrikov et al., 2021), and Conservative Q-Learning (CQL) (Kumar et al., 2020). For RCSL, we assess three methods: DT (Chen et al., 2021), DC (Kim et al., 2024), and RvS (Emmons et al., 2022). Additionally, we evaluate three advanced RCSL methods that are proposed to integrate stitching capabilities: QDT (Yamagata et al., 2023), EDT (Wu et al., 2023), and CGDT (Wang et al., 2023). For more details on the experimental setup and the baselines, refer to Appendix F.

Offline Benchmarks. We evaluated VCS against the baselines using datasets with diverse characteristics. These include tasks focused on return maximization or goal reaching, as well as tasks with dense or sparse rewards and varying levels of sub-optimality. Our focus was particularly on the D4RL (Fu et al., 2020) MuJoCo, Antmaze, Adroit, and Kitchen domains. Detailed information on the domains and datasets is provided in Appendix E. Performance results for the MuJoCo and Antmaze domains are presented in Table 1 and Table 2, while results for the Adroit and Kitchen domains are included in Appendix G.

Backbone Architecture. We implemented VCS based on DT, DC, and a simple MLP, and compared the performance of each. Detailed performance results for each architectural choice are provided in Appendix I.2. We observed that the DC-based approach performs best for MuJoCo and the MLP-based approach is superior for Antmaze, although the performance gap is minor.

Evaluation Metric.
In all evaluations of VCS, we assess the expert-normalized returns (Fu et al., 2020) of 10 episodes at each evaluation checkpoint (every $10^3$ gradient steps). Subsequently, we compute the running average of these normalized returns over ten consecutive checkpoints. We report the final score averaged across five random seeds.

Table 3. Comparison of a constant VCS weight and the dynamic weight. The dataset names are abbreviated as follows: medium as 'm', medium-replay as 'm-r', medium-expert as 'm-e'.

| Dataset | Constant Weight | Dynamic Weight |
| mujoco-m | 74.7 | 81.3 |
| mujoco-m-r | 75.4 | 82.4 |
| mujoco-m-e | 104.2 | 106.7 |

6.2. Overall Performance
As shown in Table 1 and Table 2, VCS significantly outperforms prior value-based, RCSL, and combined methods on all but two datasets. A particularly remarkable achievement of VCS is its ability to substantially improve efficiency in goal-reaching tasks such as the Antmaze environment, a scenario in which prior methods exhibited notably low performance. This enhancement is largely attributed to the stitching ability introduced by the value aid of VCS. Additionally, VCS outperforms both IQL and DC, on which it is based, particularly on the more challenging MuJoCo medium, medium-replay, and Antmaze medium and large datasets. Furthermore, by effectively combining the strengths of RCSL and value-based methods, VCS demonstrates an exceptional capacity to not only match but exceed the maximum returns of the datasets on 6 out of 9 MuJoCo tasks. Such results underscore VCS's robustness and superiority in a wide array of offline RL contexts. While we explore combinations of certain subsets of RCSL and value-based methods, a wide range of potential integrations exists. For example, on antmaze-umaze-diverse, while the IQL-aided VCS (69.7) underperforms CQL (84.0), the CQL-aided VCS (85.2) can further improve performance, as shown in Appendix I.1.

6.3. Ablation Studies
To further analyze how each design element influences performance, we conducted additional experiments. More ablation studies can be found in Appendix I.

The Importance of Weights Relative to Trajectory Return. To assess the impact of dynamically setting the VCS weight $w(R(\tau))$ based on the return, we compare our approach with a constant VCS weight, $w(R(\tau)) = c$. We test five constant weights $c \in \{1, 2.5, 5, 7.5, 10\}$ and report the maximum score among these values in Table 3. Details on the default dynamic setting of $w(R(\tau))$ are in Appendix H.2. VCS with the dynamic weight based on the trajectory return outperforms the highest scores obtained with the various constant weight settings across the datasets in Table 3. This demonstrates that our dynamic weight control, grounded in the trajectory return, is more effective and robust for integrating value aids.

Figure 6. t-SNE (Van der Maaten & Hinton, 2008) analysis of states visited during evaluation by policies trained with the RCSL, Q-greedy ($\arg\max_{a\in A} Q^{IQL}_{\theta}(s,a)$), and VCS losses, alongside the dataset's states in walker2d-medium (panels: (a) RCSL, (b) Q-greedy, (c) VCS (Ours), each annotated with its normalized return).

Test-Time State Distribution Shift. To validate whether VCS effectively acquires stitching ability while preventing a shift in the test-time state distribution, as discussed in Section 3.2, we present Fig. 6.
This figure compares the state distributions explored by RCSL, Q-greedy, and VCS policies during evaluation. RCSL and Q-greedy, representing VCS\u2019s extremes, were trained using specific loss configurations: RCSL loss as VCS loss in Eq. 8 with w(R(\u03c4)) = 0 and Q-greedy loss as VCS loss without the RCSL term, i.e., selecting actions as argmaxa\u2208A QIQL \u03b8 (s,a). Fig. 6 illustrates RCSL\u2019s adherence to dataset states, contrasting with the notable state distribution shift of the Q-greedy policy. VCS inherits RCSL\u2019s stability but surpasses its performance, indicating an effective blend of transition recombination without straying excessively from the state distribution. 7. Conclusion In conclusion, our VCS methodology effectively combines the stability of RCSL with the stitching ability inherent in value-based methods. Anchored by a thorough analysis of value function generalization error, VCS adeptly modulates the extent of value assistance. This strategic fusion enables VCS to exceed the performance of existing SOTA methods in both efficacy and stability, particularly in complex offline RL benchmarks encompassing a wide range of optimality. A notable feat of VCS is its unprecedented capability to surpass the maximum returns of the datasets, thereby pushing the boundaries of what is achievable in offline RL. Within the course of answering our initial motivating question on integrating RCSL and value-based methods, VCS opens up promising future research directions. While we have established a correlation between trajectory return and the mixing weight, we have considered simple linear weights to control the level of value-aid. It is also plausible that the mixing weight might be influenced by other dataset characteristics, such as the dimensions of the state and actions. We believe VCS will stand as a motivating work, inspiring new advancements in the field. 8 \fValue-Aided Conditional Supervised Learning for Offline RL Broader Impact This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2302.03770v2", |
| "title": "Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability", |
| "abstract": "Goal-conditioned reinforcement learning (GCRL) refers to learning\ngeneral-purpose skills that aim to reach diverse goals. In particular, offline\nGCRL only requires purely pre-collected datasets to perform training tasks\nwithout additional interactions with the environment. Although offline GCRL has\nbecome increasingly prevalent and many previous works have demonstrated its\nempirical success, the theoretical understanding of efficient offline GCRL\nalgorithms is not well established, especially when the state space is huge and\nthe offline dataset only covers the policy we aim to learn. In this paper, we\nprovide a rigorous theoretical analysis of an existing empirically successful\noffline GCRL algorithm. We prove that under slight modification, this algorithm\nenjoys an $\\widetilde{O}(\\text{poly}(1/\\epsilon))$ sample complexity (where\n$\\epsilon$ is the desired suboptimality of the learned policy) with general\nfunction approximation thanks to the property of (semi-)strong convexity of the\nobjective functions. We only require nearly minimal assumptions on the dataset\n(single-policy concentrability) and the function class (realizability).\nMoreover, this algorithm consists of two uninterleaved optimization steps,\nwhich we refer to as $V$-learning and policy learning, and is computationally\nstable since it does not involve minimax optimization. We also empirically\nvalidate our theory by showing that the modified algorithm outperforms the\nprevious algorithm in various real-world environments. To the best of our\nknowledge, this is the first algorithm that is both provably efficient with\ngeneral function approximation and single-policy concentrability, and\nempirically successful without requiring solving minimax optimization problems.", |
| "authors": "Hanlin Zhu, Amy Zhang", |
| "published": "2023-02-07", |
| "updated": "2023-10-11", |
| "primary_cat": "cs.LG", |
| "cats": [ |
| "cs.LG", |
| "cs.AI" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "Provably Efficient Offline Goal-Conditioned Reinforcement Learning with General Function Approximation and Single-Policy Concentrability", |
| "main_content": "Introduction Goal-conditioned reinforcement learning (GCRL) aims to design agents that are able to learn general-purpose skills to reach diverse goals [Kaelbling, 1993, Schaul et al., 2015, Plappert et al., 2018]. In particular, of\ufb02ine GCRL learns goal-reaching policies by purely pre-collected data without any further interactions with the environment [Chebotar et al., 2021, Yang et al., 2022]. Since such interaction can be expensive or even unsafe in practice, of\ufb02ine GCRL is increasingly popular as a way to learn generalist agents in real-world environments [Lange et al., 2012, Levine et al., 2020]. Although of\ufb02ine GCRL is promising and achieves great success in various practical scenarios [Lynch et al., 2020, Chebotar et al., 2021, Yang et al., 2022, Ma et al., 2022b,c], designing practical algorithms that are provably ef\ufb01cient still remains an open question. On the practical side, an ideal algorithm should be scalable to huge (or in\ufb01nite) state spaces and only require minimal dataset coverage assumptions. Moreover, the algorithm should be computationally ef\ufb01cient and stable (e.g., 37th Conference on Neural Information Processing Systems (NeurIPS 2023). \fonly using regression-based methods to train policies to avoid unstable minimax optimization). On the theoretical side, we aim to provide \ufb01nite-sample guarantees of the learned policy. Unfortunately, most existing algorithms are not both theoretically and practically ef\ufb01cient. On the one hand, many empirically ef\ufb01cient algorithms do not enjoy \ufb01nite-sample guarantees [Lynch et al., 2020, Chebotar et al., 2021, Yang et al., 2022, Ma et al., 2022c] or even suffer constant suboptimality in favorable settings given in\ufb01nite data (e.g., Ma et al. [2022c]). On the other hand, although many previous of\ufb02ine RL algorithms with theoretical \ufb01nite-sample guarantees can be naturally extended to of\ufb02ine GCRL settings, they either cannot handle general value function approximation in the presence of huge (or in\ufb01nite) state spaces [Jin et al., 2021, Rashidinejad et al., 2021, Yin et al., 2021, Shi et al., 2022, Li et al., 2022], or require impractically strong dataset coverage assumptions, such as all policy concentrability [Antos et al., 2008, Munos and Szepesv\u00e1ri, 2008, Xie and Jiang, 2021b]. Recently, several provably ef\ufb01cient algorithms have been proposed under general function approximation and single-policy concentrability [Zhan et al., 2022, Cheng et al., 2022, Rashidinejad et al., 2022]. In particular, the algorithm of [Zhan et al., 2022], based on the duality form of regularized linear programming formulation of RL, only requires the realizability assumption of function class. However, they all require solving minimax optimization problems which can be dif\ufb01cult or computationally unstable [Daskalakis et al., 2021]. On the contrary, some practically ef\ufb01cient algorithms (e.g., Ma et al. [2022b,c]) do not involve minimax optimization and thus are computationally more stable. This naturally raises an important question: Can we design an ef\ufb01cient of\ufb02ine GCRL algorithm that enjoys favorable theoretical guarantees under mild assumptions and performs well empirically in realworld scenarios without a minimax formulation? In this paper, we answer the above question af\ufb01rmatively by providing rigorous theoretical guarantees for an empirically successful of\ufb02ine GCRL algorithm named GoFAR proposed by Ma et al. [2022c]. 
We made some slight yet critical modi\ufb01cations to GoFAR. For deterministic MDPs, we need to carefully select the value of one hyperparameter that is set to 1 in the original GoFAR (which can be tuned in practice). For stochastic MDPs, we need to \ufb01rst learn the true transition model via maximum likelihood (MLE) and then plug in the learned model in the algorithm. To distinguish the difference between the original algorithm and the modi\ufb01ed ones, we name the modi\ufb01ed versions VP-learning (Algorithm 1). We show that the VP-learning algorithm has both good empirical performance in real-world scenarios (already shown by Ma et al. [2022c], and we compare VP-learning and GoFAR empirically and show that our modi\ufb01cation further improves the performance of the previous algorithm GoFAR) and favorable theoretical guarantees under mild assumptions. Speci\ufb01cally, it achieves \u02dc O(poly(1/\u01eb)) sample complexity (where \u01eb is the desired suboptimality level of the learned policy) under general function approximation with realizability-only assumption and partial data coverage with singlepolicy concentrability assumption. Moreover, the VP-learning algorithm can be decomposed into two uninterleaved learning (optimization) procedures (i.e., V -learning and policy learning), which only require solving regression problems without minimax optimization. Note that the VP-learning algorithm can be naturally applied to single-task RL settings, and all the analysis in this paper does not rely on whether the setting is goal-conditioned. Since the original algorithm is proposed and empirically validated in goal-conditioned settings, we analyze the algorithm in goal-conditioned settings as well. 1.1 Related Work Since our algorithm can be naturally applied to single-task of\ufb02ine RL settings, we discuss the related work in a broader scope, which also includes single-task of\ufb02ine RL. Of\ufb02ine RL in tabular and linear function approximation settings. In tabular and linear settings, a line of work proposed ef\ufb01cient (both statistically and computationally) algorithms under single-policy concentrability [Jin et al., 2021, Rashidinejad et al., 2021, Yin et al., 2021, Shi et al., 2022, Li et al., 2022]. These algorithms construct uncertainty quanti\ufb01ers to ensure pessimism such that policies not well covered by the dataset (which, by single-policy concentrability assumption, 2 \fare thus suboptimal) suffer a large penalty. Yin and Wang [2021] also consider of\ufb02ine RL with single-policy concentrability and achieve instance-dependent characterization. However, the above algorithms cannot be directly applied to many practical scenarios when non-linear function approximators are required, since it is hard to obtain uncertainty quanti\ufb01ers without oracle access when function approximators are non-linear [Jiang and Huang, 2020, Jin et al., 2021, Uehara and Sun, 2021, Xie et al., 2021]. In our algorithm (V -learning step), we use a regularizer in the form of f-divergence instead of uncertainty quanti\ufb01ers to ensure pessimism, which makes our algorithm ef\ufb01cient in the presence of non-linear function approximators without additional oracle access. Of\ufb02ine RL with all-policy concentrability. Besides huge state spaces, another central challenge for of\ufb02ine RL is the lack of dataset coverability. 
Concentrability, de\ufb01ned as the ratio of occupancy frequency induced by a policy to the dataset distribution, is one of the most widely used de\ufb01nitions to characterize the dataset coverability [Munos, 2007, Scherrer, 2014]. Many previous works require all-policy concentrability to make the algorithm ef\ufb01cient [Szepesv\u00e1ri and Munos, 2005, Munos, 2007, Antos et al., 2007, 2008, Farahmand et al., 2010, Scherrer, 2014, Liu et al., 2019, Chen and Jiang, 2019, Jiang, 2019, Wang et al., 2019, Feng et al., 2019, Liao et al., 2020, Zhang et al., 2020, Uehara et al., 2020, Xie and Jiang, 2021a]. However, in practice, it is unreasonable to require that the of\ufb02ine dataset can cover all candidate policies, and our algorithm only requires single-policy concentrability. Of\ufb02ine RL with general function approximation and single-policy concentrability. A recent line of work, which is based on marginalized importance sampling (MIS) formulation of RL, has shown success either empirically [Nachum et al., 2019a,b, Lee et al., 2021, Kim et al., 2021] or theoretically [Zhan et al., 2022, Rashidinejad et al., 2022]. In particular, Zhan et al. [2022], Rashidinejad et al. [2022] provide \ufb01nite-sample guarantees for their algorithms under general function approximation and only single-policy concentrability. Another line of work [Xie et al., 2021, Cheng et al., 2022] also proposes provably ef\ufb01cient algorithms based on an actor-critic formulation under similar assumptions. However, all the above algorithms require solving minimax optimization, which could be dif\ufb01cult or computationally unstable [Daskalakis et al., 2021]. Instead, our algorithm only involves uninterleaved regression-based optimization without minimax optimization. Of\ufb02ine GCRL. In the context of of\ufb02ine GCRL, the sparsity of the reward is another core challenge [Kaelbling, 1993, Schaul et al., 2015]. Several previous works aim to solve the issue and show empirical success without \ufb01nite-sample guarantee [Ghosh et al., 2019, Chebotar et al., 2021, Yang et al., 2022, Ma et al., 2022c]. Although there exist theoretical studies of relevant problems such as the of\ufb02ine stochastic shortest path problem [Yin et al., 2022], theoretical understanding of the of\ufb02ine GCRL problem is still lacking. The most relevant work to this paper is Ma et al. [2022c], which shows great performance in several real-world settings without minimax optimization but lacks theoretical guarantee. In this paper, we proved that a slightly modi\ufb01ed version of their algorithm (i.e., our VP-learning algorithm shown in Algorithm 1) is provably ef\ufb01cient. Also, we note that Ma et al. [2022a] can be viewed as the single-task version of Ma et al. [2022c], and our algorithm and analysis can be naturally extended to single-task of\ufb02ine RL settings. 2 Preliminaries Basic Notations. Throughout this paper, we use |X| and \u2206(X) to denote the cardinality and probability simplex of a given set X. We use x \u2272y to denote that there exists a constant c > 0 such that x \u2264cy, use x \u2273y if y \u2272x and use x \u224dy if x \u2272y and y \u2272x. Also, we use the standard O(\u00b7) notation where f(n) = O(g(n)) if there exists n0, C > 0 such that |f(n)|\u2264Cg(n) for all n \u2265n0, and denote f(n) = \u2126(g(n)) if g(n) = O(f(n)). For any x \u2208R, de\ufb01ne x+ \u225cmax{x, 0}; for any general function f : X \u2192R, de\ufb01ne f+(x) = max{f(x), 0}, \u2200x \u2208X. 
Also, for any function f : R \u2192R, de\ufb01ne \u00af f(x) = f(x) \u2212minu\u2208R f(u) for any x \u2208R if minu\u2208R f(u) exists and further overwrite the notation f+(x) = 1{f \u2032(x) \u22650} \u00b7 \u00af f(x) where 1{\u00b7} is the indicator function. Markov decision process. We consider an in\ufb01nite-horizon discounted Markov decision process (MDP), which is described by a tuple M = (S, A, P, R, \u03c1, \u03b3), where S and A denote the state and action spaces respectively, P : S \u00d7 A \u2192\u2206(S) is the transition kernel, R : S \u00d7 A \u2192\u2206([0, 1]) encodes a family of reward distributions given state-action pairs with r : S \u00d7 A \u2192[0, 1] as the 3 \fexpected reward function, \u03c1 : S \u2192[0, 1] is the initial state distribution, and \u03b3 \u2208[0, 1) is the discount factor. We assume A is \ufb01nite while S could be arbitrarily complex (even continuous) as in many real-world scenarios. A stationary (stochastic) policy \u03c0 : S \u2192\u2206(A) outputs a distribution over action space for each state. Goal-conditioned reinforcement learning. In goal-conditioned RL, we additionally assume a goal set G. Similar to Ma et al. [2022c], in goal-conditioned settings, the reward function R(s; g) (as well as the expected reward function r(s; g)) and policy \u03c0(a|s, g) also depend on the commanded goal g \u2208G, and the reward no longer depends on the action a and is deterministic. Each (goal-conditioned) policy \u03c0 induces a (discounted) occupancy density over state-action pairs for any commanded goal d\u03c0 : S \u00d7A\u00d7G \u2192[0, 1] de\ufb01ned as d\u03c0(s, a; g) := (1\u2212\u03b3) P\u221e t=0 \u03b3t Pr(st = s, at = a; \u03c0), where Pr(st = s, at = a; \u03c0) denotes the visitation probability of state-action pair (s, a) at step t, starting at s0 \u223c\u03c1(\u00b7) and following \u03c0 given commanded goal g. We also write d\u03c0(s; g) = P a\u2208A d\u03c0(s, a; g) to denote the marginalized state occupancy. Let p(g) be a distribution over desired goals, then we denote d\u03c0(s, a, g) = d\u03c0(s, a; g)p(g) and d\u03c0(s, g) = d\u03c0(s; g)p(g). An important property of occupancy density d\u03c0 is that it satis\ufb01es the following Bellman \ufb02ow constraint: X a d(s, a; g) = (1 \u2212\u03b3)\u03c1(s)+\u03b3 X s\u2032,a\u2032 P(s|s\u2032, a\u2032)d(s\u2032, a\u2032; g) (1) for all s \u2208S and g \u2208G when letting d = d\u03c0 for any policy \u03c0. Moreover, any d satisfying (1) is the occupancy density of a policy \u03c0d where \u03c0d(a|s, g) = \u001ad(s, a; g)/d(s; g), d(s; g) > 0 1/|A|, d(s; g) = 0 and d(s; g) = X a\u2208A d(s, a; g). (2) An important quantity associated with a policy \u03c0 is the value function, which is the expected discounted cumulative reward de\ufb01ned as V \u03c0(s; g) := E [P\u221e t=0 \u03b3trt | s0 = s, at \u223c\u03c0(\u00b7|st, g) \u2200t \u22650] starting at state s \u2208S with a commanded goal g \u2208G where rt = R(st; g) = r(st; g). We use the notation J(\u03c0) := (1 \u2212\u03b3)Es\u223c\u03c1,g\u223cp[V \u03c0(s; g)] = E(s,a,g)\u223cd\u03c0[r(s; g)] to represent a scalar summary of the performance of a policy \u03c0. We denote by \u03c0\u2217the optimal policy that maximizes the above objective and use V \u2217:= V \u03c0\u2217to denote the optimal value function. Of\ufb02ine GCRL. In this paper, we focus on of\ufb02ine GCRL, where the agent is only provided with a previously-collected of\ufb02ine dataset D = {(si, ai, ri, s\u2032 i, gi)}N i=1. 
Here, ri \u223cR(si; gi), s\u2032 i \u223cP(\u00b7 | si, ai), and we assume that gi are i.i.d. sampled from p(\u00b7) and it is common that data are collected by a behavior policy \u00b5 of which the discounted occupancy density is d\u00b5. Therefore, we assume that (si, ai, gi) are sampled i.i.d. from a distribution \u00b5 where \u00b5(s, a, g) = p(g)d\u00b5(s, a; g) = d\u00b5(s, a, g). Note that we use \u00b5 to denote both the behavior policy and the dataset distribution. We also assume an additional dataset D0 = {(s0,i, g0,i)}N0 i=1 where s0,i are i.i.d. sampled from \u03c1(\u00b7) and g0,i are i.i.d. sampled from p(\u00b7). The goal of of\ufb02ine RL is to learn a policy \u02c6 \u03c0 using the of\ufb02ine dataset so as to minimize the sub-optimality compared to the optimal policy \u03c0\u2217, i.e., J(\u03c0\u2217) \u2212J(\u02c6 \u03c0), with high probability. Function approximation. To deal with huge state spaces, (general) function approximation is necessary for practical scenarios. In this paper, we assume access to two function classes: a value function class V \u2286{V : S \u00d7 G \u2192[0, Vmax]} that models the value function of the (regularized) optimal policies, and a policy class \u03a0 \u2286{\u03c0 : S \u00d7 G \u2192\u2206(A)} consisting of candidate policies. For stochastic MDP (Section 3.1.2), we also need a transition kernel class P \u2286{P : S \u00d7 A \u2192\u2206(S)} which contains the ground-truth transition kernel. Additionally, for any function f : S \u00d7 G \u2192R, we denote the operator T : RS\u00d7G \u2192RS\u00d7A\u00d7G as (T f)(s, a; g) = Es\u2032\u223cP (\u00b7|s,a)[f(s\u2032; g)]. Also, for any function V : S \u00d7 G \u2192[0, Vmax], we de\ufb01ne AV (s, a; g) = r(s; g) + \u03b3T V (s, a; g) \u2212V (s; g). Of\ufb02ine data coverage assumption. Our algorithm works within the single-policy concentrability framework [Rashidinejad et al., 2021], which is de\ufb01ned as below. De\ufb01nition 1 (Single-policy concentrability for GCRL). Given a policy \u03c0, de\ufb01ne C\u03c0 to be the smallest constant that satis\ufb01es d\u03c0(s,a,g) \u00b5(s,a,g) \u2264C\u03c0 for all s \u2208S, a \u2208A and g \u2208G. 4 \fThe single-policy concentrability parameter C\u03c0 captures the coverage of policy \u03c0 in the of\ufb02ine data. Our algorithm only requires this parameter to be small for \u03c0\u2217 \u03b1 which is a regularized optimal policy (see Section 3 for formal de\ufb01nition) and is close to some optimal policy. This assumption is similar to Zhan et al. [2022] and and is much weaker than the widely used all-policy concentrability that assumes bounded C\u03c0 for all \u03c0 (e.g., Scherrer [2014]). 3 Algorithms Of\ufb02ine GCRL can be formulated as the following program: max \u03c0 E(s,g)\u223cd\u03c0(s,g)[r(s; g)]. (3) (3) requires solving an optimization problem over the policy space. One can also optimize over occupancy density d(s, a; g) s.t. d = d\u03c0 for some policy \u03c0 which is equivalent to that d satis\ufb01es Bellman \ufb02ow constraint (1). Therefore, the program (3) can be represented equivalently as follows: max d(s,a;g)\u22650 E(s,g)\u223cd(s,g)[r(s; g)] s.t. X a d(s, a; g) = (1 \u2212\u03b3)\u03c1(s) + \u03b3 X s\u2032,a\u2032 P(s|s\u2032, a\u2032)d(s\u2032, a\u2032; g), \u2200(s, g) \u2208S \u00d7 G. (4) Let d\u2217denote the optimal solution of (4), then (one of) the optimal policy can be induced by \u03c0\u2217= \u03c0d\u2217as in (2). 
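As a concrete illustration of these objects, the sketch below works in a toy tabular setting (all sizes, the occupancy density, and the data distribution are made-up stand-ins, not anything from the paper): it induces a policy from an occupancy density as in (2) and computes the single-policy concentrability coefficient of Definition 1.

```python
import numpy as np

# Toy tabular illustration (all sizes and distributions are hypothetical):
# inducing a policy from an occupancy density as in (2), and computing the
# single-policy concentrability coefficient of Definition 1.
S, A, G = 4, 3, 2
rng = np.random.default_rng(0)

d = rng.random((S, A, G))
d /= d.sum(axis=(0, 1), keepdims=True)       # d(s, a; g): a distribution per goal g
p_g = np.full(G, 1.0 / G)                    # stand-in goal distribution p(g)
mu = rng.random((S, A, G)); mu /= mu.sum()   # stand-in data distribution mu(s, a, g)

def policy_from_density(d):
    """pi_d(a|s, g) = d(s, a; g) / d(s; g); uniform over actions where d(s; g) = 0."""
    d_sg = d.sum(axis=1, keepdims=True)                       # d(s; g)
    return np.where(d_sg > 0, d / np.maximum(d_sg, 1e-12), 1.0 / d.shape[1])

def concentrability(d_joint, mu):
    """Smallest C with d(s, a, g) / mu(s, a, g) <= C for all (s, a, g)."""
    return float(np.max(d_joint / np.maximum(mu, 1e-12)))

pi = policy_from_density(d)
assert np.allclose(pi.sum(axis=1), 1.0)       # each (s, g) slice is a distribution
print(concentrability(d * p_g, mu))           # finite whenever mu covers d(s, a; g) p(g)
```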
Under partial data coverage assumptions, (4) might fail in empirical settings by choosing a highly suboptimal policy that is not well covered by the dataset with constant probability. Similar to Zhan et al. [2022], Ma et al. [2022c], a regularizer is needed to ensure that the learned policy is well covered by the dataset. Therefore, one should instead solve a regularized version of (4), which is stated as follows: max d(s,a;g)\u22650 E(s,g)\u223cd(s,g)[r(s; g)] \u2212\u03b1Df(d\u2225\u00b5) s.t. X a d(s, a; g) = (1 \u2212\u03b3)\u03c1(s) + \u03b3 X s\u2032,a\u2032 P(s|s\u2032, a\u2032)d(s\u2032, a\u2032; g), \u2200(s, g) \u2208S \u00d7 G, (5) where the f-divergence is de\ufb01ned as Df(d\u2225\u00b5) \u225cE(s,a,g)\u223c\u00b5[f(d(s, a, g)/\u00b5(s, a, g))] for a convex function f. Throughout this paper, we choose f(x) = 1 2(x \u22121)2 as in Ma et al. [2022c], where the f-divergence is known as \u03c72-divergence under this speci\ufb01c choice of f and it is shown to be more stable than other divergences such as KL divergence [Ma et al., 2022c]. Let d\u2217 \u03b1 denote the optimal solution of (5), then the regularized optimal policy can be induced by \u03c0\u2217 \u03b1 = \u03c0d\u2217 \u03b1 as in (2). The following single-policy concentrability assumption assumes that \u03c0\u2217 \u03b1 is well covered by the of\ufb02ine dataset. Assumption 1 (Single-policy concentrability for \u03c0\u2217 \u03b1). Let d\u2217 \u03b1 be the optimal solution of (5), and let \u03c0\u2217 \u03b1 = \u03c0d\u2217 \u03b1 as de\ufb01ned in (2). We assume C\u03c0\u2217 \u03b1 \u2264C\u2217 \u03b1 where C\u03c0\u2217 \u03b1 is de\ufb01ned in De\ufb01nition 1 and C\u2217 \u03b1 > 0 is a constant. Under Assumption 1, it can be observed that the performance difference between the regularized optimal policy \u03c0\u2217 \u03b1 and the optimal policy \u03c0\u2217is bounded by O(\u03b1). The following proposition formally presents this observation. Proposition 3.1. Let d\u2217 \u03b1 be the optimal solution of (5), and let \u03c0\u2217 \u03b1 = \u03c0d\u2217 \u03b1 as de\ufb01ned in (2). Then under Assumption 1, it holds that J(\u03c0\u2217) \u2212J(\u03c0\u2217 \u03b1) \u2264O \u0000\u03b1(C\u2217 \u03b1)2\u0001 . The proof of Proposition 3.1 is deferred to Appendix A.1. Proposition 3.1 shows that by solving the regularized program (5), we can obtain a near-optimal policy as long as \u03b1 is small. The algorithm of Ma et al. [2022c] also aims to solve (5) and they simply choose \u03b1 = 1. We show empirically in Section 5 that \u03b1 < 1 achieves better performance than \u03b1 = 1. In theory, we must carefully choose the value of \u03b1 s.t. the suboptimality of our learned policy vanishes to 0 with a reasonable rate. Finally, as in Ma et al. [2022c], we convert (5) to the dual form, which is an unconstrained problem and amenable to solve: Proposition 3.2 (Dual form of (5)). The duality form of (5) is min V (s;g)\u22650(1 \u2212\u03b3)E(s,g)\u223c(\u03c1,p(g))[V (s; g)] + E(s,a,g)\u223c\u00b5[1{g\u2032 \u2217(AV (s, a; g)) \u22650}\u00af g\u2217(AV (s, a; g))] (6) 5 \fwhere g\u2217is the convex conjugate of g = \u03b1 \u00b7 f. Moreover, let V \u2217 \u03b1 denote the optimal solution of (6), then it holds d\u2217 \u03b1(s, a; g) = \u00b5(s, a; g)g\u2032 \u2217(r(s; g) + \u03b3T V \u2217 \u03b1 (s, a; g) \u2212V \u2217 \u03b1 (s; g))+ (7) for all (s, a, g) \u2208S \u00d7 A \u00d7 G. The proof of Proposition 3.2 is shown in Appendix A.2. 
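To make the specific choice of regularizer concrete: with f(x) = (x - 1)^2 / 2 as used throughout the paper, g = alpha * f has convex conjugate g*(y) = y + y^2 / (2 alpha), with derivative g*'(y) = y / alpha + 1, and (7) recovers the regularized optimal density by reweighting mu with the clipped derivative of g*. The minimal sketch below checks the conjugate numerically by brute force and applies the reweighting rule of (7); the advantage and data-density arrays are hypothetical stand-ins chosen only for illustration.

```python
import numpy as np

alpha = 0.1  # regularization strength (hyperparameter)

def f(x):                     # chi^2 generator used in the paper: f(x) = (x - 1)^2 / 2
    return 0.5 * (x - 1.0) ** 2

def g_star(y):                # conjugate of g = alpha * f:  g*(y) = y + y^2 / (2 alpha)
    return y + y ** 2 / (2 * alpha)

def g_star_prime(y):          # g*'(y) = y / alpha + 1
    return y / alpha + 1.0

# Sanity-check the conjugate by brute force: g*(y) = max_x [ x * y - alpha * f(x) ].
xs = np.linspace(-20.0, 20.0, 400001)
for y in (-0.3, 0.0, 0.5):
    brute = np.max(xs * y - alpha * f(xs))
    assert abs(brute - g_star(y)) < 1e-5

# Density recovery as in (7): d_alpha(s, a; g) = mu(s, a; g) * [g*'(A_V(s, a; g))]_+ .
A_V = np.array([-0.30, -0.05, 0.00, 0.20])   # hypothetical advantages A_V(s, a; g)
mu  = np.array([0.40, 0.30, 0.20, 0.10])     # hypothetical data density mu(s, a; g)
d_alpha = mu * np.maximum(g_star_prime(A_V), 0.0)
print(d_alpha)    # the entry with A_V <= -alpha receives zero weight
```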
According to the above proposition, one can \ufb01rst learn the V function according to (6), and then use the learned V function to learn the desired policy by (7). We call the \ufb01rst step V -learning and the second step policy learning, which will be discussed in detail in Sections 3.1 and 3.2 respectively. Finally, the main algorithm, which we call VP-learning, is presented in Algorithm 1. Algorithm 1 VP-learning 1: Input: Dataset D = {(si, ai, ri, s\u2032 i, gi)}N i=1, D0 = {(s0,i, g0,i)}N0 i=1, value function class V, policy class \u03a0, model class P for stochastic settings. 2: Obtain \u02c6 U by V-Learning (Algorithm 2 or 3 ). 3: Obtain \u02c6 \u03c0 by policy learning (Algorithm 4) using learned function \u02c6 U. 4: Output: \u02c6 \u03c0. 3.1 V -Learning De\ufb01ne L\u03b1(V ) = \u03b1((1 \u2212\u03b3)E(s,g)\u223c(\u03c1,p(g))[V (s; g)] + E(s,a,g)\u223c\u00b5[1{g\u2032 \u2217(AV (s, a; g)) \u22650}\u00af g\u2217(AV (s, a; g))]). (8) Then (6) is equivalent to minV (s;g)\u22650 L\u03b1(V ). A natural estimator of L\u03b1(V ) is 1 \u2212\u03b3 N0 N0 X i=1 \u03b1 \u00b7 V (s0,i; g0,i) + 1 N N X i=1 \u03b1 \u00b7 g\u2217+(ri + \u03b3V (s\u2032 i; gi) \u2212V (si; gi)). (9) However, when the transition kernel is not deterministic, this estimator is biased and will cause an over-estimation issue since g\u2217(x) = \u03b1f\u2217(x/\u03b1) = \u03b1(x/\u03b1+1)2 2 \u2212\u03b1 2 contains a square operator outside of the Bellman operator (consider estimating (E[X])2 using 1 N P X2 i ). Therefore, we use the original version of Ma et al. [2022c] for V -learning in deterministic dynamics (Algorithm 2 in Section 3.1.1), and a slightly modi\ufb01ed version in stochastic dynamics (Algorithm 3 in Section 3.1.2). For both settings, we assume realizability of V \u2217 \u03b1 on value function class V: Assumption 2 (Realizability of V \u2217 \u03b1 ). Assume V \u2217 \u03b1 \u2208V. 3.1.1 V -Learning in Deterministic Dynamics When the transition kernel P is deterministic, it holds that T V (s, a; g) = V (s\u2032; g) where P(s\u2032|s, a) = 1. In this case, the natural estimator (9) is unbiased and can be directly applied to the V -learning procedure. The V -learning algorithm for deterministic dynamic settings is presented in Algorithm 2. Algorithm 2 V -learning in deterministic dynamics 1: Input: Dataset D = {(si, ai, ri, s\u2032 i, gi)}N i=1, D0 = {(s0,i, g0,i)}N0 i=1, value function class V. 2: V -learning by solving \u02c6 V = arg minV \u2208V \u02c6 L(d)(V ) where \u02c6 L(d)(V ) \u225c1 \u2212\u03b3 N0 N0 X i=1 \u03b1 \u00b7 V (s0,i; g0,i) + \u03b1 N N X i=1 g\u2217+(ri + \u03b3V (s\u2032 i; gi) \u2212V (si; gi)). (10) 3: \u02c6 U(s, a; g) \u2190r(s; g) + \u03b3 \u02c6 V (s\u2032; g) \u2212\u02c6 V (s; g) + \u03b1 4: Output: \u02c6 V , \u02c6 U. 6 \fNow for any V , we de\ufb01ne UV (s, a; g) = r(s; g) + \u03b3T V (s, a; g) \u2212V (s; g) + \u03b1 = AV (s, a; g) + \u03b1 which can be interpreted as the advantage function of V with an \u03b1-shift. We also denote U \u2217 \u03b1 = UV \u2217 \u03b1 . Note that besides the learned \u02c6 V function, Algorithm 2 also outputs a \u02c6 U function. By (7), one can observe that in policy learning, what we indeed need is \u02c6 U instead of \u02c6 V , and thus in the V -learning procedure we also compute this \u02c6 U function in preparation for policy learning. One may challenge that \u02c6 U cannot be computed for all (s, a; g) since we do not have knowledge of all r(s; g). 
However, we only need the value of \u02c6 U(si, ai; gi) for (si, ai; gi) contained in the of\ufb02ine dataset, where ri is also contained. Therefore, we can evaluate the value of \u02c6 U at all (s, a; g) tuples requested in the policy learning algorithm. Note that Algorithm 2 is equivalent to the \ufb01rst step of Ma et al. [2022c] except for the choice of \u03b1 and a clip for the value of g\u2217. However, the above V -learning, as well as the original GoFAR algorithm, might suffer the over-estimation issue under stochastic dynamics, and we present algorithms suitable for stochastic dynamics in Section 3.1.2. 3.1.2 V -Learning in Stochastic Dynamics When the transition kernel is stochastic, one cannot directly use V (s\u2032; g) to estimate T V (s, a; g). Since T V (s, a; g) = Es\u2032\u223cP (\u00b7|s,a)[V (s\u2032; g)], a natural idea is to learn the ground-truth transition kernel P \u22c61 \ufb01rst, and then use the learned transition kernel \u02c6 P to estimate T V (s, a; g): \u02c6 T V (s, a; g) = Es\u2032\u223c\u02c6 P (\u00b7|s,a)[V (s\u2032; g)]. This is achievable under the following realizability assumption. Assumption 3 (Realizability of the ground-truth transition model). Assume the ground-truth transition kernel P \u22c6\u2208P. The algorithm for stochastic dynamic settings is presented in Algorithm 3, where we \ufb01rst learn the transition kernel \u02c6 P, and then plug in the learned transition kernel to learn V function. Similar to Algorithm 2, we also compute \u02c6 U in V -learning procedure. Algorithm 3 V -learning in stochastic dynamics 1: Input: Dataset D = {(si, ai, ri, s\u2032 i, gi)}N i=1, D0 = {(s0,i, g0,i)}N0 i=1, value function class V, model class P. 2: Estimate the transition kernel via maximum likelihood estimation (MLE) \u02c6 P = max P \u2208P 1 N N X i=1 log P(s\u2032 i|si, ai) (11) 3: V -learning using the learned transition kernel: \u02c6 V = arg minV \u2208V \u02c6 L(s)(V ) with \u02c6 L(s)(V ) \u225c1 \u2212\u03b3 N0 N0 X i=1 \u03b1 \u00b7 V (s0,i; g0,i) + \u03b1 N N X i=1 g\u2217+(ri + \u03b3 \u02c6 T V (si, ai; gi) \u2212V (si; gi)). (12) where \u02c6 T V (s, a, ; g) = Es\u2032\u223c\u02c6 P (\u00b7|s,a)[V (s\u2032; g)]. 4: \u02c6 U(s, a; g) \u2190r(s; g) + \u03b3 \u02c6 T \u02c6 V (s, a; g) \u2212\u02c6 V (s; g) + \u03b1 5: Output: \u02c6 V , \u02c6 U. 3.2 Policy Learning We now derive policy learning, the second step of the VP-learning algorithm. Note that \u03c0\u2217 \u03b1 = arg max\u03c0 E(s,a,g)\u223cd\u2217 \u03b1[log \u03c0(a|s, g)]. By (7), we also have d\u2217 \u03b1(s, a; g) = \u00b5(s, a; g)g\u2032 \u2217(r(s; g) + \u03b3T V \u2217 \u03b1 (s, a; g) \u2212V \u2217 \u03b1 (s; g))+ = \u00b5(s, a; g)U \u2217 \u03b1(s, a; g)+ \u03b1 . 1For notation convenience, in stochastic settings, we use P \u22c6to denote the ground-truth transition kernel. 7 \fTherefore, \u03c0\u2217 \u03b1 = arg max\u03c0 LMLE \u03b1 (\u03c0) where LMLE \u03b1 (\u03c0) \u225cE(s,a,g)\u223c\u00b5 h U\u2217 \u03b1(s,a;g)+ \u03b1 log \u03c0(a|s, g) i . Since we already learned \u02c6 U, which is close to U \u2217 \u03b1, we can use the following estimator for LMLE \u03b1 (\u03c0): \u02c6 LMLE(\u03c0) = 1 N N X i=1 \u02c6 U(si, ai; gi)+ \u03b1 log \u03c0(ai|si, gi). (13) Algorithm 4 Policy learning 1: Input: Dataset D = {(si, ai, ri, s\u2032 i, gi)}N i=1, policy class \u03a0, \u02c6 U learned by Algorithm 2 or 3. 2: Policy learning by: \u02c6 \u03c0 = arg max \u03c0\u2208\u03a0 \u02c6 LMLE(\u03c0) \u225c1 N N X i=1 \u02c6 U(si, ai; gi)+ \u03b1 log \u03c0(ai|si, gi). 
(14) 3: Output: \u02c6 \u03c0. The policy learning algorithm is presented in Algorithm 4, which can be viewed as a weighted maximum likelihood estimation (MLE) procedure. Finally, we make the following two assumptions on the policy class \u03a0. Assumption 4 (Single-policy realizability). Assume \u03c0\u2217 \u03b1 \u2208\u03a0. Assumption 5 (Lower bound of policy). For any policy \u03c0 \u2208\u03a0, we assume that \u03c0(a|s, g) \u2265\u03c4 > 0 for any (s, a, g) \u2208S \u00d7 A \u00d7 G. Remark 1. One may consider Assumption 5 strong if \u03c4 is a constant independent of \u03b1 or N. However, we allow that \u03c4 depends on \u03b1. In that case, \u03c4 can be extremely small, and any policy mixed with a uniform policy with a tiny probability satis\ufb01es this assumption. Therefore, Assumption 5 is mild. 4 Theoretical Guarantees In this section, we provide theoretical guarantees of our main algorithm (Algorithm 1). We \ufb01rst show the results for V -Learning and policy learning in Section 4.1 and Section 4.2 respectively and then combine them to obtain our main theorem in Section 4.3. 4.1 Analysis of V -Learning We mainly focus on V -learning in deterministic dynamics in this section. The analysis for stochastic dynamics is similar and presented in Appendix B.2. As discussed in Section 3.1.1, although the \ufb01rst step of the algorithm is called V -learning, the main goal of this step is to estimate U \u2217 \u03b1 = UV \u2217 \u03b1 accurately. The following lemma provides a theoretical guarantee that the output of V -learning algorithm \u02c6 U is a good estimator of U \u2217 \u03b1 in positive parts: Lemma 1 (Closeness of \u02c6 U+ and U \u2217 \u03b1+). Under Assumptions 1 and 2, with probability at least 1 \u2212\u03b4, \u2225\u02c6 U+ \u2212U \u2217 \u03b1+\u22252,\u00b5\u2264O \u0000\u221a\u01ebstat \u0001 , where \u02c6 U is the output of Algorithm 2 and \u01ebstat \u224dV 2 max q log(|V|/\u03b4) N . Proof sketch. By standard concentration argument, it can be shown that the empirical estimator \u02c6 L(d) in Algorithm 2 (which is unbiased in deterministic dynamics) concentrates well on L\u03b1 for all V \u2208V (Lemma 3). Therefore, by realizability of V \u2217 \u03b1 , the value of L\u03b1 at V \u2217 \u03b1 and the learned V -function \u02c6 V are close (Lemma 4). Finally, one can observe that L\u03b1 is \u201csemi-strongly\u201d convex w.r.t. UV + in \u2225\u00b7\u22252,\u00b5-norm, and thus we can show that \u02c6 U+ and U \u2217 \u03b1+ are also close. The complete proof of Lemma 1 is deferred to Appendix B.1. In Appendix B.2, we also show the counterpart of Lemma 1 for stochastic dynamic settings. 8 \f4.2 Analysis of Policy Learning After obtaining an accurate estimator \u02c6 U+ of U \u2217 \u03b1+ in the V -Learning procedure, i.e., \u2225\u02c6 U+ \u2212 U \u2217 \u03b1+\u22252,\u00b5\u2272\u221a\u01ebstat, we can use \u02c6 U+ to perform policy learning and obtain the following guarantee: Lemma 2 (Closeness of \u03c0\u2217 \u03b1 and \u02c6 \u03c0). Under Assumptions 4 and 5, with probability at least 1 \u2212\u03b4, the output policy \u02c6 \u03c0 of Algorithm 4 satis\ufb01es Es\u223cd\u2217 \u03b1,g\u223cp(g)\u2225\u03c0\u2217 \u03b1(\u00b7|s, g) \u2212\u02c6 \u03c0(\u00b7|s, g)\u2225TV\u2264O \u0010p \u01ebMLE stat /\u03c4 2 \u0011 , where \u01ebMLE stat is de\ufb01ned in Lemma 8. The proof of Lemma 2 is provided in Appendix C.2. 
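For readers who prefer code to pseudocode, the following is a purely illustrative, tabular rendering of the two regression steps just described: the deterministic-dynamics V-learning objective (10), followed by the weighted-MLE policy objective (14). The sizes, synthetic data, optimizer, and iteration counts are all assumptions made for the sketch; the algorithm itself optimizes over the function classes V and Pi (neural networks in the experiments).

```python
import torch

# Purely illustrative, tabular rendering of VP-learning's two regression steps:
# V-learning with the deterministic-dynamics objective (10), then policy
# learning with the weighted-MLE objective (14). All data below is synthetic.
S, A, G, N = 5, 3, 2, 256
alpha, gamma = 0.1, 0.99
gen = torch.Generator().manual_seed(0)

# Synthetic offline dataset D = {(s_i, a_i, r_i, s'_i, g_i)} and initial-state set D0.
s  = torch.randint(S, (N,), generator=gen)
a  = torch.randint(A, (N,), generator=gen)
gl = torch.randint(G, (N,), generator=gen)
sp = torch.randint(S, (N,), generator=gen)
r  = torch.rand(N, generator=gen)
s0 = torch.randint(S, (N,), generator=gen)
g0 = torch.randint(G, (N,), generator=gen)

def g_star_plus(y):
    # Clipped conjugate [g*]_+(y) = (max(y + alpha, 0))^2 / (2 alpha) for
    # g = alpha * f with f(x) = (x - 1)^2 / 2, following the f_+ notation of Sec. 2.
    return torch.relu(y + alpha) ** 2 / (2 * alpha)

# --- Step 1: V-learning in deterministic dynamics (Algorithm 2, objective (10)) ---
V = torch.zeros(S, G, requires_grad=True)
opt_v = torch.optim.Adam([V], lr=1e-2)
for _ in range(2000):
    adv = r + gamma * V[sp, gl] - V[s, gl]                 # A_V on the dataset tuples
    loss_v = (1 - gamma) * alpha * V[s0, g0].mean() + alpha * g_star_plus(adv).mean()
    opt_v.zero_grad(); loss_v.backward(); opt_v.step()

with torch.no_grad():
    U_hat = r + gamma * V[sp, gl] - V[s, gl] + alpha       # \hat U on the dataset tuples

# --- Step 2: policy learning (Algorithm 4, objective (14)): weighted MLE ---
logits = torch.zeros(S, G, A, requires_grad=True)
opt_pi = torch.optim.Adam([logits], lr=1e-2)
weights = torch.clamp(U_hat, min=0.0) / alpha              # \hat U_+ / alpha
for _ in range(2000):
    logp = torch.log_softmax(logits, dim=-1)[s, gl, a]
    loss_pi = -(weights * logp).mean()
    opt_pi.zero_grad(); loss_pi.backward(); opt_pi.step()

pi_hat = torch.softmax(logits, dim=-1).detach()            # learned goal-conditioned policy
```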
This result shows that the TV distance between the regularized optimal policy \u03c0\u2217 \u03b1 and the output policy \u02c6 \u03c0 by Algorithm 4 is small, which translates to a bounded performance difference between these two policies as formalized in Theorem 1. Theorem 1 (Suboptimality of \u02c6 \u03c0). Under Assumptions 4 and 5, with probability at least 1 \u2212\u03b4, the output policy \u02c6 \u03c0 of Algorithm 4 satis\ufb01es J(\u03c0\u2217 \u03b1) \u2212J(\u02c6 \u03c0) \u2264O \u0010 Vmax p \u01ebMLE stat /\u03c4 2 \u0011 . The proof of Theorem 1 is deferred to Appendix C.3. 4.3 Main Theorem: Statistical Rate of Suboptimality Theorem 1 compares the performance difference between \u02c6 \u03c0 and the regularized optimal policy \u03c0\u2217 \u03b1. Since the ultimate goal is to compare with the optimal policy \u03c0\u2217, we also need to combine this result with Proposition 3.1. By carefully choosing the value of \u03b1 to balance J(\u02c6 \u03c0) \u2212J(\u03c0\u2217 \u03b1) and J(\u03c0) \u2212J(\u03c0\u2217 \u03b1), we can bound the suboptimality of the policy \u02c6 \u03c0 output by Algorithm 1 compared to the optimal policy \u03c0\u2217, leading to the following main result: Theorem 2 (Statistical rate of suboptimality (in deterministic dynamics)). Under Assumptions 1, 2, 4 and 5, with probability at least 1 \u2212\u03b4, the output policy \u02c6 \u03c0 by Algorithm 1 (with the choice of Algorithm 2 for V -learning in deterministic dynamics) satis\ufb01es J(\u03c0\u2217) \u2212J(\u02c6 \u03c0) \u2272 \u0012V 3 max(C\u2217 \u03b1)3 log(1/\u03c4) log(|V||\u03a0|/\u03b4) \u03c4 2N 1/4 \u00131/3 if we choose \u03b1 \u224d \u0010 V 3 max log(1/\u03c4) log(|V||\u03a0|/\u03b4) \u03c4 2(C\u2217 \u03b1)3N 1/4 \u00111/3 and assume N = N0. The proof of Theorem 2 is deferred to Appendix D.1. Note that Theorem 2 provides a suboptimality rate of O(1/N 1/12) which implies an O(1/poly(\u01eb)) sample complexity and thus is statistically ef\ufb01cient. A similar rate can also be obtained in stochastic dynamic settings, and we present the result in Appendix D.2. Note that our rate is slightly worse than the O(1/N 1/6) rate in Zhan et al. [2022], and worse than the optimal rate O(1/ \u221a N) in Rashidinejad et al. [2022]. We brie\ufb02y discuss the intrinsic dif\ufb01culty to derive an optimal convergence rate. First, we only require a realizability assumption on our function class, while Rashidinejad et al. [2022] requires a much stronger completeness assumption. Second, our optimization procedure is uninterleaved and only requires solving regression problems, while Zhan et al. [2022] and Rashidinejad et al. [2022] require solving minimax problems. Finally, Rashidinejad et al. [2022] assumes that the behavior policy is known and directly computes the policy using the knowledge of behavior policy, while our algorithm uses a more practical method, i.e., MLE, to solve the policy in the policy learning step. We also compare our theoretical results to Ma et al. [2022c]. Theorem 4.1 of Ma et al. [2022c] provides a \ufb01nite-sample guarantee for the suboptimality. However, they compare the performance of \u02c6 \u03c0 to \u03c0\u2217 \u03b1 (with \u03b1 = 1) instead of \u03c0\u2217. Since the performance gap between \u03c0\u2217and \u03c0\u2217 \u03b1 can be as large as a constant when \u03b1 = 1, even zero suboptimality (compared to \u03c0\u2217 \u03b1) cannot imply that the learned policy has good performance. 
Moreover, their theoretical analysis assumes that V \u2217can be learned with zero error, which is unreasonable in practical scenarios. We also note that they only provide a proof for deterministic policy classes, which can be restrictive in practice. 9 \f5 Experiments In this section, we provide experimental results of our VP-learning algorithm with different choices of \u03b1 under \ufb01ve different environments: FetchReach, FetchPick, FetchPush, FetchSlide, and HandReach [Plappert et al., 2018]. Similar to Ma et al. [2022c], the datasets for the \ufb01ve tasks are from Yang et al. [2022]. All the implementation details of our VP-learning are the same as GoFAR (see dataset details and implementation details in Ma et al. [2022c]) 2, except for the value of \u03b1. Note that our VP-learning algorithm with \u03b1 = 1 is equivalent to the GoFAR algorithm. Table 1 presents the discounted returns and Table 2 presents the \ufb01nal distances of the policies trained after 100 epochs and evaluated over 10 runs. For each environment and each \u03b1 , the result was averaged over 3 random seeds. The best results of each environment are in bold. Table 1: Discounted return of different choices of \u03b1, averaged over 3 random seeds. \u03b1\\ Env FetchReach FetchPick FetchPush FetchSlide HandReach 0.01 27.4 \u00b1 0.29 18.5 \u00b1 0.1 18.0 \u00b1 1.8 2.36 \u00b1 1.13 8.72 \u00b1 1.69 0.02 27.4 \u00b1 0.32 18.7 \u00b1 1.8 18.6 \u00b1 2.6 2.40 \u00b1 0.47 7.96 \u00b1 1.27 0.05 27.4 \u00b1 0.32 17.3 \u00b1 1.1 19.3 \u00b1 2.0 3.18 \u00b1 0.90 8.98 \u00b1 3.11 0.1 27.4 \u00b1 0.33 20.3 \u00b1 1.3 20.3 \u00b1 2.5 3.22 \u00b1 0.38 5.28 \u00b1 1.25 0.2 27.4 \u00b1 0.32 20.7 \u00b1 0.9 17.7 \u00b1 2.9 2.25 \u00b1 0.23 2.92 \u00b1 0.98 0.5 27.5 \u00b1 0.29 18.5 \u00b1 0.4 20.1 \u00b1 2.2 3.47 \u00b1 1.08 5.74 \u00b1 2.72 1 27.3 \u00b1 0.34 18.2 \u00b1 1.2 19.6 \u00b1 1.6 2.75 \u00b1 1.84 7.13 \u00b1 3.60 2 27.4 \u00b1 0.29 18.3 \u00b1 0.7 19.6 \u00b1 1.4 1.80 \u00b1 0.66 3.99 \u00b1 1.88 Table 2: Final distance of different choices of \u03b1, averaged over 3 random seeds. \u03b1\\ Env FetchReach FetchPick FetchPush FetchSlide HandReach 0.01 0.0171 \u00b1 0.0017 0.042 \u00b1 0.004 0.033 \u00b1 0.001 0.1177 \u00b1 0.012 0.0269 \u00b1 0.0049 0.02 0.0168 \u00b1 0.0016 0.045 \u00b1 0.012 0.031 \u00b1 0.002 0.1085 \u00b1 0.010 0.0274 \u00b1 0.0049 0.05 0.0181 \u00b1 0.0011 0.052 \u00b1 0.013 0.032 \u00b1 0.002 0.1061 \u00b1 0.009 0.0270 \u00b1 0.0049 0.1 0.0173 \u00b1 0.0014 0.032 \u00b1 0.010 0.027 \u00b1 0.002 0.1018 \u00b1 0.002 0.0275 \u00b1 0.0043 0.2 0.0172 \u00b1 0.0019 0.031 \u00b1 0.004 0.031 \u00b1 0.003 0.1029 \u00b1 0.010 0.0275 \u00b1 0.0046 0.5 0.0166 \u00b1 0.0011 0.044 \u00b1 0.009 0.031 \u00b1 0.005 0.1017 \u00b1 0.017 0.026826 \u00b1 0.0049 1 0.0175 \u00b1 0.0013 0.043 \u00b1 0.011 0.043 \u00b1 0.012 0.1202 \u00b1 0.019 0.026828 \u00b1 0.0044 2 0.0171 \u00b1 0.0011 0.034 \u00b1 0.005 0.032 \u00b1 0.001 0.1044 \u00b1 0.011 0.0275 \u00b1 0.0045 The empirical results demonstrate the correctness of our theoretical analysis: choosing \u03b1 = 1 will result in a large suboptimality of \u03c0\u2217 \u03b1 and thus the learned policy \u02c6 \u03c0. Instead, we should carefully choose the value of \u03b1 to ensure a vanishing suboptimality. In practice, we can tune the value of \u03b1 and typically it is less than one. In our experiments, the best \u03b1 ranges over [0.05, 0.5]. 
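The two metrics reported in Tables 1 and 2 are standard for this benchmark: the discounted return of an evaluation episode and the final distance between the achieved and commanded goal, averaged over runs and seeds. The sketch below shows one way such numbers could be aggregated; the episode length, discount factor, and trajectories are placeholders, since the exact evaluation protocol follows Plappert et al. [2018] and Ma et al. [2022c] rather than anything stated in this excerpt.

```python
import numpy as np

# Illustrative aggregation of the two evaluation metrics in Tables 1 and 2.
# Horizon, discount, and trajectories are placeholders, not the benchmark's values.
gamma, horizon, n_runs, n_seeds = 0.98, 50, 10, 3

def discounted_return(rewards, gamma):
    return float(np.sum(gamma ** np.arange(len(rewards)) * np.asarray(rewards)))

def final_distance(achieved_goals, desired_goal):
    return float(np.linalg.norm(achieved_goals[-1] - desired_goal))

rng = np.random.default_rng(0)
returns, distances = [], []
for _ in range(n_seeds * n_runs):
    rewards = rng.integers(0, 2, size=horizon)          # stand-in sparse goal-reaching rewards
    achieved = rng.normal(size=(horizon, 3)) * 0.05     # stand-in achieved-goal trajectory
    goal = np.zeros(3)
    returns.append(discounted_return(rewards, gamma))
    distances.append(final_distance(achieved, goal))

print(f"discounted return: {np.mean(returns):.2f} +/- {np.std(returns):.2f}")
print(f"final distance:    {np.mean(distances):.4f} +/- {np.std(distances):.4f}")
```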
6 Conclusions In this paper, we theoretically analyze the VP-learning algorithm (Algorithm 1), which is based on the empirically successful algorithm previously proposed by Ma et al. [2022c], for both single-task and goal-conditioned offline settings. The algorithm can handle general value function approximation and only requires near-minimal assumptions on the dataset (single-policy concentrability) and the function class (realizability). We also provide an O(1/N^{1/12}) upper bound on the suboptimality of the policy learned by the algorithm and empirically validate its effectiveness. As for future directions, one important question is whether we can achieve the optimal suboptimality rate Õ(1/√N) while keeping the algorithm practical and without unreasonably strong assumptions. 2 We use the code at https://github.com/JasonMa2016/GoFAR with different values of α for our experiments. Acknowledgements We thank the anonymous reviewer for catching a technical issue in a previous version of our paper. The work was done when HZ was a visiting researcher at Meta." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2402.06102v1", |
| "title": "Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning", |
| "abstract": "Recent advances in real-world applications of reinforcement learning (RL)\nhave relied on the ability to accurately simulate systems at scale. However,\ndomains such as fluid dynamical systems exhibit complex dynamic phenomena that\nare hard to simulate at high integration rates, limiting the direct application\nof modern deep RL algorithms to often expensive or safety critical hardware. In\nthis work, we introduce \"Box o Flows\", a novel benchtop experimental control\nsystem for systematically evaluating RL algorithms in dynamic real-world\nscenarios. We describe the key components of the Box o Flows, and through a\nseries of experiments demonstrate how state-of-the-art model-free RL algorithms\ncan synthesize a variety of complex behaviors via simple reward specifications.\nFurthermore, we explore the role of offline RL in data-efficient hypothesis\ntesting by reusing past experiences. We believe that the insights gained from\nthis preliminary study and the availability of systems like the Box o Flows\nsupport the way forward for developing systematic RL algorithms that can be\ngenerally applied to complex, dynamical systems. Supplementary material and\nvideos of experiments are available at\nhttps://sites.google.com/view/box-o-flows/home.", |
| "authors": "Mohak Bhardwaj, Thomas Lampe, Michael Neunert, Francesco Romano, Abbas Abdolmaleki, Arunkumar Byravan, Markus Wulfmeier, Martin Riedmiller, Jonas Buchli", |
| "published": "2024-02-08", |
| "updated": "2024-02-08", |
| "primary_cat": "cs.RO", |
| "cats": [ |
| "cs.RO", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "Real-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning", |
| "main_content": "Introduction Reinforcement learning promises to deliver a principled, general-purpose framework for generating control policies for complex dynamical systems directly from experiential data, without the need for domain expertise (Sutton and Barto, 2018). Indeed, modern deep RL approaches that leverage expressive neural networks for function approximation have led to breakthroughs in a variety of domains, such as game-playing (Mnih et al., 2013; Schrittwieser et al., 2020; Mnih et al., 2015), protein folding (Jumper et al., 2021), control of tokamak plasmas in nuclear fusion reactors (Degrave et al., 2022), and real-world robotics (Tan et al., 2018; Handa et al., 2022). However, a key ingredient in the success of these applications has been the ability to accurately simulate these systems at scale, and constructing such simulation environments themselves requires significant human effort and knowledge, thus forgoing the original promise of removing the need for domain expertise. For instance, leading approaches for learningbased locomotion and dexterous manipulation (Tan et al., 2018; Kumar et al., 2021; Fu et al., 2021; Handa et al., 2022; Pinto et al., 2017) rely on a sim-to-real paradigm to learn robust \u2217Google DeepMind \u2020 University of Washington \u00a9 M. Bhardwaj et al. arXiv:2402.06102v1 [cs.RO] 8 Feb 2024 \fReal-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning policies in simulation that can be directly transferred to the real world. Even when policies are learned directly on real hardware, practitioners often rely on simulation to gain intuition about the problem domain, and make critical design decisions such as the choice of algorithm, reward functions and other hyperparameters (Lee et al., 2022; Schwab et al., 2019). In addition to human expertise involved in simulation design, the high sample complexity of current RL algorithms necessitates fast simulations to achieve reasonable wall clock times for training. While this is possible for domains such as video games and rigid-body systems (Todorov et al., 2012; Liang et al., 2018), for several real-world problems satisfying this need becomes increasingly expensive or outright impossible. Examples include systems involving non-steady fluid dynamics and/or continuum mechanics (e.g. flying, swimming, soft matter based mechatronic systems), and multi-scale problems that occur in biological systems or digital twins of large industrial systems. How can we scale RL to such systems? This work focuses on one such domain the control of coupled mechanical-fluid dynamic systems. Here, the fact that one can not assume steady state dynamics hugely increases the complexity of simulations. For example, consider an Unmanned Aerial Vehicle operating in off-nominal regimes such as high angle of attack or ground/obstacle effects. Here, the turbulent air flows that are generated can be difficult to model, and create instabilities that nominal controllers are incapable of handling. While there is a growing literature on learning control policies in the presence of non-steady fluid flows that utilize simulation (Verma et al., 2018), and the dynamics are known in principle, simulating them requires supercomputers which is beyond the resources of most practitioners. The study of such systems raises interesting questions that have several implications for real-world deployment of reinforcement learning. 1. 
How do we design experiments to characterize the capabilities of a system that is hard to simulate at scale? 2. How do we ensure sample-efficient learning given limited data collection rates? 3. How can we efficiently re-use prior experience to test different hypotheses and aid the learning of new behaviors? To investigate these questions, we have developed a novel fluid-dynamic control system dubbed "Box o' Flows". This system consists of 9 upward-facing nozzles arranged in parallel, with a proportional pneumatic valve per nozzle regulating the airflow. The valves can be controlled programmatically to create complex pressure fields between two parallel panels forming a box. The airflow can be used to control the state of rigid objects, such as colored balls, that are placed inside. The setup is also equipped with an RGB camera capturing the box and the objects inside it (Fig. 1 provides a detailed overview). Figure 1: An overview of the different components of the bench-top Box o' Flows system. The system is intentionally designed to be impossible to simulate accurately at the high integration rates required by deep RL algorithms, and it exhibits complex non-steady fluid dynamics, which makes (unknowingly) injecting prior human knowledge or hand-designing control policies hard in practice. In Fig. 2 we demonstrate fluid patterns generated by the air flowing through the nozzles. Figure 2: Smoke visualizes the complex flow field that emerges from a single valve with constant flow. This illustrates the complex relationship between an actuator and the flow field, and ultimately its effect on the balls; the relationship is further complicated when several actuators act simultaneously. This work serves as a preliminary investigation of how model-free RL can be used to learn a variety of dynamic control tasks on the Box o' Flows directly in the real world, as well as to characterize hardware capabilities. We limit the algorithms tested to the state-of-the-art Maximum A-posteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018b), with fixed hyperparameters across different experiments. Desired behaviors are described via minimally specified reward functions, which gives the RL agent the freedom to find interesting control strategies. Furthermore, we test how offline RL can be used as a means for hypothesis testing by training new policies on logged data from past experiments and intermittently evaluating them on the real system. Our framework can generate diverse dynamic behaviors to control the state of multiple rigid objects (table tennis balls), such as hovering, rearrangement, stacking and goal-reaching (detailed in Sec. 4). In summary, our main contributions are: • We present a novel benchtop fluid-dynamic control system geared towards real-world RL research. • We demonstrate the application of sample-efficient, model-free RL to learning dynamic behaviors and analyzing hardware capabilities. • We explore how offline RL with past data can be used to test various hypotheses when simulation is not available. 2. Box o' Flows System Overview In this section we describe the Box o' Flows system as shown in Fig. 1. The system comprises a 70 cm x 70 cm square aluminum frame on which a black opaque back panel and a transparent front panel are mounted, creating a shallow box of roughly 60 mm depth.
Figure 3: Reinforcement learning applied to the task of maximizing the height of the orange ball in the presence of distractors (purple and green). The non-steady fluid dynamics of interacting objects and complex actuator coupling make it hard to hand-design controllers. (a) Reward curve (b) Heatmap visualization of states visited by the learned policy (averaged over 100 episodes) (c) Filmstrip of an episode (more details in Sec. 4). Mounted at the bottom edge of this box is a blade consisting of 9 proportional flow control valves (SMC PVQ 30), each attached to a nozzle facing upwards. An LED strip is mounted on the remaining three sides to evenly illuminate the interior of the box. Objects, such as the colored table tennis balls used in this work, can be placed within the space inside the box, so that their state can be controlled via the airflow. All valves share a common air supply that is hooked up to an air pump and fed via the proportional control valves at 6 bar. Because all the nozzles are connected to a single pump, the supply pressure, and consequently the flow across the nozzles, drops when multiple valves are opened simultaneously. This cross-coupling has been added intentionally to increase the complexity of the system behaviour. Further, the system can only measure the overall supply pressure and not the pressure or flow at each valve. Communication with the valves and sensors is realized through EtherCAT, a real-time Ethernet protocol providing synchronization between the individual nozzles. The control system runs on an Intel i7-based Beckhoff industrial PC running Linux and the EtherLab EtherCAT master (Ingenieurgemeinschaft IgH GmbH, 2024). A machine vision camera (BASLER Ace acA1920-40gc) is attached via GigE Ethernet and captures RGB images of the interior of the box. While the underlying EtherCAT bus runs at higher rates, a control rate of 20 Hz has been used for the experiments described here. 2.1. What Makes Box o' Flows a Hard Problem? The Box o' Flows brings to light several key challenges in controlling real-world systems with complex dynamics. As a motivating example, consider a simple setting with three colored balls placed inside the box, where one must design a control policy to maximize the height of one of the balls, with the others being distractors, i.e. their motion is not constrained (for reference, Fig. 3(c) shows behavior learned by our framework). While it may intuitively seem straightforward to hand-design a controller (e.g. maximally open all valves), the nature of the Box o' Flows makes this hard in practice. First, the cross-coupling between actuators due to the shared air supply means that maximally opening all valves will not work for this task, since the pressure per valve will drop. This relation is also hard to model and changes unpredictably over time due to practical issues such as oil accumulation. Second, in the Box o' Flows there is a less direct relationship from the actuator space to the state space than in a standard robotic system. The non-steady dynamics of the emerging flow given an actuation input are highly complex and stochastic, especially as the objects interact with each other, and the controller must account for this.
Moreover, current methods for accurately simulating non-steady flows require large amounts of compute which precludes techniques like sim-to-real RL that rely on cheap simulated data. Third, the system is highly under-observed as we can not directly measure the flow field inside the box, but only the supply pressure. One can only attempt to recover this information from a history of images of object motion from the camera stream. Finally, real-world data collection is a limiting factor. The current setup can collect approximately 1M environment steps per day, thus, experiments must be designed carefully for efficient data use. From the above, it is clear that hand-designing controllers is non-trivial even in simple settings, and model-based techniques that rely on accurate system identification or simulation can be prohibitively expensive. It is therefore more promising to consider efficient data-driven approaches that can overcome these constraints. 3. Methods We focus on sample-efficient, model-free RL algorithms that can facilitate learning control policies from limited real-world experience, both via online interactions and offline datasets. To this end, we leverage a high performance off policy actor-critic algorithm, Maximum Aposteriori Policy Optimization (MPO) (Abdolmaleki et al., 2018a,b). At iteration k, MPO updates the parameters \u03d5 and \u03b8 of the critic Q\u03c0k \u03d5 and policy \u03c0k \u03b8(\u00b7|s) respectively by optimizing min \u03d5 \u0010 rt + \u03b3Q\u03c0k\u22121 \u03d5\u2032 (st+1, at+1 \u223c\u03c0k\u22121) \u2212Q\u03c0k \u03d5 (st, at) \u0011 (1) \u03c0k+1 \u03b8 = arg min E\u00b5 [KL(q(a|s)||\u03c0\u03b8((a|s)))] (2) where q(a|s) \u221dexp(Qk \u03d5(s, a)\u00b5/\u03b2)) is a non-parametric estimate of the optimal policy given a temperature \u03b2, and KL (q(\u00b7|s)||\u03c0(\u00b7|s)) is the KL divergence, and \u00b5 is the distribution of states stored in a replay buffer. The efficient off-policy updates enable MPO to demonstrate sample-efficient learning in high dimensional continuous control tasks. We refer the reader to Abdolmaleki et al. (2018a) for a detailed derivation of the update rules. 5 \fReal-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning Offline RL: Since Box o\u2019 Flows is distinct from existing robotic setups, it can be a priori unknown what reward functions can lead to desired behaviors with online RL. This problem is aggravated by the lack of simulation and constrained data collection rates. Thus, it is vital to be able to to re-use prior experience to test hypotheses about new rewards. To this end, we focus on the offline RL paradigm that enables learning effective policies from logged datasets without further exploration (Levine et al., 2020). To deal with limited data coverage, modern offline RL algorithms (Kumar et al., 2020; Cheng et al., 2022) rely on a concept of pessimism under uncertainty by optimizing performance lower bounds, such that the agent is penalized for choosing actions outside the data support. The actor update of MPO can be easily adapted to the offline setting. Given a dataset of transitions D = {(si, airi, si+1)}N i=1 collected by a behavior policy \u00b5B, we can modify the distribution of states in Eq. 2 from \u00b5 to \u00b5B (state distribution in D) and non-parametric optimal policy to q(a|s) \u221dexp(Qk \u03d5(s, a)\u00b5B/\u03b2). The actor update thus encourages reward maximization while staying close to \u00b5B. 
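The sketch below gives a minimal, sample-based rendering of the policy-improvement step in Eq. (2): actions are weighted by exp(Q/beta) and the parametric policy is fit by weighted log-likelihood, which is the KL projection. Network sizes, beta, and the data are made up, and the full agent of Abdolmaleki et al. (2018a,b) additionally tunes the temperature and enforces trust-region constraints that are omitted here; the comments also note how the same step is reused in the offline variant just described.

```python
import torch

# Minimal, illustrative sketch of the sample-based improvement + KL projection
# in Eq. (2). Sizes, beta, and data are hypothetical; temperature tuning and
# trust-region constraints from the full MPO agent are omitted.
obs_dim, act_dim, batch, beta = 16, 9, 64, 0.1

class GaussianPolicy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = torch.nn.Sequential(torch.nn.Linear(obs_dim, 64), torch.nn.ReLU())
        self.mean = torch.nn.Linear(64, act_dim)
        self.log_std = torch.nn.Parameter(torch.zeros(act_dim))

    def dist(self, states):
        h = self.trunk(states)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

policy = GaussianPolicy()
critic = torch.nn.Sequential(torch.nn.Linear(obs_dim + act_dim, 64),
                             torch.nn.ReLU(), torch.nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def actor_step(states, actions):
    """One improvement step given (state, action) samples.

    Online MPO: actions are drawn from the current policy at replay states.
    Offline (CRR-style) variant: (states, actions) come straight from the
    logged dataset, pulling the policy toward high-Q actions the behavior
    policy actually took.
    """
    with torch.no_grad():
        q = critic(torch.cat([states, actions], dim=-1)).squeeze(-1)
        w = torch.softmax(q / beta, dim=0)     # self-normalized exp(Q/beta) weights
    logp = policy.dist(states).log_prob(actions).sum(-1)
    loss = -(w * logp).sum()                   # weighted maximum likelihood
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

# Online-style usage with stand-in replay states and policy samples:
states = torch.randn(batch, obs_dim)
actions = policy.dist(states).sample()
actor_step(states, actions)
```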
This offline variant is a special case of Critic Regularized Regression (CRR) (Wang et al., 2020) and can be implemented in a common framework with MPO. In our setting, we re-label data from prior online RL experiments with new rewards (in line with Davchev et al., 2021; Yarats et al., 2022; Lambert et al., 2022; Tirumala et al., 2023), and train a CRR agent offline that is tested intermittently on the real system to validate policy performance. The minimal use of hardware enables us to test multiple policies instead of just one that continuously trains online. We now present our main empirical results. 4. Experiments We use a suite of dynamic control tasks to test the efficacy of our RL framework and study the physical capabilities of the Box o' Flows system. Setup: To delineate the interplay between hardware capabilities and algorithm performance, we keep our RL agent (Sec. 3) fixed across all tasks. We use a distributed learning framework akin to Hoffman et al. (2020), and select hyperparameters using a candidate task where optimal behavior is qualitatively known (see below). The actor and critic are represented by feedforward neural networks, and the object state by a history of pixel xy coordinates measured from the vision system via a blob detector. The 9-dim action space represents the degree of valve opening in the range [0, 1]. Object locations are reset using random air bursts at the beginning of every episode (1000 steps long at 20 Hz). We describe desired behaviors via simple rewards based on desired object configurations, which gives the RL agent the freedom to find interesting control strategies. Next, we describe the tasks in detail. (A complete description of rewards and hyperparameters can be found in the supplementary material at https://sites.google.com/view/box-o-flows/home.) 4.1. Learning Dynamic Behaviors with Online RL Hovering with Distractors: We first consider the task of maximizing the height of a target ball (orange) in the presence of distractors (purple and green), and use it to select relevant hyperparameters. Intuitively, a near-optimal strategy is to place the distractors near a bottom corner and use the other valves to hover the target ball. However, as described in Sec. 2.1, complex actuator coupling and non-steady flow patterns make it hard to hand-design such a controller. We test whether our MPO agent can recover this intuitive policy by training it using a reward proportional to the pixel y coordinate of only the target ball, normalized to [0.0, 1.0] (based on maximum and minimum coordinate values). Fig. 3(a) presents the reward obtained over environment steps during training, showing that the agent is able to obtain near-optimal reward in about 1M steps. In Fig. 3(b), we visualize the learned behavior via coarsely discretized heatmaps of ball locations over the last 100 training episodes, which show that the agent successfully learns the intuitive policy of maintaining the target ball near the top while pushing the distractors near the bottom left. Figure 4: Task: orange in the right half, purple in the left half. (a) Reward curve and (b) heatmap visualization of states visited by the learned policy (averaged over 100 episodes) (c) Filmstrip of an episode.
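The hovering reward above, and the rearrangement and stacking rewards described next, all follow the same minimal pattern: simple terms computed from tracked pixel coordinates, combined as products so that every sub-goal must be satisfied. The sketch below only illustrates that pattern; the exact functional forms, thresholds, and scales are given in the supplementary material, not here, so every expression below is a guess made for illustration.

```python
import numpy as np

# Hedged illustration only: the precise reward definitions live in the paper's
# supplementary material. These functions merely show the "product of simple
# per-object terms on pixel coordinates" pattern described in the text.
def hover_reward(y_target, y_min, y_max):
    """Normalized height of the target ball (hovering task); assumes y grows upward."""
    return float(np.clip((y_target - y_min) / (y_max - y_min), 0.0, 1.0))

def rearrange_reward(x_orange, x_purple, x_mid, half_width):
    """Product of per-ball terms, each 1 when its ball is inside the desired half."""
    orange_right = np.clip((x_orange - x_mid) / half_width, 0.0, 1.0)
    purple_left = np.clip((x_mid - x_purple) / half_width, 0.0, 1.0)
    return float(orange_right * purple_left)

def stack_reward(xy_orange, xy_purple, target_gap, scale=50.0):
    """Product of a height-offset term and an x-alignment term (stacking task)."""
    height_term = np.exp(-abs((xy_orange[1] - xy_purple[1]) - target_gap) / scale)
    align_term = np.exp(-abs(xy_orange[0] - xy_purple[0]) / scale)
    return float(height_term * align_term)

print(hover_reward(420.0, 0.0, 640.0))
print(rearrange_reward(500.0, 150.0, 320.0, 320.0))
print(stack_reward(np.array([300.0, 400.0]), np.array([310.0, 330.0]), target_gap=70.0))
```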
Object Rearrangement: Next, we consider a harder task where the agent must place two target balls (orange and purple) anywhere in the right and left halves of the box respectively, with the green ball being a distractor. Here, it is hard to even intuitively reason about optimal behavior as it depends on the initial object locations which are randomized. We provide our agent a sparse reward equal to the product of the horizontal distances from the respective goal regions, which forces it to accomplish both tasks. As shown in Fig. 4, we observe that this task is much easier for RL, and our agent is able to achieve near-optimal reward within approximately 200k environment steps. Interestingly, the agent also learns a stable strategy of switching off controls once the balls are in the target halves as can be seen in the heatmap visualizations in Fig. 4(b) and filmstrip Fig. 4(c). Stacking: To test if our agent can exploit the airflow at a finer level, we consider a more challenging task of stacking two balls on top of each other. We again provide the agent a product of two simple rewards: keep the y-coordinate of the orange over purple by a fixed value and align x-coordinates. We observe that the agent not only learns to successfully stack 7 \fReal-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning 0.0 0.2 0.4 0.6 0.8 1.0 Environment Steps 1e6 100 200 300 400 Reward (a) (b) (c) Figure 5: Task: Stack orange ball over purple (a) Reward curve. (b) Heatmap visualization of states visited by learned policy (averaged over 100 episodes). (c) Filmstrip of an episode. the balls Fig. 5(a), but also discovers an interesting strategy to always align them against the left wall of box as it is easier to control airflow near the walls (Fig. 5(b)). 4.2. Learning Goal-conditioned Policies to Analyze Reachability We wish to characterize what parts of the Box o\u2019 Flows are reachable given the actuator configuration and limits. Since, it is not possible analytically, we leverage our RL agent by designing a goal reaching task where the agent must position a ball to randomly chosen pixel targets. We add the goal location to the observation, and train MPO for 1.2M environment steps (1200 episodes). We visually analyze reachability by plotting a coarsely discretized heatmap of reaching errors for different target regions (Fig. 6). The intensity of each bin is proportional to the cumulative reaching error for every training episode during which the target was in that bin (normalized such that black is minimum error and red is maximum). This accounts for noise due to policy training and exploration, target height and inherent system stochasticity. The analysis clearly shows that target locations closer to the bottom and center are easier to reach in general. Also, targets near the bottom right are harder than bottom-left and bottom-center, which reveals an imbalance in the airflow through different nozzles. Interestingly, targets closer to the walls are also easily reachable since the agent can better exploit the airflow. These findings also align with the behavior learned in the stacking task. The hardest regions to reach are at the top, especially top-left and top-right corners. 4.3. Re-using Past Experience via Offline RL As discussed in Sec. 
3, we perform a preliminary experiment to study how offline RL from logged datasets obtained from online RL experiments can be used to test new reward functions. If the logged data has sufficient coverage (i.e. the target task is close enough), one can expect the learned policy from offline RL to be representative of what we could obtain by running online RL from scratch. Specifically, we use data from the task of hovering with distractors and re-label the rewards to additionally constrain the ball to remain close to the vertical center line. We then train CRR (Sec. 3) and evaluate the current learner's policy intermittently on the real system. We show the learning curve in Fig. 7(a) and a heatmap of the states visited by the learned policy in Fig. 7(b). A stark difference is observed compared to the heatmap in Fig. 3(b): the states concentrate near the center as desired, while the distractors sit in different bottom corners. This experiment provides a promising first result for applying offline RL to study complex dynamical systems like Box o' Flows.

[Figure 6: (a) Pixel intensity is proportional to the cumulative error for episodes in which the target was in that pixel's bin; the error is the average distance between the ball and the target in the last 200 episode steps. (b) Filmstrip of an episode.]

[Figure 7: Task: maximize the height of the orange ball while aligning it along the vertical center line in the presence of distractors. (a) Reward curve, (b) heatmap visualization of states visited by the learned policy (averaged over 100 episodes), (c) filmstrip of an episode.]

5. Related Work

Deep RL for Complex Physical Systems: In addition to the real-world robotics discussed in Sec. 1, RL has also been applied to control other complex systems, such as data center cooling (Lazic et al., 2018). Degrave et al. (2022) apply deep RL to control Tokamak plasmas in nuclear fusion reactors. This is a high-dimensional dynamic control problem; however, they rely on simulation in a constrained regime to learn policies that transfer to the real system.

Machine Learning for Fluid Dynamics: Machine learning and deep RL are being used extensively for the modelling and control of fluid dynamical systems. We provide an overview here and refer the reader to the review papers by Brunton et al. (2020) and Larcher and Hachem (2022) for a comprehensive treatment. 1. Flow Modelling & Control: Machine learning is leveraged to accelerate high-fidelity numerical simulations of fluid dynamics (Kochkov et al., 2021) and for automatic turbulence modelling (Novati et al., 2021). Deep RL has also been applied to active flow control (Fan et al., 2020) and deformable object manipulation (Xu et al., 2022). The work by Ma et al. (2018) on rigid body manipulation via directed fluid flow is the closest to ours; however, they are limited to simulation with several approximations for computational efficiency. 2. Modelling Biological Systems: Deep RL can aid the understanding of the physical mechanisms and decision-making processes underlying animal behavior. Verma et al. (2018) combine RL with high-fidelity fluid simulation to study how schooling helps fish reduce energy expenditure. However, running such simulations requires computational resources that are prohibitive for most practitioners.
The flight behavior of birds is also studied to design agile UAVs. Tedrake et al. design a glider that demonstrates perching under high angle of attack and Reddy et al. (2016) learn energy efficient soaring behaviors by creating numerical models of turbulent thermal convective flows based on bird flight. Offline RL: Offline RL aims to learn competitive policies using logged data without further exploration, and consists of both model-free (Kumar et al., 2020; Cheng et al., 2022; Kostrikov et al., 2021), and model-based (Yu et al., 2021; Bhardwaj et al., 2023; Kidambi et al., 2020) variants. A key challenge is offline policy evaluation under limited data coverage (Levine et al., 2020) which is generally solved by importance sampling based approaches (Precup, 2000). We tackle this via intermittent evaluations of the learner\u2019s policy on the real system. 6. Discussion We presented Box o\u2019 Flows, a novel benchtop fluid-dynamic control system geared towards real-world RL research. We empirically demonstrated how model-free RL can be used to learn diverse dynamic behaviors directly on hardware, and the applicability of offline RL for efficient re-use of past experience. However, the capabilities of the learning agent can be further enhanced. First, model-based RL methods can be utilized to enhance the understanding of system dynamics and share data among tasks. Second, while our preliminary experiment with offline RL offers promising results, we expect we can improve performance by leveraging methods such as Cheng et al. (2022) that provide robust policy improvement guarantees. Last but not least, there are many variants of such table top systems that can be realized fairly straightforwardly to vary the difficulty and scope of the experiment. 10 \fReal-World Fluid Directed Rigid Body Control via Deep Reinforcement Learning Acknowledgments The authors would like to thank IgH for their contribution to the design and engineering of the Box o\u2019Flows and the Google DeepMind London Robotics Lab team for engineering and operational support." |
| }, |
| { |
| "url": "http://arxiv.org/abs/2312.00054v2", |
| "title": "Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective", |
| "abstract": "Inverse Reinforcement Learning (IRL) -- the problem of learning reward\nfunctions from demonstrations of an \\emph{expert policy} -- plays a critical\nrole in developing intelligent systems. While widely used in applications,\ntheoretical understandings of IRL present unique challenges and remain less\ndeveloped compared with standard RL. For example, it remains open how to do IRL\nefficiently in standard \\emph{offline} settings with pre-collected data, where\nstates are obtained from a \\emph{behavior policy} (which could be the expert\npolicy itself), and actions are sampled from the expert policy.\n This paper provides the first line of results for efficient IRL in vanilla\noffline and online settings using polynomial samples and runtime. Our\nalgorithms and analyses seamlessly adapt the pessimism principle commonly used\nin offline RL, and achieve IRL guarantees in stronger metrics than considered\nin existing work. We provide lower bounds showing that our sample complexities\nare nearly optimal. As an application, we also show that the learned rewards\ncan \\emph{transfer} to another target MDP with suitable guarantees when the\ntarget MDP satisfies certain similarity assumptions with the original (source)\nMDP.", |
| "authors": "Lei Zhao, Mengdi Wang, Yu Bai", |
| "published": "2023-11-29", |
| "updated": "2024-02-10", |
| "primary_cat": "stat.ML", |
| "cats": [ |
| "stat.ML", |
| "cs.AI", |
| "cs.LG" |
| ], |
| "label": "Original Paper", |
| "paper_cat": "Offline AND Reinforcement AND Learning", |
| "gt": "Is Inverse Reinforcement Learning Harder than Standard Reinforcement Learning? A Theoretical Perspective", |
| "main_content": "Introduction Inverse Reinforcement Learning (IRL) aims to recover reward functions from demonstrations of an expert policy (Ng and Russell, 2000; Abbeel and Ng, 2004), in contrast to standard reinforcement learning which aims to learn optimal policies for a given reward function. IRL has applications in numerous domains such as robotics (Argall et al., 2009; Finn et al., 2016), target-driven navigation tasks (Ziebart et al., 2008; Sadigh et al., 2017; Kuderer et al., 2015; Pan et al., 2020; Barnes et al., 2023), game AI (Ibarz et al., 2018; Vinyals et al., 2019), and medical decision-making (Woodworth et al., 2018; Hantous et al., 2022). The learned reward functions in these applications are typically used for replicating the expert behaviors in similar or varying downstream environments. Broadly, the problem of learning reward functions from data is of rising importance beyond the scope of IRL, and is used in procedures such as Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017) for aligning large language models (Ouyang et al., 2022; Bai et al., 2022; OpenAI, 2023; Touvron et al., 2023). Despite the success of IRL in practical applications (Agarwal et al., 2020; Finn et al., 2016; Sadigh et al., 2017; Kuderer et al., 2015; Woodworth et al., 2018; Wu et al., 2020; Ravichandar et al., 2020; Vasquez et al., 2014), theoretical understanding is still in an early stage and presents several unique challenges, especially when compared with standard RL (finding optimal policy under a given reward) where the theory is more established. First, the solution is inherently non-unique for any IRL problem\u2014For example, for any given \u2217University of Science and Technology of China. Email: zl20071451@mail.ustc.edu.cn. \u2020Princeton University. Email: mengdiw@princeton.edu. \u2021Salesforce AI Research. Email: yu.bai@salesforce.com. 1 arXiv:2312.00054v2 [stat.ML] 10 Feb 2024 \fexpert policy, zero reward is always a feasible solution (making the expert policy optimal under this reward). A sensible definition of IRL would require not just recovering a single reward function but instead a set of feasible rewards (Metelli et al., 2021; Lindner et al., 2023). Second, theoretical results for IRL is lacking even for some standard learning settings, such as learning from an offline dataset of trajectories from the expert policy (akin to an imitation setting). Finally, as a more nuanced challenge (but related to both challenges above), so far there is no commonly agreed performance metric for measuring the distance between the estimated reward set and the ground truth reward set. Existing performance metrics in the literature either require strong feedback such as a simulator (Metelli et al., 2021, 2023), or do not require the returned solution to be aware of the transition dynamics Lindner et al. (2023) (see Section 3.3 for a discussion). These challenges motivate the following open question: Is IRL more difficult than standard RL? In this paper, we theoretically study IRL in standard episodic tabular Markov Decision Processes without Rewards (MDP\\R\u2019s) under vanilla offline and online learning settings. Our contributions can be summarized as follows. \u2022 The goal of IRL is to output a set of rewards that approximate the ground truth set of feasible rewards, i.e. rewards under which the expert policy is optimal. 
We define new metrics for both reward functions and for IRL using the concept of reward mapping, which can be viewed as a \u201cgenerating function\u201d of the (ground truth) set of feasible rewards (Section 2.1 & 3.1). We show that our metrics are stronger / more appropriate than existing metrics in certain aspects (Section 3.3). \u2022 We show that any estimated reward that is similar in our metric and satisfies monotonicity with respect to the true reward admits an approximate planning/learning guarantee (Section 3.2). \u2022 We design an algorithm, Reward Learning with Pessimism (RLP) that performs IRL from any given offline demonstration dataset (Section 4). Our algorithm returns an estimated reward mapping that is \u03f5-close in our metric and satisfies monotonicity, and requires a number of episodes that is polynomial in the size of the MDP as well as the single-policy concentrability coefficient between the evaluation policy and the behavior policy that generated the states of the offline dataset. To our best knowledge, this is the first provably sample-efficient algorithm for IRL in the standard offline setting. Technically, the algorithm seamlessly adapts the pessimism principle from the offline RL literature to achieve the desired monotonicity and closeness conditions, demonstrating that IRL is \u201cnot much harder than standard RL\u201d in a certain sense. \u2022 We next design an algorithm Reward Learning with Exploration (RLE), which operates in a natural online setting where the learner can both actively explore the environment and query the expert policy, and achieves IRL guarantee in a stronger metric from polynomial samples (Section 5). Algorithm RLE builds on a simple reduction to reward-free exploration (Jin et al., 2020; Li et al., 2023) and the RLP algorithm. \u2022 We establish sample complexity lower bounds for both the offline and online settings, showing that our upper bounds are nearly optimal up to a small factor (Section 4.4 & 5.3). \u2022 We extend our results to a transfer learning setting, where the learned reward mapping is transferred to and evaluated in a target MDP\\R different from the source MDP\\R. We provide guarantees for RLP and RLE under certain similarity assumptions between the source and target MDP\\Rs (Section 6 & Appendix I). 2 \f1.1 Related work Inverse reinforcement learning Inverse reinforcement learning (IRL) was first proposed by Ng and Russell (2000) and since then significantly developed in various follow-up approaches such as feature matching (Abbeel and Ng, 2004), maximum margin (Ratliff et al., 2006), maximum entropy (Ziebart et al., 2008), relative entropy (Boularias et al., 2011), and generative adversarial imitation learning (Ho and Ermon, 2016). Other notable approaches include Bayesian IRL (Ramachandran and Amir, 2007) which subsume IRL, and the reduction method (Brantley et al., 2019). IRL has been successfully applied in many domains including target-driven navigation tasks (Ziebart et al., 2008; Sadigh et al., 2017; Kuderer et al., 2015; Pan et al., 2020), robotics (Argall et al., 2009; Finn et al., 2016; Hadfield-Menell et al., 2016; Kretzschmar et al., 2016; Okal and Arras, 2016; Kumar et al., 2023; Jara-Ettinger, 2019), medical decision-making (Woodworth et al., 2018; Hantous et al., 2022; Gong et al., 2023; Yu et al., 2019; Chadi and Mousannif, 2022), and game AI (Finn et al., 2016; Fu et al., 2017; Qureshi et al., 2018; Brown et al., 2019). 
Theoretical understandings of IRL Despite their successful applications, theoretical understandings of IRL are still in an early stage. Recently, Metelli et al. (2021) pioneered the investigation of the sample complexity of IRL under the simulator (generative model) setting where the learner can directly query feedback from any (state, action) pair. This work was later extended by Metelli et al. (2023), who introduced a framework based on Hausdorff-based metrics for measuring distances between reward sets, examined relationships between different metrics, and provided corresponding lower bounds. However, their results critically rely on the simulator setting and do not generalize to more realistic offline/online learning settings. Dexter et al. (2021) also performed a theoretical analysis for IRL in the simulator setting with continuous states and discrete actions. The recent work of Lindner et al. (2023) considers IRL in the online setting where the learner can interact with the MDP\\R in an online fashion, which is closely related to our results for the online setting. Compared with our metric, their metric is defined for an estimated IRL problem (instead of an estimated reward set). Further, their metric does not effectively take into account the estimated transitions, which can lead to a family of counter-exmaples where the estimated IRL problem achieves perfect recovery under their metric, but the induced reward sets are actually far from the true feasible reward set in our metric (cf. Section 3.3 for a detailed discussion). Our work improves upon the above works by introducing new performance metrics for IRL, and providing new algorithms for standard learning settings such as offline learning. Relationship with standard RL theory Our work builds upon various existing techniques from the sample-efficient RL literature to design our algorithms and establish our theoretical results. For the offline setting, our algorithm and analysis build upon the pessimism principle and the single-policy concentrability condition commonly used in offline RL (Kidambi et al., 2020; Jin et al., 2021; Yu et al., 2020; Kumar et al., 2020; Rashidinejad et al., 2021; Xie et al., 2021, 2022). For the online setting, we adapt the reward-free learning algorithm of Li et al. (2023) to find a policy that achieves a certain concentrability-like condition with respect to all policies. We note theoretical results on imitation learning (Abbeel and Ng, 2004; Ratliff et al., 2006; Ziebart et al., 2008; Levine et al., 2011; Fu et al., 2017; Chang et al., 2021) and RLHF (Zhu et al., 2023a,b; Wang et al., 2023; Zhan et al., 2023), which are related to but different from (and do not imply) our results. Additional related work is discussed in Appendix A. 2 Preliminaries Markov Decision Processes without Reward We consider episodic Markov Decision Processes without Reward (MDP\\R), specified by M = (S, A, H, P), where S is the state space with |S| = S, A is the action 3 \fspace with |A| = A, H is the horizon length, P = {Ph}h\u2208[H] where Ph(\u00b7|s, a) \u2208\u2206(S) is the transition probability at step h. Without loss of generality, we assume that the initial state is deterministically some s1 \u2208S. Reward functions A reward function r : [H] \u00d7 S \u00d7 A \u2192[\u22121, 1] maps a state-action-time step triplet (h, s, a) to a reward rh(s, a). Given an MDP\\R M and a reward function r, we denote the MDP induced by M and r as M \u222ar. 
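To make the objects in the preliminaries concrete, here is a minimal tabular representation of an episodic MDP\R M = (S, A, H, P) and a bounded reward function r : [H] x S x A -> [-1, 1]. This is only an illustrative data structure under the assumption that states and actions are indexed by integers and the deterministic initial state is index 0; it is not code from the paper.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class TabularMDPNoReward:
    """Episodic MDP without reward (MDP\\R): M = (S, A, H, P).

    P[h, s, a] is the next-state distribution P_h(. | s, a); the initial state
    is taken to be index 0, matching the convention of a deterministic s_1.
    """
    S: int                 # number of states
    A: int                 # number of actions
    H: int                 # horizon
    P: np.ndarray          # shape (H, S, A, S), each row a probability vector

def random_reward(H: int, S: int, A: int, rng=None) -> np.ndarray:
    """A reward function r_h(s, a) in [-1, 1], stored as an (H, S, A) array."""
    rng = np.random.default_rng(0) if rng is None else rng
    return rng.uniform(-1.0, 1.0, size=(H, S, A))
```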
A policy \u03c0 = {\u03c0h(\u00b7 | s)}h\u2208[H],s\u2208S, where \u03c0h : S \u2192\u2206(A) maps a state to an action distribution. Values and visitation distributions A policy \u03c0 = (\u03c0h)h\u2208[H], where each \u03c0h(\u00b7|s) \u2208\u2206(A) for each s \u2208S. Let supp(\u03c0h(\u00b7|s)) := {a : \u03c0h(a|s) > 0} denote the support set of \u03c0h(\u00b7|s). For any policy \u03c0 and any reward function r, we define the value function V \u03c0 h (\u00b7; r) : S \u2192R at each time step h \u2208[H] by the expected cumulative reward: V \u03c0 h (s; r) = E\u03c0 hPH h\u2032=h rh\u2032(sh\u2032, ah\u2032) | sh = s i , where E\u03c0 denotes the expectation with respect to the random trajectory induced by \u03c0 in the MDP\\R, that is, (s1, a1, s2, a2, ..., sH, aH), where ah \u223c\u03c0h(sh), rh = rh(sh, ah), sh+1 \u223cPh(\u00b7 | sh, ah). Similarly, we denote the Q-function at time step h as : Q\u03c0 h(s, a; r) = E\u03c0 hPH h\u2032=h rh\u2032(sh\u2032, ah\u2032) | sh = s, ah = a i . For any reward r, the corresponding advantage function A\u03c0 h(\u00b7; r) : S \u00d7 A \u2192R is defined as A\u03c0 h(s, a; r) := Q\u03c0 h(s, a; r) \u2212V \u03c0 h (s; r) and we say a policy is an optimal policy of M \u222ar if A\u03c0 h(s, a; r) \u22640 holds for all (h, s, a) \u2208[H] \u00d7 S \u00d7 A1. Additionally, we represent the set of all optimal policies for M \u222ar as \u03a0\u22c6 M\u222ar and denote the set of all deterministic policies for M \u222ar as \u03a0det M\u222ar. We introduce d\u03c0 h to denote the state(-action) visitation distributions associated with policy at time step h \u2208[H]: d\u03c0 h(s) := P(sh = s|\u03c0) and d\u03c0 h(s, a) := P(sh = s, ah = a|\u03c0). Lastly, we define the operators Ph and Vh by [PhVh+1](s, a) := E[Vh+1(sh+1)|sh = s, ah = a] and [VhVh+1](s, a) := Var[Vh+1(sh+1)|sh = s, ah = a] applying to any value function Vh+1 at time step h + 1. In this paper, we will frequently employ b Ph and b Vh to represent empirical counterparts of these operators constructed based on estimated models. For any function f : S \u2192R, define its infinity norm as \u2225f\u2225\u221e:= sups\u2208S |f(s)| (and we define similarly for any f : S \u00d7 A \u2192R). 2.1 Inverse Reinforcement Learning An Inverse Reinforcement Learning (IRL) problem is denoted as a pair (M, \u03c0E), where M is an MDP\\R and \u03c0E is a policy called the expert policy. The goal of IRL is to interact with (M, \u03c0E), and recover reward function r\u2019s that are feasible for (M, \u03c0E), in the sense that \u03c0E an optimal policy for MDP M \u222ar. Reward mapping Noting that learning one feasible reward function is trivial (the zero reward r \u22610 is feasible for any \u03c0E), we consider the stronger goal of recovering the set of all feasible rewards, which can be characterized by an explicit formula by the classical result of Ng and Russell (2000). Here we restate this result through the concept of a reward mapping. Let Rall denote the set of all possible reward functions, and Rfeas [\u2212B,B] := {r \u2208Rall : r is feasible and |r| \u2264B} denote the set of all feasible rewards bounded by B for any B > 0. Let V := V1 \u00d7 \u00b7 \u00b7 \u00b7 \u00d7 VH and A := A1\u00d7\u00b7 \u00b7 \u00b7\u00d7AH, where Vh := \b Vh \u2208RS | \u2225Vh\u2225\u221e\u2264H \u2212h + 1 \t and Ah := n Ah \u2208RS\u00d7A \u22650 | \u2225Ah\u2225\u221e\u2264H \u2212h + 1 o denote the set of all possible \u201cvalue functions\u201d and \u201cadvantage functions\u201d respectively. 
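As an illustration of the value functions and visitation distributions just defined, the sketch below performs exact policy evaluation by backward recursion and computes d^pi_h(s, a) by forward recursion, using the tabular `TabularMDPNoReward` structure sketched earlier; array shapes and helper names are assumptions made for illustration only.

```python
import numpy as np

def policy_evaluation(mdp, r, pi):
    """Exact Q^pi_h and V^pi_h for reward r in a tabular MDP\\R.

    pi[h, s] is a distribution over actions; r has shape (H, S, A).
    Returns Q of shape (H, S, A) and V of shape (H, S).
    """
    H, S, A = r.shape
    Q = np.zeros((H, S, A))
    V = np.zeros((H + 1, S))                       # V_{H+1} = 0
    for h in reversed(range(H)):
        Q[h] = r[h] + mdp.P[h] @ V[h + 1]          # r_h + [P_h V_{h+1}](s, a)
        V[h] = np.einsum("sa,sa->s", pi[h], Q[h])  # V^pi_h(s)
    return Q, V[:H]

def visitation(mdp, pi):
    """State-action visitation distributions d^pi_h(s, a), with s_1 = state 0."""
    d = np.zeros((mdp.H, mdp.S, mdp.A))
    d_state = np.zeros(mdp.S)
    d_state[0] = 1.0
    for h in range(mdp.H):
        d[h] = d_state[:, None] * pi[h]                     # d^pi_h(s, a)
        d_state = np.einsum("sa,sat->t", d[h], mdp.P[h])    # next-state marginal
    return d
```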
Definition 2.1 (Reward mapping). The (ground truth) reward mapping R\u22c6: V \u00d7 A 7\u2192Rall of an IRL 1This definition of optimal policy requires \u03c0 to be optimal starting from any time step h and state s \u2208S (not necessarily visitable ones), which is stronger than the standard definition but is commonly adopted in the IRL literature (Ng and Russell, 2000). 4 \fproblem (M, \u03c0E) is the mapping that maps any (V, A) \u2208V \u00d7 A to the following reward function r: rh(s, a) = [R\u22c6(V, A)]h(s, a) := \u2212Ah(s, a) (2.1) \u00d7 1 \b a / \u2208supp \u0000\u03c0E h(\u00b7 | s) \u0001\t + Vh(s) \u2212[PhVh+1](s, a), where we recall that Ph is the transition probability of M at step h \u2208[H]. With the definition of reward mapping ready, we now restate the classical result of Ng and Russell (2000), which shows that the reward mapping R generates a set of rewards that is a superset of Rfeas [\u22121,1]\u2014the set of all [\u22121, 1]-bounded feasible rewards\u2014by ranging over (V, A) \u2208V \u00d7 A. Lemma 2.2 (Reward mapping produces all bounded feasible rewards). The set of rewards R\u22c6(V \u00d7 A) = {R(V, A) : (V, A) \u2208V \u00d7 A} induced by R\u22c6satisfies Rfeas [\u22121,1] \u2286R\u22c6(V \u00d7 A) \u2286Rfeas [\u22123H,3H]. (2.2) In words, R\u22c6always produces feasible rewards bounded in [\u22123H, 3H], and the set R\u22c6(V \u00d7 A) contains (is a superset of) all [\u22121, 1]-bounded feasible rewards. As IRL is concerned precisely with the recovery of the set Rfeas [\u22121,1], we consider the recovery of the reward mapping R\u22c6itself as a natural learning goal\u2014An accurate estimator b R \u2248R\u22c6guarantees b R(V, A) \u2248R\u22c6(V, A) for any (V, A) \u2208V \u00d7 A, and thus imply accurate estimation of R\u22c6(V \u00d7 A) in precise ways which we specify in the sequel. We will also consider recovering the reward mapping on a subset \u0398 \u2282V \u00d7 A. We use the following standard definition of covering numbers to measure the capacity of such \u0398\u2019s: Definition 2.3 (Covering number). The \u03f5-covering number of \u0398 \u2282V \u00d7 A is defined as N(\u0398; \u03f5) := maxh\u2208[H] N(V \u0398 h ; \u03f5), where V \u0398 h := {Vh : (V, A) \u2208\u0398} denotes the restriction of \u0398 onto Vh, and N(V \u0398 h ; \u03f5) is the \u03f5-covering number of V \u0398 h in \u2225\u00b7\u2225\u221enorm. Note that log N(\u0398; \u03f5) \u2264min {log |\u0398|, O(S log(H/\u03f5))} by combining the (trivial) bound for the finite case and the standard covering number bound for \u0398 = V \u00d7 A (Vershynin, 2018). In addition, the left-hand side may be much smaller than the right-hand side if \u0398 admits additional structure (for example, if V \u0398 h lies in a low-dimensional subspace of RS). 3 Performance metrics for IRL 3.1 Metric for IRL We now define our performance metric for IRL based on the recovery of reward mapping R\u22c6. Fixing any MDP\\R M, we begin by defining our base metric d\u03c0 (indexed by a policy \u03c0) and dall between two rewards. Definition 3.1 (Base metric for rewards). We define the metric2 d\u03c0 (indexed by any policy \u03c0) between any pair of rewards r, r\u2032 \u2208Rall as d\u03c0(r, r\u2032) := sup h\u2208[H] Esh\u223c\u03c0|V \u03c0 h (sh; r) \u2212V \u03c0 h (sh; r\u2032)|. (3.1) We further define dall(r, r\u2032) := sup\u03c0 d\u03c0(r, r\u2032). 2Technically a semi-metric. 5 \fIn words, metric d\u03c0 compares the rewards r and r\u2032 when executing \u03c0. 
Concretely, (3.1) compares the difference in the value functions V \u03c0 h (\u00b7; r) and V \u03c0 h (\u00b7; r\u2032) averaged over the visitation distribution sh \u223c\u03c0, which is sensible for our learning settings as it takes into account the transition structure of M (compared with other existing metrics based the sup-distance over all states; cf. Section 3.3). The stronger metric dall takes the supremum of d\u03c0 over all policy \u03c0\u2019s. We now define our main metric D\u03c0 \u0398 for the recovery of reward mappings, which simply takes the supremum of d\u03c0 between all pairs of rewards induced by the two reward mappings using the same parameter (V, A) \u2208\u0398. Definition 3.2 (Metric for reward mappings). Given any policy \u03c0 and any parameter set \u0398, we define the metric2 D\u03c0 \u0398 between any pair of reward mappings R, R\u2032 as D\u03c0 \u0398(R, R\u2032) := sup (V,A)\u2208\u0398 d\u03c0(R(V, A), R\u2032(V, A)). (3.2) We further define Dall \u0398 (R, R\u2032) := sup\u03c0 D\u03c0 \u0398(R, R\u2032). (3.2) compares two reward mappings R and R\u2032 by measuring the distance between R(V, A) and R\u2032(V, A) using our base metric and taking the sup over all (V, A) \u2208\u0398. Another common choice in the IRL literature is the Hausdorff distance (based on some base metric) between the two sets R(V \u00d7 A) and R\u2032(V \u00d7 A) (Metelli et al., 2021, 2023; Lindner et al., 2023). We show that (3.2) is always stronger than the Haussdorff distance in the sense that a metric of the form (3.2) is greater or equal to the Hausdorff distance regardless of the base metric (Lemma D.3), and the inequality can be strict for some base metric (Lemma D.4). 3.2 Implications for learning with estimated reward For IRL, a natural desire for a base metric between rewards is that, a small metric between r and b r should imply that learning (planning) using reward b r in M should at most incur a small error when the true reward is r. The following result shows that our metric d\u03c0 satisfies such a desiderata. The proof can be found in Appendix D.5. Proposition 3.3 (Planning with estimated reward). Given an MDP\\R M, let r, b r be a pair of rewards such that (a) (Small d\u03c0 on near-optimal policy) d\u03c0(r, b r) \u2264\u03f5 for some \u00af \u03f5 near-optimal policy \u03c0 for MDP M \u222ar; (b) (Monotonicity) b rh(s, a) \u2264rh(s, a) for any (h, s, a) \u2208[H] \u00d7 S \u00d7 A. Then, letting b \u03c0 be any \u03f5\u2032 near-optimal policy for MDP M \u222ab r, i.e, V \u22c6 1 (s1; b r) \u2212V b \u03c0 1 (s1; b r) \u2264\u03f5\u2032, we have V \u22c6 1 (s1; r) \u2212V b \u03c0 1 (s1; r) \u2264\u03f5 + \u03f5\u2032 + 2\u00af \u03f5, (3.3) i.e. b \u03c0 is also (\u03f5 + \u03f5\u2032 + 2\u00af \u03f5) near-optimal for M \u222ar. Proposition 3.3 ensures that any estimated reward b r that satisfies (a) small D\u03c0 \u0398 and (b) monotonicity with respect to the true reward will incur a small error when used in planning. We emphasize that monotonicity is necessary in order for (3.3) to hold, similar to how pessimism is necessary for near-optimal learning in offline bandits/RL (Jin et al., 2021). Throughout the rest of the paper, we focus on designing IRL algorithms that satisfy (a) & (b). These guarantees can then directly yield planning/learning guarantees as corollaries by Proposition 3.3, and we will omit such statements. 
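For intuition, the base metric d^pi of Definition 3.1 can be computed exactly in the tabular case. The sketch below reuses the `policy_evaluation` and `visitation` helpers from the earlier sketch and is illustrative only: for each step h it averages the value-function gap over the visitation distribution of pi and takes the maximum over steps.

```python
import numpy as np

def d_pi(mdp, pi, r1, r2):
    """Base metric d^pi(r, r') from Definition 3.1 (tabular sketch)."""
    _, V1 = policy_evaluation(mdp, r1, pi)
    _, V2 = policy_evaluation(mdp, r2, pi)
    d = visitation(mdp, pi)                 # shape (H, S, A)
    d_state = d.sum(axis=2)                 # marginal d^pi_h(s)
    per_step = np.einsum("hs,hs->h", d_state, np.abs(V1 - V2))
    return float(per_step.max())            # sup over h of E_{s_h ~ pi} |V - V'|
```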
3.3 Relationship with existing metrics Our metrics d\u03c0 and dall differ from several metrics for IRL used in existing theoretical work, which we discuss here. 6 \fAlgorithm 1 Reward Learning with Pessimism 1: Input: Dataset D = {(sk h, ak h, ek h)}K,H k=1,h=1, parameter set \u0398 \u2282V \u00d7 A, confidence level \u03b4 > 0, error tolerance \u03f5 > 0. 2: for (h, s, a) \u2208[H] \u00d7 S \u00d7 A do 3: Compute the empirical transition kernel b Ph, the empirical expert policy b \u03c0E and the penalty term b\u03b8 h for all \u03b8 \u2208\u0398 as follows: b Ph(s\u2032 | s, a) = 1 N b h(s, a) \u22281 X (sh,ah,sh+1)\u2208D 1 \b (sh, ah, sh+1) = (s, a, s\u2032) \t , (4.1) b \u03c0E h(a | s) = \uf8f1 \uf8f2 \uf8f3 1 Nb h(s)\u22281 \u00b7 P (sh,ah,eh)\u2208D 1 {(sh, eh) = (s, a)} in option 1, 1 Nb h,1(s)\u22281 \u00b7 P (sh,ah,eh)\u2208D 1 {(sh, ah, eh) = (s, a, 1)} in option 2, (4.2) b\u03b8 h(s, a) = C \u00b7 min (s log N(\u0398; \u03f5/H)\u03b9 N b h(s, a) \u22281 h b VhVh+1 i (s, a) + H log N(\u0398; \u03f5/H)\u03b9 N b h(s, a) \u22281 + \u03f5 H 1 + s log N(\u0398; \u03f5/H)\u03b9 N b h(s, a) \u22281 ! , H ) , (4.3) where the visitation counts N b h(s, a) := P (sh,ah)\u2208D 1 {(sh, ah) = (s, a)}, N b h(s) := P a\u2208A N b h(s, a), N b h,1(s) := P (sh,ah,eh) 1 {(sh, eh) = (s, 1)}, \u03b9 := log (HSA/\u03b4) and C > 0 is an absolute constant. 4: end for 5: Output: Estimated reward mapping b R defined as follows: For all (V, A) \u2208\u0398, [ b R(V, A)]h(s, a) := \u2212Ah(s, a) \u00b7 1 n a / \u2208supp \u0010 b \u03c0E h(\u00b7|s) \u0011o + Vh(s) \u2212[b PhVh+1](s, a) \u2212b\u03b8 h(s, a). (4.4) Lindner et al. (2023) measures the difference between two reward mappings implicitly by a metric DL (see (D.1)) between the two inducing IRL problems (the ground truth problem (M, \u03c0E) and the estimated problem ( c M, b \u03c0E) returned by an algorithm). The following result shows that DL is weaker than our metric Dall \u0398 in a strong sense. Theorem 3.4 (Relationship with DL; informal). The metric DL defined in (D.1) satisfies the following: (a) (Informal version of Prop. D.1) Under the same setting as Theorem 5.1 (in which our algorithm RLE achieves \u03f5 error in Dall \u0398 ), RLE also achieves \u03f5 error in DL with the same sample complexity therein. (b) (Informal version of Prop.D.2) Conversely, there exists a family of pairs of IRL problems which has distance 0 in the DL metric but distance 1 in the Dall \u0398 metric between the induced reward mappings. In a separate thread, the works of Metelli et al. (2021, 2023) consider IRL under access to a simulator. Their metric between two reward functions requires the induced value/Q functions to be close uniformly over all (s, a) \u2208S \u00d7 A (cf. Appendix D.2), regardless of whether the state is visitable by a policy in this particular MDP\\R), which is tailored to the simulator setting and does not applicable to the standard offline/online settings considered in this work. By contrast, our metrics d\u03c0 and dall measure the distance between the induced value functions averaged over visitation distributions, which are more tractable for the offline/online settings. 
4 IRL in the offline setting 4.1 Setting In the offline setting, the learner does not know (M, \u03c0E), and only has access to a dataset D = {(sk h, ak h, ek h)}K,H k=1,h=1 consisting of K iid trajectories without reward from M, where actions are obtained by executing some 7 \fbehavior policy \u03c0b in M: ak h \u223c\u03c0b h(\u00b7|sk h) for all (k, h), and the expert feedback ek h\u2019s are obtained from the expert policy \u03c0E using one of the following two options: ek h = ( aE,k h \u223c\u03c0E h(\u00b7|sk h) in option 1, 1 \b ak h \u2208supp \u0000\u03c0E h(\u00b7|sk h) \u0001\t in option 2. (4.5) Option 1, where the learner directly observes an expert action aE,k h , is the commonly employed setting in the IRL literature (Metelli et al., 2021; Lindner et al., 2023; Metelli et al., 2023). In the special case where \u03c0b = \u03c0E, we can take aE,k h := ak h, i.e. no need for additional expert feedback when the behavior policy coincides with the expert policy. We also allow option 2, in which ek h indicates whether ak h \u201cis an expert action\u201d (belongs to the support of \u03c0E h(\u00b7|s)). As we will see, both options suffice for performing IRL. Additionally, for option 1, we require the following well-posedness assumption on the expert policy \u03c0E. Assumption 4.1 (Well-posedness). For any \u2206\u2208(0, 1], we say policy \u03c0E is \u2206-well-posed if min (h,s,a):\u03c0E h(a|s)\u0338=0 \u03c0E h(a|s) \u2265\u2206. (4.6) This assumption is also made by Metelli et al. (2023, Assumption D.1), and is necessary for ruling out the edge case where \u03c0E h(a|s) is positive but extremely small for some action a \u2208A, in which case a large number of samples is required to determine 1 \b a \u2208supp(\u03c0E h(\u00b7|s)) \t . 4.2 Algorithm We now present our algorithm Reward Learning with Pessimism (RLP; full description in Algorithm 1) for IRL in the offline setting. RLP returns an estimated reward mapping b R given any offline dataset D. At a high level, RLP consists of two main steps: \u2022 (Empirical MDP) We estimate the transition probabilities Ph and expert policy \u03c0E by standard empirical estimates b Ph and b \u03c0E, as in (4.1) and (4.2). \u2022 (Pessimism) We compute a bonus function b\u03b8 h(s, a) for any \u03b8 = (V, A) \u2208\u0398, (h, s, a) \u2208[H] \u00d7 S \u00d7 A as in (4.3). The final estimated reward (and thus the reward mapping) (4.4) is defined by the empirical version of the ground truth reward (2.1) combined with the negative bonus \u2212b\u03b8 h(s, a), for every parameter (V, A) \u2208\u0398. The specific design of b\u03b8 h(s, a) is based on Bernstein\u2019s inequality, and ensures that with high probability, for all (h, s, a, \u03b8) simultaneously, b\u03b8 h(s, a) \u2265Ah(s, a) \u00d7 \f \f1 \b a / \u2208supp \u0000b \u03c0E h(\u00b7|s) \u0001\t \u22121 \b a / \u2208supp \u0000\u03c0E h(\u00b7|s) \u0001\t \f \f + \f \f\u0002\u0000b Ph \u2212P \u0001 Vh+1 \u0003 (s, a) \f \f. Combined with the form of the ground truth reward R(V, A) in (2.1), a standard pessimism argument ensures the monotonicity condition [ b R(V, A)]h(s, a) \u2264[R(V, A)]h(s, a) for all (h, s, a) and all (V, A). Therefore, in Algorithm 1, the empirical estimates ensure that the estimated reward (4.4) is close to the ground truth reward (over sh \u223cD or equivalently the behavior policy \u03c0b), whereas the pessimism (negative bonus) ensures the monotonicity condition, both being desired properties for IRL as discussed in Section 3.1. 
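The core of RLP is easy to sketch in the tabular case: estimate transitions and expert support from the offline dataset, then subtract a bonus from the empirical version of the reward mapping (2.1). The sketch below is a simplified illustration, not the paper's algorithm verbatim: it assumes option-2 expert feedback, and it uses a crude Hoeffding-style 1/sqrt(N) bonus in place of the Bernstein bonus (4.3), omitting constants and log factors. Setting the bonus to zero and plugging in the true transitions recovers the ground-truth mapping of Definition 2.1.

```python
import numpy as np

def rlp_estimates(D, H, S, A):
    """Empirical quantities used by RLP (simplified sketch of (4.1)-(4.2)).

    D is a list of trajectories, each a list of (s, a, e) tuples per step,
    with e the option-2 feedback 1{a in supp(pi^E_h(.|s))}.
    """
    N = np.zeros((H, S, A))
    N_next = np.zeros((H, S, A, S))
    support_hits = np.zeros((H, S, A))
    for traj in D:
        for h, (s, a, e) in enumerate(traj):
            N[h, s, a] += 1
            support_hits[h, s, a] += e
            if h + 1 < len(traj):                      # last step has no successor
                N_next[h, s, a, traj[h + 1][0]] += 1
    P_hat = N_next / np.maximum(N[..., None], 1)
    return P_hat, (support_hits > 0), N

def pessimistic_reward(P_hat, expert_support, N, V, Adv, c=1.0):
    """Pessimistic reward estimate (sketch of (4.4) with a simplified bonus)."""
    H, S, A = Adv.shape
    V_pad = np.vstack([V, np.zeros((1, S))])                     # V_{H+1} = 0
    bonus = c * (H - np.arange(H))[:, None, None] / np.sqrt(np.maximum(N, 1))
    r_hat = np.zeros((H, S, A))
    for h in range(H):
        not_expert = (~expert_support[h]).astype(float)          # 1{a not in support}
        r_hat[h] = (-Adv[h] * not_expert + V_pad[h][:, None]
                    - P_hat[h] @ V_pad[h + 1] - bonus[h])
    return r_hat
```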
8 \f4.3 Theoretical guarantee We now state our theoretical guarantee for Algorithm 1. To measure the quality of the recovered reward mappings, we will be considering the d\u03c0 and D\u03c0 \u0398 metric with \u03c0 = \u03c0eval being any given evaluation policy. We assume that \u03c0eval satisfies the standard single-policy concentrability condition with respect to the behavior policy \u03c0b. Assumption 4.2 (Average form single-policy concentrability). We say \u03c0eval satisfies C\u22c6-single-policy concentrability with respect to \u03c0b if (with the convention 0/0 = 0) 1 HS X h\u2208[H] X (s,a)\u2208S\u00d7A d\u03c0eval h (s, a) d\u03c0b h (s, a) \u2264C\u22c6. (4.7) Assumption 4.2 is standard in the offline RL literature (Jin et al., 2021; Rashidinejad et al., 2021; Xie et al., 2021), though we remark that our (4.7) only requires the average form, instead of the worst-case form made in (Rashidinejad et al., 2021; Xie et al., 2021) which requires the distribution ratio to be bounded for all (h, s, a). We are now ready to present the guarantee for RLP (Algorithm 1). The proof can be found in Appendix E.2. Theorem 4.3 (Sample complexity of RLP). Let \u03c0eval be any policy that satisfies C\u22c6single-policy concentrability (Assumption 4.2) with respect to \u03c0b. Assume that \u03c0E is \u2206-well-posed (Assumption 4.1) if we choose option 1 in (4.5). Then for both options, with probability at least 1 \u2212\u03b4, RLP (Algorithm 1) outputs a reward mapping b R such that D\u03c0eval \u0398 \u0010 R\u22c6, b R \u0011 \u2264\u03f5, h b R(V, A) i h(s, a) \u2264[R\u22c6(V, A)]h(s, a) for all (V, A) \u2208\u0398 and (h, s, a) \u2208[H] \u00d7 S \u00d7 A, as long as the number of episodes K \u2265e O \u0012H4SC\u22c6log N \u03f52 + H2SC\u22c6\u03b7 \u03f5 \u0013 . Above, log N := log N(\u0398; \u03f5/H), \u03b7 := \u2206\u221211 {option 1}, and e O(\u00b7) hides polylog(H, S, A, 1/\u03b4) factors. To our best knowledge, Theorem 4.3 provides the first theoretical guarantee for IRL under the standard offline setting, showing that RLP achieves the desired monotonicity condition and small D\u03c0 \u0398 distance for any evaluation policy \u03c0eval that satisfies single-policy concentrability with respect to \u03c0b. For small enough \u03f5, the sample complexity (number of episodes required) scales as e O(H4SC\u22c6log N/\u03f52), which depends on the number of states S, the concentrability coefficient C\u22c6, as well as the log-covering number log N which always admits the bound log N \u2264e O(S) in the worst case and may be smaller. Apart from the log N factor, this rate resembles that of standard offline RL under single-policy concentrability (Rashidinejad et al., 2021; Xie et al., 2021). This is no coincidence, as our algorithm and proof (for both the D\u03c0eval \u0398 bound and the monotonicity condition) can be viewed as an adaptation of the pessimism technique for all rewards (R(V, A))(V,A)\u2208\u0398 simultaneously, demonstrating that IRL is \u201cno harder than standard RL\u201d in this setting. We remark that the \u2206\u22121 factor brought by Assumption 4.1 appears only in the e O(\u03f5\u22121) burn-in term in the rate when the feedback {ek h}k,h in (4.5) comes from option 1. Result for \u03c0eval = \u03c0E In the special case where \u03c0eval = \u03c0E, we establish a slightly stronger result where we can improve over Theorem 4.3 by one H factor (H4 \u2192H3) in the main term. 
The proof uses the specific form of our Bernstein-like bonus (4.3) combined with a total variance argument (Azar et al., 2017; Zhang et al., 2020; Xie et al., 2021), and can be found in Appendix E.3. 9 \fTheorem 4.4 (Improved sample complexity for \u03c0eval = \u03c0E). Suppose \u03c0eval = \u03c0E which achieves C\u22c6singlepolicy concentrability with respect to \u03c0b (Assumption 4.2), and in addition sup(h,s,a)\u2208[H]\u00d7S\u00d7A |[R\u22c6(V, A)]h(s, a)| \u2264 1 for all (V, A) \u2208\u0398. Then under both options in (4.5), with probability at least 1 \u2212\u03b4, RLP (Algorithm 1) achieves the same guarantee as in Theorem 4.3 (D\u03c0eval \u0398 (R\u22c6, b R) \u2264\u03f5 and monotonicity), as long as the number of episodes K \u2265e O \u0012H3SC\u22c6log N \u03f52 + H2SC\u22c6(A + H log N) \u03f5 \u0013 . Theorem 4.4 no longer requires well-posedness of \u03c0E (Assumption 4.1) in option 1. This happens due to the assumed concentrability between \u03c0E(= \u03c0eval) and \u03c0b, which can aid the learning of supp \u0000\u03c0E h(\u00b7|s) \u0001 even without well-posedness. IRL from full expert trajectories An important special case of Theorem 4.4 is when \u03c0b further coincides with \u03c0E. This represents a natural and clean setting where dataset D consists of full trajectories drawn from the expert policy \u03c0E, and our goal is to recover a reward mapping with a small D\u03c0E \u0398 . This case is covered by Theorem 4.4 by taking C\u22c6= 1 and admits a sample complexity e O(H3S log N/\u03f52). 4.4 Lower bound We present an information-theoretic lower bound showing that the upper bound in Theorem 4.3 is nearly tight. Theorem 4.5 (Informal version of Theorem H.2). For any (H, S, A, \u03f5) and any C\u22c6\u22651, there exists a family of offline IRL problems where D consists of K episodes, \u03c0eval satisfies C\u22c6-concentrability at most C\u22c6, \u0398 = V \u00d7 A, and \u03c0E is \u2206well-posed with \u2206= 1, such that the following holds. Suppose any IRL algorithm achieves D\u03c0eval \u0398 (R\u22c6, b R) \u2264\u03f5 for every problem in this family with probability at least 2/3, then we must have K \u2265\u2126 \u0000H2SC\u22c6min {S, A}/\u03f52\u0001 . For \u0398 = V \u00d7 A, the upper bound in Theorem 4.3 scales as e O(H4S2C\u22c6/\u03f52). Ignoring H and polylogarithmic factors, Theorem 4.5 assert that this rate is tight for S \u2264A (so that min {S, A} = S). The form of this min {S, A} factor in Theorem 4.5 is due to certain technicalities in the hard instance construction; whether this can be improved to an S factor would be an interesting question for future work. 5 IRL in the online setting 5.1 Setting We now consider IRL in a natural online learning setting (also known as \u201cactive exploration IRL\u201d (Lindner et al., 2023)). In each episode, the learner interacts with the IRL problem (M, \u03c0E) as follows: At each h \u2208[H], the learner receives the state sh \u2208S and chooses their action ah \u2208A from an arbitrary policy. The environment then provides the expert feedback eh as in (4.5) (from one of the two options) and transits to the next state sh+1 \u223cPh(\u00b7|sh, ah). This setting shares the same expert feedback model (eh) with the offline setting, and differs in that the learner can interact with the environment, instead of learning from a fixed dataset pre-collected by some fixed behavior policy. 
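A minimal sketch of one episode of this online protocol, under the assumption of a tabular environment exposing `reset()` and `step(a)` and option-1 expert feedback (an expert action sampled at every visited state); all interfaces and names here are assumptions for illustration.

```python
import numpy as np

def collect_online_episode(env, expert_policy, explore_policy, H, rng=None):
    """One episode of the online IRL interaction of Section 5.1 (sketch).

    expert_policy and explore_policy are (H, S, A) arrays of action
    probabilities; the environment carries no reward signal.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    s = env.reset()
    episode = []
    for h in range(H):
        a = rng.choice(len(explore_policy[h][s]), p=explore_policy[h][s])  # learner's action
        e = rng.choice(len(expert_policy[h][s]), p=expert_policy[h][s])    # expert feedback (option 1)
        s_next = env.step(a)
        episode.append((s, a, e))
        s = s_next
    return episode
```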
5.2 Algorithm and guarantee Our algorithm Reward Learning with Exploration (RLE; Algorithm 2) performs IRL in the online setting by a simple reduction to reward-free learning and the RLP algorithm. RLE consists of two main 10 \fAlgorithm 2 Reward Learning with Exploration 1: Input: Parameter set \u0398 \u2286V \u00d7 A, confidence level \u03b4 > 0, error tolerance \u03f5 > 0, N, K \u2208Z\u22650, threshold \u03be = c\u03beH3S3A3 log 10HSA \u03b4 . 2: Call Algorithm 3 to play in the environment for NH episodes and obtain an explorative behavior policy \u03c0b. 3: Collect a dataset D = {(sk h, ak h, ek h)}K,H k=1,h=1 by executing \u03c0b in M. 4: Subsampling: subsample D to obtain Dtrim, such that for each (h, s, a) \u2208[H] \u00d7 S \u00d7 A, Dtrim contains min n b N b h(s, a), Nh(s, a) o sample transitions randomly drawn from D, where b N b h(s, a) and Nh(s, a) are defined by Nh(s, a) := K X k=1 1 n (sk h, ak h) = (s, a) o b N b h(s, a) := min \u0014K 4 , E \u03c0\u223c\u00b5b[ b d\u03c0 h(s, a)] \u2212K\u03be 8N \u22123 log 10HSA \u03b4 \u0015 + , (5.1) where b d\u03c0 h(s, a) is specified in Algorithm 3. 5: Call RLP (Algorithm 1) on dataset Dtrim with parameters (\u0398, \u03b4/10, \u03f5/10) to compute the recovered reward mapping b R. 6: Output: Estimated reward mapping b R. steps: (1) Call a reward-free exploration subroutine (Algorithm 3, building on the algorithm of Li et al. (2023)) to explore the environment M and obtain an explorative behavior policy \u03c0b (Line 2); (2) Collect K episodes of data D using \u03c0b, subsample the data, and call the RLP algorithm on the subsampled data Dtrim to obtain the estimated reward mapping b R. We now present the theoretical guarantee of RLE. The proof can be found in Appendix F.2. Theorem 5.1 (Sample complexity of RLE). Suppose \u03c0E is \u2206-well-posed (Assumption 4.1) when we receive feedback in option 1 of (4.5). Then for the online setting, for sufficiently small \u03f5 \u2264H\u22129(SA)\u22126, with probability at least 1 \u2212\u03b4, RLE (Algorithm 2) with N = e O( \u221a H9S7A7K) outputs a reward mapping b R such that Dall \u0398 \u0010 R\u22c6, b R \u0011 \u2264\u03f5, h b R(V, A) i h(s, a) \u2264[R\u22c6(V, A)]h(s, a) for all (V, A) \u2208\u0398 and (h, s, a) \u2208[H] \u00d7 S \u00d7 A, as long as the total the number of episodes K + NH \u2265e O \u0012H4SA log N \u03f52 + H2SA\u03b7 \u03f5 \u0013 . Above, log N := log N(\u0398; \u03f5/H), \u03b7 := \u2206\u221211 {option 1}, and e O(\u00b7) hides polylog(H, S, A, 1/\u03b4) factors. For small enough \u03f5, RLE requires e O(H4SA log N/\u03f52) episdoes for finding R with Dall \u0398 (R\u22c6, b R) \u2264\u03f5. Compared with the offline setting (Theorem 4.3), the main differences here are that the metric is stronger (Dall \u0398 versus D\u03c0eval \u0398 therein), and that the concentrability coefficient C\u22c6in the sample complexity is replaced with the number of actions A. This is because using online interaction, our reward-free exploration subroutine (Algorithm 3) can find a policy \u03c0b that achieves a form of \u201csingle-policy concentrability\u201d A with respect to any policy \u03c0; see (C.3). To our best knowledge, the only existing work that studies IRL in the same online setting is Lindner et al. (2023), who also achieve a sample complexity3 of e O(H4S2A/\u03f52 + H2SA\u03b7/\u03f5) (for \u0398 = V \u00d7 A) in their performance metric DL (cf. (D.1)). 
However, our metric Dall \u0398 is stronger than their DL and avoids certain indistinguishability issues of theirs, as we have shown in Theorem 3.4. 3Extracted from the proof of Lindner et al. (2023, Theorem 8) and taking into account the uniform convergence over V \u00d7 A and dependence on \u03b7 = \u2206\u221211 {option 1}; cf. Appendix D.1. 11 \f5.3 Lower bound We also provide a lower bound for IRL in the online setting in the Dall \u0398 metric. The rate of the lower bound is similar to Theorem 4.5, and ensures that the rate in Theorem 5.1 is tight up to H and polylogarithmic factors when S \u2264A. Theorem 5.2 (Informal version of Theorem G.2). For any (H, S, A, \u03f5), there exists a family of online IRL problems where \u0398 = V \u00d7 A, and \u03c0E is \u2206well-posed with \u2206= 1, such that the following holds. Suppose any IRL algorithm achieves Dall \u0398 (R\u22c6, b R) \u2264\u03f5 for every problem in this family with probability at least 2/3, then we must have K \u2265\u2126 \u0000H3SA min {S, A}/\u03f52\u0001 . 6 Transfer learning As a further application, we consider a transfer learning setting, where rewards learned in a source MDP\\R are transferred to a target MDP\\R (possibly different from the source MDP\\R). Inspired by the singlepolicy concentrability assumption, we define two concepts called weak-transferability and transferability (Definition I.2 & I.3) that measure the similarity between two MDP\\R\u2019s. We show that when the target MDP\\R exhibits a small week-transferability (transferability) with respect to the source MDP\\R, our algorithms RLP and RLE can perform IRL with sample complexity polynomial in these transferability coefficients and other problem parameters (Theorem I.4 & I.5), and provide guarantees for performing RL algorithms with the learned rewards in the target environments (Corollary I.6 & I.7). We defer the detailed setups and results to Appendix I. 7 Conclusion This paper designs the first provably sample-efficient algorithm for inverse reinforcement learning (IRL) in the offline setting. Our algorithms and analyses seamlessly adapt the pessimism principle in standard offline RL, and we also extend it to an online setting by a simple reduction aided by reward-free exploration. We believe our work opens up many important questions, such as generalization to function approximation settings and empirical verifications." |
| } |
| ] |
| } |