AcademicEval / title_30K /test_title_long_2404.16399v1.json
{
"url": "http://arxiv.org/abs/2404.16399v1",
"title": "Offline Reinforcement Learning with Behavioral Supervisor Tuning",
"abstract": "Offline reinforcement learning (RL) algorithms are applied to learn\nperformant, well-generalizing policies when provided with a static dataset of\ninteractions. Many recent approaches to offline RL have seen substantial\nsuccess, but with one key caveat: they demand substantial per-dataset\nhyperparameter tuning to achieve reported performance, which requires policy\nrollouts in the environment to evaluate; this can rapidly become cumbersome.\nFurthermore, substantial tuning requirements can hamper the adoption of these\nalgorithms in practical domains. In this paper, we present TD3 with Behavioral\nSupervisor Tuning (TD3-BST), an algorithm that trains an uncertainty model and\nuses it to guide the policy to select actions within the dataset support.\nTD3-BST can learn more effective policies from offline datasets compared to\nprevious methods and achieves the best performance across challenging\nbenchmarks without requiring per-dataset tuning.",
"authors": "Padmanaba Srinivasan, William Knottenbelt",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "Offline Reinforcement Learning with Behavioral Supervisor Tuning",
"main_content": "Introduction Reinforcement learning (RL) is a method of learning where an agent interacts with an environment to collect experiences and seeks to maximize the reward provided by the environment. This typically follows a repeating cycle of experience collecting and improvement [Sutton and Barto, 2018]. This is termed online RL due to the need for policy rollouts in the environment. Both on-policy and off-policy RL require some schedule of online interaction which, in some domains, can be infeasible due to experimental or environmental limitations [Mirowski et al., 2018; Yu et al., 2021]. With such constraints, a dataset may instead be collected that consists of demonstrations by arbitrary (potentially multiple, unknown) behavior policies [Lange et al., 2012] that may be suboptimal. Offline reinforcement learning algorithms are designed to recover optimal policies from such static datasets. The primary challenge in offline RL is the evaluation of out-of-distribution (OOD) actions; offline datasets rarely offer support over the entire state-action space and neural networks overestimate values when extrapolating to OOD actions [Fujimoto et al., 2018; Gulcehre et al., 2020; Kumar et 3. Figure 1: An illustration of our method versus typical, TD3-BClike actor-constraint methods. TD3-BC: a) A policy selecting an OOD action is constrained to select in-dataset actions. b) A policy selecting the optimal action may be penalized for not selecting an in-dataset, but not in-batch, inferior action. Our method: c) A policy selecting OOD actions is drawn towards in-dataset actions with decreasing constraint coefficient as it moves closer to any supported action. d) An optimal policy is not penalized for selecting an indataset action when the action is not contained in the current batch. al., 2020; Kumar et al., 2019]. If trained using standard offpolicy methods, a policy will select any actions that maximize reward, which includes OOD actions. The difference between the rewards implied by the value function and the environment results in a distribution shift that can result in failure in real-world policy rollouts. Thus, offline RL algorithms must both maximize the reward and follow the behavioral policy, while having to potentially \u201cstitch\u201d together several suboptimal trajectories. The former requirement is usually satisfied by introducing a constraint on the actor to either penalize deviation from the behavior policy or epistemic uncertainty of the value function, or by regularizing the value function to directly minimize OOD action-values. Many recent approaches to offline RL [Tarasov et al., 2023; Zhu et al., 2022; Li et al., 2022; Nikulin et al., 2023; Xu et al., 2023] demonstrate success in D4RL benchmarks [Fu et al., 2020], but demand the onerous task of per-dataset arXiv:2404.16399v1 [cs.LG] 25 Apr 2024 \fhyperparameter tuning [Zhang and Jiang, 2021]. Algorithms that require substantial offline fine-tuning can be infeasible in real-world applications [Tang and Wiens, 2021], hampering their adoption in favor of simpler, older algorithms [Emerson et al., 2023; Zhu et al., 2023]. These older methods [Fujimoto and Gu, 2021; Kumar et al., 2020; Kostrikov et al., 2021b] provide excellent \u201cbang-for-buck\u201d as their hyperparameters work well across a range of D4RL datasets. 
Contributions In this paper, we show how a trained uncertainty model can be incorporated into the regularized policy objective as a behavioral supervisor to yield TD3 with behavioral supervisor tuning (TD3-BST). The key advantage of our method is the dynamic regularization weighting performed by the uncertainty network, which allows the learned policy to maximize Q-values around dataset modes. Evaluation on D4RL datasets demonstrates that TD3-BST achieves SOTA performance, and ablation experiments analyze the performance of the uncertainty model and the sensitivity of the parameters of the BST objective. 2 Related Work Reinforcement learning is a framework for sequential decision making often formulated as a Markov decision process (MDP), M = {S, A, R, p, p0, \u03b3} with state space S, action space A, a scalar reward dependent on state and action R(s, a), transition dynamics p, initial state distribution p0 and discount factor \u03b3 \u2208 [0, 1) [Sutton and Barto, 2018]. RL aims to learn a policy \u03c0 \u2208\u03a0 that executes action a = \u03c0(s) that will maximize the expected discounted reward J(\u03c0) = E\u03c4\u223cP\u03c0(\u03c4) hPT t=0 \u03b3tR(st, at) i where P\u03c0(\u03c4) = p0(s0) QT t=0 \u03c0(at | st)p(st+1 | st, at) is the trajectory under \u03c0. Rather than rolling out an entire trajectory, a state-action value function (Q function) is often used: Q\u03c0(s, a) = E\u03c4\u223cP\u03c0(\u03c4) hPT t=0 \u03b3tr(st, at) | s0 = s, a0 = a i . 2.1 Offline Reinforcement Learning Offline RL algorithms are presented with a static dataset D that consists of tuples {s, a, r, s\u2032} where r \u223cR(s, a) and s\u2032 \u223cp(\u00b7 | s, a). D has limited coverage over S \u00d7A; hence, offline RL algorithms must constrain the policy to select actions within the dataset support. To this end, algorithms employ one of three approaches: 1) policy constraints; 2) critic regularization; or 3) uncertainty penalization. Policy constraint Policy constraints modify the actor\u2019s objective only to minimize divergence from the behavior policy. Most simply, this adds a constraint term [Fujimoto and Gu, 2021; Tarasov et al., 2023] to the policy objective: arg max \u03c0 E{s,a}\u223cD [Q(s, \u03c0(s)) \u2212\u03b1D(\u03c0, \u03c0\u03b2)] , (1) where \u03b1 is a scalar controlling the strength of regularization, D(\u00b7, \u00b7) is a divergence function between the policy \u03c0 and the behavior policy \u03c0\u03b2. In offline RL, we do not have access to \u03c0\u03b2; some prior methods attempt to estimate it empirically [Kostrikov et al., 2021a; Li et al., 2023] which is challenging when the dataset is generated by a mixture of policies. Furthermore, selecting the constraint strength can be challenging and difficult to generalize across datasets with similar environments [Tarasov et al., 2023; Kostrikov et al., 2021a]. Other policy constraint approaches use weighted BC [Nair et al., 2020; Kostrikov et al., 2021b; Xu et al., 2023] or (surrogate) BC constraints [Li et al., 2022; Wu et al., 2019; Li et al., 2023]. The former methods may be too restrictive as they do not allow OOD action selection, which is crucial to improve performance [Fu et al., 2022]. The latter methods may still require substantial tuning and in addition to training if using model-based score methods. Other methods impose architectural constraints [Kumar et al., 2019; Fujimoto et al., 2019] that parameterize separate BC and reward-maximizing policy models. 
Critic Regularization Critic regularization methods directly address the OOD action-value overestimation problem by penalizing large values for adversarially sampled actions [Kostrikov et al., 2021a]. Ensembles Employing an ensemble of neural network estimators is a commonly used technique for prediction with a measure of epistemic uncertainty [Kondratyuk et al., 2020]. A family of offline RL methods employ large ensembles of value functions [An et al., 2021] and make use of the diversity of randomly initialized ensembles to implicitly reduce the selection of OOD actions or directly penalize the variance of the reward in the ensemble [Ghasemipour et al., 2022; Sutton and Barto, 2018]. Model-Based Uncertainty Estimation Learning an uncertainty model of the dataset is often devised analogously to exploration-encouraging methods used in online RL, but, employing these for anti-exploration instead [Rezaeifar et al., 2022]. An example is SAC-RND which directly adopts such an approach [Nikulin et al., 2023]. Other algorithms include DOGE [Li et al., 2022] which trains a model to estimate uncertainty as a distance to dataset action and DARL [Zhang et al., 2023] which uses distance to random projections of stateaction pairs as an uncertainty measure. As a whole, these methods optimize a distance d(\u00b7, \u00b7) \u22650 that represents the uncertainty of an action. 2.2 Uncertainty Estimation Neural networks are known to predict confidently even when presented with OOD samples [Nguyen et al., 2015; Goodfellow et al., 2014; Lakshminarayanan et al., 2017]. A classical approach to OOD detection is to fit a generative model to the dataset that produces a high probability for in-dataset samples and a low probability for OOD ones. These methods work well for simple, unimodal data but can become computationally demanding for more complex data with multiple modes. Another approach trains classifiers that are leveraged to become finer-grained OOD detectors [Lee et al., 2018]. In this work, we focus on Morse neural networks [Dherin et al., 2023], an approach that trains a generative model to produce an unnormalized density that takes on value 1 at the dataset modes. \f3 Preliminaries A Morse neural network produces an unnormalized density M(x) \u2208[0, 1] on an embedding space Re [Dherin et al., 2023]. A Morse network can produce a density in Re that attains a value of 1 at mode submanifolds and decreases towards 0 when moving away from the mode. The rate at which the value decreases is controlled by a Morse Kernel. Definition 1 (Morse Kernel). A Morse Kernel is a positive definite kernel K. When applied in a space Z = Rk, the kernel K(z1, z2) takes values in the interval [0, 1] where K(z1, z2) = 1 iff z1 = z2. All kernels of the form K(z1, z2) = e\u2212D(z1,z2) where D(\u00b7, \u00b7) is a divergence [Amari, 2016] are Morse Kernels. Examples include common kernels such as the Radial Basis Function (RBF) Kernel, KRBF (z1, z2) = e\u2212\u03bb2 2 ||z1\u2212z2||2. (2) The RBF kernel and its derivatives decay exponentially, leading learning signals to vanish rapidly. An alternative is the ubiquitous Rational Quadratic (RQ) kernel: KRQ(z1, z2) = \u0012 1 + \u03bb2 2\u03ba || z1 \u2212z2 ||2 \u0013\u2212\u03ba (3) where \u03bb is a scale parameter in each kernel. The RQ kernel is a scaled mixture of RBF kernels controlled by \u03ba and, for small \u03ba, decays much more slowly [Williams and Rasmussen, 2006]. 
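As a point of reference, the RBF and RQ kernels in Equations (2) and (3) are straightforward to write down. The following is a minimal sketch in PyTorch, not the authors' code; lam and kappa stand in for the scale parameters \u03bb and \u03ba.

import torch

def rbf_kernel(z1, z2, lam=1.0):
    # K_RBF(z1, z2) = exp(-(lam^2 / 2) * ||z1 - z2||^2); equals 1 iff z1 == z2
    sq_dist = ((z1 - z2) ** 2).sum(dim=-1)
    return torch.exp(-0.5 * lam ** 2 * sq_dist)

def rq_kernel(z1, z2, lam=1.0, kappa=1.0):
    # K_RQ(z1, z2) = (1 + (lam^2 / (2 * kappa)) * ||z1 - z2||^2)^(-kappa)
    # A scale mixture of RBF kernels: for small kappa it decays polynomially,
    # so the learning signal vanishes far more slowly than under the RBF kernel.
    sq_dist = ((z1 - z2) ** 2).sum(dim=-1)
    return (1.0 + (lam ** 2 / (2.0 * kappa)) * sq_dist) ** (-kappa)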
Consider a neural network that maps from a feature space into a latent space f\u03d5 : X \u2192Z, with parameters \u03d5, X \u2208Rd and Z \u2208Rk. A Morse Kernel can impose structure on the latent space. Definition 2 (Morse Neural Network). A Morse neural network is a function f\u03d5 : X \u2192Z in combination with a Morse Kernel on K(z, t) where t \u2282Z is a target, chosen as a hyperparameter of the model. The Morse neural network is defined as M\u03d5(x) = K(f\u03d5(x), t). Using Definition 1 we see that M\u03d5(x) \u2208[0, 1], and when M\u03d5(x) = 1, x corresponds to a mode that coincides with the level set of the submanifold of the Morse neural network. Furthermore, M\u03d5(x) corresponds to the certainty of the sample x being from the training dataset, so 1 \u2212M\u03d5(x) is a measure of the epistemic uncertainty of x. The function \u2212log M\u03d5(x) measures a squared distance, d(\u00b7, \u00b7), between f\u03d5(x) and the closest mode in the latent space at m: d(z) = min m\u2208M d(z, m), (4) where M is the set of all modes. This encodes information about the topology of the submanifold and satisfies the Morse\u2013Bott non-degeneracy condition [Basu and Prasad, 2020]. The Morse neural network offers the following properties: 1 M\u03d5(x) \u2208[0, 1]. 2 M\u03d5(x) = 1 at its mode submanifolds. 3 \u2212log M\u03d5(x) \u22650 is a squared distance that satisfies the Morse\u2013Bott non-degeneracy condition on the mode submanifolds. 4 As M\u03d5(x) is an exponentiated squared distance, the function is also distance aware in the sense that as f\u03d5(x) \u2192t, M\u03d5(x) \u21921. Proof of each property is provided in the appendix. 4 Policy Constraint with a Behavioral Supervisor We now describe the constituent components of our algorithm, building on the Morse network and showing how it can be incorporated into a policy-regularized objective. 4.1 Morse Networks for Offline RL The target t is a hyperparameter that must be chosen. Experiments in [Dherin et al., 2023] use simple, toy datasets with classification problems that perform well for categorical t. We find that using a static label for the Morse network yields poor performance; rather than a labeling model, we treat f\u03d5 as a perturbation model that produces an action f\u03d5(s, a) = \u02c6 a such that \u02c6 a = a if and only if s, a \u223cD. An offline RL dataset D consists of tuples {s, a, r, s\u2032} where we assume {s, a} pairs are i.i.d. sampled from an unknown distribution. The Morse network must be fitted on N state-action pairs [{s1, a1, }, ..., {sN, aN}] such that M\u03d5(si, aj) = 1, \u2200i, j \u22081, ..., N ] only when i = j. We fit a Morse neural network to minimize the KL divergence between unnormalized measures [Amari, 2016] following [Dherin et al., 2023], DKL(D(s, a) || M\u03d5(s, a)): min \u03d5 Es,a\u223cD \u0014 log D(s, a) M\u03d5(s, a) \u0015 + Z M\u03d5(s, a) \u2212D(s, a) da. (5) With respect to \u03d5, this amounts to minimizing the empirical loss: L(\u03d5) = \u22121 N X s,a\u223cD log K(f\u03d5(s, a), a) + 1 N X s\u223cD a \u00af D\u223cDuni K(f\u03d5(s, au), au), (6) where au is an action sampled from a uniform distribution over the action space Duni. A learned Morse density is well suited to modeling ensemble policies [Lei et al., 2023], more flexibly [Dherin et al., 2023; Kostrikov et al., 2021a; Li et al., 2023] and without down-weighting good, in-support actions that have low density under the behavior policy [Singh et al., 2022] as all modes have unnormalized density value 1. 
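To make Equation (6) concrete, the empirical Morse loss can be sketched as below. This is an illustrative implementation under stated assumptions, not the authors' code: f_phi is assumed to be a perturbation network mapping a state-action pair to an action-sized output, rq_kernel is the kernel sketched earlier, and actions are assumed to lie in [-1, 1].

def morse_loss(f_phi, states, actions, lam=1.0, kappa=1.0):
    # First term of Equation (6): drive K(f_phi(s, a), a) -> 1 for dataset pairs,
    # i.e. unnormalized density 1 at the dataset modes.
    k_data = rq_kernel(f_phi(states, actions), actions, lam, kappa)
    nll = -torch.log(k_data + 1e-8).mean()
    # Second term: push the density towards 0 for actions sampled uniformly over
    # the action space (D_uni), which act as negative samples.
    a_uni = torch.empty_like(actions).uniform_(-1.0, 1.0)
    k_uni = rq_kernel(f_phi(states, a_uni), a_uni, lam, kappa).mean()
    return nll + k_uni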
A Morse neural network can be expressed as an energybased model (EBM) [Goodfellow et al., 2016]: Proposition 1. A Morse neural network can be expressed as an energy-based model: E\u03d5(x) = e\u2212log M\u03d5(x) where M\u03d5 : Rd \u2192R. \fNote that the EBM E\u03d5 is itself unnormalized. Representing the Morse network as an EBM allows analysis analogous to [Florence et al., 2022]. Theorem 1. For a set-valued function F(x) : x \u2208Rm \u2192 Rn\\{\u2205}, there exists a continuous function g : Rm+n \u2192R that is approximated by a continuous function approximator g\u03d5 with arbitrarily small bounded error \u03f5. This ensures that any point on the graph F\u03d5(x) = arg miny g\u03d5(x, y) is within distance \u03f5 of F. We refer the reader to [Florence et al., 2022] for a detailed proof. The theorem assumes that F(x) is an implicit function and states that the error at the level-set (i.e. the modes) of F(x) is small. 4.2 TD3-BST We can use the Morse network to design a regularized policy objective. Recall that policy regularization consists of Q-value maximization and minimization of a distance to the behavior policy (Equation 1). We reconsider the policy regularization term and train a policy that minimizes uncertainty while selecting actions close to the behavior policy. Let C\u03c0(s, a) denote a measure of uncertainty of the policy action. We solve the following optimization problem: \u03c0i+1 = arg min \u03c0\u2208\u03a0 Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] (7) s.t. DKL (\u03c0(\u00b7 | s) || \u03c0\u03b2(\u00b7 | s)) \u2264\u03f5. (8) This optimization problem requires an explicit behavior model, which is difficult to estimate and using an estimated model has historically returned mixed results [Kumar et al., 2019; Fujimoto et al., 2019]. Furthermore, this requires direct optimization through C\u03c0 which may be subject to exploitation. Instead, we enforce this implicitly by deriving the solution to the constrained optimization to obtain a closed-form solution for the actor [Peng et al., 2019; Nair et al., 2020]. Enforcing the KKT conditions we obtain the Lagrangian: L(\u03c0, \u00b5) = Ea\u223c\u03c0(\u00b7|s) [C\u03c0(s, a)] + \u00b5(\u03f5 \u2212DKL(\u03c0 || \u03c0\u03b2)). (9) Computing \u2202L \u2202\u03c0 and solving for \u03c0 yields the uncertainty minimizing solution \u03c0C\u2217(a | s) \u221d\u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a). When learning the parametric policy \u03c0\u03c8, we project the nonparametric solution into the policy space as a (reverse) KL divergence minimization of \u03c0\u03c8 under the data distribution D: arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0C\u2217(\u00b7 | s) || \u03c0\u03c8(\u00b7 | s) \u0011i (10) = arg min \u03c8 Es\u223cD h DKL \u0010 \u03c0\u03b2(a | s)e 1 \u00b5 C\u03c0(s,a) || \u03c0\u03c8(\u00b7 | s) \u0011i (11) = arg min \u03c8 Es,a\u223cD h \u2212log \u03c0\u03c8(a | s)e 1 \u00b5 C\u03c0(s,a)i , (12) which is a weighted maximum likelihood update where the supervised target is sampled from the dataset D and C\u03c0(s, a) = 1 \u2212M\u03d5(s, \u03c0\u03c8(s)). This avoids explicitly modeling the behavior policy and uses the Morse network uncertainty as a behavior supervisor to dynamically adjust the strength of behavioral cloning. We provide a more detailed derivation in the appendix. 
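A minimal sketch of the resulting actor update (Equation 13) is given below, assuming networks actor and critic and a callable morse_density(s, a) = M_phi(s, a); these names are illustrative and the sketch is not the authors' implementation. Whether gradients should flow through the Morse weight is an implementation choice; it may also be detached.

def td3_bst_actor_loss(actor, critic, morse_density, states, actions, mu=0.5):
    # a_pi: current policy actions; assumes the critic returns one value per sample
    a_pi = actor(states)
    q = critic(states, a_pi).squeeze(-1)
    # Z_Q: mean |Q| scale, detached from the gradient as in TD3-BC
    z_q = q.abs().mean().detach()
    # Morse (un)certainty of the policy action: C_pi(s, a) = 1 - M_phi(s, pi(s))
    c = 1.0 - morse_density(states, a_pi)
    # disadvantage-style BC coefficient: 0 at a mode (C = 0), at most e^{1/mu} - 1
    bc_weight = torch.exp(c / mu) - 1.0
    bc_term = bc_weight * ((a_pi - actions) ** 2).sum(dim=-1)
    # maximize (Q / Z_Q) - weighted BC, i.e. minimize the negation
    return -(q / z_q - bc_term).mean()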
Interpretation Our regularization method shares similarities with other weighted regression algorithms [Nair et al., 2020; Peng et al., 2019; Kostrikov et al., 2021b] which weight the advantage of an action compared to the dataset/replay buffer action. Our weighting can be thought of as a measure of disadvantage of a policy action in the sense of how OOD it is. We make modifications to the behavioral cloning objective. From Morse network property 1 we know M\u03d5 \u2208[0, 1], hence 1 \u2264e 1 \u00b5 C\u03c0 \u2264e 1 \u00b5 , i.e. the lowest possible disadvantage coefficient is 1. To minimize the coefficient in the mode, we require it to approach 0 when near a mode. We adjust the weighted behavioral cloning term and add Q-value maximization to yield the regularized policy update: \u03c0i+1 \u2190arg max \u03c0 E s,a\u223cD, a\u03c0\u223c\u03c0i(s) [ 1 ZQ Qi+1(s, a\u03c0) \u2212(e 1 \u00b5 C\u03c0(s,a) \u22121)(a\u03c0 \u2212a)2], (13) where \u00b5 is the Lagrangian multiplier that controls the magnitude of the disadvantage weight and ZQ = 1 N PN n=1|Q(s, a\u03c0)| is a scaling term detached from the gradient update process [Fujimoto and Gu, 2021], necessary as Q(s, a) can be arbitrarily large and the BC-coefficient is upper-bounded at e 1 \u00b5 . The value function update is given by: Qi+1 \u2190arg min Q Es,a,s\u2032\u223cD[(y \u2212Qi(s, a))2], (14) with y = r(s, a) + \u03b3Es\u2032\u223c\u00af \u03c0(s\u2032) \u00af Q(s\u2032, a\u2032) where \u00af Q and \u00af \u03c0 are target value and policy functions, respectively. 4.3 Controlling the Tradeoff Constraint Tuning TD3-BST is straightforward; the primary hyperparameters of the Morse network consist of the choice and scale of the kernel, and the temperature \u00b5. Increasing \u03bb for higher dimensional actions ensures that the high certainty region around modes remains tight. Prior empirical work has demonstrated the importance of allowing some degree of OOD actions [An et al., 2021]; in the TD3-BST framework, this is dependent on \u03bb. In Figure 2 we provide a didactic example of the effect of \u03bb. We construct a dataset consisting of 2-dimensional actions in [\u22121, 1] with means at the four locations {[0.0, 0.8], [0.0, \u22120.8], [0.8, 0.0], [\u22120.8, 0.0]} and each with standard deviation 0.05. We sample M = 128 points, train a Morse network and plot the density produced by the Morse network for \u03bb = { 1 10, 1 2, 1.0, 2.0}. A behavioral cloning policy learned using vanilla MLE where all targets are weighted equally results in an OOD action being selected. Training using Morse-weighted BC downweights the behavioral cloning loss for far away modes enabling the policy to select and minimize error to a single mode. \f(a) \u03bb = 0.1 (b) \u03bb = 0.5 (c) \u03bb = 1.0 (d) \u03bb = 2.0 (e) Ground Truth (f) Density \u03bb = 1.0 Figure 2: a-d: Contour plots of unnormalized densities produced by a Morse network for increasing \u03bb with ground truth actions included as \u00d7 marks. e: Ground truth actions in the synthetic dataset, the MLE action (red). A Morse certainty weighted MLE model can select actions in a single mode, in this case, the mode centred at [0.8, 0.0] (orange). Weighting a divergence constraint using a Morse (un)certainty will encourage the policy to select actions near the modes of M\u03d5 that maximize reward. f: Plot of the 3D unnormalized Morse density for \u03bb = 1.0. Algorithm 1 TD3-BST Training Procedure Outline. 
The policy is updated once for every m = 2 critic updates, as is the default in TD3. Input: Dataset D = {s, a, r, s\u2032} Initialize: Initialize Morse network M\u03d5. Output: Trained Morse network M\u03d5. Let t = 0. for t = 1 to TM do Sample minibatch (s, a) \u223cD Sample random actions a \u00af D \u223cDuni for each state s Update \u03d5 by minimizing Equation 6 end for Initialize: Initialize policy network \u03c0\u03c8, critic Q\u03b8, target policy \u00af \u03c8 \u2190\u03c8 and target critic \u00af \u03b8 \u2190\u03b8. Output: Trained policy \u03c0. Let t = 0. for t = 1 to TAC do Sample minibatch (s, a, r, s\u2032) \u223cD Update \u03b8 using Equation 14 if t mod m = 0 then Obtain a\u03c0 = \u03c0(s) Update \u03c8 using Equation 13 Update target networks \u00af \u03b8 \u2190\u03c1\u03b8 + (1 \u2212\u03c1)\u00af \u03b8, \u00af \u03c8 \u2190 \u03c1\u03c8 + (1 \u2212\u03c1) \u00af \u03c8 end if end for return \u03c0 4.4 Algorithm Summary Fitting the Morse Network The TD3-BST training procedure is described in Algorithm 1. The first phase fits the Morse network for TM gradient steps. Actor\u2013Critic Training In the second phase of training, a modified TD3-BC procedure is used for TAC iterations with alterations highlighted in red. We provide full hyperparameter details in the appendix. 5 Experiments In this section, we conduct experiments that aim to answer the following questions: \u2022 How does TD3-BST compare to other baselines, with a focus on comparing to newer baselines that use perdataset tuning? \u2022 Can the BST objective improve performance when used with one-step methods (IQL) that perform in-sample policy evaluation? \u2022 How well does the Morse network learn to discriminate between in-dataset and OOD actions? \u2022 How does changing the kernel scale parameter \u03bb affect performance? \u2022 Does using independent ensembles, a second method of uncertainty estimation, improve performance? We evaluate our algorithm on the D4RL benchmark [Fu et al., 2020], including the Gym Locomotion and challenging Antmaze navigation tasks. 5.1 Comparison with SOTA Methods We evaluate TD3-BST against the older, well known baselines of TD3-BC [Fujimoto and Gu, 2021], CQL [Kumar et al., 2020], and IQL [Kostrikov et al., 2021b]. There are more recent methods that consistently outperform these baselines; of these, we include SQL [Xu et al., 2023], SAC-RND [Nikulin et al., 2023], DOGE [Li et al., 2022], VMG [Zhu et al., 2022], ReBRAC [Tarasov et al., 2023], CFPI [Li et al., 2023] and MSG [Ghasemipour et al., 2022] (to our knowledge, the best-performing ensemble-based method). It is interesting to note that most of these baselines implement policy constraints, except for VMG (graph-based planning) and MSG (policy constraint using a large, independent ensemble). We note that all the aforementioned SOTA methods (except SQL) report scores with per-dataset tuned parameters in stark contrast with the older TD3-BC, CQL, and IQL algorithms, which use the same set of hyperparameters in each D4RL domain. All scores are reported with 10 evaluations in Locomotion and 100 in Antmaze across five seeds. We present scores for D4RL Gym Locomotion in Table 1. TD3-BST achieves best or near-best results compared to all previous methods and recovers expert performance on five of nine datasets. The best performing prior methods include SAC-RND and ReBRAC, both of which require per-dataset tuning of BRAC-variant algorithms [Wu et al., 2019]. 
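Referring back to Algorithm 1 and Equations (13)-(14), the two training phases can be sketched as a single loop. The sketch reuses rq_kernel, morse_loss and td3_bst_actor_loss from above; the dataset.sample() interface, the single critic, and the omission of TD3's twin critics and target-policy smoothing are simplifying assumptions, not the authors' exact procedure.

def train_td3_bst(dataset, f_phi, actor, critic, target_actor, target_critic,
                  morse_opt, actor_opt, critic_opt,
                  t_morse=100_000, t_ac=1_000_000, m=2, gamma=0.99, rho=0.005):
    # Phase 1: fit the Morse network by minimizing Equation (6)
    for _ in range(t_morse):
        s, a, _, _ = dataset.sample()
        loss = morse_loss(f_phi, s, a)
        morse_opt.zero_grad(); loss.backward(); morse_opt.step()

    morse_density = lambda s, a: rq_kernel(f_phi(s, a), a)

    # Phase 2: TD3-style actor-critic with behavioral supervisor tuning
    for t in range(1, t_ac + 1):
        s, a, r, s_next = dataset.sample()
        with torch.no_grad():
            # Equation (14) target; assumes critics return a [batch] tensor
            y = r + gamma * target_critic(s_next, target_actor(s_next))
        critic_loss = ((y - critic(s, a)) ** 2).mean()
        critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

        if t % m == 0:  # delayed policy update, as in TD3
            actor_loss = td3_bst_actor_loss(actor, critic, morse_density, s, a)
            actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
            # Polyak averaging: theta_bar <- rho * theta + (1 - rho) * theta_bar
            for p, tp in zip(critic.parameters(), target_critic.parameters()):
                tp.data.mul_(1 - rho).add_(rho * p.data)
            for p, tp in zip(actor.parameters(), target_actor.parameters()):
                tp.data.mul_(1 - rho).add_(rho * p.data)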
We evaluate TD3-BST on the more challenging Antmaze tasks which contain a high degree of suboptimal trajectories \fand follow a sparse reward scheme that requires algorithms to stitch together several trajectories to perform well. TD3-BST achieves the best scores overall in Table 2, especially as the maze becomes more complex. VMG and MSG are the bestperforming prior baselines and TD3-BST is far simpler and more efficient in its design as a variant of TD3-BC. The authors of VMG report the best scores from checkpoints rather than from the final policy. MSG report scores from ensembles with both 4 and 64 critics of which the best scores included here are from the 64-critic variant. We pay close attention to SAC-RND, which, among all baselines, is most similar in its inception to TD3-BST. SACRND uses a random and trained network pair to produce a dataset-constraining penalty. SAC-RND achieves consistent SOTA scores on locomotion datasets, but fails to deliver commensurate performance on Antmaze tasks. TD3-BST performs similarly to SAC-RND in locomotion and achieves SOTA scores in Antmaze. 5.2 Improving One-Step Methods One-step algorithms learn a policy from an offline dataset, thus remaining on-policy [Rummery and Niranjan, 1994; Sutton and Barto, 2018], and using weighted behavioral cloning [Brandfonbrener et al., 2021; Kostrikov et al., 2021b]. Empirical evaluation by [Fu et al., 2022] suggests that advantageweighted BC is too restrictive and relaxing the policy objective to Equation 1 can lead to performance improvement. We use the BST objective as a drop-in replacement for the policy improvement step in IQL [Kostrikov et al., 2021b] to learn an optimal policy while retaining in-sample policy evaluation. We reproduce IQL results and report scores for IQL-BST, both times using a deterministic policy [Tarasov et al., 2022] and identical hyperparameters to the original work in Table 3. Reproduced IQL closely matches the original results, with slight performance reductions on the -large datasets. Relaxing weighted-BC with a BST objective leads to improvements in performance, especially on the more difficult -medium and -large datasets. To isolate the effect of the BST objective, we do not perform any additional tuning. 5.3 Ablation Experiments Morse Network Analysis We analyze how well the Morse network can distinguish between dataset tuples and samples from Dperm, permutations of dataset actions, and Duni. We plot both certainty (M\u03d5) density and t-SNEs [Van der Maaten and Hinton, 2008] in Figure 3 which show that the unsupervised Morse network is effective in distinguishing between Dperm and Duni and assigning high certainty to dataset tuples. Ablating kernel scale We examine sensitivity to the kernel scale \u03bb. Recall that k = dim(A). We see in Figure 4 that the scale \u03bb = k 2 is a performance sweet-spot on the challenging Antmaze tasks. We further illustrate this by plotting policy deviations from dataset actions in Figure 5. The scale \u03bb = 1.0 is potentially too lax a behavioral constraint, while \u03bb = k is too strong, resulting in performance reduction. However, performance on all scales remains strong and compares well with most prior algorithms. Performance may be further improved by tuning \u03bb, possibly with separate scales for each input dimension. Figure 3: M\u03d5 densities and t-SNE for hopper-medium-expert (top row) and Antmaze-large-diverse (bottom row). Density plots are clipped at 10.0 as density for D is large. 
10 actions are sampled from Duni and Dperm each, per state. t-SNE is plotted from the per-dimension perturbation | f\u03d5(s, a) \u2212a |. Figure 4: Ablations of \u03bb on Antmaze datasets. Recall k = dim(A). Independent or Shared Targets? Standard TD3 employs Clipped Double Q-learning (CDQ) [Hasselt, 2010; Fujimoto et al., 2018] to prevent value overestimation. On tasks with sparse rewards, this may be too conservative [Moskovitz et al., 2021]. MSG [Ghasemipour et al., 2022] uses large ensembles of fully independent Q functions to learn offline. We examine how independent double Q functions perform compared to the standard CDQ setup in Antmaze with 2 and 10 critics. The results in Figure 6 show that disabling CDQ with 2 critics is consistently detrimental to performance. Using a larger 10-critic ensemble leads to moderate improvements. This suggests that combining policy regularization with an efficient, independent ensemble could bring further performance benefits with minimal changes to the algorithm. 6 Discussion Morse Network In [Dherin et al., 2023], deeper architectures are required even when training on simple datasets. This rings true for our application of Morse networks in this work, with low-capacity networks performing poorly. Training the Morse network for each locomotion and Antmaze dataset typically takes 10 minutes for 100 000 gradient steps using a batch size of 1 024. When training the policy, using the Morse network increases training time by approximately 15%. Optimal Datasets On Gym Locomotion tasks TD3-BST performance is comparable to newer methods, all of which \fDataset TD3-BC CQL IQL SQL SAC-RND1 DOGE ReBRAC CFPI TD3-BST (ours) halfcheetah-m 48.3 44.0 47.4 48.3 66.6 45.3 65.6 52.1 62.1 \u00b1 0.8 hopper-m 59.3 58.5 66.3 75.5 97.8 98.6 102.0 86.8 102.9 \u00b1 1.3 walker2d-m 83.7 72.5 78.3 84.2 91.6 86.8 82.5 88.3 90.7 \u00b1 2.5 halfcheetah-m-r 44.6 45.5 44.2 44.8 42.8 54.9 51.0 44.5 53.0 \u00b1 0.7 hopper-m-r 60.9 95.0 94.7 99.7 100.5 76.2 98.1 93.6 101.2 \u00b1 4.9 walker2d-m-r 81.8 77.2 73.9 81.2 88.7 87.3 77.3 78.2 90.4 \u00b1 8.3 halfcheetah-m-e 90.7 91.6 86.7 94.0 107.6 78.7 101.1 97.3 100.7 \u00b1 1.1 hopper-m-e 98.0 105.4 91.5 111.8 109.8 102.7 107.0 104.2 110.3 \u00b1 0.9 walker2d-m-e 110.1 108.8 109.6 110.0 105.0 110.4 111.6 111.9 109.4 \u00b1 0.2 Table 1: Normalized scores on D4RL Gym Locomotion datasets. VMG scores are excluded because this method performs poorly and the authors of MSG do not report numerical results on locomotion tasks. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. 1 SAC-RND in addition to per-dataset tuning, is trained for 3 million gradient steps. Though not included here, ensemble methods may perform better than the best non-ensemble methods on some datasets, albeit still requiring per-dataset tuning to achieve their reported performance. Top scores are in bold and second-best are underlined. Dataset TD3-BC CQL IQL SQL SAC-RND1 DOGE VMG2 ReBRAC CFPI MSG3 TD3-BST (ours) -umaze 78.6 74.0 87.5 92.2 97.0 97.0 93.7 97.8 90.2 98.6 97.8 \u00b1 1.0 -umaze-d 71.4 84.0 62.2 74.0 66.0 63.5 94.0 88.3 58.6 81.8 91.7 \u00b1 3.2 -medium-p 10.6 61.2 71.2 80.2 74.7 80.6 82.7 84.0 75.2 89.6 90.2 \u00b1 1.8 -medium-d 3.0 53.7 70.0 79.1 74.7 77.6 84.3 76.3 72.2 88.6 92.0 \u00b1 3.8 -large-p 0.2 15.8 39.6 53.2 43.9 48.2 67.3 60.4 51.4 72.6 79.7 \u00b1 7.6 -large-d 0.0 14.9 47.5 52.3 45.7 36.4 74.3 54.4 52.4 71.4 76.1 \u00b1 4.7 Table 2: Normalized scores on D4RL Antmaze datasets. 
1 SAC-RND is trained for three million gradient steps. 2 VMG reports scores from the best-performing checkpoint rather than from the final policy; despite this, TD3-BST still outperforms VMG in all datasets except -umaze-diverse. 3 for MSG we report the best score among the reported scores of all configurations, also, MSG is trained for two million steps. Prior methods are grouped by those that do not perform per-dataset tuning and those that do. Other ensemble-based methods are not included, as MSG achieves higher performance. Top scores are in bold and second-best are underlined. Dataset IQL (reproduced) IQL-BST -umaze 87.6 \u00b1 4.6 90.8 \u00b1 2.1 -umaze-d 64.0 \u00b1 5.2 63.1 \u00b1 3.7 -medium-p 70.7 \u00b1 4.3 80.3 \u00b1 1.3 -medium-d 73.8 \u00b1 5.9 84.7 \u00b1 2.0 -large-p 35.2 \u00b1 8.4 55.4 \u00b1 3.2 -large-d 40.7 \u00b1 9.2 51.6 \u00b1 2.6 Table 3: Normalized scores on D4RL Antmaze datasets for IQL and IQL-BST. We use hyperparameters identical to the original IQL paper and use Equation 13 as a drop-in replacement for the policy objective. rarely outperform older baselines. This can be attributed to a significant proportion of high-return-yielding trajectories that are easier to improve. 7 Conclusion In this paper, we introduce TD3-BST, an algorithm that uses an uncertainty model to dynamically adjust the strength of regularization. Dynamic weighting allows the policy to maximize reward around individual dataset modes. Our algorithm compares well against prior methods on Gym Locomotion tasks and achieves the best scores on the more challenging Antmaze tasks, demonstrating strong performance when learning from suboptimal data. In addition, our experiments show that combining our pol(a) hopper-medium (b) amaze-large-play Figure 5: Histograms of deviation from dataset actions. Figure 6: % change in Antmaze scores without CDQ for critic ensembles consisting of 2 and 10 Q functions. icy regularization with an ensemble-based source of uncertainty can improve performance. Future work can explore other methods of estimating uncertainty, alternative uncertainty measures, and how best to combine multiple sources of uncertainty.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2310.06268v1",
"title": "Bi-Level Offline Policy Optimization with Limited Exploration",
"abstract": "We study offline reinforcement learning (RL) which seeks to learn a good\npolicy based on a fixed, pre-collected dataset. A fundamental challenge behind\nthis task is the distributional shift due to the dataset lacking sufficient\nexploration, especially under function approximation. To tackle this issue, we\npropose a bi-level structured policy optimization algorithm that models a\nhierarchical interaction between the policy (upper-level) and the value\nfunction (lower-level). The lower level focuses on constructing a confidence\nset of value estimates that maintain sufficiently small weighted average\nBellman errors, while controlling uncertainty arising from distribution\nmismatch. Subsequently, at the upper level, the policy aims to maximize a\nconservative value estimate from the confidence set formed at the lower level.\nThis novel formulation preserves the maximum flexibility of the implicitly\ninduced exploratory data distribution, enabling the power of model\nextrapolation. In practice, it can be solved through a computationally\nefficient, penalized adversarial estimation procedure. Our theoretical regret\nguarantees do not rely on any data-coverage and completeness-type assumptions,\nonly requiring realizability. These guarantees also demonstrate that the\nlearned policy represents the \"best effort\" among all policies, as no other\npolicies can outperform it. We evaluate our model using a blend of synthetic,\nbenchmark, and real-world datasets for offline RL, showing that it performs\ncompetitively with state-of-the-art methods.",
"authors": "Wenzhuo Zhou",
"published": "2023-10-10",
"updated": "2023-10-10",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"math.ST",
"stat.TH"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "Bi-Level Offline Policy Optimization with Limited Exploration",
"main_content": "Introduction Offline reinforcement learning (RL) is a task to learn a good policy using only a pre-collected, fixed dataset, without further exploration with the environment. This distinctive characteristic positions offline RL as a promising approach for solving real-world sequential decision-making problems in healthcare [53, 96], financial marketing [75], robotics [76] and education [49], as acquiring diverse or expert-quality data in these fields can be costly or practically unattainable. Arguably, two of the biggest challenges in offline RL are the distributional shift between the datagenerating distribution and those induced by candidate policies, and the stringent requirements on the properties of function approximation [42]. It has been observed that, in practice, the distributional mismatch often results in unsatisfactory performance of many existing algorithms, and even amplifying with function approximation [23, 41]. Many prior works [60, 18, 4, 19] crucially rely on a global data-coverage assumption and completeness-type function approximation condition in a technical sense. The former necessitates that the dataset to contain any state-action pair with a lower bounded probability so that the distributional shift can be well calibrated. The latter requires the function class to be closed under Bellman updates. Both assumptions are particularly strong and are likely to be violated in practice [93]. Consequently, algorithms that depend on these assumptions may experience performance degradation and instability [82]. Therefore, it is crucial to develop novel algorithms that relax these assumptions, offering robust and widely applicable solutions for real-world scenarios. 37th Conference on Neural Information Processing Systems (NeurIPS 2023). arXiv:2310.06268v1 [cs.LG] 10 Oct 2023 \fTo address the aforementioned challenges in offline settings, one fundamental principle is the concept of pessimism, which aims to maximize rewards in the worst possible MDP consistent with the offline dataset [23, 86]. In practice, these methods have generally been shown to be more robust when coverage assumptions are violated [41]. Although many such pessimistic algorithms have been developed, very few works can tackle datacoverage and function approximation issues simultaneously, while establishing strong regret guarantees. For instance, deep offline RL algorithms [23, 37, 35] exhibit impressive empirical performance, but their theoretical consistency guarantees are limited to tabular Markov decision processes (MDPs). The works of [47, 62, 31, 86, 90, 93, 13] relax the global coverage to a partial coverage condition, wherein the offline data only covers a single comparator policy. However, all of these methods require Bellman completeness for the function class. The most recent works [12, 94] take a significant step towards relaxing Bellman completeness to realizability, that the function class can capture the target ground-truth function. Nonetheless, these algorithms are unable to provide a meaningful regret guarantee without any data-coverage assumption (when both global and partial coverage fails), and also empirical evaluations are absent. Even without additional conditions, the learned policies of these algorithms can only compete with the (Bellman flow) optimal policy, resulting in a lack of robustness when the optimal policy is not covered by data, a situation that frequently occurs. 
Due to page limit, we have only discussed the closest related work here, and the rest is deferred to Appendix. Our contribution. In this paper, we develop a provably sample-efficient offline RL framework. Our information-theoretic algorithm is designed based on the concept of bi-level (upper and lower level) structured optimization, which leads to a hierarchical interpretation and naturally enjoys learning stability and algorithmic convergence from a game-theoretic perspective. In particular, at the lower level, one component is to construct a confidence set with consistent value estimates regarding the appropriately small weighted average Bellman error, effectively preventing overly pessimistic evaluation. Meanwhile, the second component, which deals with uncertainty control, implicitly enhances the power of model extrapolation. In addition to the information-theoretic algorithm, we also develop a computationally efficient counterpart that is solved by a penalized adversarial estimation algorithm with proximal-mapping updating, allowing both non-linear and linear function approximation. From a theoretical standpoint, we establish a strong regret guarantee for both information-theoretical and practical algorithms under only realizability without requiring any data-coverage (neither global nor partial coverage) and completeness-type assumptions. As a special case study, we further refine our developed mixture density ratio-based concentrability coefficient to a relative condition number in linear MDP settings. The sample complexity of our regret bound improves or at least matches the prior results in the fully exploratory or partial coverage settings where the Bellman-completeness holds. Notably, compared with existing works, either focusing on theoretical or empirical development, we provide a comprehensive theoretical analysis of the proposed framework and also conduct synthetic, benchmark, and real data experiments for empirical evaluation. 2 Preliminaries and Notations Markov decision process. We consider an infinite-horizon discounted MDP M = {S, A, P, \u03b3, r, s0} [61], where S is the state space, A is the action space, P : S \u00d7 A \u2192\u2206(S) is the Markov transition kernel for some probabilistic simplex \u2206, r : S \u00d7 A \u2192[0, \u00af R] is the reward function for \u00af R \u22650, \u03b3 \u2208[0, 1) is the discounted factor and s0 is the initial state. A policy \u03c0 : S \u2192\u2206(A) induces a distribution of the trajectory s0, a0, r0, s1, . . ., where at \u223c\u03c0(\u00b7|st), rt = r(st, at), st+1 \u223cP(\u00b7|st, at) for any t \u22650. The expected discounted return of a policy is defined as J(\u03c0) = E[P\u221e t=0 \u03b3trt|\u03c0]. The discounted return when the trajectory starts with (s, a) and all remaining actions are taken according to \u03c0 is called q-function q\u03c0 : S\u00d7A \u2192[0, \u00af V ]. The q\u03c0 is the unique fixed point of the Bellman operator B\u03c0, satisfying the Bellman equation [74]: B\u03c0q(s, a) := r(s, a)+\u03b3Es\u2032\u223cP(\u00b7|s,a)[q(s\u2032, \u03c0)]. Here q(s\u2032, \u03c0) is denoted as shorthand for Ea\u2032\u223c\u03c0(\u00b7|s\u2032) [q (s\u2032, a\u2032)], and we define P\u03c0q(s, a) := Es\u2032\u223cP(\u00b7|s,a) [q (s\u2032, \u03c0)]. Additionally, it is helpful to remember that J(\u03c0) = q\u03c0(s0, \u03c0). 
Another important notion is the normalized discounted visitation of \u03c0, defined as d\u03c0(s, a) := (1 \u2212\u03b3) P\u221e t=0 \u03b3td\u03c0,t(s, a), where d\u03c0,t is the marginal state-action distribution at the time-step t. Offline RL under function approximation. In the offline RL setting, there exists an unknown offline data-generating distribution \u00b5 induced by behavior policies. Despite the unknowns of \u00b5, we can observe a set of transition pairs, as offline dataset D1:n := {si, ai, ri, s\u2032 i}n i=1 sampling from \u00b5. For a given policy \u03c0, the density-ratio (importance-weight), \u03c4d\u03c0/\u00b5(s, a) = d\u03c0(s, a)/\u00b5(s, a), measures how 2 \feffectively \u00b5 covers the visitation induced by \u03c0. The primary objective of offline policy optimization is to learn an optimal policy that maximizes the return, J(\u03c0), using the offline dataset. Under the function approximation setting, we assume access to two function classes Q : S \u00d7 A \u2192R and \u2126: S \u00d7 A \u2192R, which are utilized to capture q\u03c0 and \u03c4d\u03c0/\u00b5, respectively. Exploration and coverage. In general, when saying an offline dataset is well-explored, it means that a well-designed behavior policy has been executed, allowing for comprehensive exploration of the MDP environment. As a result, the dataset is likely to contain possibly all state-action pairs. This implicitly requires \u00b5 has the global coverage [23, 78]. In this context, the global coverage means that the density ratio-based concentrability coefficient, sups,a{d\u03c0(s, a)/\u00b5(s, a)}, is upper-bounded by a constant c \u2208R+ for all policies \u03c0 \u2208\u03a0, where \u03a0 is some policy class. This condition is frequently employed in offline RL [4, 11, 17]. However, in practice, this assumption may not hold true, as devising an exploratory policy is a challenging task for large-scale RL problems. Instead, our goal is to learn a good policy with strong theoretical guarantees that can compete against any arbitrarily covered comparator policy under much weaker conditions than the global coverage. 3 Bi-Level Offline Policy Optimization Algorithm In this section, we introduce our bi-level offline policy optimization framework. The development of the framework consists of three major steps. Step 1: robust interval learning. In this step, we aim to provide a robust off-policy interval evaluation. The major advantage of this interval formulation is its robustness to the model-misspecification of the importance-weight class \u2126, and the encoding of distributional-shift information in the policy evaluation process. First, we define a detection function D(\u00b7), which is used to measure the degree of the distributional-shift in terms of density ratio. Definition 3.1. For x, c1, c2, C \u2208R+ and C \u22651, the detection function D(\u00b7) satisfies the following conditions: (1) 1-minimum: D(1) = 0. (2) Non-negativity: D(x) \u22650. (3) Boundedness on first-order derivative: |D\u2032(x)| \u2264c2 if x \u2208[0, C]. (4) Boundedness on value: |D(x)| \u2264c1 for x \u2208[0, C]. (5) Strong convexity: D(x) is M-strongly convex with respect to x. The family of R\u00e9nyi entropy [65], Bhattacharyya distance [14], and simple quadratic form functions [95]all satisfy the conditions outlined in Definition 3.1. Under this definition, it can easily observe that D has a convex conjugate function [9], D\u2217with D\u2217(x\u2217) = supx {x \u00b7 x\u2217\u2212D(x)}, that satisfies D\u2217(0) = 0. 
It follows from Bellman equation B\u03c0q\u03c0(s, a) = q\u03c0(s, a) for any s, a, then J(\u03c0) = q\u03c0(s0, \u03c0) + E\u00b5[\u03bbD\u2217((B\u03c0q\u03c0(s, a) \u2212q\u03c0(s, a)/\u03bb))/(1 \u2212\u03b3)] for \u03bb \u22650. Applying Fenchel-Legendre transformation [55, 29], and model x in a restricted importance weight class \u2126for any s, a, we obtain J(\u03c0) =q\u03c0(s0, \u03c0) + E\u00b5[sup x x \u00b7 (B\u03c0q\u03c0(s, a) \u2212q\u03c0(s, a)) \u2212\u03bbD(x)]/(1 \u2212\u03b3) (1) \u2265q\u03c0(s0, \u03c0) + E\u00b5[\u03c4(s, a)(r(s, a) + \u03b3q\u03c0(s\u2032, \u03c0) \u2212q\u03c0(s, a)) \u2212\u03bbD(\u03c4(s, a))]/(1 \u2212\u03b3). (2) Suppose q\u03c0 is well-sepcified, i.e., q\u03c0 \u2208Q, we can find a lower bound of (2), which is valid for any \u03c4 \u2208\u2126, via replacing q\u03c0 with infq\u2208Q as follows: J(\u03c0) \u2265inf q\u2208Q n \u0000E\u00b5 [\u03c4(s, a)(r(s, a) + \u03b3q (s\u2032, \u03c0) \u2212q(s, a))] + q(s0, \u03c0) \u0001 /(1 \u2212\u03b3) | {z } :=H(\u03c4,q,\u03c0) \u2212\u03bb/(1 \u2212\u03b3)E\u00b5[D(\u03c4(s, a))] | {z } :=\u03bb\u03be(D,\u03c4) o . After following a similar derivation, we can establish an upper bound for J(\u03c0) as well, and thus construct a value interval for J(\u03c0). This interval holds for any \u03c4 and is therefore robust against model-misspecification of \u2126. In order to obtain a tighter interval, we can shrink the interval width by maximizing the lower bound and minimizing the upper bound, both with respect to \u03c4. This procedure can be interpreted as searching for some good \u03c4 \u2208\u2126to minimize the function approximation error. J(\u03c0) \u2208 \u0014 sup \u03c4\u2208\u2126 inf q\u2208Q H(\u03c4, q, \u03c0) \u2212\u03bb\u03be(D, \u03c4), inf \u03c4\u2208\u2126sup q\u2208Q H(\u03c4, q, \u03c0) + \u03bb\u03be(D, \u03c4) \u0015 , (3) While the interval offers a robust method for dealing with the bias introduced by function approximation when estimating J(\u03c0), it lacks a crucial and non-trivial step for handling statistical uncertainty. 3 \fStep 2: uncertainty quantification. In this step, we quantify the uncertainty of the interval (3), and establish a non-asymptotic confidence interval (CI) for J(\u03c0) which integrates bias and uncertainty quantifications in a single interval inspired by [95]. Given offline data D1:n, our formal result for quantifying sampling uncertainty in order to establish the CI for J(\u03c0). Theorem 3.1 (Non-asymptotic confidence interval). For a target policy \u03c0, the return J(\u03c0) is within a CI for any \u03c4 \u2208\u2126with probability at least 1 \u2212\u03b4, i.e., J(\u03c0) \u2208[ b J\u2212 n (\u03c0; \u03c4), b J+ n (\u03c0; \u03c4)] for b J\u2212 n (\u03c0; \u03c4) := 1 n n X i=1 ri\u03c4(si, ai) 1 \u2212\u03b3 \u2212sup q\u2208Q c Mn(\u2212q, \u03c4) \u2212\u03bb\u03ben(D, \u03c4) \u2212\u03c3n, b J+ n (\u03c0; \u03c4) := 1 n n X i=1 ri\u03c4(si, ai) 1 \u2212\u03b3 + sup q\u2208Q c Mn(q, \u03c4) + \u03bb\u03ben(D, \u03c4) + \u03c3n, (4) if the uncertainty deviation \u03c3n satisfies P \u0012 sup \u03c4\u2208\u2126 \f \f \f 1 n(1 \u2212\u03b3) n X i=1 \u03c4(si, ai) (ri + \u03b3q\u03c0(s\u2032 i, \u03c0) \u2212q\u03c0(si, ai)) \u2212\u03bb\u03ben(D, \u03c4) \f \f \f \u2264\u03c3n \u0013 \u22651 \u2212\u03b4, where c Mn(q, \u03c4) := Pn i=1 \u03c4(si, ai)(\u03b3q(s\u2032 i, \u03c0) \u2212q(si, ai))/(1 \u2212\u03b3)n + q(s0, \u03c0). 
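The confidence lower bound in Equation (4) can be estimated directly from transitions once candidate function classes are fixed. The sketch below is illustrative only: it plugs in the quadratic detection function from the previous example, treats tau, pi and each q as plain callables, and takes sigma_n as a user-supplied deviation term; the upper bound is symmetric, and both bounds can then be tightened by optimizing over a set of candidate tau's.

import numpy as np

def j_lower_bound(data, tau, pi, q_class, s0, gamma=0.99, lam=1.0, sigma_n=0.0):
    # data: iterable of (s, a, r, s_next) transitions
    w = np.array([tau(s, a) for s, a, _, _ in data])
    r = np.array([rew for _, _, rew, _ in data])
    reward_term = (w * r).mean() / (1.0 - gamma)

    def m_hat(q):
        # M_n(q, tau) = mean[ tau * (gamma * q(s', pi(s')) - q(s, a)) ] / (1 - gamma)
        #               + q(s0, pi(s0))
        td = np.array([gamma * q(sn, pi(sn)) - q(s, a) for s, a, _, sn in data])
        return (w * td).mean() / (1.0 - gamma) + q(s0, pi(s0))

    # M_n is linear in q, so sup_q M_n(-q, tau) = -inf_q M_n(q, tau)
    sup_term = -min(m_hat(q) for q in q_class)
    # lambda * xi_n(D, tau) with the quadratic detection function D(x) = (x - 1)^2 / 2
    penalty = lam * np.mean(0.5 * (w - 1.0) ** 2) / (1.0 - gamma)
    return reward_term - sup_term - penalty - sigma_n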
Similar to the value interval, the CI [ b J\u2212 n (\u03c0; \u03c4), b J+ n (\u03c0; \u03c4)] also holds for any \u03c4 \u2208\u2126. Therefore, we can optimize the confidence lower and upper bounds in (4) over \u03c4 \u2208\u2126to tighten the CI, and obtain: P \u0010 J(\u03c0) \u2208[sup \u03c4\u2208\u2126 b J\u2212 n (\u03c0; \u03c4), inf \u03c4\u2208\u2126 b J+ n (\u03c0; \u03c4)] \u2286[ b J\u2212 n (\u03c0; \u03c4), b J+ n (\u03c0; \u03c4) \u0011 \u22651 \u2212\u03b4. Step 3: bridge policy evaluation to policy optimization. In this step, we aim to formulate a policy optimization based on the derived high-confidence policy evaluation from the previous steps. Given the consistent CI estimation of J(\u03c0), we can naturally incorporate the pessimism principle, i.e., using the CI lower bounds of J(\u03c0) as the value estimate of the policy evaluation of \u03c0 [31]. With such a procedure, our objective is to maximize these lower bounds over some family \u03a0 of policies: max \u03c0\u2208\u03a0 \u001a sup \u03c4\u2208\u2126 b J\u2212 n (\u03c0; \u03c4) \u001b . (5) Although (5) is algorithmically feasible for obtaining a policy solver b \u03c0, it lacks direct interpretation without taking advantage of the bi-level optimization structure in hindsight. Therefore, we propose to reformulate (5) via a dual-to-prime conversion (shown in Theorem 3.2), which naturally lends itself to lower-upper optimization with guaranteed convergence. Specifically, we formulate (5) as a bi-level framework problem: (Upper Level) min \u03c0\u2208\u03a0 \u2212q\u03c0(s0, \u03c0), (6) (Lower Level) s.t. q\u03c0 \u2208arg min q\u2208Q\u03b5n q(s0, \u03c0), (7) Consistency : Q\u03b5n = \b q \u2208Q : sup \u03c4\u2208e \u2126e \u03c3n \f \fn\u22121 n X i=1 \u03c4(si, ai)(ri + \u03b3q(s\u2032 i, \u03c0) \u2212q(si, ai)) \f \f \u2264\u03b5n \t , Uncertainty Control : e \u2126e \u03c3n = \u001a \u03c4\u25e6/ sup \u03c4\u25e6\u2208\u2126 \u2225\u03c4\u25e6\u2225\u2126for \u03c4\u25e6\u2208\u2126: \u03ben(D, \u03c4\u25e6)) \u2264e \u03c3n \u001b . At the upper level, the learned policy b \u03c0 attempts to maximize the value estimate of q\u03c0 over some policy class \u03a0, while at the lower level, q\u03c0 is to seek the q-function with the pessimistic policy evaluation value from the confidence set Q\u03b5n with consistency guarantee and uncertainty control. For consistency, whenever q\u03c0 or its good approximator is included in Q (realizability for Q class is satisfied), the set Q\u03b5n ensures the estimation consistency of q\u03c0 in terms of \u201csufficently small\u201d weighted average Bellman error. For uncertainty control, the constrained set e \u2126e \u03c3n attempts to control the uncertainty arising from distributional shift via a user-specific thresholding hyperparameter e \u03c3n. The feasible (uncertainty controllable) candidates \u03c4 \u2208e \u03c3n are used as weights for the average Bellman error, helping to construct the consistent set Q\u03b5n. Risk-averse users can specify a lower value for the thresholding hyperparameter or consider a higher e \u03c3n to tolerate a larger distribution shift. In other words, the chosen value of e \u03c3n depends on the degree of pessimism users want to incorporate in the policy optimization. 4 \fTheorem 3.2. There must exist some threshold values \u03b5n and e \u03c3n, the return policy of (5) b \u03c0 satisfies the minimization problem in (6), indicating the solution of the optimization (5) and (6) is equivalent. 
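A toy instantiation of the bi-level problem (6)-(7) with finite candidate classes may help fix ideas. The sketch below (continuing with NumPy from the sketches above) is a simplification, not the paper's estimator: it reuses the quadratic detection function, omits the normalization of tau by its class norm, and simply enumerates small candidate sets q_class and omega_class; the upper level then picks the policy with the largest returned pessimistic value.

def pessimistic_value(data, pi, q_class, omega_class, s0, gamma=0.99,
                      eps_n=0.1, sigma_tilde=1.0):
    # Lower level of (6)-(7): keep only q's whose tau-weighted average Bellman
    # error is small for every admissible tau, then report the most pessimistic
    # initial-state value in that confidence set.
    def xi(tau):
        # detection penalty of tau under D(x) = (x - 1)^2 / 2
        w = np.array([tau(s, a) for s, a, _, _ in data])
        return np.mean(0.5 * (w - 1.0) ** 2)

    # Uncertainty control: admissible tau's whose penalty is below the threshold
    admissible = [tau for tau in omega_class if xi(tau) <= sigma_tilde]

    def avg_bellman_error(q, tau):
        # (1/n) * sum_i tau(s_i, a_i) * (r_i + gamma * q(s'_i, pi(s'_i)) - q(s_i, a_i))
        return np.mean([tau(s, a) * (r + gamma * q(sn, pi(sn)) - q(s, a))
                        for s, a, r, sn in data])

    # Consistency: the confidence set Q_eps of value candidates
    q_eps = [q for q in q_class
             if all(abs(avg_bellman_error(q, tau)) <= eps_n for tau in admissible)]
    if not q_eps:
        return None  # thresholds too tight for this candidate class
    # Pessimistic evaluation used by the upper-level policy search
    return min(q(s0, pi(s0)) for q in q_eps)

# Upper level (policy optimization over a finite policy class Pi):
#   best_pi = max(Pi, key=lambda pi: pessimistic_value(data, pi, q_class, omega_class, s0))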
Interestingly, the new form in (6) characterizes our policy optimization framework as a two-player general-sum game [21], which is a sequential game involving two players. Each player aims to maximize their own payoffs while considering the decisions of other players. Our bi-level optimization framework has been demonstrated to improve learning stability and ensure algorithmic convergence, benefiting from the existence of a local equilibrium [80]. To close this section, we remark that the establishment of consistency with respect to the weighted average Bellman error is the key point for us to relax the completeness-type assumptions. In the famous API/AVI-type algorithms [18, 19, 11], they target to minimize a squared or minimax Bellman error for finding q \u2208Q so that \u2225q \u2212B\u03c0q\u22252 L2(\u00b5) \u22480 to obtain q \u2248q\u03c0. Unfortunately, even with the infinite amount of data, the empirical estimate of \u2225q \u2212B\u03c0q\u22252 L2(\u00b5), i.e., squared empirical Bellman error) is biased due to the appearance of unwanted conditional variance, i.e., the double sampling issue [5]. The API/AVI-type algorithms need a separate helper function class e Q for modeling B\u03c0q, and [11] has shown that when the class e Q realizes the Bayes optimal regressor Bq (Bellman-completeness condition), the estimation is consistent and unbiased. In contrast, thanks to not using the squared loss, our weighted average Bellman error can be estimated from an unbiased estimate without concern about the double sampling issue, and thus no need for any completeness-type conditions. 4 Information-Theoretic Results In this section, we provide theoretical analyses of our algorithm, which reveals the advantages of the proposed policy optimization method from a technical standpoint. Notably, to the best of our knowledge, Theorem 4.1 is the first result of regret guarantee under only realizability without requiring any data coverage or completeness-type assumptions. Additionally, in contrast to most existing works that assume finite function classes, we carefully quantify the space complexities for infinite function classes (e.g., a class of real-valued functions generated by neural networks) using Pollard\u2019s pseudo-dimension [59]. The formal definition is provided in Appendix. It notices that the pseudo-dimension is a generalization of the well-known VC dimension [79]. In the following, we first introduce the necessary assumptions before presenting the guarantees for our algorithm. Assumption 1 (Realizability for q-function class). For any policy \u03c0 \u2208\u03a0, we have q\u03c0 \u2208Q. When this assumption holds approximately, we measure violation by infq\u2208Q sup\u03c1 E\u03c1[(q(s, a) \u2212B\u03c0q(s, a))2] \u2264 \u03b5Q, where \u03b5Q \u22650 and \u03c1 is some data distribution such that \u03c1 \u2208{de \u03c0 : e \u03c0 \u2208\u03a0}. We would like to emphasize that we do not require Bellman-completeness condition [86, 93], which is much stronger than the realizability condition. In addition, we do not impose realizability on the importance-weight class \u2126, thereby allowing model misspecification on \u2126. Having stated the major assumptions, we now turn to the routine ones on boundedness before presenting the main results. Assumption 2 (Boundedness on Q). There exists a non-negative constant \u00af V < \u221e, the function q(s, a) \u2208[0, \u00af V ], \u2200q \u2208Q, s \u2208S, a \u2208A. Assumption 3 (Boundedness on \u2126). 
There exists a non-negative constant 1 \u2264U\u03c4 \u221e< \u221e, the function \u03c4(s, a) \u2208[0, U\u03c4 \u221e], \u2200\u03c4 \u2208\u2126, s \u2208S, a \u2208A. Theorem 4.1. Under Assumptions 1-3 and denote supremum of \u00b5-weighted L2 norm of \u2126, i.e., sup\u03c4\u2208\u2126\u2225\u03c4(s, a)\u2225L2(\u00b5), as U\u03c4 2 . Let b \u03c0 be the output of solving (6) when we set \u03b5n = e O(n\u22121/2U\u03c4 2 ( p ln{Vol(\u0398)/\u03b4}+U\u03c4 \u221e \u221a\u03b5Q) and e \u03c3n = e O(n\u22121/2U\u03c4 2 L p ln{Vol(\u0398)/\u03b4}+M(U\u03c4 2 \u22121)2), then for any policy \u03c0 \u2208\u03a0 and some constant U\u22c6 2 \u2208[1, U\u03c4 2 ), w.p. \u22651 \u2212\u03b4, J(\u03c0) \u2212J(b \u03c0) \u2264 1 1 \u2212\u03b3 e O U\u22c6 2 C \u00af V ,L r ln{Vol(\u0398)/\u03b4} nM | {z } \u03f5\u03c3 + r CU\u03c4 \u221e M max{(\u03b5Q)1/2, (\u03b5Q)3/4} | {z } \u03f5mis + min \u001a \u03c1:\u2225\u03c1 \u00b5\u2225L2(\u00b5)\u2264U\u22c6 2 \u001b \u001a E(d\u03c0\u2212\u03c1)+\u0002 1\u00b5=0(I \u2212\u03b3P\u03c0)\u2206q\u03c0\u2212q\u03c0(s, a) | {z } \u03f5off + 1\u00b5>0C \u00af V ,\u03b3 r ln{Vol(\u0398)/\u03b4} n | {z } \u03f5b \u0003\u001b! . 5 \fHere \u2206q\u03c0\u2212q\u03c0(s, a) = q\u03c0(s, a) \u2212q\u03c0(s, a) for q\u03c0 := arg maxq\u2208Q\u03b5n q(s0, \u03c0) and q\u03c0 := arg minq\u2208Q\u03b5n q(s0, \u03c0). For Pollard\u2019s pseudo-dimensions D\u2126, DQ, D\u03a0, Vol(\u0398) = (eD max{D\u2126, DQ, D\u03a0} + 1)3({1 \u2228L}U\u03c4 2 )2D with the effective pseudo dimension D = D\u2126+ DQ + D\u03a0, where L is Lipschitz constant of M-strongly convex function D(\u00b7). Moreover, Cx and e O denote constant terms depending on x, and big-Oh notation ignoring high-order terms, respectively. In the upper bound of Theorem 4.1, we split the regret into four different parts: the on-support intrinsic uncertainty \u03f5\u03c3, the on-support bias \u03f5b, the violation of realizability \u03f5mis, and the off-support extrapolation error \u03f5off. Recall that we require q\u03c0 \u2208Q as in Assumption 1, in fact, we can further relax the condition to requiring q\u03c0 to be in the linear hull of Q [77], which is more robust to the realizability error \u03f5mis. In the following, we focus on investigating the roles of the on-support and off-support error terms in the regret bound. On-support errors: bias and uncertainty tradeoff. The on-support error consists of two terms: \u03f5b and \u03f5\u03c3. The on-support uncertainty deviation, \u03f5\u03c3, is scaled by a weighted L2-based concentrability coefficient U\u22c6 2 := \u2225\u03c1/\u00b5\u2225L2(\u00b5), which measures the distribution mismatch between the implicit exploratory data distribution and the baseline data distribution \u00b5. Meanwhile, \u03f5b depends on the probability mass of (d\u03c0 \u2212\u03c1)+1\u00b5>0, and represents the bias weighted by the probability mass difference between d\u03c0 and \u03c1 in the support region of \u00b5. In general, a small value of U\u22c6 2 necessitates choice of the distribution \u03c1 to be closer to \u00b5 which reduces \u03f5\u03c3, reducing \u03f5\u03c3 but potentially increasing the on-support bias \u03f5b due to the possible mismatch between d\u03c0 and \u03c1. Consequently, within the on-support region, there is a trade-off between \u03f5\u03c3 and \u03f5b, which is adjusted through U\u22c6 2 . Off-support error: enhanced model extrapolation. 
One of our main algorithmic contributions is that the off-support extrapolation error \u03f5off can be minimized by selecting the best possible \u03c1 without worrying about balancing the error trade-off, unlike the on-support scenario. This desirable property is essential for allowing the model to harness its extrapolation capabilities to minimize \u03f5off, while simultaneously achieving a good on-support estimation error. As a result, the model attains a small regret. Recall the bi-level formulation; at the lower level, (7) addresses uncertainty arising from the distributional shift using L2(\u00b5) control rather than L\u221econtrol. This plays an important role in enhancing the power of the model extrapolation. In particular, Specifically, there exists an implicit exploratory data distribution \u03c1 with on-support behavior (\u03c11\u00b5>0) close to \u00b5, such that \u2225\u03c1/\u00b5\u2225L2(\u00b5) is small. On the other hand, its off-support behavior (\u03c11\u00b5=0) can be arbitrarily flexible, ensuring that d\u03c01\u00b5=0 is close to \u03c11\u00b5=0. Consequently, (d\u03c0 \u2212\u03c1)+1\u00b5=0 is small, as is \u03f5off. When a dataset with partial coverage, as indicated in [78], it is necessary to provide a guarantee: learn the policy with \u201cbest efforts\u201d which is competitive to any policy as long as it is covered. Before we state the near-optimal regret guarantee of our algorithm, we formally define a notion of covered policies according to a newly-defined concentrability coefficient. Definition 4.1 (U\u03c4 2 -covered policy class). Let \u03a0(U\u03c4 2 ) denote the U\u03c4 2 -covered policy class of \u00b5 for U\u03c4 2 \u22651, defined as \u03a0(U\u03c4 2 ) := ( \u03c0 \u2208\u03a0 : \r \r \r \r d\u03c0(s, a)1\u00b5(s,a)>0 \u00b5(s, a) \r \r \r \r L2(\u00b5) \u2264U\u03c4 2 and sup s,a d\u03c0(s, a)1\u00b5(s,a)=0 \u00b5(s, a) < +\u221e ) . Note that this mixture density ratio concentrability coefficient is always bounded by the L\u221e-based concentrability coefficient. Thus such single-policy concentrability assumption in terms of the mixture density ratio is weaker than the standard L\u221edensity ratio-based assumption. Corollary 4.1 (Near-optimal regret). Under Assumptions 1-3 with \u03b5Q \u2208[0, 1), and we set \u03b5n, e \u03c3n as in Theorem 4.1, then for any good comparator policy \u03c0\u22c4\u2208\u03a0(U\u03c4 2 ) (not necessary the optimal policy \u03c0\u2217), w.p. \u22651 \u2212\u03b4, the output policy b \u03c0 of (6) satisfies J(\u03c0\u22c4) \u2212J(b \u03c0) \u2264 1 1 \u2212\u03b3 e O U\u22c6 2 ( \u00af V + L) r ln{Vol(\u0398)/\u03b4} nM + p (1 + U\u03c4 \u221e+ U\u03c4 \u221e/M) \u03b5Q ! . A close prior result to Corollary 4.1 is that of [12], which develops a pessimistic algorithm based on a nontrivial performance gap condition. Their regret guarantees only hold if the data covers the optimal policy \u03c0\u2217, in particular, requiring a bounded L\u221esingle-policy concentrability with respect to \u03c0\u2217. In comparison, our guarantee can still provide a meaningful guarantee even when \u03c0\u2217is not covered by data. In the following, we include the sample complexity of our algorithm when \u03b5Q = 0. 6 \fCorollary 4.2 (Polynomial sample complexity). Under the conditions in Corollary 4.1, the output policy b \u03c0 of solving (6) satisfies J(\u03c0\u22c4) \u2212J(b \u03c0) \u2264\u03b5 w.p. 
\u22651 \u2212\u03b4, if n = O \u0010(U\u22c6 2 ( \u00af V + L)/ \u221a M)2 \u03b52(1 \u2212\u03b3)2 + (U\u03c4 2 \u00af V 2( \u00af V + L)/M)0.67 \u03b51.33(1 \u2212\u03b3)1.33 + U\u03c4 \u221e( \u00af V + L) \u03b5(1 \u2212\u03b3) \u0011 ln Vol(\u0398) \u03b4 ! . The sample complexity consists of three terms corresponding to the slow rate O(n\u22121/2) and the two faster rate O(n\u22121) and O(n\u22123/4) terms in Corollary 4.1. When U\u03c4 2 and U\u03c4 \u221eare not too much larger than U\u22c6 2 , the fast rate terms are dominated, and the sample complexity is of order O(1/\u03b52), which is much faster than O(1/\u03b56) in the close work of [94]. It is worth noting that even in exploratory settings where the global coverage assumption holds, our sample complexity rate matches the fast rate in popular offline RL frameworks with general function approximation [11, 87, 17]. In addition to the near-optimal regret guarantee, in safety-critical applications, an offline RL algorithm should consistently improve upon the baseline (behavior) policies that collected the data [24, 39]. Our algorithm also achieves this improvement guarantee with respect to the baseline policy. Theorem 4.2 (Baseline policy improvement). Under Assumptions 1-3 with \u03b5Q = 0 and set \u03b5n, e \u03c3n as in Theorem 4.1. Suppose 1 \u2208\u2126and the baseline policy \u03c0b \u2208\u03a0 such that d\u03c0b = \u00b5, then the regret (1 \u2212\u03b3)(J(\u03c0b) \u2212J(b \u03c0)) for the output policy b \u03c0 of solving (6), w.p. \u22651 \u2212\u03b4, is upper bounded by O \u0012r ( \u00af V + L)2 ln{Vol(\u0398)/\u03b4} nM + r ( \u00af V 3 + \u00af V 2L) M \u0012ln{Vol(\u0398)/\u03b4} n \u0013 3 4 + ( \u00af V + L) ln{Vol(\u0398)/\u03b4} n \u0013 . The aforementioned information-theoretic results enhance the understanding of the developed algorithm, in terms of the function approximation and coverage conditions, sample complexity, horizon dependency, and bound tightness. In practice, although the information-theoretic algorithm offers a feasible solution to the problem, it is not yet tractable and computationally efficient due to the need to solve constrained optimization. In the following section, we develop a practical algorithm as a computationally efficient counterpart for the information-theoretic algorithm. 5 Penalized Adversarial Estimation Algorithm Although the information-theoretic algorithm offers a feasible solution to the problem, it is not yet tractable and computationally efficient due to the need to solve constrained optimization. In this section, we develop an adversarial estimation proximal-mapping algorithm that still adheres to the pessimism principle, but through penalization. Specifically, the adversarial estimation loss is constructed as follows: max \u03c4 min q L(q, \u03c4, \u03c0, c\u2217, \u03bb) for solving q(s0, \u03c0) + 1 (1 \u2212\u03b3)n ( c\u2217\f \f \f n X i=1 \u03c4(si, ai) (q(si, ai) \u2212ri \u2212\u03b3q(s\u2032 i, \u03c0)) \f \f \f \u2212\u03bb n X i=1 D(\u03c4(si, ai)) ) . We observe that the inner minimization for solving q is relatively straightforward, as we can obtain a closed-form global solver using the maximum mean discrepancy principle [25, 70]. In contrast, optimizing \u03c4\u03c8 is more involved, often requiring a sufficiently expressive non-linear function approximation class, e.g., neural networks. However, concavity typically does not hold for such a class of functions [29]. 
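For concreteness, a minimal sketch of evaluating the penalized adversarial objective on a fixed batch is shown below; the pre-computed arrays (q_s0_pi, q_sa, q_next_pi, tau_sa) are hypothetical inputs, and the strongly convex penalty D is instantiated as a squared deviation purely for illustration rather than as a prescribed choice. In practice the adversary ascends this quantity in tau while the inner player descends it in q.

```python
import numpy as np

def penalized_adversarial_loss(q_s0_pi, q_sa, q_next_pi, tau_sa, rewards,
                               gamma, c_star, lam):
    """Penalized objective L(q, tau, pi, c*, lambda):
    q(s0, pi) + [c* * |sum_i tau_i * residual_i| - lambda * sum_i D(tau_i)] / ((1 - gamma) * n).

    q_s0_pi:   scalar, q(s0, pi) at the initial state.
    q_sa:      array of q(s_i, a_i) on the dataset.
    q_next_pi: array of q(s'_i, pi) = E_{a' ~ pi}[q(s'_i, a')].
    tau_sa:    array of importance weights tau(s_i, a_i).
    """
    n = len(rewards)
    bellman_residual = q_sa - rewards - gamma * q_next_pi
    weighted_term = c_star * abs(np.sum(tau_sa * bellman_residual))
    # Illustrative strongly convex penalty D(t) = 0.5 * (t - 1)^2 (an assumption,
    # not necessarily the choice used in the paper).
    penalty = lam * np.sum(0.5 * (tau_sa - 1.0) ** 2)
    return q_s0_pi + (weighted_term - penalty) / ((1.0 - gamma) * n)
```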
From this perspective, our problem can be viewed as solving a non-concave maximization problem, conditional on the solved global optimizer \u00af q := arg minq L(q, \u03c4, \u03c0, c\u2217, \u03bb). At each iteration, we propose to update \u03c4 by solving the proximal mapping [58] using the Euclidean distance to reduce the computational burden. As a result, the pre-iteration computation is quite low. Algorithm 1 Adversarial proximal-mapping algorithm 1: Input observed data D1:n = {(si, ai, ri, s\u2032 i)}n i=1 and parameters q0, \u03c4 0, \u03c00, c\u2217, \u03bb and \u03b6. 2: For k = 1 to \u00af K: 3: Update \u03c4 k and qk by solving max \u03c4 min q L(q, \u03c4, \u03c0k\u22121, c\u2217, \u03bb) 4: Update \u03c0k by solving \u03c0k(\u00b7|s) = argmax \u03c0\u2208\u03a0 \u03b6 qk(\u00b7, s), \u03c0(\u00b7|s) \u000b \u2212DNegEntropy \u0000\u03c0(\u00b7|s), \u03c0k(\u00b7|s) \u0001 . 5: Return the policy b \u03c0, which randomly selects a policy from the set {\u03c0k} \u00af K k=1. 7 \fOnce q and \u03c4 are solved, we apply mirror descent in terms of the negative entropy DNegEntropy [7]. That is, given a stochastic gradient direction of \u03c0 we solve the prox-mapping in each iteration as outlined in step 4 of Algorithm 1. A detailed version of Algorithm 1 with extended discussions on convergence and complexity is provided in Appendix. In the following, we establish the regret guarantee for the policy output by Algorithm 1. Theorem 5.1. Under Assumptions 1-3 with \u03b5Q = 0, we properly choose \u03bb = \u03bb(U\u03c4 2 ), i.e., \u03bb well depends on U\u03c4 2 , and c\u2217= e O \u0000p n \u00af V /(\u03bbLU\u03c4 2 ln{Vol(\u0398\u2020)/\u03b4}) \u0001 . After running \u00af K \u2265log |A| rounds of Algorithm 1 with the stepsize \u03b6 = p log |A|/(2 \u00af V \u00af K), for any policy \u03c0 \u2208\u03a0, the output policy b \u03c0 of the algorithm, w.p \u22651 \u2212\u03b4, satisfies, J(\u03c0) \u2212J(b \u03c0) \u2264 1 1 \u2212\u03b3 e O 4 s (U\u22c6 2 )2C1 \u00af V ,\u03bb,L ln{Vol(\u0398\u2020)/\u03b4} n + r \u00af V log |A| \u00af K + 1 \u00af K \u00af K X k=1 min \u03c1k\u2208\u2206U\u22c6 2 E(d\u03c0\u2212\u03c1k)+ \u0014 1\u00b5=0 \u0010 B\u03c0kqk(s, a) \u2212qk(s, a) \u0011 + 1\u00b5>0 s C2 \u00af V ,\u03bb,L ln{Vol(\u0398\u2020)/\u03b4} n \u0015! , where \u2206U\u22c6 2 := {\u03c1k : \u2225\u03c1k \u00b5 \u2225L2(\u00b5) < U\u22c6 2 }, C1 \u00af V ,\u03bb,L, C2 \u00af V ,\u03bb,L are some constant terms, and the function class complexity Vol(\u0398\u2020) = (eD max{D\u2126, DQ, D\u03a0}+1)3({1\u2228L}U\u03c4 2 )2D for D = D\u2126+DQ+D\u03a0. Trajectory-adaptive exploratory data distribution. Similar to Theorem 4.1, the penalized algorithm also exhibits a desirable extrapolation property for minimizing extrapolation error while simultaneously preserving small on-support estimation errors. This is achieved through adaptations of the implicit exploratory data distributions, \u03c1k for k \u2208[ \u00af K]. In contrast to the information-theoretic algorithm, the automatic splitting by \u03c1k now depends on the optimization trajectory. At each iteration k, the penalized algorithm allows each implicit exploratory data distribution \u03c1k to adapt to the comparator policy \u03c0. 
This results in a more flexible adaptation than the one in the information-theoretic algorithm, either for balancing the trade-off between on-support bias and uncertainty incurred by the distributional mismatch between d\u03c0 and \u03c1k, or for selecting the best implicit exploratory to minimize model extrapolation error. Opimization error. Blessed by the reparametrization in the proximal-mapping policy update, which projects the mixture policies into the parametric space \u03a0\u03c9, the complexity of the restricted policy class is independent of the class of Q and the horizon optimization trajectory \u00af K. As a result, the optimization error O( p \u00af V log |A|/ \u00af K) can be reduced arbitrarily by increasing the maximum number of iterations, \u00af K, without sacrificing overall regret to balance statistical error and optimization error. This allows for the construction of tight regret bounds. This distinguishes our algorithm from API-style algorithms, which do not possess a policy class that is independent of Q [4, 68, 86]. 5.1 An Application to Linear MDPs with Refined Concentrability Coefficient In this section, we conduct a case study in linear MDPs with insufficient data coverage. The concept of the linear MDP is initially developed in the fully exploratory setting [89]. Let \u03d5 : S \u00d7A \u2192Rd be a ddimensional feature mapping. We assume throughout that these feature mappings are normalized, such that \u2225\u03d5(s, a)\u2225L2 \u22641 uniformly for all (s, a) \u2208S \u00d7 A. We focus on action-value functions that are linear in \u03d5 and consider families of the following form: Q\u03b8 := {(s, a) 7\u2192\u27e8\u03d5(s, a), \u03b8\u27e9| \u2225\u03b8\u2225L2 \u2264c\u03b8}, where c\u03b8 \u2208[0, \u00af V ]. For stochastic policies, we consider the soft-max policy class \u03a0\u03c9 := {\u03c0\u03c9(a|s) \u221d e\u27e8\u03d5(s,a),\u03c9\u27e9| \u2225\u03c9\u2225L2 \u2264c\u03c9}, where c\u03c9 \u2208(0, \u221e). Note that the softmax policy class is consistent with the implicit policy class produced by the mirror descent updates with negative entropy in Algorithm 1, where the exponentiated gradient update rule is applied in each iteration. For the importance-weight class, we also consider the following form: \u2126\u03c8 := {(s, a) 7\u2192\u27e8\u03d5(s, a), \u03c8\u27e9| \u2225\u03c8\u2225L2 \u2264c\u03c8} where c\u03c8 \u2208(0, \u221e). To simplify the analysis, we assume the realizability condition for Q\u03b8 is exactly met. In this linear MDP setting, we further refine the density ratio to a relative condition number to characterize partial coverage. This concept is recently introduced in the policy gradient literature [1] and is consistently upper-bounded by the L\u221e-based density ratio concentrability coefficient. Definition 5.1 (Relative condition number). For any policy \u03c0 \u2208\u03a0\u03c9 and behavior policy \u03c0b such that d\u03c0b = \u00b5, the relative condition number is defined as \u03b9(d\u03c0, \u00b5) = supx\u2208Rd xT Ed\u03c0[\u03d5(s,a)\u03d5(s,a)\u22a4]x x\u22a4E\u00b5[\u03d5(s,a)\u03d5(s,a)\u22a4]x . Assumption 4 (Bounded relative condition number). For any \u03c0 \u2208\u03a0\u03c9, \u03b9(d\u03c0, \u00b5) < \u221e. 8 \fIntuitively, this implies that as long as a high-quality comparator policy exists, which only visits the subspace defined by the feature mapping \u03d5 and is covered by the offline data, our algorithm can effectively compete against it [78]. 
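For intuition, the relative condition number can be estimated directly from feature samples: the small numpy sketch below (illustrative only, with hypothetical sample arrays) computes iota(d_pi, mu) as the top generalized eigenvalue of the two empirical feature covariances.

```python
import numpy as np

def relative_condition_number(phi_pi, phi_mu, ridge=1e-6):
    """Estimate iota(d_pi, mu) = sup_x (x^T E_dpi[phi phi^T] x) / (x^T E_mu[phi phi^T] x).

    phi_pi, phi_mu: (n, d) arrays of feature vectors phi(s, a) sampled under
    d_pi and mu; a small ridge keeps the mu-covariance invertible.
    """
    d = phi_pi.shape[1]
    sigma_pi = phi_pi.T @ phi_pi / len(phi_pi)
    sigma_mu = phi_mu.T @ phi_mu / len(phi_mu) + ridge * np.eye(d)
    # Whiten by the Cholesky factor of sigma_mu; the top eigenvalue of the
    # whitened matrix equals lambda_max(sigma_mu^{-1} sigma_pi).
    chol = np.linalg.cholesky(sigma_mu)
    inv_chol = np.linalg.inv(chol)
    whitened = inv_chol @ sigma_pi @ inv_chol.T
    return float(np.linalg.eigvalsh(whitened).max())
```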
This partial coverage assumption, in terms of the relative condition number, is considerably weaker than density ratio-based assumptions. In the following, we present our main near-optimal guarantee in linear MDPs. In addition, we design and conduct numerical experiments to empirically validate Theorem 5.2 in terms of the regret rate of convergence. Theorem 5.2. Under Assumption 4, if we set propertly choose \u03bb = \u03bb(c\u03c8) and c\u2217 = e O \u00004 p n/d ln{(1 + e\u221an(1 \u2228L) \u00af V c\u03c8c\u03c9)/\u03b4} \u0001 , and suppose b \u03c0lr is returned by Algorithm 1 with linear function approxiamiton after running \u00af K \u226blog |A| rounds, then for any policy in \u03c0 \u2208\u03a0\u03c9(Ulr 2 ) for Ulr 2 \u22651, w.p. \u22651 \u2212\u03b4, J(\u03c0) \u2212J(b \u03c0lr) is bounded by e O \u0012q min{\u03ba2c2 \u03c8{Ulr 2 }d2, \u03b9(d\u03c0, \u00b5)d} 1 \u2212\u03b3 4 s C \u00af V ,\u03bb,Ld ln{(1 + e\u221an(1 \u2228L) \u00af V c\u03c8c\u03c9)/\u03b4} n \u0013 , where \u03ba = trace(E\u00b5[\u03d5(s, a)\u03d5(s, a)\u22a4]) and c\u03c8{Ulr 2 } = sup{\u03c8:\u2225\u03d5(s,a)\u22a4\u03c8\u2225L2(\u00b5)=Ulr 2 } \u2225\u03c8\u2225L\u221e. To the best of our knowledge, this is the first result PAC guarantees for an offline model-free RL algorithm in linear MDPs, requiring only realizability and single-policy concentrability. The regret bound we obtain is at least linear and, at best, sub-linear with respect to the feature dimension d. Our approach demonstrates a sample complexity improvement in terms of feature dimension compared to prior work by [31], with a complexity of O(d1/2) versus O(d). It is worth noting that [31] only establishes results that compete with the optimal policy, and when specialized to linear MDPs, assumes the offline data has global coverage. Another previous study by [86] achieves a similar sub-linear rate in d as our approach; however, their algorithm is computationally intractable, relying on a much stronger Bellman-completeness assumption and requiring a small action space. 6 Experiments In this section, we evaluate the performance of our practical algorithm by comparing to the model-free offline RL baselines including CQL [37], BEAR [36], BCQ [23], OptiDICE [41], ATAC [13], IQL [35], and TD3+BC [22]. We also compete with a popular model-based approach COMBO [91]. Synthetic data. We consider two synthetic environments: a synthetic CartPole environment from the OpenAI Gym [10] and a simulated environment. Detailed discussions on the experimental designs are deferred to the Appendix. In both settings, following [77], we first learn a sub-optimal policy using DQN [51] and then apply softmax to its q-function, divided by a temperature parameter \u03b1 to set the action probabilities to define a behavior policy \u03c0b. Figure 1: The boxplot of the discounted return over 50 repeated experiments. A smaller \u03b1 implies \u03c0b is less explored, and thus the support of \u00b5 = d\u03c0b is relatively small. We vary different values of \u03b1 for evaluating the algorithm performance in \u201clow\u201d, \u201cmedium\u201d and \u201crelatively high\u201d offline data exploration scenarios. We use \u03b3 = 0.95 with the sample-size n = 1500 in all experiments. Tuning parameter selection is an open problem in offline policy optimization. Fortunately, Theorem 5.2 suggests an offline selection rule for hyperparameters \u03bb and c\u2217. In the following experiments, we set the hyper-parameters satisfying the condition O( n1/4 d log( \u00af V \u221an)). 
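Before turning to the results, a tiny sketch shows how the behavior policies in the synthetic study are formed, i.e., a temperature-alpha softmax over the sub-optimal DQN q-values; the q_values array is a hypothetical input.

```python
import numpy as np

def behavior_policy_probs(q_values, alpha):
    """Action probabilities pi_b(.|s) = softmax(q(s, .) / alpha).

    q_values: (num_actions,) q-values of the sub-optimal DQN at state s.
    alpha:    temperature; a smaller alpha concentrates mass on the greedy
              action, so the offline data explores less of the state-action space.
    """
    logits = np.asarray(q_values, dtype=float) / alpha
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# Example: behavior_policy_probs([1.0, 0.5, 0.2], alpha=0.1) is nearly greedy,
# while alpha=2.0 gives a much more exploratory behavior policy.
```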
Figure 1 shows that our algorithm almost consistently outperforms competing methods in different settings. This performance mainly benefits from the advantages exposed in our theoretical analysis, such as model extrapolation enhancement, relaxation of completeness-type assumptions on function approximation, etc. The only exception is the slightly poorer performance compared to COMBO in a high exploration setting, where COMBO may learn a good dynamic model with relatively sufficient exploration. We provide the experiment details in Appendix due to page limit. Benchmark data. We evaluate our proposed approach on the D4RL benchmark of OpenAI Gym locomotion (walker2d, hopper, halfcheetah) and Maze2D tasks [20], which encompasses a variety 9 \fof dataset settings and domains and positions our algorithm within the existing baselines. We take the results of COMBO, OptiDICE and ATAC from their original papers for Gym locomotion, and run COMBO and ATAC using author-provided implementations for Maze2D. The results of BCQ, BEAR methods from the D4RL original paper. In addition, CQL, IQL and TD3+BC are re-run to ensure a fair evaluation process for all tasks. As shown in Table 1, the proposed algorithm achieves the best performance in 7 tasks and is comparable to the baselines in the remaining tasks. In addition to the evaluation of the policy performance, we also conduct sensitivity analyses on the hyperparameter-tuning and study the regret rate of convergence. Table 1: The normalized score of the policy at the last iteration of training, averaged over 5 random seeds. The highest performing scores are highlighted. The med, med-rep, and med-exp is shorthand for medium, medium-replay, and medium-expert, respectively. Tasks Proposed COMBO BCQ BEAR OptiDICE ATAC CQL IQL TD3+BC walker2d-med 80.8 \u00b1 5.1 81.9 \u00b1 2.8 53.1 59.1 21.8 \u00b1 7.1 89.6 77.2 \u00b1 4.2 78.3 \u00b1 4.3 81.7 \u00b1 2.3 hopper-med 94.9 \u00b1 4.3 97.2 \u00b1 2.2 54.5 52.1 94.1 \u00b1 3.7 85.6 74.3 \u00b1 5.8 66.3 \u00b1 6.4 98.4 \u00b1 1.6 halfcheetah-med 58.1 \u00b1 1.4 54.2 \u00b1 1.5 40.7 41.7 38.2 \u00b1 0.1 53.3 37.2 \u00b1 0.3 47.4 \u00b1 1.1 27.8 \u00b1 0.7 walker2d-med-rep 99.6 \u00b1 2.9 56.0 \u00b1 8.6 15.0 19.2 21.6 \u00b1 2.1 92.5 20.8 \u00b1 1.6 73.9 \u00b1 2.8 34.4 \u00b1 4.2 hopper-med-rep 113.0 \u00b1 2.1 89.5 \u00b1 1.8 33.1 33.7 36.4 \u00b1 1.1 102.5 32.6 \u00b1 1.9 94.7 \u00b1 1.5 44.4 \u00b1 3.7 halfcheetah-med-rep 49.3 \u00b1 2.1 55.1 \u00b1 1.0 38.2 38.6 39.8 \u00b1 0.3 48.0 41.9 \u00b1 1.1 44.2 \u00b1 2.5 48.3 \u00b1 0.7 walker2d-med-exp 108.2 \u00b1 7.4 103.3 \u00b1 5.6 57.5 40.1 74.8 \u00b1 9.2 114.2 103.8 \u00b1 6.9 109.6 \u00b1 7.0 100.5 \u00b1 8.9 hopper-med-exp 117.8 \u00b1 1.9 111.1 \u00b1 2.9 110.9 96.3 111.5 \u00b1 0.6 119.2 111.4 \u00b1 1.2 91.5 \u00b1 2.2 112.4 \u00b1 0.3 halfcheetah-med-exp 98.5 \u00b1 3.8 90.0 \u00b1 5.6 64.7 53.4 91.1 \u00b1 3.7 94.8 66.7 \u00b1 8.9 86.7 \u00b1 3.6 95.9 \u00b1 3.9 walker2d-random 11.2 \u00b1 3.8 7.0 \u00b1 3.6 4.9 7.3 9.9 \u00b1 4.3 6.8 4.7 \u00b1 1.5 5.8 \u00b1 2.8 3.4 \u00b1 1.7 hopper-random 18.7 \u00b1 1.5 17.9 \u00b1 1.4 10.6 11.4 11.2 \u00b1 1.1 17.5 10.7 \u00b1 0.1 10.8 \u00b1 0.6 11.1 \u00b1 0.2 halfcheetah-random 37.6 \u00b1 2.4 38.8 \u00b1 3.7 2.2 25.1 11.6 \u00b1 1.2 3.9 26.7 \u00b1 1.4 22.4 \u00b1 1.8 26.1 \u00b1 1.8 maze2d-umaze 96.5 \u00b1 27.8 34.2 \u00b1 8.6 12.8 3.4 111.0 \u00b1 8.3 84.4 \u00b1 24.8 50.5 \u00b1 7.9 41.5 \u00b1 4.7 13.8 \u00b1 22.8 maze2d-med 137.5 \u00b1 18.9 49.9 \u00b1 13.9 8.3 29.0 145.2 \u00b1 17.5 152.3 \u00b1 34.6 
28.6 \u00b1 9.2 38.5 \u00b1 4.2 59.1 \u00b1 44.7 maze2d-large 187.8 \u00b1 15.2 128.2 \u00b1 17.3 6.2 4.6 155.7 \u00b1 33.4 142.1 \u00b1 33.8 46.2 \u00b1 16.2 54.2 \u00b1 18.1 87.6 \u00b1 15.4 Real-world application. The Ohio Type 1 Diabetes (OhioT1DM) dataset [50] comprises a cohort of patients with Type-1 diabetes, where each patient exhibits different dynamics and 8 weeks of life-event data, including health status measurements and insulin injection dosages. Clinicians aim to adjust insulin injection dose levels [50, 6] based on a patient\u2019s health status in order to maintain glucose levels within a specific range for safe dose recommendations. The state variables consist of health status measurements, and the action space is a bounded insulin dose range. The glycemic index serves as a reward function to assess the quality of dose suggestions. Since the data-generating process is unknown, we follow [48, 44] to utilize the Monte Carlo approximation of the estimated value function on the initial state of each trajectory to evaluate the performance of each method. The mean and standard deviation of the improvements on the Monto Carlo discounted returns are presented in Table 2. As a result, our algorithm achieves the best performance for almost all patients, except for Patient 552. The main reason for the desired performance in real data is from the enhanced model extrapolation and relaxed function approximation requirements and outperforms the competing methods. This finding is consistent with the results in the synthetic and benchmark datasets, demonstrating the potential applicability of the proposed algorithm in real-world environments. Table 2: The baseline policy improvements over 50 repeated experiments in the OhioT1DM dataset. Patient ID Proposed COMBO BCQ BEAR OptiDICE ATAC CQL IQL TD3+BC 596 6.5 \u00b1 1.1 4.1 \u00b1 0.8 3.8 \u00b1 0.9 2.7 \u00b1 1.1 4.7 \u00b1 1.1 5.1 \u00b1 2.0 4.6 \u00b1 0.6 3.4 \u00b1 0.7 4.8 \u00b1 1.3 584 33.1 \u00b1 1.8 27.0 \u00b1 1.3 20.3 \u00b1 1.2 22.9 \u00b1 1.6 27.7 \u00b1 1.9 26.9 \u00b1 2.6 21.6 \u00b1 1.2 22.7 \u00b1 1.3 22.4 \u00b1 1.7 567 36.9 \u00b1 1.3 30.6 \u00b1 2.0 24.3 \u00b1 1.4 25.6 \u00b1 1.4 28.8 \u00b1 2.2 29.7 \u00b1 2.8 26.5 \u00b1 1.4 25.8 \u00b1 1.4 27.8 \u00b1 1.5 552 7.9 \u00b1 0.9 6.8 \u00b1 0.7 5.7 \u00b1 0.5 5.0 \u00b1 0.8 8.1 \u00b1 0.9 7.2 \u00b1 1.5 6.7 \u00b1 0.4 6.1 \u00b1 0.5 7.4 \u00b1 0.8 544 13.2 \u00b1 1.9 9.8 \u00b1 1.5 7.5 \u00b1 2.5 5.9 \u00b1 0.8 10.3 \u00b1 1.8 10.1 \u00b1 2.1 8.7 \u00b1 1.0 7.8 \u00b1 0.9 9.7 \u00b1 0.8 540 20.4 \u00b1 0.5 17.5 \u00b1 0.9 14.3 \u00b1 0.6 12.7 \u00b1 0.5 17.9 \u00b1 0.9 18.2 \u00b1 1.4 16.5 \u00b1 0.5 14.0 \u00b1 0.6 17.1 \u00b1 0.8 7 Conclusion We study offline RL with limited exploration in function approximation settings. We propose a bi-level policy optimization framework, which can be further solved by a computationally practical penalized adversarial estimation algorithm, offering strong theoretical and empirical guarantees. Regarding limitations and future work, while the penalized adversarial estimation is more computationally efficient than the previously constrained problem, it may still be more challenging to solve than singlestage optimization problems. Another future direction is to explore environments with unobservable confounders. It will be interesting to address these limitations in future works. 10 \f8 Acknowledments The author is grateful to the five anonymous reviewers and the area chair for their valuable comments and suggestions."
},
{
"url": "http://arxiv.org/abs/2402.07314v2",
"title": "Online Iterative Reinforcement Learning from Human Feedback with General Preference Model",
"abstract": "We study Reinforcement Learning from Human Feedback (RLHF) under a general\npreference oracle. In particular, we do not assume that there exists a reward\nfunction and the preference signal is drawn from the Bradley-Terry model as\nmost of the prior works do. We consider a standard mathematical formulation,\nthe reverse-KL regularized minimax game between two LLMs for RLHF under general\npreference oracle. The learning objective of this formulation is to find a\npolicy so that it is consistently preferred by the KL-regularized preference\noracle over any competing LLMs. We show that this framework is strictly more\ngeneral than the reward-based one, and propose sample-efficient algorithms for\nboth the offline learning from a pre-collected preference dataset and online\nlearning where we can query the preference oracle along the way of training.\nEmpirical studies verify the effectiveness of the proposed framework.",
"authors": "Chenlu Ye, Wei Xiong, Yuheng Zhang, Nan Jiang, Tong Zhang",
"published": "2024-02-11",
"updated": "2024-04-25",
"primary_cat": "cs.LG",
"cats": [
"cs.LG",
"stat.ML"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "Online Iterative Reinforcement Learning from Human Feedback with General Preference Model",
"main_content": "Introduction Reinforcement Learning from Human Feedback (RLHF) has emerged as a pivotal technique in adapting machine learning to leverage relative feedback, especially in aligning Large Language Models (LLMs) with human values and preferences (Christiano et al., 2017; Ziegler et al., 2019). Notable examples include ChatGPT (OpenAI, 2023), Claude (Anthropic, 2023), and Bard (Google, 2023). The primary goal of RLHF in the context of LLMs is to adjust the responses generated by LLMs so that they are more favorably received by human evaluators. Inspired by the standard LLM alignment workflow (Ouyang et al., 2022; Bai et al., 2022b; Touvron et al., 2023), we characterize an LLM by a policy \u03c0, which takes a prompt x \u2208X and produces a response a \u2208A from the distribution \u03c0(\u00b7|x). In a typical LLM training pipeline (Ouyang et al., 2022; Touvron et al., 2023; OpenAI, 2023), the tuning process begins with a pretrained model, which is subsequently fine-tuned using specialized and instructional data to produce an initial model \u03c00. The initial model \u03c00 is then aligned with a prompt set from some distribution x \u223cd0. The key component in RLHF is the General Preference Oracle, which is mathematically defined as follows. Definition 1 (General Preference Oracle). There exists a preference oracle P : X \u00d7 A \u00d7 A \u2192[0, 1], and we can query it to receive the preference signal: y \u223cBer \u0000P(a1 \u227ba2|x, a1, a2) \u0001, where y = 1 means a1 is preferred to a2, and y = 0 means that a2 is preferred. \u2217Equal contributions with random author order. \u2020The Hong Kong University of Science and Technology. Email: cyeab@connect.ust.hk \u2021University of Illinois Urbana-Champaign. Email: wx13@illinois.edu \u00a7University of Illinois Urbana-Champaign. Email: yuhengz2@illinois.edu \u00b6University of Illinois Urbana-Champaign. Email: nanjiang@illinois.edu \u2016University of Illinois Urbana-Champaign. Email: tongzhang@tongzhang-ml.org 1 arXiv:2402.07314v2 [cs.LG] 25 Apr 2024 \fInstead of directly optimizing against the preference oracle P, the existing prevalent RLHF framework is reward-based (Ouyang et al., 2022; Touvron et al., 2023), which consists of three steps: (1) preference data collection, (2) reward modeling, and (3) policy optimization. Specifically, the preference dataset D consists of multiple tuples of the form (x, a1, a2, y), whose collection process can be modeled as: x \u223cd0, a1 \u223c\u03c01 D, a2 \u223c\u03c02 D, y \u223cBer \u0000P(a1 \u227ba2|x, a1, a2) \u0001 , (1) where \u03c01 D and \u03c02 D are behavior policies and are typically set as \u03c00 (Touvron et al., 2023; Liu et al., 2023a) or some powerful closed-form LLMs (Cui et al., 2023). The second step is reward modeling, which is the origin of the name \u201creward-based\u201d. This step can be viewed as a kind of inverse RL (Ziebart et al., 2008), which models some difficult-to-specify goals (preferred by the human or AI evaluators) as a scalar reward signal. Specifically, the Bradley-Terry (BT) model (Bradley and Terry, 1952), a framework widely adopted in Ouyang et al. (2022); Bai et al. (2022a); Touvron et al. (2023); Rafailov et al. (2023); Xiong et al. 
(2023), assumes that there exists a ground-truth reward function P \u2217and the preference model satisfies: P(a1 \u227ba2|x, a1, a2) = exp(R\u2217(x, a1)) exp(R\u2217(x, a1)) + exp(R\u2217(x, a2)) = \u03c3 \u0000R\u2217(x, a1) \u2212R\u2217(x, a2) \u0001 , (2) where \u03c3(z) = 1/(1 + exp(\u2212z)) is the sigmoid function. Then, the reward model is taken as the Maximum Likelihood Estimation (MLE) of the BT model on the preference dataset D (e.g., Pacchiano et al., 2021; Novoseller et al., 2020; Ouyang et al., 2022; Bai et al., 2022a; Touvron et al., 2023) and is used in subsequent policy optimization steps to provide a signal for algorithms like Proximal Policy Optimization (Schulman et al., 2017). Despite its successes, the existence of a reward function and the BT model are strong assumptions, which may not fully capture the complicated human preferences. In particular, the BT model assumes that human preference is transitive, which means that if we prefer A to B (P(A \u227bB|x, A, B) > 0.5) and we prefer B to C, then it automatically holds that we prefer A to C. This assumption, however, is contradicted by evidence of intransitivity in human decision-making (Tversky, 1969; May, 1954). This limitation is particularly pronounced if we consider the population-level preferences, where the ultimate preference signal is aggregated across diverse human groups (May, 1954). This may further be evidenced that in the practical RLHF, the accuracy of the learned BT model is around 70% (Bai et al., 2022a; Touvron et al., 2023; Cui et al., 2023), suggesting the challenges in approximating the complicated human preference by BT model. While there are some recent efforts to bypass reward modeling (Rafailov et al., 2023; Zhao et al., 2023), they are still fundamentally derived from the reward-based preference model and suffer from the aforementioned issues. Table 1: Comparison of the test accuracy between the BT-based reward model and the preference model. The reward model and preference model are trained with the same base model and preference dataset, where the details are deferred to Section 6. We evaluate the model using the Reward-Bench (Lambert et al., 2024). Base Model Method Chat Chat Hard Safety Reasoning Gemma-2B-it BT 95.0 40.8 81.2 74.2 Gemma-2B-it Preference 96.0 40.5 82.8 80.7 LLaMA3-8B-it BT 99.4 65.0 87.7 87.8 LLaMA3-8B-it Preference 98.9 62.6 88.6 95.6 In contrast, the general preference oracle defined in Definition 1 is strictly more general than the BT model and can capture a more complicated preference pattern from the definition itself. It allows an intransitive preference model and can further capture the preference feedback from AI (Bai et al., 2022b), with a notable example of GPT-4 (OpenAI, 2023), which is widely used for model evaluations in practice and may more accurately reflect real user experience (Touvron et al., 2023; Dong et al., 2023; Rafailov et al., 2023; Xiong et al., 2023). Moreover, from a practical side, the preference model construction tends to be more efficient than the reward function in terms of ranking accuracy. This is evidenced by the fact that the preference model, pairRM with 0.4B parameters (Jiang et al., 2023), performs comparably to a LLaMA2-13B-based 2 \freward model across a diverse set of preference targets (Cui et al., 2023). 
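To make the distinction concrete, a minimal sketch contrasts the Bradley-Terry parameterization in Equation (2) with the general preference oracle of Definition 1; the callables reward_fn and pref_fn are hypothetical stand-ins for learned models.

```python
import numpy as np

def bt_preference_prob(reward_fn, x, a1, a2):
    """Bradley-Terry model: P(a1 > a2 | x) = sigmoid(R(x, a1) - R(x, a2)).

    Transitive by construction, because it is induced by a single scalar reward.
    """
    z = reward_fn(x, a1) - reward_fn(x, a2)
    return 1.0 / (1.0 + np.exp(-z))

def sample_preference(pref_fn, x, a1, a2, rng):
    """General preference oracle (Definition 1): y ~ Bernoulli(P(a1 > a2 | x, a1, a2)).

    pref_fn may be any function of (x, a1, a2) with values in [0, 1]; unlike the
    BT model it need not be induced by a reward and may encode intransitive
    preferences. rng is a numpy Generator, e.g. np.random.default_rng().
    """
    return int(rng.random() < pref_fn(x, a1, a2))
```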
As a case study, we train a reward model based on the Bradley-Terry (BT) model and a preference model with the same starting checkpoint Gemma-2B-it (Team et al., 2024) and preference dataset1, with results presented in Table 1 and the training details are deferred to Section 6. As we can see, the preference model achieves much higher test accuracy in the reasoning task while maintaining comparable results in other tasks. Meanwhile, the training set we use is rather limited in the reasoning data (math and coding), so the reasoning task can be viewed as an outof-distribution task. In this sense, the preference model may also provide a better generalization compared to the reward model. The results also extend to another case study with LLaMA3-8B-instruct, where the preference model shows promising potential in the improvement of reasoning tasks. We refer interested readers to check Zhao et al. (2023); Liu et al. (2023a) for further examples with similar observations. The advantage in ranking accuracy is not only directly beneficial for the algorithms that depend on ranking information (Dong et al., 2023; Gulcehre et al., 2023), but also improves the performance of algorithms derived from the reward-based framework (i.e., Bradley-Terry model), as evidenced by the results in the study of (iterative) DPO (Xiong et al., 2023; Hoang Tran, 2024). Given all these considerations, our study focuses on exploring the theoretical properties of RLHF under the general preference oracle (Definition 1), with the goal of advancing practical algorithmic designs. We summarize our contributions as follows: \u2022 We make the first attempt to study the theoretical learnability of RLHF under general preference oracle with KL regularization, in both the offline setting with a pre-collected preference dataset and the online setting where we can query human feedback along the way of training, which demonstrates the potential of reward-model-free learning under general preference; \u2022 We propose sample-efficient algorithms in both the offline setting and online setting and establish the finite-sample theoretical guarantees under standard coverage and exploration conditions; \u2022 We show that the theoretical insights can be used to guide practical algorithmic designs with a reasonable approximation of the computational oracle. 2 Problem Formulation In this section, we formulate the RLHF with general preference learning. Suppose that there exists a preference function P \u2217: X \u00d7 A \u00d7 A \u2192R which represents the prefererence of one action a1 over another a2 given a prompt x: P \u2217(x, a1, a2) = P(a1 \u227ba2|x, a1, a2). In practical applications, we want to make the resulting LLM \u03c0 close to \u03c00 (Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022a; Rafailov et al., 2023). Therefore, we adopt the following KL-regularized objective: J(\u03c01, \u03c02) = Ex\u223cd0Ea1\u223c\u03c01,a2\u223c\u03c02 h P \u2217(x, a1, a2) \u2212\u03b7\u22121DKL(\u03c01(\u00b7|x)\u2225\u03c00(\u00b7|x)) + \u03b7\u22121DKL(\u03c02(\u00b7|x)\u2225\u03c00(\u00b7|x)) i . (3) One primary reason to consider the regularized target is that the constructed preference model is only locally accurate, i.e., performs well when there is little distribution shift. For instance, if the preference model is fine-tuned on a preference dataset collected by the initial model \u03c00, it improves the in-distribution generalization, but the resulting model often performs poorly out-of-distribution (Burns et al., 2023). 
Meanwhile, even if we require human labelers to give feedback along the way, the choices of the labelers may not be representative enough or the labelers can make mistakes due to limited time, attention, or care (Geng and Liu, 2023). Moreover, the KL divergence in the target ensures that the resulting policy is stochastic instead of deterministic (given a suitable initial checkpoint), thereby more accurately reflecting the dynamics of generative language models. 1We remark that the mixture of the open-source preference dataset and hyper-parameters are mainly tuned for the BT model with > 2000 A100 hours, while the preference model adopts most of them directly. Therefore, we expect that the preference model may be even better with a more refined hyper-parameter search. 3 \fWe choose P \u2217as the target mostly for historical reasons (Dud\u00b4 \u0131k et al., 2015; Wang et al., 2023b). A choice is the relative preference log(P \u2217(x, a1, a2)/(1 \u2212P \u2217(x, a1, a2))), which is equal to R\u2217(x, a1) \u2212R\u2217(x, a2) when the BT model holds so that Equation (3) becomes two decoupled regularized-reward maximization problems in this case and automatically reduces to the setting considered in the previous work Xiong et al. (2023). While we do not handle this target directly, the analysis techniques presented in this paper readily apply to it with slight modifications. Nash Equilibrium and Best Response. Without loss of generality, we restrict our attention to the policy class \u03a0 consisting of the policies with the same support as \u03c00 and denote the unique Nash equilibrium (known as the Minimax Winner (Simpson, 1969; Kramer, 1973; Fishburn, 1984) or the von Neumann Winner (Dud\u00b4 \u0131k et al., 2015)) as the solution of the following minimax problem as: (\u03c01 \u2217, \u03c02 \u2217) = (\u03c0\u2217, \u03c0\u2217) = argmax \u03c01\u2208\u03a0 argmin \u03c02\u2208\u03a0 J(\u03c01, \u03c02), (4) where the Nash policies of two players coincide as we prove in Lemma 5. In the rest of this paper, we still use the notation (\u03c01 \u2217, \u03c02 \u2217) to distinguish between the max-player and min-player. Accordingly, we refer to the first LLM \u03c01 as the max-player, while the second LLM \u03c02 is the min-player. We also define the notion of best response. For function J and policy \u03c01, the best response to \u03c01 is defined as argmin\u03c02\u2208\u03a0 J(\u03c01, \u03c02) and the value is denoted by J(\u03c01, \u2020) = min\u03c02\u2208\u03a0 J(\u03c01, \u03c02). Similarly, for \u03c02, we have J(\u2020, \u03c02) = max\u03c01\u2208\u03a0 J(\u03c01, \u03c02). In particular, since \u03c01 \u2217and \u03c02 \u2217are the Nash equilibrium, they are the best response to each other. Function Approximation. Suppose that we have access to a function class P \u2282(X \u00d7 A \u00d7 A \u2192R) (e.g. neural network), which provides us with a set of candidates to approximate the P \u2217, and also the preference functions P \u2208P satisfies P(x, a1, a2) = 1 \u2212P(x, a2, a1). We make the following assumptions on the class P. Assumption 1. We assume that P is finite and the capacity of the class is large enough so that P \u2217\u2208P. Note that the finite class assumption is just for a clear presentation and the results readily generalize to an infinite class with a bounded covering number by the standard discretization technique. We define a theoretical computation oracle as follows and defer the practical implementations to the experiment section. 
Definition 2 (Nash Equilibrium Oracle). For a given preference function P \u2208P and a reference policy \u03c00, we can compute the Nash Equilibrium policy \u03c0P = argmax \u03c01\u2208\u03a0 min \u03c02\u2208\u03a0 Ex\u223cd0Ea1\u223c\u03c01,a2\u223c\u03c02 h P(x, a1, a2) \u2212\u03b7\u22121DKL(\u03c01(\u00b7|x)\u2225\u03c00(\u00b7|x)) + \u03b7\u22121DKL(\u03c02(\u00b7|x)\u2225\u03c00(\u00b7|x)) i . Learning Objective. The learning objective is to find an \u03f5-approximate Nash policy \u02c6 \u03c01 for the max-player: J(\u03c01 \u2217, \u03c02 \u2217) \u2212J(\u02c6 \u03c01, \u2020) = J(\u03c01 \u2217, \u03c02 \u2217) \u2212min \u03c0\u2032 J(\u02c6 \u03c01, \u03c0\u2032) \u2264\u03f5, which means that the max-player is consistently preferred by the KL-regularized preference in the face of any competing policy \u03c0\u2032 up to a relaxation of \u03f5. To stress the non-symmetric structures of the two players, we refer to the max-player as the main agent, which aims to find her \u03f5-approximate Nash policy, and refer the min-player to as the enhancer, which is designed to facilitate the main agent\u2019s learning. In particular, when \u03b7 is large enough so that the KL is roughly omitted, then, we can further obtain that min \u03c02\u2208\u03a0 Ex\u223cd0Ea1\u223c\u02c6 \u03c01,a2\u223c\u03c02P \u2217(x, a1, a2) \u22650.5 \u2212\u03f5. In this case, the obtained policy \u02c6 \u03c01 is consistently preferred by the preference oracle P \u2217against any competing policies. We mention in passing that the KL penalty coefficient \u03b7 > 0 exhibits a trade-off between being preferred by the oracle P \u2217and staying close to the initial model \u03c00, and reflects the degree of our belief in the oracle P \u2217. In practice, \u03b7 is typically treated as a hyper-parameter and is adjusted by parameter search (Huggingface, 2023). Compared to the previous literature formulating the preference learning as finding a Nash equilibrium, although we focus on optimizing the policy for the max-player, we can also have a duality gap guarantee 4 \fbecause of the symmetry of the objective function: J(\u03c01, \u03c02) = 1 \u2212J(\u03c02, \u03c01). To see this, we decompose the duality gap into the suboptimality for the max-player \u02c6 \u03c01 and the min-player \u02c6 \u03c02: J(\u2020, \u02c6 \u03c02) \u2212J(\u02c6 \u03c01, \u2020) =J(\u2020, \u02c6 \u03c02) \u2212J(\u03c01 \u2217, \u03c02 \u2217) + J(\u03c01 \u2217, \u03c02 \u2217) \u2212J(\u02c6 \u03c01, \u2020) =J(\u03c01 \u2217, \u03c02 \u2217) \u2212J(\u02c6 \u03c02, \u2020) + J(\u03c01 \u2217, \u03c02 \u2217) \u2212J(\u02c6 \u03c02, \u2020). If we obtain such an \u03f5-suboptimal max player \u02c6 \u03c01, by taking the min-player \u02c6 \u03c02 = \u02c6 \u03c01, the duality gap J(\u2020, \u02c6 \u03c02) \u2212J(\u02c6 \u03c01, \u2020) is naturally bounded by 2\u03f5. Notations. We use the short-hand notation \u03c0 = (\u03c01, \u03c02) when there is no confusion. We use P(x, \u03c01, \u03c02) to represent Ea1\u223c\u03c01,a2\u223c\u03c02[P(x, a1, a2)]. We use J(x, \u03c01, \u03c02) to denote the objective function in Equation (3) without the expectation over the prompt x \u223cd0. Let \u03c3(x) denote the sigmoid function 1/(1 + e\u2212x). We also provide a notation table in Table 2 to improve the readability of this paper. 3 Related Work RLHF. RLHF was first popularized in the deep RL literature by Christiano et al. 
(2017), which served to direct the attention of the RL community to the preference-based feedback, but may further date back to Bennett et al. (2007); Knox and Stone (2008) in the context of machine learning. It has attracted significant attention recently, mainly due to its tremendous success in Chat-GPT (OpenAI, 2023). The most popular and standard RLHF framework is outlined in Ouyang et al. (2022); Touvron et al. (2023) and we have described the details in Section 1. In terms of reward optimization, PPO (Schulman et al., 2017) is the most well-known algorithm in LLM alignment literature. However, tuning the PPO algorithm to the best performance requires extensive efforts and the result of Chat-GPT4 (OpenAI, 2023) has not been widely reproduced so far. This motivates another line of works of algorithms that are based on supervised learning. For instance, Dong et al. (2023); Yuan et al. (2023); Touvron et al. (2023); Gulcehre et al. (2023); Ji et al. (2024) propose reward ranked finetuning, (also known as rejection sampling finetuning), which essentially learns from the best-of-n policy (Nakano et al., 2021) to maximize the reward. The reward-ranked finetuning algorithm is a stable policy optimization algorithm with minimal hyper-parameter configuration and was applied to the RLHF of LLaMA2 (Touvron et al., 2023). However, it is also observed that the reward ranked finetuning algorithm leads to considerable forgetting in a wide range of tasks (also referred to as the alignment tax), as the algorithmic design only considers reward optimization (Touvron et al., 2023; Lin et al., 2023; Chen et al., 2024). One approach to mitigate this issue is to use the KL-regularized formulation, which is widely adopted in the deep RL approach (e.g. PPO) (Ziegler et al., 2019; Wu et al., 2021; Ouyang et al., 2022; Bai et al., 2022a; Korbak et al., 2022; Li et al., 2023a), and other supervised-learning-based algorithms (Rafailov et al., 2023; Wang et al., 2023a; Liu et al., 2023a; Azar et al., 2023), whose theoretical property is studied in Xiong et al. (2023). Among them, (offline) Direct Preference Optimization (DPO) (Rafailov et al., 2023) has emerged as an attractive alternative approach to PPO with notable stability and competitive performance. Xiong et al. (2023); Hoang Tran (2024); Yuan et al. (2024) further extend the offline DPO to the iterative (online) variant, and the resulting models demonstrate impressive performance (Hoang Tran, 2024). However, all these algorithms are designed under the reward-based RLHF framework to maximize the underlying reward function (with appropriate regularization). Theoretical Study of Reward-based RLHF. The theoretical study of policy optimization from preference feedback dated back to the dueling bandits (e.g., Yue et al., 2012; Saha, 2021; Bengs et al., 2021). This was later extended to the online RL setting by Xu et al. (2020); Novoseller et al. (2020); Pacchiano et al. (2021); Chen et al. (2022), including the study of tabular online RLHF with finite state, and the study of general function approximation for capturing real-world problems with large state spaces. Zhan et al. (2023b); Wu and Sun (2023) further encompasses the development of reward-free learning type algorithms and posterior sampling-based algorithms tailored for online RLHF. 
In addition to the online setting, there is also another line of works (Zhu et al., 2023; Zhan et al., 2023a; Li et al., 2023b) studying the reward-based RLHF in the offline setting, which learns from a pre-determined offline dataset with suitable coverage condition over the state-action space. However, these studies only consider reward maximization and deviate 5 \ffrom the practical applications (e.g., these frameworks admit a deterministic optimal policy). Recently, Xiong et al. (2023) first formulated the RLHF as a reverse-KL regularized contextual bandit and provided finite-sample guarantees in offline, online, and hybrid settings. We remark that all these papers consider only the reward-based RLHF framework, thus differing from ours. Theoretical Study of RLHF under General Preference Oracle. Our work is related to Dud\u00b4 \u0131k et al. (2015) and Wang et al. (2023b), which also investigate preference-based RLHF under a general preference model. The major difference is that we consider the reverse-KL regularized preference, aligning closely with recent LLM advancements (Ziegler et al., 2019; Ouyang et al., 2022; Bai et al., 2022a; Rafailov et al., 2023), while Dud\u00b4 \u0131k et al. (2015); Wang et al. (2023b) considers only the non-regularized one. Meanwhile, Dud\u00b4 \u0131k et al. (2015) only considers the problem of finite action, while our work and Wang et al. (2023b) consider the problem with large or even infinite state-action under function approximation. In terms of learning paradigm and algorithmic design, we consider both offline learning from a pre-collected dataset and batch online learning with a sparse policy update, while Dud\u00b4 \u0131k et al. (2015); Wang et al. (2023b) studies sequential online learning that updates policy in each step, which is not feasible in the context of LLMs. Moreover, we demonstrate that the proposed algorithms can be reasonably implemented in practice, but Dud\u00b4 \u0131k et al. (2015); Wang et al. (2023b) only focus on information-theoretical algorithms. To summarize, compared to the previous works, the framework in this work accurately reflects real-world alignment practices thus aligning more closely with the RLHF practice. Our work is also closely related to the IPO (Azar et al., 2023) and Nash learning (Munos et al., 2023), which also motivate new algorithmic design with a general preference oracle. We comment on the similarities and differences between our framework and theirs as follows. In terms of the problem setting, our work and Nash learning consider the minimax game under the reverse-KL regularized preference, while IPO can be interpreted to find the best response of the fixed reference policy, and may be considered as a special case of the game formulation. In terms of learning paradigm, both the IPO and Nash learning only consider learning toward a fixed and known preference oracle, and study the optimization property of the problem: how to compute the optimal policy under the given preference oracle. In contrast, we study the statistical property, where the preference model needed to be learned and our goal is to find the optimal policy under the underlying ground-truth preference model. In particular, the computational challenge is hidden in Definition 2 and Munos et al. (2023) provides a reasonable approximation of the planning oracle. In this sense, our work and Munos et al. (2023) are complementary to each other. Finally, the concurrent work Swamy et al. 
(2024) studies the non-regularized general preference model in the sequential online setting and aims to find the Nash equilibrium in the context of continuous control tasks. In terms of the observation model, they assume access to the preference score P(a1 \u227ba2|x, a1, a2), while we only observe the preference signal y \u223cBer(P(a1 \u227ba2|x, a1, a2)). Moreover, they design online RLHF algorithms based on a reduction to the no-regret algorithm like Hedge (Freund and Schapire, 1997), whose techniques are fundamentally different from ours. 4 Improved Algorithms in Offline Setting 4.1 Setup In the offline setting, our goal is to learn a good policy from a pre-collected dataset Doff = {(xi, a1 i , a2 i , yi)}n i=1 without further query with the oracle P, where comparison sample is assumed to be independently collected as in Equation (1). We measure the suboptimality of the learned policy \u02c6 \u03c01 by the gap between the Nash value and the best response value: J(\u03c0\u2217 1, \u03c0\u2217 2) \u2212J(\u02c6 \u03c01, \u2020), (5) where the KL-regularized function J is defined in Equation (3). Similar to the reward-based framework (Ouyang et al., 2022), one natural approach is a two-staged method: 6 \f0 20 40 60 80 100 KL divergence between policy and initial policy 0.0 0.2 0.4 0.6 0.8 1.0 1.2 Ground Truth/Proxy Reward Proxy Reward Ground Truth Reward Figure 1: An illustration of the reward over-optimization issue adapted from Gao et al. (2023). Here the proxy reward is trained from the responses of \u03c00. Therefore, in the early stage, the proxy reward aligns well with the gold reward in terms of the in-distribution responses. As the reward gets higher, the distribution shift becomes larger, and since the training set is lacking in the coverage over these out-of-distribution responses, the proxy reward does not align with the gold reward in this stage. \u2022 Construct an empirical preference model (reward model in the literature) by maximizing the following log-likelihood function: \u2113Doff (P) = X (x,a1,a2,y)\u2208Doff y log P(x, a1, a2) + (1 \u2212y) log P(x, a2, a1); (6) \u2022 Solve the policy by plugging the learned preference model \u02c6 P into the Nash Equilibrium Oracle 2. However, it is known that this framework typically leads to severe reward over-optimization issue (Gao et al., 2023), meaning that while the model is preferred by the learned \u02c6 P, it may not achieve good performance under the evaluation of P \u2217. This is because, with finite Doff drawn from some behavior policy, it is unlikely to provide an accurate estimation for all the prompt-response pairs. Therefore, imposing heavy optimization pressure toward \u02c6 P will push the model to exploit these unreliable estimations to chase for a high proxy metric, which typically leads to a worse performance under the ground truth P \u2217. See Figure 1 for an illustration of this phenomenon in terms of reward learning. 4.2 Learning with Pessimism The recent advances in the offline RL theory have demonstrated that the principle of pessimism with a conservative estimation is statistically efficient for offline learning across a diverse set of scenarios (Jin et al., 2021; Rashidinejad et al., 2021; Xie et al., 2021; Zanette et al., 2021; Zhong et al., 2022a; Cui and Du, 2022; Xiong et al., 2022; Zhang et al., 2023). In this section, we connect the KL-reversed minimax game in Equation (3) with offline RL by pessimism via version space2. 
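Since the first step of the construction below is the MLE of Equation (6), a short sketch of that objective may be helpful; the dataset format and the pref_model callable are hypothetical, and in practice the preference model is a neural network trained by gradient ascent on this log-likelihood.

```python
import math

def preference_log_likelihood(dataset, pref_model):
    """Log-likelihood of Equation (6):
    sum over (x, a1, a2, y) of y * log P(x, a1, a2) + (1 - y) * log P(x, a2, a1).

    dataset:    iterable of tuples (x, a1, a2, y) with y in {0, 1}.
    pref_model: callable returning P(a1 preferred to a2 | x) in (0, 1); since
                P(x, a2, a1) = 1 - P(x, a1, a2), the second term is log(1 - p).
    """
    total = 0.0
    for x, a1, a2, y in dataset:
        p = pref_model(x, a1, a2)
        total += y * math.log(p) + (1 - y) * math.log(1.0 - p)
    return total
```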
We introduce our algorithm, Pessimistic Equilibrium Learning from Human Feedback (PELHF) in ALgorithm 1. Given an offline dataset Doff, we first obtain the maximum likelihood estimation (MLE) \u02c6 P by maximizing Equation (6). Rather than directly planning with this empirical \u02c6 P, we form a version space 2We also introduce another algorithm achieving pessimism via uncertainty bonus construction, see Appendix B.2. 7 \fAlgorithm 1 Pessimistic Equilibrium Learning from Human Feedback 1: Input: Dataset Doff = {xi, a1 i , a2 i , yi}n i=1, preference space P, policy class \u03a0, parameter \u03b7, \u03b2 > 0. 2: Compute the MLE \u02c6 P = argminP \u2208P \u2113Doff (P). 3: Construct version space b P = n P \u2208P : n X i=1 (P(xi, a1 i , a2 i ) \u2212\u02c6 P(xi, a1 i , a2 i ))2 \u2264\u03b22/2 o . (7) 4: Compute the best policy under the conservative value estimation \u02c6 \u03c01 = argmax \u03c01\u2208\u03a0 min \u03c02\u2208\u03a0 min P \u2208b P Ex\u223cd0Ea1\u223c\u03c01,a2\u223c\u03c02 h P(x, a1, a2) + \u03b7\u22121 ln \u03c00(a1|x) \u03c01(a1|x) \u2212\u03b7\u22121 ln \u03c00(a2|x) \u03c02(a2|x) i . (8) 5: Output: \u02c6 \u03c01. b P that contains P \u2217\u2208b P with a high probability under a suitable choice of \u03b2, as we show in the following lemma. Lemma 1. [Proof] Under Assumption 1, with probability at least 1 \u2212\u03b4, we have n X i=1 ( \u02c6 P(xi, a1 i , a2 i ) \u2212P \u2217(xi, a1 i , a2 i ))2 \u2264log(|P|/\u03b4). For each policy \u03c01, we take the minimum preference function over b P and the best responded \u03c02 as its conservative value estimation: \u02c6 Joff(\u03c01) = min \u03c02\u2208\u03a0 min P \u2208\u02c6 P Ex\u223cd0Ea1\u223c\u03c01,a2\u223c\u03c02 h P(x, a1, a2) + \u03b7\u22121 ln \u03c00(a1|x) \u03c01(a1|x) \u2212\u03b7\u22121 ln \u03c00(a2|x) \u03c02(a2|x) i . Then, we solve the minimax game concerning this conservative value estimator. With this pessimistic modification, the resulting algorithm enjoys the following theoretical guarantee. Theorem 1. [Proof] If Assumption 1 holds, and we set \u03bb = log(|P|/\u03b4) and \u03b22 = 2 log(|P|/\u03b4), then, with probability at least 1 \u2212\u03b4, the output policy of Algorithm 1 satisfies J(\u03c0\u2217 1, \u03c0\u2217 2) \u2212J(\u02c6 \u03c01, \u2020) \u22644\u03b2 r C(\u03c01 \u2217, \u03c0D, P) n . where the coverage coefficient C(\u03c01 \u2217, \u03c0D, P) = max \u03c02\u2208\u03a0 sup P \u2208P (Ex\u223cd0[P(x, \u03c01 \u2217, \u03c02) \u2212\u02c6 P(x, \u03c01 \u2217, \u03c02)])2 Ex\u223cd0,a1\u223c\u03c01 D,a2\u223c\u03c02 D(P(x, a1, a2) \u2212\u02c6 P(x, a1, a2))2 . This theorem shows that the suboptimality gap depends on how the target (\u03c01 \u2217, \u03c02) is covered by the offline dataset, where \u03c02 is maximized over the policy set \u03a0. This coverage coefficient resembles the unilateral coverage3 for Markov games (Cui and Du, 2022; Zhong et al., 2022a). Then, a natural question is whether a good coverage condition (C(\u03c01 \u2217, \u03c0D, P) is small) is practical in the context of LLMs. Unfortunately, since the response is usually long in practice, the distribution shift between policies is also very large. We summarize some statistics here: \u2022 Along the way of the RLHF training, the average density ratio \u03c0(a|x) \u03c00(a|x) > exp(25) as reported in Figure 13 of Bai et al. (2022a). See similar results of rejection sampling fine-tuning (Dong et al., 2023) and DPO (Rafailov et al., 2023). 
3In Appendix B.3, we show that with an improved analysis, Algorithm 1 enjoys a refined coverage condition, similar to the coverage notion in Zhang et al. (2023). 8 \f\u2022 For a case study, we use the Gemma-7B-it as the behavior policy to collect data for aligning Gemma2B-it (Team et al., 2024) with 15k prompt from (Cui et al., 2023). Then, we calculate the average KL divergence between Gemma-7B-it and Gemma-2B-it as 456.4. Therefore, it is unlikely to expect that we can learn the optimal policy from a pre-collected dataset. This motivates us to consider the online setting, where we can further query the preference oracle during the training to continuously enrich the dataset thus enhancing our models. 5 Iterative RLHF with Online Exploration 5.1 Setup of Iterative RLHF The major difference between the online and offline settings is that online algorithms can further query the preference oracle P \u2217along the way of training. Since updating the LLMs is expensive, we consider the batch online setting for a sparse policy update. Specifically, for each batch t \u2208[T], \u2022 we first update the policy pair (\u02c6 \u03c01 t , \u02c6 \u03c02 t ) based on the historical information collected so far; \u2022 we collect m tuples: we sample a random prompt by xt,i \u223cd0, collect two responses by (a1 t,i, a2 t,i) \u223c (\u02c6 \u03c01 t , \u02c6 \u03c02 t ), and query the preference signal yt,i \u223cBer(P \u2217(xt,i, a1 t,i, a2 t,i)). Here the batch size m is usually very large compared to the typically adopted mini-batch update. To distinguish this from the sequential online setting where we update policy after collecting a single preference pair, we refer to this learning paradigm as the iterative RLHF. 5.2 Learning with Exploration The primary advantage of online learning is that we can strategically choose the behavior policies in each iteration to improve the coverage of the collected data, which is referred to as the exploration in the literature. To achieve this goal, we need to quantify the data uncertainty to guide the exploration direction. To this end, we present the notions of information ratio and eluder coefficient. Information Ratio and Eluder Coefficient. Distinct from the offline setting where we assume the coverage condition of a pre-collected dataset Doff, online exploration makes it possible to upper bound the suboptimality by the complexity of the function space. We leverage the notion of the eluder coefficient, which explicitly limits the generalization from the visited state-action distributions to the unseen part. Definition 3 (Information Ratio and Eluder Coefficient). For any two policy \u03c01, \u03c02, we define the information ratio as \u0393t(\u03bb, \u03c01, \u03c02) = sup P \u2208P |Ex\u223cd0[P(x, \u03c01, \u03c02) \u2212\u02c6 P(x, \u03c01, \u03c02)]| q \u03bb + Pt\u22121 s=1 Exs\u223cd0,a1 s\u223c\u02c6 \u03c01 s,a2 s\u223c\u02c6 \u03c02 s(P(xs, a1 s, a2 s) \u2212\u02c6 P(xs, a1 s, a2 s))2 . Then, the eluder coefficient is given by d(P, \u03bb, T) := sup\u03c01 1:T ,\u03c02 1:T PT t=1 min(1, (\u0393t(\u03bb, \u03c01 t , \u03c02 t ))2). The information ratio and eluder coefficient considered here have also been adopted in the literature (e.g., Wang et al., 2020; Gentile et al., 2022; Xie et al., 2022; Ye et al., 2023a; Agarwal et al., 2023). 
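As a small empirical rendering of Definition 3, the sketch below evaluates the information ratio for a finite ensemble standing in for the preference class P, with Monte-Carlo samples replacing the expectations. The logistic parameterization, the candidate ensemble, and the sample sizes are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_hist, n_new, lam = 6, 200, 200, 1.0

# Feature differences phi(x, a1) - phi(x, a2): historical pairs generated by the
# earlier policy pairs, and fresh pairs drawn from the candidate pair (pi1, pi2).
diff_hist = rng.normal(size=(n_hist, d))
diff_new = rng.normal(size=(n_new, d))

def pref(theta, diff):
    return 1.0 / (1.0 + np.exp(-diff @ theta))

theta_hat = rng.normal(size=d)                  # stands in for the MLE preference model
candidates = [theta_hat + rng.normal(scale=0.5, size=d) for _ in range(50)]

def information_ratio(theta):
    # Out-of-sample gap on the new policy pair over the in-sample error on history.
    out_of_sample = abs(np.mean(pref(theta, diff_new) - pref(theta_hat, diff_new)))
    in_sample = np.sum((pref(theta, diff_hist) - pref(theta_hat, diff_hist)) ** 2)
    return out_of_sample / np.sqrt(lam + in_sample)

gamma_t = max(information_ratio(th) for th in candidates)   # sup over the ensemble
print("empirical information ratio over the candidate ensemble:", gamma_t)
```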
Essentially, the information ratio compares the out-of-sample error on the unseen data with the in-sample error measured on the historical data, and can be interpreted as the worst-case ratio between them (as we take sup over all possible P \u2208P). Meanwhile, the eluder coefficient limits the extent to which we can be \u201csurprised\u201d by the new out-of-sample distributions, given the historical data collected so far. The uncertainty for the preference model aligns with the uncertainty for the BT model under boundedness conditions, which is illustrated in the following example. We defer the details to Appendix C.1. 9 \fExample 1 (Uncertainty in Bradley-Terry model with linear reward). Suppose the reward function can be embedded into a d-dimensional vector space {r(x, a) = \u27e8\u03b8, \u03d5(x, a)\u27e9: \u03b8 \u2208Rd, \u2225\u03b8\u2225\u2264B, \u2225\u03d5(x, a)\u2225\u22641}. Then, if we define the covariance matrix as \u03a3t = Pt\u22121 s=1 Ex\u223cd0,a1\u223c\u02c6 \u03c01 s,a2\u223c\u02c6 \u03c02 s(\u03d5(x, a1) \u2212\u03d5(x, a2))\u22a4(\u03d5(x, a1) \u2212\u03d5(x, a2)) + \u03bb(1 + eB)2I. \u0393t(\u03bb, \u03c01, \u03c02) \u2264(1 + eB)\u2225\u03d5(x, \u03c01) \u2212\u03d5(x, \u03c02)\u2225\u03a3\u22121 t . We refer interested readers to Du et al. (2021); Zhong et al. (2022b); Xie et al. (2022) for the extensive examples when d(P, \u03bb, T) can have a sub-linear dependency on T. We are now ready to present the algorithm for the online setting, as summarized in Algorithm 2. Specifically, for each iteration: \u2022 The main agent exploits the information contained in the data collected so far by computing the MLE \u02c6 Pt and solving the minimax game with respect to it to get \u02c6 \u03c01 t ; \u2022 The enhancer, however, aims to facilitate the main agent\u2019s learning by maximizing the uncertainty relative to the \u02c6 \u03c01 t ; \u2022 Finally, we use the policy pair to collect m preference pairs and query oracle P \u2217to get the preference signals. Algorithm 2 Optimistic Equilibrium Learning from Human Feedback with Enhancer 1: Input: Preference space P, policy class \u03a0, parameter \u03b7, \u03bb > 0. 2: for t=1,. . . ,T do 3: Exploitation with the main agent: compute the MLE \u02c6 Pt with \u2113D1:t\u22121 defined in Equation (6) and compute Nash equilibrium by calling the Nash equilibrium oracle 2: \u02c6 \u03c01 t = argmax \u03c01\u2208\u03a0 min \u03c02\u2208\u03a0 Ex\u223cd0,a1\u223c\u03c01,a2\u223c\u03c02 h \u02c6 Pt(x, a1, a2) + \u03b7\u22121 log \u03c00(a1|x) \u03c01(a1|x) \u2212\u03b7\u22121 log \u03c00(a2|x) \u03c02(a2|x) i , (9) 4: Exploration with the enhancer: compute enhancer to maximize the uncertainty: \u03c02 t = argmax \u03c02\u2208\u03a0 e \u0393m t (\u03bb, \u02c6 \u03c01 t , \u03c02) := argmax \u03c02\u2208\u03a0 sup P \u2208P |Ex\u223cd0[P(x, \u02c6 \u03c01 t , \u03c02) \u2212\u02c6 Pt(x, \u02c6 \u03c01 t , \u03c02)]| q \u03bb + 1 m Pt\u22121 s=1 Pm j=1(P(xs,j, a1 s,j, a2 s,j) \u2212\u02c6 Pt(xs,j, a1 s,j, a2 s,j))2 , (10) 5: Collect Dt = {(xi, a1 i , a2 i , yi)}m i=1 by xi \u223cd0, a1 i \u223c\u02c6 \u03c01 t (\u00b7|xi), a2 i \u223c\u02c6 \u03c02 t (\u00b7|xi) and yi \u223cBer \u0000P(a1 i \u227ba2 i |x, a1 i , a2 i ) \u0001 ; 6: end for 7: Output: the best policy in (\u03c01 1:T ) by a validation set. In particular, we choose to adopt an algorithmic structure that assigns the exploitation and exploration to the main agent and enhancer, separately. 
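For the linear Bradley-Terry case of Example 1 above, the uncertainty that the enhancer of Algorithm 2 seeks to maximize admits the closed form Gamma_t <= (1 + e^B) * || phi(x, pi1) - phi(x, pi2) ||_{Sigma_t^{-1}}. A minimal sketch of accumulating Sigma_t and scoring candidate enhancer directions by this elliptical norm follows; the feature dimension, the number of past rounds, and the candidate set are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d, lam, B = 5, 1.0, 1.0

# Sigma_t accumulates outer products of expected feature differences from the past
# policy pairs, plus the lambda * (1 + e^B)^2 * I regularizer of Example 1.
sigma = lam * (1.0 + np.exp(B)) ** 2 * np.eye(d)
for _ in range(300):                        # past rounds s = 1, ..., t-1
    diff = rng.normal(size=d)               # sample of phi(x, a1) - phi(x, a2)
    sigma += np.outer(diff, diff)
sigma_inv = np.linalg.inv(sigma)

def uncertainty(mean_diff):
    """(1 + e^B) * || phi(x, pi1) - phi(x, pi2) ||_{Sigma_t^{-1}}."""
    return (1.0 + np.exp(B)) * np.sqrt(mean_diff @ sigma_inv @ mean_diff)

# The enhancer of Equation (10) would pick the candidate pi2 whose expected feature
# difference against the main agent's pi1 has the largest elliptical norm.
candidates = rng.normal(size=(20, d))
best = max(range(20), key=lambda k: uncertainty(candidates[k]))
print("most uncertain candidate:", best, uncertainty(candidates[best]))
```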
This choice turns out to be important when we move toward practical algorithms with reasonable approximations, as we detail in Section 6. We now present the main theoretical guarantee for Algorithm 2. Theorem 2. [Proof] Under Assumption 1, for any \u03f5 > 0, if we set the total iterations as T = min{n \u2208 N+ : n \u22652d(P, \u03bb, n)}, batch size as m = 18T log(2T|P|/\u03b4)/\u03f52, \u03b2 = p 2T log(2T|P|/\u03b4)/m, and \u03bb = 2T log(2T|P|/\u03b4)/m for Algorithm 2, then, with probability at least 1 \u2212\u03b4, there exists a t0 \u2208[T], J(\u03c01 \u2217, \u03c02 \u2217) \u2212J(\u02c6 \u03c01 t0, \u2020) \u2264\u03f5. The theorem states that with suitable hyper-parameter choices, after roughly T iterations (up to log factors), we can find an \u03f5-approximate Nash policy \u02c6 \u03c01 t0 for the max-player. Here T depends on the eluder coefficient that is intrinsic to the preference model and characterizes the complexity of the RLHF problem. 10 \f5.3 Key Ideas In this subsection, we present a brief discussion of the key algorithmic ideas. In Algorithm 2, the main agent always exploits all the historical information and takes the best guess we can obtain so far by solving the minimax game with respect to the MLE \u02c6 Pt. However, the success of this process necessitates that the online data collected at each iteration provides enough information. Intuitively, we should explore the direction where we are uncertain about so that we can gain more information. To this end, the enhancer aims to find the \u02c6 \u03c02 t to maximize the uncertainty relative to the \u02c6 \u03c01 t . Then, according to the definition of the eluder coefficient, we know that T X t=1 min \u00001, (\u0393t(\u03bb, \u02c6 \u03c01 t , \u02c6 \u03c02 t ))2\u0001 \u2264d(P, \u03bb, T). Since each term on the left-hand side is non-negative, there exists at least a t0 \u2208[T] such that the value at t0 is smaller or equal to the average value: min \u00001, (\u0393t(\u03bb, \u02c6 \u03c01 t , \u02c6 \u03c02 t ))2\u0001 \u2264d(P, \u03bb, T)/T \u22641/2, which further implies that (\u0393t(\u03bb, \u02c6 \u03c01 t , \u02c6 \u03c02 t ))2 \u22641/2. Recalling that the uncertainty bonus is essentially the worst-case ratio between the out-of-sample error (our learning target) and the in-sample error, we know that for \u02c6 \u03c01 t0, our target is close to the controllable in-sample error. Specifically, similar to the proof of Lemma 1, we can show that the in-sample error satisfies (with details in the Appendix C): 1 m t\u22121 X s=1 m X j=1 (P \u2217(xs,j, a1 s,j, a2 s,j) \u2212\u02c6 Pt(xs,j, a1 s,j, a2 s,j))2 \u2272log(2T|P|/\u03b4) m . In practice, searching for the most uncertain policy in the whole policy space can be challenging and the enhancer policy itself does not enjoy any theoretical guarantee. We may slightly modify Algorithm 2 by restricting the exploration step to the following subset \u03a0t = {\u03c0 \u2208\u03a0 : \u03b7\u22121Ex\u223cd0DKL(\u03c0(\u00b7|x), \u02c6 \u03c01(\u00b7|x)) \u2264\u03b2(e \u0393m t (\u03bb, \u02c6 \u03c01, \u03c0) + e \u0393m t (\u03bb, \u02c6 \u03c01, \u02c6 \u03c01))}, (11) where \u03b2 is the parameter defined in Theorem 2. This set is never empty because we can prove that both \u02c6 \u03c01 t and argmin\u03c0\u2032 J(\u02c6 \u03c01 t , \u03c0\u2032) belong to \u03a0t. 
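A minimal sketch of this restricted exploration step (Equation (11)) over a finite set of candidate policies is given below; representing policies as categorical distributions over a small response set and using total variation as a stand-in for the uncertainty estimator are simplifying assumptions made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
eta, beta, n_actions = 0.1, 0.5, 6

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

pi1_hat = rng.dirichlet(np.ones(n_actions))                 # main agent's policy
candidates = [rng.dirichlet(np.ones(n_actions)) for _ in range(40)]

def uncertainty(pi_a, pi_b):
    # Placeholder for the estimator Gamma_t^m(lambda, pi_a, pi_b); total variation
    # is only a stand-in, since the true quantity depends on the preference class.
    return 0.5 * float(np.abs(pi_a - pi_b).sum())

# Equation (11): keep candidates whose KL divergence against pi1_hat stays within
# the uncertainty budget, then explore with the most uncertain feasible candidate.
budget = lambda pi: beta * (uncertainty(pi1_hat, pi) + uncertainty(pi1_hat, pi1_hat))
feasible = [pi for pi in candidates if kl(pi, pi1_hat) / eta <= budget(pi)]
pi2_hat = max(feasible or [pi1_hat], key=lambda pi: uncertainty(pi1_hat, pi))
print("selected enhancer uncertainty:", uncertainty(pi1_hat, pi2_hat))
```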
Intuitively speaking, maintaining a small KL divergence against \u02c6 \u03c01 t corresponds to exploiting the historical information, and maximizes the uncertainty relative to \u02c6 \u03c01 t leading to more information gain. The choice of \u03a0t represents a refined trade-off between these two different goals, which allows that \u02c6 \u03c02 t also converges to \u03c0\u2217. The details are deferred to Appendix C.2. 6 Practical Implementation of Iterative RLHF In this section, we discuss how to implement the proposed theoretical algorithms. 6.1 Practical Algorithmic Design Main agent approximates Nash equilibrium oracle via self-play IPO. The first step is to approximate the information-theoretical oracle 2. Due to the separation between exploitation and exploration, we do not introduce optimism for the main agent. In particular, the main agent aims to solve the minimax game with respect to a learned preference model that satisfies \u02c6 Pt(x, a1, a2) = 1 \u2212\u02c6 Pt(x, a2, a1). Optimizing a fixed and known preference model has been studied in Munos et al. (2023); Calandriello et al. (2024) and the proposed algorithms can serve as a reasonable approximation of the oracle. Specifically, we adopt the self-play IPO by optimizing the following loss function: Ex\u223cd0,a,a\u2032\u223cSG[\u03c0],a+,a\u2212\u223c\u02c6 Pt(x,a,a\u2032) h log \u03c0(a+|x)\u03c00(a\u2212|x) \u03c0(a\u2212|x)\u03c00(a+|x) \u22121 2\u03b7 i2 , (12) 11 \fwhere SG[\u03c0] means that although we generate data from policy \u03c0, but we do not compute the gradient for this data-generation process. We have the following result. Proposition 1 (Proposition 4.1 of Calandriello et al. (2024)). The minimizer of Equation (12) is the unique Nash policy of the Equation (9). Enhancer explores via rejection sampling. According to Equation (11), the enhancer aims to find a policy that 1) is close to the main agent\u2019s policy \u02c6 \u03c01 t but also 2) maximizes the uncertainty relative to the \u02c6 \u03c01 t . However, since for the general neural network, the uncertainty estimator does not admit a closed form, in practice, we typically resort to heuristic methods. One popular way to do so in the context of alignment is the rejection sampling (Nakano et al., 2021; Dong et al., 2023; Liu et al., 2023a; Hoang Tran, 2024; Yuan et al., 2024). Specifically, given a prompt x, we use \u02c6 \u03c01 t to independently sample n responses, use a tournament-style procedure to get the best response (and reject all other responses), and take the best responses as \u02c6 \u03c02 t . In other words, we take the policy-induced by rejection sampling with \u02c6 \u03c01 t and P \u2217as the enhancer policy \u02c6 \u03c02 t . In this way, the \u02c6 \u03c02 t enlarges the margins between \u02c6 \u03c01 t while maintaining a moderate KL divergence. For instance, in the special case of the BT model, if we rank the samples via the learned reward, the KL divergence is upper bounded by log n \u2212n\u22121 n and is usually far better than this conservative estimation (Beirami et al., 2024). 6.2 Preference Model Construction Bradley-Terry model construction. We follow the previous works (Ouyang et al., 2022; Bai et al., 2022a) to initialize the reward model using an SFT model but replace the last layer with a linear head to predict a scalar score. 
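Before turning to the reward-model loss, here is a minimal sketch of the self-play IPO objective in Equation (12) above, assuming per-sequence log-probabilities under the current policy pi and the reference policy pi0 are available; the toy tensors in the usage example are placeholders for quantities produced by the language model.

```python
import torch

def self_play_ipo_loss(logp_plus, logp_minus, logp0_plus, logp0_minus, eta=0.1):
    """Squared-margin loss of Equation (12).

    logp_plus / logp_minus are summed token log-probabilities of the preferred /
    rejected response under the current policy pi; logp0_* are the same quantities
    under the reference policy pi0 and are treated as constants.
    """
    margin = (logp_plus - logp_minus) - (logp0_plus - logp0_minus)
    return torch.mean((margin - 1.0 / (2.0 * eta)) ** 2)

# Toy usage with fabricated per-sequence log-probabilities (a batch of 4 pairs);
# in practice (a+, a-) are two stop-gradient samples from pi ordered by the learned
# preference model, and the log-probabilities come from the LLM itself.
lp_plus, lp_minus = torch.randn(4), torch.randn(4)
lp0_plus, lp0_minus = torch.randn(4), torch.randn(4)
print(float(self_play_ipo_loss(lp_plus, lp_minus, lp0_plus, lp0_minus)))
```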
The loss function of reward modeling is the negative log-likelihood so that minimizing the loss is equivalent to MLE: LRM(\u03b8) = \u2212Ex,aw,al\u223cD log \u03c3 \u0000r\u03b8(x, aw) \u2212r\u03b8(x, al) \u0001 , where aw is the preferred response over al. We use the open-source training script4. We train the model for one epoch and use a batch size of 256, a learning rate of lr = 1e-5, and a cosine learning rate schedule with a warm-up ratio of 0.03. Preference model construction. We follow Zhao et al. (2023); Liu et al. (2023a) to utilize the fact that the LLM is the next token predictor for the preference modeling. Specifically, suppose that we have a preference pair (x, a1, a2, A) where A means that the first response is better. Then, we format it as instruction = [CONTEXT] {x} [RESPONSE A] {a1} [RESPONSE B] {a2}, and label = A. Then, we simply treat the preference modeling as an instruction-following task to fine-tune the model on these instruction-label pairs. In particular, to mitigate the position bias (the preference model may prefer the response that is given in the position of RESPONSE A), we randomly switch the order of the two responses in the data formatting process. During inference, we simply use the probability of decoding A as the \u02c6 P(x, a1, a2). We mention in passing that it is also possible to include a rubric in the instruction template to guide the model\u2019s prediction and achieve better results (Qin et al., 2023). We do not explore the prompt engineering for the best preference model construction for simplicity. Ground-truth preference model for simulation. Ideally, the P \u2217is supposed to be a group of human labelers or closed-source LLMs like Chat-GPT. Unfortunately, due to resource constraints, we cannot afford the cost of using these preference oracles. Instead, we follow Gao et al. (2023) to use a strong preference model to serve as the P \u2217in the simulation. Specifically, we adopt the LLaMA3-8B, and train the preference model on a diverse set of open-source preference datasets including HH-RLHF (Bai et al., 2022a), Stanford Human Preferences Dataset (SHP) (Ethayarajh et al., 2022), Ultra-feedback (Cui et al., 2023), HelpSteer (Wang et al., 2023c), distilabel-capybara5, distilabel-orca6, and UltraInteract7. Motivated by the Theorem 1 4https://github.com/WeiXiongUST/RLHF-Reward-Modeling 5https://huggingface.co/datasets/argilla/distilabel-capybara-dpo-7k-binarized 6https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs 7openbmb/UltraInteract_pair 12 \fas well as the practical application (Ouyang et al., 2022), we include more than 1 comparison pair when a prompt is with more than 2 responses for better coverage. To be specific, \u2022 for SHP, we only use the samples with score ratio > 2, and for each prompt, we take at most 5 comparison pairs; \u2022 for HelpSteer, we use all the possible pairs except for those with the same score where the score is averaged over helpfulness and correctness; \u2022 for UltraFeedback, we use all possible pairs except for those with the same score where the score is averaged over all attributes; \u2022 for UltraInteract, we take a subset of 150K pairs into the mixture. We have about 700K preference pairs in the final training set and we have uploaded it to the huggingface at https://huggingface.co/datasets/hendrydong/preference_700K. We use the package8 to perform supervised finetuning, with the detailed hyper-parameters given in Appendix E. 
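A minimal sketch of the preference-model data formatting described above, including the random order switch used to mitigate position bias, might look as follows; the helper name and the toy strings are illustrative, and at inference time the learned preference probability would be read off as the probability of decoding the token A.

```python
import random

def format_preference_example(x, a1, a2, first_is_preferred, rng=random):
    """Build an instruction/label pair for next-token preference modeling.

    The two responses are randomly swapped to mitigate position bias; the label
    ('A' or 'B') tracks where the preferred response ends up.
    """
    swap = rng.random() < 0.5
    resp_a, resp_b = (a2, a1) if swap else (a1, a2)
    preferred_is_a = (not swap) if first_is_preferred else swap
    instruction = f"[CONTEXT] {x} [RESPONSE A] {resp_a} [RESPONSE B] {resp_b}"
    return instruction, "A" if preferred_is_a else "B"

# Toy usage; the resulting (instruction, label) pairs are used for supervised
# fine-tuning of the preference model as an instruction-following task.
inst, label = format_preference_example("prompt text", "response one", "response two", True)
print(inst, "->", label)
```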
The resulting preference models are evaluated on RewardBench (Lambert et al., 2024), with the results summarized in Table 1. The preference model based on LLaMA3-8B-it achieves state-of-the-art test accuracy and can serve as a stable preference oracle for the simulation study. 7 Conclusion In this paper, we study RLHF under a general preference oracle that can capture non-transitive preferences. Specifically, we formulate the problem as a KL-regularized minimax game between two LLMs, and propose statistically efficient algorithms in both the offline and online settings. The proposed algorithms, with a carefully crafted non-symmetric algorithmic structure, can be practically implemented with reasonable approximations of the information-theoretic computational oracles. We hope our findings can advance the understanding of preference signal modeling in RLHF and stimulate further research beyond the classic reward-based framework. 8 Acknowledgement The authors would like to thank Tianqi Liu for insightful discussions on the training of the preference model, and thank Haoxiang Wang and Zihao Li for valuable discussions on the preference dataset selection."
},
{
"url": "http://arxiv.org/abs/2401.12934v1",
"title": "Reward-Relevance-Filtered Linear Offline Reinforcement Learning",
"abstract": "This paper studies offline reinforcement learning with linear function\napproximation in a setting with decision-theoretic, but not estimation\nsparsity. The structural restrictions of the data-generating process presume\nthat the transitions factor into a sparse component that affects the reward and\ncould affect additional exogenous dynamics that do not affect the reward.\nAlthough the minimally sufficient adjustment set for estimation of full-state\ntransition properties depends on the whole state, the optimal policy and\ntherefore state-action value function depends only on the sparse component: we\ncall this causal/decision-theoretic sparsity. We develop a method for\nreward-filtering the estimation of the state-action value function to the\nsparse component by a modification of thresholded lasso in least-squares policy\nevaluation. We provide theoretical guarantees for our reward-filtered linear\nfitted-Q-iteration, with sample complexity depending only on the size of the\nsparse component.",
"authors": "Angela Zhou",
"published": "2024-01-23",
"updated": "2024-01-23",
"primary_cat": "stat.ML",
"cats": [
"stat.ML",
"cs.LG",
"math.OC"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "Reward-Relevance-Filtered Linear Offline Reinforcement Learning",
"main_content": "Introduction Offline reinforcement learning, learning to make decisions from historical data, is necessary in important application areas such as healthcare, ecommerce, and other real-world domains, where randomized exploration is costly or unavailable. It requires certain assumptions such as full observability and no unobserved confounders. This motivates, esProceedings of the 27th International Conference on Artificial Intelligence and Statistics (AISTATS) 2024, Valencia, Spain. PMLR: Volume TBD. Copyright 2024 by the author(s). pecially in the era of big data, collecting as much information as possible about the environment into the state variable. On the other hand, common sensing modalities by default capture not only information that can be affected by an agent\u2019s actions, but also information about the environment that is unaffected by an agent\u2019s actions. For example, in robotics applications, the dynamics of clouds moving in the sky is a separate process that does not affect, nor is affected by, agents\u2019 actions, and does not affect agent reward. Given the overall high variance of learning offline, removing such exogenous information can help improve policy information and optimization, while recovering a minimally sufficient state variable for the optimal policy can reduce vulnerability to distribution shifts. Though various combinations of relevance/irrelevance are possible for rewards and actions, as has been recognized in a recent work, most works methodologically impose statistically difficult conditional independence restrictions with variational autoencoders that lack strong theoretical computational/statistical guarantees. Other approaches suggest simpler variable screening, but without discussion of underlying signal strength assumptions, or tradeoffs in downstream estimation and value under potential false negatives/positives, and without guarantees. To bridge between these methods, we focus on a model with linear function approximation, a popular structural assumption in the theoretical literature, and develop methods based on thresholded LASSO regression, connecting classical statistical results to new decision-theoretic notions of sparsity introduced by these causal decompositions of reward/action ir/relevance. In particular, we focus on a particular decomposition: the transitions factor into a sparse component that affects the reward, with dynamics that can affect the next timestep\u2019s sparse component and an exogenous component. The exogenous component does arXiv:2401.12934v1 [stat.ML] 23 Jan 2024 \fnot affect the reward or sparse component. A toy example of such a setting is controlling a boat with an image representation of the state environment: actions affect navigation locally and also propagate ripples leaving the boat. Though these ripples evolve under their own dynamics, they themselves do not affect local control of the boat or rewards. Our structural assumptions, though restrictive, still surface what we call \u201cdecision-theoretic, but not estimation sparsity\u201d: that is, the minimally sufficient causal adjustment set to predict transition probabilities requires the full state variable, but the optimal policy only depends on the sparse component. 
The contributions of our work are as follows: under our structural assumptions, we develop methodology for filtering out exogenous states based on support recovery via thresholded lasso regression for the rewards, and linear estimation on the recovered support for the q function via least-squares policy evaluation/fitted-Q-iteration (FQI). We prove predictive error guarantees on the q function estimation, and correspondingly on the optimal policy, showing how the optimal policy now depends on the dimensionality of the sparse component, rather than the full ambient dimension. 2 Preliminaries We consider a finite-horizon Markov Decision Process on the full-information state space comprised of a tuple M = (S, A, r, P, \u03b3, T) of states, actions, reward function r(s, a) , transition probability matrix P, \u03b3 < 1 discount factor, and time horizon of T steps, where t = 1, . . . , T. We let the state spaces S \u2286Rd be continuous, and assume the action space A is finite: \u03d5(s, a) denotes a (known) feature map. A policy \u03c0 : S 7\u2192\u2206(A) maps from the state space to a distribution over actions, where \u2206(\u00b7) is the set of distributions over (\u00b7), and \u03c0(a | s) is the probability of taking action a in state s. Since the optimal policy in the Markov decision process is deterministic, we also use \u03c0(s) \u2208A for deterministic policies, to denote the action taken in state s. The policy and MDP M induce a joint distribution P\u03c0 where P\u03c0(at | s0:t, a0:t\u22121) = \u03c0(at | st) and P\u03c0(st+1 | s0:t, a0:t) = P(st+1 | at, st), the transition probability. The value function is v\u03c0 t (s) = E\u03c0[PT t\u2032=t \u03b3t\u2032\u2212trt\u2032 | s], where E\u03c0 denotes expectation under the joint distribution induced by the MDP M running policy \u03c0. The state-action value function, or q function is q\u03c0 t (s) = E\u03c0[PT t\u2032=t \u03b3rt\u2032 | s, a]. These satisfy the Bellman operator, e.g. q\u03c0 t (s, a) = r(s, a) + \u03b3E[v\u03c0 t+1(st+1) | s, a]. The optimal value and q-functions are v\u2217, q\u2217correspond to the optimal policy and optimal action, respectively. We focus on the offline reinforcement learning setting where we have access to a dataset of n offline trajectories, D = {(s(i) t , a(i) t , s(i) t+1)T t=1}n i=1, where actions were taken according to some behavior policy \u03c0b. We assume throughout that the underlying policy was stationary, i.e. offline trajectories (drawn potentially from a series of episodes) that are independent. Linearity Throughout this paper, we focus on linear Markov decision processes. Let the feature mapping be denoted \u03d5 : S \u00d7 A 7\u2192Rd. We assume the reward function and value functions are linear in \u03d5. Assumption 1 (Linear MDP). Assume that both the rewards and transitions are linear functions (possibly with different parameters): rt(s, a) = \u03b2t \u00b7 \u03d5(s, a), q\u03c0 t (s, a) = \u03b8\u03c0 t \u00b7 \u03d5(s, a), Pt(\u00b7 | s, a) = \u00b5t\u03d5(s, a), \u2200t The theoretical analysis of reinforcement learning typically assumes that the reward function is known, since noise in rewards leads to lower-order terms in the analysis. However, in our setting, we will leverage sparsity of the rewards to consider minimal state space representations (and adaptive model selection) which affect first-order terms in the analysis. 
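As a concrete rendering of Assumption 1, the sketch below builds the feature map phi(s, a) as a product space over a finite action set (the same construction used in the experiments of Section 7) and evaluates a linear q-function together with its greedy policy; the dimensions and the random parameter vector are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_state, actions = 4, [0, 1]
d_feat = d_state * len(actions)

def phi(s, a):
    """Product-space features: the state vector placed in the block for action a."""
    out = np.zeros(d_feat)
    out[a * d_state:(a + 1) * d_state] = s
    return out

theta = rng.normal(size=d_feat)     # q_t(s, a) = theta . phi(s, a) under Assumption 1

def q(s, a):
    return float(theta @ phi(s, a))

def greedy_action(s):
    return max(actions, key=lambda a: q(s, a))

s = rng.normal(size=d_state)
print([q(s, a) for a in actions], "->", greedy_action(s))
```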
Linear Bellman completeness is the assumption that for any linear function f(s, a) := \u03b8\u22a4\u03d5(s, a), the Bellman operator applied to f(s, a) also returns a linear function with respect to \u03d5. (It is an equivalent assumption but generalizes more directly to potential nonlinear settings). Definition 1 (Linear Bellman Completeness). the features \u03d5 satisfy the linear Bellman completeness property if for all \u03b8 \u2208Rd and (s, a, h) \u2208S \u00d7A\u00d7[T], there exists w \u2208Rd such that: w\u22a4\u03d5(s, a) = r(s, a) + \u03b3Es\u2032\u223cPh(s,a) max a\u2032 \u03b8\u22a4\u03d5 (s\u2032, a\u2032) . As w depends on \u03b8, we use the notation Th : Rd 7\u2192 Rd to represent such a w, i.e., w := Th(\u03b8) in the above equation. Note that the above implies that r(s, a) is in the span of \u03d5 (to see this, take \u03b8 = 0 ). Furthermore, it also implies that q\u22c6 h(s, a) is linear in \u03d5, i.e., there exists \u03b8\u22c6 h such that q\u22c6 h(s, a) = (\u03b8\u22c6 h)\u22a4\u03d5(s, a). 2 \fWe let \u03c1 \u2208[d] denote an index set. We use the superscript (\u00b7)\u03c1 to denote subindexing a (random) vector by the index set (since time is the typical subscript), i.e. s\u03c1 is the subvector of state variable according to dimensions \u03c1, s\u03c1 = {sk}k\u2208\u03c1. We also introduce a new notion of extension of a subvector s\u03c1 to the ambient dimension, i.e. \u02d8 s[\u03c1] = sk if k \u2208 \u03c1 and 0 otherwise, which makes it easier, for example, to state equivalence of generic q functions comparing full-dimensional states vs. the extension of sparse subvectors to the full-dimensional space, denoted \u02d8 q. 3 Related work Our work is related to sparse offline reinforcement learning, LASSO regression for variable selection, and approaches for leveraging causal structure in reinforcement learning to remove important information. We describe each of these in turn. Structure in offline reinforcement learning. [Hao et al., 2021] studies LASSO estimation for fitted-q-evaluation and interation, and also suggests thresholded LASSO. Although we also use thresholded LASSO, our method is quite different because we directly impose the sparsity structure induced by reward-relevance into estimation of the q function, because the optimal policy is sparse. An emerging line of work identifies causal decomposition of state variables into reward-relevant/rewardirrelevant/controllable components (or variations thereof) [Dietterich et al., 2018, Wang et al., 2022b, Wang et al., Zhang et al., 2020, Seitzer et al., 2021, Efroni et al., 2021]. Methodologically, these works regularize representation learning such as with variational autoencoders towards conditional independence (which generally lacks theoretical guarantees) [Dietterich et al., 2018, Wang et al., 2022a, Seitzer et al., 2021], or assume specific structure such as block MDPs with deterministic latent dynamics emitting high-dimensional observations [Efroni et al., 2021], or require auxiliary non-standard estimation [Lamb et al., 2022]. Our model somewhat resembles the exogenous-endogenous decomposition of [Dietterich et al., 2018], but swaps cross-dependence of exogenous and endogenous components: this gives different conditional independence restrictions directly admits sparse learning. Overall, the main simplification of our model relative to these is that rewards do not depend on the exogenous component. 
The most methodologically related work is that of [Efroni et al., 2022], which studies sparse partial controllability in the linear quadratic regulator; although they also use thresholded LASSO, they consider online control under a different quadratic cost, focus on controllability (action-relevance), and consider entrywise regression of matrix entries. Variable selection via LASSO. There is an enormous literature on LASSO. We quickly highlight only a few works on thresholded LASSO. [Meinshausen and Yu, 2009] studies model selection properties of thresholded LASSO under a so-called \u201cbetamin\u201d condition, i.e. an assumed lower bound on the smallest non-zero coefficient and gives an asymptotic consistency result. [Zhou, 2010] also studies thresholded LASSO, while [Van de Geer et al., 2011] studies adaptive and thresholded LASSO. For simplicity, we focus on high-probability guarantees under the stronger beta-min condition. But stronger guarantees on thresholded LASSO can easily be invoked instead of the ones we use here. See [B\u00a8 uhlmann and Van De Geer, 2011] as well. In a different context, that of single-timestep causal inference, [Shortreed and Ertefaie, 2017] proposes the \u201coutcome-adaptive\u201d lasso which adds a coefficient penalty to estimation of the propensity score based on the inverse-strength of coefficients of the outcome model, to screen out covariates unrelated to both exposure and outcome. We are broadly inspired by the idea to encourage sparsity in one model (in our setting, the q-function) based on sparse estimation of another (the reward function). Note, however, that the outcome-adaptive lasso is not applicable to enforce this specific structure. Our work. Even under our simpler model, leveraging classical results from the sparse regression literature sheds light on different approaches that have already been proposed. For example, Wang et al. [2022b] proposes a variable screening method based on independence testing, which performs better for variable selection than a previous regularization-based method [Wang et al.]. The improvement of thresholding procedures upon regularized LASSO for support recovery is classically well known [B\u00a8 uhlmann and Van De Geer, 2011]. The tighter analysis of thresholded lasso also sheds light on implicit signal strength assumptions and tradeoffs of false positives for downstream policy value. Overall, relative to works on exogenous structure in reinforcement learning via representation learning, we connect to a classical literature on sparse regression with provable guarantees. On the other hand, relative to an extensive literature on LASSO, the reinforcement learning setting imposes different decision-theoretic desiderata, such that the optimal policy is sparse (hence q-function) even when from 3 \fs\u03c1 0 s\u03c1c 0 a0 r0 s\u03c1 1 a1 r1 s\u03c1c 1 Figure 1: Reward-relevant/irrelevant factored dynamics. The dotted line from at to s\u03c1c t+1 indicates the presence or absence is permitted in the model. a pure estimation perspective, estimating the transitions are not. 4 Structure We describe the conditional independence and other restrictions that characterize our filtered rewardrelevant model. Let \u03c1 \u2286[d] denote the supported set of reward-relevant and endogenous states. Let |\u03c1| be the size of the support. Assumption 2 (Blockwise independent design). s\u03c1 t \u22a5 \u22a5s\u03c1c t | st\u22121, at\u22121 Assumption 3 (Reward-irrelevant decomposition ). 
Assume that R(s, a) = R(\u02dc s, a) when s\u03c1 = \u02dc s\u03c1, and that next-time-step endogenous states are independent of prior exogenous states given prior endogenous states and action: s\u03c1 t+1 \u22a5 \u22a5s\u03c1c t | s\u03c1 t , at (1) The conditional independence restriction implies that P(s\u03c1 t+1 | st, at) = P(s\u03c1 t+1 | s\u03c1 t , at). Even under these restrictions on the data structure, we can surface a nontrivial qualitative distinction between estimation and decision-making, driven by this causal structure, which we call \u201ccausal sparsity\u201d for short. Although the minimal sufficient adjustment set for estimating the entire-state transitions is the non-sparse union of s\u03c1, s\u03c1p, our next results establish that the optimal decision policy is sparse, and hence our thresholded lasso method depends on the sparse component alone. Note that this decomposition differs from the exogenous-endogenous decomposition in [Dietterich et al., 2018] because our sparse component can affect the exogenous component; but not the other way around \u2013 in our model, the exogenous component does not affect the endogenous component. Let \u03b2 be the parameter for the q function, and \u03b8 be the parameter for the reward function. We let \u03c3r, \u03c3\u03b8, \u03c3r+\u03b3q denote the subgaussian parameters of the reward-variance, the Bellman-target, and the transitions, respectively. Interpreting Assumption 3. For example, consider linear dynamics (with exogenous noise) in an interacted model, i.e. st+1(s, a) = Mas + \u03f5 for Ma \u2208Rd\u00d7d. Then Ma is a block matrix and it satisfies Assumption 3 if, assuming without loss of generality, that the coordinates are ordered such that the first \u03c1 reward-supported components are first, st+1(s, a) = Mas + \u03f5, where Ma = \u0014 M \u03c1\u2192\u03c1 a 0 M \u03c1\u2192\u03c1c a M \u03c1c\u2192\u03c1c a \u0015 . In particular, the block matrix M \u03c1c\u2192\u03c1 a = 0. We can also specify a corresponding probabilistic model. Let Pa(st+1 | st) denote the a-conditioned transition probability, and suppose Pa(st+1 | st) \u223c N(\u00b5a, \u03a3a), and that Pa(st+1 | st) is partitioned (without loss of generality) as Pa(s\u03c1 t+1, s\u03c1c t+1 | s\u03c1 t , s\u03c1c t ). Then by Assumption 3 Pa(s\u03c1 t+1 | s\u03c1 t ) D = Pa(s\u03c1 t+1 | s\u03c1 t , s\u03c1c t ) \u223cN(\u00b5\u03c1 a, \u03a3\u03c1,\u03c1 a ). where the first equality in distribution follows from the conditional independence restriction of Assumption 3 and the parameters of the normal distribution follow since marginal distributions of a jointly normal random variable follow by subsetting the mean vector/covariance matrix appropriately. Remark 1. Similar to previous works studying similar structures, we assume this structure holds. If it may not, we could use model selection methods [Lee et al., 2022]: if we incorrectly assume this structure, we would obtain a completeness violation; so the model selection method\u2019s oracle inequalities would apply and be rate-optimal relative to nonsparse approaches. We emphasize that we don\u2019t posit this method as a general alternative to general sparsity, but rather as a simple principled approach to estimate in settings with this exogenous structure. 4.1 Implications for decisions We characterize important structural properties under the endogenous-exogenous assumption. Under Assumptions 1 and 3, the optimal policy is sparse. Proposition 1 (Sparse optimal policies). 
When s\u03c1 t = \u02dc s\u03c1 t , \u03c0\u2217 t (st) = \u02dc \u03c0\u2217 t (\u02dc st). 4 \fProposition 1 is the main characterization that motivates our method. Even though the estimation of transitions are not sparse, the optimal qand value functions are sparse. Although well-specification/realizability does not imply Bellman completeness of a function class in general, the reward-sparse linear function class is Bellman-complete for q functions as well. Let F\u03c1 t denote the true sparse function classes F\u03c1 t = {\u03b2 \u2208 Rd : \u03b2j = 0, j \u2208\u03c1}. Proposition 2 (Reward-sparse function classes are Bellman-complete.). Let r\u03c1(s, a) be the \u03c1-sparse reward function. Let \u02d8 q \u2208\u02d8 Q be the extension of \u03c1sparse q functions to the full space, i.e. where \u02d8 Q is the space of functions that are zero outside the support \u03c1. Then: sup\u02d8 qt+1\u2208\u02d8 Qt+1 infqt\u2208\u02d8 Qt \u2225qt \u2212T \u22c6 t qt+1\u22252 \u00b5t = 0 5 Method Based on the posited endogenous-exogenous structure, the sparsity in the linear rewards is the same sparsity pattern as the optimal value function. Notably, the transitions are not sparse unless only regressing on the endogenous states alone. In our method, we first run thresholded LASSO on rewards to recover the sparse support. Then we fit the q function via ordinary least squares as the regression oracle in least-squares policy evaluation/iteration on the estimated support. We describe each of these components in turn; thresholded LASSO, and fittedQ-evaluation, before describing our specific method in more detail. Our main estimation oracle of interest is a variant of thresholded LASSO, described in Algorithm 1. We are not limited to thresholded lasso \u2013 we could develop analogous adaptations of any method that performs well for support recovery. We simply require finite-sample prediction error guarantees, high probability inclusion of the entire support, and bounds on the number of false positives. Algorithm 1 Thresholded LASSO 1: Input: (standardized mean-zero and unit variance) covariate matrix X, outcome vector Y , from data-generating process where y = w\u22a4x + \u03f5. 2: Obtain an initial estimator winit using the Lasso. 3: Let \u02c6 \u03c1 = {j : wj init > \u03c40}. 4: Compute ordinary least squares restricted to \u02c6 \u03c1: \u02c6 w\u03c1 = (XT \u02c6 \u03c1kX\u02c6 \u03c1)\u22121XT \u02c6 \u03c1 Y. Algorithm 2 Reward-Filtered Fitted Q Iteration 1: At timestep t = T : Run thresholded LASSO (Algorithm 1) on rT and obtain sparse support \u02c6 \u03c1T . \u03c0\u2217 T (s\u02c6 \u03c1T ) = arg maxa qT (s\u02c6 \u03c1T , a). 2: for timestep t = T \u22121, . . . , 1 do 3: Run thresholded LASSO (Algorithm 1) on rt. Obtain sparse support \u02c6 \u03c1t. 4: Compute Bellman target yt = rt + \u03b3E\u03c0\u2217,\u03c1 t+1[qt+1(st+1, at+1)]. 5: Fit Bellman residual restricted to \u02c6 \u03c1t. e \u03b2t \u2208arg min \u03b2\u2208Rp { 1 2En[(\u03b2\u22a4\u03d5t \u2212yt)2]: \u03b2j = 0, j \u2208\u02c6 \u03c1c t} 6: \u03c0\u2217 t (s\u03c1) = arg maxa qt(s\u03c1, a). 
7: end for Fitted-Q-Iteration Linear fitted-q-evaluation, equivalent to offline least-squares policy evaluation, [Ernst et al., 2006, Le et al., 2019, Nedi\u00b4 c and Bertsekas, 2003, Duan et al., 2020], and fitted-Q-iteration [Chen and Jiang, 2019, Duan et al., 2021] successively approximate \u02c6 qt at each time step by minimizing an empirical estimate of the Bellman error: yt(q) := rt + max a\u2032 [q(st+1, a\u2032)] , qt(s, a) = E[yt(qt+1)|st = s, at = a], \u02c6 qt \u2208arg min qt\u2208Q En,t[(yt(\u02c6 qt+1) \u2212qt(st, at))2]. The Bayes-optimal predictor of yt is the true qt function, even though yt is a stochastic approximation of qt that replaces the expectation over the next-state transition with a stochastic sample thereof (realized from data). 5 \fOur method Our algorithm, described in Algorithm 2, is a natural modification of these two ideas. At the last timestep, we simply run thresholded lasso on the rewards and set the optimal policy to be greedy with respect to the sparsely-supported reward. At earlier timesteps, we first run thresholded lasso on the rewards and recover an estimate of the sparse support, \u03c1t. Then, we fit the Bellman residual (rt + E\u03c0\u2217,\u03c1 t+1[qt+1(st+1, at+1)] \u2212qt(st, at))2 over linear functions of \u03d5t that are supported on \u03c1t. That is, we use the sparse support estimated from rewards only in order to sparsely fit the qt function. Again we set the optimal policy to be greedy with respect to the sparse qt function. Why not simply run thresholded LASSO fitted-Q-iteration? Lastly, we provide some important motivation by outlining potential failure modes of simply applying thresholded lasso fittedQ-iteration (without specializing to the endogenousexogenous structure here). The first iteration (last timestep), qT = RT . So thresholded regression at last timestep is analogous to thresholded reward regression. Note that if reward regression succeeds at time T, then we are integrating a dense measure against the sparse function VT . On the other hand, mistakes in time T will get amplified (i.e. upboosted as \u201csignal\u201d by the dense transition measure). Our reward-thresholded LASSO will not accumulate this error based on the structural assumptions. Without these structural assumptions, it would be unclear whether the rewards are truly dense or whether the dense transitions are amplifying errors in support recovery on the rewards. 6 Analysis We show a predictive error bound, approximate Bellman completeness under the strong-signal support inclusion of thresholded LASSO, and improvement in policy value. The main technical contribution of our work is the finite-sample prediction error bound for the reward-thresholded fitted-Q-regression. Typical prediction error analyses of thresholded lasso do not directly apply to our setting, where we recover the support from the reward and apply it directly to the q-function estimation. The key observation is that the two regressions share covariance structure and some outcome structure in part. Given this result on the finite-sample prediction error and high-probability inclusion of high-signal sparse covariates, since fitted-Q-evaluation analysis uses prediction bounds on regression in a black-box way, we immediately obtain results on policy value. See [B\u00a8 uhlmann and Van De Geer, 2011, Ariu et al., 2022, Zhou, 2010] for discussion of analysis of thresholded LASSO. 
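A compact sketch of Algorithm 1 (thresholded LASSO on the rewards) feeding Algorithm 2 (reward-filtered fitted-Q-iteration) is given below; the data layout, the scikit-learn lasso solver, and the regularization and threshold values are implementation assumptions rather than the paper's tuned choices.

```python
import numpy as np
from sklearn.linear_model import Lasso

def thresholded_lasso_support(X, y, alpha, tau):
    """Algorithm 1: lasso fit on the rewards, keep coordinates above the threshold."""
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
    return np.flatnonzero(np.abs(coef) > tau)

def ols_on_support(X, y, support):
    """Ordinary least squares restricted to the recovered support."""
    beta = np.zeros(X.shape[1])
    if support.size:
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta

def reward_filtered_fqi(data, gamma=0.95, alpha=0.05, tau=0.1):
    """Algorithm 2 sketch; data[t] holds (Phi_t, r_t, Phi_next_per_action), where
    Phi_next_per_action[a] are next-state features under each candidate action a."""
    T = len(data)
    thetas = [None] * T
    for t in reversed(range(T)):
        Phi, r, Phi_next = data[t]
        support = thresholded_lasso_support(Phi, r, alpha, tau)
        if t == T - 1:
            y = r                                    # at the horizon, q_T = r_T
        else:
            q_next = np.stack([Phi_next[a] @ thetas[t + 1] for a in Phi_next], axis=1)
            y = r + gamma * q_next.max(axis=1)       # Bellman target under greedy pi*
        thetas[t] = ols_on_support(Phi, y, support)  # sparse fit of the q-function
    return thetas                                    # policy: argmax_a theta_t . phi(s, a)

# Purely synthetic arrays to exercise the code path (reward sparse in 4 coordinates).
rng = np.random.default_rng(0)
n, d = 400, 20
data = []
for _ in range(3):
    Phi = rng.normal(size=(n, d))
    r = Phi[:, :4] @ rng.normal(size=4) + 0.1 * rng.normal(size=n)
    Phi_next = {a: rng.normal(size=(n, d)) for a in range(2)}
    data.append((Phi, r, Phi_next))
print([int(np.flatnonzero(th).size) for th in reward_filtered_fqi(data)])
```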
6.1 Preliminaries: standard convergence results for thresholded LASSO Let xt = \u03d5(st, at) denote regression covariates, with yt the Bellman residual; in this statement we drop the timestep for brevity and let (X, Y ) denote the data matrix and outcome vector, e.g. at a given timestep concatenated over trajectories. Our first assumption is that transition probabilities are timehomogeneous. Assumption 4. Time-homogeneous transitions. Next we define problem-dependent constants used in the analysis, assumptions, and statements. Definition 2 (Problem-dependent constants.). For a \u22650, define \u03bb\u03c3,a,d := \u03c3 \u221a 1 + a p 2 log p/n, (2) Ea := \b \u03f5 : \r \rXT \u03f5/n \r \r \u221e\u2264\u03bb\u03c3,a,p \t . (3) \u03bb\u03c3,a,d bounds the maximum correlation between the noise and covariates of X and Ea is a high probability event where P (Ea) \u22651\u2212 \u0000\u221a\u03c0 log ppa\u0001\u22121 when X has column \u21132 norms bounded by \u221an. Let \u03c10 \u2264s be the smallest integer such that: Pp i=1 min \u0000\u03b22 i , \u03bb2\u03c32\u0001 \u2264\u03c10\u03bb2\u03c32. Let T0 denote the largest \u03c10 coordinates of \u03b2 in absolute values. Define an active set of strong-signal coordinates, for which we would like to assure recovery, and \u02dc \u03c10 \u2286T0 \u2282\u03c1: \u02dc \u03c10 = {j : |\u03b2j| > \u03bb\u03c3} , (4) We assume standard restricted-eigenvalue conditions and beta-min conditions for support inclusion results. Assumption 5 (Restricted Eigenvalue Condition RE(|\u03c1|, k0, X) (Bickel et al., 2009)). Let X be the data matrix. Define 1 \u03ba (|\u03c1|, k0) \u225c min J0\u2286{1,...,d} |J0|\u2264|\u03c1| min \u2225vJ0\u22251\u2264k0\u2225vJ0\u22251 \u2225Xv\u22252 \u221an \u2225vJ0\u22252 . For some integer 1 \u2264|\u03c1| \u2264d and a number k0 > 0, it holds for all v \u0338= 0, \u03ba (|\u03c1|, k0)\u22121 > 0, \u039bmin(2|\u03c1|) := min v\u0338=0,\u2225v\u22250\u22642|\u03c1| \u2225Xv\u22252 2 n\u2225v\u22252 2 > 0, \u039bmin(2|\u03c1|) := max v\u0338=0,\u2225v\u22250\u22642|\u03c1| \u2225Xv\u22252 2 n\u2225v\u22252 2 > 0. 6 \fThe restricted eigenvalue condition of Assumption 5 is one of the common assumptions for LASSO. It corresponds to assuming well-conditioning of the matrix under sparse subsets. It also ensures that the behavior policy provides good coverage over relevant features; indeed it characterizes coverage for linear function approximation [Duan et al., 2020]. Assumption 6 (Beta-min condition on strong signals). \u03b2min,\u02dc \u03c10 := minj\u2208\u02dc \u03c10 |\u03b2j| > \u03bb\u03c3r. Assumption 6 is a signal-strength condition, that the smallest coordinate of the active set is separated from the threshold defining the active set. This prevents knife-edge situations where a relevant coordinate is not recovered (but is also of irrelevant signal strength). Analogous assumptions are generally required to show support inclusion. Assumption 6 is somewhat milder; instead imposing a stronger version would give correspondingly stronger recovery results. Under these assumptions, our main result is a prediction error bound on q-function estimation under reward-thresholded lasso, under given rate conditions on threshold and regularization strength of initial lasso. Theorem 1 (Prediction error bound for reward-thresholded LASSO). Suppose Assumptions 1 to 6. Suppose Assumption 5, RE (\u03c10, 4, X) holds with \u03ba (\u03c10, 4). Let \u03b2init be an optimal solution to LASSO(\u03d5, r; \u03bbn), e.g. 
lasso regression of rewards on features, with \u03bbn \u2265 \u2225X\u03f5\u03b8\u2225\u221e n . Suppose that for some constants \u02d8 D1 \u2265 D1, and for D0(\u039bmax, \u039bmin, |\u03c1|, \u03c10), D1(\u039bmax, \u039bmin, |\u03c1|, \u03c10) specified in the appendix, it holds that \u03b2min,\u02dc \u03c10 \u2265 D0\u03bbn\u03c3\u221a\u03c10 + \u02d8 D1\u03bbn\u03c3. Choose threshold \u03c40 = C\u03bb\u03c3 \u22652\u221a1 + a\u03bb\u03c3, for some constant C \u2265D1. Let I be the recovered support on \u03b2init. I = {j : |\u03b2j,init | \u2265\u03c40} , where \u03c40 \u2265\u02d8 D1\u03bb\u03c3. Then on Ea, \u02dc \u03c10 \u2282I, |I| \u22642\u03c10, and |I \u2229T c 0 | \u2264\u03c10 And, with high probability we have predictive error bounds: 1 n\u2225X \u02c6 \u03b8 \u2212X\u03b8\u2217\u22252 2 \u22644 \u03c32 q(|I|(1+468 log(2d))+2(1+2\u221a |I|) n . Given this \u201cfast rate\u201d on the prediction error of the reward-thresholded LASSO, we obtain a bound on the policy error of the fitted-Q-iteration procedure that depends primarily on the sparsity (up to constant factors) rather than the potentially highdimensional state. The analysis is standard, given the result we prove above specialized for our method. Note that we did not attempt to optimize problemindependent constants in our analysis. Before we do so, we show how the thresholded procedure also quantifies an important structural restriction for policy evaluation/optimization: (approximate) Bellman completeness, which states that the Bellman operator is approximately closed under the regression function class. Although Proposition 2 establishes that the class of linear functions restricted to the sparse component is Bellman complete, in practice, thresholding noisy estimates may lead to false positives and false negatives. Our previous analysis establishes that these are of controlled magnitude due to the choices of thresholding and regularization parameter. This also implies that the misspecification bias due to finite-sample estimation is also vanishing in n at the same rate, stated in the following proposition on approximate instancedependent Bellman completeness. Proposition 3 (Bound on Bellman completeness violation under approximate recovery). With high probability, under Ea, sup qt+1\u2208QI,\u03c1\\\u02dc \u03c10\u0338\u2286I inf qt\u2208QI,\u03c1\\\u02dc \u03c10\u0338\u2286I\u2225qt\u2212T \u22c6 t qt+1\u22252 \u00b5t = Op(n\u22121). With these results, we can establish a finite-sample bound on the policy value under Algorithm 2. Theorem 2. Suppose Assumptions 1 to 6. V \u2217 1 (s1) \u2212V \u03c0 1 (s1) \u22642T s \u039bmin\u03c32 q(938|\u03c1| log(2d) + 2(1 + 2 p |\u03c1|) n + op(n\u22121 2 ). The result follows straightforwardly given our predictive error bound and standard analysis of fittedQ-iteration. This sample complexity result improves upon prior work since it now depends on the underlying sparsity rather than the full ambient dimension. 7 Experiments We first consider a simulated setting to validate the method. Our primary comparison is with thresholded LASSO regression for fitted-Q-evaluation. This highlights the benefit of tailoring estimation for the inductive bias. In the data-generating process, we first consider |S| = 50, |\u03c1| = 10, and A = {0, 1}. The reward and states evolve according to rt(s, a) = \u03b2\u22a4\u03d5t(s, a) + \u03f5r, st+1(s, a) = Mas + \u03f5s. 
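For concreteness, a minimal simulator of this data-generating process might look as follows; the block-lower-triangular dynamics and reward-sparse coefficients follow the experimental description in this section, while the uniform behavior policy and the spectral rescaling for stability are added assumptions of the sketch.

```python
import numpy as np

def make_dynamics(d=50, k=10, n_actions=2, rng=np.random.default_rng(0)):
    """Block lower-triangular M_a: the exogenous block cannot feed back into the
    reward-relevant block (the rho_c -> rho block is identically zero)."""
    Ms, betas = [], []
    for _ in range(n_actions):
        M = rng.normal(0.2, 1.0, size=(d, d))
        M[:k, k:] = 0.0                              # zero out the rho_c -> rho block
        Ms.append(0.95 * M / np.linalg.norm(M, 2))   # rescaling for stability (added)
        beta = np.zeros(d)
        beta[:k] = rng.normal(0.2, 1.0, size=k)      # reward depends on s^rho only
        betas.append(beta)
    return Ms, betas

def rollout(Ms, betas, T=20, sigma_s=0.4, sigma_r=0.6, rng=np.random.default_rng(1)):
    d = Ms[0].shape[0]
    s, traj = rng.normal(size=d), []
    for _ in range(T):
        a = rng.integers(len(Ms))                    # uniform behavior policy (illustrative)
        r = betas[a] @ s + sigma_r * rng.normal()
        s_next = Ms[a] @ s + sigma_s * rng.normal(size=d)
        traj.append((s, a, r, s_next))
        s = s_next
    return traj

Ms, betas = make_dynamics()
print(len(rollout(Ms, betas)))
```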
7 \f[Figure 2, plotted against N, the number of trajectories: (a) MSE of \u02c6 q\u03c0e 1 under naive thresholding of q-estimation vs. reward-filtered evaluation; (b) true positive rate of the recovered support of \u02c6 q\u03c0e 1 ; (c) false positive rate of the recovered support of \u02c6 q\u03c0e 1 .] Recalling that Ma = \u0014 M \u03c1\u2192\u03c1 a 0 M \u03c1\u2192\u03c1c a M \u03c1c\u2192\u03c1c a \u0015 , we generate the coefficient matrix with independent normal random variables \u223cN(0.2, 1). (Note that the nonzero mean helps ensure the beta-min condition). The zero-mean noise terms are normally distributed with standard deviations \u03c3s = 0.4, \u03c3r = 0.6. In the estimation, we let \u03d5(s, a) be a product space over actions, i.e. equivalent to fitting a q function separately for every action. We first show experiments for policy evaluation in the main text due to space constraints. Fitted-Q-evaluation is similar to fitted-Q-iteration, but replaces the max over q functions with the expectation over actions according to the next time-step\u2019s policy. See the appendix for additional experiments for policy optimization specifically. We compare our reward-filtered estimation using Algorithm 2 with naive thresholded lasso, i.e. thresholding lasso-based estimation of q-functions alone, in Figures 2a to 2c. (We average the q function over actions; results are similar across actions). The behavior and evaluation policies are both (different) logistic probability models in the state variable, with the coefficient vector given by (different) random draws from the uniform distribution on [\u22120.5, 0.5]. We average over 50 replications from this data-generating process and add standard errors, shaded, on the plot. The first plot, Figure 2a, shows the benefits in mean-squared error estimation of the q-function q\u03c0e 1 (s, a), relative to the oracle q function, which is estimated from a separate dataset of n = 20000 trajectories. The reward-filtered method achieves an order of magnitude smaller mean-squared error for small sample sizes, with consistent improvement over thresholded LASSO estimation on the q function alone. Next, in Figure 2b we show the true positive rate: both methods perform similarly in including the sparse component in the recovered support. But the last plot, Figure 2c, shows that the naive thresholded lasso method includes many exogenous variables that are not necessary to recover the optimal policy, while the false positive rate for the reward-filtered method is controlled throughout as a constant fraction of the sparsity. Overall, this simple simulation shows the improvements in estimation of the q function (which translate down the line to improvements in decision value) under this special structure. Acknowledgments AZ thanks participants at the Simons Program on Causality for inspiring discussions on causal representation learning, even though she still mostly thinks about MDPs. AZ also thanks Dan Malinsky for discussions on graphs and structure at an early stage of this project."
},
{
"url": "http://arxiv.org/abs/2402.04418v1",
"title": "A Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs",
"abstract": "Multirotor UAVs are used for a wide spectrum of civilian and public domain\napplications. Navigation controllers endowed with different attributes and\nonboard sensor suites enable multirotor autonomous or semi-autonomous, safe\nflight, operation, and functionality under nominal and detrimental conditions\nand external disturbances, even when flying in uncertain and dynamically\nchanging environments. During the last decade, given the\nfaster-than-exponential increase of available computational power, different\nlearning-based algorithms have been derived, implemented, and tested to\nnavigate and control, among other systems, multirotor UAVs. Learning algorithms\nhave been, and are used to derive data-driven based models, to identify\nparameters, to track objects, to develop navigation controllers, and to learn\nthe environment in which multirotors operate. Learning algorithms combined with\nmodel-based control techniques have been proven beneficial when applied to\nmultirotors. This survey summarizes published research since 2015, dividing\nalgorithms, techniques, and methodologies into offline and online learning\ncategories, and then, further classifying them into machine learning, deep\nlearning, and reinforcement learning sub-categories. An integral part and focus\nof this survey are on online learning algorithms as applied to multirotors with\nthe aim to register the type of learning techniques that are either hard or\nalmost hard real-time implementable, as well as to understand what information\nis learned, why, and how, and how fast. The outcome of the survey offers a\nclear understanding of the recent state-of-the-art and of the type and kind of\nlearning-based algorithms that may be implemented, tested, and executed in\nreal-time.",
"authors": "Serhat S\u00f6nmez, Matthew J. Rutherford, Kimon P. Valavanis",
"published": "2024-02-06",
"updated": "2024-02-06",
"primary_cat": "cs.RO",
"cats": [
"cs.RO",
"cs.SY",
"eess.SY"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "A Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs",
"main_content": "Introduction Unmanned aerial vehicles (UAVs) have witnessed unprecedented levels of growth during the last 20 years, with civilian and public domain applications spanning power line inspection [1], monitoring mining areas [2], wildlife monitoring and conservation [3], border protection [4], building and infrastructure inspection [5], and precision agriculture [6], to name but a few applications. Although different UAV types and configurations have been utilized for such applications, multirotor UAVs, particularly quadrotors, are the most commonly and widely used, due to their perceived advantages, effectiveness, hovering capabilities, and efficiency during flight. A plethora of conventional and advanced model-based linear, linearized, and nonlinear controllers have already been derived and used for multirotor navigation and control. However, recently, learning-based algorithms and techniques have gained momentum because they facilitate and allow, among other things, for: i.) data-driven system modeling that may also include model updates; ii.) combined data-driven and model-based modeling and control and parameter identification; iii.) data-driven parameter identification; iv.) data-driven environment model development, and v.) pure learning-based control. Stated different, learning-based approaches add to the model-based formulation, they enhance multirotor modeling and control, and they offer alternatives to learning, modeling and understanding the environment. arXiv:2402.04418v1 [cs.RO] 6 Feb 2024 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Learning-based algorithms are basically divided into offline and online learning, although there exist some learning algorithms that include an offline and an online learning component. Most researchers have extensively studied and applied to different families of (linear and nonlinear) systems offline learning-based algorithms. Derivation and implementation of online learning-based algorithms to multirotor UAVs is a relatively recent area of research that has attracted increased interest because of the real-time implementability potential, which may facilitate continuous anytime online learning. This momentum has motivated the present survey. To begin with, a review of the literature reveals that there exists considerable published research addressing the use of learning algorithms for UAV control. The emphasis of already published surveys is on developing and adopting machine learning, deep learning, or reinforcement learning algorithms. To be specific, Carrio et al. [7] center around deep learning methods that are applied to UAVs, while in Polydoros and Nalpanditis [8] emphasis is in model-based reinforcement learning techniques applied to robotics, but also with applications to UAVs. Machine learning algorithms for UAV autonomous control are explored by Choi and Cha [9], while Azar et al. [10] investigate deep reinforcement learning algorithms as applied to drones. Most recently, Brunke et al. [11] presented several learning algorithm-based applications in robotics, including multirotor UAVs. The common theme of already published surveys is that they discuss different offline learning-based control algorithms that may be, or have been applied to different UAV types, but they are not real-time implementable. 
Therefore, in contrast to the existing surveys, the focus of this research is on also registering the online learning-based algorithms that have shown potential, and/or have been implemented and tested on multirotor UAVs. Without loss of generality, for the purpose of this survey, the below provided \u2019attributes\u2019 are considered important and they facilitate the review process. The list is by no means complete, nor unique; it may be modified and enhanced accordingly based on set objectives. Note that in what follows, the terms \u2019agent\u2019 and \u2019multirotor\u2019 are used interchangeably. 1. Navigation task: This refers to the (autonomous or semi-autonomous) function the multirotor needs to accomplish, given a specific controller design and/or configuration. 2. Learning: This refers to \u2019what\u2019 the agent learns in order to complete the navigation task. 3. Learning Algorithm: This refers to the specific algorithm that needs to be followed for the agent to learn. Inherent in this attribute is \u2019what\u2019 is being learned by the agent, and \u2019how\u2019. 4. Real-time applicability: This refers to \u2019how fast\u2019 learning is achieved, and \u2019how computationally expensive\u2019 is the learning algorithm, which, basically dictates whether learning is applicable in hard real-time, or, in almost hard real-time. Stated differently, the answer to \u2019how fast\u2019 determines the implementability of the learning algorithm. Calculation of the algorithm\u2019s computational complexity may also provide additional information on \u2019how fast\u2019 the agent learns. 5. Pros & Cons: This refers to the advantages and limitations of the underlying learning approach, which, in unison with all other attributes, determines the overall applicability and implementability of the learning approach on multirotor UAVs. The rest of the survey is organized as follows: Section 2 provides background information and related definitions, which is deemed essential for clarification and classification purposes. Section 3 summarizes offline learning, and provides a detailed Table reflecting published research, also stating what is being learned. The review of offline learning techniques is essential for completeness purposes, and to also understand the differences between offline and online learning. Section 4 dives into online learning approaches. An overview of each learning method is presented, along with what is being learned and why, advantages, and disadvantages, and obtained results. Discussion and conclusions are offered in Section 5. 2 Background Information Key concepts and definitions are presented next. Related and relevant literature is cited to support statements, findings, and observations. This information is adopted and used throughout the paper; it also helps to correctly classify reviewed learning-based algorithms, when needed. 2.1 Definitions: Reinforcement Learning: Reinforcement Learning, RL, is a machine learning technique in which an agent communicates with the environment to find the best action using the state space and a reward function. RL includes four main components; a policy, a reward signal, a value function, and optionally, an environment model [7], [12]. RL may be 2 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs either online or offline. The general configuration of the offline RL and the online RL approaches are shown in Figures 1 and 2. Figure 1: Offline reinforcement learning block diagram illustration [13]. 
Figure 2: Online reinforcement learning block diagram illustration [13]. Policy: In the context of learning algorithms, a policy determines how the learning agent behaves at a given moment. A policy maps the discerned states of the environment to the actions that the agent should take when in those states [12]. A policy may be learned in either an on-policy or an off-policy manner. On-policy: On-policy techniques evaluate and improve the same policy that is used to make decisions. Policy Iteration, Value Iteration, Monte Carlo for On-Policy, and State-Action-Reward-State-Action (SARSA) algorithms are representative examples of on-policy algorithms [12]. Off-policy: Off-policy methods learn a policy that is different from the one used to produce the data. Q-learning and Deep Deterministic Policy Gradient (DDPG) are examples of off-policy algorithms [12]; see Figure 3 for the configuration of the off-policy RL algorithm. The agent experience is the input to a data buffer D, which is also called a replay buffer. Each new policy $\pi_{k+1}$ is trained by utilizing the samples of all previous policies $\pi_0, \pi_1, \ldots, \pi_k$ that are stored in D. Figure 3: Off-policy RL algorithm configuration [13]. On-policy methods are generally less complicated than off-policy ones, and they are typically considered first. Since data is produced by a different policy in off-policy methods, they converge more slowly. Regardless, off-policy methods provide more powerful and general alternatives [12]. Reward signal: A reward signal defines the goal in an RL algorithm. The agent receives a single number from the environment in each time step, the reward. The only objective of the agent is to maximize the total reward over time. The reward signal provides an immediate sense of the current state and indicates what events are beneficial or detrimental to the agent [12]. Value Function: A value function determines what events are advantageous to the agent in the long term. Environment Model: An environment model helps mimic and replicate the behavior of the environment. It allows for making inferences about how the environment will behave. In what follows, the definitions and fundamental concepts of offline, online, supervised and unsupervised learning, machine and deep learning, are detailed. Offline Learning: In offline learning, the learning algorithm trains an agent or artificial neural network (ANN). The agent interacts with the environment and is updated during the training process. However, the agent is never updated after training is completed. Bartak and Vykovsky [14] and Edhah et al. [15] present representative examples of offline machine learning (ML) and deep learning (DL) algorithms, respectively. The studies of Xu et al. [16], Rodriguez et al. [17], and Yoo et al. [18] are representative examples of offline RL algorithms. Observing the configuration diagram of offline RL in Figure 1, a dataset (D) is collected by the behavior policy ($\pi_\beta$) with the help of the states (s) and the reward function (r). A policy ($\pi$) is trained by using D. The training process does not interact with the Markov Decision Process (MDP). The trained policy ($\pi$) is deployed to control the system. The policy ($\pi$) interacts with the environment using the states (s) and the reward function (r), and produces the actions (a). Online Learning: In online learning, according to Hoi et al.
[19], the learner keeps on learning and improves prediction to reach the best possible one by interacting with the environment as shown in Figure 2. A policy (\u03c0k) creates the action space to interact with the environment using the states (s) and the reward function (r). Then, \u03c0k is updated by using the roll-out data including states, actions, future states, and the reward function. After the updated policy (\u03c0k+1) is determined, it is replaced with the current policy (\u03c0k). Note that for the purposes of this survey, the definition of online learning is extended to account for cases where the agent continues the learning process during operation, in real-time, even after completing the offline learning process, or without any basic learning. This extension allows for \u2019anytime learning\u2019 while the underlying system continues to function, and accounts for the ability to modify the system model and also adapt its parameter values (time-varying systems). Supervised Learning: In supervised learning, the learning algorithm learns from the classified and labeled dataset [7]. Unsupervised Learning: In unsupervised learning, the learning algorithm utilizes unlabeled data, which are collected from sensors, to learn the proposed task. Unsupervised learning techniques are widely used in RL [7]. Machine Learning: Machine learning, ML, is a component of Artificial Intelligence (AI), in which tasks are learned (or imitation of tasks is learned) from collected data [7]. 4 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Figure 4: Publications of online and offline learning algorithms for control of multirotor UAVs since 2015 based on Google Scholar search. Deep Learning: Deep learning, DL, is a category or subset of ML that involves the use of deep neural networks, DNNs, with input, hidden, and output layers to model and solve complex problems. Moreover, RL techniques and methods are divided into model-based and model-free ones. Model-based: In a model-based method, the agent predicts the future states and reward and also chooses the action that provides the highest expected reward [12]. Models and planning are used to solve RL problems. Model-free: In a model-free method, the agent does not utilize the environment model but makes decisions by only using trial-and-error approaches [12]. The main difference between model-based and model-free methods is that the former relies on planning, while the latter relies on learning [12]. A Google Scholar Search since 2015 returns the paper distribution that is illustrated in Figure 4, which shows the number of offline and online learning published papers that deal with multirotor UAVs. The next Section reviews offline learning techniques. 3 Offline learning In offline learning, the system may be trained either using collected and/or provided data (supervised learning), or, alternatively, by using feedback before its actual operation, without using any data (unsupervised learning). In this case, when operating in real-time, the agent, or the neural network (NN), is not updated nor affected by the environment. Table 1 summarizes offline learning and RL approaches that have been applied to multirotor UAVs. The Table includes the publication year of the paper, the adopted or derived learning model, the application task/domain, as well as what is being learned. 
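Before turning to the individual studies, the following minimal Python sketch makes the offline/online distinction of Figures 1 and 2 concrete: a policy trained purely from a fixed dataset D collected by a behavior policy versus a policy that keeps being updated from fresh rollouts during operation. The toy environment, the tabular Q-function, and all names such as `ToyHoverEnv` are illustrative stand-ins introduced here, not elements of any surveyed paper.

```python
import random

# Toy environment: 1-D altitude grid; the goal is to hover at altitude 5.
# This is an illustrative stand-in for a multirotor task, not a dynamics model.
class ToyHoverEnv:
    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = random.randint(0, 10)
        return self.state

    def step(self, action):            # action: 0 = descend, 1 = climb
        self.state = max(0, min(10, self.state + (1 if action == 1 else -1)))
        reward = 1.0 if self.state == 5 else -abs(self.state - 5) / 10.0
        return self.state, reward

def q_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9):
    # One temporal-difference (Q-learning) update; used in both settings below.
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (
        r + gamma * max(Q.get((s2, b), 0.0) for b in (0, 1)) - Q.get((s, a), 0.0))

env = ToyHoverEnv()

# Offline learning: a fixed dataset D is collected once by a (random) behavior
# policy, and the learner never interacts with the environment during training.
D, s = [], env.reset()
for _ in range(2000):
    a = random.choice((0, 1))
    s2, r = env.step(a)
    D.append((s, a, r, s2))
    s = s2
Q_offline = {}
for s, a, r, s2 in D:                  # replay the static dataset only
    q_update(Q_offline, s, a, r, s2)

# Online learning: the current policy keeps acting, and every new transition
# is used to update the policy immediately (it could also be stored in a buffer).
Q_online, s = {}, env.reset()
for _ in range(2000):
    a = max((0, 1), key=lambda b: Q_online.get((s, b), 0.0))   # greedy w.r.t. current Q
    if random.random() < 0.1:
        a = random.choice((0, 1))                              # epsilon-greedy exploration
    s2, r = env.step(a)
    q_update(Q_online, s, a, r, s2)    # policy is updated during operation
    s = s2

print("offline greedy action at altitude 3:", max((0, 1), key=lambda b: Q_offline.get((3, b), 0.0)))
print("online  greedy action at altitude 3:", max((0, 1), key=lambda b: Q_online.get((3, b), 0.0)))
```

The structural difference is only where the transitions come from: the offline learner consumes D once and is then frozen, while the online learner's data distribution follows the policy it is currently executing.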
55 articles have been reviewed and classified either as machine learning, ML, or deep learning, DL, or reinforcement learning, RL, approaches (as indicated by the authors). In cases where no information is provided in the reviewed papers, the classification follows the provided definitions in Section 2. 3.1 Machine Learning Most offline ML techniques applied to and used for multirotor UAVs consider an onboard monocular camera. ML based approaches have been developed and adopted for navigation purposes, for stabilization, to track an object, to pass through waypoints with a desired velocity, and for landing purposes on stationary or dynamic targets. In addition, ML approaches are also used to tune and adjust controller parameters. 5 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Table 1: Offline Learning Papers Year Authors Learning Model Application Task What is being Learned 2015 Bartak et al. [14] ML Object tracking How to detect an object 2015 Giusti et al. [20] ML Navigation Image classification to determine direction 2018 Kaufmann et al. [21] ML Waypoint & desired velocity How to detect an object 2021 Janousek et al. [22] ML Landing & flight planning How to recognize an object 2023 Vladov et al. [23] ML Stabilization How to adjust controller parameters 2015 Kim et al. [24] DL Navigation Image classification to assist in flights 2017 Li et al. [25] DL Trajectory tracking Control signals 2017 Smolyanskiy et al. [26] DL Navigation View orientation and lateral offset 2018 Jung et al. [27] DL Navigation How to detect the center of a gate 2018 Loquercio et al. [28] DL Navigation How to adjust yaw angle, and probability of collision 2019 Edhah et al. [15] DL Hovering How to determine propeller speed 2019 Mantegazza et al. [29] DL Ground target tracking Image classification for control 2023 Cardenas et al. [30] DL Position control How to determine the rotor speeds 2016 Imanberdiyev et al. [31] RL Navigation How to select the moving direction 2017 Polvara et al. [32] RL Landing How to detect a landmark and control vertical descent 2017 Choi et al. [33] RL Trajectory tracking Control input 2017 Kahn et al. [34] RL Avoiding failure Policy 2017 Hwangbo et al. [35] RL Stabilization How to determine the rotor thrusts 2018 Xu et al. [16] RL Landing How to determine the velocities of the UAV 2018 Lee at al. [36] RL Landing How to determine the roll and pitch angles 2018 Vankadari et al. [37] RL Landing How to determine the velocities of the UAV on the x and y axis 2018 Kersandt et al. [38] RL Navigation How to select three actions: move forward, turn right, and turn left 2018 Pham et al. [39] RL Navigation How to select the moving direction 2019 Rodriguez et al. [17] RL Landing How to determine velocities of the UAV on x and y axes 2019 Liu et al. [40] RL Formation control Optimal control law 2019 Lambert et al. [41] RL Hovering The mean and variance of the change in states 2019 Manukyan et al. [42] RL Hovering How to determine the rotor speeds 2019 Srivastava et al. [43] RL Target tracking How to determine the velocities of the UAV on x, y, and z axes 2019 Wu et al. [44] RL Trajectory planning How to select the moving direction 2019 Wang et al. [45] RL Navigation How to determine the steering angle 2019 Zeng & Xu [46] RL Path Planning How to select flight direction 2020 Yoo et al. [18] RL Trajectory tracking How to adjust PD and LQR controllers gains 2020 Rubi et al. [47] RL Trajectory tracking How to determine the yaw angle 2020 Pi et al. 
[48] RL Trajectory tracking How to determine the rotor thrusts 2020 Zhao et al. [49] RL Formation control How to solve Bellman equation 2020 Guerra et al. [50] RL Trajectory optimization Control signal 2020 Li et al. [51] RL Target tracking How to determine the angular velocity of yaw angle & Linear acceleration 2020 Kulkarni et al. [52] RL Navigation How to select moving direction 2020 Hu & Wang [53] RL Speed optimization How to determine the rotor speeds 2021 Kooi et al. [54] RL Landing How to determine the total thrust, and roll and pitch angles 2021 Rubi et al. [55] RL Trajectory tracking How to determine the yaw angle 2021 Bhan et al. [56] RL Avoiding failure How to adjust the gains of PD position controller 2021 Li et al. [57] RL Trajectory planning How to obtain the parameter vector of the approximate value function 2022 Jiang et al. [58] RL Landing How to determine the velocity of the UAV on x and y axes 2022 Abo et al. [59] RL Landing How to determine the roll, pitch, and yaw angles and velocity of the UAV on z axis 2022 Panetsos et al. [60] RL Payload transportation How to obtain the reference Euler angles & Velocity on z axis 2022 Ye et al. [61] RL Navigation How to select the moving direction and determine the velocity 2022 Wang & Ye [62] RL Trajectory tracking How to determine the pitch and roll torques 2022 Farsi & Liu [63] RL Hovering Hot to determine the rotor speeds 2023 Xia et al. [64] RL Landing How to obtain the force and torque command 2023 Ma et al. [65] RL Trajectory tracking How to determine the rotor speeds 2023 Castro et al. [66] RL Path Planning How to find optimized routes for navigation 2023 Mitakidis et al. [67] RL Target Tracking How to obtain the roll, pitch and yaw actions 2023 Shurrab et al. [68] RL Target localization How to determine the linear velocity and yaw angle 6 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs In general, captured and acquired images are sent to pre-trained NNs, which first classify the obtained images into different classes, and then pass this information to the underlying multirotor controller as discussed in Bartak and Vykovsky [14], Janousek et al. [22], Giusti et al. [20], and Kaufmann et al. [21]. Specifics are offered next. Bartak and Vykovsky [14] have combined computer vision, ML, and control theory techniques to develop a software tool for a UAV to track an object; the object is selected by a user who observes a series of video frames (images) and picks a specific object to be tracked by the multirotor. P-N learning, where P and N represent positive and negative learning, respectively, is used by a Tracking-Learning-Detection (TLD) algorithm. The Lucas-Kanade tracker is implemented in the tracking phase, and a cascaded classification algorithm that includes a ML technique helps detect the object. A cascaded classification algorithm that consists of a patch variance classifier, an ensemble classifier, and a nearest neighbor classifier is also used. A simple RL algorithm decides the forward or backward speed of the multirotor by using a scaled size of the object. The yaw angle of the multirotor, used to follow the object, is provided to a Proportional\u2013Integral\u2013Derivative (PID) controller as input, to determine flight direction. Janousek et al. [22] have developed a method to accurately guide an autonomous UAV to land in a specific area that is labeled as the \u2019ground object\u2019. 
The landing area includes a QR code, which, after it is identified and recognized it provides specific instructions/commands to the UAV for landing. A NN is used to identify the landing area using the onboard UAV camera. The recognition process is done on a ground control station, GCS, which is a part of the overall UAV-GCS ensemble. The GCS includes a communication channel for command transmission (to and from the UAV). False detection of the landing area may occur (i.e., due to sunlight) thus, success depends on how accurately the processed image determines the landing area, and not on the learning process itself. Regardless, when the UAV is within an \u2019acceptable flight altitude\u2019, a successful landing is accomplished. Giusti et al. [20] have used a quadrotor with an onboard monocular camera to determine the path configuration and direction of forest or mountain trails. A single image is collected from the onboard camera. A DNN is trained using supervised learning to classify obtained images. Two parameters are defined, \u20d7 v for the direction of the camera\u2019s optical axis, and \u20d7 t for the direction of the trail. Based on the calculated angle \u03b1 between \u20d7 v and \u20d7 t and the angle \u03b2 that is 15\u25e6around \u20d7 t, three actions are determined and classified as Turn Left (TL), Go Straight (GS), and Turn Right (TR), represented as \u2013 TL if \u221290\u25e6< \u03b1 < \u2212\u03b2 \u2013 GS if \u2212\u03b2 \u2264\u03b1 < +\u03b2 \u2013 TR if +\u03b2 \u2264\u03b1 < +90\u25e6 These choices consider that the onboard camera centers and focuses on the motion direction. For performance evaluation and comparison, three alternatives are considered: learning using a Saliency-based model, learning following the method discussed in Santana et al. [69], and by using two human observers who are asked to make one of the three previously mentioned decisions. The accuracy of the DNN is 85.2%; the accuracy of the Saliency-based model is 52.3%; the accuracy of the model in [69] is 36.5%. The accuracy of Human1 is slightly better than the accuracy of DNN, 86.5%; Human2 has 82% accuracy, which is lower than the accuracy of the DNN. This methodology has been tested experimentally and has produced successful results. Kaufmann et al. [21] have focused on the problem of autonomous, vision-based drone racing in dynamic environments with emphasis in path planning and control. A convolutional neural network (CNN) is used to detect the location of the waypoints from raw images and to decide about the speed to pass through the gates. The planner utilizes this information to design a short minimum jerk trajectory to reach the targeted waypoints. This technique is tested via simulations and in real environments. Comparisons with other navigation approaches and professional human drone pilots are made. It is shown that this method completes the track slower than human pilots do, but with a higher success rate. The success rate is also much higher compared to using visual-inertial odometry (VIO). Vladov et al. [23] have studied the UAV stabilization problem. They adjust PID controller parameters using a recurrent multilayer perceptron (RMLP) method to stabilize the UAV attitude angles. The determined error is the input to a NN. Instead of using a constant training rate, an adaptive training rate is implemented to overcome slow convergence in the learning part and to avoid trapping in a local minimum during the learning phase. 
Results show that this method has a lower attitude error when compared to the RMLP method with a constant training rate and when using only an ANN. 3.2 Deep Learning DL algorithms (discussed in 8 papers) that have been applied to multirotor UAVs focus on navigation and control, hovering, ground target tracking, and trajectory tracking. 7 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs In Edhah et al. [15], a DNN has been used to control UAV altitude and hover. The standard feedforward, greedy layer-wise, and Long Short-Term Memory (LSTM) methods are evaluated and compared. The controller outputs, position and speed errors, are collected every 1 ms, and then used by a supervised learning technique to train the DNN. After training, the trained DNN controller (and related parameters) is replaced by a Linear\u2013Quadratic Regulator (LQR) controller. To overcome a slight offset in the output signal that results in a small error (in the system response), a proportional corrector is added in parallel to the DNN controller to recover the error in the DNN output signal. Best results are received when using the greedy layer-wise method. Mantegazza et al. [70] have presented three different approaches and models to track a moving user. In the first model, the ResNet [71] architecture (a CNN) is utilized [28]. Red-green-blue (RGB) images captured from 14 different people are used as input, providing x, y, and z positions as output. The second model follows the same structure, but velocities on the x, and y axes are provided as additional inputs. These additional inputs skip ResNet, but they are concatenated to the output of the NN. The outputs are control variables corresponding to four moving directions, up, down, left, and right. In the third model, a simple multilayer perceptron is utilized to map the quadrotor position on three axes and velocities on the x and y axes to control variables. The Mean Absolute Error (MAE) approach is deployed as a loss function during training of all three models. The first and second models use a simple baseline controller function. The last approach uses a combination of the first and third models. Li et al. [25] have studied trajectory tracking without any adaptation, but they have considered quadrotor stabilization and robustness in the presence of disturbances. A DNN is trained with labeled training examples using a standard feedforward method. The DNN uses the quadrotor desired trajectory and current states of position and translational velocities (on each axis), Euler angles, angular velocities, and acceleration on the z-axis. The quadrotor reference states are provided as the DNN output. The trained DNN is placed in front of a controller, and errors between current and desired states are used as inputs to a PID controller. Results show that the DNN with current state feedback is more efficient than the DNN without current state feedback. However, the DNN using future desired state feedback provides better performance. Kim and Chen [24], Jung et al. [27], Loquercio et al. [28], and Smolyanskiy et al. [26] have centered around the quadrotor navigation task using DL techniques. Kim and Chen [24] have developed an autonomous indoor navigation system for a quadrotor to find a specific item using a single onboard camera. Six flight commands are used, Move Forward, Move Right, Move Left, Spin Right, Spin Left, and Stop. 
To establish a dataset, an expert pilot flies the quadrotor in seven different locations; images are collected from the UAV that are based on, corresponding to, (specific) flight commands. A CNN that is a modified CaffeNet model [72] is trained with the established dataset. This modification allows for faster training. During indoor flights, obtained images are classified by the trained NN, and based on image classification, specific control commands are issued to the quadrotor. Jung et al. [27] have developed a CNN to identify accurately the center of gates during indoor drone racing. ADRNet is built and trained using the Caffe library. To build the ADRNet, the AlexNet used in [73] and applied instead of the VGG-16 employed in [74] are adopted. In [75], a convolutional layer is added among the fully connected layers of AlexNet instead of the fc6 and fc7 layers, while the fc8 layer is removed. Thus, a shorter inference time is required compared to the VGG-16-based Single-Shot-Detection (SSD) approach. ADRNet detects the center of the gate, and this information is forwarded to a Line-Of-Sight (LOS) guidance algorithm that issues specific flight control commands. Performance of three Single-Shot Detection (SSD) models, the VGG-16-based SSD, the AlexNet-based SSD, and the ADRNet are compared. ADRNet is the fastest model to detect the center of the gate. Different from the traditional map-localize-plan methods, Loquercio et al. [28] have applied a data-driven approach to overcome UAV challenges encountered when navigating in unstructured and dynamic environments. A CNN, called DroNet, which is used to navigate quadrotors through the streets of a city, is proposed. Collecting data within a city, in urban areas, to train UAVs, is dangerous for both, pedestrians and vehicles, even if an expert pilot flies the quadrotor. Therefore, publicly available datasets from Udacity\u2019s project are used to learn the steering angles. The dataset includes over 70,000 driving car images collected and classified through six experiments. Five experiments are for training and one for testing. A collision probability dataset is also developed for different areas of a city by placing a GoPro camera on the handlebars of a bicycle. UAV control is achieved via commands issued based on the output of DroNet. The collision probability is used to determine the quadrotor forward velocity. The desired yaw angle in the range [\u2212\u03c0 2 , \u03c0 2 ] is determined by using predicted scaled steering in the range [\u22121, 1]. DroNet has worked successfully to avoid unexpected situations and obstacles, predicting the collision probability and the desired steering angle. The quadrotor learned to fly in several environments, and in indoor environments such as parking lots and corridors. Smolyanskiy et al. [26] have focused on autonomously navigating a micro aerial vehicle (MAV) in unstructured outdoor environments. A DNN called TrailNet is developed and used to estimate the view orientation and lateral offset of the 8 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Table 2: Classification of Reinforcement Learning (Offline) Methods Algorithms Papers Q-learning Guerra et al.[50], Pham et al. [39], Kulkarni et al. [52], Abo et al.[59], Zeng & Xu [46] DQN Xu et al. [16], Polvara et al. [32], Castro et al. [66], Shurrab et al. [68], Wu et al. [44], Kersandt et al. [38] Value Function-based LSPI Vankadari et al. [37], Lee et al. [36], Srivastava et al. [43] IRL Choi et al. 
[33] Others Imanberdiyev et al. [31], Ye et al. [61], Farsi & Liu [63], Li et al. [57], Xia et al. [64] PPO Kooi & Babu\u0161ka [54], Bhan et al [56] TRPO Manukyan et al. [42] Policy Search-based PILCO Yoo et al. [18] PLATO Kahn et al. [34] Others Hu & Wang [53], Lambert et al. [41] DDPG Jiang & Song [58], Rodriguez et al. [17], Rubi et al. [47], Rubi et al. [55], Ma et al. [65], Mitakidis et al. [67] TD3 Jiang & Song [58], Kooi & Babu\u0161ka [54], Li et al. [51], Panetsos et al. [60] SAC Jiang & Song [58], Kooi & Babu\u0161ka [54], Actor-Critic Fast-RDPG Wang et al. [45] DeFRA Li et al. [76] CdRL Wang & Ye [62] Others Pi et al. [48], Hwangbo et al. [35] MAV with respect to the center of a trail. A DNN-based controller provides for a stable flight and avoids overconfident maneuvers by utilizing a loss function that includes both label smoothing and an entropy reward. The MAV includes two vision modules, a second DNN and a visual odometry component that is called direct sparse odometry (DSO). The second DNN helps detect the objects in the environment; the DSO estimates the depth to compute a pseudo-colored depth map. Their combination with TrailNet provides an autonomous flight controller functioning in unstructured environments. ResNet-18, SqueezeNet (a miniature version of AlexNet), the DNN architecture in [20], and TrailNet are compared considering autonomous long-distance flight ability, prediction accuracy, computational efficiency, and power efficiency. Only Trailnet is 100% autonomous; SqueezeNet and mini AlexNet are the closest to TrailNet with 98% and 97%, respectively. The least autonomous architecture is the DNN in [20] with 80%. Software modules run simultaneously in real-time, and the quadcopter successfully flies in unstructured environments. Cardenas et al. [30] have developed a DNN-based flight position controller using a supervised DL technique. A dataset that includes position, velocity, acceleration, and motor output signals for different trajectories is created by using a PID flight controller. Five different NN architectures (from the literature) are utilized to learn the rotor speeds using a dataset, and their performance is compared. The five developed architectures are: i) ANN, ii) ANN Feedback, iii) LSTM, iv) LSTM Layers interleaved with convolutional 1D layers (LSTMCNN), and v) Convolutional 1D Layers cascaded with LSTM layers (CLSTM). A comparative study shows that LSTMCNN gives the best performance as a DNN-based flight position controller. LSTMCNN performance is checked against the PID position controller, and it is shown that LSTMCNN has a wider operational range than the PID position controller. 3.3 Reinforcement Learning Offline RL has been extensively applied to multirotor UAVs. The literature review reveals 42 papers that focus on 13 different tasks, which include trajectory tracking, landing, navigation, formation control, flight control, and hovering. In general, the RL framework uses an agent that is trained through trial-and-error to decide on an action that maximizes a long-term benefit. RL is described by a Markov Decision Process (MDP). The agent-environment interaction in an MDP is illustrated in Fig. 5, where, Agent, Environment, and action represent in engineering terms the controller, the controlled system, and the control signal, respectively [12]. 
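As a concrete, deliberately simplified illustration of the agent-environment loop of Fig. 5, the short Python sketch below steps a toy environment with a fixed hand-coded "controller" and accumulates the discounted sum of rewards. The environment, reward, and policy are placeholders chosen here for illustration only; the discounted sum the loop prints is the quantity that the value functions of the next subsection formalize.

```python
# Minimal agent-environment (MDP) interaction loop in the sense of Fig. 5:
# the agent plays the role of the controller, the environment is the controlled
# system, and the action is the control signal. All numbers are illustrative.

def toy_environment(state, action):
    """One-step transition of a toy 1-D system: move toward/away from a setpoint."""
    next_state = state + action               # trivial 'dynamics'
    reward = -abs(next_state - 5.0)           # penalize distance from setpoint 5
    return next_state, reward

def toy_agent(state):
    """A fixed hand-coded policy (stand-in for a learned controller)."""
    return 1.0 if state < 5.0 else -1.0

gamma = 0.99                                  # discount rate
state, discounted_return = 0.0, 0.0
for t in range(50):                           # one finite-horizon episode
    action = toy_agent(state)                 # controller output (control signal)
    state, reward = toy_environment(state, action)
    discounted_return += (gamma ** t) * reward

print("discounted return of this episode:", round(discounted_return, 3))
```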
RL algorithms are classified according to whether they are model-based or model-free, on-policy or off-policy, value function-based or policy search-based, or according to whether they are derived for planning or learning purposes. In what follows, the classification is in terms of value function-based, policy search-based, or actor-critic methods. Table 2 offers a summary of the different variations of RL algorithms. 3.3.1 Value Function-Based Algorithms Value function-based methods use the state-value and action-value functions presented in (1) and (2), respectively: $$v_\pi(s) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s\right] \quad \text{for all } s \in \mathcal{S} \qquad (1)$$ Figure 5: Block diagram and interaction between agent and environment [12]. $$q_\pi(s, a) = \mathbb{E}_\pi\!\left[\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} \,\middle|\, S_t = s, A_t = a\right] \qquad (2)$$ where $v_\pi(s)$ denotes the value function for policy $\pi$ at state $s$, while $q_\pi(s, a)$ represents the action-value function for policy $\pi$ at state $s$ and action $a$. $\mathbb{E}_\pi[\cdot]$ denotes the expected value under policy $\pi$. $\sum_{k=0}^{\infty} \gamma^k R_{t+k+1}$ is the sum of discounted future rewards starting from time $t$ in state $s$ and represents the expected discounted return, and $\gamma$ is the discount rate, $0 \le \gamma \le 1$. $S_t$ and $A_t$ represent the state and action at time $t$, respectively [12][77][78]. The discount rate $\gamma$ plays a critical role in calculating the present value of future rewards. If $\gamma < 1$, the infinite sum has a finite value when the reward sequence $R_k$ is bounded. If $\gamma = 0$, the agent cannot account for future rewards; it can only consider the immediate reward ($\sum_{k=0}^{\infty} \gamma^k R_{t+k+1} = 0^0 R_{t+1} + 0^1 R_{t+2} + \cdots = R_{t+1}$), so the agent learns how to choose the action $A_t$ that maximizes $R_{t+1}$. If $\gamma$ approaches 1, future rewards are emphasized in the expected discounted return, that is, the agent behaves in a more farsighted manner. For example, in [12] the discount factor is chosen close to 1. Li et al. [57] and Hu and Wang [53] have chosen the value of 0.9, and Castro et al. [66] and Panetsos et al. [60] have chosen the discount factor value to be 0.99. Value function-based algorithms encompass several algorithm families, such as Dynamic Programming (DP), Monte Carlo (MC), and Temporal-Difference (TD) methods. The two most popular DP methods, policy iteration and value iteration, rely on policy evaluation and policy improvement. The MC method is also based on policy evaluation and policy improvement, but unlike the DP method, an alternative policy evaluation process is utilized: policy evaluation in the DP method employs a bootstrapping technique, while sampling and average-return techniques are applied in the MC method. The TD method is created by combining the DP and MC methods, applying both sampling and bootstrapping [12][77][78]. TD has been extensively applied in control algorithms for multirotors executing diverse tasks like landing, navigation, obstacle avoidance, path planning, and trajectory optimization. Q-learning and Deep Q-Networks (DQN) stand out as commonly employed RL algorithms within the TD framework, particularly among value function-based algorithms; see Guerra et al. [50], Pham et al. [39], and Polvara et al. [32]. Xu et al. [16] have applied an end-to-end control scheme that includes a DNN and a double DQN algorithm for quadrotor landing on a stable platform.
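Since double DQN is the learning rule behind several of the landing controllers reviewed here, including the scheme of Xu et al. [16] whose details follow, a generic sketch of how the double-DQN regression target is formed is given below: the next action is selected with the online network but evaluated with the target network, which reduces overestimation bias. This is the textbook construction written with toy NumPy stand-ins for the networks, not the implementation of [16].

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 0.99

# Placeholder Q-networks: random linear functions of the state, standing in
# for the online and target deep networks of a (double) DQN.
W_online = rng.normal(size=(4, 3))   # 4-dim state -> 3 discrete actions
W_target = rng.normal(size=(4, 3))

def q_online(s):  return s @ W_online
def q_target(s):  return s @ W_target

# A mini-batch of transitions (s, a, r, s', done) sampled from a replay buffer.
s      = rng.normal(size=(8, 4))
a      = rng.integers(0, 3, size=8)
r      = rng.normal(size=8)
s_next = rng.normal(size=(8, 4))
done   = rng.integers(0, 2, size=8).astype(float)

# Standard DQN target: select and evaluate the next action with the target network.
dqn_target = r + gamma * (1.0 - done) * q_target(s_next).max(axis=1)

# Double DQN target: select the next action with the online network,
# but evaluate it with the target network.
a_star = q_online(s_next).argmax(axis=1)
ddqn_target = r + gamma * (1.0 - done) * q_target(s_next)[np.arange(8), a_star]

# The online network would then be regressed toward these targets at (s, a).
print("DQN targets:      ", np.round(dqn_target, 2))
print("Double DQN targets:", np.round(ddqn_target, 2))
```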
The output of the underlying Deep Reinforcement Learning (DRL) model is the quadrotor speed in x and y the velocity in the z direction is not controlled, it is considered fixed; this makes the problem easier. After testing, the improved DQN method produces good results on autonomous landing. Imanberdiyev et al. [31] have used a model-based RL algorithm on a quadrotor to create an efficient path to reach a destination by considering its battery life. The agent uses three states, position in x and y, and the battery level. The agent follows one of eight possible actions, moving on the x-y plane, and learning the moving direction. The direction action is converted to trajectory commands that are executed using position control. In model-based RL methods, there is a limited number of actions the agent learns to create a sufficiently accurate environment model, as opposed to model-free RL methods. Model-based RL algorithms are not suitable for real-time systems since the planning and model learning aspects are computationally expensive. However, in [31] a parallel architecture is used, called TEXPLORE. Thus, it is possible to take action fast enough based on the current policy there is no need to wait for the planning and model update. Simulation results illustrate that the approach has the ability to learn and to perform well after a few iterations and to also perform actions in real-time. Performance is compared with Q-learning algorithms. In 150 episodes, while there is no significant change in the average reward in Q-learning, the average 10 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs reward of TEXPLORE dramatically increases after the 25th episode. TEXPLORE obtains significantly more rewards than Q-learning in each episode. In [46], the path design problem for a cellular-connected UAV has been handled to reduce mission completion time. A new RL-based UAV path planning algorithm is derived and TD is applied to learn directly the state-value function. A linear function is added to the algorithm with tile coding. Function approximation has two advantages over a table-based RL. It learns the parameter vector that has a lower dimension than the state vector, instead of storing and updating the value function for all states. It also allows for generalization. Tile coding is used to build the feature vector. The parameter vector may be updated to minimize its mean squared error based on a stochastic semi-gradient method with a linear approximation for each state-reward-nextState transition observed by the agent. It is shown that TD with a tile coding algorithm overcomes problems with cellular networks in complex urban environments of size 2 km \u00d7 2 km with high-rise buildings. Also, accumulated rewards from the TD and the TD with tile coding learning algorithms are almost identical, but tile coding provides faster convergence. When tested, the UAV reaches the desired location without running into the coverage holes of cellular networks. The approach discussed in [32] has used two DQNs. One is utilized for landmark detection, the other is used to control the UAV vertical descent. A hierarchy representing sub-policies is applied to the DQNs to reach decisions during the different navigation phases. The DQNs can autonomously decide about the next state. However, the hierarchy decreases the sophistication of the task decision. Algorithm performance is compared with an augmented reality (AR) tracker algorithm and with human pilots. 
The proposed algorithm is faster than a human pilot when landing on a marked pad but also more robust than the AR tracker in finding the marker. Complete details of [32] may be found in [79]. Ye et al. [61] have developed a DRL-based control algorithm to navigate a UAV swarm around an unexplored environment under partial observations. This may be accomplished by using GAT-based FANET (GAT-FANET), which is a combination of the flying Ad-hoc network (FANET) and the graph attention network (GAT). Partial observations lead to loss of information. Thus, a network architecture named Deep Recurrent Graph Network (DRGN) is developed and combined with GAT-FANET to collect environment spatial information, and use previous information from memory via a gated recurrent unit (GRU). A maximum-entropy RL algorithm, called soft deep recurrent graph network (SDRGN), is developed, which is a multi-agent deep RL algorithm. It learns a DRGN-based stochastic policy with a soft Bellman function. The performance of the DRGN (a deterministic model) and the SDRGN are compared with DQN, multi-actor attention critic (MAAC), CommNet, and graph convolutional RL (DGN). In a partially observable environment, the stochastic policy approach is more robust than the deterministic policy one. Also, GAT-FENAT provides an advantage because of its memory unit. When the number of UAVs increases, more information is required from the GAT-FANET, and this reduces dependency on the memory unit. Results [61] show that policies based on GAT-FANET provide better performance on coverage than other policies. It is observed that graph-based communication improves performance in cooperative exploration and path planning, too. The SDRGN algorithm has lower energy consumption than DRGN, but DQN has the lowest energy consumption when compared with DRL methods. SDRGN and DRGN performance increases linearly with the number of UAVs. SDRGN shows better performance than DRL methods this verifies that it has better transferability. Consequently, overall, SDRGN has better performance, scalability, transferability, robustness, and interpretability than other DRL methods. Abo et al. [59] have solved the problem of UAV landing on a dynamic platform taking advantage of Q-learning. Two types of adaptive multi-level quantization (AMLQ) are used; AMLQ 4A with 4 actions and AMLQ 5A with 5 actions, and then compared with a PID controller. The PID position magnitude errors in x and y are higher than the corresponding AMLQ errors, while the oscillation in the AMLQ models is higher than in the PID controller. The developed AMLQ reduces the error on the targeted landing platform. This solution provides faster training and allows for knowledge representation without the need of a DNN. Path planning is effectively used in several areas that include precision agriculture. Castro et al. [66] have worked on adaptive path planning using DRL to inspect insect traps on olive trees. The proposed path planning algorithm includes two parts, the rapidly-exploring random tree (RRT) algorithm and a DQN algorithm. The former searches for path options, while the latter performs optimized route planning integrating environment changes in real-time; however, the training process of DQN is completed offline. Simulation runs are done in an area of 300 m2 with 10 dynamic objects; the UAV is provided with a safe route determined by the proposed approach, and it arrives at the insect traps to take their picture. Shurrab et al. 
[68] have studied the target localization problem and have proposed a Q-learning-based data-driven method in which a DQN algorithm helps overcome dimensionality challenges. Data measurements, from previous and the current step, the previous action, and the direction of the nearest boundary of the UAV compose the state space. The action space includes the UAV linear velocity and the yaw angle that determines flight direction. This approach is compared with the traditional uniform search method and the gradient descent-based ML technique; it returns better results in terms of localization and traveled distance. 11 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Guera et al. [50] have emphasized detection and mapping for trajectory optimization. For detection, the aim is to minimize \u2019wrong detection\u2019, and for mapping, the aim is to minimize the uncertainty related to estimating the unknown environment map. The proposed MDP-based RL algorithm, inspired by Q-learning, consists of state and control estimations. The states are the UAV position depending on actions, a binary parameter that shows the presence or absence of a signal source in the environment, and the states of each cell. The action space includes the control signal to move the UAV from one cell to another in the grid (environment) map. Numerical results show that this technique provides a high probability of target detection and improves the capabilities of map exploration. Pham et al. [39] have handled the UAV navigation problem using Q-learning. The navigation problem is formulated using a discretized state space within a bounded environment. The algorithm learns the action, which is the UAV moving direction in the described environment. The state space includes the distance between the UAV and the target position, and the distance to the nearest obstacle in North, South, West, or East directions. UAV navigation following the shortest path is demonstrated. Kulkarni et al. [52] have also used Q-learning for navigation purposes. The objective is to determine the location of a victim by using a RF signal emitted from smart devices. The transmitted signal reaches the agent and according to the received signal strength (RSS), the agent learns to choose one of eight directions separated by 45 degrees on the x-y plane. For mapping, a grid system is utilized, and each state label is correlated to a particular RSS value (two adjacent grids in the map have different RSS values). Each location on the map has a unique state. The \u03f5-greedy approach provides an action to the UAV, and each episode or iteration is completed when the RSS value of the grid is determined to be greater than -21dBm this value means that the distance from the victim is less than 2 meters. The proposed approach is tested for different starting positions on different floor plans, demonstrating that the UAV successfully reaches the victim\u2019s position. Choi et al. [33] have trained a multirotor UAV by mimicking the control performance of an expert pilot. A pilot collects data from several actual flights. Then, a hidden Markov model (HMM) and dynamic time warping (DTW) are utilized to create the trajectory. Inverse RL is used to learn the hidden reward function and use it to design a controller for trajectory following. Simulations and experiments show successful results. Wu et al. [44] have worked on the general task of \u2019object finding\u2019, for example, in rescue missions. 
A DQN algorithm is used for trajectory planning. The elimination of the loop storm effect that reflects the current sequence in MDP is repeated and it does not cause a punishment in continued actions. The Odor Storm that is caused by not reaching the highest reward value when the agent gets closer to the target increases the convergence speed of the training process. It is shown that the break loop storm technique and the odor effect reduce the training process time. In Kersandt et al. [38] a DNN has been trained with a DRL algorithm to control a fully autonomous quadcopter that is equipped with a stereo-vision camera to avoid obstacles. Three different DRL algorithms, DQN, Double DQN (DDQN), and Dueling DDQN are applied to the system. The average performance of each algorithm with respect to rewards is 33, 120, and 116, respectively; they all are below human performance. The results of applying DDQN and Dueling techniques show that the quadrotor reaches the target with 80% success. Liu et al. [40] and Zhao et al. [49] have used RL for formation control. In [40], a value function-based RL control algorithm is applied to leader-follower quadrotors to tackle the attitude synchronization problem. The output of each quadrotor is synchronized with the output of the leader quadrotor by the designed control system. In [49], the aim is to solve the model-free robust optimal formation control problem by utilizing off-policy value function-based algorithms. The algorithms are trained for robust optimal position and attitude controllers by using the input and output data of the quadrotors. Theoretical analysis and simulation results match, and the robust formation control method works effectively. Vankadari et al. [37] and Lee et al. [36] have worked on the landing task using a Least Square Policy Iteration (LSPI) algorithm that is considered a form of approximate dynamic programming (ADP). Srivastava et al. [43] and Li et al. [57] have applied a LSPI algorithm in multirotor UAVs for target tracking and trajectory planning, respectively. ADP is used to solve problems with large state or action spaces. ADP approximates the value function or policy with function approximation techniques, since storing values for every state-action pair is not practical. In [37], a LSPI algorithm has been used to study the landing problem. A RL algorithm estimates quadrotor control velocities using instantaneous position and velocity errors. The optimal value function of any policy is determined by solving the Bellman equation as applied to a linear system. In the RL algorithm, the LSPI method forecasts the value function parameterizing it into basis functions instead of calculating an optimal value function. The RL algorithm converges quickly and learns how to minimize the tracking error for a given set point. Different waypoints are used to train the algorithm for landing. The method can also be used in noisy environments, effectively. Simulations and real environment results demonstrate applicability of the approach. 12 \fA Survey of Offline and Online Learning-Based Algorithms for Multirotor UAVs Research in [63] has provided a low-level control approach for a quadrotor by implementing a structured online learning-based algorithm (SOL) [80] to fly and keep the hovering position at a desired altitude. The learning procedure consists of two stages; the quadrotor is first flown with almost equal pulse-width modulation (PWM) values for each rotor; these values are collected to create an initial model. 
Then, learning in a closed-loop form is applied using the initial model. Before applying closed-loop learning, three pre-run flights are completed. 634 samples are collected in 68 seconds of flying. The state samples are determined at each time step in the control loop, and then the system model is updated using an RLS algorithm. After determining the updated model, a value function (needed to find the control value for the next step) is updated. The quadrotor is autonomously controlled. This online learning control approach successfully reaches the desired position and keeps the quadrotor hovering. In [36] a trained NN has been adopted for guidance in a simulation environment. A quadrotor with a PID controller has been used, which has onboard a ground-looking camera. The camera provides pixel deviation of the targeted landing platform from an image frame, and a laser rangefinder that procures the altitude information. During training, the NN is trained to learn how to control the UAV attitude. In simulation studies, the UAV reaches the proposed landing location. In experiments, the AI pilot is turned off below the altitude of 1.5 m, but the AI pilot can land at the targeted location using a vision sensor. Trajectories are not smooth because the landing location in the image is not accurately determined due to oscillations, because of image processing errors in the actor NN, because signal transmission creates a total delay of 200 ms, and because of disturbances in real-world environments. Three target tracking approaches that deserve attention are the Image-based (IBVS), Position-based (PBVS), and Direct Visual Servoing approaches. In Kanellakis and Nikolakopoulos [81], IBVS has been found to be the more effective approach for target tracking since it tackles directly the control problem in the image space; it also has better robustness when it comes to camera calibration and to depth estimation errors. Srivastava et al. [43] track a maneuvering target using only vision-based feedback, IBVS. However, tracking is difficult when using only monocular vision without depth measurements. This deficiency is eliminated by a RL technique where optimal control policies are learned by LSPI to track the target quadrotor. Two different basis functions (with and without velocity basis), and four types of reward functions (only exponential reward, quadratic reward function without velocity control, quadratic reward function with velocity control, asymmetric reward function) are described in [43]. The basis function with velocity basis shows better performance than the basis function without velocity basis. In [57], the objective has been to solve the problem of cable-suspended load transportation utilizing three quadrotors. The trajectory planning method is based on a value-function approximation algorithm with the aim to reach the final position as fast as possible keeping the load stable. This method includes two processes, trajectory planning and tracking. The trajectory planning process consists of parameter learning and trajectory generation. Training and learning help determine the parameter vector of the approximate value function (parameters learning part). In the trajectory generation phase, the value function is approximated by using the learned parameters in the former stage, and the flight trajectory is determined via a greedy strategy. The results effectiveness of the load trajectory and the physical effect on the quadrotor flight are checked based on the trajectory tracking process. 
The quadrotors are independent; in the trajectory tracking phase, positions and attitudes are controlled with a hierarchical control scheme using PID controllers (transmitting the position of the load to the controller of the quadrotor). Results show that the actual value function is successfully estimated. Also, the value function confirms that the proposed algorithm works effectively. Xia et al. [64] have used an RL control method for autonomous landing on a dynamic target. Unlike other studies, position and orientation constraints for safe and accurate landing are described. Adaptive learning and a cascaded dynamic estimator are utilized to create a robust RL control algorithm. In the adaptive learning part, the critic network weight is formulated and calculated in an adaptive way. Also, the stability of the closed-loop system is analyzed. 3.3.2 Policy Search-Based Algorithms Value function-based methods calculate the value of an agent's every possible action and choose the action with the best value. In policy-based methods, a probability distribution over all available actions plays the key role, and the agent uses it to decide the action at each time step. A comparison of value function-based and policy search-based algorithms is provided in Table 3. Table 3: Comparison of value function-based and policy search-based methods. Value function-based: indirect policy optimization; generally off-policy; simpler algorithm; computationally expensive; more iterations to converge. Policy search-based: direct policy optimization; on-policy; more complex algorithm; computationally inexpensive; fewer iterations to converge. Kooi and Babu\u0161ka [54] have developed an approach using deep RL to land a quadrotor on an inclined surface, autonomously. Proximal Policy Optimization (PPO), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) algorithms are applied to solve this problem. The TD3 and SAC algorithms successfully trained the set-point tracking policy network, but the PPO algorithm was trained in a shorter time and provided better performance on the final policy. Trained policies may be implemented in real-time. Hu and Wang [53] have utilized an advanced PPO RL algorithm to find the optimal stochastic control strategy for a quadrotor speed. During training, an actor and a critic NN are used. They have the same nine-dimensional state vector (Euler angles, Euler angle derivatives, and errors between expected and current velocities after integration on the x, y, and z axes).
The four policy sub-networks in the action NN are trained to produce new actions in the next batch. When applying new actions to the quadrotor, new states are recorded in a buffer. After the integration and compensation process, a batch of the state vector is used as input to the critic NN. The batch of the advantage values is the output of the critic NN; it is used to evaluate the quality of the actions taken to determine these states. The parameters of the critic NN are updated by minimizing the advantage value per batch. The policy network is updated per batch using the action vectors taken from the old policy network, the state vector from the buffer, and the advantage value from the critic NN. In [53], the PPO and the PPO-IC algorithms have been compared with the offline PPO one and a well-tuned PID controller. The average linear velocity steady-state error of the PPO-IC approaches zero faster, and it is smaller than the PPO. The average accumulated reward of the PPO-IC reaches a higher value. The PPO-IC converges closer to the targeted velocity in the x, y, and z axes than the PPO. PPO-IC velocity errors in the x, y, and z axes are much smaller compared to the PPO errors. The Euler angle errors are also smaller in the PPO-IC algorithm. In the offline learning phase, the nominal quadrotor weight is increased by 10% in each step until it reaches 150% of the nominal weight. The performance of the well-tuned PID controller and the proposed method are compared. When the quadrotor weight increases, the velocity error along the z-axis increases, too, but the PPO-IC algorithm demonstrates stable behavior without fluctuations in speed tracking. Moreover, 12 experiments are conducted when the nominal 0.2 m radius of the quadrotor is increased from 50% to 550%, that is, from 0.1 m to 1.1 m. PID and PPO-IC performance are similar when the radius is between 0.2 0.4 m. For higher values, PID performance decreases even when convergence to the desired value is observed. However, the PID controller cannot control the quadrotor it becomes unstable when the radius increases to more than 1 m. On the contrary, changes in the radius value slightly affect the performance of the PPO-IC algorithm. Kahn et al. [34] and Bhan et al. [56] have worked on failure avoidance and on compensating for occurred failures during flights. In [34], a Policy Learning using Adaptive Trajectory Optimization (PLATO) algorithm, a continuous, reset-free RL algorithm, is developed. In PLATO, complex control policies are trained with supervised learning using model predictive control (MPC) to observe the environment. Partially trained and unsafe policies are not utilized in the action decision. During training, taking advantage of the MPC robustness, catastrophic failures are minimized since it is not necessary to run the learned NN policy during training time. It is shown that good long-horizon performance of the resulting policy is achieved by the adaptive MPC. In [56], accommodation and recovery from fault problems occurring in an octacopter is achieved using a combination of parameter estimation, RL, and model-based control. Fault-related parameters are estimated using an Unscented Kalman Filter (UKF) or a Particle Filter (PF). These fault-related parameters are given as inputs to a DRL, and the action NN in the DRL provides the new set of control parameters. In this way, the PID controller is updated when the control performance is affected by the parameter(s) correlated with faults. 
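Returning to the PPO-based controllers above ([53], [54]), the quantity both maximize is the clipped surrogate objective built from probability ratios and critic-supplied advantages. The NumPy sketch below evaluates that objective on assumed toy data; it is a generic textbook computation, not the actual actor-critic networks or the integral compensator of PPO-IC.

```python
import numpy as np

rng = np.random.default_rng(1)
epsilon = 0.2                                  # PPO clipping parameter

# Toy batch: advantages A(s,a) from the critic and per-action log-probabilities
# under the old (fixed) policy and the current (trainable) policy.
advantages = rng.normal(size=16)
logp_old   = np.log(rng.uniform(0.05, 0.95, size=16))
logp_new   = logp_old + rng.normal(scale=0.1, size=16)   # a slightly changed policy

ratio = np.exp(logp_new - logp_old)            # pi_new(a|s) / pi_old(a|s)

# Clipped surrogate: take the minimum of the unclipped and clipped terms,
# so that large policy changes are not rewarded beyond the clip range.
unclipped = ratio * advantages
clipped   = np.clip(ratio, 1.0 - epsilon, 1.0 + epsilon) * advantages
ppo_objective = np.mean(np.minimum(unclipped, clipped))

# Gradient ascent on ppo_objective (equivalently, descent on its negative)
# would update the policy sub-networks; the critic is regressed separately.
print("clipped surrogate objective:", round(float(ppo_objective), 4))
```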
In [42], a DRL technique has been applied to a hexacopter to learn stable hovering in a state-action environment. The DRL used for training is a model-free, on-policy, actor-critic-based algorithm called Trust Region Policy Optimization (TRPO). Two NNs are used as nonlinear function approximators. Experiments show that such a learning approach achieves successful results and facilitates controller design. Yoo et al. [18] have combined RL and deterministic controllers to control a quadrotor. Five different methods, the original probabilistic inference for learning control (PILCO), PD-RL with high gain, PD-RL with low gain, LQR-RL, and LQR-RL with model uncertainty, are compared via simulations when the quadrotor tracks a circular reference trajectory. The high-gain PD-RL approaches the reference trajectory quickly. The low-gain PD-RL behaves less aggressively and reference trajectory tracking is delayed. The convergence rate of the PD-RL and LQR-RL methods is better. Performance is also better when compared to the original PILCO. The main advantages of combining a deterministic controller with PILCO are simplicity and rapid learning convergence. In [41], errors on the pitch and roll angles are minimized to provide stability at hovering. A user-designed objective function uses simulated trajectories to choose the best action. The objective function also minimizes the cost of each state. The performance of this controller is worse than that of a typical quadrotor controller. However, the proposed controller achieves hovering for up to 6 seconds after training using 3 minutes of data. 3.3.3 Actor-Critic Algorithms Actor-critic algorithms consist of both value function-based and policy search-based methods. The actor refers to the policy search-based method and chooses the actions in the environment; the critic refers to the value function-based method and evaluates the actor using the value function. In [58], three different RL algorithms, DDPG, TD3, and SAC, have been applied to study multirotor landing. Using the DDPG method does not result in successful landings. The TD3 and SAC methods successfully complete the landing task. However, TD3 requires a longer training period and landing is not as smooth, most likely because of noise presence in the algorithm. Rodriguez et al. [17] have studied landing on a dynamic/moving platform using DDPG. Slow and fast scenarios have been tried in 150 test episodes. During the slow scenario, the moving platform (whose trajectory is periodic) velocity is 0.4 m/s, and during the fast scenario it is set at 1.2 m/s. The success rate is 90% and 78%, respectively. Using a constant velocity on the z-axis results in landing failure on the moving platform. This problem may be overcome by using the velocity on the z-axis as a state, but this makes the training process more complicated and learning the landing process becomes more challenging. Rubi et al. [47] have solved the quadrotor path following problem using a deep deterministic policy gradient (DDPG) reinforcement learning algorithm. A lemniscate and one lap of a spiral path are used to compare agents with different state configurations in DDPG and in an adaptive Nonlinear Guidance Law (NLGL) algorithm. The agent has only two states, distance error and angle error.
According to the results, the adaptive NLGL has a lower distance error than the 2-state agent, but its distance error is significantly greater than that of the agent with the future states on the lemniscate path. Rubi et al. [55] have also used three different approaches to solve the path following problem using DDPG. The first agent utilizes only instantaneous information, the second uses a structure (the agent expects the curve), and the third agent computes the optimal speed according to the shape of the path. The lemniscate and spiral paths are used to test the three agents. The lemniscate path is used in the training and test phases. The agents are evaluated in tests but with the assumption that the third agent is also limited by a maximum velocity of 1 m/s. For the lemniscate path, the agents are first tested with ground truth measurements. The second agent shows the best performance with respect to cross-track error. When the agents are tested with the sensor model, the third agent shows slightly better performance in terms of cross-track errors. Then, all agents are tested in the spiral path. When the performance of the agents is compared in simulations with ground truth measurements and with sensor models, the third agent (with a maximum velocity of 1 m/s) shows the best performance in terms of position error. In all tests, the third agent (without a maximum velocity limitation) completes the tracks faster. Wang et al. [45] have handled the UAV navigation problem in a large-scale environment using DRL. Two policy gradient theorems within the actor-critic framework are derived to solve the problem that has been formulated as a partially observable Markov decision process (POMDP). As opposed to conventional navigation methods, raw sensor measurements are utilized in the DRL; control signals are the output of the navigation algorithm. Stochastic and deterministic policy gradients for the POMDP are applied to the RL algorithm. The stochastic policy requires samples from both the state and action spaces. The deterministic policy requires only samples from the state space. Therefore, the RL algorithm with a deterministic policy is faster (and preferred); it is called a fast recurrent deterministic policy gradient algorithm (Fast-RDPG). For comparisons, four different large-scale complex environments are built with random-height buildings to test the DDPG, RDPG, and Fast-RDPG. The success rate of the Fast-RDPG is significantly higher in all environments. Fast-RDPG has the lowest crash rate in one environment. DDPG provides the best performance with respect to the average crash rate in all environments. Fast-RDPG has a much lower crash rate than RDPG. However, Fast-RDPG provides a much lower stray rate than other algorithms in all environments. Li et al. [76] have developed a new DRL-based flight resource allocation framework (DeFRA) for a typical UAV-assisted wireless sensor network used for smart farming (crop growth condition). DeFRA reduces the overall data packet loss in a continuous action space. A DDPG is used in DeFRA, and DeFRA learns to determine the instantaneous heading and speed of the quadrotor and to choose the ground device from which to collect data in the field. The time-varying airborne channels and energy arrivals at ground devices cause variations in the network dynamics. The network dynamics are estimated by a newly developed state characterization layer based on LSTM in DeFRA.
An MDP handles simultaneously the control of the quadrotor\u2019s maneuver and the communication schedule according to decision parameters (time-varying energy harvesting, packet arrival, and channel fading). The state space comprises the battery level, the data buffer length of all ground devices, the battery level and location of the UAV, the channel gain between the UAV and the ground devices, and the time-span parameter of the ground device. The UAV current battery level depends on the battery level of the UAV in the previous time step, harvested energy, and energy consumption. The quadrotor is required to keep its battery level at an equal or higher than the battery level threshold. Performance is compared with two DRL-based policies, DDPG-based Movement Control (DDPG-MC) and DQNs-based Flight Resource Allocation Policy (DQN-FRA) and with two non-learning heuristics, Channel-Aware Waypoint Selection (CAWS) and Planned Trajectory Random Scheduling (PTRS). The DeFRA framework provides lower packet loss than other methods. The relation between the packet loss rate and the number of ground devices is investigated according to all methods. The DRL-based methods outperform CAWS and PTRS. For up to 150 ground devices, DeFRA and DDPG-MC show similar performance and better than other methods, but after increasing the number of ground devices to 300, DeFRA provides better performance than DDPG-MC. Pi et al. [48] have created a low-level quadrotor control algorithm to hover at a fixed point and to track a circular trajectory using a model-free RL algorithm. The combination of the on-policy and off-policy methods is used to train an agent. The standard policy gradient method determines the update direction within the parameter space, while the TRPO and PPO algorithms are designed to identify an appropriate update size. However, for updating the policy, the proposed model establishes new updating criteria that extend beyond the parameter space concentrating on local improvement. The NN output provides the thrust of each rotor. The simulator is created in Python using the dynamic model of Stevens et al. [82]. The effects of the rotation matrix and quaternion are investigated in the learning process. The model with the quaternion may converge slower in the training process than the model with the rotation matrix. However, both models show similar performance when tested. Ma et al. [65] have developed a DRL-based algorithm for trajectory tracking under wind disturbance. The agent learns to determine the rotation speed of each rotor of a hexacopter. A DDPG algorithm is used, but in addition to the existing DDPG algorithm, a policy relief (PR) method based on an epsilon-greedy exploration-based technique, and a significance weighting (SW) method are integrated into the DDPG framework. The former method improves the agent\u2019s exploration skills and its adaptation to environmental changes. The latter helps the agent update its parameters in a dynamic environment. In training, the implementation of PR and SW methods to the DDPG algorithm provides better exploration performance and faster convergence of the learning process, respectively, even in a dynamic environment. This method reaches a higher average reward and has a lower position error compared to the DDPG, DDPG with RP, and DDPG with SW. Also, this algorithm provides higher control accuracy compared to the cascaded active disturbance rejection control algorithm in terms of position, velocity, acceleration, and attitude errors. Hwangbo et al. 
[35] have proposed a method to increase UAV stabilization. A NN trained with RL improves UAV stability. Monte Carlo samples are produced by on-policy trajectories and are used for the value function. The value network is used to guide policy training, and the policy network controls the quadrotor. Both are updated in every iteration. A new analytical measurement method describes the distance between the action distribution and a new policy for policy optimization. This policy network gives an accurate reaction to a step response. The policy stabilizes the quadrotor even under extreme situations. The algorithm shows better performance than DDPG and Trust Region Policy Optimization with a generalized advantage estimator (TRPO-gae) in terms of computation time. Mitakidis et al. [67] have also studied the target tracking problem. A CNN-based target detection algorithm has been used on an octocopter platform to track a UGV. A DDPG-RL is applied in a hierarchical controller (instead of a position controller). The CNN learns to detect the UGV, and the DDPG-RL algorithm learns to determine the roll, pitch, and yaw actions in the outer loop of the controller. These actions are taken from the NN output as normalized between [\u22121, 1], but these normalized values are multiplied by a spectrum of acceptable values. While the roll and pitch actions range between -3 and 3 degrees, the yaw action ranges between -5 and 5 degrees. An experiment is conducted with a low-altitude octocopter and with manual control of a UGV. Fluctuations are observed in the distance error due to the aggressive maneuvers of the UGV, but overall, the results are good. Li et al. [51] have studied the target tracking problem in uncertain environments. The proposed approach consists of a TD3 algorithm and meta-learning. The used algorithm is named meta twin delayed deep deterministic policy gradient (meta-TD3). TD3 learns to control the linear acceleration of the UAV and the angular velocity of the heading angle. The state space includes the position of the quadrotor in the x-y plane, the heading angle, the linear velocity, the angle between the motion direction and the straight line between the UAV and the target, and the Euclidean distance between the UAV and the target. Meta-learning overcomes the multi-task learning challenge. Tasks are trajectories of the ground vehicle that is followed. A replay buffer is built for the task experience. When the agent interacts with the environment, the state, action, reward value, and next-step state that correspond to the task are saved into the replay buffer. The method provides a significant improvement in convergence value and rate. Meta-TD3 adapts to the different movements of the ground vehicle faster than the TD3 and DDPG algorithms. Meta-TD3 tracks the target more effectively. Panetsos et al. [60] have offered a solution for the payload transportation challenge using a DRL approach. An attitude PID controller is used in the inner loop of the cascaded controller structure, while the position controller in the outer loop is replaced with a TD3-based DRL algorithm. The DRL algorithm in the outer loop learns to create the reference Euler angles, roll and pitch, and the reference translational velocity of the octocopter on the z-axis. The method controls the system successfully to reach desired waypoints.
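Several of the controllers above (meta-TD3 for target tracking and the TD3-based outer-loop position controller) build on the same TD3 core: two critics, target policy smoothing, and delayed actor updates. The following is a minimal sketch of the TD3 target computation only; the network interfaces, dimensions, and hyperparameters are placeholder assumptions rather than the implementations used in [51] or [60].

import torch

# Illustrative TD3 target computation (clipped double-Q with target policy smoothing);
# actor_target, critic1_target, and critic2_target are assumed to be callable networks.
def td3_targets(batch, actor_target, critic1_target, critic2_target,
                gamma=0.99, noise_std=0.2, noise_clip=0.5, max_action=1.0):
    state, action, reward, next_state, done = batch
    # Target policy smoothing: add clipped Gaussian noise to the target action.
    noise = (torch.randn_like(action) * noise_std).clamp(-noise_clip, noise_clip)
    next_action = (actor_target(next_state) + noise).clamp(-max_action, max_action)
    # Clipped double-Q: take the minimum of the two target critics to reduce overestimation.
    q1 = critic1_target(next_state, next_action)
    q2 = critic2_target(next_state, next_action)
    return (reward + gamma * (1.0 - done) * torch.min(q1, q2)).detach()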
Wang and Ye [62] have developed a consciousness-driven reinforcement learning (CdRL) method for trajectory tracking control. The CdRL learning mechanism consists of online attention learning and consciousness-driven actor-critic learning. The former selects the best action. The latter increases the learning efficiency based on the cooperation of all subliminal actors. Two different attention-learning methods are utilized for online attention learning: short-term attention learning and long-term attention learning. The aim of the former is to select the best action. The latter selects the best action to sustain the system\u2019s stability. The long- and short-term attention arrays are combined to make a decision about which actor should be given more attention. This learning algorithm is compared with Q-learning; the position error in the proposed algorithm is lower than in Q-learning. The same is also seen in the velocity error. However, this method is only slightly better than Q-learning when it comes to attitude error. The UAV is successfully controlled to track the desired trajectory by the CdRL algorithm. Xu et al. [83] have created a benchmark using PPO, SAC, DDPG, and DQN algorithms for single-agent tasks and multi-agent PPO (MAPPO), heterogeneous-agent PPO (HAPPO), multi-agent DDPG (MADDPG), and QMIX algorithms for multi-agent tasks with different drone systems. Single-agent tasks include hovering, trajectory tracking, and flythrough. Multi-agent tasks cover hover, trajectory tracking, flythrough, and formation. To increase task variation, payload, inverse pendulum, and transportation challenges are integrated into the single- and multi-agent tasks. Learning performance differs based on specific tasks. 4 Online learning In online learning, an agent learns and is updated during system operation, interacting with the environment and incorporating data collected from sensors to make and improve decisions. The agent learns in real-time, enhancing its decision and prediction capabilities (with respect to the assigned mission). Before proceeding, it is important to mention three more perspectives that further clarify what online learning is (compared to the definition provided in Section 2) and offer a more \u2019rounded\u2019 and more \u2019open-minded\u2019 point of view to consider. Srinivasan and Jain [84] state that the intelligent behavior of an agent may be limited given the extreme difficulty in developing knowledge structures and rule bases, which completely describe the task of the agent if the task by nature is complex. This problem can be partially overcome by causing the agent to learn on its own during this task. Hoi et al. [19] define online learning as a method of ML in which data arrives in sequential order and a learner aims to learn and to update the best prediction for future data at every step. Online learning is able to overcome the drawbacks of batch learning because the predictive model can be updated instantly for any new data instances. Otterlo and Wiering [85] state that an important aspect during the learning task (during the learning process) is the distinction between online and offline learning. The difference between these two types is influenced by factors such as whether one wants to control a real-world entity or whether all necessary information is available. Online learning performs learning directly on the problem instance.
Offline learning uses a simulator of the environment as a cheap way to get many training examples for safe and fast learning. The literature review (from 2015) reveals that 11 papers have used online learning-based algorithms to control multirotor UAVs, in particular quadrotors. Their review and comparison provide a more accurate understanding of what the research is all about, what is being learned, why, and how, and what results have been produced. Yang et al. [86] have developed optimal control protocols to solve the distributed output synchronization problem for leader-follower multiagent systems. The adopted RL algorithm solves the underlying non-homogeneous Algebraic Riccati Equations (AREs) in real-time; this is basically a distributed optimal tracking problem. Solving the AREs guarantees synchronization of the followers\u2019 and the leader\u2019s outputs. A distributed observer is derived to forecast the leader\u2019s state and to produce the reference signal for each follower. The proposed RL algorithm does not require knowledge of the quadrotor dynamics. This method gives better results than the adaptive control approach developed by Das and Lewis [87]. Moreover, when the transient response of the output synchronization error for each follower is compared, the error in [86] converges to zero faster than in [87]. A neural proactive closed-loop controller has been developed in [88] that learns control parameters within a few trials and in a computationally inexpensive way. No knowledge of the UAV dynamic model is required. This technique has been used for quadrotor speed adaptation and obstacle avoidance following the detailed block diagram configuration shown in Fig. 6. For performance evaluation purposes, an MPC and the proposed method are implemented and compared. The MPC requires knowledge of the quadrotor dynamic model. Thus, the neural proactive controller method is more appealing. It also provides robust results for speed adaptation even in the presence of wind disturbances. This approach has been implemented and tested on UAVs with different dynamics, in a Gazebo environment, under different maximum flying speeds. The neural proactive controller generates a control command 56.32% faster (on average) than the MPC; it is also 99.47% faster than the MPC in total learning and optimization time. The UAV is trained within 3-4 trials, adapting its speed and learning to stay away from obstacles at a safe distance.
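The online learning definitions above share one pattern: samples arrive sequentially and the learner updates its predictor immediately after each one. The fragment below is a minimal, generic sketch of that pattern (online gradient descent on a linear predictor); the model, feature dimension, and step size are illustrative assumptions and are not tied to any specific controller surveyed here.

import numpy as np

# Generic online-learning step: predict, observe the label, update immediately.
w = np.zeros(4)    # weights of a linear predictor (illustrative dimension)
eta = 0.01         # learning rate

def online_step(x, y):
    # Process one new sample (x, y) as it arrives and update the predictor in place.
    global w
    y_hat = w @ x                    # prediction made before the label is revealed
    w = w - eta * (y_hat - y) * x    # immediate gradient update from this single sample
    return y_hat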
As opposed to several conventional navigation approaches, this method requires a single neural network to control the quadrotor. The derived network has successfully handled all race processes in environments with demanding obstacle avoidance and navigation requirements. When tested in real-time, the quadrotor completed the race track in around 180 seconds in the first time, but in about 60 seconds after about 9000 times. Sarabakha and Kayacan [90] have proposed an online learning-based control method to improve UAV trajectory tracking. Total thrust and the three torques around the x, y, and z axes are used as control inputs. The online learning component learns to update the weight of a DNN to improve control performance. This method consists of two phases, pre-training called offline learning, and, post-training called online learning. In pre-training, supervised learning is used to control the quadrotor by mimicking a PID controller (using an input-output dataset for a set of trajectories). When the quadrotor is controlled by the trained DNN controller, a fuzzy logic system (FLS) keeps training the DNN online by providing feedback about its performance. The offline learning based performance is not different from the classic PID controller performance. However, online learning allows for the system to accurately predict evolution and desired signal estimation. This approach has been tested in slow circular, fast circular, and square-shaped trajectories. Performance of the well-tuned PID controller used in offline training, offline-trained DNN, and the online training approach are compared, with the last one clearly showing better performance for the slow and fast circular trajectories. The square-shaped trajectory was not used in the offline training phase. When the three are implemented and tested in the square-shaped trajectory, again, the online learning method slightly outperforms other approaches. O\u2019Connell et al. [91] have proposed Neural-Fly that includes an online learning phase to overcome instabilities caused by wind effects. The approach includes offline and online learning. In offline learning, the DNN output is a set of basis functions that represent aerodynamic effects. The latter phase is an online adaptive control phase that learns to adapt the control system to new wind conditions rapidly and robustly. For offline learning, the domain adversarially invariant meta-learning (DAIML) algorithm is developed to learn aerodynamic effects under wind-invariant conditions. A stochastic gradient descent method is used in the DAIML algorithm for training. For online learning, a Kalman Filter-based adaptation algorithm estimates the wind-dependent linear coefficients. The position tracking error and the aerodynamic force prediction error terms are utilized in this estimation under wind-variant conditions. The online learning component provides fast adaptation to wind variations. This approach may be used to control several quadrotors without the need to pre-train each UAV. Neural-Fly shows better performance than the nonlinear tracking controller found in [92], L1 adaptive controller, and an incremental nonlinear dynamics inversion controller when the quadrotor is subjected to time-variant wind profiles. Jia et al [93] have provided a solution to the trajectory tracking problem by combining a fuzzy logic method, a radial basis function (RBF) NN, and a classical PID controller. 
The PID output and the current UAV position information are provided to the RBF NN as input values, and the network learns to adjust the controller parameters. The fuzzy logic component selects the initial controller parameters, while the RBF NN adjusts them. Both are combined to create a new set of parameters for the PID controller. In the fuzzy logic component, a database created from expert knowledge is used to decide about the initial controller parameters. However, this dataset cannot be used in the RBF NN component since the adopted algorithm is an unsupervised learning method. Nevertheless, the combination of both methods overcomes this limitation and provides online learning abilities. When the fuzzy logic system adjusts the PID gains, the RBF NN tweaks the PID parameters to overcome PID weaknesses caused by environmental disturbances. This approach, when compared with PID and fuzzy-PID (FPID) controllers, shows better performance for trajectory tracking.
Figure 6: Online learning controller scheme of [88].
Zhang et al. [94] have investigated the problem of adaptive control and have offered a real-time brain-inspired learning control (RBiLC) method as a solution. Three attitude angle errors are set as input and a NN provides the control parameter increment as output. In the RBiLC method, a PID controller is used, and the controller parameter is updated in each interaction. A DRL method learns the controller parameter rate. The algorithm uses the Nesterov momentum technique for gradient descent. Controller stability and convergence of the tracking error are demonstrated. The quadrotor takes off and reaches a 10-meter altitude, then it hovers for 3 seconds in this position with the initial controller parameters. The learning algorithm is activated using a switch system, and the RBiLC algorithm updates the PID gains in real-time in an environment with wind disturbances. This approach learns controller parameters in 3 to 5 minutes, which is a much shorter time than in offline learning methods. The method shows significant improvement for stabilization in the roll and pitch angles, but performance is not the same in the yaw angle. Under wind disturbances, the method provides a shorter rise time and steady-state time for roll and pitch when compared to the classic PID controller. Shiri et al. [95] have studied the online path planning problem using NN-based opportunistic control. An online-trained NN learns to solve the Hamilton-Jacobi-Bellman (HJB) equation in real-time. The opportunistic HJB (oHJB) control algorithm learns whether to upload the control action (aHJB) that is the output of the NN or the current online-trained NN itself (mHJB). A base station (BS) is utilized to handle the online learning algorithm. The UAV state is downloaded to the BS. First, the NN is trained online to solve the HJB in real-time at the BS. Then, the output of the trained NN (the control action aHJB) is uploaded to the UAV. Secondly, the current online-trained NN (represented as mHJB) is uploaded to the UAV (instead of the aHJB), and the mHJB is fed with the current states. Then, the UAV takes the action that is locally assessed by the uploaded mHJB in real-time. The oHJB control provides the decision mechanism to switch from aHJB to mHJB according to the connection between the multirotor UAV and the BS.
Based on the oHJB, the UAV can keep taking actions using the last uploaded mHJB even if it loses connection with the BS. Since the size of the trained NN model in the BS is larger than the size of the action space, a trade-off occurs between uploading delays and control robustness against poor channel conditions. The oHJB arrives at the targeted location in a shorter time than aHJB and mHJB. The aHJB and mHJB may fail to arrive at the desired location. Wang et al. [96] have used a data-driven approach (Gaussian processes) to learn quadrotor models applied in partially unknown environments. Barrier certificates are learned and utilized to adjust the controller to not encroach in an unsafe region. A safe region is certified and it is progressively spread with new data. Collecting more data points about system dynamics reduces uncertainty and maximizes the volume of the safe region. Discretizing the state space helps sample a finite number of points to affirm the barrier certificates; so adaptive sampling decreases the number of the required sampling points. Also, the state space is adaptively sampled by enhancing the Lipschitz continuity of the barrier certificates without taking any risk on safety guarantees. Sampling the most uncertain state in the safe set of the system boosts learning efficiency during the exploration phase. A kernel function is utilized to decide data relevance, and 300 data points are chosen in the recursive Gaussian process by eliminating irrelevant data points online. The approach reduces the possibility of an accident occurring during the online learning phase. The adaptive sampling strategy provides significantly better results in decreasing the required sample points. The quadrotor never violates safe and unsafe regions; it uses barrier certificates to regulate the learning algorithms. A high probability of safety guarantee is produced by a Gaussian process for the dynamic system, and this process learns unmodeled dynamics to help the quadrotor successfully stay within the safe region. When the tracking error of the learning-based controller is compared to the tracking error without Gaussian process inference, the tracking error is much smaller in the learning-based controller. He et al. [97] have solved the time-varying channel(s) problem on air-to-ground links caused by UAV mobility in dynamic environments. Offline learning learns to minimize the prediction loss function in the prediction network and the evaluation loss function in the evaluation network. In online learning, the aim is to learn to minimize the difference between two networks. A state-optimized rate adaptation algorithm called StateRate is developed to solve this problem using the onboard UAV sensors. The evaluation and prediction networks are exploited in online training. The received signal strength indicator and the channel state information are used as inputs for both networks; the prediction network, additionally, uses the UAV\u2019s states as input. The output of the evaluation NN is used for supervision and compared with the output of the prediction network by using a fully connected layer. The StateRate algorithm accurately predicts the optimal rate. The rate prediction is handled as a multi-class classification problem using online learning. This method is applied and tested in a commercial quadrotor (DJI M100), and has shown better performance than the best-known rate adaptation algorithms applied in UAVs with 2-6 m/s velocity. 
Table 4: Online Learning Papers
Year | Paper | Task | Algorithm | Model-free or -based | Advantages | Compared with | Offline part | Sim/Exp
2018 | Yang et al. [86] | Navigation, synchronization | [86] | Model-free | Solves inhomogeneous algebraic Riccati equations online | Adaptive control approach in [87] | No | Sim
2018 | Wang et al. [96] | Environment exploration | Data-driven approach based on Gaussian Process | Model-free | Reduces the possible crashes in the online learning phase | - | No | Sim
2019 | Sarabakha and Kayacan [90] | Trajectory tracking | Back-propagation | Model-free | - | Offline trained network; PID controller | Yes | Sim
2019 | He et al. [97] | Agile mobility in dynamic environment | StateRate | Model-free | Finely adjusts the prediction framework, and onboard sensor data are effectively used | Previous OPT; signal-to-noise rate (SNR); SampleRate; CHARM | Yes | Sim
2019 | Wang et al. [98] | Robust control | DPG-IC | Model-free | Elimination of the steady-state error | PID controller; DDPG | Yes | Sim
2020 | Shin et al. [89] | Speed optimization | SSD MobileNet | Model-free | Quicker object detection time | - | No | Sim
2020 | Shiri et al. [95] | Path planning | oHJB | Model-free | The algorithm keeps working even if the UAV loses the connection with the BS | aHJB; mHJB | No | Sim
2022 | Jaiton et al. [88] | Speed optimization | Neural proactive control | Model-free | Computationally inexpensive | MPC | No | Exp
2023 | O\u2019Connell et al. [91] | Stabilization | DAIML | Model-free | Can control a wide range of quadrotors; does not require pretraining for each UAV | Mellinger & Kumar [92]; L1 adaptive controller; incremental nonlinear dynamics inversion controller | Yes | Exp
2023 | Jia et al. [93] | Trajectory tracking | RFPID | Model-based | Strong learning ability | PID; fuzzy-PID | No | Sim
2023 | Zhang et al. [94] | Stabilization | RBiLC | Model-free | Significant improvement for stabilization in roll and pitch; does not show the same performance in yaw | PID | No | Exp
Wang et al. [98] have worked on robust control of a quadrotor using a DRL-based controller. The method includes both offline and online learning. During offline learning, an offline learning control policy is learned. The actor network learns the normalized thrusts produced by each rotor between 0 and 1. During online learning, the offline control policy continues to be optimized. In the offline phase, the Deterministic Policy Gradient-Integral Compensator (DPG-IC) algorithm is trained with random actor and critic NN weights in an episodic structure. In the online learning phase, the trained DPG-IC is used as the initial NN structure that is trained continuously. The offline DPG-IC also runs alongside the online algorithm. The system switches to the offline algorithm when the states are close to the safety limits, and the quadrotor goes back to the safe range. The aim of the online learning phase is to close the gap between the simplified model and the model with real flight dynamics. DPG-IC eliminates the steady-state error. A well-tuned PID controller shows similar performance compared to the offline DPG-IC policy when the size of the quadrotor is increased from 0.12 m to 0.4 m, but for larger sizes, the PID controller is not stable while the proposed method shows successful performance for sizes up to 1 m. The PID controller and the offline DPG-IC algorithm are also compared for different payloads. The payload of the quadrotor increases by 10% each step; the PID cannot control the quadrotor with a 30% increased payload.
On the other hand, the proposed offline learning method successfully controls the quadrotor with up to a 50% payload increase. Then, the performance of the online learning policy is compared with the offline learning policy. The model with a 0.4 m radius and 20% increased payload is used. After only 200 online training steps, performance is significantly increased. In summary, since RL is built on the interaction between the agent and the environment and is based on a reward system, RL algorithms are useful for online learning and they are widely preferred for applications related to the control of multirotor UAVs. 5 Discussion and Conclusions This survey provides summary results that are combined in four tables. The main two tables reflect offline and online learning techniques and algorithms. Both tables offer a clear picture of the publication year, the adopted method, the task/mission, and also what is being learned. Coupled with the provided information for each method, the reader acquires information about the applicability and implementability of each technique, as well as about the specific application the approach has been developed for. The other two tables offer specific details on RL approaches and on the value function-based and policy search-based methods. Overall, this survey provides a comprehensive overview of the evolving landscape of multirotor UAV navigation and control, particularly focusing on the integration of learning-based algorithms over the past decade. Given that, eventually, there will be \u2019almost infinite computational power\u2019 requiring \u2019almost zero computational time\u2019 to return results, this survey offers a starting point for subsequent studies on what is implementable under hard real-time constraints. Acknowledgments This work is part of Serhat S\u00f6nmez\u2019s PhD dissertation research. Serhat S\u00f6nmez is the main contributor to this paper. This research was partially supported by the Ministry of National Education of the Republic of Turkey on behalf of Istanbul Medeniyet University and the D. F. Ritchie School of Engineering and Computer Science, University of Denver, CO 80208."
},
{
"url": "http://arxiv.org/abs/2303.17156v2",
"title": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations",
"abstract": "We study a new paradigm for sequential decision making, called offline policy\nlearning from observations (PLfO). Offline PLfO aims to learn policies using\ndatasets with substandard qualities: 1) only a subset of trajectories is\nlabeled with rewards, 2) labeled trajectories may not contain actions, 3)\nlabeled trajectories may not be of high quality, and 4) the data may not have\nfull coverage. Such imperfection is common in real-world learning scenarios,\nand offline PLfO encompasses many existing offline learning setups, including\noffline imitation learning (IL), offline IL from observations (ILfO), and\noffline reinforcement learning (RL). In this work, we present a generic\napproach to offline PLfO, called $\\textbf{M}$odality-agnostic\n$\\textbf{A}$dversarial $\\textbf{H}$ypothesis $\\textbf{A}$daptation for\n$\\textbf{L}$earning from $\\textbf{O}$bservations (MAHALO). Built upon the\npessimism concept in offline RL, MAHALO optimizes the policy using a\nperformance lower bound that accounts for uncertainty due to the dataset's\ninsufficient coverage. We implement this idea by adversarially training\ndata-consistent critic and reward functions, which forces the learned policy to\nbe robust to data deficiency. We show that MAHALO consistently outperforms or\nmatches specialized algorithms across a variety of offline PLfO tasks in theory\nand experiments. Our code is available at https://github.com/AnqiLi/mahalo.",
"authors": "Anqi Li, Byron Boots, Ching-An Cheng",
"published": "2023-03-30",
"updated": "2023-08-06",
"primary_cat": "cs.LG",
"cats": [
"cs.LG"
],
"label": "Original Paper",
"paper_cat": "Offline AND Reinforcement AND Learning",
"gt": "MAHALO: Unifying Offline Reinforcement Learning and Imitation Learning from Observations",
"main_content": "Introduction Online reinforcement learning (RL) has shown great promise in solving simulated tasks (Silver et al., 2016; Mnih et al., 2015). However, exploratory interactions with the environment, which are central to online RL, often can not be afforded in risk-sensitive applications, such as robotics (Ibarz et al., 2021) and healthcare (Gottesman et al., 1University of Washington 2Microsoft Research. Correspondence to: Anqi Li <anqil4@cs.washington.edu>. Proceedings of the 40 th International Conference on Machine Learning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright 2023 by the author(s). 2018). In these domains, it is more practical to consider an offline setting (Levine et al., 2020), where data is collected by behavioral policies satisfying certain criteria. There are two main approaches to solving decision making problems offline: offline imitation learning (IL) (Chang et al., 2021; Kidambi et al., 2021; Kim et al., 2021) and offline RL (Fujimoto et al., 2019; Levine et al., 2020). Offline IL generally does not assume access to the reward. Theses approaches learn with a small set of expert demonstrations and potentially a separate dynamics dataset with unknown quality. Offline IL seeks to mimic expert behavior while avoiding distribution shift caused by using offline datasets. Offline imitation learning from observations (ILfO) (Kidambi et al., 2021; Ma et al., 2022) further relaxes the requirements of expert actions. ILfO allows learning from experts with different action spaces (Edwards et al., 2020), or when the expert has a different action modality or a embodiment (Cao & Sadigh, 2021; Radosavovic et al., 2021). Offline RL, on the other hand, does not require expert-level demonstrations. It instead assumes that each transition in the offline dataset is labeled with reward. The goal of offline RL is to learn a policy which 1) always improves upon the behavioral policy (Fujimoto et al., 2019; Laroche et al., 2019), and 2) can outperform any other policies whose state-action distribution is covered by data (Xie et al., 2021). However, in real-world applications, it is expensive to either acquire expert-level demonstrations (even if they are observation-only), or label every transition with reward. In this paper, we propose a more general and realistic formulation called offline Policy Learning from Observations (PLfO). Our goal is to learn from datasets where 1) a subset of trajectories is labeled with rewards, 2) labeled trajectories may not contain actions, 3) labeled trajectories may not be of high quality, and 4) the overall data may not have full coverage. The flexibility of this formulation allows us to directly take advantage of more data sources, such as dynamics data collected for other tasks and reward data collected by a non-expert agent with a different action space. Offline PLfO considers two offline datasets: the reward dataset DR = {(s, r, s\u2032)} and dynamics dataset DA = {(s, a, s\u2032)}, where the dynamics dataset is consistent with 1 arXiv:2303.17156v2 [cs.LG] 6 Aug 2023 \fMAHALO: Unifying Offline RL and IL from Observations Table 1. Different problem formulations for sequential decision making on offline datasets. Our PLfO formulation is the most general and can leverage the broadest range of data, which makes it the most realistic. The other formulations can be reduced to PLfO with additional restrictions on data. * denotes that data can only be used partially, with either action or reward removed. 
Table 1. Different problem formulations for sequential decision making on offline datasets. Our PLfO formulation is the most general and can leverage the broadest range of data, which makes it the most realistic. The other formulations can be reduced to PLfO with additional restrictions on data. * denotes that data can only be used partially, with either action or reward removed.
Formulation | (s, a, r, s\u2032) | (s, a, s\u2032) | Expert (s, a, s\u2032) | Expert (s, s\u2032) | Non-expert (s, r, s\u2032)
Offline IL | \u2717* | \u2713 | \u2713 | \u2717 | \u2717
Offline ILfO | \u2717* | \u2713 | \u2717* | \u2713 | \u2717
Offline RL | \u2713 | \u2717 | \u2717 | \u2717 | \u2717
Offline RL w/ Unlabeled Data | \u2713 | \u2713 | \u2717 | \u2717 | \u2717
Offline PLfO (Proposed) | \u2713 | \u2713 | \u2713 | \u2713 | \u2713
In the offline RL setting, the reward and dynamics datasets are aligned, since they are from the same underlying dataset D = {(s, a, r, s\u2032)}. Recent work (Yu et al., 2022) relaxes this requirement by assuming that only a subset of transitions is labeled with rewards, i.e., the set of state transitions contained in DR is a subset of DA. On the contrary, in offline PLfO, we make no assumption on how these two datasets are related to each other. Offline ILfO can also be viewed as a special case of offline PLfO. Although ILfO does not assume knowledge of reward, it makes an implicit assumption that expert trajectories attain high returns, while making no assumptions on reward information elsewhere (e.g., on the dynamics data DA). In other words, from the perspective of offline PLfO, expert demonstrations essentially act as the reward-labeled dataset DR for ILfO. Practically, we can simply label the expert demonstrations with the maximum reward. This observation is in line with existing work (Fu et al., 2018a; Eysenbach et al., 2021; Smith et al., 2023). We refer readers to Table 1 for a summary of the comparison between offline PLfO and existing formulations. In Appendix A we provide a more comprehensive literature review. The key challenge to offline PLfO is the mismatch among the reward dataset, the dynamics dataset, and the test-time distribution. We present a generic approach to offline PLfO, called Modality-agnostic Adversarial Hypothesis Adaptation for Learning from Observations (MAHALO). Built upon the concept of pessimism from the offline RL literature (Jin et al., 2021; Liu et al., 2020; Kumar et al., 2020; Xie et al., 2021; Cheng et al., 2022), MAHALO optimizes for a performance lower bound accounting for insufficient data coverage on reward and dynamics. It can be realized by modifying existing offline RL algorithms based on adversarial training, such as (Xie et al., 2021; Cheng et al., 2022; Uehara & Sun, 2022; Rigter et al., 2022; Xie et al., 2022). In particular, we present a model-free instantiation of MAHALO built upon ATAC (Cheng et al., 2022), an offline RL algorithm based on a Stackelberg game of relative pessimism. In MAHALO, we consider the actor policy as the leader in the Stackelberg game, and adversarially train critic and reward functions so that they are data-consistent and can detect potential deficiencies of the actor policy. As a result, the policy can be robust to the missing data coverage. The contribution of this paper is two-fold. First, we propose offline PLfO, a novel formulation which relaxes data assumptions for policy learning with offline data. This general formulation encompasses most existing offline formulations, including, but not limited to, offline IL, ILfO, RL, and RL with unlabeled data. Second, we present MAHALO, a solution to offline PLfO based on pessimism. We further present a model-free realization of MAHALO.
In theory and experiments, we show that MAHALO consistently outperforms or matches performance with more specialized algorithms across various offline PLfO scenarios and tasks. 2. Preliminaries Markov Decision Process We consider RL in a Markov Decision Process (MDP) M = (S, A, P, R, \u03b3), where S and A are the state and action spaces, \u03b3 \u2208[0, 1) is the discount factor, P : S \u00d7 A \u2192\u2206(S) is the transition probability, where \u2206(\u00b7) denotes the space of probability distributions. We assume that the reward function R is defined on state transitions, i.e., R : S \u00d7 S \u2192[0, Rmax], as we consider learning from observations. This state-transition reward function R induces an effective state-action reward function \u00af R(s, a) := Es\u2032\u223cP (\u00b7|s,a)[R(s, s\u2032)], which is the expected state-transition reward under the transition probability P. We denote a Markovian policy as \u03c0 : S \u2192\u2206(A). The goal of RL is to find a policy which maximizes the expected discounted return J(\u03c0) := E[P\u221e t=0 \u03b3trt], where rt = R(st, st+1) and the expectation is over the randomness of running policy \u03c0 with transition probability P starting from an initial state distribution d0(s). For a policy \u03c0 and any function f : S \u00d7 A \u2192R, we define the transition operator P\u03c0 as (P\u03c0f)(s, a) := \u03b3Es\u2032\u223cP (\u00b7|s,a)[f(s\u2032, \u03c0)], where f(s\u2032, \u03c0) = P a\u2032 \u03c0(a\u2032|s\u2032)f(s\u2032, a\u2032). For a policy \u03c0, we define the average state-action occupancy measure d\u03c0(s, a) := (1 \u2212\u03b3)E[P\u221e t=0 \u03b3t1(st = s, at = a)]. We recall that J(\u03c0) = 1 1\u2212\u03b3 Es,a\u223cd\u03c0Es\u2032\u223cP (s\u2032|s,a)[R(s, s\u2032)]. Offline RL Offline RL studies the problem of policy learning from a reward-labeled transition dataset D = {(s, a, r, s\u2032)}. The goal of offline RL is to learn the best policy that can be explained by data, while not making assumptions on the data coverage quality. An offline RL algorithm 2 \fMAHALO: Unifying Offline RL and IL from Observations ideally is able to learn the optimal policy of the MDP M, as long as the dataset covers the states and actions that the optimal policy would visit. Such robustness of offline RL to data coverage quality is commonly realized by pessimism, which reasons about the worst case for states and actions not covered by the offline data. Being pessimistic in the face of uncertainty naturally forces the agent to search for good policies within the data support. Typically, the pessimism is implemented via behavior regularization (Fujimoto & Gu, 2021; Wu et al., 2019), value penalty (Jin et al., 2021; Yu et al., 2020b; Kumar et al., 2020), or adversarial training via a two-player game (Cheng et al., 2022; Xie et al., 2021; Rigter et al., 2022; Uehara & Sun, 2022). Offline IL Offline IL (such as behavior cloning) studies the problem of policy learning using only the transition dataset D = {(s, a, s\u2032)} without reward labels. The transition data is a union of near-optimal expert data DE and (optionally) a separately collected data of unknown quality DX. Like offline RL, the data in offline IL does not have full coverage, and the principle of mimicking the expert data in IL also effectively encourages the learner to stay within the the data distribution. In fact, Cheng et al. 
(2022) show that offline IL can be viewed as an offline RL problem with the largest reward uncertainty: By running an offline RL algorithm that optimizes for the relative performance between the learner and the behavioral policy under data uncertainty, an IL mimicking behavior would naturally occur. Stackelberg game A Stackelberg game is a sequential two-player game (Von Stackelberg, 2010) between a leader x and a follower y. In this game, the leader plays first and then the follower plays after seeing the leader\u2019s decision. The game can be written as a bilevel optimization problem: maxx f(x, yx) s.t. yx \u2208maxy g(x, y), where f and g are the objectives of the leader and the follower, respectively. 3. Offline PLfO: A Unified Formulation for Offline RL and IL from Observations In this section, we first introduce the generic setup of offline policy learning from observations (PLfO). Then we discuss practical scenarios where the data cannot be fully leveraged in offline RL and IL setups but is within the PLfO setup. 3.1. Problem Formulation In offline PLfO, we assume access to pre-collected offline data consisting of transitions. In contrast to typical offline RL, we allow our data to include transitions which contain either reward or action. In other words, in offline PLfO, we consider two datasets, a reward dataset DR = {(s, r, s\u2032)} and a dynamics dataset DA = {(s, a, s\u2032)}. We note that these two datasets may not necessarily have an intersection. We assume that both datasets are compliant with the underlying MDP. For the dynamics dataset, we follow standard compliance assumption in offline RL literature (Jin et al., 2021): 1) for any (s, a, s\u2032) \u2208DR, we have s\u2032 \u223cP(\u00b7|s, a); and 2) the state-action pairs (s, a) in DA are sampled from the discounted state-action occupancy d\u00b5 of a behavioral policy \u00b5 : S \u2192\u2206(A). We slightly abuse notation to use \u00b5 to also denote the discounted state-action occupancy d\u00b5, i.e., \u00b5 = d\u00b5. For the reward dataset, for any (s, r, s\u2032) \u2208DR, we assume that the reward function R is defined on the state transition (s, s\u2032) and r = R(s, s\u2032). We do not make assumption on the underlying distribution of state transitions (s, s\u2032) \u223c\u03bd, e.g., s\u2032 in (s, s\u2032) may not be sampled from P(\u00b7|s, a) for some action a. With a slight abuse of notation, we will also use \u03bd to denote the underlying distribution of transition tuple containing reward (s, r, s\u2032). Like offline RL, we do not assume the coverage of these datasets on states and actions. The goal of offline PLfO is to learn a policy \u03c0 which obtains high expected discounted return J(\u03c0) in MDP M while using reward and dynamics datasets of limited coverage. 3.2. Relation to Existing Formulations Offline PLfO is a general formulation encompassing many existing problem setups. This means that an offline PLfO algorithm can solve any of the following problems, or the combination of them, via simple reductions. Offline RL Offline RL can be reduced to offline PLfO where the reward dataset and dynamics dataset are generated from the same underlying offline dataset D = {(s, a, r, s\u2032)}. The alignment assumption makes offline RL in general an easier problem than offline PLfO since there is no mismatch between reward and dynamics data. Offline RL with unlabeled data Yu et al. (2022); Singh et al. (2020); Hu et al. (2023) consider a formulation where a subset of dynamics data is labeled with reward. 
In this scenario, the reward data is well-covered by dynamics data. The main challenge here is to leverage unlabeled dynamics data while not suffering from insufficient reward coverage. Offline IL Offline IL uses a dataset of expert demonstrations DE. Although IL generally assumes no reward information, it makes an implicit assumption on expert performance. In other words, IL is a learning problem where only positive (i.e., high return) examples are given. This observation is also in line with existing work which takes a density matching perspective on IL (Ho & Ermon, 2016; Kim et al., 2021) and reward learning from demonstrations (Fu et al., 2018a; Eysenbach et al., 2021). Recent work such as (Kim et al., 2021; Chang et al., 2021; Smith et al., 2023) has considered offline IL with a separately-collected dynamics dataset DX of unknown quality. Offline IL can be reduced to offline PLfO: The reward dataset DR = {(s, Rmax, s\u2032)} comes from expert demonstrations, where Rmax is the maximum reward. The dynamics dataset contains both the expert demonstrations DE and, 3 \fMAHALO: Unifying Offline RL and IL from Observations if given, the separately-collected dynamics data DX. Offline ILfO Offline ILfO is similar to offline IL, except that the expert demonstrations only contain state transitions, i.e., DE = {(sE, s\u2032 E)}. As such, offline ILfO can be viewed as offline PLfO with reward dataset DR = {(sE, Rmax, s\u2032 E)} and dynamics dataset DA = DX. Compared to the previous setups, offline ILfO faces an additional challenge of insufficient action coverage, as the expert state transitions (sE, s\u2032 E) may not be in the dynamics dataset DA. 3.3. Practical Scenarios We now consider a few practical examples of reward and dynamics data sources. As we will see, it is likely that the coverage of reward data mismatches with that of dynamics data. In such scenario, offline PLfO formulation is wellsuited as it can leverage all available data. Sources of dynamics data Since dynamics data is taskagnostic, dynamics data can be obtained from running datacollection policies on the MDP, as well as any other MDPs with the same dynamics. For example, in a multi-task setting (Yu et al., 2020a), dynamics data can be acquired when solving different task rewards. However, it is expensive to label dynamics data with rewards for every task. Realistically, data often only has reward information of the particular task that the data collection is for, which means that some state transitions only have actions, but not rewards. Sources of reward data One practical strategy to reward labeling is to label randomly sampled trajectories from a pre-collected dynamics dataset (Yu et al., 2022). This means that a large subset of the dynamics data remains unlabeled. Another setting is to re-use reward data collected by another agent (with the same state space). It is possible that the action information is unavailable or unusable in the underlying MDP. For example, the other agent can have different embodiment or use a different control modality. Additionally, reward information can be implicitly provided through expert demonstrations, as is discussed in Section 3.2. 4. MAHALO We propose a generic approach to offline PLfO, called Modality-agnostic Adversarial Hypothesis Adaptation for Learning from Observations (MAHALO). It is inspired by the idea of pessimism in offline RL (Jin et al., 2021). 4.1. 
Solution Concept to Offline PLfO In order to tackle the heterogeneous uncertainty in offline PLfO, we leverage the concept of a version space from the offline RL literature (Cheng et al., 2022; Xie et al., 2021; Rigter et al., 2022; Uehara & Sun, 2022; Xie et al., 2022) to construct a performance lower bound for optimizing policies in MAHALO. Here, a version space, denoted as $\\mathcal{J} = \\{\\hat{J} : \\Pi \\to \\mathbb{R}\\}$, is the space of policy performance hypotheses that remain feasible after observing data in DA and DR. For example, if a bipedal robot has experienced that falling down receives zero rewards, then any hypothesis in the version space would give a zero reward to any policy that makes the robot fall, but the hypotheses in the version space may disagree on which reward to give for other behaviors. In MAHALO we use the version space $\\mathcal{J}$ to encapsulate uncertainty due to heterogeneous missing coverage. Because the version space $\\mathcal{J}$ by definition includes the true performance function J, we can use $\\mathcal{J}$ to construct a policy performance lower bound. For example, for a policy \u03c0 we can compute its absolute performance lower bound naturally as $\\min_{\\hat{J} \\in \\mathcal{J}} \\hat{J}(\\pi)$. Thus we can optimize policies by solving a saddle-point problem, $\\max_{\\pi \\in \\Pi} \\min_{\\hat{J} \\in \\mathcal{J}} \\hat{J}(\\pi)$, which provides a way to systematically optimize policy performance accounting for missing information in data. For the case where the data are fully labeled with rewards and actions, the offline RL literature has proposed several designs of the version space $\\mathcal{J}$: with MDP models or value functions, in conjunction with absolute or relative pessimism (Cheng et al., 2022; Xie et al., 2022; 2021; Rigter et al., 2022; Uehara & Sun, 2022). Here we generalize this technique to offline PLfO. In particular, we will design model-free versions of MAHALO, as model-free methods are simpler to implement, use fewer hyperparameters, and have demonstrated superior empirical performance in offline RL (Yu et al., 2021). In principle, MAHALO can be realized by any version-space offline RL algorithm, including those based on models. 4.2. Model-Free Realization of MAHALO We now present a model-free realization of MAHALO based on the concept of relative pessimism in offline RL (Cheng et al., 2022). For clarity, we first present the formulation at the population level. We will later provide theoretical analysis for the finite-sample scenario in Section 4.2.1. We introduce and analyze another realization of MAHALO based on absolute pessimism (Xie et al., 2021) in Appendix D. We formulate offline PLfO as a Stackelberg game, with the actor policy \u03c0 \u2208\u03a0 as the leader. The followers consist of a critic f \u2208F and a reward function g \u2208G:
$\\hat{\\pi} \\in \\arg\\max_{\\pi \\in \\Pi} \\mathcal{L}_{\\mu}(\\pi, f^{\\pi})$ (1)
s.t. $f^{\\pi} \\in \\arg\\min_{f \\in \\mathcal{F}, g \\in \\mathcal{G}} \\mathcal{L}_{\\mu}(\\pi, f) + \\alpha \\mathcal{E}_{\\nu}(g) + \\beta \\mathcal{E}_{\\mu}(\\pi, f, g)$,
with $\\alpha \\geq 0$, $\\beta \\geq 0$ being hyperparameters, and
$\\mathcal{L}_{\\mu}(\\pi, f) := \\mathbb{E}_{\\mu}[f(s, \\pi) - f(s, a)]$, (2)
$\\mathcal{E}_{\\nu}(g) := \\mathbb{E}_{\\nu}[(g(s, s') - r)^{2}]$, (3)
$\\mathcal{E}_{\\mu}(\\pi, f, g) := \\mathbb{E}_{\\mu}[((f - \\bar{g} - \\mathcal{P}^{\\pi} f)(s, a))^{2}]$, (4)
where $f(s, \\pi) := \\mathbb{E}_{a \\sim \\pi(\\cdot|s)}[f(s, a)]$ and $\\bar{g}(s, a) := \\mathbb{E}_{s' \\sim P(\\cdot|s, a)}[g(s, s')]$.
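To make the follower objective in (1)-(4) concrete, the sketch below writes it as losses over mini-batches from DA and DR (PyTorch-style Python). The interfaces policy(s), critic_f(s, a), and reward_g(s, s') are assumed placeholders, and the detached one-step target used for the Bellman term is a simplification of the double-sampling correction in the paper's finite-sample objective; this is an illustrative sketch, not the exact training procedure.

import torch

# Sketch of the follower losses in (1)-(4); interfaces and the detached target are assumptions.
def follower_loss(policy, critic_f, reward_g, dyn_batch, rew_batch,
                  alpha=1.0, beta=1.0, gamma=0.99):
    s, a, s_next = dyn_batch       # dynamics data D_A: (s, a, s')
    sr, r, sr_next = rew_batch     # reward data  D_R: (s, r, s')

    # (2) relative pessimism term L_mu(pi, f) = E_mu[f(s, pi(s)) - f(s, a)]
    l_mu = (critic_f(s, policy(s)) - critic_f(s, a)).mean()

    # (3) reward consistency E_nu(g) on the reward dataset
    e_nu = (reward_g(sr, sr_next) - r).pow(2).mean()

    # (4) Bellman consistency of f with respect to the learned reward g on dynamics data
    target = reward_g(s, s_next) + gamma * critic_f(s_next, policy(s_next))
    e_mu = (critic_f(s, a) - target.detach()).pow(2).mean()

    # Followers minimize this total; the actor (leader) then maximizes l_mu as in (1).
    return l_mu + alpha * e_nu + beta * e_mu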
This optimization problem can be viewed as a regularized version of the constrained problem with version space $\mathcal{J} = \{\hat{J} : \hat{J}(\pi) = J(\mu) + \tfrac{1}{1-\gamma} L_\mu(\pi, f),\ \mathcal{E}_\nu(g) \le \epsilon_\alpha,\ \mathcal{E}_\mu(\pi, f, g) \le \epsilon_\beta\}$ for some $\epsilon_\alpha, \epsilon_\beta \ge 0$ related to $\alpha, \beta$.¹ We adopt the regularized version, as it is easier to implement numerically. To understand the above formulation, let us start with the last two terms of the followers' objective. Recall that $\nu$ is the underlying distribution of the reward data, so $\mathcal{E}_\nu(g)$ measures whether the candidate reward function $g$ is consistent with the data. $\mathcal{E}_\mu(\pi, f, g)$ quantifies whether the candidate critic $f$ is Bellman-consistent on the dynamics data for policy $\pi$ and reward function $g$. Therefore, with sufficiently large $\alpha$ and $\beta$, the reward $g^\pi$ and critic $f^\pi$ are both (approximately) consistent with the reward and dynamics data. On the other hand, $L_\mu(\pi, f)$ is the relative performance between the candidate policy $\pi$ and the behavioral policy $\mu$, with values estimated by the critic $f$. The followers minimize this quantity, meaning that $f^\pi$ provides a pessimistic relative evaluation of policy $\pi$ (shown formally in Proposition 4.4). The learned actor policy $\hat{\pi}$ maximizes this lower bound to improve over the behavioral policy. We stress the importance of making the reward function additionally minimize the Bellman error in the followers' objective (1). By minimizing (4), the reward and critic functions work together to provide a pessimistic evaluation of the policy. In other words, the Bellman error (4) connects the learned reward function to the pessimistic loss (2), since the reward function can change in a way that makes the critic more pessimistic. The Stackelberg game for MAHALO in (1) is similar to ATAC (Cheng et al., 2022). The main difference is that ATAC uses the observed reward in the Bellman error term, as it assumes the reward is available for every transition. MAHALO, on the other hand, trains the reward function to be consistent with the reward data (3) and Bellman-consistent with a pessimistic critic function (4) on the dynamics data. Below we discuss three desirable properties of MAHALO. First, given large dynamics and reward datasets, MAHALO can outperform any policy whose state-action distribution is well covered by both datasets. Second, MAHALO ensures safe policy improvement (Fujimoto et al., 2019; Laroche et al., 2019), i.e., the learned policy $\hat{\pi}$ is no worse than the behavioral policy $\mu$ given sufficient data. Third, MAHALO automatically adapts to the structure within the data: when applied to more restrictive formulations such as offline RL, offline IL, and offline ILfO, MAHALO behaves similarly to specialized algorithms. ¹This analogy between the constrained and the regularized versions can be derived following the principle in (Xie et al., 2021). 4.2.1. THEORETICAL PROPERTIES We analyze the solution to a finite-sample version of (1) based on the dynamics dataset $D_A$ and the reward dataset $D_R$:
$$\hat{\pi} \in \arg\max_{\pi \in \Pi} L_{D_A}(\pi, f^\pi) \quad (5)$$
$$\text{s.t.} \quad f^\pi \in \arg\min_{f \in \mathcal{F}, g \in \mathcal{G}} L_{D_A}(\pi, f) + \alpha \mathcal{E}_{D_R}(g) + \beta \mathcal{E}_{D_A}(\pi, f, g),$$
where $L_{D_A}(\pi, f)$ and $\mathcal{E}_{D_R}(g)$ are the empirical estimates of $L_\mu(\pi, f)$ and $\mathcal{E}_\nu(g)$, respectively, and
$$\mathcal{E}_{D_A}(\pi, f, g) := \mathbb{E}_{D_A}\left[\left(f(s, a) - g(s, s') - \gamma f(s', \pi)\right)^2\right] - \min_{f' \in \mathcal{F}} \mathbb{E}_{D_A}\left[\left(f'(s, a) - g(s, s') - \gamma f(s', \pi)\right)^2\right]. \quad (6)$$
The quantity $\mathcal{E}_{D_A}(\pi, f, g)$ is the estimated Bellman error (Antos et al., 2008). We show in Appendix C (Lemma C.2) that $\mathcal{E}_{D_A}(\pi, f, g)$ can be used to approximate the Bellman error $\mathcal{E}_\mu(\pi, f, g)$ defined in (4). For clarity, we make a perfect realizability and completeness assumption below; our analysis can easily be modified to consider an approximate version.
Assumption 4.1 (Realizability & Completeness). We assume $\mu \in \Pi$, $R \in \mathcal{G}$, and for all $\pi \in \Pi$, $Q^\pi \in \mathcal{F}$. In addition, $\inf_{f' \in \mathcal{F}} \|f' - \bar{g} - \mathcal{P}^\pi f\|_\mu = 0$ for all $g \in \mathcal{G}$, $f \in \mathcal{F}$.
Due to the nature of offline learning, we compare the performance of the learned policy $\hat{\pi}$ with a policy $\pi$ whose induced state-action occupancy $d^\pi$ is "well-covered" by the data distributions. Similar to (Xie et al., 2021; Cheng et al., 2022; Uehara & Sun, 2022), we define error transfer coefficients to measure distribution shift from the data distributions based on the critic class $\mathcal{F}$ and reward class $\mathcal{G}$. This is a weaker notion than, e.g., the density ratio (Munos & Szepesvári, 2008).
Definition 4.2 (Error transfer coefficients). The Bellman error transfer coefficient between $\rho \in \Delta(S \times A)$ and $\mu \in \Delta(S \times A)$ under policy $\pi$, critic class $\mathcal{F}$ and reward class $\mathcal{G}$ is defined as
$$C(\rho; \mu, \mathcal{F}, \mathcal{G}, \pi) := \sup_{f \in \mathcal{F}, g \in \mathcal{G}} \frac{\|f - \bar{g} - \mathcal{P}^\pi f\|^2_{2,\rho}}{\|f - \bar{g} - \mathcal{P}^\pi f\|^2_{2,\mu}}. \quad (7)$$
Similarly, the reward error transfer coefficient between $\rho$ and $\nu \in \Delta(S \times S)$ under reward class $\mathcal{G}$ is defined as
$$C(\rho; \nu, \mathcal{G}) := \sup_{g \in \mathcal{G}} \frac{\|\bar{g} - \bar{R}\|^2_{2,\rho}}{\|g - R\|^2_{2,\nu}}. \quad (8)$$
We use $d_{\mathcal{F},\mathcal{G},\Pi}$ to denote the joint statistical complexity of the critic class $\mathcal{F}$, reward class $\mathcal{G}$ and policy class $\Pi$, and $d_\mathcal{G}$ to denote the statistical complexity of the reward class $\mathcal{G}$ (e.g., when $\mathcal{F}$, $\mathcal{G}$ and $\Pi$ are all finite, we have $d_{\mathcal{F},\mathcal{G},\Pi} = O(\log(|\mathcal{F}||\mathcal{G}||\Pi|)/\delta)$ where $\delta$ is the failure probability). In Appendix C, we establish statistical complexity using covering numbers. We now state the main theoretical property of MAHALO with relative pessimism in (5).
Theorem 4.3. Under Assumption 4.1, let $\hat{\pi}$ be the solution to (5) and let $\pi \in \Pi$ be any comparator policy. Let $C_1 \ge 1$, $C_2 \ge 1$ be constants and $\rho \in \Delta(S \times A)$ be a distribution that satisfies $C(\rho; \mu, \mathcal{F}, \mathcal{G}, \pi) \le C_1$ and $C(\rho; \nu, \mathcal{G}) \le C_2$. Define $\epsilon_\mu := V_{\max}^2 d_{\mathcal{F},\mathcal{G},\Pi} / |D_A|$, $\epsilon_\nu := R_{\max}^2 d_\mathcal{G} / |D_R|$, and $\epsilon := (\sqrt{C_1 \epsilon_\nu} + \sqrt{C_2 \epsilon_\mu})^2$. Choosing $\alpha = \Theta(V_{\max}^{1/3} \epsilon^{1/3} / \epsilon_\nu)$ and $\beta = \Theta(V_{\max}^{1/3} \epsilon^{1/3} / \epsilon_\mu)$, with high probability,
$$J(\pi) - J(\hat{\pi}) \le O\!\left(\frac{1}{1-\gamma}\left(\frac{C_1^{1/3} V_{\max} (d_{\mathcal{F},\mathcal{G},\Pi})^{1/3}}{|D_A|^{1/3}} + \frac{C_2^{1/3} R_{\max} (d_\mathcal{G})^{1/3}}{|D_R|^{1/3}}\right)\right) + \underbrace{\frac{\langle d^\pi \setminus \rho,\ \bar{g}^\pi + \mathcal{P}^\pi f^\pi - f^\pi\rangle}{1-\gamma}}_{\text{off-support error (dynamics)}} + \underbrace{\frac{\langle (d^\pi \ominus \mu) \setminus \rho,\ |\bar{R} - \bar{g}^\pi|\rangle}{1-\gamma}}_{\text{off-support error (reward)}}, \quad (9)$$
where $(d^\pi \ominus \mu) := d^\pi \setminus \mu + \mu \setminus d^\pi$ with $(d_1 \setminus d_2)(s, a) := \max(d_1(s, a) - d_2(s, a), 0)$ and $\langle \iota, f\rangle := \sum_{(s,a) \in S \times A} \iota(s, a) f(s, a)$.
The first term in (9) is the statistical error, which vanishes as $|D_A|, |D_R| \to \infty$. The second and third terms measure how far the comparator policy $\pi$ lies outside the support of the dynamics and reward data distributions. In other words, our learned policy $\hat{\pi}$ can compete with any policy $\pi$ that is well supported by both the dynamics and the reward data. Compared with Theorem 5 of ATAC (Cheng et al., 2022), we have an extra term for the statistical error of the estimated reward and an off-support reward error, because we do not assume access to rewards on all transitions. Despite using partial reward labels, MAHALO has a robust policy improvement property, similar to ATAC (Cheng et al., 2022), which guarantees that, for a known range of hyperparameters, the learned policy $\hat{\pi}$ is no worse than the behavioral policy by more than statistical errors.
Proposition 4.4 (Robust Policy Improvement). Under Assumption 4.1, let $\hat{\pi}$ be the solution to (5). Let $\epsilon_\mu$ and $\epsilon_\nu$ be as defined in Theorem 4.3. Then, for any fixed $\alpha \ge 0$ and $\beta \ge 0$, with high probability,
$$J(\mu) - J(\hat{\pi}) \le O\!\left(\frac{V_{\max}}{1-\gamma}\sqrt{\frac{d_{\mathcal{F},\mathcal{G},\Pi}}{|D_A|}} + \frac{\alpha R_{\max}^2 d_\mathcal{G}}{(1-\gamma)|D_R|} + \frac{\beta V_{\max}^2 d_{\mathcal{F},\mathcal{G},\Pi}}{(1-\gamma)|D_A|}\right). \quad (10)$$
As $|D_A|$ and $|D_R| \to \infty$, the solution actor policy $\hat{\pi}$ is guaranteed to be no worse than the behavioral policy $\mu$ for any choice of fixed $\alpha, \beta \ge 0$.
4.2.2. ADAPTATION TO STRUCTURE WITHIN DATA Since MAHALO can be used to solve offline PLfO, it can also be used to solve more restrictive problems via simple reductions. A natural question is then: can MAHALO achieve similar performance to specialized algorithms? The answer is yes. Below we show that this is because MAHALO can adapt to the hidden structure within the data despite being agnostic to the relationship between the dynamics and reward datasets.
Offline RL In offline RL, since the reward and dynamics datasets are aligned, we have $\mathcal{E}_\nu(g) = \mathbb{E}_\mu[\mathbb{E}_{s' \sim P(\cdot|s,a)}[(g(s, s') - R(s, s'))^2]]$. The expected Bellman error of the critic $f$ with the true reward $R$, i.e., $\mathcal{E}_\mu(\pi, f, R)$, can be upper bounded by $2\mathcal{E}_\nu(g) + 2\mathcal{E}_\mu(\pi, f, g)$. With sufficiently large $\alpha$ and $\beta$, for any actor policy $\pi$, its corresponding critic $f^\pi$ is an approximately Bellman-consistent critic function. Therefore, MAHALO behaves similarly to ATAC (Cheng et al., 2022). This can also be seen from Theorem 4.3: since $|D_A| = |D_R|$, the statistical error is dominated by the first term, which matches the statistical error of ATAC. In offline RL, good dynamics coverage implies good data coverage, i.e., we have $(d^\pi \ominus \mu) \setminus \rho \le d^\pi \setminus \rho$, which gives a similar off-support error term.
Offline RL with unlabeled data For the sake of simplicity, we consider a tabular setting.
In this case, regardless of the policy $\pi \in \Pi$, with sufficiently large $\alpha$ the reward function $g^\pi$ is such that $g^\pi(s, s') \approx R(s, s')$ whenever $(s, s')$ is within the coverage of the reward data $\nu$; the value of $g^\pi(s, s')$ on other states adapts pessimistically according to the learner's policy. The critic $f^\pi$ then effectively conducts a pessimistic policy evaluation of $\pi$ under this reward function. This means that MAHALO in the tabular setting behaves similarly to UDS (Yu et al., 2022), a strategy in which zero reward is given to all unlabeled data. The difference, though, is that MAHALO still assigns an accurate reward to unlabeled dynamics transitions $(s, a, s')$ when $(s, s')$ is within the coverage of $\nu$. This implies that MAHALO induces less bias than UDS, even though UDS knows more about the underlying data-generation process. Offline IL and ILfO In the simplest offline IL setting, where only the set of expert demonstrations $D_E$ is given, Proposition 4.4 (with $\pi_E = \mu$) shows that the learned policy $\hat{\pi}$ is no worse than the expert policy $\pi_E$ (in terms of $R(s, s') = R_{\max}\mathbb{1}[(s, s') \in \mathrm{supp}(\nu)]$) up to statistical errors. Now consider the scenario where we have access to the expert demonstrations $D_E$ (which may or may not have actions)² and a separately collected dynamics dataset $D_X$ of unknown quality. Theorem 4.3 (with $R(s, s') = R_{\max}\mathbb{1}[(s, s') \in \mathrm{supp}(\nu)]$) shows that the learned policy stays within the support of the expert distribution, similar to (Wang et al., 2019; Smith et al., 2023). ²When $D_E$ contains actions, we can alternatively replace $L_{D_A}(\pi, f)$ with $L_{D_E}(\pi, f)$, which would ensure robust policy improvement relative to the expert policy. Summary MAHALO is a general, data-agnostic algorithm for solving offline PLfO problems. When applied to more restrictive settings where the data present additional structure, MAHALO automatically adapts to that structure and achieves behavior on par with existing specialized algorithms. This means that MAHALO can leverage broad sources of data, and no special care (e.g., data alignment or management) needs to be taken during data collection. 4.3. Implementation The MAHALO realization above can be implemented by making a few simple modifications to ATAC (Cheng et al., 2022). The resulting algorithm is presented in Algorithm 1, with the modifications marked in magenta.
Algorithm 1 MAHALO (realized by ATAC)
1: Input: batch datasets $D_R$, $D_A$; policy $\pi$; critics $f_1, f_2$; coefficients $\alpha, \beta \ge 0$ and $\tau, w \in [0, 1]$.
2: Initialize target networks $\bar{f}_1 \leftarrow f_1$, $\bar{f}_2 \leftarrow f_2$.
3: for $k = 1, 2, \dots, K$ do
4:   Sample minibatches $D^{\mathrm{mini}}_R$ and $D^{\mathrm{mini}}_A$ from $D_R$ and $D_A$.
5:   Compute critic loss $l_{\mathrm{critic}}(f_i) \leftarrow L_{D^{\mathrm{mini}}_A}(\pi, f_i) + \beta \mathcal{E}^w_{D^{\mathrm{mini}}_A}(\pi, f_i, g)$, for $i \in \{1, 2\}$.
6:   Compute reward loss $l_{\mathrm{reward}}(g) \leftarrow \alpha \mathcal{E}_{D^{\mathrm{mini}}_R}(g) + \beta \sum_{i \in \{1,2\}} \mathcal{E}^w_{D^{\mathrm{mini}}_A}(\pi, f_i, g)$.
7:   Update critic networks $f_i \leftarrow \mathrm{Proj}_\mathcal{F}(f_i - \eta_{\mathrm{fast}} \nabla l_{\mathrm{critic}})$ for $i \in \{1, 2\}$.
8:   Update reward network $g \leftarrow \mathrm{Proj}_\mathcal{G}(g - \eta_{\mathrm{fast}} \nabla l_{\mathrm{reward}})$.
9:   Compute actor loss $l_{\mathrm{actor}} \leftarrow -L_{D^{\mathrm{mini}}_A}(\pi, f_1)$.
10:  Update actor network $\pi \leftarrow \mathrm{Proj}_\Pi(\pi - \eta_{\mathrm{slow}} \nabla l_{\mathrm{actor}})$.
11:  Update targets $\bar{f} \leftarrow (1 - \tau)\bar{f} + \tau f$ for $(f, \bar{f}) \in \{(f_i, \bar{f}_i)\}_{i=1,2}$.
12: end for
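As a companion to the listing above, the following is a minimal PyTorch-style sketch of one MAHALO iteration, folding in the DQRA surrogate described in Section 4.3.1. The `policy.sample` interface, dictionary-style minibatches, and optimizer handling are illustrative assumptions rather than the authors' code; the ℓ2 weight projection and the entropy regularization used in the actual implementation are omitted for brevity.

```python
import torch

def td_loss(f, f_target, g, policy, batch, gamma):
    # E^td(pi, f, f', g) = E[(f(s,a) - g(s,s') - gamma * f'(s', pi(s')))^2]
    s, a, s_next = batch["s"], batch["a"], batch["s_next"]
    a_next = policy.sample(s_next)
    return ((f(s, a) - g(s, s_next) - gamma * f_target(s_next, a_next)) ** 2).mean()

def dqra_loss(f, t1, t2, g, policy, batch, gamma, w):
    # Double-Q residual surrogate: mix the TD loss against the critic itself
    # with the TD loss against the minimum of the two target critics.
    f_min = lambda s, a: torch.minimum(t1(s, a), t2(s, a))
    return (1 - w) * td_loss(f, f, g, policy, batch, gamma) \
        + w * td_loss(f, f_min, g, policy, batch, gamma)

def mahalo_step(policy, f1, f2, t1, t2, g, opt_fast, opt_slow,
                batch_R, batch_A, alpha, beta, gamma, w, tau):
    s, a = batch_A["s"], batch_A["a"]
    a_pi = policy.sample(s)

    # Followers (lines 5-8): critics and reward descend the relative-performance
    # term plus the weighted Bellman surrogate and the reward regression loss.
    pessimism = (f1(s, a_pi) - f1(s, a)).mean() + (f2(s, a_pi) - f2(s, a)).mean()
    bellman = dqra_loss(f1, t1, t2, g, policy, batch_A, gamma, w) \
        + dqra_loss(f2, t1, t2, g, policy, batch_A, gamma, w)
    reward_fit = ((g(batch_R["s"], batch_R["s_next"]) - batch_R["r"]) ** 2).mean()
    follower_loss = pessimism + beta * bellman + alpha * reward_fit
    opt_fast.zero_grad()
    follower_loss.backward()
    opt_fast.step()

    # Leader (lines 9-10): the actor ascends the relative-pessimism objective
    # evaluated with the first critic, using the slower learning rate.
    actor_loss = -(f1(s, policy.sample(s)) - f1(s, a)).mean()
    opt_slow.zero_grad()
    actor_loss.backward()
    opt_slow.step()

    # Line 11: Polyak averaging of the target critics.
    with torch.no_grad():
        for src, tgt in ((f1, t1), (f2, t2)):
            for p, p_t in zip(src.parameters(), tgt.parameters()):
                p_t.mul_(1 - tau).add_(tau * p)
```

Here `opt_fast` is assumed to own the critic and reward parameters and `opt_slow` the policy parameters, matching the two-timescale update of the algorithm.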
This implementation of MAHALO is based on a reduction of two-player game to no-regret policy optimization (Cheng et al., 2022). We use ADAM (Kingma & Ba, 2015) optimizer with a faster learning rate \u03b7fast for the critic and reward functions, and a smaller learning rate \u03b7slow for the actor. 4.3.1. ACTOR AND CRITIC UPDATE We use a strategy for updating the critic similar to ATAC. ATAC uses a surrogate to the Bellman error term in Equation (4) called double Q residual algorithm (DQRA) loss, which combines double Q heuristics (Fujimoto et al., 2018), the residual algorithm (Baird, 1995), and target networks (Mnih et al., 2015). The critic is parameterized by two networks {f1, f2}, each with a delayed target { \u00af f1, \u00af f2}. The target value is computed by taking the minimum of the two targets \u00af fmin(s, a) = mini\u2208{1,2} \u00af fi(s, a). DQRA uses a convex combination of the temporal difference (TD) losses of the critic network and target networks to stabilize learning. For i \u2208{1, 2}, the DQRA loss is defined as Ew Dmini A (\u03c0, fi, g) := (1 \u2212w)Etd Dmini A (\u03c0, fi, fi, g) (11) + wEtd Dmini A (\u03c0, fi, \u00af fmin, g), where w \u2208[0, 1] is the weight and the TD loss is given by Etd Dmini A (\u03c0, f, f \u2032, g) := EDmini A [(f(s, a)\u2212g(s, s\u2032)\u2212\u03b3f \u2032(s\u2032, \u03c0))2]. Note that here we use predicted reward g(s, s\u2032) since reward r is not observed in DA. After each gradient update, we apply an \u21132 projection on critic network weights (not on bias terms) (line 7) as is done in ATAC (Cheng et al., 2022). We update the actor using the same way as ATAC. The actor policy optimizes for a single critic f1 and uses a Lagrangian relaxation of minimum entropy constraint similar to SAC (Haarnoja et al., 2018). 4.3.2. REWARD UPDATE The major difference between Algorithm 1 and ATAC is the reward function update. We estimate the reward prediction loss empirically from minibatches sampled from the reward dataset DR: EDmini R (g) = EDmini R [(g(s, s\u2032) \u2212r)2]. The reward loss is the weighted sum of reward prediction loss EDmini R (g) and DQRA losses P i={1,2} Ew Dmini A (\u03c0, fi, g) (line 6). The DQRA losses connect the reward to the critic which minimizes also the performance difference; as a result, the learned reward function is also pessimistically estimated. In other words, the critic and the reward functions jointly form a hypothesis that adversarially adapts to the learner\u2019s policy. 5. Experiments We aim to answer the following questions: (a) Is MAHALO effective in solving different instances of offline PLfO problems? (b) Can MAHALO achieve similar performance as other specialized algorithms? (c) Whether MAHALO can obtain comparable performance to oracle algorithms with full reward and dynamics information? (d) In what situation is the pessimistic reward function of MAHALO critical in achieving good performance? Scenarios To answer question (a), we design five instances of offline PLfO inspired by practical scenarios. ILfO: The learner is presented with a relatively small set of expert-level state-only trajectories, and a dynamics dataset of mixed quality. The dynamics data contains trajectories collected by policies with different performance-level. IL: Similar to ILfO, but we additionally provide expert actions to the learner. RLfO: Similar to ILfO, but the learner is additionally given the reward along the expert trajectories. 
This simulates the scenario where we provide manual labels 7 \fMAHALO: Unifying Offline RL and IL from Observations Table 2. Results on D4RL benchmark (Fu et al., 2020). We show the average normalized score over 50 evaluation trials across 10 random seeds. (The standard errors are reported in Table 6). Algorithms with scores greater than 90% of the best score (excluding Oracle) are in bold. \u2020 ATAC only uses data with both dynamics and reward information. + Oracle has access to reward for all dynamics data. Scenario Dataset MAHALO RP AP UDS ATAC\u2020 BCO BC SMODICE Oracle+ ILfO hopper 104.66 97.48 45.97 46.80 67.72 walker 88.60 77.15 61.11 63.02 1.52 halfcheetah 61.24 36.00 4.87 5.16 59.64 IL hopper 104.06 97.88 53.35 32.21 63.56 32.12 74.75 walker 89.03 77.71 63.53 8.45 78.35 18.78 0.94 halfcheetah 54.99 23.20 3.74 25.82 3.64 22.36 58.40 RLfO hopper 106.47 105.65 47.01 103.39 walker 96.65 97.26 63.30 98.52 halfcheetah 50.38 68.66 3.35 63.57 RL-expert hopper 87.73 105.56 51.54 98.63 65.45 103.39 walker 103.18 98.31 56.27 72.97 66.40 98.52 halfcheetah 48.43 64.37 3.47 13.68 3.41 63.57 RL-sample hopper 103.08 101.66 71.92 0.95 71.50 103.34 walker 95.00 94.59 5.94 0.00 0.48 95.73 halfcheetah 68.30 68.71 13.26 20.36 19.38 69.91 Table 3. Five scenarios of PLfO considered in our experiments. Scenarios Mixed Quality Data Expert ILfO state + action state IL state + action state + action RLfO state + action state + reward RL-expert state + action state + action + reward RL-sample state + action + (sampled trajs) reward to the expert demonstrations. RL-expert: The learner is presented with the expert action in addition to what is given in RLfO. RL-sample: The learner is provided with the mixed quality dynamics data, with a subset of trajectories labeled with reward. The information available in each scenario is summarized in Table 3. Baselines To address question (b), we consider a few specialized baseline algorithms for each scenario. We consider behavior cloning from observation (BCO) (Torabi et al., 2018) as a baseline for ILfO, and behavior cloning (BC) for IL. We also include SMODICE (Ma et al., 2022), a state-of-the art offline ILfO algorithm as a baseline algorithm for ILfO, and a variation of SMODICE which uses a state-action discriminator as a baseline for offline IL. Since IL, RL-expert and RL-sample can be viewed as RL with unlabeled data, we present two baselines for these settings: running an offline RL algorithm, we use ATAC (Cheng et al., 2022) since it is the closest to MAHALO, only on labeled data and UDS (Yu et al., 2022). We note that UDS requires knowing the common transitions between reward and dynamics datasets. We implement UDS with ATAC, which is slightly different than (Yu et al., 2022),where CQL (Kumar et al., 2020) is used. Since learning inverse dynamics is a common approach to learning with observations (Torabi et al., 2018; Edwards et al., 2020), we implement a baseline called action prediction (AP). It pretrains an inverse dynamics model on the dynamics dataset, predicts the missing actions in the reward dataset, and runs ATAC (Cheng et al., 2022) on the reward dataset. For question (d), we implement a baseline algorithm called reward prediction (RP). It pretrains a reward function on the reward dataset. This reward function is then fixed for offline RL training using ATAC (Cheng et al., 2022). Comparing MAHALO with RP can give us information on whether the adversarial training of reward function in MAHALO is effective. 
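Referring back to Table 3, the following sketch shows one way the per-scenario dynamics and reward datasets could be assembled by masking fields of a transition record. The field names, the `label_fraction` default, and the transition-level sampling for RL-sample (the paper samples whole trajectories) are illustrative assumptions, not the authors' data pipeline.

```python
import random

ALL_FIELDS = {"s", "a", "r", "s_next"}

def strip(transition, keep):
    # Keep only the requested fields of a (s, a, r, s_next) record.
    return {k: v for k, v in transition.items() if k in keep}

def build_scenario(mixed, expert, scenario, label_fraction=0.01):
    """Assemble (dynamics_data, reward_data) for one scenario of Table 3."""
    dynamics = [strip(t, {"s", "a", "s_next"}) for t in mixed]
    if scenario == "ILfO":       # expert demonstrations contain states only
        return dynamics, [strip(t, {"s", "s_next"}) for t in expert]
    if scenario == "IL":         # expert demonstrations additionally carry actions
        return dynamics, [strip(t, {"s", "a", "s_next"}) for t in expert]
    if scenario == "RLfO":       # expert state transitions are reward-labeled
        return dynamics, [strip(t, {"s", "r", "s_next"}) for t in expert]
    if scenario == "RL-expert":  # expert carries states, actions, and rewards
        return dynamics, [strip(t, ALL_FIELDS) for t in expert]
    if scenario == "RL-sample":  # a random subset of the mixed data keeps rewards
        labeled = random.sample(mixed, int(label_fraction * len(mixed)))
        return dynamics, [strip(t, {"s", "r", "s_next"}) for t in labeled]
    raise ValueError(f"unknown scenario: {scenario}")
```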
Finally, to answer question (c), we train ATAC on a fully-labeled dynamics dataset (Oracle) to evaluate if MAHALO can achieve comparable performance with a privileged offline RL algorithm (which has access to more information). We evaluate MAHALO and above-mentioned algorithms on two sets of environments: locomotion tasks from the D4RL benchmark (Fu et al., 2020) and robot manipulation tasks from the Meta-World domain (Yu et al., 2020a). 5.1. Evaluation on D4RL We consider three environments from D4RL (Fu et al., 2020): hopper-v2, walker2d-v2, and halfcheetah-v2. The rewards in the three environments promote moving forward. In hopper and walker2d, the agent is required to stay within a health height range, otherwise the episode terminates. We construct a mixed quality dynamics dataset with 3.4 M transitions through concatenating the random, medium, medium-replay, and full-replay datasets. The expert data is consisted of 10k transitions (\u223c10 trajectories) generated by randomly sampling trajectories from the expert dataset. For RL-sample, we sample 34k transitions (1% of overall data). The normalized return for five scenarios for each dataset is listed in Table 2. MAHALO achieves top performance in almost every task except halfcheetah. MAHALO also 8 \fMAHALO: Unifying Offline RL and IL from Observations Table 4. Success rate of the final policy (with the exception of SMODICE\u2217) on Meta-World (Yu et al., 2020a). The success rate is computed over 50 evaluation episodes. We report the average success rate across 10 random seeds. (The standard errors across random seeds are reported in Table 7). We consider an episode success if it is able to reach the goal within 128 steps. \u2217SMODICE often diverges during training; we therefore take the success rate of its best performing policy during training instead of the final one. Scenario Dataset MAHALO RP AP UDS ATAC\u2020 BCO BC SMODICE\u2217 Oracle+ ILfO reach 65.0 62.6 13.0 11.6 10.6 push 62.4 11.6 12.4 14.6 0.4 plate-slide 100.0 22.0 94.0 75.4 0.0 handle-press 75.4 32.4 96.6 87.8 16.6 button-press 100.0 100.0 93.6 93.8 0.4 IL reach 24.6 23.6 38.0 21.6 98.0 62.2 19.8 push 92.6 11.8 35.2 79.2 50.0 91.4 0.2 plate-slide 80.4 34.4 89.8 76.2 89.4 85.3 0.0 handle-press 71.4 34.8 100.0 35.2 97.0 75.2 20.2 button-press 100.0 99.8 96.2 100.0 99.6 100.0 0.2 RLfO reach 86.4 88.0 15.0 51.6 push 58.2 32.0 20.2 91.8 plate-slide 100.0 100.0 83.2 100.0 handle-press 77.8 84.4 96.0 78.6 button-press 100.0 100.0 92.6 100.0 RL-expert reach 39.2 54.0 42.0 57.8 98.4 51.6 push 95.6 88.6 40.4 90.4 99.6 91.8 plate-slide 100.0 100.0 89.6 99.4 85.4 100.0 handle-press 72.6 82.0 97.6 81.2 99.0 78.6 button-press 100.0 100.0 97.0 100.0 100.0 100.0 RL-sample reach 86.6 87.4 63.0 87.4 85.4 88.8 push 40.6 47.8 29.2 35.8 35.0 46.0 plate-slide 100.0 100.0 97.6 100.0 99.6 100.0 handle-press 78.4 81.0 100.0 76.8 82.8 83.4 button-press 100.0 100.0 100.0 100.0 100.0 100.0 shows comparable performance with the privileged oracle. Reward prediction (RP) also performs well in all RL tasks. It, however, does not perform as well in IL tasks, since the reward function of RP can not generalize beyond the constant training reward. UDS does not perform well in these scenarios potentially due to being overly pessimistic. SMODICE uniformly performs worse than MAHALO in hopper and walker tasks. 
We hypothesize that the fixed discriminator reward function of SMODICE becomes overly pessimistic and discourages policy learning when the expert and dynamics distributions mismatch. 5.2. Evaluation on Meta-World We additionally evaluate MAHALO and the baseline algorithms on five robot manipulation tasks from Meta-World (Yu et al., 2020a). For the Meta-World tasks, we use a non-positive reward function that encourages the agent to reach the goal, which is a terminal state, as fast as possible. We generate a mixed-quality dataset by adding different levels of noise to a scripted policy provided by Meta-World for each task. We collect 100 trajectories each for zero-mean Gaussian noise with standard deviations [0.1, 0.5, 1.0], and use these 300 trajectories as the mixed-quality dataset. We separately collect 100 expert trajectories. For RL-sample, we randomly sample trajectories to label 50% of the dataset. Note that we use more reward data for Meta-World since the state space (which includes the space of goals) is larger. We observe that MAHALO is one of the overall best-performing algorithms. ATAC also achieves strong results in IL and RL-expert. This is because ATAC, in these scenarios, is presented only with expert data (we do not see a similar effect in the D4RL tasks, since the expert dataset is much smaller there). UDS shows similar performance to MAHALO in the RL scenarios; it performs worse than MAHALO in IL, however. Reward prediction (RP) performs worse than MAHALO in most cases, especially in the IL and ILfO scenarios. This shows that the pessimistic reward function in MAHALO is critical in achieving robust performance across different tasks and scenarios. SMODICE is not able to train reliably and often diverges during training. We therefore report the success rate of its best-performing policy during training instead of the final one. We find that the best policy of SMODICE underperforms the final policy of most algorithms, including MAHALO, potentially due to this instability in training. Acknowledgements We thank Andrey Kolobov for sharing data collected on the Meta-World tasks. We thank Mohak Bhardwaj for providing the script for processing results and Sinong Geng for helpful discussions on the Meta-World experiments."
}
]
}