Introduction
It is important that robots and other autonomous agents can safely learn from and adapt to a variety of human preferences and goals. One common way to learn preferences and goals is via imitation learning, in which an autonomous agent learns how to perform a task by observing demonstrations of the task [@Argall2009]. When learning from demonstrations, it is important for an agent to be able to provide high-confidence bounds on its performance with respect to the demonstrator; however, while there exists much work on high-confidence off-policy evaluation in the reinforcement learning (RL) setting, there has been much less work on high-confidence policy evaluation in the imitation learning setting, where the reward samples are unavailable.
Prior work on high-confidence policy evaluation for imitation learning has used Bayesian inverse reinforcement learning (IRL) [@ramachandran2007bayesian] to allow an agent to reason about reward uncertainty and policy generalization error [@brown2018risk]. However, Bayesian IRL is typically intractable for complex problems due to the need to repeatedly solve an MDP in the inner loop, resulting in high computational cost as well as high sample cost if a model is not available. This precludes robust safety and uncertainty analysis for imitation learning in high-dimensional problems or in problems in which a model of the MDP is unavailable. We seek to remedy this problem by proposing and evaluating a method for safe and efficient Bayesian reward learning via preferences over demonstrations. Preferences over trajectories are intuitive for humans to provide [@akrour2011preference; @wilson2012bayesian; @sadigh2017active; @christiano2017deep; @palan2019learning] and enable better-than-demonstrator performance [@browngoo2019trex; @brown2019drex]. To the best of our knowledge, we are the first to show that preferences over demonstrations enable both fast Bayesian reward learning in high-dimensional, visual control tasks as well as efficient high-confidence performance bounds.
We first formalize the problem of high-confidence policy evaluation [@thomas2015high] for imitation learning. We then propose a novel algorithm, Bayesian Reward Extrapolation (Bayesian REX), that uses a pairwise ranking likelihood to significantly increase the efficiency of generating samples from the posterior distribution over reward functions. We demonstrate that Bayesian REX can leverage neural network function approximation to learn useful reward features via self-supervised learning in order to efficiently perform deep Bayesian reward inference from visual demonstrations. Finally, we demonstrate that samples obtained from Bayesian REX can be used to solve the high-confidence policy evaluation problem for imitation learning. We evaluate our method on imitation learning for Atari games and demonstrate that we can efficiently compute high-confidence bounds on policy performance, without access to samples of the reward function. We use these high-confidence performance bounds to rank different evaluation policies according to their risk and expected return under the posterior distribution over the unknown ground-truth reward function. Finally, we provide evidence that bounds on uncertainty and risk provide a useful tool for detecting reward hacking/gaming [@amodei2016concrete], a common problem in reward inference from demonstrations [@ibarz2018reward] as well as reinforcement learning [@ng1999policy; @leike2017ai].
Method
We model the environment as a Markov Decision Process (MDP) consisting of states $\mathcal{S}$, actions $\mathcal{A}$, transition dynamics $T:\mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$, reward function $R:\mathcal{S} \to \mathbb{R}$, initial state distribution $S_0$, and discount factor $\gamma$. Our approach extends naturally to rewards of the form $R(s,a)$ or $R(s,a,s')$; however, state-based rewards have some advantages. @airl prove that a state-only reward function is necessary and sufficient for learning a reward that is disentangled from the dynamics. Learning a state-based reward also allows the learned reward to be used as a potential function for reward shaping [@ng1999policy], if a sparse ground-truth reward function is available.
A policy $\pi$ is a mapping from states to a probability distribution over actions. We denote the value of a policy $\pi$ under reward function $R$ as $V^\pi_R = \mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t R(s_t) | s_0 \sim S_0]$ and denote the value of executing policy $\pi$ starting at state $s \in \mathcal{S}$ as $V^\pi_R(s) = \mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t R(s_t) | s_0 = s]$. Given a reward function $R$, the Q-value of a state-action pair $(s,a)$ is $Q^{\pi}_R(s,a) = \mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t R(s_t) | s_0 = s, a_0 = a]$. We also denote $V^*_R = \max_{\pi} V^\pi_R$ and $Q^*_R(s,a) = \max_{\pi} Q^{\pi}_R(s,a)$.
Bayesian inverse reinforcement learning (IRL) [@ramachandran2007bayesian] models the environment as an MDP$\setminus$R in which the reward function is unavailable. Bayesian IRL seeks to infer the latent reward function of a Boltzmann-rational demonstrator that executes the following policy $$\begin{equation} \label{eq:boltzman_demonstrator} \pi^\beta_R(a|s) = \frac{e^{\beta Q^*_R(s,a)}}{\sum_{b \in \mathcal{A}} e^{\beta Q^*_R(s,b)}}, \end{equation}$$ in which $R$ is the true reward function of the demonstrator, and $\beta \in [0, \infty)$ represents the confidence that the demonstrator is acting optimally. Under the assumption of Boltzmann rationality, the likelihood of a set of demonstrated state-action pairs, $D = \{ (s,a) : (s,a) \sim \pi_D \}$, given a specific reward function hypothesis $R$, can be written as $$\begin{equation} \label{eqn:boltzman_likelihood} P(D | R) = \prod_{(s,a) \in D} \pi^\beta_R(a|s) = \prod_{(s,a) \in D} \frac{e^{\beta Q^*_R(s,a)}}{\sum_{b \in \mathcal{A}} e^{\beta Q^*_R(s,b)}}. \end{equation}$$
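To make the cost of this likelihood concrete, the following minimal numpy sketch (illustrative only, not from the paper) evaluates it given a table `q_star` of optimal Q-values. The crucial point is that `q_star` must be recomputed, by fully solving the MDP, for every proposed reward function.

```python
import numpy as np

def boltzmann_log_likelihood(demos, q_star, beta=1.0):
    """Log-likelihood of (state, action) demonstrations under a Boltzmann-rational
    demonstrator, given optimal Q-values q_star[s, a] for the proposed reward."""
    ll = 0.0
    for s, a in demos:
        logits = beta * q_star[s]                      # beta * Q*(s, .)
        ll += logits[a] - np.logaddexp.reduce(logits)  # log-softmax at the taken action
    return ll

# Toy example: 2 states, 2 actions, hand-picked Q-values standing in
# for the (expensive) output of an RL solver.
q_star = np.array([[1.0, 0.0],
                   [0.5, 2.0]])
demos = [(0, 0), (1, 1)]
print(boltzmann_log_likelihood(demos, q_star))
```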
Bayesian IRL generates samples from the posterior distribution $P(R|D) \propto P(D|R)P(R)$ via Markov Chain Monte Carlo (MCMC) sampling, but this requires solving for $Q^*_{R'}$ to compute the likelihood of each new proposal $R'$. Thus, Bayesian IRL methods are only used for low-dimensional problems with reward functions that are often linear combinations of a small number of hand-crafted features [@bobu2018learning; @biyik2019asking]. One of our contributions is an algorithm that leverages preferences over demonstrations to make Bayesian reward inference significantly more efficient.
Before detailing our approach, we first formalize the problem of high-confidence policy evaluation for imitation learning. We assume access to an MDP$\setminus$R, an evaluation policy $\pi_{\rm eval}$, a set of demonstrations, $D = \{\tau_1,\ldots,\tau_m\}$, in which $\tau_i$ is either a complete or partial trajectory consisting of states or state-action pairs, a confidence level $\delta$, and a performance statistic $g:\Pi \times \mathcal{R} \rightarrow \mathbb{R}$, in which $\mathcal{R}$ denotes the space of reward functions and $\Pi$ is the space of all policies.
The High-Confidence Policy Evaluation problem for Imitation Learning (HCPE-IL) is to find a high-confidence lower bound $\hat{g}: \Pi \times \mathcal{D} \rightarrow \mathbb{R}$ such that $$\begin{equation} \text{Pr}(g(\pi_{\rm eval}, R^*) \geq \hat{g}(\pi_{\rm eval}, D)) \geq 1 - \delta, \end{equation}$$ in which $R^*$ denotes the demonstrator's true reward function and $\mathcal{D}$ denotes the space of all possible demonstration sets. HCPE-IL takes as input an evaluation policy $\pi_{\rm eval}$, a set of demonstrations $D$, and a performance statistic, $g$, which evaluates a policy under a reward function. The goal of HCPE-IL is to return a high-confidence lower bound $\hat{g}$ on the performance statistic $g(\pi_{\rm eval}, R^*)$.
We now describe our main contribution: a method for scaling Bayesian reward inference to high-dimensional visual control tasks as a way to efficiently solve the HCPE-IL problem for complex imitation learning tasks. Our first insight is that the main bottleneck for standard Bayesian IRL [@ramachandran2007bayesian] is computing the likelihood function in Equation ([eqn:boltzman_likelihood]{reference-type="ref" reference="eqn:boltzman_likelihood"}) which requires optimal Q-values. Thus, to make Bayesian reward inference scale to high-dimensional visual domains, it is necessary to either efficiently approximate optimal Q-values or to formulate a new likelihood. Value-based reinforcement learning focuses on efficiently learning optimal Q-values; however, for complex visual control tasks, RL algorithms can take several hours or even days to train [@mnih2015human; @hessel2018rainbow]. This makes MCMC, which requires evaluating large numbers of likelihood ratios, infeasible given the current state-of-the-art in value-based RL. Methods such as transfer learning have great potential to reduce the time needed to calculate $Q^*_R$ for a new proposed reward function $R$; however, transfer learning is not guaranteed to speed up reinforcement learning [@taylor2009transfer]. Thus, we choose to focus on reformulating the likelihood function as a way to speed up Bayesian reward inference.
An ideal likelihood function requires little computation and minimal interaction with the environment. To accomplish this, we leverage recent work on learning control policies from preferences [@christiano2017deep; @palan2019learning; @biyik2019asking]. Given ranked demonstrations, @browngoo2019trex propose Trajectory-ranked Reward Extrapolation (T-REX): an efficient reward inference algorithm that transforms reward function learning into a classification problem via a pairwise ranking loss. T-REX removes the need to repeatedly sample from or partially solve an MDP in the inner loop, allowing it to scale to visual imitation learning domains such as Atari and to extrapolate beyond the performance of the best demonstration. However, T-REX only solves for a point estimate of the reward function. We now discuss how a similar approach based on a pairwise preference likelihood allows for efficient sampling from the posterior distribution over reward functions.
We assume access to a sequence of $m$ trajectories, $D = \{ \tau_1,\ldots,\tau_m \}$, along with a set of pairwise preferences over trajectories $\mathcal{P} = \{(i,j) : \tau_i \prec \tau_j \}$. Note that we do not require a total ordering over trajectories. These preferences may come from a human demonstrator or could be automatically generated by watching a learner improve at a task [@jacq2019learning; @browngoo2019trex] or via noise injection [@brown2019drex]. Given trajectory preferences, we can formulate a pairwise ranking likelihood to compute the likelihood of a set of preferences over demonstrations $\mathcal{P}$, given a parameterized reward function hypothesis $R_\theta$. We use the standard Bradley-Terry model [@bradley1952rank] to obtain the following pairwise ranking likelihood function, commonly used in learning-to-rank applications such as collaborative filtering [@volkovs2014new]: $$\begin{equation} \label{eqn:pairwiserank} P(D, \mathcal{P} \mid R_\theta) = \prod_{(i,j) \in \mathcal{P}} \frac{e^{\beta R_\theta(\tau_j)}}{e^{\beta R_\theta(\tau_i)} + e^{\beta R_\theta(\tau_j)}}, \end{equation}$$ in which $R_\theta(\tau) = \sum_{s \in \tau} R_\theta(s)$ is the predicted return of trajectory $\tau$ under the reward function $R_\theta$, and $\beta$ is the inverse temperature parameter that models the confidence in the preference labels. We can then perform Bayesian inference via MCMC to obtain samples from $P(R_\theta \mid D, \mathcal{P}) \propto P(D, \mathcal{P} \mid R_\theta) P(R_\theta)$. We call this approach Bayesian Reward Extrapolation or Bayesian REX.
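This pairwise ranking likelihood is cheap to evaluate. A minimal numpy sketch of its log form (illustrative variable names, not the paper's code):

```python
import numpy as np

def pairwise_log_likelihood(returns, prefs, beta=1.0):
    """Bradley-Terry log-likelihood of a set of preferences.
    returns[i] is the predicted return R_theta(tau_i);
    prefs contains pairs (i, j) meaning tau_i is less preferred than tau_j."""
    ll = 0.0
    for i, j in prefs:
        # log P(tau_j preferred over tau_i) under the Bradley-Terry model
        ll += beta * returns[j] - np.logaddexp(beta * returns[i], beta * returns[j])
    return ll

returns = np.array([1.0, 3.0, 2.0])  # predicted returns under one reward hypothesis
prefs = [(0, 1), (2, 1)]             # tau_0 < tau_1 and tau_2 < tau_1
print(pairwise_log_likelihood(returns, prefs))
```

Note that evaluating this requires only the predicted returns of the ranked trajectories, never an MDP solve.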
Note that using the likelihood function defined in Equation ([eqn:pairwiserank]{reference-type="ref" reference="eqn:pairwiserank"}) does not require solving an MDP. In fact, it does not require any rollouts or access to the MDP. All that is required is that we first calculate the return of each trajectory under $R_\theta$ and compare the relative predicted returns to the preference labels to determine the likelihood of the demonstrations under the reward hypothesis $R_\theta$. Thus, given preferences over demonstrations, Bayesian REX is significantly more efficient than standard Bayesian IRL. In the following section, we discuss further optimizations that improve the efficiency of Bayesian REX and make it more amenable to our end goal of high-confidence policy evaluation bounds.
{#fig:BayesianREX width="\linewidth"}
In order to learn rich, complex reward functions, it is desirable to use a deep network to represent the reward function $R_\theta$. While MCMC remains the gold standard for inference in Bayesian neural networks, it is often challenging to scale to deep networks. To make Bayesian REX more efficient and practical, we propose MCMC proposals that only change the last layer of weights in $R_\theta$---we will discuss pre-training the bottom layers of $R_\theta$ in the next section. After pre-training, we freeze all but the last layer of weights and use the activations of the penultimate layer as the latent reward features $\phi(s) \in \mathbb{R}^k$. This allows the reward at a state to be represented as a linear combination of $k$ features: $R_\theta(s) = w^T \phi(s)$. Similar to work by @pradier2018projected, operating in a lower-dimensional latent space makes full Bayesian inference tractable.
A second advantage of using a learned linear reward function is that it allows us to efficiently compute likelihood ratios when performing MCMC. Consider the likelihood function in Equation ([eqn:pairwiserank]{reference-type="ref" reference="eqn:pairwiserank"}). If we do not represent $R_\theta$ as a linear combination of pretrained features, and instead let any parameter in $R_\theta$ change during each proposal, then for $m$ demonstrations of length $T$, computing $P(D, \mathcal{P} \mid R_\theta)$ for a new proposal $R_\theta$ requires $O(mT)$ forward passes through the entire network to compute $R_\theta(\tau_i)$. Thus, the complexity of generating $N$ samples from the posterior is $O(mTN |R_\theta|)$, where $|R_\theta|$ is the number of computations required for a full forward pass through the entire network $R_\theta$. Given that we would like to use a deep network to parameterize $R_\theta$ and generate thousands of samples from the posterior distribution over $R_\theta$, this many computations would significantly slow down MCMC proposal evaluation.
If we represent $R_\theta$ as a linear combination of pre-trained features, we can reduce this computational cost because $$\begin{equation} R_\theta (\tau) = \sum_{s \in \tau} w^T\phi(s) = w^T\sum_{s \in \tau} \phi(s) = w^T \Phi_{\tau}. \end{equation}$$ Thus, we can precompute and cache $\Phi_{\tau_i} = \sum_{s \in \tau_i} \phi(s)$ for $i = 1,\ldots,m$ and rewrite the likelihood as $$\begin{equation} \label{eqn:lincombo_Boltzman} P(D, \mathcal{P} \mid R_\theta) = \prod_{(i,j) \in \mathcal{P}} \frac{e^{\beta w^T \Phi_{\tau_j}}}{e^{\beta w^T \Phi_{\tau_j}} + e^{\beta w^T \Phi_{\tau_i}}}. \end{equation}$$ Note that demonstrations only need to be passed through the reward network once to compute $\Phi_{\tau_i}$ since the pre-trained embedding remains constant during MCMC proposal generation. This results in an initial $O(mT)$ passes through all but the last layer of $R_\theta$ to obtain $\Phi_{\tau_i}$, for $i=1,\ldots,m$, and then only $O(mk)$ multiplications per proposal evaluation thereafter---each proposal requires that we compute $w^T\Phi_{\tau_i}$ for $i=1,\ldots,m$ and $\Phi_{\tau_i} \in \mathbb{R}^k$. Thus, when using feature pre-training, the total complexity is only $O(mT|R_\theta| + mkN)$ to generate $N$ samples via MCMC. This reduction in the complexity of MCMC from $O(mTN |R_\theta|)$ to $O(mT|R_\theta| + mkN)$ results in significant and practical computational savings because (1) we want to make $N$ and $R_\theta$ large and (2) the number of demonstrations, $m$, and the size of the latent embedding, $k$, are typically several orders of magnitude smaller than $N$ and $|R_\theta|$.
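As an illustration of these savings, the following sketch caches $\Phi_{\tau_i}$ once and then runs a simple Metropolis-Hastings chain over the last-layer weights $w$, so each proposal costs only $O(mk)$. This is a simplified stand-in (hypothetical names, uniform prior over unit-norm weights, Gaussian random-walk proposals), not the paper's exact implementation.

```python
import numpy as np

def log_likelihood(w, Phi, prefs, beta=1.0):
    """Pairwise-ranking log-likelihood using cached feature counts Phi[i] = sum_s phi(s)."""
    r = Phi @ w  # predicted returns of all m trajectories: O(mk)
    return sum(beta * r[j] - np.logaddexp(beta * r[i], beta * r[j]) for i, j in prefs)

def mcmc_reward_samples(Phi, prefs, n_samples=2000, step=0.05, seed=0):
    """Metropolis-Hastings over last-layer weights w (simplified sketch)."""
    rng = np.random.default_rng(seed)
    k = Phi.shape[1]
    w = np.zeros(k)
    ll = log_likelihood(w, Phi, prefs)
    samples = []
    for _ in range(n_samples):
        w_prop = w + step * rng.standard_normal(k)
        w_prop /= np.linalg.norm(w_prop)         # keep weights on the unit sphere
        ll_prop = log_likelihood(w_prop, Phi, prefs)
        if np.log(rng.random()) < ll_prop - ll:  # uniform prior: likelihood ratio only
            w, ll = w_prop, ll_prop
        samples.append(w)
    return np.array(samples)

# Toy example: 3 trajectories with k=2 cached feature counts.
Phi = np.array([[1.0, 0.0], [0.0, 2.0], [0.5, 1.0]])
prefs = [(0, 2), (2, 1)]   # tau_0 < tau_2 < tau_1
W = mcmc_reward_samples(Phi, prefs)
print(W.shape)             # (2000, 2)
```

The demonstrations never touch the network inside the loop; only the $m \times k$ matrix `Phi` is used.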
A third, and critical, advantage of using a learned linear reward function is that it makes solving the HCPE-IL problem discussed in Section 4{reference-type="ref" reference="sec:hcpe-il"} tractable. Performing a single policy evaluation is a non-trivial task [@sutton2000policy] and even in tabular settings has complexity $O(|S|^3)$ in which $|S|$ is the size of the state-space [@littman1995complexity]. Because we are in an imitation learning setting, we would like to be able to efficiently evaluate any given policy across the posterior distribution over reward functions found via Bayesian REX. Given a posterior distribution over $N$ reward function hypotheses we would need to solve $N$ policy evaluations. However, note that given $R(s) = w^T \phi(s)$, the value function of a policy can be written as $$\begin{equation} V^\pi_R = \mathbb{E}_{\pi}\left[\sum_{t=0}^T R(s_t)\right] = w^T \mathbb{E}_{\pi}\left[\sum_{t=0}^T \phi(s_t)\right] = w^T \Phi_\pi, \end{equation}$$ in which we assume a finite horizon MDP with horizon $T$ and in which $\Phi_\pi$ are the expected feature counts [@abbeel2004apprenticeship; @barreto2017successor] of $\pi$. Thus, given any evaluation policy $\pi_{\rm eval}$, we only need to solve one policy evaluation problem to compute $\Phi_{\pi_{\rm eval}}$. We can then compute the expected value of $\pi_{\rm eval}$ over the entire posterior distribution of reward functions via a single matrix vector multiplication $W \Phi_{\pi_{\rm eval}}$, where $W$ is an $N$-by-$k$ matrix with each row corresponding to a single reward function weight hypothesis $w^T$. This significantly reduces the complexity of policy evaluation over the reward function posterior distribution from $O(N |S|^3)$ to $O(|S|^3 + Nk)$.
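In practice, the expected feature counts $\Phi_\pi$ can be estimated by averaging feature sums over Monte Carlo rollouts of the policy. A small sketch under that assumption (hypothetical helper; the optional discount with $\gamma = 1$ matches the finite-horizon statement above):

```python
import numpy as np

def expected_feature_counts(rollout_features, gamma=1.0):
    """Estimate Phi_pi by averaging (discounted) feature sums over rollouts.
    rollout_features is a list of (T_i, k) arrays of phi(s_t) along each rollout."""
    totals = []
    for feats in rollout_features:
        discounts = gamma ** np.arange(len(feats))
        totals.append(discounts @ feats)      # sum_t gamma^t phi(s_t)
    return np.mean(totals, axis=0)

# Toy example: two rollouts in a k=2 feature space, gamma = 0.5 for illustration.
rollouts = [np.array([[1.0, 0.0], [0.0, 1.0]]),
            np.array([[1.0, 1.0]])]
phi_pi = expected_feature_counts(rollouts, gamma=0.5)
print(phi_pi)
```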
When we refer to Bayesian REX we will refer to the optimized version described in this section (see the Appendix for full implementation details and pseudo-code)[^1]. Running MCMC with 66 preference labels to generate 100,000 reward hypotheses for Atari imitation learning tasks takes approximately 5 minutes on a Dell Inspiron 5577 personal laptop with an Intel i7-7700 processor, without using the GPU. In comparison, using standard Bayesian IRL to generate one sample from the posterior takes 10+ hours of training for a parallelized PPO reinforcement learning agent [@baselines] on an NVIDIA TITAN V GPU.
The previous section presupposed access to a pretrained latent embedding function $\phi: S \rightarrow \mathbb{R}^k$. We now discuss our pre-training process. Because we are interested in imitation learning problems, we need to be able to train $\phi(s)$ from the demonstrations without access to the ground-truth reward function. One potential method is to train $R_\theta$ using the pairwise ranking likelihood function in Equation ([eqn:pairwiserank]{reference-type="ref" reference="eqn:pairwiserank"}) and then freeze all but the last layer of weights; however, the learned embedding may overfit to the limited number of preferences over demonstrations and fail to capture features relevant to the ground-truth reward function. Thus, we supplement the pairwise ranking objective with auxiliary objectives that can be optimized in a self-supervised fashion using data from the demonstrations.
::: {#tab:aux_losses}
| Objective | Mapping |
|---|---|
| Inverse Dynamics | $f_{\rm ID}(\phi(s_t), \phi(s_{t+1})) \rightarrow a_t$ |
| Forward Dynamics | $f_{\rm FD}(\phi(s_t), a_t) \rightarrow s_{t+1}$ |
| Temporal Distance | $f_{\rm TD}(\phi(s_t), \phi(s_{t+x})) \rightarrow x$ |
| Variational Autoencoder | $f_{A}(\phi(s_t)) \rightarrow s_t$ |

: Self-supervised learning objectives used to pre-train $\phi(s)$.
:::
We use the following self-supervised tasks to pre-train $R_\theta$: (1) Learn an inverse dynamics model that uses embeddings $\phi(s_t)$ and $\phi(s_{t+1})$ to predict the corresponding action $a_t$ [@torabi2018behavioral; @hanna2017grounded], (2) Learn a forward dynamics model that predicts $s_{t+1}$ from $\phi(s_t)$ and $a_t$ [@oh2015action; @thananjeyan2019safety], (3) Learn an embedding $\phi(s)$ that predicts the temporal distance between two randomly chosen states from the same demonstration [@imitationyoutube], and (4) Train a variational pixel-to-pixel autoencoder in which $\phi(s)$ is the learned latent encoding [@makhzani2017pixelgan; @doersch2016tutorial]. Table 1{reference-type="ref" reference="tab:aux_losses"} summarizes the self-supervised tasks used to train $\phi(s)$.
There are many possibilities for pre-training $\phi(s)$. We used the objectives described above to encourage the embedding to encode different features. For example, an accurate inverse dynamics model can be learned by only attending to the movement of the agent. Learning forward dynamics supplements this by requiring $\phi(s)$ to encode information about short-term changes to the environment. Learning to predict the temporal distance between states in a trajectory forces $\phi(s)$ to encode long-term progress. Finally, the autoencoder loss acts as a regularizer to the other losses as it seeks to embed all aspects of the state (see the Appendix for details). The Bayesian REX pipeline for sampling from the reward function posterior is shown in Figure 1{reference-type="ref" reference="fig:BayesianREX"}.
We now discuss how to use Bayesian REX to find an efficient solution to the high-confidence policy evaluation for imitation learning (HCPE-IL) problem (see Section 4{reference-type="ref" reference="sec:hcpe-il"}). Given samples from the distribution $P(w \mid D, \mathcal{P})$, where $R(s) = w^T \phi(s)$, we compute the posterior distribution over any performance statistic $g(\pi_{\rm eval}, R^*)$ as follows. For each sampled weight vector $w$ produced by Bayesian REX, we compute $g(\pi_{\rm eval}, w)$. This results in a sample from the posterior distribution $P(g(\pi_{\rm eval}, R) \mid D, \mathcal{P})$, i.e., the posterior distribution over performance statistic $g$. We then compute a $(1-\delta)$ confidence lower bound, $\hat{g}(\pi_{\rm eval}, D)$, by finding the $\delta$-quantile of $g(\pi_{\rm eval}, w)$ for $w \sim P(w\mid D, \mathcal{P})$.
While there are many potential performance statistics $g$, we chose to focus on bounding the expected value of the evaluation policy, i.e., $g(\pi_{\rm eval}, R^*) = V^{\pi_{\rm eval}}_{R^*} = {w^*}^T \Phi_{\pi_{\rm eval}}$. To compute a $1 - \delta$ confidence bound on $V^{\pi_{\rm eval}}_{R^*}$, we take advantage of the learned linear reward representation to efficiently calculate the posterior distribution over policy returns given preferences and demonstrations. This distribution over returns is calculated via a matrix vector product, $W \Phi_{\pi_{\rm eval}}$, in which each row of $W$ is a sample, $w$, from the MCMC chain and $\pi_{\rm eval}$ is the evaluation policy. We then sort the resulting vector and select the $\delta$-quantile lowest value. This results in a $1-\delta$ confidence lower bound on $V^{\pi_{\rm eval}}_{R^*}$ and corresponds to the $\delta$-Value at Risk (VaR) over $V^{\pi_{\rm eval}}_{R} \sim P(R \mid D, \mathcal{P})$ [@jorion1997value; @brown2018efficient].
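The whole bound reduces to a matrix-vector product followed by a quantile. A sketch with stand-in data (the posterior samples `W` and feature counts `phi_eval` here are synthetic, not results from the paper):

```python
import numpy as np

def var_lower_bound(W, phi_eval, delta=0.05):
    """delta-quantile of the posterior return distribution W @ phi_eval:
    a (1 - delta)-confidence lower bound on the return of pi_eval."""
    returns = W @ phi_eval        # one matrix-vector product: O(Nk)
    return np.quantile(returns, delta)

rng = np.random.default_rng(0)
W = rng.standard_normal((10_000, 3)) + np.array([1.0, 0.5, 0.0])  # stand-in posterior samples
phi_eval = np.array([2.0, 1.0, 0.5])  # stand-in expected feature counts of pi_eval
bound = var_lower_bound(W, phi_eval, delta=0.05)
print(bound)  # 0.05-VaR: a 95%-confidence lower bound on the return
```

The same vector of returns also yields the posterior mean return, so risk and expected performance come from a single computation.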