## **Behavior Alignment via Reward Function Optimization**
**Dhawal Gupta**∗ (University of Massachusetts) · **Yash Chandak**∗† (Stanford University) · **Scott M. Jordan**† (University of Alberta)
**Philip S. Thomas** (University of Massachusetts) · **Bruno Castro da Silva** (University of Massachusetts)
**Abstract**
Designing reward functions for efficiently guiding reinforcement learning (RL)
agents toward specific behaviors is a complex task. This is challenging since it
requires the identification of reward structures that are not sparse and that avoid inadvertently inducing undesirable behaviors. Naively modifying the reward structure
to offer denser and more frequent feedback can lead to unintended outcomes and
promote behaviors that are not aligned with the designer’s intended goal. Although
potential-based reward shaping is often suggested as a remedy, we systematically
investigate settings where deploying it often significantly impairs performance.
To address these issues, we introduce a new framework that uses a bi-level objective to learn _behavior alignment reward functions_ . These functions integrate
auxiliary rewards reflecting a designer’s heuristics and domain knowledge with
the environment’s primary rewards. Our approach automatically determines the
most effective way to blend these types of feedback, thereby enhancing robustness
against heuristic reward misspecification. Remarkably, it can also adapt an agent’s
policy optimization process to mitigate suboptimalities resulting from limitations
and biases inherent in the underlying RL algorithms. We evaluate our method’s
efficacy on a diverse set of tasks, from small-scale experiments to high-dimensional
control challenges. We investigate heuristic auxiliary rewards of varying quality—
some of which are beneficial and others detrimental to the learning process. Our
results show that our framework offers a robust and principled way to integrate
designer-specified heuristics. It not only addresses key shortcomings of existing
approaches but also consistently leads to high-performing solutions, even when
given misaligned or poorly-specified auxiliary reward functions.
**1** **Introduction**
In this paper, we investigate the challenge of enabling reinforcement learning (RL) practitioners, who
may not be experts in the field, to incorporate domain knowledge through heuristic auxiliary reward
functions. Our goal is to ensure that such auxiliary rewards not only induce behaviors that align with
the designer’s intentions but also allow for faster learning. RL practitioners typically model a given
control problem by first designing simple reward functions that directly quantify whether (or how
well) an agent completed a task. These could be, for instance, functions assigning a reward of +1 iff
the agent reaches a specified goal state, and zero otherwise. However, optimizing a policy based on
such a sparse reward function often proves challenging.
_∗_ Both authors contributed equally to this work. _†_ Work done while at the University of Massachusetts.
Corresponding author: Dhawal Gupta (dgupta@cs.umass.edu).
37th Conference on Neural Information Processing Systems (NeurIPS 2023).
Figure 1: Auxiliary rewards can be used to convey to the agent how we (designers) _think_ it should
solve the problem. However, if not carefully designed, they can lead to policies that result in undesired
behaviors. This figure provides a visual illustration of a toy example depicting how the proposed
method works. The star represents the optimal policy, and the red dot represents the fixed point of a
policy optimization process under a “sub-optimal” heuristic; i.e., one that, when naively combined
with _rp_, induces behaviors different from those under the optimal policy for _rp_ . **(Left)** Vector field of
a policy optimization process converging to a sub-optimal policy. **(Middle and Right)** By changing
the influence of auxiliary rewards, our method can dynamically _correct_ the _entire policy optimization process_, steering it towards a policy that results in the desired behavior.
To address this issue, designers often introduce auxiliary reward functions that supplement the original
rewards. Auxiliary rewards are heuristic guidelines aimed at facilitating and speeding up the learning
process. One could, e.g., augment the previously described reward function (which gives a reward of
+1 upon reaching a goal state) with an auxiliary reward accounting for the agent’s distance to the
goal. However, the effectiveness of using auxiliary reward functions largely depends on the problem’s
complexity and the designer’s skill in crafting heuristics that, when combined with the original reward
function, do not induce behaviors different from the ones originally intended [26, 27].
Existing methods like potential-based reward shaping [41] aim to incorporate domain knowledge
without misaligning the behaviors induced by the resulting combined reward functions. However,
as we discuss in Section 3, potential-based shaping has several limitations: _(i)_ it is restricted to
state-based functions; _(ii)_ it amounts to a different initialization of the _q_ -function; _(iii)_ it does not
alter policy gradients in expectation; and _(iv)_ it can increase the variance in policy gradient methods.
To address these challenges, we introduce a scalable algorithm that empowers RL practitioners to
specify potentially imperfect auxiliary reward functions. It ensures that the resulting optimization
process will not inadvertently lead to unintended behaviors and that it will allow for faster learning.
In particular, this paper addresses the following challenges:
**(1) How to incorporate auxiliary reward information:** We introduce a novel bi-level objective
to analyze and automatically fine-tune designer-created auxiliary reward functions. It ensures they
remain aligned with the original reward and do not induce behaviors different from those originally
intended by the designer. Additionally, we formulate the problem to shape the optimization landscape,
biasing our bi-level optimizer toward auxiliary reward functions that facilitate faster learning.
**(2) How to use auxiliary reward to mitigate algorithmic biases:** We show that our framework can
automatically adjust how primary and auxiliary rewards are blended to mitigate limitations or biases
inherent in the underlying RL algorithm (Section 4.1). For instance, many policy-gradient-based RL
algorithms are subject to biases due to issues like discounting mismatch [59] or partial off-policy
correction [51]. These biases can hinder the algorithm’s ability to identify near-optimal policies.
**(3) How to ensure scalability to high-dimensional problems:** We introduce an algorithm that
employs _implicit gradients_ to automatically adjust primary and auxiliary rewards, ensuring that
the combined reward function aligns with the designer’s original expectations (see Figure 1). We
evaluate our method’s efficacy across a range of tasks, from small-scale to high-dimensional control
settings (see Section 6). In these tasks, we experiment with auxiliary rewards of varying quality;
some accelerate learning, while others can be detrimental to finding an optimal policy.
**2** **Notation**
In this paper, we investigate sequential decision-making problems modeled as Markov decision
processes (MDPs). An MDP is defined as a tuple ( _S, A, p, rp, r_ aux _, γ, d_ 0), where _S_ is the state set,
_A_ is the action set, _p_ is the transition function, _rp_ : _S × A →_ R is the _primary_ reward function, _r_ aux :
_S × A →_ R is an _optional_ auxiliary reward function (possibly designed by a non-expert in machine
learning, based on domain knowledge), and _d_ 0 is the starting state distribution. Let _πθ_ : _S×A →_ [0 _,_ 1]
be any policy parameterized using _θ ∈_ Θ. For brevity, we will often use _πθ_ and _θ_ interchangeably. Let
_St_ and _At_ be the random variables for the state and action observed at time _t_. As in the standard
RL setting, the performance _J_(_θ_) of a policy _πθ_ is defined as the expected discounted return with
respect to the (primary) reward function, _rp_; i.e., $J(\theta) := \mathbb{E}_{\pi_\theta}\big[\sum_{t=0}^{T} \gamma^{t} r_p(S_t, A_t)\big]$, where _T_ + 1 is
the episode length. An optimal policy parameter _θ*_ is defined as $\theta^{*} \in \arg\max_{\theta \in \Theta} J(\theta)$. A popular
technique to search for _θ*_ is based on constructing sample estimates $\hat{\Delta}(\theta, r_p)$ of the (_γ_-dropped)
policy gradient, ∆(_θ, rp_), given an agent’s interactions with the environment for one episode [58, 59].
Then, using _ψθ_(_s, a_) as a shorthand for $\mathrm{d} \ln \pi_\theta(s, a)/\mathrm{d}\theta$, these quantities are defined as follows:

$$\Delta(\theta, r_p) = \mathbb{E}_{\pi_\theta}\big[\hat{\Delta}(\theta, r_p)\big] \quad \text{and} \quad \hat{\Delta}(\theta, r_p) := \sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{\,j-t}\, r_p(S_j, A_j). \tag{1}$$
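To make (1) concrete, the sketch below builds, for a single episode, a surrogate loss whose gradient is $-\hat{\Delta}(\theta, r_p)$. It is a minimal PyTorch illustration, not the paper's implementation, and it assumes the per-step log-probabilities were recorded with autograd enabled.

```python
import torch

def reinforce_surrogate(log_probs, rewards, gamma):
    """Builds a loss whose gradient equals -Delta_hat(theta, r_p) from Eq. (1).

    log_probs: list of T+1 scalar tensors, ln pi_theta(S_t, A_t), autograd-tracked.
    rewards:   list of T+1 floats, r_p(S_t, A_t).
    """
    T = len(rewards) - 1
    returns, g = [0.0] * (T + 1), 0.0
    for t in reversed(range(T + 1)):  # G_t = sum_{j>=t} gamma^{j-t} r_p(S_j, A_j)
        g = rewards[t] + gamma * g
        returns[t] = g
    # Note the missing gamma^t factor on psi_theta(S_t, A_t): that factor is what
    # the "gamma-dropped" estimator omits (see the discussion of bias in Sec. 4.1).
    return -sum(lp * g_t for lp, g_t in zip(log_probs, returns))
```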
**3** **Limitations of Potential-Based Reward Shaping**
When the objective function _J_(_θ_) is defined with respect to a _sparse_ reward function _rp_ (i.e., a reward
function such that _rp_(_s, a_) = 0 for most _s ∈ S_ and _a ∈ A_), searching for _θ*_ is challenging [24]. A
natural way to provide more frequent (i.e., denser) feedback to the agent, in the hope of facilitating
learning, is to consider an alternate reward function, _r̃_naive := _rp_ + _r_aux. However, as discussed earlier,
_r_aux may be a designer-specified auxiliary reward function that is not perfectly aligned with the objective
encoded in _rp_. In this case, using _r̃_naive may encourage undesired behavior. An alternative way to
incorporate domain knowledge to facilitate learning was introduced by Ng et al. [41]. They proposed
using a _potential function_, Φ : _S →_ R (analogous to _r_ aux), to define new reward functions of the form
_r̃_Φ(_St, At, St+1_) := _rp_(_St, At_) + _γ_Φ(_St+1_) − Φ(_St_). Importantly, they showed that optimal policies
with respect to the objective $\mathbb{E}\big[\sum_{t=0}^{T} \gamma^{t}\, \tilde{r}_\Phi(S_t, A_t, S_{t+1})\big]$ are also optimal with respect to _J_(_θ_).
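As a minimal sketch of this construction (with `r_p` and `Phi` standing in for user-supplied callables, not functions from the paper's code), the shaped reward can be computed per transition as:

```python
def shaped_reward(r_p, Phi, gamma, s, a, s_next):
    """Potential-based shaping [41]: the shaping term gamma*Phi(s') - Phi(s)
    telescopes over a trajectory, which is why optimal policies are preserved."""
    return r_p(s, a) + gamma * Phi(s_next) - Phi(s)
```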
While potential-based reward shaping can partially alleviate some of the difficulties arising from
sparse rewards, Wiewiora [64] showed that _q_ -learning using ˜ _r_ Φ produces the _exact same sequence_
_of updates_ as _q_ -learning using _rp_ but with a different initialization of _q_ -values. In what follows, we
establish a similar result: we show that performing potential-based reward shaping has _no impact on_
_expected policy gradient_ updates; and that it can, in fact, even increase the variance of the updates.
**Property 1.** $\mathbb{E}[\hat{\Delta}(\theta, \tilde{r}_\Phi)] = \mathbb{E}[\hat{\Delta}(\theta, r_p)]$ _and_ $\mathrm{Var}(\hat{\Delta}(\theta, \tilde{r}_\Phi))$ _can be higher than_ $\mathrm{Var}(\hat{\Delta}(\theta, r_p))$_._
All proofs are deferred to Appendix A. The above points highlight some of the limitations of potential-based shaping for policy gradients and _q_-learning—both of which form the backbone of the majority
of model-free RL algorithms [58]. Furthermore, potential functions Φ _cannot_ depend on actions [41],
which restricts the class of eligible auxiliary rewards _r_aux and heuristic functions that may be used.
Finally, notice that Φ is designed independently of the agent’s underlying learning algorithm. As we
will show in the next sections, our method can autonomously discover auxiliary reward functions that
not only facilitate learning but also help mitigate various types of algorithmic limitations and biases.
**4** **Behavior Alignment Reward Function**
In this section, we introduce an objective function designed to tackle the primary challenge investigated in this paper: how to effectively leverage designer-specified auxiliary reward functions to
rapidly induce behaviors envisioned by the designer. The key observation is that naively adding
an auxiliary reward function _r_ aux to _rp_ may produce policies whose corresponding behaviors are
misaligned with respect to the behaviors induced by _rp_ . In these cases, _r_ aux should be ignored during
the search for an optimal policy. On the other hand, if _rp_ and _r_ aux may be combined in a way that
results in the desired behaviors, then combinations that produce frequent and informative feedback to
the agent should be favored, as they are likely to facilitate faster learning.
To tackle the challenges discussed above, we employ a bi-level optimization procedure. This approach
aims to create a _behavior alignment reward_ by combining _r_ aux and _rp_ using a parameterized function.
Our method is inspired by the optimal rewards framework by Singh et al. [52, 53]. Let _γφ ∈_ [0 _,_ 1) be
a _discount rate value_ parameterized by _φ ∈_ Γ.[2] Let _rϕ_ : _S × A →_ R be a _behavior alignment reward_:
a function of both _rp_ and _r_aux, parameterized by _ϕ ∈_ Υ, where Υ and Γ are function classes. One
example of a behavior alignment reward function is _rϕ_(_s, a_) := _fϕ1_(_s, a_) + _ϕ2_ _rp_(_s, a_) + _ϕ3_ _r_aux(_s, a_),
where _fϕ1_ : _S × A →_ R and _ϕ_ := (_ϕ1_, _ϕ2_, _ϕ3_). Let Alg be any gradient-, semi-gradient-, or non-gradient-based algorithm that outputs policy parameters. To mitigate possible divergence issues arising
from certain policy optimization algorithms like DQN [60, 1], we make the following simplifying
assumption, which can generally be met with appropriate regularizers and step sizes:
**Assumption 1.** _Given rϕ and γφ, the algorithm Alg_(_rϕ, γφ_) _converges to a fixed point, which we_
_denote as θ_(_ϕ, φ_) _∈_ Θ _to emphasize its indirect dependence on ϕ and φ through Alg, rϕ, and γφ._
Given this assumption, we now specify the following bi-level objective:
$$\phi^{*}, \varphi^{*} \in \underset{\phi \in \Upsilon,\, \varphi \in \Gamma}{\arg\max}\; J(\theta(\phi, \varphi)) - \lambda_{\gamma}\, \gamma_{\varphi}, \quad \text{where} \quad \theta(\phi, \varphi) := \mathrm{Alg}(r_{\phi}, \gamma_{\varphi}). \tag{2}$$
Here, _λγ_ serves as the regularization coefficient for the value of _γφ_, and Alg denotes a given policy
optimization algorithm. Let, as an example, Alg be an on-policy gradient algorithm that uses samples
to estimate the gradient ∆( _θ, rp_ ), as in (1). We can then define a corresponding variant of ∆( _θ, rp_ )
that is compatible with our formulation and objective, and which uses both _rϕ_ and _γφ_, as follows:
$$\Delta_{\text{on}}(\theta, \phi, \varphi) := \mathbb{E}_{\pi_\theta}\Bigg[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma_{\varphi}^{\,j-t}\, r_{\phi}(S_j, A_j)\Bigg]. \tag{3}$$
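As one concrete, hypothetical instance of such a parameterization, the sketch below implements _rϕ_ as a learned state-action bonus plus learnable gates on _rp_ and _r_aux, with a learnable discount constrained to (0, 1); the class and attribute names are ours, not the paper's.

```python
import torch
import torch.nn as nn

class BehaviorAlignmentReward(nn.Module):
    """r_phi(s,a) = f_{phi_1}(s,a) + phi_2 * r_p(s,a) + phi_3 * r_aux(s,a)."""

    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.f = nn.Linear(obs_dim + act_dim, 1)            # learned bonus f_{phi_1}
        self.w_p = nn.Parameter(torch.tensor(1.0))          # phi_2: gate on r_p
        self.w_aux = nn.Parameter(torch.tensor(1.0))        # phi_3: gate on r_aux
        self.gamma_logit = nn.Parameter(torch.tensor(2.0))  # parameterizes gamma_phi

    def forward(self, sa, r_p, r_aux):
        # sa: batch of concatenated state-action features; r_p, r_aux: per-step rewards.
        return self.f(sa).squeeze(-1) + self.w_p * r_p + self.w_aux * r_aux

    def gamma(self):
        # Keeps the learned discount in (0, 1), as required by the objective in (2).
        return torch.sigmoid(self.gamma_logit)
```

Note how setting the gate on _r_aux to zero recovers the adversarial-robust behavior discussed below: the auxiliary reward is simply excluded.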
Notice that the bi-level formulation in (2) is composed of three key components: outer and inner
objectives, and an outer regularization term. In what follows, we discuss the need for these.
**Need for Outer- and Inner-Level Objectives:** The **outer-level objective** in Equation (2) serves
a critical role: it evaluates different parameterizations, denoted by _ϕ_, for the behavior alignment
reward function. These parameterizations influence the induced policy _θ_ ( _ϕ, φ_ ), which is evaluated
using the performance metric _J_ . Recall that this metric quantifies the alignment of a policy with
the designer’s primary reward function, _rp_ . In essence, the outer-level objective seeks to optimize
the behavior alignment reward function to produce policies that are effective according to _rp_ . This
design adds robustness against any misspecification of the auxiliary rewards.[3] In the **inner-level optimization**, by contrast, the algorithm Alg works to identify a policy _θ_(_ϕ, φ_) that is optimal or
near-optimal with respect to _rϕ_ (which incorporates _r_aux via the behavior alignment reward). By employing a bi-level optimization
structure, several benefits emerge. When _r_ aux is well-crafted, _rϕ_ can exploit its detailed information to
give Alg frequent/dense reward feedback, thus aiding the search for an optimal _θ*_. Conversely, if _r_aux
leads to sub-optimal policies, then the influence of auxiliary rewards can be modulated or decreased
accordingly by the optimization process by adjusting _rϕ_. Consider, for example, a case where the
behavior alignment reward function is defined as _rϕ_(_s, a_) := _fϕ1_(_s, a_) + _ϕ2_ _rp_(_s, a_) + _ϕ3_ _r_aux(_s, a_).
In an adversarial setting—where the designer-specified auxiliary reward _r_ aux may lead to undesired
behavior—the bi-level optimization process has the ability to set _ϕ_ 3 to 0. This effectively allows
the behavior alignment reward function _rϕ_ to exclude _r_ aux from consideration. Such a bi-level
approach to optimizing the parameters of behavior alignment reward functions can act as a safeguard
against the emergence of sub-optimal behaviors due to a misaligned auxiliary reward, _r_ aux. This
design is particularly valuable because it allows the objective in (2) to leverage the potentially dense
reward structure of _r_ aux to provide frequent action evaluations when the auxiliary reward function is
well-specified. At the same time, the approach maintains robustness against possible misalignments.
**Need for Outer Regularization:** The bi-level optimization problem (2) may have multiple optimal
solutions for _ϕ_ —including the trivial solution where _r_ aux is always ignored. The goal of regularizing
the outer-level objective (in the form of the term _λγγφ_ ) is to incorporate a prior that adds a preference
for solutions, _ϕ*_, that provide useful and frequent evaluative feedback to the underlying RL algorithm.
In the next paragraphs, we discuss the need for such a regularizer and motivate its mathematical form.
First, recall that _sparse_ rewards can pose challenges for policy optimization. An intuitive solution
[2] Our framework can be generalized to support state-action dependent discount rates, _γ_.
[3] “Misspecification” indicates that an optimal policy for _rp_ + _r_aux may not be optimal for _rp_ alone.
to this problem could involve biasing the optimization process towards _denser_ behavior alignment
reward functions, e.g., by penalizing for sparsity of _rϕ_ . Unfortunately, the distinction between sparse
and dense rewards alone may not fully capture the nuances of what designers typically consider to be
a “good” reward function. This is the case because _a reward function can be dense and still may not_
_be informative_ ; e.g., a reward function that provides _−_ 1 to the agent in every non-goal state is dense
but fails to provide useful feedback regarding how to reach a goal state. A better characterization
of how useful (or informative) a reward function is may be constructed in terms of how _instructive_
and _instantaneous_ the evaluation or feedback it generates is. We consider a reward function to be
_instructive_ if it produces rewards that are well-aligned with the designer’s goals. A reward function
is _instantaneous_ if its corresponding rewards are dense, rather than sparse, and are more readily
indicative of the optimal action at any given state. [4] Reward functions that are both instructive and
instantaneous can alleviate issues associated with settings with sparse rewards and long horizons.
To bias our bi-level optimization objective towards this type of reward function, we introduce a
regularizer, _γφ_ . This regularizer favors solutions that can generate policies with high performance
(i.e., high expected return _J_ with respect to _rp_ ) _even when the discount factor γφ is small_ . To see why,
first notice that this regularizer encourages behavior alignment reward functions that provide more
instantaneous feedback to the agent. This has to be the case; otherwise, it would be challenging to
maximize long-term reward should the optimized alignment reward function be sparse. Second, the
regularizer promotes instructive alignment reward functions—i.e., functions that facilitate learning
policies that maximize _J_ . This is equally crucial: effective policies under the metric _J_ are the ones
that align well with the designer’s objectives as outlined in the original reward function, _rp_ .
**4.1** **Overcoming Imperfections of Policy Optimization Algorithms**
The advantages of the bi-level formulation in (2) extend beyond robustness to sub-optimality from
misspecified _r_ aux. Even with a well-specified _r_ aux, RL algorithms often face design choices, such as
the bias-variance trade-off, that can induce sub-optimal solutions. Below we present examples to
show how _bias_ in the underlying RL algorithm may be mitigated by carefully optimizing _rϕ_ and _γφ_ .
**4.1.1 Bias in policy gradients:** Recall that the popular “policy gradient” ∆( _θ, rp_ ) is not, in fact,
the gradient of any function, and using it in gradient methods may result in biased and sub-optimal
policies [44]. However, policy gradient methods based on ∆( _θ, rp_ ) remain vastly popular in the
RL literature since they tend to be sample efficient [59]. Let ∆_γ_(_θ, rp_) denote the _unbiased_ policy
gradient, where $\Delta_\gamma(\theta, r_p) := \mathbb{E}\big[\sum_{t=0}^{T} \gamma^{t} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{\,j-t}\, r_p(S_j, A_j)\big]$. We can show that, with
a sufficiently expressive parameterization, optimized _rϕ_ and _γφ_ can effectively mimic the updates that
would have resulted from using the _unbiased_ gradient ∆_γ_(_θ, rp_), even if the underlying RL algorithm
uses the _biased_ “gradient”, ∆on(_θ, ϕ, φ_), as defined in (3). Detailed proofs are in Appendix A.
**Property 2.** _There exist rϕ_ : _S × A →_ R _and γφ ∈_ [0, 1) _such that_ ∆on(_θ, ϕ, φ_) = ∆_γ_(_θ, rp_)_._
**4.1.2 Off-policy learning without importance sampling:** To increase sample efficiency when
evaluating a given policy _πθ_, it is often useful to use off-policy data collected by a different policy, _β_ .
Under the assumption that $\forall s \in \mathcal{S}, \forall a \in \mathcal{A},\ \frac{\pi_\theta(s,a)}{\beta(s,a)} < \infty$, importance ratios $\rho_j := \prod_{k=0}^{j} \frac{\pi_\theta(S_k, A_k)}{\beta(S_k, A_k)}$ can
be used to adjust the updates and account for the distribution shift between trajectories generated by
_β_ and _πθ_ . However, to avoid the high variance stemming from _ρj_, many methods tend to drop most
of the importance ratios and thus only partially correct for the distribution shift—which can lead to
bias [51]. We can show (given a sufficiently expressive parameterization for the behavior alignment
reward function) that this type of bias can also be mitigated by carefully optimizing _rϕ_ and _γφ_ .
Let us denote the unbiased off-policy update with full distribution correction as $\Delta_{\text{off}}(\theta, r_p) := \mathbb{E}_{\beta}\big[\sum_{t=0}^{T} \gamma^{t} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \rho_j\, \gamma^{\,j-t}\, r_p(S_j, A_j)\big]$. Now consider an extreme scenario where off-policy
evaluation is attempted _without any correction for distribution shift_. In this situation, and with a slight
abuse of notation, we define $\Delta_{\text{off}}(\theta, \phi, \varphi) := \mathbb{E}_{\beta}\big[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma_{\varphi}^{\,j-t}\, r_\phi(S_j, A_j)\big]$.
**Property 3.** _There exist rϕ_ : _S × A →_ R _and γφ ∈_ [0, 1) _such that_ ∆off(_θ, ϕ, φ_) = ∆off(_θ, rp_)_._
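For reference, the full-correction ratios _ρj_ used above can be computed stably in log space; a minimal sketch (function and argument names ours):

```python
import torch

def importance_ratios(logp_target, logp_behavior):
    """rho_j = prod_{k=0}^{j} pi_theta(S_k, A_k) / beta(S_k, A_k), per time step.

    Inputs are length-(T+1) tensors of per-step log-probabilities under pi_theta
    and beta; summing log-ratios avoids the numerical overflow of a raw product.
    """
    return torch.exp(torch.cumsum(logp_target - logp_behavior, dim=0))
```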
**Remark 1.** _Our method is capable of mitigating various types of algorithmic biases and imperfections_
_in underlying RL algorithms, without requiring any specialized learning rules. Additionally, thanks_
[4] E.g., if _rϕ_ ≈ _q*_, then its corresponding rewards are instantly indicative of the optimal action at any state.
_to the γφ∗_ _regularization, it favors reward functions that lead to faster learning of high-performing_
_policies aligned with the designer’s objectives, as outlined in the original reward function rp._
**5** **BARFI: Implicitly Learning Behavior Alignment Rewards**
Having introduced our bi-level objective and discussed the benefits of optimizing _rϕ_ and _γφ_, an
important question arises: although _θ_(_ϕ, φ_) can be optimized using any policy learning algorithm,
how can we efficiently identify the optimal _ϕ*_ and _φ*_ in equation (2)? Given the practical advantages
of gradient-based methods, one would naturally consider using them for optimizing _ϕ_ and _φ_ as well.
However, a key challenge in our setting lies in computing d_J_(_θ_(_ϕ, φ_))/d_ϕ_ and d_J_(_θ_(_ϕ, φ_))/d_φ_.
These computations require an analytical characterization of the impact that _rϕ_ and _γφ_ have on the
_entire optimization process_ of the inner-level algorithm, Alg.
In addressing this challenge, we initially focus on an Alg that employs policy gradients for updating
_πθ_; extensions to other update rules can be derived analogously. We start by re-writing the
expression for d_J_(_θ_(_ϕ, φ_))/d_ϕ_ using the chain rule:
$$\frac{\mathrm{d}J(\theta(\phi, \varphi))}{\mathrm{d}\phi} = \underbrace{\frac{\mathrm{d}J(\theta(\phi, \varphi))}{\mathrm{d}\theta(\phi, \varphi)}}_{(a)}\; \underbrace{\frac{\mathrm{d}\theta(\phi, \varphi)}{\mathrm{d}\phi}}_{(b)}, \tag{4}$$
where _**(a)**_ is the policy gradient at _θ_ ( _ϕ, φ_ ), and _**(b)**_ can be computed via implicit bi-level optimization,
as discussed below.
**Implicit Bi-Level Optimization:** We compute (4) by leveraging implicit gradients [14, 34, 19],
an approach previously employed, e.g., in few-shot learning [38, 49] and model-based RL algorithms [50]. First, observe that when Alg converges to _θ_ ( _ϕ, φ_ ), then it follows that
$$\Delta(\theta(\phi, \varphi), \phi, \varphi) = 0. \tag{5}$$
Let _∂f_ denote the partial derivative with respect to the immediate arguments of _f_, and d_f_ be the total
derivative as before. That is, if _f_(_x, g_(_x_)) := _xg_(_x_), then $\frac{\partial f}{\partial x}(x, g(x)) = g(x)$ and $\frac{\mathrm{d}f}{\mathrm{d}x}(x, g(x)) = g(x) + x\,\frac{\partial g}{\partial x}(x)$. Therefore, taking the total derivative of (5) with respect to _ϕ_ yields

$$\frac{\mathrm{d}\Delta(\theta(\phi, \varphi), \phi, \varphi)}{\mathrm{d}\phi} = \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi} + \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\, \frac{\partial \theta(\phi, \varphi)}{\partial \phi} = 0. \tag{6}$$
By re-arranging terms in (6), we obtain the term _**(b)**_ in (4). In particular,

$$\frac{\partial \theta(\phi, \varphi)}{\partial \phi} = -\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1} \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi}. \tag{7}$$
Furthermore, by combining (7) and (4) we obtain the desired gradient expression for _ϕ_:

$$\frac{\mathrm{d}J(\theta(\phi, \varphi))}{\mathrm{d}\phi} = -\,\frac{\partial J(\theta(\phi, \varphi))}{\partial \theta(\phi, \varphi)} \Bigg[\underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}}_{\mathbf{H}}\Bigg]^{-1} \underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi}}_{\mathbf{A}}. \tag{8}$$
Similarly, a gradient expression for _φ_ can be derived; the full derivation is detailed in Appendix E.
Using _θ*_ as shorthand for _θ_(_ϕ, φ_), we find that the terms **A** and **H** can be expressed as

$$\mathbf{A} = \mathbb{E}_{D}\Bigg[\sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \Bigg(\sum_{j=t}^{T} \gamma_{\varphi}^{\,j-t}\, \frac{\partial r_\phi(S_j, A_j)}{\partial \phi}\Bigg)^{\!\top}\Bigg], \qquad \mathbf{H} = \mathbb{E}_{D}\Bigg[\sum_{t=0}^{T} \frac{\partial \psi_{\theta^*}(S_t, A_t)}{\partial \theta^*} \sum_{j=t}^{T} \gamma_{\varphi}^{\,j-t}\, r_\phi(S_j, A_j)\Bigg].$$
When working with the equations above, we assume that the inverse **H**⁻¹ exists. To mitigate the risk of
ill-conditioning, we discuss regularization strategies for Alg in Appendix D. Notice that equations (8)
and (15) are the key elements needed to calculate the updates to _ϕ_ and _φ_ in our bi-level optimization’s
outer loop. However, computing **A** and **H** directly can be impractical for high-dimensional problems
due to the need for outer products and second derivatives. To address this, we employ two strategies:
_(1)_ We approximate **H** _[−]_ [1] using the Neumann series [40], and _(2)_ we calculate (8) and (15) via
Hessian-vector products [47], which are readily available in modern auto-diff libraries [46]. These
methods eliminate the need for explicit storage or computation of **H** or **A** .
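A minimal sketch of strategies _(1)_ and _(2)_ is given below, assuming a single flattened parameter tensor `theta` and a scalar surrogate `inner_loss` whose derivative with respect to `theta` plays the role of ∆ (so differentiating it again yields products with **H**); `eta` and `n_steps` correspond to _η_ and _n_ in Table 2. In this simplified form **H** is the symmetric Hessian of the scalar surrogate; the paper's **H**, a Jacobian of ∆, is handled analogously with vector-Jacobian products.

```python
import torch

def neumann_inverse_hvp(inner_loss, theta, v, n_steps=5, eta=0.1):
    """Approximates H^{-1} v via the truncated Neumann series [40]:
    H^{-1} v ~= eta * sum_{i=0}^{n} (I - eta*H)^i v, using only Hessian-vector
    products [47] -- H itself is never formed or stored.
    Requires eta small enough that the spectral radius of (I - eta*H) is below 1.
    """
    grad = torch.autograd.grad(inner_loss, theta, create_graph=True)[0]
    p = v.clone()    # current series term, (I - eta*H)^i v
    acc = v.clone()  # running sum of the series
    for _ in range(n_steps):
        # Hessian-vector product H p via a second backward pass.
        hvp = torch.autograd.grad(grad, theta, grad_outputs=p, retain_graph=True)[0]
        p = p - eta * hvp
        acc = acc + p
    return eta * acc
```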
The mathematical approach outlined above results in an algorithm with linear compute and memory
footprint, having _O_ ( _d_ ) complexity, where _d_ is the number of parameters for both the policy and the
reward function. Details can be found in Appendix C. We refer to our method as BARFI, an acronym
for _behavior alignment reward function’s implicit_ optimization.[5] BARFI is designed to iteratively
solve the bi-level optimization problem defined in (2). With policy regularization, the updates to _rϕ_
and _γφ_ incrementally modify _θ_(_ϕ, φ_). This enables us to initialize Alg using the fixed point achieved
in the previous inner optimization step, further reducing the time for subsequent inner optimizations.
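Putting the pieces together, BARFI's outer loop can be summarized as follows. This is a high-level sketch with hypothetical callables (`run_alg`, `implicit_grads`, `apply_update`) standing in for the inner policy optimizer, the implicit-gradient computation of Eq. (8), and a standard optimizer step; it is not the exact procedure of Appendix C.

```python
def barfi(theta0, phi, varphi, run_alg, implicit_grads, apply_update, outer_iters):
    """High-level BARFI loop: alternate inner policy optimization with outer
    implicit-gradient updates to the behavior alignment reward and discount."""
    theta = theta0
    for _ in range(outer_iters):
        # Inner level: run Alg under (r_phi, gamma_varphi) to an approximate fixed
        # point, warm-started from the previous solution to keep inner solves cheap.
        theta = run_alg(theta, phi, varphi)
        # Outer level: gradients of J(theta(phi, varphi)) - lambda_gamma * gamma_varphi
        # via Eq. (8), with Neumann-approximated inverse-Hessian-vector products.
        g_phi, g_varphi = implicit_grads(theta, phi, varphi)
        phi, varphi = apply_update(phi, varphi, g_phi, g_varphi)
    return theta, phi, varphi
```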
**6** **Empirical Analyses**
Our experiments serve multiple purposes and include detailed ablation studies. First, we demonstrate
our bi-level objective’s efficacy in discovering behavior alignment reward functions that facilitate
learning high-performing policies. We focus especially on its robustness in situations where designers
provide poorly specified or misaligned auxiliary rewards that could disrupt the learning process
(Section 6.1). Second, we present a detailed analysis of the limitations of potential-based reward
shaping, showing how it can lead to suboptimal policies (Section 6.2). We then provide a qualitative
illustration of the behavior alignment reward function learned by BARFI (Section 6.3). Finally, we
evaluate how well BARFI scales to problems with high-dimensional, continuous action and state
spaces (Section 6.4).
In the sections that follow, we examine a range of methods and reward combinations for comparison:
- _Baseline RL methods_ : We consider baseline RL methods that employ a naive reward combination
strategy: _r̃_naive(_s, a_) := _rp_(_s, a_) + _r_aux(_s, a_). In this case, the auxiliary reward from the designer
is simply added to the original reward without checks for alignment. Both the REINFORCE and
Actor-Critic algorithms are used for optimization.
- _Potential-based shaping_ : To assess how well potential-based reward shaping performs, we introduce
variants of the baseline methods. Specifically, we investigate the effectiveness of the reward function
_r̃_Φ(_s, a, s′_) := _rp_(_s, a_) + _γ_ _r_aux(_s′_) − _r_aux(_s_).
- BARFI: We use REINFORCE as the underlying RL algorithm when implementing BARFI and
define _rϕ_(_s, a_) := _ϕ1_(_s, a_) + _ϕ2_(_s_)_rp_(_s, a_) + _ϕ3_(_s_)_r_aux(_s, a_). Our implementation includes a
warm-up period wherein the agent collects data for a fixed number of episodes, using _r̃_naive, prior
to performing the first updates to _ϕ_ and _φ_ (see Appendix 5 for the complete algorithm).
We evaluate each algorithm across four distinct environments: GridWorld, MountainCar [58], CartPole [16], and HalfCheetah-v4 [9]. These domains offer increasing levels of complexity and are
intended to assess the algorithms’ adaptability. Furthermore, we examine their performance under a
variety of auxiliary reward functions, ranging from well-aligned to misaligned with respect to the
designer’s intended objectives.
In our experiments, we investigate different types of auxiliary reward functions for each environment:
some are action-dependent, while others are designed to reward actions aligned with either effective
or ineffective known policies. These functions, therefore, vary in their potential to either foster rapid
learning or inadvertently mislead the agent away from the designer’s primary objectives, hindering the
efficiency of the learning process. Comprehensive details of each environment and their corresponding
auxiliary reward functions can be found in Appendix F.
**6.1** **BARFI’s Robustness to Misaligned Auxiliary Reward Functions**
In this section, we evaluate the performance of various methods for reward combination, particularly
in scenarios where auxiliary reward functions can either be well-aligned with a designer’s intended
goals or be misaligned or poorly specified, thus inadvertently hindering efficient learning. We
introduce two types of auxiliary reward functions for CartPole. First, we used domain knowledge to
design an _r_ aux that provides bonuses when the agent’s actions align with a known effective policy
in this domain. Second, we designed an adversarial example where the auxiliary reward function
rewards actions that are consistent with a particularly poorly performing policy. For MountainCar, we
first leveraged knowledge about an _energy pumping policy_ (i.e., a well-known effective policy [18]) to
[5] “BARFI” commonly refers to a type of South Asian sweet confectionery, typically pronounced ‘bur-fee’.
Table 1: Summary of the performance of various reward combination methods and types of _r_aux.

| Method for Reward Combination | CartPole: well-aligned _r_aux | CartPole: misaligned _r_aux | MountainCar: well-aligned _r_aux (w.r.t. _energy policy_) | MountainCar: partially-aligned _r_aux (w.r.t. _high velocity policy_) |
|---|---|---|---|---|
| BARFI (_our method_) | 487.2 ± 9.4 | 475.5 ± 15.5 | 0.99 ± 0.0 | 0.90 ± 0.1 |
| _r̃_naive (_naive reward combination_) | 498.9 ± 1.0 | 9.04 ± 0.2 | 0.99 ± 0.0 | 0.63 ± 0.1 |
| _r̃_Φ (_potential-based shaping_) | 8.98 ± 0.2 | 500 ± 0.0 | 0.00 ± 0.0 | 0.00 ± 0.0 |

BARFI’s performance compared to two baselines that use _r̃_naive and _r̃_Φ, respectively. CartPole uses
an action-dependent _r_aux function that either rewards agents when actions align with a known effective
policy (_well-aligned r_aux) or with a poorly-performing policy (_misaligned r_aux). MountainCar uses
either an action-dependent function aligned with an energy-pumping policy [18] or a partially-aligned
function incentivizing higher velocities. BARFI consistently achieves near-optimal performance
across scenarios, even if given poorly specified/misaligned auxiliary rewards. Competitors, by
contrast, often induce suboptimal policies. In the original table, performances significantly below the
optimal are highlighted in red.
craft an auxiliary reward function that provides bonuses for actions in line with such a control strategy.
We also experimented with a partially-aligned auxiliary function that rewards high velocity—a factor
not particularly indicative of high performance.
Table 1 summarizes the results across different reward functions and combination methods. The results
suggest that if auxiliary rewards provide positive feedback when agents’ actions align with effective
policies, naive combination methods perform well. In such cases, auxiliary rewards effectively
“nudge” agents towards emulating expert actions. However, our experimental results also indicate that
_all_ baseline methods are susceptible to poor performance when auxiliary rewards are not well aligned
with the designer’s goals. We provide more discussion for potential-based shaping in Section 6.2.
The key takeaway from the experimental results in Table 1 is that BARFI consistently performs
well across various domains and under different types of auxiliary rewards. Specifically, when
designer-specified feedback is appropriate and can assist in accelerating learning, BARFI
efficiently exploits it to produce high-performing policies. Conversely, if auxiliary rewards are
misaligned with the designer’s intended goals, BARFI is capable of adapting and effectively
dismissing “misleading rewards”. This adaptability ensures that high-performing policies can
be reliably identified. Other methods, by contrast, succeed only in some of these scenarios.
Importantly, the unpredictability of whether a given auxiliary reward function will aid or hinder
learning makes such alternative methods less reliable, as they may fail to learn effective policies.
**6.2** **Pitfalls of potential-based reward shaping**
We now turn our attention to the (possibly negative) influence of action-dependent auxiliary rewards,
particularly when used in combination with potential-based reward shaping. Our results in Table 1
reveal a key limitation: potential-based shaping struggles to learn efficient policies even when
auxiliary rewards are well-aligned with effective strategies. This shortcoming is attributable to
the action-dependent nature of the auxiliary rewards, which compromises the potential shaping
technique’s guarantee of policy optimality.
As there is no prescribed way of designing potential-based shaping when _r_aux is action-dependent, we
use a direct extension of the original formulation [41] by considering _r̃_Φ(_s, a, s′, a′_) := _rp_(_s, a_) +
_γ_ _r_aux(_s′, a′_) − _r_aux(_s, a_). Furthermore, we designed an auxiliary reward function, _r_aux, that is well
aligned: it provides positive reward signals of fixed magnitude both for (_s, a_) and (_s′, a′_) whenever
the agent’s actions coincide with the optimal policy. Notice, however, that if _γ_ < 1, the resultant
value of _γ_ _r_aux(_s′, a′_) − _r_aux(_s, a_) is _negative_. Such a negative component may deter the agent
from selecting actions that are otherwise optimal, depending on how _r_aux and _rp_ differ in magnitude.
Conversely, potential-based shaping can also occasionally perform well under misaligned rewards. In
these cases, the shaping function may yield a _positive_ value whenever the agent selects an optimal
action, which could induce well-performing behaviors.
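To make the sign argument concrete, suppose the well-aligned _r_aux pays a fixed bonus _b_ > 0 exactly when the chosen action matches the optimal policy. Along an optimal trajectory, each step's shaping contribution is then

$$\gamma\, r_{\text{aux}}(s', a') - r_{\text{aux}}(s, a) = \gamma b - b = -(1-\gamma)\, b < 0 \quad \text{for } \gamma < 1,$$

a per-step penalty that grows as _γ_ shrinks and that can outweigh a sparse _rp_ along most of the trajectory.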
[Figure 2: (Left) The GridWorld used in Section 6.3: the agent starts at _s_0 (bottom-left), a misspecified auxiliary bonus of +50 is placed at the central state, and the goal (+100) is at the top-right corner. The remaining panels, referenced below as [center left] and [right], are not recoverable from the extraction; the visible axis ticks range from −20 to 100.]
**6.3** **What does** _rϕ_ **learn?**
We now investigate BARFI’s performance and robustness in the GridWorld when operating under
misspecified auxiliary reward functions. Consider the reward function depicted in Figure 2 [left].
This reward function provides the agent with a bonus for visiting the state at the center, akin to
providing intermediate feedback to the agent when it makes progress towards the goal. However,
such intermediate positive feedback can lead to behaviors where the agent repeatedly cycles around
the middle state (i.e., behaviors that are misaligned with the original objective of reaching the goal
state at the top right corner of the grid). Importantly, BARFI is capable of autonomously realizing
that it should disregard such misleading incentives (Figure 2 [center left]), thereby avoiding poorly-performing behaviors that focus on revisiting irrelevant central states. Similarly, when CartPole
operates under a misspecified _r_aux (Figure 2 [right]), BARFI is capable of rapidly adapting (after
a warm-up period) and effectively disregarding misleading auxiliary reward signals. These results
highlight once again BARFI’s robustness when faced with reward misspecification.
**6.4** **Scalability to High-Dimensional Continuous Control**
One might wonder whether computing implicit gradients for _ϕ_ and _φ_ would be feasible in high-dimensional problems, due to the computational cost of inverting Hessians. To address this concern,
we leverage Neumann series approximation with Hessian-vector products (see Appendix C) and conduct further experiments, as shown in Figure 3. These experiments focus on evaluating the scalability
of BARFI in control problems with high-dimensional state spaces and continuous actions—scenarios
that often rely on neural networks for both the policy and critic function approximators. For a more
comprehensive evaluation, we also introduced an alternative method named BARFI unrolled.
Unlike BARFI, which uses implicit bi-level optimization, BARFI unrolled employs path-wise
bi-level optimization. It maintains a complete record of the optimization path to determine updates
for _ϕ_ and _φ_ . Further details regarding this alternative method can be found in Appendix C.6.
We conducted experiments on the HalfCheetah-v4 domain and investigated, in particular, a reward
function comprising two components with varying weights. This empirical analysis was designed
to help us understand how different weight assignments to each reward component could influence
the learning process. Specifically, in HalfCheetah-v4, the agent receives a positive reward _rp_
proportional to how much it moved forward. It also incurs a small negative reward (concretely, an
auxiliary reward, _r_ aux( _s, a_ ) := _c∥a∥_ [2] 2 [, known as a] _[ control cost]_ [) for the torque applied to its joints.]
A hyperparameter _c_ determines the balance between these rewards. The naive combination of such
primary and auxiliary rewards is defined as ˜ _r_ naive( _s, a_ ) = _rp_ ( _s, a_ ) + _r_ aux( _s, a_ ). Figure 3 [left] shows
that the baselines and both variants of BARFI appear to learn effectively. With alternative reward
weighting schemes, however, only BARFI and BARFI unrolled show learning progress, as seen
in Figure 3 [middle]. It is worth noting that path-wise bi-level optimization can become impractical as
the number of update steps in (4) increases, due to growing computational and memory requirements
(Figure 3 [right]). Although we do not recommend BARFI unrolled, we include its results
for completeness. Additional ablation studies on _(a)_ the effect of the inner optimization step; _(b)_
Neumann approximations; _(c)_ decay of _γ_ ; and _(d)_ returns based on _rϕ_, are provided in Appendix H.
Figure 3: Results for the MuJoCo environment. (**Left**) The auxiliary reward is defined to be −_c_∥_a_∥²₂, where _c_ is a
positive hyperparameter and _a_ is the continuous high-dimensional action vector. (**Middle**) Similar setting as
before, but using an amplified variant of the auxiliary reward: −4_c_∥_a_∥²₂. It is worth highlighting that even under
alternative reward weighting schemes, both variants of our (behavior-aligned) bi-level optimization methods
demonstrate successful learning. Learning curves correspond to the mean return over 15 trials, and the shaded
regions correspond to one standard error. (**Right**) Required compute and memory for BARFI unrolled,
compared to BARFI, as a function of the number of inner-optimization updates. This figure also showcases
BARFI’s characteristics under various orders of Neumann approximation.
**7** **Related work**
This paper focuses primarily on how to efficiently leverage auxiliary rewards _r_ aux. Notice, however,
that in the absence of _r_ aux, the resulting learned behavior alignment rewards _rϕ_ may be interpreted
as _intrinsic rewards_ [70, 71]. Furthermore, several prior works have investigated meta-learning
techniques, which are methods akin to the bi-level optimization procedures used in our work. Such
prior works have employed meta-learning in various settings, including automatically inferring the
effective return of trajectories [68, 62, 7, 71], parameters of potential functions [72, 28, 17], targets
for TD learning [69], rewards for planning [54, 23], and even fully specified reinforcement learning
update rules [33, 45]. Additionally, various other relevant considerations to effectively learning
rewards online have been discussed by Armstrong et al. [5]. Our work complements these efforts
by focusing on the reward alignment problem, specifically in settings where auxiliary information
is available. An extended discussion on related works can be found in Appendix B. It is worth
mentioning that among the above-mentioned techniques, most rely on path-wise meta-gradients.
As discussed in Section 6.4, this approach can be disadvantageous as it often performs only one or
a few inner-optimization steps, which limits its ability to fully characterize the result of the inner
optimization [67]. Further, it requires caching intermediate steps, which increases computational and
memory costs. BARFI, by contrast, exploits implicit gradients to alleviate these issues by directly
characterizing the fixed point of Alg induced by learned behavior alignment rewards.
Finally, it is also important to highlight that a concurrent work on reward alignment using bi-level
optimization was made publicly available after our manuscript was submitted for peer review
at NeurIPS [10]. While our work analyzes drawbacks of potential-based shaping and establishes
different forms of correction that can be performed via bi-level optimization, this concurrent work
provides complementary analyses on the convergence rates of bi-level optimization, as well as a
discussion on its potential applications to Reinforcement Learning from Human Feedback (RLHF).
**8** **Conclusion and Future Work**
In this paper, we introduced BARFI, a novel framework that empowers RL practitioners—who may
not be experts in the field—to incorporate domain knowledge through heuristic auxiliary reward
functions. Our framework allows for more expressive reward functions to be learned while ensuring
they remain aligned with a designer’s original intentions. BARFI can also identify reward functions
that foster faster learning while mitigating various limitations and biases in underlying RL algorithms.
We empirically show that BARFI is effective in training agents in sparse-reward scenarios where
(possibly poorly-specified) auxiliary reward information is available. If the provided auxiliary rewards
are determined to be misaligned with the designer’s intended goals, BARFI autonomously adapts
and effectively disincentivizes their use as needed. This adaptability results in a reliable pathway
to identifying high-performing policies. The conceptual insights offered by this work provide RL
practitioners with a structured way to design more robust and easy-to-optimize reward functions. We
believe this will contribute to making RL more accessible to a broader audience.
**Acknowledgement and Funding Disclosures**
We thank Andy Barto for invaluable discussions and insightful feedback on an earlier version of this
manuscript, which significantly improved the quality of our work.
This work is partially supported by the National Science Foundation under grant no. CCF-2018372
and by a gift from the Berkeley Existential Risk Initiative.
**References**
[1] Joshua Achiam, Ethan Knight, and Pieter Abbeel. Towards characterizing divergence in deep
q-learning. _arXiv preprint arXiv:1903.08894_, 2019.
[2] Akshay Agrawal, Brandon Amos, Shane Barratt, Stephen Boyd, Steven Diamond, and Zico
Kolter. Differentiable convex optimization layers. _arXiv preprint arXiv:1910.12430_, 2019.
[3] Ron Amit, Ron Meir, and Kamil Ciosek. Discount factor as a regularizer in reinforcement
learning. In _International Conference on Machine Learning_, 2020.
[4] Brandon Amos and J Zico Kolter. Optnet: Differentiable optimization as a layer in neural
networks. In _International Conference on Machine Learning_, 2017.
[5] Stuart Armstrong, Jan Leike, Laurent Orseau, and Shane Legg. Pitfalls of learning a reward
function online. _arXiv preprint arXiv:2004.13654_, 2020.
[6] Karl J Åström and Tore Hägglund. PID control. _IEEE Control Systems Magazine_, 2006.
[7] Sarah Bechtle, Artem Molchanov, Yevgen Chebotar, Edward Grefenstette, Ludovic Righetti,
Gaurav Sukhatme, and Franziska Meier. Meta-learning via learned loss. _arXiv preprint_
_arXiv:1906.05374_, 2019.
[8] Yoshua Bengio. Gradient-based optimization of hyperparameters. _Neural computation_, 2000.
[9] Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang,
and Wojciech Zaremba. Openai gym. _arXiv preprint arXiv:1606.01540_, 2016.
[10] Souradip Chakraborty, Amrit Singh Bedi, Alec Koppel, Dinesh Manocha, Huazheng Wang,
Furong Huang, and Mengdi Wang. Aligning agent policy with externalities: Reward design via
bilevel rl. _arXiv preprint arXiv:2308.02585_, 2023.
[11] Xu Chu Dennis Ding, Stephen L Smith, Calin Belta, and Daniela Rus. LTL control in uncertain
environments with probabilistic satisfaction guarantees. _IFAC Proceedings Volumes_, 2011.
[12] Chuong B Do, Chuan-Sheng Foo, and Andrew Y Ng. Efficient multiple hyperparameter learning
for log-linear models. In _Advances in Neural Information Processing Systems_, 2007.
[13] Justin Domke. Generic methods for optimization-based modeling. In _Artificial Intelligence and_
_Statistics_, 2012.
[14] Asen L Dontchev and R Tyrrell Rockafellar. _Implicit functions and solution mappings_, volume
543. Springer, 2009.
[15] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. In _Proceedings of the 34th International Conference on Machine_
_Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017_, 2017.
[16] Razvan V Florian. Correct equations for the dynamics of the cart-pole system. _Center for_
_Cognitive and Neural Studies (Coneural), Romania_, 2007.
[17] Zhao-Yang Fu, De-Chuan Zhan, Xin-Chun Li, and Yi-Xing Lu. Automatic successive reinforcement learning with multiple auxiliary rewards. In _IJCAI_, 2019.
[18] Sina Ghiassian, Banafsheh Rafiee, Yat Long Lo, and Adam White. Improving performance in
reinforcement learning by breaking generalization in neural networks. In _Proceedings of the_
_19th International Conference on Autonomous Agents and Multiagent Systems, AAMAS ’20,_
_Auckland, New Zealand, May 9-13, 2020_, 2020.
[19] Stephen Gould, Basura Fernando, Anoop Cherian, Peter Anderson, Rodrigo Santa Cruz, and
Edison Guo. On differentiating parameterized argmin and argmax problems with application to
bi-level optimization. _arXiv preprint arXiv:1607.05447_, 2016.
[20] Edward Grefenstette, Brandon Amos, Denis Yarats, Artem Molchanov, Franziska Meier, and
Kyunghyun Cho. higher: A pytorch meta-learning library. 2020.
[21] Ricardo Grunitzki, Bruno C da Silva, and Ana LC Bazzan. A flexible approach for designing
optimal reward functions. In _Proceedings of the 16th Conference on Autonomous Agents and_
_MultiAgent Systems_, 2017.
[22] Ricardo Grunitzki, Bruno C da Silva, and Ana LC Bazzan. Towards designing optimal reward functions in multi-agent reinforcement learning problems. In _2018 International Joint_
_Conference on Neural Networks (IJCNN)_, 2018.
[23] Xiaoxiao Guo, Satinder Singh, Richard Lewis, and Honglak Lee. Deep learning for reward
design to improve monte carlo tree search in atari games. _arXiv preprint arXiv:1604.07095_,
2016.
[24] Joshua Hare. Dealing with sparse rewards in reinforcement learning. _arXiv preprint_
_arXiv:1910.09281_, 2019.
[25] Geoffrey Hinton, Nitish Srivastava, and Kevin Swersky. Neural networks for machine learning,
lecture 6a: Overview of mini-batch gradient descent, 2012.
[26] Mark K Ho, Michael L Littman, Fiery Cushman, and Joseph L Austerweil. Teaching with
rewards and punishments: Reinforcement or communication? In _CogSci_, 2015.
[27] Mark K Ho, Fiery Cushman, Michael L Littman, and Joseph L Austerweil. People teach with
rewards and punishments as communication, not reinforcements. _Journal of Experimental_
_Psychology: General_, 2019.
[28] Yujing Hu, Weixun Wang, Hangtian Jia, Yixiang Wang, Yingfeng Chen, Jianye Hao, Feng Wu,
and Changjie Fan. Learning to utilize shaping rewards: A new approach of reward shaping.
_Advances in Neural Information Processing Systems_, 2020.
[29] Rodrigo Toro Icarte, Toryn Klassen, Richard Valenzano, and Sheila McIlraith. Using reward
machines for high-level task specification and decomposition in reinforcement learning. In
_International Conference on Machine Learning_, 2018.
[30] Rodrigo Toro Icarte, Ethan Waldie, Toryn Klassen, Rick Valenzano, Margarita Castro, and
Sheila McIlraith. Learning reward machines for partially observable reinforcement learning.
_Advances in Neural Information Processing Systems_, 2019.
[31] Rodrigo Toro Icarte, Toryn Q Klassen, Richard Valenzano, and Sheila A McIlraith. Reward
machines: Exploiting reward function structure in reinforcement learning. _arXiv preprint_
_arXiv:2010.03950_, 2020.
[32] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _3rd_
_International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May_
_7-9, 2015, Conference Track Proceedings_, 2015.
[33] Louis Kirsch, Sjoerd van Steenkiste, and Jürgen Schmidhuber. Improving generalization in
meta reinforcement learning using learned objectives. _arXiv preprint arXiv:1910.04098_, 2019.
[34] Steven G Krantz and Harold R Parks. _The implicit function theorem: history, theory, and_
_applications_ . Springer Science & Business Media, 2012.
[35] Hadas Kress-Gazit, Georgios E Fainekos, and George J Pappas. Temporal-logic-based reactive
mission and motion planning. _IEEE Transactions on Robotics_, 2009.
[36] Karl Kunisch and Thomas Pock. A bilevel optimization approach for parameter learning in
variational models. _SIAM Journal on Imaging Sciences_, 2013.
[37] Jan Larsen, Lars Kai Hansen, Claus Svarer, and M Ohlsson. Design and regularization of neural
networks: the optimal use of a validation set. In _Neural Networks for Signal Processing VI._
_Proceedings of the 1996 IEEE Signal Processing Society Workshop_, 1996.
[38] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with
differentiable convex optimization. In _Proceedings of the IEEE/CVF Conference on Computer_
_Vision and Pattern Recognition_, 2019.
[39] Michael L Littman, Ufuk Topcu, Jie Fu, Charles Isbell, Min Wen, and James MacGlashan.
Environment-independent task specifications via GLTL. _arXiv preprint arXiv:1704.04341_,
2017.
[40] Jonathan Lorraine, Paul Vicol, and David Duvenaud. Optimizing millions of hyperparameters
by implicit differentiation. In _International Conference on Artificial Intelligence and Statistics_,
2020.
[41] Andrew Y Ng, Daishi Harada, and Stuart Russell. Policy invariance under reward transformations: Theory and application to reward shaping. In _ICML_, 1999.
[42] Alex Nichol, Joshua Achiam, and John Schulman. On first-order meta-learning algorithms.
_CoRR_, 2018.
[43] Scott Niekum, Andrew G Barto, and Lee Spector. Genetic programming for reward function
search. _IEEE Transactions on Autonomous Mental Development_, 2010.
[44] Chris Nota and Philip S. Thomas. Is the policy gradient a gradient? _arXiv preprint_
_arXiv:1906.07073_, 2020.
[45] Junhyuk Oh, Matteo Hessel, Wojciech M Czarnecki, Zhongwen Xu, Hado van Hasselt, Satinder
Singh, and David Silver. Discovering reinforcement learning algorithms. _arXiv preprint_
_arXiv:2007.08794_, 2020.
[46] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan,
Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas
Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy,
Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style,
high-performance deep learning library. In _Advances in Neural Information Processing Systems_
_32_ . Curran Associates, Inc., 2019.
[47] Barak A Pearlmutter. Fast exact multiplication by the hessian. _Neural computation_, 1994.
[48] Silviu Pitis. Rethinking the discount factor in reinforcement learning: A decision theoretic
approach. In _Proceedings of the AAAI Conference on Artificial Intelligence_, 2019.
[49] Aravind Rajeswaran, Chelsea Finn, Sham M Kakade, and Sergey Levine. Meta-learning with
implicit gradients. In _Advances in Neural Information Processing Systems_, 2019.
[50] Aravind Rajeswaran, Igor Mordatch, and Vikash Kumar. A game theoretic framework for model
based reinforcement learning. _arXiv preprint arXiv:2004.07804_, 2020.
[51] John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal
policy optimization algorithms. _arXiv preprint arXiv:1707.06347_, 2017.
[52] Satinder Singh, Richard L Lewis, and Andrew G Barto. Where do rewards come from. In
_Proceedings of the Annual Conference of the Cognitive Science Society_, 2009.
[53] Satinder Singh, Richard L Lewis, Andrew G Barto, and Jonathan Sorg. Intrinsically motivated
reinforcement learning: An evolutionary perspective. _IEEE Transactions on Autonomous Mental_
_Development_, 2010.
[54] Jonathan Sorg, Richard L Lewis, and Satinder Singh. Reward design via online gradient ascent.
_Advances in Neural Information Processing Systems_, 2010.
[55] Jonathan Sorg, Satinder P Singh, and Richard L Lewis. Internal rewards mitigate agent
boundedness. In _Proceedings of the 27th international conference on machine learning (ICML-_
_10)_, 2010.
[56] Jonathan Sorg, Satinder P Singh, and Richard L Lewis. Optimal rewards versus leaf-evaluation
heuristics in planning agents. In _Proceedings of the AAAI Conference on Artificial Intelligence_,
2011.
[57] Jonathan Daniel Sorg. _The Optimal Reward Problem: Designing Effective Reward for Bounded_
_Agents._ PhD thesis, University of Michigan, 2011.
[58] Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_ . MIT press,
2018.
[59] Philip Thomas. Bias in natural actor-critic algorithms. In _International Conference on Machine_
_Learning_, 2014.
[60] John N Tsitsiklis and Benjamin Van Roy. An analysis of temporal-difference learning with
function approximation. _IEEE Transactions on Automatic Control_, 1997.
[61] Marin Vlastelica, Anselm Paulus, Vít Musil, Georg Martius, and Michal Rolínek. Differentiation
of blackbox combinatorial solvers. _arXiv preprint arXiv:1912.02175_, 2019.
[62] Yufei Wang, Qiwei Ye, and Tie-Yan Liu. Beyond exponentially discounted sum: Automatic
learning of return function. _arXiv preprint arXiv:1905.11591_, 2019.
[63] Martha White. Unifying task specification in reinforcement learning. In _International Confer-_
_ence on Machine Learning_, 2017.
[64] Eric Wiewiora. Potential-based shaping and q-value initialization are equivalent. _Journal of_
_Artificial Intelligence Research_, 2003.
[65] Eric M Wolff, Ufuk Topcu, and Richard M Murray. Robust control of uncertain markov decision
processes with temporal logic specifications. In _2012 IEEE 51st IEEE Conference on Decision_
_and Control (CDC)_, 2012.
[66] David H Wolpert and Kagan Tumer. Optimal reward functions in distributed reinforcement
learning. In _Intelligent agent technology: Research and development_ . World Scientific, 2001.
[67] Yuhuai Wu, Mengye Ren, Renjie Liao, and Roger Grosse. Understanding short-horizon bias in
stochastic meta-optimization. _arXiv preprint arXiv:1803.02021_, 2018.
[68] Zhongwen Xu, Hado P van Hasselt, and David Silver. Meta-gradient reinforcement learning. In
_Advances in Neural Information Processing Systems_, 2018.
[69] Zhongwen Xu, Hado P van Hasselt, Matteo Hessel, Junhyuk Oh, Satinder Singh, and David
Silver. Meta-gradient reinforcement learning with an objective discovered online. _Advances in_
_Neural Information Processing Systems_, 2020.
[70] Zeyu Zheng, Junhyuk Oh, and Satinder Singh. On learning intrinsic rewards for policy gradient
methods. In _Advances in Neural Information Processing Systems_, 2018.
[71] Zeyu Zheng, Junhyuk Oh, Matteo Hessel, Zhongwen Xu, Manuel Kroiss, Hado Van Hasselt,
David Silver, and Satinder Singh. What can learned intrinsic rewards capture? In _International_
_Conference on Machine Learning_, 2020.
[72] Haosheng Zou, Tongzheng Ren, Dong Yan, Hang Su, and Jun Zhu. Reward shaping via
meta-learning. _arXiv preprint arXiv:1901.09330_, 2019.
**Behavior Alignment via Reward Function Optimization**
**(Supplemental Material)**
Table 2: Notations

| **Symbol** | **Description** |
| --- | --- |
| $\theta$ | Parameters for the policy $\pi$ |
| $\phi$ | Parameters for the reward function |
| $\varphi$ | Parameters for the learned $\gamma$ |
| $\pi_\theta, r_\phi, \gamma_\varphi$ | Functional forms of the policy, reward, and $\gamma$ with their respective parameters |
| $\alpha_\theta, \alpha_\phi, \alpha_\varphi$ | Step sizes for the respective parameters |
| $\lambda_\theta, \lambda_\phi, \lambda_\varphi$ | Regularization coefficients for the policy, reward, and $\gamma$ functions |
| $\delta$ | Number of on-policy samples collected between subsequent updates to $\phi, \varphi$ |
| $\eta$ | Eigenvalue scaling factor for the Neumann approximation |
| $n$ | Number of loops used in the Neumann approximation |
| optim | Any standard optimizer (e.g., Adam, RMSprop, SGD) that takes gradients as input and outputs the appropriate update |
| $E$ | Total number of episodes to sample from the environment |
| $N_i$ | Number of updates performed when updating $\pi$ via Alg |
| $N_0$ | Number of initial (warmup) updates to be performed |
| $\tau$ | Sample of a trajectory from a full episode |
**A** **Proofs for Theoretical Results**
In this section, we provide proofs for Property 1, Property 2, and Property 3. For the purpose of these
proofs, we introduce some additional notation. To have a unified MDP notation for goal-based and
time-based tasks, we consider that in the time-based task time is part of the state, so that Markovian
dynamics are ensured.
The (un-normalized) discounted and (un-normalized) undiscounted visitation probabilities are denoted as

$$d^\pi_\gamma(s, a) := \sum_{t=0}^{T} \gamma^t \Pr(S_t = s, A_t = a; \pi), \tag{9}$$

$$\bar{d}^\pi(s, a) := \sum_{t=0}^{T} \Pr(S_t = s, A_t = a; \pi). \tag{10}$$

We can normalize the latter so that it is a distribution as follows:

$$d^\pi(s, a) := \frac{\bar{d}^\pi(s, a)}{\sum_{s' \in \mathcal{S}, a' \in \mathcal{A}} \bar{d}^\pi(s', a')}.$$
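To make these quantities concrete, the following is a minimal sketch (our own illustration, not from the paper's codebase) of how $d^\pi_\gamma$ and $\bar{d}^\pi$ can be estimated by Monte Carlo from sampled trajectories; the trajectory format is an assumption for illustration.

```python
from collections import defaultdict

def visitation_estimates(trajectories, gamma):
    """Monte-Carlo estimates of the (un-normalized) discounted and undiscounted
    visitation probabilities; each trajectory is assumed to be a list of
    (state, action) pairs."""
    d_gamma = defaultdict(float)   # estimate of d^pi_gamma(s, a), Eq. (9)
    d_bar = defaultdict(float)     # estimate of bar{d}^pi(s, a), Eq. (10)
    for traj in trajectories:
        for t, (s, a) in enumerate(traj):
            d_gamma[(s, a)] += gamma ** t / len(trajectories)
            d_bar[(s, a)] += 1.0 / len(trajectories)
    return d_gamma, d_bar

# Normalizing bar{d}^pi by its total mass yields the distribution d^pi(s, a).
```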
**Property 1.** _The expected update performed by the biased policy gradient update is the same when using the primary reward and the reward modified with potential-based shaping, i.e., $\Delta(\theta, \tilde{r}) = \Delta(\theta, r_p)$. Further, the variance of the update when using potential-based reward shaping can be higher than the variance of the update performed using the primary reward, i.e., $\mathrm{Var}\big(\hat\Delta(\theta, \tilde{r})\big) \geq \mathrm{Var}\big(\hat\Delta(\theta, r_p)\big)$._
_Proof._ **Part 1: Equality of the expected update**

$$
\begin{aligned}
\Delta(\theta, \tilde{r}) &= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\, \tilde{r}(S_j, A_j)\right] \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\big(r_p(S_j, A_j) + \gamma \Phi(S_{j+1}) - \Phi(S_j)\big)\right] \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\, r_p(S_j, A_j)\right] + \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\big(\gamma \Phi(S_{j+1}) - \Phi(S_j)\big)\right] \\
&\overset{(a)}{=} \Delta(\theta, r_p) + \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t)\big(\gamma^{T-t+1} \Phi(S_{T+1}) - \Phi(S_t)\big)\right] \\
&\overset{(b)}{=} \Delta(\theta, r_p) + \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t)\big(\gamma^{T-t+1} c - \Phi(S_t)\big)\right] \\
&\overset{(c)}{=} \Delta(\theta, r_p) + \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \big(\gamma^{T-t+1} c - \Phi(S_t)\big)\, \mathbf{E}_{\pi_\theta}\big[\psi_\theta(S_t, A_t) \,\big|\, S_t\big]\right] \\
&\overset{(d)}{=} \Delta(\theta, r_p),
\end{aligned}
$$

where (a) holds because, on expansion of the future return, the intermediate potential values telescope and cancel out, (b) holds because $S_{T+1}$ is the terminal state and the potential function is defined to be a fixed constant $c$ for any terminal state [41], (c) holds from the law of total expectation, and (d) holds because

$$\mathbf{E}_{\pi_\theta}\big[\psi_\theta(S_t, A_t) \,\big|\, S_t\big] = \sum_{a \in \mathcal{A}} \pi_\theta(S_t, a)\, \frac{\partial \ln \pi_\theta(S_t, a)}{\partial \theta} = \sum_{a \in \mathcal{A}} \frac{\partial \pi_\theta(S_t, a)}{\partial \theta} = \frac{\partial}{\partial \theta} \sum_{a \in \mathcal{A}} \pi_\theta(S_t, a) = 0.$$
In the stochastic setting, i.e., when using sample-average estimates instead of the true expectation, $\gamma^{T-t+1} c - \Phi(S_t)$ is analogous to a state-dependent baseline for the sum of discounted future primary rewards. It may reduce or increase the variance of $\hat\Delta(\theta, r_p)$, depending on this baseline's covariance with $\sum_{j=t}^{T} \gamma^{j-t} r_p(S_j, A_j)$.

**Note:** Since the potential at the terminal state is a fixed constant $c$, we use the value $c = 0$, in accordance with [41].
**Part 2: Variance characterization**
For this result, which concerns the possibility of the variance being higher when using potential-based reward shaping, we demonstrate the claim using a simple example. We consider the single-step case, wherein an episode lasts for one time step: the agent takes an action $A_0$ at the starting state $S_0$ and then transitions to the terminal state. Hence, the stochastic update $\hat\Delta(\theta, \tilde{r})$ can be written as

$$\hat\Delta(\theta, \tilde{r}) = \psi_\theta(S_0, A_0)\, \tilde{r}(S_0, A_0) = \psi_\theta(S_0, A_0)\big(r_p(S_0, A_0) - \Phi(S_0)\big),$$

where we use the fact that $\Phi$ is 0 for terminal states; similarly, $\hat\Delta(\theta, r_p) = \psi_\theta(S_0, A_0)\, r_p(S_0, A_0)$. For the purpose of this proof we consider a scalar parameter $\theta \in \mathbb{R}$, so that $\psi_\theta(\cdot, \cdot) \in \mathbb{R}$.
Hence, $\mathrm{Var}\big(\hat\Delta(\theta, \tilde{r})\big)$ can be written as:

$$
\begin{aligned}
\mathrm{Var}\big(\hat\Delta(\theta, \tilde{r})\big) &= \mathbf{E}\big[\hat\Delta(\theta, \tilde{r})^2\big] - \mathbf{E}\big[\hat\Delta(\theta, \tilde{r})\big]^2 \\
&= \mathbf{E}\big[\big(\psi_\theta(S_0, A_0)(r_p(S_0, A_0) - \Phi(S_0))\big)^2\big] - \mathbf{E}\big[\psi_\theta(S_0, A_0)(r_p(S_0, A_0) - \Phi(S_0))\big]^2 \\
&\overset{(a)}{=} \mathbf{E}\big[\psi_\theta(S_0, A_0)^2\big(r_p(S_0, A_0)^2 + \Phi(S_0)^2 - 2\Phi(S_0)\, r_p(S_0, A_0)\big)\big] - \mathbf{E}\big[\psi_\theta(S_0, A_0)\, r_p(S_0, A_0)\big]^2 \\
&= \mathbf{E}\big[\psi_\theta(S_0, A_0)^2\big(\Phi(S_0)^2 - 2\Phi(S_0)\, r_p(S_0, A_0)\big)\big] + \underbrace{\mathbf{E}\big[\psi_\theta(S_0, A_0)^2\, r_p(S_0, A_0)^2\big] - \mathbf{E}\big[\psi_\theta(S_0, A_0)\, r_p(S_0, A_0)\big]^2}_{\mathrm{Var}(\hat\Delta(\theta, r_p))},
\end{aligned}
$$

where (a) uses $\mathbf{E}[\psi_\theta(S_0, A_0)\Phi(S_0)] = 0$, which follows from the identity established in Part 1. Therefore,

$$\mathrm{Var}\big(\hat\Delta(\theta, \tilde{r})\big) - \mathrm{Var}\big(\hat\Delta(\theta, r_p)\big) = \mathbf{E}\big[\psi_\theta(S_0, A_0)^2\big(\Phi(S_0)^2 - 2\Phi(S_0)\, r_p(S_0, A_0)\big)\big].$$

Subsequently, the variance of $\hat\Delta(\theta, \tilde{r})$ will be higher than that of $\hat\Delta(\theta, r_p)$ if $\mathbf{E}\big[\psi_\theta(S_0, A_0)^2 \Phi(S_0)^2\big] - 2\,\mathbf{E}\big[\psi_\theta(S_0, A_0)^2 \Phi(S_0)\, r_p(S_0, A_0)\big] > 0$.
**Example:** Let us look at an example where the above condition holds. Consider an MDP with a single state $s$ and a single-step horizon. The difference in variance of the update to the policy at that state is

$$\mathrm{Var}_\pi\big(\hat\Delta(\theta, \tilde{r})\big) - \mathrm{Var}_\pi\big(\hat\Delta(\theta, r_p)\big) = \Phi(s)^2\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\big] - 2\Phi(s)\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\, r_p(s, A)\big].$$

Hence, the variance of the potential-based method exceeds the variance from using only the primary reward when

$$\Phi(s)^2\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\big] > 2\Phi(s)\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\, r_p(s, A)\big].$$

Consider the case where $\Phi(s) \neq 0$ (otherwise the variances of the update for those states would be equal) and, in particular, $\Phi(s) > 0$. Dividing both sides by $\Phi(s)$ yields

$$\Phi(s)\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\big] > 2\, \mathbf{E}_\pi\big[\psi_\theta(s, A)^2\, r_p(s, A)\big].$$

This condition can be satisfied by choosing a potential function that is overly optimistic about the average reward of state $s$: any $\Phi(s)$ such that $\Phi(s) > 2 r_p(s, a)\ \forall a$ leads to an increase in variance. A common situation where this occurs is the use of an optimal value function (as hinted at by [41]) as a baseline for an initially bad or mediocre policy.
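The following is a small Monte Carlo sketch of this one-step case (our own illustration, not from the paper): a two-action sigmoid policy with a scalar $\theta$, primary rewards at most 1, and an optimistic potential $\Phi > 2 r_p(s, a)$ for all $a$. The shaped update matches the plain one in expectation but has a larger variance.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0
p = 1.0 / (1.0 + np.exp(-theta))        # pi_theta(a=1) for a sigmoid policy
r_p = np.array([0.2, 1.0])              # primary reward for actions 0 and 1
Phi = 5.0                               # optimistic potential: Phi > 2*r_p(s, a) for all a

a = (rng.random(1_000_000) < p).astype(int)     # sampled actions
psi = np.where(a == 1, 1.0 - p, -p)             # score function d ln(pi)/d theta
g_plain = psi * r_p[a]                          # update using r_p only
g_shaped = psi * (r_p[a] - Phi)                 # shaped update (terminal potential c = 0)

print(g_plain.mean(), g_shaped.mean())   # approximately equal, matching Part 1
print(g_plain.var(), g_shaped.var())     # shaped variance is noticeably larger
```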
**Property 2.** _There exists rϕ_ : _S × A →_ R _and γφ ∈_ [0 _,_ 1) _such that_ ∆ _on_ ( _θ, ϕ, φ_ ) = ∆ _γ_ ( _θ, rp_ ) _._
_Proof._ Recall the definition of $\Delta_\gamma(\theta, r_p)$ from Section 4.1:

$$\Delta_\gamma(\theta, r_p) = \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\, r_p(S_j, A_j)\right].$$

Using the law of total expectation,

$$
\begin{aligned}
\Delta_\gamma(\theta, r_p) &= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \psi_\theta(S_t, A_t)\, \mathbf{E}_{\pi_\theta}\!\left[\sum_{j=t}^{T} \gamma^{j-t}\, r_p(S_j, A_j) \,\Big|\, S_t, A_t\right]\right] \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \psi_\theta(S_t, A_t)\, q^{\pi_\theta}(S_t, A_t)\right] \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \sum_{t=0}^{T} \gamma^t \Pr(S_t = s, A_t = a; \pi_\theta)\, \psi_\theta(s, a)\, q^{\pi_\theta}(s, a) \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \psi_\theta(s, a)\, q^{\pi_\theta}(s, a) \sum_{t=0}^{T} \gamma^t \Pr(S_t = s, A_t = a; \pi_\theta) \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, d^{\pi_\theta}_\gamma(s, a). \tag{11}
\end{aligned}
$$

Notice from (9) and (10) that for any $(s, a)$ pair, if $d^{\pi_\theta}_\gamma(s, a) > 0$ then $\bar{d}^{\pi_\theta}(s, a) > 0$, since $\gamma \geq 0$. Therefore, dividing and multiplying by $\bar{d}^{\pi_\theta}(s, a)$ leads to:

$$
\begin{aligned}
\Delta_\gamma(\theta, r_p) &= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \bar{d}^{\pi_\theta}(s, a)\, \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, \frac{d^{\pi_\theta}_\gamma(s, a)}{\bar{d}^{\pi_\theta}(s, a)} \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \sum_{t=0}^{T} \Pr(S_t = s, A_t = a; \pi_\theta)\, \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, \frac{d^{\pi_\theta}_\gamma(s, a)}{\bar{d}^{\pi_\theta}(s, a)} \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t)\, q^{\pi_\theta}(S_t, A_t)\, \frac{d^{\pi_\theta}_\gamma(S_t, A_t)}{\bar{d}^{\pi_\theta}(S_t, A_t)}\right].
\end{aligned}
$$

Now, notice that if $\gamma_\varphi = 0$ and $r_\phi(s, a) = q^{\pi_\theta}(s, a)\, d^{\pi_\theta}_\gamma(s, a) / \bar{d}^{\pi_\theta}(s, a)$ for all $s \in \mathcal{S}$ and $a \in \mathcal{A}$, then $\Delta_{\text{on}}(\theta, \phi, \varphi) = \Delta_\gamma(\theta, r_p)$. $\square$
**Property 3.** _There exists rϕ_ : _S × A →_ R _and γφ ∈_ [0 _,_ 1) _such that_ ∆ _off_ ( _θ, ϕ, φ_ ) = ∆ _off_ ( _θ, rp_ ) _._
_Proof._ This proof follows a technique similar to the proof of Property 2. Recall the definition of $\Delta_{\text{off}}(\theta, r_p)$:

$$\Delta_{\text{off}}(\theta, r_p) := \mathbf{E}_{\beta}\left[\sum_{t=0}^{T} \gamma^t\, \rho_t\, \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma^{j-t}\, \rho_{j-t}\, r_p(S_j, A_j)\right].$$

Now, using the law of total expectation,

$$
\begin{aligned}
\Delta_{\text{off}}(\theta, r_p) &= \mathbf{E}_{\beta}\left[\sum_{t=0}^{T} \gamma^t\, \rho_t\, \psi_\theta(S_t, A_t)\, \mathbf{E}_{\beta}\!\left[\sum_{j=t}^{T} \gamma^{j-t}\, \rho_{j-t}\, r_p(S_j, A_j) \,\Big|\, S_t, A_t\right]\right] \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \psi_\theta(S_t, A_t)\, \mathbf{E}_{\pi_\theta}\!\left[\sum_{j=t}^{T} \gamma^{j-t}\, r_p(S_j, A_j) \,\Big|\, S_t, A_t\right]\right] \\
&= \mathbf{E}_{\pi_\theta}\left[\sum_{t=0}^{T} \gamma^t\, \psi_\theta(S_t, A_t)\, q^{\pi_\theta}(S_t, A_t)\right] \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, d^{\pi_\theta}_\gamma(s, a),
\end{aligned}
$$

where the last line follows as in (11). Now, notice that for any $(s, a)$ pair, the assumption that $\pi_\theta(s, a)/\beta(s, a) < \infty$ for all $s \in \mathcal{S}, a \in \mathcal{A}$ implies $d^{\pi_\theta}_\gamma(s, a)/d^{\beta}_\gamma(s, a) < \infty$. Further, if $d^{\beta}_\gamma(s, a) > 0$ it has to be that $\bar{d}^{\beta}(s, a) > 0$ as well. Therefore, $d^{\pi_\theta}_\gamma(s, a)/\bar{d}^{\beta}(s, a) < \infty$ as well. Multiplying and dividing by $\bar{d}^{\beta}(s, a)$ results in:

$$
\begin{aligned}
\Delta_{\text{off}}(\theta, r_p) &= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \bar{d}^{\beta}(s, a)\, \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, \frac{d^{\pi_\theta}_\gamma(s, a)}{\bar{d}^{\beta}(s, a)} \\
&= \sum_{s \in \mathcal{S}, a \in \mathcal{A}} \sum_{t=0}^{T} \Pr(S_t = s, A_t = a; \beta)\, \psi_\theta(s, a)\, q^{\pi_\theta}(s, a)\, \frac{d^{\pi_\theta}_\gamma(s, a)}{\bar{d}^{\beta}(s, a)} \\
&= \mathbf{E}_{\beta}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t)\, q^{\pi_\theta}(S_t, A_t)\, \frac{d^{\pi_\theta}_\gamma(S_t, A_t)}{\bar{d}^{\beta}(S_t, A_t)}\right].
\end{aligned}
$$

Finally, notice that if $\gamma_\varphi = 0$ and $r_\phi(s, a) = q^{\pi_\theta}(s, a)\, d^{\pi_\theta}_\gamma(s, a)/\bar{d}^{\beta}(s, a)$ for all $s \in \mathcal{S}$ and $a \in \mathcal{A}$, then $\Delta_{\text{off}}(\theta, \phi, \varphi) = \Delta_{\text{off}}(\theta, r_p)$. $\square$
**Remark 2.** _Notice that, as with any optimization problem, issues of realizability and identifiability of the desired $r_\phi$ must be taken into account. The examples provided in this section aim to highlight the capability of optimized behavior alignment reward functions: they not only improve and accelerate the learning process but can also induce updates that 'fix' imperfections in the underlying RL algorithm._
**B** **Extended Related Works**
The bi-level objective draws inspiration from the seminal work of Singh et al. [52, 53], which provides an optimal-rewards framework for an agent. Prior works have built upon it to explore search techniques using evolutionary algorithms [43, 21], develop extensions for multi-agent settings [66, 22], and mitigate sub-optimality due to the use of inaccurate models [55–57]. Our work also builds upon this direction and focuses on various aspects of leveraging auxiliary rewards $r_{\text{aux}}$, while staying robust against their misspecification.

Apart from specifying auxiliary rewards $r_{\text{aux}}$ directly, other techniques for reward specification include linear temporal logic [35, 11, 65, 39] or reward machines [29–31], which expose the reward function as a white box to the agent.

Recent works also explore a $\gamma$ that is state-action dependent [63, 48], or establish a connection between $\gamma$ and value-function regularization in TD learning [3]. These ideas are complementary to our proposed work, and combining them with BARFI remains an interesting direction for the future.
The concept of path-based meta-learning was initially popularized for few-shot task learning in supervised learning [15, 42]. Similar path-based approaches have been adopted in reinforcement learning (RL) in various forms [28, 62, 69, 71]. Initially designed for stochastic gradient descent, these methods have been extended to other optimizers such as Adam [32] and RMSprop [25] by treating them as differentiable counterparts [20].
**C** **Algorithm**
In this section we discuss the algorithm for the proposed method. As the proposed method performs the behavior alignment reward function's implicit optimization, we name it BARFI. Pseudocode for BARFI is presented in Algorithm 5. We first build up some preliminaries to explain the underlying concepts.
**C.1** **Vector Jacobian Product**
Let $x \in \mathbb{R}^d$, $y \in \mathbb{R}^m$, and $f(x, y) \in \mathbb{R}$. Then we know that $\partial f(x, y)/\partial x \in \mathbb{R}^d$, $\partial f(x, y)/\partial y \in \mathbb{R}^m$, and $\partial^2 f(x, y)/\partial y\, \partial x \in \mathbb{R}^{d \times m}$. Given a vector $v \in \mathbb{R}^d$, whenever we need to compute a vector Jacobian product we can pull the derivative outside, as shown:

$$\underbrace{v^\top}_{\mathbb{R}^d}\, \underbrace{\frac{\partial^2 f(x, y)}{\partial y\, \partial x}}_{\mathbb{R}^{d \times m}} = \frac{\partial}{\partial y} \underbrace{\left\langle \underbrace{v}_{\mathbb{R}^d},\ \underbrace{\frac{\partial f(x, y)}{\partial x}}_{\mathbb{R}^d} \right\rangle}_{\mathbb{R}} \in \mathbb{R}^m.$$

As we can see, the vector Jacobian product can be broken down into differentiating an inner product while shifting the place of multiplication: the gradient passes through $v$ with respect to $y$, and hence we never have to deal with large matrix multiplications. Note also that the outer partial derivative with respect to $y$ can easily be handled by autodiff packages. Pseudocode is shown in Algorithm 1.
**Algorithm 1:** Vector Jacobian Product
**1 Input:** $f(x, y) \in \mathbb{R}$, $x \in \mathbb{R}^d$, $y \in \mathbb{R}^m$, $v \in \mathbb{R}^d$
**2** $f' \leftarrow$ grad($f(x, y), x$)
**3** vjp $\leftarrow$ grad($f', y$, grad_outputs $= v$)
**4 Return:** vjp
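As a concrete illustration, here is a minimal PyTorch sketch of this double-differentiation trick (our own example; the toy shapes and the scalar function are assumptions chosen only for demonstration).

```python
import torch

def vector_jacobian_product(f_value, x, y, v):
    """Compute v^T (d^2 f / dy dx) without materializing the d-by-m matrix.
    f_value must be a scalar tensor built from x and y (both requiring grad)."""
    # First gradient df/dx, kept on the graph so it can be differentiated again.
    (f_x,) = torch.autograd.grad(f_value, x, create_graph=True)
    # grad_outputs=v realizes the inner product <v, df/dx> before the y-derivative.
    (vjp,) = torch.autograd.grad(f_x, y, grad_outputs=v)
    return vjp

# Toy usage with d = 3, m = 2.
x = torch.randn(3, requires_grad=True)
y = torch.randn(2, requires_grad=True)
f = (x * x).sum() * (y * y).sum()
v = torch.randn(3)
print(vector_jacobian_product(f, x, y, v).shape)  # torch.Size([2])
```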
**C.2** **Neumann Series Approximation for Hessian Inverse**
Recall that for a given real number $\beta \in \mathbb{R}$ with $0 \leq \beta < 1$, the geometric series has a closed-form solution:

$$s = 1 + \beta + \beta^2 + \beta^3 + \cdots = \frac{1}{1 - \beta}.$$

Similarly, for a value $\alpha$ such that $\beta = 1 - \alpha$, we can write $\alpha^{-1}$ as follows:

$$\alpha^{-1} = \frac{1}{1 - (1 - \alpha)} = 1 + (1 - \alpha) + (1 - \alpha)^2 + (1 - \alpha)^3 + \cdots = \sum_{i=0}^{\infty} (1 - \alpha)^i.$$
The same can be generalized to matrices: given a matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$, we can write $\mathbf{A}^{-1}$ as

$$\mathbf{A}^{-1} = \sum_{i=0}^{\infty} (\mathbf{I} - \mathbf{A})^i.$$
Note that for the above to hold, writing $\mathrm{eig}(\mathbf{A})$ for the eigenvalues of $\mathbf{A}$, we need $0 < \mathrm{eig}(\mathbf{A}) < 1$. We regularize $\mathbf{A}$ to ensure that all eigenvalues are positive, and we can then always scale $\mathbf{A}$ by its largest eigenvalue so that the condition holds. Let $\eta = 1/\max \mathrm{eig}(\mathbf{A})$. Then we can write

$$\mathbf{A}^{-1} = \frac{\eta}{\eta}\, \mathbf{A}^{-1} = \eta\, (\eta \mathbf{A})^{-1} = \eta \sum_{i=0}^{\infty} (\mathbf{I} - \eta \mathbf{A})^i,$$

since $\eta \mathbf{A}$ always satisfies the above eigenvalue condition.
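A minimal NumPy sketch of this truncated, eigenvalue-scaled Neumann series (our own sanity check; the test matrix is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)   # symmetric positive definite, like a regularized Hessian

eta = 1.0 / np.max(np.linalg.eigvalsh(A))   # scale so eig(eta * A) lies in (0, 1]

# A^{-1} ≈ eta * sum_{i=0}^{n} (I - eta*A)^i
approx, term = np.zeros_like(A), np.eye(4)
for _ in range(500):
    approx += term
    term = term @ (np.eye(4) - eta * A)
approx *= eta

print(np.max(np.abs(approx - np.linalg.inv(A))))  # small; shrinks as n grows
```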
**C.3** **Neumann Approximation for Hessian Vector Product**
Having seen how we can approximate the inverse of a matrix via the Neumann approximation, without relying on $O(d^3)$ operations, let us look at what needs to be done for our updates. Recall that the updates for $\phi$ (8) and $\varphi$ (15) were

$$\frac{\partial J(\theta(\phi, \varphi))}{\partial \phi} = -\underbrace{\frac{\partial J(\theta(\phi, \varphi))}{\partial \theta(\phi, \varphi)}}_{v}\, \underbrace{\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1}}_{\mathbf{H}^{-1}}\, \underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi}}_{\mathbf{A}}$$

and

$$\frac{\partial}{\partial \varphi}\left(J(\theta(\phi, \varphi)) - \frac{1}{2}\|\gamma_\varphi\|^2\right) = -\underbrace{\frac{\partial J(\theta(\phi, \varphi))}{\partial \theta(\phi, \varphi)}}_{v}\, \underbrace{\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1}}_{\mathbf{H}^{-1}}\, \underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \varphi}}_{\mathbf{B}} - \frac{\partial \gamma_\varphi}{\partial \varphi}.$$
Let us look closely at the update for $\phi$; the update generalizes easily to the case of $\varphi$. We have

$$\frac{\partial J(\theta(\phi, \varphi))}{\partial \phi} = -v\, \mathbf{H}^{-1} \mathbf{A}.$$

We first look at how to approximate the value of $v \mathbf{H}^{-1}$ efficiently; we can then use the vector Jacobian product to obtain $(v \mathbf{H}^{-1})\, \mathbf{A}$, since $v \mathbf{H}^{-1}$ is a vector. Suppose we wish to run the Neumann approximation up to $n$ steps, i.e., we approximate $\mathbf{H}^{-1}$ with an $n$-th order Neumann expansion:

$$\mathbf{H}^{-1} = \eta\, (\eta \mathbf{H})^{-1} \approx \eta \sum_{i=0}^{n} (\mathbf{I} - \eta \mathbf{H})^i. \tag{12}$$
Here we assume that the outer optimization in update (1) is for the function $J(\theta(\phi, \varphi))$, and the inner optimization, represented by update (3), is for $f(\theta(\phi, \varphi), \phi, \varphi)$, i.e.,

$$\Delta(\theta, r_p) = \frac{\partial J(\theta(\phi, \varphi))}{\partial \theta}, \qquad \Delta(\theta, \phi, \varphi) = \frac{\partial f(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta}.$$
The most common form in which $f(\cdot\,; \mathcal{B})$ is defined is the following:

$$f(\theta, \phi, \varphi; \mathcal{B}) := \frac{1}{|\mathcal{B}|} \sum_{\tau \in \mathcal{B}} \sum_{t=0}^{T} \log \pi_\theta(S_t^\tau, A_t^\tau) \sum_{j=t}^{T} \gamma_\varphi^{j-t}\, r_\phi(S_j^\tau, A_j^\tau).$$
**Algorithm 2:** Vector Hessian-inverse product for (8), i.e., $v \mathbf{H}^{-1}$
**1 Input:** _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on
**2** _v ←_ grad( _J_ ( _θ_ ; _D_ on) _, θ_ )
**3** _v_ _[′]_ _←_ _η ×_ grad( _f_ ( _θ, ϕ, φ_ ; _D_ off) _, θ_ )
**4 Let:** _v_ 0 _←_ _v, p_ 0 _←_ _v_
**5 for** _i ∈_ [0 _, n_ ) **do**
**6** _vi_ +1 _←_ _vi −_ grad( _v_ _[′]_ _, θ,_ grad_outputs = _vi_ )
**7** _pi_ +1 _←_ _pi_ + _vi_ +1
**8 Return:** $\eta p_n$ ; // Approximation of $v \mathbf{H}^{-1}$ as in (12)
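A minimal PyTorch sketch of Algorithm 2 (our own rendering, assuming a single parameter tensor $\theta$ and scalar losses; a simplification of the full implementation):

```python
import torch

def neumann_v_hinv(J_val, f_val, theta, n, eta):
    """Approximate v H^{-1}, where v = dJ/dtheta and H = d^2 f / dtheta^2,
    using the n-term Neumann expansion in (12)."""
    (v,) = torch.autograd.grad(J_val, theta, retain_graph=True)
    # Keep df/dtheta on the graph so each loop step is a Hessian-vector product.
    (g,) = torch.autograd.grad(f_val, theta, create_graph=True)
    g = eta * g
    v_i, p = v, v.clone()
    for _ in range(n):
        # hvp = (eta*H) v_i, so the update below computes v_{i+1} = (I - eta*H) v_i.
        (hvp,) = torch.autograd.grad(g, theta, grad_outputs=v_i, retain_graph=True)
        v_i = v_i - hvp
        p = p + v_i
    return eta * p   # ≈ v H^{-1}
```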
Similarly, $J$ can be defined using $r_p$ and the problem-defined $\gamma$:

$$J(\theta; \mathcal{B}) := \frac{1}{|\mathcal{B}|} \sum_{\tau \in \mathcal{B}} \sum_{t=0}^{T} \log \pi_\theta(S_t^\tau, A_t^\tau) \sum_{j=t}^{T} \gamma^{j-t}\, r_p(S_j^\tau, A_j^\tau).$$
Finally, once we have $v \mathbf{H}^{-1}$, we can use the vector Jacobian product to calculate $(v \mathbf{H}^{-1})\, \mathbf{A}$, as described in Algorithm 3:
**Algorithm 3:** Update for $\phi$ (8), i.e., $v \mathbf{H}^{-1} \mathbf{A}$
**1 Input:** _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on
**2** _v ←_ Algorithm 2 ( _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on)
**3** _v_ _[′]_ _←_ grad( _f_ ( _θ, ϕ, φ_ ; _D_ off) _, θ_ )
**4** ∆ _ϕ ←_ grad( _v_ _[′]_ _, ϕ,_ grad_outputs = _v_ )
**5 Return** ∆ _ϕ_
We can similarly derive the update for $\varphi$, shown in Algorithm 4. Note that we do not include the different forms of regularizers here, to reduce clutter, but adding them is simple.
**Algorithm 4:** Update for $\varphi$ (15), i.e., $v \mathbf{H}^{-1} \mathbf{B}$
**1 Input:** _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on
**2** _v ←_ Algorithm 2 ( _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on)
**3** $v' \leftarrow$ grad($f(\theta, \phi, \varphi; \mathcal{D}_{\text{off}}), \theta$)
**4** ∆ _φ ←_ grad( _v_ _[′]_ _, φ,_ grad_outputs = _v_ )
**5 Return** ∆ _φ_
**C.4** **Pseudo Code (Algorithm 5)**
Lines 8–10 and 21–23 of Algorithm 5 represent the inner optimization process, and the outer optimization process is on lines 16–17. Lines 8–10 perform the initial updates to converge under the current values of $\phi, \varphi$; from there onwards, after each update of the outer optimization, we correspondingly update the policy (lines 21–23). The flow of the algorithm is shown in Figure 4.

As discussed in Section D, using regularizers in $\Delta(\theta, \phi, \varphi)$ smooths the objective $J(\theta(\phi, \varphi))$ with respect to $\phi$ and $\varphi$. This is helpful because gradual changes in $r_\phi$ and $\gamma_\varphi$ result in gradual changes in the fixed point of the inner optimization. Therefore, for computational efficiency, we initialize the policy parameters from the fixed point of the previous inner-optimization procedure, so that the inner optimization starts close to the new fixed point.

In lines 8–10, the inner optimization of the policy parameters $\theta$ is performed until (approximate) convergence. Note that only trajectories from past interactions are used; no new trajectories are sampled for the inner optimization.
**Algorithm 5:** BARFI: Behavior Alignment Reward Function’s Implicit optimization
**1 Input:** $J, f, \alpha_\theta, \alpha_\phi, \alpha_\varphi, \eta, n, \delta,$ optim$, E, N_i, N_0$
**2 Initialize:** _πθ, rϕ, γφ_
**3 Initialize:** optim _θ ←_ optim( _αθ_ ) _,_ optim _ϕ ←_ optim( _αϕ_ ) _,_ optim _φ ←_ optim( _αφ_ )
**4** _D_ off _←_ [ ]
# Collect a batch of data for warmup period
**5 for** _e ∈_ [1 _, N_ 0) **do**
**6** Generate _τe_ using _πθ_
**7** Append _τe_ to _D_ off
# Initial training steps using warmup data
**8 for** _i ∈_ [0 _, Ni_ + _N_ 0) **do**
**9** Sample a batch of trajectories _B_ from _D_ off
# Update policy
**10** _θ ←_ _θ_ + optim _θ_ (grad( _f_ ( _θ, ϕ, φ_ ; _B_ ) _, θ_ ))
# Start reward alignment
**11 for** _e ∈_ [ _N_ 0 _, E_ ) **do**
# Collect a batch of on-policy data
**12** _D_ on _←_ [ ]
**13** **for** _j ∈_ [0 _, δ_ ) **do**
**14** Generate trajectory _τe_ + _j_ using _πθ_ and append in _D_ on
**15** _e ←_ _e_ + _δ_
# Update _rϕ_ and _γφ_
**16** ∆ _ϕ ←_ Algorithm 3( _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on)
**17** ∆ _φ ←_ Algorithm 4( _θ, ϕ, φ, J, f, n, η, D_ off _, D_ on)
**18** _ϕ ←_ _ϕ_ + optim _ϕ_ (∆ _ϕ_ )
**19** _φ ←_ _φ_ + optim _φ_ (∆ _φ_ )
**20** _D_ off _←D_ off + _D_ on
# Learn policy for new reward function, initializing from the last
**21** **for** _i ∈_ [0 _, Ni_ ) **do**
**22** Sample a batch of trajectories _B_ from _D_ off
# Update policy
**23** _θ ←_ _θ_ + optim _θ_ (grad( _f_ ( _θ, ϕ, φ_ ; _B_ ) _, θ_ ))
Figure 4: **Algorithm Flow:** The change in the different parameters over the course of the algorithm.
In Lines 13–14, a new batch _D_ on of data is sampled using the policy returned by the inner-optimization
process. This data is used to compute _∂J_ ( _θ_ ( _ϕ, φ_ )) _/∂θ_ ( _ϕ, φ_ ). Existing data _D_ off that was used in the
inner-optimization process is then used to compute _∂θ_ ( _ϕ, φ_ ) _/∂ϕ_ and _∂θ_ ( _ϕ, φ_ ) _/∂φ_ . Using these in
(8) and (15), the parameters for _rϕ_ and _γφ_ are updated in Lines 16 and 17, respectively.
Finally, the new data _D_ on is merged into the existing data _D_ off and the entire process continues.
**C.5** **Note on Approximation**
An important limitation of the methods discussed above is that $\theta(\phi, \varphi)$ is assumed to satisfy $\Delta(\theta(\phi, \varphi), \phi, \varphi) = 0$, i.e., Alg is run to convergence. In practice, we only execute Alg for a predetermined number of update steps, which need not result in convergence to an optimum _exactly_. However, the impact of this approximation can be bounded by assuming convergence to an $\epsilon$-neighborhood of the optimum [49]. Furthermore, due to smoothness in the functional space, slight changes to $\phi$ and $\varphi$ should result in slight shifts in the optimum $\theta(\phi, \varphi)$. This continuity property allows for improvements in the optimization process: it suffices to initialize the parameters of each inner-loop optimization problem with the final parameters of the approximate fixed-point solution identified in the previous iteration of the inner loop. The complete resulting algorithm is presented as Algorithm 5.
**C.6** **Path-wise Bi-level Optimization**
An alternative approach for computing the term **(b)** in (4) is possible. The formulation of BARFI described above, based on implicit bi-level optimization, is agnostic to the optimization path taken by Alg. For the sake of completeness, let us also consider a version of BARFI that does take into account the path followed by the inner optimization loop. This is advantageous because it eliminates the need for the convergence criterion (5). We call this variant BARFI unrolled. The main difference, in this case, is that when computing the term **(b)** in (4), we now consider each inner update step until the point $\theta(\phi, \varphi)$ is reached, where the sequence of steps depends on the specific Alg used for the inner updates. Notice that this approach results in a path-wise optimization process that can be more demanding in terms of computation and memory. We further discuss this issue, and demonstrate the efficacy of this alternative approach, in the empirical analyses section.
**D** **Smoothing the objective**
To understand why $J(\theta(\phi, \varphi))$ might be ill-conditioned, note that a small perturbation in the reward function often does not lead to a change in the corresponding optimal policy. This can lead to a lack of gradient directions in the neighborhood of $\phi, \varphi$, preventing gradient methods from being effective. This issue can be addressed by employing common regularization techniques such as L2 regularization of the policy parameters or entropy regularization for the policy.⁶ We discuss two ways to regularize the objective in the upcoming sections.
**D.1** **L2 Regularization**
To understand how severely ill-conditioned $J(\theta(\phi, \varphi))$ can be, notice that a small perturbation in the reward function often does not change the corresponding optimal policies or the outcome of a policy optimization algorithm Alg. Therefore, if the parameters of the behavior alignment reward are perturbed from $\phi$ to $\phi'$, it may often be that $J(\theta(\phi, \varphi)) = J(\theta(\phi', \varphi))$, and this limits any gradient-based optimization for $\phi$, as $\partial J(\theta(\phi, \varphi))/\partial \phi$ is 0. Similarly, minor perturbations in $\varphi$ may result in no change in $J(\theta(\phi, \varphi))$ either.

Fortunately, there exists a remarkably simple solution: incorporate regularization of the _policy parameters_ $\theta$ in the objective for Alg in the inner-level optimization. For example, the optimal policy for the regularized objective $\mathbf{E}_{\pi_\theta}\big[\sum_{t=0}^{T} \gamma_\varphi^t\, r_\phi(S_t, A_t)\big] - \frac{\lambda}{2}\|\theta\|^2$ varies smoothly with the values of $r_\phi$ and $\gamma_\varphi$, since it trades off the regularization penalty on $\theta$ against the magnitude of the performance characterized by $(r_\phi, \gamma_\varphi)$. See Figure 5 for an example with L2 regularization.
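The following toy sketch (our own, with an assumed two-action sigmoid-policy bandit; not the exact setup of Figure 5) illustrates the effect: without regularization the inner solution $\theta(\phi)$ saturates and $J(\theta(\phi))$ is flat in $\phi$ almost everywhere, whereas with L2 regularization $\theta(\phi)$ varies smoothly.

```python
import numpy as np

def theta_star(phi, lam, grid=np.linspace(-50, 50, 100001)):
    # Inner problem: argmax_theta  pi_theta(A) * phi - (lam / 2) * theta^2,
    # with a two-action policy pi_theta(A) = sigmoid(theta).
    vals = phi / (1.0 + np.exp(-grid)) - 0.5 * lam * grid ** 2
    return grid[np.argmax(vals)]

for lam in (0.0, 0.1):
    print(lam, [round(theta_star(p, lam), 2) for p in np.linspace(-1, 1, 5)])
# lam = 0.0: theta* jumps between the saturation bounds as phi changes sign, so
# the outer objective offers no useful gradient in phi; lam = 0.1: theta* (and
# hence J(theta(phi))) varies smoothly with phi.
```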
**D.2** **Entropy Regularization**
In Section D.1, smoothing of $J(\theta(\phi, \varphi))$ was achieved via L2 regularization of the policy parameters $\theta$ in the inner optimization process. However, alternative regularization methods can also be used. For example, in the following we present an alternative update rule for $\theta$ based on entropy regularization:

$$\Delta(\theta, \phi, \varphi) := \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \psi_\theta(S_t, A_t) \sum_{j=t}^{T} \gamma_\varphi^{j-t}\big(r_\phi(S_j, A_j) - \lambda \ln \pi_\theta(S_j, A_j)\big)\right].$$
⁶This regularization is performed to avoid a noninvertible Hessian, as discussed in Section 5.
Figure 5: **(Left)** A bandit problem where the data is collected from a policy $\beta$ that mostly samples action $A$. **(Middle)** Each point on the 3D surface corresponds to the performance of $\theta(\phi, 1)$ returned by an Alg that uses the update rule $\Delta_{\text{off}}(\theta, \phi, 1)$, for the values of $r_\phi$ for actions $A$ and $C$ on the bottom axes; $r_\phi$ for action $B$ is set to 0 to avoid a fourth variable in a 3D plot. Notice that a small perturbation in $r_\phi$ may lead to no change, or to sudden changes, in $J(\theta(\phi, 1))$. **(Right)** Performance of $\theta(\phi, 1)$ returned by an Alg that uses the update rule $\Delta_{\text{off}}(\theta, \phi, 1) - \theta$, which incorporates the gradient of the L2 regularizer. The vector fields in Figure 1 were also obtained from this setup.
Notice that the new update rules for $\phi$ and $\varphi$ can be obtained from steps (4) to (15) with the following $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{H}$ instead, where we use the shorthand $\theta^* = \theta(\phi, \varphi)$:

$$
\begin{aligned}
\mathbf{A} &= \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \left(\sum_{j=t}^{T} \gamma_\varphi^{j-t}\, \frac{\partial r_\phi(S_j, A_j)}{\partial \phi}\right)^{\!\top}\right], \\
\mathbf{B} &= \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \left(\sum_{j=t}^{T} \frac{\partial \gamma_\varphi^{j-t}}{\partial \varphi}\big(r_\phi(S_j, A_j) - \lambda \ln \pi_{\theta^*}(S_j, A_j)\big)\right)\right], \\
\mathbf{H} &= \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \frac{\partial \psi_{\theta^*}(S_t, A_t)}{\partial \theta^*} \left(\sum_{j=t}^{T} \gamma_\varphi^{j-t}\big(r_\phi(S_j, A_j) - \lambda \ln \pi_{\theta^*}(S_j, A_j)\big)\right) - \lambda \sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \left(\sum_{j=t}^{T} \gamma_\varphi^{j-t}\, \psi_{\theta^*}(S_j, A_j)^{\!\top}\right)\right].
\end{aligned}
$$
**E** **Meta Learning via Implicit Gradient: Derivation**
The general technique of implicit gradients [14, 34, 19] has been used in a vast range of applications,
ranging from energy models [13, 36], differentiating through black-box solvers [61], few-shot learning
[38, 49], model-based RL [50], differentiable convex optimization neural-networks layers [4, 2], to
hyper-parameter optimization [37, 8, 12, 40]. In this work, we show how implicit gradients can also
be useful to efficiently leverage auxiliary rewards _ra_ and overcome various sub-optimalities.
Taking the total derivative of (5) with respect to $\phi$,

$$\frac{d\, \Delta(\theta(\phi, \varphi), \phi, \varphi)}{d\phi} = \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi} + \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\, \frac{\partial \theta(\phi, \varphi)}{\partial \phi} = 0. \tag{13}$$

To understand why the above is true, consider the finite-difference view of this derivative:

$$\frac{d\, \Delta(\theta(\phi, \varphi), \phi, \varphi)}{d\phi} = \lim_{\|d\phi\| \to 0} \frac{\Delta(\theta(\phi + d\phi, \varphi), \phi + d\phi, \varphi) - \Delta(\theta(\phi, \varphi), \phi, \varphi)}{d\phi} = \frac{0 - 0}{d\phi} = 0,$$

since $\Delta(\theta(\phi + d\phi, \varphi), \phi + d\phi, \varphi) = \Delta(\theta(\phi, \varphi), \phi, \varphi) = 0$, as $\theta(\cdot, \cdot)$ is defined by convergence to a fixed point.
By re-arranging terms in (13) we obtain the term (b) in (4),

$$\frac{\partial \theta(\phi, \varphi)}{\partial \phi} = -\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1} \frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi}. \tag{14}$$
By combining (14) with (4) we obtain the desired gradient expression for $\phi$,

$$\frac{\partial J(\theta(\phi, \varphi))}{\partial \phi} = -\frac{\partial J(\theta(\phi, \varphi))}{\partial \theta(\phi, \varphi)}\, \underbrace{\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1}}_{\mathbf{H}^{-1}}\, \underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \phi}}_{\mathbf{A}},$$

and, following similar steps, the gradient expression for $\varphi$,

$$\frac{\partial}{\partial \varphi}\left(J(\theta(\phi, \varphi)) - \frac{1}{2}\|\gamma_\varphi\|^2\right) = -\frac{\partial J(\theta(\phi, \varphi))}{\partial \theta(\phi, \varphi)}\, \underbrace{\left[\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \theta(\phi, \varphi)}\right]^{-1}}_{\mathbf{H}^{-1}}\, \underbrace{\frac{\partial \Delta(\theta(\phi, \varphi), \phi, \varphi)}{\partial \varphi}}_{\mathbf{B}} - \frac{\partial \gamma_\varphi}{\partial \varphi},$$
where, using $\theta^*$ as a shorthand for $\theta(\phi, \varphi)$, the terms $\mathbf{A}$, $\mathbf{B}$, and $\mathbf{H}$ can be expressed as

$$
\begin{aligned}
\mathbf{A} &= \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \left(\sum_{j=t}^{T} \gamma_\varphi^{j-t}\, \frac{\partial r_\phi(S_j, A_j)}{\partial \phi}\right)^{\!\top}\right], \quad
\mathbf{B} = \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \psi_{\theta^*}(S_t, A_t) \left(\sum_{j=t}^{T} \frac{\partial \gamma_\varphi^{j-t}}{\partial \varphi}\, r_\phi(S_j, A_j)\right)\right], \\
\mathbf{H} &= \mathbf{E}_{\mathcal{D}}\left[\sum_{t=0}^{T} \frac{\partial \psi_{\theta^*}(S_t, A_t)}{\partial \theta^*} \left(\sum_{j=t}^{T} \gamma_\varphi^{j-t}\, r_\phi(S_j, A_j)\right)\right] - \lambda \mathbf{I}. \tag{15}
\end{aligned}
$$
These provide the necessary expressions for updating $\phi$ and $\varphi$ in the outer loop. As $\mathbf{A}$ involves an outer product and $\mathbf{H}$ involves second derivatives, computing them _exactly_ might not be practical in high dimensions. Standard approximation techniques like conjugate gradients or Neumann series can thus be used to make this more tractable [40]. In our experiments, we made use of the Neumann approximation to the Hessian-inverse vector product, which requires the same magnitude of resources as the baseline policy gradient methods that we build on top of.
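As a sanity check of the implicit-gradient formula, here is a tiny one-dimensional example (our own toy problem, not the paper's RL objective) comparing the implicit gradient against a finite-difference estimate.

```python
# Inner problem: theta(phi) = argmax_theta [phi*theta - theta^2], so
# Delta(theta, phi) = phi - 2*theta, H = dDelta/dtheta = -2, A = dDelta/dphi = 1.
# Outer objective: J(theta) = -(theta - 1)^2.
def theta_star(phi):
    return phi / 2.0

def J(theta):
    return -(theta - 1.0) ** 2

phi = 0.8
dJ_dtheta = -2.0 * (theta_star(phi) - 1.0)
implicit = -dJ_dtheta * (1.0 / -2.0) * 1.0   # -(dJ/dtheta) H^{-1} A

eps = 1e-5
finite_diff = (J(theta_star(phi + eps)) - J(theta_star(phi - eps))) / (2 * eps)
print(implicit, finite_diff)   # both ≈ 0.6
```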
**Algorithm:** Being based on implicit gradients, we call our method BARFI, shorthand for _behavior alignment reward function's implicit_ optimization. Overall, BARFI iteratively solves the bi-level optimization specified in (2) by alternating between using (3) until approximate convergence of Alg to $\theta(\phi, \varphi)$ and then updating $r_\phi$ and $\gamma_\varphi$. Importantly, being based on (3) for sample efficiency, Alg leverages only past samples and does _not_ sample any new trajectories for the inner-level optimization. Further, due to the policy regularization that smooths the objective as discussed in Appendix D, updates to $r_\phi$ and $\gamma_\varphi$ change the policy resulting from Alg gradually. Therefore, for compute efficiency, we start Alg from the policy obtained from the previous inner optimization, so that it is in the proximity of the new fixed point. This allows BARFI to be both sample- and compute-efficient while solving the bi-level optimization iteratively online. Pseudocode for BARFI and more details on the approximation techniques can be found in Appendix C.
**F** **Environment & Reward Details**
The first environment is a **GridWorld** (GW), where the start state is in the bottom-left corner and a goal state is in the top-right corner. The agent receives an $r_p$ of +100 on reaching the goal, followed by termination of the episode. The second environment is **MountainCar** (MC) [58], wherein we use the sparse-reward variant: the agent receives a +1 reward on reaching the top of the hill and 0 otherwise. The third environment is **CartPole** (CP) [16]. Finally, to assess scalability, we pick HalfCheetah-v4 from the **MuJoCo** (MJ) suite of OpenAI Gym [9].
For each environment, we define two auxiliary reward functions. For GridWorld, we define: $r^1_{\text{aux,GW}} := -(s - s_{\text{goal}})^2$, which provides the negative squared L2 distance from the goal position, and $r^2_{\text{aux,GW}} := 50 \times \mathbf{1}_{s \in \mathcal{S}_{\text{Center}}}$, which provides an additional bonus of +50 to the agent along the desired path to the goal state (i.e., the center states). In MountainCar the state is composed of two components: the position $x$ and the velocity $\mathrm{v}$. The first auxiliary reward function, $r^1_{\text{aux,MC}}(s, a) := |\mathrm{v}|$, encourages a higher absolute velocity of the car, and the second, $r^2_{\text{aux,MC}}(s, a) := \mathbf{1}_{\mathrm{sign}(\mathrm{v}) = a}$, encourages the direction of motion to increase the magnitude of the velocity (also known as the _energy pumping policy_ [18]). For CartPole, we consider a way to reuse knowledge from a hand-crafted policy. CartPole can be solved using a Proportional-Derivative (PD) controller [6]; hence we tune a PD controller, $\text{PD}^* : \mathcal{S} \to \mathcal{A}$, to solve CartPole for the maximum possible return. We design two auxiliary reward functions that make use of this PD controller. The first, $r^1_{\text{aux,CP}}(s, a) := 5 \times \mathbf{1}_{\text{PD}^*(s) = a} - (1 - \mathbf{1}_{\text{PD}^*(s) = a})$, encourages the agent to match the action of the optimal PD controller and penalizes it for not matching. The second auxiliary reward function, $r^2_{\text{aux,CP}}(s, a) := -r^1_{\text{aux,CP}}(s, a)$, encourages the agent to do the opposite. In the case of MuJoCo, the reward function provided by the environment is itself composed of multiple different functions; we explain it and the respective auxiliary functions below.
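For concreteness, a minimal sketch of the two MountainCar auxiliary rewards (our own rendering, assuming the state is $(x, \mathrm{v})$ and actions are encoded as $-1/+1$ for left/right pushes; the actual action encoding is an assumption):

```python
import numpy as np

def r_aux1_mc(state, action):
    # |v|: rewards higher absolute velocity regardless of direction.
    _, v = state
    return abs(v)

def r_aux2_mc(state, action):
    # 1 when the push direction matches the direction of motion
    # (the energy-pumping heuristic), 0 otherwise.
    _, v = state
    return float(np.sign(v) == action)
```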
We have considered several forms of information encoded as auxiliary rewards in these experiments. We have heuristic-based reward functions (i.e., $r^1_{\text{aux,GW}}, r^2_{\text{aux,GW}}, r^1_{\text{aux,MC}}$), reward functions that encode a guess of an optimal policy (i.e., $r^2_{\text{aux,MC}}, r^1_{\text{aux,CP}}$), and reward functions that change the optimal policy (i.e., $r^2_{\text{aux,GW}}, r^2_{\text{aux,CP}}$). We also have rewards that depend only on states (i.e., $r^1_{\text{aux,GW}}, r^2_{\text{aux,GW}}, r^1_{\text{aux,MC}}$) as well as ones that depend on both states and actions (i.e., $r^2_{\text{aux,MC}}, r^1_{\text{aux,CP}}, r^2_{\text{aux,CP}}$). Therefore, we can test whether BARFI can overcome misspecified auxiliary reward functions and does not hurt performance when they are well specified.
**MuJoCo Environment.** In this experiment, we investigate the scalability of BARFI in learning control policies for high-dimensional state spaces with continuous action spaces. In HalfCheetah-v4 the agent's task is to move forward, and it receives a reward based on its forward movement (denoted $r_p$). Additionally, there is a small cost associated with the magnitude of torque required for action execution (denoted $r_{\text{aux}}(s, a) := -c\|a\|_2^2$). The weighting $c$ between the main reward and the control cost is pre-defined for this environment, and the reward is formed as $\tilde{r}(s, a) = r_p(s, a) + r_{\text{aux}}(s, a)$. We explore how an arbitrary weighting choice can cause the agent to fail to learn, while BARFI is still able to adapt and learn an appropriate weighting, remaining robust to possible misspecification. We consider two different weightings for the control cost: the first, $r^1_{\text{aux,MJ}}(s, a) := -c\|a\|_2^2$, uses the default setting, while the second, $r^2_{\text{aux,MJ}}(s, a) := -4c\|a\|_2^2$, employs a scaled variant of the first. Additionally, we implement the path-wise bi-level optimization variant, i.e., BARFI Unrolled. In these experiments, we keep the value of $\gamma$ fixed to isolate the agent's capability to adapt and recover from an arbitrary reward weighting. We also measure the computational and performance trade-offs between the implicit version (BARFI) and the path-wise version (BARFI Unrolled).
**G** **Details for the Empirical Results**
**G.1** **Implementation Details**
In this section we briefly describe the implementation details for the different environments that were used.
**GridWorld (GW):** In the case of GridWorld we used a Fourier basis (order 3) over the raw coordinates of the agent's position in the GridWorld. Details can be found in the src/utils/Basis.py file.
**MountainCar (MC):** For this environment, to reduce limitations due to the function approximator, we used tile coding [58], which offers a suitable representation for the MountainCar problem. We used 4 tiles and 5 tilings.
**CartPole (CP):** For CartPole we also use a Fourier basis (order 3), with a linear function approximator on top of that.
**MuJoCo (MJ):** For this we used a neural network with one hidden layer of 32 nodes and ReLU activations as the function approximator over the raw observations. The policy outputs continuous actions; hence we used a Gaussian representation, where the policy outputs the mean of a multivariate Gaussian with a fixed diagonal standard deviation of $\sigma = 0.1$.
**General Details:** All outer returns are evaluated without any discounting, whereas all inner optimizations were initialized with $\gamma_\varphi = 0.99$. To do this, we made $\varphi$ a single bias unit, initialized to 4.6 and passed through a sigmoid (i.e., $\sigma(4.6) \approx 0.99$).
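A one-line sketch of this parameterization (an assumed rendering based on the description above):

```python
import torch

# gamma_varphi as a sigmoid over a single learnable bias unit; initializing the
# bias to 4.6 gives sigmoid(4.6) ≈ 0.99, and the sigmoid keeps gamma in (0, 1).
varphi = torch.tensor(4.6, requires_grad=True)
gamma = torch.sigmoid(varphi)
```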
For GW, CP, and MC, $r_\phi$ is defined as

$$r_\phi(s, a) = \phi_1(s) + \phi_2(s)\, r_p + \phi_3(s)\, r_a,$$

where $\phi_1, \phi_2, \phi_3$ are scalar outputs of a 3-headed function, in this case simply a linear layer over the state inputs.
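A minimal PyTorch sketch of this 3-headed parameterization (an illustration consistent with the description; the exact layer details are assumptions):

```python
import torch
import torch.nn as nn

class BehaviorAlignmentReward(nn.Module):
    """r_phi(s, a) = phi_1(s) + phi_2(s) * r_p + phi_3(s) * r_aux, with the three
    heads produced by a single linear layer over the state features."""
    def __init__(self, state_dim: int):
        super().__init__()
        self.heads = nn.Linear(state_dim, 3)

    def forward(self, s, r_p, r_aux):
        phi1, phi2, phi3 = self.heads(s).unbind(dim=-1)
        return phi1 + phi2 * r_p + phi3 * r_aux

# Usage: r = BehaviorAlignmentReward(state_dim=4)(states, rp_batch, raux_batch)
```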
In the case of MJ, we have

$$r_\phi(s, a) = \phi_1 + r_p + \phi_3\, r_a,$$

where $\phi_1$ (initialized to 0) and $\phi_3$ (initialized to 1.0) act as bias units.
Gradient normalization was used in all cases where neural networks were involved (i.e., MJ). Also, for MJ we modified the baseline (REINFORCE) update to subtract the running average of the performance as a baseline, in order to obtain acceptable performance for the baseline method.
**G.2** **Hyper-parameter Selection**
As the environments make use of different function approximators, the hyperparameter ranges can vary; we discuss all of them here. The best-performing parameters for the different methods and environments are listed in Tables 3–6.
Table 3: Hyper-parameters for GridWorld

| **Hyper Parameter** | **BARFI** | **REINFORCE** | **Actor-Critic** |
| --- | --- | --- | --- |
| $\alpha_\theta$ | $1 \times 10^{-3}$ | $1 \times 10^{-3}$ | $1 \times 10^{-3}$ |
| $\alpha_\phi$ | $5 \times 10^{-3}$ | − | − |
| $\alpha_\varphi$ | $5 \times 10^{-3}$ | − | − |
| optim | RMSprop | RMSprop | RMSprop |
| $\lambda_\theta$ | 0.25 | 0.25 | 0.25 |
| $\lambda_\phi$ | 0.0625 | − | − |
| $\lambda_\varphi$ | 4.0 | − | − |
| Buffer | 1000 | − | − |
| Batch Size | 1 | 1 | 1 |
| $\eta$ | 0.0005 | − | − |
| $\delta$ | 3 | − | − |
| $n$ | 5 | − | − |
| $N_0$ | 150 | − | − |
| $N_i$ | 15 | − | − |
Table 4: Hyper-parameters for MountainCar

| **Hyper Parameter** | **BARFI** | **REINFORCE** | **Actor-Critic** |
| --- | --- | --- | --- |
| $\alpha_\theta$ | 0.015625 | 0.125 | 0.03125 |
| $\alpha_\phi$ | 0.0625 | − | − |
| $\alpha_\varphi$ | 0.0625 | − | − |
| optim | RMSprop | RMSprop | RMSprop |
| $\lambda_\theta$ | 0.0 | 0.0 | 0.25 |
| $\lambda_\phi$ | 0.0 | − | − |
| $\lambda_\varphi$ | 0.25 | − | − |
| Buffer | 50 | − | − |
| Batch Size | 1 | 1 | 1 |
| $\eta$ | 0.001 | − | − |
| $\delta$ | 3 | − | − |
| $n$ | 5 | − | − |
| $N_0$ | 50 | − | − |
| $N_i$ | 15 | − | − |
**Hyperparameter Sweep**: Here we list the details of how we swept the values of the different hyperparameters; the sweep ranges are given after Tables 5 and 6. We used PyTorch [46] for all our implementations, with either RMSprop or Adam as the optimizer using the default parameters provided in PyTorch.
Table 5: Hyper-parameters for CartPole

| **Hyper Parameter** | **BARFI** | **REINFORCE** | **Actor-Critic** |
| --- | --- | --- | --- |
| $\alpha_\theta$ | $1 \times 10^{-3}$ | $1 \times 10^{-3}$ | $5 \times 10^{-4}$ |
| $\alpha_\phi$ | $1 \times 10^{-3}$ | − | − |
| $\alpha_\varphi$ | $5 \times 10^{-3}$ | − | − |
| optim | RMSprop | RMSprop | RMSprop |
| $\lambda_\theta$ | 1.0 | 1.0 | 0.0 |
| $\lambda_\phi$ | 0.0 | − | − |
| $\lambda_\varphi$ | 4.0 | − | − |
| Buffer | 10000 | − | − |
| Batch Size | 1 | 1 | 1 |
| $\eta$ | 0.0005 | − | − |
| $\delta$ | 3 | − | − |
| $n$ | 5 | − | − |
| $N_0$ | 150 | − | − |
| $N_i$ | 15 | − | − |
Table 6: Hyper-parameters for MuJoCo

| **Hyper Parameter** | **BARFI** | **REINFORCE** | **Actor-Critic** |
| --- | --- | --- | --- |
| $\alpha_\theta$ | $7.5 \times 10^{-5}$ | $5 \times 10^{-4}$ | $2.5 \times 10^{-4}$ |
| $\alpha_\phi$ | $2.5 \times 10^{-3}$ | − | − |
| $\alpha_\varphi$ | 0.0 | − | − |
| optim | Adam | Adam | Adam |
| $\lambda_\phi$ | 0.0625 | − | − |
| $\lambda_\varphi$ | 0.0 | − | − |
| Buffer | 50 | − | − |
| Batch Size | 1 | 1 | 1 |
| $\eta$ | 0.0005 | − | − |
| $\delta$ | 3 | − | − |
| $n$ | 5 | − | − |
| $N_0$ | 30 | − | − |
| $N_i$ | 15 | − | − |
We swept $\alpha_\theta \in \{5 \times 10^{-3}, 2.5 \times 10^{-3}, 1 \times 10^{-3}, 5 \times 10^{-4}, 2.5 \times 10^{-4}, 1 \times 10^{-4}, 7.5 \times 10^{-5}\}$, and we used similar ranges for $\alpha_\phi$ and $\alpha_\varphi$ (which tend to be larger). For $\lambda_\theta$ and $\lambda_\phi$, we swept over $[0, 0.25, 0.5, 1.0]$, and for $\lambda_\varphi$ we swept over $[0, 0.25, 1.0, 4.0, 16.0]$. For the remaining hyperparameters we swept $\delta \in [1, 3, 5]$, $n \in [1, 3, 5]$, $N_i \in [1, 3, 6, 9, 12, 15]$, $\eta \in [1 \times 10^{-3}, 5 \times 10^{-4}, 1 \times 10^{-4}]$, $N_0 \in [30, 50, 100, 150]$, and buffer $\in [25, 50, 100, 1000]$; the sensitivity curves presented later show that these values are usually robust for BARFI, as can also be seen from the tables above. The $\alpha$ for tile coding was adopted from [58], and hence similar ranges were swept in that case. Most sweeps were done with around 10 seeds; the parameter ranges were then narrowed and rerun with more seeds.
**G.3** **Compute**
Computation was performed on a cluster where the CPU class is Intel Xeon Gold 6240 @ 2.60 GHz. The total compute required for GW was around 3 CPU years⁷; CP also required around 3 CPU years, and MC required around 4 CPU years. For MJ we needed around 5–6 CPU years. In total, we utilized around 15–16 CPU years, with around 1 GB of memory per thread.

⁷1 CPU year := compute equal to running a single CPU thread for one year.
**H** **Extra Results & Ablations**
**H.1** **Experiment on partially misspecified** _r_ **aux**
In this set of experiments, we consider the case where the auxiliary reward signal is helpful only in a few, possibly arbitrary, state-action pairs. In general, we anticipate that solutions in this scenario would assign weightings that allow the agent to discount regions where $r_{\text{aux}}$ is misspecified, while still making use of the regions where $r_{\text{aux}}$ is well specified and useful.

We consider another $r_{\text{aux}}$ in the GridWorld domain in which the auxiliary reward is misspecified for a subset of states near the starting position, while still being well specified for states near the goal (Figure 6(a)). Figures 6(b) and (c) illustrate the learned weighting on $r_{\text{aux}}$ and the net induced reward, showcasing the agent's ability to exhibit the expected behavior described above.
[Figure 6 panels: (a) $r_{\text{aux}}(s)$; (b) learned $\phi_3(s)$, the weighting on $r_{\text{aux}}(s)$; (c) the net reward induced, i.e., $r_p(s) + \phi_3(s)\, r_{\text{aux}}(s)$; (d) learning performance.]
Figure 6: 40 random seeds were used to generate the plots. The starting state is at the bottom left and
the goal state is at the top right corner. The primary reward _rp_ is +100 when the agent reaches the
goal and 0 otherwise. **(a)** A state-dependent _r_ aux function that is partially misspecified (in the blue
region _r_ aux provides a value equal to the **Manhattan distance**, thereby incentivizing the agent to stay
near the start), and partially well specified (in the red region, it is the **negative Manhattan distance**
and encourages movement towards the goal). **(b)** The state-dependent weighting _ϕ_ 3( _s_ ) learned by
BARFI negates the positive value from _r_ aux near the start state. **(c)** The effective reward function
_rp_ ( _s_ ) + _ϕ_ 3( _s_ ) _r_ aux( _s_ ) learned by BARFI. **(d)** Learning curves for BARFI, and the baseline that uses
a naive addition ( _rp_ ( _s_ ) + _r_ aux( _s_ )) of the above auxiliary reward.
**H.2** **Return based on learned** _rϕ_ **and** _γφ_
Figure 7 and Figure 8 summarize the achievable return based on $r_\phi$ and the $\gamma$ learned by the agent across different domains and reward specifications. We observe that REINFORCE often optimizes the naive combination of rewards, but that this does not lead to good performance on $r_p$; BARFI, in contrast, achieves an appropriate return on $r_\phi$ while also successively decaying $\gamma$ as learning progresses across the different domains. Notice in particular Figure 7(a), bottom, where REINFORCE optimizes the auxiliary return considerably but actually fails to solve the problem, as it simply learns to loop around the center states.
**H.3** **Ablations**
Figure 9 presents ablations of BARFI's different parameters on GridWorld with the misspecified reward. We can see that $\eta \in \{0.001, 0.0005\}$ and $n = 5$ usually work for the approximation.
[Figure 8 panels: (a) GridWorld, (b) CartPole, (c) MountainCar.]
Figure 8: **Learned discounting $\gamma_\varphi$:** This figure illustrates the learned $\gamma_\varphi$ for BARFI and the fixed $\gamma$ for the other methods; the curves are chosen based on the best-performing curves on $r_p$ and averaged over 20 runs (40 for GW). **(a)** Top: $r^1_{\text{aux,GW}}$, Bottom: $r^2_{\text{aux,GW}}$. **(b)** Top: $r^1_{\text{aux,CP}}$, Bottom: $r^2_{\text{aux,CP}}$. **(c)** Top: $r^2_{\text{aux,MC}}$, Bottom: $r^1_{\text{aux,MC}}$. MuJoCo is not included, as $\gamma$ was not learned in that case. We can observe that the agents learn to decay $\gamma$ at an appropriate pace. Note that the curves for methods other than BARFI and BARFI Unrolled overlap, as the baselines do not change the value of $\gamma$.
Figure 9: **Sensitivity Curves:** These plots show the sensitivity of BARFI to each hyperparameter, keeping all other parameters fixed, in GW with $r^2_{\text{aux,GW}}$, i.e., the misspecified reward. We choose the best-performing parameters and vary each parameter to see its influence. The curves are obtained from 50 runs (seeds) in each case, and error bars are standard errors. We notice that $\alpha_\theta$ and $\alpha_\phi$ can have a large influence and tend to stay around similar values. $\lambda_{\theta,\phi,\varphi}$ tends to help, but its magnitude does not matter much, except that larger values of $\lambda_\varphi$ seem to do better. Smaller values of $\eta$ work fine; hence something around $5 \times 10^{-4}$ to $1 \times 10^{-3}$ usually suffices. $n$ and $\delta$ can be set to around 5 and 3, respectively, and usually work fine. We also set $N_i = 5 \times \delta$ in this case.