# A Distributional View on Multi-Objective Policy Optimization

Abbas Abdolmaleki\* Sandy H. Huang\* Leonard Hasenclever\* Michael Neunert\* H. Francis Song\* Martina Zambelli Murilo F. Martins Nicolas Heess Raia Hadsell Martin Riedmiller

# Abstract

Many real-world problems require trading off multiple competing objectives. However, these objectives are often in different units and/or scales, which can make it challenging for practitioners to express numerical preferences over objectives in their native units. In this paper we propose a novel algorithm for multi-objective reinforcement learning that enables setting desired preferences for objectives in a scale-invariant way. We propose to learn an action distribution for each objective, and we use supervised learning to fit a parametric policy to a combination of these distributions. We demonstrate the effectiveness of our approach on challenging high-dimensional real and simulated robotics tasks, and show that setting different preferences in our framework allows us to trace out the space of nondominated solutions.

Figure 1. We demonstrate our approach in four complex continuous control domains, in simulation and in the real world. Videos are at http://sites.google.com/view/mo-mpo.

# 1. Introduction

Reinforcement learning (RL) algorithms do an excellent job at training policies to optimize a single scalar reward function. Recent advances in deep RL have made it possible to train policies that exceed human-level performance on Atari (Mnih et al., 2015) and Go (Silver et al., 2016), perform complex robotic manipulation tasks (Zeng et al., 2019), learn agile locomotion (Tan et al., 2018), and even obtain reward in unanticipated ways (Amodei et al., 2016). However, many real-world tasks involve multiple, possibly competing, objectives. For instance, choosing a financial portfolio requires trading off between risk and return; controlling energy systems requires trading off performance and cost; and autonomous cars must trade off fuel costs, efficiency, and safety. Multi-objective reinforcement learning (MORL) algorithms aim to tackle such problems (Roijers et al., 2013; Liu et al., 2015). A common approach is scalarization: based on preferences across objectives, transform the multi-objective reward vector into a single scalar reward (e.g., by taking a convex combination), and then use standard RL to optimize this scalar reward. It is tricky, though, for practitioners to pick the appropriate scalarization for a desired preference across objectives, because objectives are often defined in different units and/or scales. For instance, suppose we want an agent to complete a task while minimizing energy usage and mechanical wear-and-tear. Task completion may correspond to a sparse reward or to the number of square feet a vacuuming robot has cleaned, and reducing energy usage and mechanical wear-and-tear could be enforced by penalties on power consumption (in kWh) and actuator efforts (in N or Nm), respectively. Practitioners would need to resort to trial and error to select a scalarization that ensures the agent prioritizes actually doing the task (and thus being useful) over saving energy. Motivated by this, we propose a scale-invariant approach for encoding preferences, derived from the RL-as-inference perspective. Instead of choosing a scalarization, practitioners set a constraint per objective. Based on these constraints, we learn an action distribution per objective that improves on the current policy. Then, to obtain a single updated policy that makes these trade-offs, we use supervised learning to fit a policy to the combination of these action distributions. The constraints control the influence of each objective on the policy, by constraining the KL-divergence between each objective-specific distribution and the current policy. The higher the constraint value, the more influence the objective has.
Thus, a desired preference over objectives can be encoded as the relative magnitudes of these constraint values.

Fundamentally, scalarization combines objectives in reward space, whereas our approach combines objectives in distribution space, making it invariant to the scale of rewards. In principle, our approach can be combined with any RL algorithm, regardless of whether it is off-policy or on-policy. We combine it with maximum a posteriori policy optimization (MPO) (Abdolmaleki et al., 2018a;b), an off-policy actor-critic RL algorithm, and V-MPO (Song et al., 2020), an on-policy variant of MPO. We call these two algorithms multi-objective MPO (MO-MPO) and multi-objective V-MPO (MO-V-MPO), respectively. Code for MO-MPO will be made available online.$^{1}$

Our main contribution is providing a distributional view on MORL, which enables scale-invariant encoding of preferences. We show that this is a theoretically grounded approach that arises from taking an RL-as-inference perspective on MORL. Empirically, we analyze the mechanics of MO-MPO and show that it finds all Pareto-optimal policies in a popular MORL benchmark task. Finally, we demonstrate that MO-MPO and MO-V-MPO outperform scalarized approaches on multi-objective tasks across several challenging high-dimensional continuous control domains (Fig. 1).

# 2. Related Work

# 2.1. Multi-Objective Reinforcement Learning

Multi-objective reinforcement learning (MORL) algorithms are either single-policy or multiple-policy (Vamplew et al., 2011). Single-policy approaches seek to find the optimal policy for a given scalarization of the multi-objective problem. Often this scalarization is linear, but other choices have also been explored (Van Moffaert et al., 2013). However, the scalarization may be unknown at training time, or it may change over time. Multiple-policy approaches handle this by finding a set of policies that approximates the true Pareto front.
Some approaches repeatedly call a single-policy MORL algorithm with strategically-chosen scalarizations (Natarajan & Tadepalli, 2005; Roijers et al., 2014; Mossalam et al., 2016; Zuluaga et al., 2016). Other approaches learn a set of policies simultaneously, by using a multi-objective variant of the Q-learning update rule (Barrett & Narayanan, 2008; Moffaert & Nowé, 2014; Reymond & Nowé, 2019; Yang et al., 2019) or by modifying gradient-based policy search (Parisi et al., 2014; Pirotta et al., 2015). Most existing approaches for finding the Pareto front are limited to discrete state and action spaces, in which tabular algorithms are sufficient. Although recent work combining MORL with deep RL handles high-dimensional observations, it does so in domains with low-dimensional and usually discrete action spaces (Mossalam et al., 2016; van Seijen et al., 2017; Friedman & Fontaine, 2018; Abels et al., 2019; Reymond & Nowé, 2019; Yang et al., 2019; Nottingham et al., 2019). In contrast, we evaluate our approach on continuous control tasks with more than 20 action dimensions. A couple of recent works have applied deep MORL to find the Pareto front in continuous control tasks; these works assume scalarization and rely on additionally learning either a meta-policy (Chen et al., 2019) or inter-objective relationships (Zhan & Cao, 2019). Our approach is orthogonal to existing ones: we encode preferences via constraints on the influence of each objective on the policy update, rather than via scalarization. MO-MPO can be run multiple times, with different constraint settings, to find a Pareto front of policies.

# 2.2. Constrained Reinforcement Learning

An alternate way of setting preferences is to enforce that policies meet certain constraints. For instance, threshold lexicographic ordering approaches optimize a (single) objective while meeting specified threshold values on the other objectives (Gábor et al., 1998), optionally with slack (Wray et al., 2015).
Similarly, safe RL is concerned with learning policies that optimize a scalar reward while not violating safety constraints (Achiam et al., 2017; Chow et al., 2018); this has also been studied in the off-policy batch RL setting (Le et al., 2019). Related work minimizes costs while ensuring the policy meets a constraint on the minimum expected return (Bohez et al., 2019), but this requires that the desired or achievable reward is known a priori. In contrast, MO-MPO does not require knowledge of the scale of rewards. In fact, often there is no easy way to specify constraints on objectives; e.g., it is difficult to figure out a priori how much actuator effort a robot will need to use to perform a task.

# 2.3. Multi-Task Reinforcement Learning

Multi-task reinforcement learning can also be cast as a MORL problem. Generally these algorithms learn a separate policy for each task, with shared learning across tasks (Teh et al., 2017; Riedmiller et al., 2018; Wulfmeier et al., 2019). In particular, Distral (Teh et al., 2017) learns a shared prior that regularizes the per-task policies to be similar to each other, and thus captures essential structure that is shared across tasks. MO-MPO differs in that the goal is to learn a single policy that must trade off different objectives. Other multi-task RL algorithms seek to train a single agent to solve different tasks, and thus need to handle different reward scales across tasks. Prior work uses adaptive normalization for the targets in value-based RL, so that the agent cares equally about all tasks (van Hasselt et al., 2016; Hessel et al., 2019). Similarly, prior work in multi-objective optimization has dealt with objectives of different units and/or scales by normalizing objectives to have similar magnitudes (Marler & Arora, 2005; Grodzevich & Romanko, 2006; Kim & de Weck, 2006; Daneshmand et al., 2017; Ishibuchi et al., 2017).
MO-MPO can also be seen as doing adaptive normalization, but for any preference over objectives, not just equal preferences. In general, invariance to reparameterization of the function approximator has been investigated in the optimization literature, resulting in, for example, natural gradient methods (Martens, 2014). The common tool is measuring distances in function space instead of parameter space, using the KL-divergence. Similarly, in this work, to achieve invariance to the scale of objectives, we use KL-divergences over policies to encode preferences.

# 3. Background and Notation

Multi-Objective Markov Decision Process. In this paper, we consider a multi-objective RL problem defined by a multi-objective Markov Decision Process (MO-MDP). The MO-MDP consists of states $s \in S$ and actions $a \in \mathcal{A}$, an initial state distribution $p(s_0)$, transition probabilities $p(s_{t+1}|s_t, a_t)$ which define the probability of changing from state $s_t$ to $s_{t+1}$ when taking action $a_t$, reward functions $\{r_k(s, a) \in \mathbb{R}\}_{k=1}^N$ per objective $k$, and a discount factor $\gamma \in [0,1)$. We define our policy $\pi_\theta(a|s)$ as a state-conditional distribution over actions parametrized by $\theta$. Together with the transition probabilities, this gives rise to a state visitation distribution $\mu(s)$. We also consider per-objective action-value functions. The action-value function for objective $k$ is defined as the expected return (i.e., cumulative discounted reward) from choosing action $a$ in state $s$ for objective $k$ and then following policy $\pi$: $Q_k^\pi(s, a) = \mathbb{E}_\pi[\sum_{t=0}^\infty \gamma^t r_k(s_t, a_t)|s_0 = s, a_0 = a]$. We can represent this function using the recursive expression $Q_k^\pi(s_t, a_t) = \mathbb{E}_{p(s_{t+1}|s_t, a_t)}[r_k(s_t, a_t) + \gamma V_k^\pi(s_{t+1})]$, where $V_k^\pi(s) = \mathbb{E}_\pi[Q_k^\pi(s, a)]$ is the value function of $\pi$ for objective $k$.

Problem Statement.
For any MO-MDP there is a set of nondominated policies, i.e., the Pareto front. A policy is nondominated if there is no other policy that improves its expected return for an objective without reducing the expected return of at least one other objective. Given a preference setting, our goal is to find a nondominated policy $\pi_{\theta}$ that satisfies those preferences. In our approach, a setting of constraints does not directly correspond to a particular scalarization, but we show that by varying these constraint settings, we can indeed trace out a Pareto front of policies.

Algorithm 1 MO-MPO: One policy improvement step

1: given batch size $L$, number of actions to sample $M$, $N$ Q-functions $\{Q_k^{\pi_{\mathrm{old}}}(s,a)\}_{k=1}^N$, preferences $\{\epsilon_k\}_{k=1}^N$, previous policy $\pi_{\mathrm{old}}$, previous temperatures $\{\eta_k\}_{k=1}^N$, replay buffer $\mathcal{D}$, and a first-order gradient-based optimizer $O$
2:
3: initialize $\pi_{\theta}$ from the parameters of $\pi_{\mathrm{old}}$
4: repeat
5:   // Collect dataset $\{s^i, a^{ij}, Q_k^{ij}\}_{i,j,k}^{L,M,N}$, where
6:   // $M$ actions $a^{ij} \sim \pi_{\mathrm{old}}(a|s^i)$ and $Q_k^{ij} = Q_k^{\pi_{\mathrm{old}}}(s^i, a^{ij})$
7:
8:   // Compute action distribution for each objective
9:   for $k = 1, \dots, N$ do
10:     $\delta_{\eta_k} \gets \nabla_{\eta_k} \left[ \eta_k \epsilon_k + \eta_k \frac{1}{L} \sum_i^L \log \left( \frac{1}{M} \sum_j^M \exp \left( \frac{Q_k^{ij}}{\eta_k} \right) \right) \right]$
11:     Update $\eta_k$ based on $\delta_{\eta_k}$, using optimizer $O$
12:     $q_k^{ij} \propto \exp \left( \frac{Q_k^{ij}}{\eta_k} \right)$
13:   end for
14:
15:   // Update parametric policy
16:   $\delta_{\pi} \gets -\nabla_{\theta} \sum_i^L \sum_j^M \sum_k^N q_k^{ij} \log \pi_{\theta}(a^{ij}|s^i)$
17:   (subject to additional KL regularization, see Sec. 4.2.2)
18:   Update $\pi_{\theta}$ based on $\delta_{\pi}$, using optimizer $O$
19:
20: until fixed number of steps
21: return $\pi_{\mathrm{old}} = \pi_{\theta}$

# 4. Method

We propose a policy iteration algorithm for multi-objective RL. Policy iteration algorithms decompose the RL problem into two sub-problems and iterate until convergence:

1. Policy evaluation: estimate Q-functions given the policy
2. Policy improvement: update the policy given the Q-functions

Algorithm 1 summarizes this two-step multi-objective policy improvement procedure. In Appendix E, we explain how this can be derived from the "RL as inference" perspective. We describe multi-objective MPO in this section and explain multi-objective V-MPO in Appendix D. When there is only one objective, MO-(V-)MPO reduces to (V-)MPO.

# 4.1. Multi-Objective Policy Evaluation

In this step we learn Q-functions to evaluate the previous policy $\pi_{\mathrm{old}}$. We train a separate Q-function per objective, following the Q-decomposition approach (Russell & Zimdars, 2003). In principle, any Q-learning algorithm can be used, as long as the target Q-value is computed with respect to $\pi_{\mathrm{old}}$.$^{3}$ In this paper, we use the Retrace objective (Munos et al., 2016) to learn a Q-function $Q_{k}^{\pi_{\mathrm{old}}}(s,a;\phi_{k})$ for each objective $k$, parameterized by $\phi_{k}$, as follows:

$$ \min_{\{\phi_k\}_1^N} \sum_{k=1}^N \mathbb{E}_{(s,a)\sim\mathcal{D}} \left[ \left( \hat{Q}_k^{\mathrm{ret}}(s,a) - Q_k^{\pi_{\mathrm{old}}}(s,a;\phi_k) \right)^2 \right], $$

where $\hat{Q}_k^{\mathrm{ret}}$ is the Retrace target for objective $k$ and the previous policy $\pi_{\mathrm{old}}$, and $\mathcal{D}$ is a replay buffer containing gathered transitions. See Appendix C for details.

# 4.2. Multi-Objective Policy Improvement

Given the previous policy $\pi_{\mathrm{old}}(a|s)$ and associated Q-functions $\{Q_k^{\pi_{\mathrm{old}}}(s,a)\}_{k=1}^N$, our goal is to improve the previous policy for a given visitation distribution $\mu(s)$.
To this end, we learn an action distribution for each Q-function and combine these to obtain the next policy $\pi_{\mathrm{new}}(a|s)$. This is a multi-objective variant of the two-step policy improvement procedure employed by MPO (Abdolmaleki et al., 2018b). In the first step, for each objective $k$ we learn an improved action distribution $q_k(a|s)$ such that $\mathbb{E}_{q_k(a|s)}[Q_k^{\pi_{\mathrm{old}}}(s,a)] \geq \mathbb{E}_{\pi_{\mathrm{old}}(a|s)}[Q_k^{\pi_{\mathrm{old}}}(s,a)]$, where states $s$ are drawn from a visitation distribution $\mu(s)$. In the second step, we combine and distill the improved distributions $q_k$ into a new parametric policy $\pi_{\mathrm{new}}$ (with parameters $\theta_{\mathrm{new}}$) by minimizing the KL-divergence between the distributions and the new parametric policy, i.e.,

$$ \theta_{\mathrm{new}} = \underset{\theta}{\operatorname{argmin}} \sum_{k=1}^N \mathbb{E}_{\mu(s)} \left[ \mathrm{KL}\left( q_k(a|s) \,\|\, \pi_{\theta}(a|s) \right) \right]. \tag{1} $$

This is a supervised learning loss that performs maximum likelihood estimation of the distributions $q_k$. Next, we explain these two steps in more detail.

# 4.2.1. OBTAINING ACTION DISTRIBUTIONS PER OBJECTIVE (STEP 1)

To obtain the per-objective improved action distributions $q_k(a|s)$, we optimize the standard RL objective for each $Q_k$:

$$ \max_{q_k} \int_s \mu(s) \int_a q_k(a|s) Q_k(s,a) \, \mathrm{d}a \, \mathrm{d}s \tag{2} $$

$$ \mathrm{s.t.} \int_s \mu(s) \, \mathrm{KL}\big( q_k(a|s) \,\|\, \pi_{\mathrm{old}}(a|s) \big) \, \mathrm{d}s < \epsilon_k, $$

where $\epsilon_k$ denotes the allowed expected KL divergence for objective $k$. We use these $\epsilon_k$ to encode preferences over objectives. More concretely, $\epsilon_k$ defines the allowed influence of objective $k$ on the change of the policy.
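To build intuition for the distillation objective in Eq. (1): if we drop the trust-region regularization introduced later in Sec. 4.2.2 and let the policy be an unconstrained categorical distribution, the per-state minimizer of $\sum_k \mathrm{KL}(q_k \| \pi)$ is simply the uniform mixture $\pi(a) \propto \sum_k q_k(a)$. The NumPy check below uses randomly drawn, purely illustrative $q_k$; it perturbs the mixture within the simplex and confirms that no feasible direction decreases the summed KL.

```python
import numpy as np

rng = np.random.default_rng(1)
N, A = 3, 4                                  # objectives, actions (illustrative)
q = rng.dirichlet(np.ones(A), size=N)        # per-objective distributions q_k(a)
pi_bar = q.mean(axis=0)                      # candidate minimizer: uniform mixture

def summed_kl(pi):
    # sum_k KL(q_k || pi) for a categorical pi on the same support
    return float(sum((q[k] * np.log(q[k] / pi)).sum() for k in range(N)))

base = summed_kl(pi_bar)
for _ in range(200):
    d = rng.normal(size=A)
    d -= d.mean()                            # keep the perturbation on the simplex
    pert = pi_bar + 1e-3 * d
    if (pert > 0).all():
        assert summed_kl(pert) >= base       # the mixture is the global minimizer
```

The same calculation shows why the fitted policy trades off all objectives at once: each $q_k$ "votes" for its preferred actions, and the distillation averages those votes.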
$^3$ ...based on its own best action for the next state, then they overestimate Q-values, because in reality the parametric policy $\pi_{\theta}$ (that considers all critics' opinions) is in charge of choosing actions.

$^4$ In practice, we use draws from the replay buffer to estimate expectations over the visitation distribution $\mu(s)$.

For nonparametric action distributions $q_k(a|s)$, we can solve this constrained optimization problem in closed form for each state $s$ sampled from $\mu(s)$ (Abdolmaleki et al., 2018b),

$$ q_k(a|s) \propto \pi_{\mathrm{old}}(a|s) \exp \left( \frac{Q_k(s,a)}{\eta_k} \right), \tag{3} $$

where the temperature $\eta_k$ is computed based on the corresponding $\epsilon_k$, by solving the following convex dual function:

$$ \eta_k = \underset{\eta}{\operatorname{argmin}} \; \eta \epsilon_k + \eta \int_s \mu(s) \log \int_a \pi_{\mathrm{old}}(a|s) \exp \left( \frac{Q_k(s,a)}{\eta} \right) \mathrm{d}a \, \mathrm{d}s. \tag{4} $$

In order to evaluate $q_k(a|s)$ and the integrals in (4), we draw $L$ states from the replay buffer and, for each state, sample $M$ actions from the current policy $\pi_{\mathrm{old}}$. In practice, we maintain one temperature parameter $\eta_k$ per objective. We found that optimizing the dual function by performing a few steps of gradient descent on $\eta_k$ is effective, and we initialize with the solution found in the previous policy iteration step. Since $\eta_k$ should be positive, we use a projection operator after each gradient step to maintain $\eta_k > 0$. Please refer to Appendix C for derivation details.

Application to Other Deep RL Algorithms. Since the constraints $\epsilon_k$ in (2) encode the preferences over objectives, solving this optimization problem with good satisfaction of the constraints is key for learning a policy that satisfies the desired preferences.
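The temperature solve and the resulting weights (Eqs. (3)–(4), lines 10–12 of Algorithm 1) can be sketched with samples. Everything below is illustrative: random draws stand in for the critics' Q-values, $\pi_{\mathrm{old}}$ is represented by the $M$ sampled actions themselves, and since the dual is convex in $\eta$ we use a simple ternary search over a bracketed interval rather than the few gradient steps used in practice. At the minimizer the constraint is active, so the average sample-based KL between $q_k$ and $\pi_{\mathrm{old}}$ comes out equal to $\epsilon_k$, which the assertion checks.

```python
import numpy as np

rng = np.random.default_rng(0)
L, M = 32, 64                          # states per batch, actions per state
eps = [0.01, 0.002]                    # per-objective KL bounds (preferences)
Q = rng.normal(size=(2, L, M))         # stand-in for the critics' Q_k(s_i, a_ij)

def dual(eta, Q_k, eps_k):
    # g(eta) = eta*eps_k + eta * mean_i log mean_j exp(Q_k^{ij} / eta)   (Eq. 4)
    return eta * eps_k + eta * np.mean(np.log(np.mean(np.exp(Q_k / eta), axis=1)))

def solve_eta(Q_k, eps_k, lo=0.1, hi=100.0):
    # The dual is convex in eta, so ternary search finds its minimizer.
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if dual(m1, Q_k, eps_k) < dual(m2, Q_k, eps_k):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

for k in range(2):
    eta_k = solve_eta(Q[k], eps[k])
    w = np.exp(Q[k] / eta_k)
    q_k = w / w.sum(axis=1, keepdims=True)       # q_k^{ij}, Eq. (3)
    # Sample-based E_s[KL(q_k || pi_old)], with pi_old uniform over the M samples
    kl = np.mean((q_k * np.log(q_k * M)).sum(axis=1))
    assert abs(kl - eps[k]) < 1e-4               # constraint is active at the optimum
```

Note how the smaller bound `eps[1]` yields a larger temperature and hence flatter weights, i.e., a smaller allowed change for that objective.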
For nonparametric action distributions $q_k(a|s)$, we can satisfy these constraints exactly. One could instead use any policy gradient method (e.g., Schulman et al., 2015; 2017; Heess et al., 2015; Haarnoja et al., 2018) to obtain $q_k(a|s)$ in parametric form. However, solving the constrained optimization for a parametric $q_k(a|s)$ is not exact, and the constraints may not be well satisfied, which impedes the use of $\epsilon_k$ to encode preferences. Moreover, a parametric $q_k(a|s)$ requires maintaining a function approximator (e.g., a neural network) per objective, which can significantly increase the complexity of the algorithm and limit scalability.

Choosing $\epsilon_k$. It is more intuitive to encode preferences via $\epsilon_k$ than via scalarization weights, because the former is invariant to the scale of rewards. In other words, having a desired preference across objectives narrows down the range of reasonable choices for $\epsilon_k$, but does not narrow down the range of reasonable choices for scalarization weights. To identify reasonable scalarization weights, an RL practitioner additionally needs to be familiar with the scale of rewards for each objective. In practice, we have found that learning performance is robust to a wide range of scales for $\epsilon_k$. It is the relative scales of the $\epsilon_k$ that matter for encoding preferences over objectives—the larger a particular $\epsilon_k$ is with respect to the others, the more objective $k$ is preferred. On the other hand, if $\epsilon_k = 0$, then objective $k$ will have no influence and will effectively be ignored. In Appendix A.1, we provide suggestions for setting $\epsilon_k$, given a desired preference over objectives.

# 4.2.2. FITTING A NEW PARAMETRIC POLICY (STEP 2)

In the previous section, for each objective $k$, we obtained an improved action distribution $q_k(a|s)$.
Next, we want to combine these distributions to obtain a single parametric policy that trades off the objectives according to the constraints $\epsilon_k$ that we set. For this, we solve a supervised learning problem that fits a parametric policy to the per-objective action distributions from step 1,

$$ \theta_{\mathrm{new}} = \underset{\theta}{\operatorname{argmax}} \sum_{k=1}^N \int_s \mu(s) \int_a q_k(a|s) \log \pi_{\theta}(a|s) \, \mathrm{d}a \, \mathrm{d}s $$

$$ \mathrm{s.t.} \int_s \mu(s) \, \mathrm{KL}\left( \pi_{\mathrm{old}}(a|s) \,\|\, \pi_{\theta}(a|s) \right) \mathrm{d}s < \beta, \tag{5} $$

where $\theta$ are the parameters of our policy (a neural network) and the KL constraint enforces a trust region of size $\beta$ that limits the overall change in the parametric policy. The KL constraint in this step has a regularization effect that prevents the policy from overfitting to the sample-based action distributions, and therefore avoids premature convergence and improves the stability of learning (Schulman et al., 2015; Abdolmaleki et al., 2018a;b). As in the first policy improvement step, we evaluate the integrals using the $L$ states sampled from the replay buffer and the $M$ actions per state sampled from the old policy. To optimize (5) using gradient descent, we employ Lagrangian relaxation, as in MPO (Abdolmaleki et al., 2018a) (see Appendix C for more detail).

# 5. Experiments: Toy Domains

In the empirical evaluation that follows, we first demonstrate the mechanics and scale-invariance of MO-MPO in a single-state environment (Sec. 5.1), and then show that MO-MPO can find all Pareto-dominant policies in a popular MORL benchmark (Sec. 5.2). Finally, we show the benefit of using MO-MPO in high-dimensional continuous control domains, including on a real robot (Sec. 6).
Appendices A and B contain a detailed description of all domains and tasks, experimental setups, and implementation details.

Baselines. The goal of our empirical evaluation is to analyze the benefit of using our proposed multi-objective policy improvement step (Sec. 4.2.2), which encodes preferences over objectives via constraints $\epsilon_k$ on expected KL-divergences rather than via weights $w_k$. Thus, we primarily compare MO-MPO against scalarized MPO, which relies on linear scalarization weights $w_k$ to encode preferences. The only difference between MO-MPO and scalarized MPO is the policy improvement step: for scalarized MPO, a single improved action distribution $q(a|s)$ is computed, based on $\sum_k w_k Q_k(s,a)$ and a single KL constraint $\epsilon$.

Figure 2. A simple environment in which the agent starts at state $S_0$, and chooses to navigate to one of three terminal states. There are two objectives. Taking the left action, for instance, leads to a reward of 1 for the first objective and 4 for the second.

State-of-the-art approaches that combine MORL with deep RL also assume linear scalarization, either learning a separate policy for each setting of weights (Mossalam et al., 2016) or learning a single policy conditioned on scalarization weights (Friedman & Fontaine, 2018; Abels et al., 2019). Scalarized MPO addresses the former problem, which is easier. The policy evaluation step in scalarized MPO is analogous to scalarized $Q$-learning, proposed by Mossalam et al. (2016). As we show later in Sec. 6, even learning an optimal policy for a single scalarization is difficult in high-dimensional continuous control domains.

# 5.1. Simple World

First, we examine the behavior of MO-MPO in a simple multi-armed bandit with three actions (up, right, and left) (Fig. 2), inspired by Russell & Zimdars (2003). We train policies with scalarized MPO and with MO-MPO.
The policy evaluation step is exact because the $Q$-value function for each objective is known: it is equal to the reward received for that objective after taking each action, as labeled in Fig. 2.$^5$ We consider three possible desired preferences: equal preference for the two objectives, preferring the first, and preferring the second. Encoding preferences in scalarized MPO amounts to choosing appropriate linear scalarization weights, and in MO-MPO amounts to choosing appropriate $\epsilon$'s. We use the following weights and $\epsilon$'s:

- equal preference: weights [0.5, 0.5] or $\epsilon$'s [0.01, 0.01]
- prefer first: weights [0.9, 0.1] or $\epsilon$'s [0.01, 0.002]
- prefer second: weights [0.1, 0.9] or $\epsilon$'s [0.002, 0.01]

We set $\epsilon = 0.01$ for scalarized MPO. If we start with a uniform policy and run MPO with $\beta = 0.001$ until the policy converges, scalarized MPO and MO-MPO result in similar policies (Fig. 3, solid bars): up for equal preference, right for prefer first, and left for prefer second. However, if we make the rewards imbalanced by multiplying the rewards obtained for the first objective by 20 (e.g., left now obtains a reward of [20, 4]), we see that the policies learned by scalarized MPO shift to preferring the optimal action for the first objective (right) in both the equal preference and prefer second cases (Fig. 3, striped bars). In contrast, the final policies for MO-MPO are the same as for balanced rewards, because in each policy improvement step, MO-MPO optimizes for a separate temperature $\eta_k$ that scales each objective's $Q$-value function. This $\eta_k$ is computed based on the corresponding allowed KL-divergence $\epsilon_k$, so when the rewards for any objective $k$ are multiplied by a factor but $\epsilon_k$ remains the same, the computed $\eta_k$ ends up being scaled by that factor as well, neutralizing the effect of the scaling of rewards (see Eq. (4)).
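This neutralization can be read off directly from the dual in Eq. (4). Under a reward rescaling $Q_k \to c\,Q_k$ with $c > 0$, substituting $\eta = c\,\eta'$ gives

$$ c\,\eta'\epsilon_k + c\,\eta' \int_s \mu(s) \log \int_a \pi_{\mathrm{old}}(a|s) \exp \left( \frac{c\,Q_k(s,a)}{c\,\eta'} \right) \mathrm{d}a \, \mathrm{d}s = c \left[ \eta'\epsilon_k + \eta' \int_s \mu(s) \log \int_a \pi_{\mathrm{old}}(a|s) \exp \left( \frac{Q_k(s,a)}{\eta'} \right) \mathrm{d}a \, \mathrm{d}s \right], $$

so the dual for $c\,Q_k$ at $\eta$ is $c$ times the dual for $Q_k$ at $\eta/c$, and its minimizer scales accordingly: $\eta_k(c\,Q_k) = c\,\eta_k(Q_k)$. The ratio $Q_k/\eta_k$ in Eq. (3) is therefore unchanged, and so is $q_k(a|s)$.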
Even in this simple environment, we see that MO-MPO's scale-invariant way of encoding preferences is valuable. In more complex domains, in which the $Q$-value functions must be learned in parallel with the policy, the (automatic) dynamic adjustment of temperatures $\eta_k$ per objective becomes even more essential (Sec. 6).

The scale of $\epsilon_k$ controls the amount that objective $k$ can influence the policy's update. If we set $\epsilon_1 = 0.01$ and sweep $\epsilon_2$ over the range from 0 to 0.01, the resulting policies go from always picking right, to splitting probability across right and up, to always picking up (Fig. 4, right). In contrast, setting weights leads to policies quickly converging to placing all probability on a single action (Fig. 4, left). We hypothesize that this limits the ability of scalarized MPO to explore and find compromise policies (that perform well with respect to all objectives) in more challenging domains.

Figure 3. When two objectives have comparable reward scales (solid bars), scalarized MPO (first row) and MO-MPO (second row) learn similar policies, across three different preferences. However, when the scale of the first objective is much higher (striped bars), scalarized MPO shifts to always preferring the first objective. In contrast, MO-MPO is scale-invariant and still learns policies that satisfy the preferences. The y-axis denotes action probability.

Figure 4. A visualization of policies during learning—each curve corresponds to a particular setting of weights (left) or $\epsilon$'s (right). Policies are initialized as uniform (the blue dot), and are trained until convergence. Each point $(x, y)$ corresponds to the policy with $p(\text{left}) = x$, $p(\text{right}) = y$, and $p(\text{up}) = 1 - x - y$. The top left and bottom right blue stars denote the optimal policy for the first and second objectives, respectively.

# 5.2. Deep Sea Treasure

An important quality of any MORL approach is the ability to find a variety of policies on the true Pareto front (Roijers et al., 2013). We demonstrate this in Deep Sea Treasure (DST) (Vamplew et al., 2011), a popular benchmark for testing MORL approaches. DST consists of an $11 \times 10$ grid world with ten treasure locations. The agent starts in the upper left corner ($s_0 = (0,0)$) and has a choice of four actions (moving one square up, right, down, or left). When the agent picks up a treasure, the episode terminates. The agent has two objectives: treasure value and time penalty. The time penalty is $-1$ for each time step the agent takes, and farther-away treasures have higher treasure values. We use the treasure values from Yang et al. (2019).

We ran scalarized MPO with weightings $[w, 1-w]$ and $w \in \{0, 0.01, 0.02, \dots, 1\}$. All policies converged to a point on the true Pareto front, and all but three found the optimal policy for that weighting. In terms of coverage, policies were found for eight out of ten points on the Pareto front.$^6$ We ran MO-MPO on this task as well, for a range of $\epsilon$: $\epsilon_{\mathrm{time}} \in \{0.01, 0.02, 0.05\}$ and $\epsilon_{\mathrm{treasure}} = c \cdot \epsilon_{\mathrm{time}}$, where $c \in \{0.5, 0.51, 0.52, \dots, 1.5\}$. All runs converged to a policy on the true Pareto front, and MO-MPO finds policies for all ten points on the front (Fig. 5, left). Note that it is the ratio of $\epsilon$'s that matters, rather than their exact settings—across all settings of $\epsilon_{\mathrm{time}}$, similar ratios of $\epsilon_{\mathrm{treasure}}$ to $\epsilon_{\mathrm{time}}$ result in similar policies; as this ratio increases, policies tend to prefer higher-value treasures (Fig. 5, right).

Figure 5. Left: Blue stars mark the true Pareto front for Deep Sea Treasure. MO-MPO, with a variety of settings for $\epsilon_k$, discovers all points on the true Pareto front.
The area of the orange circles is proportional to the number of $\epsilon_k$ settings that converged to that point. Right: As more preference is given to the treasure objective (i.e., as $x$ increases), policies tend to prefer higher-value treasures. Each orange dot in the scatterplot corresponds to a particular setting of $\epsilon_k$.

# 6. Experiments: Continuous Control Domains

The advantage of encoding preferences via $\epsilon$'s, rather than via weights, is apparent in more complex domains. We compared our approaches, MO-MPO and MO-V-MPO, against scalarized MPO and V-MPO in four high-dimensional continuous control domains, in MuJoCo (Todorov et al., 2012) and on a real robot. The domains we consider are:

**Humanoid:** We use the humanoid run task defined in the DeepMind Control Suite (Tassa et al., 2018). Policies must optimize for horizontal speed $h$ while minimizing energy usage. The task reward is $\min(h/10, 1)$, where $h$ is in meters per second, and the energy usage penalty is the action $\ell_2$-norm. The humanoid has 21 degrees of freedom, and the observation consists of joint angles, joint velocities, head height, hand and feet positions, torso vertical orientation, and center-of-mass velocity, for a total of 67 dimensions.

**Shadow Hand:** We consider three tasks on the Shadow Dexterous Hand: touch, turn, and orient. In the touch and turn tasks, policies must complete the task while minimizing "pain." A sparse task reward of 1.0 is given for pressing the block with greater than 5 N of force or for turning the dial from a random initial location to the target location. The pain penalty penalizes the robot for colliding with objects at high speed; this penalty is defined as in Huang et al. (2019). In the orient task, there are three aligned objectives: touching the rectangular peg, lifting it to a given height, and orienting it to be perpendicular to the ground. All three rewards are between 0 and 1.
The Shadow Hand has five fingers and 24 degrees of freedom, actuated by 20 motors. The observation consists of joint angles, joint velocities, and touch sensors, for a total of 63 dimensions. The touch and turn tasks terminate when the goal is reached or after 5 seconds, and the orient task terminates after 10 seconds.

**Humanoid Mocap:** We consider the large-scale humanoid motion capture tracking task from Hasenclever et al. (2020), in which policies must learn to follow motion capture reference data. There are five objectives, each capturing a different aspect of the similarity between the simulated humanoid's pose and the mocap target: joint orientations, joint velocities, hand and feet positions, center-of-mass positions, and certain body positions and joint angles. These objectives are described in detail in Appendix B.4. To balance these multiple objectives, prior work relied on heavily tuned reward functions (e.g., Peng et al., 2018). The humanoid has 56 degrees of freedom, and the observation is 1021-dimensional, consisting of proprioceptive observations as well as six steps of motion capture reference frames. In total, we use about 40 minutes of locomotion mocap data, making this an extremely challenging domain.

**Sawyer Peg-in-Hole:** We train a Rethink Robotics Sawyer robot arm to insert a cylindrical peg into a hole, while minimizing wrist forces. The task reward is shaped toward positioning the peg directly above the hole and increases for insertion, and the penalty is the $\ell_1$-norm of the Cartesian forces measured by the wrist force-torque sensor. The latter implicitly penalizes contacts and impacts, as well as excessive directional change (due to the gripper's inertia inducing forces when accelerating). We impose a force threshold to protect the hardware—if this threshold is exceeded, the episode is terminated.
The action space is the end effector's Cartesian velocity, and the observation is 102-dimensional, consisting of Cartesian position, joint position and velocity, wrist force-torque, and joint action, for three timesteps.

# 6.1. Evaluation Metric

We run MO-(V-)MPO and scalarized (V-)MPO with a wide range of constraint settings $\epsilon_{k}$ and scalarization weights $w_{k}$, respectively, corresponding to a wide range of possible desired preferences. (The exact settings are provided in Appendix A.) For tasks with two objectives, we plot the Pareto front found by each approach. We also compute the hypervolume of each found Pareto front; this metric is commonly used for evaluating MORL algorithms (Vamplew et al., 2011). Given a set of policies $\Pi$ and a reference policy $r$ that is dominated by all policies in this set, this metric is the hypervolume of the space of all policies that dominate $r$ and are dominated by at least one policy in $\Pi$. We use DEAP (Fortin et al., 2012) to compute hypervolumes.

Figure 6. Pareto fronts found by MO-MPO and scalarized MPO for humanoid run (top row) and Shadow Hand tasks. Each dot represents a single trained policy. Corresponding hypervolumes are in Table 1. Task reward is discounted for touch and turn, with a discount factor of 0.99. For orient, the $x$ and $y$ axes are the total reward for the lift and orientation objectives, respectively.

# 6.2. Results: Humanoid and Shadow Hand

In the run, touch, and turn tasks, the two objectives are competing—a very high preference for minimizing action norm or pain, as opposed to obtaining task reward, will result in a policy that always chooses zero-valued actions. Across these three tasks, the Pareto front found by MO-MPO is superior to the one found by scalarized MPO, with respect to the hypervolume metric (Table 1). In particular, MO-MPO finds more policies that perform well with respect to both objectives, i.e., that lie in the upper right portion of the Pareto front.
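For two-objective tasks like these, the Pareto-front and hypervolume computations can be sketched in a few lines. This is a minimal stand-alone version assuming both objectives are maximized; the paper uses DEAP's hypervolume routine, and the function names here are our own:

```python
def pareto_front(points):
    """Return the nondominated points (both objectives maximized)."""
    return [p for p in points
            if not any(all(qi >= pi for qi, pi in zip(q, p)) and q != p
                       for q in points)]

def hypervolume_2d(front, ref):
    """Area dominated by `front` and dominating the reference point `ref`.

    `front` must be nondominated and `ref` dominated by every point in it.
    Sweep the points in decreasing order of the first objective, adding
    one rectangle per point.
    """
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front, reverse=True):
        hv += (x - ref[0]) * (y - prev_y)
        prev_y = y
    return hv

# Each tuple is one trained policy's (objective-1 return, objective-2 return).
policies = [(3.0, 1.0), (1.0, 3.0), (2.0, 2.0), (1.0, 1.0)]
front = pareto_front(policies)               # drops the dominated (1.0, 1.0)
hv = hypervolume_2d(front, ref=(0.0, 0.0))   # -> 6.0
```

A larger hypervolume indicates a front that pushes further out in both objectives, which is how the fronts in Table 1 are compared.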
MO-MPO also speeds up learning on run and touch (Fig. 6). Qualitatively, MO-MPO trains run policies that look more natural and "human-like"; videos are at http://sites.google.com/view/mo-mpo. When we scale the action norm penalty by $10\times$ for run, scalarized MPO policies no longer achieve high task reward, whereas MO-MPO policies do (Fig. 6, top right). This supports the claim that MO-MPO's encoding of preferences is indeed scale-invariant. When the objectives are aligned and have similarly scaled rewards, as in the orient task, MO-MPO and scalarized MPO perform similarly, as expected.

**Ablation.** We also ran vanilla MPO on humanoid run, with the same range of weight settings as for scalarized MPO, to investigate how useful Q-decomposition is. In vanilla MPO, we train a single critic on the scalarized reward function,
Table 1. Hypervolumes of the Pareto fronts found by each approach (larger is better).

| Task | scalarized MPO | MO-MPO |
| --- | --- | --- |
| Humanoid run, mid-training | $1.1 \times 10^6$ | $3.3 \times 10^6$ |
| Humanoid run | $6.4 \times 10^6$ | $7.1 \times 10^6$ |
| Humanoid run, normal scale | $5.0 \times 10^6$ | $5.9 \times 10^6$ |
| Humanoid run, $10\times$ penalty | $2.6 \times 10^7$ | $6.9 \times 10^7$ |
| Shadow touch, mid-training | $14.3$ | $15.6$ |
| Shadow touch | $16.2$ | $16.4$ |
| Shadow turn | $14.4$ | $15.4$ |
| Shadow orient | $2.8 \times 10^4$ | $2.8 \times 10^4$ |
| Humanoid Mocap | $3.86 \times 10^{-6}$ | $3.41 \times 10^{-6}$ |
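The scale-invariance behind the $10\times$-penalty comparison can be illustrated with a toy version of the per-objective update described in the introduction: for each objective $k$, an improved action distribution $q_k(a) \propto \pi(a) \exp(Q_k(a)/\eta_k)$ is computed, with the temperature $\eta_k$ chosen so that $\mathrm{KL}(q_k \,\|\, \pi) = \epsilon_k$; rescaling $Q_k$ simply rescales $\eta_k$ and leaves $q_k$ unchanged. The discrete bandit below is our own minimal sketch (MO-MPO itself operates on parametric policies and learned critics, and solves for $\eta_k$ via a dual function rather than bisection):

```python
import numpy as np

def kl_constrained_improvement(pi, q_values, eps, iters=200):
    """Return q(a) proportional to pi(a) * exp(Q(a) / eta), with eta found
    by bisection so that KL(q || pi) is approximately eps."""
    def improved(eta):
        logits = np.log(pi) + q_values / eta
        logits -= logits.max()               # numerical stability
        q = np.exp(logits)
        return q / q.sum()

    def kl(q):
        return float(np.sum(q * (np.log(np.maximum(q, 1e-300)) - np.log(pi))))

    lo, hi = 1e-6, 1e6                       # KL shrinks as eta grows
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if kl(improved(mid)) > eps:
            lo = mid                         # constraint violated: soften
        else:
            hi = mid                         # constraint slack: sharpen
    return improved(hi)

pi = np.ones(3) / 3                          # current (uniform) policy
q_task = np.array([1.0, 0.0, 0.0])           # one objective's action values
q_k = kl_constrained_improvement(pi, q_task, eps=0.1)
q_k_scaled = kl_constrained_improvement(pi, 10.0 * q_task, eps=0.1)
# Rescaling the objective's values leaves the improved distribution unchanged.
assert np.allclose(q_k, q_k_scaled, atol=1e-6)
```

A fixed-weight scalarization, by contrast, changes its optimum when one objective's values are rescaled. In the full algorithm, a parametric policy is then fit by supervised learning to the combination of the per-objective distributions $q_k$; in this discrete sketch that step would reduce to averaging them.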