Title: A Model-based Approach to Achieve both Robustness and Sample Efficiency via Double Dropout Planning
URL Source: https://arxiv.org/html/2108.01295
Markdown Content:
License: arXiv.org perpetual non-exclusive license. arXiv:2108.01295v2 [cs.LG]

MBDP: A Model-based Approach to Achieve both Robustness and Sample Efficiency via Double Dropout Planning

Wanpeng Zhang (Tsinghua University), Xi Xiao (Tsinghua University), Yao Yao (Tsinghua University), Mingzhe Chen (Tsinghua University), Dijun Luo (Tencent AI Lab)

Abstract
Model-based reinforcement learning is a widely accepted approach to reducing the excessive sample demands of reinforcement learning. However, the predictions of the dynamics models are often not accurate enough, and the resulting bias may incur catastrophic decisions due to insufficient robustness. It is therefore highly desirable to improve the robustness of model-based RL algorithms while maintaining high sampling efficiency. In this paper, we propose Model-Based Double-dropout Planning (MBDP) to balance robustness and efficiency. MBDP consists of two kinds of dropout mechanisms: the rollout-dropout aims to improve robustness at a small cost in sample efficiency, while the model-dropout is designed to compensate for the lost efficiency at a slight expense of robustness. By combining them in a complementary way, MBDP provides a flexible control mechanism that meets different demands of robustness and efficiency by tuning two corresponding dropout ratios. The effectiveness of MBDP is demonstrated both theoretically and experimentally.
1 Introduction
Reinforcement learning (RL) algorithms are commonly divided into two categories: model-free RL and model-based RL. Model-free RL methods learn a policy directly from samples collected in the real environment, while model-based RL approaches build approximate predictive models of the environment to assist in the optimization of the policy [1, 2]. In recent years, RL has achieved remarkable results in a wide range of areas, including continuous control [3, 4, 5], and has surpassed human performance on Go and games [6, 7]. However, most of these results are achieved by model-free RL algorithms, which rely on a large number of environmental samples for training, limiting their application scenarios when deployed in practice. In contrast, model-based RL methods have shown promising potential to cope with the lack of samples by using predictive models for simulation and planning [8, 9]. To reduce sample complexity, PILCO [10] learns a probabilistic model through Gaussian process regression, which models prediction uncertainty to boost the agent's performance in complex environments. Building on PILCO, the DeepPILCO algorithm [11] enables the modeling of more complex environments by introducing the Bayesian Neural Network (BNN), a universal function approximator with high capacity. To further enhance the interpretability of the predictive models and improve the robustness of the learned policies [12, 13], ensemble-based methods [14, 15] train an ensemble of models to comprehensively capture the uncertainty in the environment and have been empirically shown to obtain significant improvements in sample efficiency [5, 12, 16].
Despite their high sample efficiency, model-based RL methods inherently suffer from inaccurate predictions, especially when faced with high-dimensional tasks and insufficient training samples [17, 18]. Model accuracy can greatly affect policy quality, and policies learned in inaccurate models tend to suffer significant performance degradation due to cumulative model error [19, 20]. Therefore, how to eliminate the effects caused by model bias has become a hot topic in model-based RL. Another important factor limiting the application of model-based algorithms is safety. In a general RL setup, the agent needs to collect observations to extrapolate the current state before making decisions, which poses a challenge to the robustness of the learned policy: the process of acquiring observations through sensors may introduce random noise, and the real environment is normally partially observable. Non-robust policies may generate disastrous decisions when faced with a noisy environment, and this safety issue is more prominent in model-based RL because the error in inferring the current state from observations may be further amplified by model bias during simulation and planning with the predictive models. Drawing on research in robust control [21], a branch of control theory, robust RL methods have attracted increasing attention for improving the agent's capability against perturbed states and model bias. The main objective of robust RL is to optimize the agent's performance in worst-case scenarios and to improve the generalization of learned policies to noisy environments [22]. Existing robust RL methods can be roughly classified into two types: one is based on adversarial ideas, such as RARL [23] and NR-MDP [24], which obtain robust policies by proposing corresponding minimax objective functions; the other group of approaches [25] introduces conditional value at risk (CVaR) objectives to ensure the robustness of the learned policies.
However, the increased robustness of these methods can lead to a substantial loss of sample efficiency due to the pessimistic manner of data use. It is therefore nontrivial to enhance the robustness of a policy while avoiding sample inefficiency.
In this paper, we propose the Model-Based Reinforcement Learning with Double Dropout Planning (MBDP) algorithm for the purpose of learning policies that strike a balance between robustness and sample efficiency. Inspired by CVaR, we design the rollout-dropout mechanism to enhance robustness by optimizing policies with low-reward samples. On the other hand, in order to maintain high sample efficiency and reduce the impact of model bias, we learn an ensemble of models to compensate for the inaccuracy of any single model. Furthermore, when generating imaginary samples to assist in the optimization of policies, we design the model-dropout mechanism to avoid the perturbation of inaccurate models by only using models with small errors. To meet different demands of robustness and sample efficiency, flexible control can be realized via the two dropout mechanisms. We demonstrate the effectiveness of MBDP both theoretically and empirically.
2 Notations and Preliminaries

2.1 Reinforcement Learning
We consider a Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, where $\mathcal{S} \subseteq \mathbb{R}^{d_s}$ is the state space, $\mathcal{A} \subseteq \mathbb{R}^{d_a}$ is the action space, $r(s,a): \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is the reward function, $\gamma \in [0,1]$ is the discount factor, and $\mathcal{P}(s' \mid s, a): \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0,1]$ is the conditional probability distribution of the next state given the current state $s$ and action $a$. The form $s' = \mathcal{P}(s,a): \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ denotes the state transition function when the environment is deterministic. Let $\eta^{\pi,\mathcal{P}}(s)$ denote the expected return, i.e., the expected sum of discounted rewards starting from initial state $s$, following policy $\pi(a \mid s)$ and state transition function $\mathcal{P}(s,a)$:

$$\eta^{\pi,\mathcal{P}}(s) = \mathbb{E}_{\{a_0, a_1, \dots\} \sim \pi, \mathcal{P}}\left[\sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\Big|\, s_0 = s\right] \tag{2.1}$$

For simplicity of notation, let $\eta^{\pi,\mathcal{P}}$ denote the expected return over random initial states:

$$\eta^{\pi,\mathcal{P}} = \mathbb{E}_{s_0 \in \mathcal{S}}\left[\eta^{\pi,\mathcal{P}}(s_0)\right] \tag{2.2}$$

The goal of reinforcement learning is to maximize the expected return by finding the optimal policy, i.e., $\pi^* = \arg\max_{\pi} \eta^{\pi,\mathcal{P}}$.
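To make the return definition concrete, the discounted sum in Eq. (2.1) can be estimated by Monte Carlo from a sampled trajectory. The sketch below uses a made-up reward list and discount value, purely for illustration:

```python
# Monte Carlo view of Eq. (2.1): the return of a single sampled trajectory
# is the discounted sum of its rewards. The reward list and gamma below
# are illustrative values, not taken from the paper.
def discounted_return(rewards, gamma):
    return sum(gamma ** t * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

Averaging this quantity over many trajectories sampled from $\pi$ and $\mathcal{P}$ estimates $\eta^{\pi,\mathcal{P}}$.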
2.2 Model-based Methods

In model-based reinforcement learning, an approximate transition model $\mathcal{M}(s,a)$ is learned by interacting with the environment; the policy $\pi(a \mid s)$ is then optimized with samples from the environment and data generated by the model. We use the parametric notation $\mathcal{M}_\phi$, $\phi \in \Phi$ to specifically denote a model trained by a neural network, where $\Phi$ is the parameter space of models.

More specifically, to improve the ability of models to represent complex environments, we learn multiple models and form an ensemble, i.e., $\mathcal{M} = \{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \dots\}$. To generate a prediction from the model ensemble, we select a model $\mathcal{M}_{\phi_i}$ from $\mathcal{M}$ uniformly at random and perform a model rollout using the selected model at each time step, i.e., $s_{t+1} \sim \mathcal{M}_{\phi_t}(s_t, a_t)$. Then we fill these rollout samples $x = (s_{t+1}, s_t, a_t)$ into a batch. Finally, we perform policy optimization on these generated samples.
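The uniform-ensemble rollout just described can be sketched in a few lines; the one-dimensional toy dynamics, the constant policy, and all names below are our own illustration, not the paper's implementation:

```python
import random

# Uniform-ensemble rollout sketch: each "model" maps (s, a) -> s'. The toy
# one-dimensional dynamics below stand in for learned neural models.
ensemble = [lambda s, a, k=k: s + a + 0.01 * k for k in range(4)]

def rollout(policy, s0, horizon, rng):
    """At every step, pick a model uniformly at random, step it, and
    collect samples x = (s_next, s, a) into a batch."""
    batch, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        model = rng.choice(ensemble)  # uniform model selection
        s_next = model(s, a)
        batch.append((s_next, s, a))
        s = s_next
    return batch

batch = rollout(policy=lambda s: 1.0, s0=0.0, horizon=5, rng=random.Random(0))
print(len(batch))  # 5 transitions of the form (s_next, s, a)
```

In practice each ensemble member would be a trained network and the policy a learned stochastic policy; the batch then feeds policy optimization as described above.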
2.3 Conditional Value-at-Risk

Let $Z$ denote a random variable with a cumulative distribution function (CDF) $F(z) = \Pr(Z < z)$. Given a confidence level $\alpha \in (0,1)$, the Value-at-Risk of $Z$ (at confidence level $\alpha$) is denoted $\mathrm{VaR}_\alpha(Z)$ and given by

$$\mathrm{VaR}_\alpha(Z) = F^{-1}(\alpha) \triangleq \inf\{z : F(z) \geq \alpha\} \tag{2.3}$$

The Conditional Value-at-Risk of $Z$ (at confidence level $\alpha$) is denoted by $\mathrm{CVaR}_\alpha(Z)$ and defined as the expected value of $Z$, conditioned on the $\alpha$-portion of the tail distribution:

$$\mathrm{CVaR}_\alpha(Z) \triangleq \mathbb{E}\left[Z \mid Z \geq \mathrm{VaR}_\alpha(Z)\right] \tag{2.4}$$
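These tail statistics are easy to estimate from samples. Below is a minimal pure-Python sketch; the function name and the discrete quantile convention (smallest sample whose empirical CDF reaches $\alpha$) are our own choices:

```python
# Empirical VaR / CVaR at confidence level alpha, following the conventions
# of Eqs. (2.3)-(2.4): VaR is the smallest sample whose empirical CDF
# reaches alpha, and CVaR averages the tail at or above it. The index
# convention below is one of several reasonable discrete choices.
def var_cvar(samples, alpha):
    zs = sorted(samples)
    n = len(zs)
    idx = min(int(alpha * n), n - 1)  # smallest z with F(z) >= alpha
    var = zs[idx]
    tail = [z for z in zs if z >= var]
    return var, sum(tail) / len(tail)

print(var_cvar(list(range(1, 11)), alpha=0.8))  # (9, 9.5)
```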
3 MBDP Framework

Figure 1: Overview of the MBDP algorithm. When interacting with the environment, we collect samples into the environment replay buffer $\mathcal{D}_{\text{env}}$, which is used for training the simulator model of the environment. We then implement the model-dropout procedure and perform rollouts on the model ensemble. The sampled data from the model ensemble is filled into a temporary batch, from which we obtain a dropout buffer $\mathcal{D}_{\text{model}}$ by implementing the rollout-dropout procedure. Finally, we use samples from $\mathcal{D}_{\text{model}}$ to optimize the policy $\pi(a \mid s)$.
In this section, we introduce how MBDP leverages Double Dropout Planning to find the balance between efficiency and robustness. The basic procedure of MBDP is to 1) sample data from the environment; 2) train an ensemble of models from the sampled data; 3) calculate model bias over observed environment samples, and choose a subset of the model ensemble based on the calculated bias; 4) collect rollout trajectories from the model ensemble, and make gradient updates based on subsets of the sampled data. An overview of the algorithm architecture is shown in Figure 1, and the overall pseudo-code is given in Algorithm 1.

We will also theoretically analyze the robustness and performance of MBDP under dropout planning. For simplicity of theoretical analysis, we only consider deterministic environments and models in this section; the experimental part does not require this assumption. Detailed proofs can be found in the appendix provided in the supplementary materials.
Algorithm 1: Model-Based Reinforcement Learning with Double Dropout Planning (MBDP)

    Initialize hyperparameters, policy π_θ, environment replay buffer D_env, model replay buffer D_model
    for N_epoch iterations do
        Take an action in the environment using policy π_θ; add samples to D_env
        for N_train iterations do
            Train probabilistic model M on D_env
            Build a model subset M^β = {M_{φ_1}, …, M_{φ_{K_{1−β}}}} according to bias(φ_i)
            for t = 1, 2, …, N do
                Select a model M_{φ_t} from M^β randomly
                Perform rollouts on model M_{φ_t} with policy π_θ and get samples x = (s_{t+1}, s_t, a_t)
                Fill these samples into temporary batch B^{π,M^β}
            end for
            Calculate R_{1−α}(B^{π,M^β} | s): the (1−α) percentile of batch B^{π,M^β} grouped by state s, for all s ∈ S
            for x ∈ B^{π,M^β} do
                if r(x) ≤ R_{1−α}(B^{π,M^β} | s_t) then fill x into D_model
            end for
        end for
        Optimize π_θ on D_model: θ ← θ − λ ∇_θ J_θ(D_model)
    end for

3.1 Rollout Dropout in MBDP
Optimizing the expected return in the usual model-based way allows us to learn a policy that performs best in expectation over the training model ensemble. However, the best expected performance does not mean that the resulting policies perform well at all times. This instability typically leads to risky decisions when facing poorly-informed states at deployment.
Inspired by previous works [14, 25, 26] which optimize conditional value at risk (CVaR) to explicitly seek a robust policy, we add a dropout mechanism to the rollout procedure. Recall the model-based methods in Section 2.2: to generate a prediction from the model ensemble, we select a model $\mathcal{M}_{\phi_i}$ from $\mathcal{M}$ uniformly at random, and perform a model rollout using the selected model at each time step, i.e., $s_{t+1} \sim \mathcal{M}_{\phi_t}(s_t, a_t)$. Then we fill these rollout samples $x = (s_{t+1}, s_t, a_t)$ into a batch and retain a $(1-\alpha)$-percentile subset with more pessimistic rewards. We use $\mathcal{B}_\alpha^{\pi,\mathcal{M}}$ to denote the $(1-\alpha)$-percentile rollout batch:

$$\mathcal{B}_\alpha^{\pi,\mathcal{M}} = \left\{x \,\middle|\, x \in \mathcal{B}^{\pi,\mathcal{M}},\ r(x \mid s) \leq R_{1-\alpha}(\mathcal{B}^{\pi,\mathcal{M}} \mid s),\ \forall s \in \mathcal{S}\right\} \tag{3.1}$$

where $\mathcal{B}^{\pi,\mathcal{M}} = \{x \mid x = (s_{t+1}, s_t, a_t) \sim \pi, \mathcal{M}\}$ and $R_{1-\alpha}(\mathcal{B}^{\pi,\mathcal{M}} \mid s)$ is the $(1-\alpha)$ percentile of reward values conditioned on state $s \in \mathcal{S}$ in batch $\mathcal{B}^{\pi,\mathcal{M}}$. The expected return of dropout batch rollouts is denoted by $\eta_\alpha^{\pi,\mathcal{M}}$:

$$\eta_\alpha^{\pi,\mathcal{M}} = \mathbb{E}_{\{s_0, a_0, \dots\} \sim \mathcal{B}_\alpha^{\pi,\mathcal{M}}}\left[\sum_t \gamma^t r(s_t, a_t)\right] \tag{3.2}$$
Rollout-dropout improves robustness at a small cost in sample efficiency; we analyze how it brings improvements to robustness in Section 3.3.
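The per-state percentile filter of Eq. (3.1) can be sketched as follows; the `(state, reward)` sample format, the grouping key, and the discrete percentile convention are illustrative assumptions of ours:

```python
from collections import defaultdict

# Per-state percentile filter of Eq. (3.1): within each state group, keep
# only the samples whose reward lies at or below the (1 - alpha) percentile
# of that group -- the pessimistic tail used for policy optimization.
# `samples` is a list of illustrative (state_key, reward) pairs.
def rollout_dropout(samples, alpha):
    rewards_by_state = defaultdict(list)
    for s, r in samples:
        rewards_by_state[s].append(r)
    # discrete (1 - alpha) percentile cutoff per state
    cutoff = {}
    for s, rs in rewards_by_state.items():
        rs = sorted(rs)
        cutoff[s] = rs[max(int((1 - alpha) * len(rs)) - 1, 0)]
    return [(s, r) for s, r in samples if r <= cutoff[s]]

batch = [("s0", r) for r in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]
kept = rollout_dropout(batch, alpha=0.2)
print(len(kept))  # 8: the two highest-reward samples are dropped
```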
3.2 Model Dropout in MBDP

Rollout-dropout can improve robustness, but dropping a certain number of samples clearly affects the algorithm's sample efficiency. Model-based data generation can compensate for this loss. However, since model bias affects the performance of the algorithm, we also need to consider how to mitigate it. Previous works use an ensemble of bootstrapped probabilistic transition models, as in the PETS method [12], to properly incorporate two kinds of uncertainty (aleatoric and epistemic) into the transition model.
In order to mitigate the impact of discrepancies and flexibly control the accuracy of the model ensemble, we design a model-dropout mechanism. More specifically, we first learn an ensemble of transition models $\{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \dots\}$; each member of the ensemble is a probabilistic neural network whose outputs $\mu_{\phi_i}, \sigma_{\phi_i}$ parametrize a Gaussian distribution: $s' = \mathcal{M}_{\phi_i}(s,a) \sim \mathcal{N}(\mu_{\phi_i}(s,a), \sigma_{\phi_i}(s,a))$. While training models on samples from the environment, we calculate the bias averaged over observed state-action pairs $(S, A)$ for each model:

$$\mathrm{bias}(\phi_i) = \mathbb{E}_{S,A \sim \pi,\mathcal{P}}\left\|\mathcal{M}_{\phi_i}(S,A) - \mathcal{P}(S,A)\right\| \tag{3.3}$$

which formulates the distance between next states in model $\mathcal{M}_{\phi_i}$ and in the environment $\mathcal{P}$, where $\|\cdot\|$ is a distance function on state space $\mathcal{S}$.

Then we sort the models in ascending order by the calculated bias and retain a dropout subset with smaller model bias: $\mathcal{M}^\beta = \{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \dots, \mathcal{M}_{\phi_{K_{1-\beta}}}\}$, i.e., $\mathcal{M}^\beta = \{\mathcal{M}_\phi \mid \phi \in \Phi_\beta\}$, where $\Phi_\beta = \{\phi_i \mid \mathrm{bias}(\phi_i) \leq \mathrm{bias}(\phi_{K_{1-\beta}}),\ \phi_i \in \Phi\}$ and $K_{1-\beta}$ is the largest integer in the ascending index $\{1, 2, \dots, K_{1-\beta}\}$ after we drop the $\beta$-percentile subset with large bias.
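The model-dropout step can be sketched as follows; the bias values are toy stand-ins for Eq. (3.3), and the function name and keep-at-least-one convention are our own:

```python
# Drop the beta fraction of ensemble members with the largest estimated
# bias (Eq. 3.3) and keep the rest. Each entry pairs a model id with its
# average one-step prediction error; the numbers are illustrative.
def model_dropout(biases, beta):
    """biases: dict model_id -> bias(phi_i); returns ids of kept models."""
    ranked = sorted(biases, key=biases.get)          # ascending bias
    n_keep = max(1, len(ranked) - int(beta * len(ranked)))
    return ranked[:n_keep]

biases = {"m1": 0.05, "m2": 0.30, "m3": 0.02, "m4": 0.11, "m5": 0.50}
print(model_dropout(biases, beta=0.4))  # ['m3', 'm1', 'm4']
```

Rollouts are then performed only on the surviving subset, so inaccurate models no longer perturb the generated data.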
3.3 Theoretical Analysis of MBDP
We now give theoretical guarantees for the robustness and sample efficiency of the MBDP algorithm. All the proofs of this section are detailed in Appendix A.
3.3.1 Guarantee of Robustness

We define robustness as the expected performance in a perturbed environment. Consider a perturbed transition matrix $\hat{\mathcal{P}} = \mathcal{P}_t \circ \delta_t$, where $\delta_t \in \mathbb{R}^{\mathcal{S} \times \mathcal{A} \times \mathcal{S}}$ is a multiplicative probability perturbation and $\circ$ is the Hadamard product. Recalling the definition of $\mathrm{CVaR}(\cdot)$ in equation (2.4), we propose the following theorem to provide a guarantee of robustness for the MBDP algorithm.

Theorem 3.1. It holds that

$$\eta_\alpha^{\pi,\mathcal{M}} = -\mathrm{CVaR}_\alpha\left(-\eta^{\pi,\mathcal{M}}\right) = \sup_{\Delta_\alpha} \mathbb{E}_{\hat{\mathcal{P}}}\left[\eta^{\pi,\mathcal{M}}\right] \tag{3.4}$$

given the constraint set of perturbations

$$\Delta_\alpha \triangleq \left\{\delta_T \,\middle|\, \prod_{t=1}^{T} \delta_t(s_t \mid s_{t-1}, a_{t-1}) \leq \frac{1}{\alpha},\ \forall s_t \in \mathcal{S},\ a_t \in \mathcal{A},\ \alpha \in (0,1)\right\} \tag{3.5}$$

Since $\sup_{\Delta_\alpha} \mathbb{E}_{\hat{\mathcal{P}}}[\eta^{\pi,\mathcal{M}}]$ means optimizing the expected performance in a perturbed environment, which is exactly our definition of robustness, Theorem 3.1 can be interpreted as an equivalence between optimizing robustness and optimizing the expected return under rollout-dropout, i.e., $\eta_\alpha^{\pi,\mathcal{M}}$.
3.3.2 Guarantee of Efficiency

We first propose Lemma 3.2 to show that the expected return with only the rollout-dropout mechanism has a bounded discrepancy from the expected return of the same model without dropout.

Lemma 3.2. Suppose $R_m$ is the supremum of the reward function $r(s,a)$, i.e., $R_m = \sup_{s \in \mathcal{S}, a \in \mathcal{A}} r(s,a)$. The expected return of dropout batch rollouts with an individual model $\mathcal{M}_\phi$ has a discrepancy bound:

$$\left|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\right| \leq \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)}\,R_m \triangleq \epsilon_\alpha \tag{3.6}$$
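To build intuition for the size of this bound, the right-hand side of Eq. (3.6) can be evaluated numerically; the values of $\alpha$, $\gamma$, and $R_m$ below are illustrative, not taken from the paper's experiments:

```python
# Numeric illustration of the rollout-dropout discrepancy bound, Eq. (3.6):
#   |eta_alpha - eta| <= alpha * (1 + alpha) / ((1 - alpha) * (1 - gamma)) * R_m
# All inputs here are made-up illustrative values.
def dropout_bound(alpha, gamma, r_max):
    return alpha * (1 + alpha) / ((1 - alpha) * (1 - gamma)) * r_max

# With gamma = 0.99 the 1/(1 - gamma) factor dominates, so even a small
# dropout ratio alpha yields a loose (but finite) worst-case gap.
print(dropout_bound(0.2, 0.99, 1.0))  # 0.2 * 1.2 / (0.8 * 0.01) = 30.0
```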
While Lemma 3.2 only provides a guarantee for the performance of the rollout-dropout mechanism, we now propose Theorem 3.3 to prove that the expected return of the policy derived by model-dropout together with rollout-dropout, i.e., our MBDP algorithm, has a bounded discrepancy from the expected return when it is deployed in the environment $\mathcal{P}$.
Theorem 3.3. Suppose $K \geq 0$ is a constant. The expected return of the MBDP algorithm, i.e., $\eta_\alpha^{\pi,\mathcal{M}^\beta}$, compared to the expected return when it is deployed in the environment $\mathcal{P}$, i.e., $\eta^{\pi,\mathcal{P}}$, has a discrepancy bound:

$$\left|\eta_\alpha^{\pi,\mathcal{M}^\beta} - \eta^{\pi,\mathcal{P}}\right| \leq D_{\alpha,\beta}(\mathcal{M}) \tag{3.7}$$

where

$$D_{\alpha,\beta}(\mathcal{M}) \triangleq \frac{(1-\beta)K\gamma}{1-\gamma}\,\epsilon_{\mathcal{M}} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m \tag{3.8}$$

and

$$\epsilon_{\mathcal{M}} \triangleq \mathbb{E}_{\phi \in \Phi}\left[\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\left[\left\|\mathcal{M}_\phi(s,a) - \mathcal{P}(s,a)\right\|\right]\right] \tag{3.9}$$
Since the MBDP algorithm is an extension of Dyna-style algorithms [27], a family of model-based reinforcement learning methods which jointly optimize the policy and the transition model, it can be written in the general pattern below:

$$\pi_{k+1}, \mathcal{M}_{k+1}^\beta = \arg\max_{\pi_k, \mathcal{M}_k^\beta}\left[\eta_\alpha^{\pi_k, \mathcal{M}_k^\beta} - D_{\alpha,\beta}(\mathcal{M}_k^\beta)\right] \tag{3.10}$$

where $\pi_k$ denotes the updated policy in the $k$-th iteration and $\mathcal{M}_k^\beta$ denotes the updated dropout model ensemble in the $k$-th iteration. In this setting, we can show that the performance of the policy derived by our MBDP algorithm is approximately monotonically increasing when deployed in the real environment $\mathcal{P}$, with the ability to robustly escape local optima.
Proposition 3.4. The expected return of the policy derived by the general algorithm pattern (3.10) is approximately monotonically increasing when deployed in the real environment $\mathcal{P}$, i.e.,

$$\eta^{\pi_{k+1},\mathcal{P}} \geq \eta^{\pi_k,\mathcal{P}} + (\xi_{k+1} - \epsilon_\alpha) \triangleq \eta^{\pi_k,\mathcal{P}} + \epsilon \tag{3.11}$$

where $\epsilon_\alpha$ is defined in (3.6) and $\xi_{k+1}$ is the update residual:

$$\xi_{k+1} \triangleq \eta^{\pi_{k+1},\mathcal{P}} - \left[\eta_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}(\mathcal{M}_{k+1}^\beta)\right] \tag{3.12}$$
Intuitively, Proposition 3.4 shows that under the control of reasonable parameters $\alpha$ and $\beta$, $\xi_{k+1}$ is often a large update value in the early learning stage, while $\epsilon_\alpha$, an error bound, is a fixed small value. Thus $\epsilon = \xi_{k+1} - \epsilon_\alpha$ is greater than zero most of the time in the early learning stage, which guarantees $\eta^{\pi_{k+1},\mathcal{P}} \geq \eta^{\pi_k,\mathcal{P}}$. In the late stage near convergence, the update becomes slow and $\xi_{k+1}$ may be smaller than $\epsilon_\alpha$, which means $\eta^{\pi_{k+1},\mathcal{P}}$ may be smaller than $\eta^{\pi_k,\mathcal{P}}$. This makes the update process try other convergence directions, providing an opportunity to jump out of local optima. We empirically verify this claim in Appendix C.
3.3.3 Flexible Control of Robustness and Efficiency

According to Theorem 3.1, rollout-dropout improves robustness: the larger $\alpha$ is, the more robustness is improved; conversely, the smaller $\alpha$ is, the weaker the robustness. For model-dropout, when $\beta$ is larger, more models are dropped and the remaining ensemble is more likely to overfit the environment, so robustness decreases. Conversely, when $\beta$ is smaller, the model ensemble is better at simulating complex environments, and robustness improves.

Turning to efficiency: note that the bound $D_{\alpha,\beta}(\mathcal{M})$ in equation (3.8) increases with $\alpha$ and decreases with $\beta$. This means that as $\alpha$ increases or $\beta$ decreases, the bound expands, the accuracy of the algorithm decreases, and the algorithm takes longer to converge, making it less efficient. Conversely, when $\alpha$ decreases or $\beta$ increases, efficiency increases.

The analysis above suggests that MBDP provides a flexible control mechanism that meets different demands of robustness and efficiency by tuning the two corresponding dropout ratios. This conclusion can be summarized as follows, and we also empirically verify it in Section 4.
- To get balanced efficiency and robustness: set $\alpha$ and $\beta$ both to moderate values.
- To get better robustness: set $\alpha$ to a larger value and $\beta$ to a smaller value.
- To get better efficiency: set $\alpha$ to a smaller value and $\beta$ to a larger value.
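The monotonic trends behind these rules can be read off the bound $D_{\alpha,\beta}(\mathcal{M})$ in Eq. (3.8); the sketch below evaluates it on a small grid, using made-up constants for $K$, $\gamma$, $\epsilon_{\mathcal{M}}$, and $R_m$ rather than measured values:

```python
# Evaluate the discrepancy bound D_{alpha,beta}(M) of Eq. (3.8) on a grid.
# k, gamma, eps_model (epsilon_M) and r_max (R_m) are illustrative constants.
def discrepancy_bound(alpha, beta, k=1.0, gamma=0.9, eps_model=0.1, r_max=1.0):
    model_term = (1 - beta) * k * gamma / (1 - gamma) * eps_model
    rollout_term = alpha * (1 + alpha) * (1 - beta) / ((1 - alpha) * (1 - gamma)) * r_max
    return model_term + rollout_term

for alpha in (0.1, 0.2, 0.3):
    for beta in (0.1, 0.2, 0.3):
        print(f"alpha={alpha} beta={beta} D={discrepancy_bound(alpha, beta):.3f}")

# The bound grows with alpha (more rollouts dropped) and shrinks with beta
# (more inaccurate models dropped), matching the control rules above.
```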
4 Experiments

Our experiments aim to answer the following questions:

- How does MBDP perform on benchmark reinforcement learning tasks compared to state-of-the-art model-based and model-free RL methods?
- Can MBDP find a balance between robustness and efficiency?
- How do the robustness and efficiency of MBDP change when tuning the parameters $\alpha$ and $\beta$?

To answer the posed questions, we need to understand how well our method compares to state-of-the-art model-based and model-free methods and how our design choices affect performance. We evaluate our approach on four continuous control benchmark tasks in the MuJoCo simulator [28]: Hopper, Walker, HalfCheetah, and Ant. We also perform an ablation study by removing the dropout modules from our algorithm. Finally, we conduct a separate analysis of the hyperparameters ($\alpha$ and $\beta$). A depiction of the environments and a detailed description of the experimental setup can be found in Appendix B.
4.1 Comparison with State-of-the-Art Methods

In this subsection, we compare our MBDP algorithm with state-of-the-art model-free and model-based reinforcement learning algorithms in terms of sample complexity and performance. Specifically, we compare against SAC [29], which is the state-of-the-art model-free method and establishes a widely accepted baseline. For model-based methods, we compare against MBPO [16], which uses short-horizon model-based rollouts started from samples in the real environment; STEVE [30], which dynamically incorporates data from rollouts into value estimation rather than policy learning; and SLBO [31], a model-based algorithm with performance guarantees. For our MBDP algorithm, we choose $\alpha = 0.2$ and $\beta = 0.2$ as the hyperparameter setting.
Figure 2:Learning curves of our MBDP algorithm and four baselines on different continuous control environments. Solid curves indicate the mean of all trials with 5 different seeds. Shaded regions correspond to standard deviation among trials. Each trial is evaluated every 1000 steps. The dashed reference lines are the asymptotic performance of SAC algorithm. These results show that our MBDP method learns faster and has better asymptotic performance and sample efficiency than existing model-based algorithms.
Figure 2 shows the learning curves for all methods, along with the asymptotic performance of the model-free SAC algorithm, which does not converge in the region shown. The results highlight the strength of MBDP in terms of performance and sample complexity. In all MuJoCo environments, our MBDP method learns faster and achieves better efficiency than existing model-based algorithms, which empirically demonstrates the advantage of Double Dropout Planning.
4.2 Analysis of Robustness

Figure 3: The robustness performance is depicted as heat maps for various environment settings. Each heat map represents a set of experiments, and each square in the heat map represents the average return value in one experiment. The closer the color is to red (hotter), the higher the value and the better the algorithm is trained in that environment, and vice versa. The four different algorithms in the figure are no dropout ($\alpha = 0$, $\beta = 0$), rollout-dropout only ($\alpha$-dropout: $\alpha = 0.2$, $\beta = 0$), model-dropout only ($\beta$-dropout: $\alpha = 0$, $\beta = 0.2$), and both dropouts ($\alpha = 0.2$, $\beta = 0.2$). Each experiment in the Hopper environment stops after 300,000 steps, and each experiment in the HalfCheetah environment stops after 600,000 steps.
To evaluate the robustness of our MBDP algorithm, we test policies on different environment settings (i.e., different combinations of physical parameters) without any adaptation. We define ranges of mass and friction coefficients as $0.5 \leq C_{\text{mass}} \leq 1.5$ and $0.5 \leq C_{\text{friction}} \leq 1.5$, and modify the environments by scaling the torso mass with coefficient $C_{\text{mass}}$ and the friction of every geom with coefficient $C_{\text{friction}}$.
We compare the original MBDP algorithm with the $\alpha$-dropout variation ($\alpha = 0.2$, $\beta = 0$), which keeps only the rollout-dropout; the $\beta$-dropout variation ($\alpha = 0$, $\beta = 0.2$), which keeps only the model-dropout; and the no-dropout variation ($\alpha = 0$, $\beta = 0$), which removes both dropouts. This experiment is conducted in the modified environments described above. The results are presented in Figure 3 in the form of heat maps; each square of a heat map represents the average return value that the algorithm achieves after training in each modified environment. The closer the color is to red (hotter), the higher the value and the better the algorithm is trained in that environment, and vice versa. Clearly, if an algorithm achieves good training results only in the central region and inadequate results in regions far from the center, it is more sensitive to perturbations of the environment and thus less robust.
Based on the results, we can see that $\alpha$-dropout, using only the rollout-dropout, improves the robustness of the algorithm, while $\beta$-dropout, using only the model-dropout, slightly weakens it; the combination of both dropouts, i.e., the MBDP algorithm, achieves robustness close to that of $\alpha$-dropout.
4.3 Ablation Study

In this section, we investigate the sensitivity of the MBDP algorithm to the hyperparameters $\alpha$ and $\beta$. We conduct two sets of experiments in both the Hopper and HalfCheetah environments: (1) fix $\beta$ and vary $\alpha$ ($\alpha \in [0, 0.5]$, $\beta = 0.2$); (2) fix $\alpha$ and vary $\beta$ ($\beta \in [0, 0.5]$, $\alpha = 0.2$).
The experimental results are shown in Figure 4. The first row corresponds to experiments in the Hopper environment and the second row to the HalfCheetah environment. Columns 1 and 2 correspond to experiments conducted in perturbed MuJoCo environments with modified settings. We construct a total of $2 \times 2 = 4$ different perturbed environments ($C_{\text{mass}} = 0.8, 1.2$; $C_{\text{friction}} = 0.8, 1.2$) and calculate the average of the return values after training for a fixed number of steps (Hopper: 120k steps, HalfCheetah: 400k steps) in each of the four environments. A higher average value means the algorithm achieves better overall performance across multiple perturbed environments, implying better robustness; this metric is therefore used to evaluate the robustness of different $\alpha$, $\beta$. Columns 3 and 4 show the return values obtained after a fixed number of steps (Hopper: 120k steps, HalfCheetah: 400k steps) in the standard MuJoCo environment without any modification, which are used to evaluate the efficiency of the algorithm for different values of $\alpha$, $\beta$. Each box plot corresponds to 10 different random seeds.
Observing the experimental results, we find that robustness is positively related to $\alpha$ and inversely related to $\beta$, while efficiency is inversely related to $\alpha$ and positively related to $\beta$. This verifies our conclusion in Section 3.3.3. In addition, we use horizontal dashed lines in Figure 4 to indicate the baseline with rollout-dropout and model-dropout removed ($\alpha = \beta = 0$). When $\alpha \in [0.1, 0.2]$ and $\beta \in [0.1, 0.2]$, both the robustness and the efficiency of the algorithm exceed the baseline. Therefore, when $\alpha$ and $\beta$ are adjusted within a reasonable range, we can improve robustness and efficiency simultaneously.
Figure 4: The horizontal axis represents the different values of $\alpha$, $\beta$. The vertical axis is the metric for evaluating robustness or efficiency. The horizontal dashed line is the baseline case with both rollout-dropout and model-dropout removed ($\alpha = \beta = 0$). 120k steps are trained for each experiment in the Hopper environment, and 400k steps for each experiment in the HalfCheetah environment. Each box plot corresponds to 10 different random seeds.

5 Conclusions and Future Work
In this paper, we propose the MBDP algorithm to address the dilemma between robustness and sample efficiency. Specifically, MBDP drops some overvalued imaginary samples through the rollout-dropout mechanism to focus on the bad samples for the purpose of improving robustness, while the model-dropout mechanism is designed to enhance sample efficiency by only using accurate models. Both theoretical analysis and experimental results verify our claims: 1) the MBDP algorithm can provide policies with competitive robustness while achieving state-of-the-art performance; 2) there is a seesaw phenomenon between robustness and efficiency, that is, the growth of one causes a slight decline of the other; 3) we can obtain policies with different trade-offs between performance and robustness by tuning the hyperparameters $\alpha$ and $\beta$, ensuring that our algorithm is capable of performing well in a wide range of tasks.
Our future work will incorporate more domain knowledge from robust control to further enhance robustness. We also plan to turn the design of Double Dropout Planning into a more general module that can be easily embedded in other model-based RL algorithms, and to validate its effectiveness in real-world scenarios. In addition, related research in the fields of meta-learning and transfer learning may inspire us to further optimize the design and training procedure of the predictive models. Finally, we can use more powerful function approximators to model the environment.
References

[1] Chong Chen, Taiki Takahashi, Shin Nakagawa, Takeshi Inoue, and Ichiro Kusumi. Reinforcement learning in depression: a review of computational research. Neuroscience & Biobehavioral Reviews, 55:247–267, 2015.
[2] Athanasios S. Polydoros and Lazaros Nalpantidis. Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2):153–173, 2017.
[3] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
[4] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[5] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
[6] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[7] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[8] Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A Survey on Policy Search for Robotics. Now Publishers, 2013.
[9] Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. arXiv preprint arXiv:1705.08551, 2017.
[10] Marc Deisenroth and Carl E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465–472, 2011.
[11] Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving PILCO with Bayesian neural network dynamics models. In Data-Efficient Machine Learning Workshop, International Conference on Machine Learning, 2016.
[12] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.
[13] Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, and Stefano Ermon. Calibrated model-based deep reinforcement learning. In International Conference on Machine Learning, pages 4314–4323. PMLR, 2019.
[14] Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. arXiv preprint arXiv:1610.01283, 2016.
[15] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
[16] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12519–12530, 2019.
[17] Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 1–8, 2006.
[18] Thomas M. Moerland, Joost Broekens, and Catholijn M. Jonker. Model-based reinforcement learning: A survey. arXiv preprint arXiv:2006.16712, 2020.
[19] Leonid Kuvayev and Rich Sutton. Model-based reinforcement learning with an approximate, learned model. In Proceedings of the Ninth Yale Workshop on Adaptive and Learning Systems, pages 101–105, 1996.
[20] Kavosh Asadi, Dipendra Misra, Seungchan Kim, and Michael L. Littman. Combating the compounding-error problem with a multi-step model. arXiv preprint arXiv:1905.13320, 2019.
[21] Kemin Zhou and John Comstock Doyle. Essentials of Robust Control, volume 104. Prentice Hall, Upper Saddle River, NJ, 1998.
[22] Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2014.
[23] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
[24] Chen Tessler, Yonathan Efroni, and Shie Mannor. Action robust reinforcement learning and applications in continuous control. arXiv preprint arXiv:1901.09184, 2019.
[25] Aviv Tamar, Yonatan Glassner, and Shie Mannor. Optimizing the CVaR via sampling. In Proceedings of the National Conference on Artificial Intelligence, 4:2993–2999, 2015.
[26] Yinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision-making: a CVaR optimization approach. In Advances in Neural Information Processing Systems, pages 1522–1530, 2015.
[27] Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.
[28] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
[29] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
[30] Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pages 8224–8234, 2018.
[31] Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, and Tengyu Ma. Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees. In 7th International Conference on Learning Representations (ICLR), pages 1–27, 2019.

Appendix A Proofs
In Appendix A, we provide proofs for Theorem 3.1, Lemma 3.2, Theorem 3.3, and Proposition 3.12. Note that the numbering and citations in the appendices refer to the main manuscript.
A.1 Proof of Theorem 3.1

Proof.

Recalling the definition of CVaR (2.4) and of $\eta_\alpha^{\pi,\mathcal{M}}$ (3.2), we take the negative of the returns to represent the loss in the sense of CVaR. Then we have

$$\begin{aligned}
\mathrm{CVaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big)
&= \mathbb{E}\Big[{-R^{\pi,\mathcal{M}}} \,\Big|\, {-R^{\pi,\mathcal{M}}} \ge \mathrm{VaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big)\Big] \\
&= \mathbb{E}\Big[{-\textstyle\sum_{\mathcal{B}^{\pi,\mathcal{M}}}}\gamma^t r(s_t,a_t) \,\Big|\, {-\textstyle\sum_{\mathcal{B}^{\pi,\mathcal{M}}}}\gamma^t r(s_t,a_t) \ge \mathrm{VaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big)\Big] \\
&= -\mathbb{E}\Big[\textstyle\sum_{\mathcal{B}^{\pi,\mathcal{M}}}\gamma^t r(s_t,a_t) \,\Big|\, \textstyle\sum_{\mathcal{B}^{\pi,\mathcal{M}}}\gamma^t r(s_t,a_t) \le -\mathrm{VaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big)\Big]
\end{aligned}$$

The condition $\sum_{\mathcal{B}^{\pi,\mathcal{M}}}\gamma^t r(s_t,a_t) \le -\mathrm{VaR}_\alpha(-R^{\pi,\mathcal{M}})$ in the above equation exactly matches our definition of $\mathcal{B}_\alpha^{\pi,\mathcal{M}}$, that is, equation (3.1). This proves the first part of Theorem 3.1:

$$-\mathrm{CVaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big) = \mathbb{E}\Big[\textstyle\sum_{\mathcal{B}_\alpha^{\pi,\mathcal{M}}}\gamma^t r(s_t,a_t)\Big] = \eta_\alpha^{\pi,\mathcal{M}} \tag{A.1}$$

Considering $\mathbb{E}_{\hat{\mathcal{P}}}[R^{\pi,\mathcal{M}}]$ and recalling the definition of $\hat{\mathcal{P}}$, we have

$$\begin{aligned}
\mathbb{E}_{\hat{\mathcal{P}}}\big[R^{\pi,\mathcal{M}}\big]
&= -\mathbb{E}_{\hat{\mathcal{P}}}\big[{-R^{\pi,\mathcal{M}}}\big] \\
&= -\sum_{(s_0,\dots,s_T)\in\mathcal{S}^{T+1}} \mathcal{P}_0(s_0)\,\delta_0(s_0) \prod_{t=1}^{T} \mathcal{P}_t(s_t \mid s_{t-1})\,\delta_t(s_t \mid s_{t-1}) \cdot \big({-R^{\pi,\mathcal{M}}}\big) \\
&= \sum_{(s_0,\dots,s_T)\in\mathcal{S}^{T+1}} \mathcal{P}(s_0,\dots,s_T)\,\delta_0(s_0) \prod_{t=1}^{T} \delta_t(s_t \mid s_{t-1}) \cdot R^{\pi,\mathcal{M}} \\
&= \sum_{(s_0,\dots,s_T)\in\mathcal{S}^{T+1}} \mathcal{P}(s_0,\dots,s_T)\,\delta(s_0,\dots,s_T) \cdot R^{\pi,\mathcal{M}}
\end{aligned}$$

Since $\delta$ is the random perturbation to the environment as we defined, it is intuitive that

$$\mathbb{E}\big[\delta(s_0,\dots,s_T)\big] = \sum_{(s_0,\dots,s_T)\in\mathcal{S}^{T+1}} \mathcal{P}(s_0,\dots,s_T)\,\delta(s_0,\dots,s_T) = 1 \tag{A.2}$$

Recalling the definition of $\Delta_\alpha$ in (3.5), we can prove the second part of Theorem 3.1:

$$\sup_{\Delta_\alpha} \mathbb{E}_{\hat{\mathcal{P}}}\big[{-R^{\pi,\mathcal{M}}}\big]
= \sup_{\delta(s_0,\dots,s_T) \le \frac{1}{\alpha}} \sum_{(s_0,\dots,s_T)\in\mathcal{S}^{T+1}} \mathcal{P}(s_0,\dots,s_T)\,\delta(s_0,\dots,s_T) \cdot \big({-R^{\pi,\mathcal{M}}}\big)
= \mathrm{CVaR}_\alpha\big({-R^{\pi,\mathcal{M}}}\big) \tag{A.3}$$

The last equality in (A.3) follows from equation (A.2) and the Representation Theorem [22] for CVaR; combined with (A.1), it yields the second part of Theorem 3.1.

∎
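To make the quantity $\eta_\alpha^{\pi,\mathcal{M}} = -\mathrm{CVaR}_\alpha(-R^{\pi,\mathcal{M}})$ concrete, the following minimal sketch (our own illustration, not code from the paper; the function name and the empirical-quantile convention are our choices) estimates it from a batch of Monte Carlo returns:

```python
import numpy as np

def neg_cvar_of_neg_returns(returns, alpha):
    """Empirical -CVaR_alpha(-R): negate returns to get losses, average the
    tail of losses at or above the (1 - alpha)-quantile (the empirical VaR),
    then negate back to the return scale."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1.0 - alpha)   # empirical VaR_alpha of the losses
    tail = losses[losses >= var]             # worst-case tail of the losses
    return -tail.mean()

# For returns 1..10 and alpha = 0.2, the tail of losses is {-2, -1},
# so the estimate is -(-1.5) = 1.5.
print(neg_cvar_of_neg_returns(range(1, 11), 0.2))
```

As expected, the estimate (1.5 here) lies well below the plain mean return (5.5), which is exactly the conservatism that the rollout-dropout mechanism exploits.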
A.2 Proof of Lemma 3.2

To prove Lemma 3.2, we first introduce two useful lemmas.

Lemma A.1.

Define

$$G^{\pi,\mathcal{M}}(s,a) = \mathbb{E}_{\hat{s}'\sim\mathcal{M}(\cdot\mid s,a)}\big[V^{\pi,\mathcal{M}}(\hat{s}')\big] - \mathbb{E}_{s'\sim\mathcal{P}(\cdot\mid s,a)}\big[V^{\pi,\mathcal{M}}(s')\big] \tag{A.4}$$

For any policy $\pi$ and dynamics models $\mathcal{M}, \mathcal{M}'$, we have

$$\eta^{\pi,\mathcal{M}'} - \eta^{\pi,\mathcal{M}} = \frac{\gamma}{1-\gamma}\,\mathbb{E}_{S,A\sim\pi,\mathcal{M}}\big[G^{\pi,\mathcal{M}'}(S,A)\big] \tag{A.5}$$

Lemma A.1 is cited directly from existing work (Lemma 4.3 in [31]); we make some modifications to fit our subsequent conclusions. With the above lemma, we can state Lemma A.2.

Lemma A.2.

Suppose the value estimate $V^{\pi,\mathcal{M}}$ for model-based methods is Lipschitz continuous on the state space $\mathcal{S}$ with Lipschitz constant $K$, and let $\mathcal{P}$ be the transition distribution of the environment. Then

$$\big|\eta^{\pi,\mathcal{M}} - \eta^{\pi,\mathcal{P}}\big| \le \frac{\gamma}{1-\gamma}\,K\cdot\mathrm{bias} \tag{A.6}$$

where

$$\mathrm{bias} = \mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \tag{A.7}$$

In Lemma A.2, we assume that the value estimate $V^{\pi,\mathcal{M}}(s)$ on the estimated model $\mathcal{M}$ is Lipschitz continuous w.r.t. any norm $\|\cdot\|$, i.e.,

$$\big|V^{\pi,\mathcal{M}}(s) - V^{\pi,\mathcal{M}}(s')\big| \le K\,\|s-s'\|, \quad \forall s,s'\in\mathcal{S} \tag{A.8}$$

where $K\in\mathbb{R}^+$ is a Lipschitz constant. This assumption means that closer states should give closer value estimates, which should hold in most scenarios.

Proof.

By the definition of $G^{\pi,\mathcal{M}}(s,a)$ in (A.4) and Assumption (A.8), i.e., $V^{\pi,\mathcal{M}}(s)$ is Lipschitz continuous, we have

$$\big|G^{\pi,\mathcal{M}}(s,a)\big| \le K\,\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \tag{A.9}$$

Then we can show that

$$\begin{aligned}
\big|\eta^{\pi,\mathcal{M}} - \eta^{\pi,\mathcal{P}}\big|
&= \frac{\gamma}{1-\gamma}\,\Big|\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big[G^{\pi,\mathcal{M}}(s,a)\big]\Big| && \text{(by Lemma A.1)} \\
&\le \frac{\gamma}{1-\gamma}\,\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big[\big|G^{\pi,\mathcal{M}}(s,a)\big|\big] && \text{(by the triangle inequality)} \\
&\le \frac{\gamma}{1-\gamma}\,\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\,K\,\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| && \text{(by equation (A.9))} \\
&= \frac{\gamma}{1-\gamma}\,K\cdot\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \\
&= \frac{\gamma}{1-\gamma}\,K\cdot\mathrm{bias}
\end{aligned}$$

∎
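The bias term (A.7) is an expectation that can be estimated straightforwardly by Monte Carlo from visited transitions. A minimal sketch (our own illustration; it assumes the model returns a deterministic mean prediction for the next state):

```python
import numpy as np

def estimate_bias(true_next_states, model_next_states):
    """Monte Carlo estimate of bias = E_{s,a ~ pi,P} ||M(s,a) - P(s,a)||:
    average Euclidean gap between the model's predicted next states and
    the environment's actual next states over sampled (s, a) pairs."""
    gap = np.asarray(model_next_states, dtype=float) - np.asarray(true_next_states, dtype=float)
    return float(np.mean(np.linalg.norm(gap, axis=-1)))

# Two sampled transitions: the model is off by (3, 4) on the first and exact
# on the second, so the estimated bias is (5 + 0) / 2 = 2.5.
print(estimate_bias([[0.0, 0.0], [1.0, 1.0]], [[3.0, 4.0], [1.0, 1.0]]))
```

Plugged into (A.6), such an estimate gives a rough numerical handle on the model-error part of the return gap.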
Now we prove Lemma 3.2.

Proof.

For two disjoint sets $A$ and $B$, i.e., $A\cap B=\emptyset$, we have the following property:

$$\mathbb{E}_{A\cup B}[X] = \mathbb{E}_{A}[X]\,\mathrm{P}(A) + \mathbb{E}_{B}[X]\,\mathrm{P}(B) \tag{A.10}$$

By this property,

$$\mathbb{E}\Big[\sum_{s_0\in\mathcal{S},\,\{s_0,s_1,\dots\}\sim\pi,\mathcal{M}_\phi}\gamma^t r(s_t,a_t)\Big]
= (1-\alpha)\cdot\mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\}\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]
+ \alpha\cdot\mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\}\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]$$

Recalling definitions (2.1), (2.2), and (3.2), we have

$$\begin{aligned}
\eta_\alpha^{\pi,\mathcal{M}_\phi}
&= \mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\}\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,\mathbb{E}\Big[\sum_{s_0\in\mathcal{S},\,\{s_0,s_1,\dots\}\sim\pi,\mathcal{M}_\phi}\gamma^t r(s_t,a_t)\Big] - \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,\mathbb{E}_{s\in\mathcal{S}}\big[\eta^{\pi,\mathcal{M}_\phi}(s)\big] - \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,\eta^{\pi,\mathcal{M}_\phi} - \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]
\end{aligned} \tag{A.11}$$

where $\tau = \{s_0,a_0,\dots\}$. Recalling definition (3.1) and $R_m = \sup_{s\in\mathcal{S},a\in\mathcal{A}} r(s,a)$, we have

$$\begin{aligned}
\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]
&\le \int_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\Big[\sum_{t=0}^{\infty}\gamma^t R_m\Big]\,p(\tau)\,\mathrm{d}\tau \\
&= \Big[\sum_{t=0}^{\infty}\gamma^t\Big] R_m \int_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau \\
&= \frac{1}{1-\gamma}\,R_m \int_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau \\
&= \frac{\alpha}{1-\gamma}\,R_m && \text{(by the definition of $\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}$)}
\end{aligned}$$

Similarly,

$$\begin{aligned}
\eta^{\pi,\mathcal{M}_\phi}
&= \mathbb{E}\Big[\sum_{\tau\sim\mathcal{B}^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big] \\
&\le \int_{\tau\sim\mathcal{B}^{\pi,\mathcal{M}_\phi}}\Big[\sum_{t=0}^{\infty}\gamma^t R_m\Big]\,p(\tau)\,\mathrm{d}\tau \\
&= \Big[\sum_{t=0}^{\infty}\gamma^t\Big] R_m \int_{\tau\sim\mathcal{B}^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau \\
&= \frac{1}{1-\gamma}\,R_m
\end{aligned}$$

Based on the above two inequalities and equation (A.11), we have

$$\begin{aligned}
\big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\big|
&= \Big|\frac{\alpha}{1-\alpha}\,\eta^{\pi,\mathcal{M}_\phi} - \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]\Big| \\
&\le \frac{\alpha}{1-\alpha}\Big(\big|\eta^{\pi,\mathcal{M}_\phi}\big| + \Big|\mathbb{E}\Big[\sum_{\tau\not\sim\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}}\gamma^t r(s_t,a_t)\Big]\Big|\Big) \\
&\le \frac{\alpha}{1-\alpha}\Big(\frac{1}{1-\gamma}\,R_m + \frac{\alpha}{1-\gamma}\,R_m\Big) \\
&= \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)}\,R_m
\end{aligned} \tag{A.12}$$

∎
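The rollout-dropout set $\mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}$ above keeps the conservative $(1-\alpha)$ fraction of rollouts. A minimal empirical sketch (our own illustration; the function name and the simple sort-and-truncate selection are assumptions, not the paper's implementation):

```python
import numpy as np

def rollout_dropout_return(returns, alpha):
    """Empirical analogue of eta_alpha: drop the top-alpha fraction of
    rollouts by return and average the conservative remainder."""
    r = np.sort(np.asarray(returns, dtype=float))  # ascending returns
    keep = int(np.ceil((1.0 - alpha) * len(r)))    # worst (1 - alpha) fraction
    return r[:keep].mean()

returns = list(range(1, 11))                 # ten rollout returns 1..10
print(rollout_dropout_return(returns, 0.2))  # averages returns 1..8
```

The gap between this conservative estimate (4.5 here) and the plain mean (5.5) is the quantity that Lemma 3.2 bounds by $\alpha(1+\alpha)R_m/\big((1-\alpha)(1-\gamma)\big)$.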
A.3 Proof of Theorem 3.3

Proof.

With Lemma A.2 and Lemma 3.2, we can show that

$$\begin{aligned}
\big|\eta_\alpha^{\pi,\mathcal{M}_\beta} - \eta^{\pi,\mathcal{P}}\big|
&= \Big|\int_{\Phi_\beta} \eta_\alpha^{\pi,\mathcal{M}_\phi}\,p(\phi)\,\mathrm{d}\phi - \eta^{\pi,\mathcal{P}}\Big| \\
&= \Big|\int_{\Phi_\beta} \big(\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big)\,p(\phi)\,\mathrm{d}\phi\Big| \\
&\le \int_{\Phi_\beta} \big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi && \text{(by the triangle inequality)} \\
&\le \int_{\Phi_\beta} \Big(\big|\eta^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big| + \big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\big|\Big)\,p(\phi)\,\mathrm{d}\phi && \text{(by Lemma 3.2)} \\
&= \int_{\Phi_\beta} \big|\eta^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi + \int_{\Phi_\beta} \big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi
\end{aligned} \tag{A.13}$$

For the first part of (A.13), let

$$\epsilon_{\mathcal{M}} = \mathbb{E}_{\phi\in\Phi}\Big[\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big[\big\|\mathcal{M}_\phi(s,a) - \mathcal{P}(s,a)\big\|\big]\Big]$$

denote the general bias between any model $\mathcal{M}$ and the environment transition $\mathcal{P}$. With Lemma A.2, we now get

$$\begin{aligned}
\int_{\Phi_\beta} \big|\eta^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi
&\le \frac{\gamma K}{1-\gamma}\int_{\Phi_\beta} \mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big[\big\|\mathcal{M}_\phi(s,a) - \mathcal{P}(s,a)\big\|\big]\,p(\phi)\,\mathrm{d}\phi && \text{(by Lemma A.2)} \\
&\le \frac{\gamma K}{1-\gamma}\,|\epsilon_{\mathcal{M}}| \int_{\Phi_\beta} |p(\phi)|\,\mathrm{d}\phi \\
&= (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_{\mathcal{M}}
\end{aligned} \tag{A.14}$$

For the second part of (A.13), by Lemma 3.2, we can show that

$$\begin{aligned}
\int_{\Phi_\beta} \big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi
&\le \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)}\,R_m \int_{\Phi_\beta} p(\phi)\,\mathrm{d}\phi && \text{(by Lemma 3.2)} \\
&= \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m
\end{aligned} \tag{A.15}$$

Going back to equation (A.13), it follows that

$$\begin{aligned}
\big|\eta_\alpha^{\pi,\mathcal{M}_\beta} - \eta^{\pi,\mathcal{P}}\big|
&\le \int_{\Phi_\beta} \big|\eta^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi + \int_{\Phi_\beta} \big|\eta_\alpha^{\pi,\mathcal{M}_\phi} - \eta^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi \\
&\le (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_{\mathcal{M}} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m && \text{(by equations (A.14) and (A.15))} \\
&\triangleq D_{\alpha,\beta}(\mathcal{M})
\end{aligned}$$

∎
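Since the bound $D_{\alpha,\beta}(\mathcal{M})$ is a closed-form expression, the trade-off it encodes between the two dropout ratios is easy to inspect numerically. A minimal sketch (our own helper; the values of $K$, $\epsilon_{\mathcal{M}}$, and $R_m$ below are assumed constants for illustration, not values from the paper):

```python
def discrepancy_bound(alpha, beta, gamma, K, eps_model, r_max):
    """D_{alpha,beta}(M) from Theorem 3.3: a model-bias term shrunk by the
    model-dropout ratio beta, plus a rollout-dropout term growing in alpha."""
    model_term = (1.0 - beta) * gamma * K * eps_model / (1.0 - gamma)
    dropout_term = (alpha * (1.0 + alpha) * (1.0 - beta) * r_max
                    / ((1.0 - alpha) * (1.0 - gamma)))
    return model_term + dropout_term

# Increasing beta tightens the bound, while increasing alpha loosens it:
print(discrepancy_bound(0.0, 0.0, 0.99, 1.0, 0.1, 1.0))  # model term only
print(discrepancy_bound(0.2, 0.2, 0.99, 1.0, 0.1, 1.0))
```

This mirrors the flexible control mechanism described in the main text: the two ratios trade robustness against efficiency through their opposite effects on the bound.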
A.4 Proof of Proposition 3.12

Proof.

With Theorem 3.3, i.e., $|\eta_\alpha^{\pi,\mathcal{M}_\beta} - \eta^{\pi,\mathcal{P}}| \le D_{\alpha,\beta}(\mathcal{M})$, we have

$$\eta^{\pi_{n+1},\mathcal{P}} \ge \eta_\alpha^{\pi_{n+1},\mathcal{M}_{n+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{n+1}^\beta\big) \tag{A.16}$$

Since the LHS of (A.16), i.e., $\eta^{\pi_{n+1},\mathcal{P}}$, is no smaller than the RHS, i.e., $\eta_\alpha^{\pi_{n+1},\mathcal{M}_{n+1}^\beta} - D_{\alpha,\beta}(\mathcal{M}_{n+1}^\beta)$, we can turn the inequality into an equation by adding an update residual $\xi_{n+1}$ to the RHS:

$$\eta^{\pi_{n+1},\mathcal{P}} = \eta_\alpha^{\pi_{n+1},\mathcal{M}_{n+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{n+1}^\beta\big) + \xi_{n+1} \tag{A.17}$$

where

$$\xi_{n+1} = \eta^{\pi_{n+1},\mathcal{P}} - \Big[\eta_\alpha^{\pi_{n+1},\mathcal{M}_{n+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{n+1}^\beta\big)\Big] \tag{A.18}$$

With the general pattern equation (3.10), we have

$$\eta_\alpha^{\pi_{n+1},\mathcal{M}_{n+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{n+1}^\beta\big) \ge \eta^{\pi_n,\mathcal{P}} - D_{\alpha,\beta}(\mathcal{P}) \tag{A.19}$$

Since

$$\epsilon_{\mathcal{P}} = \mathbb{E}_{\phi\in\Phi}\Big[\mathbb{E}_{s,a\sim\pi,\mathcal{P}}\big[\big\|\mathcal{P}(s,a) - \mathcal{P}(s,a)\big\|\big]\Big] = \mathbb{E}_{\phi\in\Phi}[0] = 0 \quad \text{(by equation (3.9))}$$

we can show that

$$D_{\alpha,\beta}(\mathcal{P}) = (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_{\mathcal{P}} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m = 0 + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m \triangleq \epsilon_\alpha \tag{A.20}$$

With equations (A.17), (A.19) and (A.20), it follows that

$$\eta^{\pi_{n+1},\mathcal{P}} \ge \eta^{\pi_n,\mathcal{P}} + \big[\xi_{n+1} - \epsilon_\alpha\big] \tag{A.21}$$

The update residual $\xi_{n+1}$ is much larger than $\epsilon_\alpha$ most of the time during the training period, implying that $[\xi_{n+1} - \epsilon_\alpha] \ge 0$ almost surely, i.e.,

$$\eta^{\pi_{n+1},\mathcal{P}} \overset{\mathrm{a.s.}}{\ge} \eta^{\pi_n,\mathcal{P}} + 0 \tag{A.22}$$

where "a.s." means "almost surely". Then we finally get

$$\eta^{\pi_0,\mathcal{P}} \overset{\mathrm{a.s.}}{\le} \cdots \overset{\mathrm{a.s.}}{\le} \eta^{\pi_n,\mathcal{P}} \overset{\mathrm{a.s.}}{\le} \eta^{\pi_{n+1},\mathcal{P}} \overset{\mathrm{a.s.}}{\le} \cdots \tag{A.23}$$

∎
Appendix B Experiment Details

B.1 Environment Settings

In our experiments, we evaluate our approach on four continuous-control benchmark tasks in the MuJoCo simulator [28]: Hopper, Walker2d, HalfCheetah, and Ant.

- Hopper: make a two-dimensional one-legged robot hop forward as fast as possible.
- Walker2d: make a two-dimensional bipedal robot walk forward as fast as possible.
- HalfCheetah: make a two-dimensional cheetah robot run forward as fast as possible.
- Ant: make a four-legged creature walk forward as fast as possible.

Figure 5: Illustrations of the four MuJoCo simulated robot environments used in our experiments: (a) Hopper, (b) Walker2d, (c) HalfCheetah, (d) Ant.
We also modify the XML model configuration files of the MuJoCo environments in our robustness experiments, in order to evaluate the robustness of our MBDP algorithm. More specifically, we scale the standard (OpenAI Gym) friction of each geom part by a coefficient $C_{\text{friction}} \in [0.5, 1.5]$, and scale the standard mass of the torso part by a coefficient $C_{\text{mass}} \in [0.5, 1.5]$.
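As a concrete illustration of this kind of perturbation, the sketch below rescales the friction and torso-mass attributes of a MuJoCo-style XML model with the standard library. The attribute names follow the MJCF format, but the helper itself is our own sketch, not the paper's tooling:

```python
import xml.etree.ElementTree as ET

def perturb_mjcf(xml_string, c_friction=1.0, c_mass=1.0):
    """Scale every geom's friction by c_friction and the torso body's
    geom mass by c_mass in an MJCF model string (illustrative sketch)."""
    root = ET.fromstring(xml_string)
    for geom in root.iter("geom"):
        if "friction" in geom.attrib:  # friction is a space-separated tuple
            scaled = [float(v) * c_friction for v in geom.get("friction").split()]
            geom.set("friction", " ".join(map(str, scaled)))
    for body in root.iter("body"):
        if body.get("name") == "torso":
            for geom in body.iter("geom"):
                if "mass" in geom.attrib:
                    geom.set("mass", str(float(geom.get("mass")) * c_mass))
    return ET.tostring(root, encoding="unicode")
```

Sweeping `c_friction` and `c_mass` over [0.5, 1.5] then produces the kind of robustness grid described above.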
B.2 Hyperparameter Settings

| environment | epochs | env steps per epoch | rollout batch | rollout length | policy updates per env step | env steps per model update | α | β | γ | ensemble size | network arch |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Hopper | 120 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| Walker2d | 300 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| HalfCheetah | 400 | 1000 | 10 | 5 | 40 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| Ant | 300 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |

Table 1: Hyperparameter settings for the MBDP results shown in Figure 2 of the main manuscript.

B.3 Computational Details

| CPU | GPU | RAM |
| --- | --- | --- |
| Intel E5-2680 @ 2.4 GHz (56 cores) | Tesla P40 (24 GB) × 8 | 256 GB |

Table 2: Computational resources for our experiments.

| Environment | Hopper | Walker2d | HalfCheetah | Ant |
| --- | --- | --- | --- | --- |
| Time | ≈ 10 hours | ≈ 20 hours | ≈ 32 hours | ≈ 48 hours |

Table 3: Computing time of each single experiment in different environments.

Appendix C Empirical Demonstration of Proposition 3.12

Figure 6: Scaled residual curve in the Hopper and HalfCheetah environments.
The observed residual values are shown in Figure 6. The horizontal axis is the number of training epochs, and the vertical axis shows the estimated value (scaled to 1) of $(\xi_n - \epsilon_\alpha)$. As the figure shows, $(\xi_n - \epsilon_\alpha)$ is greater than 0 most of the time, and only occasionally drops below 0 close to convergence. This verifies our claim in Section 3.3.2.