
Title: A Model-based Approach to Achieve both Robustness and Sample Efficiency via Double Dropout Planning

URL Source: https://arxiv.org/html/2108.01295


arXiv:2108.01295v2 [cs.LG]. License: arXiv.org perpetual non-exclusive license.

MBDP: A Model-based Approach to Achieve both Robustness and Sample Efficiency via Double Dropout Planning

Wanpeng Zhang (Tsinghua University), Xi Xiao (Tsinghua University), Yao Yao (Tsinghua University), Mingzhe Chen (Tsinghua University), Dijun Luo (Tencent AI Lab)

Abstract

Model-based reinforcement learning is a widely accepted solution to the excessive sample demands of model-free methods. However, the predictions of dynamics models are often not accurate enough, and the resulting bias may incur catastrophic decisions due to insufficient robustness. It is therefore highly desirable to improve the robustness of model-based RL algorithms while maintaining high sampling efficiency. In this paper, we propose Model-Based Double-dropout Planning (MBDP) to balance robustness and efficiency. MBDP consists of two kinds of dropout mechanisms: rollout-dropout aims to improve robustness at a small cost in sample efficiency, while model-dropout is designed to compensate for the lost efficiency at a slight expense of robustness. By combining them in a complementary way, MBDP provides a flexible control mechanism that meets different demands on robustness and efficiency by tuning the two corresponding dropout ratios. The effectiveness of MBDP is demonstrated both theoretically and experimentally.

1 Introduction

Reinforcement learning (RL) algorithms are commonly divided into two categories: model-free RL and model-based RL. Model-free RL methods learn a policy directly from samples collected in the real environment, while model-based RL approaches build approximate predictive models of the environment to assist in the optimization of the policy [1, 2]. In recent years, RL has achieved remarkable results in a wide range of areas, including continuous control [3, 4, 5], and has outperformed human players in Go and video games [6, 7]. However, most of these results are achieved by model-free RL algorithms, which rely on a large number of environmental samples for training, limiting the application scenarios when deployed in practice. In contrast, model-based RL methods have shown promising potential to cope with the lack of samples by using predictive models for simulation and planning [8, 9]. To reduce sample complexity, PILCO [10] learns a probabilistic model through Gaussian process regression, which models prediction uncertainty to boost the agent's performance in complex environments. Based on PILCO, the DeepPILCO algorithm [11] enables the modeling of more complex environments by introducing the Bayesian Neural Network (BNN), a universal function approximator with high capacity. To further enhance the interpretability of the predictive models and improve the robustness of the learned policies [12, 13], ensemble-based methods [14, 15] train an ensemble of models to comprehensively capture the uncertainty in the environment and have been empirically shown to obtain significant improvements in sample efficiency [5, 12, 16].

Despite their high sample efficiency, model-based RL methods inherently suffer from inaccurate predictions, especially when faced with high-dimensional tasks and insufficient training samples [17, 18]. Model accuracy can greatly affect policy quality, and policies learned in inaccurate models tend to suffer significant performance degradation due to cumulative model error [19, 20]. Therefore, how to eliminate the effects caused by model bias has become a hot topic in model-based RL. Another important factor that limits the application of model-based algorithms is safety concerns. In a general RL setup, the agent needs to collect observations to extrapolate the current state before making decisions, which poses a challenge to the robustness of the learned policy, because the process of acquiring observations through sensors may introduce random noise and the real environment is normally partially observable. Non-robust policies may generate disastrous decisions when faced with a noisy environment, and this safety issue is more prominent in model-based RL because the error in inferring the current state from observations may be further amplified by model bias when doing simulation and planning with the predictive models. Drawing on research in robust control [21], a branch of control theory, robust RL methods have attracted increasing attention for improving the capability of the agent against perturbed states and model bias. The main objective of robust RL is to optimize the agent's performance in worst-case scenarios and to improve the generalization of learned policies to noisy environments [22]. Existing robust RL methods can be roughly classified into two types: one is based on adversarial ideas, such as RARL [23] and NR-MDP [24], which obtain robust policies by proposing corresponding minimax objective functions, while the other group of approaches [25] introduces conditional value at risk (CVaR) objectives to ensure the robustness of the learned policies.
However, the increased robustness of these methods can lead to a substantial loss of sample efficiency due to the pessimistic manner of data use. Therefore, it is nontrivial to enhance the robustness of a policy while avoiding sample inefficiency.

In this paper, we propose the Model-Based Reinforcement Learning with Double Dropout Planning (MBDP) algorithm for the purpose of learning policies that reach a balance between robustness and sample efficiency. Inspired by CVaR, we design the rollout-dropout mechanism to enhance robustness by optimizing policies with low-reward samples. On the other hand, in order to maintain high sample efficiency and reduce the impact of model bias, we learn an ensemble of models to compensate for the inaccuracy of a single model. Furthermore, when generating imaginary samples to assist in the optimization of policies, we design the model-dropout mechanism to avoid the perturbation of inaccurate models by only using models with small errors. To meet different demands on robustness and sample efficiency, flexible control can be realized via the two dropout mechanisms. We demonstrate the effectiveness of MBDP both theoretically and empirically.

2 Notations and Preliminaries

2.1 Reinforcement Learning

We consider a Markov decision process (MDP), defined by the tuple $(\mathcal{S}, \mathcal{A}, \mathcal{P}, r, \gamma)$, where $\mathcal{S} \subseteq \mathbb{R}^{d_s}$ is the state space, $\mathcal{A} \subseteq \mathbb{R}^{d_a}$ is the action space, $r(s, a): \mathcal{S} \times \mathcal{A} \mapsto \mathbb{R}$ is the reward function, $\gamma \in [0, 1]$ is the discount factor, and $\mathcal{P}(s' \mid s, a): \mathcal{S} \times \mathcal{A} \times \mathcal{S} \mapsto [0, 1]$ is the conditional probability distribution of the next state given current state $s$ and action $a$. The form $s' = \mathcal{P}(s, a): \mathcal{S} \times \mathcal{A} \mapsto \mathcal{S}$ denotes the state transition function when the environment is deterministic. Let $V^{\pi, \mathcal{P}}(s)$ denote the expected return, or expectation of accumulated rewards, starting from initial state $s$, i.e., the expected sum of discounted rewards following policy $\pi(a \mid s)$ and state transition function $\mathcal{P}(s, a)$:

$$V^{\pi, \mathcal{P}}(s) = \mathbb{E}_{\{a_0, s_1, \ldots\} \sim \pi, \mathcal{P}} \left[ \sum_{t=0}^{\infty} \gamma^t r(s_t, a_t) \,\middle|\, s_0 = s \right] \tag{2.1}$$

For simplicity of notation, let $V^{\pi, \mathcal{P}}$ denote the expected return over random initial states:

$$V^{\pi, \mathcal{P}} = \mathbb{E}_{s_0 \in \mathcal{S}} \left[ V^{\pi, \mathcal{P}}(s_0) \right] \tag{2.2}$$

The goal of reinforcement learning is to maximize the expected return by finding the optimal decision policy, i.e., $\pi^* = \arg\max_{\pi} V^{\pi, \mathcal{P}}$.

2.2 Model-based Methods

In model-based reinforcement learning, an approximate transition model $\mathcal{M}(s, a)$ is learned by interacting with the environment; the policy $\pi(a \mid s)$ is then optimized with samples from the environment and data generated by the model. We use the parametric notation $\mathcal{M}_\phi$, $\phi \in \Phi$, to specifically denote a model trained by a neural network, where $\Phi$ is the parameter space of models.

More specifically, to improve the ability of models to represent complex environments, we learn multiple models and form an ensemble of them, i.e., $\mathcal{M} = \{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \ldots\}$. To generate a prediction from the model ensemble, we select a model $\mathcal{M}_{\phi_i}$ from $\mathcal{M}$ uniformly at random and perform a model rollout using the selected model at each time step, i.e., $s_{t+1} \sim \mathcal{M}_{\phi_t}(s_t, a_t)$. We then fill these rollout samples $x = (s_{t+1}, s_t, a_t)$ into a batch. Finally, we can perform policy optimization on these generated samples.
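The per-step random model selection described above can be sketched as follows; the toy linear "models" and all names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def ensemble_rollout(models, policy, s0, horizon, rng):
    """Roll out a trajectory, picking a random ensemble member each step
    (s_{t+1} ~ M_{phi_t}(s_t, a_t)) and collecting (s', s, a) samples."""
    batch, s = [], s0
    for _ in range(horizon):
        a = policy(s)
        model = models[rng.integers(len(models))]  # uniform model choice
        s_next = model(s, a)
        batch.append((s_next, s, a))
        s = s_next
    return batch

# Toy "models": slightly different linear dynamics s' = k*s + a.
models = [lambda s, a, k=k: k * s + a for k in (0.9, 1.0, 1.1)]
rng = np.random.default_rng(0)
batch = ensemble_rollout(models, policy=lambda s: 1.0, s0=0.0, horizon=5, rng=rng)
```

In practice the ensemble members are learned neural networks; the per-step resampling is what lets the generated batch reflect the ensemble's epistemic spread.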

2.3 Conditional Value-at-Risk

Let $Z$ denote a random variable with cumulative distribution function (CDF) $F(z) = \Pr(Z < z)$. Given a confidence level $\alpha \in (0, 1)$, the Value-at-Risk of $Z$ (at confidence level $\alpha$) is denoted $\text{VaR}_\alpha(Z)$ and given by

$$\text{VaR}_\alpha(Z) = F^{-1}(\alpha) \triangleq \inf \{ z : F(z) \geq \alpha \} \tag{2.3}$$

The Conditional Value-at-Risk of $Z$ (at confidence level $\alpha$) is denoted by $\text{CVaR}_\alpha(Z)$ and defined as the expected value of $Z$, conditioned on the $\alpha$-portion of the tail distribution:

$$\text{CVaR}_\alpha(Z) \triangleq \mathbb{E} \left[ Z \mid Z \geq \text{VaR}_\alpha(Z) \right] \tag{2.4}$$

3 MBDP Framework

Figure 1: Overview of the MBDP algorithm. When interacting with the environment, we collect samples into the environment replay buffer $\mathcal{D}_{\text{env}}$, used for training the simulator model of the environment. We then implement the model-dropout procedure and perform rollouts on the model ensemble. The sampled data from the model ensemble is filled into a temporary batch, from which we obtain a dropout buffer $\mathcal{D}_{\text{model}}$ by implementing the rollout-dropout procedure. Finally, we use samples from $\mathcal{D}_{\text{model}}$ to optimize the policy $\pi(a \mid s)$.

In this section, we introduce how MBDP leverages Double Dropout Planning to find the balance between efficiency and robustness. The basic procedure of MBDP is to 1) sample data from the environment; 2) train an ensemble of models on the sampled data; 3) calculate model bias over observed environment samples, and choose a subset of the model ensemble based on the calculated bias; 4) collect rollout trajectories from the model ensemble, and make gradient updates based on the subsets of sampled data. An overview of the algorithm architecture is shown in Figure 1 and the overall pseudo-code is given in Algorithm 1.

We will also theoretically analyze robustness and performance under the dropout planning of our MBDP algorithm. For simplicity of the theoretical analysis, we only consider deterministic environments and models in this section; the experimental part does not require this assumption. Detailed proofs can be found in the appendix provided in the supplementary materials.

Algorithm 1 Model-Based Reinforcement Learning with Double Dropout Planning (MBDP)

    Initialize hyperparameters, policy $\pi_\theta$, environment replay buffer $\mathcal{D}_{\text{env}}$, model replay buffer $\mathcal{D}_{\text{model}}$
    for $N_{\text{epoch}}$ iterations do
        Take an action in the environment using policy $\pi_\theta$; add samples to $\mathcal{D}_{\text{env}}$
        for $N_{\text{train}}$ iterations do
            Train probabilistic model $\mathcal{M}$ on $\mathcal{D}_{\text{env}}$
            Build a model subset $\mathcal{M}_\beta = \{\mathcal{M}_{\phi_1}, \ldots, \mathcal{M}_{\phi_{N_{1-\beta}}}\}$ according to $\text{bias}(\phi_i)$
            for $t = 1, 2, \ldots, T$ do
                Select a model $\mathcal{M}_{\phi_t}$ from $\mathcal{M}_\beta$ at random
                Perform rollouts on model $\mathcal{M}_{\phi_t}$ with policy $\pi_\theta$ and get samples $x = (s_{t+1}, s_t, a_t)$
                Fill these samples into temp batch $\mathcal{B}^{\pi, \mathcal{M}_\beta}$
            end for
            Calculate $r_{1-\alpha}(\mathcal{B}^{\pi, \mathcal{M}_\beta} \mid s)$: the $(1-\alpha)$ percentile of batch $\mathcal{B}^{\pi, \mathcal{M}_\beta}$ grouped by state $s$, $\forall s \in \mathcal{S}$
            for $x \in \mathcal{B}^{\pi, \mathcal{M}_\beta}$ do
                if $r(x) \leq r_{1-\alpha}(\mathcal{B}^{\pi, \mathcal{M}_\beta} \mid s_t)$ then
                    fill $x$ into $\mathcal{D}_{\text{model}}$
                end if
            end for
        end for
        Optimize $\pi_\theta$ on $\mathcal{D}_{\text{model}}$: $\theta \leftarrow \theta - \lambda \nabla_\theta J_\theta(\mathcal{D}_{\text{model}})$
    end for

3.1 Rollout Dropout in MBDP

Optimizing the expected return in the usual model-based way allows us to learn a policy that performs best in expectation over the training model ensemble. However, the best expectation does not mean the resulting policies perform well at all times. This instability typically leads to risky decisions when facing poorly-informed states at deployment.

Inspired by previous works [14, 25, 26] which optimize conditional value at risk (CVaR) to explicitly seek a robust policy, we add a dropout mechanism to the rollout procedure. Recall from Section 2.2 that, to generate a prediction from the model ensemble, we select a model $\mathcal{M}_{\phi_i}$ from $\mathcal{M}$ uniformly at random and perform a model rollout using the selected model at each time step, i.e., $s_{t+1} \sim \mathcal{M}_{\phi_t}(s_t, a_t)$. We then fill these rollout samples $x = (s_{t+1}, s_t, a_t)$ into a batch and retain a $(1-\alpha)$ percentile subset with more pessimistic rewards. We use $\mathcal{B}_\alpha^{\pi, \mathcal{M}}$ to denote the $(1-\alpha)$ percentile rollout batch:

$$\mathcal{B}_\alpha^{\pi, \mathcal{M}} = \left\{ x \,\middle|\, x \in \mathcal{B}^{\pi, \mathcal{M}},\ r(x \mid s) \leq r_{1-\alpha}(\mathcal{B}^{\pi, \mathcal{M}} \mid s),\ \forall s \in \mathcal{S} \right\} \tag{3.1}$$

where $\mathcal{B}^{\pi, \mathcal{M}} = \{x \mid x \triangleq (s_{t+1}, s_t, a_t) \sim \pi, \mathcal{M}\}$ and $r_{1-\alpha}(\mathcal{B}^{\pi, \mathcal{M}} \mid s)$ is the $(1-\alpha)$ percentile of reward values conditioned on state $s \in \mathcal{S}$ in batch $\mathcal{B}^{\pi, \mathcal{M}}$. The expected return of dropout batch rollouts is denoted by $V_\alpha^{\pi, \mathcal{M}}$:

$$V_\alpha^{\pi, \mathcal{M}} = \mathbb{E}_{\{s_0, a_0, \ldots\} \sim \mathcal{B}_\alpha^{\pi, \mathcal{M}}} \left[ \sum_t \gamma^t r(s_t, a_t) \right] \tag{3.2}$$

Rollout-dropout can improve robustness at a small cost in sample efficiency; we analyze how it brings improvements to robustness in Section 3.3.
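A minimal sketch of the rollout-dropout filter in Eq. (3.1): group samples by state and keep only those whose reward is at or below the $(1-\alpha)$ percentile for that state. The names and toy data are illustrative, not the paper's implementation:

```python
import numpy as np
from collections import defaultdict

def rollout_dropout(batch, rewards, alpha):
    """Keep, per state, the samples whose reward is at or below the
    (1 - alpha) percentile of rewards observed for that state (Eq. 3.1).
    `batch` is a list of (s_next, s, a); `rewards` holds the matching r(x)."""
    by_state = defaultdict(list)
    for x, r in zip(batch, rewards):
        by_state[x[1]].append(r)  # group rewards by current state s
    thresh = {s: np.percentile(rs, 100 * (1 - alpha)) for s, rs in by_state.items()}
    return [x for x, r in zip(batch, rewards) if r <= thresh[x[1]]]

# All samples share state 0; alpha = 0.2 keeps the lower 80% of rewards.
batch = [(i + 1, 0, 0) for i in range(10)]
rewards = list(range(10))  # rewards 0..9
kept = rollout_dropout(batch, rewards, alpha=0.2)
```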

3.2 Model Dropout in MBDP

Rollout-dropout can improve robustness, but dropping a certain number of samples clearly affects the algorithm's sample efficiency. Model-based methods can mitigate this problem. However, since model bias affects the performance of the algorithm, we also need to consider how to control it. Previous works use an ensemble of bootstrapped probabilistic transition models, as in the PETS method [12], to properly incorporate both aleatoric and epistemic uncertainty into the transition model.

In order to mitigate the impact of model discrepancies and flexibly control the accuracy of the model ensemble, we design a model-dropout mechanism. More specifically, we first learn an ensemble of transition models $\{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \ldots\}$; each member of the ensemble is a probabilistic neural network whose outputs $\mu_{\phi_i}, \sigma_{\phi_i}$ parametrize a Gaussian distribution: $s' = \mathcal{M}_{\phi_i}(s, a) \sim \mathcal{N}(\mu_{\phi_i}(s, a), \sigma_{\phi_i}(s, a))$. While training models on samples from the environment, we calculate the bias averaged over the observed state-action pairs $(S, A)$ for each model:

$$\text{bias}(\phi_i) = \mathbb{E}_{S, A \sim \pi, \mathcal{P}} \left\| \mathcal{M}_{\phi_i}(S, A) - \mathcal{P}(S, A) \right\| \tag{3.3}$$

which formulates the distance between next states in model $\mathcal{M}_{\phi_i}$ and in the environment $\mathcal{P}$, where $\|\cdot\|$ is a distance function on the state space $\mathcal{S}$.

We then sort the models of the ensemble in ascending order of the calculated bias and retain a dropout subset with smaller model bias, $\mathcal{M}_\beta = \{\mathcal{M}_{\phi_1}, \mathcal{M}_{\phi_2}, \ldots, \mathcal{M}_{\phi_{N_{1-\beta}}}\}$, i.e., $\mathcal{M}_\beta = \{\mathcal{M}_\phi \mid \phi \in \Phi_\beta\}$, where $\Phi_\beta = \{\phi_i \mid \text{bias}(\phi_i) \leq \text{bias}(\phi_{N_{1-\beta}}), \phi_i \in \Phi\}$ and $N_{1-\beta}$ is the largest index in the ascending order $\{1, 2, \ldots, N_{1-\beta}\}$ after we drop the $\beta$-percentile subset with large bias.

3.3 Theoretical Analysis of MBDP

We now give theoretical guarantees for the robustness and sample efficiency of the MBDP algorithm. All the proofs of this section are detailed in Appendix A.

3.3.1 Guarantee of Robustness

We define robustness as the expected performance in a perturbed environment. Consider a perturbed transition matrix $\hat{\mathcal{P}}_t = \mathcal{P}_t \circ \delta_t$, where $\delta_t \in \mathbb{R}^{\mathcal{S} \times \mathcal{A} \times \mathcal{S}}$ is a multiplicative probability perturbation and $\circ$ is the Hadamard product. Recalling the definition of $\text{CVaR}(\cdot)$ in equation (2.4), we propose the following theorem to provide a guarantee of robustness for the MBDP algorithm.

Theorem 3.1.

It holds that

$$V_\alpha^{\pi, \mathcal{M}} = -\text{CVaR}_\alpha \left( -V^{\pi, \mathcal{M}} \right) = \sup_{\Delta_\alpha} \mathbb{E}_{\hat{\mathcal{P}}} \left[ V^{\pi, \mathcal{M}} \right] \tag{3.4}$$

given the constraint set of perturbations

$$\Delta_\alpha \triangleq \left\{ \delta_i \,\middle|\, \prod_{i=1}^{T} \delta_i(s_i \mid s_{i-1}, a_{i-1}) \leq \frac{1}{\alpha},\ \forall s_i \in \mathcal{S},\ a_i \in \mathcal{A},\ \alpha \in (0, 1) \right\} \tag{3.5}$$

Since $\sup_{\Delta_\alpha} \mathbb{E}_{\hat{\mathcal{P}}}[V^{\pi, \mathcal{M}}]$ means optimizing the expected performance in a perturbed environment, which is exactly our definition of robustness, Theorem 3.1 can be interpreted as an equivalence between optimizing robustness and optimizing the expected return under rollout-dropout, i.e., $V_\alpha^{\pi, \mathcal{M}}$.
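Theorem 3.1 ties the dropout return $V_\alpha^{\pi,\mathcal{M}}$ to a CVaR of the return distribution. As a quick numeric illustration of the quantities in Eqs. (2.3)-(2.4), an empirical VaR/CVaR can be computed from samples (a sketch with illustrative data, not the paper's code):

```python
import numpy as np

def var_cvar(z, alpha):
    """Empirical VaR_alpha and CVaR_alpha of samples `z` per Eqs. (2.3)-(2.4):
    VaR is the alpha-quantile; CVaR averages the tail at or above it."""
    var = np.quantile(z, alpha)
    return var, z[z >= var].mean()

z = np.arange(1.0, 101.0)  # samples 1..100
var, cvar = var_cvar(z, alpha=0.95)
# var is the linear-interpolated 0.95-quantile (about 95.05);
# cvar is the mean of the tail {96, ..., 100}, i.e., 98.0
```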

3.3.2 Guarantee of Efficiency

We first propose Lemma 3.2 to show that the expected return under the rollout-dropout mechanism alone has a bounded discrepancy from the expected return without dropout.

Lemma 3.2.

Suppose $R_m$ is the supremum of the reward function $r(s, a)$, i.e., $R_m = \sup_{s \in \mathcal{S}, a \in \mathcal{A}} r(s, a)$. The expected return of dropout batch rollouts with an individual model $\mathcal{M}_\phi$ has a discrepancy bound:

$$\left| V_\alpha^{\pi, \mathcal{M}_\phi} - V^{\pi, \mathcal{M}_\phi} \right| \leq \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)} R_m \triangleq \epsilon_\alpha \tag{3.6}$$

While Lemma 3.2 only provides a guarantee for the performance of the rollout-dropout mechanism, we now propose Theorem 3.3 to prove that the expected return of the policy derived by model-dropout together with rollout-dropout, i.e., our MBDP algorithm, compared to the expected return when it is deployed in the environment $\mathcal{P}$, has a discrepancy bound.

Theorem 3.3.

Suppose $K \geq 0$ is a constant. The expected return of the MBDP algorithm, i.e., $V_\alpha^{\pi, \mathcal{M}_\beta}$, compared to the expected return when it is deployed in the environment $\mathcal{P}$, i.e., $V^{\pi, \mathcal{P}}$, has a discrepancy bound:

$$\left| V_\alpha^{\pi, \mathcal{M}_\beta} - V^{\pi, \mathcal{P}} \right| \leq D_{\alpha, \beta}(\mathcal{M}) \tag{3.7}$$

where

$$D_{\alpha, \beta}(\mathcal{M}) \triangleq \frac{(1-\beta)\gamma K}{1-\gamma} \epsilon_{\mathcal{M}} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)} R_m \tag{3.8}$$

and

$$\epsilon_{\mathcal{M}} \triangleq \mathbb{E}_{\phi \in \Phi} \left[ \mathbb{E}_{s, a \sim \pi, \mathcal{P}} \left[ \left\| \mathcal{M}_\phi(s, a) - \mathcal{P}(s, a) \right\| \right] \right] \tag{3.9}$$
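The bound in Eq. (3.8) can be evaluated numerically to see how it scales with $\alpha$ and $\beta$; the constants below ($K$, $\epsilon_{\mathcal{M}}$, $R_m$, $\gamma$) are arbitrary illustrative values, not taken from the paper:

```python
def discrepancy_bound(alpha, beta, gamma, K, eps_M, R_m):
    """D_{alpha,beta}(M) from Eq. (3.8): a model-error term plus a dropout term."""
    model_term = (1 - beta) * gamma * K / (1 - gamma) * eps_M
    dropout_term = alpha * (1 + alpha) * (1 - beta) / ((1 - alpha) * (1 - gamma)) * R_m
    return model_term + dropout_term

# Larger alpha (or smaller beta) widens the bound, i.e., lowers efficiency,
# matching the discussion in Section 3.3.3.
d_base = discrepancy_bound(alpha=0.2, beta=0.2, gamma=0.99, K=1.0, eps_M=0.1, R_m=1.0)
d_high_alpha = discrepancy_bound(alpha=0.4, beta=0.2, gamma=0.99, K=1.0, eps_M=0.1, R_m=1.0)
```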

Since the MBDP algorithm is an extension of Dyna-style algorithms [27], a family of model-based reinforcement learning methods which jointly optimize the policy and the transition model, it can be written in the general pattern below:

$$\pi_{k+1}, \mathcal{M}_{k+1}^\beta = \arg\max_{\pi_k, \mathcal{M}_k^\beta} \left[ V^{\pi_k, \mathcal{M}_k^\beta} - D_{\alpha, \beta}(\mathcal{M}_k^\beta) \right] \tag{3.10}$$

where $\pi_k$ denotes the updated policy in the $k$-th iteration and $\mathcal{M}_k^\beta$ denotes the updated dropout model ensemble in the $k$-th iteration. In this setting, we can show that the performance of the policy derived by our MBDP algorithm is approximately monotonically increasing when deployed in the real environment $\mathcal{P}$, with the ability to robustly jump out of local optima.

Proposition 3.4.

The expected return of the policy derived by the general algorithm pattern (3.10) is approximately monotonically increasing when deployed in the real environment $\mathcal{P}$, i.e.,

$$V^{\pi_{k+1}, \mathcal{P}} \geq V^{\pi_k, \mathcal{P}} + (\epsilon_{k+1} - \epsilon_\alpha) \triangleq V^{\pi_k, \mathcal{P}} + \eta \tag{3.11}$$

where $\epsilon_\alpha$ is defined in (3.6) and $\epsilon_{k+1}$ is the update residual:

$$\epsilon_{k+1} \triangleq V^{\pi_{k+1}, \mathcal{P}} - \left[ V_\alpha^{\pi_{k+1}, \mathcal{M}_{k+1}^\beta} - D_{\alpha, \beta}(\mathcal{M}_{k+1}^\beta) \right] \tag{3.12}$$

Intuitively, Proposition 3.4 shows that, under reasonable settings of the parameters $\alpha$ and $\beta$, $\epsilon_{k+1}$ is often a large update value in the early learning stage, while $\epsilon_\alpha$ is a fixed small error bound. Thus $\eta = \epsilon_{k+1} - \epsilon_\alpha$ is positive most of the time in the early learning stage, which guarantees $V^{\pi_{k+1}, \mathcal{P}} \geq V^{\pi_k, \mathcal{P}}$. In the late stage near convergence, updates become slow and $\epsilon_{k+1}$ may be smaller than $\epsilon_\alpha$, so $V^{\pi_{k+1}, \mathcal{P}}$ may drop below $V^{\pi_k, \mathcal{P}}$. This makes the update process try other convergence directions, providing an opportunity to jump out of local optima. We empirically verify this claim in Appendix C.

3.3.3 Flexible control of robustness and efficiency

According to Theorem 3.1, rollout-dropout improves robustness, and the larger $\alpha$ is, the more robustness is improved; conversely, the smaller $\alpha$ is, the weaker the robustness. For model-dropout, when $\beta$ is larger, more models are dropped and the ensemble is more likely to overfit the environment, so robustness decreases; conversely, when $\beta$ is smaller, the model ensemble is better at simulating complex environments, and robustness is better.

Turning to efficiency, note that the bound in equation (3.8), i.e., $D_{\alpha, \beta}(\mathcal{M})$, increases with $\alpha$ and decreases with $\beta$. This means that as $\alpha$ increases or $\beta$ decreases, the bound expands, the accuracy of the algorithm decreases, and the algorithm takes longer to converge, making it less efficient. Conversely, when $\alpha$ decreases or $\beta$ increases, efficiency increases.

The analysis above suggests that MBDP provides a flexible control mechanism that meets different demands on robustness and efficiency by tuning the two corresponding dropout ratios. This conclusion can be summarized as follows; we also empirically verify it in Section 4.

- To get balanced efficiency and robustness: set $\alpha$ and $\beta$ both to moderate values.
- To get better robustness: set $\alpha$ to a larger value and $\beta$ to a smaller value.
- To get better efficiency: set $\alpha$ to a smaller value and $\beta$ to a larger value.

4 Experiments

Our experiments aim to answer the following questions:

- How does MBDP perform on benchmark reinforcement learning tasks compared to state-of-the-art model-based and model-free RL methods?
- Can MBDP find a balance between robustness and efficiency?
- How do the robustness and efficiency of MBDP change when tuning the parameters $\alpha$ and $\beta$?

To answer the posed questions, we need to understand how well our method compares to state-of-the-art model-based and model-free methods and how our design choices affect performance. We evaluate our approach on four continuous control benchmark tasks in the Mujoco simulator [28]: Hopper, Walker, HalfCheetah, and Ant. We also perform an ablation study by removing the dropout modules from our algorithm. Finally, a separate analysis of the hyperparameters ($\alpha$ and $\beta$) is conducted. A depiction of the environments and a detailed description of the experimental setup can be found in Appendix B.

4.1 Comparison with State-of-the-Arts

In this subsection, we compare our MBDP algorithm with state-of-the-art model-free and model-based reinforcement learning algorithms in terms of sample complexity and performance. Specifically, we compare against SAC [29], the state-of-the-art model-free method and a widely accepted baseline. For model-based methods, we compare against MBPO [16], which uses short-horizon model-based rollouts started from samples in the real environment; STEVE [30], which dynamically incorporates data from rollouts into value estimation rather than policy learning; and SLBO [31], a model-based algorithm with performance guarantees. For our MBDP algorithm, we choose $\alpha = 0.2$ and $\beta = 0.2$ as the hyperparameter setting.

Figure 2: Learning curves of our MBDP algorithm and four baselines on different continuous control environments. Solid curves indicate the mean over trials with 5 different seeds. Shaded regions correspond to the standard deviation among trials. Each trial is evaluated every 1000 steps. The dashed reference lines show the asymptotic performance of the SAC algorithm. These results show that our MBDP method learns faster and has better asymptotic performance and sample efficiency than existing model-based algorithms.

Figure 2 shows the learning curves for all methods, along with the asymptotic performance of the model-free SAC algorithm, which does not converge in the region shown. The results highlight the strength of MBDP in terms of performance and sample complexity. In all the Mujoco simulator environments, our MBDP method learns faster and has better efficiency than existing model-based algorithms, which empirically demonstrates the advantage of Dropout Planning.

4.2 Analysis of Robustness

Figure 3: Robustness performance depicted as heat maps for various environment settings. Each heat map represents a set of experiments, and each square represents the average return in one experiment. Colors closer to red (hotter) indicate higher values, i.e., better training in that environment, and vice versa. The four algorithms in the figure are no dropout ($\alpha = 0$, $\beta = 0$), rollout-dropout only ($\alpha$-dropout: $\alpha = 0.2$, $\beta = 0$), model-dropout only ($\beta$-dropout: $\alpha = 0$, $\beta = 0.2$), and both dropouts ($\alpha = 0.2$, $\beta = 0.2$). Each experiment in the Hopper environment stops after 300,000 steps, and each experiment in the HalfCheetah environment stops after 600,000 steps.

Aiming to evaluate the robustness of our MBDP algorithm by testing policies under different environment settings (i.e., different combinations of physical parameters) without any adaptation, we define ranges of mass and friction coefficients, $0.5 \leq C_{\text{mass}} \leq 1.5$ and $0.5 \leq C_{\text{friction}} \leq 1.5$, and modify the environments by scaling the torso mass with coefficient $C_{\text{mass}}$ and the friction of every geom with coefficient $C_{\text{friction}}$.
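A grid of perturbed environment settings like those behind the Figure 3 heat maps could be generated as follows. This is an illustrative sketch: the torso index and the base mass/friction arrays are placeholders (in a real MuJoCo environment these would come from the simulator model), not the paper's code:

```python
import itertools
import numpy as np

def perturbation_grid(low=0.5, high=1.5, n=5):
    """All (C_mass, C_friction) combinations on an n x n grid in [low, high]."""
    coeffs = np.linspace(low, high, n)
    return list(itertools.product(coeffs, coeffs))

def perturb(base_mass, base_friction, c_mass, c_friction, torso_idx=1):
    """Scale the torso mass and every geom's friction, per Section 4.2."""
    mass = np.array(base_mass, dtype=float)
    mass[torso_idx] *= c_mass  # only the torso body is scaled
    friction = np.array(base_friction, dtype=float) * c_friction
    return mass, friction

grid = perturbation_grid()  # 25 (C_mass, C_friction) settings
mass, fric = perturb([0.0, 3.5, 2.0], [[1.0, 0.005, 0.0001]], 1.2, 0.8)
```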

We compare the original MBDP algorithm with the $\alpha$-dropout variation ($\alpha = 0.2$, $\beta = 0$), which keeps only the rollout-dropout; the $\beta$-dropout variation ($\alpha = 0$, $\beta = 0.2$), which keeps only the model-dropout; and the no-dropout variation ($\alpha = 0$, $\beta = 0$), which removes both dropouts. This experiment is conducted in the modified environments mentioned above. The results are presented in Figure 3 in the form of heat maps; each square of a heat map represents the average return that the algorithm achieves after training in one modified environment. Colors closer to red (hotter) indicate higher values, i.e., better training in that environment, and vice versa. If an algorithm achieves good training results only in the central region and inadequate results in the regions far from the center, it is more sensitive to environment perturbations and thus less robust.

Based on the results, we can see that $\alpha$-dropout, using only the rollout-dropout, improves the robustness of the algorithm, while $\beta$-dropout, using only the model-dropout, slightly weakens robustness; the combination of both dropouts, i.e., the MBDP algorithm, achieves robustness close to that of $\alpha$-dropout.

4.3 Ablation Study

In this section, we investigate the sensitivity of the MBDP algorithm to the hyperparameters $\alpha$, $\beta$. We conduct two sets of experiments in both the Hopper and HalfCheetah environments: (1) fix $\beta$ and change $\alpha$ ($\alpha \in [0, 0.5]$, $\beta = 0.2$); (2) fix $\alpha$ and change $\beta$ ($\beta \in [0, 0.5]$, $\alpha = 0.2$).

The experimental results are shown in Figure 4. The first row corresponds to experiments in the Hopper environment and the second row to experiments in the HalfCheetah environment. Columns 1 and 2 correspond to experiments conducted in the perturbed Mujoco environments with modified settings. We construct a total of $2 \times 2 = 4$ different perturbed environments ($C_{\text{mass}} \in \{0.8, 1.2\}$, $C_{\text{friction}} \in \{0.8, 1.2\}$), and average the return values obtained after training a fixed number of steps (Hopper: 120k steps, HalfCheetah: 400k steps) over the four environments. A higher average indicates better overall performance across multiple perturbed environments, implying better robustness; this metric is therefore used to evaluate the robustness for different $\alpha$, $\beta$. Columns 3 and 4 show the returns obtained after a fixed number of steps (Hopper: 120k steps, HalfCheetah: 400k steps) in the standard, unmodified Mujoco environments, used to evaluate the efficiency of the algorithm for different values of $\alpha$, $\beta$. Each box plot corresponds to 10 different random seeds.

Observing the experimental results, we find that robustness has a positive relationship with $\alpha$ and an inverse relationship with $\beta$, while efficiency has an inverse relationship with $\alpha$ and a positive relationship with $\beta$. This verifies our conclusion in Section 3.3.3. In addition, the horizontal dashed lines in Figure 4 indicate the baseline with rollout-dropout and model-dropout removed ($\alpha = \beta = 0$). It can be seen that when $\alpha \in [0.1, 0.2]$ and $\beta \in [0.1, 0.2]$, both the robustness and efficiency of the algorithm exceed the baseline. Therefore, when $\alpha$, $\beta$ are adjusted to a reasonable range of values, we can simultaneously improve robustness and efficiency.

Figure 4: The horizontal axis represents different values of $\alpha$, $\beta$. The vertical axis is the metric for evaluating robustness or efficiency. The horizontal dashed line is the baseline case with both rollout-dropout and model-dropout removed ($\alpha = \beta = 0$). 120k steps are trained for each experiment in the Hopper environment, and 400k steps for each experiment in the HalfCheetah environment. Each box plot corresponds to 10 different random seeds.

5 Conclusions and Future Work

In this paper, we propose the MBDP algorithm to address the dilemma between robustness and sample efficiency. Specifically, MBDP drops overvalued imaginary samples through the rollout-dropout mechanism so as to focus on the bad samples and thereby improve robustness, while the model-dropout mechanism enhances sample efficiency by using only accurate models. Both theoretical analysis and experimental results verify our claims that 1) the MBDP algorithm can provide policies with competitive robustness while achieving state-of-the-art performance; 2) there is, empirically, a seesaw phenomenon between robustness and efficiency, that is, the growth of one causes a slight decline of the other; 3) policies with different trade-offs between performance and robustness can be obtained by tuning the hyperparameters $\alpha$ and $\beta$, ensuring that our algorithm is capable of performing well in a wide range of tasks.
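The two mechanisms can be illustrated with a minimal sketch (an illustration under simplifying assumptions, not the authors' implementation): rollout-dropout ranks imagined rollouts by return and discards the top $\alpha$ fraction, while model-dropout ranks ensemble members by validation error and discards the worst $\beta$ fraction.

```python
def rollout_dropout(rollouts, returns, alpha):
    """Rollout-dropout: discard the top-alpha fraction of imagined rollouts
    (ranked by return), keeping the pessimistic remainder."""
    order = sorted(range(len(rollouts)), key=lambda i: returns[i])
    keep = len(rollouts) - int(alpha * len(rollouts))
    return [rollouts[i] for i in order[:keep]]

def model_dropout(models, val_errors, beta):
    """Model-dropout: keep only the (1 - beta) fraction of ensemble models
    with the smallest validation (one-step prediction) error."""
    order = sorted(range(len(models)), key=lambda i: val_errors[i])
    keep = len(models) - int(beta * len(models))
    return [models[i] for i in order[:keep]]
```

Planning then proceeds only on the kept rollouts generated by the kept models, which is how the two ratios $\alpha$ and $\beta$ trade robustness against efficiency.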

Our future work will incorporate more domain knowledge from robust control to further enhance robustness. We also plan to repackage Double Dropout Planning as a more general module that can be easily embedded in other model-based RL algorithms, and to validate its effectiveness in real-world scenarios. Besides, relevant research in the fields of meta-learning and transfer learning may inspire us to further optimize the design and training procedure of the predictive models. Finally, we can use more powerful function approximators to model the environment.

References

[1] Chong Chen, Taiki Takahashi, Shin Nakagawa, Takeshi Inoue, and Ichiro Kusumi. Reinforcement learning in depression: a review of computational research. Neuroscience & Biobehavioral Reviews, 55:247–267, 2015.
[2] Athanasios S. Polydoros and Lazaros Nalpantidis. Survey of model-based reinforcement learning: Applications on robotics. Journal of Intelligent & Robotic Systems, 86(2):153–173, 2017.
[3] John Schulman, Philipp Moritz, Sergey Levine, Michael Jordan, and Pieter Abbeel. High-dimensional continuous control using generalized advantage estimation. arXiv preprint arXiv:1506.02438, 2015.
[4] Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
[5] Sergey Levine, Chelsea Finn, Trevor Darrell, and Pieter Abbeel. End-to-end training of deep visuomotor policies. The Journal of Machine Learning Research, 17(1):1334–1373, 2016.
[6] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, et al. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015.
[7] David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7587):484–489, 2016.
[8] Marc Peter Deisenroth, Gerhard Neumann, and Jan Peters. A survey on policy search for robotics. Now Publishers, 2013.
[9] Felix Berkenkamp, Matteo Turchetta, Angela P. Schoellig, and Andreas Krause. Safe model-based reinforcement learning with stability guarantees. arXiv preprint arXiv:1705.08551, 2017.
[10] Marc Deisenroth and Carl E. Rasmussen. PILCO: A model-based and data-efficient approach to policy search. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 465–472, 2011.
[11] Yarin Gal, Rowan McAllister, and Carl Edward Rasmussen. Improving PILCO with Bayesian neural network dynamics models. In Data-Efficient Machine Learning Workshop, International Conference on Machine Learning, 2016.
[12] Kurtland Chua, Roberto Calandra, Rowan McAllister, and Sergey Levine. Deep reinforcement learning in a handful of trials using probabilistic dynamics models. In Advances in Neural Information Processing Systems, pages 4754–4765, 2018.
[13] Ali Malik, Volodymyr Kuleshov, Jiaming Song, Danny Nemer, Harlan Seymour, and Stefano Ermon. Calibrated model-based deep reinforcement learning. In International Conference on Machine Learning, pages 4314–4323. PMLR, 2019.
[14] Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. arXiv preprint arXiv:1610.01283, 2016.
[15] Thanard Kurutach, Ignasi Clavera, Yan Duan, Aviv Tamar, and Pieter Abbeel. Model-ensemble trust-region policy optimization. arXiv preprint arXiv:1802.10592, 2018.
[16] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12519–12530, 2019.
[17] Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng. Using inaccurate models in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning, pages 1–8, 2006.
[18] Thomas M. Moerland, Joost Broekens, and Catholijn M. Jonker. Model-based reinforcement learning: A survey. arXiv preprint arXiv:2006.16712, 2020.
[19] Leonid Kuvayev and Rich Sutton. Model-based reinforcement learning with an approximate, learned model. In Proceedings of the Ninth Yale Workshop on Adaptive and Learning Systems, pages 101–105, 1996.
[20] Kavosh Asadi, Dipendra Misra, Seungchan Kim, and Michel L. Littman. Combating the compounding-error problem with a multi-step model. arXiv preprint arXiv:1905.13320, 2019.
[21] Kemin Zhou and John Comstock Doyle. Essentials of Robust Control, volume 104. Prentice Hall, Upper Saddle River, NJ, 1998.
[22] Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczyński. Lectures on Stochastic Programming: Modeling and Theory. SIAM, 2014.
[23] Lerrel Pinto, James Davidson, Rahul Sukthankar, and Abhinav Gupta. Robust adversarial reinforcement learning. arXiv preprint arXiv:1703.02702, 2017.
[24] Chen Tessler, Yonathan Efroni, and Shie Mannor. Action robust reinforcement learning and applications in continuous control. arXiv preprint arXiv:1901.09184, 2019.
[25] Aviv Tamar, Yonatan Glassner, and Shie Mannor. Optimizing the CVaR via sampling. Proceedings of the National Conference on Artificial Intelligence, 4:2993–2999, 2015.
[26] Yinlam Chow, Aviv Tamar, Shie Mannor, and Marco Pavone. Risk-sensitive and robust decision-making: a CVaR optimization approach. In Advances in Neural Information Processing Systems, pages 1522–1530, 2015.
[27] Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM SIGART Bulletin, 2(4):160–163, 1991.
[28] Emanuel Todorov, Tom Erez, and Yuval Tassa. MuJoCo: A physics engine for model-based control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5026–5033. IEEE, 2012.
[29] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
[30] Jacob Buckman, Danijar Hafner, George Tucker, Eugene Brevdo, and Honglak Lee. Sample-efficient reinforcement learning with stochastic ensemble value expansion. In Advances in Neural Information Processing Systems, pages 8224–8234, 2018.
[31] Yuping Luo, Huazhe Xu, Yuanzhi Li, Yuandong Tian, Trevor Darrell, and Tengyu Ma. Algorithmic framework for model-based deep reinforcement learning with theoretical guarantees. In 7th International Conference on Learning Representations (ICLR 2019), pages 1–27, 2019.

Appendix A Proofs

In Appendix A, we provide proofs for Theorem 3.1, Lemma 3.2, Theorem 3.3, and Proposition 3.12. Note that the numbering and citations in the appendices refer to the main manuscript.

A.1 Proof of Theorem 3.1

Proof.

Recall the definition of CVaR (2.4) and $V_\alpha^{\pi,\mathcal{M}}$ (3.2); we take the negative of the rewards to represent the loss in the sense of CVaR. Then we have

$$
\begin{aligned}
\mathrm{CVaR}_\alpha\big(-V^{\pi,\mathcal{M}}\big)
&= \mathbb{E}\Big[-V^{\pi,\mathcal{M}} \;\Big|\; -V^{\pi,\mathcal{M}} \ge \mathrm{VaR}_\alpha\big(-V^{\pi,\mathcal{M}}\big)\Big] \\
&= \mathbb{E}\Big[-\sum_{\mathcal{B}^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t) \;\Big|\; -\sum_{\mathcal{B}^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t) \ge \mathrm{VaR}_\alpha\big(-V^{\pi,\mathcal{M}}\big)\Big] \\
&= -\mathbb{E}\Big[\sum_{\mathcal{B}^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t) \;\Big|\; \sum_{\mathcal{B}^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t) \le -\mathrm{VaR}_\alpha\big(-V^{\pi,\mathcal{M}}\big)\Big]
\end{aligned}
$$

The condition $\sum_{\mathcal{B}^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t) \le -\mathrm{VaR}_\alpha(-V^{\pi,\mathcal{M}})$ in the above equation exactly matches our definition of $\mathcal{B}_\alpha^{\pi,\mathcal{M}}$, that is, equation (3.1). Then we can prove the first part of Theorem 3.1:

$$
-\mathrm{CVaR}_\alpha\big(-V^{\pi,\mathcal{M}}\big)
= \mathbb{E}\Big[\sum_{\mathcal{B}_\alpha^{\pi,\mathcal{M}}} \gamma^t r(s_t,a_t)\Big]
= V_\alpha^{\pi,\mathcal{M}} \tag{A.1}
$$
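The identity (A.1), i.e., that $V_\alpha^{\pi,\mathcal{M}}$ equals $-\mathrm{CVaR}_\alpha(-V^{\pi,\mathcal{M}})$, can be checked numerically on samples. The sketch below assumes the paper's convention that the dropped set has probability mass $\alpha$, so $\mathcal{B}_\alpha$ keeps the worst $(1-\alpha)$ fraction of returns; both functions are illustrative stand-ins, not the paper's code.

```python
import random

def v_alpha(returns, alpha):
    """V_alpha: mean return over B_alpha, taken here as the worst
    (1 - alpha) fraction of sampled returns."""
    keep = len(returns) - int(alpha * len(returns))
    kept = sorted(returns)[:keep]
    return sum(kept) / len(kept)

def neg_cvar_of_neg(returns, alpha):
    """-CVaR_alpha(-V): the same quantity computed entirely on the loss
    side (losses -V sorted from worst down), then negated."""
    keep = len(returns) - int(alpha * len(returns))
    losses = sorted((-r for r in returns), reverse=True)
    return -sum(losses[:keep]) / keep
```

The two computations traverse the same tail from opposite signs, so they agree to floating-point precision.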

Considering ๐”ผ ๐’ซ ^ โข [ โˆ’ ๐‘‰ ๐œ‹ , โ„ณ ] , recall the definition of ๐’ซ ^ , we have that

๐”ผ ๐’ซ ^ โข [ ๐‘‰ ๐œ‹ , โ„ณ ]

โˆ’ ๐”ผ ๐’ซ ^ โข [ โˆ’ ๐‘‰ ๐œ‹ , โ„ณ ]

โˆ’ โˆ‘ ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โˆˆ ๐’ฎ ๐‘‡ + 1 ๐’ซ 0 โข ( ๐‘  0 ) โข ๐›ฟ 0 โข ( ๐‘  0 ) โข โˆ ๐‘ก

1 ๐‘‡ ๐’ซ ๐‘ก โข ( ๐‘  ๐‘ก โˆฃ ๐‘  ๐‘ก โˆ’ 1 ) โข ๐›ฟ ๐‘ก โข ( ๐‘  ๐‘ก โˆฃ ๐‘  ๐‘ก โˆ’ 1 ) โ‹… ( โˆ’ ๐‘‰ ๐œ‹ , โ„ณ )

โˆ‘ ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โˆˆ ๐’ฎ ๐‘‡ + 1 ๐’ซ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โข ๐›ฟ 0 โข ( ๐‘  0 ) โข โˆ ๐‘ก

1 ๐‘‡ ๐›ฟ ๐‘ก โข ( ๐‘ฅ ๐‘ก โˆฃ ๐‘ฅ ๐‘ก โˆ’ 1 ) โ‹… ๐‘‰ ๐œ‹ , โ„ณ

โ‰œ โˆ‘ ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โˆˆ ๐’ฎ ๐‘‡ + 1 ๐’ซ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โข ๐›ฟ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โ‹… ๐‘‰ ๐œ‹ , โ„ณ

Since ๐›ฟ is the random perturbation to the environment as we defined, itโ€™s intuitive that

๐”ผ โข [ ๐›ฟ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) ]

โˆ‘ ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โˆˆ ๐’ฎ ๐‘‡ + 1 ๐’ซ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โข ๐›ฟ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ )

1

(A.2)

Recall the definition of ฮ” ๐›ผ in (3.5), we can prove the second part of Theorem 3.1

sup ฮ” ๐›ผ ๐”ผ ๐’ซ ^ โข [ ๐‘‰ ๐œ‹ , โ„ณ ]

sup ๐›ฟ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โ‰ค 1 ๐›ผ โˆ‘ ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โˆˆ ๐’ฎ ๐‘‡ + 1 ๐’ซ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โข ๐›ฟ โข ( ๐‘  0 , โ€ฆ , ๐‘  ๐‘‡ ) โ‹… ๐‘‰ ๐œ‹ , โ„ณ

โˆ’ CVaR ๐›ผ โข ( โˆ’ ๐‘‰ ๐œ‹ , โ„ณ )

(A.3)

The last equation (A.3) is obtained by equation (A.2) and the Representation Theorem [22] for CVaR.

โˆŽ
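The Representation Theorem invoked for (A.3) states that CVaR equals the extremal reweighted expectation over densities bounded by $1/\alpha$ with unit mass. A small numeric illustration on a discrete loss distribution, where the optimum is attained greedily by putting weight $1/\alpha$ on the largest losses (a sketch, not the paper's code):

```python
def cvar(losses, probs, alpha):
    """CVaR_alpha of a discrete loss distribution: mean of the worst
    alpha-probability tail (the standard 'tail average' form)."""
    pairs = sorted(zip(losses, probs), reverse=True)  # largest losses first
    mass, acc = 0.0, 0.0
    for loss, p in pairs:
        take = min(p, alpha - mass)   # probability mass taken from this atom
        acc += take * loss
        mass += take
        if mass >= alpha - 1e-12:
            break
    return acc / alpha

def cvar_by_representation(losses, probs, alpha):
    """Representation-theorem form: sup of E[delta * L] over reweightings
    0 <= delta <= 1/alpha with E[delta] = 1, solved greedily by assigning
    delta = 1/alpha to the largest losses first."""
    pairs = sorted(zip(losses, probs), reverse=True)
    budget, value = 1.0, 0.0          # remaining expectation budget E[delta]
    for loss, p in pairs:
        d = min(1.0 / alpha, budget / p)
        value += p * d * loss
        budget -= p * d
        if budget <= 1e-12:
            break
    return value
```

Both forms yield the same value, which is the content of the duality used in the proof.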

A.2 Proof of Lemma 3.2

To prove Lemma 3.2, we need to introduce two useful lemmas.

Lemma A.1.

Define

$$
G^{\pi,\mathcal{M}}(s,a)
= \mathbb{E}_{\hat{s}' \sim \mathcal{M}(\cdot \mid s,a)}\big[V^{\pi,\mathcal{M}}(\hat{s}')\big]
- \mathbb{E}_{s' \sim \mathcal{P}(\cdot \mid s,a)}\big[V^{\pi,\mathcal{M}}(s')\big] \tag{A.4}
$$

For any policy $\pi$ and dynamical models $\mathcal{M}, \mathcal{M}'$, we have

$$
V^{\pi,\mathcal{M}'} - V^{\pi,\mathcal{M}}
= \frac{\gamma}{1-\gamma}\,\mathbb{E}_{S,A \sim \pi,\mathcal{M}}\big[G^{\pi,\mathcal{M}'}(S,A)\big] \tag{A.5}
$$

Lemma A.1 is cited directly from existing work (Lemma 4.3 in [31]); we make some modifications to fit our subsequent conclusions. With the above lemma, we first propose Lemma A.2.

Lemma A.2.

Suppose the expected return of the model-based method, $V^{\pi,\mathcal{M}}$, is Lipschitz continuous on the state space $\mathcal{S}$ with Lipschitz constant $K$, and let $\mathcal{P}$ be the transition distribution of the environment. Then

$$
\big|V^{\pi,\mathcal{M}} - V^{\pi,\mathcal{P}}\big| \le \frac{\gamma}{1-\gamma}\,K \cdot \mathrm{bias} \tag{A.6}
$$

where

$$
\mathrm{bias} \triangleq \mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \tag{A.7}
$$

In Lemma A.2, we make the assumption that the expected return $V^{\pi,\mathcal{M}}(s)$ on the estimated model $\mathcal{M}$ is Lipschitz continuous w.r.t. any norm $\|\cdot\|$, i.e.,

$$
\big|V^{\pi,\mathcal{M}}(s) - V^{\pi,\mathcal{M}}(s')\big| \le K\,\|s - s'\|, \quad \forall s, s' \in \mathcal{S} \tag{A.8}
$$

where $K \in \mathbb{R}^+$ is a Lipschitz constant. This assumption means that closer states should yield closer value estimates, which should hold in most scenarios.

Proof.

By the definition of $G^{\pi,\mathcal{M}}(s,a)$ in (A.4) and Assumption (A.8), i.e., the Lipschitz continuity of $V^{\pi,\mathcal{M}}(s)$, we have

$$
\big|G^{\pi,\mathcal{M}}(s,a)\big| \le K\,\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \tag{A.9}
$$

Then we can show that

$$
\begin{aligned}
\big|V^{\pi,\mathcal{M}} - V^{\pi,\mathcal{P}}\big|
&= \frac{\gamma}{1-\gamma}\,\Big|\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big[G^{\pi,\mathcal{M}}(s,a)\big]\Big| && \text{(by Lemma A.1)} \\
&\le \frac{\gamma}{1-\gamma}\,\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big[\big|G^{\pi,\mathcal{M}}(s,a)\big|\big] && \text{(by the triangle inequality)} \\
&\le \frac{\gamma}{1-\gamma}\,\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\,K\,\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| && \text{(by equation (A.9))} \\
&= \frac{\gamma}{1-\gamma}\,K \cdot \mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big\|\mathcal{M}(s,a) - \mathcal{P}(s,a)\big\| \\
&\triangleq \frac{\gamma}{1-\gamma}\,K \cdot \mathrm{bias}
\end{aligned}
$$

∎

Now we prove Lemma 3.2.

Proof.

For two disjoint sets $A$ and $B$, i.e., $A \cap B = \varnothing$, we have the property

$$
\mathbb{E}_{A \cup B}[X] = \mathbb{E}_A[X]\,\mathrm{P}(A) + \mathbb{E}_B[X]\,\mathrm{P}(B) \tag{A.10}
$$

By this property,

$$
\mathbb{E}\Big[\sum_{s_0 \in \mathcal{S},\,\{a_0,s_1,\dots\} \sim \pi,\mathcal{M}_\phi} \gamma^t r(s_t,a_t)\Big]
= (1-\alpha)\cdot\mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\} \sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]
+ \alpha\cdot\mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\} \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]
$$

Recalling definitions (2.1), (2.2) and (3.2), we have

$$
\begin{aligned}
V_\alpha^{\pi,\mathcal{M}_\phi}
&= \mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\} \sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,\mathbb{E}\Big[\sum_{s_0 \in \mathcal{S},\,\{a_0,s_1,\dots\} \sim \pi,\mathcal{M}_\phi} \gamma^t r(s_t,a_t)\Big]
- \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\{s_0,a_0,\dots\} \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,\mathbb{E}_{s \in \mathcal{S}}\big[V^{\pi,\mathcal{M}_\phi}(s)\big]
- \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big] \\
&= \frac{1}{1-\alpha}\,V^{\pi,\mathcal{M}_\phi}
- \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]
\end{aligned} \tag{A.11}
$$

where $\tau \triangleq \{s_0,a_0,\dots\}$. Recalling definition (3.1) and $R_m = \sup_{s \in \mathcal{S},\,a \in \mathcal{A}} r(s,a)$, we have

$$
\begin{aligned}
\mathbb{E}\Big[\sum_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]
&\le \int_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \Big[\sum_{t=0}^{\infty} \gamma^t R_m\Big]\,p(\tau)\,\mathrm{d}\tau \\
&= \Big[\sum_{t=0}^{\infty} \gamma^t\Big]\,R_m \int_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau \\
&= \frac{1}{1-\gamma}\,R_m \int_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau \\
&= \frac{\alpha}{1-\gamma}\,R_m && \text{(by definition of } \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}\text{)}
\end{aligned}
$$

Similarly,

$$
\begin{aligned}
V^{\pi,\mathcal{M}_\phi}
&= \mathbb{E}\Big[\sum_{\tau \sim \mathcal{B}^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]
\le \int_{\tau \sim \mathcal{B}^{\pi,\mathcal{M}_\phi}} \Big[\sum_{t=0}^{\infty} \gamma^t R_m\Big]\,p(\tau)\,\mathrm{d}\tau \\
&= \Big[\sum_{t=0}^{\infty} \gamma^t\Big]\,R_m \int_{\tau \sim \mathcal{B}^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau
= \frac{1}{1-\gamma}\,R_m \int_{\tau \sim \mathcal{B}^{\pi,\mathcal{M}_\phi}} p(\tau)\,\mathrm{d}\tau
= \frac{1}{1-\gamma}\,R_m
\end{aligned}
$$

Based on the above two inequalities and equation (A.11), we have

$$
\begin{aligned}
\big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{M}_\phi}\big|
&= \Big|\frac{\alpha}{1-\alpha}\,V^{\pi,\mathcal{M}_\phi} - \frac{\alpha}{1-\alpha}\,\mathbb{E}\Big[\sum_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]\Big| \\
&\le \frac{\alpha}{1-\alpha}\Big(\big|V^{\pi,\mathcal{M}_\phi}\big| + \Big|\mathbb{E}\Big[\sum_{\tau \not\sim \mathcal{B}_\alpha^{\pi,\mathcal{M}_\phi}} \gamma^t r(s_t,a_t)\Big]\Big|\Big) \\
&\le \frac{\alpha}{1-\alpha}\Big(\frac{1}{1-\gamma}\,R_m + \frac{\alpha}{1-\gamma}\,R_m\Big)
= \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)}\,R_m
\end{aligned} \tag{A.12}
$$

∎
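The bound (A.12) can be sanity-checked empirically. The sketch below draws synthetic returns uniformly in $[0, R_m/(1-\gamma)]$ (the maximal possible discounted return) and compares the gap $|V_\alpha - V|$, with $\mathcal{B}_\alpha$ taken as the worst $(1-\alpha)$ fraction of returns, against the closed-form bound; this is an illustration under stated assumptions, not the paper's code.

```python
import random

def dropout_gap_and_bound(alpha, gamma, r_max, n=5000, seed=0):
    """Compare |V_alpha - V| with the Lemma 3.2 bound
    alpha(1+alpha) / ((1-alpha)(1-gamma)) * R_m for synthetic returns."""
    rng = random.Random(seed)
    horizon_max = r_max / (1.0 - gamma)            # largest possible return
    returns = [rng.uniform(0.0, horizon_max) for _ in range(n)]
    v = sum(returns) / n                           # plain expected return
    keep = n - int(alpha * n)                      # B_alpha keeps worst 1-alpha
    kept = sorted(returns)[:keep]
    v_alpha = sum(kept) / len(kept)                # dropout expected return
    bound = alpha * (1 + alpha) * r_max / ((1 - alpha) * (1 - gamma))
    return abs(v_alpha - v), bound
```

For this return distribution the observed gap grows roughly like $\alpha R_m / (2(1-\gamma))$, comfortably inside the bound.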

A.3 Proof of Theorem 3.3

Proof.

With Lemma A.2 and Lemma 3.2, we can show that

$$
\begin{aligned}
\big|V_\alpha^{\pi,\mathcal{M}^\beta} - V^{\pi,\mathcal{P}}\big|
&= \Big|\int_{\Phi_\beta} V_\alpha^{\pi,\mathcal{M}_\phi}\,p(\phi)\,\mathrm{d}\phi - V^{\pi,\mathcal{P}}\Big|
= \Big|\int_{\Phi_\beta} \big(V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big)\,p(\phi)\,\mathrm{d}\phi\Big| \\
&\le \int_{\Phi_\beta} \big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi && \text{(by the triangle inequality)} \\
&\le \int_{\Phi_\beta} \Big(\big|V^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big| + \big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{M}_\phi}\big|\Big)\,p(\phi)\,\mathrm{d}\phi && \text{(by Lemma 3.2)} \\
&= \int_{\Phi_\beta} \big|V^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi
+ \int_{\Phi_\beta} \big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi
\end{aligned} \tag{A.13}
$$

For the first part of (A.13), let

$$
\epsilon_\mathcal{M} \triangleq \mathbb{E}_{\phi \in \Phi}\Big[\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big[\big\|\mathcal{M}_\phi(s,a) - \mathcal{P}(s,a)\big\|\big]\Big]
$$

denote the general bias between any model $\mathcal{M}$ and the environment transition $\mathcal{P}$. With Lemma A.2, we get

$$
\begin{aligned}
\int_{\Phi_\beta} \big|V^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi
&\le \frac{\gamma K}{1-\gamma} \int_{\Phi_\beta} \mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big[\big\|\mathcal{M}_\phi(s,a) - \mathcal{P}(s,a)\big\|\big]\,p(\phi)\,\mathrm{d}\phi && \text{(by Lemma A.2)} \\
&\le \frac{\gamma K}{1-\gamma}\,|\epsilon_\mathcal{M}| \int_{\Phi_\beta} |p(\phi)|\,\mathrm{d}\phi
= (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_\mathcal{M}
\end{aligned} \tag{A.14}
$$

For the second part of (A.13), by Lemma 3.2 we can show that

$$
\begin{aligned}
\int_{\Phi_\beta} \big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi
&\le \frac{\alpha(1+\alpha)}{(1-\alpha)(1-\gamma)}\,R_m \int_{\Phi_\beta} p(\phi)\,\mathrm{d}\phi && \text{(by Lemma 3.2)} \\
&= \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m
\end{aligned} \tag{A.15}
$$

Going back to equation (A.13), it follows that

$$
\begin{aligned}
\big|V_\alpha^{\pi,\mathcal{M}^\beta} - V^{\pi,\mathcal{P}}\big|
&\le \int_{\Phi_\beta} \big|V^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{P}}\big|\,p(\phi)\,\mathrm{d}\phi
+ \int_{\Phi_\beta} \big|V_\alpha^{\pi,\mathcal{M}_\phi} - V^{\pi,\mathcal{M}_\phi}\big|\,p(\phi)\,\mathrm{d}\phi \\
&\le (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_\mathcal{M} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m && \text{(by equations (A.14) and (A.15))} \\
&\triangleq D_{\alpha,\beta}(\mathcal{M})
\end{aligned}
$$

∎
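The discrepancy bound $D_{\alpha,\beta}(\mathcal{M})$ derived above is easy to evaluate directly, which makes its monotonicity (growing with $\alpha$, shrinking with $\beta$) visible at a glance; the constants below are illustrative placeholders, not values from the paper.

```python
def discrepancy_bound(alpha, beta, gamma, lip_k, eps_model, r_max):
    """D_{alpha,beta}(M) from Theorem 3.3: a model-bias term plus a
    rollout-dropout term, both scaled by the model-dropout factor (1 - beta)."""
    model_term = (1 - beta) * gamma * lip_k * eps_model / (1 - gamma)
    dropout_term = (alpha * (1 + alpha) * (1 - beta) * r_max
                    / ((1 - alpha) * (1 - gamma)))
    return model_term + dropout_term
```

This is the quantitative face of the seesaw observed in the experiments: raising $\alpha$ inflates the dropout term, while raising $\beta$ uniformly shrinks the whole bound at the cost of discarding models.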

A.4 Proof of Proposition 3.12

Proof.

With Theorem 3.3, i.e., $|V_\alpha^{\pi,\mathcal{M}^\beta} - V^{\pi,\mathcal{P}}| \le D_{\alpha,\beta}(\mathcal{M})$, we have

$$
V^{\pi_{k+1},\mathcal{P}} \ge V_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{k+1}^\beta\big) \tag{A.16}
$$

Since the LHS of (A.16), $V^{\pi_{k+1},\mathcal{P}}$, is no smaller than the RHS, $V_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}(\mathcal{M}_{k+1}^\beta)$, we can turn the inequality into an equality by adding an update residual $\epsilon_{k+1}$ to the RHS:

$$
V^{\pi_{k+1},\mathcal{P}} = V_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{k+1}^\beta\big) + \epsilon_{k+1} \tag{A.17}
$$

where

$$
\epsilon_{k+1} \triangleq V^{\pi_{k+1},\mathcal{P}} - \Big[V_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{k+1}^\beta\big)\Big] \tag{A.18}
$$

With the general pattern equation (3.10), we have

$$
V_\alpha^{\pi_{k+1},\mathcal{M}_{k+1}^\beta} - D_{\alpha,\beta}\big(\mathcal{M}_{k+1}^\beta\big) \ge V^{\pi_k,\mathcal{P}} - D_{\alpha,\beta}(\mathcal{P}) \tag{A.19}
$$

Since

$$
\epsilon_\mathcal{P}
= \mathbb{E}_{\phi \in \Phi}\Big[\mathbb{E}_{s,a \sim \pi,\mathcal{P}}\big[\big\|\mathcal{P}(s,a) - \mathcal{P}(s,a)\big\|\big]\Big]
= \mathbb{E}_{\phi \in \Phi}[0]
= 0 \quad \text{(by equation (3.9))}
$$

we can show that

$$
D_{\alpha,\beta}(\mathcal{P})
= (1-\beta)\,\frac{\gamma K}{1-\gamma}\,\epsilon_\mathcal{P} + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m
= 0 + \frac{\alpha(1+\alpha)(1-\beta)}{(1-\alpha)(1-\gamma)}\,R_m
= \epsilon_\alpha \tag{A.20}
$$

With equations (A.17), (A.19) and (A.20), it follows that

$$
V^{\pi_{k+1},\mathcal{P}} \ge V^{\pi_k,\mathcal{P}} + \big[\epsilon_{k+1} - \epsilon_\alpha\big] \tag{A.21}
$$

The update residual $\epsilon_{k+1}$ is much larger than $\epsilon_\alpha$ most of the time during the training period, implying that $\epsilon_{k+1} - \epsilon_\alpha \ge 0$ almost surely, i.e.,

$$
V^{\pi_{k+1},\mathcal{P}} \overset{a.s.}{\ge} V^{\pi_k,\mathcal{P}} \tag{A.22}
$$

where "a.s." means "almost surely". We finally get

$$
V^{\pi_0,\mathcal{P}} \overset{a.s.}{\le} \cdots \overset{a.s.}{\le} V^{\pi_k,\mathcal{P}} \overset{a.s.}{\le} V^{\pi_{k+1},\mathcal{P}} \overset{a.s.}{\le} \cdots \tag{A.23}
$$

∎

Appendix B Experiment Details

B.1 Environment Settings

In our experiments, we evaluate our approach on four continuous control benchmark tasks in the MuJoCo simulator [28]: Hopper, Walker2d, HalfCheetah and Ant.

- Hopper: make a two-dimensional one-legged robot hop forward as fast as possible.
- Walker2d: make a two-dimensional bipedal robot walk forward as fast as possible.
- HalfCheetah: make a two-dimensional cheetah robot run forward as fast as possible.
- Ant: make a four-legged creature walk forward as fast as possible.

Figure 5: Illustrations of the four MuJoCo simulated robot environments used in our experiments: (a) Hopper, (b) Walker2d, (c) HalfCheetah, (d) Ant.

We also modify the XML model configuration files of the MuJoCo environments in our robustness experiments, aiming to evaluate the robustness of our MBDP algorithm. More specifically, we scale the standard (OpenAI Gym) friction of each geom part by a coefficient $C_{\text{friction}} \in [0.5, 1.5]$, and scale the standard mass of the torso part by a coefficient $C_{\text{mass}} \in [0.5, 1.5]$.
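A sketch of how such a perturbation sweep can be applied programmatically. The paper itself edits the XML files; the model attribute names below (`body_name2id`, `body_mass`, `geom_friction`) follow the mujoco_py binding and are assumptions that may need adapting to other MuJoCo bindings.

```python
import itertools

def perturbation_grid(mass_coeffs=(0.8, 1.2), friction_coeffs=(0.8, 1.2)):
    """All (C_mass, C_friction) combinations used in the robustness sweep."""
    return list(itertools.product(mass_coeffs, friction_coeffs))

def apply_perturbation(env, c_mass, c_friction):
    """Scale the torso mass and all geom frictions of a loaded MuJoCo model.

    Attribute names are assumptions based on mujoco_py; alternatively, edit
    the environment's XML configuration directly as done in the paper.
    """
    torso_id = env.model.body_name2id("torso")
    env.model.body_mass[torso_id] *= c_mass
    env.model.geom_friction[:] = env.model.geom_friction * c_friction
    return env
```

Scaling the loaded model in place avoids regenerating XML files for each of the four perturbed configurations.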

B.2 Hyperparameter Settings

| environment | epochs | env steps per epoch | rollout batch | rollout length | policy updates per env step | model update per env step | $\alpha$ | $\beta$ | $\gamma$ | ensemble size | network arch |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Hopper | 120 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| Walker2d | 300 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| HalfCheetah | 400 | 1000 | 10 | 5 | 40 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |
| Ant | 300 | 1000 | 10 | 5 | 20 | 250 | 0.2 | 0.2 | 0.99 | 10 | MLP (4 × 200) |

Table 1: Hyperparameter settings for the MBDP results shown in Figure 2 of the main manuscript.

B.3 Computational Details

| CPU | GPU | RAM |
|---|---|---|
| Intel E5-2680 @ 2.4GHz (56 cores) | Tesla P40 (24GB) × 8 | 256GB |

Table 2: Computational resources for our experiments.

| Environment | Hopper | Walker2d | HalfCheetah | Ant |
|---|---|---|---|---|
| Time | ≈ 10 hours | ≈ 20 hours | ≈ 32 hours | ≈ 48 hours |

Table 3: Computing time of a single experiment in each environment.

Appendix C Empirical Demonstration of Proposition 3.12

Figure 6: Scaled residual curve in the Hopper and HalfCheetah environments.

The observed residual values are shown in Figure 6. The horizontal axis is the number of training epochs, and the vertical axis represents the estimated value (scaled to 1) of $(\epsilon_k - \epsilon_\alpha)$. As can be seen from the figure, $(\epsilon_k - \epsilon_\alpha)$ is greater than 0 most of the time, and is occasionally less than 0 close to convergence. This verifies our claim in Section 3.3.2.
