Title: Value function estimation using conditional diffusion models for control

URL Source: https://arxiv.org/html/2306.07290

Markdown Content:

Bogdan Mazoure, Walter Talbott, Miguel Angel Bautista, Devon Hjelm, Alexander Toshev, Josh Susskind

Apple

###### Abstract

A fairly reliable trend in deep reinforcement learning is that performance scales with the number of parameters, provided a complementary scaling in the amount of training data. As the appetite for large models increases, it is imperative to address, sooner rather than later, the potential problem of running out of high-quality demonstrations. In that case, instead of collecting only new data via costly human demonstrations or risking a simulation-to-real transfer with uncertain effects, it would be beneficial to leverage vast amounts of readily available low-quality data. Since classical control algorithms such as behavior cloning or temporal difference learning cannot be used on reward-free or action-free data out of the box, this solution warrants novel training paradigms for continuous control. We propose a simple algorithm called Diffused Value Function (DVF), which learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model. This model can be efficiently learned from state sequences (i.e., without access to reward or action labels), and subsequently used to estimate the value of each action out of the box. We show how DVF can be used to efficiently capture the state visitation measure of multiple controllers, and report promising qualitative and quantitative results on challenging robotics benchmarks.

1 Introduction
--------------

The success of foundation models[[1](https://arxiv.org/html/2306.07290#bib.bib1), [2](https://arxiv.org/html/2306.07290#bib.bib2)] is often attributed to their size[[3](https://arxiv.org/html/2306.07290#bib.bib3)] and to abundant training data, a small portion of which is usually annotated by a preference model trained on human feedback[[4](https://arxiv.org/html/2306.07290#bib.bib4)]. Similarly, the robotics community has seen a surge in large multimodal learners[[5](https://arxiv.org/html/2306.07290#bib.bib5), [6](https://arxiv.org/html/2306.07290#bib.bib6), [7](https://arxiv.org/html/2306.07290#bib.bib7)], which also require vast amounts of high-quality training demonstrations. What can we do when annotating demonstrations is prohibitively costly and the sim2real gap is too large? Recent works show that partially pre-training the controller on large amounts of low-return data with missing information can help accelerate learning from optimal demonstrations[[8](https://arxiv.org/html/2306.07290#bib.bib8), [9](https://arxiv.org/html/2306.07290#bib.bib9)]. A major drawback of these works lies in the compounding prediction error: training a preference model on optimal demonstrations and subsequently using it in reinforcement learning (RL) or behavior cloning (BC) carries both the uncertainty from preference bootstrapping and that of the RL algorithm itself. Instead, we opt for a different path: decompose the value function, a fundamental quantity for continuous control, into components that depend only on states, only on rewards, and only on actions. These individual pieces can then be trained separately on different subsets of the available data and recombined to construct a value function estimate, as shown in later sections.

Factorizing the value function into dynamics, decision and reward components poses a major challenge, since it requires disentangling the non-stationarity induced by the controller from that of the dynamical system. Model-based approaches address this problem by learning a differentiable transition model of the dynamical system, through which information from the controller can be propagated[[10](https://arxiv.org/html/2306.07290#bib.bib10), [11](https://arxiv.org/html/2306.07290#bib.bib11), [12](https://arxiv.org/html/2306.07290#bib.bib12), [13](https://arxiv.org/html/2306.07290#bib.bib13)]. While these approaches can work well on some benchmarks, they can be complex and expensive: the model must predict high-dimensional observations, and determining the value of an action may require unrolling the model for multiple steps into the future.

In this paper, we show how to estimate the environment dynamics efficiently while avoiding the dependence of model-based rollouts on the episode horizon. The model learned by our method _(1)_ does not require predicting high-dimensional observations at every timestep, _(2)_ directly predicts the future state without autoregressive unrolling, and _(3)_ can be used to estimate the value function without expensive rollouts or temporal difference learning, nor does it need action or reward labels during the pre-training phase. Precisely, we learn a generative model of the discounted state occupancy measure, i.e., a function which takes in a state, action and timestep, and returns a future state with probability proportional to the likelihood of visiting that state under some fixed policy. This occupancy measure resembles successor features[[14](https://arxiv.org/html/2306.07290#bib.bib14)], and can be seen as their generative, normalized version. By scoring these future states with the corresponding rewards, we form an unbiased estimate of the value function. We name the proposed algorithm Diffused Value Function (DVF). Because DVF represents multi-step transitions implicitly, it avoids having to predict high-dimensional observations at every timestep and thus scales to long-horizon tasks with high-dimensional observations. The same algorithm handles settings where reward-free and action-free data is provided, which classical TD-based methods cannot use directly. Specifically, the generative model can be pre-trained on sequences of states without reward or action labels, provided that some representation of the data-generating process (i.e., the logging policy) is known.

We highlight the strengths of DVF both qualitatively and quantitatively on challenging robotic tasks, and show how generative models can be used to accelerate _tabula rasa_ learning.

2 Preliminaries
---------------

#### Reinforcement learning

Let $M$ be a Markov decision process (MDP) defined by the tuple $M=\langle\mathcal{S},S_{0},\mathcal{A},\mathcal{T},r,\gamma\rangle$, where $\mathcal{S}$ is a state space, $S_{0}\subseteq\mathcal{S}$ is the set of starting states, $\mathcal{A}$ is an action space, $\mathcal{T}=p(\cdot|s_{t},a_{t}):\mathcal{S}\times\mathcal{A}\to\Delta(\mathcal{S})$ is a one-step transition function ($\Delta(\mathcal{X})$ denotes the set of distributions over the space $\mathcal{X}$), $r:\mathcal{S}\times\mathcal{A}\to[r_{\text{min}},r_{\text{max}}]$ is a reward function and $\gamma\in[0,1)$ is a discount factor. The system starts in one of the initial states $s_{0}\in S_{0}$. At every timestep $t>0$, the policy $\pi:\mathcal{S}\to\Delta(\mathcal{A})$ samples an action $a_{t}\sim\pi(\cdot|s_{t})$. The environment transitions into a next state $s_{t+1}\sim\mathcal{T}(\cdot|s_{t},a_{t})$ and emits a reward $r_{t}=r(s_{t},a_{t})$. The aim is to learn a Markovian policy $\pi(a\mid s)$ that maximizes the return, defined as the discounted sum of rewards over an episode of length $H$:

$$\max_{\pi\in\Pi}\mathbb{E}_{p^{\pi}_{0:H},S_{0}}\left[\sum_{t=0}^{H}\gamma^{t}r(s_{t},a_{t})\right],\tag{1}$$

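As a concrete illustration, the discounted return inside the expectation of Eq. (1) can be computed directly from a sampled reward sequence. The sketch below is a minimal NumPy version under the assumption of a generic `rewards` array; it is not tied to any particular environment.

```python
import numpy as np

def discounted_return(rewards, gamma=0.99):
    """Discounted sum of rewards, sum_t gamma^t * r_t, as in Eq. (1)."""
    rewards = np.asarray(rewards, dtype=float)
    discounts = gamma ** np.arange(len(rewards))
    return float(np.sum(discounts * rewards))

# A constant reward of 1 over H+1 steps recovers the geometric partial sum
# (1 - gamma^(H+1)) / (1 - gamma).
H = 99
ret = discounted_return(np.ones(H + 1), gamma=0.9)
```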
where $p^{\pi}_{t:t+K}$ denotes the joint distribution of $\{s_{t+k},a_{t+k}\}_{k=1}^{K}$ obtained by rolling out $\pi$ in the environment for $K$ timesteps starting at timestep $t$. To solve [Eq. 1](https://arxiv.org/html/2306.07290#S2.E1 "1 ‣ Reinforcement learning ‣ 2 Preliminaries ‣ Value function estimation using conditional diffusion models for control"), value-based RL algorithms estimate the expected future discounted sum of rewards, known as the _value function_:

$$Q^{\pi}(s_{t},a_{t})=\mathbb{E}_{p_{t}^{\pi}}\left[\sum_{k=1}^{H}\gamma^{k-1}r(s_{t+k},a_{t+k})\,\Big|\,s_{t},a_{t}\right],\tag{2}$$

for $s_{t}\in\mathcal{S},a_{t}\in\mathcal{A}$, and $V^{\pi}(s_{t})=\mathbb{E}_{\pi}[Q(s_{t},a_{t})]$. Alternatively, the value function can be written as the expectation of the reward over the discounted occupancy measure:

$$Q^{\pi}(s_{t},a_{t})=\frac{1-\gamma^{H-t}}{1-\gamma}\,\mathbb{E}_{s,a\sim\rho^{\pi}(s_{t},a_{t}),\pi(s)}[r(s,a)]\tag{3}$$

where $\rho^{\pi}(s|s_{t},a_{t})=(1-\gamma)\sum_{\Delta t=1}^{H}\gamma^{\Delta t-1}\rho^{\pi}(s|s_{t},a_{t},\Delta t,\pi)$ and $\rho^{\pi}(s|s_{t},a_{t},\Delta t,\pi)=\mathbb{P}[S_{t+\Delta t}=s|s_{t},a_{t};\pi]$, as defined in [[15](https://arxiv.org/html/2306.07290#bib.bib15)].

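For a single trajectory, the discounted occupancy measure reduces to truncated-geometric weights $(1-\gamma)\gamma^{\Delta t-1}/(1-\gamma^{H})$ over future timesteps, so the equivalence between the discounted-sum form (Eq. 2) and the occupancy-measure form (Eq. 3, here with $t=0$) can be checked numerically. A minimal sketch with synthetic rewards (not from any real environment):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, H = 0.9, 50
rewards = rng.uniform(size=H)  # rewards at offsets 1..H along one trajectory

# Discounted-sum form of the value (Eq. 2 with t = 0).
q_sum = float(np.sum(gamma ** np.arange(H) * rewards))

# Occupancy-measure form (Eq. 3): expected reward under the normalized
# truncated-geometric weights, rescaled by (1 - gamma^H) / (1 - gamma).
weights = (1 - gamma) * gamma ** np.arange(H) / (1 - gamma ** H)
q_occ = (1 - gamma ** H) / (1 - gamma) * float(np.sum(weights * rewards))

assert abs(q_sum - q_occ) < 1e-9
```

Since the trajectory is fixed, the expectation is exact rather than sampled; for a stochastic policy the same identity holds in expectation over trajectories.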
This decomposition of the value function has been shown to be useful in previous works based on the successor representation[[14](https://arxiv.org/html/2306.07290#bib.bib14), [16](https://arxiv.org/html/2306.07290#bib.bib16)] and $\gamma$-models[[15](https://arxiv.org/html/2306.07290#bib.bib15)], and we leverage this formulation below to build a diffusion-based estimate of the value function.

#### Diffusion models

Diffusion models are a class of latent variable models[[17](https://arxiv.org/html/2306.07290#bib.bib17)] which represent the data distribution as an iterative denoising process:

$$\bm{x}_{0}\sim p(\bm{x}_{0})=\mathbb{E}_{p_{\theta}(\bm{x}_{1:T})}\left[p_{\theta}(\bm{x}_{0}|\bm{x}_{1:T})\right],\qquad p_{\theta}(\bm{x}_{0:T})=p(\bm{x}_{T})\prod_{t_{d}=1}^{T}p_{\theta}(\bm{x}_{t_{d}-1}|\bm{x}_{t_{d}}),\tag{4}$$

for $T$ latents $\bm{x}_{1:T}$ with conditional distributions parameterized by $\theta$. The joint distribution of data and latents factorizes into a Markov chain with parameters

$$p_{\theta}(\bm{x}_{t_{d}-1}|\bm{x}_{t_{d}})=\mathcal{N}\big(\mu_{\theta}(\bm{x}_{t_{d}},t_{d}),\,\Sigma_{\theta}(\bm{x}_{t_{d}},t_{d})\big),\quad\bm{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\tag{5}$$

which is called the _reverse_ process. The posterior $q(\bm{x}_{1:T}|\bm{x}_{0})$, called the _forward_ process, typically takes the form of a Markov chain that adds progressively increasing Gaussian noise according to a variance schedule $\beta(t_{d})$:

$$q(\bm{x}_{1:T}|\bm{x}_{0})=\prod_{t_{d}=1}^{T}q(\bm{x}_{t_{d}}|\bm{x}_{t_{d}-1}),\quad q(\bm{x}_{t_{d}}|\bm{x}_{t_{d}-1})=\mathcal{N}\big(\sqrt{1-\beta(t_{d})}\,\bm{x}_{t_{d}-1},\,\beta(t_{d})\mathbf{I}\big)\tag{6}$$

where $\beta$ can be either learned or fixed as a hyperparameter. The parameters $\theta$ of the reverse process are found by minimizing the variational upper bound on the negative log-likelihood of the data:

$$\mathbb{E}_{q}\bigg[-\log p(\bm{x}_{T})-\sum_{t_{d}=1}^{T}\log\frac{p_{\theta}(\bm{x}_{t_{d}-1}|\bm{x}_{t_{d}})}{q(\bm{x}_{t_{d}}|\bm{x}_{t_{d}-1})}\bigg]\tag{7}$$

Later works, such as Denoising Diffusion Probabilistic Models [DDPM, [18](https://arxiv.org/html/2306.07290#bib.bib18)], make specific assumptions about the form of $p_{\theta}$, leading to the following simplified loss with modified variance scale $\bar{\alpha}(t_{d})=\prod_{s=1}^{t_{d}}(1-\beta(s))$:

$$\ell_{\text{Diffusion}}=\mathbb{E}_{\bm{x}_{0},t_{d},\epsilon}\Big[\big\|\epsilon-\epsilon_{\theta}\big(\sqrt{\bar{\alpha}(t_{d})}\,\bm{x}_{0}+\sqrt{1-\bar{\alpha}(t_{d})}\,\epsilon,\;t_{d}\big)\big\|_{2}^{2}\Big],$$

$$\bm{x}_{0}\sim q(\bm{x}_{0}),\quad t_{d}\sim\text{Uniform}(1,T),\quad\epsilon\sim\mathcal{N}(\mathbf{0},\mathbf{I})\tag{8}$$

by training a denoising network $\epsilon_{\theta}$ to predict the noise $\epsilon$ from a corrupted version of $\bm{x}_{0}$ at diffusion timestep $t_{d}$. Samples from $p(\bm{x}_{0})$ can be generated by following the reverse process:

$$\bm{x}_{t_{d}-1}=\frac{1}{\sqrt{\alpha(t_{d})}}\bigg(\bm{x}_{t_{d}}-\frac{1-\alpha(t_{d})}{\sqrt{1-\bar{\alpha}(t_{d})}}\,\epsilon_{\theta}(\bm{x}_{t_{d}},t_{d})\bigg)+\sigma_{t_{d}}\bm{z},\quad\bm{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I}),\;\bm{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}).\tag{9}$$

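The DDPM machinery above (forward corruption, simplified loss, and ancestral sampling, Eqs. 6, 8 and 9) fits in a few lines of NumPy. In this sketch `eps_model` stands in for the learned denoising network $\epsilon_{\theta}$ and is left as an abstract callable; the linear $\beta$ schedule, the number of steps `T`, and the use of 0-indexed timesteps are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
beta = np.linspace(1e-4, 0.02, T)      # fixed variance schedule beta(t_d)
alpha = 1.0 - beta
alpha_bar = np.cumprod(alpha)          # alpha_bar(t_d) = prod_s (1 - beta(s))

def corrupt(x0, td, eps):
    """Closed-form forward process: sqrt(a_bar)*x0 + sqrt(1 - a_bar)*eps (Eq. 8)."""
    return np.sqrt(alpha_bar[td]) * x0 + np.sqrt(1.0 - alpha_bar[td]) * eps

def diffusion_loss(eps_model, x0):
    """Simplified DDPM objective of Eq. (8) for a single sample."""
    td = rng.integers(0, T)            # 0-indexed diffusion timestep
    eps = rng.standard_normal(x0.shape)
    return float(np.mean((eps - eps_model(corrupt(x0, td, eps), td)) ** 2))

def sample(eps_model, shape):
    """Ancestral sampling via the reverse process of Eq. (9)."""
    x = rng.standard_normal(shape)     # x_T ~ N(0, I)
    for td in reversed(range(T)):
        z = rng.standard_normal(shape) if td > 0 else np.zeros(shape)
        coef = (1.0 - alpha[td]) / np.sqrt(1.0 - alpha_bar[td])
        x = (x - coef * eps_model(x, td)) / np.sqrt(alpha[td]) + np.sqrt(beta[td]) * z
    return x
```

In DVF the denoiser is additionally conditioned on $(s_t,\Delta t,\phi(\pi))$, which would simply become extra arguments to `eps_model`.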
3 Methodology
-------------

Through the lens of [Eq. 3](https://arxiv.org/html/2306.07290#S2.E3 "3 ‣ Reinforcement learning ‣ 2 Preliminaries ‣ Value function estimation using conditional diffusion models for control"), the value function can be decomposed into three components: (1) an occupancy measure $\rho^{\pi}(s)$, which depends on states and the policy; (2) a reward model $r(s,a)$, which depends on states and actions; and (3) a policy representation $\phi(\pi)$, which depends only on the policy. Equipped with these components, we could estimate the value of any given policy in a zero-shot manner. However, two major issues arise:

*   For offline training, $\rho^{\pi}$ has to be _explicitly_ conditioned on the target policy via the policy representation $\phi(\pi)$; online training can instead condition implicitly by re-collecting data with the current policy $\pi$.

* Maximizing $Q(s,a,\phi(\pi))$ directly, as opposed to indirectly via $r(s,a)+\gamma\mathbb{E}[V(s^{\prime},\phi(\pi))]$, is too costly due to the large size of diffusion denoising networks.

Figure 1: The three crucial components of DVF: (left) constructing tuples $(s_{t},s_{t+1},s_{t+\Delta t})$ for training the diffusion model; (middle) the architecture of the diffusion model, which takes the noisy future state $x$, the current state $s_{t}$, the time offset $\Delta t$, the policy embedding $\phi(\pi)$ and the diffusion timestep $t_{d}$, and processes them with the Perceiver I/O architecture [[19](https://arxiv.org/html/2306.07290#bib.bib19)] to predict the noise; (right) the sampling mechanism based on DDPM [[18](https://arxiv.org/html/2306.07290#bib.bib18)], used together with a reward model to estimate the value function.
If both challenges are mitigated, the value function $V^{\pi}(s_{t})$ can be estimated by first sampling a collection of $n$ states from the learned diffusion model, $s_{t+\Delta t,1},..,s_{t+\Delta t,n}\sim\rho^{\pi}(s_{t})$, and then evaluating the reward predictor at those states: $\sum_{i=1}^{n}r(s_{t+\Delta t,i},\pi(s_{t+\Delta t,i}))\propto V^{\pi}(s_{t})$. A similar result can be derived for the state-action value function by training a state-action conditioned diffusion model $\rho^{\pi}(s_{t},a_{t})$, from which a policy can be decoded using, e.g., the information projection method of [[20](https://arxiv.org/html/2306.07290#bib.bib20)].
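This Monte-Carlo estimator is straightforward to sketch. Below, `sample_future_state` stands in for drawing $s_{t+\Delta t}\sim\rho^{\pi}(s_{t})$ from the diffusion model and `reward_fn` for the learned reward predictor (both hypothetical names; a sketch of the estimator, not the paper's code):

```python
import numpy as np

def estimate_value(sample_future_state, reward_fn, policy, s_t, n=32):
    """Monte-Carlo value estimate: sample n future states from the
    occupancy model rho^pi(. | s_t) and average the predicted rewards,
    giving a quantity proportional to V^pi(s_t)."""
    rewards = []
    for _ in range(n):
        s_future = sample_future_state(s_t)   # s ~ rho^pi(s_t), e.g. via DDPM
        rewards.append(reward_fn(s_future, policy(s_future)))
    return float(np.mean(rewards))
```

No temporal difference backup is needed: the discounted sum over future timesteps is absorbed into the occupancy model's sampling distribution.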
### 3.1 Challenge 1: Off-policy evaluation through conditioning
Explicit policy conditioning has been (and remains) a hard task in reinforcement learning. Assuming that the policy $\pi$ has a lossless finite-dimensional representation $\phi(\pi)$, passing it to an ideal value function network as $Q(s,a,\phi(\pi))$ could allow zero-shot policy evaluation. That is, given two policy sets $\Pi_{1},\Pi_{2}\subseteq\Pi$, training $Q(s,a,\phi(\pi))$ on $\{s,a,\phi(\pi)\},\;\pi\in\Pi_{1}$ and then swapping out $\phi(\pi)$ for $\phi(\pi^{\prime})$, where $\pi^{\prime}\in\Pi_{2}$, would immediately give an estimate of $Q^{\pi^{\prime}}$.
We address this issue by studying sufficient statistics of $\pi$. Since the policy is a conditional distribution, it is possible to use a kernel embedding for conditional distributions, such as a Reproducing Kernel Hilbert Space [[21](https://arxiv.org/html/2306.07290#bib.bib21), [22](https://arxiv.org/html/2306.07290#bib.bib22)], although this is ill-suited for high-dimensional, non-stationary problems. Recent works have studied using the trajectories $\{s_{i},a_{i}\}_{i}^{n}$ as a sufficient statistic for $\pi$ evaluated at _key_ states $s_{1},..,s_{n}$ [[23](https://arxiv.org/html/2306.07290#bib.bib23)]. Similarly, we studied two policy representations:
1. Scalar: Given a countable policy set $\Pi$ indexed by $i=1,2,..$, we let $\phi(\pi)=i$. One example of such a set is the value improvement path, i.e. indexing policies by the number of training gradient steps performed since initialization.
2. Sequential: Inspired by [[21](https://arxiv.org/html/2306.07290#bib.bib21)], we embed $\pi$ using its rollouts in the environment, $\{s_{i},a_{i}\}_{i}^{n}$. In the case where actions are unknown, the sequence of states alone can be sufficient for recovering $\pi$, under some mild assumptions (one such case is MDPs with deterministic dynamics, which allow the corresponding action sequence to be recovered).
Both representations have their own advantages: scalar representations are compact and introduce an ordering on the policy set $\Pi$, while sequential representations can handle cases where no natural ordering is present in $\Pi$ (e.g. learning from offline data).
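As a concrete illustration, the two representations might be constructed as follows. This is a hypothetical sketch (the paper's actual embedding networks may differ); `max_len` and the zero-padding convention are assumptions for fixed-size batching:

```python
import numpy as np

def scalar_embedding(policy_index):
    # phi(pi) = i for a countable policy set indexed by i = 1, 2, ..
    return np.array([float(policy_index)])

def sequential_embedding(states, actions=None, max_len=16):
    # phi(pi) = a fixed-size (truncated / zero-padded) rollout {s_i, a_i};
    # when actions are unavailable, the state sequence alone is used.
    traj = states if actions is None else np.concatenate([states, actions], axis=-1)
    traj = np.asarray(traj, dtype=float)[:max_len]
    pad = np.zeros((max_len - traj.shape[0], traj.shape[-1]))
    return np.concatenate([traj, pad], axis=0)
```

The scalar form presumes the ordering is known at training time; the sequential form only presumes access to rollouts.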
### 3.2 Challenge 2: Maximizing the value with large models
In domains with continuous actions, the policy is usually decoded using the information projection onto the value function estimate (see [[20](https://arxiv.org/html/2306.07290#bib.bib20)]) by minimizing
$$\ell_{\text{Policy}}(\phi)=\mathbb{E}_{s\sim\mathcal{D}}\bigg[\mathrm{KL}\bigg(\pi_{\phi}(\cdot|s)\,\bigg\|\,\frac{e^{Q^{\pi_{\text{old}}}(s,\cdot)}}{\sum_{a^{\prime}}e^{Q^{\pi_{\text{old}}}(s,a^{\prime})}}\bigg)\bigg].\tag{10}$$
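For a discrete action space, the KL term in Eq. 10 for a single state reduces to a few lines; the sketch below (an illustration under that discrete-action assumption, not the continuous-action training code) compares the policy's logits against the Boltzmann distribution over $Q^{\pi_{\text{old}}}$:

```python
import numpy as np

def logsumexp(x):
    # numerically stable log(sum(exp(x)))
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def policy_projection_loss(logits_pi, q_values):
    """KL(pi_phi(.|s) || softmax(Q^{pi_old}(s, .))) for one state (Eq. 10)."""
    log_pi = logits_pi - logsumexp(logits_pi)        # log pi_phi(a|s)
    log_target = q_values - logsumexp(q_values)      # log softmax(Q)
    return float(np.sum(np.exp(log_pi) * (log_pi - log_target)))
```

The loss is zero exactly when the policy matches the softmax of the Q-values, and positive otherwise.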
However, (a) estimating $Q^{*}(s,a)$ requires estimating $\rho^{*}(s,a)$, which cannot be pre-trained on videos (i.e. state sequences), and (b) the projection requires differentiating through the sampling operator of the $\rho$ network, which in our work is parameterized by a large generative model. The same problem arises in both model-free [[20](https://arxiv.org/html/2306.07290#bib.bib20)] and model-based methods [[24](https://arxiv.org/html/2306.07290#bib.bib24)], but there the networks are sufficiently small that the overhead is minimal. In our work, we circumvent the computational overhead by unrolling one step of the Bellman backup
$$Q^{\pi}(s_{t},a_{t})=r(s_{t},a_{t})+\gamma\,\mathbb{E}_{s_{t+1}}[V^{\pi}(s_{t+1})]\tag{11}$$
and consequently
$$\nabla_{a_{t}}Q^{\pi}(s_{t},a_{t})=\nabla_{a_{t}}r(s_{t},a_{t}),\tag{12}$$
allowing us to learn $\rho^{\pi}(s)$ instead of $\rho^{\pi}(s,a)$ and to use it to construct the state value function.
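The practical consequence of Eq. 12 is that the action gradient needed for policy improvement touches only the (cheap) reward network, never the diffusion model. A finite-difference sketch makes this concrete (`reward_fn` is a hypothetical stand-in for the learned reward predictor; in practice autodiff would replace the finite differences):

```python
import numpy as np

def grad_q_wrt_action(reward_fn, s, a, eps=1e-5):
    """By Eq. 12, grad_a Q^pi(s, a) = grad_a r(s, a), so the policy can be
    improved without differentiating through the large diffusion sampler.
    Central finite differences on the reward network only."""
    grad = np.zeros_like(a, dtype=float)
    for i in range(a.size):
        da = np.zeros_like(a, dtype=float)
        da[i] = eps
        grad[i] = (reward_fn(s, a + da) - reward_fn(s, a - da)) / (2 * eps)
    return grad
```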
### 3.3 Practical algorithm
As an alternative to classical TD learning, we propose to separately estimate the occupancy measure $\rho$, using a denoising diffusion model, and the reward $r$, using a simple regression in symlog space [[24](https://arxiv.org/html/2306.07290#bib.bib24)]. Because the occupancy measure $\rho$ is hard to estimate directly, we instead learn a denoising diffusion probabilistic model [DDPM, [18](https://arxiv.org/html/2306.07290#bib.bib18)] with de-noising network $\epsilon_{\theta}$. Since we know what the true forward process looks like at timestep $t_{d}$, the de-noising network $\epsilon_{\theta}:\mathcal{S}\to[-1,1]$ is trained to predict the input noise.
The high-level idea behind the algorithm is as follows:
1. Pre-train a diffusion model on sequences of states $s_{1},..,s_{H}$ and, optionally, policy embeddings $\phi(\pi)$. This step can be performed on large amounts of demonstration videos without the need for any action or reward labels. The policy embedding $\phi(\pi)$ can be chosen to be any auxiliary information that allows the model to distinguish between policies (in MDPs with deterministic transitions, one such example is the state prefix $s_{1},..,s_{t}$). This step yields $\rho(s_{t};\phi(\pi))$.
2. Using labeled samples, train a reward predictor $r(s,a)$. This reward predictor will be used as an importance weight to score each state-action pair generated by the diffusion model.
3. Sample a state from $\rho(\cdot\,;\phi(\pi))$ and score it using $r(s_{t},\pi(s_{t}))$, thus obtaining an estimate proportional to the value function of policy $\pi$ at state $s_{t}$.
4. _Optionally:_ Maximize the resulting value function estimator using the information projection of $\pi$ onto the polytope of value functions (see [Eq. 10](https://arxiv.org/html/2306.07290#S3.E10)) and decode a new policy $\pi^{\prime}$. In the online setting, use $\pi^{\prime}$ to collect new data in the environment and update $\phi(\pi)$ to $\phi(\pi^{\prime})$.
Input: dataset $\mathcal{D}\sim\mu$; networks $\epsilon_{\theta},r_{\psi},\pi_{\phi}$; number of Monte-Carlo samples $n$

1 Normalize the states of $\mathcal{D}$ to lie in the $[-1,1]$ interval: $\mathcal{D}[s]\leftarrow 2\,\frac{\mathcal{D}[s]-\min\mathcal{D}[s]}{\max\mathcal{D}[s]-\min\mathcal{D}[s]}-1$ ;

2 for epoch $j=1,2,..,J$ do

3 &nbsp;&nbsp; for minibatch $\mathcal{B}\sim\mathcal{D}$ do

4 &nbsp;&nbsp;&nbsp;&nbsp; Update the diffusion model: update $\epsilon_{\theta}$ using $\nabla_{\theta}\ell_{\text{Diffusion}}(\theta^{(j)})$ over $s_{t+\Delta t}$ ([Eq. 8](https://arxiv.org/html/2306.07290#S2.E8)) ;

5 &nbsp;&nbsp;&nbsp;&nbsp; Update the reward estimator: update $r_{\psi}$ using $\nabla_{\psi}\,\mathbb{E}_{s,a}\big[\|r_{\psi}(s,a)-r(s,a)\|^{2}_{2}\big]$ ;

6 &nbsp;&nbsp;&nbsp;&nbsp; Estimate V: $V(s_{t+1})\leftarrow\frac{1-\gamma^{H-t-1}}{1-\gamma}\sum_{i=1}^{n}r(s_{t+1+\Delta t,i},\pi_{\phi}(s_{t+1+\Delta t,i}))$, with $s_{t+1+\Delta t,i}\overset{\text{DDPM}}{\sim}\rho_{\theta}(s_{t+1})$ ;

7 &nbsp;&nbsp;&nbsp;&nbsp; Estimate Q: $Q(s_{t},a_{t})\leftarrow r(s_{t},a_{t})+\gamma V(s_{t+1})$ ;

8 &nbsp;&nbsp;&nbsp;&nbsp; Decode the policy from the Q-function: update $\pi_{\phi}$ using $\nabla_{\phi}\ell_{\text{Policy}}(\phi)$ and $Q(s_{t},a_{t})$ ([Eq. 10](https://arxiv.org/html/2306.07290#S3.E10)) ;

Algorithm 1 Diffused Value Function (DVF)
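The loop body of Algorithm 1 can be sketched in Python. All callables below (`eps_theta_update`, `reward_update`, `sample_future`, `reward_fn`, `policy`) are hypothetical stand-ins for the trained networks and their gradient updates, and the Monte-Carlo sum of line 6 is written as an average over the $n$ samples:

```python
import numpy as np

def dvf_step(batch, eps_theta_update, reward_update, sample_future,
             reward_fn, policy, gamma, H, n=16):
    """One inner-loop iteration of Algorithm 1 on a minibatch."""
    s_t, a_t, r_t, s_next, t = batch
    eps_theta_update(batch)      # line 4: diffusion loss on s_{t+dt} (Eq. 8)
    reward_update(batch)         # line 5: symlog reward regression
    # line 6: Monte-Carlo state-value estimate from DDPM future-state samples
    futures = [sample_future(s_next) for _ in range(n)]
    v_next = (1 - gamma ** (H - t - 1)) / (1 - gamma) * np.mean(
        [reward_fn(s, policy(s)) for s in futures])
    # line 7: one-step Bellman backup
    return reward_fn(s_t, a_t) + gamma * v_next
```

Line 8 (the policy update through Eq. 10) would then consume the returned Q-value.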
Algorithm [1](https://arxiv.org/html/2306.07290#algorithm1) describes the exact training mechanism, which first learns a diffusion model and then maximizes its reward-weighted expectation to learn a policy suitable for control. Note that the method is suitable for both online and offline reinforcement learning tasks, although conditioning on the policy representation $\phi(\pi)$ has to be done explicitly in the case of offline RL.
DVF can also be shown to learn an occupancy measure corresponding to the normalized successor features [[14](https://arxiv.org/html/2306.07290#bib.bib14)], which allows sampling future states through the reverse diffusion process.
4 Experiments
-------------
### 4.1 Mountain Car
Before studying the behavior of DVF on robotic tasks, we conduct experiments on the continuous Mountain Car problem, a simple domain for analysing sequential decision-making methods. We trained DVF for 500 gradient steps until convergence, and computed correlations between the true environment returns, the value function estimator based on the diffusion model, and the reward prediction at states sampled from $\rho^{\pi}$. [Fig. 2](https://arxiv.org/html/2306.07290#S4.F2) shows that all three quantities exhibit a strong positive correlation, even though the value function estimator is not learned using temporal difference learning.
Figure 2: (Left) Pairwise plot of normalized returns versus the value function estimated by DVF; (middle) pairwise plot of the normalized value function versus the normalized reward at the future state; (right) normalized value function and normalized environment returns versus training gradient steps.
### 4.2 Maze 2d
We examine the qualitative behavior of the diffusion model of DVF on a simple locomotion task inside mazes of various shapes, as introduced in the D4RL offline suite [[25](https://arxiv.org/html/2306.07290#bib.bib25)]. In these experiments, the agent starts in the lower left of the maze and uses a waypoint planner with three separate goals to collect data in the environment (see [Fig. 3](https://arxiv.org/html/2306.07290#S4.F3)(a) and (c) for samples of the collected data). The diffusion model of DVF is trained on the data from the three data-collecting policies, using the scalar policy conditioning described in [Section 3.1](https://arxiv.org/html/2306.07290#S3.SS1).
Figure 3: (a, c) Ground-truth data distribution for the u-maze and large maze from the Maze 2d environment. (b, d) Conditional distribution of future states $s_{t+\Delta t}\,|\,s_{0},\phi(\pi_{i})$ given the starting state in the bottom left corner and the policy index. The diffusion model correctly identifies and separates the three state distributions in both mazes.
[Fig. 3](https://arxiv.org/html/2306.07290#S4.F3) shows full trajectories sampled by conditioning the diffusion model on the start state in the lower left, the policy index, and a time offset. [Fig. 4](https://arxiv.org/html/2306.07290#S4.F4) shows sampled trajectories as the discount factor $\gamma$ increases, leading to sampling larger time offsets.
The results show the ability of the diffusion model to represent long-horizon data faithfully, and highlight some benefits of the approach. DVF can sample trajectories without the need to evaluate a policy or specify intermediate actions. Because DVF samples each time offset independently, there is also no compounding of model error as the horizon increases. Additionally, the cost of predicting $s_{t+k}$ from $s_{t}$ is $\mathcal{O}(1)$ for DVF, while it is $\mathcal{O}(k)$ for classical autoregressive models.

Figure 4: Samples from the learned diffusion model for increasing values of the discount factor $\gamma$, with a starting state in the lower left of the maze. As $\gamma$ increases, the model generates samples further along the trajectory, up to the furthest point of the maze. Ground-truth data is shown in [Fig. 3](https://arxiv.org/html/2306.07290#S4.F3)(a).
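The dependence of the sampled offsets on $\gamma$ seen in Fig. 4 can be mimicked with a truncated-geometric draw, consistent with the $\gamma^{\Delta t}$ weighting of a discounted occupancy measure. This is a hypothetical sketch (the paper does not specify this exact sampling rule):

```python
import numpy as np

def sample_time_offset(gamma, H, rng):
    """Draw one offset dt in {1, .., H} with P(dt) proportional to gamma**dt;
    larger discount factors push samples further into the future."""
    offsets = np.arange(1, H + 1)
    probs = gamma ** offsets.astype(float)
    probs /= probs.sum()
    return int(rng.choice(offsets, p=probs))
```

With $\gamma$ close to 1 the distribution over offsets is nearly uniform over the horizon, while small $\gamma$ concentrates mass on the immediate next steps.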
### 4.3 PyBullet
Our final set of experiments consists of ablations performed on offline data collected from classical PyBullet environments (data taken from [https://github.com/takuseno/d4rl-pybullet/tree/master](https://github.com/takuseno/d4rl-pybullet/tree/master)), as opposed to D4RL, which has faced criticism due to poor sim2real transfer capabilities [[26](https://arxiv.org/html/2306.07290#bib.bib26)]. We compare DVF to behavior cloning and Conservative Q-Learning [[27](https://arxiv.org/html/2306.07290#bib.bib27)], two strong offline RL baselines, over 5 random seeds. We also plot the average normalized returns of each dataset to facilitate the comparison. The medium dataset contains data collected by a medium-level policy, and the mixed dataset contains data from SAC [[20](https://arxiv.org/html/2306.07290#bib.bib20)] training.

Figure 5: Normalized returns obtained by DVF, behavior cloning and CQL on 4 challenging robotic tasks from the PyBullet offline suite, together with the average returns of each dataset (Data in the plot).
[Fig. 5](https://arxiv.org/html/2306.07290#S4.F5) highlights the ability of DVF to match, and sometimes outperform, classical offline RL algorithms, especially on data of lower quality, e.g. data coming from a random policy. In domains where online rollouts can be prohibitively expensive, the ability to learn from low-quality, incomplete offline demonstrations is a strength of DVF. This benchmark also demonstrates the shortcomings of the scalar policy representation, which is unknown for a given offline dataset and does not scale well when the number of policies is large (e.g. the number of gradient steps $j$ of the logging policy $\mu^{(j)}$). For this reason, we opted for the sequential policy representation.
5 Related works
---------------
#### Offline pre-training for reinforcement learning
Multiple approaches have tried to alleviate the heavy cost of training agents _tabula rasa_ by pre-training parts of the system offline. For example, inverse dynamics models, which predict the action leading from the current state to the next state, have seen success in complex domains such as Atari [[28](https://arxiv.org/html/2306.07290#bib.bib28)] and Minecraft [[8](https://arxiv.org/html/2306.07290#bib.bib8), [9](https://arxiv.org/html/2306.07290#bib.bib9)]. Return-conditioned sequence models have also risen in popularity, specifically due to their ability to learn performance-action-state correlations over long horizons [[29](https://arxiv.org/html/2306.07290#bib.bib29)].
#### Unsupervised reinforcement learning
Using temporal difference[[30](https://arxiv.org/html/2306.07290#bib.bib30)] for policy iteration or evaluation requires all data tuples to contain state, action and reward information. However, in some real-world scenarios, the reward might only be available for a small subset of data (e.g. problems with delayed feedback[[31](https://arxiv.org/html/2306.07290#bib.bib31)]). In this case, it is possible to decompose the value function into a reward-dependent and dynamics components, as was first suggested in the successor representation framework[[14](https://arxiv.org/html/2306.07290#bib.bib14), [16](https://arxiv.org/html/2306.07290#bib.bib16)]. More recent approaches[[15](https://arxiv.org/html/2306.07290#bib.bib15), [32](https://arxiv.org/html/2306.07290#bib.bib32), [33](https://arxiv.org/html/2306.07290#bib.bib33), [34](https://arxiv.org/html/2306.07290#bib.bib34)] use a density model to learn the occupancy measure over future states for each state-action pair in the dataset. However, learning an explicit multi-step model such as[[15](https://arxiv.org/html/2306.07290#bib.bib15)] can be unstable due to the bootstrapping term in the temporal difference loss, and these approaches still require large amounts of reward and action labels. While our proposed method is a hybrid between model-free and model-based learning, it avoids the computational overhead incurred by classical world models such as Dreamer[[24](https://arxiv.org/html/2306.07290#bib.bib24)] by introducing constant-time rollouts. The main issue with infinite-horizon models is the implicit dependence of the model on the policy, which imposes an upper-bound on the magnitude of the policy improvement step achievable in the offline case. Our work solves this issue by adding an explicit policy conditioning mechanism, which allows to generate future states from unseen policy embeddings.
#### Diffusion models
Learning a conditional probability distribution over a highly complex space can be challenging, which is why it is often easier to approximate it with a density ratio specified by an inner product in a much lower-dimensional latent space. To learn an occupancy measure over future states without going through temporal difference learning, one can use denoising diffusion models to approximate the corresponding future-state density under a given policy. Diffusion has previously been used in static unsupervised settings such as image generation[[18](https://arxiv.org/html/2306.07290#bib.bib18)] and text-to-image generation[[35](https://arxiv.org/html/2306.07290#bib.bib35)]. Diffusion models have also been used to model trajectory data for planning in low-dimensional environments[[36](https://arxiv.org/html/2306.07290#bib.bib36)]. However, no work so far has managed to efficiently predict infinite-horizon rollouts.
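
For intuition, here is a minimal sketch of the DDPM-style forward (noising) process that such denoising models learn to invert, assuming a standard linear β-schedule rather than any schedule tuned in this paper:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (DDPM-style assumption)
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal-retention coefficients

def q_sample(x0, t, eps):
    """Closed-form forward process: x_t = sqrt(abar_t) x0 + sqrt(1 - abar_t) eps."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x0 = rng.normal(size=(10000, 2))        # stand-in for a batch of future states
eps = rng.normal(size=x0.shape)
x_T = q_sample(x0, T - 1, eps)

# By the final step almost all signal is destroyed, so x_T is close to N(0, I);
# generation can therefore start from pure noise and denoise toward the data.
print(alphas_bar[-1] < 1e-4, abs(x_T.var() - 1.0) < 0.05)
```

A learned denoiser trained to predict `eps` from `(x_t, t)` (plus, in our setting, a policy conditioning signal) then defines the reverse-time sampler.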
6 Discussion
------------
In this work, we introduced a simple model-free algorithm for learning reward-maximizing policies, which can be used to efficiently solve complex robotic tasks. Diffused Value Function (DVF) avoids the pitfalls of both temporal difference learning and autoregressive model-based methods by pre-training an infinite-horizon transition model from state sequences using a diffusion model. This model requires neither action nor reward information, and can then be used to construct the state-action value function, from which one can decode the optimal action. DVF fully leverages the power of diffusion models to generate states far into the future without intermediate predictions. Our experiments demonstrate that DVF matches and sometimes outperforms strong offline RL baselines on realistic robotic tasks based on PyBullet, and opens a new direction of research.
7 Limitations
-------------
The main limitation of our method is that it operates directly on observations rather than latent state embeddings, which requires tuning the noise schedule for each set of tasks instead of using a unified noise schedule as in latent diffusion models[[37](https://arxiv.org/html/2306.07290#bib.bib37)]. Another limitation is the need to explicitly condition the rollouts from the diffusion model on the policy, something that single-step models avoid. Finally, online learning introduces the challenge of capturing the non-stationarity of the environment with the generative model ρ, which is itself a hard task.
References
----------

* Chowdhery et al. [2022] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, et al. PaLM: Scaling language modeling with pathways. _arXiv preprint arXiv:2204.02311_, 2022.
* Touvron et al. [2023] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. LLaMA: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_, 2023.
* Kaplan et al. [2020] J. Kaplan, S. McCandlish, T. Henighan, T. B. Brown, B. Chess, R. Child, S. Gray, A. Radford, J. Wu, and D. Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_, 2020.
* Ouyang et al. [2022] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, et al. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_, 35:27730–27744, 2022.
* Brohan et al. [2023] A. Brohan, Y. Chebotar, C. Finn, K. Hausman, A. Herzog, D. Ho, J. Ibarz, A. Irpan, E. Jang, R. Julian, et al. Do as I can, not as I say: Grounding language in robotic affordances. In _Conference on Robot Learning_, pages 287–318. PMLR, 2023.
* Stone et al. [2023] A. Stone, T. Xiao, Y. Lu, K. Gopalakrishnan, K.-H. Lee, Q. Vuong, P. Wohlhart, B. Zitkovich, F. Xia, C. Finn, et al. Open-world object manipulation using pre-trained vision-language models. _arXiv preprint arXiv:2303.00905_, 2023.
* Driess et al. [2023] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PaLM-E: An embodied multimodal language model. _arXiv preprint arXiv:2303.03378_, 2023.
* Baker et al. [2022] B. Baker, I. Akkaya, P. Zhokov, J. Huizinga, J. Tang, A. Ecoffet, B. Houghton, R. Sampedro, and J. Clune. Video PreTraining (VPT): Learning to act by watching unlabeled online videos. _Advances in Neural Information Processing Systems_, 35:24639–24654, 2022.
* Fan et al. [2022] L. Fan, G. Wang, Y. Jiang, A. Mandlekar, Y. Yang, H. Zhu, A. Tang, D.-A. Huang, Y. Zhu, and A. Anandkumar. MineDojo: Building open-ended embodied agents with internet-scale knowledge. _arXiv preprint arXiv:2206.08853_, 2022.
* Yu et al. [2020] T. Yu, G. Thomas, L. Yu, S. Ermon, J. Y. Zou, S. Levine, C. Finn, and T. Ma. MOPO: Model-based offline policy optimization. _Advances in Neural Information Processing Systems_, 33:14129–14142, 2020.
* Argenson and Dulac-Arnold [2020] A. Argenson and G. Dulac-Arnold. Model-based offline planning. _arXiv preprint arXiv:2008.05556_, 2020.
* Kidambi et al. [2020] R. Kidambi, A. Rajeswaran, P. Netrapalli, and T. Joachims. MOReL: Model-based offline reinforcement learning. _arXiv preprint arXiv:2005.05951_, 2020.
* Yu et al. [2021] T. Yu, A. Kumar, R. Rafailov, A. Rajeswaran, S. Levine, and C. Finn. COMBO: Conservative offline model-based policy optimization. _Advances in Neural Information Processing Systems_, 34:28954–28967, 2021.
* Dayan [1993] P. Dayan. Improving generalization for temporal difference learning: The successor representation. _Neural Computation_, 5(4):613–624, 1993.
* Janner et al. [2020] M. Janner, I. Mordatch, and S. Levine. Generative temporal difference learning for infinite-horizon prediction. _arXiv preprint arXiv:2010.14496_, 2020.
* Barreto et al. [2016] A. Barreto, W. Dabney, R. Munos, J. J. Hunt, T. Schaul, H. Van Hasselt, and D. Silver. Successor features for transfer in reinforcement learning. _arXiv preprint arXiv:1606.05312_, 2016.
* Sohl-Dickstein et al. [2015] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _International Conference on Machine Learning_, pages 2256–2265. PMLR, 2015.
* Ho et al. [2020] J. Ho, A. Jain, and P. Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840–6851, 2020.
* Jaegle et al. [2021] A. Jaegle, S. Borgeaud, J.-B. Alayrac, C. Doersch, C. Ionescu, D. Ding, S. Koppula, D. Zoran, A. Brock, E. Shelhamer, et al. Perceiver IO: A general architecture for structured inputs & outputs. _arXiv preprint arXiv:2107.14795_, 2021.
* Haarnoja et al. [2018] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In _International Conference on Machine Learning_, pages 1861–1870. PMLR, 2018.
* Song et al. [2013] L. Song, K. Fukumizu, and A. Gretton. Kernel embeddings of conditional distributions: A unified kernel framework for nonparametric inference in graphical models. _IEEE Signal Processing Magazine_, 30(4):98–111, 2013.
* Mazoure et al. [2022] B. Mazoure, T. Doan, T. Li, V. Makarenkov, J. Pineau, D. Precup, and G. Rabusseau. Low-rank representation of reinforcement learning policies. _Journal of Artificial Intelligence Research_, 75:597–636, 2022.
* Harb et al. [2020] J. Harb, T. Schaul, D. Precup, and P.-L. Bacon. Policy evaluation networks. _arXiv preprint arXiv:2002.11833_, 2020.
* Hafner et al. [2023] D. Hafner, J. Pasukonis, J. Ba, and T. Lillicrap. Mastering diverse domains through world models. _arXiv preprint arXiv:2301.04104_, 2023.
* Fu et al. [2020] J. Fu, A. Kumar, O. Nachum, G. Tucker, and S. Levine. D4RL: Datasets for deep data-driven reinforcement learning, 2020.
* Körber et al. [2021] M. Körber, J. Lange, S. Rediske, S. Steinmann, and R. Glück. Comparing popular simulation environments in the scope of robotics and reinforcement learning. _arXiv preprint arXiv:2103.04616_, 2021.
* Kumar et al. [2020] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative Q-learning for offline reinforcement learning. _arXiv preprint arXiv:2006.04779_, 2020.
* Schwarzer et al. [2021] M. Schwarzer, N. Rajkumar, M. Noukhovitch, A. Anand, L. Charlin, D. Hjelm, P. Bachman, and A. Courville. Pretraining representations for data-efficient reinforcement learning. _arXiv preprint arXiv:2106.04799_, 2021.
* Lee et al. [2022] K.-H. Lee, O. Nachum, M. S. Yang, L. Lee, D. Freeman, S. Guadarrama, I. Fischer, W. Xu, E. Jang, H. Michalewski, et al. Multi-game decision transformers. _Advances in Neural Information Processing Systems_, 35:27921–27936, 2022.
* Sutton and Barto [2018] R. S. Sutton and A. G. Barto. _Reinforcement Learning: An Introduction_. MIT Press, 2018.
* Howson et al. [2021] B. Howson, C. Pike-Burke, and S. Filippi. Delayed feedback in episodic reinforcement learning. _arXiv preprint arXiv:2111.07615_, 2021.
* Eysenbach et al. [2020] B. Eysenbach, R. Salakhutdinov, and S. Levine. C-learning: Learning to achieve goals via recursive classification. _arXiv preprint arXiv:2011.08909_, 2020.
* Eysenbach et al. [2022] B. Eysenbach, T. Zhang, R. Salakhutdinov, and S. Levine. Contrastive learning as goal-conditioned reinforcement learning. _arXiv preprint arXiv:2206.07568_, 2022.
* Mazoure et al. [2022] B. Mazoure, B. Eysenbach, O. Nachum, and J. Tompson. Contrastive value learning: Implicit models for simple offline RL, 2022.
* Rombach et al. [2022] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684–10695, 2022.
* Janner et al. [2022] M. Janner, Y. Du, J. B. Tenenbaum, and S. Levine. Planning with diffusion for flexible behavior synthesis. _arXiv preprint arXiv:2205.09991_, 2022.
* Rombach et al. [2022] R. Rombach, A. Blattmann, D. Lorenz, P. Esser, and B. Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 10684–10695, June 2022.

Appendix
--------
### 7.1 Experimental details
#### Model architecture
DVF uses a Perceiver IO model[[19](https://arxiv.org/html/2306.07290#bib.bib19)] with 1×1 convolution encodings for states, a sinusoidal encoding for the diffusion timestep, and a linear layer for the action embedding. The Perceiver IO model has positional encodings for all inputs, followed by 8 blocks with 4 cross-attention heads, 4 self-attention heads, and a latent size of 256. The scalar policy representation was encoded with a sinusoidal encoding, while the sequential representation was passed through the 1×1 convolution and linear embedding layers and masked out to handle varying context lengths, before being passed to the Perceiver model.
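
For reference, the sinusoidal timestep encoding can be sketched as below, assuming the standard transformer-style formulation; the dimension and base period are illustrative, not the exact values used in our implementation:

```python
import numpy as np

def sinusoidal_encoding(t, dim=256, max_period=10000.0):
    """Transformer-style sinusoidal embedding of scalar timesteps."""
    half = dim // 2
    # Geometrically spaced frequencies from 1 down to ~1/max_period.
    freqs = np.exp(-np.log(max_period) * np.arange(half) / half)
    angles = np.asarray(t, dtype=float)[..., None] * freqs
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)

emb = sinusoidal_encoding(np.arange(128))  # one row per diffusion timestep
print(emb.shape)                           # (128, 256)

# Nearby timesteps receive similar embeddings; distant ones decorrelate,
# which gives the denoiser a smooth notion of noise level.
sim = emb @ emb.T
print(sim[0, 1] > sim[0, 100])             # True
```

The same encoding pattern can be reused for any scalar conditioning input, such as the scalar policy representation mentioned above.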
Table 1: Hyperparameters that are consistent between methods.
Table 2: Hyperparameters that are different between methods.
All experiments were run on the equivalent of 2 V100 GPUs with 32 GB of VRAM and 8 CPUs.
#### Dataset composition
The Maze2d datasets were constructed using the waypoint-planning scripts provided in the D4RL repository, with the target goal locations modified to lie in each corner of the maze (u-maze) or in randomly chosen pathways (large maze). The PyBullet dataset has a data composition similar to the original D4RL suite, albeit collected in the PyBullet simulator instead of MuJoCo.
### 7.2 Additional results
We include in the supplementary material three videos of the training of the diffusion model ρ on the large maze dataset shown in [Fig. 4](https://arxiv.org/html/2306.07290#S4.F4 "Figure 4 ‣ 4.2 Maze 2d ‣ 4 Experiments ‣ Value function estimation using conditional diffusion models for control"), for 128, 512 and 1024 diffusion timesteps. Note that increasing the number of timesteps leads to faster convergence of the diffusion model samples to the true data distribution.