markdown_text stringlengths 1 2.5k | pdf_metadata dict | header_metadata dict | chunk_metadata dict |
|---|---|---|---|
##### The sum of the Bayes generalization error and the cross validation error satisfies $B_g(n) + C_v(n) = (\beta - 1)\frac{V(n)}{n} + \frac{2\lambda}{\beta n} + o_p\left(\frac{1}{n}\right)$. | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "The sum of the Bayes generalization error an... | {
"chunk_type": "body"
} |
##### In particular, if β = 1, (Proof) By eq.(38), | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "In particular, if β = 1, (Proof) By eq.(38),... | {
"chunk_type": "body"
} |
##### $B_g(n) + C_v(n) = \frac{2\lambda}{n} + o_p\left(\frac{1}{n}\right)$. | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "B g (n) + C v (n) = [2][λ] n [+][ o] [p] [( ... | {
"chunk_type": "body"
} |
##### $E[B_g(n-1)] = \left(\frac{\lambda - \nu}{\beta} + \nu\right)\frac{1}{n} + o\left(\frac{1}{n}\right)$. Since $E[C_v(n)] = E[B_g(n-1)]$, $\lim_{n\to\infty} nE[C_v(n)] = \lim_{n\to\infty} nE[B_g(n-1)] = \frac{\lambda - \nu}{\beta} + \nu$. From eq.(40) and Corollary 1, $B_t(n) = C_v(n) - \frac{\beta}{n}V(n) + O_p\left(\frac{1}{n^{3/2}}\right)$, it follows that n(B ... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "4 Main Results",
"Header 4": "4.3 Generalization Error and Cross Validation Error",
"Header 5": "λ − ν 1 E[B g (n − 1)] = + ν � β � n [+][ o]... | {
"chunk_type": "body"
} |
### 5 Discussion
##### Let us discuss the results of this paper. | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": null,
"Header 5": "Let us discuss the results of this paper."
} | {
"chunk_type": "body"
} |
#### 5.1 From Regular to Singular
##### Firstly, we summarize regular and singular learning theory. In regular statistical models, the generalization error of the maximum likelihood method is asymptotically equal to that of the Bayes estimation. In both the maximum likelihood and Bayes methods, the cross validation... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.1 From Regular to Singular",
"Header 5": "Firstly, we summarize regular and singular learning theory. In regula... | {
"chunk_type": "body"
} |
#### 5.2 Cross Validation and Information Criterion
##### Secondly, let us compare cross validation and information criterion from the practical point of view. In Theorem 1, we theoretically proved that the Bayes cross validation leaving one out is asymptotically equivalent to the widely applicable information criter... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "Secondly, let us compare cross validation and infor... | {
"chunk_type": "body"
} |
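Theorem 1's equivalence can be checked numerically once both quantities are computed from posterior draws. Below is a minimal sketch of WAIC under the standard definition $\mathrm{WAIC}(n) = T_n + V(n)/n$ at $\beta = 1$; the $(n \times S)$ matrix `logp` of values $\log p(X_i|w_s)$ over $S$ posterior draws is a hypothetical input, and this is a sketch, not the paper's code.

```python
import numpy as np
from scipy.special import logsumexp

def waic(logp):
    """WAIC from posterior draws, assuming beta = 1.

    logp: (n, S) array with logp[i, s] = log p(X_i | w_s) for draws w_s.
    Returns WAIC = T_n + V(n)/n, where T_n is the Bayes training loss
    and V(n) the functional variance.
    """
    n, S = logp.shape
    # Bayes training loss: -(1/n) sum_i log E_w[p(X_i|w)], E_w by sample mean.
    T_n = -np.mean(logsumexp(logp, axis=1) - np.log(S))
    # Functional variance: sum_i Var_w[log p(X_i|w)].
    V_n = np.sum(np.var(logp, axis=1))
    return T_n + V_n / n
```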
##### $CV_1 = -\frac{1}{n}$ | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "CV 1 = − [1] n"
} | {
"chunk_type": "body"
} |
##### $\sum_{i=1}^{n} \log E_w^{(i)}[p(X_i|w)]$ | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "� log E w [(][i][)] [[][p][(][X] [i] [|][w][)]]"
} | {
"chunk_type": "body"
} |
##### is calculated. In this method, we need to realize n different posterior distributions, which requires heavy computational cost. In the latter method, the posterior distribution leaving X i out is estimated by using the posterior average $E_w[\,\cdot\,]$ in the same way as eq.(30), $E_w^{(i)}[p(X_i|w)]$... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "is calculated. In this method, we need to realize n... | {
"chunk_type": "body"
} |
##### $CV_2 \cong -\frac{1}{n}$ | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "CV 2 [∼] = − [1] n"
} | {
"chunk_type": "body"
} |
##### $\sum_{i=1}^{n} \log \frac{E_w[\,p(X_i|w)\,p(X_i|w)^{-\beta}\,]}{E_w[\,p(X_i|w)^{-\beta}\,]}$. | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "� i=1n log [E] [w] [[][p] E [(][X] w [ [i] p [|][w]... | {
"chunk_type": "body"
} |
##### If the posterior distribution is completely realized, then CV 1 and CV 2 coincide with each other, and they are also asymptotically equivalent to the widely applicable information criterion. However, if the posterior distribution is not sufficiently approximated, these three values, CV 1, CV 2, and WAIC(n) might be... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.2 Cross Validation and Information Criterion",
"Header 5": "If the posterior distribution is completely realize... | {
"chunk_type": "body"
} |
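Both cross validation estimators can be computed from the same posterior draws. A minimal sketch of CV 2, the single-posterior importance-sampling form above, using the identity $E_w^{(i)}[p(X_i|w)] = E_w[p(X_i|w)^{1-\beta}]/E_w[p(X_i|w)^{-\beta}]$; the same hypothetical `logp` matrix is assumed, and CV 1 would instead refit the posterior n times, once per held-out point.

```python
import numpy as np
from scipy.special import logsumexp

def cv2(logp, beta=1.0):
    """Importance-sampling leave-one-out CV from a single posterior.

    Approximates E_w^{(i)}[p(X_i|w)] by the ratio
        E_w[p(X_i|w)^{1-beta}] / E_w[p(X_i|w)^{-beta}],
    with both expectations over draws from the full posterior.
    logp: (n, S) array of log p(X_i | w_s).
    """
    num = logsumexp((1.0 - beta) * logp, axis=1)  # log sum_s p^{1-beta}
    den = logsumexp(-beta * logp, axis=1)         # log sum_s p^{-beta}
    # The log S normalizations cancel in the ratio num - den.
    return -np.mean(num - den)
```

At β = 1 this reduces to the familiar harmonic-mean leave-one-out estimator.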
#### 5.3 Birational Invariant
##### Thirdly, we study the statistical problem from the algebraic geometrical point of view. In this paper, we proved in Theorem 1 that E[B g (n)] = E[C v (n)] + o(1/n). (41) However, in Theorem 2 we proved that $B_g(n) + C_v(n) = \frac{2\lambda}{n} + o_p(1/n)$. (42) In practical... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.3 Birational Invariant",
"Header 5": "Thirdly, we study the statistical problem from the algebraic geometrical ... | {
"chunk_type": "body"
} |
##### Note that the log canonical threshold λ is a birational invariant [Atiyah 70, Hironaka 64, Kashiwara 76, Kollár et al. 98, Mustata 02, Watanabe 09] that represents the algebraic geometrical relation between the set of the parameters W and the set of the optimal parameters W 0 . To clarify the algebraic geomet... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "5 Discussion",
"Header 4": "5.3 Birational Invariant",
"Header 5": "Note that the log canonical threshold λ is a birational invariant [Atiyah... | {
"chunk_type": "body"
} |
### 6 Conclusion
##### In this paper, we theoretically show that the cross validation leaving one out is asymptotically equal to the widely applicable information criterion and that the sum of the cross validation error and the generalization error is equal to the log canonical threshold divided by the number of trai... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "6 Conclusion",
"Header 4": null,
"Header 5": "In this paper, we theoretically show that the cross validation leaving one out is asymptoticall... | {
"chunk_type": "body"
} |
### References
##### [Akaike 74] H. Akaike. A new look at the statistical model identification. IEEE Trans. on Automatic Control, Vol.19, pp.716-723, 1974. [Amari 93] S. Amari. A universal theorem on learning curves. Neural Networks, Vol.6, No.2, pp.161-166, 1993. [Aoyagi & Watanabe 05] M. Aoyagi, S. Watanabe. Stocha... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "References",
"Header 4": null,
"Header 5": "[Akaike 74] H. Akaike. A new look at the statistical model identification. IEEE Trans. on Automat... | {
"chunk_type": "references"
} |
##### [Hagiwara 02] K. Hagiwara. On the problem in model selection of neural network regression in overrealizable scenario. Neural Computation, Vol.14, pp.1979-2002, 2002. [Hartigan 85] J. A. Hartigan. A failure of likelihood asymptotics for normal mixtures. Proc. Berkeley Conference in Honor of J. Neyman and J. Kief... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "References",
"Header 4": null,
"Header 5": "[Hagiwara 02] K. Hagiwara. On the problem in model selection of neural network regression in over... | {
"chunk_type": "references"
} |
##### [van der Vaart 96] A. W. van der Vaart, J. A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996. [Rusakov & Geiger 05] D. Rusakov, D. Geiger. Asymptotic model selection for naive Bayesian network. Journal of Machine Learning Research, Vol.6, pp.1-35, 2005. [Schwarz 78] G. Schwarz. Estimating the di... | {
"id": "1004.2316",
"title": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable\n Information Criterion in Singular Learning Theory",
"categories": [
"cs.LG"
]
} | {
"Header 1": "Asymptotic Equivalence of Bayes Cross Validation and Widely Applicable Information Criterion in Singular Learning Theory",
"Header 2": null,
"Header 3": "References",
"Header 4": null,
"Header 5": "[van der Vaart 96] A. W. van der Vaart, J. A. Wellner. Weak Convergence and Em- pirical Processes... | {
"chunk_type": "references"
} |
## **No-Regret Reductions** **for Imitation Learning and Structured Prediction**
**Stéphane Ross** (Robotics Institute, Carnegie Mellon University)
**Geoffrey J. Gordon** (Machine Learning Department, Carnegie Mellon University)
**James A. Bagnell** (Robotics Institute, Carnegie Mellon University)
Pittsburgh, PA 15213, USA | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": null,
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "title"
} |
### **Abstract**
Sequential prediction problems such as imitation
learning, where future observations depend on
previous predictions (actions), violate the common i.i.d. assumptions made in statistical
learning. This leads to poor performance both in
theory and often in practice. Some recent approaches (Daume 2009... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**Abstract**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **1 INTRODUCTION**
Sequence prediction problems arise commonly in practice.
For instance, most robotic systems must be able to predict/make a sequence of actions given a sequence of observations revealed to them over time. In complex robotic systems where standard control methods fail, we must often
resort to lea... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**1 INTRODUCTION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
starting from the first step, conditioned on the previously
learned policies. Unfortunately this is impractical when the
task horizon *T* is large or ill-defined. Another approach
called SMILe (Ross 2010a), similar to SEARN (Daume
2009) and CPI (Kakade 2002), involves training a stationary stochastic policy (a distribu... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**1 INTRODUCTION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
on two challenging imitation learning problems: 1) learning to steer a car in a 3D racing game ( *Super Tux Kart* )
and 2) learning to play *Super Mario Bros.*, given input image features and corresponding actions by a human
expert and near-optimal planner respectively. Following
(Daume 2009) in treating structured... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**1 INTRODUCTION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **2 PRELIMINARIES**
We begin by introducing notation relevant to our imitation
learning setting. We denote by Π the class of policies the
learner is considering and *T* the task horizon. For any policy *π*, we let $d^t_\pi$ denote the distribution of states at time *t*
if the learner executed policy *π* ... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**2 PRELIMINARIES**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
*ℓ* ( *s, ·* ) is convex in *π* for all states *s* .
We briefly review previous approaches and their perfor
mance guarantees.
**2.1** **Supervised Approach to Imitation**
The traditional approach to imitation learning ignores the
change in distribution and simply trains a policy *π* that performs well under the d... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**2 PRELIMINARIES**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
step *t* ) iteratively over *T* iterations, where at iteration *t*, *π* *t*
is trained to mimic *π* *[∗]* on the distribution of states at time *t* induced by the previously trained policies *π* 1 *, π* 2 *, . . ., π* *t−* 1 .
The key idea is that by doing so, *π* *t* is trained on the actual
distribution of states it w... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**2 PRELIMINARIES**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
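A sketch of the forward-training scheme described above; the helpers `rollout`, `expert_action`, and `fit_classifier` are hypothetical stand-ins for the environment, the expert π*, and the supervised learner.

```python
def forward_training(T, expert_action, rollout, fit_classifier, n_traj=20):
    """Train one policy per time step on its own induced state distribution.

    rollout(policies, t) is assumed to return a state at time t reached by
    executing policies[0..t-1] for the first t-1 steps, so pi_t is trained
    on exactly the distribution it will encounter at test time.
    """
    policies = []
    for t in range(T):
        # States at time t induced by the previously trained policies.
        states = [rollout(policies, t) for _ in range(n_traj)]
        # Supervised learning: mimic the expert on those states.
        labels = [expert_action(s) for s in states]
        policies.append(fit_classifier(states, labels))
    return policies  # non-stationary policy: use policies[t] at step t
```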
$= J(\pi^{*}) + uT\epsilon$
The inequality follows from the fact that since *ℓ* ( *s, π* ) is an
upper bound on the 0-1 loss of *π* in *s*, then with probability
$\frac{T}{2} - \frac{1 - (1 - 2\epsilon)^{T+1}}{4\epsilon} + \frac{1}{2}$ over sequences of length *T* at test time.
We can see this is bounde... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**2 PRELIMINARIES**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
interpreted as adding probability $\alpha(1-\alpha)^{n-1}$ to executing
policy $\hat\pi_n$ at any step and removing probability $\alpha(1-\alpha)^{n-1}$
of executing the queried expert’s action. At iteration *n*, *π* *n*
is a mixture of *n* policies and the probability of using the
queried expert’s action is (1 ... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**2 PRELIMINARIES**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **3 DATASET AGGREGATION**
We now present DAgger (Dataset Aggregation), an iterative algorithm that trains a deterministic policy that achieves
good performance guarantees under its induced distribution
of states.
In its simplest form, the algorithm proceeds as follows.
At the first iteration, it uses the expert... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**3 DATASET AGGREGATION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
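A minimal sketch of the dataset-aggregation loop just outlined, with hypothetical helpers `rollout_states`, `expert_action`, and `train`; the β_i mixture with the expert is simplified to β_i = I(i = 1), one valid choice noted in the text.

```python
def dagger(N, expert_action, rollout_states, train):
    """DAgger: aggregate expert-labeled data under the learner's own
    state distribution, then retrain on the union each iteration.

    rollout_states(policy) is assumed to return the states visited when
    executing `policy` for a full trajectory; at iteration 1 the expert
    itself is executed (beta_1 = 1).
    """
    dataset = []
    policy = expert_action  # iteration 1: execute the expert
    policies = []
    for i in range(N):
        states = rollout_states(policy)
        # Label every visited state with the expert's action.
        dataset += [(s, expert_action(s)) for s in states]
        policy = train(dataset)  # train on the aggregated dataset
        policies.append(policy)
    return policies  # the guarantee holds for the best policy in this list
```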
is handled with *β* *i* = *I* ( *i* = 1) for *I* the indicator function. The general DAgger algorithm is detailed in Algorithm 3.1. The main result of our analysis in the next
section is the following guarantee for DAgger. Let
*π* 1: *N* denote the sequence of policies *π* 1 *, π* 2 *, . . ., π* *N* . Assume *ℓ* is ... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**3 DATASET AGGREGATION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
best policy on the sampled trajectories, then using the Hoeffding bound leads to the following guarantee:
**Theorem 3.3.** *For* DAgger*, if N is* $O(T^2 \log(1/\delta))$ *and m is* $O(1)$ *then with probability at least* $1 - \delta$ *there exists a policy* $\hat\pi \in \hat\pi_{1:N}$ *s.t.* $E_{s\sim d_{\hat\pi}}[\ell(s,$... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**3 DATASET AGGREGATION**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **4 THEORETICAL ANALYSIS**
The theoretical analysis of DAgger only relies on the no-regret property of the underlying *Follow-The-Leader* algorithm on strongly convex losses (Kakade 2009) which
picks the sequence of policies $\hat\pi_{1:N}$. Hence the presented
results also hold for *any* other no-regret online lea... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**4 THEORETICAL ANALYSIS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
To do so, we choose the loss functions to be the loss under
the distribution of states of the current policy chosen by the
online algorithm: $\ell_i(\pi) = E_{s\sim d_{\pi_i}}[\ell(s,\pi)]$.
Let $\epsilon_N = \min_{\pi\in\Pi} \frac{1}{N}\sum_{i=1}^{N} E_{s\sim d_{\pi_i}}[\ell(s,\pi)]$ denote the
loss of the bes... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**4 THEORETICAL ANALYSIS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
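Under this choice of loss, a Follow-The-Leader learner picks $\hat\pi_{i+1} = \operatorname{argmin}_\pi \sum_{j\le i}\ell_j(\pi)$, and $\epsilon_N$ is the best fixed policy's average loss. A small numpy sketch over a finite policy class; the loss matrix `L` is a hypothetical stand-in for the expectations $E_{s\sim d_{\pi_i}}[\ell(s,\pi)]$.

```python
import numpy as np

def ftl_and_eps(L):
    """Follow-The-Leader over a finite policy class.

    L: (N, K) array, L[i, k] = loss of candidate policy k under the state
    distribution of the policy played at iteration i.
    Returns the FTL pick after each round and
    eps_N = min_k (1/N) sum_i L[i, k].
    """
    N, _ = L.shape
    # Policy chosen after observing the first i+1 losses.
    picks = [int(np.argmin(L[: i + 1].sum(axis=0))) for i in range(N)]
    eps_N = float(L.mean(axis=0).min())
    return picks, eps_N
```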
$2\ell_{\max}\min(1, T\beta_i)$. Then:
$\min_{\hat\pi\in\hat\pi_{1:N}} E_{s\sim d_{\hat\pi}}[\ell(s,\hat\pi)]$
$\le \frac{1}{N}\sum_{i=1}^{N} E_{s\sim d_{\hat\pi_i}}[\ell(s,\hat\pi_i)]$
$\le \frac{1}{N}\sum_{i=1}^{N}\big[E_{s\sim d_{\pi_i}}(\ell(s,\hat\pi_i)) + 2\ell_{\max}\min(1,$... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**4 THEORETICAL ANALYSIS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
dataset of those *m* trajectories. In this case, the online
learning algorithm guarantees $\frac{1}{N}\sum_{i=1}^{N} E_{s\sim D_i}(\ell(s,\pi_i)) -$
$\min_{\pi\in\Pi}\frac{1}{N}\sum_{i=1}^{N} E_{s\sim D}$... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**4 THEORETICAL ANALYSIS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
$\le \frac{1}{N}\sum_{i=1}^{N} E_{s\sim D_i}[\ell(s,\hat\pi_i)] + \ell_{\max}\sqrt{\frac{2\log(1/\delta)}{mN}} + \frac{2\ell_{\max}}{N}\big[n_\beta + T\textstyle\sum_{i=n_\beta+1}^{N}\beta_i\big]$
$\le \hat\epsilon_N + \gamma_N + \ell_{\max}\sqrt{\frac{2\log(1/\delta)}{mN}} + \frac{2\ell_{\max}}{N}\big[n_\beta + T$... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**4 THEORETICAL ANALYSIS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **5 EXPERIMENTS**
To demonstrate the efficacy and scalability of DAgger,
we apply it to two challenging imitation learning problems
and a sequence labeling task (handwriting recognition).
**5.1** **Super Tux Kart**
Super Tux Kart is an open source 3D racing game similar
to the popular Mario Kart. Our goal is... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
learn how to recover from mistakes it makes. With SMILe
we obtain some improvements but the policy after 20 iterations still falls off the track about twice per lap on average. This is in part due to the stochasticity of the policy,
which sometimes makes bad choices of actions. For DAgger, we were able to obtain a po... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |

Figure 3: Captured image from Mario Bros.
base learner which update the 4 binary actions at 5Hz; i.e.
given the vector of image features [6] *x*, the *k* *[th]* output binary
6 For the input features, each ima... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
not help the particular errors the learned controller makes.
In particular, a reason the supervised approach gets such
a low score is that under the learned controller, Mario is
blocks and other special items). We use a history of those features over the last 4 images as input, in addition to other features
describin... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
52000 characters) partitioned in 10 folds. The image of
each character is 8x16 binary pixels (128 input features).
We consider the large data-set experiment which consists
of training on 9 folds and testing on 1 fold and repeating
this over all folds. Performance is measured in terms of the
character accuracy on the... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
policy iteration approach performs very well on this experiment and better than using a small *α* = 0 *.* 1. We believe
this is due to the fact that only a small part of the input
is influenced by the current policy (the previous predicted
character feature), so this makes this approach not as unstable as in general rein... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**5 EXPERIMENTS**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
### **6 FUTURE WORK**
We show that by batching over iterations of interaction
with a system, no-regret methods, including the presented
DAgger approach, can provide a learning reduction with
strong performance guarantees in both imitation learning
and structured prediction. In future work, we will consider
more sophi... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**6 FUTURE WORK**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
S. Kakade and J. Langford (2002). Approximately Optimal
Approximate Reinforcement Learning. In *Proceedings of*
*the International Conference on Machine Learning* .
S. Kakade and S. Shalev-Shwartz (2008). Mind the duality
gap: Logarithmic regret algorithms for online optimization.
In *Advances in Neural Information P... | {
"id": "1011.0686",
"title": "A Reduction of Imitation Learning and Structured Prediction to No-Regret\n Online Learning",
"categories": [
"cs.LG",
"cs.AI",
"stat.ML"
]
} | {
"Header 1": null,
"Header 2": "**No-Regret Reductions** **for Imitation Learning and Structured Prediction**",
"Header 3": "**6 FUTURE WORK**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
## A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning
#### Eric Brochu, Vlad M. Cora and Nando de Freitas December 14, 2010
**Abstract**
We present a tutorial on Bayesian optimization, a method of finding
the maximum of... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": null,
"Header 4": "Eric Brochu, Vlad M. Cora and Nando de Freitas December 14, 2010",
"Header 5": null
} | {
"chunk_type": "title"
} |
### **1 Introduction**
An enormous body of scientific literature has been devoted to the problem of
optimizing a nonlinear function *f* ( **x** ) over a compact set *A* . In the realm of
optimization, this problem is formulated concisely as follows:
$\max_{\mathbf{x} \in A \subset \mathbb{R}^d} f(\mathbf{x})$
One typically ass... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**1 Introduction**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **1.1 An Introduction to Bayesian Optimization**
Bayesian optimization is a powerful strategy for finding the extrema of objective
functions that are expensive to evaluate. It is applicable in situations where one
does not have a closed-form expression for the objective function, but where one
can obtain observa... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**1 Introduction**",
"Header 4": "**1.1 An Introduction to Bayesian Optimization**",
"Header 5": null
} | {
"chunk_type": "body"
} |
data that barely deviate from the mean. Now, we can combine these to obtain
our posterior distribution:
$P(f \mid \mathcal{D}_{1:t}) \propto P(\mathcal{D}_{1:t} \mid f)\, P(f).$
1 Here we use subscripts to denote sequences of data, i.e. *y* 1: *t* = *{y* 1 *, . . ., y* *t* *}* .
and describe covariance functions ( *§* 2.2), acquisition functions ( *§* 2.3) and the role
of Gaussian noise ( *§* 2.4). In *§* ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**1 Introduction**",
"Header 4": "**1.2 Overview**",
"Header 5": null
} | {
"chunk_type": "body"
} |
### **2 The Bayesian Optimization Approach**
Optimization is a broad and fundamental field of mathematics. In order to
harness it to our ends, we need to narrow it down by defining the conditions we
are concerned with.
Our first restriction is to simply specify that the form of the problem we
are concerned with is ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
proaches include interval optimization and branch and bound methods. *Stochastic approximation* is a popular idea for optimizing unknown objective functions in machine learning contexts [Kushner and Yin, 1997]. It is the core
idea in most reinforcement learning algorithms [Bertsekas and Tsitsiklis, 1996,
Sutton and... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
deciding where to sample next requires the choice of a utility function and a
**Algorithm 1** Bayesian Optimization
1: **for** *t* = 1 *,* 2 *, . . .* **do**
2: Find $\mathbf{x}_t$ by optimizing the acquisition function over the GP: $\mathbf{x}_t = \operatorname{argmax}_{\mathbf{x}} u(\mathbf{x} \mid \mathcal{D}_{1:t-1})$.
3: Sa... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
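A compact sketch of the Algorithm 1 loop, with the argmax over x approximated by a search over a finite candidate set; `gp_fit_predict` is a hypothetical GP posterior helper, and `u` can be any acquisition function of (mean, std, incumbent).

```python
import numpy as np

def bayes_opt(f, u, gp_fit_predict, domain, n_iter=30, seed=0):
    """Sequential model-based optimization in the style of Algorithm 1.

    f: expensive objective; u(mu, sigma, y_best): acquisition over candidates;
    gp_fit_predict(X, y, Xq) -> (mu, sigma): assumed GP posterior helper;
    domain: (n_cand, d) array of candidate points standing in for argmax_x.
    """
    rng = np.random.default_rng(seed)
    X = [domain[rng.integers(len(domain))]]  # one random initial point
    y = [f(X[0])]
    for t in range(n_iter):
        mu, sigma = gp_fit_predict(np.array(X), np.array(y), domain)
        x_next = domain[int(np.argmax(u(mu, sigma, max(y))))]
        X.append(x_next)   # step 2: optimize the acquisition
        y.append(f(x_next))  # step 3: sample the objective, augment the data
    best = int(np.argmax(y))
    return X[best], y[best]
```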
#### **2.1 Priors over functions**
Any Bayesian method depends on a prior distribution, by definition. A Bayesian
optimization method will converge to the optimum if (i) the acquisition function is continuous and approximately minimizes the risk (defined as the expected deviation from the global minimum at a fixed po... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.1 Priors over functions**",
"Header 5": null
} | {
"chunk_type": "body"
} |
instead of returning a scalar *f* ( **x** ) for an arbitrary **x**, it returns the mean and
variance of a normal distribution (Figure 2) over the possible values of *f* at **x** .
Stochastic processes are sometimes called “random functions”, by analogy to
random variables.
For convenience, we assume here that the pri... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.1 Priors over functions**",
"Header 5": null
} | {
"chunk_type": "body"
} |
by the properties of Gaussian processes, **f** 1: *t* and *f* *t* +1 are jointly Gaussian:
$\begin{bmatrix} \mathbf{f}_{1:t} \\ f_{t+1} \end{bmatrix} \sim \mathcal{N}\left(\mathbf{0},\; \begin{bmatrix} \mathbf{K} & \mathbf{k} \\ \mathbf{k}^{T} & k(\mathbf{x}_{t+1}, \mathbf{x}_{t+1}) \end{bmatrix}\right),$
where
$\mathbf{k} = \begin{bmatrix} k(\mathbf{x}_{t+1}, \mathbf{x}_1) & k(\mathbf{x}_{t+1}, \mathbf{x}_2) & \cdots & k(\mathbf{x}_{t+1}, \mathbf{x}$... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.1 Priors over functions**",
"Header 5": null
} | {
"chunk_type": "body"
} |
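Conditioning the joint Gaussian above gives the usual predictive equations $\mu_{t+1} = \mathbf{k}^T\mathbf{K}^{-1}\mathbf{f}_{1:t}$ and $\sigma^2_{t+1} = k(\mathbf{x}_{t+1},\mathbf{x}_{t+1}) - \mathbf{k}^T\mathbf{K}^{-1}\mathbf{k}$; a minimal noise-free sketch via a Cholesky solve (the jitter term is a numerical-stability assumption, not part of the model).

```python
import numpy as np

def gp_predict(K, k_vec, k_ss, f):
    """Posterior mean/variance of f(x_{t+1}) given observations f_{1:t}.

    K: (t, t) kernel matrix of observed points; k_vec: (t,) vector of
    k(x_{t+1}, x_i); k_ss: scalar k(x_{t+1}, x_{t+1}); f: (t,) observations.
    """
    L = np.linalg.cholesky(K + 1e-10 * np.eye(len(K)))  # stable K^{-1}
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, f))
    v = np.linalg.solve(L, k_vec)
    mu = k_vec @ alpha   # k^T K^{-1} f
    var = k_ss - v @ v   # k(x,x) - k^T K^{-1} k
    return mu, var
```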
#### **2.2 Choice of covariance functions**
The choice of covariance function for the Gaussian Process is crucial, as it
determines the smoothness properties of samples drawn from it. The squared
exponential kernel in Eqn (1) is actually a little naive, in that divergences of all
features of **x** affect the covarian... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.2 Choice of covariance functions**",
"Header 5": null
} | {
"chunk_type": "body"
} |
$k(\mathbf{x}_i, \mathbf{x}_j) = \frac{1}{2^{\varsigma-1}\Gamma(\varsigma)}\,(2\sqrt{\varsigma}\,\|\mathbf{x}_i - \mathbf{x}_j\|)^{\varsigma}\, H_{\varsigma}(2\sqrt{\varsigma}\,\|\mathbf{x}_i - \mathbf{x}_j\|),$
where Γ( *·* ) and *H* *ς* ( *·* ) are the Gamma function and the Bessel function of order *ς* .
Note that... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.2 Choice of covariance functions**",
"Header 5": null
} | {
"chunk_type": "body"
} |
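A sketch of both covariance functions discussed in this subsection. The Matérn form follows the equation above, reading its Bessel factor as the modified Bessel function of the second kind $K_\varsigma$ (the standard Matérn convention); the SE length scale `theta` is an assumed hyperparameter.

```python
import numpy as np
from scipy.special import gamma, kv

def k_se(xi, xj, theta=1.0):
    """Squared exponential kernel: exp(-0.5 * ||xi - xj||^2 / theta^2)."""
    r2 = np.sum((np.asarray(xi) - np.asarray(xj)) ** 2)
    return np.exp(-0.5 * r2 / theta**2)

def k_matern(xi, xj, s=2.5):
    """Matern kernel with smoothness parameter s (varsigma in the text)."""
    r = np.linalg.norm(np.asarray(xi) - np.asarray(xj))
    if r == 0.0:
        return 1.0  # limit of the expression as r -> 0
    z = 2.0 * np.sqrt(s) * r
    return (1.0 / (2 ** (s - 1) * gamma(s))) * z**s * kv(s, z)
```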
#### **2.3 Acquisition Functions for Bayesian Optimization**
Now that we have discussed placing priors over smooth functions and how to
update these priors in light of new observations, we will focus our attention on
the acquisition component of Bayesian optimization. The role of the acquisition
function is to guide ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
domains [Törn and Žilinskas, 1989, Jones, 2001, Lizotte, 2008].
An appealing characteristic of this formulation for perceptual and preference
models is that while maximizing PI( *·* ) is still greedy, it selects the point most
likely to offer an improvement of at least *ξ* . This can be useful in psychoperceptua... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
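The probability-of-improvement acquisition with the ξ trade-off parameter mentioned above has the standard closed form $\mathrm{PI}(\mathbf{x}) = \Phi\big((\mu(\mathbf{x}) - f(\mathbf{x}^{+}) - \xi)/\sigma(\mathbf{x})\big)$; a small sketch:

```python
import numpy as np
from scipy.stats import norm

def probability_of_improvement(mu, sigma, f_best, xi=0.01):
    """PI(x) = Phi((mu(x) - f(x+) - xi) / sigma(x)), maximization convention.

    mu, sigma: posterior mean/std at the candidate points (arrays);
    f_best: incumbent value f(x+); xi: minimum improvement sought.
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero variance
    z = (mu - f_best - xi) / sigma
    return norm.cdf(z)
```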
Note that this decision process is myopic in that it only considers one-step-ahead choices. However, if we want to plan two steps ahead, we can easily
apply recursion:
$\mathbf{x}_{t+1} = \operatorname{argmin}_{\mathbf{x}} \mathbb{E}\left[\min_{\mathbf{x}'} \mathbb{E}(\|f_{t+2}(\mathbf{x}') - f(\mathbf{x}^{\star})\| \mid \mathcal{D}$... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
It should be said that being myopic is not a requirement here. For example, it is possible to derive analytical expressions for the two-step ahead
expected improvement [Ginsbourger *et al.*, 2008] and multistep Bayesian optimization [Garnett *et al.*, 2010b]. This is indeed a very promising recent direction.
**2.3.2*... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
$\mathrm{UCB}(\mathbf{x}) = \mu(\mathbf{x}) + \kappa\,\sigma(\mathbf{x}).$
[Figure: acquisition function panels for ξ = 0.01, 0.10, 1.00 and ν = 0.2, ...] | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
that this method is *no regret*, i.e. $\lim_{T\to\infty} R_T/T = 0$, where $R_T$ is the cumulative
regret
$R_T = \sum_{t=1}^{T} \left( f(\mathbf{x}^{\star}) - f(\mathbf{x}_t) \right).$
This in turn implies a lower-bound on the convergence rate for the optimization
problem.
Figures 5 and 6 show how with the same GP poster... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.3 Acquisition Functions for Bayesian Optimization**",
"H... | {
"chunk_type": "body"
} |
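The cumulative regret just defined is straightforward to track on synthetic benchmarks where $f(\mathbf{x}^\star)$ is known; a small sketch that also reports the average regret $R_T/T$, which should tend to zero for a no-regret method:

```python
import numpy as np

def cumulative_regret(f_values, f_star):
    """R_T = sum_{t=1}^T (f(x*) - f(x_t)); no regret iff R_T / T -> 0.

    f_values: sequence of observed f(x_t); f_star: optimal value f(x*).
    """
    gaps = f_star - np.asarray(f_values, dtype=float)
    R = np.cumsum(gaps)                     # R_1, ..., R_T
    return R, R / np.arange(1, len(R) + 1)  # also the average regret R_T/T
```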
#### **2.4 Noise**
The model we’ve used so far assumes that we have perfectly noise-free observations. In real life, this is rarely possible, and instead of observing *f* ( **x** ), we can
often only observe a noisy transformation of *f* ( **x** ).
The simplest transformation arises when *f* ( **x** ) is corrupted wi... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.4 Noise**",
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **2.5 A brief history of Bayesian optimization**
The earliest work we are aware of resembling the modern Bayesian optimization
approach is that of Kushner [1964], who used Wiener processes for
unconstrained one-dimensional problems. Kushner’s decision model was based
on maximizing the probability of im... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.5 A brief history of Bayesian optimization**",
"Header 5... | {
"chunk_type": "body"
} |
There exist several consistency proofs for this algorithm in the one-dimensional
setting [Locatelli, 1997] and one for a simplification of the algorithm using simplicial partitioning in higher dimensions [Žilinskas and Žilinskas, 2002]. The
convergence of the algorithm using multivariate Gaussian processes has ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.5 A brief history of Bayesian optimization**",
"Header 5... | {
"chunk_type": "body"
} |
#### **2.6 Kriging**
Kriging has been used in geostatistics and environmental science since the 1950s
and remains important today. We will briefly summarize the connection to
Bayesian optimization here. More detailed examinations can be found in, for
example, [Stein, 1999, Sasena, 2002, Diggle and Ribeiro, 2007]. Thi... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.6 Kriging**",
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **2.7 Experimental design**
Kriging has been applied to experimental design under the name *DACE*, after
“Design and Analysis of Computer Experiments”, the title of a paper by Sacks
*et al.* [1989] (and more recently a book by Santner *et al.* [2003]). In DACE, the
regression model is a best linear unbiased pred... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.7 Experimental design**",
"Header 5": null
} | {
"chunk_type": "body"
} |
E-optimality, which minimizes the maximum eigenvalue.
Experimental design is usually non-adaptive: the entire experiment is designed before data is collected. However, *sequential design* is an important and
active subfield (e.g. [Williams *et al.*, 2000, Busby, 2009]). | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.7 Experimental design**",
"Header 5": null
} | {
"chunk_type": "body"
} |
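The alphabetic criteria mentioned in this subsection reduce to scalar functionals of the design's parameter covariance matrix; a small sketch under that reading (A: total variance, D: generalized variance, E: the text's maximum-eigenvalue criterion), all to be minimized over candidate designs:

```python
import numpy as np

def design_criteria(C):
    """Alphabetic optimality criteria on a parameter covariance matrix C.

    A-optimality: total variance, trace(C)
    D-optimality: generalized variance, det(C)
    E-optimality: worst direction, the maximum eigenvalue of C
    """
    C = np.asarray(C, dtype=float)
    return {
        "A": float(np.trace(C)),
        "D": float(np.linalg.det(C)),
        "E": float(np.linalg.eigvalsh(C).max()),
    }
```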
#### **2.8 Active learning**
Active learning is another area related to Bayesian optimization, and of particular relevance to our task. Active learning is closely related to experimental design and, indeed, the decision to describe a particular problem as active
learning or experimental design is often... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.8 Active learning**",
"Header 5": null
} | {
"chunk_type": "body"
} |
Interesting recent work with GPs that straddles the boundary between active learning and experimental design is the sensor placement problem of Krause
*et al.* [2008]. They examine several criteria, including maximum entropy, and
argue for using mutual information. Ideally, they would like to simultaneously
select a se... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.8 Active learning**",
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **2.9 Applications**
Bayesian optimization has recently begun to appear in the machine learning
literature as a means of optimizing difficult black-box objectives. A few
recent examples include:
*•* Lizotte *et al.* [2007, 2008] used Bayesian optimization to learn a set of
robot gait parameters that maximiz... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**2 The Bayesian Optimization Approach**",
"Header 4": "**2.9 Applications**",
"Header 5": null
} | {
"chunk_type": "body"
} |
### **3 Bayesian Optimization for Preference Galleries**
The model described above requires that each function evaluation have a scalar
response. However, this is not always the case. In applications requiring human judgement, for instance, preferences are often more accurate than ratings. Prospect theo... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
psychology and econometrics [Thurstone, 1927, McFadden, 1980, Stern, 1990].
They have been studied extensively, for example, in rating chess players, and
the Elo system [Élő, 1978] was adopted by the World Chess Federation FIDE to
model the probability of one player beating another. It has since been adopted
to ma... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **3.1 Probit model for binary observations**
The *probit model* allows us to deal with binary observations of *f* ( *·* ) in general.
That is, every time we try a value of **x**, we get back a binary variable, say either
zero or one. From the binary observations, we have to infer the latent function
*f* ( *·* )... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.1 Probit model for binary observations**",
"... | {
"chunk_type": "body"
} |
$$P(\mathbf{r}_i \succ \mathbf{c}_i \mid f(\mathbf{r}_i), f(\mathbf{c}_i)) = P(v(\mathbf{r}_i) > v(\mathbf{c}_i) \mid f(\mathbf{r}_i), f(\mathbf{c}_i)) = P(\varepsilon_c - \varepsilon_r < f(\mathbf{r}_i) - f(\mathbf{c}_i)) = \Phi(Z_i),$$
where
$$Z_i = \frac{f(\mathbf{r}_i) - f(\mathbf{c}_i)}{\sqrt{2}\,\sigma_{\text{noise}}}.$$ ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.1 Probit model for binary observations**",
"... | {
"chunk_type": "body"
} |
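To make the probit likelihood above concrete, here is a minimal Python sketch; the function name, the default `sigma_noise`, and the example inputs are our own illustrations, not from the paper:

```python
import numpy as np
from scipy.stats import norm

def preference_likelihood(f_r, f_c, sigma_noise=0.1):
    """P(r > c | f(r), f(c)) = Phi(Z) under the probit model: valuation
    noise on each item makes the comparison noise sqrt(2)*sigma_noise."""
    z = (f_r - f_c) / (np.sqrt(2.0) * sigma_noise)
    return norm.cdf(z)

# An item with higher latent utility is preferred with probability > 0.5:
print(preference_likelihood(1.0, 0.5))
```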
where $\mathbf{g} = \nabla_{\mathbf{f}} \log P(\mathbf{f} \mid \mathcal{D})$ and $\mathbf{H} = -\nabla_{\mathbf{f}}\nabla_{\mathbf{f}} \log P(\mathbf{f} \mid \mathcal{D})$. At the mode of the
posterior ($\hat{\mathbf{f}} = \mathbf{f}_{\text{MAP}}$), the gradient $\mathbf{g}$ vanishes, and we obtain:
$$P(\mathbf{f} \mid \mathcal{D}) \approx P(\hat{\mathbf{f}} \mid \mathcal{D}) \exp\left[-\tfrac{1}{2}(\mathbf{f} - \hat{\mathbf{f}})^{\top}\mathbf{H}(\mathbf{f} - \hat{\mathbf{f}})\right]... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.1 Probit model for binary observations**",
"... | {
"chunk_type": "body"
} |
$$= \frac{1}{2\sigma^{2}} \sum_{i=1}^{M} h_i(\mathbf{x}_m)\, h_i(\mathbf{x}_n) \left[\frac{\phi^{2}(Z_i)}{\Phi^{2}(Z_i)} + \frac{\phi(Z_i)}{\Phi(Z_i)}\, Z_i\right]$$
The Hessian is a positive semi-definite matrix. Hence, one can find the MAP
estimate with a simple Newton–Raphson recursion:
$$\mathbf{f}^{\text{new}} = \mathbf{f}^{\text{old}}... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.1 Probit model for binary observations**",
"... | {
"chunk_type": "body"
} |
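A minimal sketch of that Newton–Raphson recursion, assuming callables `grad_log_post` and `hess_neg_log_post` supplied by the model; this is a generic ascent update consistent with the definitions of **g** and **H** above, not the paper's code:

```python
import numpy as np

def newton_raphson_map(f0, grad_log_post, hess_neg_log_post,
                       tol=1e-8, max_iter=50):
    """Iterate f <- f + H^{-1} g until the step is tiny, where
    g = gradient of log P(f|D) and H = -Hessian of log P(f|D) (PSD)."""
    f = np.asarray(f0, dtype=float)
    for _ in range(max_iter):
        g = grad_log_post(f)
        H = hess_neg_log_post(f)
        # A small jitter keeps the solve stable when H is only semi-definite.
        step = np.linalg.solve(H + 1e-9 * np.eye(len(f)), g)
        f = f + step
        if np.linalg.norm(step) < tol:
            break
    return f  # f_MAP
```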
#### **3.2 Application: Interactive Bayesian optimization for material design**
[Figure: target image and preference-gallery candidates for the material design application; panel label *Target*.]... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.2 Application: Interactive Bayesian optimizati... | {
"chunk_type": "body"
} |
using a preference gallery approach, in which users are simply required to view
two or more images rendered with different material properties and indicate
which they prefer, in an iterative process.
We use the interactive Bayesian optimization model with probit responses on
an example gallery application for helping u... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.2 Application: Interactive Bayesian optimizati... | {
"chunk_type": "body"
} |
MERL database that we deemed to be representative of the appearance space of
the measured materials. The user is given the task of finding a single randomly selected image from that set by indicating preferences. Figure 9 shows a typical
user run, where we ask the user to use the preference gallery to find a provided
ta... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**3 Bayesian Optimization for Preference Galleries**",
"Header 4": "**3.2 Application: Interactive Bayesian optimizati... | {
"chunk_type": "body"
} |
### **4 Bayesian Optimization for Hierarchical Control**
In general, problem solving and planning becomes easier when it is broken down
into subparts. Variants of functional hierarchies appear consistently in video
³An empirical study of various methods on a variety of test functions, and a discussion of why... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
We demonstrate an integration of the MAXQ hierarchical task learner with
Bayesian active exploration that significantly speeds up the learning process,
applied to hybrid discrete and continuous state and action spaces. Section 4.2
describes an extended Taxi domain, running under The Open Racing Car Simulator [Wymann *e... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
#### **4.1 Hierarchical Reinforcement Learning**
Manually coding hierarchical policies is the mainstay of video game AI development. For automated HRL to be a viable solution, it must be easy to customize task-specific implementations, state abstractions, reward models, and termination criteria, and it... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.1 Hierarchical Reinforcement Learning**"... | {
"chunk_type": "body"
} |
designer. An automatic solution to this problem would be an agent that can
learn how to program, and anything less than that will have limited applicability.
We can use Bayesian optimization to learn the relevant aspects of value
functions by focusing on the most relevant parts of the parameter space. In the
work on th... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.1 Hierarchical Reinforcement Learning**"... | {
"chunk_type": "body"
} |
learned by the agent (or used to explore during learning). Each task is effectively a separate, decomposed SMDP that has allowed us to integrate active
learning for discrete map navigation with continuous low-level vehicle control.
This is accomplished by decomposing the *Q* function into two parts:
$a = \pi_i($... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.1 Hierarchical Reinforcement Learning**"... | {
"chunk_type": "body"
} |
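The truncated equation refers to the two-part (MAXQ-style) split $Q(i, s, a) = V(a, s) + C(i, s, a)$: the value of executing child task $a$ from state $s$, plus the completion value of finishing parent task $i$ afterwards. A minimal sketch of how greedy action selection uses this split, with dictionary-backed tables; this illustrates the standard decomposition, not the paper's implementation:

```python
def q_value(i, s, a, V, C):
    """MAXQ two-part decomposition: Q(i, s, a) = V(a, s) + C(i, s, a)."""
    return V[(a, s)] + C[(i, s, a)]

def greedy_action(i, s, child_tasks, V, C):
    """pi_i(s): choose the child task that maximizes the decomposed Q."""
    return max(child_tasks, key=lambda a: q_value(i, s, a, V, C))
```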
policy (computed by the HAR [Ghavamzadeh, 2005] and HAM [Andre, 2003]
three-part value decompositions) would pick the exit that minimizes total travelling time, given the destination. A recursively optimal learning algorithm, however, generalizes subtasks more easily, since they depend only on the local state, ignoring what would... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.1 Hierarchical Reinforcement Learning**"... | {
"chunk_type": "body"
} |
#### **4.2 Application: The Vancouver Taxi Domain**
Our domain is a city map roughly based on a portion of downtown Vancouver,
British Columbia, illustrated in Figure 10. The data structure is a topological
map (a set of intersection nodes and adjacency matrix) with 61 nodes and 22
possible passenger pickup and drop-... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.2 Application: The Vancouver Taxi Domain... | {
"chunk_type": "body"
} |
on the trajectory and vehicle

| Variable | Units | Description |
|---|---|---|
| $V_y$ | meters/second | lateral velocity (to detect drift) |
| $V_{err}$ | meters/second | error between desired and real speed |
| $\Omega_{err}$ | radians | error between trajectory angle and vehicle yaw |

**4.2.1 State Abstraction, Termination and Rewards**
Figure 11 compares the original task hierarchy, ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.2 Application: The Vancouver Taxi Domain... | {
"chunk_type": "body"
} |
$Z_{Pickup} = \{LegalLoad, Stopped\}$.
**Dropoff** - this is a primitive action, with a reward of 1500 if successful, and
$-2500$ if a dropoff is invalid. $Z_{Dropoff} = \{LegalLoad, Stopped\}$.
**Navigate** - this task learns the sequence of intersections from the current
*TaxiLoc* to a target destination *T* . ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.2 Application: The Vancouver Taxi Domain... | {
"chunk_type": "body"
} |
#### **4.3 Bayesian Optimization for Hierarchical Policies**
The objective of Bayesian optimization is to learn properties of the value function or policy with as few samples as possible. In direct policy search, where
this idea has been explored previously [Martinez–Cantin *et al.*, 2007], the evaluation of the expe... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.3 Bayesian Optimization for Hierarchical... | {
"chunk_type": "body"
} |
4: Evaluate $V_{N+1} = V(\mathbf{x}_{N+1})$ and halt if a stopping criterion is met.
5: Augment the data $\mathcal{D}_{1:N+1} = \{\mathcal{D}_{1:N}, (\mathbf{x}_{N+1}, V_{N+1})\}$.
6: $N = N + 1$ and go to step 2.
The lowest level *Drive* task uses the parameterized function illustrated in
Figure 12 to generate continu... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.3 Bayesian Optimization for Hierarchical... | {
"chunk_type": "body"
} |
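Putting steps 2–6 together, the outer loop has the usual Bayesian-optimization shape. A sketch under stated assumptions: `fit_gp`, `maximize_acquisition`, `evaluate_policy`, and `stopping_criterion` are hypothetical helpers standing in for the GP fit, the acquisition maximizer, an episode rollout, and the halt test:

```python
import numpy as np

def bo_policy_search(evaluate_policy, X_init, V_init, budget=100):
    """Fit a GP on D_{1:N}, propose x_{N+1}, evaluate V_{N+1},
    augment the data, and repeat until the budget or halt test is hit."""
    X, V = list(X_init), list(V_init)          # D_{1:N}
    for _ in range(budget):
        gp = fit_gp(X, V)                      # step 2: posterior over V(x)
        x_next = maximize_acquisition(gp)      # step 3: e.g. expected improvement
        v_next = evaluate_policy(x_next)       # step 4: V_{N+1} = V(x_{N+1})
        X.append(x_next); V.append(v_next)     # step 5: augment D_{1:N+1}
        if stopping_criterion(V):              # halt test from step 4
            break                              # step 6 otherwise loops
    return X[int(np.argmax(V))]                # best parameters found
```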
**4.3.2** **Active Value Function Learning**
The *Navigate* task learns path finding from any intersection in the topological
map to any of the destinations. Although this task operates on a discrete set
of waypoints, the underlying map coordinates are continuous, and we can again
apply active exploration with GPs.
U... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.3 Bayesian Optimization for Hierarchical... | {
"chunk_type": "body"
} |
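One way to realize that active exploration, sketched here with scikit-learn; the library, the RBF kernel, and the length scale are our assumptions, as the paper only specifies GP-based exploration over the continuous map coordinates:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_intersection(coords_seen, values_seen, candidates):
    """Fit a GP to values observed at map coordinates and probe the
    candidate intersection where the posterior is most uncertain."""
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=100.0))
    gp.fit(np.asarray(coords_seen), np.asarray(values_seen))
    _, std = gp.predict(np.asarray(candidates), return_std=True)
    return candidates[int(np.argmax(std))]
```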
20: $V'_s \leftarrow (1 - \alpha) \times V'_s + \alpha \times R$
21: **end for**
22: **else**
23: **append** $\{s, N, R\}$ onto the front of *intersections*
24: $visits(TaxiLoc_s) \leftarrow visits(TaxiLoc_s) + 1$
25: $penalty \leftarrow V_s \times visits(TaxiLoc_s)$ {prevent loops ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.3 Bayesian Optimization for Hierarchical... | {
"chunk_type": "body"
} |
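Lines 24–25 implement a simple loop-prevention heuristic: every revisit of an intersection inflates its penalty, pushing the planner toward unvisited nodes. A minimal sketch; the dictionary bookkeeping is our illustration:

```python
def loop_penalty(V, visits, s):
    """visits(TaxiLoc_s) <- visits(TaxiLoc_s) + 1;
    penalty <- V_s * visits(TaxiLoc_s)."""
    visits[s] = visits.get(s, 0) + 1
    return V[s] * visits[s]
```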
#### **4.4 Simulations**
The nature of the domain requires that we run policy optimization first to
train the *Follow* task. This is reasonable, since the agent cannot be expected
to learn map navigation before learning to drive the car. Figure 13 compares the results of three different values for the GP kernel *k*, ... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**4 Bayesian Optimization for Hierarchical Con-** **trol**",
"Header 4": "**4.4 Simulations**",
"Header 5": null
} | {
"chunk_type": "body"
} |
### **5 Discussion and advice to practitioners**
Bayesian optimization is a powerful tool for machine learning, where the problem is often not acquiring data, but acquiring labels. In many ways, it is like
conventional active learning, but instead of acquiring training data for classification or regression, it allows... | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": null,
"Header 2": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning",
"Header 3": "**5 Discussion and advice to practitioners**",
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
[Figure 13 plot; x-axis: # of Parameter Samples]
Figure 13: ***Active Policy Optimizer:*** *searching for the 15 policy parameters, and comparing different values for the GP kernel size k. We used the Expected Improvement function³, and the three experiments are initialized with the same set of 30 Latin hypercube samples. A tot...* | {
"id": "1012.2599",
"title": "A Tutorial on Bayesian Optimization of Expensive Cost Functions, with\n Application to Active User Modeling and Hierarchical Reinforcement Learning",
"categories": [
"cs.LG"
]
} | {
"Header 1": "of Parameter Samples",
"Header 2": null,
"Header 3": null,
"Header 4": null,
"Header 5": null
} | {
"chunk_type": "body"
} |
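For reference, a minimal sketch of the expected improvement acquisition used in these experiments, in the common maximization form with exploration margin `xi`; the exact parameterization used in the experiments is not shown in this excerpt:

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.01):
    """EI(x) = (mu - f_best - xi) * Phi(Z) + sigma * phi(Z),
    with Z = (mu - f_best - xi) / sigma, for maximization."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive std
    z = (mu - f_best - xi) / sigma
    return (mu - f_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)
```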