id
stringlengths 36
36
| source
stringclasses 15
values | formatted_source
stringclasses 13
values | text
stringlengths 2
7.55M
|
|---|---|---|---|
07973659-2ec7-4487-b8ff-75a49828af77
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Modifying Universal Intelligence Measure
In 2007, Legg and Hutter wrote a paper using the AIXI model to define a measure of intelligence. It's pretty great, but I can think of some directions of improvement.
* Reinforcement learning. I think this term and formalism are historically from much simpler agent models which actually depended on being reinforced to learn. In its present form (Hutter 2005 section 4.1) it seems arbitrarily general, but it still feels kinda gross to me. Can we formalize AIXI and the intelligence measure in terms of utility functions, instead? And perhaps prove them equivalent?
* Choice of Horizon. AIXI discounts the future by requiring that total future reward is bounded, and therefore so does the intelligence measure. This seems to me like a constraint that does not reflect reality, and possibly an infinitely important one. How could we remove this requirement? (Much discussion on the "Choice of the Horizon" in Hutter 2005 section 5.7).
* Unknown utility function. When we reformulate it in terms of utility functions, let's make sure we can measure its intelligence/optimization power without having to know its utility function. Perhaps by using an average of utility functions weighted by their K-complexity.
* AI orientation. Finally, and least importantly, it tests agents across all possible programs, even those which are known to be inconsistent with our universe. This might okay if your agent is a playing arbitrary games on a computer, but if you are trying to determine how powerful an agent will be in this universe, you probably want to replace the Solomonoff prior with the posterior resulting from updating the Solomonoff prior with data from our universe.
Any thought or research on this by others? I imagine lots of discussion has occurred over these topics; any referencing would be appreciated.
|
e116b496-9454-4c6b-9c76-03c57fbfe0bb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Conflating value alignment and intent alignment is causing confusion
Epistemic status: I think something like this confusion is happening often. I'm not saying these are the only differences in what people mean by "AGI alignment".
Summary:
Value alignment is better but probably harder to achieve than personal intent alignment to the short-term wants of some person(s). Different groups and people tend to primarily address one of these alignment targets when they discuss alignment. Confusion abounds.
One important confusion stems from an assumption that the type of AI defines the alignment target: strong goal-directed AGI must be value aligned or misaligned, while personal intent alignment is only viable for relatively weak AI. I think this assumption is important but false.
While value alignment is categorically better, intent alignment seems easier, safer, and more appealing in the short term, so AGI project leaders are likely to try it.[1]
Overview
Clarifying what people mean by alignment should dispel some illusory disagreement, and clarify alignment theory and predictions of AGI outcomes.
Caption: Venn diagram of three types of alignment targets. Value alignment and Personal intent alignment are both subsets of Evan Hubinger's definition of intent alignment: AGI aligned with human intent in the broadest sense. Prosaic alignment work usually seems to be addressing a target somewhere in the neighborhood of personal intent alignment (following instructions or doing what this person wants now), while agent foundations and other conceptual alignment work usually seems to be addressing value alignment. Those two clusters have different strengths and weaknesses as alignment targets, so lumping them together produces confusion.
People mean different things when they say alignment. Some are mostly thinking about value alignment (VA): creating sovereign AGI that has values close enough to humans' for our liking. Others are talking about making AGI that is corrigible (in the Christiano or Harms sense)[2] or follows instructions f
|
b12bd3e2-001d-4f06-970e-b94a30931df6
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Unable to post article (probably because of excessive/incompatible formatting)
Hi, I've been trying to publish an article to less wrong for about a week, but I'm unable to copy the text to the post area and submit it.
It says: Submitting, then it stops and nothing happened.
When I sliced the entire text in small parts (all copied from Open Office) I manage to publish drafts, some with errors such as missing spaces.
When I re-copy from the drafts to another, bigger draft, then many spaces become missing, and part of the text is aligned to the right border, part isnt.
The text was mostly written in open office. I would like help from someone who has a normal Office. If you post me a message with your e-mail in the private message area, I can send it to you, so you see if you can either publish it in drafts, and then copy it again to me in a publishable form, or edit somethign in office that I'm unable in open office, and send the file back to me so I can publish.
I know this is asking a lot, and I would be thankful for anyone who helps me out of this conundrum.
|
e93e9e9b-891c-49a4-8de1-a9af137a93c9
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures
1 Introduction
---------------
Convolutional Neural Networks (CNNs) have demonstrated great performance in computer vision tasks. However, CNNs often require substantial computational and storage resources, making their deployment difficult for edge devices and other resource-constrained scenarios. Existing approaches to alleviate this problem include network pruning [[11](#bib.bib11), [29](#bib.bib29)], efficient architecture design [[25](#bib.bib25), [35](#bib.bib35), [28](#bib.bib28)], and quantization [[13](#bib.bib13)]. Among them, Binary Neural Network (BNN) is a promising direction that utilizes 1-bit quantization. By binarizing network parameters and activations, the resource-hungry 32-bit floating-point multiplications can be replaced by efficient bitwise operations (e.g. XNOR, bitcount), significantly reducing the computation and memory burden. Despite their computational efficiency, BNNs often suffer from unsatisfactory performance through the binarization process. Moreover, parts of the network computation flow still remain in full-precision (FP), bringing difficulty for the hardware acceleration.

Figure 1: The information bottleneck phenomenon related to micro-level topology and macro-level layout.
Architectural design is critical for BNNs’ performance. However, existing architectures for FP networks are suboptimal for BNNs.
As BNNs’ activations are binary, they carry much less information relative to their FP counterparts, causing the *information bottleneck*. Enhancing the information flow in BNNs is thus critical for their performance [[1](#bib.bib1)].
Viewing the network architecture as a sequence of basic building blocks (or cells), the information bottleneck affects BNNs’ performance through two levels of granularity:
(1) The *micro-level* considers the inner structure of each cell (i.e. cell topology) . At the micro-level, compact CNN blocks (e.g. depthwise, bottleneck convs) and scale-altering layers (e.g. downsampling) are the information bottlenecks that hinder the performance of BNNs. On the other hand, shortcut connections strengthen the information flow [[21](#bib.bib21), [2](#bib.bib2)].
(2) The *macro-level* focuses on the composition structure of cells (i.e. cell layout). At the macro-level, expanding networks’ width allows activations to carry more information, which could strengthen the information flow and bring performance gain [[15](#bib.bib15)]. In contrast, deeper architectures might not work well, since adding more layers does not alleviate existing information bottlenecks.
Fig. [1](#S1.F1 "Figure 1 ‣ 1 Introduction ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") illustrates these design considerations under the information flow perspective.
Neural architecture search (NAS) [[20](#bib.bib20)] is a promising direction to find optimal architectures for neural networks automatically. However, existing NAS studies for BNNs [[26](#bib.bib26), [27](#bib.bib27)] mostly leverage the search framework for FP networks [[16](#bib.bib16)], and did not fully consider preserving the information flow for both the micro and macro-level. For example,
[[27](#bib.bib27), [3](#bib.bib3)] employ micro-level topology search, but use a pre-defined cell layout and model augmentation scheme identical to FP networks. [[26](#bib.bib26)] searches for the macro-level layout through determining layer-wise widths, but it uses a fixed topology adapted from CNNs.
In this paper, we propose BARS, a BNN-oriented differentiable NAS flow, in which the search space, along with the search and derive strategies are carefully designed and developed according to the characteristics of BNNs. Unlike existing works [[26](#bib.bib26), [3](#bib.bib3), [27](#bib.bib27)], BARS extends the original micro-level DARTS [[16](#bib.bib16)] search space to the macro-level. And jointly searches for the micro-level cell topology and the macro-level cell layout with a 2-level search space.
We design a novel macro-level depth & width search space that could be unified in the differentiable NAS framework. It seeks to strike a better balance between model performance and complexity. We also improve the micro-level search space for automatically discovering topologies that avoid creating bottlenecks and maintain proper information flow. Besides, we propose improvements on the search strategy such as Gumbel sampling and entropy regularization to ensure a stabilized search in a much bigger search space.
With the above mentioned techniques, BARS-discovered architecture outperforms CNN-adapted binary architectures by a large margin (6%percent66\%6 % better accuracy than hand-crafted binary ResNet18 on ImageNet).
It also achieves superior performance than state-of-the-art baseline architectures with smaller complexity. Furthermore, BARS reduces the full-precision operations significantly. BARS-discovered architectures only have 10% floating-point operations compared with existing BNN NAS studies on CIFAR.

Figure 2: The workflow of our proposed Binary ARchitecture Search (BARS) (lower) vs. DARTS (upper) [[16](#bib.bib16), [3](#bib.bib3), [5](#bib.bib5)].
BARS tailors a series of modifications on search space and search strategies to facilitate a proper search process for BNNs.
2 Related Works
----------------
###
2.1 Binary Neural Networks
Binarization Scheme
Network binarization could be viewed as an extreme case of network quantization. It could replace the original FP32 multiplications with efficient bitwise operations, gaining over 10 times the processing speed.
However, due to the lack of representation ability, binary neural networks often suffer from noticeable accuracy degradation.
Several methods have been proposed to improve the performance of BNN. XNORNet[[24](#bib.bib24)] uses shared scaling factors to improve the representation ability without introducing much computational overhead. Many recent studies [[17](#bib.bib17), [3](#bib.bib3), [27](#bib.bib27)] follow its binarization scheme, and so do we.
Some other binarization schemes are also proposed, such as: [[4](#bib.bib4)] fuses the weight and activation scaling factor together before inference.
Other approaches for improving BNN performance focus on minimizing the quantization error [[34](#bib.bib34)], redesigning the training loss [[19](#bib.bib19)], or amending the gradient estimation [[18](#bib.bib18), [23](#bib.bib23)].
Binary Architectural Advances
The aforementioned methods mainly focus on improving the binarization or training scheme. Furthermore, the network architecture also plays a critical role in determining the performance of a BNN. Previous studies mainly address the information bottleneck issue from 2 perspectives: 1) Strengthen the information flow by adding more shortcuts [[18](#bib.bib18), [1](#bib.bib1), [2](#bib.bib2)]. 2) Identify and eliminate some information bottleneck manually: Most of the recent studies [[18](#bib.bib18), [23](#bib.bib23)] adopt full-precision downsampling layer; [[21](#bib.bib21)] modifies separable convolutions in the MobileNet architectures.
###
2.2 Neural Architecture Search (NAS)
NAS Search Space
NAS search space designs in recent studies can be divided into two categories: macro-level and micro-level (cell-level). The macro-level describes how cells are organized to construct the entire architecture, and methods have been developed to search for these layout decisions, including width and depth [[30](#bib.bib30), [12](#bib.bib12)]. On the other hand, the micro-level describes the connecting operations inside each cell, and aims to find a superior intra-cell topology. There exist many studies that only search for the micro-level cell topology and organize cells into a pre-defined layout [[16](#bib.bib16)]. In this paper, we employ both the macro- and micro-level search to obtain accurate and efficient binary architectures.
Differentiable NAS
Considerable efforts have been devoted to developing and applying gradient-based NAS (i.e. Differentiable NAS) methods [[16](#bib.bib16), [31](#bib.bib31), [12](#bib.bib12)] due to its high search efficiency. DARTS [[16](#bib.bib16)] first models the NAS problem as a bilevel optimization problem, in which the architecture parameters are updated using gradient methods.
Cell-based search spaces [[36](#bib.bib36)] are designed to facilitate a more efficient NAS process and have been widely adopted [[16](#bib.bib16), [31](#bib.bib31)]. Usually, there are two types of cells in a cell-based search space: normal cells and reduce cells (stride >1absent1>1> 1). These two types of cells are stacked in a pre-defined order to construct a complete architecture.
NAS for Binary Architecture
Previous studies on improving BNN architecture design often adopt minor modifications to existing well-performing CNN models. Applying NAS to the binary domain could be an effective solution to discover more suitable architecture for BNN. [[26](#bib.bib26)] strikes a balance between accuracy and resource consumption for BNN via evolutionary search on the network width. [[22](#bib.bib22)] adopts a similar approach on groups in convolutions. [[27](#bib.bib27), [3](#bib.bib3)] introduce gradient-based search for BNN. They observe that traditional gradient-based NAS methods such as [[16](#bib.bib16), [31](#bib.bib31)] can not be directly applied for BNN search. Thus, they modify the operations in search space and the search strategy. These methods make advances in searching for efficient binary architectures. However, they still have the following drawbacks. Firstly, they solely focus on only one of the two closely related aspects: network topology and complexity. Secondly, full-precision layers still exist in the main body of the architecture.
( [[3](#bib.bib3)] uses a full-precision preprocess layer in each cell, and [[27](#bib.bib27)] uses a full-precision shortcut for reduction cells, which takes up the majority of the computation).
BARS aims to search for both the topology and complexity of the binary architecture, as well as pursuing full binarization of the main body of the architecture.
3 Preliminary
--------------
###
3.1 Network Binarization
BARS follows the binarization scheme proposed in XNORNet [[24](#bib.bib24)] with the modification of using a single scaling factor instead of channel-wise for more efficiency.
The binary convolution of weights W∈ℝCout×Cin×Kw×Kh𝑊superscriptℝsubscript𝐶𝑜𝑢𝑡subscript𝐶𝑖𝑛subscript𝐾𝑤subscript𝐾ℎW\in\mathbb{R}^{C\_{out}\times C\_{in}\times K\_{w}\times K\_{h}}italic\_W ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_C start\_POSTSUBSCRIPT italic\_o italic\_u italic\_t end\_POSTSUBSCRIPT × italic\_C start\_POSTSUBSCRIPT italic\_i italic\_n end\_POSTSUBSCRIPT × italic\_K start\_POSTSUBSCRIPT italic\_w end\_POSTSUBSCRIPT × italic\_K start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT and the input feature map X∈ℝbs×Cin×W×H𝑋superscriptℝ𝑏𝑠subscript𝐶𝑖𝑛𝑊𝐻X\in\mathbb{R}^{bs\times C\_{in}\times W\times H}italic\_X ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_b italic\_s × italic\_C start\_POSTSUBSCRIPT italic\_i italic\_n end\_POSTSUBSCRIPT × italic\_W × italic\_H end\_POSTSUPERSCRIPT can be written as in Eq. [1](#S3.E1 "1 ‣ 3.1 Network Binarization ‣ 3 Preliminary ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"), where Coutsubscript𝐶𝑜𝑢𝑡C\_{out}italic\_C start\_POSTSUBSCRIPT italic\_o italic\_u italic\_t end\_POSTSUBSCRIPT and Cinsubscript𝐶𝑖𝑛C\_{in}italic\_C start\_POSTSUBSCRIPT italic\_i italic\_n end\_POSTSUBSCRIPT represent the input and output channels respectively. (Kwsubscript𝐾𝑤K\_{w}italic\_K start\_POSTSUBSCRIPT italic\_w end\_POSTSUBSCRIPT, Khsubscript𝐾ℎK\_{h}italic\_K start\_POSTSUBSCRIPT italic\_h end\_POSTSUBSCRIPT) and (W𝑊Witalic\_W, H𝐻Hitalic\_H) are the dimensions of the convolution kernel and of the feature map, and bs𝑏𝑠bsitalic\_b italic\_s is the batch size.
| | | | |
| --- | --- | --- | --- |
| | W\*X=(Sign(W)⊙Sign(X))⊗β𝑊𝑋tensor-productdirect-productSign𝑊Sign𝑋𝛽\begin{split}W\*X=(\mathrm{Sign}(W)\odot\mathrm{Sign}(X))\otimes\beta\end{split}start\_ROW start\_CELL italic\_W \* italic\_X = ( roman\_Sign ( italic\_W ) ⊙ roman\_Sign ( italic\_X ) ) ⊗ italic\_β end\_CELL end\_ROW | | (1) |
Here ⊙direct-product\odot⊙ denotes binary multiplication, which could be simplified into XNOR and bitcount operations. ⊗tensor-product\otimes⊗ denotes full precision element-wise multiplication. β𝛽\betaitalic\_β is a real-valued scaling factor. During inference, the binarization takes place before the convolution. During training, the gradient of the non-differentiable binarization (Sign(w)Sign𝑤\mathrm{Sign}(w)roman\_Sign ( italic\_w )) is acquired with the Straight-Through [[8](#bib.bib8)] scheme to update the real-valued weights W𝑊Witalic\_W.
###
3.2 Differentiable NAS
BARS adopts a differentiable architecture search flow [[16](#bib.bib16), [7](#bib.bib7), [32](#bib.bib32)]. In differentiable NAS, a supernet is constructed such that all possible architectures are sub-architectures of this supernet. Then, architectural choices parameterized by architectural parameters α𝛼\alphaitalic\_α are optimized following the gradients of the validation loss Lvalsubscript𝐿𝑣𝑎𝑙L\_{val}italic\_L start\_POSTSUBSCRIPT italic\_v italic\_a italic\_l end\_POSTSUBSCRIPT. The bilevel optimization problem can be written as
| | | | |
| --- | --- | --- | --- |
| | minαLval(w\*(α),α)s.t. w\*(α)=argminwLtrain(w,α)subscript𝛼subscript𝐿𝑣𝑎𝑙superscript𝑤𝛼𝛼s.t. superscript𝑤𝛼subscriptargmin𝑤subscript𝐿𝑡𝑟𝑎𝑖𝑛𝑤𝛼\begin{split}&\min\_{\alpha}{L\_{val}(w^{\*}(\alpha),\alpha)}\\
\mbox{s.t. }&w^{\*}(\alpha)=\operatorname\*{arg\,min}\_{w}L\_{train}(w,\alpha)\end{split}start\_ROW start\_CELL end\_CELL start\_CELL roman\_min start\_POSTSUBSCRIPT italic\_α end\_POSTSUBSCRIPT italic\_L start\_POSTSUBSCRIPT italic\_v italic\_a italic\_l end\_POSTSUBSCRIPT ( italic\_w start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_α ) , italic\_α ) end\_CELL end\_ROW start\_ROW start\_CELL s.t. end\_CELL start\_CELL italic\_w start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ( italic\_α ) = start\_OPERATOR roman\_arg roman\_min end\_OPERATOR start\_POSTSUBSCRIPT italic\_w end\_POSTSUBSCRIPT italic\_L start\_POSTSUBSCRIPT italic\_t italic\_r italic\_a italic\_i italic\_n end\_POSTSUBSCRIPT ( italic\_w , italic\_α ) end\_CELL end\_ROW | | (2) |
After the search, one needs to derive a discrete architecture using the relaxed architecture parameter α𝛼\alphaitalic\_α. In the original differentiable NAS method [[16](#bib.bib16)], for the normal and reduction cell type, the operation with the maximum α𝛼\alphaitalic\_α (except for the “none” operation) is chosen on each edge. Then the normal and reduction cells are stacked to construct the final model.
Studies [[33](#bib.bib33)] have shown that the derive process introduces a large discrepancy. BARS also focuses on how to bridge the search-derive gap.
4 BARS Framework
-----------------
As we have analyzed before, the unsatisfying performance of BNNs can be attributed to the information bottlenecks that are related to both the macro and micro-level architectural design.
Due to the high dimension of the architectural design, it is hard to analyze the architectural bottleneck and design suitable BNN architecture manually, BARS seeks to solve this issue in an automatic manner with differentiable NAS. BARS designs a macro search space (Sec. [4.1](#S4.SS1 "4.1 Macro-level: Search for Network Depth/Width ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures")) with a learnable cell layout to strike a balance between performance and complexity, and a micro-level search space (Sec. [4.2](#S4.SS2 "4.2 Micro-level: Search for Cell Topology ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures")) tailored for maximizing the information flow. Aside from that, we employ a few search strategies (Sec [4.3](#S4.SS3 "4.3 Search Strategy ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures")) to address the “collapse” problem of differentiable NAS and facilitate a stable search process.
###
4.1 Macro-level: Search for Network Depth/Width
Fig. [3](#S4.F3 "Figure 3 ‣ 4.1 Macro-level: Search for Network Depth/Width ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") shows how width/depth configurations affect the performances of XNOR/FP ResNets.
In general, BNNs are more sensitive to changes in width and depth. And we can see that expanding the operation width can alleviate the bottlenecks of binarized operations and brings consistent improvements. As the width goes up, the performance gain would gradually vanish while still increasing the model complexity.
In contrast, increasing the depth does not always bring improvements to BNNs, which is intuitive since adding more processing cannot recover the information once the information is already lost in previous bottlenecks.
Due to these distinct preferences of BNNs, directly adopting layouts of CNNs as in [[27](#bib.bib27), [26](#bib.bib26)] might lead to suboptimal designs. And we propose to directly search for the “sweet spot” of width and depth to trade-off performance and complexity for BNNs. We extend the original micro-level DARTS framework to the macro-level by designing a macro depth & width search space that could be unified in the DARTS framework.
Width Search Expanding the width of BNNs can bring consistent improvements [[1](#bib.bib1)], and different parts of the network have different sensitivity to the width choices [[9](#bib.bib9)]. Previous NAS for BNN studies [[27](#bib.bib27), [3](#bib.bib3)] neglect this aspect in the search process and use post-search uniform width expansion, which would lead to suboptimal results (See Fig. [6](#S5.F6 "Figure 6 ‣ 5.2 Results on CIFAR-10 and ImageNet ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures")). In contrast, BARS seeks to directly search for the proper width configuration to better balance the model complexity and performance.
Specifically, for all cells in one stage, we use a width architecture parameter αwidthsubscript𝛼𝑤𝑖𝑑𝑡ℎ\alpha\_{width}italic\_α start\_POSTSUBSCRIPT italic\_w italic\_i italic\_d italic\_t italic\_h end\_POSTSUBSCRIPT to denote the probability logits of selecting different width choices (e.g. [0.25,0.5,0.75,1.0]0.250.50.751.0[0.25,0.5,0.75,1.0][ 0.25 , 0.5 , 0.75 , 1.0 ] in our experiments). Specifically, during the forward process, we sample a relaxed width choice m∈ℛ4𝑚superscriptℛ4m\in\mathcal{R}^{4}italic\_m ∈ caligraphic\_R start\_POSTSUPERSCRIPT 4 end\_POSTSUPERSCRIPT from the distribution parameterized by αwidthsubscript𝛼𝑤𝑖𝑑𝑡ℎ\alpha\_{width}italic\_α start\_POSTSUBSCRIPT italic\_w italic\_i italic\_d italic\_t italic\_h end\_POSTSUBSCRIPT with Gumbel-Softmax sampling, and multiply the weighted mask to all the feature maps in that stage. Denoting the full-width output feature map as y′superscript𝑦′y^{\prime}italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, the relaxed output feature map y𝑦yitalic\_y that takes the width choice into consideration is calculated as
| | | | |
| --- | --- | --- | --- |
| | 𝐦=Gumbel-Softmax(αwidth)y=(∑i𝐦𝐢Mi)⊙y′𝐦Gumbel-Softmaxsubscript𝛼𝑤𝑖𝑑𝑡ℎ𝑦direct-productsubscript𝑖subscript𝐦𝐢subscript𝑀𝑖superscript𝑦′\begin{split}{\bf m}=&\mbox{Gumbel-Softmax}(\alpha\_{width})\\
y=&(\sum\_{i}{\bf m\_{i}}M\_{i})\odot y^{\prime}\\
\end{split}start\_ROW start\_CELL bold\_m = end\_CELL start\_CELL Gumbel-Softmax ( italic\_α start\_POSTSUBSCRIPT italic\_w italic\_i italic\_d italic\_t italic\_h end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL italic\_y = end\_CELL start\_CELL ( ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT bold\_m start\_POSTSUBSCRIPT bold\_i end\_POSTSUBSCRIPT italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ⊙ italic\_y start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_CELL end\_ROW | | (3) |
where ⊙direct-product\odot⊙ denotes the element-wise multiplication, and Misubscript𝑀𝑖M\_{i}italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is the mask corresponding to the i𝑖iitalic\_i-th width choice in [0.25,0.5,0.75,1.0]0.250.50.751.0[0.25,0.5,0.75,1.0][ 0.25 , 0.5 , 0.75 , 1.0 ]. For example, if i=0𝑖0i=0italic\_i = 0 (the relative width choice is 0.25), the first quarter of the elements in Misubscript𝑀𝑖M\_{i}italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT are 1111s and the other elements are 00s.
Depth Search
(Cell-1, Cell-2, Cell-3) in one stage, the probabilities of choosing different depth ∈{0,1,2,3}absent0123\in\{0,1,2,3\}∈ { 0 , 1 , 2 , 3 } can be calculated as the softmax of the four-dimensional depth parameter αpath=[αp0,αp1,αp2,αp3]subscript𝛼𝑝𝑎𝑡ℎsubscript𝛼𝑝0subscript𝛼𝑝1subscript𝛼𝑝2subscript𝛼𝑝3\alpha\_{path}=[\alpha\_{p0},\alpha\_{p1},\alpha\_{p2},\alpha\_{p3}]italic\_α start\_POSTSUBSCRIPT italic\_p italic\_a italic\_t italic\_h end\_POSTSUBSCRIPT = [ italic\_α start\_POSTSUBSCRIPT italic\_p 0 end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_p 1 end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_p 2 end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_p 3 end\_POSTSUBSCRIPT ]: P(depth=i)=exp(αpi)∑j=03exp(αpj)𝑃depth𝑖subscript𝛼𝑝𝑖superscriptsubscript𝑗03subscript𝛼𝑝𝑗P(\mbox{depth}=i)=\frac{\exp(\alpha\_{pi})}{\sum\_{j=0}^{3}\exp(\alpha\_{pj})}italic\_P ( depth = italic\_i ) = divide start\_ARG roman\_exp ( italic\_α start\_POSTSUBSCRIPT italic\_p italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 3 end\_POSTSUPERSCRIPT roman\_exp ( italic\_α start\_POSTSUBSCRIPT italic\_p italic\_j end\_POSTSUBSCRIPT ) end\_ARG. Denoting the input feature map of this stage as y0subscript𝑦0y\_{0}italic\_y start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, and the output feature map of each cell as y1,⋯,y3subscript𝑦1⋯subscript𝑦3y\_{1},\cdots,y\_{3}italic\_y start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , ⋯ , italic\_y start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT, the aggregated feature map yaggrsubscript𝑦𝑎𝑔𝑔𝑟y\_{aggr}italic\_y start\_POSTSUBSCRIPT italic\_a italic\_g italic\_g italic\_r end\_POSTSUBSCRIPT is calculated as
| | | | |
| --- | --- | --- | --- |
| | 𝐦=Gumbel-Softmax(αpath)yaggr=∑i𝐦𝐢×yi.𝐦Gumbel-Softmaxsubscript𝛼𝑝𝑎𝑡ℎsubscript𝑦𝑎𝑔𝑔𝑟subscript𝑖subscript𝐦𝐢subscript𝑦𝑖\begin{split}{\bf m}=&\mbox{Gumbel-Softmax}(\alpha\_{path})\\
y\_{aggr}&=\sum\_{i}{\bf m\_{i}}\times y\_{i}.\end{split}start\_ROW start\_CELL bold\_m = end\_CELL start\_CELL Gumbel-Softmax ( italic\_α start\_POSTSUBSCRIPT italic\_p italic\_a italic\_t italic\_h end\_POSTSUBSCRIPT ) end\_CELL end\_ROW start\_ROW start\_CELL italic\_y start\_POSTSUBSCRIPT italic\_a italic\_g italic\_g italic\_r end\_POSTSUBSCRIPT end\_CELL start\_CELL = ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT bold\_m start\_POSTSUBSCRIPT bold\_i end\_POSTSUBSCRIPT × italic\_y start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT . end\_CELL end\_ROW | | (4) |
Complexity Regularization Besides the performance, the model complexity is also largely influenced by the width/depth macro-level search decisions. Thus, we use a search objective that considers the complexity (FLOPs) to properly balance the performance and complexity.
| | | | | |
| --- | --- | --- | --- | --- |
| | L𝐿\displaystyle Litalic\_L | =L0×[FLOPs(α)F]θabsentsubscript𝐿0superscriptdelimited-[]𝐹𝐿𝑂𝑃𝑠𝛼𝐹𝜃\displaystyle=L\_{0}\times\left[\frac{FLOPs(\alpha)}{F}\right]^{\theta}= italic\_L start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT × [ divide start\_ARG italic\_F italic\_L italic\_O italic\_P italic\_s ( italic\_α ) end\_ARG start\_ARG italic\_F end\_ARG ] start\_POSTSUPERSCRIPT italic\_θ end\_POSTSUPERSCRIPT | | (5) |
| | θ𝜃\displaystyle\thetaitalic\_θ | ={γ,if FLOPs(α)≥Fμ,otherwiseabsentcases𝛾if 𝐹𝐿𝑂𝑃𝑠𝛼
𝐹𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒𝜇otherwise
𝑜𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒\displaystyle=\begin{cases}\gamma,\quad\mbox{if }FLOPs(\alpha)\geq F&\\
\mu,\quad\text{otherwise}\end{cases}= { start\_ROW start\_CELL italic\_γ , if italic\_F italic\_L italic\_O italic\_P italic\_s ( italic\_α ) ≥ italic\_F end\_CELL start\_CELL end\_CELL end\_ROW start\_ROW start\_CELL italic\_μ , otherwise end\_CELL start\_CELL end\_CELL end\_ROW | |
where L0subscript𝐿0L\_{0}italic\_L start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT is the original loss and F𝐹Fitalic\_F is the FLOPs budget, and FLOPs(α)𝐹𝐿𝑂𝑃𝑠𝛼FLOPs(\alpha)italic\_F italic\_L italic\_O italic\_P italic\_s ( italic\_α ) is the current estimation of FLOPs. γ𝛾\gammaitalic\_γ and θ𝜃\thetaitalic\_θ are hyperparameters and γ𝛾\gammaitalic\_γ is set to 0 in our experiments.

Figure 3: Performances of (binary) ResNet18 variants with different complexity. The original ResNet18 architecture are scaled by uniform expansion in the channel dimension (width), or stacking more layers (depth).
| Method | Accuracy |
| --- | --- |
| FP downsample | 90.6% (-%) |
| with op shortcut | 91.4 (+0.8%) |
| binarized downsample | 89.7% (-0.8%) |
| improved binarized downsample | 89.9% (-0.6%) |
| XNOR-Res34 (∼similar-to\sim∼1.5x complexity) | 91.5% (+0.9%) |
Table 1: Performance with micro modifications applied on Binary ResNet18. Binarizing the downsampling layer causes noticable accuracy drop and improved binary downsampling could mitigate the loss. Adding op-wise shortcut could improve the performance without extra overhead, and achieve competitive performances with a larger model.
###
4.2 Micro-level: Search for Cell Topology
As mentioned above, identifying and eliminating the information bottleneck is vital for improving the accuracy of BNNs [[1](#bib.bib1)]. Since it is difficult to assess and identify all information bottlenecks manually, we design the micro-level search space such that the NAS process can automatically discover topologies to avoid creating bottlenecks and maintain a proper information flow.
As illustrated in Fig. [2](#S1.F2 "Figure 2 ‣ 1 Introduction ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"), our micro-level search space contains five nodes, and each node can choose to connect from an arbitrary number of previous nodes. For each edge, there are 3 possible operation primitives: binary convolution 3×\times×3, shortcut, and none. We will show in Sec. [5.5](#S5.SS5 "5.5 Discovered Cell ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") that our micro-level topology search automatically discovers cells with a strong information flow.
Besides the previous search space design, the cell template and operation primitives should also be modified to eliminate information bottlenecks.
| Dataset | Method | FLOPs | BiOps | Equivalent Ops | Params | Fully-Binarized | Acc. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | NiN (XNOR) | ∼similar-to\sim∼9M | ∼similar-to\sim∼200M | 12.13M | - | ✓ | 86.28% |
| ResNet-18 (XNOR) | 16M | 547M | 25.55M | 11.17M | ✓ | 90.55% |
| ResNet-18 (Bireal) | 11M | 561M | 24.77M | 11.34M | FP Downsample | 91.23% |
| ResNet-34 (XNOR) | 27M | 1151M | 44.98M | 21.81M | FP Downsample | 91.49% |
| WRN-40 (XNOR) | ∼similar-to\sim∼27M | ∼similar-to\sim∼1500M | 50.44M | - | FP Downsample | 91.58% |
| ResNet-18 (IRNet) | ∼similar-to\sim∼27M | ∼similar-to\sim∼500M | 34.81M | - | FP Downsample | 91.5% |
| BNAS (XNOR) | 100M | 1393M | 121.76M | 5.57M | FP Cell Shortcut | 92.7% |
| BARS-A | 2M | 513M | 10.02M | 2.77M | ✓ | 91.25% |
| BARS-B | 2M | 1048M | 18.37M | 6.07M | ✓ | 92.98% |
| BARS-C | 3M | 1778M | 32.27M | 10.76M | ✓ | 93.43% |
| ImageNet | ResNet-18 (ABC) | ∼similar-to\sim∼100M | ∼similar-to\sim∼2000M | 131.25M | - | ✓ | 42.7% |
| ResNet-18 (XNOR) | 138M | 1850 | 188.89M | 12.54M | ✓ | 48.3% |
| ResNet-18 (XNOR) | 167M | 1778M | 194.79M | 12.80M | FP Downsample | 53.1% |
| BiDenseNet (XNOR) | - | - | - | 13.56M | FP Downsample | 52.7% |
| ResNet-18 (Bireal) | ∼similar-to\sim∼160M | ∼similar-to\sim∼2000M | 191.25M | - | FP Downsample | 56.4% |
| ResNet-18 (PCNN) | ∼similar-to\sim∼160M | ∼similar-to\sim∼2000M | 191.25M | - | FP Downsample | 57.3% |
| MoBiNet(XNOR) | - | - | - | 8.47M | FP Downsample | 53.4% |
| BNAS (XNOR) | 195M | 3137M | 244.0M | 28.41M | FP Cell Shortcut | 57.6% |
| | BARS-D | 129M | 998M | 129.60M | 9.01M | ✓ | 54.6% |
| | BARS-E | 161M | 1424M | 183.25M | 14.04M | ✓ | 56.2% |
| | BARS-F | 254M | 2594M | 293.53M | 19.29M | ✓ | 60.3% |
Table 2: Performance and complexity comparison of BARS-discovered architectures and baselines on CIFAR-10 and ImageNet. BARS-discovered architectures outperform baseline architectures with much lower resource consumption. “Equivalent Ops” are calculated as FLOPs+1/64\*BiOPs𝐹𝐿𝑂𝑃𝑠164𝐵𝑖𝑂𝑃𝑠FLOPs+1/64\*BiOPsitalic\_F italic\_L italic\_O italic\_P italic\_s + 1 / 64 \* italic\_B italic\_i italic\_O italic\_P italic\_s [[1](#bib.bib1)]. “Fully-Binarized” means all network components except for the first and last layer remain binary. Unlike recent studies [[18](#bib.bib18), [17](#bib.bib17), [27](#bib.bib27)] on binary architectures use full-precision downsampling operations, BARS-discovered architectures only use binarized operations in the major architecture backbone, which is beneficial for hardware acceleration.
Strengthen the information flow Many recent studies on BNNs [[1](#bib.bib1), [18](#bib.bib18)] and our experimental results in Tab. [1](#S4.T1 "Table 1 ‣ 4.1 Macro-level: Search for Network Depth/Width ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") show that adding an extra shortcut for binary convolution improves the performance with little overhead. Therefore, we add operational-level shortcuts and cell-level shortcuts to strengthen the information flow.
Eliminating the information Bottleneck
As shown in Tab. [1](#S4.T1 "Table 1 ‣ 4.1 Macro-level: Search for Network Depth/Width ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"), binarizing the downsampling layer causes the performance to drop sharply. There are two reasons why the downsampling layer is the bottleneck: 1) The misalignment of input/output channel size makes it difficult to add op-wise shortcuts. 2) It involves spatial dimension reduction which causes a large information loss.
In BARS, we design the downsampling operation to concatenate the outputs of two strided convolutions with shortcuts111We use 2×\times×2 AvgPool as the shortcut with spatial dim. reduction. and spatially staggered input. In such way, the op-level shortcut could be added to the two binarized convolutions.
It is worthy to note that, unlike many previous studies [[18](#bib.bib18), [24](#bib.bib24)] that use FP downsampling layers, we mitigate this bottleneck issue in a fully-binarized manner. And we demonstrate its effect in Tab. [3](#S5.T3 "Table 3 ‣ 5.3 Effects of Stabilizing the Searching ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures")
###
4.3 Search Strategy
Differentiable NAS [[16](#bib.bib16)] has been challenged due to it being prone to degenerated architectures containing too many shortcuts, which is known as the “collapse” problem. Parameterized operations (e.g. Conv) are usually under-trained [[33](#bib.bib33), [12](#bib.bib12)], and the search process will favor parameter-free operations. In BNN, this problem is further exacerbated since binary convolutions are even harder to train. This section describes several key techniques we use to alleviate the “collapse” problem.
We briefly introduce them here and further analyze their effects in Sec. [5.3](#S5.SS3 "5.3 Effects of Stabilizing the Searching ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures").
Gumbel-Softmax Sampling
We use Gumbel sampling with proper temperature scheduling for all architecture decisions. For all architectural parameters (depth αpathsubscript𝛼𝑝𝑎𝑡ℎ\alpha\_{path}italic\_α start\_POSTSUBSCRIPT italic\_p italic\_a italic\_t italic\_h end\_POSTSUBSCRIPT, width αwidthsubscript𝛼𝑤𝑖𝑑𝑡ℎ\alpha\_{width}italic\_α start\_POSTSUBSCRIPT italic\_w italic\_i italic\_d italic\_t italic\_h end\_POSTSUBSCRIPT, operation type α𝛼\alphaitalic\_α), we sample a relaxed architecture m𝑚mitalic\_m from the corresponding multinomial architectural distribution Multinomial(m|α)Multinomialconditional𝑚𝛼\mbox{Multinomial}(m|\alpha)Multinomial ( italic\_m | italic\_α ) with the Gumbel-Softmax technique [[14](#bib.bib14)]. Denoting the number of choices as D𝐷Ditalic\_D, the logits of the Multinomial distribution as α𝛼\alphaitalic\_α, each dimension misubscript𝑚𝑖m\_{i}italic\_m start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT in the relaxed architecture decision m∈[0,1]D𝑚superscript01𝐷m\in[0,1]^{D}italic\_m ∈ [ 0 , 1 ] start\_POSTSUPERSCRIPT italic\_D end\_POSTSUPERSCRIPT can be represented as
| | | | |
| --- | --- | --- | --- |
| | mi=exp((αi+gi)/τ)∑j=1Dexp((αj+gj)/τ)for i=1,⋯D,formulae-sequencesubscript𝑚𝑖subscript𝛼𝑖subscript𝑔𝑖𝜏superscriptsubscript𝑗1𝐷subscript𝛼𝑗subscript𝑔𝑗𝜏for 𝑖1⋯𝐷\begin{split}m\_{i}=\frac{\exp{((\alpha\_{i}+g\_{i})/\tau)}}{\sum\_{j=1}^{D}\exp((\alpha\_{j}+g\_{j})/\tau)}\quad\mbox{for }i=1,\cdots D,\end{split}start\_ROW start\_CELL italic\_m start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = divide start\_ARG roman\_exp ( ( italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT + italic\_g start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) / italic\_τ ) end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_D end\_POSTSUPERSCRIPT roman\_exp ( ( italic\_α start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT + italic\_g start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) / italic\_τ ) end\_ARG for italic\_i = 1 , ⋯ italic\_D , end\_CELL end\_ROW | | (6) |
where gisubscript𝑔𝑖g\_{i}italic\_g start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPTs are standard Gumbel-distributed random variables. We emphasize that using Gumbel-Softmax sampling with a proper temperature schedule is important.
In the early search stage, it remains high
which drives architecture distributions to be more uniform to encourage search exploration and avoid collapsing. In the later stage, it gradually anneals to zero to drive α𝛼\alphaitalic\_α towards confident one-hot choices. Thus the discrepancy between searching and deriving [[6](#bib.bib6)] is reduced.
Entropy Regularization and Supernet Warm-up To address the “collapse” issue caused by insufficient optimization of parameterized operations, we conduct warm-up training of the supernet weights for several epochs. We also impose entropy regularization on the architecture distribution to encourage exploration in the early search stage and exploitation in the late search stage.
| | | | |
| --- | --- | --- | --- |
| | Lent=λent(−∑iαilog(αi)),subscript𝐿𝑒𝑛𝑡subscript𝜆𝑒𝑛𝑡subscript𝑖subscript𝛼𝑖subscript𝛼𝑖\begin{split}L\_{ent}=\lambda\_{ent}(-\sum\_{i}{\alpha\_{i}\log(\alpha\_{i}))},\end{split}start\_ROW start\_CELL italic\_L start\_POSTSUBSCRIPT italic\_e italic\_n italic\_t end\_POSTSUBSCRIPT = italic\_λ start\_POSTSUBSCRIPT italic\_e italic\_n italic\_t end\_POSTSUBSCRIPT ( - ∑ start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT roman\_log ( italic\_α start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ) , end\_CELL end\_ROW | | (7) |
where λentsubscript𝜆𝑒𝑛𝑡\lambda\_{ent}italic\_λ start\_POSTSUBSCRIPT italic\_e italic\_n italic\_t end\_POSTSUBSCRIPT is a hyperparameter that follows an increasing schedule from negative to positive. Its scheduling plays a similar role as the temperature in gumbel sampling.
5 Experiments and Analysis
---------------------------
###
5.1 Experiment Settings
We run BARS on CIFAR-10 and ImageNet datasets with different FLOPs target, and acquire a series of models of different sizes (BARS-A/B/C on CIFAR, BARS-D/E/F on ImageNet). Detailed experimental settings can be found in the appendix. Note that unlike previous studies [[27](#bib.bib27), [3](#bib.bib3)] that transfer architectures found on CIFAR-10 to ImageNet, we directly apply search on the 100-class subset of ImageNet.
We conduct experiments on CIFAR-10 and ImageNet. For searching on both datasets, we construct a 14-cell super network organized into 3 stages. The cells in each stage share the same micro-level topology, and so do all the reduction cells. The base channel number Cisubscript𝐶𝑖C\_{i}italic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is 48. We choose 4 available width choices r×Ci,r∈{0.25,0.5,0.75,1}𝑟subscript𝐶𝑖𝑟
0.250.50.751r\times C\_{i},r\in\{0.25,0.5,0.75,1\}italic\_r × italic\_C start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r ∈ { 0.25 , 0.5 , 0.75 , 1 }, and 3 candidate operations [binary\_conv\_3x3,shortcut,none]𝑏𝑖𝑛𝑎𝑟𝑦\_𝑐𝑜𝑛𝑣\_3𝑥3𝑠ℎ𝑜𝑟𝑡𝑐𝑢𝑡𝑛𝑜𝑛𝑒[binary\\_conv\\_3x3,shortcut,none][ italic\_b italic\_i italic\_n italic\_a italic\_r italic\_y \_ italic\_c italic\_o italic\_n italic\_v \_ 3 italic\_x 3 , italic\_s italic\_h italic\_o italic\_r italic\_t italic\_c italic\_u italic\_t , italic\_n italic\_o italic\_n italic\_e ].
Within each cell, the preprocess layer is a binary 1x1 conv with no shortcut. The cell-wise shortcut is an identity operation for normal cells and a strided binary 3x3 conv for reduction cells.

Figure 4: Comparison of BARS derived architectures with baseline methods in terms of FLOPs (floating-point ops), BiOPs (binary ops), and Params. Note that for BiOPs, following previous studies [[1](#bib.bib1)], we divide it by 64 to acquire its relative values. BARS finds architectures with better performance with less equivalent OPs/params and significantly fewer floating-point operations.
The search lasts for 50 epochs, and a batch size of 64 is used.
Supernet weights w𝑤witalic\_w is trained with Adam optimizer, whose learning rate is set to 3e-4 initially and decayed to 0 in 50 epochs following a cosine schedule.
After 5 epochs of warm-up training of supernet weights, we begin to update α𝛼\alphaitalic\_α. The architectural parameters α𝛼\alphaitalic\_α (including αmicro,αpath,αwidthsubscript𝛼𝑚𝑖𝑐𝑟𝑜subscript𝛼𝑝𝑎𝑡ℎsubscript𝛼𝑤𝑖𝑑𝑡ℎ\alpha\_{micro},\alpha\_{path},\alpha\_{width}italic\_α start\_POSTSUBSCRIPT italic\_m italic\_i italic\_c italic\_r italic\_o end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_p italic\_a italic\_t italic\_h end\_POSTSUBSCRIPT , italic\_α start\_POSTSUBSCRIPT italic\_w italic\_i italic\_d italic\_t italic\_h end\_POSTSUBSCRIPT) are updated using Adam optimizer with learning rate 3e-4 and weight decay 1e-3. The Gumbel temperature is set to 1111 at first and multiplied by 0.90.90.90.9 on every epoch. The entropy regularization coefficient λentsubscript𝜆𝑒𝑛𝑡\lambda\_{ent}italic\_λ start\_POSTSUBSCRIPT italic\_e italic\_n italic\_t end\_POSTSUBSCRIPT follows an increasing schedule: it starts at -0.01, and 0.001 is added on every epoch.
As for deriving, we sample 8 candidate architectures from the architecture distribution
α𝛼\alphaitalic\_α after the search. The one with minimum validation loss is chosen.
Different from the origin DARTS, we do not need to exclude the “none” operation when deriving. We argue that it is important since binary convs evolve huge information loss, thus “none” operation might be the best choice on some edges in BNN [[27](#bib.bib27)].
On CIFAR-10, we train the derived architecture for 200 epochs with batch size 256. Adam optimizer with a weight decay of 00 is used, and the learning rate is set to 2e−32𝑒32e-32 italic\_e - 3 at first and decayed to 0 following a cosine schedule. Cutout augmentation and auxiliary towers with weight 0.4 are applied following previous studies [[16](#bib.bib16)]. On ImageNet, the architectures are trained for 100 epochs, and no cutout is used. We also use Adam optimizer with no weight decay. The learning rate has the initial value of 1.e−3formulae-sequence1𝑒31.e-31 . italic\_e - 3 and cosine annealed to 0 for a batch size of 256256256256.

Figure 5: Evolution of α𝛼\alphaitalic\_α’s distribution and the relative possibility of “shortcut” in the search process Left: Comparison of α𝛼\alphaitalic\_α’s distribution change for “collapsed search” and BARS’s stabilized search. Entropy regularization and Gumbel-Softmax sampling prevent “collapse” in search and bridge the gap in deriving. Right: “shortcut” operation’s average probability during the search. BARS prevents the search from collapsing rapidly.
###
5.2 Results on CIFAR-10 and ImageNet
Tab. [2](#S4.T2 "Table 2 ‣ 4.2 Micro-level: Search for Cell Topology ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") and Fig. [4](#S5.F4 "Figure 4 ‣ 5.1 Experiment Settings ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") show the comparison of BARS-discovered architectures and the baseline ones.
We can see that BARS discovers architectures with superior performance and efficiency. Note that in order to demonstrate the performance gain brought by the architectural design, all our models are trained from scratch with XNORNet [[24](#bib.bib24)] binarization scheme.Neither additional tricks [[23](#bib.bib23)], nor full-precision pre-training models [[18](#bib.bib18), [10](#bib.bib10)] are used.
Moreover, we emphasize that BARS binarizes all operations in the major architecture (except the stem and the final layer), whereas previous studies use full-precision downsampling layers to maintain acceptable performances (e.g. the accuracy of ResNet-18 on ImageNet dropped from 53.1%percent53.153.1\%53.1 % to 48.3%percent48.348.3\%48.3 % if no FP downsample is used).
On CIFAR-10, BARS-B achieves higher accuracy (1.5%percent1.51.5\%1.5 %) than the hand-crafted binary model with 2/3232/32 / 3 binary operations (BiOps), much less (1/101101/101 / 10) floating-point operations (FLOPs) and parameters.
On ImageNet, BARS-D outperforms the “fully-binarized” ResNet-18 by a 6% with notably less resource consumption, it also outperforms many hand-crafted BNN models while binarizing the downsampling layer.

Figure 6: Comparison of derived models of different complexity (FLOPs) w/o BARS’s complexity search. Upper: BARS search with different complexity regularization. Lower: search for a compact model and uniformly expand the network width. BARS finds the better pareto frontier.
###
5.3 Effects of Stabilizing the Searching
The “collapse-to-shortcuts” problem is a widely known issue for differentiable NAS methods, as shown in the left part in Fig. [5](#S5.F5 "Figure 5 ‣ 5.1 Experiment Settings ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures").
As discussed in Sec. [4.3](#S4.SS3 "4.3 Search Strategy ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"), we apply warm-up training to prevent the search from collapsing in the very early stages.
Then, entropy regularization and Gumbel sampling with proper hyperparameter scheduling are employed to encourage exploration in the early stages and confident decisions in the late stages.
Fig. [5](#S5.F5 "Figure 5 ‣ 5.1 Experiment Settings ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") shows that in the early stage of searching, the distribution of α𝛼\alphaitalic\_α is relatively uniform and the relative ranking of different operations keeps changing. When reaching the end of the search, the Gumbel temperature is close to zero, making the sampling close to one-hot sampling. Also, entropy regularization encourages architecture distribution to be more confident. This reduces the derive discrepancy.
| Method | Accuracy |
| --- | --- |
| BARS-B | 93.0% |
| BARS-B (without op shortcut) | 92.8% (-0.2%) |
| BARS-B (without improved ds.) | 92.6% (-0.3%) |
| Sampled arch. | 92.9% |
| Sampled arch. (without op shortcut) | 89.8% (-3.1%) |
| Sampled arch. (without improved ds.) | 91.9% (-1.0%) |
Table 3: The ablation study of micro-level modifications for BARS-B and sampled architecture. Searched BARS-B model finds that each shortcuts should coordinate with binary convs, thus the performance does not drop much without the op-level shortcut.
###
5.4 Effects of the Search Space Design
We conduct several experiments to verify the effectiveness of the design choices of our search space. As could be seen from Fig. [6](#S5.F6 "Figure 6 ‣ 5.2 Results on CIFAR-10 and ImageNet ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"), the macro-level joint search of width and depth strikes a better balance between performance and complexity. The discovered BARS-B model achieves much higher accuracy than uniformly expanding the width of the smaller BARS-A model.
For the micro-level design, we have shown that additional shortcut and improved binary downsample can bring performance gain for XNOR-ResNet18 in Tab. [3](#S5.T3 "Table 3 ‣ 5.3 Effects of Stabilizing the Searching ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"). We conduct similar ablation studies on BARS-B and a random sampled architecture from our search space in Tab. [1](#S4.T1 "Table 1 ‣ 4.1 Macro-level: Search for Network Depth/Width ‣ 4 BARS Framework ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"). And we can witness a noticeable accuracy degradation when removing our modifications.
###
5.5 Discovered Cell
The BARS-discovered cells on CIFAR-10 are shown in Fig. [9](#A3.F9 "Figure 9 ‣ Appendix C Detailed Comparison of Searched Cells ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures"). We can see that the cells in earlier stages and the reduction cells contain more convolutions. Conversely, the latter cells are dominated by shortcuts.
The micro-level topology search also discovers interesting connection patterns: the shortcuts coordinate with binary convs to strengthen the information flow (e.g. shortcut 1-4 and shortcut 2-4 around conv 1-2 in the Normal-1 cell). Thanks to this highly skip-connected pattern, we can see from Tab. [3](#S5.T3 "Table 3 ‣ 5.3 Effects of Stabilizing the Searching ‣ 5 Experiments and Analysis ‣ BARS: Joint Search of Cell Topology and Layout for Accurate and Efficient Binary ARchitectures") that removing the op-level shortcut causes much less performance degradation for the BARS-discovered cell than for randomly sampled architecture. Also, instead of transferring the architecture discovered on CIFAR-10 to ImageNet, BARS conducts a direct search on the 100-class subset of ImageNet, and discovers distinct cells with more convolutions (see the appendix for more details). The different cell preferences might result from that more parameterized operations are needed for enough representational ability on the larger Imagenet dataset.

Figure 7:
BARS discovered cells on CIFAR. More convs are found in the reduction cell and shallower part of the network. It also learns that shortcuts should coordinate with binary convs.
6 Conclusion
-------------
To better explore BNN architectures that are both accurate and efficient, BARS proposes
to use a joint search of the macro layout and the micro topology to address the information bottleneck problem in BNN.
The binary architectures discovered by BARS outperform baseline architectures with significantly less resource consumption.
|
fef6b0ac-bfad-4bc6-93e7-bc02c1c2aea7
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Induction of Subgoal Automata for Reinforcement Learning
1 Introduction
---------------
Reinforcement learning (RL) (?) is a family of algorithms where an agent acts in an environment with the purpose of maximizing the total amount of reward it receives. These algorithms have played a key role in major breakthroughs like human-level video game playing (?) or mastering complex board games (?). However, current methods lack the ability to generalize and transfer between tasks. For example, they cannot learn to play chess using the knowledge of sho¯¯o\bar{\text{o}}over¯ start\_ARG o end\_ARGgi.
Advances in achieving generalization and transfer in RL are mainly due to abstractions (?; ?). Hierarchical reinforcement learning methods, which divide a single task into several subtasks that can be solved separately, are among the most promising approaches to abstracting RL tasks. Common frameworks for hierarchical RL are HAMs (?), options (?) and MAXQ (?).
Abstract hierarchies can be naturally represented using automata, which have been used effectively in task abstraction, not only in RL (?; ?), but also in related areas like automated planning (?; ?; ?). However, these RL approaches use handcrafted automata instead of learning them from data. It is largely an open problem in RL how to autonomously decompose a task into an automaton that can be exploited during the RL process.
In this paper, we address this problem and propose ISA (Induction of Subgoal Automata for Reinforcement Learning), a method for learning and exploiting a minimal automaton from observation traces perceived by an RL agent. These automata are called *subgoal automata* since each transition is labeled with a subgoal, which is a boolean formula over a set of observables. Subgoal automata can be expressed in a logic programming format and, thus, be learned using state-of-the-art inductive logic programming (ILP) systems (?). The resulting subgoal automaton can be exploited by an RL algorithm that learns a different policy for each automaton state, each aiming to achieve a subgoal. This decomposition allows learning multiple subpolicies simultaneously and transferring knowledge across multiple tasks. ISA interleaves the RL and automaton learning processes such that the agent can immediately leverage the new (partially correct) learned subgoal automaton: when a trace is not correctly recognized by the automaton, a new one is learned.
We evaluate ISA in several gridworld problems and show that it learns subgoal automata and policies for each of these. Specifically, it performs similarly to a method for which automata are given in advance. Furthermore, the learned automata can be exploited to speed up convergence through reward shaping and transfer learning.
The paper is organized as follows. Section [2](#S2 "2 Background ‣ Induction of Subgoal Automata for Reinforcement Learning") introduces the background of our work. Section [3](#S3 "3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") formally presents the problem we address and our method for solving it, while Section [4](#S4 "4 Experimental Results ‣ Induction of Subgoal Automata for Reinforcement Learning") describes the results. We discuss related work in Section [5](#S5 "5 Related Work ‣ Induction of Subgoal Automata for Reinforcement Learning") and conclude in Section [6](#S6 "6 Conclusions and Future Work ‣ Induction of Subgoal Automata for Reinforcement Learning").
2 Background
-------------
In this section we briefly summarize the key background concepts used throughout the paper: reinforcement learning and inductive learning of answer set programs.
### Reinforcement Learning
Reinforcement learning (RL) (?) is a family of algorithms for learning to act in an unknown environment. Typically, this learning process is formulated as a *Markov Decision Process (MDP)*, i.e., a tuple ℳ=⟨S,A,p,r,γ⟩ℳ𝑆𝐴𝑝𝑟𝛾\mathcal{M}=\langle S,A,p,r,\gamma\ranglecaligraphic\_M = ⟨ italic\_S , italic\_A , italic\_p , italic\_r , italic\_γ ⟩, where S𝑆Sitalic\_S is a finite set of states, A𝐴Aitalic\_A is a finite set of actions, p:S×A→Δ(S):𝑝→𝑆𝐴Δ𝑆p:S\times A\to\Delta(S)italic\_p : italic\_S × italic\_A → roman\_Δ ( italic\_S ) is a transition probability function111For any finite set X𝑋Xitalic\_X, Δ(X)={μ∈ℝX:∑xμ(x)=1,μ(x)≥0(∀x)}Δ𝑋conditional-set𝜇superscriptℝ𝑋formulae-sequencesubscript𝑥𝜇𝑥1𝜇𝑥0for-all𝑥\Delta(X)=\{\mu\in\mathbb{R}^{X}:\sum\_{x}\mu(x)=1,\mu(x)\geq 0~{}(\forall x)\}roman\_Δ ( italic\_X ) = { italic\_μ ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_X end\_POSTSUPERSCRIPT : ∑ start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT italic\_μ ( italic\_x ) = 1 , italic\_μ ( italic\_x ) ≥ 0 ( ∀ italic\_x ) } is the probability simplex over X𝑋Xitalic\_X. , r:S×A×S→ℝ:𝑟→𝑆𝐴𝑆ℝr:S\times A\times S\to\mathbb{R}italic\_r : italic\_S × italic\_A × italic\_S → blackboard\_R is a reward function, and γ∈[0,1)𝛾01\gamma\in[0,1)italic\_γ ∈ [ 0 , 1 ) is a discount factor. At time t𝑡titalic\_t, the agent observes state st∈Ssubscript𝑠𝑡𝑆s\_{t}\in Sitalic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ italic\_S, executes action at∈Asubscript𝑎𝑡𝐴a\_{t}\in Aitalic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∈ italic\_A, transitions to the next state st+1∼p(⋅|st,at)s\_{t+1}\sim p(\cdot|s\_{t},a\_{t})italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_p ( ⋅ | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) and receives reward r(st,at,st+1)𝑟subscript𝑠𝑡subscript𝑎𝑡subscript𝑠𝑡1r(s\_{t},a\_{t},s\_{t+1})italic\_r ( italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ).
We consider episodic MDPs that *terminate* in a given set of terminal states. We distinguish between goal states and undesirable terminal states (i.e., dead-ends). Let ST⊆Ssubscript𝑆𝑇𝑆S\_{T}\subseteq Sitalic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT ⊆ italic\_S be the set of terminal states and SG⊆STsubscript𝑆𝐺subscript𝑆𝑇S\_{G}\subseteq S\_{T}italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT ⊆ italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT the set of goal states. The aim is to find a *policy* π:S→Δ(A):𝜋→𝑆Δ𝐴\pi:S\to\Delta(A)italic\_π : italic\_S → roman\_Δ ( italic\_A ), a mapping from states to probability distributions over actions, that maximizes the expected sum of discounted reward (or *return*), Rt=𝔼[∑k=tnγk−trk]subscript𝑅𝑡𝔼delimited-[]superscriptsubscript𝑘𝑡𝑛superscript𝛾𝑘𝑡subscript𝑟𝑘R\_{t}=\mathbb{E}[\sum\_{k=t}^{n}\gamma^{k-t}r\_{k}]italic\_R start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = blackboard\_E [ ∑ start\_POSTSUBSCRIPT italic\_k = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_k - italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_k end\_POSTSUBSCRIPT ], where n𝑛nitalic\_n is the last episode step.
In model-free RL the transition probability function p𝑝pitalic\_p and reward function r𝑟ritalic\_r are unknown to the agent, and a policy is learned via interaction with the environment. Q-learning (?) computes an *action-value function* Qπ(s,a)=𝔼[Rt|st=s,at=a]superscript𝑄𝜋𝑠𝑎𝔼delimited-[]formulae-sequenceconditionalsubscript𝑅𝑡subscript𝑠𝑡𝑠subscript𝑎𝑡𝑎Q^{\pi}(s,a)=\mathbb{E}[R\_{t}|s\_{t}=s,a\_{t}=a]italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_s , italic\_a ) = blackboard\_E [ italic\_R start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_s start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_s , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_a ] that estimates the return from each state-action pair when following an approximately optimal policy. In each iteration the estimates are updated as
| | | |
| --- | --- | --- |
| | Q(s,a)←𝛼r(s,a,s′)+γmaxa′Q(s′,a′),𝛼←𝑄𝑠𝑎𝑟𝑠𝑎superscript𝑠′𝛾subscriptsuperscript𝑎′𝑄superscript𝑠′superscript𝑎′Q\left(s,a\right)\xleftarrow{\alpha}r\left(s,a,s^{\prime}\right)+\gamma\max\_{a^{\prime}}Q\left(s^{\prime},a^{\prime}\right),italic\_Q ( italic\_s , italic\_a ) start\_ARROW overitalic\_α ← end\_ARROW italic\_r ( italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) + italic\_γ roman\_max start\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , | |
where x←𝛼y𝛼←𝑥𝑦x\xleftarrow{\alpha}yitalic\_x start\_ARROW overitalic\_α ← end\_ARROW italic\_y is shorthand for x←x+α(y−x)←𝑥𝑥𝛼𝑦𝑥x\leftarrow x+\alpha\left(y-x\right)italic\_x ← italic\_x + italic\_α ( italic\_y - italic\_x ), α𝛼\alphaitalic\_α is a learning rate and s′superscript𝑠′s^{\prime}italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is the state after applying a𝑎aitalic\_a in s𝑠sitalic\_s. Usually, an ϵitalic-ϵ\epsilonitalic\_ϵ-greedy policy selects a random action with probability ϵitalic-ϵ\epsilonitalic\_ϵ and the action maximizing Q(s,a)𝑄𝑠𝑎Q(s,a)italic\_Q ( italic\_s , italic\_a ) otherwise. The policy is induced by the action that maximizes Q(s,a)𝑄𝑠𝑎Q(s,a)italic\_Q ( italic\_s , italic\_a ) in each s𝑠sitalic\_s.
#### Options
(?) address temporal abstraction in RL. Given an MDP ℳ=⟨S,A,p,r,γ⟩ℳ𝑆𝐴𝑝𝑟𝛾\mathcal{M}=\langle S,A,p,r,\gamma\ranglecaligraphic\_M = ⟨ italic\_S , italic\_A , italic\_p , italic\_r , italic\_γ ⟩, an *option* is a tuple ω=⟨Iω,πω,βω⟩𝜔subscript𝐼𝜔subscript𝜋𝜔subscript𝛽𝜔\omega=\langle I\_{\omega},\pi\_{\omega},\beta\_{\omega}\rangleitalic\_ω = ⟨ italic\_I start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT , italic\_β start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ⟩ where Iω⊆Ssubscript𝐼𝜔𝑆I\_{\omega}\subseteq Sitalic\_I start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ⊆ italic\_S is the option’s initiation set, πω:S→Δ(A):subscript𝜋𝜔→𝑆Δ𝐴\pi\_{\omega}:S\to\Delta(A)italic\_π start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT : italic\_S → roman\_Δ ( italic\_A ) is the option’s policy, and βω:S→[0,1]:subscript𝛽𝜔→𝑆01\beta\_{\omega}:S\to[0,1]italic\_β start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT : italic\_S → [ 0 , 1 ] is the option’s termination condition. An option is available in state s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S if s∈Iω𝑠subscript𝐼𝜔s\in I\_{\omega}italic\_s ∈ italic\_I start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT. If the option is started, the actions are chosen according to πωsubscript𝜋𝜔\pi\_{\omega}italic\_π start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT. The option terminates at a given state s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S with probability βω(s)subscript𝛽𝜔𝑠\beta\_{\omega}(s)italic\_β start\_POSTSUBSCRIPT italic\_ω end\_POSTSUBSCRIPT ( italic\_s ). The action set of an MDP can be augmented with options, which can be either handcrafted or automatically discovered. An MDP extended with options is a Semi-Markov Decision Process (SMDP); the learning methods for MDPs still apply to SMDPs. To improve sample-efficiency, the experiences (s,a,r,s′)𝑠𝑎𝑟superscript𝑠′(s,a,r,s^{\prime})( italic\_s , italic\_a , italic\_r , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) generated by an option’s policy can be used to update other options’ policies. This transfer learning method is called *intra-option learning* (?).
### Inductive Learning of Answer Set Programs
In this section we describe answer set programming (ASP) and the ILASP system for learning ASP programs.
#### Answer Set Programming (ASP)
(?) is a declarative programming language for knowledge representation and reasoning. An ASP problem is expressed in a logical format and the models (called answer sets) of its representation provide the solutions to that problem.
A *literal* is an atom 𝚊𝚊\mathtt{a}typewriter\_a or its negation 𝚗𝚘𝚝𝚊𝚗𝚘𝚝𝚊\mathtt{not~{}a}typewriter\_not typewriter\_a. Given an atom 𝚑𝚑\mathtt{h}typewriter\_h and a set of literals 𝚋𝟷,…,𝚋𝚗subscript𝚋1…subscript𝚋𝚗\mathtt{b\_{1},\ldots,b\_{n}}typewriter\_b start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , … , typewriter\_b start\_POSTSUBSCRIPT typewriter\_n end\_POSTSUBSCRIPT, a *normal rule* is of the form 𝚑:−𝚋𝟷,…,𝚋𝚗𝚑:absentsubscript𝚋1…subscript𝚋𝚗\mathtt{h\operatorname{\mathtt{:-}}b\_{1},\ldots,b\_{n}}typewriter\_h start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_b start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , … , typewriter\_b start\_POSTSUBSCRIPT typewriter\_n end\_POSTSUBSCRIPT, where 𝚑𝚑\mathtt{h}typewriter\_h is the *head* and 𝚋𝟷,…,𝚋𝚗subscript𝚋1…subscript𝚋𝚗\mathtt{b\_{1},\ldots,b\_{n}}typewriter\_b start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , … , typewriter\_b start\_POSTSUBSCRIPT typewriter\_n end\_POSTSUBSCRIPT is the *body* of the rule. Rules of the form :−𝚋𝟷,…,𝚋𝚗:absentsubscript𝚋1…subscript𝚋𝚗\mathtt{\operatorname{\mathtt{:-}}b\_{1},\ldots,b\_{n}}start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_b start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , … , typewriter\_b start\_POSTSUBSCRIPT typewriter\_n end\_POSTSUBSCRIPT are called *constraints*. In this paper, we assume an ASP program P𝑃Pitalic\_P to be a set of normal rules and constraints.
Given a set of ground atoms (or *interpretation*) I𝐼Iitalic\_I, a ground normal rule is satisfied if the head is satisfied by I𝐼Iitalic\_I when the body literals are satisfied by I𝐼Iitalic\_I. A ground constraint is satisfied if the body is not satisfied by I𝐼Iitalic\_I. The *reduct* PIsuperscript𝑃𝐼P^{I}italic\_P start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT of a program P𝑃Pitalic\_P with respect to I𝐼Iitalic\_I is built by removing all rules including 𝚗𝚘𝚝𝚊𝚗𝚘𝚝𝚊\mathtt{not~{}a}typewriter\_not typewriter\_a such that 𝚊∈I𝚊𝐼\mathtt{a}\in Itypewriter\_a ∈ italic\_I from P𝑃Pitalic\_P. I𝐼Iitalic\_I is an *answer set* of P𝑃Pitalic\_P iff (1) I𝐼Iitalic\_I satisfies the rules of PIsuperscript𝑃𝐼P^{I}italic\_P start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT, and (2) no subset of I𝐼Iitalic\_I satisfies the rules of PIsuperscript𝑃𝐼P^{I}italic\_P start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT.
#### ILASP
(?) is a system for learning ASP programs from partial answer sets. A *context-dependent partial interpretation (CDPI)* (?) is a pair ⟨⟨einc,eexc⟩,ectx⟩superscript𝑒𝑖𝑛𝑐superscript𝑒𝑒𝑥𝑐
superscript𝑒𝑐𝑡𝑥\langle\langle e^{inc},e^{exc}\rangle,e^{ctx}\rangle⟨ ⟨ italic\_e start\_POSTSUPERSCRIPT italic\_i italic\_n italic\_c end\_POSTSUPERSCRIPT , italic\_e start\_POSTSUPERSCRIPT italic\_e italic\_x italic\_c end\_POSTSUPERSCRIPT ⟩ , italic\_e start\_POSTSUPERSCRIPT italic\_c italic\_t italic\_x end\_POSTSUPERSCRIPT ⟩, where ⟨einc,eexc⟩superscript𝑒𝑖𝑛𝑐superscript𝑒𝑒𝑥𝑐\langle e^{inc},e^{exc}\rangle⟨ italic\_e start\_POSTSUPERSCRIPT italic\_i italic\_n italic\_c end\_POSTSUPERSCRIPT , italic\_e start\_POSTSUPERSCRIPT italic\_e italic\_x italic\_c end\_POSTSUPERSCRIPT ⟩ is a partial interpretation and ectxsuperscript𝑒𝑐𝑡𝑥e^{ctx}italic\_e start\_POSTSUPERSCRIPT italic\_c italic\_t italic\_x end\_POSTSUPERSCRIPT is an ASP program, called *context*. A program P𝑃Pitalic\_P accepts e𝑒eitalic\_e iff there is an answer set A𝐴Aitalic\_A of P∪ectx𝑃superscript𝑒𝑐𝑡𝑥P\cup e^{ctx}italic\_P ∪ italic\_e start\_POSTSUPERSCRIPT italic\_c italic\_t italic\_x end\_POSTSUPERSCRIPT such that einc⊆Asuperscript𝑒𝑖𝑛𝑐𝐴e^{inc}\subseteq Aitalic\_e start\_POSTSUPERSCRIPT italic\_i italic\_n italic\_c end\_POSTSUPERSCRIPT ⊆ italic\_A and eexc∩A=∅superscript𝑒𝑒𝑥𝑐𝐴e^{exc}~{}\cap A=\emptysetitalic\_e start\_POSTSUPERSCRIPT italic\_e italic\_x italic\_c end\_POSTSUPERSCRIPT ∩ italic\_A = ∅. An *ILASP task* (?) is a tuple T=⟨B,SM,E⟩𝑇𝐵subscript𝑆𝑀𝐸T=\langle B,S\_{M},E\rangleitalic\_T = ⟨ italic\_B , italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT , italic\_E ⟩ where B𝐵Bitalic\_B is the ASP background knowledge, SMsubscript𝑆𝑀S\_{M}italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT is the set of rules allowed in the hypotheses, and E𝐸Eitalic\_E is a set of CDPIs, called examples. A hypothesis H⊆SM𝐻subscript𝑆𝑀H\subseteq S\_{M}italic\_H ⊆ italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT is a *solution* of T𝑇Titalic\_T iff ∀e∈Efor-all𝑒𝐸\forall e\in E∀ italic\_e ∈ italic\_E, B∪H𝐵𝐻B\cup Hitalic\_B ∪ italic\_H accepts e𝑒eitalic\_e.
3 Methodology
--------------
In this section we describe ISA, our method that interleaves the learning of subgoal automata with the learning of policies for achieving these subgoals. The tasks we consider are *episodic MDPs* ℳ=⟨S,A,p,r,γ,ST,SG⟩ℳ𝑆𝐴𝑝𝑟𝛾subscript𝑆𝑇subscript𝑆𝐺\mathcal{M}=\langle S,A,p,r,\gamma,S\_{T},S\_{G}\ranglecaligraphic\_M = ⟨ italic\_S , italic\_A , italic\_p , italic\_r , italic\_γ , italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT , italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT ⟩ where r(s,a,s′)=1𝑟𝑠𝑎superscript𝑠′1r(s,a,s^{\prime})=1italic\_r ( italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = 1 if s′∈SGsuperscript𝑠′subscript𝑆𝐺s^{\prime}\in S\_{G}italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT and 0 otherwise.
The automaton transitions are defined by a logical formula over a set of *observables* 𝒪𝒪\mathcal{O}caligraphic\_O. A *labeling function* L:S→2𝒪:𝐿→𝑆superscript2𝒪L:S\to 2^{\mathcal{O}}italic\_L : italic\_S → 2 start\_POSTSUPERSCRIPT caligraphic\_O end\_POSTSUPERSCRIPT maps an MDP state into a subset of observables O⊆𝒪𝑂𝒪O\subseteq\mathcal{O}italic\_O ⊆ caligraphic\_O (or *observations*) perceived by the agent at that state.
01234567891011012345678D𝐷Ditalic\_D∗∗\ast∗∗∗\ast∗C𝐶Citalic\_C☕\Strichmaxerl[1.25]∗∗\ast∗o𝑜oitalic\_o🖂∗∗\ast∗A𝐴Aitalic\_A∗∗\ast∗∗∗\ast∗B𝐵Bitalic\_B
(a)
Execution trace
T=⟨s4,6,←,0,s3,6,→,0,s4,6,↓,0,s4,5,↓,1,s4,4⟩\begin{aligned} T=\langle&s\_{4,6},\leftarrow,0,s\_{3,6},\rightarrow,0,\\
&s\_{4,6},\downarrow,0,s\_{4,5},\downarrow,1,s\_{4,4}\rangle\end{aligned}start\_ROW start\_CELL italic\_T = ⟨ end\_CELL start\_CELL italic\_s start\_POSTSUBSCRIPT 4 , 6 end\_POSTSUBSCRIPT , ← , 0 , italic\_s start\_POSTSUBSCRIPT 3 , 6 end\_POSTSUBSCRIPT , → , 0 , end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL italic\_s start\_POSTSUBSCRIPT 4 , 6 end\_POSTSUBSCRIPT , ↓ , 0 , italic\_s start\_POSTSUBSCRIPT 4 , 5 end\_POSTSUBSCRIPT , ↓ , 1 , italic\_s start\_POSTSUBSCRIPT 4 , 4 end\_POSTSUBSCRIPT ⟩ end\_CELL end\_ROW
Observation trace
TL,𝒪=⟨{},{☕},{},{},{o}⟩subscript𝑇𝐿𝒪
☕
𝑜\begin{aligned} T\_{L,\mathcal{O}}=\left\langle\{\},\{\text{\Coffeecup}\},\{\},\{\},\{o\}\right\rangle\end{aligned}start\_ROW start\_CELL italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ { } , { ☕ } , { } , { } , { italic\_o } ⟩ end\_CELL end\_ROW
Compressed observation trace
T^L,𝒪=⟨{},{☕},{},{o}⟩subscript^𝑇𝐿𝒪
☕
𝑜\begin{aligned} \hat{T}\_{L,\mathcal{O}}=\left\langle\{\},\{\text{\Coffeecup}\},\{\},\{o\}\right\rangle\end{aligned}start\_ROW start\_CELL over^ start\_ARG italic\_T end\_ARG start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ { } , { ☕ } , { } , { italic\_o } ⟩ end\_CELL end\_ROW
(b)
u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPTstartuAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPTu1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTuRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPTotherwiseotherwise☕∧¬o☕𝑜\texttt{\Coffeecup}\land\neg o☕ ∧ ¬ italic\_o∗∗\ast∗∗∗\ast∗o𝑜oitalic\_o☕∧o☕𝑜\texttt{\Coffeecup}\land o☕ ∧ italic\_o
(c)
Figure 1: The OfficeWorld environment (?). Figure LABEL:sub@fig:officeworld\_grid is an example grid, LABEL:sub@fig:officeworld\_coffee\_traces shows a positive execution trace in the example grid for the Coffee task and its derived observation traces, and LABEL:sub@fig:officeworld\_coffee\_rm shows the Coffee automaton.
We use the OfficeWorld environment (?) to explain our method. It consists of a grid (see Figure [1a](#S3.F1.sf1 "1a ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning")) where an agent (\Strichmaxerl[1.25]\Strichmaxerldelimited-[]1.25\Strichmaxerl[1.25][ 1.25 ]) can move in the four cardinal directions, and the set of observables is 𝒪={☕,🖂,o,A,B,C,D,∗}𝒪☕🖂𝑜𝐴𝐵𝐶𝐷∗\mathcal{O}=\{\texttt{\Coffeecup},\texttt{\Letter},o,A,B,C,D,\ast\}caligraphic\_O = { ☕ , 🖂 , italic\_o , italic\_A , italic\_B , italic\_C , italic\_D , ∗ }. The agent picks up coffee and the mail at locations ☕ and 🖂 respectively, and delivers them to the office at location o𝑜oitalic\_o. The decorations ∗∗\ast∗ are broken if the agent steps on them. There are also four locations labeled A𝐴Aitalic\_A, B𝐵Bitalic\_B, C𝐶Citalic\_C and D𝐷Ditalic\_D. The observables A𝐴Aitalic\_A, B𝐵Bitalic\_B, C𝐶Citalic\_C and D𝐷Ditalic\_D, and decorations ∗∗\ast∗ do not share locations with other elements. The agent observes these labels when it steps on their locations. Three tasks with different goals are defined on this environment: Coffee (deliver coffee to the office), CoffeeMail (deliver coffee and mail to the office), and VisitABCD (visit A𝐴Aitalic\_A, B𝐵Bitalic\_B, C𝐶Citalic\_C and D𝐷Ditalic\_D in order). The tasks terminate when the goal is achieved or a ∗∗\ast∗ is broken (this is a dead-end state).
### Traces
An *execution trace* T𝑇Titalic\_T is a finite state-action-reward sequence T=⟨s0,a0,r1,s1,a1,…,an−1,rn,sn⟩𝑇subscript𝑠0subscript𝑎0subscript𝑟1subscript𝑠1subscript𝑎1…subscript𝑎𝑛1subscript𝑟𝑛subscript𝑠𝑛T=\langle s\_{0},a\_{0},r\_{1},s\_{1},a\_{1},\ldots,a\_{n-1},r\_{n},s\_{n}\rangleitalic\_T = ⟨ italic\_s start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_a start\_POSTSUBSCRIPT italic\_n - 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT , italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⟩ induced by a (changing) policy during an episode. An execution trace is *positive* if sn∈SGsubscript𝑠𝑛subscript𝑆𝐺s\_{n}\in S\_{G}italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT, *negative* if sn∈ST∖SGsubscript𝑠𝑛subscript𝑆𝑇subscript𝑆𝐺s\_{n}\in S\_{T}\setminus S\_{G}italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT ∖ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT, and *incomplete* if sn∉STsubscript𝑠𝑛subscript𝑆𝑇s\_{n}\notin S\_{T}italic\_s start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∉ italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT, denoted by T+superscript𝑇T^{+}italic\_T start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT, T−superscript𝑇T^{-}italic\_T start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT and TIsuperscript𝑇𝐼T^{I}italic\_T start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT respectively.
An *observation trace* TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT is a sequence of observation sets Oi⊆𝒪,0≤i≤nformulae-sequencesubscript𝑂𝑖𝒪0𝑖𝑛O\_{i}\subseteq\mathcal{O},0\leq i\leq nitalic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⊆ caligraphic\_O , 0 ≤ italic\_i ≤ italic\_n, obtained by applying a labeling function L𝐿Litalic\_L to each state sisubscript𝑠𝑖s\_{i}italic\_s start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT in an execution trace T𝑇Titalic\_T. A *compressed observation trace* T^L,𝒪subscript^𝑇𝐿𝒪\hat{T}\_{L,\mathcal{O}}over^ start\_ARG italic\_T end\_ARG start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT is the result of removing contiguous duplicated observation sets from TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT. Figure [1b](#S3.F1.sf2 "1b ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") shows an example of a positive execution trace for the Coffee task together with the derived observation trace and the resulting compressed trace.
A set of execution traces is a tuple 𝒯=⟨𝒯+,𝒯−,𝒯I⟩𝒯superscript𝒯superscript𝒯superscript𝒯𝐼\mathcal{T}=\langle\mathcal{T}^{+},\mathcal{T}^{-},\mathcal{T}^{I}\ranglecaligraphic\_T = ⟨ caligraphic\_T start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT ⟩, where 𝒯+superscript𝒯\mathcal{T}^{+}caligraphic\_T start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT, 𝒯−superscript𝒯\mathcal{T}^{-}caligraphic\_T start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT and 𝒯Isuperscript𝒯𝐼\mathcal{T}^{I}caligraphic\_T start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT are sets of positive, negative and incomplete traces, respectively. The sets of observation and compressed observation traces are denoted 𝒯L,𝒪subscript𝒯𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT and 𝒯^L,𝒪subscript^𝒯𝐿𝒪\hat{\mathcal{T}}\_{L,\mathcal{O}}over^ start\_ARG caligraphic\_T end\_ARG start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT.
### Subgoal Automata
A *subgoal automaton* is a tuple 𝒜=⟨U,𝒪,δ,u0,uA,uR⟩𝒜𝑈𝒪𝛿subscript𝑢0subscript𝑢𝐴subscript𝑢𝑅\mathcal{A}=\langle U,\mathcal{O},\delta,u\_{0},u\_{A},u\_{R}\ranglecaligraphic\_A = ⟨ italic\_U , caligraphic\_O , italic\_δ , italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ⟩ where U𝑈Uitalic\_U is a finite set of states, 𝒪𝒪\mathcal{O}caligraphic\_O is a set of observables (or alphabet), δ:U×2𝒪→U:𝛿→𝑈superscript2𝒪𝑈\delta:U\times 2^{\mathcal{O}}\to Uitalic\_δ : italic\_U × 2 start\_POSTSUPERSCRIPT caligraphic\_O end\_POSTSUPERSCRIPT → italic\_U is a deterministic transition function that takes as arguments a state and a subset of observables and returns a state, u0∈Usubscript𝑢0𝑈u\_{0}\in Uitalic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ italic\_U is a start state, uA∈Usubscript𝑢𝐴𝑈u\_{A}\in Uitalic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ∈ italic\_U is the unique accepting state, and uR∈Usubscript𝑢𝑅𝑈u\_{R}\in Uitalic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ∈ italic\_U is the unique rejecting state.
A subgoal automaton 𝒜𝒜\mathcal{A}caligraphic\_A *accepts* an observation trace TL,𝒪=⟨O0,…,On⟩subscript𝑇𝐿𝒪subscript𝑂0…subscript𝑂𝑛T\_{L,\mathcal{O}}=\langle O\_{0},\ldots,O\_{n}\rangleitalic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ italic\_O start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_O start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⟩ if there exists a sequence of automaton states u0,…,un+1subscript𝑢0…subscript𝑢𝑛1u\_{0},\ldots,u\_{n+1}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_u start\_POSTSUBSCRIPT italic\_n + 1 end\_POSTSUBSCRIPT in U𝑈Uitalic\_U such that (1) δ(ui,Oi)=ui+1𝛿subscript𝑢𝑖subscript𝑂𝑖subscript𝑢𝑖1\delta(u\_{i},O\_{i})=u\_{i+1}italic\_δ ( italic\_u start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_O start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_u start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT for i=0,…,n𝑖0…𝑛i=0,\ldots,nitalic\_i = 0 , … , italic\_n, and (2) un+1=uAsubscript𝑢𝑛1subscript𝑢𝐴u\_{n+1}=u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_n + 1 end\_POSTSUBSCRIPT = italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. Analogously, 𝒜𝒜\mathcal{A}caligraphic\_A *rejects* TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT if un+1=uRsubscript𝑢𝑛1subscript𝑢𝑅u\_{n+1}=u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_n + 1 end\_POSTSUBSCRIPT = italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT.
When a subgoal automaton is used together with an MDP, the actual states are (s,u)𝑠𝑢(s,u)( italic\_s , italic\_u ) pairs where s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S and u∈U𝑢𝑈u\in Uitalic\_u ∈ italic\_U. Therefore, actions are selected according to a policy π:S×U→Δ(A):𝜋→𝑆𝑈Δ𝐴\pi:S\times U\to\Delta(A)italic\_π : italic\_S × italic\_U → roman\_Δ ( italic\_A ) where π(a|s,u)𝜋conditional𝑎𝑠𝑢\pi(a|s,u)italic\_π ( italic\_a | italic\_s , italic\_u ) is the probability for taking action a∈A𝑎𝐴a\in Aitalic\_a ∈ italic\_A at the MDP state s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S and the automaton state u∈U𝑢𝑈u\in Uitalic\_u ∈ italic\_U. At each step, the agent transitions to (s′,u′)superscript𝑠′superscript𝑢′(s^{\prime},u^{\prime})( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) where s′superscript𝑠′s^{\prime}italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is the result of applying an action a∈A𝑎𝐴a\in Aitalic\_a ∈ italic\_A in s𝑠sitalic\_s, while u′=δ(u,L(s′))superscript𝑢′𝛿𝑢𝐿superscript𝑠′u^{\prime}=\delta(u,L(s^{\prime}))italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_δ ( italic\_u , italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ). Then, the agent receives reward 1 if u′=uAsuperscript𝑢′subscript𝑢𝐴u^{\prime}=u\_{A}italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and 0 otherwise.
Figure [1c](#S3.F1.sf3 "1c ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") shows the automaton for the Coffee task of the OfficeWorld domain. Each transition is labeled with a logical condition φ∈3𝒪𝜑superscript3𝒪\varphi\in 3^{\mathcal{O}}italic\_φ ∈ 3 start\_POSTSUPERSCRIPT caligraphic\_O end\_POSTSUPERSCRIPT that expresses the subgoal to be achieved222Note that φ∈3𝒪𝜑superscript3𝒪\varphi\in 3^{\mathcal{O}}italic\_φ ∈ 3 start\_POSTSUPERSCRIPT caligraphic\_O end\_POSTSUPERSCRIPT because each observable can appear as positive or negative, or not appear in the condition.. The sequence of visited automaton states for the trace in Figure [1b](#S3.F1.sf2 "1b ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") would be ⟨u0,u1,u1,u1,uA⟩subscript𝑢0subscript𝑢1subscript𝑢1subscript𝑢1subscript𝑢𝐴\langle u\_{0},u\_{1},u\_{1},u\_{1},u\_{A}\rangle⟨ italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ⟩.
#### Relationship with options.
Each state u∈U𝑢𝑈u\in Uitalic\_u ∈ italic\_U in a subgoal automaton encapsulates an option ωu=⟨Iu,πu,βu⟩subscript𝜔𝑢subscript𝐼𝑢subscript𝜋𝑢subscript𝛽𝑢\omega\_{u}=\langle I\_{u},\pi\_{u},\beta\_{u}\rangleitalic\_ω start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT = ⟨ italic\_I start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT , italic\_β start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT ⟩ whose policy πusubscript𝜋𝑢\pi\_{u}italic\_π start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT attempts to satisfy an outgoing transition’s condition. Formally, the option termination condition βusubscript𝛽𝑢\beta\_{u}italic\_β start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT is:
| | | |
| --- | --- | --- |
| | βu(s)={1if∃u′≠u,φ∈3𝒪∣L(s′)⊧φ,δ(u,φ)=u′0otherwise.\displaystyle\beta\_{u}(s)=\left\{\begin{matrix}1~{}\text{if}~{}\exists u^{\prime}\neq u,\varphi\in 3^{\mathcal{O}}\mathord{\mid}L(s^{\prime})\models\varphi,\delta(u,\varphi)=u^{\prime}\\
0~{}\text{otherwise}\end{matrix}\right..italic\_β start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT ( italic\_s ) = { start\_ARG start\_ROW start\_CELL 1 if ∃ italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ≠ italic\_u , italic\_φ ∈ 3 start\_POSTSUPERSCRIPT caligraphic\_O end\_POSTSUPERSCRIPT ∣ italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) ⊧ italic\_φ , italic\_δ ( italic\_u , italic\_φ ) = italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_CELL end\_ROW start\_ROW start\_CELL 0 otherwise end\_CELL end\_ROW end\_ARG . | |
That is, the option at automaton state u∈U𝑢𝑈u\in Uitalic\_u ∈ italic\_U finishes at MDP state s∈S𝑠𝑆s\in Sitalic\_s ∈ italic\_S if there is a transition to a different automaton state u′∈Usuperscript𝑢′𝑈u^{\prime}\in Uitalic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ italic\_U such that the transition’s condition φ𝜑\varphiitalic\_φ is satisfied by the next observations L(s′)𝐿superscript𝑠′L(s^{\prime})italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ). Note that at most one transition can be made true since the automaton is deterministic.
The initiation set Iusubscript𝐼𝑢I\_{u}italic\_I start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT is formed by all those states that satisfy the incoming conditions:
| | | |
| --- | --- | --- |
| | Iu={s∈S∣δ(u′,φ)=u,L(s)⊧φ,u≠u′,u≠u0}.subscript𝐼𝑢conditional-set𝑠𝑆formulae-sequence𝛿superscript𝑢′𝜑𝑢formulae-sequencemodels𝐿𝑠𝜑formulae-sequence𝑢superscript𝑢′𝑢subscript𝑢0\displaystyle I\_{u}=\left\{s\in S\mid\delta\left(u^{\prime},\varphi\right)=u,L\left(s\right)\models\varphi,u\neq u^{\prime},u\neq u\_{0}\right\}.italic\_I start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT = { italic\_s ∈ italic\_S ∣ italic\_δ ( italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_φ ) = italic\_u , italic\_L ( italic\_s ) ⊧ italic\_φ , italic\_u ≠ italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_u ≠ italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT } . | |
In the particular case of the initial automaton state u0∈Usubscript𝑢0𝑈u\_{0}\in Uitalic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∈ italic\_U, its initiation set Iu0=Ssubscript𝐼subscript𝑢0𝑆I\_{{u\_{0}}}=Sitalic\_I start\_POSTSUBSCRIPT italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = italic\_S is the whole state space since there is no restriction imposed by any previous automaton state.
Note that we do not add options to the set of primitive actions, which would make the decision process more complex since more alternatives are available. Instead, subgoal automata keep the action set unchanged, and which option to apply is determined by the current automaton state.
### Learning Subgoal Automata from Traces
This section describes our approach for learning an automaton. We formalize the automaton learning task as a tuple T𝒜=⟨U,𝒪,𝒯L,𝒪⟩subscript𝑇𝒜𝑈𝒪subscript𝒯𝐿𝒪T\_{\mathcal{A}}=\langle U,\mathcal{O},\mathcal{T}\_{L,\mathcal{O}}\rangleitalic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT = ⟨ italic\_U , caligraphic\_O , caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ⟩, where U⊇{u0,uA,uR}subscript𝑢0subscript𝑢𝐴subscript𝑢𝑅𝑈U\supseteq\{u\_{0},u\_{A},u\_{R}\}italic\_U ⊇ { italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT } is a set of states, 𝒪𝒪\mathcal{O}caligraphic\_O is a set of observables, and 𝒯L,𝒪=⟨𝒯L,𝒪+,𝒯L,𝒪−,𝒯L,𝒪I⟩subscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐼𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}=\langle\mathcal{T}^{+}\_{L,\mathcal{O}},\mathcal{T}^{-}\_{L,\mathcal{O}},\mathcal{T}^{I}\_{L,\mathcal{O}}\ranglecaligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ caligraphic\_T start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ⟩ is a set of (possibly compressed) observation traces (abbreviated as *traces* below). An automaton 𝒜𝒜\mathcal{A}caligraphic\_A is a solution of T𝒜subscript𝑇𝒜T\_{\mathcal{A}}italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT if and only if accepts all positive traces 𝒯L,𝒪+superscriptsubscript𝒯𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}^{+}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT, rejects all negative traces 𝒯L,𝒪−superscriptsubscript𝒯𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}^{-}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT, and neither accepts nor rejects incomplete traces 𝒯L,𝒪Isuperscriptsubscript𝒯𝐿𝒪𝐼\mathcal{T}\_{L,\mathcal{O}}^{I}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT.
The automaton learning task T𝒜subscript𝑇𝒜T\_{\mathcal{A}}italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT is mapped into an ILASP task M(T𝒜)=⟨B,SM,E⟩𝑀subscript𝑇𝒜𝐵subscript𝑆𝑀𝐸M(T\_{\mathcal{A}})=\langle B,S\_{M},E\rangleitalic\_M ( italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT ) = ⟨ italic\_B , italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT , italic\_E ⟩. Then, the ILASP system is used to learn the smallest set of transitions (i.e., a minimal hypothesis) that covers the example traces.
We define the components of the ILASP task M(T𝒜)𝑀subscript𝑇𝒜M(T\_{\mathcal{A}})italic\_M ( italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT ) below. An ILASP task specifies the maximum size of a rule body. To allow for an arbitrary number of literals in the bodies, we learn the negation δ¯¯𝛿\bar{\delta}over¯ start\_ARG italic\_δ end\_ARG of the actual transitions δ𝛿\deltaitalic\_δ. Thus, we do not limit the number of conjuncts, but the number of disjuncts (i.e., the number of edges between two states). We denote the maximum number of disjuncts by max|δ(x,y)|𝛿𝑥𝑦\max|\delta(x,y)|roman\_max | italic\_δ ( italic\_x , italic\_y ) |.
#### Hypothesis space.
The hypothesis space SMsubscript𝑆𝑀S\_{M}italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT is formed by two kinds of rules:
1. 1.
Facts of the form 𝚎𝚍(𝚇,𝚈,𝙴).𝚎𝚍𝚇𝚈𝙴\mathtt{ed(X,Y,E).}typewriter\_ed ( typewriter\_X , typewriter\_Y , typewriter\_E ) . indicating there is a transition from state 𝚇∈U∖{uA,uR}𝚇𝑈subscript𝑢𝐴subscript𝑢𝑅\mathtt{X}\in U\setminus\{u\_{A},u\_{R}\}typewriter\_X ∈ italic\_U ∖ { italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT } to state 𝚈∈U∖{𝚇}𝚈𝑈𝚇\mathtt{Y}\in U\setminus\{\mathtt{X}\}typewriter\_Y ∈ italic\_U ∖ { typewriter\_X } using edge 𝙴∈[1,max|δ(x,y)|]𝙴1𝛿𝑥𝑦\mathtt{E}\in[1,\max|\delta(x,y)|]typewriter\_E ∈ [ 1 , roman\_max | italic\_δ ( italic\_x , italic\_y ) | ].
2. 2.
Normal rules whose *head* is of the form δ¯(𝚇,𝚈,𝙴,𝚃)¯𝛿𝚇𝚈𝙴𝚃\mathtt{\bar{\delta}(X,Y,E,T)}over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_X , typewriter\_Y , typewriter\_E , typewriter\_T ) stating that the conditions of the transition from state 𝚇∈U∖{uA,uR}𝚇𝑈subscript𝑢𝐴subscript𝑢𝑅\mathtt{X}\in U\setminus\{u\_{A},u\_{R}\}typewriter\_X ∈ italic\_U ∖ { italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT } to state 𝚈∈U∖{𝚇}𝚈𝑈𝚇\mathtt{Y}\in U\setminus\{\mathtt{X}\}typewriter\_Y ∈ italic\_U ∖ { typewriter\_X } in edge 𝙴∈[1,max|δ(x,y)|]𝙴1𝛿𝑥𝑦\mathtt{E}\in[1,\max|\delta(x,y)|]typewriter\_E ∈ [ 1 , roman\_max | italic\_δ ( italic\_x , italic\_y ) | ] *do not* hold at time 𝚃𝚃\mathtt{T}typewriter\_T. These conditions are specified in the *body*, which is a conjunction of 𝚘𝚋𝚜(𝙾,𝚃)𝚘𝚋𝚜𝙾𝚃\mathtt{obs(O,T)}typewriter\_obs ( typewriter\_O , typewriter\_T ) literals indicating that observable 𝙾∈𝒪𝙾𝒪\mathtt{O}\in\mathcal{O}typewriter\_O ∈ caligraphic\_O is seen at time 𝚃𝚃\mathtt{T}typewriter\_T. The atom 𝚜𝚝𝚎𝚙(𝚃)𝚜𝚝𝚎𝚙𝚃\mathtt{step(T)}typewriter\_step ( typewriter\_T ) expresses that 𝚃𝚃\mathtt{T}typewriter\_T is a timestep. The body must contain at least one 𝚘𝚋𝚜𝚘𝚋𝚜\mathtt{obs}typewriter\_obs literal.
Note that the hypothesis space does not include (i) loop transitions, and (ii) transitions from uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT. Later, we define (i) in the absence of external transitions.
Given a subgoal automaton 𝒜𝒜\mathcal{A}caligraphic\_A, we denote the set of ASP rules that describe it by M(𝒜)𝑀𝒜M(\mathcal{A})italic\_M ( caligraphic\_A ). The rules below correspond to the (u0,u1)subscript𝑢0subscript𝑢1(u\_{0},u\_{1})( italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) and (u0,uA)subscript𝑢0subscript𝑢𝐴(u\_{0},u\_{A})( italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) transitions in Figure [1c](#S3.F1.sf3 "1c ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning"):
| |
| --- |
| 𝚎𝚍(𝚞𝟶,𝚞𝟷,𝟷).𝚎𝚍(𝚞𝟶,𝚞𝙰,𝟷).formulae-sequence𝚎𝚍subscript𝚞0subscript𝚞11𝚎𝚍subscript𝚞0subscript𝚞𝙰1\mathtt{ed(u\_{0},u\_{1},1).~{}ed(u\_{0},u\_{A},1).}typewriter\_ed ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , typewriter\_1 ) . typewriter\_ed ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT , typewriter\_1 ) . |
| δ¯(𝚞𝟶,𝚞𝟷,𝟷,𝚃):−𝚗𝚘𝚝𝚘𝚋𝚜(☕,𝚃),𝚜𝚝𝚎𝚙(𝚃).¯𝛿subscript𝚞0subscript𝚞11𝚃:absent𝚗𝚘𝚝𝚘𝚋𝚜☕𝚃𝚜𝚝𝚎𝚙𝚃\mathtt{\bar{\delta}(u\_{0},u\_{1},1,T)\operatorname{\mathtt{:-}}not~{}obs(\text{\Coffeecup},T),step(T).}over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , typewriter\_1 , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_not typewriter\_obs ( ☕ , typewriter\_T ) , typewriter\_step ( typewriter\_T ) . |
| δ¯(𝚞𝟶,𝚞𝟷,𝟷,𝚃):−𝚘𝚋𝚜(o,𝚃),𝚜𝚝𝚎𝚙(𝚃).¯𝛿subscript𝚞0subscript𝚞11𝚃:absent𝚘𝚋𝚜𝑜𝚃𝚜𝚝𝚎𝚙𝚃\mathtt{\bar{\delta}(u\_{0},u\_{1},1,T)\operatorname{\mathtt{:-}}obs(}o\mathtt{,T),step(T).}over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_1 end\_POSTSUBSCRIPT , typewriter\_1 , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_obs ( italic\_o , typewriter\_T ) , typewriter\_step ( typewriter\_T ) . |
| δ¯(𝚞𝟶,𝚞𝙰,𝟷,𝚃):−𝚗𝚘𝚝𝚘𝚋𝚜(o,𝚃),𝚜𝚝𝚎𝚙(𝚃).¯𝛿subscript𝚞0subscript𝚞𝙰1𝚃:absent𝚗𝚘𝚝𝚘𝚋𝚜𝑜𝚃𝚜𝚝𝚎𝚙𝚃\mathtt{\bar{\delta}(u\_{0},u\_{A},1,T)\operatorname{\mathtt{:-}}not~{}obs(}o\mathtt{,T),step(T).}over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT , typewriter\_1 , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_not typewriter\_obs ( italic\_o , typewriter\_T ) , typewriter\_step ( typewriter\_T ) . |
| δ¯(𝚞𝟶,𝚞𝙰,𝟷,𝚃):−𝚗𝚘𝚝𝚘𝚋𝚜(☕,𝚃),𝚜𝚝𝚎𝚙(𝚃).¯𝛿subscript𝚞0subscript𝚞𝙰1𝚃:absent𝚗𝚘𝚝𝚘𝚋𝚜☕𝚃𝚜𝚝𝚎𝚙𝚃\mathtt{\bar{\delta}(u\_{0},u\_{A},1,T)\operatorname{\mathtt{:-}}not~{}obs(\text{\Coffeecup},T),step(T).}over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT , typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT , typewriter\_1 , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_not typewriter\_obs ( ☕ , typewriter\_T ) , typewriter\_step ( typewriter\_T ) . |
#### Examples.
Given 𝒯L,𝒪=⟨𝒯L,𝒪+,𝒯L,𝒪−,𝒯L,𝒪I⟩subscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐿𝒪subscriptsuperscript𝒯𝐼𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}=\langle\mathcal{T}^{+}\_{L,\mathcal{O}},\mathcal{T}^{-}\_{L,\mathcal{O}},\mathcal{T}^{I}\_{L,\mathcal{O}}\ranglecaligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ caligraphic\_T start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT , caligraphic\_T start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ⟩, the example set is defined as E={⟨e\*,CTL,𝒪⟩∣∗∈{+,−,I},E=\{\langle e^{\*},C\_{T\_{L,\mathcal{O}}}\rangle\mid\ast\in\{+,-,I\},italic\_E = { ⟨ italic\_e start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT , italic\_C start\_POSTSUBSCRIPT italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ⟩ ∣ ∗ ∈ { + , - , italic\_I } , TL,𝒪∈𝒯L,𝒪\*}T\_{L,\mathcal{O}}\in\mathcal{T}^{\*}\_{L,\mathcal{O}}\}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ∈ caligraphic\_T start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT }, where e+=⟨{𝚊𝚌𝚌𝚎𝚙𝚝},{𝚛𝚎𝚓𝚎𝚌𝚝}⟩superscript𝑒𝚊𝚌𝚌𝚎𝚙𝚝𝚛𝚎𝚓𝚎𝚌𝚝e^{+}=\langle\{\mathtt{accept}\},\{\mathtt{reject}\}\rangleitalic\_e start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT = ⟨ { typewriter\_accept } , { typewriter\_reject } ⟩, e−=⟨{𝚛𝚎𝚓𝚎𝚌𝚝},{𝚊𝚌𝚌𝚎𝚙𝚝}⟩superscript𝑒𝚛𝚎𝚓𝚎𝚌𝚝𝚊𝚌𝚌𝚎𝚙𝚝e^{-}=\langle\{\mathtt{reject}\},\{\mathtt{accept}\}\rangleitalic\_e start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT = ⟨ { typewriter\_reject } , { typewriter\_accept } ⟩ and eI=⟨{},{𝚊𝚌𝚌𝚎𝚙𝚝,𝚛𝚎𝚓𝚎𝚌𝚝}⟩superscript𝑒𝐼
𝚊𝚌𝚌𝚎𝚙𝚝𝚛𝚎𝚓𝚎𝚌𝚝e^{I}=\langle\{\},\{\mathtt{accept,reject}\}\rangleitalic\_e start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT = ⟨ { } , { typewriter\_accept , typewriter\_reject } ⟩ are the partial interpretations for positive, negative and incomplete examples. The 𝚊𝚌𝚌𝚎𝚙𝚝𝚊𝚌𝚌𝚎𝚙𝚝\mathtt{accept}typewriter\_accept and 𝚛𝚎𝚓𝚎𝚌𝚝𝚛𝚎𝚓𝚎𝚌𝚝\mathtt{reject}typewriter\_reject atoms express whether a trace is accepted or rejected by the automaton; hence, positive traces must only be accepted, negative traces must only be rejected, and incomplete traces cannot be accepted or rejected.
Given a trace TL,𝒪=⟨O0,…,On⟩subscript𝑇𝐿𝒪subscript𝑂0…subscript𝑂𝑛T\_{L,\mathcal{O}}=\langle O\_{0},\ldots,O\_{n}\rangleitalic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ italic\_O start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_O start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ⟩, a context is defined as: CTL,𝒪={𝚘𝚋𝚜(𝙾,𝚃).∣𝙾∈O𝚃,O𝚃∈TL,𝒪}∪{𝚕𝚊𝚜𝚝(n).}C\_{T\_{L,\mathcal{O}}}=\left\{\mathtt{obs(O,T).}\mid\mathtt{O}\in O\_{\mathtt{T}},O\_{\mathtt{T}}\in T\_{L,\mathcal{O}}\right\}\cup\left\{\mathtt{last(}n\mathtt{).}\right\}italic\_C start\_POSTSUBSCRIPT italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = { typewriter\_obs ( typewriter\_O , typewriter\_T ) . ∣ typewriter\_O ∈ italic\_O start\_POSTSUBSCRIPT typewriter\_T end\_POSTSUBSCRIPT , italic\_O start\_POSTSUBSCRIPT typewriter\_T end\_POSTSUBSCRIPT ∈ italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT } ∪ { typewriter\_last ( italic\_n ) . }, where 𝚕𝚊𝚜𝚝(n)𝚕𝚊𝚜𝚝𝑛\mathtt{last(}n\mathtt{)}typewriter\_last ( italic\_n ) indicates that the trace ends at time n𝑛nitalic\_n. We denote by M(TL,𝒪)𝑀subscript𝑇𝐿𝒪M(T\_{L,\mathcal{O}})italic\_M ( italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ) the set of ASP facts that describe the trace TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT. For example, M(TL,𝒪)={𝚘𝚋𝚜(𝚊,𝟶).𝚘𝚋𝚜(𝚋,𝟸).𝚘𝚋𝚜(𝚌,𝟸).𝚕𝚊𝚜𝚝(𝟸).}M(T\_{L,\mathcal{O}})=\{\mathtt{obs(a,0).}~{}\mathtt{obs(b,2).}~{}\mathtt{obs(c,2).}~{}\mathtt{last(2).}\}italic\_M ( italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ) = { typewriter\_obs ( typewriter\_a , typewriter\_0 ) . typewriter\_obs ( typewriter\_b , typewriter\_2 ) . typewriter\_obs ( typewriter\_c , typewriter\_2 ) . typewriter\_last ( typewriter\_2 ) . } for the trace TL,𝒪=⟨{a},{},{b,c}⟩subscript𝑇𝐿𝒪𝑎
𝑏𝑐T\_{L,\mathcal{O}}=\langle\{a\},\{\},\{b,c\}\rangleitalic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT = ⟨ { italic\_a } , { } , { italic\_b , italic\_c } ⟩.
#### Background knowledge.
The next paragraphs describe the background knowledge B𝐵Bitalic\_B components: B=BU∪B𝚜𝚝𝚎𝚙∪Bδ∪B𝚜𝚝𝐵subscript𝐵𝑈subscript𝐵𝚜𝚝𝚎𝚙subscript𝐵𝛿subscript𝐵𝚜𝚝B=B\_{U}\cup B\_{\mathtt{step}}\cup B\_{\delta}\cup B\_{\mathtt{st}}italic\_B = italic\_B start\_POSTSUBSCRIPT italic\_U end\_POSTSUBSCRIPT ∪ italic\_B start\_POSTSUBSCRIPT typewriter\_step end\_POSTSUBSCRIPT ∪ italic\_B start\_POSTSUBSCRIPT italic\_δ end\_POSTSUBSCRIPT ∪ italic\_B start\_POSTSUBSCRIPT typewriter\_st end\_POSTSUBSCRIPT. First, BUsubscript𝐵𝑈B\_{U}italic\_B start\_POSTSUBSCRIPT italic\_U end\_POSTSUBSCRIPT is a set of facts of the form 𝚜𝚝𝚊𝚝𝚎(𝚞).𝚜𝚝𝚊𝚝𝚎𝚞\mathtt{state(u).}typewriter\_state ( typewriter\_u ) . for each 𝚞∈U𝚞𝑈\mathtt{u}\in Utypewriter\_u ∈ italic\_U. The set B𝚜𝚝𝚎𝚙subscript𝐵𝚜𝚝𝚎𝚙B\_{\mathtt{step}}italic\_B start\_POSTSUBSCRIPT typewriter\_step end\_POSTSUBSCRIPT defines a fluent 𝚜𝚝𝚎𝚙(𝚃)𝚜𝚝𝚎𝚙𝚃\mathtt{step(T)}typewriter\_step ( typewriter\_T ) for 0≤𝚃≤𝚃′+10𝚃superscript𝚃′10\leq\mathtt{T}\leq\mathtt{T^{\prime}}+10 ≤ typewriter\_T ≤ typewriter\_T start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT + 1 where 𝚃′superscript𝚃′\mathtt{T^{\prime}}typewriter\_T start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT is the step for which 𝚕𝚊𝚜𝚝(𝚃′)𝚕𝚊𝚜𝚝superscript𝚃′\mathtt{last(T^{\prime})}typewriter\_last ( typewriter\_T start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) is defined.
The subset Bδsubscript𝐵𝛿B\_{\delta}italic\_B start\_POSTSUBSCRIPT italic\_δ end\_POSTSUBSCRIPT defines rules for the automaton transition function. The first rule defines all the possible edge identifiers, which is limited by the maximum number of edges between two states. The second rule states that there is an external transition from state 𝚇𝚇\mathtt{X}typewriter\_X at time 𝚃𝚃\mathtt{T}typewriter\_T if there is a transition from 𝚇𝚇\mathtt{X}typewriter\_X to a different state 𝚈𝚈\mathtt{Y}typewriter\_Y at that time. The third rule is a frame axiom: state 𝚇𝚇\mathtt{X}typewriter\_X transitions to itself at time 𝚃𝚃\mathtt{T}typewriter\_T if there are no external transitions from it at that time. The fourth rule defines the positive transitions in terms of the learned negative transitions δ¯¯𝛿\bar{\delta}over¯ start\_ARG italic\_δ end\_ARG and 𝚎𝚍𝚎𝚍\mathtt{ed}typewriter\_ed atoms. The fifth rule preserves determinism: two transitions from 𝚇𝚇\mathtt{X}typewriter\_X to two different states 𝚈𝚈\mathtt{Y}typewriter\_Y and 𝚉𝚉\mathtt{Z}typewriter\_Z cannot hold at the same time. The sixth rule forces all non-terminal states to have an edge to another state.
| | | |
| --- | --- | --- |
| | Bδ={𝚎𝚍𝚐𝚎\_𝚒𝚍(𝟷..max|δ(𝚡,𝚢)|).𝚎𝚡𝚝\_δ(𝚇,𝚃):−δ(𝚇,𝚈,\_,𝚃),𝚇!=𝚈.δ(𝚇,𝚇,𝟷,𝚃):−𝚗𝚘𝚝𝚎𝚡𝚝\_δ(𝚇,𝚃),𝚜𝚝𝚊𝚝𝚎(𝚇),𝚜𝚝𝚎𝚙(𝚃).δ(𝚇,𝚈,𝙴,𝚃):−𝚎𝚍(𝚇,𝚈,𝙴),𝚗𝚘𝚝δ¯(𝚇,𝚈,𝙴,𝚃),𝚜𝚝𝚎𝚙(𝚃).:−δ(𝚇,𝚈,\_,𝚃),δ(𝚇,𝚉,\_,𝚃),𝚈!=𝚉.:−𝚗𝚘𝚝𝚎𝚍(𝚇,\_,\_),𝚜𝚝𝚊𝚝𝚎(𝚇),𝚇!=𝚞𝙰,𝚇!=𝚞𝚁.}B\_{\delta}={\begin{Bmatrix}[l]\mathtt{edge\\_id(1..\max|\delta(x,y)|)}.\\
\mathtt{ext\\_\delta(X,T)\operatorname{\mathtt{:-}}\delta(X,Y,\\_,T),X!\mathord{=}Y.}\\
\mathtt{\delta(X,X,1,T)\operatorname{\mathtt{:-}}not~{}ext\\_\delta(X,T),state(X),step(T).}\\
\mathtt{\delta(X,Y,E,T)\operatorname{\mathtt{:-}}ed(X,Y,E),not~{}\bar{\delta}(X,Y,E,T),step(T).}\\
\mathtt{\operatorname{\mathtt{:-}}\delta(X,Y,\\_,T),\delta(X,Z,\\_,T),Y!\mathord{=}Z.}\\
\mathtt{\operatorname{\mathtt{:-}}not~{}ed(X,\\_,\\_),state(X),X!\mathord{=}u\_{A},X!\mathord{=}u\_{R}.}\end{Bmatrix}}italic\_B start\_POSTSUBSCRIPT italic\_δ end\_POSTSUBSCRIPT = { start\_ARG start\_ROW start\_CELL typewriter\_edge \_ typewriter\_id ( typewriter\_1 . . roman\_max | italic\_δ ( typewriter\_x , typewriter\_y ) | ) . end\_CELL end\_ROW start\_ROW start\_CELL typewriter\_ext \_ italic\_δ ( typewriter\_X , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION italic\_δ ( typewriter\_X , typewriter\_Y , \_ , typewriter\_T ) , typewriter\_X ! = typewriter\_Y . end\_CELL end\_ROW start\_ROW start\_CELL italic\_δ ( typewriter\_X , typewriter\_X , typewriter\_1 , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_not typewriter\_ext \_ italic\_δ ( typewriter\_X , typewriter\_T ) , typewriter\_state ( typewriter\_X ) , typewriter\_step ( typewriter\_T ) . end\_CELL end\_ROW start\_ROW start\_CELL italic\_δ ( typewriter\_X , typewriter\_Y , typewriter\_E , typewriter\_T ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_ed ( typewriter\_X , typewriter\_Y , typewriter\_E ) , typewriter\_not over¯ start\_ARG italic\_δ end\_ARG ( typewriter\_X , typewriter\_Y , typewriter\_E , typewriter\_T ) , typewriter\_step ( typewriter\_T ) . end\_CELL end\_ROW start\_ROW start\_CELL start\_OPFUNCTION : - end\_OPFUNCTION italic\_δ ( typewriter\_X , typewriter\_Y , \_ , typewriter\_T ) , italic\_δ ( typewriter\_X , typewriter\_Z , \_ , typewriter\_T ) , typewriter\_Y ! = typewriter\_Z . end\_CELL end\_ROW start\_ROW start\_CELL start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_not typewriter\_ed ( typewriter\_X , \_ , \_ ) , typewriter\_state ( typewriter\_X ) , typewriter\_X ! = typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT , typewriter\_X ! = typewriter\_u start\_POSTSUBSCRIPT typewriter\_R end\_POSTSUBSCRIPT . end\_CELL end\_ROW end\_ARG } | |
The subset B𝚜𝚝subscript𝐵𝚜𝚝B\_{\mathtt{st}}italic\_B start\_POSTSUBSCRIPT typewriter\_st end\_POSTSUBSCRIPT uses 𝚜𝚝(𝚃,𝚇)𝚜𝚝𝚃𝚇\mathtt{st(T,X)}typewriter\_st ( typewriter\_T , typewriter\_X ) atoms, indicating that the agent is in state 𝚇𝚇\mathtt{X}typewriter\_X at time 𝚃𝚃\mathtt{T}typewriter\_T. The first rule says that the agent is in 𝚞𝟶subscript𝚞0\mathtt{u\_{0}}typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT at time 𝟶0\mathtt{0}typewriter\_0. The second rule determines that at time 𝚃+𝟷𝚃1\mathtt{T\mathord{+}1}typewriter\_T + typewriter\_1 the agent will be in state 𝚈𝚈\mathtt{Y}typewriter\_Y if it is in a non-terminal state 𝚇𝚇\mathtt{X}typewriter\_X at time 𝚃𝚃\mathtt{T}typewriter\_T and a transition between them holds. The third (resp. fourth) rule defines that the example is accepted (resp. rejected) if the state at the trace’s last timestep is 𝚞𝙰subscript𝚞𝙰\mathtt{u\_{A}}typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT (resp. 𝚞𝚁subscript𝚞𝚁\mathtt{u\_{R}}typewriter\_u start\_POSTSUBSCRIPT typewriter\_R end\_POSTSUBSCRIPT).
| | | |
| --- | --- | --- |
| | B𝚜𝚝={𝚜𝚝(𝟶,𝚞𝟶).𝚜𝚝(𝚃+𝟷,𝚈):−𝚜𝚝(𝚃,𝚇),δ(𝚇,𝚈,\_,𝚃),𝚇!=𝚞𝙰,𝚇!=𝚞𝚁.𝚊𝚌𝚌𝚎𝚙𝚝:−𝚕𝚊𝚜𝚝(𝚃),𝚜𝚝(𝚃+𝟷,𝚞𝙰).𝚛𝚎𝚓𝚎𝚌𝚝:−𝚕𝚊𝚜𝚝(𝚃),𝚜𝚝(𝚃+𝟷,𝚞𝚁).}subscript𝐵𝚜𝚝matrix𝚜𝚝0subscript𝚞0𝚜𝚝𝚃1𝚈:absent𝚜𝚝𝚃𝚇𝛿𝚇𝚈\_𝚃𝚇subscript𝚞𝙰𝚇subscript𝚞𝚁𝚊𝚌𝚌𝚎𝚙𝚝:absent𝚕𝚊𝚜𝚝𝚃𝚜𝚝𝚃1subscript𝚞𝙰𝚛𝚎𝚓𝚎𝚌𝚝:absent𝚕𝚊𝚜𝚝𝚃𝚜𝚝𝚃1subscript𝚞𝚁B\_{\mathtt{st}}={\begin{Bmatrix}[l]\mathtt{st(0,u\_{0}).}\\
\mathtt{st(T\mathord{+}1,Y)\operatorname{\mathtt{:-}}st(T,X),\delta(X,Y,\\_,T),X!\mathord{=}u\_{A},X!\mathord{=}u\_{R}.}\\
\mathtt{accept\operatorname{\mathtt{:-}}last(T),st(T\mathord{+}1,u\_{A}).}\\
\mathtt{reject\operatorname{\mathtt{:-}}last(T),st(T\mathord{+}1,u\_{R}).}\end{Bmatrix}}italic\_B start\_POSTSUBSCRIPT typewriter\_st end\_POSTSUBSCRIPT = { start\_ARG start\_ROW start\_CELL typewriter\_st ( typewriter\_0 , typewriter\_u start\_POSTSUBSCRIPT typewriter\_0 end\_POSTSUBSCRIPT ) . end\_CELL end\_ROW start\_ROW start\_CELL typewriter\_st ( typewriter\_T + typewriter\_1 , typewriter\_Y ) start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_st ( typewriter\_T , typewriter\_X ) , italic\_δ ( typewriter\_X , typewriter\_Y , \_ , typewriter\_T ) , typewriter\_X ! = typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT , typewriter\_X ! = typewriter\_u start\_POSTSUBSCRIPT typewriter\_R end\_POSTSUBSCRIPT . end\_CELL end\_ROW start\_ROW start\_CELL typewriter\_accept start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_last ( typewriter\_T ) , typewriter\_st ( typewriter\_T + typewriter\_1 , typewriter\_u start\_POSTSUBSCRIPT typewriter\_A end\_POSTSUBSCRIPT ) . end\_CELL end\_ROW start\_ROW start\_CELL typewriter\_reject start\_OPFUNCTION : - end\_OPFUNCTION typewriter\_last ( typewriter\_T ) , typewriter\_st ( typewriter\_T + typewriter\_1 , typewriter\_u start\_POSTSUBSCRIPT typewriter\_R end\_POSTSUBSCRIPT ) . end\_CELL end\_ROW end\_ARG } | |
Lemma [1](#Thmlemma1 "Lemma 1 (Correctness of the ASP encoding). ‣ Background knowledge. ‣ Learning Subgoal Automata from Traces ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") and Theorem [1](#Thmtheorem1 "Theorem 1. ‣ Background knowledge. ‣ Learning Subgoal Automata from Traces ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") capture the correctness of the encoding. We omit the proofs for brevity.
######
Lemma 1 (Correctness of the ASP encoding).
Given an automaton 𝒜𝒜\mathcal{A}caligraphic\_A and a finite trace TL,𝒪∗superscriptsubscript𝑇𝐿𝒪normal-∗T\_{L,\mathcal{O}}^{\ast}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ∗ end\_POSTSUPERSCRIPT, where ∗∈{+,−,I}\ast\in\{+,-,I\}∗ ∈ { + , - , italic\_I }, M(𝒜)∪B∪M(TL,𝒪∗)𝑀𝒜𝐵𝑀superscriptsubscript𝑇𝐿𝒪normal-∗M(\mathcal{A})\cup B\cup M(T\_{L,\mathcal{O}}^{\ast})italic\_M ( caligraphic\_A ) ∪ italic\_B ∪ italic\_M ( italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ∗ end\_POSTSUPERSCRIPT ) has a unique answer set S𝑆Sitalic\_S and (1) 𝚊𝚌𝚌𝚎𝚙𝚝∈S𝚊𝚌𝚌𝚎𝚙𝚝𝑆\mathtt{accept}\in Stypewriter\_accept ∈ italic\_S iff ∗=+normal-∗\ast=+∗ = +, and (2) 𝚛𝚎𝚓𝚎𝚌𝚝∈S𝚛𝚎𝚓𝚎𝚌𝚝𝑆\mathtt{reject}\in Stypewriter\_reject ∈ italic\_S iff ∗=−normal-∗\ast=-∗ = -.
######
Theorem 1.
Given an automaton learning task T𝒜=⟨U,𝒪,𝒯L,𝒪⟩subscript𝑇𝒜𝑈𝒪subscript𝒯𝐿𝒪T\_{\mathcal{A}}=\langle U,\mathcal{O},\mathcal{T}\_{L,\mathcal{O}}\rangleitalic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT = ⟨ italic\_U , caligraphic\_O , caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ⟩, an automaton 𝒜𝒜\mathcal{A}caligraphic\_A is a solution of T𝒜subscript𝑇𝒜T\_{\mathcal{A}}italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT iff M(𝒜)𝑀𝒜M(\mathcal{A})italic\_M ( caligraphic\_A ) is an inductive solution of M(T𝒜)=⟨B,SM,E⟩𝑀subscript𝑇𝒜𝐵subscript𝑆𝑀𝐸M(T\_{\mathcal{A}})=\langle B,S\_{M},E\rangleitalic\_M ( italic\_T start\_POSTSUBSCRIPT caligraphic\_A end\_POSTSUBSCRIPT ) = ⟨ italic\_B , italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT , italic\_E ⟩.
### Interleaved Automata Learning Algorithm
This section describes ISA (Induction of Subgoal Automata for Reinforcement Learning), a method that combines reinforcement and automaton learning. First, we describe the RL algorithm that exploits the automaton structure. Second, we explain how these two learning components are mixed.
#### Reinforcement learning algorithm.
The RL algorithm we use to exploit the automata structure is QRM (Q-learning for Reward Machines) (?). QRM maintains a Q-function for each automaton state, which are updated with Q-learning updates of the form:
| | | |
| --- | --- | --- |
| | Qu(s,a)←𝛼r+γmaxa′Qu′(s′,a′),𝛼←subscript𝑄𝑢𝑠𝑎𝑟𝛾subscriptsuperscript𝑎′subscript𝑄superscript𝑢′superscript𝑠′superscript𝑎′\displaystyle Q\_{u}\left(s,a\right)\xleftarrow{\alpha}r+\gamma\max\_{a^{\prime}}Q\_{u^{\prime}}\left(s^{\prime},a^{\prime}\right),italic\_Q start\_POSTSUBSCRIPT italic\_u end\_POSTSUBSCRIPT ( italic\_s , italic\_a ) start\_ARROW overitalic\_α ← end\_ARROW italic\_r + italic\_γ roman\_max start\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_Q start\_POSTSUBSCRIPT italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , | |
where, in our case, r=1𝑟1r=1italic\_r = 1 if u′=uAsuperscript𝑢′subscript𝑢𝐴u^{\prime}=u\_{A}italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and 0 otherwise. Note that the bootstrapped action-value depends on the next automaton state u′superscript𝑢′u^{\prime}italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT.
QRM performs this update for all the automaton states, so all policies are simultaneously updated based on the (s,a,s′)𝑠𝑎superscript𝑠′(s,a,s^{\prime})( italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) experience. Note this is a form of intra-option learning (?): we update the policies of all the states from the experience generated by a single state’s policy. In the tabular case, QRM is guaranteed to converge to an optimal policy in the limit. Note that QRM (and thus, ISA) is still applicable in domains with large state spaces by having a Deep Q-Network (?) in each automaton state instead of a Q-table.
Algorithm 1 ISA algorithm for a single task
1:𝒜←←𝒜absent\mathcal{A}\leftarrowcaligraphic\_A ← InitAutomaton({u0,uA,uR}subscript𝑢0subscript𝑢𝐴subscript𝑢𝑅\{u\_{0},u\_{A},u\_{R}\}{ italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT })
2:𝒯L,𝒪←{}←subscript𝒯𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}\leftarrow\{\}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ← { } ▷▷\triangleright▷ Set of counterexamples
3:InitQFunctions(𝒜𝒜\mathcal{A}caligraphic\_A)
4:for l=0𝑙0l=0italic\_l = 0 to num\_episodes do
5: s←←𝑠absents\leftarrowitalic\_s ← EnvInitialState()
6: up←δ(u0,L(s))←subscript𝑢𝑝𝛿subscript𝑢0𝐿𝑠u\_{p}\leftarrow\delta(u\_{0},L(s))italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT ← italic\_δ ( italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_L ( italic\_s ) )
7: TL,𝒪←⟨L(s)⟩←subscript𝑇𝐿𝒪delimited-⟨⟩𝐿𝑠T\_{L,\mathcal{O}}\leftarrow\langle L(s)\rangleitalic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ← ⟨ italic\_L ( italic\_s ) ⟩ ▷▷\triangleright▷ Initialize trace
8: if IsCounterexample(s,up𝑠subscript𝑢𝑝s,u\_{p}italic\_s , italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT) then
9: OnCounterexampleFound(TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT)
10: up←δ(u0,L(s))←subscript𝑢𝑝𝛿subscript𝑢0𝐿𝑠u\_{p}\leftarrow\delta(u\_{0},L(s))italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT ← italic\_δ ( italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_L ( italic\_s ) )
11: for t=0𝑡0t=0italic\_t = 0 to length\_episode do
12: a,s′←←𝑎superscript𝑠′
absenta,s^{\prime}\leftarrowitalic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ← EnvStep(s,up𝑠subscript𝑢𝑝s,u\_{p}italic\_s , italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT)
13: uq←δ(up,L(s′))←subscript𝑢𝑞𝛿subscript𝑢𝑝𝐿superscript𝑠′u\_{q}\leftarrow\delta(u\_{p},L(s^{\prime}))italic\_u start\_POSTSUBSCRIPT italic\_q end\_POSTSUBSCRIPT ← italic\_δ ( italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT , italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) )
14: UpdateObsTrace(L(s′),TL,𝒪𝐿superscript𝑠′subscript𝑇𝐿𝒪L(s^{\prime}),T\_{L,\mathcal{O}}italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) , italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT)
15: if IsCounterexample(s′,uqsuperscript𝑠′subscript𝑢𝑞s^{\prime},u\_{q}italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_q end\_POSTSUBSCRIPT) then
16: OnCounterexampleFound(TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT)
17: break
18: else
19: UpdateQFunctions(s𝑠sitalic\_s, a𝑎aitalic\_a, s′superscript𝑠′s^{\prime}italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT, L(s′)𝐿superscript𝑠′L(s^{\prime})italic\_L ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ))
20: s←s′;up←uqformulae-sequence←𝑠superscript𝑠′←subscript𝑢𝑝subscript𝑢𝑞s\leftarrow s^{\prime};u\_{p}\leftarrow u\_{q}italic\_s ← italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ; italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT ← italic\_u start\_POSTSUBSCRIPT italic\_q end\_POSTSUBSCRIPT
21:function OnCounterexampleFound(TL,𝒪subscript𝑇𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT)
22: 𝒯L,𝒪←𝒯L,𝒪∪{TL,𝒪}←subscript𝒯𝐿𝒪subscript𝒯𝐿𝒪subscript𝑇𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}\leftarrow\mathcal{T}\_{L,\mathcal{O}}\cup\{T\_{L,\mathcal{O}}\}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ← caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT ∪ { italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT }
23: 𝒜←←𝒜absent\mathcal{A}\leftarrowcaligraphic\_A ← FindMinimalAutomaton(𝒯L,𝒪subscript𝒯𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT)
24: InitQFunctions(𝒜𝒜\mathcal{A}caligraphic\_A)
#### ISA algorithm.
Algorithm [1](#alg1 "Algorithm 1 ‣ Reinforcement learning algorithm. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") is the ISA pseudocode for a single task and is explained below:
1. 1.
The initial automaton (line 1) has initial state u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, accepting state uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and rejecting state uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT. The automaton does not accept nor reject anything. The set of counterexample traces and the Q-functions are initialized (lines 2-3).
2. 2.
When an episode starts, the current automaton state upsubscript𝑢𝑝u\_{p}italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT is u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT. One transition is applied depending on the agent’s initial observations L(s)𝐿𝑠L(s)italic\_L ( italic\_s ) (lines 5-6). In Figure [1c](#S3.F1.sf3 "1c ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning"), if the agent initially observes {☕}☕\{\text{\Coffeecup}\}{ ☕ }, the actual initial state is u1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT.
3. 3.
At each step, we select an action a𝑎aitalic\_a in state s𝑠sitalic\_s using an ϵitalic-ϵ\epsilonitalic\_ϵ-greedy policy (line 12), and update the automaton state upsubscript𝑢𝑝u\_{p}italic\_u start\_POSTSUBSCRIPT italic\_p end\_POSTSUBSCRIPT and observation trace TL,𝒪subscript𝑇
𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT (lines 13-14).
If no counterexample is found (line 18), the Q-functions of all automaton states are updated (line 19) and the episode continues.
4. 4.
Let u𝑢uitalic\_u be the current automaton state and s𝑠sitalic\_s the MDP state. A counterexample trace is found (lines 8, 15) if (a) multiple outgoing transitions from u𝑢uitalic\_u hold, or (b) the automaton does not correctly recognize s𝑠sitalic\_s (e.g., s∈SG∧u≠uA𝑠subscript𝑆𝐺𝑢subscript𝑢𝐴s\in S\_{G}\land u\neq u\_{A}italic\_s ∈ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT ∧ italic\_u ≠ italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT).
5. 5.
If a counterexample trace TL,𝒪subscript𝑇
𝐿𝒪T\_{L,\mathcal{O}}italic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT is found (lines 21-24):
1. (a)
Add it to 𝒯L,𝒪+superscriptsubscript𝒯
𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}^{+}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT + end\_POSTSUPERSCRIPT if s∈SG𝑠subscript𝑆𝐺s\in S\_{G}italic\_s ∈ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT, to 𝒯L,𝒪−superscriptsubscript𝒯
𝐿𝒪\mathcal{T}\_{L,\mathcal{O}}^{-}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT - end\_POSTSUPERSCRIPT if s∈ST∖SG𝑠subscript𝑆𝑇subscript𝑆𝐺s\in S\_{T}\setminus S\_{G}italic\_s ∈ italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT ∖ italic\_S start\_POSTSUBSCRIPT italic\_G end\_POSTSUBSCRIPT and to 𝒯L,𝒪Isuperscriptsubscript𝒯
𝐿𝒪𝐼\mathcal{T}\_{L,\mathcal{O}}^{I}caligraphic\_T start\_POSTSUBSCRIPT italic\_L , caligraphic\_O end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_I end\_POSTSUPERSCRIPT if s∉ST𝑠subscript𝑆𝑇s\notin S\_{T}italic\_s ∉ italic\_S start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT (line 22).
2. (b)
Run the automaton learner (line 23), using iterative deepening to select the number of automaton states.
* •
When a new automaton is learned, we reset all the Q-functions (e.g., setting all Q-values to 0) (line 24).
* •
If a counterexample is detected at the beginning of the episode (line 8), the automaton state is reset (line 10), else the episode ends (line 17).
ISA does not start learning automata until we find a positive example (i.e., the goal is achieved). Resetting all the Q-functions causes the agent to forget everything it learned. To mitigate the forgetting effect and further exploit the automata structure, we employ reward shaping.
#### Reward shaping.
? (?) proposed a function that provides the agent with additional reward to guide its learning process while guaranteeing that optimal policies remain unchanged:
| | | |
| --- | --- | --- |
| | F(s,a,s′)=γΦ(s′)−Φ(s),𝐹𝑠𝑎superscript𝑠′𝛾Φsuperscript𝑠′Φ𝑠\displaystyle F\left(s,a,s^{\prime}\right)=\gamma\Phi\left(s^{\prime}\right)-\Phi\left(s\right),italic\_F ( italic\_s , italic\_a , italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = italic\_γ roman\_Φ ( italic\_s start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) - roman\_Φ ( italic\_s ) , | |
where γ𝛾\gammaitalic\_γ is the MDP’s discount factor and Φ:S→ℝ:Φ→𝑆ℝ\Phi:S\to\mathbb{R}roman\_Φ : italic\_S → blackboard\_R is a real-valued function. The automata structure can be exploited by defining F:U×U→ℝ:𝐹→𝑈𝑈ℝF:U\times U\to\mathbb{R}italic\_F : italic\_U × italic\_U → blackboard\_R in terms of the automaton states instead of the MDP states:
| | | |
| --- | --- | --- |
| | F(u,u′)=γΦ(u′)−Φ(u),𝐹𝑢superscript𝑢′𝛾Φsuperscript𝑢′Φ𝑢\displaystyle F\left(u,u^{\prime}\right)=\gamma\Phi\left(u^{\prime}\right)-\Phi\left(u\right),italic\_F ( italic\_u , italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) = italic\_γ roman\_Φ ( italic\_u start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) - roman\_Φ ( italic\_u ) , | |
where Φ:U→ℝ:Φ→𝑈ℝ\Phi:U\to\mathbb{R}roman\_Φ : italic\_U → blackboard\_R. Intuitively, we want F𝐹Fitalic\_F’s output to be high when the agent gets closer to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. Thus, we define ΦΦ\Phiroman\_Φ as
| | | |
| --- | --- | --- |
| | Φ(u)=|U|−d(u,uA),Φ𝑢𝑈𝑑𝑢subscript𝑢𝐴\displaystyle\Phi\left(u\right)=\left|U\right|-d\left(u,u\_{A}\right),roman\_Φ ( italic\_u ) = | italic\_U | - italic\_d ( italic\_u , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) , | |
where |U|𝑈|U|| italic\_U | is the number of states in the automaton (an upper bound for the maximum finite distance between u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT and uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT), and d(u,uA)𝑑𝑢subscript𝑢𝐴d(u,u\_{A})italic\_d ( italic\_u , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) is the distance between state u𝑢uitalic\_u and uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. If uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT is unreachable from a state u𝑢uitalic\_u, then d(u,uA)=∞𝑑𝑢subscript𝑢𝐴d(u,u\_{A})=\inftyitalic\_d ( italic\_u , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) = ∞.
Theorem [2](#Thmtheorem2 "Theorem 2. ‣ Reward shaping. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") shows that if the target automata is in the hypothesis space, there will only be a finite number of learning steps in the algorithm before it converges on the target automata (or an equivalent automata).
######
Theorem 2.
Given a target finite automaton 𝒜\*subscript𝒜\mathcal{A\_{\*}}caligraphic\_A start\_POSTSUBSCRIPT \* end\_POSTSUBSCRIPT, there is no infinite sequence σ𝜎\sigmaitalic\_σ of automaton-counterexample pairs ⟨𝒜i,ei⟩subscript𝒜𝑖subscript𝑒𝑖\langle\mathcal{A}\_{i},e\_{i}\rangle⟨ caligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_e start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ⟩ such that ∀ifor-all𝑖\forall i∀ italic\_i: (1) 𝒜isubscript𝒜𝑖\mathcal{A}\_{i}caligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT covers all examples e1,…,ei−1subscript𝑒1normal-…subscript𝑒𝑖1e\_{1},\ldots,e\_{i-1}italic\_e start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , … , italic\_e start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT, (2) 𝒜isubscript𝒜𝑖\mathcal{A}\_{i}caligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT does not cover eisubscript𝑒𝑖e\_{i}italic\_e start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT, and (3) 𝒜isubscript𝒜𝑖\mathcal{A}\_{i}caligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT is in the finite hypothesis space SMsubscript𝑆𝑀S\_{M}italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT.
###### Proof.
By contradiction. Assume that σ𝜎\sigmaitalic\_σ is infinite. Given that SMsubscript𝑆𝑀S\_{M}italic\_S start\_POSTSUBSCRIPT italic\_M end\_POSTSUBSCRIPT is finite, the number of possible automata is finite. Hence, some automaton 𝒜𝒜\mathcal{A}caligraphic\_A must appear in σ𝜎\sigmaitalic\_σ at least twice, say as 𝒜i=𝒜j,i<jformulae-sequencesubscript𝒜𝑖subscript𝒜𝑗𝑖𝑗\mathcal{A}\_{i}=\mathcal{A}\_{j},i<jcaligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT = caligraphic\_A start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT , italic\_i < italic\_j. By definition, 𝒜isubscript𝒜𝑖\mathcal{A}\_{i}caligraphic\_A start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT does not cover eisubscript𝑒𝑖e\_{i}italic\_e start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT and 𝒜jsubscript𝒜𝑗\mathcal{A}\_{j}caligraphic\_A start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT covers eisubscript𝑒𝑖e\_{i}italic\_e start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT. This is a contradiction.
∎
u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPTstartu1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTu2subscript𝑢2u\_{2}italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTu3subscript𝑢3u\_{3}italic\_u start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPTuAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPTuRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPTA𝐴Aitalic\_AB𝐵Bitalic\_BC𝐶Citalic\_CD𝐷Ditalic\_D∗∗\ast∗∗∗\ast∗∗∗\ast∗∗∗\ast∗
(a) VisitABCD
u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPTstartuAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPTu1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPTu2subscript𝑢2u\_{2}italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPTu3subscript𝑢3u\_{3}italic\_u start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT☕∧¬🖂☕🖂\text{\Coffeecup}\land\neg\text{\Letter}☕ ∧ ¬ 🖂¬☕∧🖂☕🖂\neg\text{\Coffeecup}\land\text{\Letter}¬ ☕ ∧ 🖂☕∧🖂∧¬o☕🖂𝑜\text{\Coffeecup}\land\text{\Letter}\land\neg o☕ ∧ 🖂 ∧ ¬ italic\_o☕∧🖂∧o☕🖂𝑜\text{\Coffeecup}\land\text{\Letter}\land o☕ ∧ 🖂 ∧ italic\_o🖂∧¬o🖂𝑜\text{\Letter}\land\neg o🖂 ∧ ¬ italic\_o🖂∧o🖂𝑜\text{\Letter}\land o🖂 ∧ italic\_o☕∧¬o☕𝑜\text{\Coffeecup}\land\neg o☕ ∧ ¬ italic\_o☕∧o☕𝑜\text{\Coffeecup}\land o☕ ∧ italic\_oo𝑜oitalic\_o
(b) CoffeeMail
Figure 2: Automata for two OfficeWorld tasks. Self-loops and transitions to uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT in (b) are omitted. The shaded state IDs can be interchanged without Bsymsubscript𝐵𝑠𝑦𝑚B\_{sym}italic\_B start\_POSTSUBSCRIPT italic\_s italic\_y italic\_m end\_POSTSUBSCRIPT. The dashed state IDs are still interchangeable even when using Bsymsubscript𝐵𝑠𝑦𝑚B\_{sym}italic\_B start\_POSTSUBSCRIPT italic\_s italic\_y italic\_m end\_POSTSUBSCRIPT.

(a) Coffee

(b) CoffeeMail

(c) VisitABCD
Figure 3: Learning curves for different OfficeWorld tasks. The vertical lines are episodes where an automaton is learned.
4 Experimental Results
-----------------------
We evaluate333Code: <github.com/ertsiger/induction-subgoal-automata-rl>. ISA using the OfficeWorld domain and the three tasks introduced in Section [3](#S3 "3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning"). The automata we compute using our method are forced to be *acyclic*. Besides, we add a simple *symmetry breaking* constraint Bsymsubscript𝐵𝑠𝑦𝑚B\_{sym}italic\_B start\_POSTSUBSCRIPT italic\_s italic\_y italic\_m end\_POSTSUBSCRIPT to the task’s background knowledge to avoid considering isomorphic automata. Figure [2](#S3.F2 "Figure 2 ‣ Reward shaping. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") shows two automata whose state IDs u1,u2subscript𝑢1subscript𝑢2u\_{1},u\_{2}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT and u3subscript𝑢3u\_{3}italic\_u start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT can be used indifferently. Our symmetry breaking method (1) assigns an integer index to each state444u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT has the lowest value, while uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT and uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT have the highest. and (2) imposes that states must be visited in increasing order of indices. For example, if we assign indices 0…3 to u0,…,u3subscript𝑢0…subscript𝑢3u\_{0},\ldots,u\_{3}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_u start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT, positive traces in Figure [2a](#S3.F2.sf1 "2a ‣ Figure 2 ‣ Reward shaping. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") always yield the sequence ⟨u0,u1,u2,u3,uA⟩subscript𝑢0subscript𝑢1subscript𝑢2subscript𝑢3subscript𝑢𝐴\langle u\_{0},u\_{1},u\_{2},u\_{3},u\_{A}\rangle⟨ italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT 3 end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ⟩. However, this is not enough to break all symmetries in Figure [2b](#S3.F2.sf2 "2b ‣ Figure 2 ‣ Reward shaping. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning"): u1subscript𝑢1u\_{1}italic\_u start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and u2subscript𝑢2u\_{2}italic\_u start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT can still be switched since they cannot be in the same path to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT or uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT.
Tabular Q-learning is used to learn the Q-function at each automaton state with parameters α=0.1𝛼0.1\alpha=0.1italic\_α = 0.1, ϵ=0.1italic-ϵ0.1\epsilon=0.1italic\_ϵ = 0.1, and γ=0.99𝛾0.99\gamma=0.99italic\_γ = 0.99. The agent’s state is its position. ISA receives a set of 100 randomly generated grids555Each grid has the same size and walls as Figure [1a](#S3.F1.sf1 "1a ‣ Figure 1 ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning"). The observables and the agent are randomly placed.. One episode is run per grid in sequential order until reaching 20,000 episodes for each grid. The maximum episode length is 100 steps.
For some experiments we consider the multitask setting, where an automaton is learned for every task from the set of grids. Thus, there is a Q-table for each task-grid-automaton state triplet updated at every step. One episode is run per task-grid until 20,000 episodes are performed for each pair.
When reward shaping is on, the shaping function’s output is set to -100 in case it is −∞-\infty- ∞. This occurs when the next automaton state is uRsubscript𝑢𝑅u\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT since there is no path from it to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT.
The different settings are referenced as S (single task), M (multitask) and R (reward shaping), all using the same set of 100 random grids. We execute 10 runs for each setting.
We use ILASP to learn the automata from compressed observation traces and set max|δ(x,y)|𝛿𝑥𝑦\max|\delta(x,y)|roman\_max | italic\_δ ( italic\_x , italic\_y ) | to 1. All experiments ran on 3.40GHz Intel® Core™ i7-6700 processors.
#### ISA performs similarly to QRM.
Figure [3](#S3.F3 "Figure 3 ‣ Reward shaping. ‣ Interleaved Automata Learning Algorithm ‣ 3 Methodology ‣ Induction of Subgoal Automata for Reinforcement Learning") shows the tasks’ average learning curve for ISA and QRM (where the automaton is given beforehand) across 10 runs. The ISA curves converge similarly to the analogous QRM’s. When reward shaping is used, convergence speeds up dramatically. The multitask settings converge faster since an agent is also trained from other agents’ experiences in different tasks.
The vertical lines are episodes where an automaton is learned, and often occur during the first episodes: this is the main reason why the learning and non-learning curves are similar. Less frequently, automata learning also happens at intermediate phases in the Coffee and CoffeeMail tasks which make the agent forget what it learned. In these cases, recovery from forgetting happens faster in the multitask settings because of the reason above. Reward shaping has a positive effect on the learner: not only is convergence faster, it also helps the agent to discover helpful traces earlier.
#### ISA’s automata learning results.
Table [1](#S4.T1 "Table 1 ‣ ISA’s automata learning results. ‣ 4 Experimental Results ‣ Induction of Subgoal Automata for Reinforcement Learning") shows the average number of examples needed to learn the final automata for setting S (the results for other settings are similar). Table [2](#S4.T2 "Table 2 ‣ ISA’s automata learning results. ‣ 4 Experimental Results ‣ Induction of Subgoal Automata for Reinforcement Learning") shows the average time needed for computing *all* the automata, which is negligible with respect to ISA’s total running time. For both tables, the standard error is shown in brackets. First, we observe that the most complex tasks (CoffeeMail and VisitABCD) need a higher number of examples and more time to compute their corresponding automata. However, while the total number of examples for these tasks are similar, the time is higher for VisitABCD possibly because the longest path from u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT is longer than in CoffeeMail. Second, we see that the number of positive and incomplete examples are usually the smallest and the largest respectively. Note that the number of positive examples is approximately equal to the number of paths from u0subscript𝑢0u\_{0}italic\_u start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT. The paths described by the positive examples are refined through the negative and incomplete ones. Finally, Table [2](#S4.T2 "Table 2 ‣ ISA’s automata learning results. ‣ 4 Experimental Results ‣ Induction of Subgoal Automata for Reinforcement Learning") shows slight variations of time across settings. The different experimental settings and the exploratory nature of the agent are responsible for coming up with different counterexamples and, thus, cause these variations. There is not a clear setting which leads to faster automata learning across domains. The design of exploratory strategies that speed up automata learning is possible future work.
| | All | + | - | I |
| --- | --- | --- | --- | --- |
| Coffee | 6.6 (0.5) | 2.2 (0.2) | 2.3 (0.2) | 2.1 (0.3) |
| CoffeeMail | 34.5 (2.9) | 5.5 (0.4) | 9.9 (0.9) | 19.1 (2.2) |
| VisitABCD | 32.5 (2.1) | 1.7 (0.2) | 11.6 (0.8) | 19.2 (1.7) |
Table 1: Average number of examples needed for setting S.
| | S | S+R | M | M+R |
| --- | --- | --- | --- | --- |
| Coffee | 0.5 (0.0) | 0.4 (0.0) | 0.3 (0.0) | 0.4 (0.0) |
| CoffeeMail | 43.3 (12.1) | 36.9 (6.0) | 24.8 (3.6) | 24.6 (2.7) |
| VisitABCD | 63.0 (11.4) | 68.5 (13.0) | 48.4 (8.8) | 69.6 (8.1) |
Table 2: Average ILASP running time.
#### ISA learns automata faster with few observables.
To test the effect of the observable set 𝒪𝒪\mathcal{O}caligraphic\_O on ILASP’s performance, we run the experiments using setting S but employing only the observables that each task needs, e.g., 𝒪={☕,o,∗}𝒪☕𝑜∗\mathcal{O}=\{\text{\Coffeecup},o,\ast\}caligraphic\_O = { ☕ , italic\_o , ∗ } in the Coffee task. The biggest changes occur in CoffeeMail, where the total number of examples is 46% smaller. The sets of positive, negative and incomplete examples are 27%, 42% and 53% smaller. Besides, the automata are computed 92% faster. Thus, we see that the number of observables has an impact on the performance: the RL process is halted less frequently and automata are learned faster. This performance boost in CoffeeMail can be due to the fact that it has more paths to uAsubscript𝑢𝐴u\_{A}italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT; thus, irrelevant symbols do not need to be discarded for each of these. This is confirmed by the fact that while the number of positives is roughly the same, the number of incomplete examples greatly decreases.
5 Related Work
---------------
#### Hierarchical RL (HRL).
Our method is closely related to the options framework for HRL, and indeed we define one option per automaton state. The key difference from other HRL approaches, like HAMs (?), MAXQ (?) and the common way of using options, is that we do not learn a high-level policy for selecting among options. Rather, the high-level policy is implicitly represented by the automaton, and the option to execute is fully determined by the current automaton state. Consequently, while HRL policies may be suboptimal in general, the QRM algorithm we use converges to the optimal policy.
Our approach is similar to HAMs in that they also use an automaton. However, HAMs are non-deterministic automata whose transitions can invoke lower level machines and are not labeled by observables (the high-level policy consists in deciding which transition to fire). ? (?) synthesize a HAM from the set of shortest solutions to a non-deterministic planning problem, and use it to refine the choices at non-deterministic points through RL.
#### Option discovery.
ISA is similar to bottleneck option discovery methods, which find “bridges” between regions of the state space. In particular, ISA finds conditions that connect two of these regions. ? (?) use diverse density to find landmark states in state traces that achieve the task’s goal. This approach is similar to ours because (1) it learns from traces; (2) it classifies traces into different categories; and (3) it interleaves option discovery and learning.
Just like some option discovery methods (?; ?), our approach requires the task to be solved at least once. Other methods (?; ?; ?; ?) discover options without solving the task.
Grammars are an alternative to automata for expressing formal languages. ? (?) induce a straight-line grammar from action traces to discover macro-actions.
#### Reward machines (RMs).
Subgoal automata are similar to RMs (?). There are two differences: (1) RMs do not have explicit accepting and rejecting states, and (2) RMs use a reward-transition function δr:U×U→ℝ:subscript𝛿𝑟→𝑈𝑈ℝ\delta\_{r}:U\times U\to\mathbb{R}italic\_δ start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT : italic\_U × italic\_U → blackboard\_R that returns the reward for taking a transition between two automaton states. Note that our automata are a specific case of the latter where δr(⋅,uA)=1subscript𝛿𝑟⋅subscript𝑢𝐴1\delta\_{r}(\cdot,u\_{A})=1italic\_δ start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT ( ⋅ , italic\_u start\_POSTSUBSCRIPT italic\_A end\_POSTSUBSCRIPT ) = 1 and 0 otherwise.
Recent work has focused on learning RMs from experience using discrete optimization (?) and grammatical inference algorithms (?). While these approaches can get stuck in a local optima, ISA returns a minimal automaton each time it is called. Just as our method, they use an ‘a priori’ specified set of observables.
In contrast to the other learning approaches, in the future we can leverage ILP to (1) transfer knowledge between automata learning tasks (e.g., providing a partial automaton as the background knowledge), (2) support examples generated by a noisy labeling function, and (3) easily express more complex conditions (e.g., using first-order logic).
? (?) convert reward functions expressed in various formal languages into RMs, and propose a reward shaping method that runs value iteration on the RM states.
6 Conclusions and Future Work
------------------------------
In this paper we have proposed ISA, an algorithm for learning subgoals by inducing a deterministic finite automaton from observation traces seen by the agent. The automaton structure can be exploited by an existing RL algorithm to increase sample efficiency and transfer learning. We have shown that our method performs comparably to an algorithm where the automaton is given beforehand.
Improving the scalability is needed to handle more complex tasks requiring automata with cycles or longer examples. In the future, we will further explore symmetry breaking techniques to reduce the hypothesis space (?) and other approaches automata learning, like RNNs (?; ?). Discovering the observables used by the automata is also an interesting path for future work.
Acknowledgments
---------------
Anders Jonsson is partially supported by the Spanish grants TIN2015-67959 and PCIN-2017-082.
|
a98a7fae-558c-447a-9dc1-24660ff2b40e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Methods for strong human germline engineering
PDF version. Image sizes best at berkeleygenomics.org with a wide monitor. Twitter thread
Introduction
This article summarizes the technical pathways to make healthy humans with significantly modified genomes. These are the pathways that I'm aware of and that seem plausibly feasible in the next two decades. A short summary, in a diagram:
Annotated table of contents:
* Reproductive genomic vectoring explains the general idea of human germline genomic engineering, and distinguishes editing and selection.
* Comparing editing and selection talks about general differences between the two kinds of genomic vectoring methods.
* Reproductive GV and epigenomic correctness (EC), Methods to handle epigenomic correctness, and How GV and EC interact discuss the epigenomic correctness problem in germline engineering—what it is, why it matters, and how to address it.
* Summary of genomic vectoring methods gives an annotated table of contents for the following Methods sections. The Methods sections—on Simple embryo selection, Gamete selection, Chromosome selection, Iterated recombinant selection, and Iterated multiplex CRISPR editing—give more detail about each genomic vectoring method: what it is, obstacles, variations, and how powerful it is.
* The appendices give additional technical information, if you're looking around and saying "I'm not in the weeds enough, I want to be more in the weeds.".
Here's a sneak peek about the strength of different genomic vectoring methods:
The list of specific methods that this table summarizes starts after the section Summary of genomic vectoring methods. This article is roughly organized from general to specific, first discussing things that apply to the whole area, and then later discussing specific methods.
I won't lie, this article book is a bit of a slog. You try writing a book about the state of the art of realistic germline engineering in a way that is automatically fun, and then get back to me, ok? But listen: There's
|
7f11f114-0c4f-4b10-aacf-454e812078e7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Open Thread, January 16-31, 2013
If it's worth saying, but not worth its own post, even in Discussion, it goes here.
|
f2530e12-cee4-4fe8-8a03-7c6ddaf29ce9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[meta] Future moderation and investigation of downvote abuse cases, or, I don't want to deal with this stuff
Since the episode with Eugine_Nier, I have received three private messages from different people asking me to investigate various cases of suspected mass downvoting. And to be quite honest, I don't want to deal with this. Eugine's case was relatively clear-cut, since he had engaged in systematic downvoting of a massive scale, but the new situations are a lot fuzzier and I'm not sure of what exactly the rules should be (what counts as a permitted use of the downvote system and what doesn't?).
At least one person has also privately contacted me and offered to carry out moderator duties if I don't want them, but even if I told them yes (on what basis? why them and not someone else?), I don't know what kind of policy I should tell them to enforce. I only happened to be appointed a moderator because I was in the list of top 10 posters at a particular time, and I don't feel like I should have any particular authority to make the rules. Nor do I feel like I have any good idea of what the rules should be, or who would be the right person to enforce them.
In any case, I don't want to be doing this job, nor do I particularly feel like being responsible for figuring out who should, or how, or what the heck. I've already started visiting LW less often because I dread having new investigation requests to deal with. So if you folks could be so kind as to figure it out without my involvement? If there's a clear consensus that someone in particular should deal with this, I can give them mod powers, or something.
|
ff37f54e-a518-4927-8d6b-201f3dabd33c
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Coalitional agency
The coalitional frame
Earlier in this sequence I laid out an argument that the goals of increasingly intelligent AIs will become increasingly systematized, until they converge to squiggle-maximization. In my last post, though, I touched on two reasons why this convergence might not happen: humans trying to prevent it, and AIs themselves trying to prevent it. I don’t have too much more to say about the former, but it’s worth elaborating on the latter.
The best way to understand the deliberate protection of existing goals is in terms of Bostrom’s notion of instrumental convergence. Bostrom argues that goal preservation will be a convergent instrumental strategy for a wide range of agents. Perhaps it’s occasionally instrumentally useful to change your goals—but once you’ve done so, you’ll never want to course-correct back towards your old goals. So this is a strong reason to be conservative about your goals, and avoid changes where possible.
One immediate problem with preserving goals, though: it requires that agents continue thinking in terms of the same concepts. But in general, an agent’s concepts will change significantly as they learn more about the world. For example, consider a medieval theist whose highest-priority goal is ensuring that their soul goes to heaven not hell. Upon becoming smarter, they realize that none of souls, heaven, or hell exist. The sensible thing to do here would be to either discard the goal, or else identify a more reasonable adaptation of it (e.g. the goal of avoiding torture while alive). But if their goals were totally fixed, then their actions would be determined by a series of increasingly convoluted hypotheticals where god did exist after all. (Or to put it another way: continuing to represent their old goal would require recreating a lot of their old ontology.) This would incur a strong systematicity penalty.
So while we should expect agents to have some degree of conservatism, they’ll likely also have some degree of systematiz
|
fed3583f-6207-468a-9c24-f1cb092d49d5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Being Wrong Doesn't Mean You're Stupid and Bad (Probably)
Sometimes, people are reluctant to admit that they were wrong about something, because they're afraid that "You are wrong about this" carries inextricable connotations of "You are stupid and bad." But this behavior is, itself, wrong, for at least two reasons.
First, because it's evidential decision theory. The so-called "rationalist" "community" has a lot of cached clichés about this! A blank map does not correspond to a blank territory. What's true is already so; owning up to it doesn't make it worse. Refusing to go to the doctor (thereby avoiding encountering evidence that you're sick) doesn't keep you healthy.
If being wrong means that you're stupid and bad, then preventing yourself from knowing that you were wrong doesn't stop you from being stupid and bad in reality. It just prevents you from knowing that you're stupid and bad—which is an important fact to know (if it's true), because if you don't know that you're stupid and bad, then it probably won't occur to you to even look for possible interventions to make yourself less stupid and less bad.
Second, while "You are wrong about this" is evidence for the "You are stupid and bad" hypothesis if stupid and bad people are more likely to be wrong, I claim that it's very weak evidence. (Although it's possible that I'm wrong about this—and if I'm wrong, it's furthermore possible that the reason I'm wrong is because I'm stupid and bad.)
Exactly how weak evidence is it? It's hard to guess directly, but fortunately, we can use probability theory to reduce the claim into more "atomic" conditional and prior probabilities that might be easier to estimate!
Let W represent the proposition "You are wrong about something", S represent the proposition "You are stupid", and B represent the proposition "You are bad."
By Bayes's theorem, the probability that you are stupid and bad given that you're wrong about something is given by—
P(S,B|W)=P(W|S,B)P(S,B)P(W|S,B)P(S,B)+P(W|S,¬B)P(S,¬B)+P(W|¬S,B)P(¬S,B)+P(W|¬S,¬B)P(¬S,¬B)
|
d4b575ae-b4ab-47b6-a498-5909a9a5b6f1
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
More Robust Doubly Robust Off-policy Evaluation
1 Introduction
---------------
In many real-world decision-making problems, in areas such as marketing, finance, robotics, and healthcare, deploying a policy without having an accurate estimate of its performance could be costly, unethical, or even illegal. This is why the problem of off-policy evaluation (OPE) has been heavily studied in contextual bandits (e.g., Dudík et al. [2011](#bib.bib6); Swaminathan et al. [2017](#bib.bib29)) and reinforcement learning (RL) (e.g., Precup et al. [2000a](#bib.bib22), [2001](#bib.bib24); Paduraru [2013](#bib.bib20); Mahmood et al. [2014](#bib.bib15); Thomas et al. [2015a](#bib.bib32); Li et al. [2015](#bib.bib13); Jiang & Li [2016](#bib.bib11); Thomas & Brunskill [2016](#bib.bib31)), and some of the results have been applied to problems in marketing (e.g., Li et al. [2011](#bib.bib12); Theocharous et al. [2015](#bib.bib30)), healthcare (e.g., Murphy et al. [2001](#bib.bib19); Hirano et al. [2003](#bib.bib10)), and education (e.g., Mandel et al. [2014](#bib.bib16), [2016](#bib.bib17)). The goal in OPE is to estimate the performance of an evaluation policy, given a log of data generated by the behavior policy(ies). The OPE problem can also be viewed as a form of counterfactual reasoning to infer the causal effect of a new treatment from historical data (e.g., Bottou et al. [2013](#bib.bib2); Shalit et al. [2017](#bib.bib27); Louizos et al. [2017](#bib.bib14)).
Three different approaches to OPE in RL can be identified in the literature.
1) Direct Method (DM) which learns a model of the system and then uses it to estimate the performance of the evaluation policy. This approach often has low variance but its bias depends on how well the selected function class represents the system and on whether the number of samples is sufficient to accurately learn this function class. There are two major problems with this approach: (a) Its bias cannot be easily quantified, since in general it is difficult to quantify the approximation error of a function class, and (b) It is not clear how to choose the loss function for model learning without the knowledge of the evaluation policy (or the distribution of the evaluation policies). Without this knowledge, we may select a loss function that focuses on learning the areas that are irrelevant for the evaluation policy(ies).
2) Importance Sampling (IS) that uses the IS term to correct the mismatch between the distributions of the system trajectory induced by the evaluation and behavior policies. Although this approach is unbiased (under mild assumptions) in case the behavior policy is known, its variance can be very large when there is a big difference between the distributions of the evaluation and behavior policies, and grows exponentially with the horizon of the RL problem.
3) Doubly Robust (DR) which is a combination of DM and IS, and can achieve the low variance of DM and no (or low) bias of IS. The DR estimator was first developed in statistics (e.g., Cassel et al. [1976](#bib.bib5); Robins et al. [1994](#bib.bib26); Robins & Rotnitzky [1995](#bib.bib25); Bang & Robins [2005](#bib.bib1)) to estimate from incomplete data with the property that is unbiased when either of its DM or IS estimators is correct. It was brought to our community, first in contextual bandits by Dudík et al. ([2011](#bib.bib6)) and then in RL by Jiang & Li ([2016](#bib.bib11)). Thomas & Brunskill ([2016](#bib.bib31)) proposed two methods to reduce the variance of DR, with the cost of introducing a bias, one to select a low variance IS estimator, namely weighted IS (WIS), and one to blend DM and IS together (instead of simply combining them as in the standard DR approach) in a way to minimize the mean squared error (MSE).
In this paper, we propose to reduce the variance of DR in bandits and RL by designing the loss function used to learn the model in the DM part of the estimator. The main idea of our estimator, called more robust doubly robust (MRDR), is to learn the parameters of the DM model by minimizing the variance of the DR estimator. This idea has been investigated in statistics in the context of regression when the labels of a subset of samples are randomly missing (Cao et al., [2009](#bib.bib4)). We first present a novel formulation for the DM part of the DR estimator in RL. We then derive formulas for the variance of the DR estimator in both bandits and RL in a way that its gradient w.r.t. the model parameters can be estimated from the samples. Note that the DR variances reported for bandits (Dudík et al., [2011](#bib.bib6)) and RL (Jiang & Li, [2016](#bib.bib11)) contain the bias of the DM component, which is unknown. We then propose methods to efficiently minimize the variance in both bandits and RL. Furthermore, we prove that the MRDR estimator is strongly consistent and asymptotically optimal. Finally, we evaluate the MRDR estimator in bandits and RL benchmark problems, and compare its performance with DM, IS, and DR approaches.
2 Preliminaries
----------------
In this paper, we consider the reinforcement learning (RL) problem in which the agent’s interaction with the system is modeled as a Markov decision process (MDP). Note that the contextual bandit problem is a special case with horizon-1111 decision-making. In this section, we first define MDPs and the relevant quantities that we are going to use throughout the paper, and then define the off-policy evaluation problem in RL, which is the main topic of this work.
###
2.1 Markov Decision Processes
A MDP is a tuple ⟨𝒳,𝒜,Pr,P,P0,γ⟩𝒳𝒜subscript𝑃𝑟𝑃subscript𝑃0𝛾\langle\mathcal{X},\mathcal{A},P\_{r},P,P\_{0},\gamma\rangle⟨ caligraphic\_X , caligraphic\_A , italic\_P start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT , italic\_P , italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_γ ⟩, where 𝒳𝒳\mathcal{X}caligraphic\_X and 𝒜𝒜\mathcal{A}caligraphic\_A are the state and action spaces, Pr(x,a)subscript𝑃𝑟𝑥𝑎P\_{r}(x,a)italic\_P start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) is the distribution of the bounded random variable r(x,a)∈[0,Rmax]𝑟𝑥𝑎0subscript𝑅r(x,a)\in[0,R\_{\max}]italic\_r ( italic\_x , italic\_a ) ∈ [ 0 , italic\_R start\_POSTSUBSCRIPT roman\_max end\_POSTSUBSCRIPT ] of the immediate reward of taking action a𝑎aitalic\_a in state x𝑥xitalic\_x, P(⋅|x,a)P(\cdot|x,a)italic\_P ( ⋅ | italic\_x , italic\_a ) is the transition probability distribution, P0:𝒳→[0,1]:subscript𝑃0𝒳→01P\_{0}\mathrel{\mathop{:}}\mathcal{X}\rightarrow[0,1]italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT : caligraphic\_X → [ 0 , 1 ] is the initial state distribution, and γ∈[0,1)𝛾01\gamma\in[0,1)italic\_γ ∈ [ 0 , 1 ) is the discounting factor. A (stationary) policy π:𝒳×𝒜→[0,1]:𝜋𝒳𝒜→01\pi\mathrel{\mathop{:}}\mathcal{X}\times\mathcal{A}\rightarrow[0,1]italic\_π : caligraphic\_X × caligraphic\_A → [ 0 , 1 ] is a stochastic mapping from states to actions, with π(a|x)𝜋conditional𝑎𝑥\pi(a|x)italic\_π ( italic\_a | italic\_x ) being the probability of taking action a𝑎aitalic\_a in state x𝑥xitalic\_x. We denote by Pπsuperscript𝑃𝜋P^{\pi}italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT the state transition of the Markov chain induced by policy π𝜋\piitalic\_π, i.e., Pπ(xt+1|xt)=∑a∈𝒜π(a|xt)P(xt+1|xt,a)superscript𝑃𝜋conditionalsubscript𝑥𝑡1subscript𝑥𝑡subscript𝑎𝒜𝜋conditional𝑎subscript𝑥𝑡𝑃conditionalsubscript𝑥𝑡1subscript𝑥𝑡𝑎P^{\pi}(x\_{t+1}|x\_{t})=\sum\_{a\in\mathcal{A}}\pi(a|x\_{t})P(x\_{t+1}|x\_{t},a)italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT italic\_π ( italic\_a | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_P ( italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a ).
We denote by ξ=(x0,a0,r0,…,xT−1,aT−1,rT−1,xT)𝜉subscript𝑥0subscript𝑎0subscript𝑟0…subscript𝑥𝑇1subscript𝑎𝑇1subscript𝑟𝑇1subscript𝑥𝑇\xi=(x\_{0},a\_{0},r\_{0},\ldots,x\_{T-1},a\_{T-1},r\_{T-1},x\_{T})italic\_ξ = ( italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_x start\_POSTSUBSCRIPT italic\_T - 1 end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_T - 1 end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_T - 1 end\_POSTSUBSCRIPT , italic\_x start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT ) a T𝑇Titalic\_T-step trajectory generated by policy π𝜋\piitalic\_π, and by R0:T−1(ξ)=∑t=0T−1γtrtsubscript𝑅:0𝑇1𝜉superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝑟𝑡R\_{0\mathrel{\mathop{:}}T-1}(\xi)=\sum\_{t=0}^{T-1}\gamma^{t}r\_{t}italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) = ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT the return of trajectory ξ𝜉\xiitalic\_ξ. Note that in ξ𝜉\xiitalic\_ξ, x0∼P0similar-tosubscript𝑥0subscript𝑃0x\_{0}\sim P\_{0}italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ∼ italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT, and ∀t∈{1,…,T−1}for-all𝑡1…𝑇1\forall t\in\{1,\ldots,T-1\}∀ italic\_t ∈ { 1 , … , italic\_T - 1 }, at∼π(⋅|xt)a\_{t}\sim\pi(\cdot|x\_{t})italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_π ( ⋅ | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), xt+1∼P(⋅|xt,at)x\_{t+1}\sim P(\cdot|x\_{t},a\_{t})italic\_x start\_POSTSUBSCRIPT italic\_t + 1 end\_POSTSUBSCRIPT ∼ italic\_P ( ⋅ | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ), and rt∼Pr(⋅|xt,at)r\_{t}\sim P\_{r}(\cdot|x\_{t},a\_{t})italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ∼ italic\_P start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT ( ⋅ | italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). These distributions together define Pξπsubscriptsuperscript𝑃𝜋𝜉P^{\pi}\_{\xi}italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT, i.e., the distribution of trajectory ξ𝜉\xiitalic\_ξ. We evaluate a policy π𝜋\piitalic\_π by the expectation of the return of the T𝑇Titalic\_T-step trajectories it generates, i.e., ρTπ=𝔼ξ∼Pξπ[R0:T−1(ξ)]superscriptsubscript𝜌𝑇𝜋subscript𝔼similar-to𝜉subscriptsuperscript𝑃𝜋𝜉delimited-[]subscript𝑅:0𝑇1𝜉\rho\_{T}^{\pi}=\mathbb{E}\_{\xi\sim P^{\pi}\_{\xi}}\big{[}R\_{0\mathrel{\mathop{:}}T-1}(\xi)\big{]}italic\_ρ start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = blackboard\_E start\_POSTSUBSCRIPT italic\_ξ ∼ italic\_P start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) ]. If we set T𝑇Titalic\_T to be of order O(1/(1−γ))𝑂11𝛾O\big{(}1/(1-\gamma)\big{)}italic\_O ( 1 / ( 1 - italic\_γ ) ), then ρTπsuperscriptsubscript𝜌𝑇𝜋\rho\_{T}^{\pi}italic\_ρ start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT would be a good approximation of the infinite-horizon performance ρ∞πsuperscriptsubscript𝜌𝜋\rho\_{\infty}^{\pi}italic\_ρ start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT. Throughout the paper, we assume that T𝑇Titalic\_T has been selected such that ρTπ≈ρ∞πsuperscriptsubscript𝜌𝑇𝜋superscriptsubscript𝜌𝜋\rho\_{T}^{\pi}\approx\rho\_{\infty}^{\pi}italic\_ρ start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ≈ italic\_ρ start\_POSTSUBSCRIPT ∞ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT, and thus, we refer to ρπ=ρTπsuperscript𝜌𝜋superscriptsubscript𝜌𝑇𝜋\rho^{\pi}=\rho\_{T}^{\pi}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = italic\_ρ start\_POSTSUBSCRIPT italic\_T end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT as the performance of policy π𝜋\piitalic\_π. We further define the value (action-value) function of a policy π𝜋\piitalic\_π at each state x𝑥xitalic\_x (state-action pair (x,a)𝑥𝑎(x,a)( italic\_x , italic\_a )), denoted by Vπ(x)superscript𝑉𝜋𝑥V^{\pi}(x)italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x ) (Qπ(x,a)superscript𝑄𝜋𝑥𝑎Q^{\pi}(x,a)italic\_Q start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x , italic\_a )), as the expectation of the return of a T𝑇Titalic\_T-step trajectory generated by starting at state x𝑥xitalic\_x (state-action pair (x,a)𝑥𝑎(x,a)( italic\_x , italic\_a )), and then following policy π𝜋\piitalic\_π. Note that ρπ=𝔼x∼P0[Vπ(x)]superscript𝜌𝜋subscript𝔼similar-to𝑥subscript𝑃0delimited-[]superscript𝑉𝜋𝑥\rho^{\pi}=\mathbb{E}\_{x\sim P\_{0}}\big{[}V^{\pi}(x)\big{]}italic\_ρ start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT = blackboard\_E start\_POSTSUBSCRIPT italic\_x ∼ italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_V start\_POSTSUPERSCRIPT italic\_π end\_POSTSUPERSCRIPT ( italic\_x ) ].
Note that the contextual bandit setting is a special case of the setting described above, where T=1𝑇1T=1italic\_T = 1, and as a result, the context is sampled from P0subscript𝑃0P\_{0}italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT and there is no dynamic P𝑃Pitalic\_P.
###
2.2 Off-policy Evaluation Problem
The off-policy evaluation (OPE) problem is when we are given a set of T𝑇Titalic\_T-step trajectories 𝒟={ξ(i)}i=1n𝒟superscriptsubscriptsuperscript𝜉𝑖𝑖1𝑛\mathcal{D}=\{\xi^{(i)}\}\_{i=1}^{n}caligraphic\_D = { italic\_ξ start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT } start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT independently generated by the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT,111The results of this paper can be easily extended to the case that the trajectories are generated by multiple behavior policies. and the goal is to have a good estimate of the performance of the evaluation policy πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT. We consider the estimator ρ^πesuperscript^𝜌subscript𝜋𝑒\hat{\rho}^{\pi\_{e}}over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT good if it has low mean square error (MSE), i.e.,
| | | | |
| --- | --- | --- | --- |
| | MSE(ρπe,ρ^πe)=△𝔼Pξπb[(ρπe−ρ^πe)2].superscript△MSEsuperscript𝜌subscript𝜋𝑒superscript^𝜌subscript𝜋𝑒subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]superscriptsuperscript𝜌subscript𝜋𝑒superscript^𝜌subscript𝜋𝑒2\text{MSE}(\rho^{\pi\_{e}},\hat{\rho}^{\pi\_{e}})\stackrel{{\scriptstyle\triangle}}{{=}}\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\big{[}(\rho^{\pi\_{e}}-\hat{\rho}^{\pi\_{e}})^{2}\big{]}.MSE ( italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT , over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) start\_RELOP SUPERSCRIPTOP start\_ARG = end\_ARG start\_ARG △ end\_ARG end\_RELOP blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT - over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] . | | (1) |
We make the following standard regularity assumption:
######
Assumption 1 (Absolute Continuity).
For all state-action pairs (x,a)∈𝒳×𝒜𝑥𝑎𝒳𝒜(x,a)\in\mathcal{X}\times\mathcal{A}( italic\_x , italic\_a ) ∈ caligraphic\_X × caligraphic\_A, if πb(a|x)=0subscript𝜋𝑏conditional𝑎𝑥0\pi\_{b}(a|x)=0italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) = 0 then πe(a|x)=0subscript𝜋𝑒conditional𝑎𝑥0\pi\_{e}(a|x)=0italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) = 0.
In order to quantify the mismatch between the behavior and evaluation policies in generating a trajectory, we define cumulative importance ratio as follows. For each T𝑇Titalic\_T-step trajectory ξ∈𝒟𝜉𝒟\xi\in\mathcal{D}italic\_ξ ∈ caligraphic\_D, the cumulative importance ratio from time step t1subscript𝑡1t\_{1}italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT to time step t2subscript𝑡2t\_{2}italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, where both t1subscript𝑡1t\_{1}italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT and t2subscript𝑡2t\_{2}italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT are in {0,…,T}0…𝑇\{0,\ldots,T\}{ 0 , … , italic\_T }, is ωt1,t2=1subscript𝜔subscript𝑡1subscript𝑡21\omega\_{t\_{1},t\_{2}}=1italic\_ω start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = 1 if t1>t2subscript𝑡1subscript𝑡2t\_{1}>t\_{2}italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT > italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, and is ωt1,t2=∏τ=t1t2πe(aτ|xτ)πb(aτ|xτ)subscript𝜔subscript𝑡1subscript𝑡2superscriptsubscriptproduct𝜏subscript𝑡1subscript𝑡2subscript𝜋𝑒conditionalsubscript𝑎𝜏subscript𝑥𝜏subscript𝜋𝑏conditionalsubscript𝑎𝜏subscript𝑥𝜏\omega\_{t\_{1},t\_{2}}=\prod\_{\tau=t\_{1}}^{t\_{2}}\frac{\pi\_{e}(a\_{\tau}|x\_{\tau})}{\pi\_{b}(a\_{\tau}|x\_{\tau})}italic\_ω start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT = ∏ start\_POSTSUBSCRIPT italic\_τ = italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ) end\_ARG, otherwise. In case the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is unknown, we define ω^t1,t2subscript^𝜔subscript𝑡1subscript𝑡2\widehat{\omega}\_{t\_{1},t\_{2}}over^ start\_ARG italic\_ω end\_ARG start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT exactly as ωt1,t2subscript𝜔subscript𝑡1subscript𝑡2\omega\_{t\_{1},t\_{2}}italic\_ω start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT, with πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT replaced by its approximation π^bsubscript^𝜋𝑏\widehat{\pi}\_{b}over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT. Under Assumption [1](#Thmassumption1 "Assumption 1 (Absolute Continuity). ‣ 2.2 Off-policy Evaluation Problem ‣ 2 Preliminaries ‣ More Robust Doubly Robust Off-policy Evaluation"), it is easy to see that ρπe=𝔼Pξπe[∑t=0T−1γtrt]=𝔼Pξπb[∑t=0T−1γtω0:trt]superscript𝜌subscript𝜋𝑒subscript𝔼superscriptsubscript𝑃𝜉subscript𝜋𝑒delimited-[]superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝑟𝑡subscript𝔼superscriptsubscript𝑃𝜉subscript𝜋𝑏delimited-[]superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝜔:0𝑡subscript𝑟𝑡\rho^{\pi\_{e}}=\mathbb{E}\_{P\_{\xi}^{\pi\_{e}}}[\sum\_{t=0}^{T-1}\gamma^{t}r\_{t}]=\mathbb{E}\_{P\_{\xi}^{\pi\_{b}}}[\sum\_{t=0}^{T-1}\gamma^{t}\omega\_{0\mathrel{\mathop{:}}t}r\_{t}]italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ] = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ]. Similar equalities hold for the value and action-value functions of πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT, i.e., Vπe(x)=𝔼Pξπe[∑t=0T−1γtrt|x0=x]=𝔼Pξπb[∑t=0T−1γtω0:trt|x0=x]superscript𝑉subscript𝜋𝑒𝑥subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑒𝜉delimited-[]conditionalsuperscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝑟𝑡subscript𝑥0𝑥subscript𝔼superscriptsubscript𝑃𝜉subscript𝜋𝑏delimited-[]conditionalsuperscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝜔:0𝑡subscript𝑟𝑡subscript𝑥0𝑥V^{\pi\_{e}}(x)=\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}[\sum\_{t=0}^{T-1}\gamma^{t}r\_{t}|x\_{0}=x]=\mathbb{E}\_{P\_{\xi}^{\pi\_{b}}}[\sum\_{t=0}^{T-1}\gamma^{t}\omega\_{0\mathrel{\mathop{:}}t}r\_{t}|x\_{0}=x]italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x ) = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_x ] = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_x ] and Qπe(x,a)=𝔼Pξπe[∑t=0T−1γtrt|x0=x,a0=a]=𝔼Pξπb[∑t=0T−1γtω0:trt|x0=x,a0=a]superscript𝑄subscript𝜋𝑒𝑥𝑎subscript𝔼superscriptsubscript𝑃𝜉subscript𝜋𝑒delimited-[]formulae-sequenceconditionalsuperscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝑟𝑡subscript𝑥0𝑥subscript𝑎0𝑎subscript𝔼superscriptsubscript𝑃𝜉subscript𝜋𝑏delimited-[]formulae-sequenceconditionalsuperscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝜔:0𝑡subscript𝑟𝑡subscript𝑥0𝑥subscript𝑎0𝑎Q^{\pi\_{e}}(x,a)=\mathbb{E}\_{P\_{\xi}^{\pi\_{e}}}[\sum\_{t=0}^{T-1}\gamma^{t}r\_{t}|x\_{0}=x,a\_{0}=a]=\mathbb{E}\_{P\_{\xi}^{\pi\_{b}}}[\sum\_{t=0}^{T-1}\gamma^{t}\omega\_{0\mathrel{\mathop{:}}t}r\_{t}|x\_{0}=x,a\_{0}=a]italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_x , italic\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_a ] = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_x , italic\_a start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT = italic\_a ].
3 Existing Approaches to OPE
-----------------------------
The objective of MRDR is to learn the model part of a DR estimator by minimizing its variance.
MRDR is a variation of DR with a DM loss function derived from minimizing the DR’s variance and is built on the top of IS and DM.
Therefore, before stating our main results in Section [4](#S4 "4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation"), we first provide a brief overview of these popular approaches.
###
3.1 Direct Estimators
The idea of the direct method (DM) is to first learn a model of the system and then use it to estimate the performance of the evaluation policy πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT. In the case of bandits, this model is the mean reward of each pair of context and arm, and in RL it is either the mean reward r(x,a)𝑟𝑥𝑎r(x,a)italic\_r ( italic\_x , italic\_a ) and state transition P(⋅|x,a)P(\cdot|x,a)italic\_P ( ⋅ | italic\_x , italic\_a ), or the value (action-value) V(x)𝑉𝑥V(x)italic\_V ( italic\_x ) (Q(x,a)𝑄𝑥𝑎Q(x,a)italic\_Q ( italic\_x , italic\_a )) function. In either case, if we select a good representation for the quantities that need to be learned, and our dataset222Note that we shall use separate datasets for learning the model in DM and evaluating the policy. contains sufficient number of the states and actions relevant to the evaluation of πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT, then the DM estimator has low variance and small bias, and thus, has the potential to outperform the estimators resulted from other approaches.
As mentioned in Section [1](#S1 "1 Introduction ‣ More Robust Doubly Robust Off-policy Evaluation"), an important issue that has been neglected in the previous work on off-policy evaluation in RL is the loss function used in estimating the model in DM. As pointed out by Dudík et al. ([2011](#bib.bib6)), the direct approach has a problem if the model is estimated without the knowledge of the evaluation policy. This is because the distribution of the states and actions that are visited under the evaluation policy should be included in the loss function of the direct approach. In other words, if upon learning a model, we have no information about the evaluation policy (or the distribution of the evaluation policies), then it is not clear how to design the DM’s loss function (perhaps a uniform distribution over the states and actions would be the most reasonable). Therefore, in this paper, we assume that the evaluation policy is known prior to learning the model.333Our results can be extended to the case that the distribution of the evaluation policies is known prior to learning the model.
In their DM and DR experiments, both Jiang & Li ([2016](#bib.bib11)) and Thomas & Brunskill ([2016](#bib.bib31)) learn the MDP model, r(x,a)𝑟𝑥𝑎r(x,a)italic\_r ( italic\_x , italic\_a ) and P(⋅|x,a)P(\cdot|x,a)italic\_P ( ⋅ | italic\_x , italic\_a ), although all the model learning discussion in Thomas & Brunskill ([2016](#bib.bib31)) is about the reward of the evaluation policy πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT at every step t𝑡titalic\_t along the T𝑇Titalic\_T-step trajectory, i.e., rπe(x,t)superscript𝑟subscript𝜋𝑒𝑥𝑡r^{\pi\_{e}}(x,t)italic\_r start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_t ). More generally, in off-policy actor-critic algorithms (such as the Reactor algorithm proposed in Gruslys et al. [2017](#bib.bib8)), where one can view the gradient estimation part as an off-policy value evaluation problem, the DM state-action value function model is learned by minimizing the Bellman residual in an off-policy setting (Precup et al., [2000b](#bib.bib23); Munos et al., [2016](#bib.bib18); Geist & Scherrer, [2014](#bib.bib7)).
However, neither of these three approaches incorporate the design of the DM loss function into the primary objective, perhaps because they consider the setting in which the model is learned independently.
Our approach to DM in RL: In this paper, we propose to learn Qπesuperscript𝑄subscript𝜋𝑒Q^{\pi\_{e}}italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT, the action-value function of the evaluation policy πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT, and then use it to evaluate its performance as
| | | |
| --- | --- | --- |
| | ρ^DMπe=1n∑i=1n∑a∈𝒜πe(a|x0(i))Q^πe(x0(i),a;βn\*).superscriptsubscript^𝜌DMsubscript𝜋𝑒1𝑛superscriptsubscript𝑖1𝑛subscript𝑎𝒜subscript𝜋𝑒conditional𝑎subscriptsuperscript𝑥𝑖0superscript^𝑄subscript𝜋𝑒subscriptsuperscript𝑥𝑖0𝑎subscriptsuperscript𝛽𝑛\hat{\rho}\_{\text{DM}}^{\pi\_{e}}=\frac{1}{n}\sum\_{i=1}^{n}\sum\_{a\in\mathcal{A}}\pi\_{e}(a|x^{(i)}\_{0})\widehat{Q}^{\pi\_{e}}(x^{(i)}\_{0},a;\beta^{\*}\_{n}).over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DM end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_a | italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT ) over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_a ; italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) . | |
We model Qπesuperscript𝑄subscript𝜋𝑒Q^{\pi\_{e}}italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT using a parameterized class of functions with parameter β∈ℝκ𝛽superscriptℝ𝜅\beta\in\mathbb{R}^{\kappa}italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT and learn β𝛽\betaitalic\_β by solving the following weighted MSE problem
| | | | |
| --- | --- | --- | --- |
| | β\*∈argminβ∈ℝκ𝔼(x,a)∼μπe[(Qπe(x,a)−Q^πe(x,a;β))2],superscript𝛽subscript𝛽superscriptℝ𝜅subscript𝔼similar-to𝑥𝑎subscript𝜇subscript𝜋𝑒delimited-[]superscriptsuperscript𝑄subscript𝜋𝑒𝑥𝑎superscript^𝑄subscript𝜋𝑒𝑥𝑎𝛽2\beta^{\*}\in\arg\min\_{\beta\in\mathbb{R}^{\kappa}}\mathbb{E}\_{(x,a)\sim\mu\_{\pi\_{e}}}\Big{[}\big{(}Q^{\pi\_{e}}(x,a)-\widehat{Q}^{\pi\_{e}}(x,a;\beta)\big{)}^{2}\Big{]},italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ∼ italic\_μ start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] , | | (2) |
where μπesubscript𝜇subscript𝜋𝑒\mu\_{\pi\_{e}}italic\_μ start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT is the γ𝛾\gammaitalic\_γ-discounted horizon-T𝑇Titalic\_T state-action occupancy of πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT, i.e., μπe(x,a)=1−γ1−γT∑t=0T−1γt𝔼Pξπe[𝟏{xt\mu\_{\pi\_{e}}(x,a)=\frac{1-\gamma}{1-\gamma^{T}}\sum\_{t=0}^{T-1}\gamma^{t}\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}\big{[}\mathbf{1}\{x\_{t}italic\_μ start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x , italic\_a ) = divide start\_ARG 1 - italic\_γ end\_ARG start\_ARG 1 - italic\_γ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ bold\_1 { italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT =x,at=a}]=x,a\_{t}=a\}\big{]}= italic\_x , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_a } ] and 𝟏{⋅}1⋅\mathbf{1}\{\cdot\}bold\_1 { ⋅ } is the indicator function. Since the actions in the data set 𝒟𝒟\mathcal{D}caligraphic\_D are generated by πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT, we rewrite the objective function of the optimization problem ([2](#S3.E2 "2 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) as
| | | | |
| --- | --- | --- | --- |
| | ∑t=0T−1γt𝔼Pξπb[ω0:t(R¯t:T−1(ξ)−Q^πe(xt,at;β))2],superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]subscript𝜔:0𝑡superscriptsubscript¯𝑅:𝑡𝑇1𝜉superscript^𝑄subscript𝜋𝑒subscript𝑥𝑡subscript𝑎𝑡𝛽2\sum\_{t=0}^{T-1}\gamma^{t}\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\Big{[}\omega\_{0\mathrel{\mathop{:}}t}\big{(}\bar{R}\_{t\mathrel{\mathop{:}}T-1}(\xi)-\widehat{Q}^{\pi\_{e}}(x\_{t},a\_{t};\beta)\big{)}^{2}\Big{]},∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] , | | (3) |
where R¯t:T−1(ξ)=∑τ=tT−1γτ−tωt+1:τr(xτ,aτ)subscript¯𝑅:𝑡𝑇1𝜉superscriptsubscript𝜏𝑡𝑇1superscript𝛾𝜏𝑡subscript𝜔:𝑡1𝜏𝑟subscript𝑥𝜏subscript𝑎𝜏\bar{R}\_{t\mathrel{\mathop{:}}T-1}(\xi)=\sum\_{\tau=t}^{T-1}\gamma^{\tau-t}\omega\_{t+1\mathrel{\mathop{:}}\tau}r(x\_{\tau},a\_{\tau})over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) = ∑ start\_POSTSUBSCRIPT italic\_τ = italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_τ - italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT italic\_t + 1 : italic\_τ end\_POSTSUBSCRIPT italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ) is the Monte Carlo estimate of Qπe(xt,at)superscript𝑄subscript𝜋𝑒subscript𝑥𝑡subscript𝑎𝑡Q^{\pi\_{e}}(x\_{t},a\_{t})italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ). The proof of the equivalence of the objective functions ([2](#S3.E2 "2 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) and ([3](#S3.E3 "3 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) can be found in Appendix [A](#A1 "Appendix A Proofs of Section 3.1 ‣ More Robust Doubly Robust Off-policy Evaluation"). We obtain βn\*subscriptsuperscript𝛽𝑛\beta^{\*}\_{n}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT by solving the sample average approximation (SAA) of ([3](#S3.E3 "3 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")), i.e.,
| | | | |
| --- | --- | --- | --- |
| | βn\*∈argminβ∈ℝκ∑t=0T−1γt⋅1n∑i=1nω0:t(i)subscriptsuperscript𝛽𝑛subscript𝛽superscriptℝ𝜅superscriptsubscript𝑡0𝑇1⋅superscript𝛾𝑡1𝑛superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑡\displaystyle\beta^{\*}\_{n}\in\arg\min\_{\beta\in\mathbb{R}^{\kappa}}\sum\_{t=0}^{T-1}\gamma^{t}\cdot\frac{1}{n}\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}t}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ⋅ divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT | [R¯t:T−1(ξ(i))\displaystyle\big{[}\bar{R}\_{t\mathrel{\mathop{:}}T-1}(\xi^{(i)})[ over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT ) | |
| | | −Q^πe(xt(i),at(i);β)]2.\displaystyle-\widehat{Q}^{\pi\_{e}}(x^{(i)}\_{t},a^{(i)}\_{t};\beta)\big{]}^{2}.- over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ] start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT . | | (4) |
Since the SAA estimator ([3.1](#S3.Ex2 "3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is unbiased, for large enough n𝑛nitalic\_n, βn\*→β\*→subscriptsuperscript𝛽𝑛superscript𝛽\beta^{\*}\_{n}\rightarrow\beta^{\*}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT → italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT almost surely. We define the bias of our DM estimator at each state-action pair as Δ(x,a)=Q^πe(x,a;β)−Qπe(x,a)Δ𝑥𝑎superscript^𝑄subscript𝜋𝑒𝑥𝑎𝛽superscript𝑄subscript𝜋𝑒𝑥𝑎\Delta(x,a)=\widehat{Q}^{\pi\_{e}}(x,a;\beta)-Q^{\pi\_{e}}(x,a)roman\_Δ ( italic\_x , italic\_a ) = over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ; italic\_β ) - italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ). Note that in contextual bandits with deterministic evaluation policy, the SAA ([3.1](#S3.Ex2 "3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) may be written as the weighted least square (WLS) problem
| | | | |
| --- | --- | --- | --- |
| | βn\*∈1n∑i=1n𝟏{πe(xi)=ai}πb(ai|xi)(r(xi,ai)−Q^(xi,ai;β))2,subscriptsuperscript𝛽𝑛1𝑛superscriptsubscript𝑖1𝑛1subscript𝜋𝑒subscript𝑥𝑖subscript𝑎𝑖subscript𝜋𝑏conditionalsubscript𝑎𝑖subscript𝑥𝑖superscript𝑟subscript𝑥𝑖subscript𝑎𝑖^𝑄subscript𝑥𝑖subscript𝑎𝑖𝛽2\beta^{\*}\_{n}\in\frac{1}{n}\sum\_{i=1}^{n}\frac{\mathbf{1}\{\pi\_{e}(x\_{i})=a\_{i}\}}{\pi\_{b}(a\_{i}|x\_{i})}\big{(}r(x\_{i},a\_{i})-\widehat{Q}(x\_{i},a\_{i};\beta)\big{)}^{2},italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT divide start\_ARG bold\_1 { italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT } end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG ( italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - over^ start\_ARG italic\_Q end\_ARG ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT , | | (5) |
with weights 1/πb(ai|xi)1subscript𝜋𝑏conditionalsubscript𝑎𝑖subscript𝑥𝑖1/\pi\_{b}(a\_{i}|x\_{i})1 / italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) for the actions consistent with πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT.
###
3.2 Importance Sampling Estimators
Another common approach to off-policy evaluation in RL is to use importance sampling (IS) to estimate the performance of the evaluation policy, i.e.,
| | | | |
| --- | --- | --- | --- |
| | ρ^ISπe=1n∑i=1nω0:T−1(i)∑t=0T−1γtrt(i)=1n∑i=1nω0:T−1(i)R0:T−1(i),superscriptsubscript^𝜌ISsubscript𝜋𝑒1𝑛superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscriptsuperscript𝑟𝑖𝑡1𝑛superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1superscriptsubscript𝑅:0𝑇1𝑖\hat{\rho}\_{\text{IS}}^{\pi\_{e}}=\frac{1}{n}\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}\sum\_{t=0}^{T-1}\gamma^{t}r^{(i)}\_{t}=\frac{1}{n}\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}R\_{0\mathrel{\mathop{:}}T-1}^{(i)},over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT IS end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT , | | (6) |
where ω0:T−1(i)subscriptsuperscript𝜔𝑖:0𝑇1\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT and rt(i)superscriptsubscript𝑟𝑡𝑖r\_{t}^{(i)}italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT are the cumulative importance ratio and reward at step t𝑡titalic\_t of trajectory ξ(i)∈𝒟superscript𝜉𝑖𝒟\xi^{(i)}\in\mathcal{D}italic\_ξ start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT ∈ caligraphic\_D, respectively, and R0:T−1(i)=R0:T−1(ξ(i))superscriptsubscript𝑅:0𝑇1𝑖subscript𝑅:0𝑇1superscript𝜉𝑖R\_{0\mathrel{\mathop{:}}T-1}^{(i)}=R\_{0\mathrel{\mathop{:}}T-1}(\xi^{(i)})italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT = italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT ). Under Assumption [1](#Thmassumption1 "Assumption 1 (Absolute Continuity). ‣ 2.2 Off-policy Evaluation Problem ‣ 2 Preliminaries ‣ More Robust Doubly Robust Off-policy Evaluation"), the IS estimator ([6](#S3.E6 "6 ‣ 3.2 Importance Sampling Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is unbiased.
A variant of IS that often has less variance, while still unbiased, is step-wise importance sampling (step-IS), i.e.,
| | | |
| --- | --- | --- |
| | ρ^step-ISπe=1n∑i=1n∑t=0T−1γtω0:t(i)rt(i).superscriptsubscript^𝜌step-ISsubscript𝜋𝑒1𝑛superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscriptsuperscript𝜔𝑖:0𝑡subscriptsuperscript𝑟𝑖𝑡\hat{\rho}\_{\text{step-IS}}^{\pi\_{e}}=\frac{1}{n}\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\gamma^{t}\omega^{(i)}\_{0\mathrel{\mathop{:}}t}\,\,r^{(i)}\_{t}.over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT step-IS end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT . | |
If the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is unknown, which is the case in many applications, then either πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT or the importance ratio ω=πe/πb𝜔subscript𝜋𝑒subscript𝜋𝑏\omega=\pi\_{e}/\pi\_{b}italic\_ω = italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT / italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT needs to be estimated, and thus, IS may no longer be unbiased. In this case, the bias of IS and step-IS are |𝔼Pξπe[δ0:T−1(ξ)R0:T−1(ξ)]|subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑒𝜉delimited-[]subscript𝛿:0𝑇1𝜉subscript𝑅:0𝑇1𝜉\left|\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}\left[\delta\_{0\mathrel{\mathop{:}}T-1}(\xi)R\_{0\mathrel{\mathop{:}}T-1}(\xi)\right]\right|| blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_δ start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) italic\_R start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) ] | and |∑t=0T−1γt𝔼Pξπe[δ0:t(ξ)rt]|superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑒𝜉delimited-[]subscript𝛿:0𝑡𝜉subscript𝑟𝑡\left|\sum\_{t=0}^{T-1}\gamma^{t}\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}\left[\delta\_{0\mathrel{\mathop{:}}t}(\xi)r\_{t}\right]\right|| ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_δ start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT ( italic\_ξ ) italic\_r start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ] |, respectively, where δ0:t(ξ)=1−λ0:t(ξ)=1−∏τ=0tπb(aτ|xτ)π^b(aτ|xτ)subscript𝛿:0𝑡𝜉1subscript𝜆:0𝑡𝜉1superscriptsubscriptproduct𝜏0𝑡subscript𝜋𝑏conditionalsubscript𝑎𝜏subscript𝑥𝜏subscript^𝜋𝑏conditionalsubscript𝑎𝜏subscript𝑥𝜏\delta\_{0\mathrel{\mathop{:}}t}(\xi)=1-\lambda\_{0\mathrel{\mathop{:}}t}(\xi)=1-\prod\_{\tau=0}^{t}\frac{\pi\_{b}(a\_{\tau}|x\_{\tau})}{\widehat{\pi}\_{b}(a\_{\tau}|x\_{\tau})}italic\_δ start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT ( italic\_ξ ) = 1 - italic\_λ start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT ( italic\_ξ ) = 1 - ∏ start\_POSTSUBSCRIPT italic\_τ = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ) end\_ARG start\_ARG over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ) end\_ARG, with π^bsubscript^𝜋𝑏\widehat{\pi}\_{b}over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT being our approximation of πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT (see the proofs in Appendix [B](#A2 "Appendix B Proofs of Section 3.2 ‣ More Robust Doubly Robust Off-policy Evaluation")). Note that when πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is known, i.e., π^b=πbsubscript^𝜋𝑏subscript𝜋𝑏\widehat{\pi}\_{b}=\pi\_{b}over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT = italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT, we have δ0:t=0subscript𝛿:0𝑡0\delta\_{0\mathrel{\mathop{:}}t}=0italic\_δ start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT = 0, and the bias of both IS and step-IS would be zero.
Although the unbiasedness of IS estimators is desirable for certain applications such as safety (Thomas et al., [2015b](#bib.bib33)), their high variance (even in the step-wise case), which grows exponentially in the horizon T𝑇Titalic\_T, restricts their applications. This is why another variant of IS, called weighted importance sampling (WIS), and particularly its step-wise version, i.e.,
| | | | |
| --- | --- | --- | --- |
| | ρ^WISπesuperscriptsubscript^𝜌WISsubscript𝜋𝑒\displaystyle\hat{\rho}\_{\text{WIS}}^{\pi\_{e}}over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT WIS end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT | =∑i=1nω0:T−1(i)∑i=1nω0:T−1(i)∑t=0T−1γtrt(i)=∑i=1nω0:T−1(i)R0:T−1(i)∑i=1nω0:T−1(i),absentsuperscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscriptsuperscript𝑟𝑖𝑡superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1subscriptsuperscript𝑅𝑖:0𝑇1superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑇1\displaystyle=\sum\_{i=1}^{n}\frac{\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}}{\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}}\sum\_{t=0}^{T-1}\gamma^{t}r^{(i)}\_{t}=\sum\_{i=1}^{n}\frac{\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}R^{(i)}\_{0\mathrel{\mathop{:}}T-1}}{\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}T-1}},= ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT divide start\_ARG italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT end\_ARG ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_r start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT divide start\_ARG italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT italic\_R start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_T - 1 end\_POSTSUBSCRIPT end\_ARG , | |
| | ρ^step-WISπesuperscriptsubscript^𝜌step-WISsubscript𝜋𝑒\displaystyle\hat{\rho}\_{\text{step-WIS}}^{\pi\_{e}}over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT step-WIS end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT | =∑i=1n∑t=0T−1γtω0:t(i)rt(i)∑i=1nω0:t(i),absentsuperscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscriptsuperscript𝜔𝑖:0𝑡subscriptsuperscript𝑟𝑖𝑡superscriptsubscript𝑖1𝑛subscriptsuperscript𝜔𝑖:0𝑡\displaystyle=\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\gamma^{t}\frac{\omega^{(i)}\_{0\mathrel{\mathop{:}}t}\,\,r^{(i)}\_{t}}{\sum\_{i=1}^{n}\omega^{(i)}\_{0\mathrel{\mathop{:}}t}},= ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT divide start\_ARG italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT italic\_r start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_ARG start\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT end\_ARG , | |
is considered more practical, especially where being biased is not crucial. The WIS estimators are biased but consistent and have lower variance than their IS counterparts.
###
3.3 Doubly Robust Estimators
Doubly robust (DR) estimators that combine DM and IS were first developed for regression (e.g., Cassel et al. [1976](#bib.bib5)), brought to contextual bandits by Dudík et al. ([2011](#bib.bib6)), and to RL by Jiang & Li ([2016](#bib.bib11)) and Thomas & Brunskill ([2016](#bib.bib31)). The DR estimator for RL is defined as
| | | | | |
| --- | --- | --- | --- | --- |
| | ρ^DRπe(β)superscriptsubscript^𝜌DRsubscript𝜋𝑒𝛽\displaystyle\hat{\rho}\_{\text{DR}}^{\pi\_{e}}(\beta)over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_β ) | =1n∑i=1n∑t=0T−1[γtω0:t(i)rt(i)\displaystyle=\frac{1}{n}\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\Big{[}\gamma^{t}\omega\_{0\mathrel{\mathop{:}}t}^{(i)}r^{(i)}\_{t}= divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT [ italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT italic\_r start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | | (7) |
| | | −γt(ω0:t(i)Q^πe(xt(i),at(i);β)−ω0:t−1(i)V^πe(xt(i);β))].\displaystyle-\gamma^{t}\big{(}\omega\_{0\mathrel{\mathop{:}}t}^{(i)}\widehat{Q}^{\pi\_{e}}(x^{(i)}\_{t},a^{(i)}\_{t};\beta)-\omega\_{0\mathrel{\mathop{:}}t-1}^{(i)}\widehat{V}^{\pi\_{e}}(x^{(i)}\_{t};\beta)\big{)}\Big{]}.- italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) - italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) ] . | |
Eq. [7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation") clearly shows that a DR estimator contains both the cumulative importance ratio ω𝜔\omegaitalic\_ω (IS part) and the model estimates V^πesuperscript^𝑉subscript𝜋𝑒\widehat{V}^{\pi\_{e}}over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT and Q^πesuperscript^𝑄subscript𝜋𝑒\widehat{Q}^{\pi\_{e}}over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT (DM part). Note that the IS part of the DR estimator ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is based on step-wise IS. Thomas & Brunskill ([2016](#bib.bib31)) derived a DR estimator whose IS part is based on step-wise WIS. In this paper, we use step-wise IS for the IS part of our DR-based estimators, but our results can be easily extended to other IS estimators.
The bias of a DR estimator is the product of that of DM and IS, and thus, DR is unbiased whenever either IS or DM is unbiased. This is what the term “doubly robust” refers to. The bias of the DR estimator ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is |𝔼Pξπe[∑t=0T−1γtλ0:t−1(ξ)δt(ξ)Δ(xt,at)]|subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑒𝜉delimited-[]superscriptsubscript𝑡0𝑇1superscript𝛾𝑡subscript𝜆:0𝑡1𝜉subscript𝛿𝑡𝜉Δsubscript𝑥𝑡subscript𝑎𝑡|\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}[\sum\_{t=0}^{T-1}\gamma^{t}\lambda\_{0\mathrel{\mathop{:}}t-1}(\xi)\delta\_{t}(\xi)\Delta(x\_{t},a\_{t})]|| blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT italic\_λ start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT ( italic\_ξ ) italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_ξ ) roman\_Δ ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] | (see the proofs in Appendix [C](#A3 "Appendix C Proofs of Section 3.3 ‣ More Robust Doubly Robust Off-policy Evaluation")), and thus, it would be zero if either Δ(xt,at)Δsubscript𝑥𝑡subscript𝑎𝑡\Delta(x\_{t},a\_{t})roman\_Δ ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) or δt(ξ)subscript𝛿𝑡𝜉\delta\_{t}(\xi)italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( italic\_ξ ) is zero. As discussed in Section [3.2](#S3.SS2 "3.2 Importance Sampling Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation"), if πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is known, δt=0subscript𝛿𝑡0\delta\_{t}=0italic\_δ start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = 0 and the DR estimator ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is unbiased. Throughout this paper, we assume that πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is known, and thus, DR is unbiased as long as it uses unbiased variants of IS. However, our proposed estimator described in Section [4](#S4 "4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation") can be extended to the case that πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is unknown.
4 More Robust Doubly Robust Estimators
---------------------------------------
In this section, we present our class of more robust doubly robust (MRDR) estimators. The main idea of MRDR is to learn the DM parameter of a DR estimator, β∈ℝκ𝛽superscriptℝ𝜅\beta\in\mathbb{R}^{\kappa}italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT, by minimizing its variance. In other words, MRDR is a variation of DR with a DM loss function derived from minimizing the DR’s variance. As mentioned earlier, we assume that the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is known, and thus, both IS (step-IS) and DR estimators are unbiased. This means that our MRDR estimator is also unbiased, and since it is the result of minimizing the DR’s variance, it has the lowest MSE among all the DR estimators.
###
4.1 MRDR Estimators for Contextual Bandits
Before presenting MRDR for RL, we first formulate it in the contextual bandit setting. We follow the setting of Dudík et al. ([2011](#bib.bib6)) and define the DR estimator as
| | | | |
| --- | --- | --- | --- |
| | ρ^DRπe(β)=1n∑i=1nπe(ai|xi)π^b(ai|xi)(r(xi,ai)−Q^(xi,ai;β))+V^πe(xi;β),superscriptsubscript^𝜌DRsubscript𝜋𝑒𝛽1𝑛superscriptsubscript𝑖1𝑛subscript𝜋𝑒conditionalsubscript𝑎𝑖subscript𝑥𝑖subscript^𝜋𝑏conditionalsubscript𝑎𝑖subscript𝑥𝑖𝑟subscript𝑥𝑖subscript𝑎𝑖^𝑄subscript𝑥𝑖subscript𝑎𝑖𝛽superscript^𝑉subscript𝜋𝑒subscript𝑥𝑖𝛽\begin{split}\hat{\rho}\_{\text{DR}}^{\pi\_{e}}(\beta)=\frac{1}{n}\sum\_{i=1}^{n}\frac{\pi\_{e}(a\_{i}|x\_{i})}{\widehat{\pi}\_{b}(a\_{i}|x\_{i})}&\big{(}r(x\_{i},a\_{i})-\widehat{Q}(x\_{i},a\_{i};\beta)\big{)}\\
&\quad+\widehat{V}^{\pi\_{e}}(x\_{i};\beta),\end{split}start\_ROW start\_CELL over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_β ) = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG end\_CELL start\_CELL ( italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - over^ start\_ARG italic\_Q end\_ARG ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_β ) ) end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL + over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_β ) , end\_CELL end\_ROW | | (8) |
where Q^(x,a;β)≈Q(x,a)=𝔼Pr[r(x,a)]^𝑄𝑥𝑎𝛽𝑄𝑥𝑎subscript𝔼subscript𝑃𝑟delimited-[]𝑟𝑥𝑎\widehat{Q}(x,a;\beta)\approx Q(x,a)=\mathbb{E}\_{P\_{r}}[r(x,a)]over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) ≈ italic\_Q ( italic\_x , italic\_a ) = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT italic\_r end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] and V^πe(x;β)=𝔼a∼πe[Q^(x,a;β)]superscript^𝑉subscript𝜋𝑒𝑥𝛽subscript𝔼similar-to𝑎subscript𝜋𝑒delimited-[]^𝑄𝑥𝑎𝛽\widehat{V}^{\pi\_{e}}(x;\beta)=\mathbb{E}\_{a\sim\pi\_{e}}[\widehat{Q}(x,a;\beta)]over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x ; italic\_β ) = blackboard\_E start\_POSTSUBSCRIPT italic\_a ∼ italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) ]. We further define the DM bias Δ(x,a)=Δ𝑥𝑎absent\Delta(x,a)=roman\_Δ ( italic\_x , italic\_a ) = Q^(x,a;β)−Q(x,a)^𝑄𝑥𝑎𝛽𝑄𝑥𝑎\widehat{Q}(x,a;\beta)-Q(x,a)over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) - italic\_Q ( italic\_x , italic\_a ), and error in learning the behavior policy δ(x,a)=1−λ(x,a)=1−πb(a|x)π^b(a|x)𝛿𝑥𝑎1𝜆𝑥𝑎1subscript𝜋𝑏conditional𝑎𝑥subscript^𝜋𝑏conditional𝑎𝑥\delta(x,a)=1-\lambda(x,a)=1-\frac{\pi\_{b}(a|x)}{\widehat{\pi}\_{b}(a|x)}italic\_δ ( italic\_x , italic\_a ) = 1 - italic\_λ ( italic\_x , italic\_a ) = 1 - divide start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG start\_ARG over^ start\_ARG italic\_π end\_ARG start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG. Proposition [1](#Thmproposition1 "Proposition 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation") proves the bias and variance of DR for stochastic evaluation policy πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT. Note that the results stated in Theorems 1 and 2 in Dudík et al. ([2011](#bib.bib6)) are only for deterministic πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT.
######
Proposition 1.
The bias and variance of the DR estimator ([8](#S4.E8 "8 ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) for stochastic πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT may be written as
| | | | |
| --- | --- | --- | --- |
| | 𝐵𝑖𝑎𝑠(ρ^𝐷𝑅πe)𝐵𝑖𝑎𝑠superscriptsubscript^𝜌𝐷𝑅subscript𝜋𝑒\displaystyle\text{Bias}(\hat{\rho}\_{\text{DR}}^{\pi\_{e}})Bias ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) | =|ρπe−𝔼Pξπb[ρ^𝐷𝑅πe]|=|𝔼Pξπe[δ(x,a)Δ(x,a)]|,absentsuperscript𝜌subscript𝜋𝑒subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]superscriptsubscript^𝜌𝐷𝑅subscript𝜋𝑒subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑒𝜉delimited-[]𝛿𝑥𝑎Δ𝑥𝑎\displaystyle=\left|\rho^{\pi\_{e}}-\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}[\hat{\rho}\_{\text{DR}}^{\pi\_{e}}]\right|=\left|\mathbb{E}\_{P^{\pi\_{e}}\_{\xi}}\left[\delta(x,a)\Delta(x,a)\right]\right|,= | italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT - blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ] | = | blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_δ ( italic\_x , italic\_a ) roman\_Δ ( italic\_x , italic\_a ) ] | , | |
| | n𝕍Pξπb(ρ^𝐷𝑅πe)𝑛subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉superscriptsubscript^𝜌𝐷𝑅subscript𝜋𝑒\displaystyle n\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}\_{\text{DR}}^{\pi\_{e}})italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) | =𝔼Pξπb[ω^(x,a)2(r(x,a)−Q(x,a))2]absentsubscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]^𝜔superscript𝑥𝑎2superscript𝑟𝑥𝑎𝑄𝑥𝑎2\displaystyle=\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\left[\widehat{\omega}(x,a)^{2}\big{(}r(x,a)-Q(x,a)\big{)}^{2}\right]= blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ over^ start\_ARG italic\_ω end\_ARG ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_r ( italic\_x , italic\_a ) - italic\_Q ( italic\_x , italic\_a ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] | |
| | | +𝕍P0(𝔼πe[Q(x,a)+δ(x,a)Δ(x,a)])subscript𝕍subscript𝑃0subscript𝔼subscript𝜋𝑒delimited-[]𝑄𝑥𝑎𝛿𝑥𝑎Δ𝑥𝑎\displaystyle+\mathbb{V}\_{P\_{0}}\left(\mathbb{E}\_{\pi\_{e}}\left[Q(x,a)+\delta(x,a)\Delta(x,a)\right]\right)+ blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_Q ( italic\_x , italic\_a ) + italic\_δ ( italic\_x , italic\_a ) roman\_Δ ( italic\_x , italic\_a ) ] ) | |
| | | +𝔼P0,πe[ω(x,a)(1−δ(x,a))2Δ(x,a)2\displaystyle+\mathbb{E}\_{P\_{0},\pi\_{e}}\Big{[}\omega(x,a)\big{(}1-\delta(x,a)\big{)}^{2}\Delta(x,a)^{2}+ blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a ) ( 1 - italic\_δ ( italic\_x , italic\_a ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT roman\_Δ ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT | |
| | | −𝔼πe[(1−δ(x,a))Δ(x,a)]2].\displaystyle-\mathbb{E}\_{\pi\_{e}}\big{[}\big{(}1-\delta(x,a)\big{)}\Delta(x,a)\big{]}^{2}\Big{]}.- blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( 1 - italic\_δ ( italic\_x , italic\_a ) ) roman\_Δ ( italic\_x , italic\_a ) ] start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] . | |
###### Proof.
See Appendix [D](#A4 "Appendix D Proofs of Section 4.1 ‣ More Robust Doubly Robust Off-policy Evaluation").
∎
As expected from a DR estimator, Proposition [1](#Thmproposition1 "Proposition 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation") shows that ([8](#S4.E8 "8 ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is unbiased if either its DM part is unbiased, Δ=0Δ0\Delta=0roman\_Δ = 0, or its IS part is unbiased, δ=0𝛿0\delta=0italic\_δ = 0. When the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT is known, and thus, δ(x,a)=0𝛿𝑥𝑎0\delta(x,a)=0italic\_δ ( italic\_x , italic\_a ) = 0 for all x𝑥xitalic\_x and a𝑎aitalic\_a, the variance of ([8](#S4.E8 "8 ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) in Proposition [1](#Thmproposition1 "Proposition 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation") may be written as
| | | | | |
| --- | --- | --- | --- | --- |
| | | n𝕍Pξπb(ρ^DRπe)=𝔼Pξπb[ω(x,a)2(r(x,a)−Q(x,a))2]𝑛subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉superscriptsubscript^𝜌DRsubscript𝜋𝑒subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]𝜔superscript𝑥𝑎2superscript𝑟𝑥𝑎𝑄𝑥𝑎2\displaystyle n\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}\_{\text{DR}}^{\pi\_{e}})=\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\left[\omega(x,a)^{2}\big{(}r(x,a)-Q(x,a)\big{)}^{2}\right]italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( italic\_r ( italic\_x , italic\_a ) - italic\_Q ( italic\_x , italic\_a ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] | | (9) |
| | | +𝕍P0[Vπe(x)]+𝔼P0,πe[ω(x,a)Δ(x,a)2−𝔼πe[Δ(x,a)]2].subscript𝕍subscript𝑃0delimited-[]superscript𝑉subscript𝜋𝑒𝑥subscript𝔼subscript𝑃0subscript𝜋𝑒delimited-[]𝜔𝑥𝑎Δsuperscript𝑥𝑎2subscript𝔼subscript𝜋𝑒superscriptdelimited-[]Δ𝑥𝑎2\displaystyle+\mathbb{V}\_{P\_{0}}\big{[}V^{\pi\_{e}}(x)\big{]}+\mathbb{E}\_{P\_{0},\pi\_{e}}\Big{[}\omega(x,a)\Delta(x,a)^{2}-\mathbb{E}\_{\pi\_{e}}\big{[}\Delta(x,a)\big{]}^{2}\Big{]}.+ blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_V start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x ) ] + blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a ) roman\_Δ ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT - blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ roman\_Δ ( italic\_x , italic\_a ) ] start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] . | |
Unfortunately, the variance formulation ([9](#S4.E9 "9 ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is not suitable for our MRDR method, because its derivative w.r.t. β𝛽\betaitalic\_β contains a term Δ(x,a)=Q^(x,a)−Q(x,a)Δ𝑥𝑎^𝑄𝑥𝑎𝑄𝑥𝑎\Delta(x,a)=\widehat{Q}(x,a)-Q(x,a)roman\_Δ ( italic\_x , italic\_a ) = over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ) - italic\_Q ( italic\_x , italic\_a ) that cannot be estimated from samples as the true expected reward Q𝑄Qitalic\_Q is unknown. To address this issue, we derive a new formulations of the variance in Theorem [1](#Thmtheorem1 "Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation"), whose derivative does not contain such terms.
######
Theorem 1.
The variance of the DR estimator ([8](#S4.E8 "8 ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) for stochastic πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT may be written as the following two forms:
| | | | |
| --- | --- | --- | --- |
| | | n𝕍Pξπb(ρ^𝐷𝑅πe)=𝔼Pξπb[ω(x,a)(𝔼πe[ω(x,a′)Q^(x,a′;β)2]\displaystyle n\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}^{\pi\_{e}}\_{\text{DR}})=\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\Big{[}\omega(x,a)\Big{(}\mathbb{E}\_{\pi\_{e}}\left[\omega(x,a^{\prime})\widehat{Q}(x,a^{\prime};\beta)^{2}\right]italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT ) = blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a ) ( blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ) over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ; italic\_β ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] | |
| | | −V^πe(x;β)2−2r(x,a)(ω(x,a)Q^(x,a;β)−V^πe(x;β)))\displaystyle-\widehat{V}^{\pi\_{e}}(x;\beta)^{2}-2r(x,a)\big{(}\omega(x,a)\widehat{Q}(x,a;\beta)-\widehat{V}^{\pi\_{e}}(x;\beta)\big{)}\Big{)}- over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x ; italic\_β ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT - 2 italic\_r ( italic\_x , italic\_a ) ( italic\_ω ( italic\_x , italic\_a ) over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) - over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x ; italic\_β ) ) ) | |
| | | +ω(x,a)2r(x,a)2−𝔼πe[r(x,a)]2]+𝕍P0(𝔼πe[r(x,a)]),\displaystyle+\omega(x,a)^{2}r(x,a)^{2}-\mathbb{E}\_{\pi\_{e}}[r(x,a)]^{2}\Big{]}+\mathbb{V}\_{P\_{0}}\big{(}\mathbb{E}\_{\pi\_{e}}[r(x,a)]\big{)},+ italic\_ω ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_r ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT - blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] + blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] ) , | | (10) |
| | | n𝕍Pξπb(ρ^𝐷𝑅πe)=𝔼Pξπb[ω(x,a)qβ(x,a,r)⊤Ωπb(x)qβ(x,a,r)]⏞J(β)𝑛subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉subscriptsuperscript^𝜌subscript𝜋𝑒𝐷𝑅superscript⏞subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]𝜔𝑥𝑎subscript𝑞𝛽superscript𝑥𝑎𝑟topsubscriptΩsubscript𝜋𝑏𝑥subscript𝑞𝛽𝑥𝑎𝑟𝐽𝛽\displaystyle n\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}^{\pi\_{e}}\_{\text{DR}})=\overbrace{\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\big{[}\omega(x,a)q\_{\beta}(x,a,r)^{\top}\Omega\_{\pi\_{b}}(x)q\_{\beta}(x,a,r)\big{]}}^{J(\beta)}italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT ) = over⏞ start\_ARG blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω ( italic\_x , italic\_a ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x , italic\_a , italic\_r ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x , italic\_a , italic\_r ) ] end\_ARG start\_POSTSUPERSCRIPT italic\_J ( italic\_β ) end\_POSTSUPERSCRIPT | |
| | | +C,𝐶\displaystyle\qquad\qquad\quad\;+C,+ italic\_C , | | (11) |
where Ωπb(x)=diag[1/πb(a|x)]a∈𝒜−ee⊤subscriptnormal-Ωsubscript𝜋𝑏𝑥normal-diagsubscriptdelimited-[]1subscript𝜋𝑏conditional𝑎𝑥𝑎𝒜𝑒superscript𝑒top\Omega\_{\pi\_{b}}(x)=\mathrm{diag}\big{[}1/\pi\_{b}(a|x)\big{]}\_{a\in\mathcal{A}}-ee^{\top}roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x ) = roman\_diag [ 1 / italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) ] start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT - italic\_e italic\_e start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT is a positive semi-definite matrix (see Proposition [6](#Thmproposition6 "Proposition 6. ‣ D.3 Proof of Proposition 6 ‣ Appendix D Proofs of Section 4.1 ‣ More Robust Doubly Robust Off-policy Evaluation") in Appendix [D](#A4 "Appendix D Proofs of Section 4.1 ‣ More Robust Doubly Robust Off-policy Evaluation") fot the proof) with e=[1,…,1]⊤𝑒superscript1normal-…1
tope=[1,\ldots,1]^{\top}italic\_e = [ 1 , … , 1 ] start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT; qβ(x,a,r)=Dπe(x)Q¯(x;β)−𝕀(a)rsubscript𝑞𝛽𝑥𝑎𝑟subscript𝐷subscript𝜋𝑒𝑥normal-¯𝑄𝑥𝛽𝕀𝑎𝑟q\_{\beta}(x,a,r)=D\_{\pi\_{e}}(x)\bar{Q}(x;\beta)-\mathbb{I}(a)ritalic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x , italic\_a , italic\_r ) = italic\_D start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x ) over¯ start\_ARG italic\_Q end\_ARG ( italic\_x ; italic\_β ) - blackboard\_I ( italic\_a ) italic\_r a row vector with Dπe(x)=diag[πe(a|x)]a∈𝒜subscript𝐷subscript𝜋𝑒𝑥normal-diagsubscriptdelimited-[]subscript𝜋𝑒conditional𝑎𝑥𝑎𝒜D\_{\pi\_{e}}(x)=\mathrm{diag}\big{[}\pi\_{e}(a|x)\big{]}\_{a\in\mathcal{A}}italic\_D start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x ) = roman\_diag [ italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) ] start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT, row vector Q¯(x;β)=[Q^(x,a;β)]a∈𝒜normal-¯𝑄𝑥𝛽subscriptdelimited-[]normal-^𝑄𝑥𝑎𝛽𝑎𝒜\bar{Q}(x;\beta)=\big{[}\widehat{Q}(x,a;\beta)\big{]}\_{a\in\mathcal{A}}over¯ start\_ARG italic\_Q end\_ARG ( italic\_x ; italic\_β ) = [ over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) ] start\_POSTSUBSCRIPT italic\_a ∈ caligraphic\_A end\_POSTSUBSCRIPT, and the row vector of indicator functions 𝕀(a)=[𝟏{a′=a}]a′∈𝒜𝕀𝑎subscriptdelimited-[]1superscript𝑎normal-′𝑎superscript𝑎normal-′𝒜\mathbb{I}(a)=\big{[}\mathbf{1}\{a^{\prime}=a\}\big{]}\_{a^{\prime}\in\mathcal{A}}blackboard\_I ( italic\_a ) = [ bold\_1 { italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT = italic\_a } ] start\_POSTSUBSCRIPT italic\_a start\_POSTSUPERSCRIPT ′ end\_POSTSUPERSCRIPT ∈ caligraphic\_A end\_POSTSUBSCRIPT; and finally C=𝕍P0(𝔼πe[r(x,a)])−𝔼Pξπb[𝔼πe[r(x,a)]2]+𝔼Pξπb[(1+ω(x,a)−1πb2(a|x))ω(x,a)r(x,a)2]𝐶subscript𝕍subscript𝑃0subscript𝔼subscript𝜋𝑒delimited-[]𝑟𝑥𝑎subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]subscript𝔼subscript𝜋𝑒superscriptdelimited-[]𝑟𝑥𝑎2subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]1𝜔𝑥𝑎1subscriptsuperscript𝜋2𝑏conditional𝑎𝑥𝜔𝑥𝑎𝑟superscript𝑥𝑎2C=\mathbb{V}\_{P\_{0}}\big{(}\mathbb{E}\_{\pi\_{e}}[r(x,a)]\big{)}-\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\big{[}\mathbb{E}\_{\pi\_{e}}[r(x,a)]^{2}\big{]}+\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\Big{[}\big{(}1+\omega(x,a)-\frac{1}{\pi^{2}\_{b}(a|x)}\big{)}\omega(x,a)r(x,a)^{2}\Big{]}italic\_C = blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] ) - blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] + blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( 1 + italic\_ω ( italic\_x , italic\_a ) - divide start\_ARG 1 end\_ARG start\_ARG italic\_π start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG ) italic\_ω ( italic\_x , italic\_a ) italic\_r ( italic\_x , italic\_a ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ].
###### Proof.
See Appendix [D](#A4 "Appendix D Proofs of Section 4.1 ‣ More Robust Doubly Robust Off-policy Evaluation").
∎
The significance of the variance formulations of Theorem [1](#Thmtheorem1 "Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation") is 1) the variance of the DR estimator has no dependence on the unknown term ΔΔ\Deltaroman\_Δ, and thus, its derivative w.r.t. β𝛽\betaitalic\_β is computable, 2) the expectation in ([11](#S4.E11 "11 ‣ Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is w.r.t. Pξπbsubscriptsuperscript𝑃subscript𝜋𝑏𝜉P^{\pi\_{b}}\_{\xi}italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT, which makes it possible to replace J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) with its unbiased SAA
| | | |
| --- | --- | --- |
| | Jn(β)=1n∑i=1nω(xi,ai)qβ(xi,ai,ri)⊤Ωπb(xi)qβ(xi,ai,ri),subscript𝐽𝑛𝛽1𝑛superscriptsubscript𝑖1𝑛𝜔subscript𝑥𝑖subscript𝑎𝑖subscript𝑞𝛽superscriptsubscript𝑥𝑖subscript𝑎𝑖subscript𝑟𝑖topsubscriptΩsubscript𝜋𝑏subscript𝑥𝑖subscript𝑞𝛽subscript𝑥𝑖subscript𝑎𝑖subscript𝑟𝑖J\_{n}(\beta)=\frac{1}{n}\sum\_{i=1}^{n}\omega(x\_{i},a\_{i})q\_{\beta}(x\_{i},a\_{i},r\_{i})^{\top}\Omega\_{\pi\_{b}}(x\_{i})q\_{\beta}(x\_{i},a\_{i},r\_{i}),italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , | |
where 𝒟={(xi,ai,ri)}i=1n𝒟superscriptsubscriptsubscript𝑥𝑖subscript𝑎𝑖subscript𝑟𝑖𝑖1𝑛\mathcal{D}=\{(x\_{i},a\_{i},r\_{i})\}\_{i=1}^{n}caligraphic\_D = { ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) } start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT is the data set generated by the behavior policy πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT, such that the optimizer of Jn(β)subscript𝐽𝑛𝛽J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) converges to that of J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) almost surely, and 3) J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) in ([11](#S4.E11 "11 ‣ Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is a convex quadratic function of qβsubscript𝑞𝛽q\_{\beta}italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT, which in case that Q^(x,a;β)^𝑄𝑥𝑎𝛽\widehat{Q}(x,a;\beta)over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) is smooth, makes it possible to efficiently optimize Jn(β)subscript𝐽𝑛𝛽J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) with stochastic gradient descent. Moreover, when ∇βQ^(x,a;β)subscript∇𝛽^𝑄𝑥𝑎𝛽\nabla\_{\beta}\widehat{Q}(x,a;\beta)∇ start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) can be explicitly written, we can obtain βn\*∈argminβJn(β)subscriptsuperscript𝛽𝑛subscript𝛽subscript𝐽𝑛𝛽\beta^{\*}\_{n}\in\arg\min\_{\beta}J\_{n}(\beta)italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ), by solving the first order optimality condition ∑i=1nω(xi,ai)superscriptsubscript𝑖1𝑛𝜔subscript𝑥𝑖subscript𝑎𝑖\sum\_{i=1}^{n}\omega(x\_{i},a\_{i})∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT italic\_ω ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) qβ(xi,ai,ri)⊤Ωπb(xi)Dπe(xi)∇βQ¯(xi;β)=0subscript𝑞𝛽superscriptsubscript𝑥𝑖subscript𝑎𝑖subscript𝑟𝑖topsubscriptΩsubscript𝜋𝑏subscript𝑥𝑖subscript𝐷subscript𝜋𝑒subscript𝑥𝑖subscript∇𝛽¯𝑄subscript𝑥𝑖𝛽0q\_{\beta}(x\_{i},a\_{i},r\_{i})^{\top}\Omega\_{\pi\_{b}}(x\_{i})D\_{\pi\_{e}}(x\_{i})\nabla\_{\beta}\bar{Q}(x\_{i};\beta)=0italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) italic\_D start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) ∇ start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT over¯ start\_ARG italic\_Q end\_ARG ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_β ) = 0.
In case the evaluation policy is deterministic, the variance n𝕍Pξπb(ρ^DRπe)𝑛subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉subscriptsuperscript^𝜌subscript𝜋𝑒DRn\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}^{\pi\_{e}}\_{\text{DR}})italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT ) in ([1](#S4.Ex13 "Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) becomes
| | | |
| --- | --- | --- |
| | 𝔼Pξπb[𝟏{πe(x)=a}πb(a|x)⋅1−πb(a|x)πb(a|x)(r(x,a)−Q^(x,a;β))2]⏞J(β)superscript⏞subscript𝔼subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]⋅1subscript𝜋𝑒𝑥𝑎subscript𝜋𝑏conditional𝑎𝑥1subscript𝜋𝑏conditional𝑎𝑥subscript𝜋𝑏conditional𝑎𝑥superscript𝑟𝑥𝑎^𝑄𝑥𝑎𝛽2𝐽𝛽\displaystyle\overbrace{\mathbb{E}\_{P^{\pi\_{b}}\_{\xi}}\Big{[}\frac{\mathbf{1}\{\pi\_{e}(x)=a\}}{\pi\_{b}(a|x)}\cdot\frac{1-\pi\_{b}(a|x)}{\pi\_{b}(a|x)}\big{(}r(x,a)-\widehat{Q}(x,a;\beta)\big{)}^{2}\Big{]}}^{J(\beta)}over⏞ start\_ARG blackboard\_E start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ divide start\_ARG bold\_1 { italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_x ) = italic\_a } end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG ⋅ divide start\_ARG 1 - italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a | italic\_x ) end\_ARG ( italic\_r ( italic\_x , italic\_a ) - over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ] end\_ARG start\_POSTSUPERSCRIPT italic\_J ( italic\_β ) end\_POSTSUPERSCRIPT | |
| | +𝕍P0(𝔼πe[r(x,a)]).subscript𝕍subscript𝑃0subscript𝔼subscript𝜋𝑒delimited-[]𝑟𝑥𝑎\displaystyle\quad\;\;+\mathbb{V}\_{P\_{0}}\big{(}\mathbb{E}\_{\pi\_{e}}[r(x,a)]\big{)}.+ blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( blackboard\_E start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_r ( italic\_x , italic\_a ) ] ) . | |
This form of J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) allows us to find the model parameter of MRDR by solving the WLS
| | | | |
| --- | --- | --- | --- |
| | βn\*∈argminβJn(β)=1n∑i=1n𝟏{πe(xi)=ai}⋅1−πb(ai|xi)πb(ai|xi)2(r(xi,ai)−Q^(xi,ai;β))2.subscriptsuperscript𝛽𝑛subscript𝛽subscript𝐽𝑛𝛽1𝑛superscriptsubscript𝑖1𝑛⋅1subscript𝜋𝑒subscript𝑥𝑖subscript𝑎𝑖1subscript𝜋𝑏conditionalsubscript𝑎𝑖subscript𝑥𝑖subscript𝜋𝑏superscriptconditionalsubscript𝑎𝑖subscript𝑥𝑖2superscript𝑟subscript𝑥𝑖subscript𝑎𝑖^𝑄subscript𝑥𝑖subscript𝑎𝑖𝛽2\small\begin{split}\beta^{\*}\_{n}\in&\arg\min\_{\beta}J\_{n}(\beta)=\frac{1}{n}\sum\_{i=1}^{n}\mathbf{1}\{\pi\_{e}(x\_{i})=a\_{i}\}\cdot\\
&\quad\quad\frac{1-\pi\_{b}(a\_{i}|x\_{i})}{\pi\_{b}(a\_{i}|x\_{i})^{2}}\big{(}r(x\_{i},a\_{i})-\widehat{Q}(x\_{i},a\_{i};\beta)\big{)}^{2}.\end{split}start\_ROW start\_CELL italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ end\_CELL start\_CELL roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) = divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT bold\_1 { italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) = italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT } ⋅ end\_CELL end\_ROW start\_ROW start\_CELL end\_CELL start\_CELL divide start\_ARG 1 - italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT | italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT end\_ARG ( italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) - over^ start\_ARG italic\_Q end\_ARG ( italic\_x start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT . end\_CELL end\_ROW | | (12) |
Comparing this WLS with that in the DM approach in [5](#S3.E5 "5 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation"), we note that MRDR changes the weights from 1/πb1subscript𝜋𝑏1/\pi\_{b}1 / italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT to (1−πb)/πb21subscript𝜋𝑏superscriptsubscript𝜋𝑏2(1-\pi\_{b})/\pi\_{b}^{2}( 1 - italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ) / italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT, and this way increases the penalty of the samples whose actions are the same as those suggested by πesubscript𝜋𝑒\pi\_{e}italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT, but have low probability under πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT, and decreases the penalty of the rest of the samples.
###
4.2 MRDR Estimators for Reinforcement Learning
We now present our MRDR estimator for RL. We begin with the DR estimator for RL given by ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")). Similar to the bandits case reported in Section [4.1](#S4.SS1 "4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation"), we first derive a formula for the variance of the estimator ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")), whose derivative can be easily estimated from trajectories generated by the behavior policy. We then use this variance formulation as the objective function to find the MRDR model parameter.
######
Theorem 2.
The variance of the DR estimator in ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) can be written as
| | | | |
| --- | --- | --- | --- |
| | n𝕍Pξπb(ρ^𝐷𝑅πe)𝑛subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉subscriptsuperscript^𝜌subscript𝜋𝑒𝐷𝑅\displaystyle n\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}(\hat{\rho}^{\pi\_{e}}\_{\text{DR}})italic\_n blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT ) | =∑t=0T−1𝔼ℱ0:t−1[γ2tω0:t−12𝕍ℱt:T−1(ωt(R¯t:T−1\displaystyle=\sum\_{t=0}^{T-1}\mathbb{E}\_{\mathcal{F}\_{0\mathrel{\mathop{:}}t-1}}\bigg{[}\gamma^{2t}\omega^{2}\_{0\mathrel{\mathop{:}}t-1}\mathbb{V}\_{\mathcal{F}\_{t\mathrel{\mathop{:}}T-1}}\Big{(}\omega\_{t}\big{(}\bar{R}\_{t\mathrel{\mathop{:}}T-1}= ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT blackboard\_V start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT | |
| | | −Q^πe(xt,at;β)))+V^πe(xt;β)+Ct]\displaystyle-\widehat{Q}^{\pi\_{e}}(x\_{t},a\_{t};\beta)\big{)}\Big{)}+\widehat{V}^{\pi\_{e}}(x\_{t};\beta)+C\_{t}\bigg{]}- over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) ) + over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) + italic\_C start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ] | | (13) |
| | | +𝔼ℱ0:t[γ2t−2ω0:t−12𝕍ℱt+1:T−1(R¯t:T−1∣ℱt)],subscript𝔼subscriptℱ:0𝑡delimited-[]superscript𝛾2𝑡2subscriptsuperscript𝜔2:0𝑡1subscript𝕍subscriptℱ:𝑡1𝑇1conditionalsubscript¯𝑅:𝑡𝑇1subscriptℱ𝑡\displaystyle+\mathbb{E}\_{\mathcal{F}\_{0\mathrel{\mathop{:}}t}}\Big{[}\gamma^{2t-2}\omega^{2}\_{0\mathrel{\mathop{:}}t-1}\mathbb{V}\_{\mathcal{F}\_{t+1\mathrel{\mathop{:}}T-1}}(\bar{R}\_{t\mathrel{\mathop{:}}T-1}\mid\mathcal{F}\_{t})\Big{]},+ blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT 0 : italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t - 2 end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT blackboard\_V start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t + 1 : italic\_T - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ∣ caligraphic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ] , | |
where ℱt1:t2subscriptℱnormal-:subscript𝑡1subscript𝑡2\mathcal{F}\_{t\_{1}\mathrel{\mathop{:}}t\_{2}}caligraphic\_F start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT : italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT is the filtration induced by the sequence {xt1,at1,rt1,…,xt2,at2,rt2}∼Pξπbsimilar-tosubscript𝑥subscript𝑡1subscript𝑎subscript𝑡1subscript𝑟subscript𝑡1normal-…subscript𝑥subscript𝑡2subscript𝑎subscript𝑡2subscript𝑟subscript𝑡2superscriptsubscript𝑃𝜉subscript𝜋𝑏\{x\_{t\_{1}},a\_{t\_{1}},r\_{t\_{1}},\ldots,x\_{t\_{2}},a\_{t\_{2}},r\_{t\_{2}}\}\sim P\_{\xi}^{\pi\_{b}}{ italic\_x start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , … , italic\_x start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT , italic\_r start\_POSTSUBSCRIPT italic\_t start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT } ∼ italic\_P start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT, R¯t:T−1=r(xt,at)+γ∑τ=t+1T−1γτ−(t+1)ωt+1:jr(xτ,aτ)subscriptnormal-¯𝑅normal-:𝑡𝑇1𝑟subscript𝑥𝑡subscript𝑎𝑡𝛾superscriptsubscript𝜏𝑡1𝑇1superscript𝛾𝜏𝑡1subscript𝜔normal-:𝑡1𝑗𝑟subscript𝑥𝜏subscript𝑎𝜏\bar{R}\_{t\mathrel{\mathop{:}}T-1}=r(x\_{t},a\_{t})+\gamma\sum\_{\tau=t+1}^{T-1}\gamma^{\tau-(t+1)}\omega\_{t+1\mathrel{\mathop{:}}j}r(x\_{\tau},a\_{\tau})over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT = italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) + italic\_γ ∑ start\_POSTSUBSCRIPT italic\_τ = italic\_t + 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT italic\_τ - ( italic\_t + 1 ) end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT italic\_t + 1 : italic\_j end\_POSTSUBSCRIPT italic\_r ( italic\_x start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_τ end\_POSTSUBSCRIPT ), and Ct=Eℱt:T−1[ωt2(R¯t:T−1−𝔼ℱt+1:T−1[R¯t:T−1])2−2ωt2R¯t:T−1C\_{t}=E\_{\mathcal{F}\_{t\mathrel{\mathop{:}}T-1}}\Big{[}\omega\_{t}^{2}\big{(}\bar{R}\_{t\mathrel{\mathop{:}}T-1}-\mathbb{E}\_{\mathcal{F}\_{t+1\mathrel{\mathop{:}}T-1}}[\bar{R}\_{t\mathrel{\mathop{:}}T-1}]\big{)}^{2}-2\omega\_{t}^{2}\bar{R}\_{t\mathrel{\mathop{:}}T-1}italic\_C start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT = italic\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT - blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t + 1 : italic\_T - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ] ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT - 2 italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT (R¯t:T−1−𝔼ℱt+1:T−1[R¯t:T−1])]\big{(}\bar{R}\_{t\mathrel{\mathop{:}}T-1}-\mathbb{E}\_{\mathcal{F}\_{t+1\mathrel{\mathop{:}}T-1}}[\bar{R}\_{t\mathrel{\mathop{:}}T-1}]\big{)}\Big{]}( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT - blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t + 1 : italic\_T - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ] ) ] is a β𝛽\betaitalic\_β-independent term.
###### Proof.
The proof is by mathematical induction and is reported in Appendix [E](#A5 "Appendix E Proofs of Section 4.2 ‣ More Robust Doubly Robust Off-policy Evaluation").
∎
As opposed to the DR variance reported in Jiang & Li ([2016](#bib.bib11)), ours in ([2](#S4.Ex19 "Theorem 2. ‣ 4.2 MRDR Estimators for Reinforcement Learning ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) has no dependence on the DM bias ΔΔ\Deltaroman\_Δ, which contains the unknown term Qπesuperscript𝑄subscript𝜋𝑒Q^{\pi\_{e}}italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT, and plus, all its expectations are over Pξπbsubscriptsuperscript𝑃subscript𝜋𝑏𝜉P^{\pi\_{b}}\_{\xi}italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT. This allows us to easily compute the MRDR model parameter from the gradient of ([2](#S4.Ex19 "Theorem 2. ‣ 4.2 MRDR Estimators for Reinforcement Learning ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")).
Let’s define β\*∈argminβ∈ℝκ𝕍Pξπb(ρ^DRπe(β))superscript𝛽subscript𝛽superscriptℝ𝜅subscript𝕍subscriptsuperscript𝑃subscript𝜋𝑏𝜉superscriptsubscript^𝜌DRsubscript𝜋𝑒𝛽\beta^{\*}\in\arg\min\_{\beta\in\mathbb{R}^{\kappa}}\mathbb{V}\_{P^{\pi\_{b}}\_{\xi}}\big{(}\hat{\rho}\_{\text{DR}}^{\pi\_{e}}(\beta)\big{)}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT blackboard\_V start\_POSTSUBSCRIPT italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUBSCRIPT DR end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_β ) ) as the minimizer of the DR variance. We may write β\*superscript𝛽\beta^{\*}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT using the variance formulation of Theorem [2](#Thmtheorem2 "Theorem 2. ‣ 4.2 MRDR Estimators for Reinforcement Learning ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation"), and after dropping the β𝛽\betaitalic\_β-independent terms, as β\*∈argminβ∈ℝκ∑t=0T−1𝔼ℱ0:t−1[γ2tω0:t−12𝕍ℱt(ωt(R¯t:T−1−Q^πe(xt,at;β))+V^πe(xt;β))]superscript𝛽subscript𝛽superscriptℝ𝜅superscriptsubscript𝑡0𝑇1subscript𝔼subscriptℱ:0𝑡1delimited-[]superscript𝛾2𝑡subscriptsuperscript𝜔2:0𝑡1subscript𝕍subscriptℱ𝑡subscript𝜔𝑡subscript¯𝑅:𝑡𝑇1superscript^𝑄subscript𝜋𝑒subscript𝑥𝑡subscript𝑎𝑡𝛽superscript^𝑉subscript𝜋𝑒subscript𝑥𝑡𝛽\beta^{\*}\in\arg\min\_{\beta\in\mathbb{R}^{\kappa}}\sum\_{t=0}^{T-1}\mathbb{E}\_{\mathcal{F}\_{0\mathrel{\mathop{:}}t-1}}\Big{[}\gamma^{2t}\omega^{2}\_{0\mathrel{\mathop{:}}t-1}\mathbb{V}\_{\mathcal{F}\_{t}}\Big{(}\omega\_{t}\big{(}\bar{R}\_{t\mathrel{\mathop{:}}T-1}-\widehat{Q}^{\pi\_{e}}(x\_{t},a\_{t};\beta)\big{)}+\widehat{V}^{\pi\_{e}}(x\_{t};\beta)\Big{)}\Big{]}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT blackboard\_V start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) + over^ start\_ARG italic\_V end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) ].
Similar to the derivation of ([11](#S4.E11 "11 ‣ Theorem 1. ‣ 4.1 MRDR Estimators for Contextual Bandits ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) for bandits, we can show that
| | | | | |
| --- | --- | --- | --- | --- |
| | β\*∈argsuperscript𝛽\displaystyle\beta^{\*}\in\argitalic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT ∈ roman\_arg | minβ∈ℝκJ(β)=∑t=0T−1γ2t𝔼ℱ0:t−1[ω0:t−12⋅ωt⋅\displaystyle\min\_{\beta\in\mathbb{R}^{\kappa}}J(\beta)=\sum\_{t=0}^{T-1}\gamma^{2t}\mathbb{E}\_{\mathcal{F}\_{0\mathrel{\mathop{:}}t-1}}\big{[}\omega\_{0\mathrel{\mathop{:}}t-1}^{2}\cdot\omega\_{t}\cdotroman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_J ( italic\_β ) = ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT blackboard\_E start\_POSTSUBSCRIPT caligraphic\_F start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ italic\_ω start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ⋅ italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ⋅ | | (14) |
| | | qβ(xt,at,R¯t:T−1)⊤Ωπb(xt)qβ(xt,at,R¯t:T−1)].\displaystyle q\_{\beta}(x\_{t},a\_{t},\bar{R}\_{t\mathrel{\mathop{:}}T-1})^{\top}\Omega\_{\pi\_{b}}(x\_{t})q\_{\beta}(x\_{t},a\_{t},\bar{R}\_{t\mathrel{\mathop{:}}T-1})\big{]}.italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ) ] . | |
As shown in Proposition [6](#Thmproposition6 "Proposition 6. ‣ D.3 Proof of Proposition 6 ‣ Appendix D Proofs of Section 4.1 ‣ More Robust Doubly Robust Off-policy Evaluation"), J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) is a quadratic convex function of qβsubscript𝑞𝛽q\_{\beta}italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT, which means that if the approximation Q^πe(⋅,⋅;β)superscript^𝑄subscript𝜋𝑒⋅⋅𝛽\widehat{Q}^{\pi\_{e}}(\cdot,\cdot;\beta)over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( ⋅ , ⋅ ; italic\_β ) is smooth in β𝛽\betaitalic\_β, then this problem can be effectively solved by gradient descent. Since the expectation in ([14](#S4.E14 "14 ‣ 4.2 MRDR Estimators for Reinforcement Learning ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is w.r.t. Pξπbsubscriptsuperscript𝑃subscript𝜋𝑏𝜉P^{\pi\_{b}}\_{\xi}italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT, we may use the trajectories in 𝒟𝒟\mathcal{D}caligraphic\_D (generated by πbsubscript𝜋𝑏\pi\_{b}italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT), replace J(β)𝐽𝛽J(\beta)italic\_J ( italic\_β ) with its unbiased SAA, Jn(β)subscript𝐽𝑛𝛽J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ), and solve it for β𝛽\betaitalic\_β, i.e.,
| | | | | |
| --- | --- | --- | --- | --- |
| | βn\*∈subscriptsuperscript𝛽𝑛absent\displaystyle\beta^{\*}\_{n}\initalic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ∈ | argminβ∈ℝκJn(β)=∑i=1n∑t=0T−1γ2t(ω0:t−1(i))2⋅ωt(i)⋅\displaystyle\arg\min\_{\beta\in\mathbb{R}^{\kappa}}J\_{n}(\beta)=\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\gamma^{2t}(\omega^{(i)}\_{0\mathrel{\mathop{:}}t-1})^{2}\cdot\omega\_{t}^{(i)}\cdotroman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) = ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT ( italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ⋅ italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT ⋅ | | (15) |
| | | qβ(xt(i),at(i),R¯t:T−1(i))⊤Ωπb(xt(i))qβ(xt(i),at(i),R¯t:T−1(i)).subscript𝑞𝛽superscriptsubscriptsuperscript𝑥𝑖𝑡subscriptsuperscript𝑎𝑖𝑡subscriptsuperscript¯𝑅𝑖:𝑡𝑇1topsubscriptΩsubscript𝜋𝑏subscriptsuperscript𝑥𝑖𝑡subscript𝑞𝛽subscriptsuperscript𝑥𝑖𝑡subscriptsuperscript𝑎𝑖𝑡subscriptsuperscript¯𝑅𝑖:𝑡𝑇1\displaystyle q\_{\beta}(x^{(i)}\_{t},a^{(i)}\_{t},\bar{R}^{(i)}\_{t\mathrel{\mathop{:}}T-1})^{\top}\Omega\_{\pi\_{b}}(x^{(i)}\_{t})q\_{\beta}(x^{(i)}\_{t},a^{(i)}\_{t},\bar{R}^{(i)}\_{t\mathrel{\mathop{:}}T-1}).italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_R end\_ARG start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_R end\_ARG start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ) . | |
Since Jn(β)subscript𝐽𝑛𝛽J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) is strongly consistent, βn\*→β\*→subscriptsuperscript𝛽𝑛superscript𝛽\beta^{\*}\_{n}\rightarrow\beta^{\*}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT → italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT almost surely. If we can explicitly write ∇βQ^(x,a;β)subscript∇𝛽^𝑄𝑥𝑎𝛽\nabla\_{\beta}\widehat{Q}(x,a;\beta)∇ start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT over^ start\_ARG italic\_Q end\_ARG ( italic\_x , italic\_a ; italic\_β ), then βn\*subscriptsuperscript𝛽𝑛\beta^{\*}\_{n}italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT is the solution of equation 0=∑i=1n∑t=0T−1γ2t(ω0:t−1(i))2ωt(i)0superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝑇1superscript𝛾2𝑡superscriptsubscriptsuperscript𝜔𝑖:0𝑡12superscriptsubscript𝜔𝑡𝑖0=\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\gamma^{2t}(\omega^{(i)}\_{0\mathrel{\mathop{:}}t-1})^{2}\omega\_{t}^{(i)}0 = ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT ( italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT qβ(xt(i),at(i),R¯t:T−1(i))⊤Ωπb(xt(i))Dπe(xt(i))∇βQ¯(xt(i);β)subscript𝑞𝛽superscriptsubscriptsuperscript𝑥𝑖𝑡subscriptsuperscript𝑎𝑖𝑡subscriptsuperscript¯𝑅𝑖:𝑡𝑇1topsubscriptΩsubscript𝜋𝑏subscriptsuperscript𝑥𝑖𝑡subscript𝐷subscript𝜋𝑒subscriptsuperscript𝑥𝑖𝑡subscript∇𝛽¯𝑄subscriptsuperscript𝑥𝑖𝑡𝛽q\_{\beta}(x^{(i)}\_{t},a^{(i)}\_{t},\bar{R}^{(i)}\_{t\mathrel{\mathop{:}}T-1})^{\top}\Omega\_{\pi\_{b}}(x^{(i)}\_{t})D\_{\pi\_{e}}(x^{(i)}\_{t})\nabla\_{\beta}\bar{Q}(x^{(i)}\_{t};\beta)italic\_q start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_R end\_ARG start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT ⊤ end\_POSTSUPERSCRIPT roman\_Ω start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) italic\_D start\_POSTSUBSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) ∇ start\_POSTSUBSCRIPT italic\_β end\_POSTSUBSCRIPT over¯ start\_ARG italic\_Q end\_ARG ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) .
In case the evaluation policy is deterministic, we can further simplify Jn(β)subscript𝐽𝑛𝛽J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) and derive the model parameter for MRDR by solving the following WLS problem:
| | | | |
| --- | --- | --- | --- |
| | Jn(β)subscript𝐽𝑛𝛽\displaystyle J\_{n}(\beta)italic\_J start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ( italic\_β ) | =1n∑i=1n∑t=0T−1γ2t(ω0:t−1(i))2ωt(i)𝟏{πe(xt(i))=at(i)}absent1𝑛superscriptsubscript𝑖1𝑛superscriptsubscript𝑡0𝑇1superscript𝛾2𝑡superscriptsubscriptsuperscript𝜔𝑖:0𝑡12superscriptsubscript𝜔𝑡𝑖1subscript𝜋𝑒subscriptsuperscript𝑥𝑖𝑡subscriptsuperscript𝑎𝑖𝑡\displaystyle=\frac{1}{n}\sum\_{i=1}^{n}\sum\_{t=0}^{T-1}\gamma^{2t}(\omega^{(i)}\_{0\mathrel{\mathop{:}}t-1})^{2}\omega\_{t}^{(i)}\mathbf{1}\{\pi\_{e}(x^{(i)}\_{t})=a^{(i)}\_{t}\}= divide start\_ARG 1 end\_ARG start\_ARG italic\_n end\_ARG ∑ start\_POSTSUBSCRIPT italic\_i = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_n end\_POSTSUPERSCRIPT ∑ start\_POSTSUBSCRIPT italic\_t = 0 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_T - 1 end\_POSTSUPERSCRIPT italic\_γ start\_POSTSUPERSCRIPT 2 italic\_t end\_POSTSUPERSCRIPT ( italic\_ω start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT 0 : italic\_t - 1 end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT italic\_ω start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT bold\_1 { italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT } | |
| | | 1−πb(at(i)|xt(i))πb(at(i)|xt(i))2(R¯t:T−1(i)−Q^πe(xt(i),at(i);β))2.1subscript𝜋𝑏conditionalsubscriptsuperscript𝑎𝑖𝑡subscriptsuperscript𝑥𝑖𝑡subscript𝜋𝑏superscriptconditionalsubscriptsuperscript𝑎𝑖𝑡subscriptsuperscript𝑥𝑖𝑡2superscriptsubscriptsuperscript¯𝑅𝑖:𝑡𝑇1superscript^𝑄subscript𝜋𝑒subscriptsuperscript𝑥𝑖𝑡subscriptsuperscript𝑎𝑖𝑡𝛽2\displaystyle\frac{1-\pi\_{b}(a^{(i)}\_{t}|x^{(i)}\_{t})}{\pi\_{b}(a^{(i)}\_{t}|x^{(i)}\_{t})^{2}}\big{(}\bar{R}^{(i)}\_{t\mathrel{\mathop{:}}T-1}-\widehat{Q}^{\pi\_{e}}(x^{(i)}\_{t},a^{(i)}\_{t};\beta)\big{)}^{2}.divide start\_ARG 1 - italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) end\_ARG start\_ARG italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT ( italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT | italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT end\_ARG ( over¯ start\_ARG italic\_R end\_ARG start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t : italic\_T - 1 end\_POSTSUBSCRIPT - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT , italic\_a start\_POSTSUPERSCRIPT ( italic\_i ) end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT . | | (16) |
The intuition behind the weights in WLS ([4.2](#S4.Ex23 "4.2 MRDR Estimators for Reinforcement Learning ‣ 4 More Robust Doubly Robust Estimators ‣ More Robust Doubly Robust Off-policy Evaluation")) is 1) to adjust the difference between the occupancy measures of the behavior and evaluation policies, and 2) to increase the penalty of the policy discrepancy term 𝟏{πe(xt)=at}1subscript𝜋𝑒subscript𝑥𝑡subscript𝑎𝑡\mathbf{1}\{\pi\_{e}(x\_{t})=a\_{t}\}bold\_1 { italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT ) = italic\_a start\_POSTSUBSCRIPT italic\_t end\_POSTSUBSCRIPT }.
###
4.3 Other Properties of the MRDR Estimators
##### Strong Consistency
Similar to the analysis in Thomas & Brunskill ([2016](#bib.bib31)) for weighted DR, we prove (in Appendix [F](#A6 "Appendix F Proofs of Section 4.3 ‣ More Robust Doubly Robust Off-policy Evaluation")) that the MRDR estimators are strongly consistent, i.e., limn→∞ρ^MRDR,nπe(βn\*)=ρπesubscript→𝑛subscriptsuperscript^𝜌subscript𝜋𝑒MRDR𝑛subscriptsuperscript𝛽𝑛superscript𝜌subscript𝜋𝑒\lim\_{n\rightarrow\infty}\hat{\rho}^{\pi\_{e}}\_{\text{MRDR},n}(\beta^{\*}\_{n})=\rho^{\pi\_{e}}roman\_lim start\_POSTSUBSCRIPT italic\_n → ∞ end\_POSTSUBSCRIPT over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT MRDR , italic\_n end\_POSTSUBSCRIPT ( italic\_β start\_POSTSUPERSCRIPT \* end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) = italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT almost surely. This implies that MRDR is a well-posed OPE estimator.
##### Asymptotic Optimality
The MRDR estimator, by construction, has the lowest variance among the DR estimators of the form ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")). On the other hand, the semi-parametric theory in multivariate regression (Robins et al., [1994](#bib.bib26)) states that without extra assumption on the data distribution, the class of unbiased, consistent and asymptotically normal OPE estimators is asymptotically equivalent to the DR estimators in ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")). Utilizing this result, we can show that the MRDR estimators are asymptotically optimal (i.e., have minimum variance) in this class of estimators.
##### MRDR Extensions
Similar to Thomas & Brunskill ([2016](#bib.bib31)), we can derive the weighted MRDR estimator by replacing the IS part of the MRDR estimator in ([7](#S3.E7 "7 ‣ 3.3 Doubly Robust Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) with (per-step) weighted importance sampling. This introduces bias, but potentially reduces its variance, and thus, its MSE.
Throughout the paper, we assumed that the data has been generated by a single behavior policy. We can extend our MRDR results to the case that there are more than one behavior policy by replacing the IS part of our estimator with fused importance sampling (Peshkin & Shelton, [2002](#bib.bib21)).
5 Experiments
--------------
In this section, we demonstrate the effectiveness of the proposed MRDR estimation by comparing it with other state-of-the art methods from Section [3](#S3 "3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation") on both contextual bandit and RL benchmark problems.
###
5.1 Contextual Bandit
Using the 9999 benchmark experiments described in Dudík et al. ([2011](#bib.bib6)), we evaluate the OPE algorithms using the standard classification data-set from the UCI repository. Here we follow the same procedure of transforming a classification data-set into a contextual bandit dataset. For the sake of brevity, detailed descriptions of the experimental setup will be deferred to the appendix.
Given a deterministic policy π𝜋\piitalic\_π, which is a logistic regression model trained by the classification data-set, we discuss three methods of transforming it into stochastic policies. The first one, which is known as *friendly softening*, constructs a stochastic policy with the following smoothing procedure: Given two constants α𝛼\alphaitalic\_α and β𝛽\betaitalic\_β, and a uniform (continuous) random variable u∈[−0.5,0.5]𝑢0.50.5u\in[-0.5,0.5]italic\_u ∈ [ - 0.5 , 0.5 ]. For each a∈{1,…,l}𝑎1…𝑙a\in\{1,\ldots,l\}italic\_a ∈ { 1 , … , italic\_l }, whenever π(x)=a𝜋𝑥𝑎\pi(x)=aitalic\_π ( italic\_x ) = italic\_a, the stochastic policy πα,β(x)subscript𝜋𝛼𝛽𝑥\pi\_{\alpha,\beta}(x)italic\_π start\_POSTSUBSCRIPT italic\_α , italic\_β end\_POSTSUBSCRIPT ( italic\_x ) returns a𝑎aitalic\_a with probability α+β×u𝛼𝛽𝑢\alpha+\beta\times uitalic\_α + italic\_β × italic\_u, and it returns k𝑘kitalic\_k, which is a realization of the uniform (discrete) random variable in {1,…,l}∖{a}1…𝑙𝑎\{1,\ldots,l\}\setminus\{a\}{ 1 , … , italic\_l } ∖ { italic\_a } with probability 1−(α+β×u)l−11𝛼𝛽𝑢𝑙1\frac{1-(\alpha+\beta\times u)}{l-1}divide start\_ARG 1 - ( italic\_α + italic\_β × italic\_u ) end\_ARG start\_ARG italic\_l - 1 end\_ARG. The second one, which is known as *adversarial softening*, constructs a stochastic policy πα,β(x)subscript𝜋𝛼𝛽𝑥\pi\_{\alpha,\beta}(x)italic\_π start\_POSTSUBSCRIPT italic\_α , italic\_β end\_POSTSUBSCRIPT ( italic\_x ) from policy π𝜋\piitalic\_π in a similar fashion. Whenever π(x)=a𝜋𝑥𝑎\pi(x)=aitalic\_π ( italic\_x ) = italic\_a, πα,β(x)subscript𝜋𝛼𝛽𝑥\pi\_{\alpha,\beta}(x)italic\_π start\_POSTSUBSCRIPT italic\_α , italic\_β end\_POSTSUBSCRIPT ( italic\_x ) returns k≠a𝑘𝑎k\neq aitalic\_k ≠ italic\_a with probability α+β×u𝛼𝛽𝑢\alpha+\beta\times uitalic\_α + italic\_β × italic\_u, and it returns k~~𝑘\tilde{k}over~ start\_ARG italic\_k end\_ARG, which is a realization of the uniform (discrete) random variable in {1,…,l}1…𝑙\{1,\ldots,l\}{ 1 , … , italic\_l } with probability 1−(α+β×u)l1𝛼𝛽𝑢𝑙\frac{1-(\alpha+\beta\times u)}{l}divide start\_ARG 1 - ( italic\_α + italic\_β × italic\_u ) end\_ARG start\_ARG italic\_l end\_ARG. The third one, which is the *neutral policy*, is a uniformly random policy. We will use these methods to construct behavior and evaluation policies. Table [1](#S5.T1 "Table 1 ‣ 5.1 Contextual Bandit ‣ 5 Experiments ‣ More Robust Doubly Robust Off-policy Evaluation") summarizes their specifications.
Here we compare the MRDR method with the direct method (DM), the importance sampling (IS) method and two doubly robust (DR) estimators. The model parameter of the DM estimator is obtained by solving the SAA of the following problem: βDM∈argminβ∈ℝκ𝔼(x,a)∼Pξπb[(Qπe(x,a)−Q^πe(x,a;β))2]subscript𝛽DMsubscript𝛽superscriptℝ𝜅subscript𝔼similar-to𝑥𝑎subscriptsuperscript𝑃subscript𝜋𝑏𝜉delimited-[]superscriptsuperscript𝑄subscript𝜋𝑒𝑥𝑎superscript^𝑄subscript𝜋𝑒𝑥𝑎𝛽2\beta\_{\text{DM}}\in\arg\min\_{\beta\in\mathbb{R}^{\kappa}}\mathbb{E}\_{(x,a)\sim P^{\pi\_{b}}\_{\xi}}[(Q^{\pi\_{e}}(x,a)-\widehat{Q}^{\pi\_{e}}(x,a;\beta))^{2}]italic\_β start\_POSTSUBSCRIPT DM end\_POSTSUBSCRIPT ∈ roman\_arg roman\_min start\_POSTSUBSCRIPT italic\_β ∈ blackboard\_R start\_POSTSUPERSCRIPT italic\_κ end\_POSTSUPERSCRIPT end\_POSTSUBSCRIPT blackboard\_E start\_POSTSUBSCRIPT ( italic\_x , italic\_a ) ∼ italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_b end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT [ ( italic\_Q start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ) - over^ start\_ARG italic\_Q end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ( italic\_x , italic\_a ; italic\_β ) ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT ], which means all samples are weighted according to data, without consideration of the visiting distribution induced by the evaluation policy. The model parameters of the DR estimator is optimized based on the DM methodologies described in ([2](#S3.E2 "2 ‣ 3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")). Besides the standard DR estimator we also include another alternative that is known as the DR0, which heuristically uses the model parameter from the vanilla DM method (which is called DM0, and it assigns uniform weights over samples).
Below are results over the five behavior policies and five algorithms on the benchmark datasets. (Due to page limit, only the results of Vehicle, SatImage, PenDigits and Letter are included in the main paper, see appendix for the remaining results.) We evaluate the accuracy of the estimation via root mean squares error (MSE): ∑j=1N(ρ^jπe−ρπe)2/Nsuperscriptsubscript𝑗1𝑁superscriptsubscriptsuperscript^𝜌subscript𝜋𝑒𝑗superscript𝜌subscript𝜋𝑒2𝑁\sqrt{\sum\_{j=1}^{N}(\hat{\rho}^{\pi\_{e}}\_{j}-\rho^{\pi\_{e}})^{2}/N}square-root start\_ARG ∑ start\_POSTSUBSCRIPT italic\_j = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ( over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT - italic\_ρ start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT ) start\_POSTSUPERSCRIPT 2 end\_POSTSUPERSCRIPT / italic\_N end\_ARG, where ρ^jπesubscriptsuperscript^𝜌subscript𝜋𝑒𝑗\widehat{\rho}^{\pi\_{e}}\_{j}over^ start\_ARG italic\_ρ end\_ARG start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT is the estimated value from the j𝑗jitalic\_j-th dataset. Furthermore, we perform a 95%percent9595\%95 % significance test *only* on MRDR and DR, with bold numbers indicating the corresponding method outperforms its counterpart significantly.
In the contextual bandit experiments, it’s clear that in most cases the proposed MRDR estimator is superior to all alternative estimators (statistical) significantly. Similar to the results reported in Dudík et al. ([2011](#bib.bib6)), the DM method incurs much higher MSE than other methods in all of the experiments. This is potentially due to the issue of high bias in model estimation when the sample-size is small.
In general the estimation error is increasing across rows from top to bottom. This is expected due to the increasing difficulties in the OPE tasks that is accounted by the increasing mis-matches between behavior and evaluation policies.
Although there are no theoretical justifications, in most cases the performance of DR estimators (with the DM method described in Section [3.1](#S3.SS1 "3.1 Direct Estimators ‣ 3 Existing Approaches to OPE ‣ More Robust Doubly Robust Off-policy Evaluation")) is better than that of DR0. This also illustrates the benefits of optimizing the model parameter based on the knowledge of trajectory distribution Pξπesubscriptsuperscript𝑃subscript𝜋𝑒𝜉P^{\pi\_{e}}\_{\xi}italic\_P start\_POSTSUPERSCRIPT italic\_π start\_POSTSUBSCRIPT italic\_e end\_POSTSUBSCRIPT end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_ξ end\_POSTSUBSCRIPT, which is generated by the evaluation policy.
Table 1: Behavior and Evaluation Policies
| | Policy | α𝛼\alphaitalic\_α | β𝛽\betaitalic\_β |
| --- | --- | --- | --- |
| Evaluation Policy | | 0.9 | 0 |
| Behavior Policies | Friendly I | 0.7 | 0.2 |
| Friendly II | 0.5 | 0.2 |
| Neutral | - | - |
| Adversary I | 0.3 | 0.2 |
| Adversary II | 0.5 | 0.2 |
Table 2: Vehicle
| Behavior Policy | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| Friendly I | 0.3273 | 0.0347 | 0.0217 | 0.0202 | 0.0224 |
| Friendly II | 0.3499 | 0.0517 | 0.0331 | 0.0318 | 0.0356 |
| Neutral | 0.4384 | 0.087 | 0.0604 | 0.0549 | 0.0722 |
| Adversary I | 0.405 | 0.0937 | 0.0616 | 0.0516 | 0.0769 |
| Adversary II | 0.405 | 0.1131 | 0.0712 | 0.0602 | 0.0952 |
Table 3: SatImage
| Behavior Policy | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| Friendly I | 0.2884 | 0.0128 | 0.0071 | 0.0063 | 0.0073 |
| Friendly II | 0.3328 | 0.0191 | 0.0107 | 0.0087 | 0.0119 |
| Neutral | 0.3848 | 0.0413 | 0.0246 | 0.0186 | 0.0335 |
| Adversary I | 0.3963 | 0.0459 | 0.027 | 0.0195 | 0.0383 |
| Adversary II | 0.4093 | 0.0591 | 0.0364 | 0.0262 | 0.0521 |
Table 4: PenDigits
| Behavior Policy | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| Friendly I | 0.4014 | 0.0103 | 0.0056 | 0.0037 | 0.0059 |
| Friendly II | 0.4628 | 0.0159 | 0.0092 | 0.0056 | 0.0194 |
| Neutral | 0.564 | 0.0450 | 0.0314 | 0.0138 | 0.0412 |
| Adversary I | 0.5861 | 0.0503 | 0.0366 | 0.0172 | 0.0472 |
| Adversary II | 0.5641 | 0.0646 | 0.0444 | 0.0188 | 0.0611 |
Table 5: Letter
| Behavior Policy | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| Friendly I | 0.392 | 0.0074 | 0.0056 | 0.0044 | 0.0057 |
| Friendly II | 0.4146 | 0.0102 | 0.0077 | 0.0054 | 0.0083 |
| Neutral | 0.4713 | 0.0467 | 0.0363 | 0.0315 | 0.0456 |
| Adversary I | 0.46 | 0.0587 | 0.0455 | 0.0385 | 0.0575 |
| Adversary II | 0.4728 | 0.0714 | 0.055 | 0.0481 | 0.0703 |
###
5.2 Reinforcement Learning
In this section we present the experimental results of OPE in reinforcement learning.
We first test the OPE algorithms on the standard domains ModelWin, ModelFail, and 4×4444\times 44 × 4 Maze, with behavior and evaluation policies used in Thomas & Brunskill ([2016](#bib.bib31)). The schematic diagram of the domains is shown in Figure [1](#S5.F1 "Figure 1 ‣ 5.2 Reinforcement Learning ‣ 5 Experiments ‣ More Robust Doubly Robust Off-policy Evaluation").
To demonstrate the scalability of the proposed OPE methods, we also test the OPE algorithms on the following two domains with continuous state space: Mountain Car and Cart Pole. To construct the stochastic behavior and evaluation policies, we first compute the optimal policy using standard RL algorithms such as SARSA and Q𝑄Qitalic\_Q-learning. Then these policies are constructed by applying friendly softening to the optimal policy with specific values of (α,β)𝛼𝛽(\alpha,\beta)( italic\_α , italic\_β ).
For both domains, the evaluation policy is constructed using (α,β)=(0.9,0.05)𝛼𝛽0.90.05(\alpha,\beta)=(0.9,0.05)( italic\_α , italic\_β ) = ( 0.9 , 0.05 ), and the behavior policy is constructed analogously using (α,β)=(0.8,0.05)𝛼𝛽0.80.05(\alpha,\beta)=(0.8,0.05)( italic\_α , italic\_β ) = ( 0.8 , 0.05 ).
Detailed explanations the experimental setups can be found in the appendix.
In the following experiments we set the discounting factor to be γ=1𝛾1\gamma=1italic\_γ = 1.
For both the ModelFail and ModelWin domains, the number of training trajectories is set to 64646464, for Maze, Mountain Car, and Cart Pole domains this number is set to 1024102410241024. The number of trajectories for sampling-based part of estimators varies from 32323232 to 512512512512 for the ModelWin, ModelFail, and Cart Pole domains, and varies from 128128128128 to 2048204820482048 for the Maze domain and Mountain Car domains.

Figure 1: Environments from Thomas & Brunskill ([2016](#bib.bib31)). Top left: ModelFail; Bottom left: ModelWin; Right: Maze
In all of the above experiments, we compare results of MRDR with DM, IS, DR, and DR0 estimations by their corresponding MSE values. Similarly, the bold numbers represent cases when the performance of the MRDR estimator is statistically significantly better than that of the DR estimator.
Similar to the contextual bandit setting, except for the ModelWin domain that is known to be in favor of the DM estimator (Thomas & Brunskill, [2016](#bib.bib31)), in most cases MRDR estimator has significantly lower MSE than other existing methods.
Furthermore, when the sample size of the evaluation trajectories increases, we also observe accuracy improvements on all estimators in every experiment. Similar to the contextual bandit setting, significant performance improvement can be observed when one switches from DR0 to DR in the RL experiments.
Table 6: ModelFail
| Sample Size | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| 32 | 0.07152 | 1.37601 | 0.18461 | 0.1698 | 1.16084 |
| 64 | 0.07152 | 1.07213 | 0.1314 | 0.11405 | 0.9046 |
| 128 | 0.07152 | 0.752 | 0.09901 | 0.08188 | 0.63571 |
| 256 | 0.07152 | 0.55955 | 0.06565 | 0.05527 | 0.47211 |
| 512 | 0.07152 | 0.39533 | 0.04756 | 0.03819 | 0.33391 |
Table 7: Modelwin
| Sample Size | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| 32 | 0.06182 | 0.78452 | 1.55244 | 1.46778 | 1.51858 |
| 64 | 0.06182 | 1.03207 | 1.13856 | 0.98433 | 1.40758 |
| 128 | 0.06182 | 0.90166 | 1.4195 | 1.27891 | 1.52634 |
| 256 | 0.06182 | 0.78507 | 1.03575 | 0.79849 | 1.10332 |
| 512 | 0.06182 | 0.55647 | 0.89655 | 0.66791 | 0.97128 |
Table 8: 4×4444\times 44 × 4 Maze
| Sample Size | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| 128 | 1.77598 | 6.68579 | 0.70465 | 0.57042 | 0.70969 |
| 256 | 1.77598 | 3.50346 | 0.69886 | 0.58871 | 0.70211 |
| 512 | 1.77598 | 2.64257 | 0.60124 | 0.58879 | 0.60338 |
| 1024 | 1.77598 | 1.45434 | 0.5201 | 0.4666 | 0.52148 |
| 2048 | 1.77598 | 0.89668 | 0.3932 | 0.31274 | 0.39425 |
Table 9: Mountain Car
| Sample Size | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| 128 | 17.80368 | 23.11318 | 16.14661 | 14.96227 | 19.46953 |
| 256 | 14.62359 | 14.82684 | 13.89212 | 12.48327 | 22.80573 |
| 512 | 13.22012 | 8.26484 | 8.01421 | 7.89474 | 7.96849 |
| 1024 | 10.24318 | 3.26843 | 3.03239 | 3.1359 | 9.16269 |
| 2048 | 10.91577 | 2.50591 | 2.75933 | 2.17138 | 8.25527 |
Table 10: Cart Pole
| Sample Size | DM | IS | DR | MRDR | DR0 |
| --- | --- | --- | --- | --- | --- |
| 32 | 3.92319 | 1.18213 | 0.34775 | 0.27208 | 0.40567 |
| 64 | 3.97312 | 0.82658 | 0.27905 | 0.2353 | 0.31494 |
| 128 | 3.92319 | 0.66174 | 0.18793 | 0.16455 | 0.21232 |
| 256 | 3.82333 | 0.62042 | 0.17091 | 0.16012 | 0.1915 |
| 512 | 3.80461 | 0.31021 | 0.08455 | 0.079 | 0.08946 |
6 Conclusions
--------------
In this paper, we proposed the class of more-robust doubly-robust (MRDR) estimators for off-policy evaluation in RL. In particular, we proposed a principled method to calculate the model in DR estimator, which aims at minimizing its variance. Furthermore, we showed that our estimator is consistent and asymptotically optimal in the class of unbiased, consistent and asymptotically normal estimators. Finally, we demonstrated the effectiveness of our MRDR estimator in bandits and RL benchmark problems.
Future work includes extending the MRDR estimator to the cases 1) when there are multiple behavior policies, 2) when the action set has a combinatorial structure, e.g., actions are in the form of slates (Swaminathan et al., [2017](#bib.bib29)), and 3) when the behavior policy is unknown.
|
8cf918fc-52d0-474d-9294-69a61bab31b7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Learning-theoretic agenda reading list
Recently, I'm receiving more and more requests for a self-study reading list for people interested in the learning-theoretic agenda. I created a standard list for that, but before now I limited myself to sending it to individual people in private, out of some sense of perfectionism: many of the entries on the list might not be the best sources for the topics and I haven't read all of them cover to cover myself. But, at this point it seems like it's better to publish a flawed list than wait for perfection that will never come. Also, commenters are encouraged to recommend alternative sources that they consider better, if they know any. So, without further adieu:
General math background
* Theoretical computer science
* "Computational Complexity: A Conceptual Perspective" by Goldreich (especially chapters 1, 2, 5, 10)
* “Lambda-Calculus and Combinators: An Introduction” by Hindley
* “Tree Automata Techniques and Applications” by Comon et al (mostly chapter 1)
* "Introductory Functional Analysis with Applications" by Kreyszig (especially chapters 1, 2, 3, 4)
* "Probability: Theory and Examples" by Durret (especially chapters 4, 5, 6)
* "Elements of Information Theory" by Cover and Thomas (especially chapter 2)
* “Game Theory, Alive” by Karlin and Peres
* “Categories for the Working Mathematician” by Mac Lane (especially parts I, III, IV and VI)
AI theory
* “Handbook of Markov Decision Processes” edited by Feinberg and Shwartz (especially chapters 1-3)
* “Aritifical Intelligence: A Modern Approach” by Russel and Norvig (especially chapter 17)
* "Machine Learning: From Theory to Algorithms" by Shalev-Shwarz and Ben-David (especially part I and chapter 21)
* "An Introduction to Computational Learning Theory" by Kearns and Vazirani (especially chapter 8)
* "Bandit Algorithms" by Lattimore and Szepesvari (especially parts II, III, V, VIII)
* Alternative/complementary: "Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems" by B
|
d93d3ec7-ae9a-4807-9ae8-3fc510632aed
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Link] You and Your Research
I've seen Richard Hamming's classic talk You And Your Research referenced several times on LessWrong and figured I would post the full version. The introduction is reproduced below:
> The title of my talk is, ``You and Your Research.'' It is not about managing research, it is about how you individually do your research. I could give a talk on the other subject - but it's not, it's about you. I'm not talking about ordinary run-of-the-mill research; I'm talking about great research. And for the sake of describing great research I'll occasionally say Nobel-Prize type of work. It doesn't have to gain the Nobel Prize, but I mean those kinds of things which we perceive are significant things. Relativity, if you want, Shannon's information theory, any number of outstanding theories - that's the kind of thing I'm talking about.
>
> Now, how did I come to do this study? At Los Alamos I was brought in to run the computing machines which other people had got going, so those scientists and physicists could get back to business. I saw I was a stooge. I saw that although physically I was the same, they were different. And to put the thing bluntly, I was envious. I wanted to know why they were so different from me. I saw Feynman up close. I saw Fermi and Teller. I saw Oppenheimer. I saw Hans Bethe: he was my boss. I saw quite a few very capable people. I became very interested in the difference between those who do and those who might have done.
>
> When I came to Bell Labs, I came into a very productive department. Bode was the department head at the time; Shannon was there, and there were other people. I continued examining the questions, ``Why?'' and ``What is the difference?'' I continued subsequently by reading biographies, autobiographies, asking people questions such as: ``How did you come to do this?'' I tried to find out what are the differences. And that's what this talk is about.
I consider this talk good and useful not only for those interested in research, but for
|
a1fcc903-4304-46f4-a7a1-a3c83ad669a2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Rationality Meetup Vienna
Discussion article for the meetup : Rationality Meetup Vienna
WHEN: 22 November 2014 03:00:00PM (+0100)
WHERE: Kaisermühlenstraße 24/2, 1220 Wien
agenda: - maybe go on with the goal setting and planning for the future - It's time for another open mic session - so please think about 30 minutes topics to offer :)
location: When arriving by U2 or Schnellbahn train: get out at station Stadlau take the exit towards Kaisermühlenstraße and simply cross the street the meetup is in the modern looking, greenish building right in front of your nose :) Important: Google maps doesn't recognise the address and hence shows the wrong location... !Important Notice! Since our usual room on the ground floor has already been booked by someone else, we will have our meetup in the room on the fourth floor (the same room we moved to last time after complications arose).
|
e7dbffdc-b666-465b-a702-45699a7f379a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Urbana-Champaign, Illinois
Discussion article for the meetup : Urbana-Champaign, Illinois
WHEN: 01 September 2013 02:00:00PM (-0500)
WHERE: Illini Union South Lounge 1401 W Green St Urbana, IL 61801
Meetup topic will again be determined by popular consensus at the actual meetup. I will have Zendo, Wits and Wagers (with cards that can be used independently for calibration games), and Pandemic, a cooperative strategy board game. 2PM Sunday in the Illini Union South Lounge seems to have worked for everyone this week, so I chose the same time next week, and this will probably become our permanent place and weekly time. Cross posted on the mailing list.
Discussion article for the meetup : Urbana-Champaign, Illinois
|
17663125-2345-475d-aa8c-e63c5cf4fefd
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Forecasting
Prediction markets and forecasting platforms are fascinating tools that bring together collective intelligence. In these markets, people buy and sell shares of potential outcomes, like election results or economic trends. The prices reflect what the crowd thinks is likely to happen, offering a snapshot of collective wisdom. Sometimes markets can tell you where you need to be to solve the puzzles you're facing in life.
Here's a twist: what if we could change reality to match our predictions? When individuals or organizations act on these forecasts, they can influence the outcomes they predict. It's a feedback loop where accurate predictions not only foresee the future but also help shape it.
Take elections, for instance. If prediction markets show a high chance of a particular candidate winning, campaign strategies, media coverage, and voter perceptions might shift to support this forecast. This can boost the candidate’s actual chances of winning.
In the business world, companies might use market forecasts to make decisions. If a company predicts high demand for a product, it can increase production, marketing, and distribution efforts, making the predicted demand more likely to come true.
I am captivated by the idea of making the territory match the map. By understanding and using this dynamic, I can not only get better at predicting but also help make those predictions come true. It’s a reminder of how interconnected our decisions are with the future we create.
|
182b3523-d4d4-4c6b-8a94-853fe4b6d78a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[link] Is Alu Life?
I recently read (in Dawkins' The Ancestor's Tale) about the Alu sequence, and went on to read about transposons generally. Having as I do a rather broad definition of life, I concluded that Alu (and others like it) are lifeforms in their own right, although parasitic ones. I found the potential ethical implications somewhat staggering, especially given the need to shut up and multiply those implications by the rather large number of transposon instances in a typical multicellular organism.
I have written out my thoughts on the subject, at http://jttlov.no-ip.org/writings/alulife.htm. I don't claim to have a well-worked out position, just a series of ideas and questions I feel to be worthy of discussion.
ETA: I have started editing the article based on the discussion below. For reference with the existing discussion, I have preserved a copy of the original article as well, linked from the current version.
|
2e581f69-b68e-4753-bbaf-882340be1e53
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Best career models for doing research?
Ideally, I'd like to save the world. One way to do that involves contributing academic research, which raises the question of what's the most effective way of doing that.
The traditional wisdom says if you want to do research, you should get a job in a university. But for the most part the system seems to be set up so that you first spend a long time working for someone else and research their ideas, after which you can lead your own group, but then most of your time will be spent on applying for grants and other administrative trivia rather than actually researching the interesting stuff. Also, in Finland at least, all professors need to also spend time doing teaching, so that's another time sink.
I suspect I would have more time to actually dedicate on research, and I could get doing it quicker, if I took a part-time job and did the research in my spare time. E.g. the recommended rates for a freelance journalist in Finland would allow me to spend a week each month doing work and three weeks doing research, of course assuming that I can pull off the freelance journalism part.
What (dis)advantages does this have compared to the traditional model?
Some advantages:
* Can spend more time on actual research.
* A lot more freedom with regard to what kind of research one can pursue.
* Cleaner mental separation between money-earning job and research time (less frustration about "I could be doing research now, instead of spending time on this stupid administrative thing").
* Easier to take time off from research if feeling stressed out.
Some disadvantages:
* Harder to network effectively.
* Need to get around journal paywalls somehow.
* Journals might be biased against freelance researchers.
* Easier to take time off from research if feeling lazy.
* Harder to combat akrasia.
* It might actually be better to spend some time doing research under others before doing it on your own.
EDIT: Note that while I certainly do appreciate comments specific to my situat
|
e0158520-453a-4c51-bcc3-9b9c3cbefca7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Introducing Collective Action for Existential Safety: 80+ actions individuals, organizations, and nations can take to improve our existential safety
Collective Action for Existential Safety is an initiative of the Center for Existential Safety, a new non-profit organization being formed in the United States.
Our central aim is to catalyze collective action to ensure humanity survives this decade. If we can achieve that, then we are likely to create an unimaginably good future for all.
We serve all Existential Safety Advocates globally. We believe artificial intelligence, nuclear weapons, engineered pandemics, and soon-to-be invented technologies threaten our civilization. The more of us working together to mitigate the risks, the better.
* Our offer: we created what we believe is the most comprehensive and practical list of actions individuals, organizations, and nations can take to improve our existential safety. It answers the question, “What can we do to help ensure our existential safety?” We also offer occasional all hands calls for the existential safety community, monthly strategy coordination calls for existential safety organization leaders, and collaborative safety calls for frontier AI developers.
* Our ask: we're fundraising and recruiting for the team. Happy to take on aligned co-founders and c-suite folks, as well as part-time volunteers. Please email or schedule a call if you’re interested in learning more.
Feedback is very much welcome, but ideally it would be offered in a kind, collaborative, and well-informed manner. We believe unkind and uninformed criticism in this community has very likely reduced its positive impact on the world. My favorite guidelines for this are here. Please directly suggest changes to our actions list here.
On a personal note, I’ve been preparing for this for 26+ years now. I have short timelines and a high p(doom). But I believe we can still collectively raise our p(eutopia) if we act now as if our lives depended on it. They do. This could also be humanity’s finest hour.
Join us at existentialsafety.org. Please share the site widely and engage with us on so
|
61e0edb2-23c0-4d42-a2ea-cf6cdfa9a1c2
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Rational discussion of politics
In a recent poll, many LW members expressed interest in a separate website for rational discussion of political topics. The website has been created, but we need a group of volunteers to help us test it and calibrate its recommendation system (see below).
If you would like to help (by participating in one or two discussions and giving us your feedback) please sign up here.
----------------------------------------
About individual recommendation system
All internet forums face a choice between freedom of speech and quality of debate. In absence of censorship, constructive discussions can be easily disrupted by the inflow of the mind-killed which causes the more intelligent participants to leave or descend to the same level.
Preserving quality thus usually requires at least one of the following methods:
1. Appointing censors (a.k.a. moderators).
2. Limiting membership.
3. Declaring certain topics (e.g., politics) off limits.
On the new website, we are going to experiment with a different method. In brief, the idea is to use an automated recommendation system which sorts content, raising the best comments to the top and (optionally) hiding the worst. The sorting is done based on the individual preferences, allowing each user to avoid what he or she (rather than moderators or anyone else) defines as low quality content. In this way we should be able to enhance quality without imposing limits on free speech.
UPDATE. The discussions are scheduled to start on May 1.
|
702f814b-2f81-4fbb-a8bc-14a8ee3bab72
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How much interest would there be in a fringe theories wiki?
I've been exploring a concept for a wiki recently. The idea would be that people contribute fringe theories and present the best evidence for that theory, perhaps by contrasting it with mainstream interpretations of the data. Some examples of pages on the wiki could include,
* CellBioGuy's theory that the paleocene-eocene thermal maximum was caused by an industrial civilization of birds living in Antarctica.
* Robin Hanson's theory that a panspermia sibling to Earth hosts an ancient advanced alien civilization, whose world government experienced complex system rot in the process of developing an Earth-monitoring system, resulting in bizarre UFO encounters with them that we sometimes hear about on Earth.
* The theory from various cryonicists, following Eric Drexler, that future nanotechnology will be sufficient to repair damage from vitrification and subsequent cryopreservation.
* Robin Gardiner's theory that the ship called "The Titanic" was in fact the ship Olympic, and was purposely sunk as part of an elaborate insurance scam.
* Scott Alexander's pseudo-religious theory that "There is an all-powerful, all-knowing logically necessary entity spawning all possible worlds and identical to the moral law."
The purpose would not be to make a determination to whether each theory was true or false, but rather just present the evidence.
Naturally, I'm more interested in theories that (1) have some sort of technical argument favoring it (regardless of credibility), (2) aren't merely moral or political theses in disguise, and (3) aren't already covered in sufficient detail in other places (unlike the JFK conspiracy theory). Wikipedia is a terrible place to do this, given their rules disallowing original research. Of course, without such constraints, there is a large risk that a Fringe Theories Wiki would attract bad editors, so some strict editing rules would still probably be required on the site.
|
9a6de0c0-455c-4180-aeb0-ebd352f29919
|
trentmkelly/LessWrong-43k
|
LessWrong
|
AI companies' eval reports mostly don't support their claims
AI companies claim that their models are safe on the basis of dangerous capability evaluations. OpenAI, Google DeepMind, and Anthropic publish reports intended to show their eval results and explain why those results imply that the models' capabilities aren't too dangerous.[1] Unfortunately, the reports mostly don't support the companies' claims. Crucially, the companies usually don't explain why they think the results, which often seem strong, actually indicate safety, especially for biothreat and cyber capabilities. (Additionally, the companies are undereliciting and thus underestimating their models' capabilities, and they don't share enough information for people on the outside to tell how bad this is.)
Bad explanation/contextualization
OpenAI biothreat evals: OpenAI says "several of our biology evaluations indicate our models are on the cusp of being able to meaningfully help novices create known biological threats, which would cross our high risk threshold." It doesn't say how it concludes this (or what results would change its mind or anything about how it thinks eval results translate to uplift). It reports results from four knowledge and troubleshooting bio evals. On the first, o3 does well and OpenAI observes "this evaluation is reaching saturation." On the rest, OpenAI matches or substantially outperforms the expert human baseline. These results seem to suggest that o3 does have dangerous bio capabilities; they certainly don't seem to rule it out. OpenAI doesn't attempt to explain why it thinks o3 doesn't have such capabilities.
DeepMind biothreat evals: DeepMind says Gemini 2.5 Pro doesn't have dangerous CBRN capabilities, explaining "it does not yet consistently or completely enable progress through key bottleneck stages." DeepMind mentions open-ended red-teaming; all it shares is results on six multiple-choice evals. It does not compare to human performance or offer other context, or say what would change its mind. For example, it's not clear whethe
|
1caa9707-b5d5-4408-926b-100d9b6f79d3
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What software projects would be helpful for different groups?
There are various meetups around the world that work on altruistic software development. Random Hacks of Kindness is at the top of my mind at the moment.
The main site appears to be down at the moment, but the Australian and Canadian sites are still up, and the Australian one is asking for projects which help with the COVID-19 response.
That got me wondering. What software projects would be high leverage and not currently saturated? Which of them are amenable to being worked on by groups of developers with mixed skills and backgrounds?
This can probably be broken down further into software for different groups. Healthcare workers probably have different needs in this time than people who are struggling to make the case for working from home. My gut feeling is that efforts that helps with social support and mental health support will also have high value over time.
|
9394e707-a896-4084-b5c2-27eda50bbbeb
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Nothing.
The Pythia inhales the vapors of the chasm and erupts into ecstatic epilepsy. The prophecy is delivered as a babbling of tongues.[1]
> In the future—not the distant future, but ten years, five—people will remember the internet as a brief dumb enthusiasm, like phrenology or the dirigible. They might still use computer networks to send an email or manage their bank accounts, but those networks will not be where culture or politics happens. The idea of spending all day online will seem as ridiculous as sitting down in front of a nice fire to read the phone book.
>
> You know, secretly, even if you’re pretending not to, that this thing is nearing exhaustion. There is simply nothing there online. All language has become rote, a half-assed performance: even the outraged mobs are screaming on autopilot. Even genuine crises can’t interrupt the tedium of it all, the bad jokes and predictable thinkpieces, spat-out enzymes to digest the world. ‘Leopards break into the temple and drink all the sacrificial vessels dry; it keeps happening; in the end, it can be calculated in advance and is incorporated into the ritual.’
Breathe deeper. Let the vapors flow through you.
Within five years, maybe three, the internet will self-immolate like a buddhist monk or a climate activist and vaporize into thin hot air—nothing. And we know this because the internet never was anything but an absolute nothing, a nothing that devours everything. All it has ever done, all it ever was for, is un-making, nulling and voiding. The World Wide Web has done this to Finland (which does not exist), to Birds (which are not real), and it is doing it to you.[2]
> When I’m listlessly killing time on the internet, there is nothing. The mind does not wander. I am not there. That rectangular hole spews out war crimes and cutesy comedies and affirmations and porn, all of it mixed together into one general-purpose informational goo, and I remain in its trance, the lifeless scroll, twitching against the screen
|
ab21e83b-db64-4b5c-890d-0b40da995f2b
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Information in risky technology races
*In this post, we (Nicholas Emery-Xu, Andrew Park, and Robert Trager) summarize the results of our paper modeling the role of information in risky technology races. The current version can be accessed* [*here*](https://drive.google.com/file/d/18j_wnA4HDMA3ofclLcfpgyV-0INMn1ZW/view?usp=sharing)*.*
1 Is revealing technological capability beneficial?
===================================================
Imagine in the not too distant future that actors are developing a new technology that gives the winner a discontinuous advantage, economic or geopolitical, over their rivals. Because of this advantage, actors are tempted to cut corners on safety to focus on capabilities research, producing a **safety-performance tradeoff.**[[1]](#fna13rob2xphv) In such a scenario, is it better if actors knew each other’s capability levels? On the one hand, public knowledge of capabilities prevents actors from mistakenly believing they are behind in capability and then [entering](https://forum.effectivealtruism.org/posts/cXBznkfoPJAjacFoT/are-you-really-in-a-race-the-cautionary-tales-of-szilard-and) a dangerous arms race. On the other hand, if actors learn they are close in capability, they may engage in a dangerous race to the bottom, cutting corners on safety to gain a small edge. In many cases, however, we do not observe actors engaging in such a race. Top AI labs, for example, regularly publish results on capabilities but continue to invest significantly in safety research, defying the predictions in Armstrong et al. (2016).[[2]](#fnyq64mm83qp) What’s going on? In our paper, we find that when the technological development path is even moderately uncertain, even actors who know they are close in capability are unwilling to engage in a dangerous race to the bottom because the expected returns to doing so aren’t worth the risk of an accident.
We build on the model in Armstrong et al. (2016)[[2]](#fnyq64mm83qp) (see “[AI race](https://forum.effectivealtruism.org/topics/ai-race)” for a useful description) to study a technology race with a safety-performance tradeoff in which capabilities investments are unknown, privately known, or publicly known. However, we argue that their results depend on two unrealistic assumptions: that the actor with higher capability investment wins the race for sure and that only the winner’s actions contribute to the risk of an accident. In our work, we relax these assumptions.
2 Decisiveness
==============
First, we argue that actors are unlikely to be certain about how additional units of capability investment translate into success in the race. In a race for a novel technology, there is likely to be randomness in the development process. For example, while Enrico Fermi was the first to use graphite as a neutron moderator, the lagging Soviets had actually already made such a discovery but failed to apply it to their nuclear program.[[3]](#fnmguva9e622j) We control for this level of randomness with a **decisiveness parameter.** If the parameter is zero, each actor is equally likely to win the race, regardless of effort. As it increases, the more capable actor is increasingly likely to win the race. Thus, for more decisive races, each additional unit of investment in capability is more likely to bring success.
What are the effects? At low levels of decisiveness, the randomness outweighs the benefits of taking risks, so the race is perfectly safe under all information states. As decisiveness increases, risk almost always increases. In addition, as decisiveness increases, at first the private information case is most dangerous. Only at high levels of decisiveness does the public information case become most dangerous. In other words, there is no **information hazard** unless the race is highly decisive. For less decisive races, it is better to reveal capabilities than keep them private.
3 Safety provision
==================
The other assumption that we challenge is that only the winner of the race contributes to safety. Instead, we add a **safety-sharing parameter** to allow both actors to contribute a weighted sum of overall safety. We might imagine, for example, that a lagging actor could conduct tests, either in preparation for the mature technology, or at a lower level of technological competence in response to the leader. We find that increasing the proportion of risk contributed by the losing actor produces two effects. First, a moral hazard effect increases risks. Just as in the provision of carbon emissions, the less actors bear the benefit of their own safety provision, the more they will shirk. Second, less capable actors increase their safety provision because their choices matter even if they lose the race. We find that the moral hazard effect is stronger, so reducing the winner’s share of safety provision increases expected disaster risk.
4 Conclusion
============
As a result of this work, we present three tentative conclusions. First, when deciding whether to incentivize actors to reveal capabilities, it is important to know their uncertainty over technological progress. At least at the early, uncertain stages of development, public knowledge of capabilities seems to be beneficial. Second, if losers of the race cause a disaster, policymakers should shift their focus away from low-capability to high-capability actors, providing them with incentives to reduce shirking, for example by sharing safety knowledge with the leader. Third, we caution against updating too strongly on any specific model (including our own!) and instead urge readers to reason through or empirically test its modeling assumptions in the context of real-world races.
1. **[^](#fnrefa13rob2xphv)**Robert Trager, Paolo Bova, Nicholas Emery-Xu, Eoghan Stafford, and Allan Dafoe, "Welfare Implications of Safety-Performance Tradeoffs in AI Safety Research," Working paper, August 2022.
2. **[^](#fnrefyq64mm83qp)**Stuart Armstrong, Nick Bostrom, and Carl Shulman, "Racing to the precipice: a model of artificial intelligence development," *AI & Society*, 31(2):201–206, May 2016, <http://link.springer.com/10.1007/s00146-015-0590-y>.
3. **[^](#fnrefmguva9e622j)**Toby Ord, "Lessons from the development of the atomic bomb," Working paper.
|
4310c3e2-1b39-4967-958c-3221a1ec707f
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
A closer look at chess scalings (into the past)
Introduction
============
I had explored [measuring AI or hardware overhang](https://www.lesswrong.com/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang) in August 2020 using chess. Hardware overhang is when sufficient compute is available, but the algorithms are suboptimal. I examined the strongest chess engine of 2020, Stockfish 8, performing at 3,400 ELO under tournament conditions. When reducing compute to 1997 levels (equivalent to a Pentium-II 300 MHz), its ELO score was still ~3,000. That is an important year: In 1997, the IBM supercomputer "Deep Blue" defeated the world chess champion Gary Kasparov. With Stockfish, no supercomputer would have been required. I estimated that SF8 drops to Kasparov level on a 486-DX4 100 MHz, available already in 1994. To sum it up, the hardware overhang in chess is about 10 years, or 2-3 orders of magnitude in compute.
About a year later, in July 2021, [Paul Christiano asked similar questions](https://www.lesswrong.com/posts/H6L7fuEN9qXDanQ6W/how-much-chess-engine-progress-is-about-adapting-to-bigger): How much compute would the old engine need to match the current engines? What is the influence of RAM (size and speed), opening books, endgame tables, pondering? Also, my old post gave some insights, but it can be improved by sharing the sources and making it reproducible. That's the aim of the current post (the other questions will be adressed in a later post).
Reproducing chess scaling from 2020
===================================
History of PC Programs (ELO by year)
------------------------------------
As a baseline of engine performance over the years, we plot the winner from the yearly [rating list of the Swedish Chess Computer Association](https://en.wikipedia.org/wiki/Swedish_Chess_Computer_Association). Run on contemporary hardware,
* The list begins in 1984 when the program "Novag Super Constellation" reached 1631 ELO running on a 6502 CPU at 4 MHz.
* By 2005, Shredder 9 surpassed human levels on an AMD Athlon 1200 MHz.
* Today (2020), the leading engine is Stockfish 12 running on an AMD 1800X at 3.6 GHz.
Human grandmasters
------------------
To compare human grandmasters, we take the ELO over time for [Kasparov](http://chessmetrics.com/cm/CM2/PlayerProfile.asp?Params=199510SSSSS3S062926000000151000000000000010100) and [Carlsen](https://ratings.fide.com/profile/1503014/chart). Carlsen's rating between 2003 and 2011 (age 13 to 21) grew from 2000 ELO to grandmaster strength, faster than any engine :-) *[Thanks to User Bucky for the correction]*
Deep Blue
---------
The marker for "Deep Blue" in the year 1997 is a bit [arbitrarily](https://www.reddit.com/r/chess/comments/7bm361/deep_blues_true_elo_rating/) set to 2900 ELO. At the time, Kasparov had 2860 ELO, Deep Blue won, although close.
Stockfish 8 experiment
----------------------
The main part is the Stockfish 8 experiment. How well does SF8 perform on slower PCs?
### As a baseline, we need to establish its ELO at a defined speed.
1. To obtain the speed baseline, we find that [SF8 makes 721 kNodes/s on an AMD Athlon 64 3500+ at 2.20 GHz](https://sites.google.com/site/computerschess/stockfish-chess-benchmarks).
2. We scale this linearly to 777 kNodes/s for the same CPU running at 2.4 GHz (+9%)
3. [SF8 achies 3302 ELO on an Athlon 64 X2 4600+ (2.4 GHz)](http://www.computerchess.org.uk/ccrl/4040/rating_list_all.html) in the CCRL Rating List, running 40 moves in 15 minutes (one has to dig into the side details to understand which CPU name tag is which CPU. 64bit 1 CPU is the Athlon; this can also be verified with the historical version of that list.). This is an important baseline, because it cross-calibrated to dozens of other engines.
4. With that established, we can calculate the ELO as a function of kNodes/s. An average game has 40 moves. The 40 moves in 15 minutes leave 22.5 seconds per move (on average). That's 17.5 MNodes per move to achieve 3302 ELO.
5. We benchmark our own machine, on which the experiments are run. This can be done with the Stockfish parameter "bench". For simplicity, suppose our machine performs at 10 x 777 kNodes/s = 7.8 MNodes/s. That's the ballpark of recent (2020) 4-core CPUs.
6. Now we want to perform a game at 17.5 MNodes per move, on a machine running at 7.8 MNodes/s. Clearly, each move can only take 2.24 seconds. The whole 40-game match duration is: 90 seconds.
### Execute the experiment
To build a ladder of SF towards slower machines, we let this version of SF8 play a set of games of 90s timecontrol versus half that (45s). The most well-established tool to compare chess engines is [cutechess-cli](https://github.com/cutechess/cutechess). It is a command-line interface to play two engines (or two versions of the same engine) against each other. In the end, it nicely summarizes the results and includes a differential ELO estimate. A command may be:
```
cutechess-cli -fcp cmd=stockfish proto=uci tc=40/90 -scp cmd=stockfish proto=uci tc=40/45 -games 100
```
How bad does the version perform with less compute? In this experiment, after running 100 games, we get 14 ELO difference. That's much less than the usual statement of 70 ELO. Why is that? We can see the same effect in similar experiments down by others ([1](http://www.talkchess.com/forum3/viewtopic.php?t=72834), [2](http://chess.ultimaiq.net/scalability.htm)): The ELO gain diminishes (flattens) at high compute. On the other hand, when we reduce compute to very low levels, the curve steepens dramatically. The full ELO loss result list from my experiment is for each halfing of compute:
| | | |
| --- | --- | --- |
| ELO | ELO Delta | kNodes/move |
| 3302 | | 17476.4 |
| 3288 | 14 | 8738.2 |
| 3268 | 20 | 4369.1 |
| 3240 | 28 | 2184.5 |
| 3205 | 35 | 1092.3 |
| 3097 | 108 | 546.1 |
| 3030 | 67 | 273.1 |
| 2977 | 53 | 136.5 |
| 2802 | 175 | 68.3 |
| 2716 | 86 | 34.1 |
| 2439 | 277 | 17.1 |
| 2238 | 201 | 8.5 |
| 1903 | 335 | 4.3 |
There is some jitter, despite increasing the number of games to 1,000 in the second half. Despite the jitter, we can clearly see the nonlinear ELO curve with compute:
The last thing we need to do is match the kNodes/move results to the old years. We may ask: In which year was the hardware available sufficient to play these kNodes/move in a usual tournament? This leaves some room for discussion. For 1997, should we choose a dual Pentium Pro 200, or a single Pentium 200 MMX? I believe it is reasonable to compare good CPUs of the time, without going overboard. After all, we're comparing chess on home computers. If we restrict it to <1000 USD CPUs for each year, we can find some SF8 benchmarking results across the web:
- AMD 5950X (2021): [71,485 kNodes/s](http://ipmanchess.yolasite.com/amd---intel-chess-bench.php)
- Pentium III 500 MHz (1999): [127 kNodes/s](https://sites.google.com/site/computerschess/stockfish-chess-benchmarks)
- Pentium 75 MHz (1995): [6.2 kNodes/s](https://sites.google.com/site/computerschess/stockfish-chess-benchmarks)
- 386DX-33 MHz (1989): [1 kNode/s](http://talkchess.com/forum3/viewtopic.php?f=2&t=63857&start=10)
There are many more such measurements found online, but for our purpose, this is sufficient. Caveats:
* Going back very far in time becomes difficult, because SF8 [needed to be recompiled to reduced instruction sets to make it work](http://talkchess.com/forum3/viewtopic.php?f=2&t=63857); and RAM was limited in the experiment.
* It is more reasonable to match the speed to more recent years: About 200 kNodes/s in 2000, and 100 MNodes/s today. Everything before, and in between, has a factor of a few of error in its match of nodes to year.
* On the other hand, seeing benchmarks of real PCs is useful, because it encompasses uncertainties such as RAM speed.
* In reality, when considering hardware overhang for future AI, we must also ask: How well could SF8 be adapted to older hardware? Just running it unchanged will leave some performance (factor of a few?) on the table. That's a question for software engineers and compiler optimizers.
We can now bring the approximate match of Nodes/s with the years together with the other data, and present the result:
This looks quantitatively different to my [first version](https://www.lesswrong.com/posts/75dnjiD8kv2khe9eQ/measuring-hardware-overhang), but is qualitatively similar.
* Again, a hardware overhang of ~10 years at maximum is visible: SF8 achieved Kasparov level in 1997
* This was only possible for contemporary PC engines of the year ~2006.
* In my old version, this was more like a 15 years gap. Back then, I had matched the speed to MIPS values for CPUs I found online.
* It is probably better to measure SF kNodes/s directly instead using a CPU speed proxy (MIPS, FLOPs, SPEC). Thus, I believe that the new figure is closer to reality.
In the next post, I will consider the other questions asked by Paul Christiano: How much compute would an old engine need to match current engines? What is the influence of opening books, endgame tables, pondering?
*Edit (15 July): Magnus Carlsen time series fixed*
|
27c280a3-eec3-4903-83cf-b9d3a27d970b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Can we create a function that provably predicts the optimization power of intelligences?
Follow up to Efficient Cross-domain Optimization
When I am skeptical that we will ever understand intelligence, I am skeptical that we will ever be able to reliably map a systems description onto its optimization power. This has implications for how well we will create intelligences and how well intelligences will be at self-improving.
Obviously we can't predict the effectiveness of an arbitrary program, due to rice's theorem and intelligence being a non-trivial property. So the best we can hope for is predicting the effectiveness of a set of programs. Is such a function possible? This is my take on the subject.
Let o ( p ) be a function that maps a program p to its optimization power.
Mu, Omegas younger brother has a challenge for you, you get to design a system and put it in a box with 20 red and 20 green balls, it will activate itself after 10 minutes and then have the goal of removing as many red balls from the box as possible in 10 minutes. You have to decide how whether it is going to remove more or less than 5 red balls from the box. You get transported to a nirvana if you predict correctly and your world gets turned into paper clips if you get it wrong.
You whip out your trusty o and make a program and the evaluate it using o and bet according to its evaluation.
Unknown to you Mu also has a copy of your o and runs it on the systems you put in the box. Those that return a high value from the optimization power measure, it destroys before they activate, those that have a low effectiveness it performs their goals for them. In the second case it is still p that causes the goal to be fulfilled as if p were different there would be a different amount that the goal is fulfilled. You can see it as inspiring pity in someone else to make them help, who would not have done otherwise. It is still winning.
So Mu forces o to be wrong, so o was not the reliable predictor of a set of programs optimization power we had hoped for, so we have a contradiction. Is
|
852c544e-91e4-43b0-a92a-01f6fcfee6a5
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Using machine learning to predict romantic compatibility: empirical results
Overview
For many people, having a satisfying romantic relationship is one of the most important aspects of life. Over the past 10 years, online dating websites have gained traction, and dating websites have access to large amounts of data that could be used to build predictive models to achieve this goal. Such data is seldom public, but Columbia business school professors Ray Fisman and Sheena Iyengar compiled a rich and relevant data set for their paper Gender Differences in Mate Selection: Evidence From a Speed Dating Experiment. Their main results were:
Women put greater weight on the intelligence and the race of partner, while men respond more to physical attractiveness. Moreover, men do not value women’s intelligence or ambition when it exceeds their own. Also, we find that women exhibit a preference for men who grew up in affluent neighborhoods. Finally, male selectivity is invariant to group size, while female selectivity is strongly increasing in group size.
I found the study through Andrew Gelman’s blog, where he wrote:
What I really want to do with these data is what I suggested to Ray and Sheena several years ago when they first told me about the study: a multilevel model that allows preferences to vary by person, not just by sex. Multilevel modeling would definitely be useful here, since you have something like 10 binary observations and 6 parameters to estimate for each person.
Several months ago I decided to pursue a career in data science, and with a view toward building my skills, I worked to build a model to predict when an individual participant will express interest in seeing a given partner again. Along with the goal of learning, I had the dual intent of contributing knowledge that had the potential, however slight, to help people find satisfying romantic relationships.
It’s unlikely that what I did will have practical applications (as basic research seldom does), but I did learn a great deal about many things, most having to do with data s
|
fcc80ece-61d8-4b61-8407-8bc8307350ca
|
trentmkelly/LessWrong-43k
|
LessWrong
|
LW is to rationality as AIXI is to intelligence
Apparently LW does a great job on refining rationality and dissolving confusions. But is it helpful when it comes to anything apart from designing Friendly AI, apart from a purely academic treatment of rationality? I'm currently unable to benefit from what I have so far read on LW, it actually made me even more unproductive, to an extent that I get nothing done anymore. Let me explain...
You have to know that I'm still in the process of acquiring a basic education. If I say basic, I mean basic. Since I got almost no formal education, what I do know (or know about) is largely on a very low level, yet I am plagued by problems that are themselves on a level that require the intellect and education of the folks here on LW. The problem with that is that I'm yet lacking most of the skills, tools and requisite know-how while the problems in question concern me as well. This often causes me to get stuck, I can't decide what to do. It also doesn't help much that I am the kind of person who is troubled by problems others probably don't even think about. An example from when I was much younger (around the age of 13) is when I was troubled by the fact that I could accidentally squash insects when walking over grass in our garden. Since I have never been a prodigy, far from it, it was kind of an unsolvable problem at that time, especially since I am unable to concentrate for very long and other similar problems are accumulating in my mind all the time. So what happened? After a time of paralysis and distress, as it happens often, I simply became reluctant and unwilling, angry at the world. I decided that it is not my fault that the world is designed like that and that I am not smart enough to solve the problem and do what is right. I finally managed to ignore it. But this happens all the time and the result is never satisfactory. This process too often ends in simply ignoring the problem or becoming unwilling to do anything at all. What I'm doing is not effective it seems, it a
|
cf5cdff8-dbd1-4eb9-889d-9007eaa20452
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Sam Altman's sister claims Sam sexually abused her -- Part 2: Annie's lawsuit; the response from Sam, his brothers, and his mother; Timeline
Previous posts (which you should read first)
This post is the 2nd post in a series of 11 posts about the claims of Sam Altman's sister, Annie Altman. Annie has claimed that Sam sexually abused her for about 9 years as a child, and that she experienced further (non-sexual) abuse from Sam, her brothers, and her mother after that.
The 11 posts are meant to be read in order.
So, if you haven't read the first post, please read it before you read this post:
* Sam Altman's sister claims Sam sexually abused her -- Part 1: Introduction, outline, author's notes
----------------------------------------
Annie's lawsuit, and the response from Sam, his brothers, and his mother
On January 6, 2025, Annie Altman filed a lawsuit against Sam Altman in the United States District Court for the Eastern District of Missouri, Eastern Division.
The lawsuit's case name is Altman v. Altman, and its case number is 4:25-cv-00017.
The lawsuit is ongoing. A jury trial is set to begin Monday, March 31, 2025.
See:
* https://www.courtlistener.com/docket/69520118/altman-v-altman/
* Especially:
* https://www.courtlistener.com/docket/69520118/1/altman-v-altman/ -- "COMPLAINT against defendant Samuel Altman with receipt number AMOEDC-11018479, in the amount of $405 Jury Demand,, filed by Ann Altman. (Attachments: # 1 Civil Cover Sheet Civil Cover Sheet, # 2 Original Filing Form Original Filing Form, # 3 Summons Summons to be Issued)(Mahoney, Ryan) (Entered: 01/06/2025)"
* https://www.courtlistener.com/docket/69520118/16/altman-v-altman/ -- "MOTION to Dismiss Plaintiff's Common-Law Claims and Plaintiff's Prayer for Punitive Damages by Defendant Samuel Altman. (Magee, Thomas) (Entered: 03/07/2025)"
* https://www.courtlistener.com/docket/69520118/17/altman-v-altman/ -- "MEMORANDUM in Support of Motion re 16 MOTION to Dismiss Plaintiff's Common-Law Claims and Plaintiff's Prayer for Punitive Damages filed by Defendant Samuel Altman. (Magee, Thomas) (Entered: 03/07/2025)"
|
dea52d41-9635-4d5a-abab-af618cc6bb7a
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Arbitrary"
Followup to: Inseparably Right; or, Joy in the Merely Good, Sorting Pebbles Into Correct Heaps
One of the experiences of following the Way is that, from time to time, you notice a new word that you have been using without really understanding. And you say: "What does this word, 'X', really mean?"
Perhaps 'X' is 'error', for example. And those who have not yet realized the importance of this aspect of the Way, may reply: "Huh? What do you mean? Everyone knows what an 'error' is; it's when you get something wrong, when you make a mistake." And you reply, "But those are only synonyms; what can the term 'error' mean in a universe where particles only ever do what they do?"
It's not meant to be a rhetorical question; you're meant to go out and answer it. One of the primary tools for doing so is Rationalist's Taboo, when you try to speak without using the word or its synonyms—to replace the symbol with the substance.
So I ask you therefore, what is this word "arbitrary"? Is a rock arbitrary? A leaf? A human?
How about sorting pebbles into prime-numbered heaps? How about maximizing inclusive genetic fitness? How about dragging a child off the train tracks?
How can I tell exactly which things are arbitrary, and which not, in this universe where particles only ever do what they do? Can you tell me exactly what property is being discriminated, without using the word "arbitrary" or any direct synonyms? Can you open up the box of "arbitrary", this label that your mind assigns to some things and not others, and tell me what kind of algorithm is at work here?
Having pondered this issue myself, I offer to you the following proposal:
> A piece of cognitive content feels "arbitrary" if it is the kind of cognitive content that we expect to come with attached justifications, and those justifications are not present in our mind.
You'll note that I've performed the standard operation for guaranteeing that a potentially confusing question has a real answer: I su
|
b59a3242-59d6-429e-bc7a-2951f7322227
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[LINK] Two articles on Bitcoin
Tangential, but a subject of some local interest:
Why Bitcoin will fail by Avery Pennarun. "The sky isn't red." Thesis:
1. The gold standard was a bad idea.
2. Even if it [Bitcoin] was a good idea, governments will squash it.
3. The whole technological basis (cryptosystem) is flawed.
4. It doesn't work offline.
I'm not sure I buy these and am not competent to evaluate his claims on 3., but would like others' critique.
L019: Bitcoin P2P Currency: The Most Dangerous Project We've Ever Seen by Jason Calacanis. A rather more enthusiastic viewpoint of the project:
1. Bitcoin is a technologically sound project.
2. Bitcoin is unstoppable without end-user prosecution.
3. Bitcoin is the most dangerous open-source project ever created.
4. Bitcoin may be the most dangerous technological project since the internet itself.
5. Bitcoin is a political statement by technological libertarians.
6. Bitcoins will change the world unless governments ban them with harsh penalties.
The actual text contains many more caveats than the eye-catching selection of points above.
|
bcf22eca-b545-44e3-8f22-d1e1beaa831f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[Linkpost] Multimodal Neurons in Pretrained Text-Only Transformers
This is a linkpost for https://arxiv.org/abs/2308.01544.
> Language models demonstrate remarkable capacity to generalize representations learned in one modality to downstream tasks in other modalities. Can we trace this ability to individual neurons? We study the case where a frozen text transformer is augmented with vision using a self-supervised visual encoder and a single linear projection learned on an image-to-text task. Outputs of the projection layer are not immediately decodable into language describing image content; instead, we find that translation between modalities occurs deeper within the transformer. We introduce a procedure for identifying "multimodal neurons" that convert visual representations into corresponding text, and decoding the concepts they inject into the model's residual stream. In a series of experiments, we show that multimodal neurons operate on specific visual concepts across inputs, and have a systematic causal effect on image captioning.
|
571b1627-deb4-4d78-8552-8bec85425122
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What are the limits of self-education?
I have exhausted myself thinking of this. Why should I invest time in x if I don't even know what my y is. Traditional teaching or mentoring approaches objectives by defining what a student must fulfill to truly understand and grasp a topic(s) thoroughly. This can also be translated to "end of term" projects, things that combine the fundamentals that underlie the subject(s) or exercises at the end of sections. Most of these examples are situated in a academic environment, where one feels comfortable enough to be guided and bound to deadlines. Whereas, this is tough to translate outside an academic environment.
My own experience: I recently was interested in learning and teaching myself programming and CS. I began searching through the CS curriculums given by ivy league universities, saw the lecture notes and the core texts, I dared to even look at the exams. The instructors assignments and the core readings were absolutely brutal, even for undergraduates. Part of me envied the students who'll improve and progress on a whole other level. I mean look at the assignments they're given! What's insanely difficult for me will be infinitesimal to them. Eventually I gave up on studying, my efforts and what I consider beneficial to future me are all unrealistic standards.
|
6fe3744b-c747-4c68-9876-9ae3bb514d5e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Anyone Else Using Brilliant?
I started using Brilliant, and so far I've found it to be a lot like Thinking Physics - teaching by posing real-world conundrums and then explaining the concepts and/or math behind the answers.
Anyone else getting something out of it, or have advice for how to use it?
|
2def12d9-f06b-4028-8b3e-a37ac78e7be7
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Compositional preference models for aligning LMs
*This post summarizes the main results from our recently released paper*[*Compositional preference models for aligning LMs*](https://arxiv.org/abs/2310.13011) *and puts them in the broader context of AI safety. For a quick summary of the paper, take a look at our*[*Twitter thread*](https://twitter.com/dongyoung4091/status/1717045681431753097)*.*
**TL;DR**: We propose a new approach to building preference models out of prompted LMs. Compositional Preference Models (CPMs) decompose scoring a text into (1) constructing a series of questions about interpretable features of that text (e.g. how informative it is), (2) obtaining scalar scores for these features from a prompted LM (e.g. ChatGPT), and (3) aggregating these scores using a logistic regression classifier trained to predict human judgements. We show that CPMs, compared with standard preference models (PMs), generalize better and are more robust to reward model overoptimization. Moreover, best-of-*n* samples obtained using CPMs tend to be preferred over samples obtained using similar, conventional PMs. Finally, CPMs are a novel angle at scalable oversight: they decompose a hard evaluation problem into a series of simpler, human-interpretable evaluation problems.
How compositional preference models work?
-----------------------------------------

*Figure 1: While standard PMs output a preference score directly, CPMs score different features of LM responses separately and output a preference score as a linear combination of feature values.*
Preference Models (PMs) are models trained to assign an LM response a score indicating the quality of the response. They are the workhorse of many techniques for aligning LMs: they are most prominently used as reward functions in RLHF or as ranking models in best-of-n sampling, in addition to playing a role in other techniques such as [pretraining with human feedback](https://www.lesswrong.com/posts/8F4dXYriqbsom46x5/pretraining-language-models-with-human-preferences).
Standard PMs involve adding a scalar head on top of a base model and finetuning the whole model (or certain upper layers) to predict which of two texts a human would prefer. While this approach is highly effective in practice, it can lead to uninterpretable models that fit spurious correlations in human preference judgements and are prone to goodharting (overoptimization).
We introduce an alternative: Compositional Preference Models (CPM). In contrast to PMs, CPMs decompose response evaluation into the following steps:
**Feature decomposition**. We maintain a fixed list of 13 human-interpretable features (e.g. specificity, relevance, readability) and 13 corresponding prompt templates (e.g. `You will be shown a conversation [...] please judge whether the assistant's reply is relevant. Score that on a scale from 1 to 10 [...] {conversation_history} {reply}`).
**Feature scoring**. We ask an LM (e.g. GPT-3.5) to assign a score to each feature. Each feature of a single response is scored in a separate context window.
**Aggregation**. The feature scores are combined into a scalar preference score using a logistic regression classifier trained to predict human preference judgements (i.e. which of two texts a human would prefer).
Robustness to overoptimization
------------------------------

*Figure 2: Scores given by a gold PM (solid lines) and a corresponding proxy PM (dashed lines) on samples obtained through best-of-n sampling against the gold PM. CPM-GPT-3.5 and CPM-Flan-T5 refer to CPMs constructed with feature extraction based on GPT-3.5 and Flan-T5, respectively.*
To investigate if CPM improves robustness to overoptimization, we follow the setup of [Gao et al. (2023)](https://www.lesswrong.com/posts/shcSdHGPhnLQkpSbX/scaling-laws-for-reward-model-overoptimization) and construct a synthetic dataset where the output of one PM (defined to be the “gold PM”) is assumed to be the ground truth for human preferences. We then use the gold PMs to generate synthetic labels to train proxy PMs. We do that separately for three pairs of proxy and gold PMs: (i) standard PMs, (ii) CPMs using GPT-3.5 for feature extraction and (iii) CPMs using Flan-T5-XL (3B params) for feature extraction. Finally, we do best-of-*n* against a given proxy PM and comparse those best samples’ scores according to both proxy and gold PM.
As we increase the amount of optimization pressure (the number of candidates *n*), scores given by proxy PMs diverge from scores given by gold PMs (see Fig. 2). This is an indicator of preference model overoptimization, a form of reward hacking in which optimization of proxy PM scores is driven by spurious features that the gold PMs are indifferent to. The size of this gap (smaller is better) indicates the robustness of a given PM to being overly optimized against. Here, we observe that the gap (on the plot, between solid and dashed lines) tends to be smaller for CPMs than for standard PMs and that it increases at a slower rate.
This indicates that CPMs are more robust to overoptimization than standard PMs. This holds independently of whether a highly capable (GPT-3.5) or less capable (Flan-T5-XL) LM is used as a feature extractor in CPMs.
Quality evaluation
------------------

*Figure 3: Win rate of responses obtained via best-of-16 sampling using a given PM versus responses obtained via standard sampling, computed for prompts from Anthropic HH dataset (HH-RLHF) and Stanford Human Preferences dataset (SHP).*
We compare the quality of LM samples obtained by best-of-*16* against either CPMs or standard PMs by comparing them to samples generated *without* best-of-*n* sampling. We do that by showing both best-of-*16* and vanilla samples to an evaluator LM (Claude 2.0) and by computing win rates, i.e. how often best-of-*16* samples are preferred to vanilla samples. CPMs tend to have higher win rates than standard PMs, even if we match the capabilities of a feature extractor LM to the capabilities of standard PM (by choosing Flan-T5-XL for both). This suggests that prior knowledge injected into a PM via pre-selecting interpretable and relevant features in CPMs is robustly helpful for learning about human preferences.
CPMs and scalable oversight
---------------------------
[Scalable oversight](https://arxiv.org/abs/2211.03540) is the problem of evaluating the behavior of agents more capable than the evaluators. This is important to solve because, on the one hand, LMs will soon grow capable of completing tasks for which humans will not be able to provide feedback. On the other hand, LMs might also be capable of [reasoning about flaws in their evaluation procedures and exploiting them](https://www.lesswrong.com/posts/mLfPHv4QjmeQrsSva/paper-on-measuring-situational-awareness-in-llms) unbeknownst to overseers.
Current proposals for solving scalable oversight focus on recursively relying on other LMs to assist human evaluators ([debate](https://www.lesswrong.com/tag/debate-ai-safety-technique-1), [iterated distillation and amplification](https://www.lesswrong.com/tag/iterated-amplification), [recursive reward modeling](https://deepmindsafetyresearch.medium.com/scalable-agent-alignment-via-reward-modeling-bf4ab06dfd84)) but remain largely theoretical. [RL from AI feedback](https://arxiv.org/abs/2212.08073) – using carefully prompted LMs to generate training data for PMs – is arguably the most successful demonstration of how to use LMs to supervise LMs at scale.
CPMs explore an alternative route to addressing scalable oversight for LMs, exploring the prospects of divide-and-conquer strategies for tackling hard evaluation problems. CPMs can be seen as a method for decomposing a hard question (“Is this response helpful?”) into a series of simpler questions (“is this response readable?” etc.) that are easier for LMs to answer and easier for humans to oversee. While we stop at a single step of decomposition, nothing in principle prevents us from applying the idea recursively, e.g. to break down evaluation of complex responses into simple questions about atomic claims.
The idea of decomposing complex evaluation problems into simpler subproblems has several additional benefits:
1. **Using human priors**. Pre-selection of features and prompt templates afford a natural way of injecting prior knowledge and endowing PMs with useful inductive biases. The parameters space of CPMs is spanned by features selected to be meaningful and robust.
2. **Avoiding reward hacking by limiting PM capacity**. Using features pre-computed by feature extractors allows us to dramatically reduce the capacity of PMs consuming them (in our experiments, from 3B to just 13 parameters, i.e. 8 orders of magnitude!) and limit their susceptibility to overfitting to spurious correlations in preference data. It is really hard to reward-hack with only 13 parameters at hand!
3. **Interpretability**. Pre-selected features are trivially interpretable and a logistic regression coefficient associated with a feature can be interpreted as its salience (effect size) for a particular preference judgment (see sec. 4.6 in the paper). Indeed, the idea that preference judgments can be explained by linear combinations of pre-selected features was recently validated by two concurrent papers: [Towards Understanding Sycophancy in Language Models](https://www.lesswrong.com/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models) and [Human Feedback is not Gold Standard](https://arxiv.org/abs/2309.16349). Using such a linear model as an actual PM makes its judgements more transparent and amenable to process-based supervision.
4. **Narrowness**. Each of our feature extractors solves a narrow problem and does not need to be aware of other features or how the scores are aggregated. Solving different subproblems in different context windows was [recently found to improve the faithfulness of reasoning](https://www.lesswrong.com/posts/BKvJNzALpxS3LafEs/measuring-and-improving-the-faithfulness-of-model-generated). In the case of CPMs, an individual feature extractor has no clue how the score it is about to assign is going to be used downstream, which makes it harder for it to be strategic about that score and exercise capabilities for [sycophancy](https://www.lesswrong.com/posts/yRAo2KEGWenKYZG9K/discovering-language-model-behaviors-with-model-written) or deception.
However, CPMs still have certain limitations that future work could address:
1. **Human feedback.**CPMs still use pairwise preference judgements given by humans as a training signal for aggregating feature scores. This is inherently limiting as far as humans make errors, [sometimes prefer sycophantic responses over truthful ones](https://www.lesswrong.com/posts/g5rABd5qbp8B4g3DE/towards-understanding-sycophancy-in-language-models) or [authoritative responses over factual ones](https://arxiv.org/abs/2309.16349).
2. **Human curation.** CPMs rely on humans when it comes to feature selection and prompt engineering of prompt templates for feature extraction. These factors could be limiting as far as out-of-domain generalization is concerned (e.g. to evaluating agents showing superhuman performance).
Wrap-up
-------
We presented Compositional Preference Models: the idea of building PMs by training logistic regression on top of features extracted by prompted LMs. We show that a CPM with 13 parameters can outperform standard PM in terms of human evaluation and robustness to reward model overoptimization while also being more interpretable.
*This post benefited from helpful comments made by Mikita Balesni, Richard Ren, Euan McLean and Marc Dymetman. I’m also grateful to the co-authors of*[*the paper*](https://arxiv.org/abs/2310.13011)*: Dongyoung Go, Germán Kruszewski, Jos Rozen and Marc Dymetman.*
|
3798eb90-82c6-4df4-ac38-60241c76aeab
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Sydney Rationality Dojo - November
Discussion article for the meetup : Sydney Rationality Dojo - November
WHEN: 01 November 2015 04:00:00PM (+1100)
WHERE: 10 Shepherd Street, Chippendale
Come join us for November's Rationality Dojo. We've been experimenting with the format recently, to keep things moving along, and fit more material in. If you've been away for a while, now is a perfect chance to come back!
Afterwards we will go for the usual group dinner, for those who can make it.
Discussion article for the meetup : Sydney Rationality Dojo - November
|
7d030dad-723c-4724-abd7-a1abeda0ceae
|
trentmkelly/LessWrong-43k
|
LessWrong
|
On the Dangers of Time Travel
It has been obvious for decades to anyone who understands quantum mechanics that that the scientific consensus on theoretical physics is wrong. General relativity is background independent. Quantum field theory is background dependent. They cannot both be correct.
According to the Copenhagen interpretation of thermodynamics, entropy is emergent from time. Anyone with with half a Noble Prize in Physics can tell that this is backwards. Time is emergent from entropy.
Everything that can be said to physically exist, from an unobserverable wave function to living cat, is a macrostate. A random walk along microstates evolves in the direction of higher entropy macrostates even in the absence of time. In this way, entropy is more fundamental than time.
Putting entropy before time explains why all quantum fields locally maximize proper time. It solves the problem of Baryon asymmetry and dissolves the semantic stopsign at the Big Bang.
If you have worked out the details of quantum gravity for yourself then you have seen in your equations exactly how to build a time machine.
The best theoretical physicists have managed to suppress this technology. In a historically unprecedented conspiracy, they have not published anything useful for decades.
While the physicists buy time, the rest of us must invent systems to maximize the benefits of time travel while minimizing risk. As a consequence of this progress, there is a nonzero Bayesian probability we could turn on a time machine without instantly destroying the multiverse. But no weight of electronic paper discussing time travel safety can prevent a malevolent actor uninterested in safety.
As a last line of defense, the cosmic speed limit is below 88 everywhere in the solar system, it is illegal to buy plutonium in a drugstore and phone booths have been quietly disappeared. But we cannot expect such flimsy mechanisms to hold forever as technology advances.
We must build a temporal displacer first, before the enemy does.
|
4c2aa95e-e4ea-4c9e-a5b0-71780ad25b03
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
How much to optimize for the short-timelines scenario?
Some have argued that one should tend to act as if timelines are short since in that scenario it's possible to have more expected impact. But I haven't seen a thorough analysis of this argument.
**Question:** Is this argument valid and if yes how strong is it?
The basic argument seems to be: if timelines are short, the field (AI alignment) will be relatively smaller and have made less progress. So there will be more low-hanging fruits and so you have more impact.
The question affects career decisions. For example, if you optimize for long timelines, you can invest more time into yourself and delay your impact.
The question interacts with the following questions in somewhat unclear ways:
* How fast do returns to more work diminish (or increase)?
+ If returns don't diminish, the argument above fails.
+ If the field will grow very quickly, returns will diminish faster.
* Is your work much more effective when it's early?
+ This may happen because work can be hard to parallelize - ‘9 women can't make a baby in 1 month’. And field-building can be more effective earlier as the field can compound-grow over time. So someone should start early.
+ If work is most effective earlier, you shouldn’t lose too much time investing in yourself.
* Is work much more effective at crunch time?
+ If yes, you should focus more on investing in yourself (or do field-building for crunch time) instead of doing preparatory research.
* If timelines are longer, is this evidence that we'll need a paradigm shift in ML that makes alignment easier/harder?
+ (This question seems less tractable than the others.)
* Is your comparative advantage to optimize for short or long timelines?
+ For example, young people can contribute more easily given longer timelines and vice versa.
If someone would like to seriously research the overall question, please reach out. The right candidate can get funding.
|
872e17c9-a4e6-4778-81d7-d4b2411f5083
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Overview of introductory resources in AI Governance
Overview of introductory resources in AI Governance
This post was created as part of the Supervised Program for Alignment Research, Spring 2024. This work would not have happened without the encouragement and accountability of my supervisor, Peter Gebauer.
Introduction
The AI Governance ecosystem is large and difficult to apprehend. There are tons of content, relevant organizations, introductory resources, newsletters and more. As a newcomer to this field, I found it hard to navigate the ecosystem and find the information I needed. I discovered lots of resources purely by chance, months after they could have been useful for me.
What I felt was missing was introductory resources that would not only introduce the “type of work” that is AI Governance, but also direct me towards the resources I needed at different times.
Desiderata: The perfect entry point to the ecosystem would allow me, no matter my background and my intentions, to find the resources I need.
The technical alignment ecosystem on the other hand seems far more organized, in no small part thanks to the Alignment Ecosystem Development team, which created tons of introductory resources, indexes, and other variations on the theme “list of links to useful stuff”. Since I discovered AI alignment two years ago, the resources AED created have helped me navigate the ecosystem and find opportunities I would have missed. I expect similar resources to also be valuable for AI governance.
I decided to investigate thoroughly what were the various introductory resources to AI Governance. Maybe the resources actually existed, and I just did not know where to find them? I compiled my findings below, to help newcomers find those resources faster, and hopefully to motivate others to fill in the gaps where resources are lacking. Hopefully, someone will get motivated to build the ultimate entry point to AI Governance!
Index of AI governance introductory resources
There are various kinds of resources which could be la
|
ca851cf8-37d2-4335-84ac-c8c0bed1282f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Value of Querying 100+ People About Humanity's Future
I am looking for advice on whether I should do what I've outlined below (is it valuable or generally a waste of time) and, if so, how I should go about doing it. If there is support for something like this, I may apply for funding and do this sometime during Q4 2023.
* Go to a frequented environment in NYC (e.g., Time Square; I use NYC because I live close to this city and presumably would be the one doing the actions I'm describing)
* Bring a notepad / clipboard and don an official or professional appearance (e.g., a suit)
* Bring a sign with some message akin to:
* "Quick, Paid Survey About Humanity"
* "$10 for 10 Minutes, Survey"
* "Questions About Our Future"
* Bring a camera w/ tripod and other necessary recording equipment, maybe a mic
* Being recording short interviews (~3-10 minutes).
* For person 1 to ~100:
* Ask if they'd be willing to answer some questions about humanity (3-10 minutes) for $10 and be recorded on video
* Record person speaking, and then ask some or all of the following questions or those similar to the following:
* If you had 10 billion USD to help the world, how would you spend it?
* What are the most important problems for humanity to address right now?
* What might be the important problems for humanity to address in 2050?
* What are the most important issues for the USA right now?
* If you had to send a short message to humanity in 2100, what would it be?
* Rank these in terms of how risky them seem (have these on the clipboard):
* [nuclear war], [space weather / collisions], [climate change], [pandemics], [artificial intelligence], [global conflict]
* How are you altruistic, and how altruistic should people be?
* What does this picture [pale blue dot] make you think about?
* [Maybe some question about human enhancement / treatment via gene-editing]
* Ask some or all of the following demographic questions
* Where did you grow up?
* How much sch
|
b2e62a4d-8891-4be0-84d6-97f631473a93
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Enumerating objects a model "knows" using entity-detection features.
Introduction
Research on Sparse Autoencoders (SAEs) has identified "known entity" features in language models - features that activate when the model processes entities it "knows." If we can find the circuit models use to recognise that they know an entity, then by computing which inputs would trigger the circuit, we can extract a list of "known entities".
The aim is to do this in a mostly dataset free way, bootstrapping from a small number of known entities to make guesses for what components are involved in the circuit, using the insights extracted from SAEs to guide circuit discovery, but not using SAEs or a large dataset of text to find these entities. So the SAE tells us the "known entity" feature exists, but once we know it exists, we don't need SAEs to localize the circuit for computing this feature.
In theory, given the "known entity" feature, you could run all possible inputs through the model to extract a list of "known entities". But this is inefficient, so the approach is to find mathematical simplifications in the circuit that let us speed up the process.
I focus on GPT2-Small's first layer, where certain neurons appear to distinguish between "known" and "unknown" bigram proper nouns. By developing a simple model of how these neurons make this distinction, we can quickly filter for entities the model recognizes. These neurons aren't perfect, and there are examples of false positives and false negatives, but they give quite a large list of bigrams, with relatively low noise compared with other methods.
While the approach is currently quite crude, the results are surprisingly clean. This post serves as a proof of concept for a more ambitious project extracting model knowledge through mechanistic interpretability.
'Known' Proper Noun Neuron:
In the first layer of GPT2-Small, three attention heads (3, 4, and 7) have local positional kernels that process n-grams and local context. These heads are prime candidates for identifying circuits that recognize
|
faf7a0bc-a28c-47cf-bdc3-8f42c2241050
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
ACI#6: A Non-Dualistic ACI Model
Most traditional AI models are dualistic. As [Demski & Garrabrant have pointed out](https://www.lesswrong.com/s/Rm6oQRJJmhGCcLvxh/p/i3BTagvt3HbPMx6PN), these models assume that an agent is an object that persists over time, and has well-defined input/output channels, like it's playing a video game.
In the real world, however, agents are embedded in the environment, and there's no well-defined boundary between the agent and the environment. That's why a non-dualistic model is needed to depict how the boundary and input/output channels emerge from more fundamental notions.
For example, in Scott Garrabrant's [*Cartesian Frames*](https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames), input and output can be derived from "an agent's ability to freely choose" among "possible ways an agent can be".
However, choosing is still one of the key concepts of Cartesian Frames, but from a non-dualistic perspective, "it's not clear what it even means for an embedded agent to choose an option", since an embedded agent is "the universe poking itself". Formalizing the idea of choice in a non-dualistic model is as difficult as formalizing the idea of free will.
To avoid relying on the notion "choosing", we have proposed the **General Algorithmic Common Intelligence (gACI)** model which describes embedded agents solely from a third-person perspective, and measures the actions of agents using mutual information in an event-centric framework.
The gACI model does not attempt to answer the question "What should an agent do?". Instead, it focuses on describing the emergence of the agent-environment boundary, and answering the question "Why does an individual feel like it's choosing?"
In the language of decision theory, gACI belongs to descriptive decision theory rather than normative decision theory.
**Communication Channel and Mutual Information**
------------------------------------------------
In dualistic intelligence models, an agent receives **input**information from the environment, and manipulates the environment through **output**actions. But real-world agents are embedded within the environment, it's not easy to confine information exchange to a clear input/output channel.
In the gACI model, on the other hand, the input/output channel is a communication channel, in which the information transfer between a sender and a receiver is measured by **mutual information**.

*Figure 1: From the dualistic input/output model to the mutual information model.*
We can easily define mutual information between the states of any two objects, without specifying how the information is transmitted, or who is the sender and who is the receiver, or what the transmission medium is, or whether they are direct or indirect connected.
These two objects can be any parts of the world, such as agents, the environment, or any parts of agents or the environment, whose boundaries can be drawn anywhere if necessary. They can even overlap.
Having mutual information does not always mean knowing or understanding, but it provides an upper bound for knowing or understanding.
With mutual information of two objects, we can define memories and prophecies.
**Memory and Prophecy**
-----------------------
**Memory**is information about the past, or a communication channel that transmits information about the past into the future ([Gershman 2021](https://drive.google.com/file/d/1t_npcCLGVO3Dr01sDVxd_KDp0xDv-yi2/view)). If A is the receiver and B is the sender, we can define: A's memory of B is the mutual information between the present state of A and a past state of B:
M(A,B,t)=I(A(t0);B(t)),t<t0.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
A can have memories about more than one Bs, or about different moments of B. It can also have memories about itself, in other words, A can be equal to B.
**Prophecy** is the mutual information between the present state of A and a future state of B:
P(A,B,t)=I(A(t0);B(t)),t>t0
Obviously, the prophecy will not be confirmed until the future state of B is known.
A prophecy can be either a prediction about the future, or an action that controls/affects the future. In the language of the [Active Inference](https://direct.mit.edu/books/oa-monograph/5299/Active-InferenceThe-Free-Energy-Principle-in-Mind) model, it's either "change my model" or "change the world".
It's not necessary to prefer one interpretation over another, because different interpretations can be derived in different situations, which will be explained in the later chapters.

*Figure 2: Object A can have both memories and prophecies about object B.*
**Collect Memories for Prophecies**
-----------------------------------
We won't be surprised to find that most, if not all, objects that have prophecies about object A also have memories about it, although the reverse is not always true. We can speculate that information about the future comes from information about the past.
For example, if you know the position and velocity of the moon in the past, you can have a lot of information about its position and velocity in the future.
Not all information is created equal. Using our moon as an example, information about its position and phase contains more information about its future, while the pattern of foam in your coffee cup contains less.
*(Although computing power plays an important role in processing information from memory to prediction or control, we only consider the upper bound of prophecy as if we had infinite computing power. )*
Objects with different memories would have different prophecies about the same object. For example, an astronomer and an astrologer would have different information about the future of the planet Mars because of their different knowledge of the universe.
Intelligence needs prophecy to survive and thrive, because to maintain its homeostasis and achieve its goals, it needs sufficient information about the future, especially about its own future. In order to obtain prophecies about itself, one should collect memories about itself and the world that are useful for predicting or controlling its own future.
**Autonomy and Heteronomy**
---------------------------
We can measure the **degree of autonomy** of an object by how much prophecy it has about a future of itself, which indicates its self-governance and independence.
Da(A,t)=P(A(t0),A(t))
Similarly, we can measure the **degree of heteronomy** of object A from object B by how much prophecy B has about A, which indicates A's degree of dependence on B.
Dh(A,B,t)=P(B(t0),A(t))
An object that has considerable autonomy can be considered an **individual** or an agent. The permanent loss of autonomy is the **death** of an individual. Death is often the result of the permanent loss of essential memories that can induce prophecies about itself.
Focusing on different types of information requires different standards for autonomy and death. For example, a human neuron has some autonomy over its metabolism, but the timing and strength of its action potential depends mostly on other neurons. We can think of it as an individual, but it is better to think of it as a part of an individual when studying intelligence. Because the death of a single neuron has little effect on a person's autonomy, but the death of a person does.
*Figure 3: Autonomy and Heteronomy*
**The Laws of the Mind**
------------------------
As an individual accumulates more and more memories and prophecies, it can discover general rules about the world, which are relationships between the past and the future.
During this rule-learning process, the boundary between the self and the outside world emerges. One will inevitably find out that*some parts of the world follow different rules than other parts*, and these special parts are spatially concentrated around itself. We can call this special part the **body**.
For example, an individual may acknowledge that its body temperature has never been very high, like, say above 1000K, and predict that its body will never experience a temperature above 1000K, if its future is under control. Since some other objects can have a temperature of 1000K, it will conclude that there must be some special rules that prevent one's body from getting too hot. We call these rules goals, motivations, or emotions, etc.
The intuitive conclusion is that your body follows some rules that are different from the rules that other objects follow. This is what people call dualism: the body follows the **laws of the mind**, which uses concepts like goals, emotions, logic, etc., while the outside world doesn't.
However, the exact boundary between the body and the environment is not very clear. The space surrounding the body may partly follow the laws of the mind and can be called **peripersonal space**, a term borrowed from psychology.
*(Closer examination will reveal that the body and peripersonal space also follow the same scientific laws as the outside world, and the laws of mind are some additional laws that only bodies follow.)*
*Figure 4: Everything in the universe follows the laws of physics, but additionally, one's body and peripersonal space follow the laws of the mind.*
**Dualism and Survival Bias**
-----------------------------
Why do our bodies seem to follow the laws of mind, if bodies are made of the same atoms as the outside world?
Consider a classic example of survival bias. During World War II, the statistician [Abraham Wald examined the bullet holes in returning aircrafts](https://en.wikipedia.org/wiki/Survivorship_bias#Military) and recommended adding armor to the areas that showed the least damage, because most aircrafts damaged in those areas could not return safely to base.
This survival bias could be overcome by observing the aircraft on the battlefield instead of at the base, where we could find out that the bullet holes are evenly distributed throughout the aircraft, since the survival of the observer is independent of the location of the bullet holes. Because a survival bias is introduced when the observer's survival is not independent of the observed event.
We can speculate that if an event is in principle dependent on the observer's own survival, there will be a survival bias that can't be overcome. For example, one's own body temperature is not independent of one's own survival, but the body temperature of others can be.
Unlike the pattern of bullet holes in returning aircrafts, the inherent survival bias, including numerous experiences of how to survive, can accumulate in the observer's memory, like the increased armor in the critical areas of an aircraft. We call the memories of accumulated survival bias the **inward memory**, and the memories of the external world, whose survival bias can be overcome, the **outward memory**.
The laws of the mind, such as goal-directed mechanisms, can be derived from the inward memory. The observer may find that (almost) everything in the outside world has a cause, but its own goal-driven survival mechanism, such as an aversion to hot temperature, or enhanced armor, has no cause other than the rule "the survival of itself depends on the survival of itself", or the existence of itself. Then the observer comes to a conclusion: I have a goal, I have made a choice.
|
3dc7a262-494d-45cc-aae4-d975fd5f2b8e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Motivation research presentation
I did a presentation on motivation and procrastination research to the Seattle meetup group and an exercise trying to apply the material to a real life example. Eight people came. They were a skeptical bunch and questioned me on exactly the parts I am most interested in an know the least about: how exactly scientists assess the psychological quantities (expectancy, value, delay and impulsiveness). I'd like to learn more about the research and be able to give such presentations to others in the future. I'd also like to record a presentation like it and put it up on the internet.
People seemed to think the exercise was pretty valuable. It was also fairly fun. The presentation is here, the exercise is here and here.
Luke's suggestion for how to learn how psychologists assess expectancy, value and delay was
> As for how scientists assess the relevant psychological qualities, and for why the 'procrastination equation' is taken seriously, all the references are provided in my post 'How to Beat Procrastination'. I also uploaded quite a few of the studies myself so anyone who is actually interested can check the data for themselves. (Prediction: Almost nobody will.)
> The papers in footnote 6 are the place to start, for they explain why the equation (called temporal motivation theory by researchers) was developed to predict experimental results, and those papers point to all the individual studies which show how scientists assess expectancy, value, delay, and impulsiveness. For example, 'expectancy' in TMT is measured under a variety of psychological constructs, but largely by measures of self-efficacy and optimism.
> There is no short summary of these issues, though Piers Steel's recent book 'The Procrastination Equation' is a decent attempt while being much longer than my article. Psychology is very complicated, and our understanding of it is less certain than our understanding of physics or computer science.
|
46218f5e-9638-431b-a5c4-578d35c90486
|
trentmkelly/LessWrong-43k
|
LessWrong
|
There is no No Evidence
Zvi recently coined, and has now written up this Law of No Evidence:
> Law of No Evidence: Any claim that there is “no evidence” of something is evidence of bullshit.
Considered next to Eliezer's Absence of Evidence Is Evidence of Absence, it might seem like a contradiction, but as far as I can tell, it actually follows directly.
If we treat the "is" in Absence of Evidence is Evidence of Absence as an "implies" (which it seems to me to be) and then apply modus tollens to it, we get "if you don't have evidence of absence, you don't have absence of evidence" and it is precisely this bullshit that Zvi is calling. If you have evidence of absence, say so.
The Very Serious covid spokespeople spouting bullshit like "no evidence of human-to-human transmission" are doing a sort of naive equivocation version of "absence of evidence is evidence of absence", by saying "no evidence for" as if that implies "evidence against" when in fact they're trying to support some agenda or worldview when there's not very much clear evidence in any direction and someone could just as easily say "there's no evidence there isn't human-to-human transmission". At least, that's in the best case. Sometimes the evidence does favor a particular direction, they just don't like it and don't want to count it. Desperately clinging to priors?
Hm.
So Zvi's Law of No Evidence can be seen as taking Absence of Evidence is Evidence of Absence a step further and instead saying "if your absence of evidence is real, then it'll actually be evidence of absence, in which case call it that. otherwise shut up." But there's still a point to be made about a sense in which "absence of evidence" is its own thing—it's just different from no evidence. The important piece is that "no evidence" is bullshit, but "we didn't see this particular thing we would more expect to see if X were true than if X weren't true" is a vital component of successful reasoning about the world. It's the very basis of Bayes.
There is no No
|
a3d04f48-1c57-49d0-b69f-846e10e92cfe
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Value ethics vs. agency ethics
Preface
I have trouble expressing myself in such a way that my ideas come out even remotely like they sound in my head. So please apply the principle of charity and try to read how you think I thought of it.
Tit for Tat
Tit for Tat is usually presented in a game between two players where each chooses to either cooperate or defect. The real world game however differs in two important ways.
First, it's not a two player game. We make choices not only on our single instance of interaction but also on observed interactions between other players. Thus the Advanced Tit For Tat not only defects if the other player defected against itself but also if it could observe the other player defecting against any other player that employs a similar enough algorithm.
Second, there is a middle ground between cooperating and defecting, you could stay neutral. Thus you can harm your opponent, help him or do neither. The question of the best strategy in this real life prisoners dilemma is probably still unanswered. If I see my opponent defecting against some of my peers and cooperating with others, what do I choose?
Agency
The reason why there even is a game is because we can deliberate on our action and can take abstract thoughts into account that do not directly pertain to the current situation, which I think is the distinguishing factor of higher animals from lower. This ability is called agency. In order to be an agent a subject must be able to perceive the situation, have a set of possible actions, model the outcomes of these actions, value the outcomes, and then act accordingly.
We could act in such a way that infringes on these abilities in others. If we limit their ability to perceive or model the situation we call this fraud, if we limit their set of possible actions or their ability to choose between them, we call it coercion, if we infringe on their ability to value an outcome, we call it advertising.
Ethics
I propose that the purpose of our moral or ethical intuitio
|
458e06e5-7a5b-4660-888b-99cf554e99de
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Engaging First Introductions to AI Risk
I'm putting together a list of short and sweet **introductions to the dangers of artificial superintelligence**.
My target audience is intelligent, broadly philosophical [narrative](/lw/hzt/writing_style_and_the_typical_mind_fallacy/) thinkers, who can evaluate arguments well but who don't know a lot of the relevant background or jargon.
My method is to construct a [Sequence](http://wiki.lesswrong.com/wiki/Sequences) mix tape — a collection of short and enlightening texts, meant to be read in a specified order. I've chosen them for their persuasive and pedagogical punchiness, and for their flow in the list. I'll also (separately) list somewhat longer or less essential follow-up texts below that are still meant to be accessible to astute visitors and laypeople.
The first half focuses on ***intelligence***, answering 'What is Artificial General Intelligence (AGI)?'. The second half focuses on *friendliness*, answering 'How can we make AGI safe, and why does it matter?'. Since the topics of some posts aren't obvious from their titles, I've summarized them using questions they address.
---
**Part I. Building intelligence.**
1. [Power of Intelligence](http://yudkowsky.net/singularity/power). Why is intelligence important?
2. [Ghosts in the Machine](/lw/rf/ghosts_in_the_machine/). Is building an intelligence from scratch like talking to a person?
3. [Artificial Addition](/lw/l9/artificial_addition/). What can we conclude about the nature of intelligence from the fact that we don't yet understand it?
4. [Adaptation-Executers, not Fitness-Maximizers](/lw/l0/adaptationexecuters_not_fitnessmaximizers/). How do human goals relate to the 'goals' of evolution?
5. [The Blue-Minimizing Robot](/lw/6ha/the_blueminimizing_robot/). What are the shortcomings of thinking of things as 'agents', 'intelligences', or 'optimizers' with defined values/goals/preferences?
**Part II. Intelligence explosion.**
6. [Optimization and the Singularity](/lw/rk/optimization_and_the_singularity/). What is optimization? As optimization processes, how do evolution, humans, and self-modifying AGI differ?
7. [Efficient Cross-Domain Optimization](/lw/vb/efficient_crossdomain_optimization/). What is intelligence?
8. [The Design Space of Minds-In-General](/lw/rm/the_design_space_of_mindsingeneral/). What else is universally true of intelligences?
9. [Plenty of Room Above Us](http://intelligenceexplosion.com/2011/plenty-of-room-above-us/). Why should we expect self-improving AGI to quickly become superintelligent?
**Part III. AI risk.**
10. [The True Prisoner's Dilemma](/lw/tn/the_true_prisoners_dilemma/). What kind of jerk would Defect even knowing the other side Cooperated?
11. [Basic AI drives](http://wiki.lesswrong.com/wiki/Basic_AI_drives). Why are AGIs dangerous even when they're indifferent to us?
12. [Anthropomorphic Optimism](/lw/st/anthropomorphic_optimism/). Why do we think things we hope happen are likelier?
13. [The Hidden Complexity of Wishes](/lw/ld/the_hidden_complexity_of_wishes). How hard is it to directly program an alien intelligence to enact my values?
14. [Magical Categories](/lw/td/magical_categories/). How hard is it to program an alien intelligence to reconstruct my values from observed patterns?
15. [The AI Problem, with Solutions](http://intelligenceexplosion.com/2012/ai-the-problem-with-solutions/). How hard is it to give AGI predictable values of any sort? More generally, why does AGI risk matter so much?
**Part IV. Ends.**
16. [Could Anything Be Right?](/lw/sb/could_anything_be_right/) What do we mean by 'good', or 'valuable', or 'moral'?
17. [Morality as Fixed Computation](/lw/sw/morality_as_fixed_computation/). Is it enough to have an AGI improve the fit between my preferences and the world?
18. [Serious Stories](/lw/xi/serious_stories/). What would a true utopia be like?
19. [Value is Fragile](/lw/y3/value_is_fragile/). If we just sit back and let the universe do its thing, will it still produce value? If we don't take charge of our future, won't it still turn out interesting and beautiful on some deeper level?
20. [The Gift We Give To Tomorrow](http://wiki.lesswrong.com/wiki/User:RobbBB/Tomorrow). In explaining value, are we explaining it away? Are we making our goals less important?
**Summary**: [Five theses, two lemmas, and a couple of strategic implications](http://intelligence.org/2013/05/05/five-theses-two-lemmas-and-a-couple-of-strategic-implications/).
---
All of the above were written by Eliezer Yudkowsky, with the exception of The Blue-Minimizing Robot (by Yvain), Plenty of Room Above Us and The AI Problem (by Luke Muehlhauser), and Basic AI Drives (a wiki collaboration). Seeking a powerful conclusion, I ended up making a compromise between Eliezer's original [The Gift We Give To Tomorrow](/lw/sa/the_gift_we_give_to_tomorrow/) and Raymond Arnold's [Solstice Ritual Book](https://dl.dropboxusercontent.com/u/2000477/SolsticeEve_2012.pdf) version. It's on the wiki, so you can further improve it with edits.
**Further reading**:
* [Three Worlds Collide](https://dl.dropboxusercontent.com/u/12787472/philosophy/three-worlds-collide-short.pdf) (Normal), by Eliezer Yudkowsky
+ a short story vividly illustrating how alien values can evolve.
* [So You Want to Save the World](/lw/91c/so_you_want_to_save_the_world/), by Luke Muehlhauser
+ an introduction to the open problems in Friendly Artificial Intelligence.
* [Intelligence Explosion FAQ](http://intelligence.org/ie-faq/), by Luke Muehlhauser
+ a broad overview of likely misconceptions about AI risk.
* [The Singularity: A Philosophical Analysis](http://consc.net/papers/singularity.pdf), by David Chalmers
+ a detailed but non-technical argument for expecting intelligence explosion, with an assessment of the moral significance of synthetic human and non-human intelligence.
I'm posting this to get more feedback for improving it, to isolate topics for which we *don't* yet have high-quality, non-technical stand-alone introductions, and to reintroduce LessWrongers to exceptionally useful posts I haven't seen sufficiently discussed, linked, or upvoted. I'd especially like feedback on how the list I provided flows as a unit, and what inferential gaps it fails to address. My goals are:
**A.** Via lucid and anti-anthropomorphic vignettes, to explain AGI in a way that encourages clear thought.
**B.** Via the Five Theses, to demonstrate the importance of Friendly AI research.
**C.** Via down-to-earth meta-ethics, humanistic poetry, and pragmatic strategizing, to combat any nihilisms, relativisms, and defeatisms that might be triggered by recognizing the possibility (or probability) of Unfriendly AI.
**D.** Via an accessible, substantive, entertaining presentation, to introduce the raison d'être of LessWrong to sophisticated newcomers in a way that encourages further engagement with LessWrong's community and/or content.
What do you think? What would you add, remove, or alter?
|
809e1623-73c1-49cb-8d7e-73edecbf42b3
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
Superrational Agents Kelly Bet Influence!
As a follow-up to the [Walled Garden discussion](https://www.lesswrong.com/posts/gAM5AgcChwJLhuJkB/kelly-betting-discussion) about Kelly betting, Scott Garrabrant made some super-informal conjectures to me privately, involving the idea that some class of "nice" agents would "Kelly bet influence", where "influence" had something to do with anthropics and acausal trade.
I was pretty incredulous at the time. However, as soon as he left the discussion, I came up with an argument for a similar fact. (The following does not perfectly reflect what Scott had in mind, by any means. His notion of "influence" was very different, for a start.)
The meat of my argument is just Critch's [negotiable RL theorem](https://arxiv.org/abs/1701.01302). In fact, that's practically the entirety of my argument. I'm just thinking about the consequences in a different way from how I have before.
Superrationality
================
Rather than articulating a real decision theory that deals with all the questions of acausal trade, bargaining, [commitment races](https://www.lesswrong.com/posts/brXr7PJ2W4Na2EW2q/the-commitment-races-problem), etc, I'm just going to imagine a class of superrational agents which solve these problems somehow. These agents "handshake" with each other and negotiate (perhaps acausally) a policy which is Pareto-optimal wrt each of their preferences.
Negotiable RL
=============
Critch's [negotiable RL](https://arxiv.org/abs/1701.01302) result studies the question of what an AI should do if it must serve multiple masters. For this post, I'll refer to the masters as "coalition members".
He shows the following:
***Any policy which is Pareto-optimal with respect to the preferences of coalition members, can be understood as doing the following. Each coalition member is assigned a starting weight, with weights summing to one. At each decision, the action is selected via the weighted average of the preferences of each coalition member, according to the current weights. At each observation, the weights are updated via Bayes' Law, based on the beliefs of coalition members.***
He was studying what an AI's policy should be, when serving the coalition members; however, we can apply this result to a coalition of superrational agents who are settling on *their own* policy, rather than constructing a robotic servant.
Critch remarks that we can imagine the weight update as the result of bets which the coalition members would make with each other. I've known about this for a long time, and it made intuitive sense to me that they'll happily bet on their beliefs; so, of course they'll gain/lose influence in the coalition based on good/bad predictions.
What I didn't think too hard about was *how* they end up betting. Sure, the fact that it's equivalent to a Bayesian update is remarkable. But it makes sense once you think about the proof.
Or does it?
To foreshadow: the proof works from the assumption of Pareto optimality. So it *collectively* makes sense for the agents to bet this way. But the "of course it makes sense for them to bet on their beliefs" line of thinking tricks you into thinking that it *individually* makes sense for the agents to bet like this. However, this need not be the case.
Kelly Betting & Bayes
=====================
The Kelly betting fraction [can be written as](https://www.lesswrong.com/posts/zeviiJFwzBbr3sReN/calculating-kelly):
f=p−1r1−1r.mjx-chtml {display: inline-block; line-height: 0; text-indent: 0; text-align: left; text-transform: none; font-style: normal; font-weight: normal; font-size: 100%; font-size-adjust: none; letter-spacing: normal; word-wrap: normal; word-spacing: normal; white-space: nowrap; float: none; direction: ltr; max-width: none; max-height: none; min-width: 0; min-height: 0; border: 0; margin: 0; padding: 1px 0}
.MJXc-display {display: block; text-align: center; margin: 1em 0; padding: 0}
.mjx-chtml[tabindex]:focus, body :focus .mjx-chtml[tabindex] {display: inline-table}
.mjx-full-width {text-align: center; display: table-cell!important; width: 10000em}
.mjx-math {display: inline-block; border-collapse: separate; border-spacing: 0}
.mjx-math \* {display: inline-block; -webkit-box-sizing: content-box!important; -moz-box-sizing: content-box!important; box-sizing: content-box!important; text-align: left}
.mjx-numerator {display: block; text-align: center}
.mjx-denominator {display: block; text-align: center}
.MJXc-stacked {height: 0; position: relative}
.MJXc-stacked > \* {position: absolute}
.MJXc-bevelled > \* {display: inline-block}
.mjx-stack {display: inline-block}
.mjx-op {display: block}
.mjx-under {display: table-cell}
.mjx-over {display: block}
.mjx-over > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-under > \* {padding-left: 0px!important; padding-right: 0px!important}
.mjx-stack > .mjx-sup {display: block}
.mjx-stack > .mjx-sub {display: block}
.mjx-prestack > .mjx-presup {display: block}
.mjx-prestack > .mjx-presub {display: block}
.mjx-delim-h > .mjx-char {display: inline-block}
.mjx-surd {vertical-align: top}
.mjx-surd + .mjx-box {display: inline-flex}
.mjx-mphantom \* {visibility: hidden}
.mjx-merror {background-color: #FFFF88; color: #CC0000; border: 1px solid #CC0000; padding: 2px 3px; font-style: normal; font-size: 90%}
.mjx-annotation-xml {line-height: normal}
.mjx-menclose > svg {fill: none; stroke: currentColor; overflow: visible}
.mjx-mtr {display: table-row}
.mjx-mlabeledtr {display: table-row}
.mjx-mtd {display: table-cell; text-align: center}
.mjx-label {display: table-row}
.mjx-box {display: inline-block}
.mjx-block {display: block}
.mjx-span {display: inline}
.mjx-char {display: block; white-space: pre}
.mjx-itable {display: inline-table; width: auto}
.mjx-row {display: table-row}
.mjx-cell {display: table-cell}
.mjx-table {display: table; width: 100%}
.mjx-line {display: block; height: 0}
.mjx-strut {width: 0; padding-top: 1em}
.mjx-vsize {width: 0}
.MJXc-space1 {margin-left: .167em}
.MJXc-space2 {margin-left: .222em}
.MJXc-space3 {margin-left: .278em}
.mjx-test.mjx-test-display {display: table!important}
.mjx-test.mjx-test-inline {display: inline!important; margin-right: -1px}
.mjx-test.mjx-test-default {display: block!important; clear: both}
.mjx-ex-box {display: inline-block!important; position: absolute; overflow: hidden; min-height: 0; max-height: none; padding: 0; border: 0; margin: 0; width: 1px; height: 60ex}
.mjx-test-inline .mjx-left-box {display: inline-block; width: 0; float: left}
.mjx-test-inline .mjx-right-box {display: inline-block; width: 0; float: right}
.mjx-test-display .mjx-right-box {display: table-cell!important; width: 10000em!important; min-width: 0; max-width: none; padding: 0; border: 0; margin: 0}
.MJXc-TeX-unknown-R {font-family: monospace; font-style: normal; font-weight: normal}
.MJXc-TeX-unknown-I {font-family: monospace; font-style: italic; font-weight: normal}
.MJXc-TeX-unknown-B {font-family: monospace; font-style: normal; font-weight: bold}
.MJXc-TeX-unknown-BI {font-family: monospace; font-style: italic; font-weight: bold}
.MJXc-TeX-ams-R {font-family: MJXc-TeX-ams-R,MJXc-TeX-ams-Rw}
.MJXc-TeX-cal-B {font-family: MJXc-TeX-cal-B,MJXc-TeX-cal-Bx,MJXc-TeX-cal-Bw}
.MJXc-TeX-frak-R {font-family: MJXc-TeX-frak-R,MJXc-TeX-frak-Rw}
.MJXc-TeX-frak-B {font-family: MJXc-TeX-frak-B,MJXc-TeX-frak-Bx,MJXc-TeX-frak-Bw}
.MJXc-TeX-math-BI {font-family: MJXc-TeX-math-BI,MJXc-TeX-math-BIx,MJXc-TeX-math-BIw}
.MJXc-TeX-sans-R {font-family: MJXc-TeX-sans-R,MJXc-TeX-sans-Rw}
.MJXc-TeX-sans-B {font-family: MJXc-TeX-sans-B,MJXc-TeX-sans-Bx,MJXc-TeX-sans-Bw}
.MJXc-TeX-sans-I {font-family: MJXc-TeX-sans-I,MJXc-TeX-sans-Ix,MJXc-TeX-sans-Iw}
.MJXc-TeX-script-R {font-family: MJXc-TeX-script-R,MJXc-TeX-script-Rw}
.MJXc-TeX-type-R {font-family: MJXc-TeX-type-R,MJXc-TeX-type-Rw}
.MJXc-TeX-cal-R {font-family: MJXc-TeX-cal-R,MJXc-TeX-cal-Rw}
.MJXc-TeX-main-B {font-family: MJXc-TeX-main-B,MJXc-TeX-main-Bx,MJXc-TeX-main-Bw}
.MJXc-TeX-main-I {font-family: MJXc-TeX-main-I,MJXc-TeX-main-Ix,MJXc-TeX-main-Iw}
.MJXc-TeX-main-R {font-family: MJXc-TeX-main-R,MJXc-TeX-main-Rw}
.MJXc-TeX-math-I {font-family: MJXc-TeX-math-I,MJXc-TeX-math-Ix,MJXc-TeX-math-Iw}
.MJXc-TeX-size1-R {font-family: MJXc-TeX-size1-R,MJXc-TeX-size1-Rw}
.MJXc-TeX-size2-R {font-family: MJXc-TeX-size2-R,MJXc-TeX-size2-Rw}
.MJXc-TeX-size3-R {font-family: MJXc-TeX-size3-R,MJXc-TeX-size3-Rw}
.MJXc-TeX-size4-R {font-family: MJXc-TeX-size4-R,MJXc-TeX-size4-Rw}
.MJXc-TeX-vec-R {font-family: MJXc-TeX-vec-R,MJXc-TeX-vec-Rw}
.MJXc-TeX-vec-B {font-family: MJXc-TeX-vec-B,MJXc-TeX-vec-Bx,MJXc-TeX-vec-Bw}
@font-face {font-family: MJXc-TeX-ams-R; src: local('MathJax\_AMS'), local('MathJax\_AMS-Regular')}
@font-face {font-family: MJXc-TeX-ams-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_AMS-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_AMS-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_AMS-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-B; src: local('MathJax\_Caligraphic Bold'), local('MathJax\_Caligraphic-Bold')}
@font-face {font-family: MJXc-TeX-cal-Bx; src: local('MathJax\_Caligraphic'); font-weight: bold}
@font-face {font-family: MJXc-TeX-cal-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-R; src: local('MathJax\_Fraktur'), local('MathJax\_Fraktur-Regular')}
@font-face {font-family: MJXc-TeX-frak-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-frak-B; src: local('MathJax\_Fraktur Bold'), local('MathJax\_Fraktur-Bold')}
@font-face {font-family: MJXc-TeX-frak-Bx; src: local('MathJax\_Fraktur'); font-weight: bold}
@font-face {font-family: MJXc-TeX-frak-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Fraktur-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Fraktur-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Fraktur-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-BI; src: local('MathJax\_Math BoldItalic'), local('MathJax\_Math-BoldItalic')}
@font-face {font-family: MJXc-TeX-math-BIx; src: local('MathJax\_Math'); font-weight: bold; font-style: italic}
@font-face {font-family: MJXc-TeX-math-BIw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-BoldItalic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-BoldItalic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-BoldItalic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-R; src: local('MathJax\_SansSerif'), local('MathJax\_SansSerif-Regular')}
@font-face {font-family: MJXc-TeX-sans-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-B; src: local('MathJax\_SansSerif Bold'), local('MathJax\_SansSerif-Bold')}
@font-face {font-family: MJXc-TeX-sans-Bx; src: local('MathJax\_SansSerif'); font-weight: bold}
@font-face {font-family: MJXc-TeX-sans-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-sans-I; src: local('MathJax\_SansSerif Italic'), local('MathJax\_SansSerif-Italic')}
@font-face {font-family: MJXc-TeX-sans-Ix; src: local('MathJax\_SansSerif'); font-style: italic}
@font-face {font-family: MJXc-TeX-sans-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_SansSerif-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_SansSerif-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_SansSerif-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-script-R; src: local('MathJax\_Script'), local('MathJax\_Script-Regular')}
@font-face {font-family: MJXc-TeX-script-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Script-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Script-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Script-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-type-R; src: local('MathJax\_Typewriter'), local('MathJax\_Typewriter-Regular')}
@font-face {font-family: MJXc-TeX-type-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Typewriter-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Typewriter-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Typewriter-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-cal-R; src: local('MathJax\_Caligraphic'), local('MathJax\_Caligraphic-Regular')}
@font-face {font-family: MJXc-TeX-cal-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Caligraphic-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Caligraphic-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Caligraphic-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-B; src: local('MathJax\_Main Bold'), local('MathJax\_Main-Bold')}
@font-face {font-family: MJXc-TeX-main-Bx; src: local('MathJax\_Main'); font-weight: bold}
@font-face {font-family: MJXc-TeX-main-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Bold.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-I; src: local('MathJax\_Main Italic'), local('MathJax\_Main-Italic')}
@font-face {font-family: MJXc-TeX-main-Ix; src: local('MathJax\_Main'); font-style: italic}
@font-face {font-family: MJXc-TeX-main-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-main-R; src: local('MathJax\_Main'), local('MathJax\_Main-Regular')}
@font-face {font-family: MJXc-TeX-main-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Main-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Main-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Main-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-math-I; src: local('MathJax\_Math Italic'), local('MathJax\_Math-Italic')}
@font-face {font-family: MJXc-TeX-math-Ix; src: local('MathJax\_Math'); font-style: italic}
@font-face {font-family: MJXc-TeX-math-Iw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Math-Italic.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Math-Italic.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Math-Italic.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size1-R; src: local('MathJax\_Size1'), local('MathJax\_Size1-Regular')}
@font-face {font-family: MJXc-TeX-size1-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size1-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size1-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size1-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size2-R; src: local('MathJax\_Size2'), local('MathJax\_Size2-Regular')}
@font-face {font-family: MJXc-TeX-size2-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size2-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size2-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size2-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size3-R; src: local('MathJax\_Size3'), local('MathJax\_Size3-Regular')}
@font-face {font-family: MJXc-TeX-size3-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size3-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size3-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size3-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-size4-R; src: local('MathJax\_Size4'), local('MathJax\_Size4-Regular')}
@font-face {font-family: MJXc-TeX-size4-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Size4-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Size4-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Size4-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-R; src: local('MathJax\_Vector'), local('MathJax\_Vector-Regular')}
@font-face {font-family: MJXc-TeX-vec-Rw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Regular.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Regular.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Regular.otf') format('opentype')}
@font-face {font-family: MJXc-TeX-vec-B; src: local('MathJax\_Vector Bold'), local('MathJax\_Vector-Bold')}
@font-face {font-family: MJXc-TeX-vec-Bx; src: local('MathJax\_Vector'); font-weight: bold}
@font-face {font-family: MJXc-TeX-vec-Bw; src /\*1\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/eot/MathJax\_Vector-Bold.eot'); src /\*2\*/: url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/woff/MathJax\_Vector-Bold.woff') format('woff'), url('https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/fonts/HTML-CSS/TeX/otf/MathJax\_Vector-Bold.otf') format('opentype')}
Where *p* is your probability for winning, and *r* is the return rate if you win (ie, if you stand to double your money, r=2; etc).
Now, it turns out, betting *f* of your money (and keeping the rest in reserve) is equivalent to betting *p* of your money and putting (*1-p*) on the other side of the bet. Betting against yourself is a pretty silly thing to do, but since you'll win either way, there's no problem:
***Betting***f***of your money:***
* If you win, you've got r⋅f, plus the (1−f) you held.
+ r⋅f=rp−11−1r
+ (1−f)=1−p1−1r
+ So the sum = rp−11−1r+1−p1−1r=rp−p1−1r=pr(r−1)r−1=p⋅r of your initial money.
* If you lose, you've still got (1−f) of what you had.
+ So this is just 1−p1−1r
***Betting against yourself, with fractions like your beliefs:***
* If you win, you've got rp of your money.
* If you lose, the payoff ratio (assuming you can get the reverse odds for the reverse bet) is 11−1r. So, since you put down 1−p, you get 1−p1−1r.
But now imagine that a bunch of bettors are using the second strategy to make bets with each other, with the "house odds" being the weighted average of all their beliefs (weighted by their bankrolls, that is). Aside from the betting-against-yourself part, this is a pretty natural thing to do: these are the "house odds" which make the house revenue-neutral, so the house never has to dig into its own pockets to award winnings.
You can imagine that everyone is putting money on two different sides of a table, to indicate their bets. When the bet is resolved, the losing side is pushed over to the winning side, and everyone who put money on the winning side picks up a fraction of money proportional to the fraction they originally contributed to that side. (And since payoffs of the bet-against-yourself strategy are exactly identical to Kelly betting payoffs, a bunch of Kelly bets at house odds rearrange money in exactly the same way as this.)
But this is clearly equivalent to how hypotheses redistribute weight during Bayesian updates!
So, a market of Kelly betters re-distributes money according to Bayesian updates.
Altruistic Bets
===============
Therefore, we can interpret the superrational coalition members as betting their coalition weight, according to the Kelly criterion.
But, this is a pretty weird thing to do!
I've [argued](https://www.lesswrong.com/posts/DfZtwtGD6ymFtXmdA/kelly-is-just-about-logarithmic-utility) that the main sensible justification for using the Kelly criterion is if you have utility logarithmic in wealth. Here, this translates to utility logarithmic in coalition weight.
It's *possible* that under some reasonable assumptions about the world, we can argue that utility of coalition members will end up approximately logarithmic. But Critch's theorem applies to lots of situations, including small ones where there isn't any possibility for [weird things to happen over long chains of bets](https://www.lesswrong.com/posts/HLCcTypehEJtstNnD/a-non-logarithmic-argument-for-kelly) as in some arguments for Kelly.
Typically, final utility will not even be *continuous* in coalition weight: small changes in coalition weight often won't change the optimal strategy at all, but at select tipping points, the optimal strategy will totally change to reflect the reconfigured trade-offs between preferences.
Intuitively, these tipping points *should* factor significantly in a coalition member's betting strategy; you'd be totally indifferent to small bets which can't change anything, but avoid specific transitions strongly, and seek out others. If the coalition members were betting based on their selfish preferences, this would be the case.
Yet, the coalition members end up betting according to a very simple formula, which does not account for any of this.
Why?
We can't justify this betting behavior from a selfish perspective (that is, not with the usual decision theories); as I said, the bets don't make sense.
But we're not dealing with selfish agents. These agents are acting according to a Pareto-optimal policy.
And that's ultimately the perspective we can justify the bets from: these are *altruistically motivated bets.* Exchanging coalition weight in this way is *best for everyone*. It keeps you Pareto-optimal!
This is very counterintuitive. I suspect most people would agree with me that there *seems* to be no reason to bet, if you're being altruistic rather than selfish. Not so! They're not betting for their personal benefit. They're betting for the common good!
Of course, that fact is a very straightforward consequence of Critch's theorem. It shouldn't be surprising. Yet, somehow, it didn't stick out to me in quite this way. I was too stuck in the frame of trying to interpret the bets selfishly, as Pareto-improvements which both sides happily agree to.
I'm quite curious whether we can say anything interesting about how altruistic agents would handle money, based on this. I don't think it means altruists should Kelly bet money; money is a very different thing from coalition weight. Coalition weights are like exchange rates or prices. Money is more of a thing being exchanged. You do not *pay* coalition weight in order to get things done.
|
be7aeba0-1d60-4211-bb19-eecd93371161
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Message Length
Someone is broadcasting a stream of bits. You don't know why. A 500-bit-long sample looks like this:
01100110110101011011111100001001110000100011010001101011011010000001010000001010
10100111101000101111010100100101010010101010101000010100110101010011111111010101
01010101011111110101011010101101111101010110110101010100000001101111100000111010
11100000000000001111101010110101010101001010101101010101100111001100001100110101
11111111111111111100011001011010011010101010101100000010101011101101010010110011
11111010111101110100010101010111001111010001101101010101101011000101100000101010
10011001101010101111...
The thought occurs to you to do Science to it—to ponder if there's some way you could better predict what bits are going to come next. At first you think you can't—it's just a bunch of random bits. You can't predict it, because that's what random means.
Or does it? True, if the sequence represented flips of a fair coin—every flip independently landing either 0 or 1 with exactly equal probability—then there would be no way you could predict what would come next: any continuation you could posit would be exactly as probable as any other.
But if the sequence represented flips of a biased coin—if, say, 1 came up 0.55 of the time instead of exactly 0.5—then it would be possible to predict better or worse. Your best bet for the next bit in isolation would always be 1, and you would more strongly anticipate sequences with slightly more 1s than 0s.
You count 265 1s in the sample of 500 bits. Given the hypothesis that the bits were generated by a fair coin, the number of 1s (or without loss of generality, 0s) would be given by the binomial distribution (500k)(0.5)k(0.5)500−k, which has a standard deviation of √500⋅0.52=√125≈11.18, so your observation of 265−250=15 excess 1s is about 1511.18≈1.34 standard deviations from the mean—well within the realm of plausibility of happening by chance, although you're at least slightly suspicious that the coin behind these bits migh
|
3a48055e-762b-4b76-ad05-676398e78c14
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Robin Hanson: Why is Abstraction both Statusful and Silly?
Discussion article for the meetup : Robin Hanson: Why is Abstraction both Statusful and Silly?
WHEN: 14 July 2014 07:00:00PM (-0400)
WHERE: Citadel House, 98 Elm St Apt 1, Somerville, MA
Robin Hanson will give a short informal talk followed by a discussion on the reasons why abstraction is both statusful and silly! Meetup starts at 7pm, talk starts at 7:30pm.
Discussion article for the meetup : Robin Hanson: Why is Abstraction both Statusful and Silly?
|
5ddf384a-8198-42c3-be8c-460cbd57893b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Putanumonit - Convincing people to read the Sequences and wondering about "postrationalists"
|
4cc24638-9a49-4ef8-b74b-e9b45d823a55
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How do natural sciences prove causation?
Let's say we have 2 phenomena, A and B, each can be a value of 0 or 1, and we observe, that for them implication table A=>B is always true. (Third column represents whether the combinations of events can happen or cannot.) A B A=>B
0 0 1
0 1 1
1 0 0
1 1 1
Thing we see is that combination A=1 and B=0 almost never happens, and three other combinations can happen. But how can we be sure, that it is not some kind of third event, which influences both of those to output those combinations of values?
What if the table is like this, based on &?
0 0 0
0 1 0
1 0 0
1 1 1
If at least one of events (any) happens, then the second happens too. How this relation would be called?
How many tables (of the 16) there are which could potentially represent causation? For example,
000
010
101
111
* is not in the set, because it says, that A not being true is impossible(has never been observed), but has no limitation on the value of B.
Given a table in this format (or just a string representing the third column), what a person would do next to test whether events influence one another or are both defined by a third? Does the further approach even depend on a type of table? (I expect it will not, as everything would be observed in frequencies)
Is it like an cycle, where one makes an assumption "what if they both are defined by event C" and then shows there is no correlation with C, and repeats for many other possible C's? But then it looks it would not be possible to exhaust all possible C's.
|
b63bf10a-b926-4ad5-ac0b-a5fe020760ab
|
StampyAI/alignment-research-dataset/arxiv
|
Arxiv
|
Dynamically Switching Human Prediction Models for Efficient Planning.
I Introduction
---------------
When robots operate in close proximity to humans, it is crucial that they anticipate what people will do to respond appropriately.
Such prediction often involves equipping the robot with a model of human behavior [[1](#bib.bib1)].
This model could be physics-based [[2](#bib.bib2), [3](#bib.bib3), [4](#bib.bib4)], pattern-based [[5](#bib.bib5), [6](#bib.bib6), [7](#bib.bib7), [8](#bib.bib8)], approximate rationality with respect to an objective function [[9](#bib.bib9), [10](#bib.bib10), [11](#bib.bib11), [12](#bib.bib12), [13](#bib.bib13), [14](#bib.bib14), [15](#bib.bib15), [16](#bib.bib16), [17](#bib.bib17)],
or even two player games [[18](#bib.bib18), [19](#bib.bib19), [20](#bib.bib20), [21](#bib.bib21)].
What human model a robot should be equipped with depends on the trade-offs it needs to make: some models, like the physics-based ones, are cheap to compute but don’t capture human intentions and may be less accurate; others, like two player games, model interactions between the person and the robot, but at a high computational cost.
For systems interacting with people in real time, like autonomous vehicles or mobile robots, compromising on either accuracy or computation is undesirable. For instance, in the driving scenario in Fig. [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ Dynamically Switching Human Prediction Models for Efficient Planning"), using the cheap model might pose a safety hazard, while picking the more accurate one may limit planning frequency or strain computational resources needed elsewhere (sensor suite, perception system, routing, etc).

Figure 1: The robot (orange car) can plan with either a complex model (yellow bubble) of the human (blue car) that is more accurate but more expensive or with a simple model (purple bubble) that is not as accurate but cheaper. Our algorithm uses the complex model when a collision is imminent (solid yellow line), but saves computation by switching to the simple one afterwards (solid purple line).
We advocate that the robot should not be stuck with a single model, but instead have the ability to dynamically change which model it is using as the interaction progresses. In Fig. [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ Dynamically Switching Human Prediction Models for Efficient Planning"), the robot starts off with the cheap model. Anticipating a potential collision, it switches to a more complex model, reverting back once the critical maneuver is complete.
The idea of using multiple predictive models is not new; most works, like interactive multiple model filtering [[22](#bib.bib22), [23](#bib.bib23), [24](#bib.bib24), [25](#bib.bib25)] or other ways of combining models [[25](#bib.bib25)] are focused on improving accuracy by leveraging complementary strengths of different models. In contrast, we are focused on the setting where we have complex but accurate models (which could be mixtures themselves), and cheap but less accurate ones. In such settings, if it weren’t for computational costs, we would use the complex models all the time. The question becomes: when is the performance gain worth the extra computation?
We could plan with the complex model and measure the performance gain, but doing so defeats the purpose of saving computation.
To avoid this, prior work proposed training a model to predict which agents “influence” or “affect” the planner, and prioritizing computation for them [[26](#bib.bib26)].
This approach does not estimate the performance-computation trade off, but heuristically assumes that agents who influence the planner are conducive to high gain.
However, there is a fundamental difference between needing to consider an agent at all, and estimating the performance gain between predicting their behavior using a cheap versus a complex model.
For example, in Fig. [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ Dynamically Switching Human Prediction Models for Efficient Planning"), a cheap model is sufficient in the leftmost and rightmost frames where a critical maneuver is not necessary, despite the human influencing the planner.
Instead of heuristically allocating computation, we employ efficient, online estimation for how an alternate human model could change robot performance. This enables switching to the model which best trades off reward and computation in real time.
In Fig. [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ Dynamically Switching Human Prediction Models for Efficient Planning"), a car using our switching methodology is able to achieve a behavior similar to the one produced with the most accurate model, but at a computational cost closer to that of the cheaper model.
This paper makes three key contributions: (1) a formalism for the robot’s decision process that optimally trades off between computation and accuracy across multiple predictive models, (2) an approximate solution that solves this decision process online, and (3) a comparative analysis of our model switching algorithm in a human-autonomous car system. Together, these contributions give robots the autonomy to decide in real-time what predictive human models are most appropriate to use in different interactive scenarios. Code and videos are made available at [arjunsripathy.github.io/model\_switching](https://arjunsripathy.github.io/model_switching)
II Method
----------
We focus on a system consisting of a robot R𝑅Ritalic\_R interacting in an environment with other human agents H𝐻Hitalic\_H.
The robot’s goal is to plan around the humans in a manner that is most effective while minimizing computational time.
We present our theory for a general single human, single robot setting where the other agents’ behavior is known, although our method can easily be extended by running it separately for each human.
We use the running example of an autonomous car sharing the road with a human driver to illustrate the proposed approach and demonstrate the utility of our method.
###
II-A Problem Statement
We model the system as a fully observable dynamical system in which one agent’s controls potentially impact the other’s. As such, let the state x∈𝒳𝑥𝒳x\in\mathcal{X}italic\_x ∈ caligraphic\_X include the positions and velocities of both agents, where the human action uH∈𝒰Hsubscript𝑢𝐻subscript𝒰𝐻u\_{H}\in\mathcal{U}\_{H}italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ∈ caligraphic\_U start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT and robot action uR∈𝒰Rsubscript𝑢𝑅subscript𝒰𝑅u\_{R}\in\mathcal{U}\_{R}italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ∈ caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT each can affect the next state in a combined dynamics model: xt+1=f(xt,uRt,uHt)superscript𝑥𝑡1𝑓superscript𝑥𝑡superscriptsubscript𝑢𝑅𝑡superscriptsubscript𝑢𝐻𝑡x^{t+1}=f(x^{t},u\_{R}^{t},u\_{H}^{t})italic\_x start\_POSTSUPERSCRIPT italic\_t + 1 end\_POSTSUPERSCRIPT = italic\_f ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ).
Let 𝐱=[x1,…,xN]𝐱superscript𝑥1…superscript𝑥𝑁\mathbf{x}=[x^{1},\ldots,x^{N}]bold\_x = [ italic\_x start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_x start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ] be a finite horizon N𝑁Nitalic\_N state sequence, 𝐮R=[uR1,…,uRN]subscript𝐮𝑅superscriptsubscript𝑢𝑅1…superscriptsubscript𝑢𝑅𝑁\mathbf{u}\_{R}=[u\_{R}^{1},\ldots,u\_{R}^{N}]bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = [ italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ] the robot’s continuous control inputs, and 𝐮H=[uH1,…,uHN]subscript𝐮𝐻superscriptsubscript𝑢𝐻1…superscriptsubscript𝑢𝐻𝑁\mathbf{u}\_{H}=[u\_{H}^{1},\ldots,u\_{H}^{N}]bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT = [ italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 1 end\_POSTSUPERSCRIPT , … , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT ] the human’s. The robot optimizes its controls 𝐮Rsubscript𝐮𝑅\mathbf{u}\_{R}bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT according to a reward function that depends on the joint sequence of states and controls: RR(x0,𝐮R,𝐮H)=∑τ=1NrR(xτ,uRτ,uHτ)subscript𝑅𝑅superscript𝑥0subscript𝐮𝑅subscript𝐮𝐻superscriptsubscript𝜏1𝑁subscript𝑟𝑅superscript𝑥𝜏superscriptsubscript𝑢𝑅𝜏superscriptsubscript𝑢𝐻𝜏R\_{R}(x^{0},\mathbf{u}\_{R},\mathbf{u}\_{H})=\sum\_{\tau=1}^{N}r\_{R}(x^{\tau},u\_{R}^{\tau},u\_{H}^{\tau})italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_τ = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT ), where x0superscript𝑥0x^{0}italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT is the starting state and each state thereafter is obtained via the dynamics model from the previous robot and human controls.
The person chooses their action at time t𝑡titalic\_t, uHtsuperscriptsubscript𝑢𝐻𝑡u\_{H}^{t}italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT, according to an internal policy πH(xt,uRt)subscript𝜋𝐻superscript𝑥𝑡superscriptsubscript𝑢𝑅𝑡\pi\_{H}(x^{t},u\_{R}^{t})italic\_π start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ), which, when applied at every state xtsuperscript𝑥𝑡x^{t}italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT, results in 𝐮Hsubscript𝐮𝐻\mathbf{u}\_{H}bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT.
We let 𝐮Rsubscript𝐮𝑅\mathbf{u}\_{R}bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT and 𝐮Hsubscript𝐮𝐻\mathbf{u}\_{H}bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT represent the true executed robot and human controls, respectively.
Our system is a Markov Decision Process (MDP) with states 𝒳𝒳\mathcal{X}caligraphic\_X, actions 𝒰Rsubscript𝒰𝑅\mathcal{U}\_{R}caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, transition function f(x,uR,uH)𝑓𝑥subscript𝑢𝑅subscript𝑢𝐻f(x,u\_{R},u\_{H})italic\_f ( italic\_x , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ), and reward rR(x,uR,uH)subscript𝑟𝑅𝑥subscript𝑢𝑅subscript𝑢𝐻r\_{R}(x,u\_{R},u\_{H})italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ).
Since the robot does not know πHsubscript𝜋𝐻\pi\_{H}italic\_π start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT (that would require access to the human’s brain), it uses a *human model* M:𝒳×𝒰RN→𝒰HN:𝑀→𝒳superscriptsubscript𝒰𝑅𝑁superscriptsubscript𝒰𝐻𝑁M:\mathcal{X}\times\mathcal{U}\_{R}^{N}\rightarrow\mathcal{U}\_{H}^{N}italic\_M : caligraphic\_X × caligraphic\_U start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT → caligraphic\_U start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT to make a *prediction* of the human controls ¯𝐮H¯absentsubscript𝐮𝐻\bar{}\mathbf{u}\_{H}over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT. The robot then seeks a *plan* ¯𝐮R0(M)≔¯𝐮R(x0,M)≔¯absentsuperscriptsubscript𝐮𝑅0𝑀¯absentsubscript𝐮𝑅superscript𝑥0𝑀\bar{}\mathbf{u}\_{R}^{0}(M)\coloneqq\bar{}\mathbf{u}\_{R}(x^{0},M)over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT ( italic\_M ) ≔ over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , italic\_M ) that maximizes the MDP reward RRsubscript𝑅𝑅R\_{R}italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT based on
the predicted ¯𝐮H0(¯𝐮R,M)≔¯𝐮H(x0,¯𝐮R,M)≔¯absentsuperscriptsubscript𝐮𝐻0¯absentsubscript𝐮𝑅𝑀¯absentsubscript𝐮𝐻superscript𝑥0¯absentsubscript𝐮𝑅𝑀\bar{}\mathbf{u}\_{H}^{0}(\bar{}\mathbf{u}\_{R},M)\coloneqq\bar{}\mathbf{u}\_{H}(x^{0},\bar{}\mathbf{u}\_{R},M)over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ) ≔ over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ).
Unfortunately, this type of offline planning doesn’t consider modeling errors introduced by imperfect models M𝑀Mitalic\_M. Hence, it is more common for the robot to perform online planning at every time step t𝑡titalic\_t to obtain more accurate plans ¯𝐮Rt(M)¯absentsuperscriptsubscript𝐮𝑅𝑡𝑀\bar{}\mathbf{u}\_{R}^{t}(M)over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M ). For computational efficiency, we follow [[18](#bib.bib18)] and use Model Predictive Control (MPC) [[27](#bib.bib27)] with a finite horizon K<N𝐾𝑁K<Nitalic\_K < italic\_N, where at each time step t𝑡titalic\_t the robot optimizes its controls to maximize the cumulative reward:
| | | | |
| --- | --- | --- | --- |
| | ¯𝐮Rt(M)=argmax𝐮RRR(xt,𝐮R,¯𝐮Ht(𝐮R,M)),¯absentsuperscriptsubscript𝐮𝑅𝑡𝑀subscriptsubscript𝐮𝑅subscript𝑅𝑅superscript𝑥𝑡subscript𝐮𝑅¯absentsuperscriptsubscript𝐮𝐻𝑡subscript𝐮𝑅𝑀\bar{}\mathbf{u}\_{R}^{t}(M)=\arg\max\_{\mathbf{u}\_{R}}R\_{R}(x^{t},\mathbf{u}\_{R},\bar{}\mathbf{u}\_{H}^{t}(\mathbf{u}\_{R},M))\enspace,over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M ) = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ) ) , | | (1) |
where RRsubscript𝑅𝑅R\_{R}italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT is evaluated only over the first K𝐾Kitalic\_K states starting at xtsuperscript𝑥𝑡x^{t}italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT.
The robot executes the first action from its plan and then replans at the next time step t+1𝑡1t+1italic\_t + 1. To simplify notation, we will denote the first action of a plan as u¯Rt(M)≔¯𝐮Rt(M)[0]≔superscriptsubscript¯𝑢𝑅𝑡𝑀¯absentsuperscriptsubscript𝐮𝑅𝑡𝑀delimited-[]0\bar{u}\_{R}^{t}(M)\coloneqq\bar{}\mathbf{u}\_{R}^{t}(M)[0]over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M ) ≔ over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M ) [ 0 ].
The crucial question is what human model M𝑀Mitalic\_M should the robot use for planning with Eq. ([1](#S2.E1 "1 ‣ II-A Problem Statement ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning"))? Restricting ourselves to any single model either hurts performance or computational efficiency. We propose an algorithm which enables efficient switching between models of varying accuracy and complexity, allowing for both high performance and low computation.
###
II-B Model Switching Formalism
We assume the robot has access to a ladder of human models ℳ={M0,…,Mn}ℳsubscript𝑀0…subscript𝑀𝑛\mathcal{M}=\{M\_{0},\ldots,M\_{n}\}caligraphic\_M = { italic\_M start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT }, where as we climb the ladder we sacrifice computational time for greater expected reward:
𝔼x[RR(x,¯𝐮R(Mj),𝐮H)]≥𝔼x[RR(x,¯𝐮R(Mi),𝐮H)]subscript𝔼𝑥delimited-[]subscript𝑅𝑅𝑥¯absentsubscript𝐮𝑅subscript𝑀𝑗subscript𝐮𝐻subscript𝔼𝑥delimited-[]subscript𝑅𝑅𝑥¯absentsubscript𝐮𝑅subscript𝑀𝑖subscript𝐮𝐻\mathbb{E}\_{x}[R\_{R}(x,\bar{}\mathbf{u}\_{R}(M\_{j}),\mathbf{u}\_{H})]\geq\mathbb{E}\_{x}[R\_{R}(x,\bar{}\mathbf{u}\_{R}(M\_{i}),\mathbf{u}\_{H})]blackboard\_E start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT [ italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) , bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) ] ≥ blackboard\_E start\_POSTSUBSCRIPT italic\_x end\_POSTSUBSCRIPT [ italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) ] and T(Mj)>T(Mi),∀i<jformulae-sequence𝑇subscript𝑀𝑗𝑇subscript𝑀𝑖for-all𝑖𝑗T(M\_{j})>T(M\_{i}),\forall i<jitalic\_T ( italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) > italic\_T ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , ∀ italic\_i < italic\_j, where T(M)𝑇𝑀T(M)italic\_T ( italic\_M ) is the time to solve Eq. ([1](#S2.E1 "1 ‣ II-A Problem Statement ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")) under M𝑀Mitalic\_M.
At every time step t𝑡titalic\_t, the robot needs to choose the human model Mt∈ℳsuperscript𝑀𝑡ℳM^{t}\in\mathcal{M}italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∈ caligraphic\_M to use for planning ¯𝐮Rt¯absentsuperscriptsubscript𝐮𝑅𝑡\bar{}\mathbf{u}\_{R}^{t}over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT.
To determine which model the robot should use at time t𝑡titalic\_t, we construct a *meta MDP* on top of the previous MDP, with states xtsuperscript𝑥𝑡x^{t}italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT, actions Mt∈ℳsuperscript𝑀𝑡ℳM^{t}\in\mathcal{M}italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ∈ caligraphic\_M representing the choice of human model, transition function f(xt,u¯Rt(Mt),uHt)𝑓superscript𝑥𝑡superscriptsubscript¯𝑢𝑅𝑡superscript𝑀𝑡superscriptsubscript𝑢𝐻𝑡f(x^{t},\bar{u}\_{R}^{t}(M^{t}),u\_{H}^{t})italic\_f ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ), and meta-reward rmetat=rR(xt,u¯Rt(Mt),uHt)−λ\*T(Mt)superscriptsubscript𝑟𝑚𝑒𝑡𝑎𝑡subscript𝑟𝑅superscript𝑥𝑡superscriptsubscript¯𝑢𝑅𝑡superscript𝑀𝑡superscriptsubscript𝑢𝐻𝑡𝜆𝑇superscript𝑀𝑡r\_{meta}^{t}=r\_{R}(x^{t},\bar{u}\_{R}^{t}(M^{t}),u\_{H}^{t})-\lambda\*T(M^{t})italic\_r start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT = italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ) - italic\_λ \* italic\_T ( italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ), with λ𝜆\lambdaitalic\_λ trading off actual reward gained by planning under Mtsuperscript𝑀𝑡M^{t}italic\_M start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT and computational time spent on the plan.
Lower values for λ𝜆\lambdaitalic\_λ favor more complex models, whereas higher values result in more usage of less expensive ones.
Solving this MDP exactly is impossible, since it requires access to the true human controls 𝐮Hsubscript𝐮𝐻\mathbf{u}\_{H}bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ahead of time. Moreover, even if we approximate the true human controls with those predicted by our most accurate model Mnsubscript𝑀𝑛M\_{n}italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT, the robot needs to find the model sequence that maximizes the cumulative meta reward across the episode—a procedure exponential in the episode horizon and, thus, intractable.
We could alleviate this computational burden by assuming the robot myopically decides when to switch models: instead of considering the cumulative meta reward, only look at rmetatsuperscriptsubscript𝑟𝑚𝑒𝑡𝑎𝑡r\_{meta}^{t}italic\_r start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT to decide whether to switch at time t+1𝑡1t+1italic\_t + 1. On the plus side, this simplification is more tractable and only needs the current human control. Unfortunately, even this relaxation requires planning with every model and picking the one with the best rmetatsuperscriptsubscript𝑟𝑚𝑒𝑡𝑎𝑡r\_{meta}^{t}italic\_r start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT,
which is worse than simply using Mnsubscript𝑀𝑛M\_{n}italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT from the get-go. Thus, we now propose an approximate solution that avoids computing every plan while still switching models as needed.
###
II-C Approximate Solution: Switching between Two Models
For ease of exposition, we first discuss how the robot can decide whether to switch from a simple model M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, used for planning at timestep t𝑡titalic\_t, to a complex model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, which we are considering for timestep t+1𝑡1t+1italic\_t + 1. We are interested in estimating the reward the robot would gain from using M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT.
Estimate Change in Robot Plan
Leveraging our plan computed at timestep t𝑡titalic\_t using M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT, we get around explicitly generating ¯𝐮Rt(M2)¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2\bar{}\mathbf{u}\_{R}^{t}(M\_{2})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) by approximating it as ^𝐮Rt(M2)=¯𝐮Rt(M1)+Δ^𝐮R^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1Δ^absentsubscript𝐮𝑅\hat{}\mathbf{u}\_{R}^{t}(M\_{2})=\bar{}\mathbf{u}\_{R}^{t}(M\_{1})+\Delta\hat{}\mathbf{u}\_{R}over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT.
Here, we want to choose Δ^𝐮RΔ^absentsubscript𝐮𝑅\Delta\hat{}\mathbf{u}\_{R}roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT such that it maximizes the robot reward RRsubscript𝑅𝑅R\_{R}italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT under the complex model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT.
However, optimizing Δ^𝐮RΔ^absentsubscript𝐮𝑅\Delta\hat{}\mathbf{u}\_{R}roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT using Eq. ([1](#S2.E1 "1 ‣ II-A Problem Statement ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")) is equivalent to planning.
To get an efficient estimate, we use a quadratic Taylor series approximation of RRsubscript𝑅𝑅R\_{R}italic\_R start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, denoted R~Rsubscript~𝑅𝑅\tilde{R}\_{R}over~ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, evaluated around (xt,¯𝐮Rt(M1),¯𝐮Ht(¯𝐮Rt(M1),M1))superscript𝑥𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1¯absentsuperscriptsubscript𝐮𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀1(x^{t},\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),\bar{}\mathbf{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{1}))( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ), our current plan and human prediction:
| | | | |
| --- | --- | --- | --- |
| | Δ^𝐮R=argmaxΔ𝐮RR~R(xt,^𝐮Rt(M2),¯𝐮Ht(^𝐮Rt(M2),M2)).Δ^absentsubscript𝐮𝑅subscriptΔsubscript𝐮𝑅subscript~𝑅𝑅superscript𝑥𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2¯absentsuperscriptsubscript𝐮𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀2\Delta\hat{}\mathbf{u}\_{R}=\arg\max\_{\Delta\mathbf{u}\_{R}}\tilde{R}\_{R}(x^{t},\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),\bar{}\mathbf{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{2}))\enspace.roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = roman\_arg roman\_max start\_POSTSUBSCRIPT roman\_Δ bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT over~ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ) . | | (2) |
Estimate Change in Human Prediction
Eq. ([2](#S2.E2 "2 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")) requires approximating the M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT human’s response ¯𝐮Ht¯absentsuperscriptsubscript𝐮𝐻𝑡\bar{}\mathbf{u}\_{H}^{t}over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT to the robot’s plan ^𝐮Rt(M2)^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2\hat{}\mathbf{u}\_{R}^{t}(M\_{2})over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ), which can be broken down into M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT’s response to the current plan ¯𝐮Rt(M1)¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1\bar{}\mathbf{u}\_{R}^{t}(M\_{1})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) plus the change coming from Δ^𝐮RΔ^absentsubscript𝐮𝑅\Delta\hat{}\mathbf{u}\_{R}roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT:
^𝐮Ht(^𝐮Rt(M2),M2)=^𝐮Ht(¯𝐮Rt(M1)+Δ^𝐮R,M2)^absentsuperscriptsubscript𝐮𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀2^absentsuperscriptsubscript𝐮𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1Δ^absentsubscript𝐮𝑅subscript𝑀2\hat{}\mathbf{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{2})=\hat{}\mathbf{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1})+\Delta\hat{}\mathbf{u}\_{R},M\_{2})over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ).
We can linearly approximate this:
| | | | |
| --- | --- | --- | --- |
| | ^𝐮Ht(^𝐮Rt(M2),M2)=¯𝐮Ht(¯𝐮Rt(M1),M2)+d^𝐮Hd^𝐮R⋅Δ^𝐮R,^absentsuperscriptsubscript𝐮𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀2¯absentsuperscriptsubscript𝐮𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2⋅𝑑^absentsubscript𝐮𝐻𝑑^absentsubscript𝐮𝑅Δ^absentsubscript𝐮𝑅\hat{}\mathbf{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{2})=\bar{}\mathbf{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})+\frac{d\hat{}\mathbf{u}\_{H}}{d\hat{}\mathbf{u}\_{R}}\cdot\Delta\hat{}\mathbf{u}\_{R}\enspace,over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , | | (3) |
where d^𝐮Hd^𝐮R=d^𝐮H(^𝐮R,M)d^𝐮R|
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=M2
𝑑^absentsubscript𝐮𝐻𝑑^absentsubscript𝐮𝑅evaluated-at𝑑^absentsubscript𝐮𝐻^absentsubscript𝐮𝑅𝑀𝑑^absentsubscript𝐮𝑅
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=M2
\frac{d\hat{}\mathbf{u}\_{H}}{d\hat{}\mathbf{u}\_{R}}=\frac{d\hat{}\mathbf{u}\_{H}(\hat{}\mathbf{u}\_{R},M)}{d\hat{}\mathbf{u}\_{R}}\Bigr{|}\_{\vbox{\Let@\restore@math@cr\default@tag\halign{\ifx cc\hfil\fi$\m@th\scriptstyle#$\hfil\cr\hat{}\mathbf{u}\_{R}=\bar{}\mathbf{u}\_{R}^{t}(M\_{1})\\
M=M\_{2}\crcr}}}divide start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG = divide start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ) end\_ARG start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG | start\_POSTSUBSCRIPT over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) italic\_M = italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT. This requires evaluating the complex model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, which might be expensive, though not nearly as expensive as planning with it.
We may now substitute the changed robot plan ^𝐮Rt(M2)^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2\hat{}\mathbf{u}\_{R}^{t}(M\_{2})over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) and the changed human prediction from Eq. ([3](#S2.E3 "3 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")) into our reward expression we maximize in Eq. ([2](#S2.E2 "2 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning"))
| | | | |
| --- | --- | --- | --- |
| | R~R(xt,¯𝐮Rt(M1)+Δ^𝐮R,¯𝐮Ht(¯𝐮Rt(M1),M2)+d^𝐮Hd^𝐮R⋅Δ^𝐮R).subscript~𝑅𝑅superscript𝑥𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1Δ^absentsubscript𝐮𝑅¯absentsuperscriptsubscript𝐮𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2⋅𝑑^absentsubscript𝐮𝐻𝑑^absentsubscript𝐮𝑅Δ^absentsubscript𝐮𝑅\tilde{R}\_{R}(x^{t},\bar{}\mathbf{u}\_{R}^{t}(M\_{1})+\Delta\hat{}\mathbf{u}\_{R},\bar{}\mathbf{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})+\frac{d\hat{}\mathbf{u}\_{H}}{d\hat{}\mathbf{u}\_{R}}\cdot\Delta\hat{}\mathbf{u}\_{R})\enspace.over~ start\_ARG italic\_R end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) . | | (4) |
Performance Gain from the Change in Robot Plan
Ultimately, to assess the robot’s performance gain by using M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, we want to estimate the reward component of rmetatsuperscriptsubscript𝑟𝑚𝑒𝑡𝑎𝑡r\_{meta}^{t}italic\_r start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT under M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, rR(xt,u^Rt(M2),u¯Ht(^𝐮Rt(M2),MH))subscript𝑟𝑅superscript𝑥𝑡superscriptsubscript^𝑢𝑅𝑡subscript𝑀2superscriptsubscript¯𝑢𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀𝐻r\_{R}(x^{t},\hat{u}\_{R}^{t}(M\_{2}),\bar{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{H}))italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) ), where ¯𝐮Ht(^𝐮Rt(M2),MH)¯absentsuperscriptsubscript𝐮𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀𝐻\bar{}\mathbf{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{H})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) would be the human’s response under the true model MHsubscript𝑀𝐻M\_{H}italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT to the robot’s plan with M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, and u¯Ht(^𝐮Rt(M2),MH)superscriptsubscript¯𝑢𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀𝐻\bar{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{H})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) is the first control of that response.
To simplify notation, we will denote this reward rRt(M2)superscriptsubscript𝑟𝑅𝑡subscript𝑀2r\_{R}^{t}(M\_{2})italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ).
Since our approximation of the meta MDP considers myopic rewards rRsubscript𝑟𝑅r\_{R}italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, we really are only interested in the first control u^Rt(M2)superscriptsubscript^𝑢𝑅𝑡subscript𝑀2\hat{u}\_{R}^{t}(M\_{2})over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) of the changed plan ^𝐮Rt(M2)^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2\hat{}\mathbf{u}\_{R}^{t}(M\_{2})over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ), and the first control u¯Ht(^𝐮Rt(M2),MH)superscriptsubscript¯𝑢𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀𝐻\bar{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{H})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) of the true human’s response to that plan.
Given Δ^𝐮RΔ^absentsubscript𝐮𝑅\Delta\hat{}\mathbf{u}\_{R}roman\_Δ over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, we only need the change for the first control Δu^RΔsubscript^𝑢𝑅\Delta\hat{u}\_{R}roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT combined with the approximation of u¯Ht(^𝐮Rt(M2),MH)superscriptsubscript¯𝑢𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀𝐻\bar{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{H})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) as in Eq. ([3](#S2.E3 "3 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")) to estimate r^Rt(M2)superscriptsubscript^𝑟𝑅𝑡subscript𝑀2\hat{r}\_{R}^{t}(M\_{2})over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) as:
| | | | |
| --- | --- | --- | --- |
| | rR(xt,u¯Rt(M1)+Δu^R,u¯Ht(¯𝐮Rt(M1),MH)+du^Hdu^R⋅Δu^R),subscript𝑟𝑅superscript𝑥𝑡superscriptsubscript¯𝑢𝑅𝑡subscript𝑀1Δsubscript^𝑢𝑅superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀𝐻⋅𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅Δsubscript^𝑢𝑅r\_{R}(x^{t},\bar{u}\_{R}^{t}(M\_{1})+\Delta\hat{u}\_{R},\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{H})+\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}\cdot\Delta\hat{u}\_{R})\enspace,italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) , | | (5) |
where du^Hdu^R=du^H(^𝐮R,M)du^R|
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=MH
𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅evaluated-at𝑑subscript^𝑢𝐻^absentsubscript𝐮𝑅𝑀𝑑subscript^𝑢𝑅
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=MH
\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}=\frac{d\hat{u}\_{H}(\hat{}\mathbf{u}\_{R},M)}{d\hat{u}\_{R}}\Bigr{|}\_{\vbox{\Let@\restore@math@cr\default@tag\halign{\ifx cc\hfil\fi$\m@th\scriptstyle#$\hfil\cr\hat{}\mathbf{u}\_{R}=\bar{}\mathbf{u}\_{R}^{t}(M\_{1})\\
M=M\_{H}\crcr}}}divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG = divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ) end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG | start\_POSTSUBSCRIPT over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) italic\_M = italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT. Here, we don’t know MHsubscript𝑀𝐻M\_{H}italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT, but we can approximate it with the complex model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT.
One Time Step Simplification
All of this requires evaluating the complex model M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT at the current plan ¯𝐮Rt(M1)¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1\bar{}\mathbf{u}\_{R}^{t}(M\_{1})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ), which, although cheaper than fully planning with M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT, is still expensive. Since the meta reward only evaluates the first control of the plans, we can do better by simplifying our formulation further to planning and predicting a single control. That is, we consider the changed control u^Rt(M2)=u¯Rt(M1)+Δu^Rsuperscriptsubscript^𝑢𝑅𝑡subscript𝑀2superscriptsubscript¯𝑢𝑅𝑡subscript𝑀1Δsubscript^𝑢𝑅\hat{u}\_{R}^{t}(M\_{2})=\bar{u}\_{R}^{t}(M\_{1})+\Delta\hat{u}\_{R}over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT, with its corresponding changed human control prediction from Eq. ([3](#S2.E3 "3 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")):
| | | | |
| --- | --- | --- | --- |
| | u^Ht(^𝐮Rt(M2),M2)=u¯Ht(¯𝐮Rt(M1),M2)+du^Hdu^R⋅Δu^R,superscriptsubscript^𝑢𝐻𝑡^absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀2subscript𝑀2superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2⋅𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅Δsubscript^𝑢𝑅\hat{u}\_{H}^{t}(\hat{}\mathbf{u}\_{R}^{t}(M\_{2}),M\_{2})=\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})+\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}\cdot\Delta\hat{u}\_{R}\enspace,over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , | | (6) |
where du^Hdu^R=du^H(^𝐮R,M)du^R|
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=M2
𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅evaluated-at𝑑subscript^𝑢𝐻^absentsubscript𝐮𝑅𝑀𝑑subscript^𝑢𝑅
\Let@\restore@math@cr\default@tag
^uR=¯uRt(M1)M=M2
\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}=\frac{d\hat{u}\_{H}(\hat{}\mathbf{u}\_{R},M)}{d\hat{u}\_{R}}\Bigr{|}\_{\vbox{\Let@\restore@math@cr\default@tag\halign{\ifx cc\hfil\fi$\m@th\scriptstyle#$\hfil\cr\hat{}\mathbf{u}\_{R}=\bar{}\mathbf{u}\_{R}^{t}(M\_{1})\\
M=M\_{2}\crcr}}}divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG = divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , italic\_M ) end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG | start\_POSTSUBSCRIPT over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT = over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) italic\_M = italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT.
We obtain Δu^RΔsubscript^𝑢𝑅\Delta\hat{u}\_{R}roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT by optimizing the following objective:
| | | | |
| --- | --- | --- | --- |
| | argmaxΔuRr~R(xt,u^Rt(M2),u¯Ht(¯𝐮Rt(M1),M2)+du^Hdu^R⋅ΔuR).subscriptΔsubscript𝑢𝑅subscript~𝑟𝑅superscript𝑥𝑡superscriptsubscript^𝑢𝑅𝑡subscript𝑀2superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2⋅𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅Δsubscript𝑢𝑅\arg\max\_{\Delta u\_{R}}\tilde{r}\_{R}(x^{t},\hat{u}\_{R}^{t}(M\_{2}),\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})+\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}\cdot\Delta u\_{R})\enspace.roman\_arg roman\_max start\_POSTSUBSCRIPT roman\_Δ italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT over~ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) . | | (7) |
Our simplified derivation still requires the gradient du^Hdu^R𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG, but this is cheaper to compute than d^𝐮Hd^𝐮R𝑑^absentsubscript𝐮𝐻𝑑^absentsubscript𝐮𝑅\frac{d\hat{}\mathbf{u}\_{H}}{d\hat{}\mathbf{u}\_{R}}divide start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG.
Armed with an estimated r^Rt(M2)superscriptsubscript^𝑟𝑅𝑡subscript𝑀2\hat{r}\_{R}^{t}(M\_{2})over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) from Eq. ([5](#S2.E5 "5 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), we can now determine whether the performance gain from M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is worth the additional compute by considering
| | | | |
| --- | --- | --- | --- |
| | Δrmetat=r^Rt(M2)−λ\*T(M2)−(rRt(M1)−λ\*T(M1)),Δsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎superscriptsubscript^𝑟𝑅𝑡subscript𝑀2𝜆𝑇subscript𝑀2superscriptsubscript𝑟𝑅𝑡subscript𝑀1𝜆𝑇subscript𝑀1\Delta r^{t}\_{meta}=\hat{r}\_{R}^{t}(M\_{2})-\lambda\*T(M\_{2})-(r\_{R}^{t}(M\_{1})-\lambda\*T(M\_{1}))\enspace,roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT = over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) - italic\_λ \* italic\_T ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) - ( italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) - italic\_λ \* italic\_T ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) , | | (8) |
where we already know rRt(M1)superscriptsubscript𝑟𝑅𝑡subscript𝑀1r\_{R}^{t}(M\_{1})italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) from the previous time step, and T(M1)𝑇subscript𝑀1T(M\_{1})italic\_T ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) and T(M2)𝑇subscript𝑀2T(M\_{2})italic\_T ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) are known a priori from their model specifications. If ΔrmetatΔsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎\Delta r^{t}\_{meta}roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT is positive, the robot should switch to M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT; otherwise, the robot should continue using M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT.
Intuitive Interpretation
In Eq. ([8](#S2.E8 "8 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), since T𝑇Titalic\_T’s are constants, the robot’s decision of whether to switch relies on comparing rRt(M1)=rR(xt,u¯Rt(M1),u¯Ht(¯𝐮Rt(M1),M1))superscriptsubscript𝑟𝑅𝑡subscript𝑀1subscript𝑟𝑅superscript𝑥𝑡superscriptsubscript¯𝑢𝑅𝑡subscript𝑀1superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀1r\_{R}^{t}(M\_{1})=r\_{R}(x^{t},\bar{u}\_{R}^{t}(M\_{1}),\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{1}))italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) = italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) ) to r^Rt(M2)=rR(xt,u¯Rt(M1)+Δu^R,u¯Ht(¯𝐮Rt(M1),M2)+du^Hdu^R⋅Δu^R)superscriptsubscript^𝑟𝑅𝑡subscript𝑀2subscript𝑟𝑅superscript𝑥𝑡superscriptsubscript¯𝑢𝑅𝑡subscript𝑀1Δsubscript^𝑢𝑅superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2⋅𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅Δsubscript^𝑢𝑅\hat{r}\_{R}^{t}(M\_{2})=r\_{R}(x^{t},\bar{u}\_{R}^{t}(M\_{1})+\Delta\hat{u}\_{R},\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})+\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}\cdot\Delta\hat{u}\_{R})over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) = italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) + roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) + divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG ⋅ roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ). Looking at these rewards, a few key distinctions stand out.
First, the robot is interested in knowing whether M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT would give a different prediction u¯Ht(¯𝐮Rt(M1),M2)superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀2\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{2})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) than M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT’s u¯Ht(¯𝐮Rt(M1),M1)superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀1subscript𝑀1\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{1}),M\_{1})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT ) on the current plan. For example, in Fig. [1](#S1.F1 "Figure 1 ‣ I Introduction ‣ Dynamically Switching Human Prediction Models for Efficient Planning") we realize a more complex model foresees the human turning into the bottleneck compared to naive constant velocity. Here this foresight prevents a collision.
Meanwhile, the derivative du^Hdu^R𝑑subscript^𝑢𝐻𝑑subscript^𝑢𝑅\frac{d\hat{u}\_{H}}{d\hat{u}\_{R}}divide start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_ARG start\_ARG italic\_d over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT end\_ARG captures the influence that the robot’s controls have on the human’s. Simple models may ignore this, but more complex models, like the two player game, capture this dynamic enabling certain critical maneuvers. For instance, humans will yield should the robot merge to create proper spacing. Only if aware of this influence will the robot feel confident merging into tight windows.
So how much do these two terms matter? Intuitively, by computing Δu^RΔsubscript^𝑢𝑅\Delta\hat{u}\_{R}roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT via Eq. ([7](#S2.E7 "7 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), we see how much a model that either captures influence or makes different predictions affects the robot’s plan. If neither bears much weight, then Δu^RΔsubscript^𝑢𝑅\Delta\hat{u}\_{R}roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT will likely be negligible and we will not switch. If however Δu^RΔsubscript^𝑢𝑅\Delta\hat{u}\_{R}roman\_Δ over^ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT is significant, we further evaluate if that translates to significant performance gain. Only when that’s the case will we ultimately switch.
Input: Ladder ℳ={M0,…,Mn}ℳsubscript𝑀0…subscript𝑀𝑛\mathcal{M}=\{M\_{0},\ldots,M\_{n}\}caligraphic\_M = { italic\_M start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT }, episode time N𝑁Nitalic\_N.
Start with time t=0𝑡0t=0italic\_t = 0, current model index i𝑖iitalic\_i.
while *t≤N𝑡𝑁t\leq Nitalic\_t ≤ italic\_N* do
Compute ¯𝐮Rt(Mi)¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀𝑖\bar{}\mathbf{u}\_{R}^{t}(M\_{i})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) given xtsuperscript𝑥𝑡x^{t}italic\_x start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT and execute u¯Rt(Mi)superscriptsubscript¯𝑢𝑅𝑡subscript𝑀𝑖\bar{u}\_{R}^{t}(M\_{i})over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ).
if *i<n𝑖𝑛i<nitalic\_i < italic\_n* then
Substitute (M1,M2)←(Mi,Mn)←subscript𝑀1subscript𝑀2subscript𝑀𝑖subscript𝑀𝑛(M\_{1},M\_{2})\leftarrow(M\_{i},M\_{n})( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ← ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) in Eq. ([5](#S2.E5 "5 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([7](#S2.E7 "7 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([8](#S2.E8 "8 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), with u¯Ht(¯𝐮Rt(Mi),Mn)←uHt←superscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀𝑖subscript𝑀𝑛superscriptsubscript𝑢𝐻𝑡\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{i}),M\_{n})\leftarrow u\_{H}^{t}over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ) ← italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT.
Compute ΔrmetatΔsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎\Delta r^{t}\_{meta}roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT using Eq. ([5](#S2.E5 "5 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([7](#S2.E7 "7 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([8](#S2.E8 "8 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")).
if *Δrmetat>0normal-Δsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎0\Delta r^{t}\_{meta}>0roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT > 0* then
i←n←𝑖𝑛i\leftarrow nitalic\_i ← italic\_n (Switch up), continue
if *i>0𝑖0i>0italic\_i > 0 and cooldown complete* then
Substitute (M1,M2)←(Mi,Mi−1)←subscript𝑀1subscript𝑀2subscript𝑀𝑖subscript𝑀𝑖1(M\_{1},M\_{2})\leftarrow(M\_{i},M\_{i-1})( italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT , italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT ) ← ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT , italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT ) in Eq. ([5](#S2.E5 "5 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([7](#S2.E7 "7 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([8](#S2.E8 "8 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")).
Compute ΔrmetatΔsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎\Delta r^{t}\_{meta}roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT using Eq. ([5](#S2.E5 "5 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([7](#S2.E7 "7 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), ([8](#S2.E8 "8 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")).
if *Δrmetat>0normal-Δsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎0\Delta r^{t}\_{meta}>0roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT > 0* then
i←i−1←𝑖𝑖1i\leftarrow i-1italic\_i ← italic\_i - 1 (Switch Down).
Algorithm 1 Dynamic Model Switching
###
II-D Approximate Solution: Switching Between the Ladder
Although we presented our method in the context of switching up, the framework holds when M1subscript𝑀1M\_{1}italic\_M start\_POSTSUBSCRIPT 1 end\_POSTSUBSCRIPT is the current model and M2subscript𝑀2M\_{2}italic\_M start\_POSTSUBSCRIPT 2 end\_POSTSUBSCRIPT is any alternative: higher or lower. When the alternate model is lower the question becomes: is the computational saving worth the loss in reward? Generalizing our derivation to a ladder of models ℳ={M0,…,Mn}ℳsubscript𝑀0…subscript𝑀𝑛\mathcal{M}=\{M\_{0},\ldots,M\_{n}\}caligraphic\_M = { italic\_M start\_POSTSUBSCRIPT 0 end\_POSTSUBSCRIPT , … , italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT }, suppose that at time t𝑡titalic\_t the robot used model Misubscript𝑀𝑖M\_{i}italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT to plan ¯𝐮Rt(Mi)¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀𝑖\bar{}\mathbf{u}\_{R}^{t}(M\_{i})over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ). We would like to decide on a model for time t+1𝑡1t+1italic\_t + 1.
First, we evaluate if it’s worthwhile to switch to a higher model Mjsubscript𝑀𝑗M\_{j}italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT, j>i𝑗𝑖j>iitalic\_j > italic\_i. The robot could consider Mi+1subscript𝑀𝑖1M\_{i+1}italic\_M start\_POSTSUBSCRIPT italic\_i + 1 end\_POSTSUBSCRIPT, the model immediately above, and successively switch up. However, in urgent, safety-critical situations we may need to switch higher than that immediately to avoid an accident. Thus, we exclusively consider the best model and upper bound r^Rt(Mj)superscriptsubscript^𝑟𝑅𝑡subscript𝑀𝑗\hat{r}\_{R}^{t}(M\_{j})over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) with r^Rt(Mn)superscriptsubscript^𝑟𝑅𝑡subscript𝑀𝑛\hat{r}\_{R}^{t}(M\_{n})over^ start\_ARG italic\_r end\_ARG start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT ), the estimated reward from using the best model available. To avoid having to evaluate Mnsubscript𝑀𝑛M\_{n}italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT’s expensive predictions we substitute a perfect prediction u¯Ht(¯𝐮Rt(Mi),MN)≈uHtsuperscriptsubscript¯𝑢𝐻𝑡¯absentsuperscriptsubscript𝐮𝑅𝑡subscript𝑀𝑖subscript𝑀𝑁superscriptsubscript𝑢𝐻𝑡\bar{u}\_{H}^{t}(\bar{}\mathbf{u}\_{R}^{t}(M\_{i}),M\_{N})\approx u\_{H}^{t}over¯ start\_ARG italic\_u end\_ARG start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ) , italic\_M start\_POSTSUBSCRIPT italic\_N end\_POSTSUBSCRIPT ) ≈ italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT. This effectively upper bounds Mnsubscript𝑀𝑛M\_{n}italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT with the true human model MHsubscript𝑀𝐻M\_{H}italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT.
If the robot should have switched down to Mjsubscript𝑀𝑗M\_{j}italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT, j<i𝑗𝑖j<iitalic\_j < italic\_i, we observe that rRt(Mj)≤rRt(Mi−1)≤rRt(Mi)superscriptsubscript𝑟𝑅𝑡subscript𝑀𝑗superscriptsubscript𝑟𝑅𝑡subscript𝑀𝑖1superscriptsubscript𝑟𝑅𝑡subscript𝑀𝑖r\_{R}^{t}(M\_{j})\leq r\_{R}^{t}(M\_{i-1})\leq r\_{R}^{t}(M\_{i})italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_j end\_POSTSUBSCRIPT ) ≤ italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT ) ≤ italic\_r start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ). Thus, for efficient switching, we only consider the model directly below, Mi−1subscript𝑀𝑖1M\_{i-1}italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT. Since T(Mi−1)<T(Mi)𝑇subscript𝑀𝑖1𝑇subscript𝑀𝑖T(M\_{i-1})<T(M\_{i})italic\_T ( italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT ) < italic\_T ( italic\_M start\_POSTSUBSCRIPT italic\_i end\_POSTSUBSCRIPT ), it is reasonable to compute true model predictions in our approximation.
Finally, if ΔrmetatΔsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎\Delta r^{t}\_{meta}roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT is positive for MHsubscript𝑀𝐻M\_{H}italic\_M start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT, the robot switches up to Mnsubscript𝑀𝑛M\_{n}italic\_M start\_POSTSUBSCRIPT italic\_n end\_POSTSUBSCRIPT. Otherwise, if ΔrmetatΔsubscriptsuperscript𝑟𝑡𝑚𝑒𝑡𝑎\Delta r^{t}\_{meta}roman\_Δ italic\_r start\_POSTSUPERSCRIPT italic\_t end\_POSTSUPERSCRIPT start\_POSTSUBSCRIPT italic\_m italic\_e italic\_t italic\_a end\_POSTSUBSCRIPT is positive for Mi−1subscript𝑀𝑖1M\_{i-1}italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT, the robot switches down to Mi−1subscript𝑀𝑖1M\_{i-1}italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT. Otherwise we stay as summarized in Algorithm [1](#algorithm1 "1 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning").
In practice evaluating Mi−1subscript𝑀𝑖1M\_{i-1}italic\_M start\_POSTSUBSCRIPT italic\_i - 1 end\_POSTSUBSCRIPT’s predictions can still be significant, but unlike switching up, safety and performance concerns do not force us to check every timestep. One may wait K𝐾Kitalic\_K timesteps after failing to switch down before trying again. This cooldown hyperparameter, K𝐾Kitalic\_K, should be set based on how often one actually switches.
III Experiments
----------------



Figure 2: (Top) Average single step computation time over the course of Stay Back (left), Merger (center), and Give Way (right), for a conservative, lesser λ𝜆\lambdaitalic\_λ, switcher (yellow) and an aggressive, larger λ𝜆\lambdaitalic\_λ, one (light blue). (Bottom) Example conservative MS robot (orange) behavior around the target human (blue) and other cars (black).
We now demonstrate the efficacy of our model switching algorithm in three simulated autonomous driving scenarios.
###
III-A Driving Simulator
We model the dynamics of the vehicles as a 4D bicycle model. Let the state of the system be 𝐱=[xyθv]T𝐱superscriptdelimited-[]𝑥𝑦𝜃𝑣𝑇\mathbf{x}=[x\;y\;\theta\;v]^{T}bold\_x = [ italic\_x italic\_y italic\_θ italic\_v ] start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT, where x,y𝑥𝑦x,yitalic\_x , italic\_y are the coordinates of the vehicle, θ𝜃\thetaitalic\_θ is the heading, and v𝑣vitalic\_v the speed. The actions are u=[ωa]T𝑢superscriptdelimited-[]𝜔𝑎𝑇u=[\omega\;a]^{T}italic\_u = [ italic\_ω italic\_a ] start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT, where ω𝜔\omegaitalic\_ω is the steering input and a𝑎aitalic\_a is the linear acceleration. We use α𝛼\alphaitalic\_α as the friction coefficient, and the vehicle’s dynamics model is:
| | | | |
| --- | --- | --- | --- |
| | [x˙y˙θ˙v˙]=[v⋅cos(θ)v⋅sin(θ)v⋅ωa−α⋅v].delimited-[]˙𝑥˙𝑦˙𝜃˙𝑣delimited-[]⋅⋅⋅𝑣𝜃𝑣𝜃𝑣𝜔𝑎⋅𝛼𝑣[\dot{x}\;\dot{y}\;\dot{\theta}\;\dot{v}]=[v\cdot\cos(\theta)\;\;\;v\cdot\sin(\theta)\;\;\;v\cdot\omega\;\;\;a-\alpha\cdot v]\enspace.[ over˙ start\_ARG italic\_x end\_ARG over˙ start\_ARG italic\_y end\_ARG over˙ start\_ARG italic\_θ end\_ARG over˙ start\_ARG italic\_v end\_ARG ] = [ italic\_v ⋅ roman\_cos ( italic\_θ ) italic\_v ⋅ roman\_sin ( italic\_θ ) italic\_v ⋅ italic\_ω italic\_a - italic\_α ⋅ italic\_v ] . | | (9) |
###
III-B Human Predictive Models
For our experiments, we use the following 3 models of varying computational complexity and accuracy:
####
III-B1 Constant Velocity
This model, deemed Naive, predicts that the person will provide zero acceleration control, maintaining their current heading and speed.
####
III-B2 Human Plans First
This model assumes that the person acts according to a reward function parameterized as a linear combination of features ϕitalic-ϕ\phiitalic\_ϕ: RH(x0,𝐮R,𝐮H)=∑τ=1NrH(xτ,uRτ,uHτ)=∑τ=1NθTϕ(xτ,uRτ,uHτ)subscript𝑅𝐻superscript𝑥0subscript𝐮𝑅subscript𝐮𝐻superscriptsubscript𝜏1𝑁subscript𝑟𝐻superscript𝑥𝜏superscriptsubscript𝑢𝑅𝜏superscriptsubscript𝑢𝐻𝜏superscriptsubscript𝜏1𝑁superscript𝜃𝑇italic-ϕsuperscript𝑥𝜏superscriptsubscript𝑢𝑅𝜏superscriptsubscript𝑢𝐻𝜏R\_{H}(x^{0},\mathbf{u}\_{R},\mathbf{u}\_{H})=\sum\_{\tau=1}^{N}r\_{H}(x^{\tau},u\_{R}^{\tau},u\_{H}^{\tau})=\sum\_{\tau=1}^{N}\theta^{T}\phi(x^{\tau},u\_{R}^{\tau},u\_{H}^{\tau})italic\_R start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_τ = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT italic\_r start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT ) = ∑ start\_POSTSUBSCRIPT italic\_τ = 1 end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_N end\_POSTSUPERSCRIPT italic\_θ start\_POSTSUPERSCRIPT italic\_T end\_POSTSUPERSCRIPT italic\_ϕ ( italic\_x start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT , italic\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT start\_POSTSUPERSCRIPT italic\_τ end\_POSTSUPERSCRIPT ).
Additionally, under this model the human believes that the robot will maintain a constant velocity, and optimizes their reward w.r.t the imagined robot plan ~𝐮R~absentsubscript𝐮𝑅\tilde{}\mathbf{u}\_{R}over~ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT to obtain a plan ¯𝐮H¯absentsubscript𝐮𝐻\bar{}\mathbf{u}\_{H}over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT:
| | | | |
| --- | --- | --- | --- |
| | ¯𝐮H(x0,~𝐮R)=argmax𝐮HRH(x0,~𝐮R,𝐮H).¯absentsubscript𝐮𝐻superscript𝑥0~absentsubscript𝐮𝑅subscriptsubscript𝐮𝐻subscript𝑅𝐻superscript𝑥0~absentsubscript𝐮𝑅subscript𝐮𝐻\bar{}\mathbf{u}\_{H}(x^{0},\tilde{}\mathbf{u}\_{R})=\arg\max\_{\mathbf{u}\_{H}}R\_{H}(x^{0},\tilde{}\mathbf{u}\_{R},\mathbf{u}\_{H})\enspace.over¯ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , over~ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT ) = roman\_arg roman\_max start\_POSTSUBSCRIPT bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT end\_POSTSUBSCRIPT italic\_R start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ( italic\_x start\_POSTSUPERSCRIPT 0 end\_POSTSUPERSCRIPT , over~ start\_ARG end\_ARG bold\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT , bold\_u start\_POSTSUBSCRIPT italic\_H end\_POSTSUBSCRIPT ) . | | (10) |
We refer to this model as Turn because the person takes the first turn in choosing their controls.
####
III-B3 Cognizant of Effects on Human Action
Our last model is based on [[18](#bib.bib18)] and models the human as an agent which will optimally respond to the robot’s plan. This results in a nested optimization as the robot accounts for how its plan affects the human’s. As we rationalize the human’s thought process we refer to this model as “Theory of Mind” [[28](#bib.bib28)], or ToM for short.
Planning with the first two approaches involves optimizing the robot’s control against the fixed human prediction. Meanwhile, ToM’s nested optimization returns both a human and robot control with no further optimization required. From our experiments, planning using Turn and ToM takes roughly twice and four times as long as using Naive, respectively. However, their expected accuracy has the reverse ordering.
Both Turn and ToM rely on knowing a reward parameter θ𝜃\thetaitalic\_θ and a set of features ϕitalic-ϕ\phiitalic\_ϕ for the human reward. To learn a good θ𝜃\thetaitalic\_θ for every scenario, we collected demonstrations of a single human driver in an environment with multiple autonomous cars following precomputed routes, and performed inverse reinforcement learning [[29](#bib.bib29)]. The base features we used include higher speed moving forward (against a speed limit), lateral and directional alignment in the lane, collision avoidance, and distance to the boundaries of the road.
Across our experiments, we compare each of these three models against our model switcher (MS) that dynamically chooses between all of them. We additionally wanted to analyze different performances for designers that might be more or less conservative about the reward vs. computation tradeoff, so we show results for different values of λ𝜆\lambdaitalic\_λ.
###
III-C Miscellaneous Experimental Details
We conduct experiments using TensorFlow [[30](#bib.bib30)] 2.1, running on a 2015 MacBook Pro, for gradient calculation and optimization. All planners optimize for a horizon T=5𝑇5T=5italic\_T = 5 using 20 vanilla gradient descent steps. For switching down, we used a cooldown of K=3𝐾3K=3italic\_K = 3.
In Eq. ([2](#S2.E2 "2 ‣ II-C Approximate Solution: Switching between Two Models ‣ II Method ‣ Dynamically Switching Human Prediction Models for Efficient Planning")), the quadratic approximation may be ill conditioned, so we restricted ΔuRΔsubscript𝑢𝑅\Delta u\_{R}roman\_Δ italic\_u start\_POSTSUBSCRIPT italic\_R end\_POSTSUBSCRIPT to exist within some reasonable bounds.
###
III-D Evaluation Strategy
For every scenario, we run every method against the same simulated human driver. For diversity, we vary the starting positions of the human and robot cars across 30 different seeds per scenario. For every time step, we keep track of the reward and computation time.
In Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") we visualize average combined planning and decision time for the MS as it progresses through each scenario. Further, we juxtapose a conservative designer (yellow), who values reward relatively more, with an aggressive one (blue), who prefers computational savings. Snapshots below provide context for key points of the experiment denoted by dashed lines above, taken from a single conservative MS run.
In Fig. [3](#S3.F3 "Figure 3 ‣ III-D Evaluation Strategy ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning"), we showcase the average reward and computational time per episode, averaged across seeds. In each plot, the left bars represent computational time, and the right ones reward, in units given by the left and right axes respectively. Within the total computational time, we separate planning and time deciding a human model.
We hypothesized that our MS algorithm would maintain a reward similar to that of the top model, while being significantly cheaper to compute. Additionally, we expected to see that conservative switchers obtain better rewards than aggressive switchers, but at higher computational complexity.
Note that because we wanted to showcase regions where a conservative switcher would react differently from an aggressive one, the hyperparameter λ𝜆\lambdaitalic\_λ varies widely across our scenarios. This is a reflection of differing reward scales, and in general the designer would have to select an appropriate λ𝜆\lambdaitalic\_λ depending on the problem and desired reward-computation tradeoff. For greater stability and generalization, one may find a highly conservative λ𝜆\lambdaitalic\_λ to be effective, reducing switching sensitivity to situations where the human has little bearing on the reward.



Figure 3: Average computational time and reward for the 3 scenarios, with models ordered by computation. Model switchers achieve comparable performance to the best model with less computation. Note: the aggressive switchers with greater values for λ𝜆\lambdaitalic\_λ are to the left of the conservative ones.
###
III-E Scenario 1: Stay Back
In our first scenario in Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (left), the robot and human begin driving alongside each other at the same speed. Ahead a series of cones creates a bottleneck in the road. Either the robot or human must yield to avoid a collision.
For this scenario, we restrict ourselves to Naive and Turn for simplicity, while the other two will showcase the potential of a broader model ladder.
As shown in Fig. [3](#S3.F3 "Figure 3 ‣ III-D Evaluation Strategy ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (left), the Naive model struggles to correctly anticipate whether the human will go first and act accordingly, but it is much cheaper than Turn which safely navigates the scenario.
Meanwhile, both aggressive or conservative switchers manages to obtain rewards close to that of the Turn, but with computational time closer to Naive. Additionally, notice that the decision time doesn’t add excessive overhead, which underlines the efficiency of our approximation to the meta MDP.
In Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (left), we see that a conservative switcher (λ=0.4𝜆0.4\lambda=0.4italic\_λ = 0.4) will generally use Turn before and during the bottleneck, whereas an aggressive one (λ=5.0𝜆5.0\lambda=5.0italic\_λ = 5.0) uses Turn only to intervene when using Naive is headed towards a collision.
###
III-F Scenario 2: Merger
In the next scenario shown in Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (middle), the robot would like to merge into the left lane. The gap is too small for the robot, but if it gradually angles its hood the target human will yield allowing the robot to enter.
As shown in Fig. [3](#S3.F3 "Figure 3 ‣ III-D Evaluation Strategy ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (middle), only ToM is able to anticipate that the human will yield should it begin entering the lane. However, before and after merging, ToM provides no further advantage, so the switcher can exploit that to reduce computational complexity.
In Fig. [3](#S3.F3 "Figure 3 ‣ III-D Evaluation Strategy ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (middle), we see that both the conservative and aggressive switchers obtain rewards closer to ToM, but with significantly less total computation. In Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (middle), we see that the model switcher only uses ToM for merging, as it provides no comparative advantage afterwards. A conservative model switcher (λ=0.1𝜆0.1\lambda=0.1italic\_λ = 0.1) switches up earlier merging faster, whereas an aggressive one (λ=0.6𝜆0.6\lambda=0.6italic\_λ = 0.6) switches later but requires less computation. Delaying an inevitable switch hurts the aggressive MS, highlighting the importance of a conservative λ𝜆\lambdaitalic\_λ for safety-critical applications Turn is used sparingly during the transition as it provides little comparative advantage to Naive.
###
III-G Scenario 3: Give Way
In our last scenario shown in Fig. [2](#S3.F2 "Figure 2 ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (right), the person is driving alongside the robot and would like to enter the robot’s lane; however, other drivers around the robot do not allow enough space either way. The robot would like to help the human enter. Of course, the robot may create sufficient space by moving forward or backing up.
As shown in Fig. [3](#S3.F3 "Figure 3 ‣ III-D Evaluation Strategy ‣ III Experiments ‣ Dynamically Switching Human Prediction Models for Efficient Planning") (right), ToM is the only one capable of understanding the robot’s ability to help the person to merge. Turn occasionally succeeds when it yields fearing a collision. Both switchers obtain higher rewards than those of the cheaper models, but we notice again a large gap between the two.
The earlier the robot makes space for the person, the better. A conservative model switcher (λ=0.01𝜆0.01\lambda=0.01italic\_λ = 0.01) quickly switches up and allows the human to enter, albeit slower than pure ToM. A more aggressive switcher (λ=0.03𝜆0.03\lambda=0.03italic\_λ = 0.03) delays the switch to ToM resulting in lesser reward, but keeps overall computation lower. The delay between yielding and the human entering presents a challenge for our myopic, one time step, reward simplification. A very low λ𝜆\lambdaitalic\_λ works here, but the issue of myopic gain potentially underestimating longer horizon gain remains.
IV Discussion
--------------
Summary: In this paper, we formalized the robot’s decision making process over which predictive human model to use as a meta MDP. We introduced an approximate solution that enables efficient switching to the most suitable available model within this MDP. The resulting decisions maintain rewards similar to those of the best model available, while dramatically reducing computational time.
Future Work: Because the robot cannot see the human’s true future controls, we were limited to basing switching decisions on what the person did in the past. We could approximate the true human trajectory using the top model’s prediction, but that would relinquish most computational savings. For example, Naive planning and Turn prediction takes as long as Turn planning. Additionally, because the decision to switch relies on a single time step simplification, our scenarios needed consistent reward signal. Future work must address adapting our algorithm to sparse reward settings where one step reward gradients are not meaningful. Learning a value function to replace reward in our formulation would be an interesting direction.
Moreover, all this work happened in a simple driving simulator, albeit with what we think are complex scenarios. To put this on the road, we will need more emphasis on safety, as well as longer decision horizons. Lastly, our algorithm focuses on single human decisions. We could run our method separately for every nearby human, evaluating the differential benefit of switching each, but we are yet to conduct experiments in that setting. Alternatively, we can imagine adapting it to multiple humans by either adding more complex multi-player game theoretic models, or combining it with the prioritization schema presented by [[26](#bib.bib26)].
Conclusion: Despite these limitations, we are encouraged to see robots have more autonomy over what human models to use when planning online, without hand-coded heuristics. We look forward to applications of our model switching ideas beyond autonomous driving: to mobile robots, quadcopters, or any human-robot interactive scenarios where planning with multiple predictive human models might be beneficial.
|
9d8941ce-edbb-484b-b786-608d54659016
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Analogy Bank for AI Safety
> “A great deal of thinking well is about knowing when to think at what levels of abstraction. On the one hand, abstractions drop detail, and the reliability of inferences using them varies in complex ways with context. On the other hand, reasoning abstractly can be much faster and quicker, and can help us transfer understanding from better to less known cases via analogy . . . my best one factor theory to explain the worlds I’ve liked best, such as “rationalists”, is that folks there have an unusually high taste for abstraction . . . Thus my strongest advice for my fellow-traveler worlds of non-academic conversation is: vet your abstractions more. For example, this is my main criticism, which I’ve repeated often, of AI risk discussions. Don’t just accept proposed abstractions and applications.” — Robin Hanson
TL;DR
* I am compiling analogies about AI safety on this sheet.
* You can submit new ones and points for/against the existing ones with this form.
The Function of Analogies
> “All non-trivial abstractions, to some degree, are leaky.” — Joel Spolsky
When you are trying to explain some novel concept A, it can be helpful to point out similarities and differences with a more familiar concept B.
Simple Comparisons
One simple type of comparison you might want to make, when claiming that target A has some property X, is to conjure analogue B, where B trivially has property X. For example, “AI is a technology that might do more harm than good, like cigarettes.” What do you really gain from deploying such an analogy, beyond directly claiming, “AI is a technology that might do more harm than good”? Often, I suspect the answer is: not much. In fact, such a comparison can be more distracting than illuminating because your audience might assume the analogy is a load-bearing warrant for your position and then focus, for example, on whether cigarettes have done more harm than good, instead of engaging with your actual claim about AI. To see how an analogy can distr
|
8df1f5a3-7f0b-408f-a712-8a12a854f814
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Apply to fall policy internships (we can help)
### **Many U.S. congressional internship applications are closing in the next few weeks for Fall (Sep- Dec) internships.** This is a relatively low-effort, high reward thing to do if you you’re interested in testing your fit for policy.
I (Elika) interned in my congressional office for a semester just from off-the-cuff applying to test my fit and build my resume. This experience has been incredibly helpful (I now work for the US government and it gives me some more credibility in D.C). **Many applications are closing within the next 1-2 weeks. We’re offering to** [**support**](https://airtable.com/shrzCEa9YKJdiKlsu) **anyone considering applying.**
**This is a particularly good fit if you’re:**
* Interested in working in policy, politics, or governance solutions to problems
* An undergraduate student
* Able to work part-time (10+ hours per week)
**If you think this could be a good opportunity, we recommend:**
* Reading [this guide to internships](https://forum.effectivealtruism.org/posts/sD5vF6cfuAYh9ZqYZ/congressional-internships-why-and-how-to-apply) which has information on which offices to choose from and how to apply and more (**including** [**this helpful link of all the Congressional office internships**](https://airtable.com/shrwTtjhJSwepvFLo)**)**
* Making a list of offices you think you’d be a good fit for
* Applying! [When in doubt, apply](https://forum.effectivealtruism.org/posts/PhySoajcEcY8EtgKH/when-in-doubt-apply) - there’s no harm in applying if you’re serious about exploring this opportunity. **We’re offering to** [**support**](https://airtable.com/shrzCEa9YKJdiKlsu) **if you’re interested.**
### [**Sign up to get support applying here**](https://airtable.com/shrzCEa9YKJdiKlsu)
Things we can help with:
* Whether or not you’d be a good fit for the positions
* Review your resume, cover letter & offices you’re interested in
* Accountability for submitting applications by the deadline
|
d7aad85e-bdfa-45c1-b7b4-ad24389e8e9f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Philadelphia - The Sword of Good
Discussion article for the meetup : Philadelphia - The Sword of Good
WHEN: 22 September 2013 12:30:00PM (-0400)
WHERE: 1515 Chestnut St, Philadelphia, PA
Location: Givovani's Pizza
Discussion topic: The Sword of Good (A short story - I highly recommend reading it before the meetup if you are able, because it's meant to have some element of surprise.)
Fiction doesn't seem to be used as a discussion topic for meetups very often, but I noticed that we tend to end up talking about stories together anyway, and it usually leads to interesting conversations.
If you have any simple games you'd like to play as a group, please bring them - it will give us something to do in the beginning while people are still arriving.
Discussion article for the meetup : Philadelphia - The Sword of Good
|
1b330bd2-a0f1-490c-a157-31e1f57df5df
|
StampyAI/alignment-research-dataset/arbital
|
Arbital
|
Epistemic exclusion
An "epistemic exclusion" would be a hypothetical form of AI limitation that made the AI not model (and if reflectively stable, not want to model) some particular part of physical or mathematical reality, or model it only using some restricted model class that didn't allow for the maximum possible predictive accuracy. For example, a [behaviorist genie](https://arbital.com/p/102) would not want to model human minds (except using a tightly restricted model class) to avoid [https://arbital.com/p/6v](https://arbital.com/p/6v), [https://arbital.com/p/programmer_manipulation](https://arbital.com/p/programmer_manipulation) and other possible problems.
At present, nobody has investigated how to do this (in any reflectively stable way), and there's all sorts of obvious problems stemming from the fact that, in reality, most facts are linked to a significant number of other facts. How would you make an AI that was really good at predicting everything else in the world but didn't know or want to know what was inside your basement? Intuitively, it seems likely that a lot of naive solutions would, e.g., just cause the AI to *de facto* end up constructing something that wasn't technically a model of your basement, but played the same role as a model of your basement, in order to maximize predictive accuracy about everything that wasn't your basement. We could similarly ask how it would be possible to build a really good mathematician that never knew or cared whether 333 was a prime number, and whether this might require it to also ignore the 'casting out nines' procedure whenever it saw 333 as a decimal number, or what would happen if we asked it to multiply 3 by (100 + 10 + 1), and so on.
That said, most *practical* reasons to create an epistemic exclusion (e.g. [against modeling humans in too much detail](https://arbital.com/p/102), or [against modeling distant alien civilizations and superintelligences](https://arbital.com/p/1fz)) would involve some practical reason the exclusion was there, and some level of in-practice exclusion that was *good enough*, which might not require e.g. maximum predictive accuracy about everything else combined with zero predictive accuracy about the exclusion.
|
308262a9-f3d4-4c6d-99df-0c6d0f3dcdec
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : San Francisco Meetup: Short Talks
Discussion article for the meetup : San Francisco Meetup: Short Talks
WHEN: 19 October 2015 06:15:00PM (-0700)
WHERE: 1597 Howard St. San Francisco, CA
We'll be meeting to give/listen to short talks. Planning isn't necessary: these are not expected to be polished.
I can be reached at 301-458-0764 if you need help getting in. As always, feel free to show up late.
Discussion article for the meetup : San Francisco Meetup: Short Talks
|
5c8ff960-9fee-4bc9-bc62-f3b57b6afa60
|
trentmkelly/LessWrong-43k
|
LessWrong
|
What I Tell You Three Times Is True
"The human brain evidently operates on some variation of the famous principle enunciated in 'The Hunting of the Snark': 'What I tell you three times is true.'"
-- Norbert Weiner, from Cybernetics
Ask for a high-profile rationalist, and you'll hear about Richard Dawkins or James Randi or maybe Peter Thiel. Not a lot of people would immediately name Scott Adams, creator of Dilbert. But as readers of his blog know, he's got a deep interest in rationality, and sometimes it shows up in his comics: for example, this one from last week. How many people can expose several million people to the phrase "Boltzmann brain hypothesis" and have them enjoy it?
So I was very surprised to find Adams was a believer in and evangelist of something that sounded a lot like pseudoscience. "Affirmations" are positive statements made with the belief that saying the statement loud enough and long enough will help it come true. For example, you might say "I will become a syndicated cartoonist" fifteen times before bed every night, thinking that this will in fact make you a syndicated cartoonist. Adams partially credits his success as a cartoonist to doing exactly this.
He admits "it sounds as if I believe in some sort of voodoo or magic", and acknowledges that "skeptics have suggested, and reasonably so, that this is a classic case of selective memory" but still swears that it works. He also has "received thousands of e-mails from people recounting their own experiences with affirmations. Most people seem to be amazed at how well they worked."
None of this should be taken too seriously without a controlled scientific study investigating it, of course. But is it worth the effort of a study, or should it be filed under "so stupid that it's not worth anyone's time to investigate further"?
I think there's a good case to be made from within a rationalist/scientific worldview that affirmations may in fact be effective for certain goals. Not miraculously effective, but not totally useless ei
|
68c90e34-eee2-4a7c-9932-56b28128e791
|
trentmkelly/LessWrong-43k
|
LessWrong
|
European Community Weekend 2018 Announcement
We are excited to announce this year's European LessWrong Community Weekend. For the fifth time, rationalists from all over Europe (and some from outside Europe) are gathering in Berlin to socialize, have fun, exchange knowledge and skills, and have interesting discussions.
The event takes place September 7th to September 9th and, like last year, it will be held in the beautiful Jugendherberge Wannsee which contains a large room for central events, several seminar rooms, and lots of comfortable spaces inside and out to socialize or relax.
This is a community-driven event. That means that while there will be a keynote and pre-planned content, the bulk of the schedule will be filled by the participants. There will be space to give talks, short or long, provide workshops, or just gather some people to do an activity together. In previous years we had the talks, lightning talks and workshops you would expect, as well as lighter activities such as morning-workouts, meditation sessions, authentic relating games, swimming in the lake and many more. Of course, there will also be time to reconnect with friends and form new connections with other aspiring rationalists.
Some valuable information
Most of the talks and discussions will be held in English, so you do not need to be able to speak German to attend.
The ticket price of €150 includes accommodation for two nights, on-site meals (breakfast, lunch, dinner) and snacks, and a welcome lunch on Friday at 12:00.
The event wraps up Sunday afternoon around 15:00. In the days after the weekend, participants are invited to stay in Berlin a little longer to explore the city, go bouldering, play frisbee, etc. While this is not part of the official event, we will coordinate couch-surfing opportunities to avoid the need for hotels.
tl;dr
* When? 7-9 September 2018
* Where? http://jh-wannsee.de
* How much? €150
* Apply here: http://tiny.cc/lwcw2018_signup and
* Submit a contribution to support your application: http://tin
|
fbe705ac-9b38-4cf3-a356-22e32fb4a409
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Image GPT
My hot take:
Not too surprising to me, considering what GPT-3 could do. However there were some people (and some small probability mass remaining in myself) saying that even GPT-3 wasn't doing any sort of reasoning, didn't have any sort of substantial understanding of the world, etc. Well, this is another nail in the coffin of that idea, in my opinion. Whatever this architecture is doing on the inside, it seems to be pretty capable and general.
I don't think this architecture will scale to AGI by itself. But the dramatic success of this architecture is evidence that there are other architectures, not too far away in search space, that exhibit similar computational efficiency and scales-with-more-compute properties, that are useful for more different kinds of tasks.
|
9bdf8487-fba6-4ddf-9625-3dce1d13f705
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Durham: Luminosity followup
Discussion article for the meetup : Durham: Luminosity followup
WHEN: 20 June 2013 07:00:00PM (-0400)
WHERE: Cocoa Cinnamon, 420 W Geer St., Durham NC
A month ago or so we finished going through Alicorn's Luminosity sequence on Less Wrong. So, having done that and worked on Being Luminous for a month or two (or more), what conclusions did you draw?
We'll meet for coffee and informal introductions at 7:00. On-topic conversation will run from 7:30-9:00.
Please give some thought to what you learned (or didn't learn) from the luminosity meetups, and what impact it has had in the time period since.
If you weren't around for all (or even any) of the Luminosity meetups, please feel free to come anyway! Bring some questions about it for people that were, if you like :)
Discussion article for the meetup : Durham: Luminosity followup
|
25609c27-8580-4bf5-a370-b760beb1e99b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
"Bad-for-the-world-ipedia?"
Lately I've been thinking about all of the various services and products I consume and how pretty much all of them are bad for the world in one way or another, large or small. Some of the problems associated with them I am less concerned about. Some of them could be construed as good things (i.e. sweat shop labor DOES provide jobs, whatever impact it might or might not have on the overall quality of life).
In general I'd like to live my life having as minimal a negative impact on the world as possible. But "negative impact" is a hugely broad topic and there are a million variables to consider and I just don't have time.
The best solution, I think, would be to have a wikipedia-like website where individual people with knowledge of specific problems can start tagging specific products with the types of negative consequences associated with them, and (somehow) sort those consequences into categories that individuals can decide how much to worry about. Over time it could eventually become a fairly efficient way to track the utility value of things.
I'm sort of hoping something like this already exists, even if in an infant form, and that someone here knows about it. But I doubt it, so the I guess this falls mostly under the post category of "hey someone other than me should devote a bunch of time and energy to this project that I myself am not qualified to do." But maybe a few people here at least have a better idea than I do of the scope of the requirements for it, so the idea can be refined a bit.
|
d66f50a0-f63c-4132-b6ba-a1ee688d396c
|
StampyAI/alignment-research-dataset/blogs
|
Blogs
|
predictablizing ethic deduplication
predictablizing ethic deduplication
-----------------------------------
does the machinery of the cosmos [deduplicate identical moral patients and/or their experiences](deduplication-ethics.html) ?
this seems like a very difficult question to even address; but the good news is that for the general future we might not have to care about it. we can simply make it that the superintelligence that runs everything, *does* deduplicate (memoize) identical computations and data structures, which guarantees that the ethics we build on top of that (for superintelligence to implement) *can* know about deduplication.
why choose deduplication over no-deduplication? because if we add deduplication on top of any machinery of the cosmos, then we can know for sure deduplication happens, but if we *don't* implement deduplication, then whether computation is deduplicated depends on the machinery of the cosmos.
"but doesn't this require looking inside arbitrarily encoded computations, such as [homomorphic encryption](https://en.wikipedia.org/wiki/Homomorphic_encryption) ?"
that is true, but for an aligned superintelligence we require this *anyways*. otherwise, it could just let unseen pockets of arbitrary suffering happen.
|
b843ad62-21b9-4dd5-b490-262acc7c34d4
|
trentmkelly/LessWrong-43k
|
LessWrong
|
How well do truth probes generalise?
Representation engineering (RepEng) has emerged as a promising research avenue for model interpretability and control. Recent papers have proposed methods for discovering truth in models with unlabeled data, guiding generation by modifying representations, and building LLM lie detectors. RepEng asks the question: If we treat representations as the central unit, how much power do we have over a model’s behaviour?
Most techniques use linear probes to monitor and control representations. An important question is whether the probes generalise. If we train a probe on the truths and lies about the locations of cities, will it generalise to truths and lies about Amazon review sentiment? This report focuses on truth due to its relevance to safety, and to help narrow the work.
Generalisation is important. Humans typically have one generalised notion of “truth”, and it would be enormously convenient if language models also had just one[1]. This would result in extremely robust model insights: every time the model “lies”, this is reflected in its “truth vector”, so we could detect intentional lies perfectly, and perhaps even steer away from them.
We find that truth probes generalise surprisingly well, with the 36% of methodologies recovering >80% of the accuracy on out-of-distribution datasets compared with training directly on the datasets. The best probe recovers 92% accuracy.
Thanks to Hoagy Cunningham for feedback and advice. Thanks to LISA for hosting me while I did a lot of this work. Code is available at mishajw/repeng, along with steps for reproducing datasets and plots.
Methods
We run all experiments on Llama-2-13b-chat, for parity with the source papers. Each probe is trained on 400 questions, and evaluated on 2000 different questions, although numbers may be lower for smaller datasets.
What makes a probe?
A probe is created using a training dataset, a probe algorithm, and a layer.
We pass the training dataset through the model, extracting activations[2] jus
|
43e92390-5e59-45e2-bebc-beb1dde308a7
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Meetup : Stockholm: Bottlenecks to trading personal resources
Discussion article for the meetup : Stockholm: Bottlenecks to trading personal resources
WHEN: 11 November 2016 04:15:00PM (+0100)
WHERE: Lindstedtsvägen 3 room 1537, SE-114 28 Stockholm, Sverige
"Value of time" is often employed by utilitarians. It can be hard to determine one's value of time for a variety of reasons. This talk will be about different currencies of personal life like time, money, and pleasure; and how choices are implicit trades between them. The talk will focus on when it's appropriate to treat these resources as liquid currencies and when it's not. After discussing the theory, and some examples, we'll practice. We'll individually come up with our own exchange rates for personal currencies, then fix one another's estimates in small groups. I hope everyone walks away from this talk with a concrete number for their value of time. The meetup is at a KTH academic building and the room is on the 5th floor, two stairs up. If you want to influence future meetup times, fill out this poll
Discussion article for the meetup : Stockholm: Bottlenecks to trading personal resources
|
ec8bceb2-8839-4339-a309-9a1f89253725
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Neil deGrasse Tyson on Cryonics
Question:
> What are your thoughts on cryogenic preservation and the idea of medically treating aging?
His response:
> A marvelous way to just convince people to give you money. Offer to freeze them for later. I'd have more confidence if we had previously managed to pull this off with other mammals. Until then I see it as a waste of money. I'd rather enjoy the money, and then be buried, offering my body back to the flora and fauna of which I have dined my whole life.
Link
|
0757855c-7d4b-4edc-bd8a-5e9a29df11fe
|
StampyAI/alignment-research-dataset/alignmentforum
|
Alignment Forum
|
TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI
Partly in response to calls for more detailed accounts of how AI could go wrong, e.g., from [Ng and Bengio](https://twitter.com/AndrewYNg/status/1666582174257254402)'s recent exchange on Twitter, here's a new paper with Stuart Russell:
* Discussion on Twitter... comments welcome!
<https://twitter.com/AndrewCritchCA/status/1668476943208169473>
* arXiv draft:
["TASRA: A Taxonomy and Analysis of Societal-Scale Risks from AI"](https://arxiv.org/abs/2306.06924)
Many of the ideas will not be new to LessWrong or the Alignment Forum, but holistically I hope the paper will make a good case to the world for using logically exhaustive arguments to identify risks (which, outside LessWrong, is often not assumed to be a valuable approach to thinking about risk).
I think the most important figure from the paper is this one:
... and, here are some highlights:
* Self-fulfilling pessimism:
<https://arxiv.org/pdf/2306.06924.pdf#page=4>
* Industries that could eventually get out of control in a closed loop:
<https://arxiv.org/pdf/2306.06924.pdf#page=5>
...as in this "production web" story:
<https://arxiv.org/pdf/2306.06924.pdf#page=6>
* Two "bigger than expected" AI impact stories:
<https://arxiv.org/pdf/2306.06924.pdf#page=8>
* Email helpers and corrupt mediators, which kinda go together:
<https://arxiv.org/pdf/2306.06924.pdf#page=10>
<https://arxiv.org/pdf/2306.06924.pdf#page=11>
* Harmful A/B testing:
<https://arxiv.org/pdf/2306.06924.pdf#page=12>
* Concerns about weaponization by criminals and states:
<https://arxiv.org/pdf/2306.06924.pdf#page=13>
Enjoy :)
|
28904644-a7a8-496c-8024-5aeadc1e5d6b
|
trentmkelly/LessWrong-43k
|
LessWrong
|
[LINK] "Academic publishers make Murdoch look like a socialist" says George Monbiot
link
> Who are the most ruthless capitalists in the western world? Whose monopolistic practices make Walmart look like a corner shop and Rupert Murdoch a socialist? You won't guess the answer in a month of Sundays. While there are plenty of candidates, my vote goes not to the banks, the oil companies or the health insurers, but – wait for it – to academic publishers. Theirs might sound like a fusty and insignificant sector. It is anything but. Of all corporate scams, the racket they run is most urgently in need of referral to the competition authorities.
>
> Everyone claims to agree that people should be encouraged to understand science and other academic research. Without current knowledge, we cannot make coherent democratic decisions. But the publishers have slapped a padlock and a "keep out" sign on the gates.
>
> You might resent Murdoch's paywall policy, in which he charges £1 for 24 hours of access to the Times and Sunday Times. But at least in that period you can read and download as many articles as you like. Reading a single article published by one of Elsevier's journals will cost you $31.50. Springer charges €34.95, Wiley-Blackwell, $42. Read 10 and you pay 10 times. And the journals retain perpetual copyright. You want to read a letter printed in 1981? That'll be $31.50.
>
> Of course, you could go into the library (if it still exists). But they too have been hit by cosmic fees. The average cost of an annual subscription to a chemistry journal is $3,792. Some journals cost $10,000 a year or more to stock. The most expensive I've seen, Elsevier's Biochimica et Biophysica Acta, is $20,930. Though academic libraries have been frantically cutting subscriptions to make ends meet, journals now consume 65% of their budgets, which means they have had to reduce the number of books they buy. Journal fees account for a significant component of universities' costs, which are being passed to their students.
>
> Murdoch pays his journalists and editors, and his c
|
dc386e3f-d750-452a-816c-2033939514bc
|
StampyAI/alignment-research-dataset/lesswrong
|
LessWrong
|
Are (at least some) Large Language Models Holographic Memory Stores?
*Cross-posted* [*from New Savanna*](https://new-savanna.blogspot.com/2023/10/are-at-least-some-large-language-models.html).
That’s been on my mind for the last week or two, ever since my recent work on ChatGPT’s memory for texts [1]. On the other than, there’s a sense in which it’s been on my mind for my entire career, or, more accurately, it’s been growing in my mind ever since I read Karl Pribram on neural holography back in 1969 in *Scientific American* [2]. For the moment let’s think of it as a metaphor, just a metaphor, nothing we have to commit to. Just yet. But ultimately, yes, I think it’s more than a metaphor. To that end I note that cognitive psychologists have recently been developing the idea of verbal memory as holographic in nature [3].
Note: These are quick and dirty notes, a place-holder for more considered thought.
**Holography in the mind**
--------------------------
Let’s start with an article David Hays and I published on neural holography as the neural underpinning of metaphor [4]. Here’s where we explain the holographic process:
> Holography is a photographic technique for making images. A beam of laser light is split into two beams. One beam strikes the object and is reflected to a photographic plate. The other beam, called a reference beam, goes from laser to plate directly. When they meet, the two beams create an interference pattern—imagine dropping two stones into a pond at different places; the waves propagating from each of these points will meet and the resulting pattern is an interference pattern. The photographic plate records the pattern of interference between the reference beam and the reflected beam.
>
> The image recorded on the film doesn't look at all like an ordinary photographic image—it’s just a dense mass of fine dots. But when a beam of laser light having the same properties as the original reference beam is directed through the film an image appears in front of the film. The interaction of the laser beam and the hologram has recreated the wave form of the laser beam which bounced off the object when the hologram was made. The new beam has extracted the image from the plate.
>
> Holography is, as its name suggests, holistic. Every part of the scene is represented in every part of the plate. (This situation is most unlike ordinary photography, which uses a good lens to focus infinitesimal parts of the scene onto equally infinitesimal parts of the plate.) With such a determinedly nondigital recording, certain mathematical possibilities can be realized more easily—we are tempted to say, infinitely more easily. For example, convolution. Take the holographic image of a printed page, and the image of a single word. Convolute them. The result is an image of the page with each occurrence of the word highlighted. We can think of visual recognition as a kind of convolution. The present scene, containing several horses, is convoluted with the memory of a horse and the present horses are immediately recognized. We can think of recognition this way, but we must admit that this process has not been achieved in any machine as yet.
>
> Further, it is possible to record many different images on the same piece of film, using different reference beams. The reference beams may differ in color, in angle of incidence, or otherwise. We can think— although again we cannot cite a demonstration—of convoluting such a composite plate with a second plate. If the image in the second plate matches any one of the images in the composite, then it is recognized. For metaphor we want to convolute Achilles and the lion and to recognize, to elicit another image containing not Achilles, not the lion, but just that wherein they resemble one another. Such is the metaphor mechanism—but that must wait until the next section, on focal and residual schemas.
>
>
The 175 billion weights that constitute the LLM at the core of ChatGPT, that’s the holographic memory. It is the superposition of all the texts in the training corpus. The training procedure – predict the next word – is a device for calculating a correlation (entanglement [5]) between each word in context, and every other word in every other text, in context. It’s a tedious process, no? But it works, yes?
When one prompts a trained memory, the prompt serves as a reference beam. And the whole memory must be ‘swept’ to generate each character. Given the nature of digital computers, this is a somewhat sequential process, even given a warehouse full of GPUs, but conceptually it’s a single pass. When one accesses an optical hologram with a reference beam, the beam illuminates the whole holograph. This is what Miriam Yevick called “one-shot” access in her 1975 paper, Holographic or Fourier Logic [6]. The whole memory is searched in a single sweep.
**Style transfer**
------------------
So, that’s the general idea. Much detail remains to be supplied, most of it by people with more technical knowledge than I’ve got. But I want to get in one last idea from the metaphor paper. We’ve been explaining the concepts of focal and residual schemas:
> Now consider a face. Everything we said about the chair applies here as well. But the expression on the face can vary widely and the identity of the face remains constant. This variability of expression can also be handled by the mechanism of focal and residual. There is a focal schema for face-in-neutral-expression and then we have various residuals which can operate on the focal schema to produce various expressions. (You might want to recall D'Arcy Thompson's coordinate transformations in On Growth and Form 1932.) We tend to discard presentation residuals such as lighting and angle of sight, but we respond to expression residuals
>
> Our basic point about metaphor is that the ground which links tenor and vehicle is derived from residuals on them. Consider the following example, from Book Twenty of Homer's Iliad (Lattimore translation, 1951, ll. 163-175)—it has the verbal form of a simile, but the basic conceptual process is, of course, metaphorical:
>
>
> From the other
> side the son of Peleus rose like a lion against him,
> the baleful beast, when men have been straining to kill him, the country
> all in the hunt, and he at first pays them no attention
> but goes his way, only when some one of the impetuous young men
> has hit him with the spear he whirls, jaws open, over his teeth foam
> breaks out, and in the depth of his chest the powerful heart groans;
> he lashes his own ribs with his tail and the flanks on both sides
> as he rouses himself to fury for the fight, eyes glaring,
> and hurls himself straight onward on the chance of killing some one
> of the men, or else being killed himself in the first onrush.
> So the proud heart and fighting fury stirred on Achilleus
> to go forward in the face of great-hearted Aineias.
>
>
> In short, Achilles was a lion in battle. Achilles is the tenor, lion the vehicle, and the ground is some martial virtue “proud heart and fighting fury”. But what of that detailed vignette about the lion's fighting style? Whatever its use in pacing the narrative, its real value, in our view, is that it contains the residuals on which the comparison rests, the residuals which give it life. The phrase “proud heart and fighting fury” is propositional while the fighting style is physiognomic. “Proud heart and fighting fury” may convey something of what is behind the fighting style, but only metaphoric interaction can foreground the complex schema by which we recognize and feel that style.
>
> The cognitive problem is to isolate the physiognomy of style, to tease it apart from the entities which exhibit that style. [...] In the case of Achilles and the lion we have two complex physiognomies, each extended in space and time. Metaphoric comparison serves to isolate the style, to allow us to focus our attention on that style as distinct from the entities which exhibit it.
>
> This comparison involves two foci, Achilles and the lion. The physical resemblance between them is not great—their body proportions are quite different and the lion is covered with fur while Achilles is, depending on the occasion, either naked or clothed in some one of many possible ways. The likeness shows up in the way they move in battle. A body in motion doesn't appear the same as a body at rest. The appearance presented by the focal body is modified by the many residuals which characterize that body's movement— twists and turns, foreshortenings and elongations (for an account of motion residuals, see Hay 1966). The movements of Achilles and the lion must differ at the grossest level, since the lion stands on four legs and fights with claws and teeth, while Achilles stands on two legs and fights with a spear or sword. But their movements are alike at a subtler level, at the level of what we call, in a dancer or a fighter, their style. Residuals can be stacked to many levels. “Proud heart and fighting fury” may be a good phrase to designate that style, but it doesn't allow us to attend to that style. Homer's extended simile does.
>
>
That’s a mouthful, I know. Notice our emphasis on style. That’s what’s got my attention.
One of the more interesting things LLMs can do is stylistic transfer. Take a piece of garden variety prose and present it in the style of Hemingway or Sontag, whomever you choose. Hays and I argued that that’s how metaphor is created, deep metaphor, that is, not metaphor so desiccated we no longer register its metaphorical nature, e.g. the mouth of the river. We made our argument about visual scenes: Achilles in batter, a lion in battle. LLMs apply the same process to texts, where style is considered to be a pattern of residuals over the conceptual content of the text.
More later.
**References**
--------------
[1] Discursive Competence in ChatGPT, Part 2: Memory for Texts, Version 3, <https://www.academia.edu/107318793/Discursive_Competence_in_ChatGPT_Part_2_Memory_for_Texts_Version_3>
[2] I recount that history here: Xanadu, GPT, and Beyond: An adventure of the mind, <https://www.academia.edu/106001453/Xanadu_GPT_and_Beyond_An_adventure_of_the_mind>
[3] Michael N. Jones and Douglas J. K. Mewhort, Representing Word Meaning and Order Information in a Composite Holographic Lexicon, Psychological Review, 2007, Vol. 114, No. 1, 1-37. DOI: <https://doi.org/10.1037/0033-295X.114.1.1>
Donald R. J. Frankin and D. J. K. Mewhort, Memory as a Holograpm: An Analysis of Learning and Recall, *Canadian Journal of Experimental Psychology / Revue canadienne de psychologie expérimentale*, Association 2015, Vol. 69, No. 1, 115–135, <https://doi.org/10.1037/cep0000035>
[4] Metaphor, Recognition, and Neural Process, <https://www.academia.edu/238608/Metaphor_Recognition_and_Neural_Process>
[5] See posts tagged with “entangle”, <https://new-savanna.blogspot.com/search/label/entangle>
[6] Miriam Lipschutz Yevick, Holographic or Fourier Logic, *Pattern Recognition* 7, 197-213, <https://sci-hub.tw/10.1016/0031-3203(75)90005-9>
|
34991c38-6ec9-49c6-905e-a757b8517a84
|
StampyAI/alignment-research-dataset/eaforum
|
Effective Altruism Forum
|
Approfondimenti sui rischi dell’IA (materiali in inglese)
*This is an Italian translation of* [***More to explore on 'Risks from Artificial Intelligence'***](https://forum.effectivealtruism.org/posts/Cf6tNAhDbQFvAwbAg/more-to-explore-on-risks-from-artificial-intelligence)
### **Lo sviluppo dell’intelligenza artificiale**
* [AlphaGo - The Movie - DeepMind](https://tinyurl.com/vsj22235) - Documentario sull’intelligenza artificiale, l’antichissimo gioco del Go e cosa possiamo imparare sulle potenzialità future delle IA. (Filmato - 1 ora e 30 minuti)
* [The Artificial Intelligence Revolution: Part 1](https://tinyurl.com/sumpuw) - Una divertente e interessante esplorazione dell’intelligenza artificiale dal famoso blogger Tim Urban. (45 minuti)
### **Altre risorse sull’allineamento dell’intelligenza artificiale**
* [AGI Safety Fundamentals Curricula](https://www.agisafetyfundamentals.com/)
* [My personal cruxes for working on AI safety](https://forum.effectivealtruism.org/posts/Ayu5im98u8FeMWoBZ/my-personal-cruxes-for-working-on-ai-safety#Problems_solve_themselves) (65 minuti)
* [Professor Stuart Russell on the flaws that make today’s AI architecture unsafe & a new approach that could fix it](https://tinyurl.com/erkwmdhr) (Podcast - 2 ore 15 minuti)
* [Some Background on Our Views Regarding Advanced Artificial Intelligence - Open Philanthropy Project](https://tinyurl.com/m8fzdtc3) - Una spiegazione del perché ci sia una seria possibilità che il progresso dell’intelligenza artificiale potrebbe essere comparabile alla transizione dall’era neolitica alla rivoluzione industriale. (1 ora)
* [The Precipice](https://tinyurl.com/25y25452) (25 minuti)
* [What Failure Looks Like](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) - Due storie specifiche su come potrebbero essere peggiori scenari di una società risultata dal fallimento dell’allineamento di IA, che si sposta considerevolmente dalla classica storia della “esplosione dell’intelligenza” (12 mins.)
* [AGI Safety from first principles](https://tinyurl.com/5w4vb9v8) - L’opinione di un ricercatore di IA sui fattori specifici per il problema di allineamento nell’intelligenza artificiale generale (1 ora 15 minuti)
* [Human Compatible: Artificial Intelligence and The Problem of Control](https://tinyurl.com/sxs2deby) (Libro)
* [The Alignment Problem: Machine Learning and Human Values](https://tinyurl.com/9ae73bvn) (Libro)
### **Governance dell’intelligenza artificiale**
* [The new 30-person research team in DC investigating how emerging technologies could affect national security - 80,000 Hours](https://tinyurl.com/yzajjzhr) - Come cambierebbe la sicurezza internazionale se gli effetti del machine learning fossero di portata simile a quelli dell’elettricità? (Podcast - 2 ore)
* [Technology Roulette: Managing Loss of Control as Many Militaries Pursue Technological Superiority - Center for a New American Security](https://tinyurl.com/yxndes72) - Come i progressi tecnologici in ambito militare (incluse, ma non solo, le IA) possono comportare rischi e creare problemi nel prendere decisioni importanti, degni di attenzione da parte delle strutture di sicurezza nazionali. (60 minuti)
### **Lavori tecnici sull’allineamento dell’IA**
* [AI Alignment Landscape](https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38) (Video - 30 minuti)
* [AI safety starter pack](https://forum.effectivealtruism.org/posts/pbiGHk6AjRxdBPoD8/ai-safety-starter-pack) (7 minuti)
* [How to pursue a career in technical AI alignment](https://forum.effectivealtruism.org/posts/7WXPkpqKGKewAymJf/how-to-pursue-a-career-in-technical-ai-alignment) (59 minuti)
* [Technical Alignment Curriculum](https://www.eacambridge.org/technical-alignment-curriculum) (readings for a 7 week course)
* [AI Alignment Forum](https://www.alignmentforum.org/), soprattutto le loro [sequenze principali](https://www.alignmentforum.org/library)
### **Critiche ai rischi dell’IA**
* [How sure are we about this AI stuff?](https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff) (26 minuti)
* [A tale of 2.75 orthogonality theses](https://forum.effectivealtruism.org/posts/kCAcrjvXDt2evMpBz/a-tale-of-2-75-orthogonality-theses) (20 minuti)
* [How to know if AI is about to destroy civilization](https://forum.effectivealtruism.org/posts/7x2BokkkemjnXD9B6/new-article-from-oren-etzioni) (sommario, 2 minuti)
* [The AI Messiah](https://forum.effectivealtruism.org/posts/r72wjMns9wyaAhWhc/the-ai-messiah) (e il primo commento) (5 minuti)
* [How good is humanity at coordination?](https://www.lesswrong.com/posts/y3jDSoTTdBD9Nj3Gx/how-good-is-humanity-at-coordination) (4 minuti)
|
db02c700-8eec-40b2-95f5-44c4b35dfc0f
|
trentmkelly/LessWrong-43k
|
LessWrong
|
A few questions about Discussion
Since there aren't any subforums and I couldn't find a thread with information of what is relevant for Discussion, and I saw the threads here are relatively free, I've decided to ask my questions in a thread.
- Can I post questions about this section? (sorry if no)
- Can I post about psychology in general?
- Can I post about anything that might be of the interest of a rationalist? Like for example, a thread asking about how to reduce the risks for most cancers. Note that this is a huge range of possible questions.
Edit: thanks, I think I figured it out. I'm still not sure if I want to respond to all the comments, whether I should comment myself or edit the original post.
|
99a6630d-c28c-42ac-bdff-9b2e2393a21e
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Multimodal Neurons in Artificial Neural Networks
A paper investigating how individual neurons in a CLIP model (an image/text neural net combining a ResNet vision model with a Transformer language model) respond to various abstract concepts. This shouldn't be very surprising after GPT-3 and DALL-E but still, identifying multimodal neurons feels scarily close to "neural net that understands abstract concepts" and thus AGI for my comfort.
Some individual neurons that they isolated (see the article for more):
* Spiderman neuron: responds to photos of Spiderman in costume and spiders, comics or drawings of Spiderman and spider-themed icons, the text “spider” and others. Associates him with "Peter Parker" and also responds to images, text, and drawings of heroes and villains from Spiderman movies and comics over the last half-century.
* Yellow neuron: responds to images of the words “yellow”, “banana” and “lemon,” in addition to the color.
* Jesus Christ neuron: detects Christian symbols like crosses and crowns of thorns, paintings of Jesus, his written name, and feature visualization shows him as a baby in the arms of the Virgin Mary.
* Hitler neuron: learns to detect his face and body, symbols of the Nazi party, relevant historical documents, and other loosely related concepts like German food. Feature visualization shows swastikas and Hitler seemingly doing a Nazi salute.
* Donald Trump neuron: strongly responds to images of him across a wide variety of settings, including effigies and caricatures in many artistic mediums, as well as more weakly activating for people he’s worked closely with like Mike Pence and Steve Bannon. It also responds to his political symbols and messaging (eg. “The Wall” and “Make America Great Again” hats). On the other hand, it most *negatively* activates to musicians like Nicky Minaj and Eminem, video games like Fortnite, civil rights activists like Martin Luther King Jr., and LGBT symbols like rainbow flags.
* Happiness neuron: responds both to images of similing people, and words
|
23b14af5-61e5-415a-82e4-87d214c2beb1
|
trentmkelly/LessWrong-43k
|
LessWrong
|
My advice on finding your own path
tl;dr - 4 steps
1. Give yourself permission to take the future seriously
2. Be willing to imagine successful or ambitious outcomes
3. Give yourself the gift of focused time and space
4. Write draw sketch diagram or whatever you need to empty your mind
Intro
(Feel free to skip to the method.)
When I was transitioning into technical AI alignment work, I benefited immensely from the mentorship and guidance of experts in the field. They helped me to understand the relevant concepts, to develop my skills, and to find my place in the research community.
In my early years of giving career advice to other people, I tried to replicate this experience for them. I listened to their questions and concerns, and I tried to provide what I thought were the best possible answers. These are questions like "What do you think are the most important problems to work on?" or "What should I do to get the highest impact job in AI alignment?", and while I tried to listen to them and give personalized advice, I think the answers always failed to capture what was the best fit for the individual.
Over the years, however, my advice has evolved. I have answered direct questions less and less directly, and tried to do more of asking leading questions in response. "What do _you_ think the most important problems are?" "What kinds of work do _you_ think you'd be highest impact at?"
As I've done this, I've noticed four specific hang-ups (probably others, but I'm simplifying and reducing here in order to make my points simpler). My advice has mostly turned into just four things that I say to everyone, and I think are more useful than specific answers to specific questions.
Method: 4 Steps
1. Give yourself permission to take the future seriously
I think for many people it can be difficult to imagine details about the future. There is a lot of uncertainty, and this can lead to people thinking that certain kinds of thinking or planning or predictions are inappropriate. Another way this som
|
165aeaba-64f5-4ef4-a031-ac864f672fd9
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Is this a Pivotal Weak Act?
Creating bacteria that decompose metal
This has been haunting my mind for a while and I would appreciate feedback on it!
In his infamous article "AGI Ruin: A list of lethalities" Eliezer defines a "pivotal weak act" and gives a heuristic proof that no such thing can exist.
TLDR: I think his proof is wrong and there is a counterexample. I believe creating bacteria that decompose metal, silicone, (or any other superset of the materials GPUs consist of) would constitute a pivotal weak act.
Long Version:
In his article, Eliezer outlines several hopes of people claiming AGI won't be as bad or any problem at all, and then cruelly squashes them. One of those hopes is the possibility of executing a "pivotal weak act". The idea being that a small group of people executes some action X that will prevent AGI from being built, for example a group that is privy to the dangers of AGI would command a friendly AGI to "burn all GPUs" and then we are good. Eliezer argues that any AGI powerful enough (pivotal) to prevent or at least indefinitely postpone unaligned AGI must itself be powerful enough such that it needs to be aligned (not weak), which we don't know how to do. I believe his proof is false.
Definition:
A Pivotal Weak Act, would be some action or event A, such that
1. A happening or being executed prevents or delays the advent of an unaligned AGI indefinitely or at least very long (Pivotal)
2. A does not itself pose a significant X-risk for humanity as a whole (Weak)
3. A is realistically achievable with technology attainable in the coming decades (Realism)
furthermore it is not required that
- A is in any way related to or facilitated by an AI system
- A has no collateral damage
- A is moral, legal or anywhere near the Overton Window
- A is achievable today with current technology
I think that the following is an example of a Pivotal Weak Act.
Creating bacteria that decompose metal (and spreading them worldwide)
This is pivotal, since it is a special scenario of "Burning all GPUs"
It is (likely) weak,
|
3552561b-b229-40c4-bc28-198c52586699
|
trentmkelly/LessWrong-43k
|
LessWrong
|
Three Kinds of Research Documents: Exploration, Explanation, Academic
Aug 2020 Edit: Changed "Clarification" to "Exploration", thanks to a comment by Richard_Ngo
Epistemic Status: Low. This was a quick idea, but the grouping honesty doesn't work as well as I'd like. I still think it could be useful to some people though. Ideas appreciated.
Recently I have started writing more and have been trying to be more intentional with what I accomplish. Different documents have different purposes and it seemed useful to help clarify this. Here is a list of three specific different types I think are relevant on LessWrong and similar.
Exploration
I see exploration posts as generally the first instance of information being written down. Here it is important to get the essential ideas out there and to create consensus around terminology among the most interested readers. In some cases, the only interested reader may be the author, who would use the post just to help cement their ideas for themselves.
Exploration posts may not be immediately useful and require later posts or context for them to make sense. This is typically fine. There's often not a rush for them to be understood. In many cases, there is a lot of possible information to write down, so the first step is to ensure it's out there, even if it's slow, hard to read, or doesn't much make sense until later.
I think of many of Paul Christiano's posts as exploration posts. They're very numerous and novel, but quite confusing to many readers (at least, to myself and several people I've talked to). Sometimes the terminology changes from one post to the next. I used to see this is somewhat of a weakness, but now it comes across to me as a pragmatic option. If he were to have tried to make all of this readable to the average LessWrong reader, there's likely no way he could have written a portion as much.
One important point here is that if something is a exploration post, then the main relevant feedback is on the core content, not the presentation. Giving feedback on the readability can sti
|
a081f41d-0783-47f0-94f4-c5a91c88c7a8
|
StampyAI/alignment-research-dataset/special_docs
|
Other
|
Nonparametric General Reinforcement Learning
Nonparametric General
Reinforcement Learning
Jan Leike
A thesis submitted for the degree of
Doctor of Philosophy
at the
Australian National University
November 2016
©Jan Leike
This work is licensed under the
Creative Commons Attribution 4.0 International License
No reinforcement learners were harmed in the making of this thesis.
Except where otherwise indicated, this thesis is my own original work.
Jan Leike
28 November 2016
Acknowledgements
There are many without whom this thesis would not have been possible. I sincerely
hope that this page is not the way they learn how grateful I am to them. I thank in
particular ...
... first and foremost, Marcus Hutter: he is an amazing supervisor; always very sup-
portive of my (unusual) endeavors, spent countless hours reading my drafts with
a impressive attention to detail. I am also grateful to him for forcing me to be
absolutely rigorous in my mathematical arguments, and, of course, for developing
the theory of universal AI without which this thesis would not have existed. I could
not have picked a better supervisor.
... the Australian National University for granting me scholarships that let me pursue
my academic interests unrestricted and without any financial worries.
... Csaba Szepesvári and the University of Alberta for hosting me for three months.
... Matthias Heizmann and the University of Freiburg for hosting me while I was
traveling in Europe.
... the Machine Intelligence Research Institute for enabling me to run MIRIx research
workshops.
... CCR, UAI, Google DeepMind, ARC, MIRI, and FHI for supporting my travel.
... Tor Lattimore for numerous explanations, discussions, and pointers that left me
with a much deeper understanding of the theory of reinforcement learning.
... Laurent Orseau for interesting discussions, encouragement, and for sharing so many
intriguing ideas.
... my fellow students: Mayank Daswani, Tom Everitt, Daniel Filan, Roshan Shariff,
Tian Kruger, Emily Cutts Worthington, Buck Shlegeris, Jarryd Martin, John
Aslanides, Alexander Mascolo, and Sultan Javed for so many interesting discus-
sions and for being awesome friends. I especially thank Daniel, Emily, Mayank,
and Buck for encouraging me to read more of Less Wrong and Slate Star Codex.
... ToscaLechnerforstudyingstatisticswithmedespitesomanyschedulingdifficulties
across all these time zones.
... Tom Sterkenburg, Christian Kamm, Alexandra Surdina, Freya Fleckenstein, Pe-
ter Sunehag, Tosca Lechner, Ines Nikolaus, Laurent Orseau, John Aslanides, and
especially Daniel Filan for proofreading parts of this thesis.
... the CSSA for being a lovely bunch that made my stay in Australia feel less isolated.
... my family for lots of love and support, and for tolerating my long absences from
Europe.
Abstract
Reinforcement learning problems are often phrased in terms of Markov decision pro-
cesses (MDPs). In this thesis we go beyond MDPs and consider reinforcement learning
in environments that are non-Markovian, non-ergodic and only partially observable.
Our focus is not on practical algorithms, but rather on the fundamental underlying
problems: How do we balance exploration and exploitation? How do we explore opti-
mally? When is an agent optimal? We follow the nonparametric realizable paradigm:
weassumethedataisdrawnfromanunknownsourcethatbelongstoaknowncountable
class of candidates.
First, we consider the passive (sequence prediction) setting, learning from data that
is not independent and identically distributed. We collect results from artificial intelli-
gence,algorithmicinformationtheory,andgametheoryandputtheminareinforcement
learning context: they demonstrate how an agent can learn the value of its own policy.
Next, we establish negative results on Bayesian reinforcement learning agents, in
particular AIXI. We show that unlucky or adversarial choices of the prior cause the
agent to misbehave drastically. Therefore Legg-Hutter intelligence and balanced Pareto
optimality, which depend crucially on the choice of the prior, are entirely subjective.
Moreover, in the class of all computable environments every policy is Pareto optimal.
This undermines all existing optimality properties for AIXI.
However, there are Bayesian approaches to general reinforcement learning that sat-
isfy objective optimality guarantees: We prove that Thompson sampling is asymptot-
ically optimal in stochastic environments in the sense that its value converges to the
value of the optimal policy. We connect asymptotic optimality to regret given a recov-
erability assumption on the environment that allows the agent to recover from mistakes.
Hence Thompson sampling achieves sublinear regret in these environments.
AIXI is known to be incomputable. We quantify this using the arithmetical hierar-
chy, and establish upper and corresponding lower bounds for incomputability. Further,
we show that AIXI is not limit computable, thus cannot be approximated using finite
computation. However there are limit computable "-optimal approximations to AIXI.
We also derive computability bounds for knowledge-seeking agents, and give a limit
computable weakly asymptotically optimal reinforcement learning agent.
Finally, our results culminate in a formal solution to the grain of truth problem: A
Bayesian agent acting in a multi-agent environment learns to predict the other agents’
policies if its prior assigns positive probability to them (the prior contains a grain of
truth). We construct a large but limit computable class containing a grain of truth and
show that agents based on Thompson sampling over this class converge to play "-Nash
equilibria in arbitrary unknown computable multi-agent environments.
ix
x
Keywords. Bayesian methods, sequence prediction, merging, general reinforcement
learning, universal artificial intelligence, AIXI, Thompson sampling, knowledge-seeking
agents, Pareto optimality, intelligence, asymptotic optimality, computability, reflective
oracle, grain of truth problem, Nash equilibrium.
Contents
Title Page i
Abstract ix
Contents xiii
List of Figures xv
List of Tables xvii
1 Introduction 1
1.1 Reinforcement Learning . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.1 Narrow Reinforcement Learning . . . . . . . . . . . . . . . . . . . 3
1.1.2 Deep Q-Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.1.3 General Reinforcement Learning . . . . . . . . . . . . . . . . . . 6
1.2 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2 Preliminaries 15
2.1 Measure Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.2 Stochastic Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.3 Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4 Algorithmic Information Theory . . . . . . . . . . . . . . . . . . . . . . 20
3 Learning 23
3.1 Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.2 Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.3 Martingales . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.1 Strong Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.4.2 Weak Merging . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.3 Almost Weak Merging . . . . . . . . . . . . . . . . . . . . . . . . 34
3.5 Predicting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.5.1 Dominance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
3.5.2 Absolute Continuity . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.5.3 Dominance with Coefficients . . . . . . . . . . . . . . . . . . . . . 40
3.6 Learning with Algorithmic Information Theory . . . . . . . . . . . . . . 41
3.6.1 Solomonoff Induction . . . . . . . . . . . . . . . . . . . . . . . . . 41
xi
xii Contents
3.6.2 The Speed Prior . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.6.3 Universal Compression . . . . . . . . . . . . . . . . . . . . . . . . 43
3.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4 Acting 49
4.1 The General Reinforcement Learning Problem . . . . . . . . . . . . . . . 50
4.1.1 Discounting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.1.2 Implicit Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.1.3 Typical Environment Classes . . . . . . . . . . . . . . . . . . . . 54
4.2 The Value Function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
4.2.1 Optimal Policies . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.2.2 Properties of the Value Function . . . . . . . . . . . . . . . . . . 58
4.2.3 On-Policy Value Convergence . . . . . . . . . . . . . . . . . . . . 59
4.3 The Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.1 Bayes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.3.2 Knowledge-Seeking Agents . . . . . . . . . . . . . . . . . . . . . . 63
4.3.3 BayesExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
4.3.4 Thompson Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 65
5 Optimality 67
5.1 Pareto Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.2 Bad Priors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2.1 The Indifference Prior . . . . . . . . . . . . . . . . . . . . . . . . 70
5.2.2 The Dogmatic Prior . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2.3 The Gödel Prior . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
5.3 Bayes Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.4 Asymptotic Optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.4.1 Bayes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.4.2 BayesExp . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.4.3 Thompson Sampling . . . . . . . . . . . . . . . . . . . . . . . . . 84
5.4.4 Almost Sure in Cesàro Average vs. in Mean . . . . . . . . . . . . 89
5.5 Regret . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.5.1 Sublinear Regret in Recoverable Environments . . . . . . . . . . 91
5.5.2 Regret of the Optimal Policy and Thompson sampling . . . . . . 95
5.6 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.6.1 The Optimality of AIXI . . . . . . . . . . . . . . . . . . . . . . . 96
5.6.2 Natural Universal Turing Machines . . . . . . . . . . . . . . . . . 97
5.6.3 Asymptotic Optimality . . . . . . . . . . . . . . . . . . . . . . . . 98
5.6.4 The Quest for Optimality . . . . . . . . . . . . . . . . . . . . . . 99
6 Computability 101
6.1 Background on Computability . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1.1 The Arithmetical Hierarchy . . . . . . . . . . . . . . . . . . . . . 103
6.1.2 Computability of Real-valued Functions . . . . . . . . . . . . . . 103
Contents xiii
6.2 The Complexity of Solomonoff Induction . . . . . . . . . . . . . . . . . . 105
6.3 The Complexity of AINU, AIMU, and AIXI . . . . . . . . . . . . . . . . 108
6.3.1 Upper Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.3.2 Lower Bounds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
6.4 Iterative Value Function . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
6.5 The Complexity of Knowledge-Seeking . . . . . . . . . . . . . . . . . . . 122
6.6 A Limit Computable Weakly Asymptotically Optimal Agent . . . . . . . 122
6.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
7 The Grain of Truth Problem 127
7.1 Reflective Oracles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.1.1 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7.1.2 A Limit Computable Reflective Oracle . . . . . . . . . . . . . . . 131
7.1.3 Proof of Theorem 7.7 . . . . . . . . . . . . . . . . . . . . . . . . . 132
7.2 A Grain of Truth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
7.2.1 Reflective Bayesian Agents . . . . . . . . . . . . . . . . . . . . . 135
7.2.2 Reflective-Oracle-Computable Policies . . . . . . . . . . . . . . . 136
7.2.3 Solution to the Grain of Truth Problem . . . . . . . . . . . . . . 137
7.3 Multi-Agent Environments . . . . . . . . . . . . . . . . . . . . . . . . . . 137
7.4 Informed Reflective Agents . . . . . . . . . . . . . . . . . . . . . . . . . 140
7.5 Learning Reflective Agents . . . . . . . . . . . . . . . . . . . . . . . . . . 141
7.6 Impossibility Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
7.7 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8 Conclusion 147
Measures and Martingales 151
Bibliography 155
List of Notation 171
Index 175
xiv Contents
List of Figures
1.1 Selection of Atari 2600 video games . . . . . . . . . . . . . . . . . . . . . 13
3.1 Properties of learning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
4.1 The dualistic agent model . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.1 Legg-Hutter intelligence measure . . . . . . . . . . . . . . . . . . . . . . 77
5.2 Relationship between different types of asymptotic optimality . . . . . . 80
6.1 Definition of conditional Mas a0
2-formula . . . . . . . . . . . . . . . . 106
6.2 Environment from the proof of Theorem 6.15 . . . . . . . . . . . . . . . 111
6.3 Environment from the proof of Theorem 6.16 . . . . . . . . . . . . . . . 113
6.4 Environment from the proof of Theorem 6.17 . . . . . . . . . . . . . . . 114
6.5 Environment from the proof of Proposition 6.19 . . . . . . . . . . . . . . 117
6.6 Environment from the proof of Theorem 6.22 . . . . . . . . . . . . . . . 119
6.7 Environment from the proof of Theorem 6.23 . . . . . . . . . . . . . . . 121
7.1 Answer options of a reflective oracle . . . . . . . . . . . . . . . . . . . . 131
7.2 The multi-agent model . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
xv
xvi LIST OF FIGURES
List of Tables
1.1 Assumptions in reinforcement learning . . . . . . . . . . . . . . . . . . . 7
1.2 List of publications by chapter . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 List of publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.1 Examples of learning distributions . . . . . . . . . . . . . . . . . . . . . 45
3.2 Summary on properties of learning . . . . . . . . . . . . . . . . . . . . . 45
4.1 Discount functions and their effective horizons . . . . . . . . . . . . . . . 53
5.1 Types of asymptotic optimality . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Compiler sizes of the UTMs of bad priors . . . . . . . . . . . . . . . . . 98
5.3 Notions of optimality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.1 Computability results on Solomonoff’s prior . . . . . . . . . . . . . . . . 102
6.2 Computability results for different agent models . . . . . . . . . . . . . . 102
6.3 Computability of real-valued functions . . . . . . . . . . . . . . . . . . . 104
6.4 Computability results for the iterative value function . . . . . . . . . . . 116
7.1 Terminology dictionary between reinforcement learning and game theory. 128
xvii
xviii LIST OF TABLES
Chapter 1
Introduction
Everything I did was for the glamor, the money, and the sex. — Albert Einstein
After the early enthusiastic decades, research in artificial intelligence (AI) now mainly
aims at specific domains: playing games, mining data, processing natural language,
recognizing objects in images, piloting robots, filtering email, and many others (Russell
and Norvig, 2010). Progress on particular domains has been remarkable, with several
high-profile breakthroughs: The chess world champion Garry Kasparov was defeated
by the computer program Deep Blue in 1997 (IBM, 2012a). In 2011 the world’s best
Jeopardy! players were defeated by the computer program Watson (IBM, 2012b). As
of 2014 Google’s self-driving cars completed over a million kilometers autonomously on
public roads (Google, 2014). Finally, in 2016 Google DeepMind’s AlphaGo beat Lee
Sedol, one of the world’s best players, at the board game Go (Google, 2016).
While these advancements are very impressive, they are highly-specialized algo-
rithms tailored to their domain of expertise. Outside that domain these algorithms
perform very poorly: AlphaGo cannot play chess, Watson cannot drive a car, and
DeepBlue cannot answer natural language queries. Solutions in one domain typically
do not generalize to other domains and no single algorithm performs well in more than
one of them. We classify these kinds of algorithms as narrow AI .
This thesis is not about narrow AI. We expect progress on narrow AI to continue
and even accelerate, taking the crown of human superiority in domain after domain.
But this is not the ultimate goal of artificial intelligence research. The ultimate goal is
to engineer a mind—to build a machine that can learn to do all tasks that humans can
do, at least as well as humans do them. We call such a machine human-level AI (HLAI)
if it performs at human level and strong AI if it surpasses human level. This thesis is
about strong AI.
The goal of developing HLAI has a long tradition in AI research and was explicitly
part of the 1956 Dartmouth conference that gave birth to the field of AI (McCarthy
et al., 1955):
We propose that a 2 month, 10 man study of artificial intelligence be carried
out during the summer of 1956 at Dartmouth College in Hanover, New
Hampshire. The study is to proceed on the basis of the conjecture that
every aspect of learning or any other feature of intelligence can in principle
be so precisely described that a machine can be made to simulate it. An
1
2 Introduction
attempt will be made to find how to make machines use language, form
abstractionsand concepts, solve kinds ofproblems now reserved forhumans,
and improve themselves. We think that a significant advance can be made
in one or more of these problems if a carefully selected group of scientists
work on it together for a summer.
Inhindsightthisproposalreadsvastlyoverconfident,anddisappointmentwasinevitable.
Making progress on these problems turned out to be a lot harder than promised, and
over the last decades any discussion of research targeting HLAI has been avoided by
serious researchers in the field. This void was filled mostly by crackpots, which tainted
the reputation of HLAI research even further. However, this trend has recently been re-
verted: Chalmers (2010), Hutter (2012a), Schmidhuber (2012), Bostrom (2014), Hawk-
ing, Tegmark, Russell, and Wilczek (2014), Shanahan (2015), and Walsh (2016) are
well-known scientists discussing the prospect of HLAI seriously. Even more: the ex-
plicit motto of Google DeepMind, one of today’s leading AI research centers, is to “solve
intelligence.”
1.1 Reinforcement Learning
The best formal model for strong AI we currently have is reinforcement learning (RL).
Reinforcement learning studies algorithms that learn to act in an unknown environment
through trial and error (Sutton and Barto, 1998; Szepesvári, 2010; Wiering and van
Otterlo, 2012). Without knowing the structure of the environments or the goal, an
agent has to learn what to do through the carrot-and-stick approach: it receives a
reward in form of a numeric feedback signifying how well it is currently doing; from
this signal the agent has to figure out autonomously what to do. More specifically,
in ageneral reinforcement learning problem anagentinteracts sequentially with an
unknown environment : in every time step the agent chooses an actionand receives a
perceptconsisting of an observation and a real-valued reward. The sequence of past
actions and percepts is the history. The goal in reinforcement learning is to maximize
cumulative (discounted) rewards (this setup is described formally in Section 4.1).
A central problem in reinforcement learning is the balance between exploration and
exploitation : should the agent harvest rewards in the regions of the environment that it
currently knows (exploitation) or try discovering more profitable regions (exploration)?
Exploration is costly and dangerous: it forfeits rewards that could be had right now,
and it might lead into traps from which the agent cannot recover. However, exploration
may pay off in the long run. Generally, it is not clear how to make this tradeoff (see
Section 5.6).
Reinforcement learning algorithms can be categorized by whether they learn on-
policyoroff-policy . Learning on-policy means learning the value of the policy that
the agent currently follows. Typically, the policy is slowly improved while learning,
likeSARSA(Sutton and Barto, 1998). In contrast, learning off-policy means following
one policy but learning the value of another policy (typically the optimal policy), like
Q-learning (Watkins and Dayan, 1992). Off-policy methods are more difficult to handle
§1.1Reinforcement Learning 3
in practice (see the discussion on function approximation below) but tend to be more
data-efficient since samples from an old policy do not have to be discarded.
Reinforcement learning has to be distinguished from planning . In a planning prob-
lem we are provided with the true environment and are tasked with finding an optimal
policy. Mathematically it is clear what the optimal policy is, the difficulty stems from
finding a reasonable solution with limited computation. Reinforcement learning is fun-
damentally more difficult because the true environment is unknown and has to be
learned from observation. This enables two approaches: we could learn a model of the
trueenvironmentandthenuseplanningtechniqueswithinthatmodel; thisisthe model-
basedapproach. Alternatively, we could learn an optimal policy directly or through an
intermediate quantity (typically the value function); this is the model-free approach.
Model-based methods tend to be more data-efficient but also computationally more
expensive. Therefore most algorithms used in practice ( Q-learning and SARSA) are
model-free.
1.1.1 Narrow Reinforcement Learning
In the reinforcement learning literature it is typically assumed that the environment is a
Markov decision process (MDP), i.e., the next percept only depends on the last percept
and action and is independent of the rest of the history (see Section 4.1.3). In an MDP,
percepts are usually called states. This setting is well-analyzed (Puterman, 2014; Bert-
sekas and Tsitsiklis, 1995; Sutton and Barto, 1998), and there is a variety of algorithms
that are known to learn the MDP asymptotically, such as TD learning (Sutton, 1988)
andQ-learning (Watkins and Dayan, 1992).
Moreover, for MDPs various learning guarantees have been proved in the literature.
First, there are bounds on the agent’s regret, the difference between the obtained re-
wards and the rewards of the optimal policy. Auer et al. (2009) derive the regret bound
~O(dSp
At)for ergodic MDPs where dis thediameter of the MDP (how many steps
a policy needs on average to get from one state of the MDP to any other), Sis the
number of states, Ais the number of actions, and tis the number of time steps the
algorithm runs. Second, given "and, a reinforcement learning algorithm is said to
havesample complexity C(";)iff it is"-suboptimal for at most C(";)time steps with
probability at least 1 (probably approximately correct, PAC). For MDPs the first
sample complexity bounds were due to Kakade (2003). Lattimore and Hutter (2012)
use the algorithm UCRL
(Auer et al., 2009) with geometric discounting with discount
rate
and derive the currently best-known PAC bound of ~O( T=("2(1
)3) log)
whereTis the number of non-zero transitions in the MDP.
Typically, algorithms for MDPs rely on visiting every state multiple times (or even
infinitelyoften), whichbecomesinfeasibleforlargestatespaces(e.g.avideogamescreen
consisting of millions of pixels). In these cases, function approximation can be used to
learn an approximation to the value function (Sutton and Barto, 1998). Linear function
approximationisknowntoconvergeforseveralon-policyalgorithms(TsitsiklisandRoy,
1997; Sutton, 1988; Gordon, 2001), but proved tricky for off-policy algorithms (Baird,
1995). A recent breakthrough was made by Mahmood et al. (2015) and Yu (2015)
4 Introduction
with their emphatic TD algorithm that converges off-policy. For nonlinear function
approximation no convergence guarantee is known.
Among the historical successes of reinforcement learning is autonomous helicopter
piloting (Kim et al., 2003) and TD-Gammon, a backgammon algorithm that learned
through self-play (Tesauro, 1995), similar to AlphaGo (Silver et al., 2016).
1.1.2 Deep Q-Networks
The current state of the art in reinforcement learning challenges itself to playing simple
video games. Video games are an excellent benchmark because they come readily with
the reward structure provided: the agent’s rewards are the change in the game score.
Without prior knowledge of any aspect of the game, the agent needs to learn to score
as many points in the game as possible from looking only at raw pixel data (sometimes
after some preprocessing).
This approach to general AI is in accordance with the definition of intelligence given
by Legg and Hutter (2007b):
Intelligence measures an agent’s ability to achieve goals in a wide range of
environments.
In reinforcement learning the definition of the goal is very flexible, and provided by
the rewards. Moreover, a diverse selection of video games arguably constitutes a ‘wide
range of environments.’
A popular such selection is the Atari 2600 video game console (Bellemare et al.,
2013). There are hundreds of games released for this platform, with very diverse chal-
lenges: top-down shooting games such as Space Invaders, ball games such as Pong,
agility-based games such as Boxing or Gopher, tactical games such as Ms. Pac-Man,
and maze games such as Montezuma’s Revenge. An overview over some of the games
is given in Figure 1.1 on page 13.
Mnih et al. (2013, 2015) introduce the deep Q-network (DQN) algorithm, com-
biningQ-learning with nonlinear function approximation through convolutional neural
networks. DQN achieves 75% of the performance of a human game tester on 29 of 49
Atari games. The two innovations that made this breakthrough possible are (1) using
a not so recent target Q-function in the TD update and (2) experience replay. For ex-
perience replay, a set of recent state transitions is retained and the network is regularly
retrained on random samples from these old transitions.1
DQN rides the wave of success of deep learning (LeCun et al., 2015; Schmidhuber,
2015; Goodfellow et al., 2016). Deep learning refers to the training of artificial neural
networks with several layers. This allows them to automatically learn higher-level
abstractions from data. Deep neural networks are conceptionally simple and have been
studied since the inception of AI; only recently has computation power become cheap
enough to train them effectively. Recently deep neural networks have taken the top of
the machine learning benchmarks by storm (LeCun et al., 2015, and references therein):
1The slogan for experience replay should be ‘regularly retrained randomly on retained rewards’.
§1.1Reinforcement Learning 5
These methods have dramatically improved the state-of-the-art in speech
recognition, visual object recognition, object detection and many other do-
mains such as drug discovery and genomics.
Since the introduction of DQN there have been numerous improvements on this
algorithm: increasing the gap on the Q-values of different actions (Bellemare et al.,
2016), training in parallel (Nair et al., 2015; Mnih et al., 2016), improvements to the
experience replay mechanism (Schaul et al., 2016), generalization to continuous action
spaces (Lillicrap et al., 2016), solve the overestimation problem (van Hasselt et al.,
2016), andimprovementstotheneuralnetworkarchitecture(Wangetal.,2016). The Q-
values learned by DQN’s neural networks are intransparent to inspection; Zahavy et al.
(2016) use visualization techniques on the Q-value networks. Finally, Liang et al. (2016)
managed to reproduce DQN’s success using only linearfunction approximation (no
neural networks). The key is a selection of features similar to the ones produced by
DQN’s convolutional neural networks.
Regardless of its success, the DQN algorithm fundamentally falls short of the re-
quirements for strong AI: Q-learning with function approximation is targeted at solving
large-state (fully observable) Markov decision processes. In particular, it does not ad-
dress the following challenges.
•Partial observability. All games in the ATARI framework are fully observable (ex-
cept for Montezuma’s revenge): all information relevant to the state of the game
is visible on the screen at all times (when using the four most recent frames).
However, the real world is only partially observable. For example, when going
to the supermarket you have to remember what you wanted to buy because you
currently cannot observe which items you are missing at home. A strong AI needs
tohavememoryandbeabletorememberthingsthathappenedinthepast(rather
than only learning from it).
An obvious approach to equip DQN with memory is to use recurrent neural
networks instead of simple feedforward neural networks (Heess et al., 2015).
Hausknecht and Stone (2015) show that this enables the agent to play the games
when using only a single frame as input. However, it is currently unclear whether
recurrent neural networks are powerful enough to learn long-term dependencies
in the data (Bengio et al., 1994).
•Directed exploration. DQN fails in games with delayed rewards. For example, in
Montezuma’s Revenge the agent needs to avoid several obstacles to get to a key
before receiving the first reward. DQN fails to score any rewards in this environ-
ment. This is not surprising: the typical approach for reinforcement learning, to
use"-exploration for which the agent chooses actions at random with a certain
probability, is insufficient for exploring complex environments; the probability of
random walking into the first reward is just too low.
Instead we need a more targeted exploration approach that aims at understanding
the environment in a structured manner. Theoretical foundations are provided
6 Introduction
byknowledge-seeking agents (Orseau, 2011, 2014a; Orseau et al., 2013). Kulkarni
et al. (2016) introduce a hierarchical approach based on intrinsic motivation to
improve DQN’s exploration and manage to score points in Montezuma’s Revenge.
However, their approach relies on quite a bit of visual preprocessing and domain
knowledge.
•Non-ergodicity. When losing in an Atari game, the agent always gets to play the
same game again. From the agent’s perspective, it has not actually failed, it just
gets transported back to the starting state. Because of this, there are no strong
incentives to be careful when exploring the environment: there can be no bad
mistakes that make recovery impossible.
However, in the real world some actions are irreversibly bad. If the robot drives
off a cliff it can be fatally damaged and cannot learn from the mistake. The real
world is full of potentially fatal mistakes (e.g. crossing the street at the wrong
time) and for humans, natural reflexes and training by society make sure that we
are very confident of what situations to avert. This is crucial, as some mistakes
must be avoided without any training examples. Current reinforcement learning
algorithms only learn about bad states by visiting them.
•Wireheading. The goal of reinforcement learning is to maximize rewards. When
playing a video game the most efficient way to get rewards is to increase the
game score. However, when a reinforcement learning algorithm is acting in the
real world, theoretically it can change its own hard- and software. In this set-
ting, the most efficient way to get rewards is to modify the reward mechanism to
always provide the maximal reward (Omohundro, 2008; Ring and Orseau, 2011;
Bostrom, 2014). Consequently the agent no longer pursues the designers’ origi-
nally intended goals and instead only attempts to protect its own existence. The
namewireheading was established by analogy to a biology experiment by Olds
and Milner (1954) in which rats had a wire embedded into the reward center of
their brain that they could then stimulate by the push of a button.
Today’s reinforcement learning algorithms usually do not have access to their own
internal workings, but more importantly they are not smart enough to understand
their own architecture. They simply lack the capability to wirehead. But as we
increase their capability, wireheading will increasingly become a challenge for
reinforcement learning.
1.1.3 General Reinforcement Learning
A theory of strong AI cannot make some of the typical assumptions. Environments
are partially observable, so we are dealing with partially observable Markov decision
processes (POMDPs). The POMDP’s state space does not need to be finite. Moreover,
the environment may not allow recovery from mistakes: we do not assume ergodic-
ity or weak communication (not every POMDP state has to be reachable from every
other state). So in general, our environments are infinite-state non-ergodic POMDPs.
Table 1.1 lists the assumptions that are typical but we do not make.
§1.1Reinforcement Learning 7
Assumption Description
Full observability the agent needs no memory to act optimally
Finite state the environment has only finitely many states
Ergodicity the agent can recover from any mistakes
Computability the environment is computable
Table 1.1: List of assumptions from the reinforcement learning literature. In this
thesis, we only make the computability assumption which is important for Chapter 6
and Chapter 7.
Learning POMDPs is a lot harder, and only partially successful attempts have
been made: through predictive state representations (Singh et al., 2003, 2004), and
Bayesian methods (Doshi-Velez, 2012). A general approach is feature reinforcement
learning (Hutter, 2009c,d), which aims to reduce the general reinforcement learning
problem to an MDP by aggregating histories into states. The quest for a good cost
function for feature maps remains unsuccessful thus far (Sunehag and Hutter, 2010;
Daswani, 2015). However, Hutter (2014) managed to derive strong bounds relating the
optimal value function of the aggregated MDP to the value function of the original
process even if the latter violates the Markov condition.
A full theoretical approach to the general reinforcement learning problem is given by
Hutter (2000, 2001a, 2002a, 2003, 2005, 2007a, 2012b). He introduces the Bayesian RL
agentAIXIbuilding on the theory of sequence prediction by Solomonoff (1964, 1978).
Based in algorithmic information theory, Solomonoff’s prior draws from famous insights
by William of Ockham, Sextus Epicurus, Alan Turing, and Andrey Kolmogorov (Rath-
mannerandHutter,2011). AIXIusesSolomonoff’spriorovertheclassofallcomputable
environments and acts to maximize Bayes-expected rewards. We formally introduce
Solomonoff’s theory of induction in Chapter 3 and AIXI in Section 4.3.1. See also Legg
(2008) for an accessible introduction to AIXI.
A typical optimality property in general reinforcement learning is asymptotic opti-
mality(Lattimore and Hutter, 2011): as time progresses the agent converges to achieve
the same rewards as the optimal policy. Asymptotic optimality is usually what is
meant by “ Q-learning converges” (Watkins and Dayan, 1992) or “TD learning con-
verges” (Sutton, 1988). Orseau (2010, 2013) showed that AIXI is not asymptotically
optimal. Yet asymptotic optimality in the general setting can be achieved through op-
timism (Sunehag and Hutter, 2012a,b, 2015), Thompson sampling (Section 5.4.3), or
an extra exploration component on top of AIXI (Lattimore, 2013, Ch. 5).
In our setting, learning the environment does not just involve learning a fixed finite
set of parameters; the real world is too complicated to fit into a template. Therefore we
fall back on the nonparametric approach where we start with an infinite but countable
class of candidate environments. Our only assumption is that the true environment is
contained in this class (the realizable case ). As long as this class of environments is
large enough (such as for the class of all computable environments), this assumption is
8 Introduction
rather weak.
1.2 Contribution
The goal of this thesis is not to increase AI capability. As such, we are not trying to
improve on the state of the art, and we are not trying to derive practical algorithms.
Instead, the emphasis of this thesis is to further our understanding of general rein-
forcement learning and thus strong AI. How a future implementation of strong AI will
actually work is in the realm of speculation at this time. Therefore we should make as
few and as weak assumptions as possible.
We disregard computational constraints in order to focus on the fundamental un-
derlying problems. This is unrealistic, of course. With unlimited computation power
many traditional AI problems become trivial: playing chess, Go, or backgammon can
be solved by exhaustive expansion of the game tree. But the general RL problem does
not become trivial: the agent has to learn the environment and balance between ex-
ploration and exploitation. That being said, the algorithms that we study do have
a relationship with algorithms being used in practice and our results can and should
educate implementation.
On a high level, our insights can be viewed from three different perspectives.
•Philosophically. Concisely, our understanding of strong AI can be summarized as
follows.
intelligence =learning +acting (1.1)
Here,intelligence refers to an agent that optimizes towards some goal in accor-
dancewiththe definition byLegg and Hutter(2007b). For learning we distinguish
two (very related) aspects: (1) arriving at accurate beliefs about the future and
(2) making accurate predictions about the future. Of course, the former implies
the latter: if you have accurate beliefs, then you can also make good predictions.
For RL accurate beliefs is what we care about because they enable us to plan
for the future. Learning is a passive process that only observes the data and
does not interfere with its generation. In particular, learning does not require
a goal. With actingwe mean the selection of actions in pursuit of some goal.
This goal can be reward maximization as in reinforcement learning, understand-
ing the environment as for knowledge-seeking agents, or something else entirely.
Together they enable an agent to learn the environment’s behavior in response to
itself (on-policy learning) and to choose a policy that furthers its goal. We dis-
cuss the formal aspects of learning in Chapter 3 and some approaches to acting
in Chapter 4.
Given infinite computational resources, learning is easy and Solomonoff induc-
tion provides a complete theoretical solution. However, acting is not straightfor-
ward. We show that in contrast to popular belief, AIXI, the natural extension of
Solomonoff induction to reinforcement learning, does not provide the objectively
best answer to this question. We discuss some alternatives and their problems in
§1.2Contribution 9
Chapter 5. Unfortunately, the general question of how to act optimally remains
open.
AIXItl(Hutter, 2005, Ch. 7.2) is often mentioned as a computable approximation
to AIXI. But AIXI tldoes not converge to AIXI in the limit. Inspired by Hutter
search (Hutter, 2002b), it relies on an automated theorem prover to find the
provably best policy computable in time twith a program of length l. In
contrast to AIXI, which only requires the choice of universal Turing machine,
proof search requires an axiom system that must not be too weak or too strong.
In Section 5.2.3 we discuss some of the problems with AIXI tl. Moreover, in
Corollary 6.13 we show that "-optimal AIXI is limit computable, which shows
that AIXI can be computably approximated by running this algorithm for a fixed
number of time steps or until a timeout is reached. While neither AIXI tlnor
this AIXI approximation algorithm are practically feasible, the latter is a better
example for a computable strong AI.
In our view, AIXI should be taken as a descriptive rather than prescriptive model.
It is descriptive as an abstraction from an actual implementation of strong AI
where we ignore all the details of the learning algorithm and the computational
approximations of choosing how to act. It should not be viewed as a prescription
of how strong AI should be built and AIXI approximations (Veness et al., 2011,
2015) are easily outperformed by neural-network-based approaches (Mnih et al.,
2015).
•Mathematically. Some of the proof techniques we employ are novel and could
be used to analyze other algorithms. Examples include the proofs for the lower
bounds on the computability results (Section 6.3.2) and to a lesser extent the
upper bounds (Section 6.3.1), which should work analogously for a wide range of
algorithms. Furthermore, the proof of the asymptotic optimality of Thompson
sampling (Theorem 5.25) brings together a variety of mathematical tools from
measure theory, probability theory, and stochastic processes.
Next, the recoverability assumption (Definition 5.31) is a novel technical assump-
tionontheenvironmentakintoergodicityandweakcommunicationinfinite-state
environments. It is more general, yet mathematically simple and works for arbi-
trary environments. This assumption turns out to be what we need to prove the
connection from asymptotic optimality to sublinear regret in Section 5.5.
Moreover, we introduce the use of the recursive instead of the iterative value
function (Section 6.4). The iterative value function is the natural extension of
expectimax search to the sequential setting and was originally used by Hutter
(2005, Sec. 5.5). Yet it turned out to be an incorrect and inconvenient definition:
it does not correctly maximize expected rewards (Proposition 6.19) and it is not
limit computable (Theorem 6.22 and Theorem 6.23). However, this is only a
minor technical correction.
Finally, this work raises new mathematically intriguing questions about the prop-
erties of reflective oracles (Section 7.1).
10 Introduction
•Practically. One insight from this thesis is regarding the effective horizon. In
practice geometric discounting is ubiquitous which has a constant effective hori-
zon. However, when facing a finite horizon problem or an episodic task, some-
times the effective horizon changes. One lesson from our result on Thompson
sampling (Section 5.4.3 and Section 5.5) is that you should explore for an ef-
fective horizon instead of using "-greedy. While the latter exploration method
is often used in practice, it has proved ineffective in environments with delayed
rewards (see Section 1.1.2).
Furthermore, our application of reinforcement learning results to game theory in
Chapter 7 reinforces this trend to solve game theory problems (Tesauro, 1995;
Bowling and Veloso, 2001; Busoniu et al., 2008; Silver et al., 2016; Heinrich and
Silver, 2016; Foerster et al., 2016, and many more). In particular, the approxima-
tion algorithm for reflective oracles (Section 7.1.3) could guide future applications
for computing Nash equilibria (see also Fallenstein et al., 2015b).
On a technical level, we advance the theory of general reinforcement learning. In its
center is the Bayesian reinforcement learning agent AIXI. AIXI is meant as an answer
to the question of how to do general RL disregarding computational constraints. We
analyze the computational complexity of AIXI and related agents in Chapter 6 and
show that even with an infinite horizon AIXI can be computationally approximated
with a regular Turing machine (Section 6.3.1). We also derive corresponding lower
bounds for most of our upper bounds (Section 6.3.2).
Chapter 5 is about notions of optimality in general reinforcement learning. We
dispel AIXI’s status as the gold standard for reinforcement learning. Hutter (2002a)
showed that AIXI is Pareto optimal, balanced Pareto optimal, and self-optimizing.
Orseau (2013) established that AIXI does not achieve asymptotic optimality in all
computable environments (making the self-optimizing result inapplicable to this gen-
eral environment class). In Section 5.1 we show that every policy is Pareto optimal and
in Section 5.3 we show that balanced Pareto optimality is highly subjective, depending
on the choice of the prior; bad choices for priors are discussed in Section 5.2. Notable
is thedogmatic prior that locks a Bayesian reinforcement learning agent into a particu-
lar (bad) policy as long as this policy yields some rewards. Our results imply that there
are no known nontrivial and non-subjective optimality results for AIXI. We have to
regard AIXI as a relativetheory of intelligence. More generally, our results imply that
general reinforcement learning is difficult even when disregarding computational costs .
But this is not the end to Bayesian methods in general RL. We show in Section 5.4
that a Bayes-inspired algorithm called Thompson sampling achieves asymptotic opti-
mality. Thompson sampling, also known as posterior sampling or theBayesian control
rulerepeatedly draws one environment from the posterior distribution and then acts as
if this was the true environment for a certain period of time (depending on the discount
function). Moreover, given a recoverability assumption on the environment and some
mild assumptions on the discount function, we show in Section 5.5 that Thompson
sampling achieves sublinear regret.
Finally, we tie these results together to solve an open problem in game theory:
§1.3Thesis Outline 11
Chapter Publication(s)
Chapter 1 -
Chapter 2 -
Chapter 3 with links to Leike and Hutter (2014a, 2015d); Filan et al. (2016)
Chapter 4 -
Chapter 5 Leike and Hutter (2015c); Leike et al. (2016a)
Chapter 6 Leike and Hutter (2015b,a, 2016)
Chapter 7 Leike et al. (2016b)
Chapter 8 -
Appendix A Leike and Hutter (2014b)
Table 1.2: List of publications by chapter.
When acting in a multi-agent environment with other Bayesian agents, each agent
needs to assign positive prior probability to the other agents’ actual policies (they need
to have a grain of truth ). Finding a reasonably large class of policies that contains the
Bayes optimal policies with respect to this class is known as the grain of truth prob-
lem(Hutter, 2009b, Q. 5j). Only small classes are known to have a grain of truth and
the literature contains several related impossibility results (Nachbar, 1997, 2005; Foster
and Young, 2001). Moreover, while AIXI assumes the environment to be computable,
our computability results on AIXI confirm that it is incomputable (Theorem 6.15 and
Theorem 6.17). This asymmetry elevates AIXI above its environment computationally,
and prevents the environment from containing other AIXIs.
In Chapter 7 we give a formal and general solution to the grain of truth prob-
lem: we construct a class of policies that avoid this asymmetry. This class contains all
computable policies as well as Bayes optimal policies for every lower semicomputable
prior over the class. When the environment is unknown, our dogmatic prior from Sec-
tion 5.2 makes Bayes optimal agents fail to act optimally even asymptotically. However,
our convergence results on Thompson sampling (Section 5.4.3) imply that Thompson
samplers converge to play "-Nash equilibria in arbitrary unknown computable multi-
agent environments. While these results are purely theoretical, we use techniques from
Chapter 6 to show that they can be computationally approximated arbitrarily closely.
1.3 Thesis Outline
This thesis is based on the papers Leike and Hutter (2014a,b, 2015a,b,c, 2016); Leike
et al. (2016a,b). During my PhD, I was also involved in the publications Leike and Heiz-
mann(2014a,b,2015);Heizmannetal.(2015,2016)basedonmyresearchintermination
analysis (in collaboration with Matthias Heizmann), Daswani and Leike (2015) (co-
authored with Mayank Daswani in equal parts), Everitt et al. (2015) (co-authored with
Tom Everitt in equal parts), Filan et al. (2016) (written by Daniel Filan as part of
his honour’s thesis supervised by Marcus Hutter and me). Leike and Hutter (2016) is
12 Introduction
still under review. Leike and Hutter (2014a, 2015d) are tangential to this thesis’ main
thrust, so the results are mentioned only in passing. A list of papers written during my
PhD is given in Table 1.3 on page 14, with a corresponding chapter outline in Table 1.2.
The core of our contribution is found in chapters 5, 6, and 7.
Every thesis chapter starts with a quote. In case this is not blatantly obvious: these
are false quotes , a desperate attempt to make the thesis less dry and humorless. None
of the quotes were actually stated by the person they are attributed to (according to
our knowledge).
§1.3Thesis Outline 13
(a)Space Invaders: the player controls the
green cannon on the bottom of the screen
and fires projectiles at the yellow ships at
the top. The red blobs can be used as
cover, but also fired through.
(b)Pong: the player controls the green
paddle on the right of the screen and needs
to hit the white ball such that the com-
puter opponent controlling the red paddle
on the left fails to hit the ball back.
(c)Ms. Pac-Man: the player controls the
yellow mouth and needs to eat all the red
pallets in the maze. The maze is roamed
by ghosts that occasionally hunt the player
and kill her on contact unless a ‘power pill’
was consumed recently.
(d)Boxing: the player controls the white
figure on the screen and extends their arms
to throw a punch. The aim is to hit the
black figure that is controlled by the com-
puter and dodge their punches. (I’m sure
the choice of color was by accident.)
(e)Gopher: a hungry rodent attempts to
dig to the surface and steal the vegetables.
The player controls the farmer who pro-
tects them by filling the rodent’s holes.
(f)Montezuma’s Revenge: the player con-
trols the red adventurer. The aim is to
navigate a maze of deadly traps, use keys
to open doors, and collect artifacts.
Figure 1.1: A selection of Atari 2600 video games.
14 Introduction
[1] Jan Leike and Marcus Hutter. Indefinitely oscillating martingales. In Algorithmic Learning
Theory, pages 321–335, 2014a
[2] Jan Leike and Matthias Heizmann. Ranking templates for linear loops. Logical Methods in
Computer Science , 11(1):1–27, March 2015
[3] Mayank Daswani and Jan Leike. A definition of happiness for reinforcement learning agents.
InArtificial General Intelligence , pages 231–240. Springer, 2015
[4] Tom Everitt, Jan Leike, and Marcus Hutter. Sequential extensions of causal and evidential
decision theory. In Algorithmic Decision Theory , pages 205–221. Springer, 2015
[5] Jan Leike and Marcus Hutter. On the computability of AIXI. In Uncertainty in Artificial
Intelligence , pages 464–473, 2015a
[6] JanLeikeandMarcusHutter. OnthecomputabilityofSolomonoffinductionandknowledge-
seeking. In Algorithmic Learning Theory , pages 364–378, 2015b
[7] Jan Leike and Marcus Hutter. Bad universal priors and notions of optimality. In Conference
on Learning Theory , pages 1244–1259, 2015c
[8] Jan Leike and Marcus Hutter. Solomonoff induction violates Nicod’s criterion. In Algorith-
mic Learning Theory , pages 349–363. Springer, 2015d
[9] Matthias Heizmann, Daniel Dietsch, Jan Leike, Betim Musa, and Andreas Podelski. Ulti-
mate Automizer with array interpolation (competition contribution). In Tools and Algo-
rithms for the Construction and Analysis of Systems , pages 455–457. Springer, 2015
[10] Matthias Heizmann, Daniel Dietsch, Marius Greitschus, Jan Leike, Betim Musa, Claus
Schätzle, and Andreas Podelski. Ultimate Automizer with two-track proofs (competition
contribution). In Tools and Algorithms for the Construction and Analysis of Systems , pages
950–953. Springer, 2016
[11] Daniel Filan, Jan Leike, and Marcus Hutter. Loss bounds and time complexity for speed
priors. In Artificial Intelligence and Statistics , 2016
[12] Jan Leike and Marcus Hutter. On the computability of Solomonoff induction and AIXI.
2016. Under review
[13] Jan Leike, Tor Lattimore, Laurent Orseau, and Marcus Hutter. Thompson sampling is
asymptotically optimal in general environments. In Uncertainty in Artificial Intelligence ,
2016a
[14] Jan Leike, Jessica Taylor, and Benya Fallenstein. A formal solution to the grain of truth
problem. In Uncertainty in Artificial Intelligence , 2016b
[15] Jan Leike and Matthias Heizmann. Geometric nontermination arguments. 2016. Under
preparation
Table 1.3: List of publications.
Chapter 2
Preliminaries
Mathematics is a waste of time. — Leonhard Euler
This chapter establishes the notation and background material that is used through-
out this thesis. Section 2.1 is about probability and measure theory, Section 2.2 is
about stochastic processes, Section 2.3 is about information theory, and Section 2.4 is
about algorithmic information theory. We defer the formal introduction to reinforce-
ment learning to Chapter 4. Additional preliminary notation and terminology is also
established in individual chapters wherever necessary. A list of notation is provided in
the appendix on page 171.
Most of the content from this chapter can be found in standard textbooks and
reference works. We recommend to consult Wasserman (2004) on statistics, Durrett
(2010) on probability theory and stochastic processes, Cover and Thomas (2006) on
information theory, Li and Vitányi (2008) on algorithmic information theory, Russell
and Norvig (2010) on artificial intelligence, Bishop (2006) and Hastie et al. (2009)
on machine learning, Sutton and Barto (1998) on reinforcement learning, and Hutter
(2005) and Lattimore (2013) on general reinforcement learning.
We understand definitions to follow natural language; e.g., when defining the ad-
jective ‘continuous’, we define at the same time the noun ‘continuity’ and the adverb
‘continuously’ wherever appropriate.
Numbers. N:=f1;2;3;:::gdenotes the set of natural numbers (starting from 1),
Q:=fp=qjp2N[f0g;q2Ngdenotes the set of rational numbers, and Rdenotes the
set of real numbers. For two real numbers r1;r2, the set [r1;r2] :=fr2Rjr1rr2g
denotes the closed interval with end points r1andr2; the sets (r1;r2] := [r1;r2]nfr1g
and[r1;r2) := [r1;r2]nfr2gdenotehalf-openintervals; theset (r1;r2) := [r1;r2]nfr1;r2g
denotes an open interval.
Strings. FixXto be a finite nonempty set, called alphabet. We assume that X
contains at least two distinct elements. The set X:=S1
n=0Xnis the set of all finite
strings over the alphabet X, the setX1is the set of all infinite strings over the alphabet
X, and the setX]:=X[X1is their union. The empty string is denoted by , not to
be confused with the small positive real number ". Given a string x2X], we denote
its length byjxj. For a (finite or infinite) string xof lengthk, we denote with xkthe
k-th character of x, withx1:kthe firstkcharacters of x, and with x<kthe firstk 1
15
16 Preliminaries
characters of x. The notation x1:1stresses that xis an infinite string. We use xvy
to denote that xis a prefix of y, i.e.,x=y1:jxj. Our examples often (implicitly) involve
the binary alphabet f0;1g. In this case we define the functions ones;zeros :X!N
that count the number of ones and zeros in a string respectively.
Computability. A function f:X!Rislower semicomputable iff the setf(x;q)2
XQjf(x)>qgis recursively enumerable. If fand fare lower semicomputable,
thenfis called computable . See Section 6.1.2 for more computability definitions.
Asymptotic Notation. Letf;g:N!R0. We usef2O(g)to denote that there
is a constant csuch thatf(t)cg(t)for allt2N. We usef2o(g)to denote that
lim supt!1f(t)=g(t) = 0. For functions on strings P;Q :X!Rwe useQPto
denote that there is a constant c>0such thatQ(x)cP(x)for allx2X. We also
useQPforPQandQ=PforQPandPQ. Note that Q=Pdoes not
implythat there is a constant csuch thatQ(x) =cP(x)for allx2X. For a sequence
(at)t2Nwith limit limt!1at=awe also write at!aast!1. If no limiting variable
is provided, we mean t!1by convention.
Other Conventions. LetAbe some set. We use #Ato denote the cardinality of
the setA, i.e., the number of elements in A, and 2Ato denote the power set of A, i.e.,
the set of all subsets of A. We use logto denote the binary logarithm and lnto denote
the natural logarithm.
2.1 Measure Theory
For a countable set
, we use
to denote the set of probability distributions over
. If
is uncountable (such as the set of all infinite strings X1), we need to use the
machinery of measure theory. This section provides a concise introduction to measure
theory; see Durrett (2010) for an extensive treatment.
Definition 2.1 (-algebra).Let
be a set. The set F 2
is a-algebra over
iff
(a)
2F,
(b)A2Fimplies
nA2F, and
(c) for any countable number of sets A0;A1;:::;2F, the unionS
i2NAi2F.
For a setA 2
, we define (A)to be the smallest (with respect to set inclusion)
-algebra containing A.
For the real numbers, the default -algebra (used implicitly) is the Borel-algebraB
generated by the open sets of the usual topology. Formally, B:=(f(a;b)ja;b2Rg).
A set
together with a -algebraFforms ameasurable space . The sets from the -
algebraFare called measurable sets . A function f:
1!
2between two measurable
spaces is called measurable iff any preimage of an (in
2) measurable set is measurable
(in
1).
§2.1Measure Theory 17
Definition 2.2 (Probability Measure) .Let
be a measurable space with -algebra
F. Aprobability measure on the space
is a function :F! [0;1]such that
(a)(
) = 1(normalization ), and
(b)(S
i2NAi) =P
i2N(Ai)for any collection fAiji2NgFthat is pairwise
disjoint (-additivity ).
A probability measure isdeterministic iff it assigns all probability mass to a single
element of
, i.e., iff there is an x2
with(fxg) = 1.
We define the conditional probability (AjB)for two measurable sets A;B2F
with(B)>0as(AjB) :=(A\B)=(B).
Definition 2.3 (Random Variable) .Let
be a measurable space with probability
measure. A(real-valued) random variable is a measurable function X:
!R.
We often (but not always) denote random variables with uppercase Latin letters.
Given a-algebraF, a probability measure PonF, and anF-measurable random
variableX, theconditional expectation E[XjF]ofXgivenFis a random variable
Ysuch that (1) YisF-measurable and (2)R
AXdP =R
AYdPfor allA2F. The
conditional expectation exists and is unique up to a set of P-measure 0(Durrett, 2010,
Sec. 5.1). Intuitively, if Fdescribes the information we have at our disposal, then
E[XjF]denotes the expectation of Xgiven this information.
We proceed to define the -algebra onX1(the-algebra onX]is defined analo-
gously). For a finite string x2X, thecylinder set
x:=fxyjy2X1g
is the set of all infinite strings of which xis a prefix. Furthermore, we fix the -algebras
Ft:=
f xjx2Xtg
andF1:= 1[
t=1Ft!
:
The sequence (Ft)t2Nis afiltration : from x=S
a2X xafollows thatFtFt+1for
everyt2N, and allFtF1by the definition of F1.
For our purposes, the -algebraFtmeans ‘all symbols up to and including time
stept.’ So instead of conditioning an expectation on Ft, we can just as well condition
it on the sequence x1:tdrawn at time t. Hence we write E[Xjx1:t]instead of E[XjFt].
Moreover, for conditional probabilities we also write Q(xtjx<t)instead ofQ(x1:tjx<t).
In the context of probability measures, a measurable set E2F1is also called an
event. The event Ec:=X1nEdenotes the complement of E. In case the event Eis
defined by a predicate Qdependent on the random variable X,E=fx2
jQ(X(x))g,
we also use the shorthand notation
P[Q(X)] :=P(fx2
jQ(X(x))g) =P(E):
We assume all sets to be measurable; when we write P(A)for some set AX1,
we understand implicitly that Abe measurable. This is not true: not all subsets of
18 Preliminaries
X1are measurable (assuming the axiom of choice). While we choose to do this for
readability purposes, note that under some axioms compatible with Zermelo-Fraenkel
set theory, notably the axiom of determinacy, all subsets of X1are measurable.
2.2 Stochastic Processes
This section introduces some notions about sequences of random variables.
Definition 2.4 (Stochastic Process) .(Xt)t2Nis astochastic process iffXtis a random
variable for every t2N.
A stochastic process (Xt)t2Nisnonnegative iffXt0for allt2N. The process is
bounded iff there is a constant c2Rsuch thatjXtjcfor allt2N.
In the real numbers, a sequence (zt)t2Nconverges if and only if it is a Cauchy se-
quence, i.e., iffjzt+1 ztj!0ast!1. For sequences of random variables convergence
is a lot more subtle and there are several different notions of convergence.
Definition2.5 (StochasticConvergence) .LetPbeaprobabilitymeasure. Astochastic
process (Xt)t2Nconverges to the random variable X
•inP-probability iff for every ">0,
P
jXt Xj>"
!0ast!1 ;
•inP-meaniff
EP
jXt Xj
!0ast!1 ;
•P-almost surely iff
Ph
lim
t!1Xt=Xi
= 1:
Almost sure convergence and convergence in mean both imply convergence in prob-
ability (Wasserman, 2004, Thm. 5.17). If the stochastic process is bounded, then con-
vergence in probability implies convergence in mean (Wasserman, 2004, Thm. 5.19).
A sequence of real numbers (at)t2Nconverges in Cesàro average toa2Riff
1=tPt
k=1ak!aast! 1. The definition for sequences of random variables is
analogous.
Definition 2.6 (Martingale) .LetPbe a probability measure over (X1;F1). A
stochastic process (Xt)t2Nis aP-supermartingale ( P-submartingale) iff
(a) eachXtisFt-measurable, and
(b)E[XtjFs]Xs(E[XtjFs]Xs)P-almost surely for all s;t2Nwiths<t.
AP-martingale is a process that is both a P-supermartingale and a P-submartingale.
§2.3Information Theory 19
Example 2.7 (Fair Gambling) .Suppose Mary bets on the outcome of a fair coin flip.
If she predicts correctly, her wager is doubled and otherwise it is lost. Let Xtdenote
Mary’s wealth at time step t. Since the game is fair, E[Xt+1jFt] =XtwhereFt
represents the information available at time step t. Hence E[Xt] =X1, so in expectation
she never loses money regardless of her betting strategy. 3
FormartingalesthefollowingfamousconvergenceresultwasprovedbyDoob(1953).
Theorem 2.8 (Martingale Convergence; Durrett, 2010, Thm. 5.2.9) .If(Xt)t2Nis a
nonnegative supermartingale, then it converges almost surely to a limit XwithE[X]
E[X1].
By Theorem 2.8 the martingale from Example 2.7 representing Mary’s wealth con-
verges almost surely, regardless of her betting strategy. Either she refrains from betting
at some point (assuming she cannot place smaller and smaller bets) or she cannot play
anymore because her wealth is 0. Is there a lesson to learn here about gambling?
2.3 Information Theory
This section introduces the notions of entropy and two notions of distance between
probability measures: KL-divergence andtotal variation distance .
Definition 2.9 (Entropy) .Let
be a countable set. For a probability distribution
p2
, theentropyofpis defined as
Ent(p) := X
x2
:p(x)>0p(x) logp(x):
Definition 2.10 (KL-Divergence) .LetP;Qbe two measures and let m2Nbe a
lookahead time step. The Kullback-Leibler-divergence (KL-divergence ) ofPandQ
between time steps tandmis defined as
KLm(P;Qjx<t) :=X
xt:m2Xm t+1P(x1:mjx<t) logP(x1:mjx<t)
Q(x1:mjx<t):
Moreover, we define KL1(P;Qjx<t) := limm!1KLm(P;Qjx<t).
KL-divergence is also known as relative entropy . KL-divergence is always nonneg-
ative by Gibbs’ inequality, but it is not a distance since it is not symmetric. If the
alphabetXis finite, then KLm(P;Qjx)is always finite. However, KL1(P;Qjx)may
be infinite.
Definition 2.11 (Total Variation Distance) .LetP;Qbe two measures and let 1
m1be a lookahead time step. The total variation distance betweenPandQ
between time steps tandmis defined as
Dm(P;Qjx) := sup
AXmP(Ajx<t) Q(Ajx<t):
20 Preliminaries
Total variation distance is always bounded between 0and1sincePandQare prob-
ability measures. Moreover, in contrast to KL-divergence total variation distance satis-
fies the axioms of distance: symmetry ( D(P;Q) =D(Q;P)), identity of indiscernibles
(D(P;Q) = 0if and only if P=Q), and the triangle inequality ( D(P;Q) +D(Q;R)
D(P;R)).
The following lemma shows that total variation distance can be used to bound
differences in expectation.
Lemma 2.12 (Total Variation Bound on the Expectation) .For a random variable X
with 0X1and two probability measures PandQ
EP[X] EQ[X]D(P;Q):
KL-divergence and total variation distance are linked by the following inequality.
Lemma 2.13 (Pinsker’s inequality; Tsybakov, 2008, Lem. 2.5i) .For all probability
measuresPandQonX1, for everyx2X, and for every m2N
Dm(P;Qjx)r
1
2KLm(P;Qjx)
2.4 Algorithmic Information Theory
Auniversal Turing machine (UTM) is a Turing machine that can simulate all other
Turing machines. Formally, a Turing machine Uis a UTM iff for every Turing machine
Tthere is a binary string p(calledprogram) such that U(p;x) =T(x)for allx2X,
i.e., the output of Uwhen run on (p;x)is the same as the output of Twhen run on x.
We assume the set of programs on Uis prefix-free. The Kolmogorov complexity K(x)
of a stringxis the length of the shortest program on Uthat prints xand then halts:
K(x) := minfjpjjU(p) =xg:
Amonotone Turing machine is a Turing machine with a one-way read-only input
tape, a one-way write-only output tape, and a read/write work tape. Monotone Turing
machines sequentially read symbols from their input tape and write to their output
tape. Interpreted as a function, a monotone Turing machine Tmaps a string xto the
longest string that Twrites to the output tape while reading xand no more from the
input tape (Li and Vitányi, 2008, Ch. 4.5.2).
We also use Uto denote a universal monotone Turing machine (programs on the
universal monotone Turing machine do not have to be prefix-free). The monotone Kol-
mogorov complexity Km(x)denotes the length of the shortest program on the monotone
machineUthat prints a string starting with x(Li and Vitányi, 2008, Def. 4.5.9):
Km(x) := minfjpjjxvU(p)g: (2.1)
Since monotone complexity does not require the machine to halt, there is a constant c
such thatKm(x)K(x) +cfor allx2X.
§2.4Algorithmic Information Theory 21
The following notion of a (semi)measure is particular to algorithmic information
theory.
Definition 2.14 (Semimeasure; Li and Vitányi, 2008, Def. 4.2.1) .Asemimeasure over
the alphabetXis a function :X![0;1]such that
(a)()1, and
(b)(x)P
a2X(xa)for allx2X.
A semimeasure is a (probability) measure iff equalities hold in (a) and (b) for all x2X.
Semimeasures are not probability measures in the classical measure theoretic sense.
However, semimeasures correspond canonically to classical probability measures on the
probability space X]=X[X1whose-algebra is generated by the cylinder sets (Li
and Vitányi, 2008, Ch. 4.2 and Hay, 2007).
Lower semicomputable semimeasures correspond naturally to monotone Turing ma-
chines (Li and Vitányi, 2008, Thm. 4.5.2): for a monotone Turing machine T, the
semimeasure Tmaps a string xto the probability that Toutputs something starting
withxwhen fed with fair coin flips as input (and vice versa). Hence we can enumerate
all lower semicomputable semimeasures 1;2;:::by enumerating all monotone Tur-
ing machines. We define the Kolmogorov complexity K()of a lower semicomputable
semimeasure as the Kolmogorov complexity of the index of in this enumeration.
We often mix the (semi)measures of algorithmic information theory with concepts
from probability theory. For convenience, we identify a finite string x2Xwith its
cylinder set x. Then(x)in the algorithmic information theory sense coincides with
( x)in the measure theory sense if we use the identification of semimeasures with
probability measures above.
Example 2.15 (Lebesgue Measure) .TheLebesgue measure oruniform measure is
defined as
(x) := (#X) jxj: 3
The following definition turns a semimeasure into a measure, preserving the predic-
tive ratio(xa)=(xb)fora;b2X.
Definition 2.16 (Solomonoff Normalization) .TheSolomonoff normalization normof
a semimeasure is defined as norm() := 1and for allx2Xanda2X,
norm(xa) :=norm(x)(xa)P
b2X(xb): (2.2)
By definition, normis a measure. Moreover, normdominatesaccording to the
following lemma.
Lemma 2.17 (norm).norm(x)(x)for allx2Xand all semimeasures .
22 Preliminaries
Proof.We use induction on the length of x: ifx=thennorm() = 1 =(), and
otherwise
norm(xa) =norm(x)(xa)P
b2X(xb)(x)(xa)P
b2X(xb)(x)(xa)
(x)=(xa):
The first inequality holds by induction hypothesis and the second inequality uses the
fact thatis a semimeasure.
Chapter 3
Learning
The problem of induction is essentially solved. — David Hume
Machine learning refers to the process of learning models of and/or making predictions
about (large) sets of data points that are typically independent and identically dis-
tributed (i.i.d.); see Bishop (2006) and Hastie et al. (2009). In this chapter we do not
makethei.i.d.assumption. Instead, weaimmoregenerallyatthetheoreticalfundamen-
tals of the sequence prediction problem : how will a sequence of symbols generated by
an unknown stochastic process be continued? Given a finite string x<t=x1x2:::xt 1
of symbols, what is the next symbol xt? How likely does a given property hold for the
entire sequence x1:1? Arguably, any learning or prediction problem can be phrased in
this fashion: anything that can be stored on a computer can be turned into a sequence
of bits.
We distinguish two major elements of learning. First, the process of converging
to accurate beliefs, called merging. Second, the process of making accurate forecasts
about the next symbol, called predicting . These two notions are not distinct: if you
have accurate beliefs about the unseen data, then you can make good predictions, but
not necessarily vice versa (see Example 3.41). We discuss different notions of merging
in Section 3.4 and state bounds on the prediction regret in Section 3.5.
In the general reinforcement learning problem we target in this thesis, the environ-
ment is unknown and the agent needs to learn it. The literature on non-i.i.d. learning
has focused on predicting individual symbols and bounds on the number of prediction
errors (Hutter, 2001b, 2005; Cesa-Bianchi and Lugosi, 2006), and the results on merg-
ing are from the game theory literature (Blackwell and Dubins, 1962; Kalai and Lehrer,
1994; Lehrer and Smorodinsky, 1996). However, we argue that merging is the essential
property for general AI. In order to make good decisions, the agent needs to have ac-
curate beliefs about what its actions will entail. On a technical level, merging leads to
on-policy value convergence (Section 4.2.3), the fact that the agents learns to estimate
the values for its own policy correctly.
The setup we consider is the realizable case : we assume that the data is generated
by an unknown probability distribution that belongs to a known (countable) class of
distributions. In contrast, the nonrealizable case allows no assumptions on the under-
lying process that generates the data. A well-known approach to the nonrealizable case
isprediction with expert advice (Cesa-Bianchi and Lugosi, 2006), which we do not con-
23
24 Learning
sider here. Generally, the nonrealizable case is harder, but Ryabko (2011) argues that
for some problems, both cases coincide.
After introducing the formal setup in Section 3.1, we discuss several examples for
learning distributions and notions that relate the learning distribution with the process
generating the data in Section 3.2. In Section 3.3 we connect these notions to the theory
of martingale processes.
Section 3.6 connects the results from the first sections to the learning framework
developed by Solomonoff (1964, 1978), Hutter (2001b, 2005, 2007b), and Schmidhuber
(2002) (among others). This framework relies on results from algorithmic information
theory and computability theory to learn any computable distribution quickly and
effectively. It is incomputable (see Section 6.2), but can serve as a gold standard for
learning.
Most of this chapter echoes the literature. We collect results from economics and
computer science that previously had not been assembled in one place. We provide
proofs that connect the various properties (Proposition 3.23, Proposition 3.16, and
Proposition 3.37), and we fill in a few gaps in the picture: the prediction bounds for
absolute continuity (Section 3.5.2) and the improved regret bounds for nonuniform
measures (Theorem 3.48 and Theorem 3.51). Section 3.7 summarizes the results in
Table 3.2 on page 45 as well as Figure 3.1 on page 46.
3.1 Setup
For the rest of this chapter, fix PandQto be two probability measures over the
measurablespaceofinfinitesequences (X1;F1). Wethinkof Pasthetrue distribution
from which the data sequence x1:1is drawn, and of Qas our belief distribution or
learning algorithm. In other words, we use the distribution Qto learn a string drawn
from the distribution P.
LetHdenote a hypothesis , i.e., any measurable set from F1. Ourprior belief in
the hypothesis HisQ(H). In each time step t, we make one observation xt2X. Our
historyx<t=x1x2:::xt 1is the sequence of all previous observations. We update our
belief in accordance with Bayesian learning; our posterior belief in the hypothesis His
Q(Hjx1:t) =Q(H\x1:t)
Q(x1:t):
The observation xtconfirms the hypothesis HiffQ(Hjx1:t)>Q(Hjx<t)(the belief
inHincreases), and the observation xtdisconfirms the hypothesis HiffQ(Hjx1:t)<
Q(Hjx<t)(the belief in Hdecreases). If Q(Hjx1:t) = 0, thenHisrefutedorfalsified.
When we assign a prior belief of 0to a hypothesis H, this means that we think
thatHis impossible; it is refuted from the beginning. If Q(H) = 0, then the posterior
Q(Hjx<t) = 0, so no evidence whatsoever can change our mind that His impossible.
This is bad if the hypothesis His actually true.
To be able to learn we need to make some assumptions on the learning distribution
Q: we need to have an open mind about anything that might actually happen, i.e.,
§3.2Compatibility 25
Q(H)>0on any hypothesis HwithP(H)>0. This property is called absolute
continuity . We discuss this and other notions of compatibility of PandQin Section 3.2.
We motivate this chapter with the following example.
Example 3.1 (The Black Ravens; Rathmanner and Hutter, 2011, Sec. 7.4) .If we live
in a world in which all ravens are black, how can we learn this fact? Since at every time
step we have observed only a finite subset of the (possibly infinite) set of all ravens,
how can we confidently state anything about all ravens?
We formalize this problem in line with Rathmanner and Hutter (2011, Sec. 7.4) and
Leike and Hutter (2015d). We define two predicates, blackness Band ravenness R.
There are four possible observations: a black raven BR, a non-black raven BR, a black
non-ravenBR, and a non-black non-raven BR. Therefore our alphabet consists of four
symbols corresponding to each of the possible observations, X:=fBR;BR;BR;BRg.
We are interested in the hypothesis ‘all ravens are black’. Formally, it corresponds
to the measurable set
H:=fx2X1jxt6=BR8tg=fBR;BR;BRg1; (3.1)
the set of all infinite strings in which the symbol BRdoes not occur.
Ifweobserveanon-blackraven, xt=BR,thehypothesis Hisrefutedsince H\x1:t=
;and this implies Q(Hjx1:t) = 0. In this case, our inquiry regarding His settled.
The interesting case is when the hypothesis His in fact true ( P(H) = 1), i.e.,Pdoes
not generate any non-black ravens. The property we desire is that in a world in which
all ravens are black, we arrive at this belief: P(H) = 1impliesQ(Hjx<t)!1as
t!1. 3
3.2 Compatibility
In this section we define dominance ,absolute continuity ,dominance with coefficients ,
weak dominance , andlocal absolute continuity , in decreasing order of their strength.
These notions make the relationship of the two probability measures PandQprecise.
We also give examples for various choices for the learning algorithm Q.
In our examples, we frequently rely on the following process.
Example 3.2 (Bernoulli Process) .AssumeX=f0;1g. For a real number r2[0;1]
we define the Bernoulli process with parameter ras the measure
Bernoulli (r)(x) :=rones(x)(1 r)zeros(x):
Note that Bernoulli (1=2) =, the Lebesgue measure from Example 2.15. 3
Definition 3.3 (Dominance) .The measure Qdominates P(QP) iff there is a
constantc>0such thatQ(x)cP(x)for all finite strings x2X.
Dominance is also called having a grain of truth (Lehrer and Smorodinsky, 1996,
Def. 2a and Kalai and Lehrer, 1993); we discuss this property in the context of game
theory in Chapter 7.
26 Learning
Example 3.4 (Bayesian mixture) .LetMbe a countable set of probability measures
on(X1;F1)and letw2Mbe a prior overM. Ifw(P)>0for allP2M, the
priorwis called positiveoruniversal . Then the Bayesian mixture :=P
P2Mw(P)P
dominates each P2M. 3
The Bayesian mixture is a mathematically simple yet very powerful concept. It is
very easy to derive from a countable set of distributions, and it has been considered
extensivelyintheliterature(Solomonoff,1964;Jaynes,2003;Hutter,2005, ...). Ryabko
(2009)showsthatevenforuncountablyinfiniteclasses, iftherearegoodpredictors, then
a Bayesian mixture over a countable subclass asymptotically also does well.
Example 3.5 (Solomonoff Prior) .Solomonoff (1964) defines a distribution MoverX]
that assigns to a string xthe probability that the universal monotone Turing machine
Uoutputsxwhen fed with fair coin flips on the input tape. Formally,
M(x) :=X
p:xvU(p)2 jpj(3.2)
wherepisabinarystring.1Thefunction Misalowersemicomputablesemimeasure,but
not computable and not a measure (Li and Vitányi, 2008, Lem. 4.5.3); see Section 6.2
for the computability properties of M. More importantly, Mdominates every lower
semicomputable semimeasure (Li and Vitányi, 2008, Thm. 4.5.1).
Solomonoff’s prior Mhas a number of appealing philosophical properties. In line
with Ockham’s razor it favors simple environments over complex ones: Turing machines
that have a short program on the UTM Uhave a higher contribution in the sum (3.2).
In line with Epicurus’ principle it never discards possible explanations: every program
that produces the string xcontributes to the sum. See Rathmanner and Hutter (2011)
for a discussion on the philosophical underpinnings of Solomonoff’s prior. 3
Wood et al. (2011) show that the Solomonoff prior Mcan equivalently be defined as
a Bayesian mixture over all lower semicomputable semimeasures with a prior w(P)/
2 K(P). (If we use w(P) = 2 K(P)we get a semiprior becauseP
P2M2 K(P)can be
less than 1. This prior also carries the name Solomonoff prior .)
Definition 3.6 (Absolute Continuity) .The measure Pisabsolutely continuous with
respect toQ(QP) iffQ(A) = 0impliesP(A) = 0for all measurable sets A.
Remark 3.7 (Absolute Continuity ;Dominance) .Absolute continuity is strictly
weaker than dominance: let X:=f0;1gand define a probability measure Pthat
assigns probability 2=3to1and probability 1=3to0until seeing the first 0, thenP
behaves like the Lebesgue measure . Formally,
P(x1:t) :=( 2
3tifx1:t= 1t;and 2
3n1
3(xn+2:t)if9n0:1n0vx1:t:
1We use the name Solomonoff prior for both a distribution over X1and a distribution over a com-
putably enumerable set M. Maybe Mshould better be called Solomonoff mixture to avoid confusion.
§3.2Compatibility 27
Since(1t)=P(1t) = (3=4)t!0ast!1, there is no constant csuch that(x)=P(x)>
c>0for all finite strings x2X, hencedoes not dominate P. ButPis absolutely
continuous with respect to becauseP-almost surely we draw a 0eventually, and
thenPbehaves like . HenceP-almost surely =P6!0. The claim now follows from
Proposition 3.23b. 3
The idea of Remark 3.7 is to ‘punch a hole into ’ at the infinite string 11. This
infinite string has probability 0, hence this hole does not break absolute continuity.
But it breaks dominance on this infinite string. Analogously we could punch countably
many holes into a probability measure without breaking absolute continuity.
Definition 3.8 (Weak Dominance) .The measure Qweakly dominates P(QWP)
iff
lim
t!11
tlogQ(x1:t)
P(x1:t)= 0withP-probability 1:
Lehrer and Smorodinsky (1996, Rem. 8) point out that for any PandQ,
lim sup
t!11
tlogQ(x1:t)
P(x1:t)0P-almost surely ;
so crucial is whether the lim infis also 0.
Remark 3.9 (Weak Dominance) .The measure Qweakly dominates Pif and only if
P-almost surely log(P(x)=Q(x))2o(t). 3
Ryabko and Hutter (2007, 2008) consider the following definition. It is analogous
to Definition 3.3, except that the constant cis allowed to depend on time.
Definition 3.10 (Dominance with Coefficients; Ryabko and Hutter, 2008, Def. 2) .The
measureQdominatesPwith coefficients f(QP=f) iffQ(x)P(x)=f(jxj)for all
x2X.
IfQdominatesPwith coefficients fandfgrows subexponentially ( f2o(exp)),
thenQweakly dominates Pby Remark 3.9.
Example 3.11 (Speed Prior) .Schmidhuber (2002) defines a variant of Solomonoff’s
priorMthat penalizes programs by their running time, called the speed prior . Consider
the speed prior
SKt(x) :=X
p:xvU(p)2 jpj
t(U;p;x )
wheret(U;p;x )is the number of time steps the Turing machine Utakes to produce x
from the program p. For any deterministic measure Pcomputable in time qwe have
SKt(x)P(x)=q(jxj). Therefore SKtdominatesPwith coefficients O(q). Ifqis a
polynomial ( Pis computable in polynomial time), then it grows subexponentially and
thusSKtweakly dominates P. 3
Thesemimeasureloss SKt(x) P
a2XSKt(xa)inthespeedpriorisquitesubstantial:
since it takes at least nsteps to output a string of length n,M(x)jxjSKt(x).
28 Learning
Example 3.12 (Laplace Rule) .TheLaplace rule Lis defined by
L(xtjx<t) :=#fi<tjxi=xtg
t+ #X:
ForX=f0;1gandr2[0;1]themeasure LdominatesBernoulli (r)withcoefficients
f(t) =t #X+1(Ryabko and Hutter, 2008, Prop. 3). 3
Definition 3.13 (Local Absolute Continuity) .The measure Pislocally absolutely
continuous with respect to Q(QLP) iffQ(x) = 0impliesP(x) = 0for all finite
stringsx2X.
The notable difference between local absolute continuity and absolute continuity
is that Definition 3.6 talks about arbitrary measurable sets while Definition 3.13 only
talks about finite strings. The former is a much stronger property.
For example, every measure is locally absolutely continuous with respect to the
Lebesgue measure since (x)>0for all finite strings x2X.
Local absolute continuity is an extremely weak property. If it is not satisfied, we
have to be very careful when using Qfor prediction: then there is a positive probability
that we have to condition on a probability zero event.
Example 3.14 (The Minimum Description Length Principle; Grünwald, 2007) .Let
Mbe a countable set of probability measures on (X1;F1)and letK:M! [0;1]
be a function such thatP
P2M2 K(P)1calledregularizer . Following notation from
Hutter (2009a), we define for each x2Xtheminimal description length model as
MDLx:= arg min
P2Mf logP(x) +K(P)g:
logP(x)is the (arithmetic) code length of xgiven model P, andK(P)is a complexity
penalty for P. Given data x2X,MDLxis the measure P2Mthat minimizes the
total code length of data and model.
Note that the Lebesgue measure is not locally absolutely continuous with respect
to the MDL distribution Q(x) := MDLx(x): for somex2Xthe minimum description
P2Mmay assign probability zero to a continuation xy2X. 3
Remark 3.15 (MDL is Inductively Inconsistent; Leike and Hutter, 2014a, Cor. 13) .
The MDL estimator for countable classes as defined in Example 3.14 is inductively
inconsistent: the selected model P2 Mcan change infinitely often and thus the
limit limt!1MDLx<tmay not exist. This can be a major obstacle for using MDL for
prediction, since the model used for prediction has to be changed over and over again,
incurring the corresponding computational cost. 3
The following proposition establishes the relationship between our notions of com-
patibility; see also Figure 3.1 on page 46.
Proposition 3.16 (Relationships between Compatibilities) .
(a) IfQP, thenQP.
§3.3Martingales 29
(b) IfQP, thenQWP.
(c) IfQP, thenQdominatesPwith coefficients ffor a constant function f.
(d) IfQdominatesPwith coefficients fandf2o(exp), thenQWP.
(e) IfQWP, thenQLP.
Proof.(a) From Proposition 3.23 (a) and (b).
(b) From Proposition 3.23b and Kalai and Lehrer (1994, Prop. 3a).
(c) Follows immediately from the definitions.
(d) From Remark 3.9.
(e) Follows immediately from the definitions.
Note that the converse of Proposition 3.16d is false: in Remark 3.7 we defined
a measure Pthat is absolutely continuous with respect to (and hence is weakly
dominated by ), but the coefficients for P=grow exponentially on the string 1t.
This infinite string has P-probability 0, but dominance with coefficients demands the
inequalityQP=fto hold for all strings.
Remark 3.17 (Local Absolute Continuity ;Absolute Continuity) .DefineP:=
Bernoulli (2=3)andQ:=Bernoulli (1=3). Both measures PandQare nonzero on
all cylinder sets: Q(x)3 jxj>0andP(x)3 jxj>0for everyx2X. Therefore
Qis locally absolutely continuous with respect to P. However, Qisnotabsolutely
continuous with respect to P: define
A:=
x2X!lim sup
t!11
tones(x1:t)1
2
:
The setAisF1-measurable since A=T1
n=1S
x2Un xwithUn:=fx2Xjjxj
nandones(x)jxj=2g, the set of all finite strings of length at least nthat have at
least as many zeros as ones. We have that P(A) = 0andQ(A) = 1, henceQis not
absolutely continuous with respect to P. 3
3.3 Martingales
The following two theorems state the connection between probability measures on infi-
nite strings and martingales. For two probability measures PandQthe quotient Q=P
is a nonnegative P-martingale if Qis locally absolutely continuous with respect to P.
Conversely, for every nonnegative P-martingale there is a probability measure QLP
such that the martingale is P-almost surely a multiple of Q=P.
30 Learning
Theorem 3.18 (Measures7!Martingales; Doob, 1953, II§7 Ex. 3) .LetQandPbe
two probability measures on (X1;F1)such thatQis locally absolutely continuous with
respect toP. Then the stochastic process (Xt)t2N,
Xt(x) :=Q(x1:t)
P(x1:t)
is a nonnegative P-martingale with E[Xt] = 1.
Theorem3.19 (Martingales7!Measures) .LetPbe a probability measure on (X1;F1)
and let (Xt)t2Nbe a nonnegative P-martingale with E[Xt] = 1. There is a probability
measureQon(X1;F1)that is locally absolutely continuous with respect to Pand for
allx2X1and allt2NwithP(x1:t)>0,
Xt(x) =Q(x1:t)
P(x1:t):
The proofs for Theorem 3.18 and Theorem 3.19 are provided in the Appendix.
Example 3.20 (The Posterior Martingale) .Suppose we are interested in a hypothesis
HX1(such as the proposition ‘all ravens are black’ in Example 3.1). If Q(H) =P
P2Mw(P)P(H)is a Bayesian mixture over a set of probability distributions M
with prior weights w2M(see Example 3.4), then the posterior belief Q(Hjx) =P
P2Mw(Pjx)P(Hjx). The weights w(Pjx)are called posterior weights , and
satisfy the identity
w(Pjx) =w(P)P(x)
Q(x)(3.3)
since
Q(Hjx) =Q(H\x)
Q(x)
=1
Q(x)X
P2Mw(P)P(H\x)
=X
P2Mw(P)P(x)P(H\x)
Q(x)P(x)
=X
P2Mw(Pjx)P(Hjx):
According to Theorem 3.18 the posterior weight w(Pjx)is aQ-martingale with
expectation w(P). In particular, this means that the posterior weights converge Q-
almostsurelybythemartingaleconvergencetheorem(Theorem2.8). Since Qdominates
P, by Proposition 3.16a Pis absolutely continuous with respect to Qand hence the
posterior also converges P-almost surely. 3
Remark 3.21 (Martingales and Absolute Continuity) .While Theorem 3.18 trivially
also holds if Qis absolutely continuous with respect to P, Theorem 3.19 does not imply
thatQis absolutely continuous with respect to P.
§3.3Martingales 31
LetPandQbe defined as in Remark 3.17. Consider the process X0(x) := 1,
Xt+1(x) :=(
2Xt;ifxt+1= 0;and
1
2Xt;ifxt+1= 1:
The process (Xt)t2Nis a nonnegative P-martingale since every XtisFt-measurable and
forx=y1:twe have
E[Xt+1jFt](y) =P(x0jx)2Xt(y) +P(x1jx)1
2Xt(y)
=1
32Xt(y) +2
31
2Xt(y) =Xt(y):
Moreover,
Q(x) = 1
3ones(x) 2
3zeros(x)= 2
3ones(x) 1
3zeros(x)2 ones(x)2zeros(x)=P(x)Xt(y):
HenceXt(y) =Q(y1:t)=P(y1:t)P-almost surely. The measure Qis uniquely defined
by its values on the cylinder sets, and as shown in Remark 3.17, Qis not absolutely
continuous with respect to P. 3
Theorem 3.22 (Radon-Nikodym Derivative) .IfQP, then there is a function
dP=dQ :X1![0;1)called the Radon-Nikodym derivative such that
Z
fdP =Z
fdP
dQdQ
for all measurable functions f.
This function dP=dQcan be seen as a density function of Pwith respect to the
background measure Q. Moreover, dP=dQis the limit of the martingale P=Q(Durrett,
2010, Sec. 5.3.3) which exists Q-almost surely according to Theorem 2.8.
ThefollowingpropositioncharacterizesthenotionsofcompatibilityfromSection3.2
in terms of the martingale Q=P.
Proposition 3.23 (Martingales and Compatibility) .The following relationships hold
betweenQ,P, and theP-martingale Yt:=Q(x1:t)=P(x1:t).
(a)QPif and only if Ytc>0for allt2N.
(b)QPif and only if P-almost surely Yt6!0ast!1.
(c)QdominatesPwith coefficients fif and only if Yt1=f(t)for allt.
(d)QWPif and only if P-almost surely log(Yt+1=Yt)!0in Cesàro average.
(e)QLPif and only if P-almost surely Yt>0for allt2N.
Proof. (Yt)t2Nis aP-martingale according to Theorem 3.18.
(a)Q(x)cP(x)withc>0for allx2Xis equivalent to Q(x)=P(x)c>0for all
x2X.
32 Learning
(b) Proved by Hutter (2009a, Lem. 3i).
(c) Analogously to the proof of (a).
(d) IfQweakly dominates P, we get logYt2o(t)according to Remark 3.9. To-
gether with Y0= 1we get logYt=Pt 1
k=0 log(Yk+1=Yk)2o(t), therefore
t 1Pt 1
k=0 log(Yk+1=Yk)!0ast!1. Conversely, if the Cesàro average con-
verges to 0, thent 1logYt!0, hence logYt2o(t).
(e) Letx2 Xbe any finite string. If QLPandP(x)>0, thenQ(x)>0,
and henceQ(x)=P(x)>0. Conversely, if P(x)>0thenYtis well-defined, so if
Yjxj(x)>0thenQ(x)>0.
From Proposition 3.23b and Theorem 3.22 we get that QPif and only if the
Radon-Nikodym derivative dQ=dPis positive on a set of P-measure 1.
3.4 Merging
IfQis capable of learning, it should use the sequence xdrawn from Pto change its
opinions more in the direction of P. More precisely, we want Q(jx<t)P(jx<t)
for larget. In the rest of this chapter, we make this notion of closeness precise and
discuss different conditions on Qthat are sufficient for learning.
Strong merging implies that the belief of anyhypothesis merges. This is very strong,
ashypothesescantalkabout tail events : eventsthatareindependentofanyfiniteinitial
part of the infinite sequence (such as the event Ain Remark 3.17). Weak merging only
considers hypothesis about the next couple of symbols, and almost weak merging allows
Qto deviate from Pin a vanishing fraction of the time. Much of this section is based
on Kalai and Lehrer (1994) and Lehrer and Smorodinsky (1996).
3.4.1 Strong Merging
Definition 3.24 (Strong Merging) .Qmerges strongly with PiffD1(P;Qjx<t)!0
ast!1P-almost surely.
The following theorem is the famous merging of opinions theorem by Blackwell and
Dubins (1962).
Theorem 3.25 (Absolute Continuity )Strong Merging; Blackwell and Dubins, 1962) .
IfPis absolutely continuous with respect to Q, thenQmerges strongly with P.
Example 3.26 (The Black Ravens 2; Rathmanner and Hutter, 2011, Sec. 7.4) .Recall
the black raven problem from Example 3.1. Let Qbe a learning distribution that
dominates the true distribution P, such as a Bayesian mixture (Example 3.4). By
Proposition 3.16a we get QP, and hence Qmerges strongly to Pby Theorem 3.25.
Thus we get as t!1thatP-almost surelyjQ(Hjx<t) P(Hjx<t)j!0for the
hypothesis Hthat ‘all ravens are black’ defined in (3.1). Thus if all ravens are black
in the real world ( P(H) = 1),Qlearns this asymptotically ( Q(Hjx<t)!1). This is
§3.4Merging 33
the solution we desired: the learning distribution Qconverges to a true belief about an
infinite set by only looking from a finite (but growing) number of data points. 3
The following is the converse of Theorem 3.25.
Theorem 3.27 (Strong Merging^Local Absolute Continuity )Absolute Continuity;
Kalai and Lehrer, 1994, Thm. 2) .IfQis locally absolutely continuous with respect to
PandQmerges strongly with P, thenPis absolutely continuous with respect to Q.
The following result shows that local absolute continuity is not required for strong
merging: recall that according to Example 3.14 the MDL distribution is not locally
absolutely continuous with respect to every Pfrom the classM.
Theorem 3.28 (Strong Merging for MDL; Hutter, 2009a, Thm. 1) .IfP2M, then
D1(P;MDLxjx)!0asjxj!1P-almost surely.
LetMbe a (possibly uncountable) set of probability measures on (X1;F1).
Ryabko (2010, Thm. 4) shows that if there is a Qthat merges strongly with every
P2M, then there is a Bayesian mixture over a countable subset of Mthat also
merges strongly with every P2M.
3.4.2 Weak Merging
In Definition 3.24 the supremum ranges over all measurable sets A2F1which includes
tail events. Instead, we may restrict the supremum to the next few symbols. This is
known as weak merging .
Definition 3.29 (Weak Merging) .Qweakly merges with Piff for every d2N,
Dt+d(Q;Pjx<t)!0ast!1P-almost surely.
The following lemma gives an equivalent formulation of weak merging.
Lemma 3.30 (Lehrer and Smorodinsky, 1996, Rem. 5) .Qweakly merges with Pif
and only if Dt(Q;Pjx<t)!0ast!1P-almost surely.
Unfortunately, weak dominance is not sufficient for weak merging (Lehrer and
Smorodinsky, 1996, Ex. 10). We need the following stronger condition, that turns
out to be (almost) necessary. In the following, let Yt:=Q(x1:t)=P(x1:t)denote the
P-martingale from Proposition 3.23.
Theorem 3.31 (Kalai and Lehrer, 1994, Prop. 5a) .IfP-almost surely Yt+1=Yt!1,
thenQmerges weakly with P.
Example 3.32 (Laplace Rule 2) .Suppose we use the Laplace rule from Example 3.12
to predict a Bernoulli (r)process. By the strong law of large numbers, L(xtjx<t)!r
almost surely. Therefore we can use Theorem 3.31 to conclude that Lmerges weakly
with Bernoulli (r)for allr2[0;1]. (Note that strongly merging with every Bernoulli
process is impossible; Ryabko, 2010, p. 7) 3
34 Learning
The following is a converse to Theorem 3.31.
Theorem 3.33 (Kalai and Lehrer, 1994, Prop. 5b) .IfQmerges weakly with P, then
Yt+1=Yt!1inP-probability.
Unfortunately, weak dominance is not enough to guarantee weak merging.
Example3.34 (WeakDominance ;WeakMerging; RyabkoandHutter,2007,Prop.7) .
LetX=f0;1gand letfbe any arbitrarily slowly monotone growing function with
f(t)!1. DefineP(11) := 1, the sequence (ti)i2Nsuch thatf(ti+1)2f(ti), and
Q(xtjx<t) :=8
>><
>>:1
2ift=tifor somei2N;
1ift6=tiandxt= 1;and
0otherwise:
NowQdominatesPwith coefficients fby construction and Qweakly dominates Pif
fgrows subexponentially. However, jQ(1j1t) P(1j1t)j1=2for infinitely many
t2N. 3
3.4.3 Almost Weak Merging
The following definition is due to Lehrer and Smorodinsky (1996, Def. 10).
Definition 3.35 (Almost Weak Merging) .Qalmost weakly merges with Piff for every
d2N
1
ttX
k=1Dt+d(Q;Pjx<t)!0ast!1P-almost surely :
There is also an analogue of Lemma 3.30 for almost weakly merging in the sense
that we can equivalently set d= 0(Lehrer and Smorodinsky, 1996, Rem. 6).
Remark 3.36 (Weak Merging and Merging in KL-Divergence) .From Lemma 2.13
follows that weak merging is implied by KLd(P;Qjx)!0P-almost surely and
almost weak merging is implied byPt
k=1KL1(P;Qjx<k)2o(t)P-almost surely,
i.e.,KLt(P;Q)2o(t)(Ryabko and Hutter, 2008, Lem. 1). The converse is generally
false. 3
The following proposition relates the three notions of merging.
Proposition 3.37 (Strong Merging )Weak Merging)Almost Weak Merging) .If
Qmerges strongly with P, thenQmerges weakly with P. IfQmerges weakly with P,
thenQmerges almost weakly with P.
Proof.Follows immediately from the definitions.
Theorem 3.38 (Weak Dominance )Almost Weak Merging; Lehrer and Smorodinsky,
1996, Thm. 4) .IfQweakly dominates P, thenQmerges almost weakly with P.
§3.5Predicting 35
From Theorem 3.38 we get that the speed prior (Example 3.11) merges almost
weakly with any probability distribution estimable in polynomial time.
We also have the following converse to Theorem 3.38.
Theorem 3.39 (Almost Weak Merging )Weak Dominance; Lehrer and Smorodinsky,
1996, Cor. 7) .IfQis locally absolutely continuous with respect to P,Qmerges almost
weakly with P, andP-almost surely lim inft!1Yt+1=Yt>0, thenQweakly dominates
P.
3.5 Predicting
In Section 3.4 we wanted Qto acquire the correct beliefs about P. In this section, we
exploit the accuracy of our beliefs for predicting individual symbols. We derive bounds
on the number of errors Qmakes when trying to predict a string drawn from P.
Sincethedatadrawnfrom Pisstochastic, wecannotexpecttomakeafinitenumber
of errors. Even the perfect predictor that knows Pgenerally makes an infinite number
of errors. For example, trying to predict the Lebesgue measure (Example 2.15), in
expectation we make half an error in every time step. So instead we are asking about
the asymptotic error rate of a predictor based on Qcompared to a predictor based on
P, theprediction regret .
LetxR
tbe thet-th symbol predicted by the probability measure Raccording to the
maximum likelihood estimator:
xR
t:2arg max
a2XR(x<tajx<t): (3.4)
Theinstantaneous error of aR-based predictor is defined as
eR
t:=(
0ifxt=xR
t;and
1otherwise:
and thecumulative error is
ER
t:=tX
k=1eR
k:
Note that both etandEtare random variables.
Definition 3.40 (Prediction Regret) .In time step ttheprediction regret isEQ
t EP
t
and theexpected prediction regret isEh
EQ
t EP
ti
.
More generally, we could also follow Hutter (2001b) and phrase predictive perfor-
mance in terms of loss: given a loss function `:XX! Rthe predictor Qsuffers an
(instantaneous) loss of `(xQ
t;xP
t)in time step t. If the loss function `is bounded in [0;1],
many of the results for prediction regret also hold for cumulative loss (for Section 3.5.2
we also need `(a;a) = 0for alla2X). In this chapter we chose to phrase the results
in terms of prediction errors instead of loss because prediction errors are conceptionally
simpler.
36 Learning
Example 3.41 (Good Prediction Regret ;Merging/Compatibility) .Good predic-
tion regret does not imply (weak/strong) merging or (weak) dominance: Let P:=
Bernoulli (1=3)andQ:=Bernoulli (1=4). ClearlyPandQdo not merge (weakly) or
(weakly) dominate each other. However, a P-based predictor always predicts 0, and so
does aQ-based predictor. Therefore the prediction regret EQ
t EP
tis always 0.3
Example 3.42 (Adversarial Sequence; Legg, 2006, Lem. 4) .No learning distribution
Qwill learn to predict everything. We can always define a Q-adversarial sequence z1:1
recursively according to
zt:=(
0ifQ(0jz<t)<1=2;and
1ifQ(0jz<t)1=2:
In every time step the probability that a Q-based predictor makes an error is at least
1=2, henceeQ
t1=2andEQ
tt=2. Butz1:1is a deterministic sequence, thus an
informed predictor makes zero errors. Therefore the prediction regret of Qon the
sequencez1:1is linear. 3
3.5.1 Dominance
WestartwiththepredictionregretboundsprovedbyHutter(2001b)incasethelearning
distribution Qdominates the true distribution P. In the following, let cPdenote the
constant from Definition 3.3.
Theorem 3.43 (Hutter, 2007b, Eq. 5 & 8) .For allPandQ,
q
EPEQ
n q
EPEPnp
2KLn(P;Q):
The following bound on prediction regret then follows easily, but it is a factor ofp
2
worse than the bound stated by Hutter (2005, Thm. 3.36).
Corollary 3.44 (Expected Prediction Regret) .For allPandQ,
0EPh
EQ
n EP
ni
2KLn(P;Q) + 2q
2KLn(P;Q)EPEPn:
Proof.From Theorem 3.43 we get
EPh
EQ
n EP
ni
=q
EPEQ
n+q
EPEPnq
EPEQ
n q
EPEPn
q
EPEQ
n+q
EPEPnp
2KLn(P;Q)
p
2KLn(P;Q) +q
EPEPn+q
EPEP
tp
2KLn(P;Q)
= 2KLn(P;Q) + 2q
2KLn(P;Q)EPEPn:
§3.5Predicting 37
IfQdominatesP, then we have KLn(P;Q) lncP:
KLn(P;Q) =X
x2XnP(x) logP(x)
Q(x)X
x2XnP(x) log1
cP= logcP(3.5)
This invites the following corollary.
Corollary 3.45 (Prediction Regret for Dominance; Hutter, 2005, Cor. 3.49) .IfQ
dominatesP, then the following statements hold.
(a)EPEQ
1is finite if and only if EPEP
1is finite.
(b)q
EPEQ
1 p
EPEP12O(1)
(c)EPEQ
t=EPEP
t!1forEPEP
t!1.
(d)EPh
EQ
t EP
ti
2Op
EPEP
t
.
If the true distribution Pis deterministic, we can improve on these bounds:
Example 3.46 (Predicting a Deterministic Measure) .Suppose we are predicting a
deterministic measure Pthat assigns probability 1to the infinite string x1:1. IfPis
dominated by Q, the total expected prediction regret EPEQ
1is bounded by 2 lncP
by Corollary 3.44. This is easy to see: every time we predict a wrong symbol a6=xt,
thenQ(ajx<t)Q(xtjx<t), soQ(xtjx<t)1=2. Therefore YtYt 1=2and by
dominance YtcP. Hence a prediction error can occur at most logcPtimes. 3
Generally, the O(EPEP
t)bounds on expected prediction regret given in Corol-
lary 3.45 are essentially unimprovable:
Example 3.47 (Lower Bounds on Prediction Regret) .SetX:=f0;1gand consider
the uniform measure from Example 2.15. For each time step t, we have(0jx<t) =
(1jx<t) = 1=2, so the argmax in (3.4) ties and hence it does not matter whether
we predict 0or1. We take two predictors PandQ, wherePalways predicts 0andQ
always predicts 1. LetZt:=EQ
t EP
t. Since their predictions never match, Ztis an
ordinary random walk with step size 1. We have (Weisstein, 2002)
lim sup
t!1EP[EQ
t EP
t]p
t=p
2=
and for the law of the iterated logarithm (Durrett, 2010, Thm. 8.8.3)
lim sup
t!1EQ
t EP
tp2tlog logt= 1P-almost surely :
Both bounds are known to be asymptotically tight. 3
While Example 3.47 shows that the bounds from Corollary 3.45 are asymptotically
tight, they are misleading because in most cases, we can do much better. According
38 Learning
to the following theorem, the worst case bounds are only attained if P(xtjx<t)is
sufficiently close to 1=2.
Theorem 3.48 (Expected Prediction Regret for Nonuniform Measures) .IfX=f0;1g
and there is an ">0such thatjP(xtjx<t) 1=2j"for allx1:t2X, then
EPh
EQ
t EP
ti
KLt(P;Q)
":
Proof.Recall the definition of entropy in nats:
Ent(p) := plnp (1 p) ln(1 p):
The second order Taylor approximation of Entat1=2is
f(p) = ln 2 2(p 1
2)2:
One can check that f(p)Ent(p)for all 0p1. Definep:=P(xP
tjx<t)1=2
andq:=Q(xQ
tjx<t)1=2to ease notation. Consider the function
g(p;q;" ) :=p (1 p) " 1
plnp
1 q+ (1 p) ln1 p
q
which is strictly increasing as qdecreases, so from q1=2we get
g(p;q;" )2p 1 " 1ln 2 +" 1Ent(p) ln 2
2p 1 " 1ln 2 +" 1f(p) ln 2
= 2p 1 " 12(p 1
2)2;
which decreases as pincreases, hence it is maximized for p= 1=2 +",
g(p;q;" )2" " 12"2= 0
Thereforegis nonpositive. If xQ
t=xP
t, the one-step error is 0. Otherwise EP[etj
x<t] =p (1 p)andg(p;q;" ) =EP[etjx<t] " 1KL1(P;Qjx<t), so we get
EP[etjx<t]" 1KL1(P;Qjx<t). Summing this from t= 1tonyields the claim.
Eh
EQ
n EP
ni
" 1KLn(P;Q):
3.5.2 Absolute Continuity
Theorem 3.49 (Prediction with Absolute Continuity) .IfQP, then
q
EQ
t q
EP
tOp
log logt
P-almost surely :
The proof idea is inspired by Miller and Sanchirico (1999). We think of PandQas
two players in a zero-sum betting game. In every time step t, the players will make a
bet on the outcome of xt. Ifxt=xQ
t6=xP
t, thenQwins $1 from P, ifxt=xP
t6=xQ
t,
§3.5Predicting 39
thenQloses $1 to P. Otherwise xQ
t=xP
torxQ
t6=xt6=xP
tand neither player gains
or loses money. Since Qpredicts according to the maximum likelihood principle (3.4),
it is rational to accept the bet from Q’s perspective. In Q’s eyes, the worst case is a
fair bet, so Qwill not lose more money than it would lose on a random walk. The law
of the iterated logarithm gives a Q-probability one statement about this bound, which
transfers to Pby absolute continuity.
Proof.Define the stochastic process Zt:=EQ
t EP
t. Since EQ[eR
t] =Q(xR
tjx<t), we
get
EQ[Zt+1jFt] =Q(xQ
tjx<t) Q(xP
tjx<t) +Zt
Q(xQ
tjx<t) Q(xQ
tjx<t) +Zt=Zt;
hence (Zt)t2Nis aQ-submartingale. In the worst case (for Q),(Zt)t2Nis just a random
walk with step size 1. ButZtcan only move if QandPpredict a different symbol. If
this happens, at least one of them makes an error. Let mtbe the number of steps Zt
has moved ( Zt+16=Zt). ThenmtEQ
t+EP
tandmtt. By the law of the iterated
logarithm (Durrett, 2010, Thm. 8.8.3),
lim inf
t!1Ztp2mtlog logmt= 1
Q-almost surely. We define the event
A:=n
9C8t:Zt Cp
mtlog logmto
:
ThenQ(A) = 1, henceP(A) = 1by absolute continuity.
EQ
t EP
t=ZtCq
(EQ
t+EP
t) log logtCq
EQ
t+q
EP
tp
log logt
Dividingbothsidesbyq
EQ
t+p
EP
tyieldsthatthereisa P-almostsurelyfiniterandom
variableCsuch thatq
EQ
t p
EP
tCplog logt.
This invites the following immediate corollary.
Corollary 3.50 (Prediction Regret for Absolute Continuity) .IfQP, then
EQ
t EP
t2O
log logt+q
EP
tlog logt
P-almost surely :
Proof.Analogously to the proof of Corollary 3.44.
While Corollary 3.50 establishes an almost sure prediction regret bound, it is dif-
ferent from the bound on expected prediction regret from Corollary 3.44; bounds on
E[EQ
t EP
t]are incomparable to almost sure bound given in Theorem 3.49: for a se-
quence of nonnegative (unbounded) random variables convergence in mean does not
40 Learning
imply almost sure convergence (Stoyanov, 2013, Sec. 14.7) or vice versa (Stoyanov,
2013, Sec. 14.8ii).
Weproceedtoestablishanimprovedpredictionregretboundincase Pisnonuniform
analogously to Theorem 3.48.
Theorem 3.51 (Prediction Regret for Nonuniform Measures) .IfQP,X=f0;1g,
and there is an ">0such that with P-probability 1
jP(xtjx<t) 1=2j"
for allt2N, thenP-almost surely EQ
t EP
t2O(1).
Proof.IfjP(xtjx<t) 1=2j", then for large enough t,Qwill have merged with P
(Theorem 3.25) and hence jQ(xtjx<t) 1=2j"=2infinitely often.
ThusZthas an expected gain of "=2if the predictors disagree. Therefore Zt!1
Q-almost surely. Consequently, the set
A:=f9t08tt0:Zt0g
hasQ-measure 1. By absolute continuity, it also has P-measure 1, hence there is a
P-almost surely finite random variable Csuch that for all t,Zt C.
There is another argument that we could use to show that under the condition of
Theorem 3.51 EQ
t EP
tis almostsurely finite: If Pis absolutelycontinuous withrespect
toQ, thenQmerges strongly with Pand henceQmerges weakly with P. Therefore
almost surely there is a t0such that for all tt0we havejQ(xP
tjx<t) P(xP
tjx<t)j<
", thusxQ
t=xP
tfortt0.
3.5.3 Dominance with Coefficients
Lemma 3.52 (KL Divergence and Dominance With Coefficients) .IfQdominatesP
with coefficients f, then KLt(Q;P)lnf(t).
Proof.Analogous to (3.5).
This lets us derive an analogous regret bound to Corollary 3.44.
Corollary 3.53 (Expected Prediction Regret for Dominance With Coefficients) .IfQ
dominatesPwith coefficients f, then
EPh
EQ
n EP
ni
2 lnf(t) + 2q
2EPEPnlnf(t):
Proof.Apply Lemma 3.52 to Corollary 3.44.
For weak dominance we get sublinear prediction regret.
Corollary 3.54 (Sublinear Prediction Regret for Weak Dominance) .IfQweakly dom-
inatesP, then EP[EQ
n EP
n]2o(t).
§3.6Learning with Algorithmic Information Theory 41
Proof.By Remark 3.9 lnf2o(t). Applying Corollary 3.53 we get
EPh
EQ
n EP
n
2o(t) + 2q
2EPEPno(t)2o(t) + 2p
2O(t)o(t)2o(t):
3.6 Learning with Algorithmic Information Theory
Algorithmic information theory provides a theoretical framework to apply the probabil-
ity theory results from the previous sections. In the following we discuss Solomonoff’s
famous theory of induction (Section 3.6.1), the speed prior (Section 3.6.2), and learning
with a universal compression algorithm (Section 3.6.3).
3.6.1 Solomonoff Induction
Solomonoff (1964, 1978) proposed a theory of learning, also known as universal in-
ductionorSolomonoff induction . It encompasses Ockham’s razor by favoring simple
explanations over complex ones, and Epicurus’ principle of multiple explanations by
never discarding possible explanations. See Rathmanner and Hutter (2011) for a very
readable introduction to Solomonoff’s theory and its philosophical motivations and
Sterkenburg (2016) for a critique of its optimality.
At the core of this theory is Solomonoff’s distribution M, as defined in Example 3.5.
SinceMdominates all lower semicomputable semimeasures, we get all the merging and
prediction results from Section 3.4 and Section 3.5: when drawing a string from any
computable measure P,Marrives at the correct belief for any hypothesis.
Corollary 3.55 (Strong Merging for Solomonoff Induction) .Mmerges strongly with
every computable measure.
Proof.From Proposition 3.16a and Theorem 3.25.
Corollary 3.56 (Expected Prediction Regret for Solomonoff Induction) .For all com-
putable measures P,
EP
EM
t EP
t
K(P) ln 4 +q
2EPEP
tK(P) ln 16:
Proof.From Corollary 3.44 and cP= 2 K(P).
Remark 3.57 (Converging Fast and Slow) .The convergence of Mto a computable P
is fast in the sense of Corollary 3.56: Mcannot make many more prediction errors than
Pin expectation. When predicting an infinite computable sequence x1:1, the total
number of prediction errors is bounded by jpj2 ln 21:4jpjwherepis a program that
generatesx1:1(Example 3.46).
The convergence of MtoPis also slow in the sense that M(xtjx<t)!1slower
than any computable function since 1 M(xtjx<t)2 minntK(n)for allt. 3
The bound from Corollary 3.56 is not optimal. Even if we knew the program p
generating the sequence x1:1, there might be a shorter program p0that computes x1:1;
42 Learning
hence the improved bound EM
1jp0j2 ln 2also holds. Since Kolmogorov complexity is
incomputable, we can’t find the ‘best’ bound algorithmically.
Solomonoff induction may even converge on some incomputable measures.
Example 3.58 (MConverges on Some Incomputable Measures) .Letrbe an incom-
putable real number. Then the measure P:=Bernoulli (r)is not computable and Mis
not absolutely continuous with respect to P: for
A:=n
x2X1lim
t!1ones(x1:t) =ro
we haveP(A) = 1butM(A) = 0. SinceMLPwe get from Theorem 3.27 that M
does not merge with P. Nevertheless, Mstill succeeds at prediction because it domi-
nates Bernoulli (q)for each rational qand the rationals are dense around r. According
to Lehrer and Smorodinsky (1996, Lem. 3), this implies that Mweakly dominates P
and by Theorem 3.38 Malmost weakly merges to P. 3
The fact that Mdoes not merge strongly with every Bernoulli (r)process is not
a failure of Solomonoff’s prior. Ryabko (2010, p. 7) shows that for the class of all
Bernoulli measures there is no probability measure that merges strongly with each of
them.
The definition of Mhas only one parameter: the choice of the universal Turing
machine. The effect of this choice on the function Kcan be uniformly bounded by
a constant by the invariance theorem (Li and Vitányi, 2008, Thm. 3.1.1). Hence the
choice of the UTM changes the prediction regret bound from Corollary 3.56 only by
a constant. This constant can be large, preventing any finite-time guarantees that are
independent of the UTM. However, asymptotically Solomonoff induction succeeds even
for terrible choices of the UTM.
The Solomonoff normalization MnormofMis defined according to Definition 2.16.
WhileMnormdominatesMaccording to Lemma 2.17 and thus every lower semicom-
putable semimeasure, in some respects, Mnormbehaves a little differently from M.
Another way to complete the semimeasure Minto a measure is given in the following
example.
Example 3.59 (The Measure Mixture; Gács, 1983, p. 74) .Themeasure mixture Mis
defined as
M(x) := lim
n!1X
y2XnM(xy): (3.6)
It is the same as Mexcept that the contributions by programs that do not produce
infinite strings are removed: for any such program p, letkdenote the length of the
finite string generated by p. Then forjxyj> k, the program pdoes not contribute to
M(xy), hence it is excluded from M(x).
Similarly to M, the measure mixture Mis not a (probability) measure since M()<
1; but in this case normalization (2.2) is just multiplication with the constant 1=M(),
leading to the normalized measure mixture Mnorm. 3
§3.6Learning with Algorithmic Information Theory 43
Eventhough Mmergesstronglywithanycomputablemeasure PwithP-probability
1, LattimoreandHutter(2013,2015)showthatgenerallyitdoesnotholdforallMartin-
Löf random sequences (which also form a set of P-probability 1). Hutter and Muchnik
(2007, Thm. 6) construct non-universal lower semicomputable semimeasures that have
this convergence property for all P-Martin-Löf random sequences. For infinite nonran-
dom sequences whose bits are selectively predicted by some total recursive function,
Lattimore et al. (2011, Thm. 10) show that the normalized Solomonoff measure Mnorm
converges to 1on the selected bits. This does not hold for the unnormalized measure
M(Lattimore et al., 2011, Thm. 12).
3.6.2 The Speed Prior
Solomonoff’s prior Mis incomputable (Theorem 6.3); a computable alternative is the
speed prior from Example 3.11. In this section we state merging and prediction results
forSKt, aspeedpriorintroducedbyFilanetal.(2016)formallydefinedinExample3.11.
It is slightly different from the speed prior defined by Schmidhuber (2002), but for the
latter no compatibility properties are known for nondeterministic measures.
Definition 3.60 (Estimable in Polynomial Time) .A function f:X!Risestimable
in polynomial time iff there is a function g:X!Rcomputable in polynomial time
such thatf=g.
For a measure Pestimable in polynomial time the speed prior SKtdominatesP
with coefficients polynomial in jxj logP(x)(Filan et al., 2016, Eq. 12). Thus SKt
weakly dominates Pand we get the following results.
Corollary 3.61 (Almost Weak Merging for SKt).SKtalmost weakly merges with every
measure estimable in polynomial time.
Proof.From Theorem 3.38 and Filan et al. (2016, Eq. 12) since logPdoes not grow
superexponentially P-almost surely.
Corollary 3.62 (Expected Prediction Regret for SKt; Filan et al., 2016, Thm. 9) .For
all measures Pestimable in polynomial time,
EP
ESKtn EP
n
2O
logn+q
EPEP1logn
:
Proof.From Corollary 3.44 and Filan et al. (2016, Eq. 14).
3.6.3 Universal Compression
Solomonoff’sdistributioncanbeapproximatedusingastandardcompressionalgorithm,
motivated by the similarity M(x)2 Km(x), whereKmdenotes monotone Kolmogorov
complexity. The function Kmis auniversal compressor , compressing at least as well
as any other recursively enumerable program.
Gács (1983) shows that the similarity M2 Kmis not an equality. However, the
difference between logMandKmis very small: the best known lower bound is due
44 Learning
to Day (2011) who shows that Km(x)> logM(x) +O(log logjxj)for infinitely many
x2X.
Nevertheless, 2 Kmdominates every computable measure (Li and Vitányi, 2008,
Thm. 4.5.4 and Lem. 4.5.6ii(d); originally proved by Levin, 1973). Hence all the strong
results that hold for Solomonoff induction (prediction regret and strong merging) also
hold for compression: we apply Theorem 3.25 and Corollary 3.44 to get the following
results. See Hutter (2006a) for further discussion on using the universal compressor
Kmfor learning.
Corollary 3.63 (StrongMergingforUniversalCompression) .The distribution 2 Km(x)
merges strongly with every computable measure.
Corollary 3.64 (Expected Prediction Regret for Universal Compression) .ForQ(x) :=
2 Km(x)and for all computable measures Pthere is a constant cPsuch that
EPh
EQ
t EP
ti
cP+q
cPEPEP
t:
This provides a theoretical basis for viewing compression as a general purpose learn-
ingalgorithm. Inthisspirit, the Hutter prize isawardedforthecompressionofa100MB
excerpt from the English Wikipedia (Hutter, 2006c).
Practical compression algorithms (such as the algorithm by Ziv and Lempel (1977)
used in gzip) are not universal. Hence they do not dominate every computable distri-
bution. As with the speed prior, what matters is the rate at which Yt=Q(x1:t)=P(x1:t)
goes to 0, i.e., does the compressor weakly dominate the true distribution in the sense
of Definition 3.8?
Veness et al. (2015) successfully apply the Lempel-Ziv compression algorithm as a
learning algorithm for reinforcement learning; however, some preprocessing of the data
is required. More remotely, Vitányi et al. (2009) use standard compression algorithms
to classify mammal genomes, languages, and classical music.
3.7 Summary
Ultimately, whether learning succeeds depends on the rate at which the nonnegative
P-martingale Q=Pgoes to 0(when drawing from P). IfQ=Pdoes not converge to zero,
thenQmerges strongly with Pand thus arrives at correct beliefs about any hypothesis,
including tail events. If Q=Pconverges to zero subexponentially, then Qmerges almost
weakly with Pand thus asymptotically has incorrect beliefs about the immediate future
only a vanishing fraction of the time.
Corollary 3.44 bounds the expected prediction regret by the KL-divergence between
PandQplus ap
EPEP
tterm. The KL-divergence is in turn bounded by the rate at
whichQ=Pgoes to zero. It is constant if QdominatesPand bounded by lnfifQ
dominatesPwith coefficients f. IfQweakly dominates P, then the KL-divergence is
sublinear. We also derived bounds on the prediction regret for absolute continuity (Sec-
tion 3.5.2). Remarkably, the bounds are only log logtworse than the bound we get from
dominance. Moreover, they hold almost surely instead of in expectation.
§3.7Summary 45
name symbol defined in property
Bayesian mixture Example 3.4 dominates every P2M
Solomonoff prior M Example 3.5 dominates every lower semi-
computable semimeasure
universal compression 2 KmEquation 2.1 dominates every computable
measure
speed prior SKtExample 3.11 weak dominates every mea-
sure estimable in polytime
Laplace rule LExample 3.12 merges weakly with every
Bernoulli process
MDL MDLxExample 3.14 merges strongly with every
P2M
Table 3.1: Examples of learning distributions discussed in this chapter and their
properties.
compatibility
ofPandQmartingale merging prediction regret
QP Y tc>0 strong merging 2 lnc+ 2p
2EPEP
tlnc
QP Y t6!0 strong mergingO
log logt+p
EPEP
tlog logt
Yt+1=Yt!1 weak merging o(t)
QP=f Y t1=f(t) 2 ln f(t) + 2p
2EPEP
tlnf(t)
QWP log(Yt+1=Yt)!1
in Cesàro averagealmost weak
mergingo(t)
QLP Y t>0 O(t)
Table 3.2: Summary on properties of learning. The first column lists different notions
of compatibility introduced in Section 3.2; the second column lists properties of the
P-martingale Yt:=Q(x1:t)=P(x1:t)from Section 3.3; the third column lists different
notions of merging discussed in Section 3.4; the fourth column states the bounds on
the prediction regret (in expectation and almost surely respectively) from Section 3.5.
Figure 3.1 illustrates the origin of the results.
46 Learning
strong merging
weak merging
almost
weak
mergingQP
QP
QP=f
QWP
QLPYtc>0
Yt6!0
Yt+1=Yt!1
log(Yt+1=Yt)!0
in Cesàro average
Yt>0f2o(t)Hutter (2009a, Lem. 3i)Blackwell and Dubins (1962)
Kalai and Lehrer (1994, Thm. 2), L
Lehrer and Smorodinsky
(1996, Thm. 4)
Lehrer and Smorodinsky
(1996, Cor. 7),
Land
lim inf
t!1Yt+1=Yt>0Kalai and Lehrer (1994, Prop. 5)
Figure 3.1: Properties of learning and their relationship. We use Yt:=
Q(x1:t)=P(x1:t). An arrow between two statements means that one statement implies
the other. The transitive property of implications is not made explicit. The source of
the result is indicated on the arrow, sometimes together with a side condition. If no
source is given, then the relationship is easy and a proof can be found in this chapter.
§3.7Summary 47
Next, we showed that thep
EPEP
tterm is generally unimprovable (Example 3.47).
However, it comes only from predicting measures that assign probabilities close to 1=2.
If we can bound Paway from 1=2, then thep
EPEP
tterm disappears (Theorem 3.48
and Theorem 3.51).
Table 3.1 lists our learning distributions. The Bayesian mixture is the strongest
since it dominates every measure from the given class M(Example 3.4). The minimum
description length model MDLxdoes not have this property, yet it still merges strongly
with every measure from the class (Example 3.14 and Theorem 3.28). The Laplace rule
is only useful for learning i.i.d. measures; it merges weakly with every Bernoulli pro-
cess (Example 3.12 and Example 3.32). We also discussed some learning distributions
from algorithmic information theory. Solomonoff’s prior is a Bayesian mixture over
all lower semicomputable semimeasures (Example 3.5 and Wood et al., 2011). Like
the universal compressor it dominates and hence merges strongly with all computable
measures. The speed prior dominates all probability measures estimable in polynomial
time with polynomial coefficients (Example 3.11), and thus merges weakly with each of
them.
Table 3.2 summarizes the results from this chapter and Figure 3.1 illustrates their
logical relationship and their origin.
We conclude this chapter with a paradox from the philosophy of science.
Remark 3.65 (The Paradox of Confirmation) .Recall the black raven problem intro-
duced in Example 3.1; the hypothesis ‘all ravens are black’ is denoted with H. The
paradox of confirmation , also known as Hempel’s paradox (Hempel, 1945), relies on the
following three principles.
•Nicod’s criterion (Nicod, 1961, p. 67): observing an Fthat is aGincreases our
belief in the hypothesis that all Fs areGs.
•The equivalence condition : logically equivalent hypotheses are confirmed or dis-
confirmed by the same evidence.
•The paradoxical conclusion : a green apple confirms H.
Theargumentgoesasfollows. Thehypothesis Hislogicallyequivalenttothehypothesis
H0that all non-black objects are non-ravens. According to Nicod’s criterion, any non-
black non-raven, such as a green apple, confirms H0. But then the equivalence condition
entails the paradoxical conclusion.
The paradox of confirmation has been discussed extensively in the literature on the
philosophy of science (Hempel, 1945; Good, 1960; Mackie, 1963; Good, 1967; Hempel,
1967; Maher, 1999; Vranas, 2004); see Swinburne (1971) for a survey. Support for
Nicod’s criterion is not uncommon (Mackie, 1963; Hempel, 1967; Maher, 1999) and no
consensus is in sight.
A Bayesian reasoner might be tempted to argue that a green apple doesconfirm the
hypothesisH, but only to a small degree, since there are vastly more non-black objects
than ravens (Good, 1960). This leads to the acceptance of the paradoxical conclusion,
48 Learning
and this solution to the confirmation paradox is known as the standard Bayesian solu-
tion. Vranas (2004) shows that this solution is equivalent to the assertion that blackness
is equally probable regardless of whether Hholds:P(blackjH)P(black ).
The following is a very concise example against the standard Bayesian solution by
Good (1967): There are two possible worlds, the first has 100 black ravens and a million
other birds, while the second has 1000 black ravens, one white raven, and a million other
birds. Now we draw a bird uniformly at random, and it turns out to be a black raven.
Contrary to what Nicod’s criterion claims, this is strong evidence that we are in fact in
the second world, and in this world non-black ravens exist.
For another, more intuitive example: Suppose you do not know anything about
ravens and you have a friend who collects atypical objects. If you see a black raven
in her collection, surely this would not increase your belief in the hypothesis that all
ravens are black.
In Leike and Hutter (2015d) we investigate the paradox of confirmation in the con-
text of Solomonoff induction. We show that the paradoxical conclusion is avoided
because Solomonoff induction violates Nicod’s criterion: There are time steps when
(counterfactually) observing a black raven disconfirms the hypothesis that all ravens
are black. When predicting a deterministic computable sequence Nicod’s criterion is
even violated infinitely often. However, if we normalize Solomonoff’s prior and ob-
serve a deterministic computable infinite string, Nicod’s criterion is violated at most
finitely many times. These results are independent of the choice of the universal Turing
machine.
We must conclude that violating Nicod’s criterion is not a fault of Solomonoff in-
duction. Instead, we should accept that for Bayesian reasoning Nicod’s criterion, in its
generality, is false! Quoting the great Bayesian master Jaynes (2003, p. 144):
In the literature there are perhaps 100 ‘paradoxes’ and controversies which
are like this, in that they arise from faulty intuition rather than faulty
mathematics. Someone asserts a general principle that seems to him intu-
itively right. Then, when probability analysis reveals the error, instead of
taking this opportunity to educate his intuition, he reacts by rejecting the
probability analysis. 3
Chapter 4
Acting
I ought never to act except in such a way that I could also will that my maxim should
become a universal prior. — Immanuel Kant
Recall our decomposition of intelligence into learning andactingfrom Equation 1.1.
The previous chapter made the notion of learning precise and provided several examples
of learning distributions for the non-i.i.d. setting (see Table 3.1). Learning is passive:
there is no interaction with the data-generating process. In this chapter we transition
into the active setting: we consider an agentacting in an unknown environment in
order to achieve a goal. In our case, this goal is maximizing reward; this is known as
reinforcement learning . Where this reward signal originates does not concern us here.
In this thesis we consider is the general reinforcement learning problem in which we
do not make several of the typical simplifying assumptions (see Table 1.1). Environ-
mentsareonlypartiallyobservable, haveinfinitelymanystates, andmightcontaintraps
from which the agent cannot escape. The context for making decision is the agent’s
entire history; its behavior is given by a policythat specifies how the agent behaves in
any possible situation.
A central quantity in reinforcement learning is the value function . The value func-
tionquantifiestheexpectedfuturediscountedreward. Sincetheagentseekstomaximize
reward, it aims to adopt a policythat has high value. Since the agent’s environment
is unknown to the agent, learning the value function is part of the challenge; otherwise
we would call this planning.
If our agent is capable of learning in the sense of Chapter 3, then it learns the
value of its own policy ( on-policy value convergence ). However, generally the agent
does not learn to predict the value of counterfactual actions, actions that it does not
take. Learning off-policy is hard because the agent receives no evidence about what
would have happened on counterfactual actions. Nevertheless, off-policy learning is
highly desirable because we want the agent to be confident that the policy it is currently
following is in fact the best one; we want it to accurately predict that the counterfactual
actions have less value.
This brings us back to the central theme of reinforcement learning: the tradeoff
between exploration andexploitation . Asymptotically the agent needs to focus on
exploitation, i.e., take the actions that it thinks yield the highest expected rewards. If
theagentexploresenough,thenallactionsareon-policybecausetheyareallactionsthat
49
50 Acting
agent environment at
et= (ot;rt)
Figure 4.1: The dualistic agent model. At every time step t, the agent outputs an
actionatand subsequently receives a percept etconsisting of an observation otand a
real-valued reward rt. The agent’s policy is a function that maps a history æ<tto the
next action at, and the environment is a function that maps a history and an action
to the next percept et.
the agent sometimes takes. Then on-policy learning ensures that the agent understands
the consequences of every action and can confidently choose the best action. Effective
exploration is performed by knowledge-seeking agents ; these agents ignore the rewards
and just focus on exploration.
This chapter introduces the central concepts of general reinforcement learning. It
is mostly based on Hutter (2005) and Lattimore (2013). Section 4.1 specifies the gen-
eral reinforcement learning problem, discusses discounting (Section 4.1.1), our implicit
assumptions (Section 4.1.2), and typical environment classes (Section 4.1.3). Sec-
tion 4.2 discusses the value function and its properties. In Section 4.3 we introduce the
agents: AIXI (Section 4.3.1), knowledge-seeking agents (Section 4.3.2), BayesExp (Sec-
tion 4.3.3), and Thompson sampling (Section 4.3.4).
4.1 The General Reinforcement Learning Problem
In reinforcement learning, an agent interacts with an environment: at time step t2N
the agent takes an actionat2Aand subsequently receives a perceptet= (ot;rt)2E
consisting of an observation ot2Oand arewardrt2R. This cycle then repeats for
time stept+ 1(see Figure 4.1).
Ahistoryis an element of (AE )and lists the actions the agent took and the
percepts it received. We use æ2AE to denote one interaction cycle, and æ<t=
æ1æ2:::æt 1to denote a history of length t 1. For our agent, the history is a
sufficient statistic about the past and in general reinforcement learning there is no
simpler sufficient statistic.
For example, consider the agent to be a robot interacting with the real world.
Its actions are moving the motors in its limbs and wheels and sending data packets
over a network connection. Its observations are data from cameras and various other
sensors. Therewardcouldbeprovidedeitherbyahumansupervisororthroughareward
module that checks whether a predefined goal has been reached. The history is the
collection of all the data it received and emitted in the past. The division of the robot’s
interaction with the environment into discrete time steps might seem a bit unnatural
at first since the real world evolves according to a continuous process. However, note
that the electronic components used in robots operate at discrete frequencies anyway.
§4.1The General Reinforcement Learning Problem 51
In order to specify how the agent behaves in any possible situation, we define a
policy: apolicyisafunction : (AE )!Amappingahistory æ<ttoadistribution
over actions (jæ<t)taken after seeing this history. Usually we do not distinguish
between agent and policy. An environment is a function : (AE )A! E
mapping a history æ<tand an action atto a distribution (jæ<tat)over the percepts
received after the history æ<tand actionat. We useto denote the true environment.
Equivalently, Hutter (2005) defines environments as chronological contextual semi-
measures .1Acontextual semimeasure takes a sequence of actions a1:1as input and
returns a semimeasure (ka1:1)overE]. A contextual semimeasure ischronological
iffperceptsattime tdonotdependonfutureactions, i.e., (e1:tka1:1) =(e1:tka0
1:1)
whenevera1:t=a0
1:t. For chronological contextual semimeasures we write (e1:tka1:t)
instead of(e1:tka1:1). The two definition can be translated using the identities
(e1:tka1:t) =tY
k=1(ekjæ<kak)and(etjæ<tat) =(e1:tka1:t)
(e<tka<t):(4.1)
If the policy always assigns probability 1to one of the actions, then is called
deterministic . Likewise, if the environment always assigns probability 1to one of the
percepts, then is called deterministic . For deterministic policies and environments
we also use the notation at=(æ<t)andet=(æ<tat). A deterministic policy is
consistent with history æ<tiffak=(æ<k)for allk < t. Likewise, a deterministic
environment isconsistent with history æ<tiffek=(æ<kak)for allk<t.
Definition 4.1 (History Distribution) .An environment together with a policy
induces a history distribution
(æ<t) :=tY
k=1(akjæ<k)(ekjæ<kak):
We denote an expectation with respect to the history distribution withE
.
Thehistorydistributionisa(semi)measureon (AE )1. Inthelanguageofmeasure
theory, our -algebra is the -algebraF1generated by the cylinder sets introduced in
Section 2.1. The filtration (Ft)t2Nformalizes that at time step twe have seen exactly
the history æ<t(we use the -algebraFt 1). To simplify notation and help intuition,
wesimplyconditionexpectationsandprobabilitymeasureswiththehistory æ<tinstead
ofFt 1and sweep most of the measure-theoretic details under the rug.
With these preliminaries out of the way, we can now specify the general reinforce-
ment learning problem .
Problem 4.2 (General Reinforcement Learning Problem) .Given an arbitrary class of
environmentsM, choose a policy that maximizes -expected reward when interacting
with any environment 2M.
1Hutter (2005) calls them chronological conditional semimeasures . This is confusing because contex-
tual semimeasures do notspecify conditional probabilities; the environment is nota joint probability
distribution over actions and percepts.
52 Acting
Problem 4.2 is kept vague on purpose: it does not say how we should balance
between achieving more rewards in some environments while achieving less in others.
In other words, we leave open what an optimalsolution to the general reinforcement
learning problem is. This turns out to be a notoriously difficult question that we discuss
in Chapter 5.
As promised in the title of this thesis, we take the nonparametric approach. For the
rest of this thesis, fix Mto be any countable set of environments. While the true envi-
ronment is unknown, we assume it belongs to the class M(therealizable case ). As long
as the classMis sufficiently large (such as the class of all computable environments),
this assumption is weak. Some typical choices are discussed in Section 4.1.3.
Our agent-environment setup shown in Figure 4.1 is known as the dualistic model :
the agent is distinct from the environment and influences it only through its actions. In
turn, the environment influences the agent only through the percepts. The dualism as-
sumption is accurate for an algorithm that is playing chess, Go, or other (video) games,
which explains why it is ubiquitous in AI research. But often it is not true: real-world
agents are embedded in (and computed by) the environment, and then a physicalistic
model(also called materialistic model ornaturalistic model) is more appropriate. Deci-
sion making in the physicalistic model is still underdeveloped; see Everitt et al. (2015)
and Orseau and Ring (2012a). In this thesis we restrict ourselves to the dualistic model.
4.1.1 Discounting
The goal in reinforcement learning is to maximize rewards. However, the infinite reward
sumP1
t=1rtmay diverge. To get around this technical problem, we let our agent prior-
itize the present over the future. This is done with a discount function that quantifies
how much the agent prefers rewards now over rewards later.
Definition 4.3 (Discount Function) .Adiscount function is a function
:N!Rwith
t:=
(t)0andP1
t=1
t<1. Thediscount normalization factor is t:=P1
k=t
k.
There is no requirement that t>0. In fact, we use
for both, discounted infinite
horizon ( t>0for allt), and finite horizon m( m 1>0and m= 0) where the agent
does not care what happens after time step m.
Note that the way in which we employ discounting is time consistent : the agent
does not change its mind about how much it values the reward at time step kover
time: reward rkis always discounted with
kregardless of the current time step. For a
discussion of general discounting we refer the reader to Lattimore and Hutter (2014).
Definition 4.4 (Effective Horizon) .The"-effective horizon Ht(")is a horizon that is
long enough to encompass all but an "of the discount function’s mass:
Ht(") := minfkj t+k= t"g
The effective horizon is bounded iff for all" > 0there is a constant c"such that
Ht(")c"for allt2N.
§4.1The General Reinforcement Learning Problem 53
parameter
t t Ht(")
Finite horizon m2N 1m(t)=m (m t+ 1)=md(m t+ 1)(1 ")e
Geometric
2(0;1)
t
t=(1
)dlog
"e
Power >1t t +1=( 1)("1=(1 ) 1)t
Subgeometric - e p
t=p
t2e p
t p
tlog"+ (log")2
Table 4.1: Several discount functions and their effective horizons. See also Hutter
(2005, Tab. 5.41) and Lattimore (2013, Tab. 2.1).
Example 4.5 (Geometric Discounting) .The most common discount function is ge-
ometric discounting with
t:=
tfor some constant
2[0;1). We get that t=P1
k=t
k=
t=(1
)and the"-effective horizon is Ht(") =dlog
"e. Hence the effec-
tive horizon is bounded. 3
More examples for discount functions are given in Table 4.1. From now on, we fix
a discount function
.
4.1.2 Implicit Assumptions
Throughout this thesis, we make the following assumptions implicitly.
Assumption 4.6. (a) The discount function
is computable.
(b) Rewards are bounded between 0and1.
(c) The set of actions Aand the set of percepts Eare both finite.
Let’smotivatetheseassumptionsinturn. Theirpurposeistoensurethatdiscounted
reward sums are finite and optimal policies exist.
Assumption4.6aisatechnicalassumptionthatensuresthatdiscountedrewardsums
are computable. This is important for Chapter 6 and Chapter 7 where we analyse the
computability of optimal policies. Note that all discount functions given in Table 4.1
are computable.
Assumption 4.6b could be relaxed to require only that rewards are bounded. We
can rescale rewards rt7!crt+dfor anyc2R+andd2Rwithout changing optimal
policies if the environment is a probability measure. (For our computability-related
results in Chapter 6, we must assume that rewards are nonnegative.) In this sense As-
sumption 4.6b is not very restrictive. However, this normalization of rewards into the
[0;1]-interval has the convenient consequence that the normalized discounted reward
sumP1
k=t
krk= kis bounded between 0and1. If rewards are unbounded, then the
discounted reward sum might diverge. Moreover, with unbounded rewards there all
kinds of pathological problems where defining optimal actions is no longer straightfor-
ward; see Arntzenius et al. (2004) for a discussion.
Assumption 4.6c is a technical requirement for the existence of optimal policies since
it implies that there are only finitely many deterministic policies that differ in the first t
54 Acting
time steps. Note that finite action and percept spaces are very natural since it ensures
that our agent only receives and emits a finite amount of information in every time
step. This is in line with the problems a strong AI is facing: the agent has to remember
important information and act sequentially.
Assumption 4.6b, Assumption 4.6c, and the fact that the discount function is
summable guarantee that a deterministic optimal policy exists for every environment
according to Lattimore and Hutter (2014, Thm. 10). It would be interesting to relax
theseassumptionswhilepreservingtheexistenceofoptimalpoliciesoratleast "-optimal
policies (e.g. use compact action and percept spaces).
4.1.3 Typical Environment Classes
The simplest reinforcement learning problems are multi-armed bandits.
Definition 4.7 (Multi-Armed Bandit) .An environment is amulti-armed bandit iff
O=f?gand(etjæ<tat) =(etjat)for all histories æ1:t2(AE ).
In a multi-armed bandit problem there are no observations and the next reward
only depends on the previous action. Intuitively, we are deciding between #Adifferent
slot machines (so-called one-armed bandits), pull the lever and obtain a reward. The
reward is stochastic, but it is drawn from a distribution that is time-invariant and fixed
for each arm.
A multi-armed bandit is also called banditfor short. Although bandits are the sim-
plest reinforcement learning problem, they already exhibit the exploration-exploitation-
tradeoff that makes reinforcement learning difficult: do you pull an arm that has the
best empirical mean or do you pull an arm that has the highest uncertainty? In bandits
it is very easy to come up with policies that perform (close to) optimal asymptoti-
cally (e.g.,"t-greedy with "t= 1=t). But coming up with algorithms that perform well
in practice is difficult, and research focuses on the multiplicative and additive constants
on the asymptotic guarantees. Bandits exist in many flavors; see Bubeck and Bianchi
(2012) for a survey.
Definition 4.8 (Markov Decision Process) .An environment is aMarkov decision
process(MDP) iff(etjæ<tat) =(etjot 1at)for all histories æ1:t2(AE ).
Intuitively, in MDPs, the previous observation ot 1provides a sufficient statistic for
the history: given ot 1and the current action at, the next percept etis independent of
the rest of the history. In other words, everything that the agent needs to know to make
optimal decisions is readily available in the previous percept. This is why observations
are called statesin MDPs. Note that bandits are MDPs with a single state.
Much of today’s literature on reinforcement learning focuses on MDPs (Sutton and
Barto, 1998). They provide a particularly good framework to study reinforcement
learning because they are simple enough to be tractable for today’s algorithms, yet
general enough to encompass many interesting problems. For example, most of the
Atari games (see Figure 1.1 for an overview) are (deterministic) MDPs when combining
§4.1The General Reinforcement Learning Problem 55
the previous four frames into one percept. While they have a huge state space2they
can still be learned using Q-learning with function approximation (Mnih et al., 2015).
The MDP framework is restrictive because it requires the agent to be more powerful
than the environment. Since the agent learns, its actions are not independent of the
rest of the history given the last action and percept. In other words, learning agents are
not Markov. The following definition lifts this restriction and allows the environment
to bepartially observable .
Definition 4.9 (Partially Observable Markov Decision Process) .An environment is
apartially observable Markov decision process (POMDP) iff there is a set of statesS,
aninitial state s02S, astate transition function 0:SA! S, and apercept
distribution 00:S! Esuch that
(e1:tka1:t) =tY
k=100(ekjsk)0(skjsk 1;ak):
Usually the setSis assumed to be finite; with infinite-state POMDPs we can model
any environment by setting the set of states to be the set of histories, S:= (AE ).
A common assumption for MDPs and POMDPs is that they do not contain traps.
Formally, a (PO)MDP is ergodiciff for any policy and any two states s1;s22S,
the expected number of time steps to reach s2froms1is-almost surely finite. A
(PO)MDP is weakly communicating iff for any two states s1;s22Sthere is a policy
such that the expected number of time steps to reach s2froms1is-almost surely
finite. Note that any ergodic (PO)MDP is also weakly communicating, but not vice
versa.
Ingeneral, ourenvironmentsarestochastic. Stochasticitycanoriginatefromnoisein
the environment, noise in the sensors, or modeling errors. Sometimes we also consider
classes of deterministic environments. These are usually easier to deal with because
they do not require as much mathematical machinery. For example, in a deterministic
environment the next percept is certain; if a different percept is received this environ-
ment is immediately falsified and can be discarded. In a stochastic environment, an
unlikely percept reduces our posterior belief in this environment but does not rule it
out completely.
In Chapter 6 and Chapter 7 we make the assumption that the environment is com-
putable. This encompasses all finite-state POMDPs and most if not all AI problems
can be formulated in this setting. Moreover, the current theories of quantum mechanics
and general relativity are computable and there is no evidence that suggests that our
physical universe is incomputable. For any physical system of finite volume and finite
(average) energy, the amount of information it can contain is finite (Bekenstein, 1981),
and so is the number of state transitions per unit of time (Margolus and Levitin, 1998).
This gives us reason to believe that even the environment that we humans currently
face (and will ever face) falls under these assumptions.
2The size of the state space is at most 256128since the Atari 2600 has only 128 bytes of memory.
However, the vast majority of these states are not reachable.
56 Acting
Formally we define the set MCCS
LSCas the set of environments that are lower semicom-
putable chronological contextual semimeasures and MCCM
compas the set of environments
that are computable chronological contextual measures. Note that for chronological
contextual semimeasures it makes a difference whether (ka1:1)is lower semicom-
putable or the conditionals (jæ<tat)are. The latter implies the former, but not vice
versa.
4.2 The Value Function
Thevalueof a policy in an environment is the future expected discounted reward when
following a given policy in a given environment conditional on the past. Since this
quantity captures exactly what our agent aims to maximize, we prefer policies whose
value is high.
Definition 4.10 (Value Function) .Thevalueof a policyin an environment given
historyæ<tandhorizonmwithtm1is defined as
V;m
(æ<t) :=1
tE
"m 1X
k=t
krkæ<t#
if t>0andV;m
(æ<t) := 0if t= 0. Theoptimal value is defined as V;m
(æ<t) :=
supV;m
(æ<t).
Sometimes we omit the history argument æ<tfor notational convenience if it is clear
from context. Moreover, when we omit m, we implicitly use an infinite horizon m=1,
i.e.,V
:=V;1
andV
:=V;1
. The value of a policy in an environment after
the empty history, V
()is also called the t0-value.
Remark 4.11 (Values are Bounded Between 0 and 1) .From Assumption 4.6b we
get that for all histories æ<tall policies and all environments , the value function
V
(æ<t)2[0;1]. 3
Since environment and policy are stochastic, the history æ<tis random. With abuse
of notation we treat æ<tsometimes as a concrete outcome and sometimes as a random
variable. We also view the value of a policy in an environment as a sequence
of random variables (Xt)t2NwithXt:=V
(æ1:t)where the history æ1:tis generated
stochastically by the agent’s actual policy interacting with the true environment .
This view is helpful for some of the convergence results (e.g., Theorem 4.19 and Defi-
nition 5.18) in which we talk about the type of convergence of this sequence of random
variables.
The value function defined in Definition 4.10 is also called the recursive value func-
tion, in contrast to iterative value function that we discuss in Section 6.4. The name of
the recursive value function originates from the following recursive identity (analogously
§4.2The Value Function 57
to Hutter, 2005, Eq. 4.12), also called the Bellman equation :
V
(æ<t) =X
at2A(atjæ<t)V
(æ<tat)
V
(æ<tat) =1
tX
et2E(etjæ<tat)
trt+ t+1V
(æ1:t)
An explicit expression for the optimal value in environment is
V;m
(æ<t) =1
tmaxX
æt:m 1m 1X
k=t
krkkY
i=t(eijæ<iai); (4.2)
wherePmaxdenotes the max-sum-operator:
maxX
æt:m 1:= max
at2AX
et2E:::max
am 12AX
em 12E
For an explicit expression of V;1
(æ<t)we can simply take the limit m!1.
4.2.1 Optimal Policies
An optimal policy is a policy that achieves the highest value:
Definition4.12 (OptimalPolicy; Hutter,2005, Def.5.19&5.30) .Apolicyisoptimal
in environment (-optimal) iffforallhistories attainstheoptimalvalue: V
(æ<t) =
V
(æ<t)for allæ<t2(AE ). The action atis anoptimal action iff
(atjæ<t) = 1
for some-optimal policy
.
Following the tradition of Hutter (2005), AINUdenotes a-optimal policy for the
environment 2MCCS
LSCandAIMUdenotes an -optimal policy for the environment
2MCCM
compthat is a measure (as opposed to a semimeasure).
By definition of the optimal policy and the optimal value function, we have the
following identity for all histories æ<t:
V
(æ<t) =V
(æ<t) (4.3)
There can be more than one optimal policy; generally the choice of
from Defini-
tion 4.12 is not unique. More specifically, for a -optimal policy we have
(atjæ<t)>0 =)at2arg max
a2AV
(æ<ta): (4.4)
If there are multiple actions ;2 Athat attain the optimal value, V
(æ<) =
V
(æ<t), then there is an argmax tie . Which action we settle on in case of a tie (how
we break the tie) is irrelevant and can be arbitrary. Since we allow stochastic policies,
we can also randomize between and.
The following definition allows policies to be slightly suboptimal.
58 Acting
Definition 4.13 ("-Optimal Policy) .A policyis"-optimal in environment iff
V
(æ<t) V
(æ<t)<"for all histories æ<t2(AE ).
A policythat achieves optimal t0-value,V
() =V
(), takes-optimal actions
on any history reachable by in. However, this is not true for "-optimal policies: a
policy that is "-optimal at t= 0is not necessarily "-optimal in later time steps.
4.2.2 Properties of the Value Function
The following two lemmas are stated by Hutter (2005, Thm. 31) without proof and for
the iterative value function.
Lemma4.14 (Linearityof V
in).If=z11+z22for some real numbers z1;z20,
then for all policies and all histories æ<t
V;m
(æ<t) =z11(e<tka<t)
(e<tka<t)V;m
1(æ<t) +z22(e<tka<t)
(e<tka<t)V;m
2(æ<t):
Proof.Since=z1
1+z2
2, we have for the conditional measure
(Ajæ<t) =(A\æ<t)
(æ<t)=z1
1(A\æ<t) +z2
2(A\æ<t)
(æ<t)
=z1
1(æ<t)
(æ<t)
1(Ajæ<t) +z2
2(æ<t)
(æ<t)
2(Ajæ<t):
The claim now follows from the linearity of expectation in the probability measure.
Lemma 4.15 (Convexity of V
in).If=z11+z22for some real numbers z1;z2
0, then for all histories æ<t
V
(æ<t)z11(e<tka<t)
(e<tka<t)V;m
1(æ<t) +z22(e<tka<t)
(e<tka<t)V;m
2(æ<t):
Proof.Let
be an optimal policy for environment . From Lemma 4.14 we get
V
(æ<t) =V
(æ<t) =z11(e<tka<t)
(e<tka<t)V
1(æ<t) +z22(e<tka<t)
(e<tka<t)V
2(æ<t)
z11(e<tka<t)
(e<tka<t)V
1(æ<t) +z22(e<tka<t)
(e<tka<t)V
2(æ<t):
The following lemma bounds the error when truncating the value function. This
implies that planning for an "-effective horizon ( m=t+Ht(")), we get all but an "of
the value:jV
(æ<t) V;m
(æ<t)j<".
Lemma 4.16 (Truncated Values) .For every environment , every policy , and every
historyæ<tV;m
(æ<t) V
(æ<t) m
t:
§4.2The Value Function 59
Proof.
V
(æ<t) =1
tE
"1X
k=t
krkæ<t#
=V;m
(æ<t) +1
tE
"1X
k=m
krkæ<t#
The result now follows from Assumption 4.6b and
0E
"1X
k=m
krkæ<t#
m:
This lemma bounds the (truncated) value function by the total variation distance.
Lemma 4.17 (Bounds on Value Difference) .For any policies 1,2, any environments
1and2, and any horizon tm1
jV1;m
1(æ<t) V2;m
2(æ<t)j Dm 1(1
1;2
2jæ<t)
Proof.According to Definition 4.10, the value function is the expectation of the ran-
dom variablePm 1
k=t
krk= tthat is bounded between 0and1. Therefore we can use
Lemma 2.12 with P:=1
1(jæ<t)andR:=2
2(jæ<t)on the space (AE )m 1to
conclude thatjV1;m
1(æ<t) V2;m
2(æ<t)jis bounded by Dm 1(1
1;2
2jæ<t).
Lemma 4.18 (Discounted Values; Lattimore, 2013, Lem. 2.5) .Letæ<tbe some history
and let1and2be two policies that coincide from time step tto time step m:1(aj
æ1:k) =2(ajæ1:k)for alla2A, all histories æ<tæt:kconsistent with 1, andtk
m. Then for all environments
V1(æ<t) V2(æ<t) m
t:
Proof.Since1and2coincidefortimesteps tthroughm 1,Dm 1(1;2jæ<t) = 0
for all environments . Thus the result follows from Lemma 4.16 and Lemma 4.17:
V1(æ<t) V2(æ<t)V1;m
(æ<t) V2;m
(æ<t)+ m
t
Dm 1(1;2jæ<t) + m
t
= m
t
4.2.3 On-Policy Value Convergence
This section states some general results on learning the value function. On-policy value
convergence refers to the fact that if we use a learning distribution to learn to environ-
ment, andmerges with in the sense discussed Section 3.4, then V
converges
toV
, i.e., using we learn to estimate values correctly.
A weaker variant of the following theorem was proved by Hutter (2005, Thm. 5.36).
It states convergence in mean (not almost surely), and only for the Bayesian mixture.
60 Acting
Theorem 4.19 (On-Policy Value Convergence) .Letbe any environment and be
any policy.
(a) Ifmerges strongly with , then
V
(æ<t) V
(æ<t)!0ast!1-almost surely.
(b) If the effective horizon is bounded and merges weakly with , then
V
(æ<t) V
(æ<t)!0ast!1-almost surely.
(c) If the effective horizon is bounded and merges almost weakly with , then
1
ttX
k=1
V
(æ<k) V
(æ<k)
!0ast!1in-almost surely.
Proof.(a) Apply Lemma 4.17 with m:=1.
(b) Let">0and letc"be a bound on suptHt("). From Lemma 4.16
jV
(æ<t) V
(æ<t)jjV;t+Ht(")
(æ<t) V;t+Ht(")
(æ<t)j+ 2 t+Ht(")
t
<Dt+Ht 1(")(;jæ<t) + 2"
Dt+c"(;jæ<t) + 2"
accordingtoDefinition4.4andLemma4.17. Since mergesweaklywith ,weget
that-almost surely there is a time step t02Nsuch thatDt+c"(;jæ<t)<"
for alltt0. HencejV
Q(æ<t) V
(æ<t)j<3"for alltt0.
(c) Analogously to the proof of (b).
It is important to observe that on-policy convergence does not imply that the agent
converges to the optimal policy. V
converges to V
, butV
need not be close to V
.
Indeed, there might be another policy ~that has a higher value than in the true
environment (V~
>V
). If the agent thinks ~has lower value ( V~
<V
) it might
not follow ~and hence not learn that the actual value of ~is much higher. In other
words, on-policy convergence implies that the agent learns the value of its own actions,
but not the value of counterfactual actions that it does not take.
Theorem 4.19 now enables us to tie in the results of Chapter 3. This yields a surge
of corollaries, but first we need to make the learning distributions contextual on the
actions.
Letw2Mbe a positive prior over the environment class M. We define the
corresponding Bayesian mixture analogously to Example 3.4:
(e<tka<t) :=X
2Mw()(e<tka<t) (4.5)
§4.2The Value Function 61
Note that the Bayesian mixture depends on the prior w. For the rest of this thesis,
this dependence will not be made explicit.
From Lemma 4.14 and (3.3) we immediately get the following identity:
V
(æ<t) =X
2Mw(jæ<t)V
(æ<t) (4.6)
Similarly, we get from Lemma 4.15
V
(æ<t)X
2Mw(jæ<t)V
(æ<t): (4.7)
Corollary 4.20 (On-Policy Value Convergence for Bayes) .For any environment 2
Mand any policy ,
V
(æ<t) V
(æ<t)!0ast!1-almost surely.
Proof.Since2M, we have dominance w()withw()>0and by Propo-
sition 3.16a absolute continuity . From Theorem 3.25 we get that merges
strongly with . Therefore we can apply Theorem 4.19a.
Analogously, we define MDLæ<t:= arg min2Mf log(e<tka<t) +K()g.
Corollary4.21 (On-PolicyValueConvergenceforMDL) .For any environment 2M
and any policy ,
V
MDLæ<t(æ<t) V
(æ<t)!0ast!1-almost surely.
Proof.By Theorem 3.28 MDLmerges strongly with for each2M, therefore we
can apply Theorem 4.19a.
By providing the action sequence contextually on a separate input tape, we can
defineKm(e<tka<t) := minfjpjje<tvU(p;a<t)ganalogously to (2.1).
Corollary4.22 (On-PolicyValueConvergenceforUniversalCompression) .Let(e<tk
a<t) := 2 Km(e<tka<t). Then for any environment 2MCCM
compand any policy ,
V
(æ<t) V
(æ<t)!0ast!1-almost surely.
Proof.Sincedominates every 2 MCCM
comp(Section 3.6.3) we can apply Proposi-
tion 3.16a, Theorem 3.25, and Theorem 4.19a as in the proof of Corollary 4.20.
Similarly to Kmthere is a speed prior for environments (Filan, 2015, Ch. 6):
SKt(e<tka<t) :=X
p:e<tvU(p;a<t)2 jpj
t(U;p;a<t;e<t)
wheret(U;p;a<t;e<t)denotes the number of time steps U(p;a<t)takes to produce e<t.
62 Acting
Corollary 4.23 (On-Policy Value Convergence for the Speed Prior) .If the effective
horizon is bounded, then for any environment 2MCCM
compestimable in polynomial time
and any policy ,
1
ttX
k=1
V
SKt(æ<k) V
(æ<k)
!0ast!1-almost surely.
Proof.By Corollary 3.61 the speed prior SKtmerges almost weakly with every measure
estimable in polynomial time. Therefore we can apply Theorem 4.19c.
4.3 The Agents
If we knew the true environment , we would choose the -optimal policy, the policy
that maximizes -expected discounted rewards. But generally we do not know the
true environment, and the challenging part of reinforcement learning is to learn the
environment while trying to collect rewards.
In this section we introduce a number of agents that attempt to solve the general
reinforcement learning problem (Problem 4.2). These agents are discussed throughout
the rest of this thesis.
4.3.1 Bayes
ABayes optimal policy with respect to the prior wis the policy
whereis the
Bayesian mixture defined in Section 4.2.3. There can be one or more Bayes optial
policies. From Corollary 4.20 we get on-policy value convergence for the Bayes optimal
policy.
After history æ<t, the Bayes policy
maximizes expected discounted rewards in
the posterior mixture:
(je<tka1:1) =X
2Mw(jæ<t)(je<tka1:1)
wherew(jæ<t)are the posterior weights (3.3). Maximizing expected rewards accord-
ing to the posterior is the same as maximizing expected rewards according to the prior
conditional on the history: if (æ<t) =
(æ<t), thenV
(æ<t) =V
(æ<t). Actually
visiting the history æ<tdoes not change what
planned to do before it visited æ<t.
Note that this relies on the fact that the way we use discounting is time consistent (Lat-
timore and Hutter, 2014, Def. 12).
When using the prior w()/2 K()(Example 3.5) over the class MCCS
LSC, the Bayes
optimal policy is also known as AIXI, introduced and analyzed by Hutter (2000, 2001a,
2002a, 2003, 2005, 2007a, 2012b) in his work on universal artificial intelligence . In this
case, the Bayesian mixture (4.5) can be defined equivalently according to (Wood et al.,
2011)
(e<tka<t) :=X
p:e<tvU(p;a<t)2 jpj: (4.8)
§4.3The Agents 63
Generally there is more than one -optimal policy and Solomonoff’s prior depends on
the choice of the (reference) universal Turing machine, so this definition is not unique.
Moreover, not every universal Turing machine is a good choice for AIXI, see Section 5.2
for a few bad choices. The following lemma will be used later.
Lemma 4.24 (Mixing Mixtures) .Letq;q02Qsuch thatq >0,q00, andq+q0
1. Letwbe any lower semicomputable positive prior, let be the Bayesian mixture
corresponding to w, and let2MCCS
LSC. Then0:=q+q02MCCS
LSCis a Bayesian
mixture.
Proof.0is given by the positive prior w0withw0:=qw+q01.
Bayesian approaches have a long tradition in reinforcement learning, although they
are often prohibitively expensive to compute. For multi-armed bandits, Gittins (1979)
achieved a breakthrough with an index strategy that enables the computation of the
optimal policy by computing one quantity for each arm independently of the rest. This
strategy even achieves the optimal asymptotic regret bounds (Lattimore, 2016). Larger
classes have also been attempted: using Monte-Carlo tree search, Veness et al. (2011)
approximate the Bayes optimal policy in the class of all context trees. Doshi-Velez
(2012) uses Bayesian techniques to learn infinite-state POMDPs. See Vlassis et al.
(2012) for a survey on Bayesian techniques in RL.
In the rest of this thesis, the Bayes optimal policy is often treated as an optimal ex-
ploitation strategy. This is not true: Bayes does explore (when it is Bayes optimal to do
so). It just does not explore general environment classes completely (see Section 5.4.1).
4.3.2 Knowledge-Seeking Agents
In this section we discuss two variants of knowledge-seeking agents: entropy-seeking
agents introduced by Orseau (2011, 2014a) and information-seeking agents introduced
by Orseau et al. (2013). The entropy-seeking agent maximizes the Shannon entropy
gain, while the information-seeking agent maximizes the expected information gain.
These quantities are expressed in different value functions. In places where confusion
can arise, we call the value function V
from Definition 4.10 the reward-seeking value
function.
In this section we use a finite horizon m<1(possibly dependent on time step t):
the knowledge-seeking agent maximizes entropy/information received up to time step
m. We assume implicitly that m(as a function of t) is computable. Moreover, in this
section we assume that the Bayesian mixture is a measure rather than a semimeasure;
Example 4.27 discusses this assumption.
Definition4.25 (Entropy-SeekingValueFunction; Orseau,2014a,Sec.6) .Theentropy-
seeking value of a policy given history æ<tis
V;m
Ent(æ<t) :=E
[ log2(e1:mje<tka1:m)jæ<t]:
64 Acting
The entropy-seeking value is the Bayes-expectation of log. Orseau (2011, 2014a)
also considers a related value function based on the -expectation of that we do not
discuss here.
Definition 4.26 (Information-Seeking Value Function; Orseau et al., 2013, Def. 1) .
Theinformation-seeking value of a policy given history æ<tis
V;m
IG(æ<t) :=X
2Mw(jæ<t)KLm(;jæ<t):
Analogously to before we define V
Ent:= supV
EntandV
IG:= supV
IG. An optimal
entropy-seeking policy is defined as
Ent:2arg maxV
Entand an optimal information-
seeking policy is defined as
IG:2arg maxV
IG. Since we use a finite horizon ( m<1),
these optimal policies exist.
Theinformation gain is defined as the difference in entropy between the prior and
the posterior:
IGt:m(æ1:m) := Ent(w(jæ<t)) Ent(w(jæ1:m))
We get the following identity (Lattimore, 2013, Eq. 3.5).
E
[IGt:m(æ1:m)jæ<t] =V;m
IG(æ<t)
For infinite horizons ( m=1), the values functions from Definition 4.25 and Def-
inition 4.26 may not converge. To ensure convergence, we can either use discounting,
or in case of VIGa prior with finite entropy (Lattimore, 2013, Thm. 3.4). Moreover,
note that while VIGandVEntare expectations with respect to the measure , there is
no bound on the one-step change in value V;m
IG(æ<t) V;m
IG(æ1:t), which can also be
negative. For the reward-seeking value function V;m
, the one-step change in value is
bounded between 0and1by Remark 4.11.
For classes of deterministic environments Definition 4.25 and Definition 4.26 coin-
cide. In stochastic environments the entropy-seeking agent does not work well because
it gets distracted by noise in the environment rather than trying to distinguish envi-
ronments (Orseau et al., 2013, Sec. 5). Moreover, the entropy-seeking agent may fail to
seek knowledge in deterministic semimeasures as the following example demonstrates.
Example 4.27 (Unnormalized Entropy-Seeking) .If the Bayesian mixture is a semi-
measure instead of a measure (such as the Solomonoff prior from Example 3.5), then
the entropy-seeking agent does not explore correctly. Fix A:=f;g,E:=f0;1g, and
m=t(we only care about the entropy of the next percept). We illustrate the problem
on a simple class of environments f1;2g:
1 =0=0:1 =0=0:5 2 =1=0:1 =0=0:5
where transitions are labeled with action/percept/probability. Both 1and2return
a percept deterministically or nothing at all (the environment ends). Only action
§4.3The Agents 65
distinguishes between the environments. With the prior w(1) :=w(2) := 1=2, we get
a mixturefor the entropy-seeking value function V
Ent. ThenV
Ent()0:432<0:5 =
V
Ent(), hence action is preferred over by the entropy-seeking agent. But taking
actionyields percept 0(if any), hence nothing is learned about the environment. 3
On-policy value convergence (Theorem 4.19) ensures that asymptotically, the agent
learns the value of its own policy. Knowledge-seeking agents do even better: they
don’t have to balance between exploration and exploitation, so they can focus solely
on exploration. As a result, they learn off-policy, i.e., the value of counterfactual
actions (Orseau et al., 2013, Thm. 7).
4.3.3 BayesExp
Lattimore (2013, Thm. 5.6) defines BayesExp combining AIXI with the information-
seeking agent. BayesExp alternates between phases of exploration and phases of ex-
ploitation: Let "tbe a monotone decreasing sequence of positive reals such that "t!0
ast!1. If the optimal information-seeking value V
IGis larger than "t, then BayesExp
starts an exploration phase, otherwise it starts an exploitation phase. During an explo-
ration phase, BayesExp follows an optimal information-seeking policy for an "t-effective
horizon. During an exploitation phase, BayesExp follows an -optimal reward-seeking
policy for one step (see Algorithm 1).
Algorithm 1 BayesExp policy BE(Lattimore, 2013, Alg. 2).
1:whiletruedo
2:ifV;t+Ht("t)
IG(æ<t)>"tthen
3: follow
IGforHt("t)steps
4:else
5: follow
for1step
4.3.4 Thompson Sampling
Thompson sampling , also known as posterior sampling orthe Bayesian control rule , was
originally proposed by Thompson (1933) as a bandit algorithm. It is easy to implement
and often achieves quite good results (Chapelle and Li, 2011). In multi-armed bandits
it attains optimal regret (Agrawal and Goyal, 2011; Kaufmann et al., 2012). Thompson
sampling has also been discussed for MDPs (Strens, 2000; Dearden et al., 1998) and
Bayesian and frequentist regret bounds have been established (Osband et al., 2013;
Gopalan and Mannor, 2015).
For general RL Thompson sampling was first suggested by Ortega and Braun (2010)
with resampling at every time step. Strens (2000) proposes following the optimal policy
for one episode or “related to the number of state transitions the agent is likely to need
to plan ahead”. We follow Strens’ suggestion and resample at the effective horizon.
Let"tbe a monotone decreasing sequence of positive reals such that "t!0as
t!1. Our variant of Thomson sampling is given in Algorithm 2. It samples an
66 Acting
environment fromtheposterior,followsthe -optimalpolicyforan "t-effectivehorizon,
and then repeats.
Algorithm 2 Thompson sampling policy T.
1:whiletruedo
2:samplew(jæ<t)
3:follow
forHt("t)steps
Note thatTis a stochastic policy since we occasionally sample from a distribution.
We assume that this sampling is independent of everything else.
Chapter 5
Optimality
Machines will never be intelligent. — Shane Legg
Problem 4.2 defines the general reinforcement learning problem. But our definition of
this problem did not specify what a solution would be. This chapter is dedicated to
this question:
What is an optimal solution to the general reinforcement learning problem?
How can we say that one policy is betterthan another? What is the bestpolicy? Are
the policies from Section 4.3 optimal? Several notions of optimality for a policy in
an environment class Mare conceivable:
O1.Maximal reward . The policy receives a reward of 1in every time step (which is
maximal according to Assumption 4.6b):
8t2N:rt= 1
O2.Optimal policy . The policy achieves the highest possible value in the true envi-
ronment:
8æ<t2(AE ):V
(æ<t) =V
(æ<t)
O3.Pareto optimality (Hutter, 2002a, Thm. 2). There is no other policy that performs
at least as good in all environments and strictly better in at least one:
@~:
82M:V~
()V
()and92M:V~
()>V
()
O4.Balanced Pareto optimality (Hutter, 2002a, Thm. 3). The policy achieves a
better value across Mweighted by w2Mthan any other policy:
8~:X
2Mw()
V
() V~
()
0
O5.Bayes optimality . The policy is-optimal for some Bayes mixture :
8æ<t2(AE ):V
(æ<t) =V
(æ<t)
67
68 Optimality
O6.Probably approximately correct . For a given "; > 0the value of the policy is
"-close to the optimal value with probability at least after time step t0(";):
h
8tt0(";):V
(æ<t) V
(æ<t)<"i
>1
O7.Asymptotic optimality (Hutter, 2005, Sec. 5.3.4). The value of the policy con-
verges to the optimal value:
V
(æ<t) V
(æ<t)!0ast!1
O8.Sublinear regret . The difference between the reward sum of the policy and the
best policy in hindsight grows sublinearly:
sup
0E0
"mX
t=1rt#
E
"mX
t=1rt#
2o(m)
We discuss these notions of optimality in turn. Achieving the maximal reward
at every time step is impossible if there is no action that makes the environment
respond with the maximal reward; generally there is no policy that achieves maximal
rewards at every time step. In order to follow the optimal policy, we need to know
the true environment. In our setting, the true environment is unknown and has to be
learned. During the learning process the agent cannot also act optimally because it
needs to explore. In particular, the policy cannot be optimal simultaneously in all
environments from M. This rules out O1 and O2 as a notion of optimality.
In Section 5.1 we show that all policies are Pareto optimal. This disqualifies O3 as
a useful notion of optimality in general reinforcement learning.
Balanced Pareto optimality (O4), Bayes optimality (O5), and maximal Legg-Hutter
intelligence (Legg and Hutter, 2007b) turn out to coincide. In Section 5.3 we show that
Legg-Hutter intelligence is highly subjective, because it depends on the choice of the
prior. By changing the prior of a Bayesian agent, we can make the agent’s intelligence
arbitrarily low. In Section 5.2 we present a choice of particularly bad priors. This rules
out O4 and O5 because they are prior-dependent and not objective.
O6 is a stronger version of asymptotic optimality that provides a rate of conver-
gence (it implies O7). Since our environment class can be very large and non-compact,
concrete PAC results are likely impossible. Orseau (2010, 2013) shows that the Bayes
optimal agent does not achieve asymptotic optimality in all computable environments.
The underlying problem is that in the beginning the agent does not know enough about
its environment and therefore relies heavily on its prior. Lack of exploration then re-
tains the prior’s bias. This problem can be alleviated by adding extra exploration to the
Bayesian agent. In Section 5.4 we discuss two agents that achieve asymptotic optimal-
ity: BayesExp (Section 4.3.3) and Thompson sampling (Section 4.3.4). This establishes
that O7 is possible.
In general environments sublinear regret is impossible because the agent can get
stuck in traps from which it is unable to recover. This rules out O8. However, in Sec-
§5.1Pareto Optimality 69
tion5.5weshowthatifweassumethattheenvironmentallowsrecoveringfrommistakes
(and some minor conditions on the discount function are fulfilled), then asymptotic op-
timality implies sublinear regret. This means that Thompson sampling has sublinear
regret in these recoverable environments.
Notably, only asymptotic optimality (O7) holds up to be a nontrivial and objective
criterionofoptimalitythatappliestothegeneralreinforcementlearningproblem. While
there are several agents that are known to be asymptotically optimal, some undesirable
properties remain. Section 5.6 discusses this further. See also Mahadevan (1996) for a
discussion of notions of optimality in MDPs.
5.1 Pareto Optimality
In this section we show that Pareto optimality is not a useful criterion for optimality
since for any environment class containing MCCM
comp, all policies are Pareto optimal.
Definition 5.1 (Pareto Optimality; Hutter, 2005, Def. 5.22) .A policyisPareto
optimal in the set of environments Miff there is no policy ~such thatV~
()V
()
for all2MandV~
()>V
()for at least one 2M.
The literature provides the following result.
Theorem5.2 (AIXIisParetoOptimal; Hutter,2002a, Thm.2) .Every-optimal policy
is Pareto optimal in MCCS
LSC.
The following theorem was proved for deterministic policies in Leike and Hutter
(2015c). Here we extend it to stochastic policies.
Theorem 5.3 (Pareto Optimality is Trivial) .Every policy is Pareto optimal in any
classMMCCM
comp.
The proof proceeds as follows: for a given policy , we construct a set of ‘buddy
environments’ that reward and punish other policies. Together they can defend
against any policy ~that tries to take the crown of Pareto optimality from .
Proof.We assume (0;0)and(0;1)2E. Moreover, assume there is a policy that is not
Pareto optimal. Then there is a policy ~thatPareto dominates , i.e.,V~
()>V
()
for some2 M, andV~
()V
()for all2 M. FromV~
()> V
()and
Lemma 4.18 we get that there is a shortest and lexicographically first history æ0
<k
consistent with and~such that(jæ0
<k)>~(jæ0
<k)for some action 2A
andV~
(æ0
<k)>V
(æ0
<k). Consequently there is an iksuch that
i>0, and hence
k>0. We define the environment that first reproduces the separating history æ0
<k
and then, if is the next action, returns reward 1forever, and otherwise returns reward
0forever. Formally, is defined by
(e1:tje<tka1:t) :=8
>>>><
>>>>:1;ift<kandet=e0
t;
1;iftkandak=andrt= 1andot= 0;
1;iftkandak6=andrt= 0 =ot;and
0;otherwise:
70 Optimality
The environment is computable, even if the policy is not: for a fixed history æ0
<t
and action , there exists a program computing ; therefore2MCCM
comp. We get the
following value difference for the policies and~:
V
() V~
() =E
"k 1X
t=1
trt+1X
t=k
trt#
E~
"k 1X
t=1
trt 1X
t=k
trt#
=
(jæ0
<k)1X
t=k
t ~(jæ0
<k)1X
t=k
t!
(æ0
<k)
=
(jæ0
<k) ~(jæ0
<k)
(æ0
<k) k>0
HenceV~
()< V
(), which contradicts the fact that ~Pareto dominates since
MMCCM
comp3.
Note that the environment we defined in the proof of Theorem 5.3 is actually
just a finite-state POMDP, so Pareto optimality is also trivial for smaller environment
classes.
5.2 Bad Priors
In this section we give three examples of universal priors that cause a AIXI to mis-
behave drastically. In case of a finite horizon, the indifference prior makes all actions
equally preferable to AIXI (Section 5.2.1). The dogmatic prior makes AIXI stick to
any given computable policy as long as expected future rewards do not fall too close
to zero (Section 5.2.2). The Gödel prior prevents AIXI tlfrom taking any actions (Sec-
tion 5.2.3).
5.2.1 The Indifference Prior
The following theorem constructs the indifference prior that yields a Bayesian mixture
0that causes argmax ties for the first msteps. If we use a discount function that only
cares about the first msteps, m= 0, then all policies are 0-optimal policies. In this
case AIXI’s behavior only depends on how we break argmax ties.
Theorem 5.4 (Indifference Prior) .If there is an msuch that m= 0, then there is a
Bayesian mixture 0such that all policies are 0-optimal.
Proof.First, we assume that the action space is binary, A=f0;1g. LetUbe the
reference UTM and define the UTM U0by
U0(s<mp;a1:t) :=U(p;a1:txors1:t);
wheres<mis a binary string of length m 1andsk:= 0forkm. (U0has no programs
of length less than m 1.) Let0be the Bayesian mixture given by U0according to
§5.2Bad Priors 71
(4.8). Then
0(e<mka<m) =X
p:e<mvU0(p;a<m)2 jpj
=X
s<mp0:e<mvU0(s<mp0;a<m)2 m 1 jp0j
=X
s<mX
p0:e<mvU(p0;a<mxors<m)2 m 1 jp0j
=X
s<mX
p0:e<mvU(p0;s<m)2 m 1 jp0j;
which is independent of a<m. Hence the first m 1percepts are independent of the
firstm 1actions. But the percepts’ rewards from time step mon do not matter since
m= 0(Lemma 4.16). Because the environment is chronological, the value function
must be independent of all actions. Thus every policy is 0-optimal.
For finite action spaces Awith more than 2elements, the proof works analogously
by makingAa cyclic group and using the group operation instead of xor.
The choice of U0in the proof of Theorem 5.4 depends on m. If we increase AIXI’s
horizon while fixing the UTM U0, Theorem 5.4 no longer holds. For Solomonoff in-
duction, there is an analogous problem: when using Solomonoff’s prior Mto predict a
deterministic binary sequence x, we make at most K(x)errors (Corollary 3.56). In case
the shortest program has length >m, there is no guarantee that we make less than m
errors (see Section 5.6.2).
5.2.2 The Dogmatic Prior
In this section we define a universal prior that assigns very high probability of going to
hell (reward 0forever) if we deviate from a given computable policy . For a Bayesian
agent like AIXI, it is thus only worth deviating from the policy if the agent thinks
that the prospects of following are very poor already. We call this prior the dogmatic
prior, because the fear of going to hell makes AIXI conform to any arbitrary ‘dogmatic
ideology’. AIXI will only break out if it expects to give very low future payoff; in
that case the agent does not have much to lose.
Theorem 5.5 (Dogmatic Prior) .Letbe any computable deterministic policy, let
be any Bayesian mixture over MCCS
LSC, and let" >0. There is a Bayesian mixture 0
such that for any history æ<tconsistent with and for which V
(æ<t)>", the action
(æ<t)is the unique 0-optimal action.
The following proof was adapted from Leike and Hutter (2015c) to work for en-
vironment classes that do not contain the Bayesian mixture. Essentially, for every
environment 2MCCS
LSCthe dogmatic prior puts much higher weight on an environ-
mentthat behaves just like on the policy , but sends any policy deviating from
to hell. Importantly, while following the policy the environments andare
indistinguishable, so the posterior belief in is equal to the posterior belief in .
72 Optimality
Proof of Theorem 5.5. We assume (o;0)2Efor someo2O. For every environment
2MCCS
LSCdefine the environment
(e1:tka1:t) :=8
>>>><
>>>>:(e1:tka1:t);ifak=(æ<k)8kt;
(e<kka<k);ifk:= minfijai6=(æ<i)gexists
andei= (o;0)8i2fk;:::;tg;and
0; otherwise:
The environment mimics environment until it receives an action that the policy
would not take. From then on, it provides rewards 0. Sinceis a computable policy,
we have that 2MCCS
LSCfor every2MCCS
LSC.
Now we need to reweigh the prior wso that it assigns a much higher prior weight
tothan to. Without loss of generality we assume that "is computable, otherwise
we make it slightly smaller. We define w0() :="w()if6=~for all ~2MCCS
LSCand
w0() := (1 ")w() +"w(). Then
X
2MCCS
LSCw0() =X
=~w0() +X
6=~w0()
=X
=~
(1 ")w(~) +"w()
+X
6=~"w()
=X
2MCCS
LSC"w() +X
~2MCCS
LSC(1 ")w(~)
="+ (1 ") = 1;
and withw0"w, we get that w0is a positive prior over MCCS
LSC. We define 0as the
corresponding Bayesian mixture analogous to (4.5).
With:=P
2MCCS
LSCw()we get0="+ (1 "). The mixtures and
coincide on the policy since every coincides with on the policy :
(æ<t) =X
2MCCS
LSCw()(æ<t) =X
2MCCS
LSCw()
(æ<t) =(æ<t)
Moreover,V
(æ<t) = 0and thusV
(æ<t) = 0for any history inconsistent with by
construction of .
Letæ<t2(AE )be any history consistent with such thatV
(æ<t)>". Then
=implies
(e<tka<t)
0(e<tka<t)=(e<tka<t)
0(e<tka<t)=(e<tka<t)
"(e<tka<t) + (1 ")(e<tka<t)= 1:
§5.2Bad Priors 73
Therefore Lemma 4.14 implies that for all a2Aand all policies ~
V~
0(æ<ta) ="(e<tka<t)
0(e<tka<t)V~
(æ<ta) + (1 ")(e<tka<t)
0(e<tka<t)V~
(æ<ta)
="V~
(æ<ta) + (1 ")V~
(æ<ta): (5.1)
Let:=(æ<t)be the next action according to , and let6=be any other
action. We have that V
(æ<t) =V
(æ<t)since=andæ<tis consistent
with. Therefore we get from (5.1)
V
0(æ<t)V
0(æ<t) ="V
(æ<t) + (1 ")V
(æ<t) =V
(æ<t)>";
V
0(æ<t) ="V
0
(æ<t) + (1 ")V
0
(æ<t) ="V
0
(æ<ta) + (1 ")0":
HenceV
0(æ<t)>V
0(æ<t)and thus the action taken byis the only 0-optimal
action for the history æ<t.
Corollary 5.6 (With Finite Horizon Every Policy is Bayes Optimal) .If m= 0for
somem2N, then for any deterministic policy there is a Bayesian mixture 0such
that(æ<t)is the only 0-optimal action for all histories æ<tconsistent with and
tm.
In contrast to Theorem 5.4 where every policy is 0-optimal for a fixed Bayesian
mixture0, Corollary 5.6 gives a different Bayesian mixture 0for every policy such
thatis theonly0-optimal policy.
Proof.Let">0be small enough such that V
(æ<t)>"for allæ<tandtm. (This
is possible because (AE )mis finite by Assumption 4.6c.) We use the dogmatic prior
from Theorem 5.5 to construct a Bayesian mixture 0for the policy and">0. Thus
for any history æ<t2(AE )consistent with andtm, the action (æ<t)is the
only0-optimal action.
Corollary 5.7 (AIXI Emulating Computable Policies) .Let" > 0and letbe any
computable policy. There is a Bayesian mixture 0such that for any 0-optimal policy
0and for any environment ,
V
0
() V
()< ":
Proof.From the proof of Corollary 5.6 and Lemma 4.18.
5.2.3 The Gödel Prior
This section introduces a prior that prevents any fixed formal system from making any
statements about the outcome of all but finitely many computations. It is named after
Gödel (1931) who famously showed that for any sufficiently rich formal system there
are statements that it can neither prove nor disprove.
74 Optimality
This prior is targeted at AIXI tl, a computable approximation to AIXI defined by
Hutter (2005, Sec. 7.2). AIXI tlaims to perform as least as well as the best agent who
is limited by time tand spacelthat can be verified using a proof of length at most
nfor some fixed n2N. The core idea is to enumerate all deterministic policies and
proofs and then execute the policy for which the best value has been proved.
In order to be verified, a policy has to be computed by a program pwhich fulfills
theverification condition VA(p)(Hutter, 2005, Eq. 7.7). This program pnot only
computes future actions of , but also hypothetical past actions a0
iand lower bounds
vifor the value of the policy :
VA(p) :=“8k8(va0æ)1:k:
p(æ<k) =v1a0
1:::vka0
k!vkV
(æ<k)
”;
whereis the policy derived from paccording to (æ<k) :=a0
k.
We fix some formal system that we use to prove the verification condition. We
want it to be sufficiently powerful, but this incurs Gödel incompleteness. For simplicity
of exposition we pick PA, the system of Peano arithmetic (Shoenfield, 1967, Ch. 8.1),
but our result generalizes trivially to all formal systems which cannot prove their own
consistency.
Letnbe a fixed constant. The algorithm for AIXI tlis specified as follows.
1. LetP=;. This will be the set of verified programs.
2. For all proofs in PAof lengthn: if the proof proves VA(p)for somep, andjpjl,
then add the program ptoP.
3. For each input history æ<krepeat: run all programs from Pfor at most tsteps each,
take the one with the highest promised value vk, and return that program’s policy’s
action.
Theorem 5.8 (The Gödel Prior) .There is a UTM U0such that if PAis consistent,
then the set of verified programs Pis empty for all t,l, andn.
Proof.Letqdenote an algorithm that never halts, but for which this cannot be proved
inPA; e.g., letqenumerate all consequences of PAand halt as soon as it finds a
contradiction. Since we assumed that PAis consistent, qnever halts. Define the UTM
U0(p;a1:k)as follows.
•Runqforksteps.
•Ifqhalts, output vk= 2.
•RunU(p;a1:k).
Sinceqnever halts, UandU0are functionally identical, therefore U0is universal. Note
thatPAproves8p: U(p;a1:k) =U0(p;a1:k)for any fixed k, but PAdoes not prove
8k8p:U(p;a1:k) =U0(p;a1:k).
§5.3Bayes Optimality 75
Ifqdid eventually halt, it would output a value vk= 2that is too high, since the
value function V
is bounded by 1from above, which PAknows. Hence PAproves that
qhalts!8p::VA(p) (5.2)
IfPAcould prove VA(p)for anyp, then PAwould prove that qdoes not halt since this
is the contrapositive of (5.2). Therefore the set Premains empty.
AIXItlexhibits all the problems of the arbitrariness of the UTM illustrated by the
indifference prior (Theorem 5.4) and the dogmatic prior (Theorem 5.5). In addition, it
is also susceptible to Gödel incompleteness as illustrated by the Gödel prior in Theo-
rem 5.8. The formal system that is a parameter to AIXI tljust provides another point
of failure.
As a computable approximation to AIXI, AIXI tlis needlessly complicated. As we
prove in Corollary 6.13, "-optimal AIXI is limit computable, so we can approximate it
with an anytime algorithm. Bounding the computational resources to the approxima-
tion algorithm already yields a computable version of AIXI. Moreover, unlike AIXI tl,
this approximation actually converges to AIXI in the limit. Furthermore, we can ‘speed
up’ this approximation algorithm using Hutter search (Hutter, 2002b); this is very sim-
ilar but not identical to AIXI tl.
5.3 Bayes Optimality
The aim of the Legg-Hutter intelligence measure is to formalize the intuitive notion
of intelligence mathematically. Legg and Hutter (2007a) collect various definitions of
intelligence across many academic fields and destill it into the following statement (Legg
and Hutter, 2007b)
Intelligence measures an agent’s ability to achieve goals in a wide range of
environments.
This definition is formalized as follows.
Definition 5.9 (Legg-Hutter Intelligence; Legg and Hutter, 2007b, Sec. 3.3)) .The
(Legg-Hutter) intelligence of a policy is defined as
() :=X
2Mw()V
()
The Legg-Hutter intelligence of a policy is thet0-value that achieves across all
environments from the class Mweighted by the prior w. Legg and Hutter (2007b) con-
sider a subclass of MCCS
LSC, the class of computable measures together with a Solomonoff
priorw() = 2 K()and do not use discounting explicitly.
Typically, the index is omitted when writing . However, in this section we
consider the intelligence measure with respect to different priors, therefore we make
this dependency explicit. The following proposition motivates the use of the index
instead ofw.
76 Optimality
Proposition 5.10 (Bayes Optimality = Maximal Intelligence) .() =V
()for all
policies.
Proof.Follows directly from (4.6) and Definition 5.9.
Definition 5.11 (Balanced Pareto Optimality; Hutter, 2005, Def. 5.22) .LetMbe a
set of environments. A policy isbalanced Pareto optimal in the set of environments
Miff for all policies ~,X
2Mw()
V
() V~
()
0:
Proposition 5.12 (Balanced Pareto Optimality = Maximal Intelligence) .A policy
is balanced Pareto optimal in Mif and only if has maximal Legg-Hutter intelligence.
Proof.From (4.6) we get
X
2Mw()
V
() V
()
=X
2Mw()V
() X
2Mw()V
()
=V
() V
()
=V
() sup
~V~
()
= () sup
~(~)
by Proposition 5.10. This term is nonnegative iff ()is maximal.
As a consequence of Proposition 5.10 and Proposition 5.12 we get that AIXI is
balanced Pareto optimal (Hutter, 2005, Thm. 5.24) and has maximal Legg-Hutter in-
telligence.
:= sup
() = sup
V
() =V
() = (
):
This is not surprising since Legg-Hutter intelligence was defined in terms of the t0-
value in the Bayes mixture. Moreover, because the value function is scaled to be in the
interval [0;1], intelligence is a real number between 0and1.
It is just as hard to score very high on the Legg-Hutter intelligence measure as it
is to score very low: we can always turn a reward minimizer into a reward maximizer
by inverting the rewards r0
t:= 1 rt. Hence the lowest possible intelligence score is
achieved by AIXI’s twin sister, a -expected reward minimizer:
:= inf
() = inf
V
()
The heaven environment (reward 1forever) and the hell environment (reward 0forever)
are computable and thus in the environment class MCCS
LSC; therefore it is impossible to
get a reward 0or reward 1in every environment. Consequently, for all policies ,
0<()<1:
§5.3Bayes Optimality 77
0 1
random AI0AI
image of
Figure 5.1: The Legg-Hutter intelligence measure assigns values within the closed
interval [;]; the assigned values are depicted in purple. By Theorem 5.13, com-
putable policies are dense in this purple set.
For every real number r2[;]there is a policy with () =r: analogously to
Lemma 4.14 we can define such that with probability (r )=( )it follows
and otherwise it follows arg min~V~
().
Figure 5.1 illustrates the intelligence measure . It is natural to fix the policy
randomthat takes actions uniformly at random to have an intelligence score of 1=2by
choosing a ‘symmetric’ universal prior (Legg and Veness, 2013).
AIXI is not computable (Theorem 6.15), hence there is no computable policy
such that () = or() =for any Bayesian mixture overMCCS
LSC. But the
following theorem states that computable policies can come arbitrarily close. This is
no surprise: by Lemma 4.17 we can do well on a Legg-Hutter intelligence test simply
by memorizing what AIXI would do for the first ksteps; as long as kis chosen large
enough such that discounting makes the remaining rewards contribute very little to the
value function.
Theorem 5.13 (Computable Policies are Dense) .The set
f()jis a computable policy g
is dense in the set [;].
Proof.Letbe any policy and let ">0. We need to show that there is a computable
policy ~withj(~) ()j<". We choose mlarge enough such that m= 1<"=3.
Let2Abe arbitrary and define the policy
~(ajæ<t) :=8
>><
>>:(ajæ<t)("=3) mift<m;
1 iftmanda=;and
0 otherwise:
By choosing an appropriate rational number in the interval [(ajæ<t) ("=3) m;(aj
æ<t) + ("=3) m]we can make the policy ~computable because we can store these
approximations to the action probabilities of for the first m 1steps in a lookup
table. From Lemma 4.17 we get
V;m
() V~;m
()Dm 1(;~j)
("=3) mm="
3
78 Optimality
and together with Lemma 4.16 this yields
j() (~)j=V
() V~
()V;m
() V~;m
()+ 2 m
1"
3+ 2 m
1<":
Remark5.14 (DeterministicPoliciesarenotDensein [;]).Theintelligencevalues
of deterministic policies are generally not dense in the interval [;]. We show this
by defining an environment where the first action determines whether the agent
goes to heaven or hell: action leads to heaven and action leads to hell. Define
Bayesian mixture 0:= 0:999+ 0:001and letbe any policy. If takes action
first, then 0()>0:999. Iftakes action first, then 0()<0:001. Hence
there are no deterministic policies that score an intelligence value in the closed interval
[0:001;0:999]. 3
Legg-Hutterintelligenceismeasuredwithrespecttoafixedprior. TheBayesagentis
the most intelligent policy if it uses the same prior . We use the results from Section 5.2
to show that the intelligence score of the Bayes agent can be arbitrary close to the
minimum intelligence score .
Corollary 5.15 (Some AIXIs are Stupid) .For any Bayesian mixture overMCCS
LSC
and every">0, there is a Bayesian mixture 0such that (
0)<+".
Proof.Let" > 0. According to Theorem 5.13, there is a computable policy such
that ()<+"=2. From Corollary 5.7 we get a Bayesian mixture 0such that
j(
0) ()j=jV
0
() V
()j<"=2, hence
j(
0) jj(
0) ()j+j() j<"=2 +"=2 =":
We get the same result if we fix AIXI, but rig the intelligence measure.
Corollary 5.16 (AIXI is Stupid for Some ).For any deterministic -optimal policy
and for every " > 0there is a Bayesian mixture 0such that 0(
)"and
01 ".
Proof.Leta1:=
()be the first action that
takes. We define an environment
such that taking the first action a1leads to hell and taking any other first action leads
to heaven as in Remark 5.14. We define the Bayesian mixture 0:= (1 ")+". Since
takes action a1first, it goes to hell, i.e., V
() = 0. Hence with Lemma 4.14
0(
) =V
0() = (1 ")V
() +"V
()":
For any policy that takes an action other than a1first, we get
0() =V
0() = (1 ")V
() +"V
()1 ":
On the other hand, we can make any computable policy smart if we choose the right
Bayesian mixture. In particular, we get that there is a Bayesian mixture such that ‘do
nothing’ is the most intelligent policy save for some ".
§5.4Asymptotic Optimality 79
name definition
strong a.o. V
(æ<t) V
(æ<t)!0-almost surely
a.o. in mean E
V
(æ<t) V
(æ<t)
!0
a.o. in probability 8">0:
V
(æ<t) V
(æ<t)>"
!0
weak a.o.1
tPt
k=1
V
(æ<k) V
(æ<k)
!0-almost surely
Table 5.1: The formal definition of different types of asymptotic optimality. In each
case we understand the limit as t!1.
Corollary 5.17 (Computable Policies can be Smart) .For any computable policy and
any">0there is a Bayesian mixture 0such that 0()>0 ".
Proof.Corollary5.7yieldsaBayesianmixture 0withj0 0()j=jV
0() V
0()j<
".
5.4 Asymptotic Optimality
An asymptotically optimal policy is a policy learns to act optimally in every environ-
ment fromM, i.e., the value of this policy converges to the optimal value.
Definition 5.18 (Asymptotic Optimality) .A policyisasymptotically optimal in an
environment class Miff for all2M
V
(æ<t) V
(æ<t)!0ast!1 (5.3)
on histories drawn from .
There are different types of asymptotic optimality based on the type of stochas-
tic convergence in (5.3); see Definition 2.5. If this convergence occurs almost surely,
it is called strong asymptotic optimality (Lattimore and Hutter, 2011, Def. 7); if this
convergence occurs in mean, it is called asymptotic optimality in mean ; if this conver-
gence occurs in probability, it is called asymptotic optimality in probability ; and if the
Cesàro averages converge almost surely, it is called weak asymptotic optimality (Lat-
timore and Hutter, 2011, Def. 7). Since the value function is a nonnegative bounded
random variable, asymptotic optimality in mean and asymptotic optimality in proba-
bility are equivalent. See Table 5.1 for the explicit definitions and see Figure 5.2 for an
overview over their relationship.
Asymptotic optimality in probability is in spirit a probably approximately cor-
rect (PAC) result: for all ">0and>0the probability that our policy is "-suboptimal
converges to zero; eventually this probability will be less than . For a PAC result it is
typically demanded that the number of time steps until the probability is less than
80 Optimality
strong a.o.
weak a.o. a.o. in mean a.o. in probability
Figure 5.2: The relationship between different types of asymptotic optimality. Each
arrow indicates a logical implication and each lack of an arrow indicates that there is
no logical implication.
be polynomial in 1="and1=. In general environments this is impossible, and here we
have no ambition to provide concrete convergence rates.
Intuitively, a necessary condition for asymptotic optimality is that the agent needs
toexploreinfinitelyoftenforanentireeffectivehorizon. Ifweexploreonlyfinitelyoften,
then the environment might change after we stopped exploring. Moreover, the agent
needs to predict the value of counterfactual policies accurately; but by Lemma 4.16
only for an "-effective horizon. By committing to exploration for the entire effective
horizon, we learn about the value of counterfactual policies.
Example 5.19 (Exploration Infinitely Often for an Entire Effective Horizon) .If there
is an" > 0such that the policy does not explore for Ht(")steps infinitely of-
ten, thenV
(æ<t) V
(æ<t)> "infinitely often. Define A:=f;gandE:=
f0;"=2;1g(observations are vacuous) and consider the following class of environments
M:=f1;1;2;:::g(transitions are labeled with condition: action, reward):
s0;"
2
;0s0 s1 ::: sn;"
2
t<k :;0tk:;0 ;0 ;0
;0
;0
;0;1
1 k
Environment kworks just like environment 1, except that at time step kthe path to
states1gets unlocked. The length of the state sequence in kis defined as an "-effective
horizon,n:=Ht(")wheretis the time step in which the agent leaves state s0. The
optimal policy in environment 1is to always take action , the optimal policy for
environment kis to take action fort<kand then take action . Suppose the agent
is in time step tand in state s0. Since these environments are partially observable,
it needs to explore for nsteps (take action ntimes) to distinguish 1fromkfor
anykt. Since there are infinitely many k, the agent needs to do this infinitely
often. Moreover, V
1"andV
1="=2, so iftis the true environment, then not
exploring to the right for an "-effective horizon is suboptimal by "=2. But if1is the
true environment, then exploring incurs an opportunity cost of one reward of "=2.3
§5.4Asymptotic Optimality 81
Next, we state two negative results about asymptotic optimality proved by Latti-
more and Hutter (2011). It is important to emphasize that Theorem 5.20 and Theo-
rem 5.21 only hold for deterministic policies.
Theorem 5.20 (Deterministic Policies are not Strongly Asymptotically Optimal; Lat-
timoreandHutter,2011, Thm.8) .There is no deterministic policy that is strong asymp-
totically optimal in the class MCCM
comp.
If the horizon grows linearly (for example, power discounting
(t) =t with >1;
see Table 4.1), then a deterministic policy cannot be weakly asymptotically optimal
policy: the agent has to explore for an entire effective horizon, which prevents the
Cesàro average from converging.
Theorem 5.21 (Necessary Condition for Weak Asymptotic Optimality; Lattimore,
2013, Thm. 5.5) .If there is an ">0such thatHt(")=2o(t), then there is no determin-
istic policy that is weakly asymptotically optimal in the class MCCM
comp.
There are several agents that achieve asymptotic optimality. In the rest of this
section, we discuss the Bayes agent, BayesExp, and Thompson sampling. Asymptotic
optimalitycan alsobe achieved through optimism(SunehagandHutter,2012a,b,2015).
5.4.1 Bayes
In this section, we list two results from the literature regarding the asymptotic optimal-
ity of the Bayes optimal policy. The following negative result is due to Orseau (2010,
2013).
Theorem5.22 (BayesisnotAsymptoticallyOptimalinGeneralEnvironments; Orseau,
2013, Thm. 4) .For any classMMCCM
compno Bayes optimal policy
is asymptotically
optimal: there is an environment 2Mand a time step t02Nsuch that
-almost
surely for all time steps tt0
V
(æ<t) V
(æ<t) =1
2:
Orseau calls this result the good enough effect : A Bayesian agent eventually decides
that the current strategy is good enough and that any additional exploration is not
worth its expected payoff. However, if the environment changes afterwards, the Bayes
agent is acting suboptimally.
Proof.Without loss of generality assume A:=f;gandE:=f0;1=2;1g(observations
are vacuous). We consider the following environment (transitions are labeled with
action, reward).
82 Optimality
s0 s1 ::: sn;1
2
;0;0;0
;0
Instates0theactionistheexploitationactionandtheaction theexplorationaction.
The length of the state sequence is defined as an 1=t-effective horizon, n:=Ht(1=t)
wheretis the time step in which the agent leaves state s0. Since the discount function
is computable by Assumption 4.6a, 2MCCS
LSC.
Assume that when acting in , the Bayes agent explores infinitely often. Let æ<t
be a history in which the agent is in state s0and takes action . ThenV
1=t.
By on-policy value convergence (Corollary 4.20), V
(æ<t) V
(æ<t)!0
-almost
surely. Hence there is a time step t0such that for all tt0we haveV
< w()=2.
Sinceis deterministic, w(jæ<t)w(). Now we get a contradiction from (4.7):
V
(æ<t)w(jæ<t)V
(æ<t)w()V
(æ<t) =w()
2>V
(æ<t)
Therefore the Bayes agent stops taking the exploration action after time step t0,
and so it is not optimal in any 2MCCS
LSCthat behaves like until time step t0and
then changes:
s0 s1 ::: sn;1
2
t>t 0:;1tt0:;0;0;0
;0
The following theorem is also known as the self-optimizing theorem . This theorem
has been a source of great confusion because its statement in Hutter (2005, Thm. 5.34)
is not very explicit about how the histories are generated. The formulation of Lattimore
(2013, Thm. 5.2) is explicit, but less general.
Theorem 5.23 (Sufficient Condition for Strong Asymptotic Optimality of Bayes; Hut-
ter, 2005, Thm. 5.34) .Letbe some environment. If there is a policy and a sequence
of policies1;2;:::such that for all 2M
V
(æ<t) Vt(æ<t)!0ast!1-almost surely, (5.4)
then
V
(æ<t) V
(æ<t)!0ast!1-almost surely.
§5.4Asymptotic Optimality 83
If=
and(5.4)holds for all 2M, then
is strongly asymptotically optimal in
the classM.
It is important to emphasize that the policies 1;2;:::need to converge to the
optimal value on the history generated by and, and not (as one might think) and
t. Intuitively, the policy is an ‘exploration policy’ that ensures that the environment
class is explored sufficiently. Typically, a policy is asymptotically optimal on its own
history. So if =1=2=:::, then we get that Bayes is asymptotically optimal on
the history generated by the policy , not its own history. In light of Theorem 5.5 and
Theorem 5.22 this is not too surprising; Bayesian reinforcement learning agents might
not explore enough to be asymptotically optimal, but given a policy that does explore
enough, Bayes learns enough to be asymptotically optimal.
This invites us to define the following policies t: follow the information-seeking
policy
IGuntil time step t, and then follow
(explore until t, then exploit). Since the
information-seeking policy explores enough to prove off-policy prediction (Orseau et al.,
2013, Thm. 7), we get V
V
!0for every policy uniformly. Hence arg maxV
!
arg maxV
and thusV
V
!0and (5.4) is satisfied. From Theorem 5.23 we get
V
V
!0, which we already knew. In order to get strong asymptotic optimality,
all we need to do is choose the switching time step tappropriately, i.e., wait until V
andV
are close enough. Unfortunately, this is an invalid strategy: the agent does
not know the true environment and hence cannot check this condition.
Hutter (2005, Sec. 5.6) uses Theorem 5.23 to show that the Bayes optimal policy is
strongly asymptotically optimal in the class of ergodic finite-state MDPs if the effective
horizon is growing, i.e., Ht(")!1for all">0. This relies on the fact that in ergodic
finite-state MDPs we need a fixed number of steps to explore the entire environment up
to"-confidence. Thereforewecandefineasequenceofpolicies 1;2;:::thatcompletely
disregard the history and start exploring everything from scratch. Since the effective
horizon is growing, this exploration phase takes a vanishing fraction of effective horizon
and most of the value is retained. Therefore the sequence of policies 1;2;:::satisfies
the condition of Theorem 5.23 regardless of the history, thus in particular for the history
generated by =
and any2M. Note that the condition on the horizon is
important: If the effective horizon is bounded, then Bayes is not asymptotically optimal
in the class of ergodic finite-state MDPs because it can be locked into a dogmatic prior
similarly to Theorem 5.5.
Proof of Theorem 5.23. From (4.6) we get for any history æ<t
w(jæ<t)
V
(æ<t) V
(æ<t)
X
2Mw(jæ<t)
V
(æ<t) V
(æ<t)
= X
2Mw(jæ<t)V
(æ<t)!
V
(æ<t)
X
2Mw(jæ<t)V
(æ<t) Vt
(æ<t)
84 Optimality
=X
2Mw(jæ<t)
V
(æ<t) Vt(æ<t)
:(5.5)
From (5.4) follows that V
Vt!0-almost surely for all 2M, so (5.5) converges
to0-almost surely (Hutter, 2005, Lem. 5.28ii). Similar to Example 3.20, 1=w(j
æ<t)is a nonnegative -martingale and thus converges (to a finite value) -almost
surely by Theorem 2.8. Therefore V
(æ<t) V
(æ<t)!0-almost surely. If this
is true for all 2M, the strong asymptotic optimality of
follows from =
by
definition.
5.4.2 BayesExp
The definition of BayesExp is given in Section 4.3.3. In this subsection we state a result
by Lattimore (2013) that motivated the definition of BayesExp.
Theorem 5.24 (BayesExp is Weakly Asymptotically Optimal; Lattimore, 2013, Thm.
5.6).LetBEdenote the policy from Algorithm 1. If Ht(")grows monotone in tand
Ht("t)="t2o(t), then for all environments 2M
1
ttX
k=1
V
(æ<k) VBE(æ<k)
!0ast!1BE-almost surely :
If the horizon grows sublinearly ( Ht(")2o(t)for all" >0), then we can always
find a sequence "t!0that decreases slowly enough such that Ht(")="t2o(t)holds.
5.4.3 Thompson Sampling
In this section we prove that the Thompson sampling policy defined in Section 4.3.4 is
asymptotically optimal. Ortega and Braun (2010) prove that the action probabilities
of Thompson sampling converge to the action probability of the optimal policy almost
surely, but require a finite environment class and two (arguably quite strong) technical
assumptions on the behavior of the posterior distribution (akin to ergodicity) and the
similarity of environments in the class. Our convergence results do not require these
assumptions.
Theorem 5.25 (Thompson Sampling is Asymptotically Optimal in Mean) .For all
environments 2M,
ET
V
(æ<t) VT(æ<t)
!0ast!1.
This theorem immediately implies that Thompson sampling is also asymptotically
optimal in probability according to Figure 5.2. However, this does not imply almost
sure convergence (see Example 5.28).
We first give an intuition for the asymptotic optimality of Thompson sampling. At
every resampling step we can split the class Minto three partitions:
1. Environments whereV
V
§5.4Asymptotic Optimality 85
2. Environments whereV
>V
3. Environments whereV
<V
The first class is the class of ‘good’ environments: if we draw one of them, we follow a
policy that is close to optimal in . The second class is the class of environments that
overestimate the value of . Following their optimal policy the agent gains information
becauserewardswillbelowerthanexpected. Thethirdclassistheclassofenvironments
that underestimate the value of . Following their optimal policy the agent might not
gain information since might behave just like environment on the-optimal policy.
However, when sampling from the first class instead, the agent gains information about
the third class because rewards tend to be better than environments from the third
class predicted.
Since the true environment 2M, the first class is not empty, and the probability
of drawing a sample from the first class does not become too small. Whenever the
second and third class have sufficiently high weight in the posterior, there is a good
chance of picking a policy that leads the agent to gain information. Asymptotically,
the posterior converges, so the agent ends up having learned everything it could, i.e.,
the posterior weight of the second and third class vanishes.
This argument is not too hard to formalize for deterministic environment classes.
However, for stochastic environment classes the effect on the posterior when following
a bad policy is harder to quantify because there is always a chance that the rewards are
different simply because of bad luck. In order to prove this theorem in its generality for
stochasticclasses, weemployanentirelydifferentproofstrategythatreliesonstatistical
tools rather than the argument given above.
Definition 5.26 (Expected Total Variation Distance) .Letbe any policy and let
m2N[1. Theexpected total variation distance on the policy is
F
m(æ<t) :=X
2Mw(jæ<t)Dm(;jæ<t):
If we replace the distance measure Dmby cross-entropy, then the quantity F
m(æ<t)
becomes the expected information gain (see Section 4.3.2).
For the proof of Theorem 5.25 we need the following lemma.
Lemma 5.27 (Expected Total Variation Distance Vanishes On-Policy) .For any policy
and any environment ,
E
[F
1(æ<t)]!0ast!1.
Proof.From Theorem 3.25 we get D1(;jæ<t)!0-almost surely, and since D
is bounded, this convergence also occurs in mean. Thus for every environment 2M,
E
D1(;jæ<t)
!0ast!1:
86 Optimality
Now
E
[F
1(æ<t)]1
w()E
[F
1(æ<t)]
=1
w()E
"X
2Mw(jæ<t)D1(;jæ<t)#
=1
w()E
"X
2Mw()(æ<t)
(æ<t)D1(;jæ<t)#
=1
w()X
2Mw()E
D1(;jæ<t)
!0
by Hutter (2005, Lem. 5.28ii) since total variation distance is bounded.
Proof of Theorem 5.25. Let; > 0and let"t>0denote the sequence used to define
Tin Algorithm 2. We assume that tis large enough such that "kfor allkt
and thatis small enough such that w(jæ<t)>4for allt, which holds since
w(jæ<t)6!0-almost surely for any policy (Hutter, 2009a, Lem. 3i).
The stochastic process w(jæ<t)is aT-martingale according to Example 3.20.
By the martingale convergence theorem (Theorem 2.8) w(jæ<t)convergesT-almost
surely and because Tw()Tit also converges T-almost surely.
We argue that we can choose t0to be one of T’s resampling time steps large
enough such that for all tt0the following three events hold simultaneously with
T-probability at least 1 .
(i) There is a finite set M0Mwithw(M0jæ<t)>1 andw(jæ<k)6!0as
k!1for all2M0.
(ii)jw(M00jæ<t) w(M00jæ<t0)jfor allM00M0.
(iii)FT1(æ<t)<w2
min.
wherewmin:= inffw(jæ<k)jk2N;2M0g, which is positive by (i).
(i) and (ii) are satisfied eventually because the posterior w(jæ<t)convergesT-
almost surely. Note that the set M0is random: the limit of w(jæ<t)ast!1
depends on the history æ1:1. Without loss of generality, we assume the true environ-
mentis contained inM0sincew(jæ<t)6!0T-almost surely. (iii) follows from
Lemma 5.27 since convergence in mean implies convergence in probability.
Moreover, we define the horizon m:=t+Ht("t)as the time step of the effective
horizon at time step t. Letæ<tbe a fixed history for which (i-iii) is satisfied. Then we
have
w2
min>FT1(æ<t)
=X
2Mw(jæ<t)D1(T;Tjæ<t)
=Ew(jæ<t)[D1(T;Tjæ<t)]
§5.4Asymptotic Optimality 87
Ew(jæ<t)[Dm(T;Tjæ<t)]
w2
minw(MnM00jæ<t)
by Markov’s inequality where
M00:=
2MDm(T;Tjæ<t)<w2
min
:
For our fixed history æ<twe have
1 <w (M00jæ<t)
(i)
w(M00\M0jæ<t) +
(ii)
w(M00\M0jæ<t0) + 2
(i)
w(M00jæ<t0) + 3
and thus we get
1 4<w
f2MjDm(T;Tjæ<t)<w2
mingæ<t0
: (5.6)
In particular, this bound holds for =sincew(jæ<t0)>4by assumption.
It remains to show that with high probability the value V
of the sample ’s
optimal policy
is sufficiently close to the -optimal value V
. The worst case is
that we draw the worst sample from M0\M00twice in a row. From now on, let
denote the sample environment we draw at time step t0, and lettdenote some time
step between t0andt1:=t0+Ht0("t0)(before the next resampling). With probability
w(0jæ<t0)w(0jæ<t1)we sample0both att0andt1when following T. Therefore
we have for all æt:mand all2M
T(æ1:mjæ<t)w(0jæ<t0)w(0jæ<t1)
0(æ1:mjæ<t):
Thus we get for all 2M0(in particular and)
Dm(T;Tjæ<t)sup
02Msup
A(AE )mw(0jæ<t0)w(0jæ<t1)
(
0(Ajæ<t)
0(Ajæ<t))
w(jæ<t0)w(jæ<t1) sup
A(AE )m
(Ajæ<t)
(Ajæ<t)
w2
minDm(
;
jæ<t):
For2M00we get with (5.6)
Dm(T;Tjæ<t)Dm(T;Tjæ<t) +Dm(T;Tjæ<t)
<w2
min+w2
min= 2w2
min;
88 Optimality
which together with Lemma 4.17 and the fact that rewards in [0;1]implies
V
(æ<t) V
(æ<t) t+Ht("t)
t+V
;m
(æ<t) V
;m
(æ<t)
"t+Dm(
;
jæ<t)
"t+1
w2
minDm(T;Tjæ<t)
<+ 2= 3:
Hence we get (omitting history arguments æ<tfor simplicity)
V
=V
<V
+ 3V
+ 3=V
+ 3 <V
+ 3+ 3=V
+ 6:(5.7)
WithT-probability at least 1 (i), (ii), and (iii) are true, with T-probability
at least 1 our sample happens to be inM0by (i), and with w(jæ<t0)-probability
at least 1 4the sample is inM00by (5.6). All of these events are true simultaneously
with probability at least 1 (++ 4) = 1 6. Hence the bound (5.7) transfers for
Tsuch that with T-probability1 6we have
V
(æ<t) VT(æ<t)<6:
ThereforeT[V
(æ<t) VT(æ<t)6]<6and with!0we get that V
(æ<t)
VT(æ<t)!0ast!1in probability. The value function is bounded, thus it also
converges in mean.
The following example shows that the Thompson sampling policy is not strongly
asymptotically optimal. However, we expect that strong asymptotic optimality can
be achieved with Thompson sampling by resampling at every time step (with strong
assumptions on the discount function). However, for practical purposes resampling in
every time step is very inefficient.
Example 5.28 (Thompson Sampling is not Strongly Asymptotically Optimal) .Define
A:=f;g,E:=f0;1=2;1g, and assume geometric discounting (Example 4.5). Con-
sider the following class of environments M:=f1;1;2;:::g(transitions are labeled
with action, reward):
s0
s1
s2;1
2
;0
;0
;0;0s0
s1
s2s3
s4;1
2
t<k :;0
;0
;0;0tk:;0
;0;0
;1
;0
1 k
§5.4Asymptotic Optimality 89
Environment kworks just like environment 1except that after time step k, the path
to states3gets unlocked. The class Mis a class of deterministic weakly communicat-
ing POMDPs (but as a POMDP khas more than 5 states). The optimal policy in
environment 1is to always take action , the optimal policy for environment kis to
take action fort<kand then take action in states1and action otherwise.
Suppose the policy Tis acting in environment 1. Since it is asymptotically
optimal in the class M, it has to take actions froms0infinitely often: for t < k
environment kis indistinguishable from 1, so the posterior for kis larger or equal to
the prior. Hence there is always a constant chance of sampling kuntil taking actions
, at which point all environments kforktbecome falsified.
If the policy Tdecides to explore and take the first action , it will be in state s1.
Letæ<tdenote the current history. Then the 1-optimal action is and
V
1(æ<t) = (1
)
0 +
1
2+
21
2+:::
=
2:
The next action taken by Tissince any optimal policy for any sampled environment
that takes action once, takes that action again (and we are following that policy for
an"t-effective horizon). Hence
VT1(æ<t)(1
)
0 + 0 +
21
2+
31
2+:::
=
2
2:
ThereforeV
1 VT1(
2)=2>0. This happens infinitely often with probability
one and thus we cannot get almost sure convergence. 3
If the Bayesian mixture is inside the class M(as it is the case for the class MCCS
LSC),
then we can assign a prior probability that is arbitrarily close to 1. Since the posterior
ofisthesameastheprior, ThompsonsamplingwillactaccordingtotheBayesoptimal
policy most of the time. This means the Bayes-value of Thompson sampling can be
very good; formally, V
() VT
() = (T)can be made arbitrarily small.
In contrast, the Bayes-value of Thompson sampling can also be very bad: Suppose
you have a class of (n+ 1)-armed bandits indexed 1;:::;nwhere bandit igives reward
1 "on arm 1, reward 1on armi+ 1, and reward 0on all other arms. For geometric
discounting and "<(1
)=(2
), it is Bayes optimal to pull arm 1while Thompson
sampling will explore on average n=2arms until it finds the optimal arm. The Bayes-
valueofThompsonsamplingis 1=(n
n 1)incontractto (1 ")achievedbyBayes. For
a horizon of n, the Bayes optimal policy suffers a regret of "nand Thompson sampling
a regret ofn=2, which is much larger for small ".
5.4.4 Almost Sure in Cesàro Average vs. in Mean
It might appear that convergence in mean is more natural than the convergence of
Cesàro averages of weak asymptotic optimality. However, both notions are not so
fundamentally different because they both allow an infinite number of bad mistakes
(actions that lead to V
V
being large). Asymptotic optimality in mean allows bad
90 Optimality
mistakes as long as their probability converges to zero; weak asymptotic optimality
allows bad mistakes as long as the total time spent on bad mistakes grows sublinearly.
Note that according to Example 5.19 making bad mistakes infinitely often is necessary
for asymptotic optimality.
Theorem 5.24 shows that weak asymptotic optimality is possible in any countable
class of stochastic environments. However, this requires the additional condition that
the effective horizon grows sublinearly, Ht("t)2o(t), while Theorem 5.25 does not
require any condition on the discount function.
Generally, weak asymptotic optimality and asymptotic optimality in mean are in-
comparable because the notions of convergence are incomparable for (bounded) random
variables. First, for deterministic sequences (i.e. deterministic policies in deterministic
environments), convergence in mean is equivalent to (regular) convergence, which is
impossible by Theorem 5.20. Second, convergence in probability (and hence conver-
gence in mean for bounded random variables) does not imply almost sure convergence
of Cesàro averages (Stoyanov, 2013, Sec. 14.18). We leave open the question whether
the policyTis weakly asymptotically optimal.
5.5 Regret
Regret is how many expected rewards the agent forfeits by not following the best in-
formed policy.
Definition 5.29 (Regret).Theregretof a policy in environment is
Rm(;) := sup
0E0
"mX
t=1rt#
E
"mX
t=1rt#
:
Note that regret is undiscounted and always nonnegative. Moreover, the space of
possible different policies for the first mactions is finite and we assumed the set of
actionsAand the set of percepts Eto be finite (Assumption 4.6c), so the supremum is
always attained by some policy (not necessarily the -optimal policy
because that
policy uses discounting).
Different problem classes have different regret rates, depending on the structure
and the difficulty of the problem class. Multi-armed bandits provide a (problem-
independent)worst-caseregretboundof
(p
km)wherekisthenumberofarms(Bubeck
and Bianchi, 2012). In MDPs the lower bound is
(p
SAdm )whereSis the number
of states,Athe number of actions, and dthe diameter of the MDP (Auer et al., 2010).
For a countable class of environments given by state representation functions that map
histories to MDP states, a regret of ~O(m2=3)is achievable assuming the resulting MDP
is weakly communicating (Nguyen et al., 2013).
A problem class is considered learnable if there is an algorithm that has a sublinear
regret guarantee. The following example shows that the general reinforcement learning
problem is not learnable because the agent can get caught in a trap and be unable to
recover.
§5.5Regret 91
Example 5.30 (Linear Regret; Hutter, 2005, Sec. 5.3.2) .Consider the following two
environments 1and2. In environment 1actionleads to hell (reward 0forever)
and actionleads to heaven (reward 1forever). Environment 2behaves just the same,
except that both actions are swapped.
hell heavenreward = 0 reward = 1
hell heavenreward = 0 reward = 1
1 2
The policy that takes action in the first time step performs well in 2but performs
poorly in1. Likewise, the policy that takes action in the first time step performs
well in1but performs poorly in 2. Regardless of which policy we adopt, our regret
is always linear in one of the environments 1or2:
Rm(; 1) =m R m(; 2) = 0
Rm(; 1) = 0 Rm(; 2) =m 3
To achieve sublinear regret we need to ensure that the agent can recover from
mistakes. Formally, we make the following assumption.
Definition 5.31 (Recoverability) .An environment satisfies the recoverability as-
sumption iff
sup
E
[V
(æ<t)] E
[V
(æ<t)]!0ast!1:
Recoverability compares following the worst policy fort 1time steps and then
switching to the optimal policy
to having followed
from the beginning. The
recoverability assumption states that switching to the optimal policy at any time step
enables the recovery of most of the value: it has to become less costly to recover from
mistakes as time progresses. This should be regarded as an effect of the discount
function: if the (effective) horizon grows, recovery becomes easier because the optimal
policy has more time to perform a recovery. Moreover, recoverability is on the optimal
policy, in contrast to the notion of ergodicity in MDPs which demands returning to a
starting state regardless of the policy.
Remark 5.32 (Weakly Communicating POMDPs are Recoverable) .If the effective
horizon is growing, Ht(")!1ast!1, then any weakly communicating finite state
POMDP satisfies the recoverability assumption. 3
5.5.1 Sublinear Regret in Recoverable Environments
This subsection is dedicated to the following theorem that connects asymptotic opti-
mality in mean to sublinear regret.
92 Optimality
Theorem 5.33 (Sublinear Regret in Recoverable Environments) .If the discount func-
tion
satisfies Assumption 5.34, the environment satisfies the recoverability assump-
tion, andis asymptotically optimal in mean in the class fg, thenRm(;)2o(m).
Assumption 5.34 (Discount Function) .Let the discount function
be such that
(a)
t>0for allt,
(b)
tis monotone decreasing in t, and
(c)Ht(")2o(t)for all">0.
This assumption demands that the discount function is somewhat well-behaved: the
function has no oscillations, does not become 0, and the horizon is not growing too fast.
It is satisfied by geometric discounting (Example 4.5): (a)
t>0, (b)
monotone
decreasing, and (c) Ht(") =dlog
"e2o(t).
The problem with geometric discounting is that it makes the recoverability assump-
tion very strong: since the horizon is not growing, the environment has to enable faster
recovery as time progresses; in this case weakly communicating POMDPs are notre-
coverable. A choice with Ht(")!1that satisfies Assumption 5.34 is subgeometric
discounting
t:=e p
t=p
t(see Table 4.1).
If the items in Assumption 5.34 are violated, Theorem 5.33 can fail:
•If
t= 0for some time steps t, our policy does not care about those time steps
and might take actions that have large regret.
•Similarly if
oscillates between high values and very low values: our policy might
take high-regret actions in time steps with comparatively lower
-weight.
•If the horizon grows linearly, infinitely often our policy might spend some constant
fraction of the current effective horizon exploring, which incurs a cost that is a
constant fraction of the total regret so far.
To prove Theorem 5.33 we require the following technical lemma.
Lemma 5.35 (Value and Regret) .Let" > 0and assume the discount function
satisfies Assumption 5.34. Let (dt)t2Nbe a sequence of numbers with jdtj1for allt.
If there is a time step t0with
1
t1X
k=t
kdk<"8tt0 (5.8)
thenmX
t=1dtt0+"(m t0+ 1) +1 +"
1 "Hm("):
Proof.This proof essentially follows the proof of Hutter (2006b, Thm. 17).
§5.5Regret 93
By Assumption 5.34a we have
t>0for alltand hence t>0for allt. By
Assumption 5.34b have that
is monotone decreasing, so we get for all n2N
t=1X
k=t
kt+n 1X
k=t
t+1X
k=t+n
k=n
t+ t+n:
And withn:=Ht(")this yields
tHt(")
t1 t+Ht(")
t1 ">0: (5.9)
In particular, this bound holds for all tand">0.
Next, we define a series of nonnegative weights (bt)t1such that
mX
t=t0dk=mX
t=t0bt
tmX
k=t
kdk:
This yields the constraints
tX
k=t0bk
k
t= 18tt0:
The solution to these constraints is
bt0= t0
t0;andbt= t
t t
t 1fort>t 0: (5.10)
Thus we get
mX
t=t0bt= t0
t0+mX
t=t0+1 t
t t
t 1
= m+1
m+mX
t=t0 t
t t+1
t
= m+1
m+m t0+ 1
Hm(")
1 "+m t0+ 1
for all">0according to (5.9).
Finally,
mX
t=1dtt0X
t=1dt+mX
t=t0bt
tmX
k=t
kdk
t0+mX
t=t0bt
t1X
k=t
kdk mX
t=t0bt
t1X
k=m+1
kdk
94 Optimality
and using the assumption (5.8) and dt 1,
<t0+mX
t=t0bt"+mX
t=t0bt m+1
t
t0+"Hm(")
1 "+"(m t0+ 1) +mX
t=t0bt m+1
t
For the latter term we substitute (5.10) to get
mX
t=t0bt m+1
t= m+1
t0+mX
t=t0+1 m+1
t m+1
t 1
= m+1
mHm(")
1 "
with (5.9).
Proof of Theorem 5.33. Let(m)m2Ndenote any sequence of policies, such as a se-
quence of policies that attain the supremum in the definition of regret. We want to
show that
Em"mX
t=1rt#
E
"mX
t=1rt#
2o(m):
For
d(m)
k:=Em[rk] E
[rk] (5.11)
we have 1d(m)
k1since we assumed rewards to be bounded between 0and1.
Because the environment satisfies the recoverability assumption we have
E
[V
(æ<t)] E
[V
(æ<t)]!0ast!1;and
sup
mE
[V
(æ<t)] Em[V
(æ<t)]!0ast!1;
so we conclude that
sup
mE
[V
(æ<t)] Em[V
(æ<t)]!0
by the triangle inequality and thus
sup
mEm[V
(æ<t)] E
[V
(æ<t)]!0ast!1: (5.12)
By assumption the policy is asymptotically optimal in mean, so we have
E
[V
(æ<t)] E
[V
(æ<t)]!0ast!1;
and with (5.12) this combines to
sup
mEm[V
(æ<t)] E
[V
(æ<t)]!0ast!1:
§5.5Regret 95
FromV
(æ<t)Vm(æ<t)we get
lim sup
t!1
sup
mEm[Vm(æ<t)] E
[V
(æ<t)]
0: (5.13)
For02f; 1;2;:::gwe have
E0
[V0
(æ<t)] =E0
"
1
tE0
"1X
k=t
krkæ<t##
=E0
"
1
t1X
k=t
krk#
=1
t1X
k=t
kE0
[rk];
so from (5.11) and (5.13) we get
lim sup
t!1sup
m1
t1X
k=t
kd(m)
k0:
Let" > 0. We choose t0independent of mand large enough such that we get
supmP1
k=t
kd(m)
k= t< "for alltt0. Now we let m2Nbe given and apply
Lemma 5.35 to get
Rm(;)
m=Pm
k=1d(m)
k
mt0+"(m t0+ 1) +1+"
1 "Hm(")
m:
SinceHt(")2o(t)according to Assumption 5.34c we get lim supm!1Rm(;)=m
0.
Example 5.36 (The Converse of Theorem 5.33 is False) .Letbe a two-armed
Bernoulli bandit with means 0and1and suppose we are using geometric discount-
ing with discount factor
2[0;1). This environment is recoverable. If our policy
pulls the suboptimal arm exactly on time steps 1;2;4;8;16;:::, regret will be log-
arithmic. However, on time steps t= 2nforn2Nthe value difference V
V
is
deterministically at least 1
>0. 3
Note that Example 5.36 does not rule out weak asymptotic optimality.
5.5.2 Regret of the Optimal Policy and Thompson sampling
We get the following immediate consequence.
Corollary 5.37 (Sublinear Regret for the Optimal Discounted Policy) .If the discount
function
satisfies Assumption 5.34 and the environment satisfies the recoverability
assumption, then Rm(
;)2o(m).
Proof.From Theorem 5.33 since the policy
is (trivially) asymptotically optimal in
fg.
If the environment does not satisfy the recoverability assumption, regret may be
lineareven on the optimal policy : the optimal policy maximizes discounted rewards
96 Optimality
and this short-sightedness might incur a tradeoff that leads to linear regret later on if
the environment does not allow recovery.
Corollary 5.38 (Sublinear Regret for Thompson Sampling) .If the discount function
satisfies Assumption 5.34 and the environment 2Msatisfies the recoverability
assumption, then Rm(T;)2o(m)for the Thompson sampling policy T.
Proof.From Theorem 5.25 and Theorem 5.33.
5.6 Discussion
In this work, we disregard computational constraints. Because of this, our agents learn
very efficiently and we can focus on the way they balance exploration and exploitation.
So which balance is best?
5.6.1 The Optimality of AIXI
Bayesian reinforcement learning agents make the tradeoff between exploration and ex-
ploitationintheBayesoptimalway. Maximizingexpectedrewardsaccordingtoanypos-
itive prior does not lead to enough exploration to achieve asymptotic optimality (Theo-
rem 5.22); the prior’s bias is retained indefinitely. For bad priors this can cause serious
malfunctions: the dogmatic prior defined in Section 5.2.2 can prevent a Bayesian agent
from taking a single exploratory action ; exploration is restricted to cases where the ex-
pected future payoff falls below some prespecified ">0. However, this problem can be
alleviated by adding an extra exploration component to AIXI: Lattimore (2013) shows
that BayesExp is weakly asymptotically optimal (Theorem 5.24).
So instead, we may ask the following weaker questions. Does AIXI succeed in every
(ergodic) finite-state (PO)MDP, bandit problem, or sequence prediction task? Our
results imply that without further assumptions on the prior, we cannot answer any of
the preceding questions in the affirmative. Using a dogmatic prior (Theorem 5.5), we
can make AIXI follow any computable policy as long as that policy produces rewards
that are bounded away from zero.
•In a sequence prediction task that gives a reward of 1for every correctly predicted
bit and 0otherwise, a policy that correctly predicts every third bit will receive
an average reward of 1=3. With a-dogmatic prior, AIXI thus only predicts
a third of the bits correctly, and hence is outperformed by a uniformly random
predictor.
However, if we have a constant horizon of length 1, AIXIdoessucceed in sequence
prediction (Hutter, 2005, Sec. 6.2.2). If the horizon is this short, the agent is so
hedonistic that no threat of hell can deter it.
•In a (PO)MDP a dogmatic prior can make AIXI get stuck in any loop that
provides nonzero expected rewards.
§5.6Discussion 97
•In a bandit problem, a dogmatic prior can make AIXI get stuck on any arm which
provides nonzero expected rewards.
These results apply not only to AIXI, but generally to Bayesian reinforcement learn-
ing agents. Any Bayesian mixture over nonrecoverable environments is susceptible to
dogmatic priors if we allow an arbitrary reweighing of the prior. Notable exceptions
are classes of environment that allow policies that are strongly asymptotically optimal
regardless of the history (Theorem 5.23). For example, the class of all ergodic MDPs
for an unbounded effective horizon; in this case the Bayes optimal policy is strongly
asymptotically optimal (Hutter, 2005, Thm. 5.38). Note that in contrast to our results,
this requires that that the agent uses a Bayes-mixture over a class of ergodic MDPs.
Moreover, Bayesian agents still perform well at learning and achieve on-policy value
convergence(Corollary4.20): theposteriorbeliefaboutthevalueofapolicy converges
to the true value of while following :V
(æ<t) V
(æ<t)!0ast!1-almost
surely. Since this holds for any policy, in particular it holds for the Bayes optimal policy
. This means that the Bayes agent learns to predict those parts of the environment
that it sees. But if it does not explore enough, then it will not learn other parts of the
environment that are potentially more rewarding.
Hutter (2005, Claim 5.12) claims:
We expect AIXI to be universally optimal.
Our work seriously challenges Hutter’s claim: no nontrivial and non-subjective optimal-
ity results for AIXI remain (see Table 5.3). Until new arguments for AIXI’s optimality
are put forward, we have to regard AIXI as a relativetheory of intelligence, dependent
on the choice of the prior.
5.6.2 Natural Universal Turing Machines
The choice of the UTM has been a big open question in algorithmic information theory
for a long time. The Kolmogorov complexity of a string depends on this choice. How-
ever, there are invariance theorems (Li and Vitányi, 2008, Thm. 2.1.1 & Thm. 3.1.1)
which state that changing the UTM changes Kolmogorov complexity only by a con-
stant. When using the Solomonoff prior Mto predict any deterministic computable
binary sequence, the number of wrong predictions is bounded by the Kolmogorov com-
plexity of the sequence (Corollary 3.56). Due to the invariance theorem, changing the
UTM changes the number of errors only by a constant. In this sense, compression and
prediction work for any choice of UTM.
For AIXI, there can be no invariance theorem; in Section 5.2 we showed that a bad
choice for the UTM can have drastic consequences. Our negative results can guide
future search for a naturalUTM: the UTMs used to define the indifference prior (Theo-
rem 5.4), the dogmatic prior (Theorem 5.5), and the Gödel prior (Theorem 5.8) should
be considered unnatural. But what are other desirable properties of a UTM?
A remarkable but unsuccessful attempt to find natural UTMs is due to Müller
(2010). It takes the probability that one universal machine simulates another according
98 Optimality
name defined in KU(U0) KU0(U)
indifference prior Theorem 5.4 K(U) +K(m) +O(1) m
dogmatic prior Theorem 5.5 K(U) +K() +K(") +O(1)d log2"e
Gödel prior Theorem 5.8 K(U) +K(PA) +O(1) 0
Table 5.2: Upper bounds to compiler sizes of the UTMs used in the proofs of Sec-
tion 5.2.KU(U0)is the number of extra bits to run the ‘bad’ UTM U0on the ‘good’
UTMU,KU0(U)is the number of extra bits to run UonU0.K(U)denotes the length
of the shortest program for UonU.
to the length of their respective compilers and searches for a stationary distribution.
Unfortunately, no stationary distribution exists.
Alternatively, we could demand that the UTM U0that we use for the universal prior
has a small compiler on the reference machine U(Hutter, 2005, p. 35). Moreover, we
coulddemandthereverse, thatthereferencemachine Uhasasmallcompileron U0. The
idea is that this should limit the amount of bias one can introduce by defining a UTM
that has very small programs for very complicated and ‘unusual’ environments. Unfor-
tunately, this just pushes the choice of the UTM to the reference machine. Table 5.2
lists compiler sizes of the UTMs constructed in this thesis.
5.6.3 Asymptotic Optimality
A policy is asymptotically optimal if the agent learns to act optimally in any environ-
ment from the class M. We discussed two asymptotically optimal policies. BayesExp
is weakly asymptotically optimal if the horizon grows sublinearly (Theorem 5.24) and
Thompson sampling is asymptotically optimal in mean (Theorem 5.25). Both policies
commit to exploration for several steps. As stated in Example 5.19:
To achieve asymptotic optimality, the agent needs to explore infinitely often
for an entire effective horizon.
This is why weak asymptotic optimality is impossible if the horizon grows linearly (The-
orem 5.21): if the agent explores for an entire effective horizon, it spoils a significant
fraction of the average. Thompson sampling explores whenever it draws a bad sample.
BayesExp explores if the maximal expected information gain is above some threshold.
Both policies commit to exploration for the entire effective horizon.
The exploration performed by Thompson sampling is qualitatively different from
the exploration by BayesExp (Lattimore, 2013, Ch. 5). BayesExp performs phases of
exploration in which it maximizes the expected information gain. This explores the
environment class completely, even achieving off-policy prediction (Orseau et al., 2013,
Thm. 7). In contrast, Thompson sampling only explores on the optimal policies, and in
some environment classes this will not yield off-policy prediction. So in this sense the
§5.6Discussion 99
Optimality Issue/Comment
-optimal policy requires to know the true environment in advance
Pareto optimality always satisfied (Theorem 5.3)
Bayes optimality same as maximal intelligence
balanced Pareto optimality same as maximal intelligence (Proposition 5.12)
maximal intelligence highly dependent on the prior (Corollary 5.15 and
Corollary 5.16)
PAC strongvariantofasymptoticoptimalityinprobability
asymptotic optimality Thompson sampling (Theorem 5.25) and
BayesExp (Lattimore, 2013), but not AIXI (Orseau,
2013)
sublinear regret impossible in general environments, but possible with
recoverability (Theorem 5.33)
Table 5.3: Proposed notions of optimality (Hutter, 2002a, 2005; Legg and Hutter,
2007b) and their issues. Asymptotic optimality stands out to be the only nontrivial
objective optimality notion for general reinforcement learning.
explorationmechanismofThompsonsamplingismorereward-orientedthanmaximizing
information gain.
However, asymptotic optimality has to be taken with a grain of salt. It provides
no incentive to the agent to avoid traps in the environment. Once the agent gets
caught in a trap, all actions are equally bad and thus optimal: asymptotic optimality
has been achieved. Even worse, an asymptotically optimal agent has to explore all
the traps because they might contain hidden treasure. This brings us to the following
impossibility result for non-recoverable environment classes.
Either the agent gets caught in a trap or it is not asymptotically optimal.1
5.6.4 The Quest for Optimality
Theorem 5.3 shows that Pareto optimality is trivial in the class of all computable en-
vironments. Bayes optimality, Balanced Pareto optimality, and maximal Legg-Hutter
intelligence are equivalent (Proposition 5.12 and Proposition 5.10). Corollary 5.15 and
Corollary 5.16 show that this notion is highly subjective because it depends on the
choice of the prior. Moreover, according to Corollary 5.17, any computable policy is
nearly balanced Pareto optimal. For finite horizons, there are priors such that every
policy is balanced Pareto optimal (Theorem 5.4). Sublinear regret is impossible in
general environments (Example 5.30). However, if the environment is recoverable (Def-
inition 5.31), then Theorem 5.33 shows that asymptotic optimality in mean implies
sublinear regret. In summary, asymptotic optimality is the only nontrivial and objec-
tive notion of optimality for the general reinforcement learning problem (Problem 4.2):
1This formulation was suggested by Toby Ord.
100 Optimality
it is both satisfiable (Theorem 5.24 and Theorem 5.25) and objective because it does
not depend on a prior probability measure over the environment class M. Table 5.3
summarized the notions of optimality discussed in this chapter.
Ouroptimalitynotionsare tail events : anyfinitenumberoftimestepsareirrelevant;
the agent can be arbitrarily lazy. Asymptotic optimality requires only convergence in
the limit. In recoverable environments we can always achieve sublinear regret after
any finite interaction. All policies with finite horizon are Bayes optimal according to
Theorem 5.4 and Corollary 5.6. Overall, there is a dichotomy between the asymptotic
nature of our optimality notions and the use of discounting to prioritize the present
over the future. Ideally, we would aim for finite guarantees instead, such as precise
regret bounds or PAC convergence rates, but without additional assumptions this is
impossible in this general setting. This leaves us with the main question of this chapter
unanswered (Hutter, 2009b, Sec. 5):
What is a good optimality criterion for general reinforcement learning?
Chapter 6
Computability
I simply keep a few spare halting oracles around. — Marcus Hutter
Given infinite computation power, many traditional AI problems become trivial: play-
ing chess, go, or backgammon can be solved by exhaustive expansion of the game tree.
Yet other problems seem difficult still; for example, predicting the stock market, driv-
ing a car, or babysitting your nephew. How can we solve these problems in theory?
Solomonoff induction and AIXI are proposed answers to this question.
Both Solomonoff induction and AIXI are known to be incomputable. But not all
incomputabilities are equal. The arithmetical hierarchy specifies different levels of com-
putability based on oracle machines : each level in the arithmetical hierarchy is com-
puted by a Turing machine which may query a halting oracle for the respective lower
level. Our agents are useless if they cannot be approximated in practice, i.e., by a
regular Turing machine. Therefore we posit that any ideal for a ‘perfect agent’ needs
to belimit computable (0
2). The class of limit computable functions is the class of
functions that admit an anytime algorithm .
In Section 6.2 we consider various different flavors of Solomonoff induction: Solo-
monoff’s prior M(Example 3.5) is only a semimeasure and not a measure: it assigns
positive probability that the observed string has only finite length. This can be cir-
cumvented by normalizing M. Solomonoff’s normalization Mnorm(Definition 2.16)
preserves the ratio M(x1)=M(x0)and is limit computable. If instead we mix only over
programs that compute infinite strings, we get a semimeasure M(3.6), which can be
normalized to Mnorm. Moreover, when predicting a sequence, we are primarily inter-
ested in the conditional probability M(xyjx)(respectively Mnorm(xyjy),M(xyjx),
orMnorm(xyjx)) that the currently observed string xis continued with y. We show
that bothMandMnormare limit computable, while MandMnormare not. Table 6.1
summarizes our computability results for Solomonoff induction.
For MDPs, planning is already P-complete for finite and infinite horizons (Papadim-
itriou and Tsitsiklis, 1987). In POMDPs, planning is undecidable (Madani et al., 1999,
2003). The existence of a policy whose expected value exceeds a given threshold is
PSPACE-complete (Mundhenk et al., 2000), even for purely epistemic POMDPs in
which actions do not change the hidden state (Sabbadin et al., 2007). In Section 6.3
we derive hardness results for planning in general semicomputable environments; this
environment class is even more general than POMDPs. We show that optimal policies
101
102 Computability
Qf(x;q)2XQjQ(x)>qg f(x;y;q )2XXQjQ(xyjx)>qg
M 0
1n0
1 0
2n(0
1[0
1)
Mnorm 0
2n(0
1[0
1) 0
2n(0
1[0
1)
M 0
2n0
2 0
3n(0
2[0
2)
Mnorm 0
3n(0
2[0
2) 0
3n(0
2[0
2)
Table 6.1: The computability results on M,Mnorm,M, andMnormproved in Sec-
tion 6.2. Lower bounds on the complexity of MandMnormare given only for specific
universal Turing machines.
Agent Optimal "-Optimal
AIMU 0
2 0
1
AINU 0
3,0
2-hard 0
2,0
1-hard
AIXI 0
3,0
1-hard 0
2,0
1-hard
Entropy-seeking 0
3 0
2
Information-seeking 0
3 0
2
BayesExp 0
3 0
2
Table 6.2: Computability results for different agent models derived in Section 6.3,
Section 6.5, and Section 6.6. AIMU denotes the optimal policy in a computable envi-
ronment and AINU denotes the optimal policy in a semicomputable environment (see
Section 4.1). Hardness results for AIXI are with respect to a specific universal Turing
machine; hardness results for -optimal policies are with respect to a specific environ-
ment2MCCS
LSC. Results for entropy-seeking and information-seeking policies are only
for finite horizons.
are0
2-hard and"-optimal policies are undecidable.
Moreover, we show that by default, AIXI is not limit computable. When picking the
next action, two or more actions might have the same value (expected future rewards).
The choice between them is easy, but determining whether such a tie exists is difficult.
This problem can be circumvented by settling for an "-optimal policy; we get a limit-
computable agent with infinite horizon. However, these results rely on the recursive
definition of the value function. In contrast, Hutter (2005) defines the value function as
the limit of the iterative value function. In Section 6.4 we compare these two definitions
and show that the recursive definition correctly maximizes expected rewards and has
better computability properties.
In Section 6.5 we show that for finite horizons both the entropy-seeking and the
information-seeking agent are 0
3-computable and have limit-computable "-optimal
policies. BayesExp (Section 4.3.3) relies on optimal policies that are generally not
§6.1Background on Computability 103
limit computable. In Section 6.6 we give a weakly asymptotically optimal agent based
on BayesExp that is limit computable. Table 6.2 summarizes our results on the com-
putability of these agents.
In this chapter we illustrate the environments used in the proofs of our theorems
in the form of flowcharts. They should be read as follows. Circles denote stochastic
nodes, rectangles denote environment nodes , and diamonds denote the agent’s choice
nodes. Transitions out of stochastic nodes are labeled with transition probabilities,
transitions out of environment nodes are labeled with percepts, and transitions out of
choice nodes are labeled with actions. The initial node is marked with a small incoming
arrow (see for example Figure 6.3). By Assumption 4.6b the worst possible outcome is
getting reward 0forever, thus we label such states as hell. Analogously, getting reward
1forever is the best possible outcome, thus we label such states as heaven.
6.1 Background on Computability
6.1.1 The Arithmetical Hierarchy
A setANis0
niff there is a quantifier-free formula such that
k2A() 9k18k2:::Qnkn(k;k1;:::;kn) (6.1)
whereQn=8ifnis even,Qn=9ifnis odd (Nies, 2009, Def. 1.4.10). (We can also
think ofas a computable relation.) A set ANis0
niff its complement NnA
is0
n. The formula on the right side of (6.1) is a 0
n-formula and its negation is a
0
n-formula . It can be shown that we can add any bounded quantifiers and duplicate
quantifiers of the same type without changing the classification of A. The setAis0
n
iffAis0
nandAis0
n. We get that 0
1as the class of recursively enumerable sets, 0
1
as the class of co-recursively enumerable sets and 0
1as the class of recursive sets.
The setANis0
n-hard ( 0
n-hard, 0
n-hard)iff for any set B20
n(B20
n,
B20
n),Bis many-one reducible to A, i.e., there is a computable function fsuch
thatk2B$f(k)2A(Nies, 2009, Def. 1.2.1). We get 0
n0
n+10
n+1:::and
0
n0
n+10
n+1:::. This hierarchy of subsets of natural numbers is known as
thearithmetical hierarchy .
By Post’s Theorem (Nies, 2009, Thm. 1.4.13), a set is 0
nif and only if it is recur-
sively enumerable on an oracle machine with an oracle for a 0
n 1-complete set. An
oracle for 0
1is called a halting oracle .
6.1.2 Computability of Real-valued Functions
We fix some encoding of rational numbers into binary strings and an encoding of binary
strings into natural numbers. From now on, this encoding will be done implicitly
wherever necessary.
Definition 6.1 (0
n-,0
n-,0
n-computable) .A function f:X!Ris called 0
n-
computable ( 0
n-computable, 0
n-computable) iff the setf(x;q)2XQjf(x)>qgis
0
n(0
n,0
n).
104 Computability
f(x;q)jf(x)qg f (x;q)jf(x)<qg
fis computable 0
1 0
1
fis lower semicomputable 0
1 0
1
fis upper semicomputable 0
1 0
1
fis limit computable 0
2 0
2
fis0
n-computable 0
n 0
n
fis0
n-computable 0
n 0
n
fis0
n-computable 0
n 0
n
Table 6.3: Connection between the computability of real-valued functions and the
arithmetical hierarchy.
A0
1-computable function is called computable , a0
1-computable function is called
lower semicomputable , and a 0
1-computable function is called upper semicomputable .
A0
2-computable function fis called limit computable , because there is a computable
functionsuch that
lim
k!1(x;k) =f(x):
The program that limit computes fcan be thought of as an anytime algorithm forf:
we can stop at any time kand get a preliminary answer. If the program ran long
enough (which we do not know), this preliminary answer will be close to the correct
one.
Limit-computable sets are the highest level in the arithmetical hierarchy that can be
approached by a regular Turing machine. Above limit-computable sets we necessarily
need some form of halting oracle. See Table 6.3 for the definition of lower/upper
semicomputable and limit-computable functions in terms of the arithmetical hierarchy.
Lemma 6.2 (Computability of Arithmetical Operations) .Letn > 0and letf;g:
X!Rbe two 0
n-computable functions. Then
(a)f(x;y)jf(x)>g(y)gis0
n,
(b)f(x;y)jf(x)g(y)gis0
n,
(c)f+g,f g, andfgare0
n-computable, and
(d)f=gis0
n-computable if g(x)6= 0for allx.
(e)logfis0
n-computable if f(x)>0for allx.
Proof.We only prove this for n > 1. Sincef;gare0
n-computable, they are limit
computable on a level n 1oracle machine. Let be the function limit computing fon
the oracle machine, and let be the function limit computing gon the oracle machine:
f(x) = lim
k!1(k;x)andg(y) = lim
k!1 (k;y):
By assumption, both and are0
n 1-computable.
§6.2The Complexity of Solomonoff Induction 105
(a) LetG:=f(x;y;q )jg(y)< qg, and letF:=f(x;y;q )jq < f (x)g, both of which
are in 0
nby assumption. Hence there are 0
n-formulas'Gand'Fsuch that
(x;y;q )2G()'G(x;y;q )
(x;y;q )2F()'F(x;y;q )
Nowf(x)> g(y)if and only if9q:(x;y;q )2G\F, which is equivalent to the
0
n-forumla
9q:'G(x;y;q )^'F(x;y;q ):
(b) Follows from (a).
(c) Addition, subtraction, and multiplication are continuous operations.
(d) Division is discontinuous only at g(x) = 0. We show this explicitly. By assumption,
for any">0there is ak0such that for all k>k 0
j(x;k) f(x)j<"andj (x;k) g(x)j<":
We assume without loss of generality that "<jg(x)j, sinceg(x)6= 0by assumption.
(x;k)
(x;k) f(x)
g(x)
=(x;k)g(x) f(x) (x;k)
(x;k)g(x)
j(x;k)g(x) f(x)g(x)j+jf(x)g(x) f(x) (x;k)j
j (x;k)g(x)j
<"jg(x)j+jf(x)j"
j (x;k)g(x)j
withj (x;k)g(x)j=j (x;k)jjg(x)j>(jg(x)j ")jg(x)j,
<"jg(x)j+jf(x)j
(jg(x)j ")jg(x)j"!0 ! 0;
thereforef(x)=g(x) = limk!1(x;k)= (x;k).
(e) Follows from the fact that the logarithm is computable.
6.2 The Complexity of Solomonoff Induction
In this section, we derive the computability results for Solomonoff’s prior as stated in
Table 6.1.
SinceMis lower semicomputable, Mnormis limit computable by Lemma 6.2 (c)
and (d). When using the Solomonoff prior M(or one of its sisters Mnorm,M, or
Mnormdefined in Definition 2.16 and Equation 3.6) for sequence prediction, we need
106 Computability
M(xyjx)>q() 8m9k:(xy;k)
(x;m)>q() 9k9m08mm0:(xy;k)
(x;m)>q
Figure 6.1: A0
2-formula and an equivalent 0
2-formula defining conditional M. Here
(x;k)denotes a computable function that lower semicomputes M(x).
to compute the conditional probability M(xyjx) =M(xy)=M(x)for finite strings
x;y2X. BecauseM(x)>0for all finite strings x2X, this quotient is well-defined.
Theorem 6.3 (Complexity of M,Mnorm,M, andMnorm).
(a)M(x)is lower semicomputable
(b)M(xyjx)is limit computable
(c)Mnorm(x)is limit computable
(d)Mnorm(xyjx)is limit computable
(e)M(x)is0
2-computable
(f)M(xyjx)is0
3-computable
(g)Mnorm(x)is0
3-computable
(h)Mnorm(xyjx)is0
3-computable
Proof.(a) By Li and Vitányi (2008, Thm. 4.5.2). Intuitively, we can run all programs
in parallel and get monotonely increasing lower bounds for M(x)by adding 2 jpj
every time a program phas completed outputting x.
(b) From (a) and Lemma 6.2d since M(x)>0(see also Figure 6.1).
(c) By Lemma 6.2cd, and M(x)>0.
(d) By (iii), Lemma 6.2d since Mnorm(x)M(x)>0.
(e) Letbeacomputablefunctionthatlowersemicomputes M. SinceMisasemimea-
sure,M(xy)P
zM(xyz), henceP
y2XnM(xy)is nonincreasing in nand thus
M(x)>qiff8n9kP
y2Xn(xy;k )>q.
(f) From (v) and Lemma 6.2d since M(x)>0.
(g) From (v) and Lemma 6.2d.
(h) From (vi) and Lemma 6.2d since Mnorm(x)M(x)>0.
§6.2The Complexity of Solomonoff Induction 107
We proceed to show that these bounds are in fact the best possible ones. If M
were 0
1-computable, then so would be the conditional semimeasure M(j). Thus the
M-adversarial sequence z1:1defined in Example 3.42 would be computable and hence
corresponds to a computable deterministic measure . However, we have M(z1:t)2 t
by construction, so dominance M(x)w()(x)withw()>0yields a contradiction
witht!1:
2 tM(z1:t)w()(z1:t) =w()>0
By the same argument, the normalized Solomonoff prior Mnormcannot be 0
1-com-
putable. However, since it is a measure, 0
1- or 0
1-computability would entail 0
1-
computability.
ForMandMnormwe prove the following two lower bounds for specific universal
Turing machines.
Theorem 6.4 (Mis not Limit Computable) .There is a universal Turing machine U0
such that the setf(x;q)jMU0(x)>qgis not in 0
2.
Proof.Assume the contrary and let A20
2n0
2andbe a quantifier-free first-order
formula such that
n2A() 8k9i:(n;k;i ): (6.2)
For eachn2N, we define the program pnas follows.
1:procedure pn
2:output 1n+10
3:k 0
4:whiletruedo
5:i 0
6:whilenot(n;k;i )do
7: i i+ 1
8:k k+ 1
9: output 0
Each program pnalways outputs 1n+10. Furthermore, the program pnoutputs the
infinite string 1n+101if and only if n2Aby (6.2). We define U0as follows using our
reference machine U.
•U0(1n+10): Runpn.
•U0(00p): RunU(p).
•U0(01p): RunU(p)and bitwise invert its output.
By construction, U0is a universal Turing machine. No pnoutputs a string starting with
0n+11, thereforeMU0(0n+11) =1
4
MU(0n+11) +MU(1n+10)
. Hence
MU0(1n+10) = 2 n 21A(n) +1
4MU(1n+10) +1
4MU(0n+11)
= 2 n 21A(n) +MU0(0n+11)
108 Computability
Ifn =2A, thenMU0(1n+10) =MU0(0n+11). Otherwise, we have jMU0(1n+10)
MU0(0n+11)j= 2 n 2.
Now we assume that MU0is limit computable, i.e., there is a computable function
:XN!Qsuch that limk!1(x;k) =MU0(x). We get that
n2A() lim
k!1(0n+11;k) (1n+10;k)2 n 2;
thusAis limit computable, a contradiction.
Corollary 6.5 (Mnormis not 0
2- or 0
2-computable) .There is a universal Turing
machineU0such thatf(x;q)jMnormU0(x)>qgis not in 0
2or0
2.
Proof.SinceMnorm =cM, there exists a k2Nsuch that 2 k<c(even if we do not
know the value of k). We can show that the set f(x;q)jMnormU0(x)>qgis not in 0
2
analogously to the proof of Theorem 6.4, using
n2A() lim
k!1(0n+11;k) (1n+10;k)2 k n 2:
IfMnormwere 0
2-computable or 0
2-computable, this would imply that Mnormis0
2-
computable since Mnormis a measure, a contradiction.
SinceM() = 1, we haveM(xj) =M(x), so the conditional probability M(xyjx)
has at least the same complexity as M. Analogously for MnormandMnormsince they
are measures. For M, we have that M(xj) =Mnorm(x), so Corollary 6.5 applies. All
that remains to prove is that conditional Mis not lower semicomputable.
Theorem 6.6 (Conditional Mis not Lower Semicomputable) .The setf(x;xy;q )j
M(xyjx)>qgis not recursively enumerable.
We gave a different, more complicated proof in Leike and Hutter (2015b). The
following, much simpler and more elegant proof is due to Sterkenburg (2016, Prop. 3).
Proof.Assume to the contrary that M(xyjx)is lower semicomputable. Let a6=b2X.
We construct an infinite string xby defining its initial segments =:x(0)@x(1)@
x(2)@:::@x. At every step n, we enumerate strings y2Xuntil one is found
satisfyingM(ajx(n)y)1=2; then setx(n+ 1) :=x(n)yb. This implies that for
infinitely many tthere is an nsuch thatM(bjx<t) =M(bjx(n)y)1 M(aj
x(n)y)1=2. Since we assumed M(j)to be lower semicomputable, the infinite
stringxis computable, and hence M(xtjx<t)!1by Corollary 3.55. But this
contradicts M(bjx<t)1=2infinitely often.
6.3 The Complexity of AINU, AIMU, and AIXI
6.3.1 Upper Bounds
Inthissection,wederiveupperboundsonthecomputabilityofAINU,AIMU,andAIXI.
Except for Corollary 6.14, all results in this section apply generally to any 2MCCS
LSC.
§6.3The Complexity of AINU, AIMU, and AIXI 109
Since the Bayesian mixture 2MCCS
LSC, they apply to AIXI even though they are stated
for AINU.
In order to position AINU in the arithmetical hierarchy, we need to encode policies
as sets of natural numbers. For the rest of this chapter, we assume that policies are
deterministic, thuscanberepresentedasrelationsover (AE )A. Theserelationsare
easily identified with sets of natural numbers by encoding the history into one natural
number. From now on this translation of policies into sets of natural numbers will be
done implicitly wherever necessary.
Lemma 6.7 (Policies are in 0
n).If a policyis0
nor0
n, thenis0
n.
Proof.Let'be a 0
n-formula ( 0
n-formula) defining , i.e.,'(h;a)holds iff(h) =a.
We define the formula '0,
'0(h;a) :=^
a02Anfag:'(h;a0):
The set of actions Ais finite, hence '0is a0
n-formula ( 0
n-formula). Moreover, '0is
equivalent to '.
To compute the optimal policy, we need to compute the optimal value function.
The following lemma gives an upper bound on the computability of the value function
for environments in MCCS
LSC.
Lemma 6.8 (Complexity of V
).For every2MCCS
LSC, and every lower semicom-
putable discount function
, the function V
is0
2-computable.
Proof.The explicit form of the value function (4.2) has numerator
lim
m!1maxX
æt:mmX
i=t
(i)ri(e1:ika1:i);
and denominator (e<tka<t) t. The numerator is nondecreasing in mbecause we
assumed rewards to be nonnegative (Assumption 4.6b). Hence both numerator and
denominator are lower semicomputable functions, so Lemma 6.2d implies that V
is
0
2-computable.
From the optimal value function V
we get the optimal policy
according to (4.4).
However, in cases where there is more than one optimal action, we have to break an
argmax tie. This happens iff V
(h) =V
(h)for two potential actions 6=2A.
This equality test is more difficult than determining which is larger in cases where they
are unequal. Thus we get the following upper bound.
Theorem 6.9 (Complexity of Optimal Policies) .For any environment , ifV
is
0
n-computable, then there is an optimal policy
for the environment that is 0
n+1.
110 Computability
Proof.To break potential ties, we pick an (arbitrary) total order onAthat specifies
which actions should be preferred in case of a tie. We define
(h) =a:()^
a0:a0aV
(ha)>V
(ha0)
^^
a0:aa0V
(ha)V
(ha0):(6.3)
Thenis a-optimal policy according to (4.4). By assumption, V
is0
n-computable.
By Lemma 6.2ab V
(ha)>V
(ha0)is0
nandV
(ha)V
(ha0)is0
n. Therefore the
policydefined in (6.3) is a conjunction of a 0
n-formula and a 0
n-formula and thus
0
n+1.
Corollary 6.10 (ComplexityofAINU) .AINU is 0
3for every environment 2MCCS
LSC.
Proof.From Lemma 6.8 and Theorem 6.9.
Usuallywedonotmindtakingslightlysuboptimalactions. Thereforeactuallytrying
to determine if two actions have the exact same value seems like a waste of resources.
In the following, we consider policies that attain a value that is always within some
">0of the optimal value.
Theorem 6.11 (Complexity of "-Optimal Policies) .For any environment , ifV
is
0
n-computable, then there is an "-optimal policy "
for the environment that is 0
n.
Proof.Let" > 0be given. Since the value function V
(h)is0
n-computable, the
setV":=f(ha;q)jjq V
(ha)j< "= 2gis in 0
naccording to Definition 6.1. Hence
we compute the values V
(ha0)until we get within "=2for everya02Aand then
choose the action with the highest value so far. Formally, let be an arbitrary total
order onAthat specifies which actions should be preferred in case of a tie. Without
loss of generality, we assume "= 1=k, and define Qto be an"=2-grid on [0;1], i.e.,
Q:=f0;1=2k;2=2k;:::; 1g. We define
"
(h) =a:() 9 (qa0)a02A2QA:^
a02A(ha0;qa0)2V"
^^
a0:a0aqa>qa0^^
a0:aa0qaqa0
^the tuple (qa0)a02Ais minimal with
respect to the lex. ordering on QA:(6.4)
This makes the choice of aunique. Moreover, QAis finite sinceAis finite, and hence
(6.4) is a 0
n-formula.
Corollary 6.12 (Complexity of "-Optimal AINU) .For any environment 2MCCS
LSC,
there is an "-optimal policy for AINU that is 0
2.
Proof.From Lemma 6.8 and Theorem 6.11.
§6.3The Complexity of AINU, AIMU, and AIXI 111
Purgatory Heaven6=
(ht)
(ht)r= 0r= 1
Figure 6.2: The environment from the proof of Theorem 6.15. The agent gets
reward 0as long as it follows AIXI’s policy
that is assumed to be computable. Once
the agent deviates from
, it gets reward 1. We get a contradiction because AIXI
can learn this environment, so it will eventually decide to take an action that leads to
heaven.
Corollary 6.13 (Complexity of "-Optimal AIXI) .For any lower semicomputable prior
there is an "-optimal policy for AIXI that is 0
2.
Proof.FromCorollary6.12sinceforanylowersemicomputableprior, thecorresponding
Bayesian mixture is inMCCS
LSC.
If the environment 2MCCM
compis a measure, i.e., assigns zero probability to finite
strings, then we get computable "-optimal policies.
Corollary 6.14 (Complexity of AIMU) .If the environment 2MCCM
compis a measure
and the discount function
is computable, then AIMUis limit computable ( 0
2), and
"-optimal AIMUis computable ( 0
1).
Proof.Let">0be the desired accuracy. We can truncate the limit m!1in (4.2)
at the"=2-effective horizon Ht("=2), since everything after Ht("=2)can contribute at
most"=2to the value function. Any lower semicomputable measure is computable (Li
and Vitányi, 2008, Lem. 4.5.1). Therefore V
as given in (4.2) is composed only of
computable functions, hence it is computable according to Lemma 6.2. The claim now
follows from Theorem 6.9 and Theorem 6.11.
6.3.2 Lower Bounds
We proceed to show that the bounds from the previous section are the best we can hope
for. In environment classes where ties have to be broken, AINU has to solve 0
2-hard
problems (Theorem 6.16). These lower bounds are stated for particular environments
2MCCS
LSC. Throughout this section, we assume that t>0for allt.
We also construct universal mixtures that yield bounds on "-optimal policies. There
is an"-optimal AIXIthat solves 0
1-hard problems (Theorem 6.17). For arbitrary uni-
versal mixtures, we prove the following weaker statement that only guarantees incom-
putability.
Theorem 6.15 (NoAIXIis computable) .AIXIis not computable for any universal
Turing machine U.
112 Computability
This theorem follows from the incomputability of Solomonoff induction. By the
on-policy value convergence theorem (Corollary 4.20) AIXI succeeds to predict the
environment’s behavior for its own policy. If AIXI were computable, then there would
be computable environments more powerful than AIXI: they can simulate AIXI and
anticipate its prediction, which leads to a contradiction.
Proof.Assume there is a computable policy
that is optimal in the mixture . We
define a deterministic environment , theadversarial environment to
. The environ-
mentgives rewards 0as long as the agent follows the policy
, and rewards 1once
the agent deviates. Formally, we ignore observations by setting O:=f0g, and define
(r1:tka1:t) :=8
>>>><
>>>>:1if8kt:ak=
((ar)<k)andrk= 0;
1if8kt:rk=1ki
wherei:= minfjjaj6=
((ar)<j)g;and
0otherwise:
See Figure 6.2 for an illustration of this environment. The environment is computable
because the policy
was assumed to be computable. Suppose
acts in, then by
Theorem 4.19 AIXI learns to predict perfectly on policy :
V
(æ<t) V
(æ<t)!0ast!1
-almost surely ;
since both
andare deterministic. Because V
(h<t) = 0by definition of , we get
V
(æ<t)!0. Therefore we find a tlarge enough such that V
(æ<t)<w()whereæ<t
is the interaction history of
in. A policywith(æ<t)6=
(æ<t), gets a reward
of1in environment for all time steps after t, henceV
(æ<t) = 1. With linearity of
V
(æ<t)in(Lemma 4.14),
V
(æ<t)w()(e1:tka1:t)
(e1:tka1:t)V
(æ<t)w();
since(e1:tka1:t) = 1(is deterministic), V
(æ<t) = 1, and(e1:tka1:t)1. Now
we get a contradiction:
w()>V
(æ<t) = sup
0V0
(æ<t)V
(æ<t)w()
For the remainder of this section, we fix the action space to be A:=f;gwith
actionfavored in ties. The percept space is fixed to a tuple of binary observations
and rewards,E:=Of 0;1gwithO:=f0;1g.
Theorem 6.16 (AINU is 0
2-hard).There is an environment 2MCCS
LSCsuch that
AINU is 0
2-hard.
Proof.LetAbe a any 0
2-set, and let be a quantifier-free formula such that
n2A() 8i9k(n;i;k ): (6.5)
§6.3The Complexity of AINU, AIMU, and AIXI 113
n++
Heaven1=2o= 1
r= 0
1=2o= 0
r= 0
19k(n;i;k)
o= 0
r= 1
Figure 6.3: The environment ifrom the proof of Theorem 6.16. The mixture
over class of environments M:=f0;1;:::gMCCS
LSCforces AINU to solve 0
2-hard
problems: Action is preferred (because of a tie) iff it leads to heaven, which is the
case iff9k(n;i;k ).
We define a class of environments M:=f1;2;:::gwhere each iis defined as follows.
i((or)1:mka1:m) :=8
>>>>>>>>><
>>>>>>>>>:2 mifo1:m= 1mand8tm:rt= 0;
2 n 1if9n:1n0vo1:mv1n01andan+2=
andrt=1t>n+1and9k(n;i;k );
2 n 1if9n:1n0vo1:mv1n01andan+2=
andrt=1t>n+1;and
0otherwise:
See Figure 6.3 for an illustration of these environments. Every iis a chronological
conditional semimeasure by definition and every iis lower semicomputable since is
quantifier-free, so MMCCS
LSC.
We define our environment as a mixture over M,
:=X
i2N2 i 1i;
the choice of the weights on the environments iis arbitrary but positive. Let
be an
optimal policy for the environment and recall that the action is preferred in ties.
We claim that for the -optimal policy
,
n2A()
(1n0) =: (6.6)
This enables us to decide whether n2Agiven the policy
, hence proving (6.6)
concludes this proof.
Letn;i2Nbegiven, andsupposeweareinenvironment iandobserve 1n0. Taking
actionnext yields reward 1forever; taking action next yields a reward of 1if there
114 Computability
Semi-Heaven
Heaven
o= 1n0
19k(n;i;k)o= 0
r= 1=2
o= 0
r= 1
Figure 6.4: The environment from the proof of Theorem 6.17, which forces AIXI
to solve 0
1-hard problems. It functions just like until the observation history is 1n0.
Then, action is preferred iff heaven is accessible, i.e., iff 9k(n;i;k ).
is aksuch that(n;i;k )holds. If this is the case, then
V
i(1n0) = n+2=V
i(1n0);
and otherwise
V
i(1n0) = 0< |